Abstract
Most event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in expressive musical control contexts. To this end, we introduce a novel gestural control paradigm for musical expression based on temporal expectancies induced by human movement, via a general Bayesian framework called the temporal expectancy network. We realize this paradigm in the form of a conducting analysis tool that infers beat, tempo, and articulation jointly with temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can easily be adapted to more affordable technologies combining video cameras and inertial sensors. Using our analysis framework, we observe a significant effect of varying expressive qualities of the gesture (e.g., staccato vs. legato articulation) on the patterns of temporal expectancies generated, which at least partially confirms the role of temporal expectancies in musical expression.
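The sketch below is a minimal, hypothetical illustration of the general idea described in the abstract: maintaining a Bayesian belief over beat period (tempo) from observed ictus instants and deriving a temporal expectancy for the next beat. It is not the paper's temporal expectancy network; the grid-based filter, the noise parameters, and the example onset times are all assumptions introduced purely for illustration.

```python
import numpy as np

# Hypothetical sketch: a grid-based Bayesian filter over candidate beat
# periods, updated from inter-ictus intervals, used to form a temporal
# expectancy for the next ictus. Simplified stand-in, not the paper's model.

PERIODS = np.linspace(0.3, 1.2, 200)           # candidate beat periods (s)
belief = np.ones_like(PERIODS) / len(PERIODS)  # uniform prior over tempo

OBS_STD = 0.05    # assumed timing noise of detected ictus instants (s)
DRIFT_STD = 0.02  # assumed tempo drift between successive beats (s)

def update(belief, inter_ictus_interval):
    """Bayes update of the tempo belief from one observed inter-ictus interval."""
    # Predict: let the tempo drift slightly (diffuse the belief over periods).
    kernel = np.exp(-0.5 * ((PERIODS[:, None] - PERIODS[None, :]) / DRIFT_STD) ** 2)
    predicted = kernel @ belief
    predicted /= predicted.sum()
    # Correct: weight each candidate period by the likelihood of the observation.
    likelihood = np.exp(-0.5 * ((inter_ictus_interval - PERIODS) / OBS_STD) ** 2)
    posterior = predicted * likelihood
    return posterior / posterior.sum()

def expectancy(belief, t_last_ictus, t):
    """Expectancy density for the next ictus at time t, given the last ictus."""
    likelihood = np.exp(-0.5 * (((t - t_last_ictus) - PERIODS) / OBS_STD) ** 2)
    return float((belief * likelihood).sum())

# Example with made-up ictus times, as might be detected from gesture data.
ictus_times = [0.0, 0.52, 1.01, 1.55, 2.06]
for prev, cur in zip(ictus_times, ictus_times[1:]):
    belief = update(belief, cur - prev)

print("MAP beat period (s):", PERIODS[np.argmax(belief)])
print("expectancy 0.5 s after last ictus:",
      expectancy(belief, ictus_times[-1], ictus_times[-1] + 0.5))
```

In this toy version, expressive differences such as staccato versus legato articulation would surface as sharper or broader expectancy profiles over time; the paper's framework infers such expectancies jointly with beat, tempo, and articulation.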
| Original language | English (US) |
| --- | --- |
| Pages | 348-355 |
| Number of pages | 8 |
| State | Published - 2007 |
| Event | International Computer Music Conference, ICMC 2007 - Copenhagen, Denmark. Duration: Aug 27 2007 → Aug 31 2007 |
Conference
| Conference | International Computer Music Conference, ICMC 2007 |
| --- | --- |
| Country/Territory | Denmark |
| City | Copenhagen |
| Period | 8/27/07 → 8/31/07 |
ASJC Scopus subject areas
- Media Technology
- Computer Science Applications
- Music