TY - GEN
T1 - Capturing expressive and indicative qualities of conducting gesture
T2 - 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007
AU - Swaminathan, Dilip
AU - Thornburg, Harvey
AU - Ingalls, Todd
AU - Rajko, Stjepan
AU - James, Jodi
AU - Campana, Ellen
AU - Afanador, Kathleya
AU - Leistikow, Randal
PY - 2008
Y1 - 2008
AB - Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo (which are indicative) and articulation (which is expressive), as well as temporal expectancies regarding beat (ictus and preparation instances), from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system but can easily be adapted to more affordable technologies such as video camera arrays.
UR - http://www.scopus.com/inward/record.url?scp=50349095384&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=50349095384&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-85035-9_3
DO - 10.1007/978-3-540-85035-9_3
M3 - Conference contribution
SN - 3540850341
SN - 9783540850342
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 34
EP - 55
BT - Computer Music Modeling and Retrieval
Y2 - 27 August 2007 through 31 August 2007
ER -