TY - JOUR
T1 - AutoTutor
T2 - Incorporating back-channel feedback and other human-like conversational behaviors into an intelligent tutoring system
AU - Rajan, S.
AU - Craig, S. D.
AU - Gholson, B.
AU - Person, N. K.
AU - Graesser, A. C.
N1 - Funding Information: This research was funded by the National Science Foundation (SBR 9720314) in a grant awarded to the Tutoring Research Group. The following members of the Tutoring Research Group at the University of Memphis conducted research on this project: Ashraf Anwar, Laura Bautista, Myles Bogner, Tim Brogdon, Patrick Chipman, Scotty Craig, Rachel DiPaolo, Stan Franklin, Max Garzon, Barry Gholson, Art Graesser, Doug Hacker, Peggy Halde, Derek Harter, Jim Hoeffner, Xiangen Hu, Jeff Janovetz, Ashish Karnavat, Bianca Klettke, Roger Kreuz, Kristen Link, Shulan Lu, Zhijun Lu, William Marks, Johanna Marineau, Eric Mathews, Lee McCauley, Brent Olde, Natalie Person, Victoria Pomeroy, Penelope Price, Sonya Rajan, Charanjit Singh, Mat Weeks, Holly White, Shannon Whitten, Katja Wiemer-Hastings, Peter Wiemer-Hastings, Shoujie Yang, and Zhaohua Zhang.
PY - 2001/6
Y1 - 2001/6
N2 - This paper describes our recent attempts to incorporate human-like conversational behaviors into the dialog moves delivered by an animated pedagogical agent that simulates human tutors. We first present a brief overview of the modules comprising AutoTutor, an intelligent tutoring system. The second section describes a set of conversational behaviors that are being incorporated into AutoTutor. The behaviors of interest involve variations in intonation, head movements, arm and hand movements, facial expressions, eye blinking, gaze direction, and back-channel feedback. The final section presents a recent empirical study concerned with back-channel feedback events during human-to-human tutoring sessions. The back-channel feedback events emitted by tutors are mostly positive (63%), mostly verbal (77%), and immediately follow speech-act boundaries or noun-phrase boundaries (83%). Tutors also deliver back-channel events at a very high rate when students are emitting dialog, about 13 events per minute. Conversely, 88% of students' back-channel feedback events are head nods, and they occur at unbounded locations (63%).
KW - AutoTutor
KW - Back-channel feedback
KW - Intelligent tutoring
UR - https://www.scopus.com/pages/publications/0035358515
U2 - 10.1023/A:1017319110294
DO - 10.1023/A:1017319110294
M3 - Article
SN - 1381-2416
VL - 4
SP - 117
EP - 126
JO - International Journal of Speech Technology
JF - International Journal of Speech Technology
IS - 2
ER -