We developed a model of gaze aversion, the intentional redirection of gaze away from the face of a conversational partner, for robots to signal cognitive effort, regulate a conversation’s intimacy level, and manage the conversational floor. The implementation of the model enabled the robot to autonomously generate and blend three distinct types of head movements, each serving a different purpose: face-tracking movements to engage in mutual gaze, idle head motion to increase lifelikeness, and purposeful gaze aversions to achieve conversational functions. Our evaluation of this implementation showed that people perceive the robot’s gaze aversions as intentional and that robots can use gaze aversions to appear more thoughtful and to effectively manage the conversational floor. We presented our results at the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI) in Bielefeld, Germany, where the paper was nominated for the Best Paper Award. Our work was also featured in national and international media.
- University of Wisconsin–Madison News (US) “Bridging the uncanny valley between humans, robots”
- New Scientist (UK) “The robot tricks to bridge the uncanny valley”
- Popular Science (US) “Robots Seem More Thoughtful If They Glance Away While They Talk”
- AAAS Science Update (US) “Robot Gaze Aversion”
- Badger Herald (US) “UW student researches ways to make robots more human”
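The blending of the three head-movement types described above can be sketched in code. This is a minimal illustrative sketch, not the actual implementation: the class, method names, gains, and idle-motion frequencies are all hypothetical, assuming only that the controller combines a face-tracking target, an optional aversion offset, and small idle perturbations into a single head pose.

```python
import math
import random

class GazeController:
    """Toy arbiter blending three head-movement sources into one head
    pose (yaw, pitch in radians). All names and constants are
    hypothetical; the real system's architecture may differ."""

    def __init__(self, idle_amplitude=0.02, seed=0):
        self.idle_amplitude = idle_amplitude  # scale of lifelike idle motion
        self.rng = random.Random(seed)
        self.aversion_offset = None  # set only while a gaze aversion is active

    def start_aversion(self, yaw_offset, pitch_offset):
        """Begin a purposeful aversion, expressed relative to the face."""
        self.aversion_offset = (yaw_offset, pitch_offset)

    def end_aversion(self):
        """Return to mutual gaze (face tracking)."""
        self.aversion_offset = None

    def head_pose(self, face_yaw, face_pitch, t):
        # 1. Face tracking: the default target is the partner's face.
        yaw, pitch = face_yaw, face_pitch
        # 2. Purposeful aversion: shifts gaze away while active.
        if self.aversion_offset is not None:
            yaw += self.aversion_offset[0]
            pitch += self.aversion_offset[1]
        # 3. Idle motion: small smooth perturbation for lifelikeness,
        #    here modeled as low-frequency sinusoids over time t.
        yaw += self.idle_amplitude * math.sin(0.7 * t)
        pitch += self.idle_amplitude * math.sin(1.1 * t + 0.5)
        return yaw, pitch
```

For example, with idle motion disabled the controller returns the face pose exactly; starting an aversion then shifts the head away by the given offsets until `end_aversion()` is called.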