Embodied social agents, through their ability to support embodied interaction using nonverbal human communicative cues, hold great promise in application areas such as education, training, rehabilitation, and collaborative work. Gaze cues are particularly important for achieving key social and communicative goals. In this research, we explore how agents, both virtual and physical, might achieve these goals through various gaze mechanisms. We are developing control models of gaze behavior that treat gaze as the output of a system driven by multiple multimodal inputs.
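To make the idea of gaze as the output of a multi-input system concrete, below is a minimal Python sketch of one way such a controller might resolve competing cues. All names, weights, and coordinates here are hypothetical illustrations for exposition, not the actual models developed in this project.

    from dataclasses import dataclass

    # Illustrative assumptions only, not the project's actual models.
    # Each multimodal cue (e.g., the partner's face or a referenced
    # object) proposes a candidate gaze target with a weight; the
    # controller blends them into a single output target.

    @dataclass
    class GazeInput:
        target: tuple[float, float, float]  # 3D point this cue points toward
        weight: float                       # relative importance of this cue

    def blend_gaze_targets(inputs: list[GazeInput]) -> tuple[float, float, float]:
        """Return the weighted average of the candidate gaze targets."""
        total = sum(i.weight for i in inputs)
        if total <= 0:
            raise ValueError("at least one input must have positive weight")
        return tuple(
            sum(i.target[axis] * i.weight for i in inputs) / total
            for axis in range(3)
        )

    # Example: the partner's face dominates, but a referenced object
    # pulls the gaze slightly toward it.
    face = GazeInput(target=(0.0, 1.6, 1.0), weight=0.7)
    obj = GazeInput(target=(0.4, 0.9, 0.8), weight=0.3)
    print(blend_gaze_targets([face, obj]))

A real controller would likely combine cues in richer, task- and timing-dependent ways; the weighted average simply illustrates how multiple inputs can drive a single gaze output.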

By giving embodied agents the ability to draw on the full communicative power of gaze cues, this work will lead to human-agent interactions that are more engaging and rewarding. The primary outcome of this research will be a set of gaze models that can be dynamically combined to achieve the full range of gaze functions across a wide array of embodied characters and interaction modalities. These models will range from low-level computational models to high-level qualitative models. Our primary hypothesis is that gaze cues generated by these models, which are theoretically grounded in the literature on human gaze, will evoke positive social and cognitive responses, and that these results will generalize across agent representations and task contexts.

Please contact Sean Andrist at sandrist{at}cs.wisc.edu with any questions regarding this project.

This research is supported by the National Science Foundation under award #1017952.

Publications

Pejsa, T., Andrist, S., Mutlu, B., and Gleicher, M. (Under Review). Gaze and Attention Management for Embodied Conversational Agents. Submitted to ACM Transactions on Interactive and Intelligent Systems (TiiS).

Andrist, S., Tan, X. Z., Gleicher, M., and Mutlu, B. (2014). Conversational Gaze Aversion for Humanlike Robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14). ACM. New York, NY, USA. 25-32. (pdf) [Best Paper Award Nominee]


Code and Supplemental Materials

Here you can find code and pseudocode for our gaze models, as well as videos of agents carrying out gaze shifts generated by these models.
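As a simple illustration of the kind of pseudocode provided there, the following Python sketch splits a gaze shift between the head and the eyes using a single "head alignment" parameter. The function name and numbers are assumptions for illustration, not our published model.

    # Hypothetical sketch of a parametric gaze shift, loosely inspired by
    # human eye-head coordination: the eyes cover whatever rotation the
    # head does not, and a "head alignment" parameter controls how far
    # the head turns toward the target.

    def plan_gaze_shift(current_yaw: float, target_yaw: float,
                        head_alignment: float = 0.5) -> tuple[float, float]:
        """Split a yaw rotation toward a target between head and eyes.

        head_alignment in [0, 1]: 0 = eyes only, 1 = head fully aligns.
        Returns (head_yaw, eye_yaw_offset) such that
        head_yaw + eye_yaw_offset == target_yaw.
        """
        rotation = target_yaw - current_yaw
        head_yaw = current_yaw + head_alignment * rotation
        eye_yaw_offset = target_yaw - head_yaw
        return head_yaw, eye_yaw_offset

    # Example: a 60-degree shift with the head covering half the rotation.
    head, eyes = plan_gaze_shift(0.0, 60.0, head_alignment=0.5)
    print(head, eyes)  # 30.0 30.0

Varying a parameter like head alignment is one way a single model can produce qualitatively different gaze behaviors for different characters and contexts.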

Media

New Scientist (UK), 2014: "The robot tricks to bridge the uncanny valley"

AAAS Science Update (US), 2014: "Robot gaze aversion"

Science Nation (US), 2012: "Robots that can teach humans"