Lab member Chien-Ming Huang gave his talk, “Enabling Human-Robot Joint Actions,” at the Google office in Madison on Friday. Drawing on his recently published research on multimodal behaviors (including speech, gaze, and gestures), Chien-Ming highlighted their importance in enabling successful human-robot interaction.

In his talk, Chien-Ming noted that robots are a growing presence in human environments and must coordinate their actions with those of their users. Multimodal behaviors, as shown in Chien-Ming's study, are a key factor in this joint-action behavior. His research introduced a novel approach to modeling human behavior that mimics observed humanlike patterns, which proved more effective for engaging human users.

C.-M. Huang and B. Mutlu. Learning-Based Modeling of Multimodal Behaviors for Humanlike Robots. Proceedings of the 2014 ACM/IEEE Conference on Human-Robot Interaction (HRI 2014). March 2014. Bielefeld, Germany.