The purpose of this workshop is to explore the relationship between gaze and speech during "face-to-face" human-robot interaction. As advances in speech recognition have made speech-based interaction with robots possible, it has become increasingly apparent that robots need to exhibit nonverbal social cues in order to disambiguate and structure their spoken communication with humans. Gaze behavior is one of the most powerful and fundamental sources of information supplementary to spoken communication: it structures turn-taking, indicates attention, and implicitly communicates information about social roles and relationships. There is a growing body of work on gaze- and speech-based interaction in HRI, spanning both the measurement and evaluation of human speech and gaze during interaction with robots, and the design and implementation of robot speech and accompanying gaze behavior for interaction with humans.