Lab member Henny Admoni successfully defended her dissertation “Nonverbal Communication in Socially Assistive Human-Robot Interaction” on November 23, 2015. Henny moves on to a position as Postdoctoral Associate in Professor Siddhartha Srinivasa’s Personal Robotics Lab at Carnegie Mellon University.
Defense Abstract:
Socially assistive robots provide assistance to human users through interactions that are inherently social. Socially assistive robots include robot tutors that instruct students through personalized one-on-one lessons, robot therapy assistants that help mediate social interactions between children with developmental disorders and adult therapists, and robot caretakers that assist elderly or disabled people in their homes. To succeed in their role of social assistance, these robots must be capable of natural communication with people. Natural communication is multimodal, with both verbal (speech) and nonverbal (eye gaze, gestures, and other behaviors) channels.
This dissertation focuses on enabling human-robot communication by building models for understanding human nonverbal behavior and generating robot nonverbal behavior in socially assistive domains. It investigates how to computationally model eye gaze and other nonverbal behaviors so that these behaviors can be used by socially assistive robots to improve collaboration between people and robots. Developing effective nonverbal communication for robots engages a number of disciplines including autonomous control, machine learning, computer vision, design, and cognitive psychology. This dissertation contributes across all of these disciplines, providing a greater understanding of the computational and human requirements for successful human-robot interactions.
To help focus these models on the features that most strongly influence human-robot interactions, I first conducted a series of studies that draw out human responses to specific robot nonverbal behaviors. These carefully controlled laboratory-based studies investigate how robot eye gaze compares to human eye gaze in eliciting reflexive attention shifts from human viewers; how different features of robot gaze behavior promote the perception of a robot’s attention toward a viewer; whether people use robot eye gaze to support verbal object references and how they resolve conflicts in this multimodal communication; and what role eye gaze and gesture play in guiding behavior during human-robot collaboration.
Based on this understanding of nonverbal communication between people and robots, I develop a set of models for understanding and generating nonverbal behavior in human-robot interactions. The first model takes a data-driven approach grounded in the domain of tutoring. It is trained on examples of human-human behavior in which a teacher instructs a student about a map-based board game. This model can predict the context of a communication from a new observation of nonverbal behavior, as well as suggest appropriate nonverbal behaviors to support a desired context.
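To make the idea of predicting a communication context from observed nonverbal behavior concrete, here is a minimal sketch of a generic supervised approach. The feature names, labels, and toy data below are hypothetical placeholders, not the dissertation's actual dataset or model.

```python
# Illustrative sketch only: predicting a communication "context" label from
# features of observed nonverbal behavior. Features and labels are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [gaze_to_listener_sec, gaze_to_object_sec, num_pointing_gestures]
X = np.array([
    [2.0, 0.5, 0],   # mostly looking at the listener
    [0.3, 2.5, 1],   # looking at and pointing to an object
    [1.8, 0.2, 0],
    [0.4, 2.0, 2],
])
# Hypothetical context labels for each observation window
y = ["checking_understanding", "referencing_object",
     "checking_understanding", "referencing_object"]

model = LogisticRegression().fit(X, y)

# Predict the context of a new observation of nonverbal behavior
new_obs = np.array([[0.5, 1.8, 1]])
print(model.predict(new_obs))        # e.g. ['referencing_object']
print(model.predict_proba(new_obs))  # class probabilities
```

The same trained mapping can be read in the other direction, at least informally: given a desired context, one can search for feature values (i.e., nonverbal behaviors) that the model associates strongly with that context.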
The second model takes a scene-based approach to generate nonverbal behavior for a socially assistive robot. It does not rely on a priori collection and annotation of human examples, as the first model does. Instead, it calculates how a user will perceive a visual scene from their own perspective based on cognitive psychology principles, and it then selects the best robot nonverbal behavior to direct the user’s attention based on this predicted perception. The model can be flexibly applied to a range of scenes and a variety of robots with different physical capabilities. I show that this second model performs well in both a targeted evaluation and in a naturalistic human-robot collaborative interaction.
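The selection step of such a scene-based approach can be sketched as follows. This is only an assumed illustration of the general idea (pick the least effortful behavior that makes the target object stand out in the user's predicted perception); the salience numbers and behavior effects are placeholders, not the dissertation's perceptual model.

```python
# Illustrative sketch only: choose a robot nonverbal behavior by estimating how
# much it would raise the predicted salience of a target object for the user.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    base_salience: float  # predicted salience from the user's viewpoint (0-1)

# Assumed salience boost for each behavior the robot can produce; a real model
# would derive these from perceptual principles and the robot's capabilities.
BEHAVIOR_EFFECT = {"none": 0.0, "gaze": 0.2, "point": 0.35, "gaze_and_point": 0.5}

def select_behavior(scene, target_name):
    """Return the cheapest behavior that makes the target the most salient object."""
    distractor_max = max(o.base_salience for o in scene if o.name != target_name)
    target = next(o for o in scene if o.name == target_name)
    for behavior in ["none", "gaze", "point", "gaze_and_point"]:  # cheapest first
        if target.base_salience + BEHAVIOR_EFFECT[behavior] > distractor_max:
            return behavior
    return "gaze_and_point"  # fall back to the strongest available cue

scene = [SceneObject("red_mug", 0.4), SceneObject("blue_mug", 0.7)]
print(select_behavior(scene, "red_mug"))  # -> 'point' with these placeholder numbers
```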
Advisor: Brian Scassellati
Other committee members:
Holly Rushmeier
Drew McDermott
Greg Trafton (NRL)