Sociable humanoid robots represent a dramatic and intriguing shift in the
way one thinks about control of autonomous robots. Traditionally,
autonomous robots have been designed to operate as independently and
remotely from humans as possible, often performing tasks in hazardous
and hostile environments. However, a new range of application domains
(domestic, entertainment, health care, etc.) is driving the
development of robots that can interact and cooperate
with people, and play a part in their daily lives.
Humanoid robots are arguably well suited to this. Because they share a
similar morphology with humans, they can communicate in a manner that
supports the natural communication modalities of humans: facial
expression, body posture, gesture, gaze direction, and voice. The
ability of people to communicate naturally with these machines is
important. However, for suitably complex environments and tasks, the
ability of people to teach these robots intuitively will also be
important. Social aspects enter profoundly into both of these
challenging problem domains.
The Sociable Machines Project develops an expressive anthropomorphic
robot called Kismet that engages people in natural and expressive
face-to-face interaction. Inspired by infant social development,
psychology, ethology, and evolution, this work integrates theories and
concepts from these diverse viewpoints to enable Kismet to enter into
natural and intuitive social interaction with a human caregiver and to
learn from that caregiver, reminiscent of parent-infant exchanges. To do this,
Kismet perceives a variety of natural social cues from visual and
auditory channels, and delivers social signals to the human caregiver
through gaze direction, facial expression, body posture, and vocal
babbles. The robot has been designed to support several social cues
and skills that could ultimately play an important role in socially
situated learning with a human instructor. These capabilities are
evaluated with respect to the ability of naive subjects to read and
interpret the robot's social cues, the robot's ability to perceive and
respond appropriately to human social cues, the human's willingness to
provide scaffolding that facilitates the robot's learning, and the way
these exchanges combine to produce a flexible, dynamic interaction that
is physical, affective, and social, and that affords rich opportunities
for learning.
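To make the perceive-and-respond loop described above concrete, the
sketch below shows one way a mapping from perceived social cues to
expressive responses could be organized. All names here (SocialCue,
SocialResponse, select_response) are hypothetical illustrations, not
Kismet's actual software interfaces; the real system is far richer,
arbitrating among competing drives, emotions, and behaviors in real time.

    # A minimal, hypothetical sketch of a perceive-and-respond loop.
    # SocialCue, SocialResponse, and select_response are illustrative
    # inventions, not Kismet's actual software interfaces.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SocialCue:
        channel: str       # sensory channel: "visual" or "auditory"
        kind: str          # e.g. "face", "toy-motion", "soothing-prosody"
        intensity: float   # stimulus strength, 0.0 (weak) to 1.0 (strong)

    @dataclass
    class SocialResponse:
        gaze: str                    # gaze direction, e.g. "toward-face"
        expression: str              # facial display, e.g. "interest"
        posture: str                 # body posture, e.g. "lean-forward"
        vocalization: Optional[str]  # vocal babble, or None for silence

    def select_response(cue: SocialCue) -> SocialResponse:
        # Overwhelming stimulation: withdraw and display distress so the
        # caregiver knows to back off (a regulatory social cue).
        if cue.intensity > 0.8:
            return SocialResponse("away", "distress", "withdraw", None)
        # A face in view: orient toward it, look interested, and babble.
        if cue.kind == "face":
            return SocialResponse("toward-face", "interest",
                                  "lean-forward", "greeting-babble")
        # Nothing socially salient: keep scanning calmly.
        return SocialResponse("scan", "calm", "neutral", None)

    # Example: a person's face appears at moderate intensity.
    print(select_response(SocialCue("visual", "face", 0.5)))

The point of the sketch is only the shape of the loop: cues arriving on
visual and auditory channels are mapped to coordinated gaze, expression,
posture, and vocal responses that the caregiver can read naturally.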
We gratefully acknowledge our sponsors for their
generous support. The vision system was funded by a MURI research
grant from ONR and DARPA. The social interaction work is funded in
part by a MARS grant through DARPA and in part by NTT.
Video: Kismet overview
In this video clip, Cynthia Breazeal presents an overview of Kismet. She describes the goals of the project and the motivation behind it. The robot's expressions, perceptions, and behavior are demonstrated as she interacts with Kismet in a wide variety of scenarios.
QuickTime (33 fps) -- 6.1 MB | QuickTime (33 fps) -- 2.3 MB
Author: MIT Video Productions
Length: approximately 4 minutes
Contact information: cynthia@ai.mit.edu