Context-based visual feedback recognition

Bibliographic Details
Main Author: Morency, Louis-Philippe, 1977-
Other Authors: Trevor Darrell.
Format: Others
Language: English
Published: Massachusetts Institute of Technology 2007
Online Access: http://hdl.handle.net/1721.1/38686
Description
Summary: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2007.

Includes bibliographical references (p. 183-195).

During face-to-face conversation, people use visual feedback (e.g., head and eye gestures) to communicate relevant information and to synchronize rhythm between participants. When recognizing visual feedback, people often rely on more than their visual perception; for instance, knowledge of the current topic and of previous utterances helps guide the recognition of nonverbal cues. The goal of this thesis is to augment computer interfaces with the ability to perceive visual feedback gestures and to exploit contextual information from the current interaction state to improve visual feedback recognition. We introduce the concept of visual feedback anticipation, where contextual knowledge from an interactive system (e.g., the last spoken utterance from a robot, or system events from a GUI) is analyzed online to anticipate visual feedback from a human participant and improve visual feedback recognition. Our multi-modal framework for context-based visual feedback recognition was successfully tested on conversational and non-embodied interfaces for head and eye gesture recognition. We also introduce the Frame-based Hidden-state Conditional Random Field (FHCRF) model, a new discriminative model for visual gesture recognition that models the substructure of a gesture sequence, learns the dynamics between gesture labels, and can be applied directly to label unsegmented sequences. The FHCRF model outperforms previous approaches (HMM, SVM, and CRF) for visual gesture recognition and can efficiently learn the contextual information necessary for visual feedback anticipation. A real-time visual feedback recognition library for interactive interfaces (called Watson) was developed to recognize head gaze, head gestures, and eye gaze using images from a monocular or stereo camera and context information from the interactive system. Watson was downloaded by more than 70 researchers around the world and was successfully used by MERL, USC, NTT, the MIT Media Lab, and many other research groups.

by Louis-Philippe Morency.

Ph.D.
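For readers who want a concrete picture of the FHCRF model named in the abstract, the following is a minimal sketch of its conditional probability, assuming FHCRF follows the frame-based latent-conditional formulation that Morency et al. later published as the Latent-Dynamic CRF; the notation here (H_y for the hidden-state set of label y, feature functions f_k) is illustrative, not taken from the thesis. Each frame x_j of a sequence x = (x_1, ..., x_m) receives a label y_j, each label is associated with a disjoint set of hidden states, and marginalizing over the hidden states gives:

P(y \mid x; \theta) = \sum_{h \,:\, h_j \in H_{y_j}} P(h \mid x; \theta)

P(h \mid x; \theta) = \frac{1}{Z(x; \theta)} \exp\Big( \sum_k \theta_k \sum_{j=1}^{m} f_k(h_{j-1}, h_j, x, j) \Big)

Because the hidden-state sets are disjoint across labels, training and inference reduce to standard chain-CRF computations over the hidden states, which is what lets the model label unsegmented sequences frame by frame while still capturing gesture substructure and the dynamics between gesture labels.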