Multi-modal expression recognition

Date

2008-05-14T16:07:07Z

Publisher

Kansas State University

Abstract

Robots will eventually become common everyday items. Before this becomes a reality, however, robots will need to learn to be socially interactive. Since humans communicate far more information through expression than through the spoken words themselves, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that ensure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them; recognizing these involuntary expressions provides insight into a person's thoughts and state of mind and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and the voice. This is achieved by extracting features from each modality using signal processing techniques and then classifying those features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization and general properties of the feature being extracted. In contrast, the features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as their mean, median and rate of change. These features are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and the voice are independently classified into emotions using two separate feed-forward artificial neural networks. The toolbox then presents the outputs of the two networks from one or both modalities on a synchronized time scale. An interesting result from this research is the consistent misclassification of certain facial expressions between two databases, suggesting a cultural basis for the confusion. Adding the voice component was shown to partially improve classification.
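
As a loose illustration of the speech side of such a pipeline (not code from the thesis toolbox itself), the sketch below estimates formant frequencies from a single voiced frame using linear predictive coding, one of the techniques named in the abstract. The function names (lpc_coefficients, estimate_formants), the 16 kHz sampling rate, the LPC order of 12 and the synthetic test frame are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """LPC coefficients of one frame via the autocorrelation method
    and the Levinson-Durbin recursion (prediction filter A(z), a[0] = 1)."""
    windowed = frame * np.hamming(len(frame))
    acf = np.correlate(windowed, windowed, mode="full")
    r = acf[len(windowed) - 1 : len(windowed) + order]   # r[0] .. r[order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / error                     # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]       # update prediction coefficients
        error *= 1.0 - k * k                 # update prediction error power
    return a

def estimate_formants(frame, fs, order=12):
    """Rough formant estimates: angles of the LPC poles mapped to Hz."""
    a = lpc_coefficients(frame, order)
    poles = np.roots(a)
    poles = poles[np.imag(poles) > 0]                # one of each conjugate pair
    freqs = np.angle(poles) * fs / (2.0 * np.pi)     # pole angle -> frequency in Hz
    return np.sort(freqs[freqs > 90.0])              # drop near-DC poles

# Toy usage on a synthetic "voiced" frame: two sinusoids plus a little noise,
# so the LPC poles should cluster near 150 Hz and 700 Hz.
fs = 16000
t = np.arange(0, 0.03, 1.0 / fs)
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 700 * t)
frame += 0.01 * rng.standard_normal(len(t))
print(estimate_formants(frame, fs))
```

In a complete system, such per-frame formant estimates, together with pitch and mel-spectrum features and their statistics, would form the input vector to a feed-forward classifier of the kind described in the abstract.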

Keywords

facial expression recognition, emotion recognition in speech, multimodal expression recognition, neural networks, social robots, signal processing

Graduation Month

May

Degree

Master of Science

Department

Department of Mechanical and Nuclear Engineering

Major Professor

Akira T. Tokuhiro

Date

2008

Type

Thesis
