Multi-modal expression recognition

dc.contributor.author Chandrapati, Srivardhan
dc.date.accessioned 2008-05-14T16:07:07Z
dc.date.available 2008-05-14T16:07:07Z
dc.date.issued 2008-05-14T16:07:07Z
dc.date.submitted May 2008 en
dc.identifier.uri http://hdl.handle.net/2097/762
dc.description.abstract Robots will eventually become common everyday items. Before this becomes a reality, however, robots will need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that verify the operator is alert at all times, or in psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them. Recognizing these involuntary expressions provides insight into the person's thoughts and state of mind, and they could serve as indicators of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and the voice. This is achieved by extracting features from each modality using signal processing techniques, and then classifying these features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as their mean, median and rate of change. These features are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and the voice are independently classified into emotions using two separate feed-forward artificial neural networks. The toolbox then presents the output of the neural networks from one or both modalities on a synchronized time scale. One interesting result of this research is the consistent misclassification of facial expressions between two databases, suggesting a cultural basis for this confusion. The addition of the voice component has been shown to partially improve classification. en
dc.language.iso en_US en
dc.publisher Kansas State University en
dc.subject facial expression recognition en
dc.subject emotion recognition in speech en
dc.subject multimodal expression recognition en
dc.subject neural networks en
dc.subject social robots en
dc.subject signal processing en
dc.title Multi-modal expression recognition en
dc.type Thesis en
dc.description.degree Master of Science en
dc.description.level Masters en
dc.description.department Department of Mechanical and Nuclear Engineering en
dc.description.advisor Akira T. Tokuhiro en
dc.subject.umi Artificial Intelligence (0800) en
dc.subject.umi Engineering, Electronics and Electrical (0544) en
dc.subject.umi Engineering, Mechanical (0548) en
dc.date.published 2008 en
dc.date.graduationmonth May en
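
The abstract describes extracting speech features (pitch, mel spectrum) via the Fourier transform and classifying them with feed-forward neural networks. The sketch below is only an illustration of that general pipeline, not the thesis toolbox itself: the sampling rate, frame length, filter count, emotion label set, network sizes and function names are all assumed for demonstration, and it uses plain NumPy rather than whatever environment the author used.

```python
# Illustrative sketch (not the thesis toolbox): extract simple spectral
# features from one speech frame and score them with a small feed-forward
# network. All constants below are assumptions, not values from the thesis.
import numpy as np

SAMPLE_RATE = 16000          # assumed sampling rate (Hz)
FRAME_LEN   = 512            # assumed analysis frame length (samples)
N_MEL       = 20             # assumed number of mel filters
EMOTIONS    = ["neutral", "happy", "sad", "angry"]   # assumed label set

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel filters spanning 0 Hz .. sr/2."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def speech_features(frame, sr=SAMPLE_RATE):
    """Log-mel energies plus a crude FFT-peak pitch estimate for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    power = spectrum ** 2
    mel_energies = np.log(mel_filterbank(N_MEL, len(frame), sr) @ power + 1e-10)
    # crude pitch estimate: strongest spectral bin between 60 and 400 Hz
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    band = (freqs >= 60) & (freqs <= 400)
    pitch = freqs[band][np.argmax(spectrum[band])]
    return np.concatenate([mel_energies, [pitch]])

class FeedForwardNet:
    """Minimal one-hidden-layer network of the kind the abstract mentions."""
    def __init__(self, n_in, n_hidden, n_out, rng=np.random.default_rng(0)):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

    def forward(self, x):
        hidden = np.tanh(x @ self.W1)
        logits = hidden @ self.W2
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()          # softmax over emotion classes

if __name__ == "__main__":
    frame = np.random.randn(FRAME_LEN)   # stand-in for a real audio frame
    feats = speech_features(frame)
    net = FeedForwardNet(len(feats), 16, len(EMOTIONS))
    print(dict(zip(EMOTIONS, np.round(net.forward(feats), 3))))
```

In the thesis the face and voice channels feed two separate networks whose outputs are then shown on a synchronized time scale; the sketch covers only the voice path, and the network here is untrained (random weights), so its output is meaningful only as a shape/interface illustration.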
