Multi-modal expression recognition

dc.contributor.author: Chandrapati, Srivardhan
dc.date.accessioned: 2008-05-14T16:07:07Z
dc.date.available: 2008-05-14T16:07:07Z
dc.date.graduationmonth: May
dc.date.issued: 2008-05-14T16:07:07Z
dc.date.published: 2008
dc.description.abstract: Robots will eventually become common everyday items. However, before this becomes a reality, robots will need to learn to be socially interactive. Since humans communicate much more information through expression than through the actual spoken words, expression recognition is an important aspect of the development of social robots. Automatic recognition of emotional expressions has a number of potential applications beyond social robots: it can be used in systems that ensure an operator is alert at all times, or for psychoanalysis and cognitive studies. Emotional expressions are not always deliberate and can occur without the person being aware of them. Recognizing these involuntary expressions provides insight into a person's thoughts and state of mind and could serve as an indicator of hidden intent. In this research we developed an initial multi-modal emotion recognition system using cues from emotional expressions in the face and voice. This is achieved by extracting features from each modality using signal processing techniques, and then classifying these features with artificial neural networks. The features extracted from the face are the eyes, eyebrows, mouth and nose; this is done using image processing techniques such as the seeded region growing algorithm, particle swarm optimization, and general properties of the feature being extracted. The features of interest in speech are pitch, formant frequencies and the mel spectrum, along with statistical properties such as the mean, median and rate of change of these quantities; these are extracted using techniques such as the Fourier transform and linear predictive coding. We have developed a toolbox that can read an audio and/or video file and perform emotion recognition on the face in the video and the speech in the audio channel. The features extracted from the face and voice are independently classified into emotions using two separate feed-forward artificial neural networks, and the toolbox presents the output of the networks from one or both modalities on a synchronized time scale. An interesting result from this research is the consistent misclassification of facial expressions between the two databases, suggesting a cultural basis for this confusion. The addition of the voice component was shown to partially improve classification.
dc.description.advisor: Akira T. Tokuhiro
dc.description.degree: Master of Science
dc.description.department: Department of Mechanical and Nuclear Engineering
dc.description.level: Masters
dc.identifier.uri: http://hdl.handle.net/2097/762
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.subject: facial expression recognition
dc.subject: emotion recognition in speech
dc.subject: multimodal expression recognition
dc.subject: neural networks
dc.subject: social robots
dc.subject: signal processing
dc.subject.umi: Artificial Intelligence (0800)
dc.subject.umi: Engineering, Electronics and Electrical (0544)
dc.subject.umi: Engineering, Mechanical (0548)
dc.title: Multi-modal expression recognition
dc.type: Thesis
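
The abstract above describes the speech-side pipeline (pitch, mel-spectrum and formant-related features obtained with Fourier and linear predictive techniques, then classified by a feed-forward neural network) only at a high level. The sketch below is a minimal, hypothetical Python illustration of that kind of pipeline, not the thesis toolbox itself; all function names, frame sizes, feature choices and the random stand-in network weights are assumptions made purely for illustration.

# Hypothetical sketch (not from the thesis): speech feature extraction with
# numpy (autocorrelation pitch, mel-spaced band energies via FFT, LPC
# coefficients via Levinson-Durbin) followed by a small feed-forward
# classifier with random stand-in weights.
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def pitch_autocorr(frame, sr, fmin=60.0, fmax=400.0):
    """Crude pitch estimate from the autocorrelation peak within [fmin, fmax]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + np.argmax(ac[lo:hi]))

def mel_band_energies(frame, sr, n_bands=12):
    """Rough mel-spaced band energies from the windowed magnitude spectrum."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    mel = 2595.0 * np.log10(1.0 + np.fft.rfftfreq(len(frame), 1.0 / sr) / 700.0)
    edges = np.linspace(mel.min(), mel.max(), n_bands + 1)
    return np.array([spec[(mel >= edges[i]) & (mel < edges[i + 1])].sum()
                     for i in range(n_bands)])

def lpc_coeffs(frame, order=8):
    """LPC coefficients (the basis for formant analysis) via Levinson-Durbin."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1: len(frame) + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        err *= 1.0 - k * k
    return a[1:]

class FeedForwardNet:
    """One-hidden-layer feed-forward network; weights are random placeholders
    standing in for values a real system would learn from labelled data."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

    def predict(self, feats):
        h = np.tanh(feats @ self.w1)
        logits = h @ self.w2
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax scores, one per emotion class

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    # Placeholder "speech": a 150 Hz tone plus a little noise.
    audio = np.sin(2 * np.pi * 150 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
    frames = frame_signal(audio)
    feats = np.concatenate([
        [np.mean([pitch_autocorr(f, sr) for f in frames])],  # mean pitch
        mel_band_energies(frames[0], sr),                     # spectral shape
        lpc_coeffs(frames[0]),                                # LPC / formant proxy
    ])
    net = FeedForwardNet(n_in=feats.size, n_hidden=16, n_out=6)  # e.g. six basic emotions
    print(net.predict(feats))

Per the abstract, the thesis classifies the audio and facial features with two separate networks of this general type and displays their outputs on a synchronized time scale; the facial-feature extraction stage (seeded region growing, particle swarm optimization) is omitted from this sketch for brevity.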

Files

Original bundle
Name: SrivardhanChandrapati2008.pdf
Size: 6.98 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission