Classification of emotion using sub-audible frequencies in vocal data

K-REx Repository


dc.contributor.author Narber, Cody G.
dc.date.accessioned 2008-05-15T15:35:34Z
dc.date.available 2008-05-15T15:35:34Z
dc.date.issued 2008-05-15T15:35:34Z
dc.identifier.uri http://hdl.handle.net/2097/776
dc.description.abstract Current research on vocal emotion detection has taken a variety of approaches and has identified certain acoustic attributes that characterize different emotional states. While classification has improved over the past few years, computer classification is still not nearly as accurate as human classification. This thesis proposes an attribute that has not previously been examined and that can be used as a measure for detecting emotion in human vocal samples. The new infrasonic attribute is shown to be significant when examining agitated emotions, and it can therefore be used to help improve vocal emotion detection (see the illustrative sketch after this record). en
dc.language.iso en_US en
dc.publisher Kansas State University en
dc.subject Emotion en
dc.subject Computer en
dc.subject Detecting en
dc.subject Stress en
dc.title Classification of emotion using sub-audible frequencies in vocal data en
dc.type Thesis en
dc.description.degree Master of Science en
dc.description.level Masters en
dc.description.department Department of Computing and Information Sciences en
dc.description.advisor David A. Gustafson en
dc.subject.umi Computer Science (0984) en
dc.date.published 2008 en
dc.date.graduationmonth May en
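
The record above does not reproduce the thesis's actual feature-extraction method. As a rough, hypothetical illustration of what measuring a sub-audible (infrasonic, below roughly 20 Hz) attribute could look like, the following Python sketch computes the fraction of a recording's spectral power that lies below a 20 Hz cutoff. The function name, the cutoff value, and the energy-ratio approach are all assumptions made for illustration, not the author's method.

# Hypothetical sketch only: estimates the share of spectral power in the
# infrasonic band (below ~20 Hz) of a WAV recording. All names and the
# 20 Hz cutoff are illustrative assumptions, not the thesis's method.
import numpy as np
from scipy.io import wavfile

INFRASONIC_CUTOFF_HZ = 20.0  # assumed upper edge of the sub-audible band

def infrasonic_energy_ratio(path):
    """Return the fraction of total spectral power below INFRASONIC_CUTOFF_HZ."""
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:          # mix multi-channel audio down to mono
        samples = samples.mean(axis=1)
    samples -= samples.mean()     # remove DC offset so the 0 Hz bin is ~0
    power = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / rate)
    total = power.sum()
    if total == 0.0:
        return 0.0
    return power[freqs < INFRASONIC_CUTOFF_HZ].sum() / total

In a pipeline of the kind the abstract describes, a scalar measure like this would be compared across labeled emotional states or added to a conventional acoustic feature set for a classifier; the abstract reports that the infrasonic attribute was significant for agitated emotions.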

