Classification of emotion using sub audible frequencies in vocal data

K-REx Repository

dc.contributor.author Narber, Cody G. en
dc.date.issued 2008-05-15
dc.description.abstract Current research involving vocal emotion detection has taken a variety of approaches and has found certain acoustic attributes that characterize different emotional states. While classification has improved over the past few years, computer classification is still not nearly as accurate as human classification. This thesis proposes a previously unexamined attribute that can be used as a measure for detecting emotion in human vocal samples. It is shown that the new infrasonic attribute is significant when examining agitated emotions and can therefore be used to help improve vocal emotion detection. en
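The abstract centers on an attribute drawn from the sub-audible (infrasonic, roughly below 20 Hz) band of a vocal signal. The record does not reproduce the thesis's exact feature extraction, so the following is only an illustrative sketch of one plausible measure of that kind: the fraction of a recording's spectral power that falls below a 20 Hz cutoff, computed with an FFT. The function name and the synthetic signals are assumptions for demonstration, not the author's method.

```python
import numpy as np

def infrasonic_energy_ratio(signal, sample_rate, cutoff_hz=20.0):
    """Fraction of total spectral power below cutoff_hz (the sub-audible band).

    Illustrative only: the thesis does not publish its extraction method
    in this record, so this is one plausible infrasonic measure, not the
    attribute actually proposed.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)  # bin frequencies in Hz
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs < cutoff_hz].sum() / total

# Synthetic example: a plain 200 Hz "voiced" tone versus the same tone
# with an added 10 Hz infrasonic component (a stand-in for agitation).
rate = 8000
t = np.arange(rate * 2) / rate                                # 2 seconds of samples
voiced = np.sin(2 * np.pi * 200 * t)
agitated = voiced + 0.5 * np.sin(2 * np.pi * 10 * t)

print(infrasonic_energy_ratio(voiced, rate))    # essentially zero
print(infrasonic_energy_ratio(agitated, rate))  # noticeably larger
```

In this toy setup the infrasonic ratio separates the two signals cleanly; on real speech the band energies are far noisier, which is presumably why the thesis frames the attribute as one signal among several rather than a classifier on its own.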
dc.language.iso en_US en
dc.publisher Kansas State University en
dc.subject Emotion en
dc.subject Computer en
dc.subject Detecting en
dc.subject Stress en
dc.title Classification of emotion using sub audible frequencies in vocal data en
dc.type Thesis en
dc.description.degree Master of Science en
dc.description.level Masters en
dc.description.department Department of Computing and Information Sciences en
dc.description.advisor David A. Gustafson en
dc.subject.umi Computer Science (0984) en
