A convolutive model for polyphonic instrument identification and pitch detection using combined classification


dc.contributor.author Weese, Joshua L.
dc.date.accessioned 2013-04-25T15:16:42Z
dc.date.available 2013-04-25T15:16:42Z
dc.date.issued 2013-04-25
dc.identifier.uri http://hdl.handle.net/2097/15599
dc.description.abstract Pitch detection and instrument identification can be achieved with relatively high accuracy for monophonic musical signals; however, accurately classifying polyphonic signals remains an unsolved research problem. Pitch and instrument classification is a subset of Music Information Retrieval (MIR) and automatic music transcription, both of which have numerous research and real-world applications. Several areas of research are covered in this thesis, including the fast Fourier transform, onset detection, convolution, and filtering. Basic music theory and terminology are also presented to explain the context and structure of the data used. The focus of this thesis is the representation of musical signals in the frequency domain. Polyphonic signals with many different voices and frequencies can be exceptionally complex. This thesis presents a new model for representing the spectral structure of polyphonic signals: the Uniform MAx Gaussian Envelope (UMAGE). The new spectral envelope closely approximates the distribution of partials in the spectrum while remaining resilient to rapid oscillation (noise), and it generalizes well without losing the representation of the original spectrum. When subjectively compared with other spectral envelope methods, such as the linear predictive coding envelope and the cepstrum envelope, UMAGE models high-order polyphonic signals without dropping partials (frequencies present in the signal); in other words, UMAGE models a signal independently of the signal's periodicity. The performance of UMAGE is evaluated both objectively and subjectively, and it is shown to be robust at modeling the distribution of frequencies in simple and complex polyphonic signals. Combined classification (combiners), a methodology for learning large concepts, is used to simplify the learning process and boost classification results: the outputs of the individual learners are averaged to produce the final result. UMAGE is less accurate at identifying pitches; however, its accuracy in identifying instrument groups in order-10 polyphonic signals (ten voices) is competitive with the current state of the field. en_US
dc.language.iso en_US en_US
dc.publisher Kansas State University en
dc.subject Machine learning en_US
dc.subject Digital signal processing en_US
dc.subject Music information retrieval en_US
dc.subject Polyphonic instrument identification en_US
dc.subject Polyphonic pitch detection en_US
dc.subject Gaussian mixture en_US
dc.title A convolutive model for polyphonic instrument identification and pitch detection using combined classification en_US
dc.type Thesis en_US
dc.description.degree Master of Science en_US
dc.description.level Masters en_US
dc.description.department Department of Computing and Information Sciences en_US
dc.description.advisor William H. Hsu en_US
dc.subject.umi Computer Science (0984) en_US
dc.subject.umi Information Science (0723) en_US
dc.subject.umi Music (0413) en_US
dc.date.published 2013 en_US
dc.date.graduationmonth May en_US
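
The abstract above names two concrete techniques: a spectral envelope built from Gaussians (UMAGE) and a combiner that averages learner outputs. The sketch below is a minimal illustration of those two ideas, not the thesis's actual implementation: it assumes one Gaussian centered at each detected spectral peak, scaled to that peak's magnitude and combined by a pointwise maximum; `sigma`, the toy spectrum, and all function names are illustrative choices rather than values taken from the thesis.

import numpy as np
from scipy.signal import find_peaks

def max_gaussian_envelope(magnitude, sigma=4.0):
    """Smooth envelope of an FFT magnitude spectrum.

    Centers one Gaussian (width `sigma`, in bins) at each detected peak,
    scales it to the peak's magnitude, and takes the pointwise maximum,
    so every partial keeps its own bump instead of being dropped.
    """
    bins = np.arange(len(magnitude), dtype=float)
    peaks, _ = find_peaks(magnitude)  # candidate partial locations
    if len(peaks) == 0:
        return np.zeros_like(magnitude)
    # Shape (num_peaks, num_bins): one scaled Gaussian per peak.
    gaussians = magnitude[peaks, None] * np.exp(
        -0.5 * ((bins[None, :] - peaks[:, None]) / sigma) ** 2
    )
    return gaussians.max(axis=0)

def combine_by_averaging(probability_outputs):
    """Combiner: average the class-probability outputs of several learners."""
    return np.mean(np.stack(probability_outputs), axis=0)

# Toy usage: a two-partial signal (440 Hz + 660 Hz at 44.1 kHz), then two
# hypothetical learners' probability vectors combined by averaging.
t = np.arange(4096) / 44100.0
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
envelope = max_gaussian_envelope(np.abs(np.fft.rfft(signal)))
combined = combine_by_averaging([np.array([0.7, 0.3]), np.array([0.5, 0.5])])

In the thesis the envelope feeds features to the classifiers; the two pieces appear side by side here only to make the envelope construction and the averaging step concrete.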

