A convolutive model for polyphonic instrument identification and pitch detection using combined classification

dc.contributor.author: Weese, Joshua L.
dc.date.accessioned: 2013-04-25T15:16:42Z
dc.date.available: 2013-04-25T15:16:42Z
dc.date.graduationmonth: May
dc.date.issued: 2013-05-01
dc.date.published: 2013
dc.description.abstract: Pitch detection and instrument identification can be achieved with relatively high accuracy when considering monophonic signals in music; however, accurately classifying polyphonic signals in music remains an unsolved research problem. Pitch and instrument classification is a subset of Music Information Retrieval (MIR) and automatic music transcription, both of which have numerous research and real-world applications. Several areas of research are covered in this thesis, including the fast Fourier transform, onset detection, convolution, and filtering. Basic music theory and terminology are also presented in order to explain the context and structure of the data used. The focus of this thesis is on the representation of musical signals in the frequency domain. Polyphonic signals with many different voices and frequencies can be exceptionally complex. This thesis presents a new model for representing the spectral structure of polyphonic signals: the Uniform MAx Gaussian Envelope (UMAGE). The new spectral envelope closely approximates the distribution of frequency parts in the spectrum while remaining resilient to rapid oscillation (noise), and it generalizes well without losing the representation of the original spectrum. When subjectively compared to other spectral envelope methods, such as the linear predictive coding envelope and the cepstrum envelope, UMAGE is able to model high-order polyphonic signals without dropping partials (frequencies present in the signal). In other words, UMAGE models a signal independently of the signal's periodicity. The performance of UMAGE is evaluated both objectively and subjectively, and it is shown to be robust at modeling the distribution of frequencies in both simple and complex polyphonic signals. Combined classification (combiners), a methodology for learning large concepts, is used to simplify the learning process and boost classification results.
The output of each learner is then averaged to obtain the final result. UMAGE is less accurate when identifying pitches; however, it achieves accuracy in identifying instrument groups on order-10 polyphonic signals (ten voices) that is competitive with the current state of the field.
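The abstract states that the combiner averages the outputs of the individual learners to produce the final classification. A minimal sketch of such a mean combiner, assuming each learner emits per-class probabilities (the learner outputs and class counts below are hypothetical, not taken from the thesis):

```python
import numpy as np

def combine_predictions(prob_outputs):
    """Average the class-probability outputs of several learners
    (a simple mean combiner) and pick the highest-scoring class."""
    # prob_outputs: list of (n_samples, n_classes) arrays, one per learner
    stacked = np.stack(prob_outputs)   # (n_learners, n_samples, n_classes)
    avg = stacked.mean(axis=0)         # average over learners
    return avg.argmax(axis=1)          # predicted class index per sample

# Hypothetical outputs from three learners, two samples, three classes
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]])
p3 = np.array([[0.5, 0.4, 0.1], [0.3, 0.5, 0.2]])
print(combine_predictions([p1, p2, p3]))  # -> [0 1]
```

Averaging probabilities rather than taking a majority vote lets a confident learner outweigh several uncertain ones, which is one common reason mean combiners boost accuracy on difficult, high-order inputs.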
dc.description.advisor: William H. Hsu
dc.description.degree: Master of Science
dc.description.department: Department of Computing and Information Sciences
dc.description.level: Masters
dc.identifier.uri: http://hdl.handle.net/2097/15599
dc.language.iso: en_US
dc.publisher: Kansas State University
dc.subject: Machine learning
dc.subject: Digital signal processing
dc.subject: Music information retrieval
dc.subject: Polyphonic instrument identification
dc.subject: Polyphonic pitch detection
dc.subject: Gaussian mixture
dc.subject.umi: Computer Science (0984)
dc.subject.umi: Information Science (0723)
dc.subject.umi: Music (0413)
dc.title: A convolutive model for polyphonic instrument identification and pitch detection using combined classification
dc.type: Thesis

Files

Original bundle
Name: JoshWeese2013.pdf
Size: 1.49 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.62 KB
Description: Item-specific license agreed upon to submission