Gestural musical interfaces using real-time machine learning

Date

2018-12-01

Publisher

Kansas State University

Abstract

We present gestural musical instruments and interfaces that help musicians and audio engineers express themselves efficiently. While building physical instruments is a long-mastered craft, interest in virtual instruments and sound synthesis continues to grow. Virtual instruments are essentially software that lets musicians interact with a sound module inside the computer. Since the invention of MIDI (Musical Instrument Digital Interface), devices and interfaces for interacting with sound modules, such as keyboards, drum machines, joysticks, and mixing and mastering systems, have flooded the music industry. Research in the past decade has gone one step further, using simple musical gestures to create, shape, and arrange music in real time. Machine learning is a powerful tool for teaching such gestures to an interface: the ability to define new gestures and shape how a sound module responds unleashes an artist's untapped creativity. Timed music and multimedia environments such as Max/MSP/Jitter, combined with machine learning techniques, open gateways to embodied musical experiences without physical touch. This master's report presents my research and observations, and discusses how this interdisciplinary field could be used to study broader neuroscience problems such as embodied music cognition and human-computer interaction.
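To make the approach concrete, here is a minimal sketch, not the report's actual implementation, of the core pipeline the abstract and keywords describe: a support vector machine trained on gesture feature vectors, then used to map each recognized gesture to a MIDI-style control message. The feature layout, gesture labels, and control mapping below are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(seed=0)

# Toy training data: three gesture classes, each a cluster of
# 6-dimensional feature vectors (e.g., smoothed hand position
# and velocity). Real features would come from a sensor stream.
centers = np.array([[0.0] * 6, [1.0] * 6, [2.0] * 6])
X = np.vstack([c + 0.1 * rng.standard_normal((50, 6)) for c in centers])
y = np.repeat([0, 1, 2], 50)

# RBF-kernel SVM classifier, per the report's keywords.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

# Hypothetical mapping from recognized gesture to a MIDI control change.
GESTURE_TO_CC = {0: ("filter cutoff", 74), 1: ("volume", 7), 2: ("pan", 10)}

def on_gesture_frame(features):
    """Classify one incoming feature frame and emit a control decision."""
    label = int(clf.predict(features.reshape(1, -1))[0])
    name, cc = GESTURE_TO_CC[label]
    # A real system would send this to a synth, e.g., over MIDI or
    # OSC to Max/MSP; here we only print the decision.
    print(f"gesture {label} -> MIDI CC {cc} ({name})")

# Simulate one frame of gesture data near class 1's cluster.
on_gesture_frame(centers[1] + 0.05 * rng.standard_normal(6))
```

In a live setting, the training step would be repeated interactively so a performer can teach new gestures on the fly, which is the real-time aspect the report emphasizes.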

Keywords

Machine Learning, Gesture Recognition, Music Technology, Neuroscience, Audio Engineering, Support Vector Machines

Graduation Month

December

Degree

Master of Science

Department

Department of Computer Science

Major Professor

William H. Hsu

Type

Report
