Musicians control many different aspects of sound to create expressive, dynamic music. A violinist alters bowing speed and pressure to affect timbre, a horn player creates a swell in volume, a vocalist varies their vibrato. This variation over time is what makes music interesting. With computer-generated music, controlling these parameters is the difference between static, lifeless sounds and expressive, musical ones.
Gestures in Digital Music Instruments
With acoustic instruments the connection between a physical gesture and the resulting sound is obvious – it's tied directly to the way the instrument is played. With digital music instruments there is no direct physical mapping – the mappings are up to the designer and are done in software. However, this doesn't mean you should make arbitrary mappings! To create something playable, or even intuitive, it's important to consider how a player will play a new interface.
Some questions to consider:
– What physical gestures do you want to use to play or interact with your instrument?
– What does each hand do? What about other parts of the body?
– Is the instrument held, set on a table, worn around the neck with a strap, or something else? Where do the various inputs (buttons, switches, and sensors) go?
– What elements of music or sound parameters do the different interactions control?
– Is there a metaphor for this gesture?
– And finally, for each gesture or sound parameter under control, does it need a digital or analog input?
Once you have some ideas to explore, it's time to try creating an interface and doing the mapping in software.
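To give a feel for what a mapping looks like in code, here is a minimal sketch in Python. The sensor, its value range, and the parameter ranges are all illustrative assumptions (a 10-bit ADC reading between 0 and 1023, a flex sensor controlling filter cutoff), not part of any particular library:

```python
# A minimal sketch of mapping raw sensor readings to sound parameters.
# Assumes a hypothetical 10-bit ADC (values 0-1023); all ranges are illustrative.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Analog input: a flex sensor's continuous reading controls filter cutoff (Hz).
raw = 612  # hypothetical ADC reading
cutoff = scale(raw, 0, 1023, 200.0, 8000.0)

# Digital input: a button is simply on or off, so it gates the note.
button_pressed = True
note_on = button_pressed

print(cutoff, note_on)
```

The same distinction drives the last question in the list above: a continuous gesture (bend, pressure, tilt) wants an analog input you can scale, while a discrete gesture (press, toggle) only needs a digital one.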
Go to – Mapping Inputs to Sound – for more on how to use the modular-muse interface objects to bring in data from buttons, switches and sensors.