Sound Mind: A Creative Approach
The discussion here is based on techniques I developed for specific projects, all of which controlled sound synthesis equipment through MIDI (a standard communication protocol for interfacing electronic musical instruments and computers); the ideas, however, could easily be applied to other applications, such as graphics and video generation. As you read, remember that the intention is not to substitute one recipe for another, but to open your mind to the potential of becoming your own master chef. The goal is to expose the reader to new ideas, not to detail how to accomplish a particular task. For the most part, readers will not have access to the equipment originally used for these projects. Remember, it is not the method that we are stressing here, it is the potential for inspiration. Hopefully, these ideas will serve as a starting point to something new and wonderful.
Control Systems

A control system can provide data for any kind of destination, be it the control of laser lighting, visual art or even music.
Implementing a Control System

You do not have to be a programming master to implement a control system. The basic process required by the program is simple: read data from an input source, translate it into a useful range, and send the result to the destination device.
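As a sketch of that read-translate-send loop - the function names and the choice of a 0-127 MIDI controller range are illustrative assumptions, not from the article - the heart of such a program might look like:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a reading from the input range into the output range."""
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

def sensor_to_midi_cc(reading, in_min, in_max):
    """Map a raw sensor reading to a 0-127 MIDI controller value,
    clamping anything that falls outside the expected input range."""
    cc = round(scale(reading, in_min, in_max, 0, 127))
    return max(0, min(127, cc))

# The control loop itself (hardware access omitted) would simply be:
#   while running:
#       reading = read_input_source()          # e.g. a serial port
#       value = sensor_to_midi_cc(reading, lo, hi)
#       send_to_destination(value)             # e.g. a MIDI interface
```

The destination never knows (or cares) where the numbers came from, which is what makes the same skeleton reusable for lighting, graphics or music.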
Weather or Not?

While working on a film project, it became necessary to create a sound that added a particularly moody atmosphere to a windy scene. Numerous attempts to create a wind-like sound using sequenced pitch changes, sound filtering and other processing had failed. Though the results were somewhat pleasing, they sounded mechanical and seemed to be missing some level of authenticity. After much experimentation, it was decided that the humanized manipulation of the sounds was the culprit. When you watch leaves being blown around by the wind, for example, what you are seeing is not the wind itself, but the result of the wind blowing. The effect I had been trying to create by manually manipulating sound was based on what I could see - not on what was actually happening.

To achieve a more realistic effect, a control system was developed that used precision weather monitoring equipment in an outdoor environment. By applying real-time wind speed and direction data to the control of the sequenced sounds, a more realistic result was obtained.

Another possible application of a control system like this is to use weather data to control computer generation of clouds (ImageFX). In this way, you could use wind speed and direction to control the rate of drift as animated clouds move across the screen.

Radio Shack has introduced a consumer weather monitoring system, the WX200 Weather Station (63-1015), that can measure barometric pressure, dew point, humidity, rainfall, temperature, wind direction and wind speed. It also includes a standard serial interface, so you can easily connect it to your Amiga (aminet: misc/misc/wx200_1.10b.lha).
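The article does not say which sound parameters the wind data drove; as one hedged possibility, a reading could be turned into standard MIDI control-change messages - here wind speed drives the modulation wheel (controller 1) and direction drives pan (controller 10), with 100 km/h assumed as full scale. All of these mappings are illustrative choices, not the original project's.

```python
def wind_to_midi(speed_kmh, direction_deg, channel=0):
    """Translate one weather reading into 3-byte MIDI control-change
    messages: speed -> mod wheel depth, direction -> pan position."""
    mod = min(127, int(speed_kmh * 127 / 100))      # assume 100 km/h = full
    pan = int((direction_deg % 360) * 127 / 359)    # 0-359 deg -> 0-127
    status = 0xB0 | (channel & 0x0F)                # control change status
    return [bytes([status, 1, mod]), bytes([status, 10, pan])]
```

Fed in real time, gusts and lulls then modulate the sequenced sound directly, with none of the "humanized" regularity the manual attempts suffered from.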
The Sky's NOT the Limit

This project involved the projection of abstract graphics onto a large backdrop during a live musical performance. Distant Suns was used to set up a simulation of two satellites whose orbital paths would eventually result in a collision. The two satellites were tracked through their orbital paths, and their distance and velocity data were used to generate geometric patterns that changed shape and size. The simulation was designed so that the collision would occur at a peak moment during the musical performance, causing a wild display of color and abstract art.
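How the distance data might drive a pattern can be sketched with idealized circular orbits - a deliberate simplification of what Distant Suns actually simulates, and the mapping from separation to pattern size is my assumption, not the project's:

```python
import math

def satellite_pos(radius, period_s, t):
    """Position of a satellite on an idealized circular orbit at time t."""
    angle = 2.0 * math.pi * t / period_s
    return (radius * math.cos(angle), radius * math.sin(angle))

def separation(p1, p2):
    """Straight-line distance between the two satellites."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pattern_scale(dist, max_dist):
    """Map separation onto a 0..1 size factor: as the satellites close
    in on the collision, the on-screen pattern grows toward full size."""
    return 1.0 - min(dist / max_dist, 1.0)
```

Because the orbits are deterministic, choosing the periods fixes the collision time in advance, which is how the visual climax could be synchronized with a peak in the music.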
Putting Your Brain To Work

Another type of auditory evoked potential (AEP), and one that provides a more useful control source, is called the P300. It is believed to reflect the cortical response to sound (higher-level functioning - actual thinking). The P300 yields a waveform that depends on the subject's thoughts while "concentrating" on different sound stimuli (e.g. frequency).

To gather AEP data, electrodes are attached to the subject's head. These electrodes pick up electrical activity in the brain. There are many areas of ongoing electrical activity in the brain, so it is necessary to separate this activity (typically called noise) from the signal to be monitored. This is accomplished using signal averaging equipment, which works by adding and averaging the monitored signal over and over again. This method amplifies the wanted signal, causing the signal to "grow" and the noise to "shrink". By transferring this monitored signal to a computer, the data can be used to control almost anything.

An interesting application of this technique is to monitor a person's brain response while listening to a passage of music. This response is then used to control the music being listened to, allowing the person to participate in the composition of the song. Another application would involve monitoring a person while they listen to pre-recorded audio through headphones. This audio could be anything from jack-hammer sounds to classical music. The person's responses can be used to control the selection of instruments, sound effects and musical notes. These selections can be recorded by computer and used as source information for the generation of a new composition.

For further reading, see Richard Seabrook's The Brain-Computer Interface: Techniques for Controlling Machines, online at enterprise.aacc.cc.md.us/~rhs/bcipaper.html.
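The "grow and shrink" behavior of signal averaging is easy to demonstrate in a few lines. The stand-in evoked response below is invented purely for the demonstration; the point is that the time-locked signal reinforces across trials while the uncorrelated noise averages toward zero (roughly as one over the square root of the number of trials):

```python
import random

def average_trials(trials):
    """Point-by-point average of repeated, time-locked recordings."""
    n = len(trials)
    return [sum(tr[i] for tr in trials) / n for i in range(len(trials[0]))]

# A stand-in evoked response, buried under noise far larger than any
# single trial could reveal on its own.
signal = [0.0, 1.0, 2.0, 1.0, 0.0]
random.seed(42)  # fixed seed so the demonstration is repeatable
trials = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(500)]
recovered = average_trials(trials)
```

After 500 trials the recovered waveform sits within a few hundredths of the true signal, even though each individual trial is dominated by noise.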
Same or Different

Shepard Tones are made up of a number of sinusoidal tones in octave relation to each other. They are typically constructed from frequencies that correspond to notes on the piano/musical scale. In this manner, there would be twelve possible Shepard Tones (each corresponding to one of the twelve notes of the musical scale, from A to G#). Each of the notes is referred to as a pitch class. So pitch class C is made up of ...C0, C1, C2... - that is, all frequencies that are half of or double (octave relation) a C frequency. The same applies to the other eleven notes of the musical scale.

Because Shepard Tones are made up entirely of one pitch class, they are ambiguous in terms of how high or low their pitch is, but clear in terms of what pitch class they are. If you play one Shepard Tone, followed by another of a different pitch class, it can be difficult to determine which tone is higher in pitch - some people hear high, while others hear low. This phenomenon suggests that a song could be written using Shepard Tones whose melody would sound different to each listener.
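A Shepard Tone can be synthesized by summing octave-related sine waves under a bell-shaped amplitude envelope, so that no single octave dominates and the ear has no anchor for absolute height. The sample rate, octave span and envelope width below are illustrative choices:

```python
import math

def shepard_tone(pitch_class_hz, n_samples, rate=8000, octaves=range(-3, 4)):
    """Sum octave-related sinusoids around a base frequency, weighted by a
    bell-shaped envelope so the middle octaves are loudest and the outer
    octaves fade in and out imperceptibly."""
    samples = []
    for n in range(n_samples):
        t = n / rate
        s = 0.0
        for k in octaves:
            f = pitch_class_hz * (2.0 ** k)          # octave relation
            amp = math.exp(-(k ** 2) / 4.0)          # bell-shaped weighting
            s += amp * math.sin(2.0 * math.pi * f * t)
        samples.append(s)
    return samples
```

Sliding the envelope while stepping the pitch class is what produces the famous endlessly-rising (or falling) scale illusion.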
Saying It Like It Is

A database of this type would allow the music composer to enter "sentences" to generate a control sequence for a composition, and to select appropriate sound structures for creating it. The database could be filled with common word associations for words such as FEAR, JOY, FLOWER, etc., based on the requirements of the composition and the composer. This idea can easily be applied to graphics as well... What might a PRETTY PINK FLOWER effect do to a video stream?
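In its simplest form such a database is just a lookup table from words to control settings. The entries and parameter names below are entirely hypothetical - the article leaves the associations up to the composer:

```python
# Hypothetical word-association table; every entry is illustrative.
MOOD_DB = {
    "FEAR":   {"tempo": 140, "scale": "minor", "instrument": "strings"},
    "JOY":    {"tempo": 120, "scale": "major", "instrument": "flute"},
    "FLOWER": {"tempo": 80,  "scale": "major", "instrument": "harp"},
}

def sentence_to_controls(sentence):
    """Collect the control settings for every known word in a sentence,
    in the order the words appear; unknown words are simply skipped."""
    return [MOOD_DB[w] for w in sentence.upper().split() if w in MOOD_DB]
```

A composer could then type a sentence and have each recognized word contribute a step to the control sequence for the piece.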
Help at Hand
MIDI (Musical Instrument Digital Interface)

MIDI is a communications protocol originally created for interfacing synthesizers and other electronic musical instruments. It has evolved into a communication system that can link virtually all of the equipment used in music and video production. Specialized systems, called sequencers, allow MIDI information to be stored and played back. Common MIDI commands include musical notes, volume level, sustain pedal, tempo, etc. A special area of MIDI, called System Exclusive, allows for control information specific to a particular device. Each device recognizes its specific control commands and responds only to them.
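At the wire level the common channel messages are tiny: per the MIDI specification, a Note On is three bytes - a status byte 0x9n (where n is the channel), then the note number and velocity, each 0-127. A minimal sketch of building those bytes:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message (status 0x9n per the MIDI spec).
    Note and velocity are 7-bit values, 0-127."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note Off uses status 0x8n; a release velocity of 0 is common."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])
```

For example, `note_on(0, 60, 100)` starts middle C on channel 1 at a moderate velocity; sending the matching `note_off` ends it.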