From Outside the Window: Electronic Sound Performance
May 6, 2008
“All musical instruments are tools that map human motoric input on an acoustic output.”1
“How can I translate the sense of corporeal fallibility and virtuosity present in acoustic performances into performances of electronic sounds that have no previously established gestural analog?”2
Though I have been working with a personal computer since 1983 and feel more or less married to the computer, I have remained an outsider to the world of computer music. This is probably because in the 1960s I eschewed synthesis and found my own way to generate electronic sound. Rather than adding up sine waves to create sounds and cutting and splicing tape together to make a piece out of those sounds to be played back on tape, I found a way that allowed me to perform my music in real time in the so-called classical electronic music studio. I needed to map my “human motoric input” onto the machine and “feel my corporeal fallibility and virtuosity” in the process.
The San Francisco Tape Music Center3 was home base in the ’60s for all kinds of musical adventures and experiments for my colleagues and me. The studio consisted of a small variety of electronic test equipment, a large telephone-style patch bay, amplifiers, speakers and two professional Ampex stereo tape machines. Those tube oscillators with the big frequency dial on the front did not seem inviting for performance; the instruments were made for setting frequencies for scientific tests. Furthermore, there were switches to shift the frequency range, as the dial alone could not cover the whole range the oscillator could produce.
After staring for a long time at the large Hewlett Packard war-surplus test oscillators4 and wondering how I could make any music with them, an idea popped into my mind: my accordion5 teacher Willard Palmer6 had taught me to listen to difference tones. If I played an interval in the high register of my instrument and pulled hard on the bellows, I could hear the difference tones. I had always wondered how it would be to hear just the difference tones without the generating tones.
I noticed that the oscillator frequency range went from 1 Hz to 500 kHz – above and below the range of human hearing. I set two of our three oscillators to around 40,000 Hz and 39,950 Hz. I patched them through to the sound system expecting to hear the difference. At first there was nothing. I thought, “there must be something – maybe the amplitude is too low”. I placed line amplifiers in the patch and sure enough I heard my first difference tone, low and clear and rich in quality. I was startled, delighted and thrilled with the 50 Hz sound! It felt rather magical drawing down this tone from sound vibrations that I could not hear. I touched the dial tentatively and discovered immediately that I could now sweep the audio range by barely turning the dial. “Now,” I thought, “how do I use this to play music?”
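The arithmetic of that first difference tone can be sketched digitally. The short Python example below is an illustration, not a model of the actual studio patch: it mixes two sine tones set just above the range of hearing through a slight even-order nonlinearity – the kind an amplifier stage can introduce – and finds the resulting audible component at 40,000 − 39,950 = 50 Hz.

```python
import numpy as np

sr = 192_000                           # sample rate high enough for 40 kHz tones
t = np.arange(sr) / sr                 # one second of time
f1, f2 = 40_000.0, 39_950.0            # the two oscillator settings, both inaudible

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
bent = mix + 0.5 * mix ** 2            # even-order nonlinearity creates sum and difference products

spectrum = np.abs(np.fft.rfft(bent))
freqs = np.fft.rfftfreq(len(bent), 1 / sr)

band = (freqs > 20) & (freqs < 20_000)  # look only inside the audible range
peak = freqs[band][np.argmax(spectrum[band])]
print(peak)                             # 50.0 – the difference tone
```

The sum tone (79,950 Hz) and the doubled tones also appear in the spectrum, but only the 50 Hz difference product falls within hearing.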
I had discovered tape head delay and also that I could use more than one tape machine to add distance between playback heads for longer time delays.7 I could route the tracks via a patch bay (there were no mixers available then) in a variety of configurations. I fed the output of the oscillators into the first tape machine, listening and tuning the configuration or signal routing to build my performance instrument. The bias frequencies of the tape machines also modulated the sounds I was getting from the heterodyning of the oscillators. I could send sounds into track 1 of the stereo tape, hear both the record head and playback head with the brief delay, then hear the sound come back from a second playback machine when the tape passed over to the distant supply reel. I could patch the tracks back to the first machine so that new loops were created. Feedback and canonical form became a core element of my electronic music.
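The core of that routing – a playback head re-recorded onto the same tape at reduced gain – can be sketched as a simple delay line with feedback. This Python sketch is a schematic illustration under assumed parameters, not the actual two-machine patch: each echo is the earlier output fed back into the input.

```python
import numpy as np

def tape_delay(signal, sr, delay_s, feedback):
    """One record-head/playback-head gap with feedback:
    the playback is attenuated and re-recorded."""
    d = int(delay_s * sr)                         # head gap expressed in samples
    out = np.zeros(len(signal))
    for n in range(len(signal)):
        delayed = out[n - d] if n >= d else 0.0   # what the playback head reads
        out[n] = signal[n] + feedback * delayed   # re-record it with the new input
    return out

sr = 8_000
impulse = np.zeros(sr)
impulse[0] = 1.0                                  # a single click as input
echoes = tape_delay(impulse, sr, delay_s=0.25, feedback=0.5)
print(echoes[0], echoes[2000], echoes[4000])      # 1.0 0.5 0.25 – decaying repeats
```

A second machine downstream would simply add another, longer `delay_s` term; cross-patching its two tracks back to the first machine is what turns the repeats into the canonical loops described above.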
In my experience with long practice sessions playing my accordion I had noticed that my hands usually had a very pleasant tingling sensation that I enjoyed – an after-effect of mapping “motoric input onto an acoustic output”: my fingers extended to accordion keys and buttons, which extended to valves, so that the bellows action could blow air through reeds to make sounds.
Soon, through improvisation, I was creating my first electronic music. In creating my electronic instrument with the oscillators, the huge dials that had seemed so unfriendly to performance became receivers for the musical knowledge embodied in my hands and fingers. I had created a very unstable non-linear music-making system: difference tones drawn from tones set above the range of hearing, manipulated by the bias frequencies of the electromagnetic tape recording; feedback from a second tape machine in parallel with newly generated difference tones as I responded instantaneously with my hands on those dials to what I was hearing from the delays; and all of the sounds being recorded on magnetic tape.
I had created a new musical instrument that included my “human motoric input” mapped onto the machine for analog output to speakers. This meant that I could play my electronic music in real time without editing or overdubbing. The first pieces made at the San Francisco Tape Music Center were called Mnemonics I–V (1965). My first electronic music record release was I of IV (1966)8 (CBS Odyssey 32 16 0160), made at the University of Toronto Electronic Music Studio. An enthusiastic reviewer lauded the sound of the piece but turned negative at the end, saying that because it was made in real time it must have just been thrown together. Clearly, according to the reviewer, I was an outsider throwing music together rather than constructing it carefully.
In 1966 the Buchla Modular Synthesizer rendered the classical electronic music studio obsolescent9. I made some real time pieces performing the Buchla with my delay system, but I missed the sound of those tube oscillators; transistor oscillators had a very different sound quality and feeling. Around 1967 I began to play my accordion with the tape delay system – I needed a trusty performance sound source. I played concerts dragging around a couple of Sony 77710 tape recorders for my delay system. Eventually, in 1983, I replaced the tape machines with Lexicon PCM4211 digital delay processors – one for each hand. I called the evolving instrument the Expanded Instrument System12. I applied what I had learned from playing the oscillators with delays to my accordion playing. I wanted more and more delay processors, and this desire led inevitably to the adoption of computer control of the processors: there was no way to manage all the knobs and switches that I wanted to manipulate while performing. I was not interested in studio techniques for constructing a composition. Clearly my devotion was to improvisational real time performance.
My adoption of a computer version of my EIS was delayed until 16-bit CD quality sound became more readily available around 1991. I was reluctant to give up the warm sound of the Lexicons, just as I had been reluctant to give up the sound of those beautiful tube oscillators of the ’60s.
The Lexicon PCM42s had special performance features designed by Gary Hall13 that I liked, as well as the excellent sound reproduction that had led me to choose them in the first place. Unlike the tape machine delay system, the digital delay processors allowed for capturing phrases for looping, modulation of delayed material with sine and square waves, and voltage control of clock time for pitch bending. My feet on accelerator-type pedals became sensitized much as my hands and fingers had for motoric mapping to acoustic output. I became very adept at foot-controlled pitch bending with both feet while performing new material with my fingers.
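Why varying the delay clock bends pitch can be shown with a short sketch. The Python example below uses hypothetical parameters, not the PCM42’s actual circuitry: when a delay line’s delay time shrinks while it is being read, the read point moves faster than the write point, raising the playback rate and therefore the pitch, much like varispeed on a tape machine.

```python
import numpy as np

sr = 48_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)       # a 440 Hz input tone

# Delay time sweeps from 0.5 s down to 0.4 s over one second,
# so the read position advances at 1.1x the write rate.
delay = np.linspace(0.5, 0.4, sr)
read_pos = np.arange(sr) - delay * sr      # fractional read index into the buffer
valid = read_pos >= 0                      # skip samples before the buffer has filled
out = np.interp(read_pos[valid], np.arange(sr), tone)

# Estimate the output pitch by counting zero crossings.
crossings = np.sum(np.diff(np.sign(out)) != 0)
est = crossings / 2 / (valid.sum() / sr)
print(round(est))                          # roughly 484 Hz: the tone bent up about 10%
```

A pedal (or, later, a program) steering that sweep in real time is the same gesture as riding the PCM42’s voltage-controlled clock.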
I worked in a three-week residency at the Banff Centre on The Lightning Box – a project with programmer Cornelia Colyer14 to develop what I called super foot. I wanted a program that could control the Lexicons as I was doing with my feet, only going beyond what I could do physically with them: the program would modulate the delays as if my foot were moving at impossible rates and in impossible configurations. The first execution of that program was quite thrilling to me, as I had extended my motoric input into the computer and imagined something beyond what I could physically accomplish. This was a start towards what eventually became part of the EIS. I created recipes, or algorithms, that could process the delayed input and the delays themselves. Rather than knobs and switches there was a program to control what I could not manage, and the program behaved as though it were subject to “human corporeality and fallibility”. The recipes or algorithms that I designed for the EIS had assimilated my way of managing the knobs and switches.
I continue to make music in real time with my accordion, with a variety of small instruments, and with the EIS on the computer. Currently the program is learning to learn. I want a real time digital partner that interacts with my input in a humanly motoric way, even though the ways may be distinctly beyond human capability. After more than forty years of dealing with my way of making electronic music, both analog and digital, it seems important to me that we find ways to work with non-human forms that present us with musical intelligence and new challenges. I remain outside the window.
Since 1990 another window has attracted my attention – telematic performance. The history of this odyssey is described in my article From Telephone to High Speed Internet: A Brief History of My Tele-Musical Performances15. The Telematic Circle16, a group of associates interested in telematic performance, has gathered. Performing with musical partners in different locations opens an entirely new window – a new venue. The space between locations is new territory. How to navigate this space and make it as palpable as performing in the same room is a complex and challenging task.
Working together with colleagues Chris Chafe, director of CCRMA17 at Stanford University, and Jonas Braasch, director of CARL18 at Rensselaer Polytechnic Institute, has brought us part way to a satisfying realization of telematic performance. Chafe’s open source software JackTrip19 has provided us with low latency eight-channel CD quality audio transmission, Jeremy Cooperstock’s UltraVideo Conferencing20 software gives us DV-quality video, and Jonas Braasch’s ViMic21 software makes virtual room simulation possible. My improvisation ensemble Tintinnabulate has jammed together with Chafe’s SoundWire22 ensemble for two years. These sessions have given us an enormous amount of artistic and technical data in a synergistic effort. The result of some years of exploration, from video telephone to the high speed internet, has attracted a National Science Foundation grant to research and produce A Robust Distributed Intelligent System for Telematic Applications. This system will incorporate ViMic and the Expanded Instrument System, along with an avatar/agent that will be able to give conducting cues to ensembles across the network and learn from the performances to develop creative solutions in improvisational situations.
For the next two years Jonas Braasch and I, as co-principal investigators, and Doug Van Nort, post-doctoral research associate, will engage in designing this intelligent system as we continue to make music with our partners in the Telematic Circle. The main objective of this work is to improve communications with distant partners and to open the window much wider for all who would like to participate.
Stay tuned and keep the windows open! I might like to look in.
1. Gesture Controlled Virtual Musical Instruments: A Practical Report, Dr. Godfried-Willem Raes, postdoctoral researcher, Ghent University College & Logos Foundation, Hogeschool Gent, Music & Drama Department, 1999/2002/2003/2004/2007
2. Peter Musselman, Welcoming – MFA Thesis, Mills College, May 2008
3. The San Francisco Tape Music Center: 60’s Counterculture And The Avant Garde, edited by David Bernstein, University Of California Press, 2008 (DVD included)
4. Hewlett Packard HP200C Wide Range Precision Audio Oscillator, vacuum tube type
• 1 Hz – 500 kHz Precision Oscillator
• 600 Ohm Balanced, Unbalanced Output
• Low distortion sine wave output
• Highly stable bridge circuit
• Excellent choice for Audio Work
• Made in USA
5. The Accordion (& The Outsider), Pauline Oliveros, The Squid’s Ear http://www.squidco.com/cgi-bin/news/newsView.cgi?newsID=429
6. Willard A. Palmer http://www.willardpalmer.org/
7. Tape Delay Techniques for Electronic Music Composers, in Software for People, Pauline Oliveros, Smith Publications 1984
8. Liner notes from Odyssey 32 16 0160: "I of IV was made in July, 1966, at the University of Toronto Electronic Music Studio. It is a real time studio performance composition (no editing or tape splicing), utilizing the techniques of amplifying combination tones and tape repetition. The combination-tone technique was one which I developed in 1965 at the San Francisco Tape Music Center.
"The equipment consisted of twelve sine-tone square-wave generators connected to an organ keyboard, two line amplifiers, mixer, Hammond spring-type reverb and two stereo tape recorders. Eleven generators were set to operate above twenty thousand cycles per second, and one generator at below one cycle per second. The keyboard output was routed to the line amplifiers, reverb, and then to channel A of recorder 1. The tape was threaded from recorder 1 to recorder 2. Recorder 2 was on playback only. Recorder 2 provided playback repetition approximately eight seconds later. Recorder 1 channel A was routed to recorder 1 channel B, and recorder 1 channel B to recorder 1 channel A in a double feedback loop. Recorder 2 channel A was routed to recorder 1 channel A, and recorder 2 channel B was routed to recorder 1 channel B. The tape repetition contributed timbre and dynamic changes to steady state sounds. The combination tones produced by the eleven generators and the bias frequencies of the tape recorders were pulse modulated by the sub-audio generator."
9. Buchla Modular Synthesizer http://www.synthmuseum.com/buchla/buc10001.html
10. Sony TC 777 http://www.sony.net/SonyInfo/CorporateInfo/History/sonyhistory-a.html
11. Lexicon PCM42 delay processors http://emusician.com/dsp/emusic_max_factor/
12. The Expanded Instrument System: Introduction and Brief History, P. Oliveros, included in The Future of Creative Technologies, Journal of the Institute of Creative Technologies, De Montfort University, Leicester, UK June 2008.
13. Gary Hall http://emusician.com/dsp/emusic_max_factor/
14. Cornelia Colyer http://www.jstor.org/pss/1575535
15. From Telephone to High Speed Internet: A Brief History of My Tele-Musical Performances, presented at the International Society of Improvised Music second annual conference during the Telematic Panel December 14, 2007 at Northwestern University, Evanston IL. Forthcoming publication in Leonardo Music Journal.
16. The Telematic Circle was formed in 2007 as an association of institutions and individuals interested in telematic music performance. See http://www.deeplistening.org/site/telematic for further information.
17. CCRMA, The Stanford University Center for Computer Research in Music and Acoustics is a multi-disciplinary facility where composers and researchers work together using computer-based technology both as an artistic medium and as a research tool. http://ccrma.stanford.edu/
18. Jonas Braasch, director of the Communication Acoustics Research Laboratory (CARL) at Rensselaer Polytechnic Institute http://symphony.arch.rpi.edu/acoustics/bio.html
19. JackTrip is a Linux-based system used for multi-machine jam sessions over Internet2. It supports any number of channels (as many as the computer/network can handle) of bidirectional, high quality, uncompressed audio signal streaming: http://ccrma.stanford.edu/groups/soundwire/software/jacktrip/
20. Jeremy Cooperstock’s UltraVideo Conferencing http://ultravideo.mcgill.edu/
21. Jonas Braasch’s ViMic (Virtual Microphone) http://symphony.arch.rpi.edu/~braasj/JonasBraaschResearch.html
22. SoundWire ensemble http://ccrma.stanford.edu/groups/soundwire/