Expanded Instrument System 

Expanded Instrument System Artistic Statement:

Expanded Instrument System (EIS) provides musicians and composers with a challenging improvisational environment for exploring interactions with technology. EIS acts as a "time machine": players provide the present moment of sounding, which the technology feeds back in the future, either as a replica or in modified form. The feedback becomes part of the present; thus the player is performing in the past, present and future simultaneously. Furthermore, sounds of the past, present and future are spatialized through a multi-channel array of speakers, so that space is expanded in a manner similar to time.

_____________________


The Expanded Instrument System (EIS): An Introduction and Brief History

Keynote address given at the Music, Technology and Innovation Research Centre Colloquium,
November 2007, De Montfort University, Leicester, UK. By Pauline Oliveros

The Expanded Instrument System (EIS) is an evolving electronic sound-processing environment. EIS is dedicated to providing improvising musicians individual performance control over a variety of parameters that can transform their acoustic input to the system during live performance. EIS has always been intended for acoustic instruments and voices, even though electronic and pre-recorded sound sources can also be used. Until recently, digital signal processing worked well for electronic sound and less well for acoustic sounds, which are generally far more complex than electronically generated sounds.

Performers each have their own setup that includes their own microphones, control devices, and a computer with sound card and audio interface. The computer provides the digital signal processing, which includes variable delays, ambiance and modulation, and translates and displays control information for this processing from MIDI controllers, foot pedals and switches. The musicians and their instruments are the sources of all the sounds, which they pick up with their microphones and subject to several kinds of pitch, time and spatial ambiance transformations and manipulations.

The Expanded Instrument System (EIS) has undergone continual development since 1965 - forty-two years - from tape delay with tape machines to computers. This is a long trajectory and history involving acoustic, analog and digital means. Software for the EIS designed and developed by Pauline Oliveros was programmed over the last twenty years by Panaiotis, David Gamper, Stephan Moore, Jonathan Marcus, Olivia Robinson, Jesse Stiles and Zevin Polzin.

The EIS began with awareness and use of the delay that could be heard between the record and playback heads on reel-to-reel tape machines, now emulated and elaborated in Max/MSP [1] software. The early development of EIS with reel-to-reel tape machines is described in my article Tape Delay Techniques for Electronic Music Composers, written in 1969 [2]. The article describes and illustrates configurations of multiple tape machines with the heads connected by stringing tape from the supply reel of one machine to the take-up reel of another.
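
As a rough illustration of the principle (a minimal Python sketch of my own, not part of the EIS software, which lives in Max/MSP), the head-to-head delay of a tape machine reduces to a delay line with feedback: the playback head returns the signal after a fixed interval, attenuated on each pass.

    import numpy as np

    def tape_delay(dry, delay_s, feedback, sr=44100, repeats=8):
        """Emulate one record head feeding one playback head: the input
        returns delay_s seconds later, attenuated by `feedback` per pass."""
        d = int(delay_s * sr)                      # head spacing expressed in samples
        out = np.zeros(len(dry) + repeats * d)
        for k in range(repeats + 1):               # k = 0 is the direct, live sound
            out[k * d : k * d + len(dry)] += (feedback ** k) * dry
        return out

    sr = 44100
    burst = np.sin(2 * np.pi * 440 * np.arange(sr // 4) / sr)   # quarter-second tone
    echoed = tape_delay(burst, delay_s=0.3, feedback=0.5, sr=sr)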

The premise of the EIS back in 1965 was to challenge myself as an improvising performer. I felt that I could handle more musical information than I was able to perform without the extension of electronic feedback. I began experimenting as I performed, with delayed sound fed back to the audience and to me from the outputs of tape machines through loudspeakers. It was important to me that all the sound I made be live rather than pre-recorded, because a much more nuanced performance could be realized if none of the sources were pre-recorded.

I noticed that the layering in time could change the timbre of the original acoustic sound input. I enjoyed the sensations of my acoustic sounds transforming before my ears as the sounds came back to me. I learned that timing was important in introducing new input and that I could disguise the entry points depending on timing and attack so that richly changing timbres could emerge from the delays.

Through the years I understood the Expanded Instrument System to mean "time machine" – what is expanded is temporal: past, present and future occur simultaneously, with transformations. What I play in the present comes back in the future while I am still playing, is transformed and becomes a part of the past. This situation keeps you busy listening.

This notion of a time machine is not unlike canonical forms such as the inventions and fugues of J.S. Bach and the repetitions of motives and sequences in the classical forms of Haydn, Mozart and Beethoven. Fascination with echo has always inspired composers, from the depths of the myth of Echo and Narcissus to the 20th century, when popular musicians and producers began realizing the expressive potential of echo and reverberation. Many of the songs and soundtracks wonderfully described in Echo and Reverb in Popular Music: Fabricating Space in Popular Music Recording 1900 to 1960 [3] by Peter Doyle influenced me directly. I was particularly impressed by songs like Riders in the Sky sung by Vaughn Monroe, Steel Guitar Rag by Bob Wills and the Texas Playboys, How High the Moon by Les Paul and Mary Ford, Juke by Little Walter and many other selections that I heard on the radio and jukebox as a teenager. It was the sound of the spaces differentiated by echo and reverberation within the songs that captured me.

Canons that are produced by the EIS can be disguised by the modulations that cause variations in the returns of sound input and also by the variety of spaces created by the multiple and varying delay times. These canons can be but are not necessarily pitch canons – they can be time and timbre canons. The EIS is both a time and space machine. The EIS imperative (and improvisation imperative) is to listen and respond: spatial relationships and progressions are as important as the traditional parameters of music (melody, harmony, rhythm, timbre).

Timbre particularly is affected by space. I discuss this effect in my article Acoustic and Virtual Space as a Dynamic Parameter of Music (1995) [4]:

“Virtual acoustics - a perceptual phenomenon - is created with electronic processing within an actual physical space. Simulated walls or reflective surfaces may cause a listener to perceive differences in room size and the tone quality of a musical instrument.”

These virtual acoustics are also part of the expansion of EIS, and they give the improvising performer new possibilities.

“With the advent of signal processors and sophisticated sound systems, it is possible to tamper with the container of music in imaginative ways. The walls of a virtual acoustic space created electronically can expand or contract, assume new angles or virtual surfaces. The resulting resonances and reflections changing continuously during the course of a performance create spatial progressions much as one would create chord progressions or timbre transformations (changing the tone quality of an instrument while performing a single pitch). The audience and performers can experience sensations of moving in space as well as sounds moving through space. They can also experience the relationship of moving in space in relation to sounds moving in the same space and while the space itself is changing. Such audio illusions or virtual acoustics can function as a new parameter of music much as timbre became new in Klangfarbenmelodie (tone color melody) - where the notes of a melody are distributed to different instruments successively, as in the music of Arnold Schoenberg, who coined the term, and Anton Webern. (See Five Orchestral Pieces opus 16 (1909, revised 1949) by Schoenberg and Five Pieces for Orchestra opus 10 (1913) by Webern.)” [5]

The EIS is fun! Acoustic input from an instrument or voice can now be processed with up to 40 variable delays, modulated with fluctuating waveforms, layered and spatialized. Sounds may be diffused in four, six or eight channels. More outputs could be programmed for sixteen, thirty-two, sixty-four and beyond. The current version, programmed in MAX/MSP, undergoes continual revision due to new performer demands and is a long way from the limited tape delay system of the beginnings of EIS. Time delays range from milliseconds to one minute or more, depending on CPU power.

The EIS, though, is never finished; there is always more to explore. Even though the idea derived from echo is very simple, the applications of digital signal processing and routing result in endless variations and possibilities. The current revision of the patch also includes some intelligent controls for the innumerable parameters involved. These controllers can learn from experience. The controls can also be set to run from a chaos generator or from a random event generator. The result is like having several partners turning knobs, faders and tripping switches - more than a single performer could manage alone. There is a need for a smart meta-controller that understands from experience how to direct all of the sub-controllers and switches with intelligent guesses.
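
To suggest how such algorithmic controllers might behave (a hypothetical sketch; the EIS controllers themselves are built in Max/MSP, and the specific map and names here are my own assumptions), a chaos generator can be a simple iterated map, and a random event generator an occasional jump to a new held value:

    import random

    def logistic_chaos(x=0.4, r=3.9):
        """Chaos generator: iterating the logistic map yields an erratic
        but deterministic stream of values between 0 and 1."""
        while True:
            x = r * x * (1.0 - x)
            yield x

    def random_events(rate=0.2):
        """Random event generator: hold a value, occasionally jumping to
        a new random one, like a partner tripping a switch now and then."""
        value = random.random()
        while True:
            if random.random() < rate:
                value = random.random()
            yield value

    # Drive a hypothetical delay-feedback parameter from the chaos stream.
    chaos = logistic_chaos()
    for step in range(5):
        feedback = 0.9 * next(chaos)       # scale into a safe feedback range
        print(f"step {step}: feedback = {feedback:.3f}")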

The EIS consists of modules that can be configured in the interface window from one to all in any way that the performer desires. Modules can be switched on with the window launcher and dragged to any position. The current list of modules includes the following:

Window launcher
Performance clock
Master volume control
Matrix Mixer
Matrix
Control Mapping Interface
Performance parameters
Looper (1-4)
Delays (1-2) include modulation functions and 20 delays each.
Reverb (1-2)
VBAP (1-2) includes geometric patterns

Figure 1 shows a selected configuration of EIS modules, including looper 1, delay 1, matrix mixer, Lexicon 1, main volume, performance clock, reverb, VBAP and the window launcher. Other modules could be selected, moved around in the window or removed as desired.

[Figure 1]

Figure 2. - EIS Window Launcher: Each module of EIS may be opened separately or together. Module configurations may be saved in the MAX patcher and reopened later.

[Figure 2]

Figure 3. - Matrix Mixer: Any input can be connected to any output. Matrix configurations may be stored and recalled as presets. Presets can be recalled in performance or removed. The matrix window can be cleared at any time.

[Figure 3]
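
Conceptually, a matrix mixer of this kind is just a gain matrix: every output is a weighted sum of every input, a preset is a stored matrix, and clearing the window zeroes it. A minimal sketch (my own illustration, not the EIS patch):

    import numpy as np

    def matrix_mix(inputs, gains):
        """inputs: (n_inputs, n_samples) block of audio;
        gains: (n_outputs, n_inputs). A nonzero gains[o, i] 'patches'
        input i to output o, like a cell clicked in the matrix window."""
        return gains @ inputs                 # each output = weighted sum of inputs

    inputs = np.random.randn(2, 64)           # two analog inputs, one audio block
    preset = np.array([[1.0, 0.0],            # input 1 -> output 1 (say, looper 1)
                       [0.0, 1.0]])           # input 2 -> output 2 (say, looper 2)
    outputs = matrix_mix(inputs, preset)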

Figure 4. - EIS Matrix: A configuration of the open matrix shows analog inputs 1 and 2 going to loopers 1 and 2, loopers 1 and 2 going to Lexicons 1 and 2, Lexicons 1 and 2 going to delays 1 and 2, delays 1 and 2 going to reverbs 1 and 2, and reverbs 1 and 2 going to VBAPs 1 and 2.

[Figure 4]

Figure 5. - Control Mapping Interface: MIDI controllers may be mapped to any modules. Upon launching, EIS detects any MIDI controllers connected to the audio interface as well as users connected through the Ethernet connection. MIDI mappings may also be passed to another performer on a different system through the IP Ethernet connection, so a remote performer on the local network can vary the amount of control over another system's modules, allowing performers to interact with each other's processing.

[Figure 5]
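
The mapping itself amounts to scaling a controller value into a parameter's range. A toy sketch of that idea (the function name and ranges are my own, not taken from EIS):

    def map_cc(cc_value, lo, hi):
        """Scale a 7-bit MIDI controller value (0-127) into a parameter range."""
        return lo + (cc_value / 127.0) * (hi - lo)

    # e.g. an expression pedal on some CC sweeping reverb room size from 0 to 1:
    room_size = map_cc(64, 0.0, 1.0)          # mid-pedal -> about 0.5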

Figure 6. - Lexicon PSP42 VST: Simulation of the original Lexicon PCM42 [6] digital delay processor used from 1983 in earlier versions of EIS. Selected controllable functions used are volume, feedback, delay time (0-10”), manual VCO, depth, waveform and rate. The manual VCO and any of the PSP42 functions can be mapped to a MIDI foot controller or any MIDI controller.

[Figure 6]

Figure 7. - Delays (1 of 2): There are up to 20 variable time delays on each module that can be modulated with a variety of waveforms. The list includes sweep, sine, triangle, etc., as can be seen under modulator type on the module. Waveform, at the bottom of the list, is a special modulator that allows the performer to draw waveforms. The depth of modulation can be varied. Modulation type can be randomized or switched off entirely. The degree of shift in randomization can be varied. All controls can be varied manually, with a random event generator, chaos generator or through learned behavior. Although MIDI mapping to external controllers could be an option, it is obvious that no performer could manage all the controllers needed during performance. Thus algorithmic control is the best option for these delay modules.

[Figure 7]
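
What one of these modulated delays does can be sketched as a delay line whose length is swept by a low-frequency waveform; the moving read position re-pitches the returns, which is where the shifting timbres come from. A sine-modulated example (my own sketch, assuming linear interpolation; the Max/MSP internals may differ):

    import numpy as np

    def modulated_delay(x, base_s=0.25, depth_s=0.01, rate_hz=0.5, sr=44100):
        """Read the input through a delay whose time is swept by a sine
        LFO; the changing delay Doppler-shifts the delayed sound."""
        n = np.arange(len(x))
        delay = (base_s + depth_s * np.sin(2 * np.pi * rate_hz * n / sr)) * sr
        read = np.clip(n - delay, 0, len(x) - 1)   # fractional read position
        i = read.astype(int)
        j = np.minimum(i + 1, len(x) - 1)
        frac = read - i
        wet = (1 - frac) * x[i] + frac * x[j]      # linear interpolation
        wet[n < delay] = 0.0                       # silence before the input began
        return wet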

Figure 8. - Looper (1 of 4): Analog input can be recorded, or input can be taken from any other module, including other loopers. The speed control allows the input to increase in pitch with + values and play backward with – values. Both speed and volume can be varied manually, or algorithmically with a random event generator, chaos generator or learned behavior.

[Figure 8]
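
The speed behavior can be sketched as resampled playback of a recorded buffer: a speed of 2.0 plays an octave up, and a negative speed reads the loop backwards (an illustrative sketch under those assumptions, not the EIS looper code):

    import numpy as np

    def play_loop(buffer, speed, out_len):
        """Read a recorded loop at a variable rate; speeds above 1.0
        raise pitch, negative speeds play the loop backwards."""
        pos = (np.arange(out_len) * speed) % len(buffer)    # wraps either direction
        i = np.floor(pos).astype(int)
        j = (i + 1) % len(buffer)
        frac = pos - i
        return (1 - frac) * buffer[i] + frac * buffer[j]    # interpolated read

    loop = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)   # one recorded second
    reversed_low = play_loop(loop, speed=-0.5, out_len=88200)   # backwards, octave down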

Figure 9. - Reverb: Any module or analog input may be mapped to either reverb 1 or 2, or to both in parallel or series. All four Monoverb [7] variables - room size, damping, wet level and dry level - may be controlled manually or with a random event generator (REG), chaos generator or learned behavior. Monoverb works well within limited CPU power. With sufficient CPU power, Altiverb [8] may be used as another option, offering richer reverberation choices based on impulse responses.

[Figure 9]
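
The Schroeder/Moorer model behind Monoverb can be sketched as parallel feedback comb filters (the body of the room) feeding series allpass filters (the diffusion). A compact mono version (my own simplification; it omits Moorer's damping filters):

    import numpy as np

    def comb(x, delay, g):
        """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
        y = np.copy(x)
        for n in range(delay, len(y)):
            y[n] += g * y[n - delay]
        return y

    def allpass(x, delay, g):
        """Schroeder allpass: smears echoes without coloring the spectrum."""
        y = np.zeros_like(x)
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -g * x[n] + xd + g * yd
        return y

    def mono_reverb(x, room=0.84, wet=0.3):
        """Parallel combs into series allpasses; `room` sets decay length."""
        combed = sum(comb(x, d, room) for d in (1557, 1617, 1491, 1422))
        diffused = allpass(allpass(combed / 4.0, 225, 0.5), 556, 0.5)
        return (1.0 - wet) * x + wet * diffused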

Figure 10. - VBAP: Vector Based Amplitude Panning uses objects developed by Ville Pulkki [9]. Configuration of speakers is accomplished on the left side of the VBAP module. The functions available are: number of speakers, select speakers, speaker angle, store, presets and remove. The large circle containing smaller blue and pink moving circles indicates the positions of the speakers and the paths of the two sound sources in the speaker field.

On the right side of the VBAP Spatializer module, Shapes provides patterns of movement through the sound field. The available shapes are listed in the pull-down menu and include ten geometric patterns plus a recordable pattern that can be drawn by the performer, stored and recalled.

Five variable functions can affect the path of the sound sources and their amplitude: radius, speed, modulation (affecting radius and speed), spread, and modulation (affecting the size of the sound source in the speaker field). The functions can be controlled manually or by the REG, chaos or learn controllers.

The default VBAP configuration uses eight speakers evenly spaced in a circle. Speakers can be arranged at any positions around the circle; the minimum is four speakers.

[Figure 10]
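
Underneath Pulkki's objects, two-dimensional VBAP expresses the source direction as a positive combination of the two nearest speaker directions and normalizes the resulting pair of gains. A sketch of that panning law (my own, not the vbap external itself):

    import numpy as np

    def vbap_2d_gains(source_deg, speaker_degs):
        """Pairwise 2-D amplitude panning after Pulkki: solve for gains
        that place the source between the two speakers that bracket it."""
        unit = lambda deg: np.array([np.cos(np.radians(deg)), np.sin(np.radians(deg))])
        p = unit(source_deg)
        gains = np.zeros(len(speaker_degs))
        order = np.argsort(speaker_degs)
        for k in range(len(order)):                 # try each adjacent speaker pair
            a, b = order[k], order[(k + 1) % len(order)]
            L = np.column_stack([unit(speaker_degs[a]), unit(speaker_degs[b])])
            g = np.linalg.solve(L, p)
            if np.all(g >= -1e-9):                  # both nonnegative: source lies between
                gains[[a, b]] = g / np.linalg.norm(g)   # constant-power normalization
                return gains
        return gains

    ring = [k * 45.0 for k in range(8)]             # the default eight-speaker circle
    g = vbap_2d_gains(30.0, ring)                   # a source 30 degrees around the ring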

EIS runs on OS X 10.2 or higher with a 1.25 GHz or faster processor and 512 MB RAM or more.

EIS is available as a stand-alone application or as a patch for MAX/MSP. The patch version of EIS requires MAX/MSP 4.5.3. For more information see http://www.deeplistening.org/site/EIS. EIS is a project of the Deep Listening Institute, Ltd. [10]


Notes:

1. MAX/MSP is a graphical environment for music, audio, and multimedia. See http://www.cycling74.com/products/maxmsp for details on the Cycling74 web site.

2. Oliveros P., Tape Delay Techniques for Electronic Music Composers in Software for People: Collected Writings 1963-1980, Smith Publications and Printed Editions, 1983.

3. Doyle, Peter, Echo and Reverb in Popular Music: Fabricating Space in Popular Music Recording 1900 to 1960, Wesleyan University Press, 2005.

4. Oliveros, P., Virtual and Acoustic Space as a Dynamic Parameter of Music in The Roots of the Moment: Collected Writings 1980-1996, Drogue Press, 1998.

5. Schoenberg, Five Orchestral Pieces, opus 16 (1909, revised 1949); Webern, Five Pieces for Orchestra, opus 10 (1913).

6. Lexicon PCM42 digital delay processor. Gary Hall designed and built the PCM42 for Lexicon. The performance parameters and excellent sound are what attracted me to the PCM42. Here is an article by Gary Hall on this device: http://emusician.com/dsp/emusic_max_factor/
The PSP42 VST plug-in version of the PCM42 is available here: http://www.pspaudioware.com/plugins/psp42.html

7. Monoverb is an external object - a mono implementation of the Schroeder/Moorer reverb model (a mono version of freeverb~). See http://maxobjects.com/?v=authors&id_auteur=39
See also The Journal of the Acoustical Society of America, May 2006, Volume 119, Issue 5, p. 3368:
Moving Spaces (A), Pauline Oliveros. Moving Spaces (2006) is a composition for a 5.1 surround sound system. Sustained sounds morph in quality as space changes, and space changes as sounds move. The piece uses vector based amplitude panning (VBAP) and Monoverb, a mono implementation of the Schroeder/Moorer reverb model (mono version of freeverb~), to achieve these changes. Sounds are improvised and processed by the Expanded Instrument System (EIS), a MAX program designed by the composer consisting of multiple delays and processing algorithms.
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JAS...

8. Altiverb see http://www.audioease.com/Pages/Altiverb/AltiverbMain.html for complete details.

9. For in-depth information on Ville Pulkki, VBAP and his articles on VBAP, see http://www.acoustics.hut.fi/research/cat/vbap/

10. Deep Listening Institute, Ltd. see http://www.deeplistening.org for mission and detailed information about the organization.
