Spatialization in Electroacoustic Music

In electronic and electroacoustic music, spatialization is the distribution of sound in space, using loudspeakers or headphones. Although spatial aspects have also been considered in acoustic music through the spatial arrangement of musicians and instruments, electronic means allow a more dynamic use of space as a compositional parameter. This can happen during studio productions or in live performance. These possibilities have been explored since the early days of electroacoustic music and have evolved with the technology over the decades.


Pupitre D'Espace

Pierre Schaeffer developed the Pupitre D'Espace, a device for spatializing tape music in real time, in 1951. This special case of spatialization for acousmatic music is also referred to as diffusion. The device uses three induction coils to detect the position of a hand-held transponder in space.

/images/nsmi/pupitre-despace.jpg

Image from Nicolau Centola's PhD thesis.


Poème électronique

/images/nsmi/philips-pavilion.jpg

The Philips Pavilion (from medienkunstnetz)


The Philips Pavilion, built for the 1958 World's Fair in Brussels, featured a multi-channel sound system with an unconventional loudspeaker arrangement. It was used for Edgard Varèse's Poème électronique to move sound along paths defined by the loudspeaker positions.

/images/nsmi/poeme-electronique.jpg

Source movements in 'Poème électronique'


Rotationstisch

Starting in 1959, Stockhausen used a rotating table (Rotationstisch) for creating sound movements in quadraphonic tape compositions. A loudspeaker in the center rotates with the table and is captured by four fixed microphones surrounding it. The loudspeaker's directivity, together with the inherent Doppler effect, creates the image of a rotating sound source when the recording is played back on a quadraphonic setup.

/images/nsmi/rotationstisch.jpg
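The order of magnitude of the Doppler shift produced by such a rotating loudspeaker can be estimated in a few lines of Python. This is an illustrative sketch with assumed values (radius and rotation speed are examples, not the actual dimensions of Stockhausen's table):

```python
import math

def doppler_factor(radial_velocity, c=343.0):
    """Frequency scaling for a source moving with the given
    radial velocity (positive = towards the microphone)."""
    return c / (c - radial_velocity)

# assumed example: loudspeaker at radius 0.2 m, 2 revolutions per second
omega = 2 * math.pi * 2.0       # angular velocity in rad/s
v_max = omega * 0.2             # peak radial velocity towards a fixed microphone

# maximum upward pitch shift heard at the microphone
shift = doppler_factor(v_max)
```

With these values the peak shift is below one percent, yet it is clearly audible as a periodic pitch modulation when the table spins.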

Loudspeaker Orchestras

A loudspeaker orchestra uses loudspeakers themselves as musical instruments, rather than as mere means of reproducing sound. A typical setup combines models with very different, distinct characteristics, placed at individual positions rather than in a regular geometric layout. During a performance, pre-composed music, often in stereo, is sent to the different speakers or speaker groups. This process, referred to as diffusion, is a standard technique in acousmatic music.


The Acousmonium

The Acousmonium, launched by the French GRM (Groupe de Recherches Musicales) in 1974, is the original and most prominent loudspeaker orchestra.

/images/nsmi/acousmonium.jpg

François Bayle with the Acousmonium, from "Our Research for Lost Route to Root" (Jérôme Barthélemy, 2008)


BEAST

The BEAST (Birmingham ElectroAcoustic Sound Theatre) is a younger system, following the principles of the Acousmonium. It was brought to Berlin for the 2010 edition of the festival Inventionen, when Jonty Harrison was guest professor at TU Berlin.

/images/nsmi/BEAST_vornPult-14.jpg

The BEAST at Elisabeth-Kirche, Berlin


HaLaPhon

Principle

The HaLaPhon, developed by Hans Peter Haller at the Südwestfunk (SWF) in the 1970s and 80s, is a device for spatialized performances of mixed music and live electronics. The first version was a fully analog design, whereas the following ones combined analog signal processing with digital control.

The HaLaPhon principle is based on digitally controlled amplifiers (DCAs), placed between a source signal and the loudspeakers. It thus follows a channel-based panning paradigm. Source signals can come from tape or from microphones:

/images/nsmi/halaphon/halaphon_GATE.png

DCA (called 'Gate') in the HaLaPhon.


Each DCA can be used with an individual characteristic curve for different applications:

/images/nsmi/halaphon/halaphon_kennlinien.png

DCA: Different characteristic curves.


Quadraphonic Rotation

A simple example shows how the DCAs can be used to realize a rotation in a quadraphonic setup:

/images/nsmi/halaphon/halaphon_circle.png

Circular movement with four speakers.


/images/nsmi/halaphon/halaphon_4kanal.png

Quadraphonic setup with four DCAs.
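The gain control behind such a rotation can be sketched in a few lines of Python. This is an illustrative reconstruction, not Haller's actual characteristic curves: each speaker receives a cosine-shaped, equal-power gain depending on the angular distance between the virtual source and the speaker position.

```python
import math

def rotation_gains(angle, num_speakers=4):
    """Equal-power gains for a virtual source at the given angle,
    for speakers placed at equal angles around the listener."""
    gains = []
    sector = 2 * math.pi / num_speakers
    for i in range(num_speakers):
        speaker_angle = sector * i
        # angular distance between source and speaker, wrapped to [-pi, pi]
        d = (angle - speaker_angle + math.pi) % (2 * math.pi) - math.pi
        if abs(d) < sector:
            # cosine-shaped crossfade between adjacent speakers
            gains.append(math.cos(d / sector * math.pi / 2))
        else:
            gains.append(0.0)
    return gains
```

Sweeping the angle from 0 to 2π moves the source once around the quadraphonic setup; at any angle the squared gains sum to one, keeping the perceived loudness constant.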


Envelopes

The digital process control of the HaLaPhon generates control signals, referred to by Haller as envelopes. Envelopes are generated by LFOs with the following waveforms:

/images/nsmi/halaphon/halaphon_huellkurven1.png

LFO waveforms for envelope generation.


The envelopes for the individual loudspeaker gains are synchronized in the control unit, resulting in movement patterns. These patterns can be stored on the device and triggered by the sound director or by signal analysis:

/images/nsmi/halaphon/halaphon_programm1.png

A movement program in the digital control unit.



References

  • Andreas Pysiewicz and Stefan Weinzierl. Instruments for spatial sound control in real time music performances. A review. In Musical Instruments in the 21st Century, pages 273–296. Springer, 2017.
  • Martha Brech and Henrik von Coler. Aspects of space in Luigi Nono's Prometeo and the use of the Halaphon. In Martha Brech and Ralph Paland, editors, Compositions for Audible Space, Music and Sound Culture, pages 193–204. transcript, 2015.
  • Martha Brech and Henrik von Coler. The Halaphon and its use in Luigi Nono's 'Prometeo' in Venice. In Proceedings of the 9th Conference on Interdisciplinary Musicology (CIM). 2014.
  • Hans Peter Haller. Das Experimentalstudio der Heinrich-Strobel-Stiftung des Südwestfunks Freiburg 1971–1989: die Erforschung der elektronischen Klangumformung und ihre Geschichte, Teil 1. Nomos, Baden-Baden, 1995.
  • Hans Peter Haller. Das Experimentalstudio der Heinrich-Strobel-Stiftung des Südwestfunks Freiburg 1971–1989: die Erforschung der elektronischen Klangumformung und ihre Geschichte, Teil 2. Nomos, Baden-Baden, 1995.
    Playing Samples in SuperCollider

    The Buffer class manages samples in SuperCollider. There are many ways to use samples based on these buffers. The following example loads a WAV file (found in the download area) and creates a looping node. While it is running, the playback speed can be changed:

    s.boot;
    
    // enter the absolute path to a sample
    ~sample_path = "/some/directory/sala_formanten.wav";
    
    ~buffer = Buffer.read(s, ~sample_path);
    
    (
    // LoopBuf is part of the sc3-plugins extension package
    ~sampler = {
    
          |rate = 0.1|
    
          var out = LoopBuf.ar(1, ~buffer.bufnum, BufRateScale.kr(~buffer.bufnum) * rate, 1, 0, 0, ~buffer.numFrames);
    
          Out.ar(0, out);
    
    }.play;
    )
    
    // set the playback rate manually (negative = reverse)
    ~sampler.set(\rate, -0.1);
    

    Exercise

    Combine the sample looper example with the control bus and mouse input example to create a synth for scratching sound files.

    Create Classes in SuperCollider

    At its core, SuperCollider works in a strictly object-oriented way. Although SynthDefs already allow working with multiple instances of a definition, actual classes can help in many ways. This includes the typical OOP paradigms, such as member variables and methods for quick access to properties and actions.

    While SynthDefs can be sent to a server at run time, classes are compiled when the interpreter boots or when the class library is recompiled. Some possible errors in class definitions are detected and reported by the compiler.

    This is just a brief overview, introducing the basic principles. Read the SC documentation on writing classes for a detailed explanation.


    Where to put SC Classes

    SuperCollider classes are defined in .sc files with a specific structure. For a class to be compiled when the interpreter boots, its file needs to be located in a directory scanned by SC. For this reason, an installation of SC creates a directory for user-defined content. Inside sclang, this directory can be shown with the following command:

    Platform.userExtensionDir
    

    On Linux systems, this is usually:

    /home/someusername/.local/share/SuperCollider/Extensions
    

    For more information, read the SC documentation on extensions.


    Structure of SC Classes

    The following explanations are based on the example in the repository. A class is defined inside curly braces, preceded by the class name:

    SimpleSynth
    {
      ...
    }
    

    Member Variables

    Member variables are declared in the standard way for local variables. They can be accessed anywhere inside the class.

    var dur;
    

    Constructor and Init

    The constructor calls the init() function in the following way to initialize values and perform other tasks on object creation:

    // constructor
    *new { | p |
            ^super.new.init(p)
    }
    
    // initialize method
    init { | p |
            dur    = 1;
    }
    

    Member Functions

    Member functions are defined as follows, using either the |...| or the arg ...; syntax for defining their arguments:

    play
          { | f |
        ...
      }
    

    Creating Help Files

    In SC, help files are integrated into the SC IDE for quick access. Help files for classes are also processed during compilation. They need to be placed in a directory relative to the .sc file and use the extension .schelp:

    HelpSource/Classes/SimpleSynth.schelp
    

    Read the SC documentation on help files for more information.

    Links and Course Material

    TU Website

    The official TU website with information on the schedule and assessment:

    https://www.ak.tu-berlin.de/menue/lehre/sommersemester_2021/network_systems_for_music_interaction/

    Download

    The download area features papers, audio files and other materials used for this class. Users need a password to access this area.

    Student Wiki

    This Wiki can be used for sharing knowledge:

    http://teaching-wiki.hvc.berlin

    SPRAWL Git Repository

    The repository contains shell scripts, SuperCollider code and configuration files for the server and the clients of the SPRAWL system:

    https://github.com/anwaldt/SPRAWL

    JackTrip Git Repository

    JackTrip is the open-source audio-over-IP software used in the SPRAWL system:

    https://github.com/jacktrip/jacktrip


    Back to NSMI Contents

    Using the Terminal for Doing Stuff

    People who work a lot with Mac or Linux systems may be used to doing things from the terminal or console. For novices, it usually appears more frightening than it actually is. Especially when working on remote servers, but also on embedded devices for audio processing, the terminal is the standard tool.


    Directories

    You can maneuver through the system's directories using the cd command. Changing to an absolute path /foo/bar can be done like this:

    $ cd /foo/bar
    

    New directories are created with the mkdir command. To create a new directory mydir in your current location, type:

    $ mkdir mydir
    

    The content of the current directory can be listed with the ls command. The following arguments improve the results:

    $ ls -lah
    
    drwxrwxr-x  2 username group 4,0K Mar 25 12:25 .
    drwxrwxr-x 16 username group 4,0K Mar 25 12:25 ..
    -rwxrwxr-x  1 username group 9,1M Feb  3 17:47 jacktrip
    -rwxrwxr-x  1 username group 334K Feb  3 17:47 sprawl-compositor
    

    Exercise

    Create a directory with your name in the student user home directory /home/student/students/YOURNAME.


    Create and Edit Files

    A file can be created by touching it:

    $ touch filename
    

    The default editor on most systems is nano. It is a minimal terminal-based tool and does not require X forwarding. To edit an existing file or create a new one, type:

    $ nano filename
    

    Terminal Only

    Inside nano, a lot of keystrokes are defined for editing, file handling and other tasks. See the nano cheat sheet for a full list: https://www.nano-editor.org/dist/latest/cheatsheet.html

    Exercise

    Create a text file in your personal directory and fill it with some text.


    GUI Based

    When working with X forwarding, simple text editors with a GUI can be used. On the SPRAWL server, this includes mousepad or the minimal Python IDE idle.


    Starting Applications

    System-wide binaries must be located in a directory listed in $PATH (see the final section on this page for details). They can be started by simply typing their name:

    $ foo
    

    A local binary named foo can be started with the following command:

    $ ./foo
    

    You can append an ampersand (&) to your command to run the process in the background. You can then continue to work in the terminal:

    $ ./foo &
    [1] 5459
    

    When you start a command this way, the shell prints the job ID of the background process in brackets, followed by the actual process ID (PID).

    You can bring the process back into the foreground with the fg command, followed by the job ID prefixed with a percent sign:

    $ fg %1
    

    Check for Running Applications

    At some point, users may want to know whether a process is running, or which processes have been started. The command top lets you monitor the system's processes, with additional information on CPU and memory usage, updated at a fixed interval:

    $ top
    

    htop is a slightly polished version with colored output:

    $ htop
    

    You can get a list of all running processes, including those without a terminal, by typing:

    $ ps aux
    

    Usually, these are way too many results. If you want to check whether an instance of a specific program is running, you can pipe the output of ps aux through grep to filter the results:

    $ ps aux | grep foo
    

    Shell Variables

    Sometimes it is convenient to store information in variables for later use. Common variables in Unix-like operating systems such as Linux, BSD or macOS are, for example, PATH and DISPLAY.

    Shell variables are usually uppercase. To get the content of a variable, it is prefixed with a dollar sign. The command echo is used to print the content:

    $ echo $PATH
    /home/username/.local/bin:/home/username/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin
    $ echo $DISPLAY
    :0
    

    A variable is defined with an equal sign. Quite often, the program that should use the variable opens another environment, for example a new shell. To access the variable in that sub-environment, it has to be exported first:

    $ NAME=username
    $ echo $NAME
    username
    $ bash
    $ echo $NAME
    
    $ exit
    exit
    $ export NAME
    $ bash
    $ echo $NAME
    username
    

    Distortion Synthesis

    In contrast to subtractive synthesis, where timbre is controlled by shaping the spectra of waveforms with many spectral components, distortion methods shape the sound by adding overtones, following different principles. Roughly in parallel with Bob Moog, Don Buchla invented his own system of analog sound synthesis in the 1960s, based on distortion, modulation and additive principles. This approach is also referred to as West Coast Synthesis.

    The Buchla 100 was released in 1965 and was used by Morton Subotnick for his 1967 experimental work Silver Apples of the Moon.
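The basic effect of distortion can be illustrated with a short Python sketch. This is a generic waveshaping example, not tied to any specific Buchla circuit: a sine wave is passed through a tanh nonlinearity, and a naive DFT (included only for illustration) shows that odd harmonics appear in the spectrum.

```python
import math

def waveshape(samples, drive=5.0):
    """Apply a tanh nonlinearity - higher drive values push the
    sine towards a square wave and add more overtones."""
    return [math.tanh(drive * x) for x in samples]

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin (naive, for illustration only)."""
    n_samples = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / n_samples) for n in range(n_samples))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / n_samples) for n in range(n_samples))
    return math.hypot(re, im) / n_samples

# one cycle of a sine wave at unit amplitude
N = 64
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]
shaped = waveshape(sine)
```

Since tanh is an odd function, only odd harmonics (3, 5, 7, ...) are added; the even bins of the shaped signal stay empty, which is characteristic of symmetric waveshapers.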

    Audio Programming in C++

    C++ is the standard for programming professional, efficient audio software. Most of the languages and environments introduced in the Computer Music Basics class are themselves programmed in C++. When adding low-level components, such as UGens in SuperCollider, objects in Pure Data or VST plugins for Digital Audio Workstations (DAWs), these are programmed in C++, based on the respective API. These APIs handle the communication with the hardware and offer convenient features for control and communication.

    JUCE

    JUCE is the most widely used framework for developing commercial audio software, such as VST plugins and standalone applications.

    JACK

    JACK offers a simple API for developing audio software on Linux, Mac and Windows systems.

    Getting Started with Web Audio

    The Web Audio API is a JavaScript-based API for sound synthesis and processing in web applications. It is compatible with most browsers and can thus be used on almost any device. This makes it a powerful tool in many areas. In this introduction, it serves as a means for data sonification with web-based data APIs and for interactive sound examples. Read the W3C Candidate Recommendation for an in-depth documentation.


    The Sine Example

    The following Web Audio example features a simple sine wave oscillator with frequency control and a mute button:

    [Interactive widget: sine oscillator with Play/Stop buttons and a Frequency slider]


    Code

    Building Web Audio projects involves three components:

    • HTML for control elements and website layout
    • CSS for website appearance
    • JavaScript for audio processing

    Since HTML is kept minimal, the code is compact but the GUI is very basic.

    sine_example/sine_example.html (Source)

    <!doctype html>
    <html>
    
    <head>
      <title>Sine Example</title>
    
      <!-- embedded CSS for slider appearance -------------------------------------->
    
      <style>
      /* The slider look */
      .minmalslider {
        -webkit-appearance: none;
        appearance: none;
        width: 100%;
        height: 25px;
        background: #d3d3d3;
        outline: none;
      }
      </style>
    </head>
    
    <!-- HTML control elements  --------------------------------------------------->
    
    <body>
      <blockquote style="border: 2px solid #122; padding: 10px; background-color: #ccc;">
        <p>Sine Example.</p>
        <p>
          <button onclick="play()">Play</button>
          <button onclick="stop()">Stop</button>
          <span>
            <input  class="minmalslider"  id="pan" type="range" min="10" max="1000" step="1" value="440" oninput="frequency(this.value);">
            Frequency
          </span>
        </p>
      </blockquote>
    </body>
    
    
    <!-- JavaScript for audio processing ------------------------------------------>
    
      <script>
    
        var audioContext = new (window.AudioContext || window.webkitAudioContext)()
        var oscillator = audioContext.createOscillator()
        var gainNode = audioContext.createGain()
    
        gainNode.gain.value = 0
    
        oscillator.connect(gainNode)
        gainNode.connect(audioContext.destination)
    
        oscillator.start(0)
    
        // callback functions for HTML elements
        function play()
        {
          // browsers may start the context in the 'suspended' state
          // until a user gesture resumes it
          audioContext.resume()
          gainNode.gain.value = 1
        }
        function stop()
        {
          gainNode.gain.value = 0
        }
        function frequency(y)
        {
          oscillator.frequency.value = y
        }
    
      </script>
    </html>
    

    Class Outline

    Sessions & Topics

    Session 1
        Theory:    History, Signals & Systems, Environments and Languages
        Practical: Getting started with PD, SuperCollider, WebAudio, Faust

    Session 2
        Theory:    Additive, Modulation, Distortion
        Practical: Implement: Additive, Modulation, Distortion

    Session 3
        Theory:    Subtractive, Processed Recording, Physical Modeling
        Practical: Implement: Subtractive, Processed Recording, Physical Modeling

    Session 4
        Theory:    Spatialization & Network Systems
        Practical: Working with a network performance system

    OSC: Open Sound Control

    Open Sound Control (OSC) is the standard for exchanging control data between audio applications, both in distributed systems and in local setups with multiple components. Almost any programming language and environment for computer music offers means for using OSC, usually built in.

    OSC is usually transmitted over the UDP/IP protocol in a client-server paradigm. A server needs to be started to listen for incoming messages sent from a client. For bidirectional communication, each participant needs to implement both a server and a client. Servers listen on a freely chosen port, whereas clients send their messages to the IP address and port of a server.
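This client-server pattern can be sketched with Python's standard socket module. This is a minimal sketch with arbitrary example values; the payload is plain text standing in for a binary-encoded OSC message, and the server receives only a single datagram instead of running a message loop:

```python
import socket
import threading

# server: open a UDP socket on a freely chosen port
# (port 0 lets the OS pick a free one - the value is arbitrary)
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

received = []

def serve_once():
    """Receive a single datagram, as a stand-in for a message loop."""
    data, addr = server.recvfrom(1024)
    received.append(data)

t = threading.Thread(target=serve_once)
t.start()

# client: send a datagram to the server's address and port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"/synthesizer/volume/ 0.5", ("127.0.0.1", port))

t.join()
client.close()
server.close()
```

In practice, both roles are handled by an OSC library of the respective environment, which adds the proper message encoding on top of this UDP transport.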


    OSC Messages

    A typical OSC message consists of a path and an arbitrary number of arguments. The following message sends a single floating point value, using the path /synthesizer/volume/:

    /synthesizer/volume/ 0.5
    

    The path can be any string with slash-separated sub-strings, like paths in an operating system. OSC receivers can sort incoming messages according to the path. Parameters can be integers, floats and strings. Unlike MIDI, OSC defines only the transport format, not a standard for musical parameters. Hence, the paths used by a certain software are completely arbitrary and can be defined by its developers.
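For illustration, such a message can be encoded by hand with Python's standard struct module, following the OSC 1.0 binary format: null-terminated strings padded to multiples of four bytes, a type tag string starting with a comma, and big-endian arguments. A real project would normally use an existing OSC library instead of this sketch:

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    b += b"\x00" * (-len(b) % 4)
    return b

def osc_message(path, *args):
    """Encode an OSC message with int, float and string arguments."""
    typetags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)  # big-endian 32-bit float
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)  # big-endian 32-bit int
        elif isinstance(a, str):
            typetags += "s"
            payload += osc_string(a)
    return osc_string(path) + osc_string(typetags) + payload

# the volume message from above as raw bytes
msg = osc_message("/synthesizer/volume/", 0.5)
```

The resulting datagram contains the padded path, the type tag string ",f" and the four-byte float, and could be sent over UDP as-is.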



    Contents © Henrik von Coler 2021 - Contact