Background

The EOC

The Electronic Orchestra Charlottenburg (EOC) was founded at the TU Studio in 2017 as a place for developing and performing with custom musical instruments on large loudspeaker setups.

EOC Website: https://eo-charlottenburg.de/

Initially, the EOC worked in a traditional live setup with a sound director. Several requests arose during the first years:

  • enable control of the mixing and rendering system by the musicians
    • control spatialization
  • flexible spatial arrangement of musicians
    • break up rigid stage setup
  • distribution of data
    • scores
    • playing instructions
    • visualization of system states

The SPRAWL System

During the Winter Semester 2019-20, Chris Chafe was invited as a guest professor at the Audio Communication Group. In combined classes, the SPRAWL network system was designed and implemented to address the problems introduced above in local networks:

https://hvc.berlin/projects/sprawl_system/

Quarantine Sessions

The quarantine sessions are an ongoing concert series between CCRMA at Stanford, the TU Studio in Berlin, the Orpheus Institute in Ghent, Belgium, and various guests.

These sessions use the same software components as the SPRAWL System. Audio is transmitted via JackTrip and SuperCollider is used for signal processing.



SuperCollider for the Remote Server

SuperCollider is by default built with Qt and X support for GUI elements and the ScIDE. This can be a problem when running it on a remote server without a persistent SSH connection or when starting it as a system service. However, a version with full GUI support remains a useful tool for maintenance. One solution is to compile and install both versions and make them selectable via symbolic links:

  1. build and standard-install a full version of SuperCollider
  2. build a headless version of SuperCollider (without system install)
  3. replace the following binaries in /usr/bin with symbolic links to the headless version
    • scsynth
    • sclang
    • supernova
  4. create scripts for changing the symlink targets

This allows you to point the symlinks to the GUI version for development and testing, and to the headless version otherwise.
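
The following script is a sketch of step 4, switching the symlinks to the headless binaries. The target path /opt/sc-headless/bin is only an assumed example for the location of the headless build and must be adapted to the actual install locations:

#!/bin/bash
# sc-use-headless.sh: point the SuperCollider binaries to the headless build
# NOTE: /opt/sc-headless/bin is an assumed example path -- adapt it

HEADLESS=/opt/sc-headless/bin

for BIN in scsynth sclang supernova
do
    sudo ln -sf "$HEADLESS/$BIN" "/usr/bin/$BIN"
done

A second script pointing the links back to the standard install (e.g. /usr/local/bin) switches to the GUI version in the same way.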


Compiling a Headless SC

The SC Linux build instructions are very detailed: https://github.com/supercollider/supercollider/blob/develop/README_LINUX.md Compiling it without the graphical components is straightforward: simply add the CMake flags -DNO_X11=ON and -DSC_QT=OFF to build a headless version of SuperCollider.
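
The following sketch shows what such a build could look like, assuming all dependencies from the linked instructions are installed and the repository has already been cloned:

cd supercollider
mkdir build-headless && cd build-headless
cmake -DCMAKE_BUILD_TYPE=Release -DNO_X11=ON -DSC_QT=OFF ..
make -j4
# no 'make install' here -- the headless binaries remain in the build directory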

Using JackTrip in the HUB Mode

About JackTrip

In this class we will use JackTrip for audio-over-network connections, although there have also been successful tests with the Zita-njbridge. JackTrip can be used for peer-to-peer connections and for server-client setups. For the latter, JackTrip was extended with the so-called HUB mode for the SPRAWL System and the EOC in 2019-20.

---

Basics

For connecting to a server or hosting your own instance, the machine needs to be connected to a router directly via Ethernet. WiFi will not result in a robust connection and leads to significant dropouts. JackTrip needs the following ports for communication. If a machine is behind a firewall, these ports need to be added as exceptions (see the example after the table):

JackTrip Ports

Port          Protocol   Purpose
4464          TCP/UDP    audio packets
61002-62000   UDP        establish connection (server only)
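
If the firewall is managed with ufw, for example (an assumption; other firewalls need equivalent rules), the exceptions could be added as follows:

# allow JackTrip audio packets on TCP and UDP
sudo ufw allow 4464/tcp
sudo ufw allow 4464/udp

# server only: allow the connection ports
sudo ufw allow 61002:62000/udp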

The Nils Branch

Due to the increasing interest caused by the pandemic and the long list of feature requests, the JackTrip project has been growing rapidly since early 2020, and the repository has many branches. In this class we are using the nils branch, which implements some unique features we need for the flexible routing system. Please check the instructions for compiling and installing a specific branch: Compiling JackTrip
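
Getting the branch itself could look like the following sketch (the URL is the official JackTrip repository; the build itself follows the linked instructions):

git clone https://github.com/jacktrip/jacktrip.git
cd jacktrip
# switch to the branch used in this class
git checkout nils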


Starting JackTrip

JACK Parameters

Before starting JackTrip on the server or the clients, a JACK server needs to be booted on the system. Read the chapter Using JACK Audio from the Computer Music Basics class for getting started with JACK. A purely remote server, as used in this class, does not have or need an audio interface and can thus be booted with the dummy driver:

$ jackd -d dummy [additional parameters]

At this point, the version of JackTrip used with the SPRAWL system requires all participants to run their JACK server at the same sample rate and buffer size. Recent changes in the JackTrip dev branch allow mixing different buffer sizes, but this has not been tested with this setup. The overall system's buffer size is defined by the weakest link, i.e. the client with the worst connection. Although tests between two sites have worked with buffer sizes down to $16$ samples, a buffer size of $128$ or $256$ samples usually works for a group. Experience has shown that about a tenth of all participants have an internet connection insufficient for participating without significant dropouts.
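
For example, a dummy JACK server for a session agreed on at 48 kHz and 256 samples could be started as follows (the values are only illustrative and need to match the group's agreement):

$ jackd -d dummy -r 48000 -p 256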


JackTrip Parameters

As with most command line programs, JackTrip gives you a list of all available parameters with the help flag:

$ jacktrip -h

A single instance is launched on the SPRAWL Server with the following arguments:

$ jacktrip -S -p 5 -D --udprt

The following arguments are needed for starting a JackTrip client instance and connecting to the SPRAWL server (the server.address can be found in the private data area):

$ jacktrip -C server.address -n 2 -K AP_

Using Shell Scripts

Shell scripts can be helpful for organizing sequences of terminal commands and executing them in a specific order with a single call. Shell scripts usually have the extension .sh and should start with a so-called shebang (#!...), telling the interpreter which binary to use. After that, individual commands can be added as separate lines, just as they would be typed in the terminal. The following script test.sh starts the JACK server in the background, waits for 3 seconds and then launches a simple client playing a sine tone.

#!/bin/bash

# start the JACK server in the background
# (use -d dummy on a machine without an audio interface)
jackd -d alsa &

# give the JACK server time to boot
sleep 3

# start a simple client playing a sine tone
jack_simple_client

The script can be executed from its source location as follows:

$ bash test.sh

Shell scripts can be made executable with the following command:

$ chmod +x test.sh

Afterwards, they can be started like binaries, provided they include the correct shebang:

$ ./test.sh
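
The same principle can be used to start a complete client session with a single call. The following sketch combines the JACK server and the JackTrip client from above; the JACK parameters are examples and server.address is the placeholder introduced earlier:

#!/bin/bash

# boot JACK with the sample rate and buffer size agreed on for the session
jackd -d alsa -r 48000 -p 256 &
sleep 3

# connect to the SPRAWL server
jacktrip -C server.address -n 2 -K AP_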

Exercise

Create an executable shell script in your personal directory on the server. Use the echo command in that script to print a simple message and run the file via SSH.

Faust

Faust is a functional audio programming language, developed at GRAME, Lyon. It is a community-driven, free and open-source project. Faust is specifically suited for quickly designing musical synthesis and processing software and compiling it for a large variety of targets. The fastest way of getting started with Faust is the Faust online IDE, which allows programming and testing code in the browser, without any installation. The online materials for the class Sound Synthesis - Building Instruments with Faust introduce the basics of the Faust language and give examples for different synthesis techniques.

Faust and Web Audio

Among its many targets, Faust can also be used to create Web Audio ScriptProcessor nodes (Letz, 2015).

References

  • Stephane Letz, Sarah Denoux, Yann Orlarey, and Dominique Fober. Faust Audio DSP Language in the Web. In Proceedings of the Linux Audio Conference. 2015.

    Realtime Weather Sonification

    OpenWeatherMap

    This first, simple Web Audio sonification application makes use of the OpenWeatherMap API for real-time, browser-based sonification of weather data. For fetching data, a free subscription is necessary: https://home.openweathermap.org

    Once subscribed, the API key can be used to get current weather information in the browser:

    https://api.openweathermap.org/data/2.5/weather?q=Potsdam&appid=eab7c410674e15bfdd841f66941a92c2
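
    The request can also be tested on the command line, for example with curl (using the same city and API key as in the URL above):

    $ curl "https://api.openweathermap.org/data/2.5/weather?q=Potsdam&appid=eab7c410674e15bfdd841f66941a92c2"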

    JSON Data Structure

    The resulting output in JSON looks like this:

    {
      "coord": {
        "lon": 13.41,
        "lat": 52.52
      },
      "weather": [
        {
          "id": 804,
          "main": "Clouds",
          "description": "overcast clouds",
          "icon": "04d"
        }
      ],
      "base": "stations",
      "main": {
        "temp": 9.74,
        "feels_like": 6.57,
        "temp_min": 9,
        "temp_max": 10.56,
        "pressure": 1034,
        "humidity": 93
      },
      "visibility": 8000,
      "wind": {
        "speed": 4.1,
        "deg": 270
      },
      "clouds": {
        "all": 90
      },
      "dt": 1604655648,
      "sys": {
        "type": 1,
        "id": 1275,
        "country": "DE",
        "sunrise": 1604643143,
        "sunset": 1604676458
      },
      "timezone": 3600,
      "id": 2950159,
      "name": "Berlin",
      "cod": 200
    }
    

    All entries of this data structure can be used as synthesis parameters in a sonification system with Web Audio.

    Temperatures to Frequencies

    Mapping

    In this example we are using a simple frequency modulation formula for turning temperature and humidity into more or less pleasing (annoying) sounds. The frequency of a first oscillator is derived from the temperature \(T\) in degrees Celsius:

    \(\displaystyle f_1 = \frac{10^5}{(T / {}^{\circ}\mathrm{C})^2} \, \mathrm{Hz}\)

    For \(T = 10\,{}^{\circ}\mathrm{C}\), for example, this yields \(f_1 = 10^5/100 = 1000\,\mathrm{Hz}\).

    The modulator frequency is controlled by the humidity \(H\):

    \(y = \sin(2 \pi (f_1 + 100 \cdot \sin(2 \pi H t))\, t)\)


    The Result

    The resulting app fetches the weather data of a chosen city, extracts temperature and humidity and sets the parameters of the audio processes.



    Code

    weather/weather.html (Source)

    <!doctype html>
    <html>
    
    <head>
    <title>Where would you rather be?</title>
    </head>
    <body>
    <blockquote style="border: 2px solid #122; padding: 10px; background-color: #ccc;">
    <p>What does the weather sound like in ...?</p>
    <p>
    <button onclick="myFunction()">Enter City Name</button>
    <button onclick="stop()">Stop</button>
    <p id="demo"></p>
    
    </p>
    
    </body>
    <div id="location"></div>
    <div id="weather">
    <div id="description"></div>
    <h1 id="temp"></h1>
    <h1 id="humidity"></h1>
    </div>
    </blockquote>
    
    <script>
    
    var audioContext = new window.AudioContext
    var oscillator   = audioContext.createOscillator()
    var modulator    = audioContext.createOscillator()
    
    // the output gain
    var gainNode     = audioContext.createGain()
    
    var modInd        = audioContext.createGain();
    modInd.gain.value = 100;
    
    gainNode.gain.value = 0
    
    modulator.connect(modInd)
    modInd.connect(oscillator.detune)
    oscillator.connect(gainNode)
    gainNode.connect(audioContext.destination)
    
    oscillator.start(0)
    oscillator.frequency.setValueAtTime(100, audioContext.currentTime);
    
    modulator.start(0)
    modulator.frequency.setValueAtTime(100, audioContext.currentTime);
    
    function myFunction() {
      var city = prompt("Enter City Name", "Potsdam");
      if (city != null) {
      get_weather(city)
      }
    }
    
    
    function stop()
    {
    gainNode.gain.linearRampToValueAtTime(0, audioContext.currentTime + 1);
    }
    
    function frequency(y)
    {
    oscillator.frequency.value = y
    }
    
    function get_weather( cityName )
    {
        var key = 'eab7c410674e15bfdd841f66941a92c2';
        fetch('https://api.openweathermap.org/data/2.5/weather?q=' + cityName+ '&appid=' + key)
        .then(function(resp) { return resp.json()}) // Convert data to json
        .then(function(data) {
        setSynth(data);
        })
        .catch(function() {
        // catch any errors
        });
    }
    
    function setSynth(d)
    {
        var celcius = Math.round(parseFloat(d.main.temp)-273.15);
        var fahrenheit = Math.round(((parseFloat(d.main.temp)-273.15)*1.8)+32);
    
        var humidity = d.main.humidity;
    
        oscillator.frequency.linearRampToValueAtTime(1000*(100/(celcius*celcius)), audioContext.currentTime + 1);
    
        modulator.frequency.linearRampToValueAtTime(humidity, audioContext.currentTime + 1);
    
        gainNode.gain.linearRampToValueAtTime(1, audioContext.currentTime + 1);
    
        document.getElementById('description').innerHTML = d.weather[0].description;
        document.getElementById('temp').innerHTML = celcius + '&deg;';
        document.getElementById('location').innerHTML = d.name;
        document.getElementById('humidity').innerHTML = 'Humidity: '+humidity;
    }
    
    </script>
    </body>
    </html>
    

    A Brief History

    Beginnings of Computer Music

    First experiments with digital sound creation took place in 1951 in Australia, on the CSIRAC computer system.

    Apart from these experiments, digital sound synthesis dates back to the first experiments of Max Mathews at Bell Labs in the mid-1950s. Mathews created the MUSIC I programming language for generating musical sounds through the synthesis of a single triangular waveform on an IBM 704. The Silver Scale, realized by psychologist Newman Guttman in 1957, is one of the first digitally synthesized pieces of music (Roads, 1980).


    MUSIC and its versions (I, II, III, ...) are direct or indirect ancestors of most recent languages for sound processing. Mathews defined the building blocks for digital sound synthesis and processing in these frameworks (Mathews, 1969, p. 48). This concept of unit generators is still used today. Although the first experiments sound amusing from today's perspective, he already anticipated the potential of the computer as a musical instrument:

    “There are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast to the performance of ordinary instruments.” (Mathews, 1963)

    Mathews created the first digital musical pieces himself, but in order to fully explore the musical potential, he was joined by composers, artists and other researchers, such as Newman Guttman, James Tenney and Jean-Claude Risset. Risset contributed to the development of electronic music by exploring the possibilities of spectral analysis-resynthesis (1:20) and psychoacoustic phenomena like the Shepard tone (4:43).


    Later, Bell Labs was visited by many renowned composers of various styles and genres, including John Cage, Edgard Varèse and Laurie Spiegel (Park, 2009). The work at Bell Labs will be in focus again in the section on additive synthesis.


    A Pedigree

    The synthesis experiments at Bell Labs are the origin of most music programming languages and methods for digital sound synthesis. On different branches, techniques developed from that seed (Bilbao, 2009):

    /images/basics/bilbao_history.png

    Chowning & CCRMA

    The foundation for many further developments was laid when John Chowning brought the software MUSIC VI to Stanford after a visit at Bell Labs in the 1960s. After migrating it to a PDP-6 computer, Chowning worked on his groundbreaking digital compositions, such as Turenas (1972), using frequency modulation (FM) synthesis and spatial techniques. Although Chowning is best known for the discovery of FM synthesis, these works are far more than mere studies of technical means.


    Puckette & IRCAM

    Most of the active music programming environments, such as Pure Data, Max/MSP, SuperCollider or Csound, are descendants of the MUSIC languages. Graphical programming languages like Max/MSP and Pure Data were actually born as patching and mapping environments. Their common ancestor, the Patcher (Puckette, 1986; Puckette, 1988), developed by Miller Puckette at IRCAM in the 1980s, was a graphical environment for connecting MAX real-time processes and for controlling MIDI instruments.

    The new means of programming and the increase in computational power allowed musique mixte with digital signal processing. Pluton (1988-89) by Philippe Manoury is one of the first pieces to use MAX for processing piano sounds in real time (6:00-8:30).



    References

  • John Chowning. Turenas: The Realization of a Dream. In Proc. of the 17es Journées d'Informatique Musicale, Saint-Etienne, France, 2011.
  • Stefan Bilbao. Numerical Sound Synthesis. Wiley Online Library, 2009. ISBN 9780470749012. doi:10.1002/9780470749012.
  • Ananya Misra and Perry R. Cook. Toward Synthesized Environments: A Survey of Analysis and Synthesis Methods for Sound Designers and Composers. In Proceedings of the International Computer Music Conference (ICMC 2009). 2009.
  • Tae Hong Park. An Interview with Max Mathews. Computer Music Journal, 33(3):9–22, 2009.
  • Julius O. Smith. Viewpoints on the History of Digital Synthesis. In Proceedings of the International Computer Music Conference, 1–10. 1991.
  • Miller S. Puckette. The Patcher. In Proceedings of the International Computer Music Conference (ICMC). 1988.
  • Emmanuel Favreau, Michel Fingerhut, Olivier Koechlin, Patrick Potacsek, Miller S. Puckette, and Robert Rowe. Software Developments for the 4X Real-Time System. In Proceedings of the International Computer Music Conference (ICMC). 1986.
  • Curtis Roads and Max Mathews. Interview with Max Mathews. Computer Music Journal, 4(4):15–22, 1980.
  • Max V. Mathews. The Technology of Computer Music. MIT Press, 1969.
  • Max V. Mathews. The Digital Computer as a Musical Instrument. Science, 142(3592):553–557, 1963.

