JACK

The JACK API implements an audio server, allowing the connection of various software clients and hardware interfaces. In short, it turns the whole system into a digital audio workstation (DAW). It is the pro audio standard on Linux systems, but is also available for Mac and Windows.

Figure: Qjackctl with hardware connections and two clients.

Realtime Weather Sonification

OpenWeatherMap

This first, simple Web Audio sonification application makes use of the OpenWeatherMap API for real-time, browser-based sonification of weather data. Fetching data requires a free subscription: https://home.openweathermap.org

Once subscribed, the API key can be used to get current weather information in the browser:

https://api.openweathermap.org/data/2.5/weather?q=Potsdam&appid=eab7c410674e15bfdd841f66941a92c2
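
The same request can also be issued from JavaScript with the Fetch API. The following minimal sketch simply prints the parsed response to the browser console; the city name is an arbitrary example and YOUR_API_KEY is a placeholder for the personal key obtained above:

// fetch the current weather for one city and
// print the parsed JSON to the browser console
var key  = 'YOUR_API_KEY'; // placeholder: insert the personal API key here
var city = 'Potsdam';

fetch('https://api.openweathermap.org/data/2.5/weather?q=' + city + '&appid=' + key)
    .then(function(resp) { return resp.json(); })  // convert the response to JSON
    .then(function(data) { console.log(data); })   // inspect the data structure
    .catch(function(err) { console.error(err); }); // report network or API errors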

JSON Data Structure

The resulting output in JSON looks like this:

{
  "coord": {
    "lon": 13.41,
    "lat": 52.52
  },
  "weather": [
    {
      "id": 804,
      "main": "Clouds",
      "description": "overcast clouds",
      "icon": "04d"
    }
  ],
  "base": "stations",
  "main": {
    "temp": 9.74,
    "feels_like": 6.57,
    "temp_min": 9,
    "temp_max": 10.56,
    "pressure": 1034,
    "humidity": 93
  },
  "visibility": 8000,
  "wind": {
    "speed": 4.1,
    "deg": 270
  },
  "clouds": {
    "all": 90
  },
  "dt": 1604655648,
  "sys": {
    "type": 1,
    "id": 1275,
    "country": "DE",
    "sunrise": 1604643143,
    "sunset": 1604676458
  },
  "timezone": 3600,
  "id": 2950159,
  "name": "Berlin",
  "cod": 200
}

All entries of this data structure can be used as synthesis parameters in a sonification system with Web Audio.
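
Individual entries are read as nested properties of the parsed object. A minimal sketch, assuming the JSON shown above has been parsed into a variable named data (the variable names are chosen freely for this example):

// a few fields which can serve as synthesis parameters
var temperature = data.main.temp;              // temperature (Kelvin, unless units=metric is requested)
var humidity    = data.main.humidity;          // relative humidity in percent
var windSpeed   = data.wind.speed;             // wind speed in m/s
var cloudiness  = data.clouds.all;             // cloud cover in percent
var description = data.weather[0].description; // e.g. "overcast clouds"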

Temperatures to Frequencies

Mapping

In this example we are using a simple frequency modulation formula for turning temperature and humidity into more or less pleasing (annoying) sounds. The frequency \(f_1\) of the first oscillator is derived from the temperature \(T\) in degrees Celsius:

\(\displaystyle f_1 = 10^5 \, \frac{1}{(T / {}^{\circ}\mathrm{C})^{2}}\ \mathrm{Hz}\)

The modulator frequency is controlled by the humidity \(H\):

\(\displaystyle y(t) = \sin\left(2 \pi \left(f_1 + 100 \cdot \sin(2 \pi H t)\right) t\right)\)
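
Translated into JavaScript, the two mappings are simple one-liners. A minimal sketch, assuming the temperature T has already been converted to degrees Celsius and the humidity H is given in percent (the function names are chosen for this example only):

// carrier frequency from the temperature: f1 = 10^5 / T^2
function carrierFrequency(T) {
    return 1000 * (100 / (T * T));
}

// modulator frequency from the humidity: one Hertz per percent
function modulatorFrequency(H) {
    return H;
}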


The Result

The resulting app fetches the weather data of a chosen city, extracts temperature and humidity and sets the parameters of the audio processes:

Where would you rather be?

What does the weather sound like in ...?


Code

weather/weather.html (Source)

<!doctype html>
<html>

<head>
<title>Where would you rather be?</title>
</head>
<body>
<blockquote style="border: 2px solid #122; padding: 10px; background-color: #ccc;">
<p>What does the weather sound like in ...?</p>
<p>
<button onclick="myFunction()">Enter City Name</button>
<button onclick="stop()">Stop</button>
</p>
<p id="demo"></p>
<div id="location"></div>
<div id="weather">
<div id="description"></div>
<h1 id="temp"></h1>
<h1 id="humidity"></h1>
</div>
</blockquote>

<script>

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var oscillator   = audioContext.createOscillator();
var modulator    = audioContext.createOscillator();

// the output gain
var gainNode     = audioContext.createGain();

// modulation index: gain applied to the modulator output (in cents)
var modInd        = audioContext.createGain();
modInd.gain.value = 100;

// start muted
gainNode.gain.value = 0;

// FM routing: modulator -> modulation index -> carrier detune -> output
modulator.connect(modInd);
modInd.connect(oscillator.detune);
oscillator.connect(gainNode);
gainNode.connect(audioContext.destination);

oscillator.start(0)
oscillator.frequency.setValueAtTime(100, audioContext.currentTime);

modulator.start(0)
modulator.frequency.setValueAtTime(100, audioContext.currentTime);

function myFunction() {
  // resume the context: browsers require a user gesture before audio can start
  audioContext.resume();
  var city = prompt("Enter City Name", "Potsdam");
  if (city != null) {
    get_weather(city);
  }
}


function stop()
{
    gainNode.gain.linearRampToValueAtTime(0, audioContext.currentTime + 1);
}

// helper for setting the carrier frequency directly (not used below)
function frequency(y)
{
    oscillator.frequency.value = y;
}

function get_weather( cityName )
{
    var key = 'eab7c410674e15bfdd841f66941a92c2';
    fetch('https://api.openweathermap.org/data/2.5/weather?q=' + cityName + '&appid=' + key)
    .then(function(resp) { return resp.json(); })  // convert the response to JSON
    .then(function(data) {
        setSynth(data);
    })
    .catch(function(error) {
        // report any errors (network problems, unknown city, ...)
        console.error(error);
    });
}

function setSynth(d)
{
    // OpenWeatherMap returns temperatures in Kelvin by default
    var celsius = Math.round(parseFloat(d.main.temp)-273.15);
    var fahrenheit = Math.round(((parseFloat(d.main.temp)-273.15)*1.8)+32);

    var humidity = d.main.humidity;

    // carrier frequency: f1 = 10^5 / T^2 (T in degrees Celsius)
    oscillator.frequency.linearRampToValueAtTime(1000*(100/(celsius*celsius)), audioContext.currentTime + 1);

    // modulator frequency follows the humidity in percent
    modulator.frequency.linearRampToValueAtTime(humidity, audioContext.currentTime + 1);

    gainNode.gain.linearRampToValueAtTime(1, audioContext.currentTime + 1);

    document.getElementById('description').innerHTML = d.weather[0].description;
    document.getElementById('temp').innerHTML = celsius + '&deg;';
    document.getElementById('location').innerHTML = d.name;
    document.getElementById('humidity').innerHTML = 'Humidity: '+humidity;
}

</script>
</body>
</html>

Schedule and Milestones

Phase 1

  • November 11
    • hand out access points
    • order components

  • November 25
    • individual connection checks
    • troubleshooting

Phase 2

  • December 05
    • jam session
    • project ideas
  • January 20
    • small presentation
    • project discussions

Phase 3

  • January 23
    • programming
    • rehearsal
  • February 20
    • concert


Going Global

Quarantine Sessions

The quarantine sessions are an ongoing concert series between CCRMA at Stanford, the TU Studio in Berlin, the Orpheus Institute in Ghent, Belgium, and various guests:

https://hvc.berlin/projects/quarantine-sessions/

These sessions use the same software components as the SPRAWL System. Audio is transmitted via JackTrip, and SuperCollider is used for signal processing.

Online Rehearsals

Background

The EOC

The Electronic Orchestra Charlottenburg (EOC) was founded at the TU Studio in 2017 as a place for developing and performing with custom musical instruments on large loudspeaker setups.

EOC Website: https://eo-charlottenburg.de/

Initially, the EOC worked in a traditional live setup with a sound director. Several requirements arose during the first years:

  • enable control of the mixing and rendering system by the musicians
    • control spatialization
  • flexible spatial arrangement of musicians
    • break up rigid stage setup
  • distribution of data
    • scores
    • playing instructions
    • visualization of system states

The SPRAWL System

During the winter semester 2019-20, Chris Chafe was invited as a guest professor at the Audio Communication Group. In combined classes, the SPRAWL network system was designed and implemented to solve the problems introduced above in local networks:

https://hvc.berlin/projects/sprawl_system/


Links and Course Material

TU Website

The official TU website with information on the schedule and assessment:

https://www.ak.tu-berlin.de/menue/lehre/wintersemester_202021/network_music_performance_systems/

TUB Cloud Drive

The TUB cloud drive is used for additional materials like audio, scores and papers:

https://tubcloud.tu-berlin.de/s/eRsoiXL9kStZdMr

SPRAWL GIT Repository

The repository contains shell scripts, SuperCollider code and configuration files for the server and the clients of the SPRAWL system:

https://gitlab.tubit.tu-berlin.de/henrikvoncoler/sprawl

JackTrip GIT Repository

JackTrip is the open-source audio-over-IP software used in the SPRAWL system:

https://github.com/jacktrip/jacktrip



Wavetable Oscillator with Phase Reset

The Faust oscillators.lib comes with many different implementations of oscillators for various waveforms. At some point one might still need a behavior that is not included, and lower-level approaches become necessary.

This example shows how to use a phasor to read a wavetable with a sine waveform. The implementation has an additional trigger input for resetting the phase of the oscillator on each rising edge of the trigger signal. This can come in handy in various applications, especially for phase-sensitive transients, as for example in kick drums.

The example is derived from Barkati and Jouvelot (2013) and is part of the repository:

import("stdfaust.lib");

// some basic stuff
sr = SR;
twopi = 2.0*ma.PI;

// define the waveform in table
ts =  1<<16; // size = 65536 samples (max of unsigned short)
time = (+(1) ~ _ ) , 1 : - ;
sinewave =  ((float(time) / float(ts)) * twopi) : sin;

phase = os.hs_phasor(ts,freq,trig);

// read from table
sin_osc( freq) = rdtable(ts ,sinewave , int(phase)) ;

// generate a one sample impulse from the gate
trig =  pm.impulseExcitation(reset);

reset = button ("reset");
freq = hslider("freq", 100, 0, 16000, 0.00001);

// offset = hslider("offset", 0, 0, 1, 0.00001);

process = sin_osc(freq);
  • Karim Barkati and Pierre Jouvelot. Synchronous programming in audio processing: a lookup table oscillator case study. ACM Computing Surveys (CSUR), 46(2):1–35, 2013.

