# OSC: Open Sound Control

Open Sound Control (OSC) is the standard for exchanging control data between audio applications, both in distributed systems and in local setups with multiple components. Almost any programming language and environment for computer music offers means for using OSC, usually built in.

OSC is based on the UDP/IP protocol in a client-server paradigm. A server needs to be started to listen for incoming messages sent from a client. For bidirectional communication, each participant needs to implement both a server and a client. Servers listen on a freely chosen port, whereas clients send their messages to a specified IP address and port.

## OSC Messages

A typical OSC message consists of a path and an arbitrary number of arguments. The following message sends a single floating point value, using the path /synthesizer/volume/:

```
/synthesizer/volume/ 0.5
```


The path can be any string with slash-separated sub-strings, like paths in an operating system. OSC receivers can route and sort incoming messages according to the path. Arguments can be integers, floats and strings. Unlike MIDI, OSC only defines the transport format and does not standardize musical parameters. Hence, the paths used by a given software are completely arbitrary and are defined by its developers.
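Although OSC libraries exist for virtually every language, the wire format is simple enough to assemble by hand: null-padded strings aligned to four bytes, a type tag string, and big-endian binary arguments. The following Python sketch encodes the message from above and sends it via UDP; the target port 9000 is an arbitrary choice for illustration:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of four bytes, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(path: str, value: float) -> bytes:
    """Encode an OSC message with a single float argument."""
    return (osc_pad(path.encode())       # address pattern
            + osc_pad(b",f")             # type tag string: one float
            + struct.pack(">f", value))  # big-endian 32-bit float

# send the message from above to a server on localhost, port 9000
msg = osc_message("/synthesizer/volume/", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

Since UDP is connectionless, the message is sent regardless of whether a server is actually listening on the given port.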

# First Sounds with SuperCollider

## Boot a Server

Synthesis and processing happen inside an SC server, so the first thing to do when creating sound with SuperCollider is to boot a server. The ScIDE offers menu entries for doing that; however, using code for doing so increases the flexibility. In this first example we boot the default server, which is by default associated with the global variable s:

```supercollider
// boot the server
s.boot;
```


### A First Node

In the SC server, sound is generated and processed inside nodes. These nodes can later be manipulated, arranged and connected. A simple node can be defined inside a function, enclosed in curly brackets:

```supercollider
// play a sine wave
(
{
    // a sine wave with a frequency of 1000 Hz
    var x = SinOsc.ar(1000);

    // send the signal to the output bus '0'
    Out.ar(0, x);

}.play;
)
```


In the ScIDE, there are several ways to get information on the active nodes on the SC server. The node tree can be visualized in the server menu options or printed from sclang, by evaluating:

```supercollider
s.queryAllNodes
```


After creating just the sine wave node, the server will show the following node state:

```
NODE TREE Group 0
   1 group
      1001 temp__1
```


The GUI version of the node tree looks as follows. This representation is updated in real time, when left open:

Note

The server itself does not know any variable names but addresses all nodes by their ID. IDs are assigned in ascending order. In the example above, the sine wave node can be accessed with the ID 1001.

## Removing Nodes

Any node can be removed from a server, provided its unique ID:

```supercollider
s.sendMsg("/n_free", 1003)
```


All active nodes can be removed from the server at once. This can be very handy when experiments get out of hand or a simple sine wave does not quit. It is done by pressing Ctrl + . (Cmd + . on macOS) or evaluating:

```supercollider
// free all nodes from the server
s.freeAll
```


## Running SC Files

SuperCollider code is written in text files with the extensions .sc or .scd. On Linux and Mac systems, a complete SC file can be executed in the terminal by calling the language with the file as argument:

```
$ sclang file.scd
```

## Finding ALSA Devices

One way of finding the ALSA name of your interface is to type the following command:

```
$ aplay -l
```

The output lists all ALSA-capable playback devices, with each device's name given after `card x:`. `PCH` is usually the default onboard sound card:

```
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 7: HDMI 1 [HDMI 1]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 8: HDMI 2 [HDMI 2]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 9: HDMI 3 [HDMI 3]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: HDMI [HDA Intel HDMI], device 10: HDMI 4 [HDMI 4]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 0: CX20751/2 Analog [CX20751/2 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
```

## Connecting JACK Clients

As with almost everything, JACK connections can be modified from the terminal. All available JACK ports can be listed with the following command:

```
$ jack_lsp
```


Two ports can be connected with the following command:

```
$ jack_connect client1:output client2:input
```

Disconnecting two ports is done as follows:

```
$ jack_disconnect client1:output client2:input
```


If possible, a GUI-based tool such as QjackCtl can be more convenient for connecting clients. It can be started via the desktop environment or from the command line:

```
$ qjackctl
```

## Software Instruments

Today, physical modeling software for high-quality piano and organ synthesis has emerged (Amazona article). Other implementations aim at string instruments:

• Pianoteq Pro 6
• Organteq Alpha
• Strum GS 2
• AAS Chromaphone 2

### Modular

Since simple physical models are nowadays easily implemented on small embedded systems, various modules exist on the market. In a modular setup, this is especially interesting, since arbitrary excitation signals can be generated and patched. These are just two examples:

## Physical Models in Experimental Music

### Eikasia

Unlike FM synthesis, subtractive synthesis or sampling, physical modeling does not come with genre-defining examples from popular music. However, the technique has been used a lot in the context of experimental music (Chafe, 2004). Eikasia (1999) by Hans Tutschku was realized using the IRCAM software Modalys:

### S-Morphe-S

In his 2002 work S-Morphe-S, Matthew Burtner used physical models of singing bowls, excited by a saxophone:

## References

• Stefan Bilbao, Charlotte Desvages, Michele Ducceschi, Brian Hamilton, Reginald Harrison-Harsley, Alberto Torin, and Craig Webb. Physical modeling, algorithms, and sound synthesis: the NESS project. Computer Music Journal, 43(2-3):15–30, 2019.
• Chris Chafe. Case studies of physical models in music composition. In Proceedings of the 18th International Congress on Acoustics. 2004.
• Vesa Välimäki. Discrete-time modeling of acoustic tubes using fractional delay filters. Helsinki University of Technology, 1995.
• Gijs de Bruin and Maarten van Walstijn. Physical models of wind instruments: a generalized excitation coupled with a modular tube simulation platform. Journal of New Music Research, 24(2):148–163, 1995.
• Matti Karjalainen, Vesa Välimäki, and Zoltán Jánosy. Towards high-quality sound synthesis of the guitar and string instruments. In Computer Music Association, 56–63. 1993.
• Julius O. Smith. Physical modeling using digital waveguides. Computer Music Journal, 16(4):74–91, 1992.
• Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 1. Journal of the Audio Engineering Society, 19(6):462–470, 1971.
• Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 2. Journal of the Audio Engineering Society, 19(7):542–551, 1971.
# Concept of Subtractive Synthesis

## Functional Units

Subtractive synthesis is probably the best-known and most popular method of sound synthesis. The basic idea is to start with signals of rich spectral content, which are then shaped by filters. Although the possibilities of subtractive synthesis are virtually unlimited, especially when combined with other methods, the principle can be explained with three groups of functional units:

• Generators
• Manipulators
• Modulators

[Fig.1] gives an overview of how these functional units are arranged in a subtractive synthesizer. Modulators and generators overlap, since they are interchangeable in many respects. This section uses the terminology from the (modular) analog domain, with Voltage Controlled Oscillators (VCO), Voltage Controlled Filters (VCF) and Voltage Controlled Amplifiers (VCA).

 [Fig.1] Functional units in subtractive synthesis.

Generators

• Oscillators (VCO)
• Noise Generators
• ...

Frequently used oscillators in subtractive synthesis are the basic waveforms with high-frequency energy, such as the sawtooth, square wave or triangular wave (see the section on additive synthesis). Noise generators can be used for adding non-harmonic components.

Manipulators

• Filters (VCF)
• Amplifiers (VCA)
• ...

The most important manipulators are filters and amplifiers (or attenuators). Filters are explained in detail in the following sections.

Modulators

• LFO (Low Frequency Oscillators)
• ...

Modulators are units which control the parameters of generators and manipulators over time. This includes periodic modulations, such as the LFO, and envelopes, which are triggered by keyboard interaction.

Like with all methods for sound synthesis, the dynamic change of timbre is an essential target for generating vivid sounds. [Fig.2] shows a more specific signal flow which is a typical subtractive synth patch for generating lead or bass sounds.

• The signal from a VCO is manipulated by a VCF and then attenuated by a VCA.
• The VCO has a sawtooth or square waveform.
• The cutoff frequency of the VCF and the amplitude of the VCA are controlled with individual envelopes.
• If ENV2 has a faster decay than ENV1, the sound will have a crisp onset and a mellow decay, resulting in the typical thump.

 [Fig.2] Subtractive patch for bass and lead synth.
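The patch above can be sketched in a few lines of code. The following pure-Python fragment is a minimal illustration under simplifying assumptions, not a production synth: a naive (non-bandlimited) sawtooth as VCO, a one-pole lowpass as VCF, and two exponential decay envelopes as ENV1 and ENV2; all parameter values are arbitrary choices:

```python
import math

fs = 44100            # sample rate in Hz
dur = 0.5             # note duration in seconds
f0 = 110.0            # VCO frequency in Hz

def env(t, decay):
    """Exponential decay envelope."""
    return math.exp(-t / decay)

env1_decay = 0.4      # ENV1: amplitude envelope (slower)
env2_decay = 0.1      # ENV2: filter envelope (faster)

y = []
lp = 0.0              # one-pole lowpass state
for i in range(int(fs * dur)):
    t = i / fs
    # VCO: naive (non-bandlimited) sawtooth in [-1, 1]
    saw = 2.0 * ((t * f0) % 1.0) - 1.0
    # VCF: one-pole lowpass with the cutoff swept down by ENV2
    cutoff = 200.0 + 8000.0 * env(t, env2_decay)
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)
    lp += a * (saw - lp)
    # VCA: output attenuated by ENV1
    y.append(lp * env(t, env1_decay))
```

Since ENV2 decays faster than ENV1, the spectrum darkens before the amplitude has faded, producing the thump described above.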

# AM & Ring Modulation: Formula & Spectrum

## Amplitude Modulation vs. Ring Modulation

Both amplitude modulation and ring modulation are a multiplication of two signals. The basic formula is the same for both:

$y[n] = x[n] \cdot m[n]$

However, for ring modulation the modulation signal is symmetric:

$y[n] = \sin\left(2 \pi f_c \frac{n}{f_s}\right) \cdot \sin\left(2 \pi f_m \frac{n}{f_s}\right)$

Whereas for amplitude modulation, the modulation signal is asymmetric:

$y[n] = \sin\left(2 \pi f_c \frac{n}{f_s}\right) \cdot \left( 1+ \sin\left[2 \pi f_m \frac{n}{f_s}\right]\right)$

This difference has an influence on the resulting spectrum and on the sound, as the following examples show.

### AM Spectrum

The spectrum for amplitude modulation can be calculated as follows:

$Y[k] = DFT(y[n])$

$\displaystyle Y[k] = \sum_{n=0}^{N-1} y[n] \cdot e^{-j 2 \pi k \frac{n}{N}}$

$\displaystyle = \sum_{n=0}^{N-1} \sin\left(2 \pi f_c \frac{n}{f_s}\right) \cdot \left( 1+ \sin\left[2 \pi f_m \frac{n}{f_s}\right]\right) \cdot e^{-j 2 \pi k \frac{n}{N}}$

$\displaystyle =\sum_{n=0}^{N-1} \left( \sin\left(2 \pi f_c \frac{n}{f_s}\right) + 0.5 \left( \cos\left(2 \pi (f_c - f_m)\frac{n}{f_s}\right) - \cos\left(2 \pi (f_c + f_m)\frac{n}{f_s}\right) \right) \right) \cdot e^{-j 2 \pi k \frac{n}{N}}$

$\displaystyle= \delta[f_c] + 0.5\, \delta[f_c - f_m] + 0.5\, \delta[f_c + f_m]$

AM creates a spectrum with a peak at the carrier frequency and two peaks below and above it. Their position is defined by the difference between carrier and modulator.

### Ring Modulation Spectrum

$\mathcal{F} [ y(t)] = \int\limits_{-\infty}^{\infty} y(t)\, e^{-j 2 \pi f t}\, \mathrm{d}t$

$= \int\limits_{-\infty}^{\infty} \sin(2 \pi f_c t) \sin(2 \pi f_m t)\, e^{-j 2 \pi f t}\, \mathrm{d}t$

$= \frac{1}{(2 j)^2} \int\limits_{-\infty}^{\infty} \left( e^{j 2 \pi f_c t} - e^{-j 2 \pi f_c t}\right) \left( e^{j 2 \pi f_m t} - e^{-j 2 \pi f_m t}\right) e^{-j 2 \pi f t}\, \mathrm{d}t$

$= -\frac{1}{4} \int\limits_{-\infty}^{\infty} \left( e^{j 2 \pi (f_c+f_m) t} - e^{j 2 \pi (f_c-f_m) t} - e^{j 2 \pi (-f_c+f_m) t} + e^{j 2 \pi (-f_c-f_m) t} \right) e^{-j 2 \pi f t}\, \mathrm{d}t$

$= -\frac{1}{4} \left[ \delta(f - f_c - f_m) - \delta(f - f_c + f_m) - \delta(f + f_c - f_m) + \delta(f + f_c + f_m) \right]$

Ring modulation creates a spectrum with two peaks below and above the carrier frequency. Their position is defined by the difference between carrier and modulator. The carrier itself is suppressed, since the modulation signal is symmetric.
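A small numerical experiment confirms both spectra. The following pure-Python sketch evaluates the DFT magnitude at the carrier and sideband bins; N, fs, fc and fm are arbitrary choices, picked so that all components fall exactly on DFT bins:

```python
import math

N = 512             # DFT length
fs = 512            # sample rate, chosen so that bin k corresponds to k Hz
fc, fm = 100, 20    # carrier and modulator frequency (both fall on DFT bins)

def mag(x, k):
    """DFT magnitude of signal x at bin k."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

ring = [math.sin(2 * math.pi * fc * n / fs) * math.sin(2 * math.pi * fm * n / fs)
        for n in range(N)]
am   = [math.sin(2 * math.pi * fc * n / fs) * (1 + math.sin(2 * math.pi * fm * n / fs))
        for n in range(N)]

# ring modulation: sidebands at fc +- fm only, the carrier is suppressed
print(round(mag(ring, fc)), round(mag(ring, fc - fm)), round(mag(ring, fc + fm)))
# -> 0 128 128

# AM: the same sidebands plus a peak at the carrier itself
print(round(mag(am, fc)), round(mag(am, fc - fm)), round(mag(am, fc + fm)))
# -> 256 128 128
```

A full-scale sinusoid on a bin has DFT magnitude N/2 = 256, so the 0.5-amplitude sidebands show up with magnitude 128.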

The sine wave can be considered the atomic unit of timbre and thus of musical sounds. Additive synthesis and related approaches build musical sounds from scratch, using these integral components. When a sound is composed of several sinusoids, they are referred to as partials, regardless of their properties. Partials at integer multiples of a fundamental frequency are called harmonics, or overtones when counted relative to the first harmonic.

## Fourier Series

According to the Fourier theorem, any periodic signal can be represented by an infinite sum of sinusoids with individual

• amplitude $a_i$
• frequency $f_i$
• phase $\varphi_i$

$\displaystyle y(t) = \sum\limits_{i=1}^{\infty} a_i \sin(2 \pi f_i \, t +\varphi_i )$

When applying this principle to musical sounds, a simplified model can be used to generate basic timbres. All sinusoidal components become integer multiples of a fundamental frequency $f_0$, so-called harmonics, with a maximum number of partials $N_{part}$. In an even further reduced model, the phases of the partials can be ignored:

$\displaystyle y(t) = \sum\limits_{n=1}^{N_{part}} a_n(t) \sin(2 \pi \, n \, f_0(t) \, t)$

---

As the following sections on spectral modeling show, a more advanced model is needed to synthesize musical sounds which are indistinguishable from the original. This includes the partials' phases, inharmonicities as deviations from exact integer multiples, noise components and transients. However, depending on the number of partials and the driving functions for their parameters, even this limited formula can generate convincing harmonic sounds.
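The reduced formula translates directly into code. The following pure-Python fragment renders one second of a harmonic tone with static amplitudes $a_n = 1/n$, an arbitrary choice that yields a sawtooth-like timbre; the output is not normalized for playback:

```python
import math

fs = 44100          # sample rate in Hz
f0 = 220.0          # fundamental frequency in Hz
n_part = 16         # number of partials

def additive(t):
    """Sum of n_part harmonics with static amplitudes a_n = 1/n."""
    return sum(math.sin(2 * math.pi * n * f0 * t) / n
               for n in range(1, n_part + 1))

# render one second of a sawtooth-like harmonic tone
y = [additive(i / fs) for i in range(fs)]
```

Replacing the static amplitudes with time-varying functions $a_n(t)$ is what turns this static timbre into a vivid sound.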

## A Brief History

### Early Mechanical

Early use of the Fourier representation, that is, additive synthesis, for modeling musical sounds was made by Hermann von Helmholtz. He built mechanical devices for additive synthesis, based on tuning forks, resonant tubes and electromagnetic excitation. Von Helmholtz used these devices for investigating various aspects of harmonic sounds, including spectral distribution and relative phases.

### Early Analog

The history of Elektronische Musik started with additive synthesis. In his composition Studie II, Karlheinz Stockhausen composed timbres by superimposing sinusoidal components. In that era this was realized through single sine wave oscillators, tuned to the desired frequency and recorded on tape.

Studie II is the attempt to fully compose music on the timbral level in a rigid score. Stockhausen therefore generated tables with frequencies and mixed tones to create the source material. [Fig.1] shows an excerpt from the timeline which was used to arrange the material. The timbres are recognizable by their vertical position in the upper system, whereas the lower system represents articulation, that is, fades and amplitudes.

 [Fig.1] From the score of Studie II.

### Early Digital

Max Mathews

As mentioned in Introduction II, Max Mathews used additive synthesis to generate the first digitally synthesized pieces of music in the 1950s. By the early 1960s, Mathews had advanced the method to synthesize dynamic timbres, as in Bicycle Built for Two:

Iannis Xenakis

In his electroacoustic compositions, Iannis Xenakis made use of the UPIC system for additive synthesis (Di Scipio, 1998), as for example in Mycenae-Alpha (1977):

#### References

• Agostino Di Scipio. Compositional models in Xenakis's electroacoustic music. Perspectives of New Music, pages 201–243, 1998.
• Hermann von Helmholtz. Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik, 3. umgearbeitete Ausgabe. Braunschweig: Vieweg, 1870.
# Faust: Sequential Composition

Sequential composition passes a signal directly from one block to the next. In Faust, this is done with the : operator. The following example illustrates this with a square wave signal, which is processed with a lowpass filter:

The square wave has a fixed frequency of $50\ \mathrm{Hz}$. The lowpass filter has two arguments, the first being the filter order, the second the cutoff frequency, which is controlled with a horizontal slider. Both blocks are defined and subsequently connected in the process function with the : operator. The adjustable cutoff parameter is additionally smoothed with si.smoo to avoid clicks.

Load this example in the Faust online IDE for a quick start:

```faust
import("stdfaust.lib");

// cutoff frequency in Hz, smoothed with si.smoo to avoid clicks
freq = hslider("frequency", 100, 10, 1000, 0.001) : si.smoo;

sig  = os.square(50);
filt = fi.lowpass(5, freq);

process = sig : filt;
```


Contents © Henrik von Coler 2021 - Contact