Pulse Width Modulation

Pulse Width Modulation (PWM) is a method for changing the timbre of a square wave. It is frequently found in analog and digital synthesizers as a means for enriching a sound, for example in pads.

A PWM signal can be generated with case-based logic, that is, a threshold. In programming, this can be implemented with a phasor and the pulse width $\tau$:

$$ \mathrm{PWM}(t) = \begin{cases} \ \ \ 1 & \text{for} \ \ t \leq \frac{T \tau}{100} \\ -1 & \text{for} \ \ t > \frac{T \tau}{100} \end{cases} $$

The pulse width usually takes values between $0\%$ and $100\%$.
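A minimal sketch of this threshold logic, assuming NumPy and arbitrarily chosen sampling rate, frequency and pulse width:

import numpy as np

fs  = 48000          # sampling rate in Hz
f0  = 100            # frequency of the PWM signal in Hz
tau = 25             # pulse width in percent

n      = np.arange(int(0.05 * fs))      # 50 ms of signal
phasor = (f0 * n / fs) % 1.0            # ramp from 0 to 1 within each period

# case-based logic: +1 while the phasor is below the threshold, -1 above
pwm = np.where(phasor <= tau / 100.0, 1.0, -1.0)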

Organizing Processes with systemd

systemd is a set of tools managing, among other things, the startup procedure on Linux systems. Within the Linux user and developer community, there is a debate whether systemd violates the Unix philosophy - however, it works well for starting all the software components we need when booting the server or the Access Points.

System services can be started with dependencies on other services. This makes it possible to start a system with many interrelated components. Services can be started and stopped during operation and, depending on the configuration, they can be restarted automatically if they crash.


Creating Services

systemd services can be declared as user services or system services. They are defined in text files, which need to be located in one of the following directories:

/usr/lib/systemd/user
/usr/lib/systemd/system

The JACK Service

The JACK service needs to be started before all other audio applications, since they rely on a running JACK server. A file jack.service defines the complete service:

[Unit]
Description=Jack audio server
After=sound.target local-fs.target

[Install]
WantedBy=multi-user.target

[Service]
Type=simple
PrivateTmp=true
Environment="JACK_NO_AUDIO_RESERVATION=1"
ExecStart=/usr/bin/jackd -P 90 -a a -d dummy -p 128
LimitRTPRIO=95
LimitRTTIME=infinity
LimitMEMLOCK=infinity
User=studio

JACK must be run as a normal user. The file above describes a system service that starts JACK as the user studio. However, only administrators are allowed to control system services. If we want to control a service as a normal user, we need a user service without the User=studio entry.

Managing Services

Once the service files are in place, several simple commands are available for controlling them. They differ depending on whether a user service or a system service is controlled. The following examples refer to the JACK user service. Controlling system services requires root privileges and does not need the --user flag.

Starting a Service

systemctl --user start jack.service

Stopping a Service

systemctl --user stop jack.service

Activating a Service

Activating (enabling) a service creates a symlink in ~/.config/systemd/user/multi-user.target.wants/jack.service, pointing to the original file /usr/lib/systemd/user/jack.service. Afterwards, the service is launched after the first login of the user and stopped after the last user session exits.

systemctl --user enable jack.service

Deactivating a Service

systemctl --user disable jack.service

Getting a Service's Status

The following command prints a service's status:

systemctl --user status jack.service

When the JACK service has been started successfully, the output looks as follows:

 ● jack.service - Jack audio server
  Loaded: loaded (/usr/lib/systemd/user/jack.service; enabled; vendor preset: enabled)
  Active: active (running) since Tue 2021-04-13 23:00:14 BST; 3s ago
Main PID: 214518 (jackd)
  CGroup: /user.slice/user-1000.slice/user@1000.service/jack.service
          └─214518 /usr/bin/jackd -P 90 -a a -d dummy -p 256

Starting User Services on Boot

Sometimes it is practical to have a user's services running without an active login session, for example when a server is only accessed via SSH. To achieve this, lingering has to be enabled for the specific user. This user's services will then start at boot and stop at shutdown.

# loginctl enable-linger studio

Wavefolding


Wavefolding is a special case of waveshaping that works with periodic transfer functions. Depending on the pre-gain, the source signal is folded back once a maximum of the transfer function is reached. Compared to the previously introduced soft clipping and other waveshaping methods, this adds many strong harmonics.

Periodic Shaping Function

A simple transfer function is a sine with an appropriate scaling factor. The pre-gain $g$ is the parameter for controlling the intensity of the folding effect:

$$ y[n] = \sin\left( g \frac{\pi}{2} x[n]\right) $$

For an input signal $x$ limited to values between $-1$ and $1$, and for gains $g\leq1$, this results in a sinusoidal waveshaping function with saturation:


When the input signal exceeds the boundaries $-1$ and $1$, the signal does not clip but is folded back. This can be achieved by amplifying the input with an additional gain:

For a gain of $g=3$, the time-domain output signal looks as follows:
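A minimal sketch in Python (assuming NumPy; frequency and duration chosen arbitrarily) that produces such a folded signal:

import numpy as np

fs = 48000                       # sampling rate in Hz
f0 = 100                         # frequency of the input sine in Hz
g  = 3                           # pre-gain controlling the folding intensity

n = np.arange(int(0.02 * fs))    # two periods of the 100 Hz input
x = np.sin(2 * np.pi * f0 * n / fs)

# sinusoidal transfer function: values driven beyond +-1 are folded back
y = np.sin(g * np.pi / 2 * x)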

Spectrum for a Sinusoidal Input

The spectrum of wavefolding can be calculated by expressing the folding term as a Fourier series. The Jacobi–Anger expansion can be used for this purpose, with the pre-gain $g$:

$$ \sin(g \sin(x)) = 2 \sum\limits_{m=1}^{\infty} J_{2m-1}(g) \sin((2m-1)x) $$

At this point it is already apparent that the resulting signal contains harmonics at odd integer multiples of the fundamental frequency, $f_m = 100 \mathrm{Hz}\ (2 m -1)$ for the $100\ \mathrm{Hz}$ example below. Their amplitudes are determined by Bessel functions of the first kind, $J_{2m-1}(g)$:
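As a rough numerical sketch, these partial amplitudes can be evaluated with SciPy's Bessel function of the first kind (scipy.special.jv); the gain value here is chosen arbitrarily:

import numpy as np
from scipy.special import jv          # Bessel function of the first kind

g  = 3                                # pre-gain
f0 = 100                              # fundamental frequency in Hz
m  = np.arange(1, 11)                 # indices of the first ten partials

partial_freqs = (2 * m - 1) * f0      # 100, 300, 500, ... Hz
partial_amps  = 2 * jv(2 * m - 1, g)  # coefficients from the Jacobi-Anger expansion

for f, a in zip(partial_freqs, partial_amps):
    print(f"{f:5d} Hz : {a: .4f}")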

For the DFT this leads to:

$$ \begin{eqnarray} X[k] &=& \sum\limits_{n=0}^{N-1} \sin\left(g \sin(x)\right) e^{-j 2 \pi k \frac{n}{N}} \\ X[k] &=& 2 \sum\limits_{n=0}^{N-1} \sum\limits_{m=1}^{\infty} J_{2m-1}(g) \sin((2m-1)x)\ e^{-j 2 \pi k \frac{n}{N}} \\ X[k] &=& 2 \sum\limits_{n=0}^{N-1} \sum\limits_{m=1}^{\infty} J_{2m-1}(g) \frac{1}{2j} \left( e^{j (2m-1)x} - e^{-j(2m-1)x} \right) e^{-j 2 \pi k \frac{n}{N}} \\ X[k] &=& \frac{1}{j} \sum\limits_{n=0}^{N-1} \sum\limits_{m=1}^{\infty} J_{2m-1}(g) \left( e^{-j 2 \pi k \frac{n}{N} + j (2m-1)x } - e^{-j 2 \pi k \frac{n}{N} -j(2m-1)x} \right) \end{eqnarray} $$

With $x = 2 \pi \frac{f_0}{f_s} n$, this becomes:

$$ X[k] = \frac{1}{j} \sum\limits_{n=0}^{N-1} \sum\limits_{m=1}^{\infty} J_{2m-1}(g) \left( e^{-j 2 \pi k \frac{n}{N} + j (2m-1) 2 \pi \frac{f_0}{f_s} n } - e^{-j 2 \pi k \frac{n}{N} -j(2m-1) 2 \pi \frac{f_0}{f_s} n} \right) $$

Hints on this derivation by Peyam Tabrizian can be found here: https://youtu.be/C641y-z3aI0

DFT Plots

The plots below show the spectra of the folding operation for a sine input of $100 \mathrm{Hz}$ at different gains. With increasing gain, partials are added at the odd integer multiples of the fundamental frequency $f_m = 100 \mathrm{Hz}\ (2 m -1)$:

Partial frequencies in Hz: 100, 300, 500, 700, 900, 1100, 1300, 1500, 1700, 1900, ...
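A minimal sketch of how such a spectrum can be computed and plotted in Python (NumPy and Matplotlib assumed; FFT length and gain chosen arbitrarily):

import numpy as np
import matplotlib.pyplot as plt

fs = 48000                  # sampling rate in Hz
f0 = 100                    # input frequency in Hz
g  = 3                      # pre-gain
N  = fs                     # one second of signal -> 1 Hz bin spacing

n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs)     # sinusoidal input
y = np.sin(g * np.pi / 2 * x)           # folded signal

Y = np.abs(np.fft.rfft(y)) / (N / 2)    # single-sided magnitude spectrum
f = np.fft.rfftfreq(N, 1 / fs)

plt.plot(f, Y)
plt.xlim(0, 2000)
plt.xlabel('f / Hz')
plt.ylabel('|Y[k]|')
plt.show()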

Using APIs with Python

Python & APIs

With the modules requests and json it is easy to get data from APIs with Python. Using the methods for sequencing introduced above, the following example requests a response from https://www.boredapi.com/:

#!/usr/bin/env python3

import requests
import json

response = requests.get("https://www.boredapi.com/api/activity")
data     = response.json()

print(json.dumps(data, sort_keys=True, indent=4))

# print(data["activity"])

Combining Nodes in SuperCollider

Creating and Connecting Nodes

Audio buses can be used to connect synth nodes. In this example we will create two nodes - one for generating a sound and one for processing it. The first thing we need is an audio bus:

~aBus = Bus.audio(s,1);

The ~osc node generates a sawtooth signal and the output is routed to the audio bus:

~osc = {arg out=1; Out.ar(out,Saw.ar())}.play;

~osc.set(\out,~aBus.index);

The second node is a simple filter. Its input is set to the index of the audio bus:

~lpf = {arg in=0; Out.ar(0, LPF.ar(In.ar(in),100))}.play;

~lpf.set(\in,~aBus.index);

Warning

Although everything is connected, there is no sound at this point. SuperCollider can only process such chains if the nodes are arranged in the right order. The filter node can be moved after the oscillator node:


Moving Nodes

/images/basics/sc-order-1.png

Node Tree before moving the processor node.


The moveAfter() function is a quick way of moving a node directly after another node, which is specified as the argument. The target node can be referred to either by its node index or by the related variable in sclang:

~lpf.moveAfter(~osc)

/images/basics/sc-order-2.png

Node Tree after moving the processor node.

Pure Data: Send-Receive & Throw-Catch

Send & Receive

Control Rate

Send and receive objects allow wireless connections for both control and audio signals. For control rate signals, the objects are created with send and receive, or s and r for short. They take one argument: a string labeling the connection.

Local Sends

Prepending $0- to a send label turns it into a local connection. These are only valid inside a patch and its subpatches, but not across different abstractions. The example send-receive-help.pd shows the difference between local and global sends when they are used in a subpatch and in an abstraction. It relies on the additional abstraction send-receive.pd, which needs to be in the same directory:

/images/basics/pd-send-receive.png

Send and receive of control signals with subpatch and abstraction.


The insides of the subpatch and the abstraction are identical:

/images/basics/pd-send-receive-sub.png

Inside of send-receive and the subpatch.


Audio Rate

Audio sends and receives follow the same rules as their control rate counterparts. They are created with an additional ~, as usual for audio objects. The example send-receive-audio.pd shows the use of these buses:

/images/basics/pd-send-receive.png

Send and receive of audio signals with subpatch and abstraction.


Throw & Catch

Throw and catch are bus extensions of the send-receive method introduced above, for audio signals only. Unlike with s~ and r~, it is possible to send multiple signals to one catch~. This allows flexible audio routing and grouping without a lot of connections. The example throw-catch.pd throws four sine waves to a common bus for a minimal additive synthesis:

/images/basics/pd-throw-catch.png

Using throw and catch to merge four signals.

Digital Waveguides: Discrete Wave Equation

Wave Equation for Ideal Strings

The ideal string oscillates without losses. The differential wave equation for this process is defined as follows. The velocity \(c\) determines the propagation speed of the wave and thus the frequency of the oscillation.

\begin{equation*} \frac{\partial^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2} \end{equation*}

A solution for the differential equation without losses was given by d'Alembert (1746). The oscillation is composed of two traveling waves - one right-traveling and one left-traveling component.

\begin{equation*} y(x,t) = y^+ (x-ct) + y^- (x+ct) \end{equation*}
  • \(y^+\) = right traveling wave
  • \(y^-\) = left traveling wave

Tuning the String

The velocity \(c\) depends on tension \(K\) and mass-density \(\epsilon\) of the string:

\begin{equation*} c = \sqrt{\frac{K}{\epsilon}} = \sqrt{\frac{K}{\rho S}} \end{equation*}

With tension \(K\), cross-sectional area \(S\) and density \(\rho\) in \({\frac{g}{cm^3}}\).

Frequency \(f\) of the vibrating string depends on the velocity and the string length:

\begin{equation*} f = \frac{c}{2 L} \end{equation*}

Make it Discrete

For an implementation in digital systems, both time and space have to be discretized. This is the discrete version of the solution introduced above:

\begin{equation*} y(m,n) = y^+ (m,n) + y^- (m,n) \end{equation*}

For time, the discretization is bound to the sampling frequency \(f_s\). The spatial sampling interval \(X\) depends on the sampling rate \(f_s = \frac{1}{T}\) and the velocity \(c\), as listed below and computed in the subsequent sketch.

  • \(t = \ nT\)
  • \(x = \ mX\)
  • \(X = cT\)
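A small numerical sketch (plain Python, with made-up string parameters in SI units) that computes the velocity, the fundamental frequency and the spatial sampling interval:

# made-up example values (SI units) for an ideal string
K   = 120.0      # tension in N
eps = 6.0e-3     # mass density (rho * S) in kg/m
L   = 0.65       # string length in m
fs  = 48000      # sampling rate in Hz

c = (K / eps) ** 0.5    # propagation velocity in m/s
f = c / (2 * L)         # fundamental frequency in Hz
T = 1.0 / fs            # temporal sampling interval in s
X = c * T               # spatial sampling interval in m

print(f"c = {c:.1f} m/s, f = {f:.1f} Hz, X = {X * 1000:.2f} mm")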

    Faust: MIDI

    Using MIDI CC

    Using MIDI in Faust requires only minor additions to the code and compiler arguments. For first steps it can be helpful to control single synth parameters with MIDI controllers. This can be configured via the UI elements. The following example uses MIDI controller number 48 to control the frequency of a sine wave by adding [midi:ctrl 48] to the hslider parameters.


    // midi-example.dsp
    //
    // Control a sine wave frequency with a MIDI controller.
    //
    // Henrik von Coler
    // 2020-05-17
    
    import("stdfaust.lib");
    
    freq = hslider("frequency[midi:ctrl 48]",100,20,1000,0.1) : si.smoo;
    
    process = os.osc(freq) <: _,_ ;
    

    CC 48 has been chosen since it is the first slider on the AKAI APC mini. If the controller numbers for other devices are not known, they can be found using the PD patch reverse_midi.pd.

    Compiling with MIDI

    In order to enable the MIDI functions, the compiler needs to be called with an additional flag -midi:

    $ faust2xxxx -midi midi_example.dsp
    

    This flag can also be combined with the -osc flag to make synths listen to both MIDI and OSC.

    Note Handling & Polyphony

    Typical monophonic and polyphonic synth control can be added to Faust programs by defining and mapping three parameters:

    • freq
    • gain
    • gate

    When used as in the following example, they are linked to the parameters of MIDI note on and note off events, which carry frequency and velocity.

    // midi_trigger.dsp
    //
    // Henrik von Coler
    // 2020-05-17
    
    import("stdfaust.lib");
    freq    = nentry("freq",200,40,2000,0.01) : si.polySmooth(gate,0.999,2);
    gain   = nentry("gain",1,0,1,0.01) : si.polySmooth(gate,0.999,2);
    gate   = button("gate") : si.smoo;
    
    process = vgroup("synth",os.sawtooth(freq)*gain*gate <: _,_);
    

    Compiling Polyphonic Code

    $ faust2xxxx -midi -nvoices 12 midi_trigger.dsp
    

    MIDI on Linux

    Faust programs use JACK MIDI, whereas MIDI controllers usually connect via ALSA MIDI. In order to control the synth with an external controller, a bridge is needed:

    $ a2jmidi_bridge
    

    The MIDI controller can now connect to the a2j_bridge input, which is then connected to the synth input.

    Faust: Splitting and Merging Signals

    Splitting a Signal

    To Stereo

    The <: operator can be used to split a signal into an arbitrary number of branches. This is frequently used to send a signal to both the left and the right channel of a computer's output device. In the following example, an impulse train with a frequency of $5\ \mathrm{Hz}$ is generated and split into a stereo signal.



    import("stdfaust.lib");
    
    // a source signal
    signal = os.imptrain(5);
    
    // split signal to stereo in process function:
    process = signal <: _,_;
    

    To Many

    The splitting operator can be used to create more than just two branches. The following example splits the source signal into 8 signals:



    To achieve this, the splitting directive can be extended by the desired number of outputs:

    process = signal <: _,_,_,_,_,_,_,_;
    

    Merging Signals

    Merging to Single

    The merging operator :> in Faust is the inversion of the splitting operator. It can combine an arbitrary number of signals to a single output. In the following example, four individual sine waves are merged:



    Input signals are separated by commas and then joined with the merging operator.

    import("stdfaust.lib");
    
    // create four sine waves
    // with arbitrary frequencies
    s1 = 0.2*os.osc(120);
    s2 = 0.2*os.osc(340);
    s3 = 0.2*os.osc(1560);
    s4 = 0.2*os.osc(780);
    
    // merge them to a single signal
    process = s1,s2,s3,s4 :> _;
    

    Merging to Multiple

    Merging can be used to create multiple individual signals from a number of input signals. The following example generates a stereo signal with individual channels from the four sine waves:



    To achieve this, two output signals need to be assigned after merging:

    // merge them to two signals
    process = s1,s2,s3,s4 :> _,_;
    

    Exercise


    Extend the Merging to Single example to a stereo output with individual left and right channels.

    Subtractive Example

    The following example uses a continuous square wave generator with different filters for exploring their effect on a harmonic signal.

    The interactive version provides the following controls: the pitch of the oscillator, the filter type (lowpass, highpass, bandpass, notch/band reject), cutoff (VCF), Q (VCF) and gain (VCA). Time-domain and frequency-domain plots show the resulting signal.
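    As a rough offline sketch of the same signal flow, assuming NumPy and SciPy and arbitrarily chosen parameter values, a square wave can be sent through a lowpass filter as follows (the resonance control Q is not modeled by this simple Butterworth design):

    import numpy as np
    from scipy.signal import square, butter, sosfilt

    fs     = 48000      # sampling rate in Hz
    pitch  = 110        # oscillator pitch in Hz
    cutoff = 800        # filter cutoff in Hz (VCF)
    gain   = 0.5        # output gain (VCA)

    t = np.arange(fs) / fs                  # one second of time
    x = square(2 * np.pi * pitch * t)       # harmonically rich square wave

    # second-order lowpass; 'highpass' works the same way, while
    # 'bandpass' and 'bandstop' take a pair of edge frequencies
    sos = butter(2, cutoff, btype='lowpass', fs=fs, output='sos')
    y   = gain * sosfilt(sos, x)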


