Concept

This module focuses on fundamental principles of sound synthesis algorithms in C++, covering paradigms like subtractive synthesis, additive synthesis, physical modeling, distortion methods, and processed recording. Theory and background of these approaches are covered in the Sound Synthesis Introduction.

The concept is based on Linux audio systems as development and runtime systems (von Coler & Runge, 2017). Using Raspberry Pis, classes can be supplied with an ultra-low-cost computer pool, resolving compatibility issues between individual systems. In addition, the single-board computers can be integrated into embedded projects for actual hardware instruments. Participants can also install Linux systems on their own hardware for increased performance.

Only a few software libraries are part of the system used in this class, handling audio input and output, communication (OSC, MIDI), configuration, and audio file processing. This minimal framework keeps the focus on the actual implementation of the algorithms on a sample-by-sample level, without relying on extensive higher-level abstractions.


Although the concept of this class has advantages, there are alternatives with their own benefits. A variety of frameworks can be considered for implementing sound synthesis paradigms and building digital musical instruments with C/C++. The JUCE framework allows the compilation of 'desktop and mobile applications, including VST, VST3, AU, AUv3, RTAS and AAX audio plug-ins'. It comes with many helpful features and can be used to create DAW-ready software components. Environments like Puredata or SuperCollider come with APIs for programming user extensions. The resulting software components can be easily integrated into existing projects.


References


  • Henrik von Coler and David Runge. Teaching Sound Synthesis in C/C++ on the Raspberry Pi. In Proceedings of the Linux Audio Conference. 2017.

The JACK API

All examples in this class are implemented as JACK clients. Audio input and output is thus based on the JACK audio API. The JACK framework takes care of much of the management overhead and offers a quick entry point for programmers. Professional Linux audio systems are usually based on a JACK server, allowing the flexible connection of different software components. Read more in the JACK Section of the Computer Music Basics.
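
The structure of such a client is outlined below: a minimal mono pass-through sketch along the lines of the JACK simple client. The client name, port names and the keep-alive loop are illustrative and not taken verbatim from the class examples:

#include <jack/jack.h>
#include <unistd.h>

static jack_port_t *input_port;
static jack_port_t *output_port;

// Called by the JACK server for every block of nframes samples.
static int process(jack_nframes_t nframes, void *)
{
    auto *in  = static_cast<jack_default_audio_sample_t *>(jack_port_get_buffer(input_port, nframes));
    auto *out = static_cast<jack_default_audio_sample_t *>(jack_port_get_buffer(output_port, nframes));

    for (jack_nframes_t i = 0; i < nframes; i++)
        out[i] = in[i];   // plain pass-through, sample by sample

    return 0;
}

int main()
{
    // Register with the JACK server under an arbitrary client name.
    jack_client_t *client = jack_client_open("through_sketch", JackNullOption, nullptr);
    if (client == nullptr)
        return 1;

    jack_set_process_callback(client, process, nullptr);

    input_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput,  0);
    output_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);

    jack_activate(client);

    while (true)
        sleep(1);         // processing happens in the callback thread
}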


The ThroughExample

The ThroughExample is a slightly adapted version of the Simple Client. It wraps the same functionality into a C++ class, adding multi-channel capabilities.
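
Its interface can be outlined roughly as follows. This is only a sketch for orientation; member names and details in the actual header may differ:

#include <jack/jack.h>

class ThroughExample
{
public:
    ThroughExample();

private:
    // Static callback registered with JACK; it forwards the call
    // to the instance passed via the 'arg' pointer.
    static int callback_process(jack_nframes_t nframes, void *arg);
    int process(jack_nframes_t nframes);

    jack_client_t *client;        // connection to the JACK server
    jack_port_t  **input_ports;   // one port per input channel
    jack_port_t  **output_ports;  // one port per output channel
    int            nChannels;     // number of channels to pass through
};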


Main

The file main.cpp creates an instance of the ThroughExample class. No command line arguments are evaluated, and the object is created without constructor arguments:

ThroughExample *t = new ThroughExample();
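
Since all audio processing happens in the JACK callback thread, main() only has to keep the program alive after creating the object. A minimal sketch of such a main.cpp; the endless sleep loop and the header name are assumptions, the original file may handle this differently:

#include <unistd.h>
#include "throughexample.h"   // assumed header name

int main()
{
    ThroughExample *t = new ThroughExample();

    // Keep the process alive; audio is processed in the JACK callback.
    while (true)
        sleep(1);

    return 0;
}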

Member Variables

jack_client_t   *client;

The pointer to a JACK client is needed for connecting this piece of software to the JACK server.
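
Inside the constructor, this pointer is typically obtained by opening a session with the server. A sketch of the relevant part; the client name and the error handling are placeholders:

#include <jack/jack.h>
#include <stdexcept>

ThroughExample::ThroughExample()
{
    // Connect to the running JACK server under an arbitrary client name.
    client = jack_client_open("through_example", JackNullOption, nullptr);
    if (client == nullptr)
        throw std::runtime_error("Could not connect to the JACK server");

    // ... register ports, set the process callback, then jack_activate(client)
}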

NIME 2020: Spatialization

Virtual Source Model

Spectral spatialization in this system is based on a virtual sound source with a position in space and a spatial extent, as shown in [Fig.1]. The source center is defined by two angles (Azimuth, Elevation) and the Distance. The Spread defines the diameter of the virtual source. This model is compliant with many theoretical frameworks from the fields of electroacoustic music and virtual acoustics.

/images/NIME_2020/source_in_space.png
Fig.1

Virtual sound source with position in space and spatial extent.
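
Such a source can be represented by a small parameter set, and its center can be converted to Cartesian coordinates for rendering. A sketch, assuming azimuth measured counterclockwise from the front and elevation upwards; the conventions in the actual system may differ:

#include <cmath>

struct VirtualSource
{
    double azimuth;    // rad, counterclockwise from the front
    double elevation;  // rad, upwards from the horizontal plane
    double distance;   // m, distance of the source center from the listener
    double spread;     // m, diameter of the virtual source
};

// Convert the source center to Cartesian coordinates (x: front, y: left, z: up).
void center_to_cartesian(const VirtualSource &s, double &x, double &y, double &z)
{
    x = s.distance * std::cos(s.elevation) * std::cos(s.azimuth);
    y = s.distance * std::cos(s.elevation) * std::sin(s.azimuth);
    z = s.distance * std::sin(s.elevation);
}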


Point Cloud Realization

The virtual source from [Fig.1] is realized as a cloud of point sources in an Ambisonics system, using the IRCAM software Panoramix. A total of 24 point sources can be controlled jointly. The following figures show the Panoramix viewer, with the left half representing the top view and the right half the rear view.


[Fig.2] shows a dense point cloud of a confined virtual sound source without elevation:

/images/NIME_2020/panoramix_confined.png
Fig.2

Confined virtual sound source.


The virtual sound source in [Fig.3] has a wider spread and is elevated:

/images/NIME_2020/panoramix_spread.png
Fig.3

Spread virtual sound source with elevation.


For small distances and large spreads, the source envelops the listener, as shown in [Fig.4]:

/images/NIME_2020/panoramix_enveloping.png
Fig.4

Enveloping virtual sound source.


Dispersion

In a nutshell, the synthesizer outputs the spectral components of a violin sound to 24 individual outputs. Different ways of assigning spectral content to the outputs are possible, shown as Partial to Source Mapping in [Fig.5]. In these experiments, each output represents a Bark-scale frequency band. For the point cloud shown above, the distribution of spectral content is thus neither homogeneous nor stationary.

/images/NIME_2020/dispersion.png
Fig.5

Dispersion - routing partials to point sources.
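
One way of assigning a partial to one of the 24 outputs is to compute its Bark band index from its frequency. A sketch using the Zwicker & Terhardt approximation of the Bark scale; the actual partial-to-source mapping in the system may be implemented differently:

#include <algorithm>
#include <cmath>

// Map a partial's frequency in Hz to one of the 24 outputs,
// using the Zwicker & Terhardt approximation of the Bark scale.
int bark_band(double freq_hz)
{
    double z = 13.0 * std::atan(0.00076 * freq_hz)
             + 3.5  * std::atan(std::pow(freq_hz / 7500.0, 2.0));

    int band = static_cast<int>(std::floor(z));
    return std::clamp(band, 0, 23);   // one band per point source
}

A partial at 440 Hz, for example, falls into band index 4, corresponding to the 400-510 Hz critical band.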



NIME 2020: Mapping

Extended DMI Model

The typical DMI model connects the musical interface with the sound generation through a mapping stage. [Fig.1] shows the extended DMI model for spatial sound synthesis. The joint control of spatial and timbral characteristics offers new possibilities yet makes the mapping and the resulting control more complex.

/images/NIME_2020/mapping_dmi.png
Fig.1

Mapping in the extended DMI model.


Mapping in Puredata

We chose Puredata as a graphical interface for mapping controller parameters to sound synthesis and spatialization. Especially in the early stages of development, this solution offers maximum flexibility. [Fig.2] shows the mapping GUI as it was used by the participants in the mapping study:

/images/NIME_2020/patching.png
Fig.2

Puredata patch for user-defined mappings.



NIME 2020: User Study

User-defined Mappings

In the first stage of the user study, participants had 30 minutes to create their own mapping, following this basic instruction:

The objective of this part is to create an enjoyable mapping, which offers the most expressive control over all synthesis and spatialization parameters.

A set of rules allowed one-to-many mappings and excluded many-to-one mappings; a validation sketch follows the list below:

  • Every rendering parameter of synthesis and spatialization must be influenced through the mapping.

  • Control parameters may remain unconnected.

  • A single control parameter may be mapped to multiple synthesizer or spatialization parameters.

  • A synthesis or spatialization parameter must not have more than one control parameter connected to its input.
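
Taken together, these rules require each rendering parameter to have exactly one control parameter connected, while a control parameter may feed any number of rendering parameters, including none. A sketch of such a validation, with hypothetical data structures:

#include <map>
#include <string>
#include <utility>
#include <vector>

// A mapping is a set of (control parameter -> rendering parameter) connections.
using Connection = std::pair<std::string, std::string>;

bool mapping_is_valid(const std::vector<Connection> &mapping,
                      const std::vector<std::string> &render_params)
{
    // Count how many control parameters are connected to each rendering parameter.
    std::map<std::string, int> inputs_per_render_param;
    for (const auto &c : mapping)
        inputs_per_render_param[c.second]++;

    // Every rendering parameter must be influenced by exactly one control parameter.
    for (const auto &p : render_params)
        if (inputs_per_render_param[p] != 1)
            return false;

    return true;
}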


Mapping Frequencies

Considering the final mappings of all participants, the mapping matrix shows that some control parameters are preferred for specific tasks:

/images/NIME_2020/matrix.png
Fig.1

Mapping matrix: how often was a control parameter mapped to a specific rendering parameter?



NIME 2020: Setup

The experiment took place in the Small Studio at Technical University Berlin. The room features three loudspeaker systems, including a dome of 21 Genelec 8020 loudspeakers with two subwoofers. This system is used for Ambisonics rendering in the experiments of this project. For the purpose of the study, furniture was removed from the studio, making it suitable for free movement in the sweet spot of the loudspeaker dome.


/images/NIME_2020/setup_1.JPG
Fig.1

Studio setup for user study.


[Fig.1] shows the studio as it was equipped for the user study. An area of about \(1 \ \mathrm{m}^2\) is marked with tape on the floor. This area is intended as the sweet area, where participants should operate the synthesis system. A table with chair, display, mouse and keyboard is placed close to the sweet area, allowing the users to change the mapping. A second table for paperwork is placed at the edge of the loudspeaker system.

