Faust

Faust is a functional audio programming language developed at GRAME, Lyon. It is a community-driven, free and open-source project. Faust is particularly suited for quickly designing musical synthesis and processing software and compiling it for a large variety of targets. The fastest way to get started with Faust is the online IDE, which allows programming and testing code in the browser without any installation: https://faustide.grame.fr/

The online materials for the class introduce the basics of the Faust language and give examples for different synthesis techniques: Sound Synthesis - Building Instruments with Faust

Faust and Web Audio

Among its many targets, Faust can also be used to create Web Audio ScriptProcessor nodes (Letz, 2015).

References

  • Stephane Letz, Sarah Denoux, Yann Orlarey, and Dominique Fober. Faust Audio DSP Language in the Web. In Proceedings of the Linux Audio Conference. 2015.
  • Web Audio

    The Web Audio API is a JavaScript-based API for sound synthesis and processing in web applications. It is compatible with most browsers and can thus be used on almost any device. Read the W3C Candidate Recommendation for an in-depth introduction and documentation:

    https://www.w3.org/TR/webaudio/

    The Sine Example

    The following Web Audio example features a simple sine wave oscillator with frequency control and a mute button. Since the HTML is kept minimal, the code is compact but the GUI is very basic.


    Code

    sine_example/sine_example.html (Source)

    <!doctype html>
    <html>

      <head>
        <title>Sine Example</title>
      </head>

      <body>
        <p>Sine Example.</p>
        <p>
          <button onclick="play()">Play</button>
          <button onclick="stop()">Stop</button>
          <span>
            <input id="pan" type="range" min="10" max="1000" step="1" value="440" oninput="frequency(this.value);">
            Frequency
          </span>
        </p>

        <script>
          var audioContext = new AudioContext();
          var oscillator = audioContext.createOscillator();
          var gainNode = audioContext.createGain();

          // start muted
          gainNode.gain.value = 0;

          oscillator.connect(gainNode);
          gainNode.connect(audioContext.destination);

          oscillator.start(0);

          function play()
          {
            // browsers may suspend the context until a user gesture
            audioContext.resume();
            gainNode.gain.value = 1;
          }

          function stop()
          {
            gainNode.gain.value = 0;
          }

          function frequency(y)
          {
            oscillator.frequency.value = y;
          }
        </script>
      </body>

    </html>
    

  • C++

    C++ is the standard for programming professional, efficient audio software.

    JUCE

    JUCE is the most widely used framework for developing commercial audio software, such as VSTs and standalone applications: https://juce.com/

    JACK

    JACK offers a simple API for developing audio software on Linux, Mac and Windows systems: https://jackaudio.org/

    Icecast

    Icecast is a free software solution for creating accessible web-radio streams: https://icecast.org/

    It plays nicely with JACK on Linux audio servers, allowing the broadcasting of complex sound synthesis and sonification projects.

    Using Python for Control

    Python offers many useful tools for preparing data and controlling synthesis processes. Although it can also be used for actual digital signal processing, its versatility makes it a great tool for auxiliary tasks. Most notably, it can be used for flexible processing and routing of OSC messages, especially in the field of data sonification.

    Python & OSC

    Several Python packages offer OSC support. The python-osc package can be installed using pip:

    $ pip install python-osc

    An example project for controlling a Faust-built synthesizer with Python is featured in this software repository: https://github.com/anwaldt/py2faust_synth
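    The flexible routing mentioned above boils down to inspecting the path of each incoming message. As a minimal sketch of what such packages do internally, the following stdlib-only code parses a raw OSC packet and dispatches by path (float arguments only; the paths are made up for illustration):

```python
import struct

def parse_osc(data):
    """Split a raw OSC packet into (path, argument list); float args only."""
    # address pattern: a null-terminated string, padded to a 4-byte boundary
    end = data.index(b"\x00")
    path = data[:end].decode()
    offset = (end // 4 + 1) * 4
    # type tag string, e.g. ",f" or ",ff"
    end = data.index(b"\x00", offset)
    tags = data[offset + 1:end].decode()  # skip the leading ','
    offset = (end // 4 + 1) * 4
    args = []
    for tag in tags:
        if tag == "f":  # 32-bit big-endian float
            args.append(struct.unpack(">f", data[offset:offset + 4])[0])
            offset += 4
    return path, args

# route an incoming packet by its path
packet = b"/synth/freq\x00,f\x00\x00" + struct.pack(">f", 440.0)
path, args = parse_osc(packet)
```

    In practice, python-osc provides ready-made dispatcher and server classes for this, so hand-written parsing is rarely necessary.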

    Python & JACK

    The JACK Audio Connection Kit Client for Python by Matthias Geier connects Python processes to the JACK server: https://github.com/spatialaudio/jackclient-python/

    This integration of Python in a JACK ecosystem can be helpful not only for audio processing, but also for synchronization of processes.

    SuperCollider

    SuperCollider (SC) is a server-client-based tool for sound synthesis and composition. SC was started by James McCartney in 1996 and has been free software since 2002. It can be used on Mac, Linux and Windows systems and comes with a large collection of community-developed extensions. The client-server principle makes it a powerful tool for distributed and embedded systems, allowing full remote control of synthesis processes and live coding.


    Getting Started

    Binaries, source code and build or installation instructions can be found at the SC GitHub site. If possible, it is recommended to build the latest version from the repository:

    https://supercollider.github.io/download

    Code snippets in this example are taken from the accompanying repository: SC Example. You can simply copy and paste them into the ScIDE.


    sclang

    sclang is the SuperCollider language. It represents the client side when working with SC. It can, for example, be started in a terminal by running:

    $ sclang

    The terminal will then change into sclang mode and SC commands can be entered:

    sc3>

    Running SC Files

    SuperCollider code is written in text files with the extensions .sc or .scd. On Linux and Mac systems, a complete SC file can be executed in the terminal by calling the language with the file as argument:

    $ sclang sine-example.sc

    The program will then run in the background or launch the included GUI elements.

    Variable Names

    Global variables are either single letters (s is reserved for the server) or start with a tilde (e.g. ~var_name). Local variables, used in functions, need to be declared explicitly:

    var var_name;

    Control Rate vs Audio Rate

    SC works with two internal signal types, distinguished by the method used to create a unit generator: .ar creates signals at audio rate, whereas .kr creates signals at the lower control rate.


    ScIDE

    Working with SC in the terminal is rather inconvenient. The SuperCollider IDE (ScIDE) is the environment for live coding in sclang, allowing full control of the SuperCollider language:

    /images/basics/scide.thumbnail.png

    When editing .sc files in the ScIDE, they can be executed as a whole. Moreover, single blocks or single lines can be evaluated, which is the standard way of using SC, especially when exploring its possibilities.


    Server

    Synthesis and processing happen inside an SC server. A server can be booted from the language:

    // boot the server
    s.boot;
    

    Synths

    Inside the SC server, sound is usually generated and processed inside Synths. A synth can be defined inside curly brackets:

    // Play a Synth
    (
    {
        // calculate a sine wave with frequency and amplitude
        var x = 0.1 * SinOsc.ar(1000);
    
        // send the signal to the output bus '0'
        Out.ar(0, x);
    
    }.play;
    
    )
    

    All playing nodes can be removed from the server if they are not associated with a client-side variable:

    // free all nodes from the server
    s.freeAll
    

    SynthDef

    SynthDefs are templates for Synths; they are sent to a server:

    // define a SynthDef and send it to the server
    (
    
    SynthDef(\sine_example,
    {
       // define arguments of the SynthDef
       |f = 100, a = 1|
    
       // calculate a sine wave with frequency and amplitude
       var x = a * SinOsc.ar(f);
    
       // send the signal to the output bus '0'
       Out.ar(0, x);
    
    }).send(s);
    
    )
    

    Once a SynthDef has been sent to the server, instances can be created:

    // create a synth from the SynthDef
    ~my_synth = Synth(\sine_example, [\f, 1000, \a, 1]);
    
    // create another synth from the SynthDef
    ~another_synth = Synth(\sine_example, [\f, 1100, \a, 1]);
    

    Parameters of running synths can be changed using the associated variable on the client side:

    // set a parameter
    ~my_synth.set(\f,900);
    

    Running synths with a client-side variable can be removed from the server:

    // free the nodes
    ~my_synth.free();
    ~another_synth.free();
    

    Puredata

    About

    Puredata (PD) is the free and open-source counterpart to Max/MSP, also developed and maintained by Miller Puckette. Due to its obvious signal flow, PD is one of the best options for people new to computer music. It is very helpful for exploring the basics of sound synthesis and programming but can also be used for advanced applications: https://puredata.info/community/member-downloads/patches In addition, PD offers simple and flexible means for creating control and GUI software.

    The Sine Example

    This first example creates a sine wave oscillator. Its frequency can be controlled with a slider:

    /images/basics/pd-sine.png

    Working with OSC

    Dependencies

    OSC can be used in Puredata without further packages, by means of the objects netsend, oscformat and oscparse. The following examples are based on additional externals, since this results in more compact patches. To use them, install the external mrpeach with the Deken tool inside Puredata: https://puredata.info/docs/Deken

    Sending OSC

    This example sends data via OSC between two Puredata patches on the same machine. It uses the hostname localhost instead of an IP address. The path oscillator/frequency of the OSC message has been defined arbitrarily - it has to match between sender and receiver. Before sending OSC messages, the connect message needs to be clicked.

    /images/basics/pd-osc-send.png

    Receiving OSC

    Before receiving OSC messages, the udpreceive object needs to know which port to listen on. Messages are then unpacked and routed according to their path.

    /images/basics/pd-osc-receive.png

    References

  • Miller S. Puckette. Pure Data. In Proceedings of the International Computer Music Conference (ICMC). Thessaloniki, Greece, 1997.
  • Miller S. Puckette. The Patcher. In Proceedings of the International Computer Music Conference (ICMC). Computer Music Association, 1988.
  • Raspberry Pi

    The class Sound Synthesis at TU Berlin makes use of the Raspberry Pi as a development and runtime system for sound synthesis in C++ (von Coler, 2017). Firstly, this is the cheapest way of setting up a computer pool with unified hardware and software. In addition, the Pis can serve as standalone synthesizers and sonification tools. All examples can be found in a dedicated software repository:

    https://gitlab.tubit.tu-berlin.de/henrikvoncoler/SoundSynthesis_PI

    The full development system is based on free, open-source software. The examples rely on the JACK API for audio input and output, RtMidi for MIDI, liblo for OSC communication and libyaml-cpp for data and configuration files.

    The advantage and disadvantage of this setup is that every element needs to be implemented from scratch. In this way, synthesis algorithms can be understood in detail and customized without limits. For quick solutions it makes sense to switch to a framework with more basic elements.

    The source code can also be used on any Linux system, provided the necessary libraries are installed.

    The Gain Example

    The gain example is the entry point for coding on the PI system:

    https://gitlab.tubit.tu-berlin.de/henrikvoncoler/SoundSynthesis_PI/tree/master/examples/gain_example

    References

  • Henrik von Coler and David Runge. Teaching Sound Synthesis in C/C++ on the Raspberry Pi. In Proceedings of the Linux Audio Conference. 2017.
  • Origins

    Beginnings of Computer Music

    Digital sound synthesis dates back to the first experiments of Max Mathews at Bell Labs in the 1950s. Mathews created the MUSIC programming language for generating musical sounds through additive synthesis on an IBM 704. The Silver Scale, realized by Newman Guttman in 1957, is (probably) the first ever digitally synthesized piece of music (Roads, 1980):

    MUSIC and its versions (I, II, III, ...) are direct or indirect ancestors of most recent languages for sound processing. Although the first experiments sound amusing from today's perspective, Mathews already grasped the potential of the computer as a musical instrument:

    “There are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast to the performance of ordinary instruments.” (Mathews, 1963)

    Mathews created the first digital musical pieces himself, but in order to fully explore the musical potential, he was joined by composers, artists and other researchers, such as Newman Guttman, James Tenney and Jean Claude Risset. Later, the Bell Labs were visited by renowned composers of various genres, including John Cage, Edgard Varèse and Laurie Spiegel (Park, 2009).


    Chowning & CCRMA

    The synthesis experiments at Bell Labs are the origin of most music programming languages and methods for digital sound synthesis. The foundation for many further developments was laid when John Chowning brought the software MUSIC IV to Stanford from a visit to Bell Labs in the 1960s. After migrating it to a PDP-6 computer, Chowning worked on his groundbreaking digital compositions, using the FM method and spatial techniques.

    Puckette & IRCAM

    Most of the active music programming environments, such as Puredata, Max/MSP, SuperCollider or Csound, are descendants of the MUSIC languages. Graphical programming languages like Max/MSP and Puredata were actually born as patching and mapping environments. Their common ancestor, the Patcher, developed by Miller Puckette at IRCAM in the 1980s, was a graphical environment for connecting MAX real-time processes and for controlling MIDI instruments.


    References

  • John Chowning. Turenas: the Realization of a Dream. In Proceedings of the 17es Journées d'Informatique Musicale. Saint-Etienne, France, 2011.
  • Stefan Bilbao. Numerical Sound Synthesis. Wiley Online Library, 2009. ISBN 9780470749012. doi:10.1002/9780470749012.
  • Ananya Misra and Perry R. Cook. Toward Synthesized Environments: A Survey of Analysis and Synthesis Methods for Sound Designers and Composers. In Proceedings of the International Computer Music Conference (ICMC 2009). 2009.
  • Tae Hong Park. An Interview with Max Mathews. Computer Music Journal, 33(3):9–22, 2009.
  • Julius O. Smith. Viewpoints on the History of Digital Synthesis. In Proceedings of the International Computer Music Conference, 1–10. 1991.
  • Curtis Roads and Max Mathews. Interview with Max Mathews. Computer Music Journal, 4(4):15–22, 1980.
  • Max V. Mathews. The Digital Computer as a Musical Instrument. Science, 142(3592):553–557, 1963.
  • OSC: Open Sound Control

    Open Sound Control (OSC) is the de facto standard for exchanging control data between audio applications in distributed systems and on local setups with multiple components: http://opensoundcontrol.org/introduction-osc

    All programming languages and tools for computer music offer means for using OSC and specific solutions exist for data sonification: http://opensoundcontrol.org/mapping-nonmusical-data-sound

    OSC messages are usually transmitted over UDP/IP in a client-server fashion. A server needs to be started to listen for messages sent from a client. A typical OSC message consists of a path and arguments:

    /synthesizer/volume/ 0.5

    The path is a string with slash-separated substrings. Arguments can be integers, floats and strings. Unlike MIDI, OSC only specifies the transport format and does not define a standard for musical parameters. Hence, the paths used by a certain software are completely arbitrary and can be defined by the developers.
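    Since the format is this simple, a message like the one above can be assembled by hand. The following is a minimal sketch using only the Python standard library (float arguments only; in practice a package such as python-osc would handle the encoding):

```python
import struct

def osc_message(path, *args):
    """Assemble a raw OSC message with float arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    type_tags = "," + "f" * len(args)
    data = pad(path.encode()) + pad(type_tags.encode())
    for value in args:
        data += struct.pack(">f", value)  # 32-bit big-endian float
    return data

# '/synthesizer/volume/ 0.5' as raw bytes, ready for a UDP socket's sendto()
msg = osc_message("/synthesizer/volume/", 0.5)
```

    The resulting byte string can be handed directly to a UDP socket, which is all it takes to control an OSC server on the receiving end.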



    Contents © Henrik von Coler 2020 - Contact