Physical Modeling: Advanced Models

More advanced physical models can be designed based on the principles explained in the previous sections.


Resonant Bodies & Coupling

The simple lowpass filter in the example can be replaced by more sophisticated models of resonant bodies. For instruments with multiple strings, coupling between the strings can be implemented.

/images/Sound_Synthesis/physical_modeling/plucked-string-instrument.png
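String coupling can be illustrated with a minimal sketch (not the model in the figure): two Karplus-Strong delay lines, each leaking a small fraction of its output into the other. All parameter values here are illustrative assumptions:

```python
import numpy as np

# Two Karplus-Strong strings with cross-coupling; a toy sketch,
# not a full instrument model. g + c must stay below 1 for stability.
def coupled_strings(fs=48000, f1=220.0, f2=221.0, g=0.99, c=0.005,
                    n=48000, seed=0):
    rng = np.random.default_rng(seed)
    L1, L2 = int(fs / f1), int(fs / f2)
    d1 = rng.standard_normal(L1)   # delay line 1, excited with noise
    d2 = rng.standard_normal(L2)   # delay line 2
    y = np.zeros(n)
    i1 = i2 = 0
    for i in range(n):
        out1, out2 = d1[i1], d2[i2]
        y[i] = out1 + out2
        # two-point average as loop lowpass, plus the coupling term c
        d1[i1] = g * 0.5 * (out1 + d1[(i1 + 1) % L1]) + c * out2
        d2[i2] = g * 0.5 * (out2 + d2[(i2 + 1) % L2]) + c * out1
        i1, i2 = (i1 + 1) % L1, (i2 + 1) % L2
    return y

signal = coupled_strings()
```

The slight detuning between f1 and f2 produces the beating that is characteristic of coupled strings, as on a piano or twelve-string guitar.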

Model of a wind instrument with several waveguides, connected with scattering junctions (de Bruin, 1995):

/images/Sound_Synthesis/physical_modeling/wind_waveguide.jpg

References

  • Vesa Välimäki. Discrete-time modeling of acoustic tubes using fractional delay filters. Helsinki University of Technology, 1995.
  • Gijs de Bruin and Maarten van Walstijn. Physical models of wind instruments: A generalized excitation coupled with a modular tube simulation platform. Journal of New Music Research, 24(2):148–163, 1995.
  • Matti Karjalainen, Vesa Välimäki, and Zoltán Jánosy. Towards High-Quality Sound Synthesis of the Guitar and String Instruments. Computer Music Association, 56–63, 1993.
  • Julius O. Smith. Physical modeling using digital waveguides. Computer Music Journal, 16(4):74–91, 1992.
  • Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 1. Journal of the Audio Engineering Society, 19(6):462–470, 1971.
  • Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 2. Journal of the Audio Engineering Society, 19(7):542–551, 1971.
Physical Modeling: Faust Examples

    The functional principle of Faust is very well suited for programming physical models for sound synthesis, since these are usually described as block diagrams. Working with physical modeling in Faust can happen on many levels of complexity, from using ready-made instruments to assembling models from basic operations.

    Ready Instruments

    For a quick start, fully functional physical modeling instruments can be used from the physmodels.lib library. These *_ui_MIDI functions just need to be called in the process function:

    import("all.lib");
    
    process = nylonGuitar_ui_MIDI : _;
    

    The same algorithms can also be used at a slightly lower level, combining them with custom control and embedding them into larger models:

    import("all.lib");
    
    process = nylonGuitarModel(3,1,button("trigger")) : _;
    

    Ready Elements

    The physmodels.lib library comes with many building blocks for physical modeling, which can be used to compose instruments. Some of these blocks are instrument-specific, for example:

    • (pm.)nylonString
    • (pm.)violinBridge
    • (pm.)fluteHead

    Bidirectional Utilities & Basic Elements

    The bidirectional utilities and basic elements in Faust's physical modeling library offer a more direct way of assembling physical models. This includes waveguides, terminations, excitations and others:

    • (pm.)chain
    • (pm.)waveguide
    • (pm.)lTermination
    • (pm.)rTermination
    • (pm.)in

    From Scratch

    A look at the physmodels.lib library shows that even the bidirectional utilities and basic elements are built from standard Faust functions:

    https://github.com/grame-cncm/faustlibraries/blob/master/physmodels.lib

    chain(A:As) = ((ro.crossnn(1),_',_ : _,A : ro.crossnn(1),_,_ : _,chain(As) : ro.crossnn(1),_,_)) ~ _ : !,_,_,_;
    chain(A) = A;
    

    Karplus-Strong in Faust

    // karplus_strong.dsp
    //
    // Slightly modified version of the
    // Karplus-Strong plucked string algorithm.
    //
    // see: 'Making Virtual Electric Guitars and Associated Effects Using Faust'
    //             (Smith, )
    //
    // - one-pole lowpass in the feedback
    //
    // Henrik von Coler
    // 2020-06-07
    
    import("all.lib");
    
    ////////////////////////////////////////////////////////////////////////////////
    // Control parameters as horizontal sliders:
    ////////////////////////////////////////////////////////////////////////////////
    
    freq = hslider("freq Hz", 50, 20, 1000, 1) : si.smoo; // Hz
    
    // initial filter for the excitation noise
    initial_filter = hslider("initial_filter Hz",1000,10,10000,1) : si.smoo;
    lop = hslider("lop Hz",1000,10,10000,1) : si.smoo;
    
    level = hslider("level", 1, 0, 10, 0.01);
    gate = button("gate");
    gain = hslider("gain",  1, 0, 1, 0.01);
    
    ////////////////////////////////////////////////////////////////////////////////
    // processing elements:
    ////////////////////////////////////////////////////////////////////////////////
    
    diffgtz(x) = (x-x') > 0;
    decay(n,x) = x - (x>0)/n;
    release(n) = + ~ decay(n);
    trigger(n) = diffgtz : release(n) : > (0.0);
    
    
    P = SR/freq;
    
    // Resonator:
    resonator = (+ : delay(4096, P) * gain) ~ si.smooth(1.0-2*(lop/ma.SR));
    
    ////////////////////////////////////////////////////////////////////////////////
    // processing function:
    ////////////////////////////////////////////////////////////////////////////////
    
    
    process = noise : si.smooth(1.0-2*(initial_filter/ma.SR)):*(level)
    : *(gate : trigger(P)): resonator <: _,_;
    

    Waveguide-Strings in Faust

    // waveguide_string.dsp
    //
    // waveguide model of a string
    //
    // - one-pole lowpass termination
    //
    // Henrik von Coler
    // 2020-06-09
    
    
    import("all.lib");
    
    // use '(pm.)l2s' to calculate number of samples
    // from length in meters:
    
    segment(maxLength,length) = waveguide(nMax,n)
    with{
    nMax = maxLength : l2s;
    n = length : l2s/2;
    };
    
    
    
    
    // one lowpass terminator
    fc = hslider("lowpass",1000,10,10000,1);
    rt = rTermination(basicBlock,*(-1) : si.smooth(1.0-2*(fc/ma.SR)));
    
    // one gain terminator with control
    gain = hslider("gain",0.99,0,1,0.01);
    lt = lTermination(*(-1)* gain,basicBlock);
    
    
    idString(length,pos,excite) = endChain(wg)
    with{
    
    nUp   = length*pos;
    
    nDown = length*(1-pos);
    
    wg = chain(lt : segment(6,nUp) : in(excite) : out : segment(6,nDown) : rt); // waveguide chain
    };
    
    length = hslider("length",1,0.1,10,0.01);
    process = idString(length,0.15, button("pluck")) <: _,_;
    

    Physical Modeling: Karplus Strong - Implementation

    plucked_string_444.png


    import numpy as np
    from   numpy import linspace, sin, zeros
    from   math import pi
    %matplotlib notebook
    import matplotlib.pyplot as plt
    from   tikzplotlib import save as tikz_save
    
    from   IPython.display import display, Markdown, clear_output
    import IPython.display as ipd
    import ipywidgets as widgets
    from   ipywidgets import *
    
    
    
    
    fs          = 48000
    L           = 500
    
    
    
    # a function for appending the array again and again
    # arbitrary 300 times ...
    def appender(x):
        y = np.array([])
    
        for i in range(300):
            y = np.append(y,x*0.33)
    
        return y
    
    
    x = np.random.standard_normal(L)
    y = appender(x)
    
    t = np.linspace(0,len(y)/fs,len(y))
    f = np.linspace(0,1,len(y))
    
    fig   = plt.figure()
    ax    = fig.add_subplot(2, 1, 1)
    line, = ax.plot(t,y)
    
    ax2    = fig.add_subplot(2, 1, 2)
    Y = abs(np.fft.fft(y))
    
    Y = Y[0:5000]
    f = f[0:5000]
    
    line2, = ax2.plot(f,Y)
    
    def update(L = widgets.IntSlider(min = 10, max= 1500, step=1, value=500)):
    
        x = np.random.standard_normal(L)
        y = appender(x)
    
        t = np.linspace(0,len(y)/fs,len(y))
        f = np.linspace(0,1,len(y))
    
        Y = abs(np.fft.fft(y))
        Y = Y[0:5000]
        f = f[0:5000]
    
        line.set_ydata(y)
        line2.set_ydata(Y)
    
        fig.canvas.draw_idle()
        ipd.display(ipd.Audio(y, rate=fs))
    
    
    
    interact(update);
    
    

    Karplus-Strong

    Karplus-Strong makes use of the random buffer from the previous example, feeding it back through a smoothing filter.

    # this implementation serves for a better
    # understanding and is not efficient
    #
    # - wait for process to be finished in
    #   interactive use
    
    
    fs          = 48000
    L           = 500
    
    # the feedback gain
    gain   = 0.99
    
    # the number of samples used for smoothing
    smooth = 10
    
    
    def karplus_strong(L,gain,smooth):
    
        x = np.random.standard_normal(L)
        y = np.array([])
    
    
        for i in range(96000):
            k   = i%L
            tmp = 0;
    
            for j in range(smooth):
                tmp += x[(k+j) %L]
    
            tmp = tmp/smooth
    
            x[k] = gain*tmp
            y = np.append(y,tmp)
    
        return y
    
    
    y = karplus_strong(L,gain,smooth)
    
    t = np.linspace(0,len(y)/fs,len(y))
    f = np.linspace(0,1,len(y))
    
    fig   = plt.figure()
    ax    = fig.add_subplot(2, 1, 1)
    line, = ax.plot(t,y)
    
    ax2    = fig.add_subplot(2, 1, 2)
    Y = abs(np.fft.fft(y))
    
    Y = Y[0:5000]
    f = f[0:5000]
    
    line2, = ax2.plot(f,Y)
    
    def update(b = widgets.ToggleButtons( options=['Recalculate','Recalculate'],disabled=False),
              L = widgets.IntSlider(min = 10, max= 1500, step=1, value=500),
               gain = widgets.FloatSlider(min = 0.8, max= 1, step=0.01, value=0.99),
              smooth = widgets.IntSlider(min = 1, max= 20, step=1, value=10)):
    
        print(b)
        y = karplus_strong(L,gain,smooth)
    
        t = np.linspace(0,len(y)/fs,len(y))
        f = np.linspace(0,1,len(y))
    
        Y = abs(np.fft.fft(y))
        Y = Y[0:5000]
        f = f[0:5000]
    
        line.set_ydata(y)
        line2.set_ydata(Y)
    
        fig.canvas.draw_idle()
        ipd.display(ipd.Audio(y, rate=fs))
    
    
    
    interact(update);
    
    


Physical Modeling: Waveguides - Implementation

    import numpy as np
    from   numpy import linspace, sin, zeros
    from   math import pi
    %matplotlib notebook
    import matplotlib.pyplot as plt
    from   tikzplotlib import save as tikz_save
    
    from   IPython.display import display, Markdown, clear_output
    import IPython.display as ipd
    import ipywidgets as widgets
    from   ipywidgets import *
    
    
    
    fs          = 48000
    
    ###################################################################
    # function for plucking the string
    
    def pluck(L,P):
    
        x_L = np.zeros(L);
        x_R = np.zeros(L);
    
        x_L[1:P] = np.linspace(0,1,P-1)
        x_R[1:P] = np.linspace(0,1,P-1)
    
        x_L[P:L-1] = np.linspace(1,0,L-P-1)
        x_R[P:L-1] = np.linspace(1,0,L-P-1)
    
        return x_L, x_R
    
    ###################################################################
    # function: - get the next output sample
    #           - shift all buffers
    
    def next_step(x_L, x_R, filt, g, pick):
    
        # delay line outputs
        l_out = x_L[0]
        r_out = x_R[len(x_R)-1]
    
        # filter output
        f_out = sum(filt)/len(filt)
    
        # shift all arrays
        x_L   = np.roll(x_L,-1)
        x_R   = np.roll(x_R,1)
        filt  = np.roll(filt,1)
    
        # insert output values
        x_L[len(x_L)-1] = -f_out
        x_R[0]          = -l_out * g
        filt[0]         = r_out
    
        # the pickup sums the left- and right-traveling waves
        out = x_L[pick] + x_R[pick]
    
        return x_L, x_R, filt, out
    
    
    ###################################################################
    
    # length of the delay line:
    L = 300
    # feedback gain:
    g = 0.95
    # pluck position:
    pluck_pos = 3
    # pickup position:
    pick = 5
    # filter length:
    N = 20
    
    
    ###################################################################
    # the update function offers control over all parameters
    # - wait for the process to be finished
    # - it can take a couple of seconds until the new sound is ready
    
    def update(L     = widgets.IntSlider(min = 100, max= 500, step=1, value=300, continuous_update=False),
               g     = widgets.FloatSlider(min = 0.5, max= 1, step=0.01, value=0.95, continuous_update=False),
               pluck_pos = widgets.IntSlider(min = 0, max= 99, step=1, value=3, continuous_update=False),
               pick_pos  = widgets.IntSlider(min = 0, max= 99, step=1, value=5, continuous_update=False),
               N     = widgets.IntSlider(min = 1, max= 50, step=1, value=20, continuous_update=False)):
    
        x_L, x_R = pluck(L,pluck_pos)
    
        y = np.array([])
    
        # the filter is a simple moving average
        filt = np.zeros(N)
    
        for idx in range(2*fs):
    
            x_L, x_R, filt, out = next_step(x_L, x_R, filt, g, pick_pos)
            y = np.append(y,out)
    
        ipd.display(ipd.Audio(y, rate=fs))
    
    interact(update);
    
    

Physical Modeling: Waveguides

    Wave Equation for Virtual Strings

    The wave equation for the one-dimensional ideal string:

    \(\frac{\partial^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2}\)

    Solution without losses (d'Alembert):

    \(y(x,t) = y^+ (x-ct) + y^- (x+ct)\)

    • \(y^+\) = right traveling wave
    • \(y^-\) = left traveling wave

    Tuning the String

    The velocity \(c\) depends on tension \(K\) and mass-density \(\epsilon\) of the string:

    \(c = \sqrt{\frac{K}{\epsilon}} = \sqrt{\frac{K}{\rho S}}\)

    With tension \(K\), cross-sectional area \(S\) and density \(\rho\) in \({\frac{g}{cm^3}}\).

    Frequency \(f\) of the vibrating string depends on the velocity and the string length:

    \(f = \frac{c}{2 L}\)
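With these relations, the fundamental frequency follows directly from the physical parameters. A quick check in Python, using purely illustrative values in SI units (not measurements of a real string):

```python
from math import sqrt

def string_frequency(K, rho, S, L):
    """Fundamental of an ideal string: c = sqrt(K / (rho * S)), f = c / (2 L)."""
    c = sqrt(K / (rho * S))  # wave velocity in m/s
    return c / (2 * L)

# assumed values: steel-like density, 1 mm^2 cross section, 65 cm scale
f = string_frequency(K=100.0, rho=7.85e3, S=1e-6, L=0.65)  # roughly 87 Hz
```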

    Make it Discrete

    \(y(m,n) = y^+ (m,n) + y^- (m,n)\)

    \(t = nT\)

    \(x = mX\)

    Spatial sample distance \(X\) depends on sampling-rate \(f_s = \frac{1}{T}\) and velocity \(c\):

    \(X = cT\)


    An ideal, lossless string is represented by two delay lines with direct coupling.

    /images/Sound_Synthesis/physical_modeling/schematic_3.png
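Under this discretization, the delay-line length determines the pitch: a string of length \(L\) is represented by \(M = L/X = L f_s / c\) samples per traveling direction, and a full round trip of \(2M\) samples yields the fundamental \(f = \frac{f_s}{2M}\). A quick numerical check with assumed values:

```python
# assumed values, for illustration only
fs = 48000        # sampling rate in Hz
c  = 428.0        # assumed wave velocity in m/s
L  = 0.65         # assumed string length in m

X = c / fs        # spatial sample distance X = c*T
M = round(L / X)  # samples per traveling direction
f = fs / (2 * M)  # resulting fundamental frequency

# rounding M to an integer detunes the string slightly;
# fractional delay filters (Valimaki, 1995) reduce this error
```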

    Losses

    Losses can be implemented by inserting filters between the delay lines.

    /images/Sound_Synthesis/physical_modeling/schematic_1.png

Physical Modeling: Introduction

    Physical modeling emulates actual physical processes with digital means: oscillators, resonators and acoustic impedances are modeled with buffers, filters and other LTI systems. Although it was first realized when computers had sufficient power, the foundations are much older. Hiller and Ruiz (1971) were the first to transport the wave equation, formulated by d'Alembert in 1747, to the digital domain for synthesizing sounds of plucked strings.


    Early Hardware

    Although physical modeling algorithms sound great, offer good means for control, and enable the design of interesting instruments, they have had less impact on the evolution of music and digital instruments than other synthesis methods. Hardware physical modeling synthesizers from the 1990s, like the Korg Prophecy or the Yamaha VL1, did not become a success in the first place. There are many possible reasons for this: cheaper and larger memory made sampling instruments more powerful, and virtual analog synthesizers sounded more attractive, followed by the second wave of analog synths.

    Yamaha VL1 (1994)

    Software Instruments

    • Pianoteq Pro 6
    • Organteq Alpha
    • Strum GS 2
    • AAS Chromaphone 2

    Modular

    Since simple physical models are easily implemented on small embedded systems, various modules exist on the market:

    /images/Sound_Synthesis/physical_modeling/mysteron.jpg
    /images/Sound_Synthesis/physical_modeling/rings.jpg

    Physical Models in Experimental Music

    Eikasia (1999) by Hans Tutschku was realized using the IRCAM software Modalys:



    http://www.tutschku.com/content/works-eikasia.en.php

Concatenative: Crowd Noise Synthesis

    Two master's theses, written in collaboration between the Audio Communication Group and IRCAM, aimed at a parametric synthesis of crowd noises, more precisely of many people speaking simultaneously (Grimaldi, 2016; Knörzer, 2017). Using a concatenative approach, the resulting synthesis system can dynamically change the affective state of the virtual crowd. The algorithm was applied in user studies in virtual acoustic environments.

    Recordings

    The corpus of speech was gathered in two group sessions, each with five speakers, in the anechoic chamber at TU Berlin. Each recording was annotated into regions of different valence and arousal and then automatically segmented into syllables.

    Features

    /images/Sound_Synthesis/concatenative/valence_arousal_1.png

    Synthesis

    The following example synthesizes a crowd with a valence of -90 and an arousal of 80, which can be categorized as frustrated, annoyed or upset. No virtual acoustic environment is used, and the result is rather direct:


    References

  • Vincent Grimaldi, Christoph Böhm, Stefan Weinzierl, and Henrik von Coler. Parametric Synthesis of Crowd Noises in Virtual Acoustic Environments. In Proceedings of the 142nd Audio Engineering Society Convention. Audio Engineering Society, 2017.
  • Christian Knörzer. Concatenative crowd noise synthesis. Master's thesis, TU Berlin, 2017.
  • Vincent Grimaldi. Parametric crowd synthesis for virtual acoustic environments. Master's thesis, IRCAM, 2016.
  • Diemo Schwarz. Concatenative sound synthesis: The early years. Journal of New Music Research, 35(1):3–22, 2006.
  • Diemo Schwarz, Grégory Beller, Bruno Verbrugghe, and Sam Britton. Real-Time Corpus-Based Concatenative Synthesis with CataRT. In DAFx. 2006.
  • Diemo Schwarz. A System for Data-Driven Concatenative Sound Synthesis. In Proceedings of the COST-G6 Conference on Digital Audio Effects (DAFx-00). Verona, Italy, 2000.
  • C. Hamon, E. Moulines, and F. Charpentier. A diphone synthesis system based on time-domain prosodic modifications of speech. In International Conference on Acoustics, Speech, and Signal Processing, 238–241 vol. 1. May 1989. doi:10.1109/ICASSP.1989.266409.
  • F. Charpentier and M. Stella. Diphone synthesis using an overlap-add technique for speech waveforms concatenation. In ICASSP '86. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 11, 2015–2018. April 1986. doi:10.1109/ICASSP.1986.1168657.
Faust: Conditional Logic

    The select2() primitive can be used as a switch with two cases, as shown in switch_example.dsp:

    // switch_example.dsp
    //
    //
    // Henrik von Coler
    // 2020-05-28
    
    import("all.lib");
    
    // outputs 0 if x is below 0
    // and 1 if x is greater than or equal to 0
    // the first argument 'l' is a dummy and remains unused
    sel(l,x) = select2((x>=0), 0, 1);
    
    process = -0.1 : sel(2);
    

    Concatenative: Introduction

    Concatenative synthesis is an evolution of granular synthesis, first introduced in the context of speech synthesis and processing (Charpentier, 1986; Hamon, 1989).

    Concatenative synthesis for musical applications was introduced by Diemo Schwarz. Corpus-based concatenative synthesis (Schwarz, 2000; Schwarz, 2006) slices audio recordings into units and calculates audio features for each unit. During synthesis, unit selection is performed by navigating the multidimensional feature space, and the selected units are concatenated.
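The selection stage can be sketched as a nearest-neighbour search in feature space: every unit carries a feature vector, the synthesis target defines a point, and the closest units are spliced together. The following toy example with synthetic data illustrates the principle only; it is not the CataRT implementation:

```python
import numpy as np

# Toy sketch of corpus-based unit selection: nearest neighbours
# in a 2-D feature space. Synthetic data; not the CataRT system.
def select_units(unit_features, target, k=3):
    """Indices of the k units closest to the target (Euclidean)."""
    d = np.linalg.norm(unit_features - target, axis=1)
    return np.argsort(d)[:k]

def concatenate(units, indices):
    """Splice the selected units into one output signal."""
    return np.concatenate([units[i] for i in indices])

rng = np.random.default_rng(1)
units = [rng.standard_normal(200) for _ in range(50)]  # unit waveforms
features = rng.uniform(0.0, 1.0, size=(50, 2))         # e.g. centroid, energy
idx = select_units(features, np.array([0.5, 0.5]), k=3)
out = concatenate(units, idx)
```

In a real system the splice points would be cross-faded, and the feature space would hold perceptually meaningful descriptors rather than random values.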

    /images/Sound_Synthesis/concatenative/concatenative-flow-1.png
    [Fig.1] (Schwarz, 2006)

    /images/Sound_Synthesis/concatenative/concatenative-flow-2.png
    [Fig.2] (Schwarz, 2006)



Contents © Henrik von Coler 2020