NIME 2020: Spatialization

Virtual Source Model

Spectral spatialization in this system is based on a virtual sound source with a position in space and a spatial extent, as shown in [Fig.1]. The source center is defined by two angles (Azimuth, Elevation) and the Distance. The Spread defines the diameter of the virtual source. This model is consistent with many theoretical frameworks from the fields of electroacoustic music and virtual acoustics.

/images/NIME_2020/source_in_space.png
Fig.1

Virtual sound source with a position in space and a spatial extent.
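The source parameters (Azimuth, Elevation, Distance) describe a point in spherical coordinates. As a minimal sketch, the center position can be converted to Cartesian coordinates as follows; the coordinate convention (azimuth 0° = front, positive counterclockwise, elevation positive upward) is a common Ambisonics choice and an assumption here, not taken from the paper:

```python
import math

def source_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert a virtual source center (azimuth, elevation, distance)
    to Cartesian coordinates. Angles in degrees; azimuth 0 = front,
    positive counterclockwise; elevation positive upward."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return x, y, z
```

The Spread parameter is not part of this conversion; it only defines the diameter of the region around this center point that the source occupies.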


Point Cloud Realization

The virtual source from [Fig.1] is realized as a cloud of point sources in an Ambisonics system using IRCAM's Panoramix software. Up to 24 point sources can be controlled jointly. The following figures show the Panoramix viewer: the left half represents the top view, the right half the rear view.
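One way to realize such a cloud is to scatter the 24 point sources uniformly within a sphere whose diameter equals the Spread, centered on the source position. The sketch below illustrates this idea; it is a hypothetical construction for clarity, not the actual Panoramix control logic:

```python
import math
import random

def scatter_point_cloud(azimuth_deg, elevation_deg, distance, spread,
                        n=24, seed=0):
    """Return n Cartesian point-source positions scattered uniformly
    inside a sphere of diameter `spread`, centered on the virtual
    source position given in spherical coordinates."""
    rng = random.Random(seed)
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    cx = distance * math.cos(el) * math.cos(az)
    cy = distance * math.cos(el) * math.sin(az)
    cz = distance * math.sin(el)
    points = []
    while len(points) < n:
        # Rejection-sample a point inside the sphere of radius spread / 2.
        dx, dy, dz = (rng.uniform(-spread / 2, spread / 2) for _ in range(3))
        if dx * dx + dy * dy + dz * dz <= (spread / 2) ** 2:
            points.append((cx + dx, cy + dy, cz + dz))
    return points
```

With a small spread the cloud is confined (as in [Fig.2]); increasing the spread while decreasing the distance lets the cloud surround the listener (as in [Fig.4]).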


[Fig.2] shows a dense point cloud of a confined virtual sound source without elevation:

/images/NIME_2020/panoramix_confined.png
Fig.2

Confined virtual sound source.


The virtual sound source in [Fig.3] has a wider spread and is elevated:

/images/NIME_2020/panoramix_spread.png
Fig.3

Spread virtual sound source with elevation.


For small distances and large spreads, the source envelops the listener, as shown in [Fig.4]:

/images/NIME_2020/panoramix_enveloping.png
Fig.4

Enveloping virtual sound source.


Dispersion

In a nutshell, the synthesizer outputs the spectral components of a violin sound to 24 individual outputs. Different ways of assigning spectral content to the outputs are possible, shown as Partial to Source Mapping in [Fig.5]. In these experiments, each output represents a Bark scale frequency band. For the point cloud shown above, the distribution of spectral content across the cloud is thus neither homogeneous nor stationary.

/images/NIME_2020/dispersion.png
Fig.5

Dispersion - routing partials to point sources.
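A Bark-band mapping of this kind can be sketched as follows, using Traunmüller's approximation of the Bark scale to assign each partial's frequency to one of 24 bands (one per output). The exact mapping used in the system may differ; this is an illustrative assumption:

```python
import math

def bark(f_hz):
    """Traunmüller's approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def band_index(f_hz, n_bands=24):
    """Map a partial's frequency to one of n_bands Bark bands (0-based),
    i.e. to one of the synthesizer's individual outputs."""
    z = bark(f_hz)
    return min(max(int(z), 0), n_bands - 1)
```

Low partials thus land on the first few outputs, while partials above roughly 15 kHz all fall into the highest bands, so the spectral energy is spread unevenly across the point cloud.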

