Flow Synthesizer: Universal Audio Synthesizer Control with Normalizing Flows


Bibliographic Details
Main Authors: Philippe Esling, Naotake Masuda, Adrien Bardet, Romeo Despres, Axel Chemla-Romeu-Santos
Format: Article
Language: English
Published: MDPI AG, 2019-12-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/10/1/302
Description
Summary: The ubiquity of sound synthesizers has reshaped modern music production, and novel music genres are now sometimes entirely defined by their use. However, the increasing complexity and number of parameters in modern synthesizers make them extremely hard to master. Hence, there is a crucial need for methods that make it easy to create sounds with synthesizers and to explore their capabilities. Recently, we introduced a novel formulation of audio synthesizer control based on learning an organized latent audio space of the synthesizer's capabilities, while constructing an invertible mapping to the space of its parameters. We showed that this formulation allows us to simultaneously address <i>automatic parameter inference</i>, <i>macro-control learning</i>, and <i>audio-based preset exploration</i> within a single model, and that it can be efficiently addressed by relying on Variational Auto-Encoders (VAE) and Normalizing Flows (NF). In this paper, we extend our results by evaluating our proposal on larger sets of parameters and show its superiority in both parameter inference and audio reconstruction over various baseline models. Furthermore, we introduce <i>disentangling flows</i>, which learn the invertible mapping between two separate latent spaces while steering the organization of some latent dimensions to match target factors of variation, by splitting the objective into partial density evaluations. We show that the model disentangles the major factors of audio variation as latent dimensions, which can be directly used as <i>macro-parameters</i>. We also show that our model is able to learn semantic controls of a synthesizer while mapping smoothly to its parameters. Finally, we introduce an open-source implementation of our models inside a real-time Max4Live device that is readily available for evaluating creative applications of our proposal.
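The invertible latent-to-parameter mapping described above is the defining property of a normalizing flow: any point in the latent audio space maps to exactly one synthesizer parameter setting, and vice versa. As a minimal illustration (not the paper's implementation; the class name, dimensions, and toy linear "networks" below are purely hypothetical), a single RealNVP-style affine coupling layer shows how such a mapping can be both expressive and exactly invertible:

```python
import numpy as np

class AffineCoupling:
    """Toy affine coupling layer: invertibly maps a latent vector z
    to a parameter vector v. Real flows stack many such layers and
    use neural networks for the scale/shift functions."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.d = dim // 2  # split point: first half conditions the second
        # Toy "networks": single linear maps producing scale and shift
        self.Ws = rng.normal(scale=0.1, size=(self.d, dim - self.d))
        self.Wt = rng.normal(scale=0.1, size=(self.d, dim - self.d))

    def forward(self, z):
        """Latent z -> parameters v; also returns log|det J| for the
        change-of-variables density term used in training."""
        z1, z2 = z[: self.d], z[self.d :]
        s = np.tanh(z1 @ self.Ws)   # log-scale, bounded for stability
        t = z1 @ self.Wt            # shift
        v2 = z2 * np.exp(s) + t     # only the second half is transformed
        return np.concatenate([z1, v2]), s.sum()

    def inverse(self, v):
        """Exact inverse: parameters v -> latent z, no approximation."""
        v1, v2 = v[: self.d], v[self.d :]
        s = np.tanh(v1 @ self.Ws)
        t = v1 @ self.Wt
        z2 = (v2 - t) * np.exp(-s)
        return np.concatenate([v1, z2])

# Invertibility check: inverse(forward(z)) recovers z exactly
flow = AffineCoupling(dim=8)
z = np.random.default_rng(1).normal(size=8)
v, logdet = flow.forward(z)
assert np.allclose(flow.inverse(v), z)
```

Because the inverse is exact, the same trained model can run in both directions: decoding a latent position into parameters (macro-control), or encoding an existing preset's parameters back into the latent space (preset exploration).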
ISSN:2076-3417