Parametric Sound Texture Synthesis

Hello all,

Having read this very interesting article, I was wondering in what ways the Flucoma tools are similar to or differ from such a “parametric” approach, as they also resort to convolutional neural networks?

I’m assuming that this synthesis approach might also be interesting for the development of Flucoma… (or maybe it has even already been considered)

Greetings,
Jan

Thanks Jan. That’s a really interesting paper. We don’t include a convolutional neural network in the toolkit, although @groma has done a great deal of research around them (and this certainly informs our choices about what we have included so far).

CNNs are pretty heavyweight, and for working with audio data it’s unlikely that they are practical to train or run on current hardware without access to a GPU. I see that the paper has some discussion about how to reduce the complexity of their model, but even so, it remains very heavyweight.
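To make “heavyweight” concrete, here is a minimal sketch (in PyTorch, and emphatically not Flucoma code) of the parametric idea as I understand it: measure feature statistics of a target texture through a fixed CNN layer, then iteratively optimise a noise “spectrogram” until its statistics match. Everything here (the single random layer, the Gram-matrix statistic, the sizes and optimiser settings) is a made-up simplification; a real texture model stacks many layers and far more statistics, which is exactly where the GPU appetite comes from.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical one-layer "analyser": a 1-D convolution across the time axis
# of a 128-band spectrogram. The weights stay random and fixed.
conv = torch.nn.Conv1d(128, 64, kernel_size=9, padding=4)
for p in conv.parameters():
    p.requires_grad_(False)

def texture_stats(spec):
    """Gram matrix of the layer activations: one crude 'parametric' statistic."""
    act = F.relu(conv(spec))              # (batch, channels, time)
    return (act @ act.transpose(1, 2)) / act.shape[-1]

target = torch.rand(1, 128, 256)          # stand-in for a real target spectrogram
target_stats = texture_stats(target).detach()

# Start from noise and push its statistics towards the target's.
synth = torch.rand(1, 128, 256, requires_grad=True)
opt = torch.optim.Adam([synth], lr=0.05)

for step in range(200):                   # every synthesis run is hundreds of
    opt.zero_grad()                       # forward/backward passes, hence GPUs
    loss = F.mse_loss(texture_stats(synth), target_stats)
    loss.backward()
    opt.step()
```

And note that a real system would still have to get from the optimised spectrogram back to audio, which typically adds yet another optimisation or phase-reconstruction stage on top.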

At one point we did look at the idea of wrapping a general-purpose ML toolkit like Torch. However, we were deterred by the added complexity it would entail, both in development and in usage, especially whilst we’re still collectively figuring out what sorts of approaches are desirable / feasible in environments like Max or SC.

Hi weefuzzy,
Thanks for the reply!
I can imagine that the added complexity of implementing something of that kind would be quite a challenge… and also that it’s beyond the reach of normal processors for now (even though the new Macs seem promising in this regard).
Regarding the (creative) practicality of such a heavyweight tool, though, I do find that environments like SC really open a path to that kind of “non-realtime” way of working with sound synthesis, which for me is a truly fascinating process.
I also find the discussion in such papers quite inspiring: in spite of the scientific language, they open up perspectives on experimentation and “sonorous imaginations”.