Google's NSynth (neural deep learning synth)

I imagine many of you will have seen this already, as it made the tech-blog rounds earlier this year, but I figured it might be interesting to post here:

Also came across a couple of more detailed blog posts from the development team:

One of the goals of Magenta is to use machine learning to develop new avenues of human expression. And so today we are proud to announce NSynth (Neural Synthesizer), a novel approach to music synthesis designed to aid the creative process.

Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

In a previous post, we described the details of NSynth (Neural Audio Synthesis), a new approach to audio synthesis using neural networks. We hinted at further releases to enable you to make your own music with these technologies. Today, we’re excited to follow through on that promise by releasing a playable set of neural synthesizer instruments:

  • An interactive AI Experiment made in collaboration with Google Creative Lab that lets you interpolate between pairs of instruments to create new sounds.

  • A MaxForLive Device that integrates into both Max MSP and Ableton Live. It allows you to explore the space of NSynth sounds through an intuitive grid interface. [ DOWNLOAD ]

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
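If you want to play with the interpolation idea outside the browser experiment, Magenta also published a fastgen interface for encoding and resynthesising your own sounds. A rough sketch below - module paths, signatures and the checkpoint name are from memory of the 2017 release (and the pretrained WaveNet checkpoint needs downloading separately), so check the current Magenta docs before relying on it:

```python
# Sketch only: encode two source sounds with the NSynth WaveNet encoder,
# average the embeddings, and resynthesise the blend.
# Checkpoint path and exact signatures may differ between Magenta versions.
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

CKPT = "wavenet-ckpt/model.ckpt-200000"   # pretrained NSynth checkpoint (download separately)
SAMPLE_LENGTH = 64000                     # 4 seconds at 16 kHz

flute = utils.load_audio("flute.wav", sample_length=SAMPLE_LENGTH, sr=16000)
bass = utils.load_audio("bass.wav", sample_length=SAMPLE_LENGTH, sr=16000)

# Each encoding is a compact time/timbre embedding of the input sound.
enc_flute = fastgen.encode(flute, CKPT, SAMPLE_LENGTH)
enc_bass = fastgen.encode(bass, CKPT, SAMPLE_LENGTH)

# Interpolating the embeddings is what produces the "new instrument" morphs.
enc_blend = 0.5 * enc_flute + 0.5 * enc_bass

fastgen.synthesize(enc_blend, save_paths=["flute_bass_blend.wav"], checkpoint_path=CKPT)
```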

The sounds in their examples are profoundly boring, but it's a somewhat interesting approach nonetheless.

I believe @weefuzzy has toyed with Magenta.

I do agree, the examples they provide seem pretty boring.


I had a quick look to see if it was possible to create custom sound sets but didn’t see anything. Shame.

It’s funny - watching the videos, I can hardly hear the difference between the sounds they are morphing…


It is indeed quite plain for now - but I think there is potential for the 2nd toolbox. I know the team will make concrete proposals for this sort of hybridisation, although I’m told by those in the know (@weefuzzy and @groma, obviously) that training times will be an issue for those without Google’s computational farm :wink:

Maybe during the first couple of years of the project you could set up the Mac Pro as a computational server that composers on the project can queue long training jobs on? (As in, upload the audio along with the desired settings to your “box” thing, and it would spit out the files when done.)

Can probably do that natively in a Max patch too(?).

Particularly for things that take 8+ hours, it can be difficult to keep a computer in one place for it all, and there’s no way to pause the training (or know how long it’s going to take).
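The basic version could be as simple as a drop-folder watcher. A minimal sketch, assuming jobs get uploaded as folders containing the audio plus a settings.json, and with the training command itself as a purely hypothetical placeholder:

```python
# Minimal drop-folder job queue sketch: poll an inbox for uploaded job
# folders, run them one at a time, and move finished jobs to a done folder.
import json
import subprocess
import time
from pathlib import Path

INBOX = Path("/shared/queue/inbox")   # composers upload job folders here (assumed layout)
DONE = Path("/shared/queue/done")     # finished jobs get moved here

def run_job(job_dir: Path) -> None:
    settings = json.loads((job_dir / "settings.json").read_text())
    # "train_nsynth.sh" is a hypothetical placeholder for whatever training
    # tool ends up being used; swap in the real command when it exists.
    subprocess.run(["./train_nsynth.sh", str(job_dir), json.dumps(settings)], check=True)
    job_dir.rename(DONE / job_dir.name)

def main() -> None:
    INBOX.mkdir(parents=True, exist_ok=True)
    DONE.mkdir(parents=True, exist_ok=True)
    while True:
        # Only pick up folders whose upload has finished (marked with a "ready" file).
        for job_dir in sorted(p for p in INBOX.iterdir()
                              if p.is_dir() and (p / "ready").exists()):
            run_job(job_dir)
        time.sleep(60)  # poll once a minute; jobs run FIFO, one at a time

if __name__ == "__main__":
    main()
```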

It is a good idea, although not as trivial as you make it sound. I’ll check with @groma once we have more solid training tools.


I think Richard Knight has done a bunch of similar things using SC as a backend with some ftp upload/download.

I can ask him about it if you want.

(Unrelatedly, he just sent me an mp3 of “The Quiet Upstairs”, which he extracted from the 30h+ of archives on the Noise Upstairs webpage, trimmed down to just the “quiet” sections. Sounds fucking great.)

Please do ask, and forward it to us if he is still interested in sharing. It would be great if The Quiet Upstairs were also available, but more importantly I’d like to know how he removed the talking, for instance…

Ok, will do.

He didn’t remove the talking - well, not specifically. It’s just an RMS thing he did with batch processing. It does make for some interesting moments.

(we might make it a free release of some kind, will see)
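If anyone wants to try something similar on their own archives, the batch RMS approach is easy to sketch. This isn’t his actual script - just a rough version using numpy and soundfile, with an arbitrary window size and threshold, and hypothetical filenames:

```python
# Rough RMS gate sketch: keep only the windows whose RMS falls below a
# threshold and concatenate them into a "quiet sections only" file.
import numpy as np
import soundfile as sf

def quiet_sections(path, out_path, win=4096, threshold=0.02):
    audio, sr = sf.read(path)
    # Mix to mono for the measurement, but keep the original channels for output.
    mono = audio.mean(axis=1) if audio.ndim > 1 else audio
    kept = []
    for start in range(0, len(mono) - win, win):
        rms = np.sqrt(np.mean(mono[start:start + win] ** 2))
        if rms < threshold:              # "quiet" = below the RMS threshold
            kept.append(audio[start:start + win])
    if kept:
        sf.write(out_path, np.concatenate(kept), sr)

# Hypothetical filenames, just for illustration.
quiet_sections("session.wav", "quiet_only.wav")
```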
