NN size question

So, I am really just checking my math here.

If I have a 16-dimensional space, where each dimension has 100 points, then my space would have 100^16 points, or 10^32?

Then, if I train a Neural Net to traverse this space, and the NN is able to find 12,000,000 points in this space

Then my NN is able to traverse roughly 10^7/10^32 = 10^-25 of the space, or 10^-23 percent of it.
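(A quick sanity check of those numbers in Python, assuming the 12 million figure explained further down the thread:)

```python
states = 100 ** 16                       # 16 sliders, 100 positions each
found = 12_000_000                       # points the NN mapping reaches
print(f"{states:.1e}")                   # 1.0e+32 total states
print(f"{found / states:.1e}")           # ~1.2e-25 as a fraction
print(f"{100 * found / states:.1e} %")   # ~1.2e-23 as a percentage
```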

Anyone with me or did I mess it up?

I miss all of you dearly, btw.

Sam

I'm confused and probably out of my depth here but do you mean you have 16 dimensions and 100 samples (in the musical sense of mapping things)?

If you do have a 'dimension' that itself has 100 points then I think your math is right in that it's 100^16.

Can you divulge more about your vast neural network :)?

That is the total number of combinations indeed, but the neural net does not connect everything to everything, nor does it need to traverse all combinations to be useful… if you let us know (this is now on the 2nd toolbox private discussion) what you want to do, there might be a few shortcuts @weefuzzy and @groma can come up with - maybe even me if it is sampling a space of modular synth control for instance, which I've done a bit…

I have a synth that has 16 parameters. Each has a slider. If each slider increments by 0.01, I would have a space of 100^16, I think.

I am using an NN to control it. Basically, I find 4-6 sounds I like and train an NN on those 4-6 settings using 4 input values. So, two XY pads control 16 parameters. If I run "all of the points" of the two XY pads through the NN, it will give me about 12 million points, more or less (and I am oversimplifying here).

What I am trying to do is figure out what percentage of the space I am traversing. And if my math is right, then I am traversing a tiny portion of the space.

To answer PA, this is exactly the point I am trying to make - that the space is vast and by training the NN, you can focus in on a little space within the vast one. This isn't really a FluCoMa question, but it will be when I get there.

This is exactly what I do indeed, but I cheated a bit: I do a sampling in 3 passes and dismiss the useless bits of space automatically, and scale the FFT according to the pitch of what I get. My space has 3 parameters now, but 10 x 10 x 10 at full FFT resolution gives me a space that I can then re-focus on. I only have very partial results yet and no complex FluCoMa ML on this, but that'll come.

Yes it's 100^16 possible distinct states for the synth. I'm confused about the 12 million though, where does that come from?

In any case: yes, an n:m mapping where m > n will mean you're only going to end up sampling the output space (whatever mechanism you use), but in a way that's a feature, rather than a bug, because you get to curate it, on the off chance that portions of that 100^16 state space sound similar, or dull.

I am kind of tackling the same problem right now and trying to find ways to narrow down parameter spaces for sound generation.

Where do the 4 values come from and how are they mapped? Or is that where the neural network comes into play?

I basically ran "all" of the points from my 4-dimensional control space (4 sliders, each with a step of 0.01) through the network and then added the outputs, rounded to the nearest 1/100, to a Python set. Since the set won't repeat values, it only added unique ones, and there were 12 million unique outputs for this particular training (which is actually 8 trainings).
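A minimal sketch of that counting idea - `nn_map` here is just a stand-in for the trained network, and the step is coarsened to 0.1 so it runs quickly (the real run at 0.01 is on the order of 10^8 input points):

```python
import numpy as np
from itertools import product

def nn_map(point):
    # Stand-in for model.predict(): any deterministic 4-in / 16-out mapping
    # will do for illustrating the counting itself.
    x = np.asarray(point)
    return (np.sin(np.outer(x, np.arange(1, 5))).ravel() + 1) / 2

step = 0.1                                   # the real run uses 0.01
axis = np.round(np.arange(0.0, 1.0 + step / 2, step), 2)

unique_outputs = set()
for point in product(axis, repeat=4):        # every 4-D control setting
    out = nn_map(point)                      # 16 synth parameters
    unique_outputs.add(tuple(np.round(out, 2)))  # rounded to nearest 1/100

print(len(unique_outputs), "distinct synth settings reached")
```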

- 16 sliders control the synth.
- 2 2D sliders are what I am using to control the 16 - so 4 values.
- I move the 16 sliders around until I have a sound I like. Then I put the 2D sliders in a corner or along a border or really anywhere. I add this point to a list.
- Once I have 4 to 6 or 8 sounds I like for a particular mapping, I train the NN using those 4-8 vectors (I am using Keras in Python as my NN - see the sketch below).
- Then I play around with the mapping and see if I like it. I can add or remove points from the list and retrain whenever. The idea is to be in the feedback loop between the synth and the mappings.
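For what a model like that might look like in Keras - this is a sketch built from the description above, not the actual code, so the layer sizes, activations and training data are placeholders:

```python
import numpy as np
from tensorflow import keras

# 4 control values (two XY pads) in, 16 synth parameters out
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(16, activation="sigmoid"),   # sliders normalised 0..1
])
model.compile(optimizer="adam", loss="mse")

# a handful of hand-picked (pad position -> liked synth setting) pairs
X = np.array([[0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.5, 0.5],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.2, 0.8]])
Y = np.random.rand(4, 16)        # stand-in for the saved slider settings

model.fit(X, Y, epochs=2000, verbose=0)
model.predict(np.array([[0.5, 0.5, 0.5, 0.5]]), verbose=0)  # interpolate
```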

This week I went crazy and made a synth with 53 controls. It is insane because I have no idea how this synth actually works (it is the DX7 algorithm), but I can be super musical with it using the NN, which is I think the whole point.

Okay, so your NN is learning off raw input values, not any kind of descriptor information to describe the sounds you 'like'?

Can you share the Keras code?

Yeah. I am just using my ears. The "sounds good" descriptor.

The earlier version is here, which should work:

In the new version, I can have N dimensions for my NN. The Keras stuff is super simple.

Thanks for the super clear Python code - I will definitely study it and implement it on my side for a play.

How does it do for speed? Is it fast enough?

Super fast. But if you want to implement multiple Keras models and switch between them, there is (or at least was about 2 months ago) a bug that makes Keras struggle with multithreading. This is why my setup creates a different Python instance for each model. The unused ones just run in the background and don't take up CPU.

The other thing I ran across was that the Keras model easily gets overloaded if you open it up and then just send it a million vectors to predict (like moving the OSC sliders around). So I came up with a way for it to "prime the pump" (it just sends a couple of test runs) before I start sending it tons of OSC messages.
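Something like this is all the "prime the pump" step needs to be - a hypothetical helper, not the actual code:

```python
import numpy as np

def prime_the_pump(model, n_warmup=5, n_inputs=4):
    # A few throwaway predictions so the first real OSC-driven request
    # doesn't pay the one-off graph-building / allocation cost.
    for _ in range(n_warmup):
        model.predict(np.random.rand(1, n_inputs), verbose=0)
```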

Lemur actually sends a ton of OSC data. I switched to a newer iPad at some point and had to filter out half the messages because the newer iPad sent twice as many as my old one and the models were barfing.

Yeah, unfortunately the threading model in Python is garbage - I really hate writing anything using multiprocessing. The suggested or accepted way of doing these things is to bypass the global interpreter lock and spawn new instances of things, which is crude but usually works without a hitch.
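For the "one interpreter per model" workaround, the crude version is just spawning processes - the script name, model files and ports below are made up for illustration:

```python
import subprocess

# one long-running interpreter per trained model, each on its own OSC port
MODEL_PORTS = {"mapping_a.h5": 9001, "mapping_b.h5": 9002}

procs = [
    subprocess.Popen(["python", "serve_model.py", path, str(port)])
    for path, port in MODEL_PORTS.items()
]
# the controller then routes OSC to whichever port holds the active mapping
```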

I'll take a look at this tomorrow then - it's super interesting and it's in a language I'm familiar with :slight_smile:

That is what I was missing in your explanation. So you're doing a 4 to 16 mapping. I am dreaming of doing something semi-assisted, where I could do 4 settings by hand, and then vote for various competing mappings in between them, through descriptor analysis… not dissimilar to what was presented in this thread

@tutschku was looking for a similar idea, where you place a few components of a corpus in a line and ask the algo to map everything onto the 'Hans' descriptor space… we did a crude implementation in FluCoMa-land (with a kNN regressor) to show the limits of the current implementation and drive interface research forward. With @jamesbradbury @weefuzzy and @groma doing a lot of Python, I'm curious to see where yours will go…
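For reference, the "place a few items on a line and regress the rest" idea can be sketched with scikit-learn's kNN regressor - the descriptor dimensionality, neighbour count and random data here are placeholders, not the FluCoMa implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# hand-placed anchors: descriptor vectors -> position on a 1-D line
anchor_descriptors = np.random.rand(5, 13)    # e.g. 13 MFCCs per sound
anchor_positions = np.linspace(0.0, 1.0, 5)

knn = KNeighborsRegressor(n_neighbors=3, weights="distance")
knn.fit(anchor_descriptors, anchor_positions)

# every other item in the corpus gets projected onto the same line
corpus_descriptors = np.random.rand(200, 13)
line_positions = knn.predict(corpus_descriptors)
```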

I am exploiting the fact that, in these complex synthesis spaces (I am doing digital synthesis, not concatenative), the interesting part is actually moving between the mappings. So, while I like the mappings I chose, when I play around with the synth I find more interesting things that I didn't choose. Then I can either accept the mapping as is or add and remove vectors to focus in on areas of the multidimensional space that I like better.

When doing my rudimentary "area of the mapping" calculations, most of my individual NNs covered about 1 million points in space. But one of them covered just 5000. This is a hyper-focused region of the synth that kind of just does one thing. The sliders barely move when I am playing in that space.

I'm doing this in non-linear modular analogue feedbacking patches, so I understand. But I try to curate some sort of behaviour quickly, as this patch might change and the values are almost infinite (24-bit accuracy per control…), so the mapping could be crude, but I want to interact with it.

Anyway, let's see what we come up with, this is exciting!

Sam, this tool steps through control parameter space in a way similar to what you're describing (I think). It sounds like you're not sampling the audio output, but perhaps there are some ideas in here to play with.

Another approach that I've worked with is using Poisson disc sampling in n dimensions (in your case 16 dimensions). It basically keeps trying to randomly add points into the space, but only adds them if they are sufficiently distant (Euclidean) from all other points that have been added (so that the final set is evenly distributed). Once it fails k times in a row to find a place where it can add a point, it stops. What you'll end up with is a bunch of n-dimensional points that are all at least a certain distance from each other, meaning that they will be pretty evenly spread out. You can control the number of points it creates by changing the minimum distance. Larger distance = fewer points (obviously).

Then you can use these points as your points for stepping through your control parameter space.
n_dims_poisson_sampling.py.zip (2.4 KB)
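(A minimal sketch of that dart-throwing variant, as described above - my own illustration, not the attached script:)

```python
import numpy as np

def poisson_disc_nd(n_dims=16, min_dist=1.0, k_failures=1000, rng=None):
    # Propose random points in the unit hypercube and keep a candidate only
    # if it is at least `min_dist` (Euclidean) from every point kept so far.
    # Stop after `k_failures` consecutive rejections.
    rng = np.random.default_rng() if rng is None else rng
    points = []
    failures = 0
    while failures < k_failures:
        candidate = rng.random(n_dims)
        if all(np.linalg.norm(candidate - p) >= min_dist for p in points):
            points.append(candidate)
            failures = 0
        else:
            failures += 1
    return np.array(points)

samples = poisson_disc_nd()
print(len(samples), "evenly spread control settings")
```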

Has anyone else thought about using this or have thoughts about it?


This is a good pointer for my next exploration indeed (instead of being systematic, or maybe after the first 10 in each dim) thanks!

Thanks for this. I will check these out. Good ole Poisson. I love that guy.

I think my goal is exactly the opposite of this idea. Equidistance of points in parameter space has no corollary in the sonic meaning of complex systems. In linear systems it probably does. But in certain kinds of feedback systems, for instance, I really don't think it helps. Tiny movements can create enormous change and large movements can create none. When I move my 4-dimensional control space, all of the 16-dim parameter vectors are moving at different rates in different directions.