That is the total number of combinations indeed, but the neural net does not connect everything to everything, nor does it need to traverse all combinations to be useful… if you let us know (this is now on the 2nd toolbox private discussion) what you want to do, there might be a few shortcuts @weefuzzy and @groma can come up with - maybe even me if it is sampling a space of modular synth control for instance, which I've done a bit…
I have a synth that has 16 parameters. Each has a slider. If each slider increments by 0.01, I would have a space of 100^16 (that is, 10^32) possible settings, I think.
I am using an NN to control it. Basically, I find 4-6 sounds I like and train an NN on those 4-6 settings using 4 input values. So, two XY pads control 16 parameters. If I run "all of the points" of the two XY pads through the NN, it will give me about 12 million points, more or less (and I am oversimplifying here).
What I am trying to do is figure out what percentage of the space I am traversing. And if my math is right, then I am traversing a tiny portion of the space.
To answer PA, this is exactly the point I am trying to make - that the space is vast and that by training the NN you can focus in on a little space within the vast one. This isn't really a FluCoMa question, but it will be when I get there.
This is exactly what I do indeed, but I cheated a bit: I do the sampling in 3 passes and dismiss the useless bits of the space automatically, and scale the FFT according to the pitch of what I get. My space has 3 parameters now, but 10 x 10 x 10 at full FFT resolution gives me a space that I can then re-focus on. I only have very partial results so far and no complex FluCoMa ML on this, but that'll come.
Yes, it's 100^16 possible distinct states for the synth. I'm confused about the 12 million though, where does that come from?
In any case: yes, an n:m mapping where m > n will mean you're only going to end up sampling the output space (whatever mechanism you use), but in a way that's a feature, rather than a bug, because you get to curate it, on the off-chance that portions of that 100^16 state space sound similar, or dull.
I basically put "all" of the points from my 4-dimensional control space (4 sliders each with a step of 0.01) through the NN and then added the outputs, rounded to the nearest 1/100, to a Python set. Since a set won't repeat values it only kept unique ones, and there were 12 million unique outputs for this particular training (which is actually 8 trainings).
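In code terms, the counting pass is roughly this (a minimal sketch, not my actual script: it assumes a trained Keras model in `model`, and uses a coarser 0.05 step so it finishes quickly):

```python
import itertools
import numpy as np

step = 0.05                      # my real sweep used 0.01; coarser here to keep it quick
grid = np.arange(0.0, 1.0 + step / 2, step)

# every combination of the 4 control values on the grid
inputs = np.array(list(itertools.product(grid, repeat=4)))

# push them all through the trained net in batches
# (`model` is the already-trained Keras net)
outputs = model.predict(inputs, batch_size=4096, verbose=0)

# round each 16D output to the nearest 1/100 and keep only the unique ones
unique_outputs = {tuple(row) for row in np.round(outputs, 2)}
print(len(unique_outputs), "unique synth settings reached")
```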
16 sliders control the synth
2 2D sliders are what I am using to control the 16 - so 4 values
I move the 16 sliders around until I have a sound I like. Then I put the 2D sliders in a corner or along a border or really anywhere. I add this point to a list.
Once I have 4 to 6 or 8 sounds I like for a particular mapping, I train the NN using those 4-8 vectors (I am using Keras in Python as my NN).
Then I play around with the mapping and see if I like it. I can add or remove points from the list and retrain whenever. The idea is to be in the feedback loop between the synth and the mappings.
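For the curious, the training step boils down to something like this (a sketch with invented example data, not my exact architecture):

```python
import numpy as np
from tensorflow import keras

# the 2D-pad positions chosen for each sound (4 values each) and the
# matching 16 slider settings; both are made up here for illustration
control_points = np.array([[0.0, 0.0, 0.0, 0.0],
                           [1.0, 0.0, 0.5, 0.5],
                           [0.0, 1.0, 1.0, 0.0],
                           [1.0, 1.0, 0.2, 0.8]])
synth_points = np.random.rand(4, 16)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="sigmoid"),  # sliders live in 0..1
])
model.compile(optimizer="adam", loss="mse")

# with so few points the net simply memorizes them; the musical interest
# is in how it interpolates everywhere in between
model.fit(control_points, synth_points, epochs=2000, verbose=0)
```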
This week I went crazy and made a synth with 53 controls. It is insane because I have no idea how this synth actually works (it is the DX7 algorithm), but I can be super musical with it using the NN, which is I think the whole point.
Super fast. But if you want to implement multiple Keras models and switch between them, there is (or at least was about 2 months ago) a bug that makes Keras struggle with multithreading. This is why my setup creates a different Python instance for each model. The unused ones just run in the background and don't take up CPU.
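The workaround looks roughly like this (a sketch; `model_server.py` is a stand-in name for a script that loads one model and listens on its own OSC port, and the paths and ports are invented):

```python
import subprocess
import sys

# one interpreter per model
MODEL_PATHS = ["mapping_a.h5", "mapping_b.h5", "mapping_c.h5"]

procs = [
    subprocess.Popen([sys.executable, "model_server.py", path, str(9000 + i)])
    for i, path in enumerate(MODEL_PATHS)
]
# each process has its own GIL and its own TensorFlow state, so switching
# models is just a matter of which port you send OSC to
```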
The other thing I ran across was that the Keras model easily gets overloaded if you open it up and then just send it a million vectors to predict (like when moving the OSC sliders around). So I came up with a way for it to "prime the pump" (it just sends a couple of test runs) before I start sending it tons of OSC messages.
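The pump-priming itself is trivial, something like this (a sketch, assuming a Keras `model` that takes 4 inputs):

```python
import numpy as np

# run a few throwaway predictions so Keras builds its compute graph
# before live OSC traffic arrives; the first calls are the slow ones
def warm_up(model, n_inputs=4, runs=3):
    dummy = np.zeros((1, n_inputs))
    for _ in range(runs):
        model.predict(dummy, verbose=0)
```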
Lemur actually sends a ton of OSC data. I switched to a newer iPad at some point and had to filter out half the messages because the newer iPad sent twice as many as my old one and the models were barfing.
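If anyone hits the same thing, dropping every other message was enough for me. A sketch of the idea (the `(address, *args)` handler signature follows python-osc's dispatcher, which is an assumption; adapt to whatever OSC library you use):

```python
# forward only every `keep_every`-th message from a chatty OSC stream
class Decimator:
    def __init__(self, handler, keep_every=2):
        self.handler = handler
        self.keep_every = keep_every
        self.count = 0

    def __call__(self, address, *args):
        self.count += 1
        if self.count % self.keep_every == 0:
            self.handler(address, *args)
```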
Yeah, unfortunately the threading model in Python is garbage - I really hate writing anything using multiprocessing. The suggested or accepted way of doing these things is to bypass the global interpreter lock by spawning new instances, which is crude but usually works without a hitch.
I'll take a look at this tomorrow then - it's super interesting and it's in a language I'm familiar with.
That is what I was missing in your explanation. So you're doing a 4-to-16 mapping. I am dreaming of doing something semi-assisted, where I could do 4 settings by hand, and then vote for various competing mappings in between them, through descriptor analysis… not dissimilar to what was presented in this thread.
@tutschku was looking for a similar idea, where you place a few components of a corpus on a line and ask the algo to map everything onto the "Hans" descriptor space… we did a crude implementation in FluCoMa-land (with a KNN regressor) to show the limits of the current implementation and drive interface research forward. With @jamesbradbury, @weefuzzy and @groma doing a lot of Python, I'm curious to see where yours will go…
I am exploiting the fact that, in these complex synthesis spaces (I am doing digital synthesis, not concatenative), the interesting part is actually moving between the mappings. So, while I like the mappings I chose, when I play around with the synth I find more interesting things that I didn't choose. Then I can either accept the mapping as is, or add and remove vectors to focus in on areas of the multidimensional space that I like better.
When doing my rudimentary "area of the mapping" calculations, most of my individual NNs covered about 1 million points in the space. But one of them covered just 5,000. This is a hyper-focused region of the synth that kind of just does one thing. The sliders barely move when I am playing in that space.
I'm doing this with non-linear, feedbacking modular analogue patches, so I understand. But I try to curate some sort of behaviour quickly, as the patch might change and the values are almost infinite (24-bit accuracy per control…), so the mapping could be crude, but I want to interact with it.
Anyway, let's see what we come up with, this is exciting!
Sam, this tool steps through control parameter space in a way similar to what you're describing (I think). It sounds like you're not sampling the audio output, but perhaps there are some ideas in here to play with.
Another approach that I've worked with is using Poisson disc sampling in n dimensions (in your case 16 dimensions). It basically keeps trying to randomly add points to the space, but only adds them if they are sufficiently distant (Euclidean) from all the points already added, so that the final set is evenly distributed. Once it fails k times in a row to find a place where it can add a point, it stops. What you'll end up with is a bunch of n-dimensional points that are all at least a certain distance from each other, meaning they will be pretty evenly spread out. You can control the number of points it creates by changing the minimum distance. Larger distance = fewer points (obviously).
Then you can use these as the points for stepping through your control parameter space. n_dims_poisson_sampling.py.zip (2.4 KB)
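For reference, the mechanism boils down to dart throwing. A minimal sketch of the idea described above (the attached script is the real one; the function name and defaults here are invented, and in 16D the point count is very sensitive to `min_dist`):

```python
import numpy as np

# keep a random candidate only if it is at least `min_dist` from every
# accepted point; stop after `k` consecutive rejections
def poisson_disc(n_dims=16, min_dist=1.2, k=100, rng=None):
    rng = rng or np.random.default_rng()
    points, failures = [], 0
    while failures < k:
        candidate = rng.random(n_dims)
        if all(np.linalg.norm(candidate - p) >= min_dist for p in points):
            points.append(candidate)
            failures = 0
        else:
            failures += 1
    return np.array(points)

samples = poisson_disc()
print(len(samples), "well-spread points in 16 dimensions")
```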
Has anyone else thought about using this or have thoughts about it?
Thanks for this. I will check these out. Good ole Poisson. I love that guy.
I think my goal is exactly the opposite of this idea. Equidistance of points in parameter space has no correlate in the sonic meaning of complex systems. In linear systems it probably does. But in certain kinds of feedback systems, for instance, I really don't think it helps. Tiny movements can create enormous change and large movements can create none. When I move my 4-dimensional control space, all of the 16-dim parameter vectors are moving at different rates in different directions.