NN size question

I basically put “all” of the points from my 4-dimensional control space (4 sliders, each with a step of 0.01) through the trained networks, and added each output, rounded to the nearest 1/100, to a Python set. Since a set won’t hold duplicates, it only kept unique values, and there were 12 million unique outputs for this particular training (which is actually 8 trainings).
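In sketch form it was something like this (simplified, not my exact code; `model` stands for one trained Keras mapping, and the full grid is ~100 million points, so predictions have to be batched):

```python
import itertools
import numpy as np

def count_unique_outputs(model, step=0.01, batch_size=100_000):
    axis = np.round(np.arange(0.0, 1.0 + step / 2, step), 2)  # 0.00 ... 1.00
    seen = set()          # a set keeps only unique (rounded) output vectors
    batch = []
    for point in itertools.product(axis, repeat=4):            # the 4D grid
        batch.append(point)
        if len(batch) == batch_size:
            preds = model.predict(np.array(batch), verbose=0)
            seen.update(map(tuple, np.round(preds, 2)))        # nearest 1/100
            batch = []
    if batch:
        preds = model.predict(np.array(batch), verbose=0)
        seen.update(map(tuple, np.round(preds, 2)))
    return len(seen)
```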

16 sliders control the synth
Two 2D sliders are what I am using to control the 16 - so 4 values
I move the 16 sliders around until I have a sound I like. Then I put the 2D sliders in a corner or along a border or really anywhere. I add this point to a list.
Once I have 4 to 8 sounds I like for a particular mapping, I train the NN on those 4-8 vectors (I am using Keras in Python as my NN).
Then I play around with the mapping and see if I like it. I can add or remove points from the list and retrain whenever. The idea is to be in the feedback loop between the synth and the mappings.
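The Keras side is roughly this (a minimal sketch, not my exact code - layer sizes and the stand-in data are illustrative):

```python
import numpy as np
from tensorflow import keras

# Input: the 4 values from the two 2D sliders, one row per saved sound.
X = np.array([[0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.5, 1.0, 0.5, 0.0],
              [1.0, 1.0, 1.0, 1.0]])
# Target: the matching 16 synth-slider settings (random stand-ins here).
y = np.random.rand(4, 16)

model = keras.Sequential([
    keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    keras.layers.Dense(16, activation='sigmoid'),  # keep outputs in 0-1
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=500, verbose=0)  # tiny set: memorize the anchors

# Any in-between slider position now interpolates between the sounds.
print(model.predict(np.array([[0.3, 0.7, 0.2, 0.9]]), verbose=0))
```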

This week I went crazy and made a synth with 53 controls. It is insane because I have no idea how this synth actually works (it is the DX7 algorithm), but I can be super musical with it using the NN, which is I think the whole point.

Okay, so your NN is learning off raw input values, not any kind of descriptor information to describe the sounds you ‘like’?

Can you share the keras code?

Yeah. I am just using my ears. The “sounds good” descriptor.

The earlier version is here, which should work:

In the new version, I can have N dimensions for my NN. The Keras stuff is super simple.

Thanks for the super clear Python code - definitely will study it and implement it on my side for a play.

How does it do for speed? Is it fast enough?

Super fast. But if you want to run multiple Keras models and switch between them, there is (or at least was, about 2 months ago) a bug that makes Keras struggle with multithreading. This is why my setup creates a separate Python instance for each model. The unused ones just run in the background and don’t take up CPU.
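Something like this (a sketch - the server script name and ports are placeholders for however you wire up the OSC side):

```python
import subprocess

# One Python process per trained model; each loads its model and
# listens on its own OSC port. `serve_model.py` is hypothetical.
model_files = ['mapping_a.h5', 'mapping_b.h5', 'mapping_c.h5']
procs = [subprocess.Popen(['python', 'serve_model.py', path, str(9000 + i)])
         for i, path in enumerate(model_files)]
```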

The other thing I ran across was that the Keras model easily gets overloaded if you open it up and then just send it a million vectors to predict (like moving the OSC sliders around). So I came up with a way for it to “prime the pump” (it just sends a couple of test runs) before I start sending it tons of OSC messages.
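The warm-up itself is trivial (a sketch, assuming `model` is the loaded 4-in mapping):

```python
import numpy as np

# A few throwaway predictions absorb Keras's one-time setup cost
# before the live OSC stream starts.
for _ in range(3):
    model.predict(np.random.rand(1, 4), verbose=0)
```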

Lemur actually sends a ton of OSC data. I switched to a newer iPad at some point and had to filter out half the messages, because it sent twice as many as my old one and the models were barfing.
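The filtering can be as crude as dropping every other message (a hypothetical handler - `model` and `send_to_synth` are stand-ins for however you route things):

```python
import numpy as np

counter = 0

def on_slider(address, *args):
    """Hypothetical OSC handler: drop every other incoming message."""
    global counter
    counter += 1
    if counter % 2:       # skip half the incoming traffic
        return
    vec = np.array([args], dtype=float)
    send_to_synth(model.predict(vec, verbose=0)[0])  # send_to_synth assumed
```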

Yeah, unfortunately the threading model in Python is garbage - I really hate writing anything using multiprocessing. The suggested or accepted way of doing these things is to bypass the global interpreter lock and spawn new instances of things, which is crude but usually works without a hitch.

I’ll take a look at this tomorrow then - it’s super interesting and it’s in a language I’m familiar with :slight_smile:

That is what I was missing in your explanation. So you’re doing a 4-to-16 mapping. I am dreaming of doing something semi-assisted, where I could do 4 settings by hand and then vote for various competing mappings in between them, through descriptor analysis… not dissimilar to what was presented in this thread

@tutschku was looking for a similar idea, where you place a few components of a corpus in a line and ask the algo to map everything on the ‘Hans’ descriptor space… we did a crude implementation in FluCoMa-land (with a knn regressor) to show the limits of the current implementation and drive interface research forward. With @jamesbradbury @weefuzzy and @groma doing a lot of Python, I’m curious to see where yours will go…

I am exploiting the fact that, in these complex synthesis spaces (I am doing digital synthesis, not concatenative), the interesting part is actually moving between the mappings. So, while I like the mappings I chose, when I play around with the synth I find more interesting things that I didn’t choose. Then I can either accept the mapping as is, or add and remove vectors to focus in on areas of the multidimensional space that I like better.

When doing my rudimentary “area of the mapping” calculations, most of my individual NNs covered about 1 million points in the space. But one of them covered just 5000. That is a hyper-focused region of the synth that kind of just does one thing. The sliders barely move when I am playing in that space.

I’m doing this in non-linear modular analogue feedback patches, so I understand. But I try to curate some sort of behaviour quickly, as the patch might change and the values are almost infinite (24-bit accuracy per control…), so the mapping can be crude, but I want to interact with it.

Anyway, let’s see what we come up with, this is exciting!

Sam, this tool steps through control parameter space in a way similar to what you’re describing (I think). It sounds like you’re not sampling the audio output, but perhaps there are some ideas in here to play with.

Another approach that I’ve worked with is Poisson disc sampling in n dimensions (in your case 16). It keeps trying to randomly add points to the space, but only accepts them if they are sufficiently distant (Euclidean) from all points already added, so that the final set is evenly distributed. Once it fails to place a point k times in a row, it stops. You end up with a bunch of n-dimensional points that are all at least a certain distance from each other, meaning they will be pretty evenly spread out. You can control the number of points it creates by changing the minimum distance: larger distance = fewer points (obviously).

Then you can use these points for stepping through your control parameter space.
n_dims_poisson_sampling.py.zip (2.4 KB)
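The core of it looks roughly like this (a minimal dart-throwing sketch; the attached script is the full version):

```python
import numpy as np

def poisson_disc_nd(n_dims=16, min_dist=1.0, k=1000, rng=None):
    """Dart throwing: accept a random point only if it is at least
    min_dist (Euclidean) from every accepted point; stop after k
    consecutive failures."""
    rng = rng or np.random.default_rng()
    points, failures = [], 0
    while failures < k:
        candidate = rng.random(n_dims)
        if all(np.linalg.norm(candidate - p) >= min_dist for p in points):
            points.append(candidate)
            failures = 0
        else:
            failures += 1
    return np.array(points)

samples = poisson_disc_nd()       # smaller min_dist -> more points
print(len(samples), "evenly spread points in 16 dimensions")
```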

Has anyone else thought about using this or have thoughts about it?


This is a good pointer for my next exploration indeed (instead of being systematic, or maybe after the first 10 in each dim). Thanks!

Thanks for this. I will check these out. Good ole Poisson. I love that guy.

I think my goal is exactly the opposite of this idea. Equidistance of points in parameter space has no correlate in the sonic meaning of complex systems. In linear systems it probably does, but in certain kinds of feedback systems, for instance, I really don’t think it helps. Tiny movements can create enormous change, and large movements can create none. When I move through my 4-dimensional control space, all of the 16-dim parameter vectors are moving at different rates in different directions.

I think we are aiming at the same thing here: I’m trying to curate the space and mapping so my movements are spread slightly more in terms of their interest: various curves and mappings between the extremes could give me a bit more nuance and control over sub-states.

For instance, in 1D to express it clearly: if between corner 0 and corner 1 there are 2 zones of interest, one between 0.2 and 0.21 and one between 0.666 and 0.669, then maybe I’d like to allow my controller to spend most of its run around those places so I can have fun, and take less care in the rest… does it make sense?

I found a cut-and-paste solution to this. You need to redo the mapping. Copy the output vector at 0.2, which I assume is not currently a vector in your training, and then paste that solution, but now mapped to 0.05. Copy the output at 0.21 and map that to 0.45. Now do the same with Satan’s vector and the sexy one, mapping them closer to 0.5 and 1. Retrain and it should solve your problem.
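In code, that could look something like this (a sketch, assuming a 1-in / 16-out Keras `model` for the 1D example; the exact new positions are approximate):

```python
import numpy as np

# "Copy" the output vectors at the interesting input positions...
old_inputs = np.array([[0.2], [0.21], [0.666], [0.669]])
anchors = model.predict(old_inputs, verbose=0)

# ...then "paste" them at more spread-out inputs and retrain.
new_inputs = np.array([[0.05], [0.45], [0.55], [0.95]])
model.fit(new_inputs, anchors, epochs=500, verbose=0)
```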

OK this looks clever and simple. Sexy Satan will be explored. I now need to get that going!

thanks for sharing

I don’t have anything useful to contribute to this conversation other than saying it’s interesting, but I’d love to hear some of this and/or read some further explanation about it, perhaps in a separate forum post. That is, if it’s somewhere in a semi-sharable state.


I’m devising as many small examples as possible. m2n mapping will happen soon-ish from our object, and there will be simple examples, because I need those to get my head around the affordances…


Cool. Would just love to see some practical examples of the stuff, even if scratchy and so on.

Having many examples early in the process (of learning/digesting this stuff) is, I think, more useful than waiting for slick and polished examples at the end of the project.

I agree. At the moment we are confirming the design and challenges of the building blocks of the database, as you saw, and there are plenty of things to fight against already. @tutschku is onto something, for instance, and @tedmoore too. It is important that this foundation is solid; then building ML stuff on it is ‘easy’ for someone like @groma and @weefuzzy, who have used and abused such technologies for years… Sam’s and @tedmoore’s hybrid approaches are good too, as scikit-learn has many, many well-documented methods to help thinking and prototyping forward… I’m sure you want to do all this in RT, and because of that, it needs to be built on a solid foundation :wink:
