Video for my students about MLP and autoencoder

This is certainly not a beginner's video, as it requires some basic understanding of machine learning and the FluCoMa tools. It summarizes the concepts around MLPs and autoencoders in Max/MSP.
The patches are based on the FluCoMa tutorials.


I think the link is to your account's management page for the video rather than the public-facing one :slight_smile:

Is it fixed now?

Yes! It's embedded in the post as well, which is handy.

7:14 is interesting, when you bring up the point about the differences to linear interpolation. I know @a.harker is sceptical about the behaviour of neural networks offering benefits over a linear interpolation approach for this sort of thing, but I think you are right: if the presets were more complex to warp between, the NN would more likely be able to figure out a non-linear pathway. There's no guarantee that will sound 'better', but it certainly is a difference.

Probably worth investigating with some kind of A/B comparison, imo.

I should clarify that I didn't necessarily say linear interpolation. The question for some cases of controller mapping is how different it would be to some form of interpolation, which might be (for instance) cubic or something else. The level of non-linearity is important.

And - yes - as with other debates of this nature - a true test would be better than speculation.

In that regard the point to make would be that you don't have to go and design your function, rather the computer invents it for you, and that process itself can be fruitful…

When you make an interpolator you choose a function or approach, you don't necessarily design it. I have no doubt that the exact numerical results will be different from an NN, but the question is how nonlinear it is (or, more generally, how far away it is from what an interpolator would do). Whether that difference is fruitful is the question for me. If it's not nonlinear in a complex way, other methods may produce pretty similar results.

Anecdotally, when I've watched the outputs of some examples people have given in the past, I think I see the outputs moving towards the training points, much like they might with an interpolator. But:

1 - do I know if the difference is meaningful? No - that would need to be tested (by whoever is defining what meaningful would be).

2 - is the NN a valid way to reach the result? Of course, from a musical point of view, but I'm wary of the potential to imagine it is doing something more magical than building an interpolator for you. My hunch is that for a low number of training points there might be other, less complex ways to reach a very similar result, but that is just a hunch and I might be wrong, perhaps very much so. I think the test might be worth doing though…

I've now watched the segment of the video in question, and the shapes I see look very, very similar to some kind of interpolator (in fact they look quite linear), but we'd have to decide which kind. A simple model might be to add a contribution from all training points based on distance to the query point (such that the weights also sum to one).
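For concreteness, here is a minimal Python sketch of that simple model (Shepard-style inverse-distance weighting): each training point contributes in inverse proportion to its distance from the query, with the weights normalised to sum to one. All control positions and preset values here are made up for illustration.

```python
import numpy as np

def idw_interpolate(query, points, presets, power=2.0, eps=1e-9):
    """Shepard-style inverse-distance weighting.

    query   : (d,) position in control space
    points  : (n, d) stored control positions
    presets : (n, p) parameter vectors stored at those positions
    """
    dists = np.linalg.norm(points - query, axis=1)
    if np.any(dists < eps):               # query sits on a stored point
        return presets[np.argmin(dists)]
    weights = 1.0 / dists**power
    weights /= weights.sum()              # normalise so weights sum to one
    return weights @ presets              # convex combination of presets

# Three 2-D control positions, each storing a 4-parameter preset
points  = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
presets = np.array([[0.1, 0.9, 0.2, 0.5],
                    [0.8, 0.1, 0.6, 0.3],
                    [0.4, 0.5, 0.9, 0.7]])
print(idw_interpolate(np.array([0.4, 0.3]), points, presets))
```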

Most importantly, what I see is all the points moving towards the training point that the query is moving towards, in the direction we would expect, so we aren't seeing complex non-linearity emerge between two points, which might lead to something quite different from a straightforward interpolation.

I think you're splitting hairs here. Although I agree you don't design an interpolation function (though you could), it is definitely a component of designing some patch/program/module that performs interpolation. It is undeniably a choice that one makes, knowingly or unknowingly, and whether you implement it from a set of well-known interpolation functions or have an NN approximate the function are two different things.

I think this is largely a result of the example being quite basic and illustrative. The canonical XOR example (sketched below) is a better demonstration of how non-linearity can be captured and where something linear would fail.
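As an aside, here is a tiny Python illustration of that point: XOR is not separable by any single linear boundary, but one hidden layer of two ReLU units captures it. The weights are hand-picked rather than trained, purely to keep the example deterministic.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])   # XOR: no single linear boundary separates this

# One hidden layer of two ReLU units is enough:
# h1 fires when at least one input is on, h2 only when both are on,
# and the readout h1 - 3*h2 cancels the "both on" case back to zero.
relu = lambda z: np.maximum(z, 0.0)
h1 = relu(X @ np.array([1.0, 1.0]) - 0.5)
h2 = relu(X @ np.array([1.0, 1.0]) - 1.5)
out = h1 - 3.0 * h2
print(out)                         # [0.  0.5 0.5 0. ]
print((out > 0.25).astype(int))    # [0 1 1 0] == y
```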

Technically, and in terms of process, yes - but the question for me is whether they produce perceptually different results or not. [As a less important aside: for the NN one makes other choices (like activation function), so seeing the need to choose an interpolator as a downside in comparison to an NN is, for me, a false comparison.]

I'm not sure how that relates to this situation. My claim is not that NNs never provide solutions to non-linear problems that are hard to solve in other ways. My claim is simply that for some limited scenarios (controller mapping with limited training points being potentially one), they may essentially be a means to design an interpolator that may not be particularly perceptually different from something that would be much simpler to implement in a different way.

Sorry, your claims were never ratified in text, so I'm going off my memory of you being quite underwhelmed by them (MLPs) :slight_smile: I do agree with you in this case. I would be interested to see more examples where the function the NN learns could be quite challenging or unexpected in very simple mapping problems.

I would love someone much smarter than me to be able to explain this definitively, as intuitively I agree with what Hans has said - especially once your control space is more than 2 dimensions! I mean, once it is 3 or 4 or 5, etc., I don't even know how you would design that interpolation. And that is part of the magic here: 5 dims is just as easy as 2 dims.

I was looking at this book today for some kind of clarity:

I didn't get it. But for now I have drunk the Kool-Aid. I just don't really understand what is in the Kool-Aid. But I know I like it.


There are two obvious approaches.

  • The first creates only linear combinations of control points, so the number of dimensions is irrelevant: you simply find the contribution of each control point to the result, multiply, and take the sum (the inverse-distance sketch earlier in the thread is one such scheme).

  • The second is to assume that one can decouple the dimensions for the purpose of interpolation (see the sketch after this list).
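A minimal Python sketch of that second, decoupled approach, assuming (for illustration only) that the presets sit on a regular 2x2 grid of control positions; bilinear interpolation is then just two independent 1-D interpolations.

```python
import numpy as np

# grid[i, j] is the preset (3 parameters) stored at control position
# x = i, y = j on a regular 2x2 grid (values made up for illustration).
grid = np.array([[[0.1, 0.9, 0.2], [0.4, 0.5, 0.9]],
                 [[0.8, 0.1, 0.6], [0.3, 0.7, 0.4]]])

def bilinear(x, y, grid):
    # first pass: interpolate along x, independently for each y row
    lo = (1 - x) * grid[0, 0] + x * grid[1, 0]
    hi = (1 - x) * grid[0, 1] + x * grid[1, 1]
    # second pass: interpolate the two intermediate results along y
    return (1 - y) * lo + y * hi

print(bilinear(0.4, 0.3, grid))
```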

A really important distinction to make here is between an NN used to model systems that are known to be non-linear, and the assumption that the non-linearity of an NN might "invent" or "create" some kind of odd interpolation path that a simpler approach wouldn't. That is the bit I'm less sure about.


To add to this, I think that the process of working with the NN can confirm or deny suspicions about whether or not something is as non-linear as one thinks. The XOR to me is quite simple, but it is non-linear by definition, and so in a way our intuitive understanding can betray the reality of the problem space.

My approach to this comes from the very cool assumption that I don't need to care about linearity or not. Rebecca Fiebrink is quite clear about this, especially for small datasets like ours. In effect, I point at a few points in my arbitrary mapping space that might be imprecise and incomplete, and the machine tries to devise a mapping that will be approximately good. I even get extrapolation, aka guesstimates of things outside my space. So I'm happy.
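A sketch of that workflow in Python, with scikit-learn's MLPRegressor standing in for the FluCoMa MLP objects (all the data here is made up): a handful of pointed-at examples, a small network fit to them, then queries both inside and outside the trained region.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A handful of imprecise, incomplete "pointed-at" examples:
# 2-D controller positions mapped to 3 synth parameters (all made up).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
y = np.array([[0.1, 0.9, 0.2],
              [0.8, 0.1, 0.6],
              [0.3, 0.7, 0.4],
              [0.9, 0.8, 0.1],
              [0.5, 0.5, 0.5]])

mlp = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, y)

print(mlp.predict([[0.25, 0.75]]))  # a query inside the mapped space
print(mlp.predict([[1.5, 1.5]]))    # extrapolation: a guesstimate outside it
```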

As for rigorous comparison, we could randomise the test, but again, if it works and is musically expressive, I really want to make more music, not try to maybe find a system that might work better for certain cases…

Speaking of complicated systems: time to practice the fretless scales - linear, I'm told, but behold, it does not sound like it when I skip a day :slight_smile:

A key thing that I keep tripping myself up on is the nonlinearity of the parameter space and the nonlinearity of the sonic space - and though related, these are not the same thing.
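One way to make that distinction concrete (a made-up example, not from the thread): equal steps in a frequency parameter are unequal steps in perceived pitch, because pitch is roughly logarithmic in frequency.

```python
import numpy as np

# Equal steps in a parameter space (oscillator frequency in Hz)...
hz = np.linspace(220, 880, 7)
# ...are unequal steps in the sonic space: pitch is roughly logarithmic
# in frequency, at 12 semitones per doubling.
semitones = 12 * np.log2(hz / 220)
print(np.round(hz, 1))         # evenly spaced: 110 Hz apart
print(np.round(semitones, 2))  # steps shrink: 7.02, 4.98, 3.86, ...
```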


Agreed, and there is the non-linearity of your preset space too - you might create 'illogical' mappings in the space you attribute positions in, yet the MLP will try to make sense of it all in fascinating, smooth ways.

Thank you all for chiming in on this discussion. There are very valid details captured here which will continue to make me think. This group is just amazing. What are we going to do when the formal part of FluCoMa comes to an end? I will certainly miss our weekly encounters VERY MUCH.


They don't need to stop :slight_smile: