Play with biases/weights of a fluid.mlpregressor~

Hi everyone,

I’m trying to play with the biases/weights of a trained fluid.mlpregressor~ model.

I’m currently using a personalized version of an autoencoder patch from @tremblap (autoencoder.maxpat (99.0 KB)). I thought that dumping a trained model into a dict, changing its biases, and then reloading it would let me get different predictions.

But that doesn’t seem to affect the model at all.

If I keep training while changing its biases it kind of works (since I’m scrambling what it has learned), but as expected it quickly returns to a more stable state (in addition to the training/dumping/loading cycle being quite CPU-heavy).

So I wonder: is there a nicer way to change the biases of a trained fluid.mlpregressor~ for live use?

that sounds like a bug. It should affect the predictions, albeit in unpredictable ways… so you’re saying that dumping, modifying, and loading the modified data doesn’t change the prediction? This is strange. What if you load from scratch (i.e. into a blank object right after loading the patch)?

with this approach, I would run a separate server (in SC), a separate instance of the app (in Max), or a pd~ object (in Pd). In all 3 cases you create a separate process, so the machine won’t stall. In the playing part I would dump to disk; then, in the 2nd instance, read-mod-write via a temp file; then in the playing part read the model back. That way, you have a clear segregation.
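The read-mod-write step in that second instance could look something like this sketch in Python. It assumes the dumped model is a JSON file containing a "layers" array with "biases" lists per layer; those key names are assumptions, so inspect your own dump to confirm the actual structure before relying on them.

```python
import json
import random

def perturb_biases(in_path, out_path, amount=0.5):
    """Read a dumped MLP model file, add uniform noise to every bias,
    and write the result to a temp file for the playing patch to read back."""
    with open(in_path) as f:
        model = json.load(f)
    # "layers" / "biases" key names are assumptions -- check your dump
    for layer in model["layers"]:
        layer["biases"] = [b + random.uniform(-amount, amount)
                           for b in layer["biases"]]
    with open(out_path, "w") as f:
        json.dump(model, f)
```

The playing patch would then simply `read` the temp file, keeping all the file I/O off the audio-facing instance.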

but that implies that loading/reading a modified model actually works. It is supposed to. What version of the toolkit have you got? And if it’s the one from the package manager, can you try 1.0.6 from our GitHub (it is in test mode)?

Same result, the model seems to be unaffected.

Good idea, I will do it if I get better results. I am currently using version 1.0.6.

Maybe changing only the biases is not enough to disrupt a model?

I am trying with absurd numbers like +/-5000, and I also tried changing the weights directly in a dict before loading it into a fluid.mlpregressor~, but without success either.

if you load from scratch, the object shouldn’t work until you load a model. then it should work.

as a test, change everything completely randomly - it should give you garbage.

Extremely curious to know how you get on here @jpjpjp !

I did some more testing: it works when I perturb a model by hand and then load it, which gives weird predictions. But is it possible to change the internal wiring of an already trained, loaded model without reloading it?

You can try the patch posted here:

Load a sample into the live.drop at the top left of the patch, then wait a bit for the loss to get low and stop training. Everything then happens in the blue changeBiases subpatch, where I unpack the internal layer (2 neurons) into a dict and use 2 sliders to move away from its original biases. Then the model is reloaded into a fluid.mlpregressor~. If you manage to do this more efficiently I would love to see it :slight_smile:

no. but reloading (via dict) is very efficient and allows you to mess it up as much as you want. dump/load are your friends there.
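For reference, the transformation the changeBiases subpatch performs (two slider offsets applied to the 2-neuron internal layer) boils down to something like this Python sketch of the dump → modify → load cycle. The "layers"/"biases" key names and the layer index are assumptions about the dumped dict’s structure, not verified against the actual format.

```python
import copy

def offset_layer_biases(model, layer_index, offsets):
    """Return a copy of a dumped MLP model dict with per-neuron
    offsets added to one layer's biases (e.g. a 2-neuron bottleneck).
    Key names ("layers", "biases") are assumed -- check your own dump."""
    out = copy.deepcopy(model)
    biases = out["layers"][layer_index]["biases"]
    for i, off in enumerate(offsets):
        biases[i] += off
    return out

# e.g. two slider values pushing the bottleneck away from its trained state:
# modded = offset_layer_biases(model, 1, [0.8, -0.3])
```

Working on a copy keeps the original trained state around, so the sliders can always be "reset" by reloading the untouched dump.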

This is quite efficient actually - and customisable. Is there any performance bottleneck when you use it?

Oh ok great, I just thought using dicts was a bit expensive, as I am trying to do it in ~real time. But you’re right, there is no real bottleneck atm. I will continue my adventures!


thanks and please continue sharing code/ideas/sounds!