Hey gang,
Please excuse me if this is covered elsewhere in this now-epic discourse.
We are having loads of fun with an MLP trained on the cello mic (20 MFCCs) controlling CV on some analogue modular processing of the cello. Kinda inspired by Ashby-style double feedback loops, but seeking ultra-interestingness rather than ultra-stability.
We’ve just stumbled on a cool one. We can save the training data. But.
Q: can we save the weights and then reload them later?
Yes, you can write the model state to a file and read it back later. There is a caveat, though: the object's interface is not yet aware of the model you load.
So the object will work with the loaded state (for instance the hidden-layer topology), but if you ask your instance what its hidden parameter is, it will give you the wrong answer.
In other words, you can read and write, and soon it'll behave 100%, but for now you have to know the topology of what you are loading back. Does that help?
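To make the caveat concrete, here's a toy sketch in Python. Everything in it is made up for illustration (the `TinyMLP` class, the `write`/`read` names, the 20-in/1-out sizes) and is not the real object's interface; it just shows the behaviour described above: the loaded state carries its own topology, but the instance keeps reporting the hidden parameter it was constructed with.

```python
import json

class TinyMLP:
    """Toy stand-in for an MLP whose state can be written and read.
    Purely illustrative -- NOT the real object's API."""

    def __init__(self, hidden=(8,)):
        self.hidden = tuple(hidden)          # what the interface reports
        self._layers = self._init_layers(self.hidden)

    def _init_layers(self, hidden):
        # zero-initialised weight matrices: 20 MFCC inputs -> hidden -> 1 CV output
        sizes = (20, *hidden, 1)
        return [[[0.0] * cols for _ in range(rows)]
                for rows, cols in zip(sizes, sizes[1:])]

    def write(self, path):
        # save topology and weights together as JSON
        with open(path, "w") as f:
            json.dump({"hidden": self.hidden, "layers": self._layers}, f)

    def read(self, path):
        # load the saved state: processing now uses the loaded topology...
        with open(path) as f:
            state = json.load(f)
        self._layers = state["layers"]
        # ...but self.hidden is deliberately NOT updated here, mirroring the
        # caveat: the interface still reports the old parameter value

a = TinyMLP(hidden=(16, 16))
a.write("weights.json")

b = TinyMLP()            # fresh instance, default hidden=(8,)
b.read("weights.json")
print(b.hidden)          # -> (8,): the interface gives the "wrong" answer
print(len(b._layers))    # -> 3 weight matrices: the loaded (16, 16) topology is what runs
```

So the loaded model processes correctly, but querying the instance's parameters tells you about the constructor arguments, not the file, until the interface catches up.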