I think I need to give this a good college try, as, at best, I’ve left it running for hours with some generic-ish settings and that went nowhere. Obviously not an ideal way to go about it, though I did use a slightly modified version of @tutschku’s looping patch from the last plenary as a starting point. I just haven’t gotten my head around all the parameters and how to massage them in a way that works.
(Much like the dance of “pick the correct descriptors with your fleshy human brain so I can tell you you chose wrongly”, picking parameters for this stuff feels like pressing a button that says “change random numbers”, after which the computer buzzes and says “wrong, try again”, over and over until it, somehow, magically works. Surely unsupervised parameter selection for supervised machine learning is a thing?!)
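For what it’s worth, automated hyperparameter selection very much is a thing in the wider ML world, usually done as grid or random search over ranges of parameter values. Here’s a minimal sketch outside of Max, in Python with scikit-learn rather than the FluCoMa objects; the toy dataset and the parameter ranges are made-up placeholders:

```python
# Minimal random hyperparameter search sketch (not FluCoMa code).
# The data and parameter ranges below are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.random((200, 8))   # stand-in for descriptor data
y = rng.random((200, 2))   # stand-in for target mappings

# Ranges to sample from, instead of hand-tweaking one knob at a time.
param_space = {
    "hidden_layer_sizes": [(8,), (16,), (16, 8), (32, 16)],
    "activation": ["relu", "tanh", "logistic"],
    "learning_rate_init": [1e-1, 1e-2, 1e-3],
    "max_iter": [500, 1000, 2000],
}

search = RandomizedSearchCV(
    MLPRegressor(random_state=0),
    param_distributions=param_space,
    n_iter=20,   # try 20 random combinations
    cv=3,        # score each one with 3-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The computer still presses the “change random numbers” button, but it also keeps score, so you at least end up with the least-wrong combination instead of a buzzer.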
Ah right. Forgot about that (one’s ability to transformpoint). I’ll compare and update the speed comparison thread accordingly.
All good to know. I remember running into some issues with that (see point 1 about lack of convergence) when we first got these, as I had no idea what the ramifications of the @activation param were (à la fluid.mds~’s zesty @distancemetric).
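As a side note on @activation, one concrete ramification I eventually wrapped my head around (sketched here in plain numpy, not FluCoMa code) is that each activation can only emit values in a fixed range, so targets outside that range can never be fit, which shows up as “lack of convergence”:

```python
# Each activation clamps its output to a fixed range.
import numpy as np

x = np.linspace(-5, 5, 11)

print("tanh range:   ", np.tanh(x).min(), "to", np.tanh(x).max())   # (-1, 1)
sig = 1 / (1 + np.exp(-x))
print("sigmoid range:", sig.min(), "to", sig.max())                 # (0, 1)
relu = np.maximum(0, x)
print("relu range:   ", relu.min(), "to", relu.max())               # [0, inf)

# So if the targets live in, say, [0, 127] and the output activation is
# sigmoid, the net is stuck producing values in (0, 1): scale the targets
# into that range first, or pick an activation whose range matches.
```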
I can totally see that. I don’t want to derail a very useful conversation here, but I’d say 80% of my faff/friction at the moment has to do with getting the specific numbers I want, in the places that I want them. So I’m hardly even at the point of legitimate confusion, even though I’m quite obviously confused a lot (this thread included).
But indeed, the paradigm of moving around and transforming huge fields of numbers, where every tiny thing matters in a way that is (often) unintelligible to humans, is pretty hard to decipher. Particularly given how “it depends” things can be.