It’s very much on the cards, because one of our starting convictions is that it’s that sort of tweaking and tuning that needs to be available, where the algorithm affords it. When @tremblap talks about getting the ‘granularity’ right, this is what he’s getting at. By and large the starting approach has been to expose everything and then struggle with the interface problems that arise. One of the things we need to find out (hence involving people in the project) is how the tuning etc. pans out in practice, so we can improve on the interfaces and find a hopefully optimal blend of flexibility and usability. I realise we’re not there yet…
All this becomes a more acute concern for the next toolbox, because there will be more of these more abstract, trainable algorithms like NMF that need tuning, and with that, greater scope for making things bewildering.
Like, architecturally speaking, some of the stuff discussed in the thread around realtime NMF matching (and semi-reworking the concept from the machine learning drum trigger thing) just isn’t possible in the fluid.verse~ because there is no step that allows a pre-training/pre-seeding of an expected material to then optimize towards.
Not quite sure I follow: pre-training and seeding are possible with NMF via the actmode and basesmode attributes? Perhaps the docs need to make more of this, but that gives you a range of ways of steering it in a supervised or semi-supervised way.
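To make the distinction concrete, here’s a minimal numpy sketch of the supervised case (this is not the fluid objects’ actual implementation, and the function name nmf_fixed_bases is just illustrative): basis templates trained elsewhere are held fixed, and only the activations are updated against the incoming spectrogram, using standard KL-divergence multiplicative updates.

```python
import numpy as np

def nmf_fixed_bases(V, W, n_iter=100, eps=1e-10):
    """Estimate activations H for a magnitude spectrogram V against
    pre-trained, *fixed* bases W (the 'supervised' case: bases frozen,
    only activations updated), via KL multiplicative updates.

    V : (bins, frames) non-negative magnitude spectrogram
    W : (bins, components) pre-seeded basis templates
    returns H : (components, frames) activations
    """
    components = W.shape[1]
    frames = V.shape[1]
    H = np.random.rand(components, frames)   # random activation init
    for _ in range(n_iter):
        WH = W @ H + eps
        # multiplicative update for H only; W is never touched
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    return H
```

The ‘seed’ variant would additionally update W on each iteration, starting from the templates rather than random noise: that’s the semi-supervised middle ground between letting the algorithm find its own bases and pinning them down completely.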
As before, we still don’t know the extent of the creative possibilities here, because there hasn’t been much creative work with this stuff yet. As an algorithm, it definitely has its quirks, and seeing which of those get in people’s way will be useful in narrowing down which extensions to NMF would be most helpful to add (there are loads).