Interesting. Is that something that’s doable with the kind of outputs the NNs in FluCoMa produce?
I’m not married to the one-hot thing at all; I’m just after this idea of creating gradients/interpolation, and the one-hot approach has given me (by a large margin) the best results so far.
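For reference, the way I understand the “gradient from one-hot outputs” idea is something like this toy sketch (Python/scikit-learn standing in for FluCoMa’s MLP, since it’s easier to show than a Max patch; all data and network settings are made up):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Fake descriptor vectors (e.g. 13 MFCCs per hit) for two trained classes:
# class 0 = "centre" hits, class 1 = "edge" hits. Values are fabricated.
centre = rng.normal(loc=0.0, scale=0.3, size=(50, 13))
edge = rng.normal(loc=1.0, scale=0.3, size=(50, 13))
X = np.vstack([centre, edge])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Instead of taking the argmax (a hard class label), read the class
# probability as a continuous 0..1 position along the continuum.
halfway_hit = rng.normal(loc=0.5, scale=0.3, size=(1, 13))
print(clf.predict_proba(halfway_hit)[0, 1])  # lands somewhere between 0 and 1
```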
By supervised regression problem, do you mean just training on individual points along the continuum? Or does that mean something else in this context?
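If it does just mean scalar targets, I imagine the shape of it is something like this (again a hypothetical Python sketch rather than anything FluCoMa-specific; the feature values are invented, and each hit’s target is a single number for its position rather than a one-hot vector):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Scalar targets: hits recorded at known positions along the
# centre->edge continuum (0.0 = centre, 1.0 = edge), 20 hits each.
positions = np.repeat([0.0, 0.25, 0.5, 0.75, 1.0], 20)

# Toy "descriptor" vectors (e.g. 13 MFCCs per hit) that drift with position.
features = rng.normal(loc=positions[:, None] * 2.0, scale=0.3,
                      size=(len(positions), 13))

model = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(features, positions)

# A new hit comes back as a continuous position rather than a class label,
# so the "in between" values fall out of the model for free.
new_hit = rng.normal(loc=0.6 * 2.0, scale=0.3, size=(1, 13))
print(model.predict(new_hit))  # should land around 0.6
```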
I did do some testing with multiple “in between” classes upthread, with results that weren’t meaningfully better:
I think back then I landed on 3 zones giving the best results for a reasonable faff-to-improvement ratio, but my hesitation here is that this would create a separate training/plumbing process from training individual classes (or loading classes that have already been trained).
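(For what it’s worth, one way the multi-zone version can give a smoother gradient is to collapse the zone probabilities into an expected position rather than taking a hard winner; a tiny sketch, with the zone centre positions being my assumption:)

```python
import numpy as np

zone_positions = np.array([0.0, 0.5, 1.0])  # assumed centres of 3 zones
probs = np.array([0.1, 0.7, 0.2])           # e.g. class probabilities for one hit

# Weighted average of zone positions = a continuous position estimate.
position = float(probs @ zone_positions)
print(position)  # 0.55
```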
I don’t know what the Sensory Percussion software itself is doing, but I want to say I heard a podcast ages ago where they mentioned using the latent space for interpolation.
Looking at this paper from AIL on classifying percussive guitar hits, they talk about latent spaces as well, but in the context of VAEs, which, as far as I know, we don’t have access to in Max.
Here’s a (very short) video example of their results:
Sadly, much of the paper goes way, way over my head in terms of technical detail/theory, and the use of CNNs/VAEs means that I can’t directly map what they’re doing onto FluCoMa either (though it would be nice!).
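That said, the basic “interpolate in a latent space” move doesn’t strictly need a VAE. Here’s a toy Python sketch of the same idea with PCA standing in as a (much cruder, linear) latent space, since FluCoMa does have PCA; the data is fabricated and this is just to show the shape of the idea, not their method:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
hits = rng.normal(size=(200, 13))       # fake descriptor frames

pca = PCA(n_components=2).fit(hits)     # the stand-in "latent space"

z_a = pca.transform(hits[:1])           # code for one hit
z_b = pca.transform(hits[1:2])          # code for another hit

# Walk the straight line between the two codes and decode each step.
for t in np.linspace(0.0, 1.0, 5):
    z = (1 - t) * z_a + t * z_b
    decoded = pca.inverse_transform(z)  # back to descriptor space
    print(round(t, 2), decoded[0, :3])
```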
Maybe we can start a new thread and work out what a usefully generic collection of utilities for this would look like.