Regression + Classification = Regressification?

I thought about that after posting: rather than waiting 256 samples and then having a summary of 7 frames, I could probably feed it a frame at a time (via a parallel real-time fluid.descriptors~ thing(?)) and then hope for the best.

I could see something like this being tricky in that I'd have to decide when to "take" the prediction, which would ideally line up nicely with the querying process so that I can query with A + B.

I was also wondering about the temporal aspect of it. Since I only have 7 frames to begin with, and since I'm ignoring the outer ones for fluid.bufspectralshape~ anyway, perhaps the derivative (c/w)ould encapsulate "enough" information about what was happening in those few frames.

So if I have loudness, centroid, flatness, and rolloff, each with a derivative (and std for good measure), could I take those 12 static values and then run the classification on that?
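Just to make sure I'm picturing this right, here's a rough sketch of what I mean by collapsing the frames into 12 static values (the descriptor numbers are made-up placeholders, not real analysis data):

```python
import statistics

# 7 analysis frames x 4 descriptors (loudness, centroid, flatness, rolloff),
# as in the post; the numbers here are made-up placeholders.
frames = [
    [0.10, 1200.0, 0.30, 4000.0],
    [0.40, 1500.0, 0.28, 4300.0],
    [0.70, 1800.0, 0.25, 4600.0],
    [0.60, 1700.0, 0.26, 4500.0],
    [0.45, 1600.0, 0.27, 4400.0],
    [0.30, 1400.0, 0.29, 4200.0],
    [0.15, 1250.0, 0.31, 4050.0],
]

def summarise(frames):
    """Collapse the short time series into static values: per descriptor,
    the mean, the mean first derivative (average frame-to-frame change),
    and the std -> 4 descriptors x 3 stats = 12 values."""
    features = []
    for col in zip(*frames):  # one descriptor track at a time
        diffs = [b - a for a, b in zip(col, col[1:])]
        features += [
            statistics.mean(col),
            statistics.mean(diffs),
            statistics.stdev(col),
        ]
    return features

features = summarise(frames)
print(len(features))  # 12
```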

//////////////////////////////////////////////////////////////////////////////////////////////////////////////

So if I understand this part right, I would train loads of discrete A attacks by manually tagging them, then ask for the best match as I normally do now, but then use the known B section from the matched A to feed further down the chain?

Or more specifically, the classifier itself would have no idea about B, it would just tell me the nearest A and then I go off and pull up the corresponding data?
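So in pseudo-Python terms, something like this (all the training data and the B store are hypothetical stand-ins, just to check I've got the shape of the idea right):

```python
import math

# Hypothetical training store: one feature vector per manually tagged
# attack (A), with the corresponding B data kept in a parallel list.
a_features = [[0.1, 0.2], [0.8, 0.3], [0.5, 0.9]]
b_data = ["b-of-attack-0", "b-of-attack-1", "b-of-attack-2"]

def match(query):
    """The matcher only ever sees A: it returns the index of the nearest
    trained attack, and that index is used to pull up the stored B."""
    dists = [math.dist(query, a) for a in a_features]
    nearest = dists.index(min(dists))
    return nearest, b_data[nearest]

idx, b = match([0.75, 0.35])  # an incoming hit, close to attack 1
```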

//////////////////////////////////////////////////////////////////////////////////////////////////////////////
I guess with this it would be better/sexier if it were more unsupervised(?), where I could just record a bunch of different attacks/hits, not have to worry about individually training/tagging them, and then ask for matching based on that input.

These two things, I suppose, are not mutually exclusive in that I can just assign a random classification to each attack, and feed it a load of attacks, so each class would have a single training point, but that seems wrong.

Both approaches (in my likely interpretations of what you said) would be limited to a finite training set. That is, I'd have to play loads of different attacks at loads of different dynamics, and the precision of the system would be limited by the proximity of an incoming hit to a training point. This is why I was initially thinking of some kind of regression(?) thing, where I could also extrapolate into spaces where there is no/sparse training data (e.g. a quieter version of an attack I do, or hitting a different drum/object altogether).
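To show what I mean by the regression idea: something that interpolates between training hits instead of snapping to the nearest one. All the names and numbers below are hypothetical; this is just a minimal inverse-distance-weighted sketch, not a claim about how it would actually be implemented:

```python
import math

# A regression take on the same idea: inputs are descriptor pairs,
# outputs a continuous target; all data here is made up.
train_x = [(0.2, 0.5), (0.8, 0.5), (0.5, 0.9)]  # e.g. loudness, centroid
train_y = [0.0, 1.0, 0.5]                       # e.g. a control parameter

def predict(query, eps=1e-9):
    """Inverse-distance-weighted regression: a quieter version of a known
    attack lands *between* training points rather than in a hard class."""
    weights = [1.0 / (math.dist(query, x) + eps) for x in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

print(predict((0.5, 0.5)))  # a hit between the training points gets a blended value
```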