Intelligent Feature Selection (with SVM or PCA)

That seems a lot more sensible, and is what I would expect, so something’s gone funny in the code/math somewhere.

Does the squaring/weighting produce this for you?

0.547614 0.138316 0.081888 0.062887 0.031371 0.022768 0.02044 0.01675 0.013668 0.011574 0.009137 0.008754 0.007181 0.006379 0.005358 0.004786 0.004126 0.003053 0.002383 0.00116 0.000405
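
For reference, this is roughly what I mean by the squaring/weighting, as a little numpy sketch (the input values here are made up, not my actual fit): square the singular values and normalize so they sum to 1.

```python
import numpy as np

# hypothetical singular values straight out of a PCA fit
# (stand-ins; the real ones would come from the fit's values field)
singular_values = np.array([9.3, 4.7, 3.6, 3.2, 2.2, 1.9, 1.8])

# square each value and normalize so they sum to 1, i.e. the
# proportion of total variance explained by each component
variance = singular_values ** 2
weights = variance / variance.sum()
print(weights)
```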

I don’t see how, multiplying those numbers by the matrix (unless the matrix itself heavily skews another way), I wouldn’t end up with what I got in the end for mine (a sequence of items in the order that they are listed in the values field).
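
And this is the multiplication I’m picturing, again just a numpy sketch with a made-up bases matrix (assuming rows are components and columns are the original features), using the weights above:

```python
import numpy as np

# the weights from above (squared/normalized singular values)
weights = np.array([
    0.547614, 0.138316, 0.081888, 0.062887, 0.031371, 0.022768, 0.02044,
    0.01675, 0.013668, 0.011574, 0.009137, 0.008754, 0.007181, 0.006379,
    0.005358, 0.004786, 0.004126, 0.003053, 0.002383, 0.00116, 0.000405,
])

# hypothetical bases matrix: one row per component, one column per feature
rng = np.random.default_rng(0)
bases = rng.standard_normal((len(weights), len(weights)))

# weight each component's loadings by its explained variance, then sum the
# absolute contributions per original feature to get an importance ranking
contribution = (np.abs(bases) * weights[:, None]).sum(axis=0)
ranking = np.argsort(contribution)[::-1]
print(ranking)  # feature indices, most to least important
```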

Yeah, curious how you get on with that. With my approach, I don’t so much have the luxury of time as my real-world waiting period is 256 samples (so even a single derivative is kind of pushing it there…) and the longer (predicted) window is 4410 samples, so again, not a lot of meat on those bones. I guess I can further segment the 4410, but I’m not entirely sure what I’ll find there (in terms of analyzing the sounds coming from my real-time input).
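
If I do end up sub-segmenting the 4410, I imagine it would be something along these lines (frame/hop sizes totally made up):

```python
import numpy as np

WINDOW = 4410   # the longer (predicted) window, in samples
FRAME = 441     # made-up sub-frame size
HOP = 441       # made-up hop size (non-overlapping here)

buffer = np.random.rand(WINDOW)   # stand-in for the captured audio

# chop the window into sub-frames that could each get their own descriptors/stats
frames = [buffer[i:i + FRAME] for i in range(0, WINDOW - FRAME + 1, HOP)]
print(len(frames), "frames of", FRAME, "samples")
```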

The reason I’m more interested in morphologies/gestures is that if they are separable from (absolute) durations, I could potentially query a much longer sample for a similar overall morphology regardless of whether it’s 500ms or 5s long.
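
One way I imagine getting morphology separated from duration (just a sketch of the idea, nothing from the actual patch) is to resample each descriptor trajectory to a fixed number of points before comparing, so a 500ms gesture and a 5s gesture end up the same length:

```python
import numpy as np

def normalize_morphology(trajectory, n_points=64):
    """Resample a descriptor trajectory of any length to n_points so its
    overall shape can be compared independently of absolute duration."""
    trajectory = np.asarray(trajectory, dtype=float)
    old_x = np.linspace(0.0, 1.0, len(trajectory))
    new_x = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_x, old_x, trajectory)

# a short gesture and a long gesture become directly comparable
short_gesture = normalize_morphology(np.random.rand(22))    # dummy data
long_gesture  = normalize_morphology(np.random.rand(220))   # dummy data
distance = np.linalg.norm(short_gesture - long_gesture)
print(distance)
```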

If that’s the case, I’ve got oodles of “labelled” files. Now, how well that corresponds to the sounding results is yet to be determined (e.g. hitting a fork vs hitting a spoon).

I was poking at this a couple of years ago in this thread, about how to optimize some of these algorithms/descriptors/stats/searches if you have known inputs. That’s somewhat where I’m at with the approach(es) from this patch, trying to get the most differentiation for a given set of inputs, but I’m sure there’s way more that can be done. I just wouldn’t even know where to begin.
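
For what it’s worth, one generic place to begin with known/labelled inputs might be ranking descriptors by a simple separability score (a Fisher-style ratio of between-class to within-class variance); this is just a sketch of that idea, not anything from that thread or this patch:

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature by between-class vs within-class variance.
    X: (n_samples, n_features) descriptor matrix, y: class labels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)   # higher = more discriminative

# example with dummy data: 100 labelled analyses, 21 descriptors
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 21))
y = rng.integers(0, 4, size=100)   # e.g. fork / spoon / knife / bowl labels
ranking = np.argsort(fisher_scores(X, y))[::-1]
print(ranking)  # descriptor indices, most to least discriminative
```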