Regression + Classification = Regressification?

After some growing pains in adapting the patch, I got it working with MFCCs. With no other fine-tuning, literally dropping in the same mfccs/stats from the JIT-MFCC patch, it's already worlds more robust.

I get 67.05% accuracy out of the gate (over a sample of 2000 tests), which is more than twice what I was getting before.

Speed has gone down some, though. It was around 0.58ms per query when running a KDTree with 12 dimensions in it, and it's up to 1.5ms with the 96-dimensional mfcc/stats thing.
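
Out of curiosity, here's a rough sketch of that dimensionality-vs-speed relationship outside of Max, with scikit-learn's KDTree standing in for fluid.kdtree~ (the corpus size and dimension counts are placeholders, not my actual data):

```python
# Rough timing sketch: single-point nearest-neighbour queries at 12 vs 96 dims.
import time
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
n_points = 2000     # hypothetical corpus size
n_queries = 500     # number of timed queries

for n_dims in (12, 96):
    corpus = rng.standard_normal((n_points, n_dims))
    tree = KDTree(corpus)
    queries = rng.standard_normal((n_queries, n_dims))

    start = time.perf_counter()
    for q in queries:
        tree.query(q.reshape(1, -1), k=1)   # nearest neighbour, one query point at a time
    elapsed = time.perf_counter() - start

    print(f"{n_dims:3d} dims: {1000 * elapsed / n_queries:.3f} ms per query")
```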

I haven’t optimized anything, so it’s possible I’m being messy in places with how I’m converting between data types. I do remember it being faster than this in the JIT-MFCC patch in context, but I’ll worry about that later.

edit:
Went and compared with the original JIT-MFCC patch, and it is just as “slow” as this, coming in at 1.5ms per query. I guess that seemed fast at the time, all things considered.

What I want to try next is taking more stats/descriptors and trying some fluid.mds~ on them, to bring things down to a manageable number of dimensions. For the purposes of what I’m trying to do here, I think not including a loudness descriptor is probably good, since I wouldn’t have to worry about covering variations in loudness in my initial training set.
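
As a rough sketch of that reduction step (scikit-learn's MDS standing in for fluid.mds~, with a made-up 96-D stats matrix and an arbitrary target of 12 dimensions):

```python
# Batch MDS on a (hypothetical) corpus of descriptor/stats vectors.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
corpus_stats = rng.standard_normal((300, 96))   # placeholder stats matrix, one row per corpus entry

mds = MDS(n_components=12, random_state=0)      # metric MDS down to 12 dims
reduced = mds.fit_transform(corpus_stats)
print(reduced.shape)                            # (300, 12)
```

Worth noting that the MDS in this sketch only offers fit_transform on the batch it was given, with no way to project unseen points afterwards, which feeds directly into the concern below.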

The only thing I’m concerned about (which I’ll do some testing for) is the difference in speed between querying the larger-dimensional space directly, vs applying a pre-computed dimensionality reduction fit to incoming real-time data and then querying the smaller space. Like, is the latter (shrinking dimensions on real-time data from a pre-computed fit, then querying) actually faster than just querying the larger-dimensional space in the first place…
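
To make that comparison concrete, here's a sketch of the two paths. Since the MDS above can't project new points, PCA stands in here as "a pre-computed fit" that can transform incoming real-time data, and all sizes are hypothetical:

```python
# Compare: query the 96-D tree directly, vs project each query to 12-D and query a smaller tree.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KDTree

rng = np.random.default_rng(2)
n_points, high_d, low_d = 2000, 96, 12
corpus = rng.standard_normal((n_points, high_d))

# Offline: fit the reduction once and build both trees ahead of time.
reducer = PCA(n_components=low_d).fit(corpus)
tree_high = KDTree(corpus)
tree_low = KDTree(reducer.transform(corpus))

queries = rng.standard_normal((500, high_d))

start = time.perf_counter()
for q in queries:
    tree_high.query(q.reshape(1, -1), k=1)
t_high = time.perf_counter() - start

start = time.perf_counter()
for q in queries:
    q_low = reducer.transform(q.reshape(1, -1))   # per-query projection cost is counted
    tree_low.query(q_low, k=1)
t_low = time.perf_counter() - start

print(f"query 96-D directly:  {1000 * t_high / len(queries):.3f} ms per query")
print(f"project + query 12-D: {1000 * t_low / len(queries):.3f} ms per query")
```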

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

So in effect, my “real world” 0-256 analysis would include:

  • (whatever stats work best for actual corpus querying(?))
  • (loudness-related analysis for loudness compensation)
  • (40 melbands for spectral compensation)
  • (buttloads of mfccs/stats for kdtree prediction) <---- focus of this thread

So I would use the mfccs/stats to query a fluid.kdtree~, and then pull up the actual descriptors/stats that are good for querying from the relevant fluid.dataset~ to create a composite search entry that may or may not include the mfcc soup itself…
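
As a very rough sketch of that composite-entry idea (plain Python dictionaries and scikit-learn standing in for fluid.dataset~ and fluid.kdtree~, with made-up ids and sizes):

```python
# Nearest neighbour in the mfcc/stats space -> pull that entry's "good for querying"
# descriptors from a second store keyed by the same id, optionally appending the mfcc soup.
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(3)
ids = [f"slice-{i}" for i in range(2000)]
mfcc_stats = rng.standard_normal((2000, 96))                    # the kdtree prediction space
query_descriptors = {i: rng.standard_normal(12) for i in ids}   # stats used for actual corpus querying

tree = KDTree(mfcc_stats)

def composite_entry(incoming, include_mfccs=False):
    """Nearest neighbour in mfcc/stats space -> composite entry for the real corpus query."""
    _, idx = tree.query(incoming.reshape(1, -1), k=1)
    i = int(idx[0, 0])
    entry = query_descriptors[ids[i]]
    if include_mfccs:                        # "may or may not include the mfcc soup itself"
        entry = np.concatenate([entry, mfcc_stats[i]])
    return ids[i], entry

nearest_id, entry = composite_entry(rng.standard_normal(96))
print(nearest_id, entry.shape)
```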