That makes sense.
Yeah that’s what I’m thinking.
It will always have to touch the scheduler somewhere cuz I have to bang the fluid.buf()~ stuff, but I can hopefully minimize needless slop and jitter for that part of the patch.
For the most part I’m not doing this yet. Having a way to bias a query would be super handy though, especially since I’m analyzing multiple time windows now (for the offline/buffer stuff).
I was thinking today that I do need to sit down and compare speeds for that stuff too, though I imagine part of the appeal of the knn stuff is that it pre-builds an index over the corpus, so each query doesn’t have to re-scan everything, making querying/matching faster(?).
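(As an aside, a toy sketch of what the brute-force side of that comparison looks like, in Python/NumPy rather than anything FluCoMa-specific — all the names and data here are made up. The point is that a naive match recomputes every distance per query, which is exactly the per-query work a pre-built index like a kd-tree avoids.)

```python
import numpy as np

def brute_force_nn(corpus, query, k=3):
    """Naive k-nearest-neighbour: computes the distance from the query
    to every corpus point on every call -- O(N) work per query.
    A kd-tree instead pays a one-time build cost so queries skip most points."""
    dists = np.linalg.norm(corpus - query, axis=1)
    idx = np.argsort(dists)[:k]
    return idx, dists[idx]

# toy corpus: 1000 points of 13-dim features (MFCC-ish vectors, hypothetical)
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 13))
query = corpus[42] + 0.001  # a query sitting right next to a known point

idx, dists = brute_force_nn(corpus, query, k=3)
print(idx[0])  # → 42 (the nearby point wins)
```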
More than anything, I just need to hunker down and figure out how to do this “new school” and try things out.
I was thinking of trying to combine the knn stuff with dimensionality reduction/clustering, where the clusters are “directly” mapped to the classifiers, and then do distance matching within each classifier. Could be an interesting way to leverage both of those approaches.
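(To make that idea concrete, here’s a minimal sketch of the two-stage lookup in Python/NumPy — a tiny hand-rolled k-means stands in for whatever clustering/dim-reduction would actually produce the classes, and everything here is illustrative, not how FluCoMa does it. Stage one picks the nearest cluster, stage two distance-matches only within that cluster’s members.)

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny Lloyd's-algorithm k-means -- a stand-in for the real
    clustering/dimensionality-reduction step."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
        centroids = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                              else centroids[c] for c in range(k)])
    # final assignment so labels agree with the returned centroids
    labels = np.argmin(
        np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
    return centroids, labels

def classify_then_match(X, labels, centroids, query):
    """Stage 1: pick the cluster ('classifier') whose centroid is nearest.
    Stage 2: brute-force distance match only within that cluster."""
    c = np.argmin(np.linalg.norm(centroids - query, axis=1))
    members = np.flatnonzero(labels == c)
    d = np.linalg.norm(X[members] - query, axis=1)
    return members[np.argmin(d)]

# toy corpus: two well-separated blobs of 4-dim features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 4)),
               rng.normal(10, 0.5, (50, 4))])
centroids, labels = kmeans(X, k=2)
best = classify_then_match(X, labels, centroids, X[7])
print(best)  # → 7 (the query is itself a corpus point, so it matches itself)
```

The upside is that stage 2 only scans one cluster’s members instead of the whole corpus; the risk is a query near a cluster boundary getting routed to the “wrong” classifier and never seeing its true nearest neighbour.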