Dimensionality reduction on real-time input

(we need a “usage questions” category for the TB2 stuff)

So I’m working on a speed comparison: either fitting and querying a high-dimensional fluid.kdtree~, or applying some fluid.mds~ or fluid.pca~ (the latter for now) to the real-time input and querying against a lower-dimensional space instead.

BUT

I’m not sure I understand the workflow here.

I get how I can apply PCA/MDS to a fluid.dataset~, and this is useful for creating a static corpus; the same goes for applying the same fit to another fluid.dataset~ which I want to compare it against. I guess with dimensionality reduction it is critical to have the dimensional scaling be the same on everything you are trying to compare, as otherwise it’s just a bunch of random numbers (right?).
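That intuition seems right: a reduction fitted on one dataset has to be *reused* (not refitted) on anything you want to compare against it, or the axes mean different things. Just as a conceptual sketch outside Max — using scikit-learn's PCA, not the FluCoMa objects, with made-up random data standing in for analyses:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 13))  # e.g. 13 MFCCs per corpus entry
other = rng.normal(size=(20, 13))    # a second dataset to compare against

pca = PCA(n_components=2)
corpus_2d = pca.fit_transform(corpus)  # fit on the corpus, then project it
other_2d = pca.transform(other)        # reuse the SAME fit -- no refitting

# Refitting on `other` would give axes with no relation to the corpus ones,
# so distances between the two projections would be meaningless:
wrong_2d = PCA(n_components=2).fit_transform(other)
```

So the "bunch of random numbers" scenario is exactly what happens if each dataset gets its own independent fit.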

Now where I get lost is, if I’m doing this on real-time input, I have analyses that are dumped out into buffer~s, usually with some fluid.bufcompose~-ing to get what I want where I want it. For a fluid.kdtree~ I would then point the fluid.kdtree~ at that buffer to look for my nearest match.

If I’m using a dimensionally reduced space, does that mean I have to do something like this:

[audioInput] -> [analysis] -> {buffer} -> [bufcompose] -> {buffer} -> [dataset] -> [pca/mds] -> [dataset] -> {buffer} -> [kdtree]

Per query(!).
Is that right?
More specifically, is the intention to go from buffer, to dataset, pca/mds/normalize, then back into a dataset, and finally a buffer?

Would it be possible (at all) to apply a dimensionality reduction to a single “point” in a buffer?
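Conceptually, reducing a single point per query is cheap: the expensive fit happens once offline, and each real-time frame is just pushed through the already-fitted transform before hitting the tree. A hedged sketch of that split, again with scikit-learn rather than the FluCoMa objects, and random data standing in for analysis frames:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KDTree

rng = np.random.default_rng(1)
corpus = rng.normal(size=(200, 13))  # offline corpus analyses (13 dims each)

# Offline, once: fit the reduction and build the tree in the reduced space.
pca = PCA(n_components=2).fit(corpus)
tree = KDTree(pca.transform(corpus))

# Per query: one analysis frame (the single "point" in a buffer) is reduced
# with the already-fitted transform, then used to query the tree directly.
query = rng.normal(size=(1, 13))
dist, idx = tree.query(pca.transform(query), k=1)
nearest = int(idx[0, 0])  # index of the nearest corpus entry
```

Whether the Max objects let you skip the per-query buffer -> dataset -> buffer round trip is the real question, but nothing about the maths requires a whole dataset per query.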

on its way!

edit done and moved!


Yes it is. Check the example (in Max) entitled 6a or 6 (dim redux) and you will see my query with the mouse (which is an arbitrary point).
