Ran a quick test and running it with 2d (vs 20) is a tiny amount faster. On my (faster) laptop, I get 0.37ms for the real-time transform on 20d space and 0.35ms for the same but going down to a 2d space.
I’m curious to see how far down I can push it. I’m aiming to keep as much accuracy as possible since it will make up a big part of how I actually query the corpus space (pseudo-predictive matching). If that can be 2d, even better!
I may very well do something where I make a whole bunch of 2d reductions (e.g. a 2d “loudness” space, a 2d “spectral” space, etc…), and then concatenate them into a manually curated dimensionality-reduced space for the final matching step. But mucho experiments to try before I get it there.
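Just to make that concrete, here’s a rough sketch of the “concatenate several 2d reductions” idea in Python/scikit-learn rather than my actual setup; the PCA reducers, feature shapes, and nearest-neighbour lookup are all stand-ins for whatever I end up using:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_slices = 1000

# Stand-in descriptor matrices for the corpus: one block per descriptor family.
loudness_feats = rng.normal(size=(n_slices, 7))    # e.g. loudness stats per slice
spectral_feats = rng.normal(size=(n_slices, 20))   # e.g. spectral shape stats per slice

# One 2d reduction per family (PCA here as a placeholder for the real reducer).
loudness_2d = PCA(n_components=2).fit(loudness_feats)
spectral_2d = PCA(n_components=2).fit(spectral_feats)

# Concatenate the per-family 2d spaces into a hand-curated 4d matching space.
corpus_space = np.hstack([
    loudness_2d.transform(loudness_feats),
    spectral_2d.transform(spectral_feats),
])
matcher = NearestNeighbors(n_neighbors=1).fit(corpus_space)

# Real-time side: run the incoming frame's descriptors through the same fitted
# reducers, concatenate, then query for the closest corpus slice.
incoming = np.hstack([
    loudness_2d.transform(rng.normal(size=(1, 7))),
    spectral_2d.transform(rng.normal(size=(1, 20))),
])
dist, idx = matcher.kneighbors(incoming)
print("matched slice:", idx[0][0], "distance:", dist[0][0])
```

The nice thing about structuring it this way is that each 2d sub-space stays interpretable on its own, so I can weight or drop a family later without refitting everything.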