Descriptors comparison (oldschool vs newschool)

I totally get how splitting them up into categories makes sense, though I don’t know that the general-level pitch of the project (“being able to code a granular synth from scratch”) would need too much help with descriptors. When you’re dealing with stuff like NMFs, audio descriptors seem like the least of one’s worries!

It’s also possible to use multiple fluid.descriptors~ objects at the same time, going to separate parts of a patch, in which case synchronicity isn’t important.

I only brought up the realtime (and later, matching) use cases because those also seem problematic with this architecture, on top of the problems in my specific use case(s).

So at the moment my options are using JIT (with a 10-20x increase in computation time) or using the realtime objects and having decorrelated frames of analysis for each descriptor.
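To make the decorrelation issue concrete, here’s a minimal numeric sketch (plain Python/NumPy, not FluCoMa code; the hop sizes are arbitrary assumptions): if two realtime analyzers run with different hop sizes, the centre times of their analysis frames drift apart, so pairing “frame i” of one descriptor with “frame i” of another compares different moments in the signal.

```python
import numpy as np

sr = 44100  # sample rate (Hz)

def frame_times(hop, n_frames, sr=sr):
    """Centre time (in seconds) of each analysis frame for a given hop size."""
    return np.arange(n_frames) * hop / sr

# Two hypothetical descriptor streams with different hop sizes:
pitch_times = frame_times(hop=512, n_frames=10)
loudness_times = frame_times(hop=1024, n_frames=10)

# The per-frame offset between the two streams grows linearly:
offset_ms = (loudness_times - pitch_times) * 1000
print(offset_ms)  # by frame 9 the streams are ~104 ms apart
```

Under these assumed settings, frame 9 of the two streams already refers to audio about 104 ms apart, which is why per-frame matching across decorrelated descriptor streams breaks down.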