Descriptors comparison (oldschool vs newschool)

Ah ok, I understand the intrinsic vs operational distinction now.

The operational difference is what I meant by the whole “real-world” stuff, as the intrinsic speed, although handy, doesn’t tell the whole story.

I do like the idea of having multiple contexts, but it strikes me that so much of that appears to be jumping through hoops to avoid the core paradigm(/problem?) of ‘everything is a buffer’.

Because you have to move data in and out of buffers to pass it between processes, threading becomes a problematic/contentious issue. That makes sense if you are working with audio that natively lives in buffers, but that isn’t the case here. The buffer is just being bent into service as a data container, which brings lots of problems with it.

I haven’t mocked up a proper example of my intended use case yet, but I am hoping to do “real-time” sample playback via onset/descriptor analysis, so already taking on a 512-sample (~11ms) delay between my attack and a sample being played back is knocking on the door of perception. Tacking on another 1ms to that doesn’t help, but if the operational (“real world”) delay/latency there is upwards of 20-40ms, this approach (with fluid. objects anyway) becomes impossible/impractical.
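To make that arithmetic concrete, here’s a quick back-of-the-envelope latency budget (a minimal sketch, assuming a 44.1 kHz sample rate; the window size, the extra 1ms, and the 20-40ms “real world” range are the figures from above, not measured values):

```python
# Rough latency budget for onset-triggered sample playback.
# Assumes a 44.1 kHz sample rate; overhead figures are illustrative.

SAMPLE_RATE = 44100  # Hz (assumed)

def samples_to_ms(n_samples, sr=SAMPLE_RATE):
    """Convert a delay in samples to milliseconds."""
    return 1000.0 * n_samples / sr

analysis_window_ms = samples_to_ms(512)      # ~11.6 ms intrinsic delay
extra_ms = 1.0                               # the additional ~1 ms mentioned above
operational_range_ms = (20.0, 40.0)          # hypothetical "real world" overhead

for overhead in operational_range_ms:
    total = analysis_window_ms + extra_ms + overhead
    print(f"{analysis_window_ms:.1f} ms window + {extra_ms:.0f} ms + "
          f"{overhead:.0f} ms operational = {total:.1f} ms attack-to-playback")
```

Even at the low end of that range, the total lands well past the point where the delay between attack and playback is clearly audible.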

Now as @tremblap mentioned, I could always use @a.harker’s vanilla descriptors~ in this context (which I will/would do, though there is some funky behavior, as can be seen in the comparison patch above), but I do like a lot of what the fluid.descriptors objects bring to the table. Plus, this sort of idea will make up the core of real-time matching/replacement/resynthesis in the future, with the 2nd toolbox. So this isn’t a problem unique to my intended use case; it appears to be one that will be present in many use cases (any that involve real-time applications at all).

So this circles back to the core architecture/paradigm decisions and how they impact scalability and application in “real world patches”, similar to what @jamesbradbury is going through with his patch trying to analyze multiple slices from an audio file: a seemingly simple use case that brings with it a massive amount of overhead and conceptual work due to how things are structured.