Descriptors comparison (oldschool vs newschool)

Yeah totally. I don’t want to seem like I’m (needlessly (and superfluously)) busting balls here. I’m just pushing at the edges of the existing paradigm/architecture, and offering thoughts and solutions(/problems?) as to how it can be made to work better.

There are lots of design decisions that I don’t understand, but I’m rolling with it and trying to build the things I want with these tools, though that doesn’t always lead somewhere that “works”. [So far, almost every avenue of exploration has led to a dead end (barring the CV thing, which I want to explore further still). That’s ok. I’m still playing and learning the tools.]

Buffers scale in length, but not in quantity. You can have an arbitrarily long buffer, but you can’t (without great hassle and messiness) have an arbitrary number of buffers. Sure, you can use a single ‘container’ buffer and bufcompose~ everything into it, but then you need a secondary data structure, for your primary data structure, just to know what was where in the mega-buffer.
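To make that concrete, here’s a minimal sketch (Python just for illustration; the names and layout are made up) of the kind of bookkeeping you end up maintaining alongside the container buffer, with the actual copying still done by bufcompose~ in the patch:

```python
# Hypothetical bookkeeping for a single 'container' buffer built with
# bufcompose~: every entry records where one sample landed in the mega-buffer.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str     # your own label for the sample
    offset: int   # start position in the container buffer (samples)
    length: int   # duration (samples)

class ContainerIndex:
    def __init__(self):
        self.slices = []
        self.write_head = 0  # next free position in the container buffer

    def append(self, name, length):
        """Record a sample appended at the current write head (the actual
        copy would be done by bufcompose~ in the patch)."""
        s = Slice(name, self.write_head, length)
        self.slices.append(s)
        self.write_head += length
        return s

    def lookup(self, name):
        return next(s for s in self.slices if s.name == name)

# Usage: mirror every bufcompose~ call with an append(), then query later.
idx = ContainerIndex()
idx.append("kick_01", 22050)
idx.append("snare_03", 11025)
print(idx.lookup("snare_03"))  # Slice(name='snare_03', offset=22050, length=11025)
```

That’s exactly the kind of parallel structure you have to keep in sync by hand, which is the hassle/messiness I mean.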

It’s not worth beating on that drum for too long/hard though, as my thoughts on the buffer-as-data-container are well known at this point!

Indeed, but I’m not worried about the listeners (nor halls with listeners in them!); it’s my playing and “feel” that I’m concerned with. If I hit a drum and then get a sample playing back 20-40ms later, it doesn’t make performative sense. 11ms (or rather, 512 samples) is a “happy middle ground”: I can still feel/hear that delay, but it would be worth it if the matching worked well and was more accurate. Hence my resistance in this thread to anything that would be slower than that.

In the future, once there are (more sophisticated) querying/playback tools, I’ll probably do something à la multiconvolve~: analyze 64 samples (maybe even 32), immediately play back a transient that matches, analyze the next size up while that’s sounding and play the ‘post-transient’ portion from another sample, then the next chunk onward, etc… “stitching” together a sample as quickly, and as accurately, as possible. That would be an ideal implementation for this idea/use case.
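Roughly, in made-up pseudo-Python (analyse(), find_match(), and the corpus layout are all hypothetical stand-ins for whatever querying tools eventually materialize), the stitching idea would look something like this:

```python
import numpy as np

def analyse(segment):
    """Toy descriptor: RMS loudness of the window. A real version would
    use proper descriptors (pitch, spectral shape, MFCCs, ...)."""
    return np.sqrt(np.mean(segment ** 2))

def find_match(corpus, feature):
    """Nearest neighbour over precomputed per-slice features."""
    return min(corpus, key=lambda entry: abs(entry["feature"] - feature))

def stitch(live_input, corpus, first_window=64, max_window=8192):
    """Cover the incoming audio in ever-doubling windows, triggering the
    best corpus match for each window as soon as its analysis completes."""
    events, position, window = [], 0, first_window
    while position < len(live_input):
        segment = live_input[position:position + window]
        match = find_match(corpus, analyse(segment))
        events.append((position, window, match["name"]))  # trigger playback here
        position += window
        window = min(window * 2, max_window)
    return events

# Toy usage, with noise standing in for real audio:
rng = np.random.default_rng(0)
corpus = [{"name": f"slice_{i}", "feature": f} for i, f in enumerate([0.1, 0.3, 0.7])]
print(stitch(rng.normal(0.0, 0.3, 1024), corpus))
```

The point being that each playback decision only waits on its own (short) analysis, so the audible gap is bounded by the first window plus whatever the query machinery costs.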

So even with a multiconvolve~ approach, the first tiny “transient” analysis window would still incur the intrinsic latency plus the operational latency, regardless of how tiny the window is (32 samples + 20-40ms?).
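For concreteness, the arithmetic (assuming 44.1kHz):

```python
# Rough latency budget at 44.1 kHz: window size is only part of the story.
SR = 44100

def ms(samples):
    return 1000 * samples / SR

print(f"512-sample window: {ms(512):.1f} ms intrinsic")  # ~11.6 ms
print(f"32-sample window:  {ms(32):.2f} ms intrinsic")   # ~0.73 ms

# Even a 32-sample window is swamped by a 20-40 ms operational latency:
print(f"32 samples + 30 ms operational = {ms(32) + 30:.1f} ms total")  # ~30.7 ms
```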

That would be fantastic. I don’t know what that would mean in terms of analyzing a specific (sample-accurate) onset window, though. Plus you lose all the time-series info, and potentially run into syncing problems between the different fluid.descriptors objects as well.
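To illustrate the syncing worry in the abstract (this is generic windowing math, not anything specific to how the fluid.descriptors objects actually behave):

```python
# Two analysers with different window sizes: frame i of each nominally
# lines up, but the moment in time each frame actually describes differs.
SR = 44100

def frame_centres(n_frames, window, hop):
    """Centre time (ms) of each analysis frame."""
    return [1000 * (i * hop + window / 2) / SR for i in range(n_frames)]

pitch_times = frame_centres(4, window=2048, hop=512)  # long window (pitch)
loud_times = frame_centres(4, window=512, hop=512)    # short window (loudness)

for i, (p, l) in enumerate(zip(pitch_times, loud_times)):
    print(f"frame {i}: pitch ~{p:.1f} ms vs loudness ~{l:.1f} ms "
          f"(offset {p - l:.1f} ms)")
# Every shared frame index is ~17 ms apart in what it actually measures.
```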

It seems like, fundamentally, I’m stuck between the buf objects and the realtime objects, in a way where neither is built for the use case(s) that I’m looking at.