I meant analyzing only 100ms and then querying that with 11ms, full stop. Not stitching. Potentially a better solution than analyzing the entire sample and querying it with a tiny fragment of audio.
Even with stitching, there will be the problem of temporal mapping, as I’m largely using sounds that are quite short but want samples that are arbitrarily long. So even with perfect stitching, I wouldn’t want to play a 100ms sound and only get a 100ms sound back. (An idea I have for this is to take the time series from the `fluid.descriptors~` objects and time-stretch them onto longer samples, so something like `multiconvolve~` meets a clock that’s slowing down.)
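To make the time-stretching idea concrete, here’s a minimal sketch (in Python/NumPy rather than Max, purely for illustration) of resampling a short descriptor time series onto an arbitrarily longer timeline via interpolation. The function name, frame counts, and stand-in loudness values are all hypothetical, not anything from the FluCoMa tools:

```python
import numpy as np

def stretch_descriptor(series: np.ndarray, target_frames: int) -> np.ndarray:
    """Time-stretch a descriptor time series (e.g. per-frame loudness
    or pitch from a short analysis) onto a longer frame count by
    linear interpolation: the 'clock slowing down' idea."""
    src_positions = np.linspace(0.0, 1.0, num=len(series))
    dst_positions = np.linspace(0.0, 1.0, num=target_frames)
    return np.interp(dst_positions, src_positions, series)

# Hypothetical example: a 100ms sound analyzed at ~1ms hops gives
# ~100 descriptor frames; stretch that envelope over a 2s target.
short_loudness = np.random.uniform(-40.0, 0.0, size=100)  # stand-in dB values
long_loudness = stretch_descriptor(short_loudness, target_frames=2000)
print(long_loudness.shape)  # (2000,)
```

The stretched curve could then drive the query frame by frame, so the short sound’s descriptor trajectory unfolds over the longer sample’s duration instead of being over in 100ms.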