Right, no, I get that those are the same. I just meant that chaining together subsequent queries requires new copying (if you need one filter to happen before moving on and filtering that area further).
In a performance/fixed context, something like this would make sense. For example, in Kaizo Snare I go back and forth throughout the performance between filtering by duration (duration > 500, or no duration filter) and filtering by onsets (onsets == 1, or no onsets filter), so I could easily pre-“render” four datasets/kdtrees and select the one I need, but that wouldn’t cover all the bases.
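To sketch what I mean by pre-“rendering” (this is just illustrative Python, not actual FluCoMa calls — the corpus entries, field names, and filter values are all made up), the four combinations would become four pre-filtered subsets, each of which would feed its own kdtree:

```python
# Toy corpus: each entry has a duration (ms) and an onset count.
corpus = [
    {"id": 0, "duration": 800, "onsets": 1},
    {"id": 1, "duration": 300, "onsets": 1},
    {"id": 2, "duration": 900, "onsets": 3},
    {"id": 3, "duration": 200, "onsets": 2},
]

# The four filter states I switch between in the performance.
filters = {
    "no_filter":    lambda e: True,
    "long_only":    lambda e: e["duration"] > 500,
    "single_onset": lambda e: e["onsets"] == 1,
    "long_single":  lambda e: e["duration"] > 500 and e["onsets"] == 1,
}

# Pre-render: each subset would get its own dataset/kdtree in the patch.
subsets = {name: [e for e in corpus if pred(e)]
           for name, pred in filters.items()}

print({name: [e["id"] for e in entries]
       for name, entries in subsets.items()})
```

The point being that this only works because the filter states are a small, known-in-advance set.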
I’m totally down for rethinking the problem and flipping it on its head, but I don’t see how some of the problems could be avoided.
A use case I’ve been meaning to implement for a while now is mixing descriptor types to widen the navigable descriptor area. For example, taking a slow envelope follower and using it to filter by a time-based descriptor, such that the louder/busier I play, the shorter the samples being pulled from are. So rather than having discrete datasets/kdtrees, it would be a continuous query à la duration > $1, where $1 = [scaledOutputOfEnvelopeFollower].
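A rough sketch of that continuous version (again, hypothetical Python, not FluCoMa API — the corpus, the feature space, and the env-to-threshold mapping are made up, and a brute-force nearest-neighbour stands in for the kdtree). The filter threshold is recomputed from the envelope follower before every lookup, so there's no fixed set of pre-rendered trees:

```python
# Toy corpus: each entry has a feature vector and a duration (ms).
corpus = [
    {"feat": (0.1, 0.9), "duration": 900},
    {"feat": (0.2, 0.8), "duration": 400},
    {"feat": (0.9, 0.1), "duration": 150},
]

def query(corpus, target, env):
    """env is the scaled envelope-follower output in 0..1.

    Louder/busier playing (higher env) lowers the duration threshold,
    so shorter samples become eligible; the exact scaling (800 here)
    is arbitrary and would be tuned to the material.
    """
    threshold = (1.0 - env) * 800.0
    candidates = [e for e in corpus if e["duration"] > threshold]
    # Brute-force nearest neighbour, standing in for a kdtree query.
    return min(candidates,
               key=lambda e: sum((a - b) ** 2
                                 for a, b in zip(e["feat"], target)))

print(query(corpus, (0.9, 0.1), 0.0)["duration"])  # quiet: only long samples pass
print(query(corpus, (0.9, 0.1), 1.0)["duration"])  # loud: everything passes
```

This is where the copying problem bites: with a per-query threshold, every new env value implies a different candidate set, so there's no finite set of trees to pre-build.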