I mentioned this during the latest FluCoMa geek chat, but I wanted to bump this thread with the buffer~ stuff in light of some of the suggestions that may come out of the use case prompts in the thread about “fluid.datasetfilter~”.
As I mentioned to @tremblap, I spent a few weeks away from FluCoMa stuff working on some other things (largely 3d-printing!) but I came back to it last week in order to tidy up some of the code I was working on before.
One of the things that struck me was the amount of friction involved in “data housekeeping stuff”. As in, I knew what descriptors I wanted, what stats I wanted, and where I wanted them all, but it still took me the better part of an hour to get it up and running. And if one thing changed (another stat or descriptor), that often broke everything, particularly in @blocking 2 mode.
Granted, @blocking 2 means that things will always be fiddly (actually, is there any overhead in just making buffer~ foo @samps 500 500 for everything? I imagine the @samps 500 probably doesn’t matter, but I have no idea if having loads of “channels” does weird stuff), but the kind of coding required to manage this stuff is pretty far removed from any creative process. fluid.buf.select mitigates some of the unpleasantness (though the js-based nature of it won’t jibe with @blocking 2), but even with that you need to think in “channels” and “indices”, which is not the same as “descriptors” and “statistics”. So figuring out that I want the 3rd sample from the 2nd channel is still required.
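Just to make that “channels and indices” vs “descriptors and statistics” gap concrete, here is a rough sketch (TypeScript-ish, purely hypothetical, not anything FluCoMa actually ships; the descriptor order and stats list are just assumptions for illustration) of the kind of symbolic lookup I keep wishing for:

```ts
// Hypothetical sketch: resolve "descriptor + statistic" to "channel + sample index"
// so the patch can ask for what it means, not where it lives.
// The descriptor order and the 7-stat layout below are assumptions, not a real API.

type Slot = { channel: number; index: number };

const descriptors = ["loudness", "centroid", "flatness"];            // assumed analysis order
const stats = ["mean", "stddev", "skew", "kurtosis", "low", "mid", "high"]; // assumed stats order

function slotFor(descriptor: string, stat: string): Slot {
  const channel = descriptors.indexOf(descriptor);
  const index = stats.indexOf(stat);
  if (channel < 0 || index < 0) throw new Error("unknown descriptor or statistic");
  return { channel, index };
}

// e.g. slotFor("centroid", "skew") -> { channel: 1, index: 2 }
// i.e. the "3rd sample from the 2nd channel" mentioned above.
```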
I’ve rung this bell long enough, but I only bring it back up because the prospect of doing all of this “at a zoomed out level” with fluid.dataset~s is pretty daunting.
So if building something that lets you do a (simple) binary hierarchical search inside a fluid.dataset~ requires you to remember (or note down elsewhere, since there is no symbolic notation anywhere) which indices in your fluid.dataset~ correspond to what, in order to cleave off the bits you need, it starts to become the same kind of fiddly “data management” problem that kills the creative coding flow (for me, at least).
So this bump is part-bump for having better ‘low-level’ data management, but it’s also a pre-bump for future ‘high-level’ data management.