I wanted to bump this thread too, after hearing that when fluid.ampslice~
is working properly, it will write to two separate buffers demarcating ‘onsets’ and ‘offsets’.
That seems like it is going to get messy quickly, and even more so when you have multidimensional data.
Is the “everything is a buffer” thing now finally set in stone?
rodrigo.constanzo:
Is it flexible if the data is divorced from symbolic meaning? (i.e. if you have an “everything is a buffer” database, you have literally no idea what kind of data it is, at any position, or in any order. Without some point of reference it would become noise.)
And it would require translation for literally any usage at all.
It’s like saying that writing it out as a raw memory blob is the most flexible format because it can be turned into anything…
Or like choosing MIDI (flat, context-less, serial) over OSC (symbolic, data hierarchy, etc…) because it’s more “portable”.
Why wouldn’t a JSON, text, or actual database file be more cross-platform-y, and more relevant to the type of data being stored?
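Just to make that concrete, here’s a toy sketch (invented values and field names, nothing to do with any actual FluCoMa format) of the difference between a flat buffer of floats and the same data with its meaning attached:

```python
import json

# A "buffer" of analysis data is just anonymous floats: without an
# external key you can't tell what index 2 means, or where one frame
# ends and the next begins.
flat_buffer = [0.042, 220.0, -14.2, 0.131, 246.9, -12.7]

# The same data as JSON is self-describing, and readable from any
# language on any platform. (Field names are made up for illustration.)
frames = [
    {"time": 0.042, "pitch_hz": 220.0, "loudness_db": -14.2},
    {"time": 0.131, "pitch_hz": 246.9, "loudness_db": -12.7},
]

with open("analysis.json", "w") as f:
    json.dump(frames, f, indent=2)
```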
Will there be tools/abstractions provided to translate the data, or is it up to every user to bake their own for anything they want to do?
And are you planning on using buffers for the stage2 database matching stuff? (Will there have to be a metadata file which references what is what in the “audio” data? If so, what’s the point of having them be separate files?)
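Because as far as I can tell, you’d end up needing something like this sidecar file anyway (a purely hypothetical layout, just to illustrate the point), at which point the buffer itself is just an opaque payload:

```python
import json

# Hypothetical sidecar describing what each channel of a data buffer
# holds; the buffer itself stays an opaque block of floats.
metadata = {
    "buffer": "descriptors.wav",
    "samplerate": 44100,
    "channels": {
        "0": "onset_times",
        "1": "offset_times",
        "2": "loudness_db",
    },
}

with open("descriptors.meta.json", "w") as f:
    json.dump(metadata, f, indent=2)
```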