I’m in the middle of expanding my live system, and trying to learn something in the process.
Currently, I have an SC script that automatically records and stores individual samples into a folder according to given constraints.
Now, I would like to use FluCoMa to analyze all these samples, so that I can recall them afterwards by similarity and other criteria.
I’ve explored several code examples, but every one I found basically merges all the sounds into a single buffer and slices it using FluidBufOnsetSlice. Since my samples are already sliced, that step makes no sense in my case.
So my question is: is there a way to directly produce a dataset from a bunch of individual files, or to store their bounds in a simple buffer of indices (maybe the latter is the better solution, since it only involves a single buffer…)?
There’s nothing that requires you to use monolithic buffers at all. If you don’t need to segment, you can just run analyses on individual buffers and put the results of each analysis into a dataset as a point.
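Here’s a rough sketch of that per-file approach, assuming a folder of pre-sliced samples at ~/samples and using MFCC means as the descriptor (swap in whatever analysis fits your material; the folder path, buffer names and variable names are just placeholders):

```
// a minimal sketch: one descriptor point per file, added to a FluidDataSet
(
~folder = PathName("~/samples".standardizePath); // assumed location of your recorded samples
~ds = FluidDataSet(s);
~mfccs = Buffer(s);
~stats = Buffer(s);
~flat = Buffer(s);

Routine{
	~folder.files.do{ |file|
		var buf = Buffer.read(s, file.fullPath);
		s.sync;
		FluidBufMFCC.processBlocking(s, buf, features: ~mfccs, startCoeff: 1);
		FluidBufStats.processBlocking(s, ~mfccs, stats: ~stats);
		// keep just the first row of the stats (the per-coefficient means)
		FluidBufFlatten.processBlocking(s, ~stats, numFrames: 1, destination: ~flat);
		s.sync;
		~ds.addPoint(file.fileNameWithoutExtension, ~flat); // one point per file
		buf.free;
	};
	~ds.print;
}.play;
)
```

From there you can feed the dataset to FluidKDTree (or whatever you’re using for similarity) exactly as in the monolithic-buffer examples.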
Usually when this kind of data gets to the server I want both the start position and the duration of the slices, so I interleave that information language-side and load it into a 2-channel buffer. That way I can index into the buffer according to which slice I want to play, using channel 0 for the start position and channel 1 for the duration.
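Roughly like this (the slice numbers are made up, ~sourceBuf stands for the mono buffer the bounds refer to, and the [start, dur] pairs would come from wherever your slice data lives):

```
// sketch: [startFrame, numFrames] pairs interleaved into a 2-channel buffer
(
~slices = [ [0, 44100], [44100, 22050], [66150, 88200] ]; // made-up slice bounds
~info = Buffer.loadCollection(s, ~slices.flat, numChannels: 2);
)

// index by slice number: channel 0 = start position, channel 1 = duration (in frames)
(
SynthDef(\playSlice, { |out = 0, buf, info, slice = 0|
	var startDur = BufRd.kr(2, info, slice, interpolation: 1);
	var sig = PlayBuf.ar(1, buf, BufRateScale.kr(buf), startPos: startDur[0]);
	var env = EnvGen.kr(
		Env.linen(0.01, startDur[1] / BufSampleRate.kr(buf), 0.01),
		doneAction: 2
	);
	Out.ar(out, sig * env);
}).add;
)

// e.g. Synth(\playSlice, [\buf, ~sourceBuf, \info, ~info, \slice, 1]);
```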