Processing a long-duration file with FluCoMa in SuperCollider

Hi there -

I’m attempting to analyze a longer-duration file with FluCoMa in SuperCollider, but it seems to lock up for quite some time.

I know the Plotter example shows calling “s.sync” every 100 frames or so, but even this, at least the way I’m using it, doesn’t seem to speed things up.

I’m wondering if there is a way to do this that runs a little faster.

Thanks!

(
~analysis = { |bufPath, onThresh = -22, offThresh = -30|
    var indices = Buffer(s);
    var display_buffer = Buffer(s);
    var concat_buf = Buffer(s);

    FluidBufCompose.processBlocking(s, bufPath, destination: display_buffer);

    FluidBufAmpGate.processBlocking(s, display_buffer,
        onThreshold: onThresh, offThreshold: offThresh,
        indices: indices);

    indices.loadToFloatArray(action: { |fa|
        var current_frame = 0;
        fa = fa.clump(2);
        fa.do{ |arr, i|
            var startFrame = arr[0];
            var numFrames = arr[1] - startFrame;
            "%\tstart: %\tframes: %".format(i, startFrame, numFrames).postln;
            FluidBufCompose.processBlocking(s, display_buffer, startFrame, numFrames,
                destination: concat_buf, destStartFrame: current_frame);
            current_frame = current_frame + numFrames;
            if((i % 100) == 0){ s.sync };
        };

        FluidBufCompose.processBlocking(s, concat_buf, destination: display_buffer, destStartChan: 1);

        ~x = concat_buf;
    });
};
)

Hi @areacode ,

Can you give us a bit more info?
How long is the file?
Stereo or mono?
How long is it taking?
How many slices are being discovered by FluidBufAmpGate?
What do you mean by “lock up”?
Does it ever complete?

Thanks!

T

Hi @tedmoore -
It’s a stereo file, 24 minutes in duration.
It seems to crash at around 600 slices - it disconnects from the server, after extended periods of yellow idling. It doesn’t ever complete.
I’m not sure if this is unusual - or if the normal protocol is to chunk out larger files before trying to process them, so any insight would be helpful.

Is it stalling in the fa.do{...} loop? If so, one way to make this more efficient would be to pre-allocate concat_buf to the size you’re going to need; otherwise the server has to re-allocate and copy the buffer however many hundred times.

That will involve a little more language-side work before you launch the loop, but since you already have the float array in the language anyway, it’s not so bad. E.g., between fa.clump(2) and fa.do you could say

dur = fa.collect{ |arr| arr[1] - arr[0] }.sum;
concat_buf = Buffer.alloc(s, dur ...
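Spelled out, the pre-allocation step might look like the sketch below. This is only a sketch using the variable names from the snippet above, and it assumes display_buffer’s channel count is what you want for concat_buf:

// Sketch only: sum the slice lengths to get the total frames needed,
// then allocate concat_buf once, up front, so the loop's
// FluidBufCompose calls never force the server to resize it.
dur = fa.collect{ |arr| arr[1] - arr[0] }.sum;
concat_buf = Buffer.alloc(s, dur, display_buffer.numChannels);
s.sync; // make sure the allocation has completed before composing into it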

Wow - this works incredibly well. Thank you.


In a scenario where the final buffer size can’t be calculated in advance, is there any advantage to (or possibility of) simply “overshooting” by allocating a larger buffer than expected? Is there a simple command for trimming the excess after it has been filled?

Yeah, if you don’t know the final size, then this can work. IIRC, you can trim with FluidBufCompose by setting the source and destination to the same Buffer.
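If the in-place trim works the way described, a minimal sketch might look like this. numWritten is a hypothetical variable standing in for whatever running frame counter you kept while filling the buffer:

// Hedged sketch: trim a deliberately oversized buffer down to the
// frames actually written. numWritten is a placeholder for the
// count of frames you actually filled.
FluidBufCompose.processBlocking(s,
    concat_buf,
    numFrames: numWritten,     // copy only the filled portion...
    destination: concat_buf    // ...back onto itself, so the buffer
                               // is resized to fit the composed region
);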