Hi there -
I’m attempting to save my sliced audio into a folder, so that I can skip the FluidBufOnsetSlice step when working with my analyzed audio at a later time. Ideally, I’d be able to pull up the audio, pull up the analysis, and pull up the plot without having any major processing/additional analysis.
As of now, I am using the following analysis function (posted below) - but it expects indices of a large buffer with all of the corpus stored inside of it. I’d like to perform this analysis on the sliced audio stored in my folder - is there a common way that this is handled? The only thing I can currently think of is to load it back into a big, continuous buffer and slice it again - but I am hoping there is a better option.
Thanks for the advice!
analyzer = {|slices, currentBuf, analysisData, playData, playDict|
	slices.loadToFloatArray(action: {
		arg fa;
		analysisData.size({
			arg o;
			fork{
				var mfccs = Buffer(s);
				var stats = Buffer(s);
				var flat = Buffer(s);
				fa.doAdjacentPairs{
					arg start, end, i;
					var num = end - start;
					var id = o + i;
					FluidBufMFCC.processBlocking(s, currentBuf, start, num, features: mfccs, numCoeffs: 13, startCoeff: 1);
					FluidBufStats.processBlocking(s, mfccs, stats: stats, select: [\mean]);
					FluidBufFlatten.processBlocking(s, stats, destination: flat);
					playDict["data"][id] = [currentBuf.bufnum, start, end];
					analysisData.addPoint(id, flat);
					if((i % 100) == 99){ s.sync; };
				};
				s.sync;
				playData.load(playDict);
				mfccs.free;
				stats.free;
				flat.free;
				~ds1 = analysisData;
				~ds2 = playData;
				~dict = playDict;
				"analyzed".postln;
			};
		});
	});
};
there are many ways, but what I do is save everything as I need it in the performing patch.
If I understand your process, you might use various buffers as currentBuf, so the challenge will be to keep their numerical references stable. I have done this by loading arrays of buffers, which is straightforward in SC with PathName.filesDo{|x| }, but a numerical representation in your dataset is a bit of a problem… you can keep a reference table in a language-side array for that, where currentBuf is actually a lookup from which you find the actual bufnum for each file. This is an SC problem, but SC has a lot of solutions for it. In Max people use polybuffer~; in Pd, I cry
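A minimal sketch of that lookup-table idea (the names ~bufs, ~lookup, and the folder path are my own placeholders, and it assumes a booted server s):

```supercollider
// Load every file in a folder into its own Buffer, keeping a
// language-side array whose index is the stable numerical reference
// you store in the dataset (never the raw bufnum).
// ~bufs, ~lookup, and the path are hypothetical names for this sketch.
(
~bufs = List.new;
~lookup = IdentityDictionary.new; // bufnum -> stable index
PathName("~/corpus".standardizePath).filesDo{ |file|
	var buf = Buffer.read(s, file.fullPath);
	~lookup[buf.bufnum] = ~bufs.size; // remember this file's index
	~bufs.add(buf);                   // position in ~bufs is the reference
};
)
```

Later, the dataset stores ~lookup[someBuf.bufnum] rather than the bufnum itself, so a reload on a fresh server still resolves correctly as long as the files are read in the same order.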
What I have done for pieces is to save all the audio files as a single buffer. Then I reload that buffer, reload the index buffer, reload the datasets, and I’m sorted.
also, you can look at the implementation of our FluidLoadFolder, where we reload to a single buffer - if you know the files and their lengths, you can actually run an array of file paths and it will always come out in the same order. That is probably the cheapest in terms of avoiding duplicated hard-disk space.
Thanks - I think just loading up and saving a big buffer is probably the best practice for this. I was thinking of continually building out a buffer from multiple folders, but I’m a little stuck on how to format this function. Am I on the right track?
(edited to fix error)
~storeBuf = Buffer(s);
~f = {
	var current;
	FileDialog({|paths|
		current = FluidLoadFolder(paths[0]).play(s, {
			"done loading".postln;
			current = current.buffer;
			FluidBufCompose.processBlocking(s,
				current,
				destStartFrame: ~storeBuf.numFrames,
				destination: ~storeBuf);
		});
	}, fileMode: 2);
};

~f.();
~storeBuf.duration / 60;
FluidLoadFolder is not meant to be used iteratively. Check its SC-only code, you’ll see what it does.
I think this is a (fun) SC problem… let’s start from scratch and forget FluCoMa. What is the structure of the folders you are trying to load, and are you trying to make a unified buffer, or would you prefer to keep them separate? I can think of many ways to do this in SC, but I just need to go back to your specifications.
(this is where it is very FluCoMa: knowing your data, where it is, how it behaves, is harder when you have loads of sounds - I’m stuck with this problem in each new project, and tackle it differently each time)
Do not despair, there are many solutions, each as good as the next. I just need to understand where are the files you want to use, and if making a single huge buffer is an option or not after the loading process.
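For the record, one server-side way to grow a single buffer file by file without FluidLoadFolder might look like this (a sketch only; ~big, ~entries, and the folder path are placeholders, and mono files are assumed):

```supercollider
// Append each file of a folder to the end of one big buffer,
// recording [name, startFrame, numFrames] for every file as you go.
// ~big, ~entries, and the path are hypothetical; assumes server `s`.
(
fork{
	~big = Buffer.alloc(s, 1); // FluidBufCompose will resize it as needed
	~entries = List.new;
	~write = 0; // next free frame in ~big
	PathName("~/corpus".standardizePath).filesDo{ |file|
		var src = Buffer.read(s, file.fullPath);
		s.sync; // wait so src.numFrames is valid
		FluidBufCompose.processBlocking(s, src,
			destStartFrame: ~write, destination: ~big);
		~entries.add([file.fileName, ~write, src.numFrames]);
		~write = ~write + src.numFrames;
		src.free;
	};
	s.sync;
	"loaded % files".format(~entries.size).postln;
};
)
```

The ~entries list then plays the role of the index buffer: it tells you where each original file starts and ends inside the single big buffer.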
It’s helpful and interesting to hear that you approach these sorts of problems differently each time - I think my goal has been to come up with something sort of uniform and “live.”
At the moment, I’m grabbing files from several different folders and also generating and processing new files in real-time. Generally, the audio files that I’m loading are not “permanent” (there isn’t a fixed folder where things live indefinitely) - so I’ve been hesitating a bit to create a big directory index.
The “big buffer” approach sounds the easiest to implement, but I wonder whether it runs into issues with SC’s buffers being 32-bit once the file sizes get large?
the ‘live’ bit is what needs to be easily re-computable… but if hard-disk space is no issue, you could have one program to ‘redo’ the database/slices/single buffer - this is what I do in works-in-progress -
then another ‘client’ program that reloads the last saved ‘snapshot’.
tldr: not going to be a problem for real.
Geek answer: the server runs in 32-bit float. That caps the consecutive integers it can represent exactly at 16,777,216 (2^24); above that, it starts skipping. For instance, you can observe this in this code:
//create a small buffer
b = Buffer.alloc(s,1)
//write the biggest integer it can deal with
b.set(0,(2**24).postln)
//retrieve and print - same value
b.get(0,{|x|x.postln})
//now write one above that
b.set(0,(2**24 + 1).postln)
//retrieve and print - dropped value
b.get(0,{|x|x.postln})
you can keep on adding and you will see it skips every other value. So the first 16,777,216 indices are exact (6 min 20.44 s at 44.1 kHz), then for the next ~6 minutes they get rounded to even values, then up to roughly the 25-minute mark you lose three indices out of four, etc. etc.
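Those durations are just sample-rate arithmetic; a quick language-side check (no server needed), assuming 44.1 kHz:

```supercollider
// where the durations come from, at 44.1 kHz
(2 ** 24 / 44100).postln;      // ~380.44 s = 6 min 20.44 s of exact indices
(2 ** 25 / 44100 / 60).postln; // even-only indices until ~12.68 min
(2 ** 26 / 44100 / 60).postln; // every 4th index until ~25.36 min
```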
now, this has never been a problem in this use case, because you are only losing precision on the ‘attack time’, which is not so bad - it then plays right with most UGens.
I hope this is clear? The buffer holds everything, but the splice points stored in it will not stay sample-accurate once you go above 6 minutes.
I think I understand the 32-bit issue more clearly now, thank you.
There was another question I wanted to ask about FluidLoadFolder - is there a way to use channelFunc to convert incoming stereo files to mono, or is it necessary to nest a FluidBufCompose stage?
channelFunc is a nested FluidBufCompose, no?