Concatenative Synthesis SC

Hi FluCoMa team,
Is there an existing SC patch example that explains how to implement data set comparison for concatenative synthesis?

Best regards!

Hello

There are many, but I'm not sure I understand your question. Can you provide an example of what flavour of concatenative synth you are referring to? For instance, audio-driven query, or 2D visualisation and playback?

Thank you for your reply!
What I would like to do is trigger sound files from a corpus according to an incoming signal, matched on spectral features (if possible, in real time).
The steps of matching the data sets and triggering the playback are what I'm looking for.

I would like to know what you did here: audio control of a chaotic synth via MFCCs and MLP - YouTube (around minute 5:00).
Is that kind of similar to what I'm after?

Best!

That’s a question for @tremblap :slight_smile:

Hello!

Thanks for the interest. Have you had a quick read through this article? It comes with example code doing exactly what you hear in that patch, in (garage) SuperCollider code (examples 05 and 06).

If anything is unclear, ask away :slight_smile:

Thank you very much, Pierre and James! I will take a look at that article and see what I can get out of it.
Best

Feel free to come back to me with questions and ideas!

Hi Again!
I'm still trying to figure out how to implement a concatenative synthesis patch in SC. Do you know if there is an SC code guide somewhere that explains this?

Best!

My questions above still apply: concatenative synthesis is a very vague term. Please describe what you are trying to do, in the light of the examples I sent you and what they don’t do, and then pointers will be easier to give!

Hi!
The only way I can describe what I'm after is by pointing to this course, where Ted Moore explains a concatenative synthesis patch in chapter 10. It looks very promising, but it is implemented in Max.

I don’t have access to this course - @tedmoore is an SC magician so that should help us :slight_smile:

Hi @CGB ,

Check out this:

audio_query_with_scaler.scd (11.3 KB)

Let me know what questions you have!

Best,

T


Thank you very very much, Pierre and Ted,
checking this now!

best!

Dear Ted,
I’m reviving this topic because I’m new to c-cat and I would really like to use it with SC in a piece of mine for flute and electronics.
I tried the code above, but it doesn't compile. What sort of works is the code from the FluidKDTree documentation in SC, which I rearranged a little (see below).
I still get very unpredictable results, or nothing at all, depending on the data set I use for the look-up. I also assume that I can replace FluidBufMFCC with other analysis tools, but none seems to work except MFCC. I'm very confused and I wonder whether someone could help me out. Thanks!

(
s.waitForBoot({
var data = Buffer.readChannel(s, "C:/Users/pupil/Desktop/Marco/Marco’s recordings/wav/Prelúdio y Fuga.wav", channels:0);
var input = Buffer.readChannel(s, "C:/Users/pupil/Desktop/Marco/Works/La vorágine fl y electr/Code/snd/flute_descent.wav", channels:0);
var indices = Buffer(s);
var mfccsBuf1 = Buffer(s);
var stats = Buffer(s);
var flat = Buffer(s);
var look_out;
var playback_info_dict = Dictionary.newFrom([
	"cols", 2,
	"data", Dictionary.new
]);
var ds_mfccs = FluidDataSet(s);
var ds_playBack = FluidDataSet(s);
var tree = FluidKDTree(s);

fork{
	"thinking".postln;
	// slice the corpus file at detected onsets; the slice points (in samples) land in indices
	FluidBufOnsetSlice.processBlocking(s,data,indices:indices,metric:9,threshold:0.7);
	indices.loadToFloatArray(action:{
		arg fa;

		// go through each slice (from one slice point to the next)
		fa.doAdjacentPairs{
			arg start, end, i;
			var num = end - start;
			var id = "slice-%".format(i);

			// add playback info for this slice to this dict
			playback_info_dict["data"][id] = [start,num];

			// 13 MFCCs per analysis frame -> mean over the slice -> flattened into one 13-D point
			FluidBufMFCC.processBlocking(s,data,start,num,features:mfccsBuf1);
			FluidBufStats.processBlocking(s,mfccsBuf1,stats:stats,select:[\mean]);
			FluidBufFlatten.processBlocking(s,stats,destination:flat);

			// add analysis info for this slice to this data set
			ds_mfccs.addPoint(id,flat);
		};

		ds_playBack.load(playback_info_dict);

		ds_mfccs.print;
		ds_playBack.print;
	});
	s.sync;
	"done".postln;

	tree = FluidKDTree(s);
	tree.fit(ds_mfccs);

	{
		var src = PlayBuf.ar(1,input,BufRateScale.ir(input),loop:1);
		var mfccs = FluidMFCC.kr(src,startCoeff:1);
		var mfccbuf2 = LocalBuf(mfccs.numChannels);
		var playbackinfo = LocalBuf(2);
		var trig = Dust.kr(10); // could change how often the lookup happens...
		var start, num, sig_looked_up;

		// copy the control-rate MFCC values into a buffer so they can be used as the KDTree query point
		FluidKrToBuf.kr(mfccs,mfccbuf2);

		// kdtree finding the nearest neighbour in 13 dimensions
		tree.kr(trig,mfccbuf2,playbackinfo,1,lookupDataSet: ds_playBack);
		# start, num = FluidBufToKr.kr(playbackinfo);

		start.poll(label:"start frame");
		num.poll(label:"num frames");

		// not using num frames for playback here, but one certainly could!
		sig_looked_up = PlayBuf.ar(1,data,BufRateScale.ir(data),trig,start);
		[src,sig_looked_up * -7.dbamp];
	}.play;
};

});
)

Do you mean the code you posted above? Just want to make sure we're not missing something important!

You might want to put an s.sync in that analysis loop (the fa.doAdjacentPairs one) to keep things from going too fast, or sync every 100 points or so.
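Something like this, as an untested sketch (the per-slice analysis is elided, and the fork is there because s.sync has to be called from inside a Routine):

indices.loadToFloatArray(action:{
	arg fa;
	fork{
		fa.doAdjacentPairs{
			arg start, end, i;
			// ...per-slice analysis and ds_mfccs.addPoint, as in your code above...
			if((i + 1) % 100 == 0){ s.sync }; // let the server catch up every 100 slices
		};
		s.sync; // final sync before loading ds_playBack and fitting the tree
	};
});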

Here (in FluidMFCC.kr) you're specifying startCoeff: 1, but you didn't do that in the offline analysis, so all your values are off by one index!
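Assuming FluidBufMFCC takes the same startCoeff argument as FluidMFCC, a sketch of one way to line them up (either side can change, as long as the two match):

// offline, per slice: skip coefficient 0 here as well
FluidBufMFCC.processBlocking(s, data, start, num, features: mfccsBuf1, startCoeff: 1);

// real time, in the synth: already skipping coefficient 0
var mfccs = FluidMFCC.kr(src, startCoeff: 1);

If I'm reading the defaults right, both sides still produce 13 values per point, so the KDTree dimensionality doesn't change.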

See if that fixes it?

-T

Oh, sorry, by "the code above" I meant the code you posted a year ago as a reply to CGB: audio_query_with_scaler.scd. I tried it, but I couldn't make it work.
Anyway, I corrected it as you suggested. My very big (and dumb) mistake was not having updated the analysis of the input: I was changing the analysis tool (Chroma, Pitch, Loudness, etc.) in the offline slicing-and-analysis stage, but not in the real-time input analysis. So, even worse than having my MFCC values off by one index, I was comparing apples with pears. My bad… and many thanks for pointing that out.
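In case it helps someone else later, this is roughly what keeping the two sides in sync means, sketched with Chroma instead of MFCCs (chromaBuf and chromabuf2 are just placeholder names; chromaBuf would be a Buffer allocated like mfccsBuf1 above):

// offline, per slice (instead of the FluidBufMFCC line):
FluidBufChroma.processBlocking(s, data, start, num, features: chromaBuf);
FluidBufStats.processBlocking(s, chromaBuf, stats: stats, select: [\mean]);
FluidBufFlatten.processBlocking(s, stats, destination: flat);

// real time, inside the synth (instead of the FluidMFCC.kr line):
var chroma = FluidChroma.kr(src);
var chromabuf2 = LocalBuf(chroma.numChannels); // the query must have the same dimensionality as the data set
FluidKrToBuf.kr(chroma, chromabuf2);
tree.kr(trig, chromabuf2, playbackinfo, 1, lookupDataSet: ds_playBack);

And if descriptors with very different ranges get mixed (Pitch in Hz, Loudness in dB…), scaling them before the KDTree, as the name of audio_query_with_scaler.scd suggests, should make the nearest-neighbour matches much more meaningful.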

Now, I need to upload the whole OrchSOL flute repository and make c-cat work in real-time. Hope to be able to…
