Just-in-Time, NMF-based classifier: a bit of help with my SuperCollider chops

Dear SuperColliderer

I’ve coded one of the convoluted examples of using NMF as a Just-in-Time (aka circular-buffer, non-real-time-but-quick) classifier. It is as clean as my SC chops allow it to be, but I’m sure one or two of you could tell me off (or suggest improvements to my style :wink: )

If that is of interest, I’ll post it here. It is part of the many upgrades of the RC1 being polished now, so it’ll be public soon-ish anyway, but if anyone has a pressing urge to do such a task, hit me!


Yeah for sure I’d be happy to help. Send it my way!

T


Here it is! It relies on a working AmpSlice, but I’m sure it’ll work on (and that you’ll appreciate) my overproduced drums :wink:

p

// using nmf in 'real-time' as a classifier
// how it works: a circular buffer is recording and attacks trigger the process
// if in learning mode, it does a one-component nmf which makes an approximation of the base. 3 of those will be copied into 3 different channels of our final 3-component bases buffer
// if in guessing mode, it does a three-component nmf seeded from the trained bases and yields the 3 activation peaks, which are thresholded to decide what to resynthesise

//how to use:
// 1. start the server
// 2. select the block between the parentheses below and execute it. You should get a window with 3 pads (bd sn hh) and various menus
// 3. train the 3 classes:
//    3.1 select the learn option
//    3.2 select which class you want to train
//    3.3 play the sound you want to associate with that class a few times (the left audio channel is the source)
//    3.4 click the transfer button
//    3.5 repeat (3.2-3.4) for the other 2 classes.
//    3.x you can observe the 3 bases here:
f.plot(numChannels:3)

// 4. classify
//    4.1 select the classify option
//    4.2 press a pad and look at the activation
//    4.3 tweak the thresholds and enjoy the resynthesis. (the right audio channel is the detected class where classA is a bd sound)
//    4.x you can observe the 3 activations here:
h.plot(numChannels:3)



/// code to execute first
(
b = Buffer.alloc(s,s.sampleRate * 2);
g = Bus.audio(s,1);
c = 0;
d = 0;
e = Buffer.alloc(s, 65);
f = Buffer.alloc(s, 65, 3);
h = Buffer.alloc(s, 65, 3);
j = [0.0,0.0,0.0];
k = [0.5,0.5,0.5];

// the circular buffer with triggered actions sending the location of the head at the attack
Routine {
	SynthDef(\JITcircular,{arg bufnum = 0, input = 0, env = 0;
		var head, head2, duration, audioin, halfdur, trig;
		duration = BufFrames.kr(bufnum) / 2;
		halfdur = duration / 2;
		head = Phasor.ar(0,1,0,duration);
		head2 = (head + halfdur) % duration;
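		// NB: head2 (the write head offset by half the loop) is computed here but not used below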

		// circular buffer writer
		audioin = In.ar(input,1);
		BufWr.ar(audioin,bufnum,head,0);
		BufWr.ar(audioin,bufnum,head+duration,0);
		trig = FluidAmpSlice.ar(audioin,2205,2205,-47,-47,4410,4410,relRampUp: 10, relRampDown:1666, relThreshOn:12, relThreshOff: 9, highPassFreq: 85);

		// cue the calculations via the language
		SendReply.ar(trig, '/attack',head);

		Out.ar(0,audioin);
	}).add;

	// drum sounds taken from original code by snappizz
	// https://sccode.org/1-523
	// produced further and humanised by PA
	SynthDef(\fluidbd, {
		|out = 0|
		var body, bodyFreq, bodyAmp;
		var pop, popFreq, popAmp;
		var click, clickAmp;
		var snd;

		// body starts midrange, quickly drops down to low freqs, and trails off
		bodyFreq = EnvGen.ar(Env([Rand(200,300), 120, Rand(45,49)], [0.035, Rand(0.07,0.1)], curve: \exp));
		bodyAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.005,Rand(0.08,0.085),Rand(0.25,0.35)]), doneAction: 2);
		body = SinOsc.ar(bodyFreq) * bodyAmp;
		// pop sweeps over the midrange
		popFreq = XLine.kr(Rand(700,800), Rand(250,270), Rand(0.018,0.02));
		popAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.001,Rand(0.018,0.02),Rand(0.0008,0.0013)]));
		pop = SinOsc.ar(popFreq) * popAmp;
		// click is spectrally rich, covering the high-freq range
		// you can use Formant, FM, noise, whatever
		clickAmp = EnvGen.ar(Env.perc(0.001,Rand(0.008,0.012),Rand(0.07,0.12),-5));
		click = RLPF.ar(VarSaw.ar(Rand(900,920),0,0.1), 4760, 0.50150150150) * clickAmp;

		snd = body + pop + click;
		snd = snd.tanh;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidsn, {
		|out = 0|
		var pop, popAmp, popFreq;
		var noise, noiseAmp;
		var click;
		var snd;

		// pop makes a click coming from very high frequencies
		// slowing down a little and stopping in mid-to-low
		popFreq = EnvGen.ar(Env([Rand(3210,3310), 410, Rand(150,170)], [0.005, Rand(0.008,0.012)], curve: \exp));
		popAmp = EnvGen.ar(Env.perc(0.001, Rand(0.1,0.12), Rand(0.7,0.9),-5));
		pop = SinOsc.ar(popFreq) * popAmp;
		// bandpass-filtered white noise
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.13,0.15), Rand(1.2,1.5),-5), doneAction: 2);
		noise = BPF.ar(WhiteNoise.ar, 810, 1.6) * noiseAmp;

		click = Impulse.ar(0);
		snd = (pop  + click + noise) * 1.4;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidhh, {
		|out = 0|
		var click, clickAmp;
		var noise, noiseAmp, noiseFreq;

		// noise -> resonance -> expodec envelope
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.28,0.3), Rand(0.4,0.6), [-20,-15]), doneAction: 2);
		noiseFreq = Rand(3900,4100);
		noise = Mix(BPF.ar(ClipNoise.ar, [noiseFreq, noiseFreq+141], [0.12, 0.31], [2.0, 1.2])) * noiseAmp;

		Out.ar(out, noise);
	}).add;

	// makes sure all the synthdefs are on the server
	s.sync;

	// instantiate the JIT-circular-buffer
	x = Synth(\JITcircular,[\bufnum, b.bufnum, \input, g.index]);
	e.fill(0,65,0.1);

	// instantiate the listener to cue the processing from the language side
	r = OSCFunc({ arg msg;
		if (c == 0, {
			// if in training mode, makes a single component nmf
			FluidBufNMF.process(s, b, msg[3], 128, bases:e, basesMode: 1, windowSize: 128);
		}, {
			// if in classifying mode, makes a 3 component nmf from the pretrained bases and compares the activations with the set thresholds
			FluidBufNMF.process(s, b, msg[3], 128, components:3, bases:f, basesMode: 2, activations:h, windowSize: 128, action:{
				h.getn(3,3,{|x|
					j = x;
					if (j[0] >= k[0], {Synth(\fluidbd,[\out,1])});
					if (j[1] >= k[1], {Synth(\fluidsn,[\out,1])});
					if (j[2] >= k[2], {Synth(\fluidhh,[\out,1])});
				});
			};
			);
		});
	}, '/attack', s.addr);

	// make sure all the synths are instantiated
	s.sync;

	// GUI for control
	{
		w = Window("Control", Rect(100,100,590,100)).front;

		Button(w, Rect(10,10,80, 80)).states_([["bd",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidbd, [\out, g.index], x, \addBefore)});
		Button(w, Rect(100,10,80, 80)).states_([["sn",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidsn, [\out, g.index], x, \addBefore)});
		Button(w, Rect(190,10,80, 80)).states_([["hh",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidhh, [\out, g.index], x,\addBefore)});
		StaticText(w, Rect(280,7,75,25)).string_("Select").align_(\center);
		PopUpMenu(w, Rect(280,32,75,25)).items_(["learn","classify"]).action_({|value| c = value.value; if (c == 0, {e.fill(0,65,0.1)});});
		PopUpMenu(w, Rect(280,65,75,25)).items_(["classA","classB","classC"]).action_({|value| d = value.value; e.fill(0,65,0.1);});
		Button(w, Rect(365,65,65,25)).states_([["transfer",Color.black,Color.white]]).mouseDownAction_({if (c == 0, {FluidBufCompose.process(s, e, numChans:1, destination:f, destStartChan:d);});});
		StaticText(w, Rect(440,7,75,25)).string_("Activations");
		l = Array.fill(3, {arg i;
			StaticText(w, Rect(440,((i+1) * 20 )+ 7,75,25));
		});
		StaticText(w, Rect(520,7,55,25)).string_("Thresh").align_(\center);
		3.do {arg i;
			TextField(w, Rect(520,((i+1) * 20 )+ 7,55,25)).string_("0.5").action_({|x| k[i] = x.value.asFloat;});
		};

		w.onClose_({b.free;g.free;r.clear;x.free; y.free;q.stop;});
	}.defer;

	s.sync;

	// updates the activations
	q = Routine {
		{
			{
				l[0].string_("A: " ++ j[0].round(0.001));
				l[1].string_("B: " ++ j[1].round(0.001));
				l[2].string_("C: " ++ j[2].round(0.001));
			}.defer;
			0.1.wait;
		}.loop;
	}.play;
}.play;
)

Your chops are totally cool. Your variable names drove me crazy though, so I had to change them all just to wrap my head around the code… I think you call your synth x and then use x again later in a callback, so cleaning that up would be good before releasing it (SuperCollider can actually keep these straight for you [I think it just takes the variable highest on the stack?], but still).
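
For anyone else reading along, here is a minimal sketch of the shadowing I mean (toy values, nothing to do with the classifier — and run it on its own, since it reuses the single-letter interpreter variables): inside a function, an argument named x hides the interpreter variable x, so the two never actually collide, but it makes the code hard to follow.

(
x = 10;                // interpreter variable
z = { |x| x * 2 };     // this argument x shadows the interpreter x inside the function
z.(3).postln;          // -> 6, uses the argument
x.postln;              // -> 10, the interpreter variable is untouched
)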

  1. When you instantiate the activations Buffer, you make it 65 samples x 3 channels, but for what you’re doing, it only needs to be 3 samples by 3 channels. It took me a little while to figure out why it would be 65. Am I missing something? I just changed it to Buffer.new since FluidBufNMF will resize the buffer anyways as long as actMode=0. However I noticed that when it resized it, the sampleRate of the Buffer becomes 689.0625. Where does this number come from?

  2. Just a question: when you getn from the activations Buffer, you index into the second frame (of three). Is there a necessary reason for this that I’m not seeing, or is it just what you’ve found to be most accurate for these sounds?

  3. The Routine you have at the end for updating the display doesn’t really need to run in a loop since it’s only looking into the activation_vals array (my variable name). You can just make it happen after an onset (in the OSCFunc). You just have to use defer, which sends it over to the AppClock scheduler.

  4. On a different note, I noticed that the help file for FluidBufNMF doesn’t say what the default values are. These might be important, especially for something like hopSize, since that is needed to calculate how big to instantiate the buffers.

Sick drum sounds. I love seeing the Fluid stuff fleshed out in SC code!

// using nmf in 'real-time' as a classifier
// how it works: a circular buffer is recording and attacks trigger the process
// if in learning mode, it does a one-component nmf which makes an approximation of the base. 3 of those will be copied into 3 different channels of our final 3-component bases buffer
// if in guessing mode, it does a three-component nmf seeded from the trained bases and yields the 3 activation peaks, which are thresholded to decide what to resynthesise

//how to use:
// 1. start the server
// 2. select the block between the parentheses below and execute it. You should get a window with 3 pads (bd sn hh) and various menus
// 3. train the 3 classes:
//    3.1 select the learn option
//    3.2 select which class you want to train
//    3.3 play the sound you want to associate with that class a few times (the left audio channel is the source)
//    3.4 click the transfer button
//    3.5 repeat (3.2-3.4) for the other 2 classes.
//    3.x you can observe the 3 bases here:
~classify_bases.plot(numChannels:3)

// 4. classify
//    4.1 select the classify option
//    4.2 press a pad and look at the activation
//    4.3 tweak the thresholds and enjoy the resynthesis. (the right audio channel is the detected class where classA is a bd sound)
//    4.x you can observe the 3 activations here:
~activations.plot(numChannels:3)

/// code to execute first
(
var circle_buf = Buffer.alloc(s,s.sampleRate * 2); // b
var input_bus = Bus.audio(s,1); // g
var classifying = 0; // c
var cur_training_class = 0; // d
var train_base = Buffer.alloc(s, 65); // e
var activation_vals = [0.0,0.0,0.0]; // j
var thresholds = [0.5,0.5,0.5]; // k
var activations_disps;
var analysis_synth;
var osc_func;
var update_rout;

~classify_bases = Buffer.alloc(s, 65, 3); // f
//~activations = Buffer.alloc(s, 65, 3); // h
~activations = Buffer.new(s); // ************************** FluidBufNMF.process resizes this buffer anyways, don't need to specify its size

~activations.postln;

// the circular buffer with triggered actions sending the location of the head at the attack
Routine {
	SynthDef(\JITcircular,{arg bufnum = 0, input = 0, env = 0;
		var head, head2, duration, audioin, halfdur, trig;
		duration = BufFrames.kr(bufnum) / 2;
		halfdur = duration / 2;
		head = Phasor.ar(0,1,0,duration);
		head2 = (head + halfdur) % duration;

		// circular buffer writer
		audioin = In.ar(input,1);
		BufWr.ar(audioin,bufnum,head,0);
		BufWr.ar(audioin,bufnum,head+duration,0);
		trig = FluidAmpSlice.ar(audioin,2205,2205,-47,-47,4410,4410,relRampUp: 10, relRampDown:1666, relThreshOn:12, relThreshOff: 9, highPassFreq: 85);

		// cue the calculations via the language
		SendReply.ar(trig, '/attack',head);

		Out.ar(0,audioin);
	}).add;

	// drum sounds taken from original code by snappizz
	// https://sccode.org/1-523
	// produced further and humanised by PA
	SynthDef(\fluidbd, {
		|out = 0|
		var body, bodyFreq, bodyAmp;
		var pop, popFreq, popAmp;
		var click, clickAmp;
		var snd;

		// body starts midrange, quickly drops down to low freqs, and trails off
		bodyFreq = EnvGen.ar(Env([Rand(200,300), 120, Rand(45,49)], [0.035, Rand(0.07,0.1)], curve: \exp));
		bodyAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.005,Rand(0.08,0.085),Rand(0.25,0.35)]), doneAction: 2);
		body = SinOsc.ar(bodyFreq) * bodyAmp;
		// pop sweeps over the midrange
		popFreq = XLine.kr(Rand(700,800), Rand(250,270), Rand(0.018,0.02));
		popAmp = EnvGen.ar(Env([0,Rand(0.8,1.3),1,0],[0.001,Rand(0.018,0.02),Rand(0.0008,0.0013)]));
		pop = SinOsc.ar(popFreq) * popAmp;
		// click is spectrally rich, covering the high-freq range
		// you can use Formant, FM, noise, whatever
		clickAmp = EnvGen.ar(Env.perc(0.001,Rand(0.008,0.012),Rand(0.07,0.12),-5));
		click = RLPF.ar(VarSaw.ar(Rand(900,920),0,0.1), 4760, 0.50150150150) * clickAmp;

		snd = body + pop + click;
		snd = snd.tanh;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidsn, {
		|out = 0|
		var pop, popAmp, popFreq;
		var noise, noiseAmp;
		var click;
		var snd;

		// pop makes a click coming from very high frequencies
		// slowing down a little and stopping in mid-to-low
		popFreq = EnvGen.ar(Env([Rand(3210,3310), 410, Rand(150,170)], [0.005, Rand(0.008,0.012)], curve: \exp));
		popAmp = EnvGen.ar(Env.perc(0.001, Rand(0.1,0.12), Rand(0.7,0.9),-5));
		pop = SinOsc.ar(popFreq) * popAmp;
		// bandpass-filtered white noise
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.13,0.15), Rand(1.2,1.5),-5), doneAction: 2);
		noise = BPF.ar(WhiteNoise.ar, 810, 1.6) * noiseAmp;

		click = Impulse.ar(0);
		snd = (pop  + click + noise) * 1.4;

		Out.ar(out, snd);
	}).add;

	SynthDef(\fluidhh, {
		|out = 0|
		var click, clickAmp;
		var noise, noiseAmp, noiseFreq;

		// noise -> resonance -> expodec envelope
		noiseAmp = EnvGen.ar(Env.perc(0.001, Rand(0.28,0.3), Rand(0.4,0.6), [-20,-15]), doneAction: 2);
		noiseFreq = Rand(3900,4100);
		noise = Mix(BPF.ar(ClipNoise.ar, [noiseFreq, noiseFreq+141], [0.12, 0.31], [2.0, 1.2])) * noiseAmp;

		Out.ar(out, noise);
	}).add;

	// makes sure all the synthdefs are on the server
	s.sync;

	// instantiate the JIT-circular-buffer
	analysis_synth = Synth(\JITcircular,[\bufnum, circle_buf, \input, input_bus]);
	train_base.fill(0,65,0.1);

	// instantiate the listener to cue the processing from the language side
	osc_func = OSCFunc({ arg msg;
		var head_pos = msg[3];
		"attack".postln;
		// when an attack happens
		if (classifying == 0, {
			// if in training mode, makes a single component nmf
			FluidBufNMF.process(s, circle_buf, head_pos, 128, bases:train_base, basesMode: 1, windowSize: 128);
			// ************* just out of curiosity, what is the default hopsize for FluidBufNMF.process()?
		}, {
			// if in classifying mode, makes a 3 component nmf from the pretrained bases and compares the activations with the set thresholds
			FluidBufNMF.process(s, circle_buf, head_pos, 128, components:3, bases:~classify_bases, basesMode: 2, activations:~activations, windowSize: 128, action:{
				//defer{~activations.plot};
				~activations.postln;
				~activations.numChannels.postln;
				~activations.numFrames.postln;
				~activations.sampleRate.postln;
				~activations.getn(3,3,{|x| // ************************************ explanation for why indexing into second frame here?
					x.postln;
					activation_vals = x;
					if (activation_vals[0] >= thresholds[0], {Synth(\fluidbd,[\out,1])});
					if (activation_vals[1] >= thresholds[1], {Synth(\fluidsn,[\out,1])});
					if (activation_vals[2] >= thresholds[2], {Synth(\fluidhh,[\out,1])});

					// ******************* since the displays will only change when this array changes anyway, no need to loop it in the routine,
					// ******************* just update the displays when the array updates.
					defer{
						activations_disps[0].string_("A: " ++ activation_vals[0].round(0.001));
						activations_disps[1].string_("B: " ++ activation_vals[1].round(0.001));
						activations_disps[2].string_("C: " ++ activation_vals[2].round(0.001));
					};
				});
			};
			);
		});
	}, '/attack', s.addr);

	// make sure all the synths are instantiated
	s.sync;

	// GUI for control
	{
		var win = Window("Control", Rect(100,100,590,100)).front;

		Button(win, Rect(10,10,80, 80)).states_([["bd",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidbd, [\out, input_bus], analysis_synth, \addBefore)});
		Button(win, Rect(100,10,80, 80)).states_([["sn",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidsn, [\out, input_bus], analysis_synth, \addBefore)});
		Button(win, Rect(190,10,80, 80)).states_([["hh",Color.black,Color.white]]).mouseDownAction_({Synth(\fluidhh, [\out, input_bus], analysis_synth,\addBefore)});
		StaticText(win, Rect(280,7,75,25)).string_("Select").align_(\center);
		PopUpMenu(win, Rect(280,32,75,25)).items_(["learn","classify"]).action_({|value|
			classifying = value.value;
			if(classifying == 0, {
				train_base.fill(0,65,0.1)
			});
		});
		PopUpMenu(win, Rect(280,65,75,25)).items_(["classA","classB","classC"]).action_({|value|
			cur_training_class = value.value;
			train_base.fill(0,65,0.1);
		});
		Button(win, Rect(365,65,65,25)).states_([["transfer",Color.black,Color.white]]).mouseDownAction_({
			if(classifying == 0, {
				// if training
				FluidBufCompose.process(s, train_base, numChans:1, destination:~classify_bases, destStartChan:cur_training_class);
			});
		});
		StaticText(win, Rect(440,7,75,25)).string_("Activations");
		activations_disps = Array.fill(3, {arg i;
			StaticText(win, Rect(440,((i+1) * 20 )+ 7,75,25));
		});
		StaticText(win, Rect(520,7,55,25)).string_("Thresh").align_(\center);
		3.do {arg i;
			TextField(win, Rect(520,((i+1) * 20 )+ 7,55,25)).string_("0.5").action_({|x| thresholds[i] = x.value.asFloat;});
		};

		win.onClose_({circle_buf.free;input_bus.free;osc_func.clear;analysis_synth.free;/*update_rout.stop;*/});
	}.defer;

	s.sync;

	// ************************** moved this up to the OSCFunc ***********************************
	// updates the activations
	/*	update_rout = Routine {
	{
	{
	activations_disps[0].string_("A: " ++ activation_vals[0].round(0.001));
	activations_disps[1].string_("B: " ++ activation_vals[1].round(0.001));
	activations_disps[2].string_("C: " ++ activation_vals[2].round(0.001));
	}.defer;
	0.1.wait;
	}.loop;
	}.play;*/
}.play;
)

Thanks! One has to be good-ish at something :wink:

You’re too kind.

That’s more like it :wink: Seriously, yesterday I had to recode a bug report because my variable names (single letters) made it too abstract… I blame it on my laziness, and on the language’s use of the tilde and its refusal of a capital right after it, which I find incredibly irritating — but hey, I’ll let it go for the greater good!

Now, real answers:

1. Completely right: I just copied the code from the bases buffer creation without thinking, so an empty buffer is indeed the best way.

As for the sample rate: the SR is accurate. SR is the frequency at which you have to play that buffer back to match real time, right? So here it is the source SR (44100) divided by the hopSize (64).
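
To spell it out, a quick sanity check you can run (assuming, as above, a hop of windowSize / 2 = 64):

// one activation frame per analysis hop, so the buffer's sample rate is the source SR over the hop
(44100 / 64).postln; // -> 689.0625, the rate at which activation frames tick by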

2. Indeed I found that to be more accurate, because it is centred. I also tried the maximum. Analyses are zero-padded (see below).
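
Just so the indexing is explicit (using the ~activations name from your version): samples in a multichannel buffer are interleaved, so frame f, channel c lives at flat index (f * numChannels) + c, and reading the ‘second frame’ looks like this:

// frame 1 (the middle frame) of a 3-channel buffer starts at index 1 * 3 = 3
~activations.getn(1 * 3, 3, { |vals| vals.postln }); // the three activations of that frame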

This is so nervous and short (it was for @rodrigo.constanzo originally) that the relation between the attack detection (sample-accurate), the time you analyse (a few samples after the few-sample-long spike, which is similar for most perc), and the activations is quite important. There is a lot of error since we do not average over a longer window, which would yield more accurate but slower results.

On zero-padding (you might already understand it, but in case someone else reads this thread): here we analyse 128 samples of a 128-sample slice with an overlap of 2, so the first frame has its first half as silence, the 2nd frame is ‘aligned’ to the source, and the 3rd frame has its 2nd half silenced, and each is windowed - the graphic @weefuzzy did here might help picture this.
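
Roughly, if I picture it right, for a slice starting at sample t the three frames line up like this (window 128, hop 64):

// frame 0 : t-64 .. t+63   -> first half is zero-padding
// frame 1 : t    .. t+127  -> sits fully on the slice, hence 'centred'
// frame 2 : t+64 .. t+191  -> second half is zero-padding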

3. Indeed, thanks, this is where my SC Routine ninja skills are collapsing. I’ll study your code.

4. I don’t understand this comment. They are at the top, in the class method, no? Do you mean that the -1 is not explained? @weefuzzy and I are planning to make a boilerplate text to explain it everywhere, since all the FluCoMa FFT processing uses the same syntax. It definitely seems to have disappeared from that version indeed, but it is in HPSS for instance. Sorry for that!
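
While we are at it, the FFT convention shared across the FluCoMa objects is, from memory (so double-check it against the HPSS doc until the boilerplate lands):

// windowSize : in samples (128 in the example above)
// hopSize    : -1 means windowSize / 2 (so 64 here)
// fftSize    : -1 means windowSize, rounded up to the next power of 2 if needed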

Thanks so much for doing this, I’ll study the code and give credit where credit’s due :wink:


At least you’re not doing things like what I saw the other day…

_x = 10
_f = _x + 4
x = _x + x * _f - f
f = _f + f * _f - _x

:frowning:


An SC user will understand the -1 to mean something to the effect of "use the default", so that’s cool.

When I was reading about actMode and figuring out what the appropriate buffer size should be, I tried to use the equation that’s given for the activations buffer (components * numChannelsProcessed channels and (sourceDuration / hopsize + 1) length). Guessing that hopsize is an argument, I looked below to get the value but couldn’t find it.
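
Restating that equation for anyone following along (the piece I couldn’t find was hopsize):

// activations buffer size, per the help file:
// channels = components * numChannelsProcessed
// frames   = (sourceDuration / hopsize) + 1
// e.g. a 128-sample slice, 3 components, 1 channel and a hop of 64 would give 3 channels x 3 frames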


The defaults are often given in that part of the help file. Often the last sentence will be "The default value is ________". I did see the -1, but of course that doesn’t help with the math.

There could also just be a link somewhere to the help file that has the Fluid FFT defaults. I just wouldn’t want a user to find themselves at a loss at this moment, when they’re trying to create or use the activations buffer with an actMode, and then decide to move on to a different idea or tool.

(I ended up doing some reverse math and inferring what it must be, but that seems unnecessary, and by now I’ve forgotten it!)

I hope that’s helpful going forward. Thank you!

What will happen quite soon is that the SC docs* will join the reference materials for the other platforms in being generated from a common base; these generated docs now have some common boilerplate on all the FFT params documentation noting how the defaults work.

* Presently it’s a slightly weird situation where the hand-rolled SC docs tend to be ‘ahead’ of the other platforms, because @tremblap will often draft these first, and I then use them as the basis for the generated stuff. Not terrifically efficient, but I need to finish up my reStructuredText → schelp translator before we can change that.


I’m so sorry! Indeed I thought that I had hand-rolled the FFT default sentence that you can find almost everywhere else in there too… it seems I missed a few, so @weefuzzy and I will decide how to dance on this to make sure RC1 has them, whichever way we go.
p