FluidMLPRegressor.kr has the same CPU usage whether it is being triggered or not

Well, the title says it all. The code below adds 32 FluidMLPRegressors and then makes 32 kr synths. The CPU usage is the same whether the UGens are being triggered or not (multiply the trigger by 0 to get no trigger). This is really messing me up, because for my instrument I need 64 (or more) on one server, even though I am only triggering one at a time. The CPU usage should actually be quite low, but it is wigging out.

I can provide the model file, but Discourse will not let me upload it.

(
{
	32.do { |i|
		i.postln;
		a = FluidMLPRegressor(s, [3, 3]);
		0.5.wait;
		a.read("/Users/spluta1/Library/Application Support/SuperCollider/Extensions/LiveModularInstrument/modules/NN_Synths/06_CrossFeedback1/model0/modelFile0.json", {
			{
				var output, input = LFNoise2.kr(1).range(0, 1) ! 4;
				var trig = Impulse.kr(50) * 1;
				var inputPoint = LocalBuf(4);
				var outputPoint = LocalBuf(16);

				input.collect { |p, i| BufWr.kr([p], inputPoint, i) };
				a.kr(trig, inputPoint, outputPoint, 0, -1);
				output = (0..15).collect { |i| BufRd.kr(1, outputPoint, i) };
			}.play(s);
		});
	}
}.fork
)

Replying to myself. I figured out that you CAN pause the synths, so that should work for me. I thought I had already tried that though and found it didn’t work. I’ll try again.

Edit: This is not a great solution, as it adds a lot of fiddly book-keeping, and these shouldn't be taking up so much CPU when they aren't being triggered.
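
For anyone finding this later, here is a minimal sketch of the kind of pause/resume book-keeping I mean, assuming the synths live in an Array. The names ~regressorSynths and ~setActiveRegressor are just placeholders, not from my actual instrument code:

(
// Pause every regressor synth except the one currently in use.
// Node:run(false) pauses the node on the server so its UGens are skipped
// entirely; run(true) resumes it.
~setActiveRegressor = { |activeIndex|
	~regressorSynths.do { |synth, i|
		synth.run(i == activeIndex);
	};
};
)

// e.g. keep only regressor 3 calculating and pause the other 63:
// ~setActiveRegressor.(3);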

Looks like someone’s getting triggered!

You could say it has been bugging me.

But I did figure out the pausing of synths method to get things to stop their hoggin.

Edit: Yeah. Yikes. I didn’t mean to sound so whiny in the post. Apologies. I meant to simply be reporting a behavior that I found inconsistent with the UGen design.

I didn’t feel whined-at :heart: Thanks for the report – upon profiling, there’s a whole slew of sub-optimal things there, which are all my fault. Some are easy to fix, some will require much more thought, so I’m glad you’ve found a workaround in the meantime.

It needs a caption to be canon :wink:
[image: womanyellingcat]

[image: the meme, captioned "PLAN"]

At first, I thought this was an issue with FluidMLPRegressor, as the kr implementation was giving me tons-o-NaNs on instantiation, which can cause synths (especially filters) to blow up. BUT, it was actually my fault - or the fault of the help file. The help file suggests this syntax for creating the input/output buffers for the FluidMLPRegressor:

var inputPoint = LocalBuf(controlValsList.size);
var outputPoint = LocalBuf(valsList.size);

But I think you want this:

var inputPoint = (0!controlValsList.size).as(LocalBuf);
var outputPoint = (0!valsList.size).as(LocalBuf);

Creating the LocalBuf out of 0s guarantees a valid starting buffer for the output, without having a problem before the kr jammer gets going.
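
As a side note (untested, from memory): I think LocalBuf also has a .clear method, which adds a ClearBuf to zero the buffer and passes the buffer number through, so this should be an equivalent way to get a clean starting buffer:

var inputPoint = LocalBuf(controlValsList.size).clear;
var outputPoint = LocalBuf(valsList.size).clear;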

Hmm yes. This is a good idea. I also started putting a Sanitize.kr after the output of a FluidMLP so that any NaNs that come out (on instantiation or otherwise) are changed to zeros, just to protect against other problems down the line…
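
For anyone reading later, here is a rough, untested sketch of what that looks like, based on the synth from the first post; ~mlp here is just a placeholder for an already-loaded FluidMLPRegressor with 4 inputs and 16 outputs:

(
{
	var output, input = LFNoise2.kr(1).range(0, 1) ! 4;
	var trig = Impulse.kr(50);
	var inputPoint = (0 ! 4).as(LocalBuf);
	var outputPoint = (0 ! 16).as(LocalBuf);

	input.collect { |p, i| BufWr.kr([p], inputPoint, i) };
	~mlp.kr(trig, inputPoint, outputPoint, 0, -1);

	// Sanitize replaces NaN/inf with its replacement value (0.0 by default),
	// so a bad frame at instantiation can't poison whatever reads the output.
	output = (0..15).collect { |i| Sanitize.kr(BufRd.kr(1, outputPoint, i)) };
	output;
}.play(s);
)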