KDTree super inefficient

I just want to make sure there is a ticket for the server-side KDTree running at very high CPU even when it is not being queried. This discussion happened over Slack during the plenary but was never registered here. Would be great to get that fixed.

Doing nothing? Like just existing on the server, taking CPU? @weefuzzy does it make sense to you? @spluta do you have an example? At least a number of points? Without that info it is hard to troubleshoot. Here it is not inefficient, so maybe we sorted it, or maybe our trees differ…

There is a quiescent overhead with all the objects.

For NRT objects I improved it somewhat a couple of days ago, because I noticed we were updating and constraining the parameters every vector rather than only when a trigger came in.

For RT objects (which are now most of the things causing you woe, here and in “FluidMLPRegressor killing sound on a server”) it’s less straightforward. I’ll have a look to see whether:

  1. for these classes, the parameter updates can also be made conditional (not so sure);
  2. the parameter updating code itself can be made less greedy. The problem is parsing strings and arrays.

You guys don’t remember this conversation from when you were running the plenary after you hadn’t slept for 3 weeks?

This should clarify. All is hunky dory until you add the outBuffer to the ~tree. CPU goes from 3% to 60% just from adding the buffers.

Sam

//run this
(
~inputPoint = Buffer.alloc(s,2);
~predictPoint = Buffer.alloc(s,2);
~pitchingBus = Bus.control;
~catchingBus = Bus.control;

~ds = FluidDataSet.new(s,\randomName);
~tree = FluidKDTree.new(s, 1, ~ds);

d = Dictionary.with(
        *[\cols -> 2,\data -> Dictionary.newFrom(
            4000.collect{|i| [i, [ 1.0.linrand,1.0.linrand]]}.flatten)]);

~ds.load(d, {~tree.fit(~ds)});
)

//now do these one by one and watch the cpu
~tree.inBus_(~pitchingBus);
~tree.outBus_(~catchingBus);
~tree.inBuffer_(~inputPoint);
~tree.outBuffer_(~predictPoint);
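
To put numbers on “watch the cpu” rather than eyeballing the IDE status bar, a throwaway monitor like this can be left running while stepping through the lines above (just a sketch; ~cpuWatch is an arbitrary name, and avgCPU/peakCPU are the same figures the status bar shows):

// rough CPU monitor: posts the server's average and peak CPU once a second
(
~cpuWatch = fork {
    loop {
        "server CPU  avg: %  peak: %".format(s.avgCPU.round(0.1), s.peakCPU.round(0.1)).postln;
        1.wait;
    }
};
)

~cpuWatch.stop; // stop the monitor when done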

I’ve pushed a fix that @tremblap is going to test tonight.

This code runs at 0.17% CPU. Is that what you get, @spluta?

No. Once I add the inBuffer and outBuffer to the KDTree, I am at 48.4%, so maybe this fixed it.

Sam


Hey all. Unfortunately this isn’t fixed for me in Alpha5. While the base server CPU is very low, around 5%, the spikes seem worse, jumping to around 250% constantly. No matter how often you ping the server, the search just can’t keep up. Run it with the SinOsc and it distorts like crazy.

(
~inputPoint = Buffer.alloc(s,2);
~predictPoint = Buffer.alloc(s,2);
~pitchingBus = Bus.control;
~catchingBus = Bus.control;

~ds = FluidDataSet.new(s,\randomName);
~tree = FluidKDTree.new(s, 1, ~ds);

d = Dictionary.with(
        *[\cols -> 2,\data -> Dictionary.newFrom(
            4000.collect{|i| [i, [ 1.0.linrand,1.0.linrand]]}.flatten)]);

~ds.load(d, {~tree.fit(~ds)});
)

~tree.inBus_(~pitchingBus);
~tree.outBus_(~catchingBus);
~tree.inBuffer_(~inputPoint);
~tree.outBuffer_(~predictPoint);

(

{
    var trig = Impulse.kr(10); //can go as fast as ControlRate.ir/2
    var point = 2.collect{TRand.kr(0,1,trig)};
    point.collect{|p,i| BufWr.kr([p],~inputPoint,i)};
    //Poll.kr(trig,point);
    Out.kr(~pitchingBus.index,[trig]);
    //Poll.kr(In.kr(~catchingBus.index),BufRd.kr(1,~predictPoint,Array.iota(5)));
    Silent.ar;
}.play(~tree.synth,addAction:\addBefore);

)

{SinOsc.ar(200, 0, 0.1)}.play
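
If it helps to see what comes back, the commented-out Poll above can also live in its own little reader synth, something like this (a sketch only; it assumes the tree writes its result into ~predictPoint and signals on ~catchingBus, as the setters above suggest, and it only reads the two frames that buffer actually has):

// optional: post the contents of ~predictPoint whenever something arrives on ~catchingBus
(
{
    var done = In.kr(~catchingBus.index);
    Poll.kr(done, BufRd.kr(1, ~predictPoint, Array.iota(2)), \nearest);
    Silent.ar;
}.play(~tree.synth, addAction: \addAfter);
)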

Hmmm. That code is only peaking at 30-40% on my machine here (2015 MacBook Pro), on a debug build of SC itself and a not-fully-optimised build of the FluCoMa stuff. I wonder what’s going on.

I was wondering if the distro we got is a debug build or something?

2013 Mac Pro has the same problem.

@tedmoore, can you try this out?

Both my computers are on 3.11.0 [e341b495]. I could try the RC they made today.

Yeah, when I run that my peak CPU bounces around 85-90%. It also jumped to 117% for a moment when I initialized the KDTree.

I am on Alpha05.

SC 3.11.0

Both of my computers have slower single-core performance than both of yours, but either way, “the peaks are too goddamn high”! In case that reference is missed: https://www.youtube.com/watch?v=79KzZ0YqLvo


Still freaking out on 3.11.1 RC.

Sam

Yeah, that reference didn’t make it across the Atlantic intact :laughing:

@tedmoore yes, there are spikes when initialising, but these are a separate thing I think.

Both: how big is your FluidManipulation.scx file? If I do a release build it’s about 2.9 MB, but in debug it’s about 22 MB, so not a subtle difference…
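
If it helps, the size can also be checked from sclang (a sketch: the path below is just a guess at where the extension might live, so point it at wherever FluidManipulation.scx actually is on your system):

// post the size of the server plugin binary in MB (path is a guess; adjust as needed)
(
var path = Platform.userExtensionDir +/+ "FluidCorpusManipulation/plugins/FluidManipulation.scx";
(File.fileSize(path) / 1e6).round(0.1).postln;
)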

Dang. 2.7 MB.

Ho hum. I’m going to email you both a definitely-release build of the plugins just now, to see if that changes anything. I’m just en route to bed, but will look at this more tomorrow (including road-testing the build you folks have at the moment).

May your dreams bring you to the land of solutions…

But I don’t think you sent us the file. Could you do that?

I can confirm that it is a release build. I’m almost offended :slight_smile:

With the exact binaries that were posted yesterday, there is a spike at 70% on startup, then this sits at a 3% average and 36-45% peak…

In the past we had issues with @spluta having old binaries remaining active (during the plenary, for instance), so can you run this code after yours:

~tree.prSendMsg(\version,[],{|x|x.postln},[string(FluidMessageResponse,_,_)]);

But first, please restart your Mac with the binaries replaced. If that is still problematic, I’ll rebuild them for you after manually deleting everything…

After a fresh restart (still the same binaries, but no potential for caching) I hover at 2-3% average and 25-35% peak. So all is good here… which reminds me of the problem we had during the plenary.

OK, I remembered that you run your hardwareBufferSize quite low. I get the same performance as you when I run at 64 i/o. At 128 i/o I get some glitches. At 256 i/o it is clean but near 80% peak. The results I have further up in the thread are at the default 512 i/o.
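
For anyone comparing numbers, that setting lives on ServerOptions and only takes effect after a reboot, e.g. (assuming the default server s):

// match hardware buffer sizes before comparing CPU figures
s.options.hardwareBufferSize = 64;
s.reboot;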