SC people: interface questions

@spluta @alicee @tedmoore

I’m in the process of pulling apart the SC stuff to address a range of annoyances with respect to non-realtime objects, datasets etc. This is 1/n of these sorts of question, I’m afraid. (n might end up only being 2 though…)

I’m proposing to get rid of the symbolic name for FluidDataSet and FluidLabelSet and instead make them behave like Buffer, with an internally assigned integer ID that can be overridden if you really want. The name doesn’t seem to add anything in SC, and it also accounts for a good deal of undesirable complexity under the hood.
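To make that concrete, a rough sketch of how construction might look (the exact final form is still open):

~ds = FluidDataSet(s, \mySet);  // current interface: a symbolic name in the constructor

~ds = FluidDataSet(s);          // proposed, by analogy with Buffer: internally assigned integer ID
~ds = FluidDataSet(s, 2000);    // ...which you could override if you really want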

  1. how much of your extant code would this break,
  2. and how helpful would a graduated change be, given where you are with your pieces right now?

Code that would break would be anything that relies on the symbolic name explicitly (e.g. using FluidDataSet.*at, or the symbolic name directly in a UGen input).
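For example (a rough sketch: FluidDataSet.*at is the method named above, and the UGen line is a hypothetical stand-in rather than a real class):

~ds = FluidDataSet.at(\mySet);                       // retrieval by symbolic name: breaks
{ SomeFluidUGen.kr(Impulse.kr(10), \mySet) }.play;   // symbolic name straight into a UGen input: also breaks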

The most brutal version of this change would simply ignore any value for the second constructor argument that wasn’t an integer. However, I can imagine some more graduated versions that essentially would deprecate the use of names, with a view to getting rid of them before public release.
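Roughly, the graduated path could look like this inside the class (purely a sketch, not a commitment to any particular implementation):

// hypothetical deprecating constructor: accept anything for now, warn on non-integers, ignore them
*new { |server, id|
    if(id.notNil and: { id.isInteger.not }) {
        "FluidDataSet: symbolic names are deprecated and will go away before release.".warn;
        id = nil;
    };
    ^super.new.init(server, id ?? { UniqueID.next })
}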

You are saying to get rid of the label? I fully support this. It is completely unnecessary, since a DataSet is an object with a variable name. I think I have even suggested this at some point, and have gotten around the issue of the symbolic name by assigning a random name.
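Something along these lines, as a minimal sketch of that workaround:

// assign a throwaway name so I never have to think about it
~ds = FluidDataSet(s, ("ds" ++ UniqueID.next).asSymbol);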

My 2¢ or whatever you say over there.

Sam


You did :wink: The only hold-ups were that we need something behind the scenes (but this can be basically invisible in SC), and that the convention in Max/PD would be to use a symbol (as with buffer~). But it’s just a pain in SC because, as you say, there are variables, and types, and other goodness.

How much extra work would it cause you if I:
a. just nuke them,
vs
b. continue to support the symbolic argument in the constructor (but do nothing with it),
vs
c. continue to support the symbolic argument and provide a temporary language-side mapping between symbols and integer IDs (rough sketch below)?
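By a mapping I mean nothing fancier than this sort of thing (illustrative only, not the actual implementation):

// keep old symbolic names working by translating them to stable integer IDs
~nameToId = IdentityDictionary.new;
~idFor = { |name|
    ~nameToId.atFail(name, {
        var id = UniqueID.next;
        ~nameToId[name] = id;
        id
    })
};
~idFor.(\myDataSet);  // the same symbol always yields the same integer ID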

tuppence ha’penny :smiley:

I think we should nuke it.

:joy: Ok, duly noted. I’ll see what Alice and Ted have to say too (but nuking is certainly simpler for me)


If you had done it last week, I might not have spent 4 hours head-scratching over an error/blockage that came down to trying to access the data set via a symbol, not a string.
(in other words, fine by me)

nuke it


@alicee that was a different problem (not the labels of the data, which we are keeping, but the name of the dataset object itself), as in:

~ds = FluidDataSet.new(server, \danamewewanttonuke)

oh yer. i knew that. obvs.


@spluta @alicee @tedmoore

Similar flavour of question. We’re able to get rid of the need to communicate with things like KDTree via a control Bus, and instead make it so you can instantiate a UGen directly in a synth via an instance kr method, like

(
~tree = FluidKDTree();
// fit the tree, then:
{
    var input = <some RT feature thingy>;
    // shove input into a buffer
    ~tree.kr(inputBuffer, outputBuffer, lookupDataset);
    // get the result from outputBuffer
}.play(s);
)

So, again, will your lives be made palpably easier if I do some kind of transition interface that allows you to keep using it the current way, or should I just nuke it? :rocket: :fire: :radioactive:

This is great. I have a few questions.

  1. So does it search the tree every control block, regardless of whether the input has changed? If so, keeping the control bus triggering may be useful if the lookup needs to be much less frequent.
  2. Does it still allow for a different dataset to be the return values? (I’m assuming this is what “lookupDataset” means?)
  3. What would be even more jazzy is if the values coming out of ~tree.kr were a kr stream:
    out_vals = ~tree.kr(inputBuffer, lookupDataset)
    Getting them back in a buffer means I’ll very likely have to pull them out of that buffer anyway. They could also be sent from the FluidKDTree synth on a Bus? How possible is that at this moment of engineering?

Of course, curious to hear what @spluta and @alicee think.

Thank you thank you!
T

Yeah, sorry, the triggering stays. Bad Demo Code :grimacing:

Yes and yes. In principle you should be able to modulate the lookup dataset now as well (which might be useful sometimes).

I’d tend to agree. I’m not doing it immediately, in part because I’m trying to keep this round of refactoring contained (hah!), and in part because there’s a bit of a can of worms about what we’d possibly lose by using control channels instead (the worry is scalability up to enormous dimension counts, but I don’t think that’s necessarily more pressing than the usability).

Ooh yeah. This is nice.

Agreed with Ted here.

  1. To add an annoying wrinkle to Ted’s point: I have never understood why the Fluid stuff runs at k-rate. Why not make it demand-rate (triggered), or have it run at its own rate the way FFTs do? I say this for obvious reasons: I need to run my software at a block size of 64, which has made the kr Fluid stuff kind of impossible to use.

  2. Makes sense.

  3. Yes! Maybe both ways at the same time? (This would actually be super easy, I think, with a pseudo-UGen just wrapping tree.kr and adding the code to make it a kr stream; rough sketch after this list.)

  4. Would love all the kr stuff to work this way.
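Something like this, maybe (a hedged sketch: the class name is made up, and the argument order just follows the demo above, which may well change):

// hypothetical pseudo-UGen: run the lookup, then read the output buffer back out as kr values
FluidKDTreeKr {
    *kr { |tree, inputBuffer, outputBuffer, lookupDataset, numDims = 2|
        tree.kr(inputBuffer, outputBuffer, lookupDataset);
        ^numDims.collect { |i| Index.kr(outputBuffer, i) }
    }
}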


Partly because I have an e-mail from you from 2018 opining that demand rate was an unusable abomination (I paraphrase) :heart: TBH, I’m not too clear on (idiomatically) when one should prefer dr to kr triggers.

When you say the fluid kr stuff, though, do you mean the non-Buf versions like, say, FluidMFCC? I can see the use in making that triggered, I guess, though it essentially is w/r/t your hop size for the FFT stuff (internally, using magic).

Hmm, yes, could probably make something convenient along those lines.


I need to see what I actually said. I don’t remember saying that nor do I agree with myself, so…

BUT: the FFT stuff is actually a better model. FFT and IFFT get data in and out through buffers (just like the Fluid things) and calculate at their own rate, independent of the block size. In their case this is determined by the window size, but it could at least conceivably be determined by the user. I don’t know the source code there, so I can’t say.
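For reference, the vanilla chain I mean (analysis recomputes once per hop, regardless of the control block size):

(
{
    var in = SoundIn.ar(0);
    var chain = FFT(LocalBuf(1024), in, hop: 0.5);  // recalculates every 512 samples
    IFFT(chain).dup * 0.1;
}.play;
)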

What I am saying about the kr stuff is that I have been unable to use it, mostly because my block size is 64, and the efficiency goes out the window at that block size. I mean KDTree, MLPRegressor and such. I have had to revert to staying in the lang, which is super inconsistent as far as speed is concerned - sometimes quick…sometimes not.

I’ll forward you the e-mail, but I did paraphrase somewhat :wink:

The UGen access to FluidKDTree, FluidMLPRegressor et al oughtn’t be affected intrinsically by the block size, because they’re triggered (I’ll check). However, it could be that the overhead of accessing those models is just impractical at such a small vector size (depends on the size of the tree / network). The next release has made the KDTree appreciably faster for both fitting and querying after Gerard and I gave it some love.

In due course, I’ll also have another look at the demand-rate semantics and see if they make sense for any of this stuff.

So I’m trying to reproduce this, to see if the cueing (which is not dissimilar to the FFT model) is the problem, or has a problem. It should not compute more often than you ask it to, but I need to confirm a few behaviours with you.

  • in FFT land, if your process is super expensive CPU-wise (as most of our stuff is quite expensive), what happens if it is not able to compute within one block? Do you get drop-outs? Do you have a simple, heavy FFT/IFFT patch that is so hard it cannot compute within your 64 samples and therefore loses a packet? I could code one (something like dirty convolution), but you might know the answer; there’s a rough sketch after this list.

  • in Fluid land, what are the settings you are trying to use that choke it? Just name a task and I’ll reproduce it on Alpha06, then check if it is still the case on Alpha07.
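As a starting point, a sketch of the kind of deliberately heavy patch I have in mind (stacked big FFT->IFFT chains rather than convolution, just to load each callback):

(
{
    var sig = SoundIn.ar(0);
    // pile up large FFT->IFFT round trips purely to burn CPU per callback
    16.do {
        sig = IFFT(FFT(LocalBuf(8192), sig));
    };
    sig.dup * 0.1;
}.play;
)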

@weefuzzy is deep in a refactor of the SC stuff, so I’m putting together more and more examples. This would be good to have.

thanks

I’m gonna bow to the greater experience of y’all on this one; I don’t have any bespoke requests atm… & no need for gradual change on my account.


OK. I don’t think my suggestion helps the issue. The block size is set to 64 in SC by default, but this isn’t the issue. The issue is the hardwareBufferSize, which is 512 by default. Changing this to 64, which I personally need to do, is what causes horrendous CPU outcomes. I checked on FFT processes, and those also get dramatically more CPU-intensive at a lower hardwareBufferSize: an intense process jumps from 0.4% CPU to 8%.

I don’t fully understand why this is the case. The hardwareBufferSize does not change the control rate, which is a separate value. It changes how often you access the audio hardware.
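For anyone reproducing this, the two settings in question (values as described above):

// control block size: SC's default is already 64 samples
s.options.blockSize = 64;
// audio driver buffer: 512 by default here; dropping it to 64 is what hurts
s.options.hardwareBufferSize = 64;
s.reboot;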

So…I don’t get it, but my idea won’t help.