A fluid.datasetfilter~ object for conditional querying

Indeed. I guess this is all about the plumbing that goes before/after it. Less useful in a more complex querying context.

Wait, does that mean (theoretically) it would take 1 vector, like, per step of this? So fluid.ampslice~ to fluid.bufloudness~ (1 vector) to fluid.bufstats~ (1 vector), etc.? Or would “everything in the signal chain” take 1 vector to compute? (again, theoretically) So it would be fluid.ampslice~ to [a whole bunch of stuff] to a signal trigger (1 vector) instead.

I kind of pictured it happening in a JIT-y way. Say, if you signal-triggered a fat NMF, it would take however long it took to compute, but then respond with a signal trigger when it’s good and ready.

Well that makes me want it even more!

No, that was rather my point: anything that happens on the audio thread must happen within a vector’s amount of time, and the buf* objects can’t remotely promise this. So, signal triggers / outputs would just have to palm the work off to another thread to do the processing (which takes time), and then notify of completion. In effect, no different at all from using click~ / edge~ to get in and out of signal land (i.e. slop, but worse because it’s invisible).
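To make that offload-and-notify pattern concrete, here’s a minimal sketch (Python, purely illustrative; none of this is FluCoMa code). The “audio thread” only ever does the cheap, bounded bit of enqueueing a job; the slow work happens on a worker, which announces completion when it’s good and ready:

```python
# Hypothetical sketch (not FluCoMa internals): the "palm off to another
# thread" pattern. A trigger enqueues a slow job; a worker thread does
# the heavy processing and notifies on completion, exactly like leaving
# signal land and coming back with a bang.
import queue
import threading
import time

jobs = queue.Queue()

def worker():
    while True:
        job_id, data = jobs.get()
        time.sleep(0.05)  # stand-in for a slow buf* process (e.g. NMF)
        print(f"job {job_id} done")  # the "completion notification"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The "audio thread": enqueueing is fast and bounded; the work is not.
for i in range(3):
    jobs.put((i, None))

jobs.join()  # wait for all completion notifications before exiting
```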

What’s in place for some of the dataset clients (like the classifiers and regressors) in SC is that they have ‘true’ signal-rate functions that do the equivalent of transformpoint in signal land. Even this is slightly risky, because the time they actually take is still a function of the size of the model etc., so they aren’t immune from causing dropouts, but it’s less obviously doomed than, e.g., trying to do NMF.
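To see why even a ‘true’ signal-rate query isn’t immune, here’s a toy sketch (again illustrative, not FluCoMa internals): a brute-force nearest-neighbour lookup scales with the number of stored points, so the bigger the model, the bigger the bite out of the per-vector budget.

```python
# Illustrative only: why a per-query cost depends on model size.
# Brute-force nearest-neighbour lookup scales linearly with the number
# of stored points.
import time
import numpy as np

def nearest(dataset, point):
    # one "transformpoint-like" query: distance to every stored point
    return np.argmin(np.linalg.norm(dataset - point, axis=1))

rng = np.random.default_rng(0)
point = rng.random(13)  # e.g. a 13-dim descriptor frame

for n in (1_000, 10_000, 100_000):
    dataset = rng.random((n, 13))
    t0 = time.perf_counter()
    nearest(dataset, point)
    print(f"{n:>7} points: {(time.perf_counter() - t0) * 1e3:.2f} ms per query")
```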

:man_shrugging: We could just make you a gen~ that causes unpredictable CPU spikes in the audio thread, if you like…


To be clearer, everything in your whole DSP graph has to be done within a vector’s worth of time, otherwise audio doesn’t reach the sound card in time, and people get sad.
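For a sense of scale, some back-of-envelope arithmetic (assuming a 64-sample vector at 44.1 kHz, which is just a common default, not anything mandated):

```python
# The per-vector time budget: every callback, the entire DSP graph must
# finish within (vector size / sample rate) seconds.
vector_size = 64        # samples per signal vector
sample_rate = 44_100    # samples per second

budget_ms = vector_size / sample_rate * 1000
print(f"time budget per vector: {budget_ms:.2f} ms")  # ~1.45 ms
```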


Yup yup, that makes sense.

Consider my enthusiasm adequately curbed!

This is what my “explanation” of signal-rate-is-no-panacea was meant to get across.

Back to the original idea then: analyse the input audio, and fork on a single value in Max land between two separate bits of patch (see the sketch below). I’m still trying to get that working, so it will maybe eventually emerge as an example (or maybe just around here as a failed experiment to be documented…)
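Roughly this shape, for the record (a hypothetical Python sketch; the descriptor dimensions, the pitch-confidence condition, and the threshold are all invented for illustration):

```python
# Hypothetical sketch of the fork: one analysis value (here, pitch
# confidence) decides which of two matching spaces a query runs against.
import numpy as np

rng = np.random.default_rng(1)
with_pitch = rng.random((500, 8))   # descriptors including pitch columns
without_pitch = with_pitch[:, :6]   # same points, pitch columns dropped

def query(frame, pitch_confidence, threshold=0.8):
    if pitch_confidence > threshold:
        # pitched material: match in the full space, pitch included
        return np.argmin(np.linalg.norm(with_pitch - frame, axis=1))
    # noisy material: match in the reduced space, ignoring pitch
    return np.argmin(np.linalg.norm(without_pitch - frame[:6], axis=1))

frame = rng.random(8)
print(query(frame, pitch_confidence=0.95))  # pitched fork
print(query(frame, pitch_confidence=0.30))  # unpitched fork
```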


Yeah, curious to hear your results. I’m wondering if the overall “matching space” will change a lot between the two forks, even though you’re only ignoring pitch. On a theoretical level I would think it would be seamless (in terms of sonic output), but it’s rarely the case that things go exactly as one thinks.