@outputbuffer and @deststartchan for fluid.buf-splitters

In making the patch for the thread about fades and JIT buffers I became aware of the fact that most of the time when I use fluid.bufhpss~ (or the other buffer-based decomposition objects) I just end up fluid.bufcompose~-ing the resulting material back into a single (stereo) buffer. Like, I don’t think I’ve ever HPSS’d something, and kept the resulting buffers completely separate.
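To be concrete, what I end up doing is something like this (buffer names are just placeholders, a mono source is assumed, and each object gets bang'd once the previous one reports done):

```
fluid.bufhpss~    @source src  @harmonic harm  @percussive perc
fluid.bufcompose~ @source harm @destination hp @deststartchan 0
fluid.bufcompose~ @source perc @destination hp @deststartchan 1
```

i.e. after all that, `hp` is just a two-channel buffer with the harmonic layer in the first channel and the percussive layer in the second.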

So this is a feature request to be able to designate independent @deststartchans for each of the @output types of each of the buffer-based decomposition objects (e.g. @harmonic and @percussive in the context of fluid.bufhpss~).

This would also bring it in line with the behavior of fluid.bufnmf~, which creates a “single” buffer from the resulting decomposition, rather than individual buffers for each @component (which would obviously be a nightmare for the n-amount-ed output of the process).

edit:

OR

Instead of specifying @deststartchan for each of the buffers, the overall behavior can be treated like other decompositions in general. That is, when using fluid.bufhpss~ instead of setting independent @harmonic and @percussive output buffers, you just set a @features (or whatever) buffer, which works like (nearly all of) the other fluid. objects (nmf, descriptors, etc…).

The latter suggestion is interesting - @weefuzzy might have objections, but maybe not as many as I have to your first proposal. The problem is finding an interface that satisfies everyone, which is impossible, so consistency is what we aim for (hence my interest in your second interface), yet all the layer objects currently give an independent output buffer for each layer…

Yeah, the interface for my first suggestion could potentially be awkward.

Your final sentence confuses me though, as I don’t know if you are saying that having independent output is or is not the norm across the toolbox. It seems to me, hence my suggestion, that it is the latter, with nmf/descriptor/stats all dumping everything into a single buffer.

If you look per type of output, they are consistent:

- slicers give one buffer (slices)
- layers give one buffer per archetype, to respect channel counts
- objects give one channel per object and one buffer per archetype, because we often need them together… and because it is consistent with the layers too
- stats and descriptors interleave because we change paradigm (from audio to control to stats)

From your own list, everything spits out a single buffer, except the layers stuff.

Yeah, I’m generally sympathetic to the idea but wouldn’t like to change the interface in such a way that we lose the ability to route to distinct buffers. The perspective on this w/r/t NMF is interesting, because I really hadn’t thought of it in those terms!

I guess a simple possibility is that the clients for these sorts of object could check to see whether the output buffers all point to the same thing, and then deal with the appropriate offsets themselves. The open question is then what we do with multichannel inputs: e.g., do we want to output all Hs followed by all Ps, or alternate? My gut says that the first is simpler all round, but I know how some people love their interleaving.
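Just to spell out the two layouts for, say, a stereo input (purely illustrative):

```
all Hs then all Ps : H(L) | H(R) | P(L) | P(R)
interleaved        : H(L) | P(L) | H(R) | P(R)
```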

An important advantage to not interleaving would be if you were going to then apply something further to a single class: let’s say you’re using sines and you want to then process the residual with transients to get a sines + transients + residual type thing. That would be more of a pain in the multichannel case if the output didn’t go sines[1-N] followed by residual[1-N], because you couldn’t then do the second step in a single call.
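As a sketch of what I mean (a hypothetical combined output buffer; attribute names as per the existing buf* objects; N stands for the channel count of the source): if that buffer were laid out sines[1-N] followed by residual[1-N], the second pass could be a single call,

```
fluid.buftransients~ @source combined @startchan N @numchans N @transients trans @residual resid
```

whereas an interleaved layout would need a call per channel (or a round of fluid.bufcompose~ first), since @startchan / @numchans can only select a contiguous block of channels.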

UI bike-shedding in real time, folks. I hope you enjoyed it.

There’s likely going to be fluid.bufcompose~-ing involved in most processes anyways. Par for the course I think.

That’s complicated for lots of objects though, and not unique to this. I don’t remember what way you did it before, but it’s easy enough to be consistent with that here too.

This is indeed an important use case, but one that will require more minding (and potentially abstract-ing) once there’s a good sounding “three way split” for audio. (does the transient of a sines residual sound any good?!)

As in, this kind of processing would be well served with code snippets where you copy/paste what you need to split it in three ways (in the best sounding manner, or generally preferred manner etc…). Like, that would be a common enough thing to be a pain in the butt to have to code from scratch every time (i.e. making buffers, setting offsets, etc…)
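Something along these lines, say (buffer names are placeholders, mono source assumed, each object bang'd after the previous one finishes, and the destination made/resized to three channels):

```
fluid.bufsines~      @source src     @sines sines      @residual nosines
fluid.buftransients~ @source nosines @transients trans @residual resid
fluid.bufcompose~    @source sines   @destination split3 @deststartchan 0
fluid.bufcompose~    @source trans   @destination split3 @deststartchan 1
fluid.bufcompose~    @source resid   @destination split3 @deststartchan 2
```

which leaves `split3` as a three-channel sines / transients / residual buffer that should (settings permitting) null-sum back to the source.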

Only sometimes: it depends (as with all these things) on how the source material and the models in use relate to one another. For some things you will probably get better results doing sines first then transients, but for other things the obverse will be true. Part of the challenge is that it’s a chicken-and-egg problem: both analysis steps have an easier job if the other one has been done first. Intuitively, I’d guess that for very clearly harmonic material, with soft transients and FFT settings generous enough to separate the spectral peaks well, it would make sense to do sines first.


Ah right. I just meant the residual from sines in general. I’ve not done too much “three way” stuff, but what I did do, I HPSS’d then transient-ed the P. (this was all on percussive-ish material, to add a caveat)

This is getting a bit tangential, but what would be great is having some kind of matrix interface/configuration/“thing” where you can feed it a sound, then select which of all the available permutations you find best sounding (while still null-summing). (Like, selecting one sound you like removes the options that don’t null-sum with it, etc…)


Actually, remembering the discussion around @jamesbradbury’s Reaper scripts, your thoughts/vote on that were to go with ‘takes’ rather than separate tracks, which would be more akin to channels if we map the metaphor over.

Thinking about this further, this could be a cool way to test out some of these iterative/dependent processes in an interactive way. Even if it’s only a semi-static thing for the KE (like what you teased for the NMF stuff).

I’ll make another thread for it now to add it as a feature request.

Yes, it’s just that I’m not sure the metaphor does map, because Max / SC / PD are inherently more open ended. I like takes in Reaper because it’s tidy-by-default but easy enough to explode out to new tracks if that’s what you want.

Yes, most of the choices made around the Reaper scripts make sense in Reaper, for Reaper reasons. It is easier to audition takes, keep them sample-accurately aligned with their source and, as Owen pointed out, explode them out to new tracks if you see fit. It is much harder to do all of those things and to implode tracks, so those choices were made for those reasons. Also because Owen told me to.

Fair enough.

I guess for me, conceptually they are the “same thing”, like an NMF.