Clip (or zmap flag) attribute for fluid.bufscale~

I could have sworn I made a post about this before already, but search doesn’t turn it up, so apologies if this is redundant.

I think fluid.bufscale~ should have a clip attribute (and/or a separate fluid.bufclip~, which would be a pain…), since otherwise you have to guess a useful @inputlow value before starting, and you risk either wasting resolution or getting loads of garbage data further down the line from loads of errors like this:
fluid.bufstats~: Negative weights clipped to 0

So granted, it is being clipped at the fluid.bufstats~ level, but this not only spams the Max window with yellow messages, it can also heavily skew your data if you’re expecting one thing and get another (again, because you had to “guess” what a useful @inputlow value would be).
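(To make the problem concrete, here’s a minimal sketch, in Python and purely illustrative, of a zmap-style mapping with and without an input clamp; a too-optimistic @inputlow guess sends quiet frames negative, which is exactly what triggers the fluid.bufstats~ warning above:)

```python
def bufscale(x, in_lo, in_hi, out_lo, out_hi, clip=False):
    """Linear mapping a la scale/zmap; clip=True clamps the input first."""
    if clip:
        x = max(in_lo, min(in_hi, x))  # zmap-style clamp before mapping
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Guessing @inputlow at -60 dB when a frame is actually at -90 dB:
print(bufscale(-90.0, -60.0, 0.0, 0.0, 1.0))             # -0.5 -> negative weight
print(bufscale(-90.0, -60.0, 0.0, 0.0, 1.0, clip=True))  # 0.0  -> safe
```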

With this coming down the pipeline (hopefully), being able to dBtoa would help here, as I could more easily plug the output of fluid.bufloudness~ into fluid.bufscale~ and know what I’ll get. For now I’m having to eyeball values or get spam.
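(For reference, the conversions in question are the standard ones Max’s dbtoa/atodb implement; a Python sketch:)

```python
import math

def dbtoa(db):   # dB -> linear amplitude, as in Max's [dbtoa]
    return 10.0 ** (db / 20.0)

def atodb(amp):  # linear amplitude -> dB, as in Max's [atodb]
    return 20.0 * math.log10(amp)

print(dbtoa(-60.0))  # 0.001
print(atodb(1.0))    # 0.0
```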

These are the sorts of things that @tremblap has coded up.

I’ve grudgingly come round to the idea of having clipping in-object (but we’re still haggling about how complicated PA is allowed to make it, and I wouldn’t rule out a standalone clip operation in the future).
FWIW, I don’t see that you’d end up doing less guessing, you’d just deform your data differently (although perhaps more safely).

dB to linear conversion and Hz to MIDI pitch are also in there, but those are likewise in the midst of interface discussions.
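(The Hz-to-MIDI conversion being discussed is presumably the standard one, as in Max’s ftom/mtof; a Python sketch:)

```python
import math

def ftom(hz):    # Hz -> (fractional) MIDI note number
    return 69.0 + 12.0 * math.log2(hz / 440.0)

def mtof(midi):  # MIDI note number -> Hz
    return 440.0 * 2.0 ** ((midi - 69.0) / 12.0)

print(ftom(440.0))  # 69.0
print(mtof(60.0))   # ~261.63 (middle C)
```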

Then we get to argue about implementation :grin:


Since @rodrigo.constanzo will ~~moan about~~ share his thoughts about our choices anyway, it would be fun to know what your intuitive interface would be, with the caveats that 1) I love you, 2) as usual I’ll consider your proposal in depth, but 3) there are already 3 perspectives on this :slight_smile:

Seriously, why did you think bufscale was the right place, and what type of interface do you like? Personally, I hate scale because I never get the exponent value to behave, so in effect I have to use atodb etc., aka one object for each case, which I hate, but then maybe you like that. Feel free to riff now, again with the caveat that if it does not happen as such, it is not because you are being ignored (there has been a running version on my hard disk since mid-December, so I know what I like already, but I am also more and more aware of my idiosyncratic CCE-ing).


If it explained the current behavior loudly/clearly in the help file, it’d be ok as it is; I only became aware of the (clipless) behavior when another object further downstream threw up errors. In a typical scale vs zmap context you can easily spy on the output to see what you’re getting.

My ideal interface would not have been buffer-based in the first place, so you could then use vanilla scale, dbtoa, ftom, etc… but that’s a separate (and well-trodden) discussion!

That being said, having a @clip attribute so it can clamp the output (or not) would do it. In terms of my intended use/functionality, I would not really expect to scale anything, but rather to weight spectral descriptors by loudness directly, without an interim step, by feeding a loudness buffer in as a @weights buffer, with something like fluid.bufscale~ reserved for specific cases where you’re massaging the values to do something clever like pitch/confidence stuff. Even more ideal would be a @loudnessweighted flag in the descriptor processes ala AudioGuide, though that wouldn’t work with loudness being a separate processing path.
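(To spell out what the @weights route amounts to, here’s a minimal sketch of a weighted mean with negative weights clipped to 0, which is what the fluid.bufstats~ warning above describes; the numbers are made up:)

```python
def weighted_mean(values, weights):
    weights = [max(0.0, w) for w in weights]  # clip negative weights to 0
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total if total else 0.0

centroids = [800.0, 1200.0, 950.0]  # per-frame descriptor values (Hz)
loudness  = [0.2, 0.9, -0.1]        # raw dB used naively as weights
print(weighted_mean(centroids, loudness))  # the -0.1 frame contributes nothing
```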

I suspect you’ve made fluid.bufdbtoa~, fluid.bufatodb~, fluid.bufftom~, and fluid.bufmtof~, as that tends to be the house style. In this case I wouldn’t mind that so much, as it would happen instead of fluid.bufscale~ for (most of) my use case(s), though I would lean towards that being a flag selection in a central fluid.bufscale~ object to tighten things up.

The interest I see here would be the possibility of “quantizing” a dataset or buffer onto a pre-defined (irregular? periodic?) grid.
So something like a modulo, int() division, wrapping the space, etc.
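(A minimal sketch of those three grid operations, with the grid step as an assumed parameter:)

```python
import math

def quantize_mod(x, step):    # modulo: fold x onto [0, step)
    return x % step

def quantize_floor(x, step):  # int() division: snap x down to the grid
    return math.floor(x / step) * step

def wrap(x, lo, hi):          # wrap x into [lo, hi)
    return lo + (x - lo) % (hi - lo)

print(quantize_mod(7.3, 2.0))    # ~1.3
print(quantize_floor(7.3, 2.0))  # 6.0
print(wrap(7.3, 0.0, 5.0))       # ~2.3
```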

nah… we don’t do many objects here. although beware you may tempt us :slight_smile:

indeed. the list approach you want is very centred around your personal usage, but not very generalisable. One day, you will embrace many dimensions as a whole, and then you will see the light. The buffer interface keeps on giving - it just needs tweaking.

I’d love to see the flattened buffer where a single point of your data is >32k!

I do see the limits of lists for general use, though they’d be great for a ton of cases.

I remember you telling @jamesbradbury how annoying it can be to troubleshoot framelib stuff cuz you couldn’t poke and see stuff at each stage…

At this point there is a native (fluid) data container in the dataset, though it shares some of the same impermeability problems as the buffer which preceded it (getting/removing single points, symbolic labels, symbols, etc…).

Again, getting off topic though!

This starts getting quite general-purpose and Ears-y, because there isn’t a pleasant way to transform buffers in this way natively. So short of reinventing Ears, I’m curious to see what the team have come up with.

Oh, I don’t know how granular the updates/changes to this stuff are, but it would be fantastic to be able to apply different scalings to different indices of buffers as well. For example, fluid.bufspectralshape~ spits out a smorgasbord of different unit and scaling types (Hz/dB), and in my patches up to this point I’ve manually peek~’d and scaled the units I was interested in. So having some native ftom and atodb stuff would be great, but if it only applies to the entire contents of the buffer, it’s going to require loads of buffer shuffling to get things into the right buffers for conversion along the way.
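(Here’s the kind of per-index conversion I mean, sketched in Python; the channel layout and units are made up for illustration:)

```python
import math

ftom  = lambda hz: 69.0 + 12.0 * math.log2(hz / 440.0)  # Hz -> MIDI
dbtoa = lambda db: 10.0 ** (db / 20.0)                   # dB -> amplitude

# One frame of descriptor values; names/units are hypothetical.
frame      = {"centroid": 1200.0, "spread": 800.0, "flatness": -20.0}
converters = {"centroid": ftom,   "spread": ftom,  "flatness": dbtoa}

print({k: converters[k](v) for k, v in frame.items()})
```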

An alternative would be to be able to request the units/scaling you want at the source (e.g. fluid.bufspectralshape @output centroid flatness @scaling midi amplitude).

this is not the scale of the update, nor of the object. You will be able to scale datasets; you can scale differently by making different subdatasets, as in the demos. That’s the plan for now, to see if this actually makes any real-life changes.

So if I have a “Timbre” dataset (one that has Hz and dB in it, due to the nature of the descriptor types), I would have to make a new dataset for each unit type (so potentially a dataset with a single column) just to scale them?

At that point it seems easier just to peek~ individual values and scale them in Max-land.

that is what I did indeed - since no interface nor pattern can be generalised yet, feel free to try that too.

It’s totally possible, it just seems a faffy workflow to go dataset~ → buffer~ → peek~ → [maths] → peek~ → buffer~ → dataset~ every time you want to change unit types. And of course it’d be a separate chain of objects/processes (with unique buffer names along the way) for each unit or column you want to modify.
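(For what it’s worth, that whole roundtrip boils down to a per-column map; a Python sketch with made-up columns:)

```python
import math

ftom  = lambda hz: 69.0 + 12.0 * math.log2(hz / 440.0)
dbtoa = lambda db: 10.0 ** (db / 20.0)

# Each row is one point: [centroid (Hz), flatness (dB)] -- hypothetical layout.
data       = [[1200.0, -20.0], [950.0, -35.0]]
per_column = {0: ftom, 1: dbtoa}

print([[per_column[i](v) for i, v in enumerate(row)] for row in data])
```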