Load not working properly with fluid.normalize~?

As usual, before creating a git issue I want to check that it’s not user error.

Basically I’m trying to implement what I suggested here with regard to “hacking” fluid.normalize~ by creating a fake dict to load into it.
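For reference, this is roughly the kind of fake dict I mean, sketched here in Python. The key names (`cols`, `data_min`, `data_max`, `min`, `max`) are my guess based on what a dump of a fitted fluid.normalize~ looks like — check them against an actual dump output before relying on this.

```python
import json

# Hypothetical structure of a "fake" fluid.normalize~ state dict.
# data_min/data_max describe the input range the fit supposedly saw;
# min/max are the output-range attributes.
fake_state = {
    "cols": 2,
    "data_min": [-1.0, -1.0],  # pretend the fit saw inputs in [-1, 1]
    "data_max": [1.0, 1.0],
    "min": 0.0,                # desired output range
    "max": 1.0,
}

# Print the JSON that a Max [dict] could import and send to load.
print(json.dumps(fake_state, indent=2))
```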

I notice that when I do this, the min and max values from the dict don’t appear to get loaded into the actual normalization:


I would expect the dict values for min and max to be used when normalizing, but they appear to be ignored completely.

In this specific case it doesn’t matter too much, as I can just set the attribute versions in fluid.normalize~ to @min -1 and @max 1, but it doesn’t bode well for normalizing to an output range other than the default 0. and 1..

I may also have cases where I’m wanting to adjust the output range dynamically based on what @activation I am using in fluid.mlpregressor~.


So is this a bug, or am I overlooking something in terms of “hacking” the loaded dict?

Hmm, weirder still: the dump output of fluid.normalize~ reports the correct values based on the input, even though the attruis (and the resultant processing) do not:

Are the min / max columns of loaded/dumped dicts just ignored and overwritten by the state of the @attribute versions of min / max?

Aaaaand even more weirdness.

It seems like load isn’t internally declaring the state to have been fit, or something like that. In fluid.standardize~, load-ing a dict means that any subsequent processing happens just fine (on a dataset at least; I’ve not tried it on a buffer~ via transformpoint).

I did narrow down some steps where it behaves as it seems like it should:

Steps for weirdness:

  1. follow the instructions as they are in the patch

  2. manually change the min value via the attrui

  3. dump output doesn’t reflect any change

  4. transformpoint again - the correct values are output

  5. dump output reflects correct change

This leads me to believe that the dict is in fact being load-ed, but crucially not being flagged as having been “fit”. For some reason load-ing values only partially triggers this, whereas manually editing via attrui and then fittransform-ing makes it reflect correctly.
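If that hypothesis is right, the failure mode would look something like this toy sketch — not the actual flucoma-core code, just an illustration of a load that restores the numbers without setting an internal fitted flag:

```python
class Normalize:
    """Toy model of a normalizer whose transform only uses its
    parameters when an internal 'fitted' flag is set."""

    def __init__(self, out_min=0.0, out_max=1.0):
        self.out_min, self.out_max = out_min, out_max
        self.data_min = self.data_max = None
        self.fitted = False

    def fit(self, xs):
        self.data_min, self.data_max = min(xs), max(xs)
        self.fitted = True

    def load(self, state):
        # Bug: restores the values but forgets to set self.fitted,
        # so transform() behaves as if nothing was ever loaded.
        self.data_min = state["data_min"]
        self.data_max = state["data_max"]
        # self.fitted = True   # <- the missing line

    def transform(self, x):
        if not self.fitted:
            return x  # "unfit": input passes through unchanged
        span = self.data_max - self.data_min
        return self.out_min + (x - self.data_min) / span * (self.out_max - self.out_min)


n = Normalize()
n.load({"data_min": -1.0, "data_max": 1.0})
print(n.transform(0.0))  # 0.0 -- loaded values ignored, looks "not fit"
```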

this does look like a bug. let me recompile and check it is actually loading the values


can you feel the fan of my laptop (i9 I’m not cool with the Mx kids) freaking under the compiling? feeeeel the buuuuuurnnnnn

Ha! You found a real bug!!! It’s been there since the beginning of time!
Sorted in the next nightlies.

wait for them to recompile, give GitHub 30 minutes

actually, as usual, it is a lot more complicated than it seems… at the moment we have a strange behaviour where the scale of the output (min and max) can be changed without refitting. To correct the current bug, I need to stop that behaviour. It is of no real cost, but it means that the output range will be set at fitting time…
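As I understand the proposed change, the output range would be captured at fit time rather than read live from the attributes. A toy sketch of that behaviour (again, an illustration, not the actual flucoma-core implementation):

```python
class NormalizeFixed:
    """Toy sketch: the output range (min/max) is baked in at fit time,
    so changing the attributes afterwards has no effect until refit."""

    def __init__(self):
        self.state = None  # (data_min, data_max, out_min, out_max)

    def fit(self, xs, out_min=0.0, out_max=1.0):
        # Capture both the input statistics AND the output range now.
        self.state = (min(xs), max(xs), out_min, out_max)

    def transform(self, x):
        d_min, d_max, o_min, o_max = self.state
        return o_min + (x - d_min) / (d_max - d_min) * (o_max - o_min)


n = NormalizeFixed()
n.fit([-1.0, 1.0], out_min=0.0, out_max=1.0)
print(n.transform(0.0))  # 0.5 -- midpoint of input range maps to midpoint of output
```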

Let’s try that for now and see how it breaks the world in the nightly?

I’ve asked the code gods if my fix is valid even if it breaks something. In the meantime, can you road-test this? Just replace it in your package.

fluid.libmanipulation.mxo.zip (2.8 MB)

for the cpp-curious: fixing fluid.normalize json reloading issue by tremblap · Pull Request #262 · flucoma/flucoma-core · GitHub

This works! (complains about quarantine obviously, but the patch above shows the correct numbers now)

Shame, but it’s not like fitting is heavy/onerous with this object, and it means any saved normalization can be loaded and will work properly.
