Bit of a stats noob question here. I'm organizing and scaling data, and for things like loudness it makes sense to dbtoa things like mean/min/mid/max so I get a smooth linear output from 0. to 1. But I'm not sure what to do with the standard deviation, which is also in decibels (since it's computed off the mean). If I run it through dbtoa I get a much higher value than makes sense, more often than not > 1.0.
The analysis frame I’m working on at the moment has the raw output values of:
loudness_mean: -8.939504
loudness_std: 2.4
loudness_min: -13.748664
loudness_max: -5.938645
If I pipe all of that through dbtoa I get:
loudness_mean: 0.357293
loudness_std: 1.31821
loudness_min: 0.205384
loudness_max: 0.50474
The mean/min/max look good, but the std is all jacked. Conversely, if I convert the other values and leave the std alone, I have the same problem, though I guess at that point they are at least all in the same "units".
So the question is: do I leave the values raw and let something like fluid.robustscale~ sort it out, or do I scale first (which I imagine would be better, since everything gets linearized along the way)? And if so, how much amplitude does 2.4 dB actually convert to?
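To sanity-check the numbers above, here's a quick Python sketch (assuming the usual Max-style conversion, amp = 10^(dB/20)). It shows why dbtoa of the std comes out > 1: dbtoa of a dB *difference* is a ratio (a multiplicative spread around the mean amplitude), not an amplitude in its own right:

```python
def dbtoa(db):
    """Max-style dB -> linear amplitude: 10^(dB/20)."""
    return 10 ** (db / 20)

mean_db, std_db = -8.939504, 2.4  # values from the analysis frame above

mean_amp = dbtoa(mean_db)    # ~0.357, matches the converted table
std_as_amp = dbtoa(std_db)   # ~1.318: NOT an amplitude

# "mean + 1 std" in dB becomes "mean * ratio" in amplitude,
# so the 1.318 is the per-std multiplier, not a 0..1 value:
upper = dbtoa(mean_db + std_db)
print(upper / mean_amp)  # same ~1.318 ratio
```

So the 1.31821 in the converted table isn't wrong per se, it's just a ratio living on a different scale from the other three values.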
I imagine I will run into similar problems with Hz and
ftom when dealing with standard deviations of frequencies.
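The Hz case looks like the same shape of problem. A std in Hz is an additive spread in the linear domain, so running it through ftom on its own doesn't mean anything; the spread in MIDI units would be a difference of converted values, and it's asymmetric because ftom is logarithmic. A sketch with made-up numbers (assuming the standard ftom, 69 + 12*log2(f/440)):

```python
import math

def ftom(f):
    """Frequency in Hz -> (fractional) MIDI note, A440 = 69."""
    return 69 + 12 * math.log2(f / 440)

mean_hz, std_hz = 440.0, 50.0  # hypothetical analysis values

# ftom(std_hz) by itself is meaningless; the spread in MIDI units
# is a difference of converted values, and it differs up vs down:
up = ftom(mean_hz + std_hz) - ftom(mean_hz)
down = ftom(mean_hz) - ftom(mean_hz - std_hz)
print(up, down)  # down > up, because the scale is logarithmic
```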
This feels like a super solved thing and I’m just too dumdum to know what it is.