The slow and steady process of converting my analysis/matching to ML-land is carrying on, and as part of this I’m going to start simply by replacing my real-time analysis and matching with the ML equivalents, as a jumping-off point.
The first step of this is simply replacing the final `peek~` -> list part of my analysis chain with `fluid.bufcompose~`, in order to transfer everything into a `fluid.dataset~`. But in building this I’ve run into a… kind of small problem.
For my centroid stuff, I’ve been converting everything to MIDI via `ftom 0.` for better matching/weighting (à la @tremblap’s LPT patch). This presently isn’t a big deal because I end up with numbers in Max-land that I can then process however I want.
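For what it’s worth, the maths behind `ftom 0.` (frequency to fractional MIDI) is simple enough to do anywhere; a minimal sketch in plain JavaScript (the function names are mine, not any Max or FluCoMa API):

```javascript
// Standard Hz <-> MIDI conversion, with A440 = MIDI note 69.
// ftom 0. in Max outputs fractional MIDI values, same as this.
function ftom(f) {
  return 69 + 12 * Math.log2(f / 440);
}

// Inverse, for going back to Hz after any MIDI-domain processing.
function mtof(m) {
  return 440 * Math.pow(2, (m - 69) / 12);
}
```

Matching/weighting on MIDI rather than raw Hz means distances are perceptually even: 110→220Hz and 440→880Hz are both one octave, i.e. both 12 “units” apart.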
If I’m going to `fluid.bufcompose~` stuff into a `fluid.dataset~` directly, there doesn’t appear to be a way to transform these values… Same goes for `dbtoa` (which would be useful for things like spectral weighting).
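Likewise `dbtoa` is just a power-of-ten scaling; sketched the same way (again, my own helper names, not a real API):

```javascript
// dB <-> linear amplitude, with 0 dB = amplitude 1.
function dbtoa(db) {
  return Math.pow(10, db / 20);
}

// Inverse, linear amplitude back to dB.
function atodb(a) {
  return 20 * Math.log10(a);
}
```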
In @tremblap’s LPT patch that happens via `js data-manipulator.js`, which is a definite no-go for fast real-time processing.
And the “normal” data sanitization (as far as I understand it) doesn’t deal with these kinds of transformations either (e.g. normalization or standardization).
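To spell out why normalization isn’t a substitute here: min-max scaling is a purely linear rescale, so it preserves the linear spacing of raw Hz, whereas `ftom` spaces values logarithmically (i.e. perceptually). A quick sketch of the difference (plain JavaScript, hypothetical helper names):

```javascript
// Min-max normalization: a purely linear rescale to [0, 1].
function normalize(xs) {
  const lo = Math.min(...xs);
  const hi = Math.max(...xs);
  return xs.map(x => (x - lo) / (hi - lo));
}

// Hz -> fractional MIDI (what ftom 0. computes).
function ftom(f) {
  return 69 + 12 * Math.log2(f / 440);
}

const octaves = [110, 220, 440, 880];          // equal musical steps (octaves)
const normHz = normalize(octaves);             // [0, ~0.14, ~0.43, 1] - uneven
const normMidi = normalize(octaves.map(ftom)); // [0, 1/3, 2/3, 1] - even
```

So normalizing raw Hz still leaves the perceptual warping in place; the log transform has to happen somewhere before (or instead of) the linear rescale.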
What is the ‘intended approach’ for transforming data post-analysis and pre-`dataset~`-ing? Or is the idea to shove frequencies in there, and let data sanitization and/or dimensionality reduction “deal with it”?
(I can obviously `peek~` out what I want, do maths in Max-land, then `peek~` everything back into a `buffer~`, `fluid.bufcompose~`-ing what I want together to put into yet another `buffer~`, to finally end up in a `fluid.dataset~`, but that starts to get really ugly really quick… really slowly.)