Ok, that’s as I’ve come to understand it, and it makes sense.
I remember mention of this. So I guess that’s why the time scale is so microscopic: it’s intended to remove these discontinuities in a manner that leaves the original audio largely unaffected. Like, a use case for me would be to apply some DSP to just the transients, and other than some time-based effects (or general ‘transient shaper’ dynamics stuff), these extracted transients don’t seem to respond to much (e.g. distortions, filters, etc.). Unless I’m paying really close attention I can’t even tell they have been removed…
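Just to be concrete about the workflow I mean, it’s basically a parallel chain, sketched here offline in Python/numpy rather than as actual patching (the synthetic signals and the soft-clip “distortion” are just placeholders for the extracted layers and whatever processing gets applied):

```python
import numpy as np

# Stand-ins for the two layers the transient extractor produces
# (in practice these would be the transient and residual buffers/files)
sr = 44100
transients = np.random.randn(sr) * np.exp(-np.linspace(0, 40, sr))  # decaying click-ish noise
residual = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)       # sustained tone

# Process only the transient layer, e.g. a crude soft-clip distortion
drive = 8.0
processed = np.tanh(drive * transients) / np.tanh(drive)

# Because the split is (near) complementary, residual + transients ≈ original,
# so summing the processed transients back leaves everything else untouched
output = residual + processed
```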
This is a bit OT, but there’s been mention in a couple of threads now about where the generalizability of some of the algorithms breaks down, and/or how often “real world” applications do some kind of contextually appropriate pre-seeding or algorithm tweaking. Is that level of specialization and fine-tuning on the cards for future FluCoMa stuff?
Like, architecturally speaking, some of the stuff discussed in the thread around realtime NMF matching (semi-reworking the concept from the machine learning drum-trigger thing) just isn’t possible in the fluid.verse~, because there is no step that allows pre-training/pre-seeding on an expected material to then optimize towards.
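To spell out what I mean by pre-seeding: something like holding the NMF bases fixed (learned offline from the expected material) and only updating the activations at runtime. A rough numpy sketch of the standard KL-divergence multiplicative update with the bases held fixed, not anything from the toolkit itself:

```python
import numpy as np

def nmf_activations(V, W, n_iter=100, eps=1e-9):
    """Estimate activations H for a magnitude spectrogram V against
    pre-seeded (fixed) bases W, via KL multiplicative updates.
    Only H moves; W stays whatever it was trained/seeded as."""
    n_components = W.shape[1]
    H = np.random.rand(n_components, V.shape[1])
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    return H

# Hypothetical usage: W_seed learned offline from isolated drum hits,
# V is the incoming spectrogram frames; a peak in row k of H = "hit k detected".
# W_seed = ...  # (n_bins x n_components)
# V = ...       # (n_bins x n_frames)
# H = nmf_activations(V, W_seed)
```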
It’s great that all the algorithms work on a variety of materials, but from a user’s point of view, if they don’t do a specific type of material well, the overall range doesn’t really help or matter.