Automatic Separation in soundscapes

Awesome, waiting with bated breath.

Same goes for the KE stuff. It might be worthwhile making a KE thread and posting the structure/roadmap there for people to offer thoughts and flag gaps in knowledge.

I don’t know nearly enough about the underlying algorithms and such, but what I mean is more along the lines of what you said in the second chunk, where certain tweaks to the algorithm end up being “better” for certain kinds of material.

In my specific case, although the sounds I may be working with will vary a great deal, if I’m on drums/kit/percussion, I can probably make certain assumptions about transients and amplitude envelopes in the material. That obviously won’t always be the case, but I can go into it knowing that, and if it makes the difference between an algorithm being usable or not (e.g. fast NMF, as in the faux-Sensory Percussion tests/patches, which ended up being too slow to be useful), then for me, it’s worth exploring.
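To make the “tweaks” concrete, here’s a minimal NumPy sketch of multiplicative-update NMF on a magnitude spectrogram. To be clear, this is not FluCoMa’s implementation; the `rank`, `iterations`, and `seed` parameters are illustrative knobs I’m assuming, but they’re exactly the kind of thing where knowing the material is percussive might let you lower the settings and win back speed:

```python
# A minimal sketch of multiplicative-update NMF (Lee & Seung,
# Euclidean cost) on a magnitude spectrogram. Illustrative only;
# NOT FluCoMa's implementation, and the parameter names are assumptions.
import numpy as np

def nmf(V, rank=4, iterations=100, eps=1e-9, seed=0):
    """Factor V (freq x frames) into W @ H with nonnegative entries."""
    rng = np.random.default_rng(seed)
    n_freq, n_frames = V.shape
    W = rng.random((n_freq, rank)) + eps    # spectral bases
    H = rng.random((rank, n_frames)) + eps  # per-frame activations
    for _ in range(iterations):
        # Standard multiplicative updates; cost never increases.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Stand-in spectrogram (e.g. 513 bins x 200 frames of random data).
V = np.random.default_rng(1).random((513, 200))
W, H = nmf(V, rank=4, iterations=50)
print(W.shape, H.shape)  # (513, 4) (4, 200)
```

The speed/quality trade-off lives almost entirely in `rank` and `iterations`: for kit sounds with sharp transients and fast decays, it seems plausible that a low rank and fewer iterations would converge “well enough”, whereas a general-purpose setting has to assume the worst.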

All of that isn’t meant as an argument towards or away from a Black Box (:black_large_square:) paradigm, though surprisingly I find myself arguing for a more open approach, where options like “pre-training a neural network model”, which aren’t available in FluCoMa now (or ever?), would be on the table. As I said, I don’t really know or understand enough to say what these kinds of things actually involve, but it’s something that seems to come up when encountering “real-world” examples of ML stuff. The implementations are rarely general.