Dimensionality reduction, disparate spaces, and speed

Hmm, that’s a good way to put it. As a mapping thing.

What if what I want is to tell it “things that sound like each other should be mapped to each other”? Like, is there a sort of “unsupervised supervised” (self-supervised, I suppose) approach, where you just use (perceptually meaningful) descriptors to train the regression algorithm?
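Something like the sketch below, maybe, where the descriptors themselves act as the supervision, so nothing needs labelling by hand. Everything here is a hypothetical illustration rather than a worked pipeline: the file names, the particular descriptors (loudness/centroid/flatness via librosa), and the nearest-neighbour matching are all placeholder assumptions.

```python
# A hypothetical "descriptor-supervised" matcher: the descriptors supply
# the training signal, so no hand-labelled input/output pairs are needed.
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

def describe(y, sr):
    """Frame-wise descriptors: loudness (rms), centroid, flatness."""
    rms = librosa.feature.rms(y=y)[0]
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    flatness = librosa.feature.spectral_flatness(y=y)[0]
    n = min(len(rms), len(centroid), len(flatness))
    return np.stack([rms[:n], centroid[:n], flatness[:n]], axis=1)

# "corpus" = the sounds to be triggered; "input" = what gets played.
corpus_y, sr = librosa.load("corpus.wav", sr=None)  # placeholder file
input_y, _ = librosa.load("input.wav", sr=sr)       # placeholder file
corpus_d, input_d = describe(corpus_y, sr), describe(input_y, sr)

# Standardise against the corpus so "sounds like" is comparable across spaces.
scaler = StandardScaler().fit(corpus_d)
nn = NearestNeighbors(n_neighbors=1).fit(scaler.transform(corpus_d))

# For each input frame, fetch the corpus frame that sounds most like it.
_, matches = nn.kneighbors(scaler.transform(input_d))
```

The point being that the “training” step is just fitting a scaler and an index; the perceptual assumption lives entirely in which descriptors you pick.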

I suppose it’s easy enough to give it examples of what kinds of sounds might be used as inputs, by playing a range of sounds/techniques (though this would, conceptually, feel a bit limiting, like defining the field before playing ball; not a useful metaphor, I realize, since that’s exactly how you play ball). It would be harder to do with arbitrary samples, though.

Again, I suppose part of it would be to create some kind of dimensionally reduced space/map, then browse the clusters and decide “I want to trigger those kinds of sounds with these kinds of sounds” (roughly the workflow sketched below). But those are exactly the kinds of decisions and processes that I’d like to avoid as much as possible. I haven’t fully unpacked why, but I find this kind of (pre)compositional decision-making and thinking quite uninspiring/uninteresting. Not opposed to it, just the opposite of excited about the prospect.
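For concreteness, here is roughly what that browse-and-curate workflow looks like, continuing from the descriptor matrices above. The PCA/k-means choices and the cluster count are arbitrary stand-ins; UMAP or anything else would do the same job.

```python
# The browse-the-clusters workflow: reduce, cluster, then curate by hand.
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

map2d = PCA(n_components=2).fit_transform(corpus_d)   # 2D map for browsing
labels = KMeans(n_clusters=8, n_init=10).fit_predict(corpus_d)

# ...and then the (pre)compositional step happens manually, e.g.
# trigger_map = {input_cluster: corpus_cluster, ...}  # hypothetical, by ear
```

Which is exactly the hand-curation step I’d rather skip.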

Like, I’d sooner accept something algorithmically more arbitrary but conceptually simpler: “mapping the expressive range of each onto the other”.
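Which could be as dumb (in a good way) as per-descriptor min/max rescaling, with no curation step at all. A minimal sketch, again assuming the descriptor matrices and the index from above:

```python
import numpy as np

def map_range(x, src, dst):
    """Linearly rescale each descriptor column from src's observed range
    to dst's observed range: the whole "expressive range" mapping."""
    src_min, src_max = src.min(axis=0), src.max(axis=0)
    dst_min, dst_max = dst.min(axis=0), dst.max(axis=0)
    t = (x - src_min) / np.maximum(src_max - src_min, 1e-12)
    return dst_min + t * (dst_max - dst_min)

# Project input frames into the corpus's range, then match as before.
projected = map_range(input_d, input_d, corpus_d)
_, matches = nn.kneighbors(scaler.transform(projected))
```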

That’s interesting. It strikes (struck?) me that something like that may have to be the way to go when working with tiny and/or staggered windows, where the “legit” ML route might not be quick enough. (I’m bumping the hybrid stitching thread with my findings so far on this approach.)
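The back-of-envelope numbers behind that worry, with made-up but plausible window/hop settings:

```python
# Per-hop time budget for tiny, staggered (overlapping) windows.
sr = 44100
window, hop = 256, 64        # hypothetical settings, not measured ones
budget_ms = hop / sr * 1000  # ~1.45 ms between hops
print(f"window {window}, hop {hop}: {budget_ms:.2f} ms per hop")
# Descriptor extraction + lookup has to fit inside that budget; a tree-based
# nearest-neighbour query usually does, a heavier regression network may not.
```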