Now, on this: there are so many variables one can play with, it is crazy. So this is research in progress, trying to find an MFCC space that is (personally, perceptually) as “reactive” as a valid pitch-and-loudness one. But as I do the tests, I discover a lot of assumptions in my thinking about what an accurate match is. It is incredibly anchored in a fleeting musicking need more than in anything objectively nearer. So I keep ploughing on with my research, now that I (we) have the tools to do it accurately, within Max and SC.
Things I am going to try next in this example 11:
- scale on 0-100 instead of 0-1 as a baseline, or maybe -50 to +50. It feels easier to grasp.
- make a graphic example of the assumption that scaling a distance dimension lowers its impact, in 2D, to see if that works as well as it does in my head
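For the curious, the assumption in that second point can be sketched numerically before making it graphic. This is a minimal, hypothetical illustration (plain weighted Euclidean distance in a 2D descriptor space; the names and numbers are mine, not anything from the actual Max/SC tools): shrinking a dimension's weight shrinks its contribution to the distance, so that descriptor matters less when picking the nearest match.

```python
import math

def weighted_distance(a, b, weights):
    """Euclidean distance with per-dimension weights.

    A weight below 1.0 on a dimension scales down its contribution
    (by the weight squared, on the squared differences), i.e. lowers
    that descriptor's impact on which entry counts as nearest.
    """
    return math.sqrt(sum(w * w * (x - y) ** 2
                         for x, y, w in zip(a, b, weights)))

# two hypothetical 2D descriptor points (say pitch, loudness), both in 0-1
target = (0.2, 0.9)
cand = (0.6, 0.1)

plain = weighted_distance(target, cand, (1.0, 1.0))   # both dims count fully
damped = weighted_distance(target, cand, (1.0, 0.5))  # halve dim 2's impact
```

Here `damped` comes out smaller than `plain` only because the second dimension's (large) difference now weighs less; whether that behaves musically as well as it does geometrically is exactly what the 2D graphic test is for.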
I am writing a fixed media piece with this, and will try it in action, and in comparison with my MFCC-musaiking of my crazy synth and my analog synth (both examples I provided people with) and the APT…
On the next horizon, in months:
- a sort of branching version of it all
- a sort of MLP-based mapping between a bass analysis and a corpus space, including timbral space.
- removing all pitched component and using the noise only in the corpus and passing on the pitch of the target (a sort of cheap corpus-based vocoder)
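To make the MLP mapping idea above concrete, here is a toy sketch with scikit-learn. Everything in it is assumed for illustration - synthetic random data, made-up shapes (13 MFCCs per bass frame, a 2D corpus space), and a generic `MLPRegressor` rather than whatever the Max/SC objects will actually expose:

```python
# Toy sketch: regress from input analysis frames to coordinates in a
# corpus descriptor space. All data here is synthetic and the shapes
# are hypothetical; this only shows the mapping idea, not the tools.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
bass_frames = rng.random((200, 13))    # e.g. 13 MFCCs per analysis frame
corpus_coords = rng.random((200, 2))   # matching positions in a 2D corpus space

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(bass_frames, corpus_coords)

# map a new frame into the corpus space, then query nearest entries there
pred = model.predict(bass_frames[:1])
```

The interesting design question is what the training pairs are - hand-picked correspondences, or some automatic alignment between the two spaces - and that is precisely what is still months away.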
We’ll see where it all goes. I also look forward to seeing what people will do with the current tools - there are a lot of possibilities!