So, I’m writing for violin, and I don’t want to write dots any other way than by ear, so this is the perfect excuse for yet another stab at the impossible musaiking problem
- My gesture/sound targets are
- my sound source corpus is the orchidea-ircam dataset corrected by @danieleghisi and friends
Same old same old, I know. But this is fun. I will obviously use bach later, once I have a good-sounding EDL, but for now I need to work my way around the usual hurdles to get there.
But: the switching needs to happen between timbral classes of onsets. The problem is the eternal non-overlap between the timbral space of violin onsets and that of (bass|synth) onsets, so I was puzzled… It is nothing new: very similar to stuff I spoke about in sandbox#3, and to what @rodrigo.constanzo keeps saying about trying to match space A to space B… especially when using UMAP to get a significantly lower-dimensional space in the hope of removing noise from the representation, as @groma and @weefuzzy keep telling me…
so the question: how does one map a latent timbral space to another latent space? Making a single space with both datasets (target and source) does not solve the non-overlap problem… and mapping each separately is arbitrary… and then the obvious answer came to me while listening to Rebecca’s fantastic keynote: I can use a simple regressor to do ad hoc mappings!
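To make the regressor idea concrete, here is a minimal sketch in Python, using scikit-learn’s `MLPRegressor` as a stand-in for whatever regressor one actually reaches for (in FluCoMa-land that would be the MLP regressor object). Everything here is hypothetical: the two latent spaces are toy 2-D point clouds, and I assume a handful of hand-picked corresponding anchor pairs exist to train on, since a supervised regressor needs some correspondences to learn the mapping from.

```python
# Hypothetical sketch: learn a mapping from latent space A (e.g. violin
# onsets) to latent space B (e.g. bass/synth onsets) with a small neural
# regressor trained on a few hand-matched anchor pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for 2-D embeddings of the two corpora: B is A rotated and
# shifted, i.e. the spaces do not overlap.
space_a = rng.normal(size=(200, 2))
space_b = space_a @ np.array([[0.6, -0.8], [0.8, 0.6]]) + 3.0

# Pretend we hand-matched 20 anchor pairs (point i in A <-> point i in B).
anchors = rng.choice(200, size=20, replace=False)

reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
reg.fit(space_a[anchors], space_b[anchors])

# Any point of A can now be projected into B's space and used for
# nearest-neighbour lookup there.
mapped = reg.predict(space_a)
print(np.abs(mapped - space_b).mean())
```

The point is only that the mapping is learned ad hoc from a few correspondences, rather than hoping the two spaces line up by construction.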
So my plan is:
- extract a latent timbral descriptor space of my source corpus (a few hypotheses to test there)
- extract a latent timbral descriptor space of my target (here too)
- do a dirty match between the values of the two smaller-dimensional spaces and see where it leads me
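The three steps above can be sketched like this, with PCA standing in for UMAP (same idea: squash each descriptor space down to a few dimensions) and random arrays standing in for real per-onset analyses; the “dirty match” is just a normalise-then-nearest-neighbour lookup. All names and numbers here are placeholders, not my actual settings.

```python
# Minimal sketch of the plan: reduce each corpus separately, then do a
# dirty match between the two low-dimensional spaces.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
source_desc = rng.normal(size=(500, 13))  # e.g. per-onset MFCCs, source corpus
target_desc = rng.normal(size=(60, 13))   # e.g. per-onset MFCCs, violin target

# Steps 1 & 2: one low-dimensional space per corpus (PCA as UMAP stand-in).
source_lat = PCA(n_components=2).fit_transform(source_desc)
target_lat = PCA(n_components=2).fit_transform(target_desc)

# Step 3: dirty match -- normalise both spaces to [0, 1] per dimension,
# then take the nearest source point for each target point.
def norm01(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

nn = NearestNeighbors(n_neighbors=1).fit(norm01(source_lat))
_, idx = nn.kneighbors(norm01(target_lat))
print(idx.ravel()[:10])  # indices of the source onsets to splice in
```

The crude per-dimension normalisation is exactly the arbitrary part I complained about above; the regressor idea is meant to replace that step.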
I will provide examples in various threads, I think, because each task is fun on its own. But I’m sharing here because the last idea might be a solution to other people’s problems. It was in my face all along, but hey, that’s life!