Cheers, newbie Q here:
I’m preparing a realtime score in SuperCollider for 5 instrumentalists that I want to base on measures of similarity between the mic’ed signals of the performers.
(E.g. if the clarinet is playing very similarly to the flute, it gets the instruction “cl: play less similar to fl!”;
or, if A sounds very different from B, tell A to play more like B.)
After fussing around too long with SC’s rather cumbersome Machine Listening tools, I reckon this is a good occasion to get into FluCoMa to do the job.
Can you sages advise me on how to calculate differences/similarities between 5 real-time signals?
As a minimal set of analysis features, I thought of the spectral shape descriptors and the rate of novelty triggers.
Averaging over a short window of 0.5–3 seconds should suffice; I’d send the results to the language client a few times a second, then do the decision-making and display from there.
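For the averaging, something like an exponential moving average per descriptor is what I picture (sketched here in Python rather than sclang, just to show the idea; the class name and rates are made up):

```python
import math

class Smoother:
    """Exponential moving average with a rough time constant in seconds,
    assuming a fixed number of analysis frames per second."""

    def __init__(self, time_const_s=1.0, rate_hz=10.0):
        # Per-frame smoothing coefficient for the chosen time constant.
        self.alpha = 1.0 - math.exp(-1.0 / (time_const_s * rate_hz))
        self.value = None

    def feed(self, x):
        # First frame initializes; later frames move toward x by alpha.
        self.value = x if self.value is None else self.value + self.alpha * (x - self.value)
        return self.value
```

One smoother per descriptor per player, fed at the analysis frame rate, would give me the short-term averages to send downstream.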
It’s not clear to me, though, how to go about reducing the different delta values of the live signals to a single difference measure.
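To make the question concrete, here is the naive reduction I could come up with, again in Python as a stand-in (all numbers and names invented): normalize each descriptor dimension across the players, then take the Euclidean distance between two players’ feature vectors as the single difference measure.

```python
import math

def normalize(vectors):
    """Scale each feature dimension to 0..1 across all players,
    so centroid in Hz doesn't swamp flatness in 0..1."""
    dims = len(vectors[0])
    lo = [min(v[d] for v in vectors) for d in range(dims)]
    hi = [max(v[d] for v in vectors) for d in range(dims)]
    return [
        [(v[d] - lo[d]) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
         for d in range(dims)]
        for v in vectors
    ]

def distance(a, b):
    """Euclidean distance between two normalized feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Five players, each reduced to invented short-term averages of
# (spectral centroid in Hz, spectral flatness, novelty triggers per second):
players = [
    [1800.0, 0.30, 2.0],  # fl
    [1750.0, 0.28, 1.8],  # cl
    [ 400.0, 0.10, 0.5],  # vc
    [2600.0, 0.55, 4.0],  # vln
    [ 900.0, 0.20, 1.0],  # tbn
]
norm = normalize(players)
d_fl_cl = distance(norm[0], norm[1])  # small value -> "similar"
d_fl_vc = distance(norm[0], norm[2])  # large value -> "different"
```

Is weighting all descriptors equally like this sensible, or is there a smarter reduction (standardization, a learned projection, etc.) that FluCoMa offers for this?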
Could you give me some hints?
Very appreciated!
Thanks,
Hannes