Is it possible to change the sound source by features? - SuperCollider

Dear all, I hope you are safe and sound.

Reading the following paper from the Timbre conference: A New Test for Measuring Individual’s Timbre Perception Ability – Lee and Müllensiefen (2020), I came across an experiment interface built in Max which alters parameters such as the temporal envelope (by changing the log-attack time), spectral centroid (band-pass filter), and spectral flux (with inharmonicity)…

From this reading, I am thinking about using spectral features to alter a sound source (a microphone or an audio file, for instance)…

Something like this for the temporal envelope:

(
{
    EnvGen.kr(
        Env(
            levels: [0, 1, 0.5, 0.5, 0],
            times: [0.5, 0.1, 0.3, 0.1],
            curve: [log10(0.01), 0, 0, 0] // logarithmic attack segment
        ),
        gate: Impulse.kr(1) // retrigger the envelope once per second
    );
}.plot(duration: 1);
)

for the BPF:

{ BPF.ar(Saw.ar(200,0.5), MouseX.kr(100, 10000, 1), 0.3) }.play;

And for the inharmonicity?
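Maybe something like this, crossfading from harmonic to stretched partials with the mouse (just a sketch of my own, not from the paper):

```supercollider
// Stretched-partials sketch: MouseX moves from harmonic (0)
// to increasingly inharmonic (stretched) partial frequencies.
(
{
    var fund = 200, n = 8;
    var stretch = MouseX.kr(0, 0.1);
    var sig = Mix.fill(n, { |i|
        var k = i + 1;
        // partial frequency: exactly harmonic when stretch == 0
        var freq = fund * k * (1 + (stretch * k));
        SinOsc.ar(freq, 0, 1 / n / k.sqrt);
    });
    sig ! 2
}.play;
)
```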

Is there any example of implementing audio features to alter the values of an input sound source?
For example: using inharmonicity for a pure<->distorted sound, adding a noisiness feature, etc.?

I am aware that spectral features are not entirely independent descriptors, but I am thinking about some uses for them.

But can FluCoMa be used for such an approach? Are there any examples of that?
I’m sorry if I couldn’t explain my question more clearly, but I find it an intriguing question and would like to discuss it as applied to FluCoMa and SuperCollider.

Thanks for the opportunity
With The Best Regards.

The zip file is just the paper mentioned above.

A New Test for Measuring Individual’s Timbre Perception Ability.zip (1.7 MB)

I don’t think there is a canonical way of mapping these things; there just happen to be conveniently perceived musical aspects that connect well to the contours of the descriptors as they are measured in real time.

Check out the SpectralShape class help. There should be an example in there showing how you can connect various spectral descriptors to a filter. You could of course connect inharmonicity to something like a ring modulator and ensure that higher values = more sidebands, or something. Things start to become more interesting (and harder to tweak) once you get into having multiple mappings that scale differently – kind of like a macro control. Think bandwidth, sidebands + distortion all from one feature :slight_smile:
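For instance, here is a minimal sketch along those lines (assuming the FluCoMa extensions are installed; FluidSpectralShape.kr outputs several descriptors, the first being the spectral centroid), steering a band-pass filter from the centroid of the microphone input:

```supercollider
// Descriptor-to-parameter sketch: the spectral centroid of the
// live input drives the centre frequency of a BPF on a saw wave.
(
{
    var in = SoundIn.ar(0);
    // first output of FluidSpectralShape is the spectral centroid (Hz)
    var centroid = FluidSpectralShape.kr(in)[0];
    BPF.ar(Saw.ar(200, 0.5), centroid.clip(100, 10000).lag(0.05), 0.3);
}.play;
)
```

The clip and lag just keep the filter frequency in a safe range and smooth out frame-to-frame jitter in the descriptor.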


Hi @jamesbradbury Thank you for your message.
Indeed!

I don’t think there is a canonical way of mapping these things

I completely agree with your comment!
I’m only thinking of it as an alternative use of audio descriptors. Furthermore, I believe that more experiments and case studies are something I should pursue.
I also take your point on the challenge (!) of approximating… hmm… a timbre-perception factor through multiple audio descriptor mappings.

I will look at the SpectralShape class help. I’m sure I’ll find something along those lines.

Thank you very much!