Play buffers based on pitch of incoming signal

Hello everyone!

Thank you for this amazing project, I'm very excited to explore these tools! The videos are awe-inspiring too.

As a beginner (in SuperCollider), thank you also for trying to make it accessible. Your declared interest in the various entry points into this type of process makes it really welcoming!

I will describe here a fairly basic system which I would like to build in SC. I have experimented with a few help files but am unsure how to approach the overall design in the simplest way.

In brief I would like to trigger sound files over an incoming audio signal based on a correspondence in pitch between the two.

For this I would like to load and analyse a folder of sounds stored in separate buffers, which I guess would be triggered when 'a good deal of' (?) their pitch descriptors approximate the pitch of an incoming audio signal.

The reason I would like them to remain in separate buffers is to be able to decide whether the files are played whole (this is my priority) or in fragments (concatenated, secondary), with the possibility of defining an envelope in the latter case, and hopefully of limiting (or gating) the density at which they are triggered (always with the underlying condition that their pitches match).

I would also like to be able to modulate the confidence factor and call for other pitches based on the incoming signal (for example 'inPitch ± 100 Hz', or 'inPitch / 2').

I have looked at FluidBufPitch, but all the examples use conditional logic to create arrays and collections, so I'm wondering if this is the right place to start for a flexible solution that would respond to a live input or continuous signals. I would think perhaps something with UGens such as Pitch, Tartini, or Amplitude, but maybe FluCoMa allows for it more directly.

I have used Rodrigo's Combine extensively in the past (hello Rodrigo!), which was my first and only experience of corpus-based concatenative synthesis. I still find it amazingly fun and accurate. However, I am slowly transitioning to SuperCollider and of course would like to be able to customize a few things in the future :wink:

Thanks in advance for your suggestions. I hope for you this will be a good example of a newbie's perspective/hopes/dreams… and limitations!
Max

Hello and welcome!

As an eternal intermediate in SC, I'll do my best to help. I'm trying to understand what you are trying to do and to remove the assumptions that might hinder your development:

  1. load many sounds
  2. analyse their pitch
  3. (not clear about selection but from live input and finding the nearest pitch?)
  4. play the file (whole or partial) adjusting the pitch

Am I right? If yes, you will need to find a way to keep the loaded data position/buffernumber associated with the pitch and the confidence. DataSet is an option. Dictionaries too.
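A minimal sketch of what the Dictionary route could look like, assuming a folder of sound files on disk (the path and the event fields here are illustrative, and the pitch values are placeholders until the analysis step is done):

```supercollider
// Map each Buffer to its analysis data in a Dictionary.
// The folder path and the field names are illustrative.
(
~corpus = Dictionary.new;
PathName("~/sounds/".standardizePath).files.do { |file|
    var buf = Buffer.read(s, file.fullPath);
    // pitch and confidence are placeholders to be filled by analysis
    ~corpus[buf] = (pitch: nil, confidence: nil);
};
)
```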

So for now, I’d start with making #1-2-3 work. The helpfile you refer to is doing the process in a slightly convoluted way. There are simpler ones, and I’m happy to help but we need to take it one step at a time. First, understanding the musical ideal so we can find the simplest way forward through the concepts and the code. Does that make sense?

Hi, thanks for getting back !

First, understanding the musical ideal so we can find the simplest way forward through the concepts and the code. Does that make sense?

absolutely.

  1. yes
  2. yes
  3. yes! live input and selecting the sample with the nearest pitch.
  4. yes; just to be clear, I don't need to adjust the pitch of the samples, rather sometimes to call for other pitches in the folder based on the pitch of the incoming audio (e.g. look for inFreq + 100 or inFreq * 2, etc.)

Am I right? If yes, you will need to find a way to keep the loaded data position/buffernumber associated with the pitch and the confidence. DataSet is an option. Dictionaries too.

OK, Dictionaries sound good. Recently I got used to loading folders with convenience classes such as BufFiles or Convenience, but I'm not sure this will work. I will brush up my skills on dictionaries and give it a shot in the next few days.
I don't know about DataSet though; I will have a look at the documentation.

I think what I'm most fearful/excited about is having to write conditional logic that is flexible between a live signal and a dictionary. I don't even know how to write proper 'if' statements, nor how to route signals into them, and I'm not sure that is even the way to go. What is your opinion on this? How to say: when inFreq == ~buffer[x]Pitch, play ~buffer[x]?
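For what it's worth, exact equality (`==`) on frequencies will almost never fire with floating-point pitch values; the usual move is to pick the buffer whose stored pitch is *closest* to the input. A sketch, assuming a hypothetical `~corpus` Dictionary mapping each Buffer to a pitch in Hz (all names illustrative):

```supercollider
// Pick the buffer whose stored pitch is closest to the incoming
// frequency. ~corpus is assumed to map Buffer -> pitch in Hz.
(
~nearestBuffer = { |inFreq|
    var best, bestDist = inf;
    ~corpus.keysValuesDo { |buf, pitch|
        var dist = (pitch - inFreq).abs;
        if (dist < bestDist) { bestDist = dist; best = buf };
    };
    best;
};
)
// usage: ~nearestBuffer.value(440).play;
```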

Thanks again immensely for your help and for this project,
max

PS: if that's of any help to understand the musical ideal, here is a (quite sloppy!) example using Rodrigo's Combine on sine tones a while back: http://u.pc.cd/jnQ7

Plus a more composed example, SC material, but the harmonization is all done by hand in a DAW: http://u.pc.cd/PUsrtalK Hopefully I can do stuff like that by surprise in FluCoMa!

Hello!

As much as I like what I hear, I definitely do not hear a pitch-driven process in there, so maybe the mapping is clear to you because you know the material…

What I would say is to start with points 1 and 2 above. We have produced in SuperCollider (and in Max) some utilities: one that loads audio into a buffer and generates a dictionary with the positions (FluidLoadFolder), one that takes that dictionary and optionally further segments it (FluidSliceCorpus), and finally one that takes the dictionary from either of the above and analyses each element (FluidProcessSlices). You can check their respective helpfiles, and the examples in the Example folder.
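A hedged sketch of the loading stage, just to show the shape of it (the folder path is illustrative; check the FluidLoadFolder helpfile for the exact interface):

```supercollider
// Load a folder of sounds into one buffer plus an index of positions.
(
~loader = FluidLoadFolder("~/sounds/".standardizePath);
~loader.play(s, {
    // ~loader.index is a dictionary describing where each file
    // sits inside ~loader.buffer
    ("loaded" + ~loader.index.size + "files").postln;
});
)
```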

But first, I think you need to be able to get to the sounds in a buffer. You can use the utilities above, or your own, but you need a place where you know where the sounds are, can play them, and can attach their pitch. Then we can find one of many ways to query that, including the FluidDataSetQuery class.
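As a hedged taste of the DataSet route (the exact method names are in the FluidDataSet helpfile): each analysed file becomes a labelled point, here a 1-dimensional point holding its pitch, which query tools such as FluidDataSetQuery or FluidKDTree can then search. The label and the pitch value below are illustrative:

```supercollider
// Store one pitch value per file as a 1-D point in a FluidDataSet.
(
~ds = FluidDataSet(s);
~point = Buffer.alloc(s, 1); // one dimension: pitch in Hz
~point.set(0, 440);          // illustrative pitch value
~ds.addPoint("file1", ~point);
~ds.print;
)
```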

Let me know where you get stuck next, so I can help further and/or in more detail.

As much as I like what I hear, I definitely do not hear a pitch-driven process in there, so maybe the mapping is clear to you because you know the material…

Aha, fair enough. I don't know if you caught the first example, which is more obvious: besides the unnecessary rhythm on top, it's really just samples triggered from sine tones with Rodrigo's Combine set on pitch only.
As for the second one (at least the first minute), it would be more like an example of calling other pitches based on the pitch of the incoming signal. So imagine something like this to query for pitches in the folder, with ~inFreq being the pitch of the incoming audio: Array.makeScaleCps(~inFreq, 'major', 100, 1000).choose
I don't know if this makes any sense? Maybe we can just come back to it later; apologies if it's hard to read/understand!
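If it helps, here is one hedged way to realise that idea in SC. As far as I know there is no built-in `Array.makeScaleCps`, so this sketch builds major-scale frequencies around a root by hand (the function name and octave range are illustrative):

```supercollider
// Build all frequencies of a major scale rooted at `root`
// that fall between `lo` and `hi` Hz; one can then .choose from them.
(
~scaleCps = { |root, lo = 100, hi = 1000|
    var ratios = Scale.major.ratios; // pitch ratios within one octave
    var freqs = List.new;
    (-4..4).do { |oct|
        ratios.do { |r|
            var f = root * r * (2 ** oct);
            if ((f >= lo) and: { f <= hi }) { freqs.add(f) };
        };
    };
    freqs.asArray.sort;
};
)
// usage: ~scaleCps.value(220).choose
```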

What I would say is to start with points 1 and 2 above. We have produced in SuperCollider (and in Max) some utilities: one that loads audio into a buffer and generates a dictionary with the positions (FluidLoadFolder), one that takes that dictionary and optionally further segments it (FluidSliceCorpus), and finally one that takes the dictionary from either of the above and analyses each element (FluidProcessSlices). You can check their respective helpfiles, and the examples in the Example folder.

Thanks, this is very helpful! It helps me understand the process a bit better.

But first, I think you need to be able to get to the sounds in a buffer. You can use the utilities above, or your own, but you need a place where you know where the sounds are, can play them, and can attach their pitch. Then we can find one of many ways to query that, including the FluidDataSetQuery class.

Yes. Sounds loaded into buffers is OK. Adding their pitch will be the next step. I'll try this out in the coming days and get back to you with hopefully more specific struggles.

Thanks again for your guidance, have a nice afternoon !
m