Interesting discussion on corpus-based musicking

Two people on this site have talked together for 90 minutes about corpus-based musicking. I think that should interest some people :slight_smile:

@rodrigo.constanzo and @encanti it is fun to see your thoughts and tunes, congrats!


Enlightening!

@rodrigo.constanzo @encanti
You talked about the art of creating corpora. What would be an approach to dynamically including and excluding certain parts of a corpus?
By "parts" I don't mean regions of the corpus (for that, I suppose one would have to change the analyzed features of the target signal?), but rather, for example, all the samples from a specific folder or time frame of the source signal.

Maybe this is not the right approach, but the goal would be to have some dynamic control over the selection of samples from an already constructed corpus... in a live performance, for example.

There's obviously loads of ways you can go about it.

At a macro level, deciding what to include in a corpus at all, and how detailed/thorough it should be, will probably have the single biggest impact. For example, I've quite gotten into having relatively small corpora, which can end up sounding kind of stuck and repeat-y in certain settings.

Beyond that, you can choose how you navigate the corpus. I really got used to using @a.harker's entrymatcher, which allowed for very flexible querying of the corpus space, and you could chain loads of queries together (e.g. "the nearest match within only samples longer than 500ms, with a bright centroid, and no pitch confidence"). In FluCoMa that kind of querying is not possible (in the same way, or at all in some cases), so I'm approaching things differently.
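(To make the idea concrete outside of Max, here's a rough Python/numpy sketch of that kind of chained filter-then-nearest-match query. It isn't FluCoMa or entrymatcher code, and the descriptor table and thresholds are made up, but the shape of the logic is the same.)

```python
import numpy as np

# Hypothetical descriptor table: one row per corpus slice.
rng = np.random.default_rng(0)
n = 1000
corpus = {
    "duration":   rng.uniform(50, 2000, n),   # ms
    "centroid":   rng.uniform(200, 8000, n),  # Hz
    "pitch_conf": rng.uniform(0, 1, n),
    "loudness":   rng.uniform(-60, 0, n),     # dB
}

def query(target_centroid, target_loudness):
    # 1. Chain the filters: only slices longer than 500 ms, reasonably bright,
    #    and with low pitch confidence survive.
    mask = (
        (corpus["duration"] > 500)
        & (corpus["centroid"] > 3000)
        & (corpus["pitch_conf"] < 0.2)
    )
    candidates = np.flatnonzero(mask)
    if candidates.size == 0:
        return None  # nothing satisfies the filters

    # 2. Nearest match among the survivors, on the matching descriptors only.
    feats = np.column_stack([corpus["centroid"][candidates],
                             corpus["loudness"][candidates]])
    target = np.array([target_centroid, target_loudness])
    return candidates[np.argmin(np.linalg.norm(feats - target, axis=1))]

print(query(4000.0, -12.0))  # index of the slice to play back
```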

I know @tremblap likes pre-splitting the corpus into different chunks and then querying within each of those sections. I think that's quite handy, but not so fast for quick realtime use (i.e. 'per query' variations).
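(Again just a hedged sketch in Python rather than Max, with invented labels and arrays: split the corpus indices by section once up front, then each query only searches inside the chosen chunk.)

```python
import numpy as np

rng = np.random.default_rng(1)
descriptors = rng.uniform(0, 1, (1200, 3))   # e.g. centroid, loudness, flatness (normalised)
sections = rng.integers(0, 4, 1200)          # a section label assigned when the corpus was built

# Pre-split: index the corpus once per section.
by_section = {s: np.flatnonzero(sections == s) for s in np.unique(sections)}

def query_in_section(target, section):
    idx = by_section[section]
    dists = np.linalg.norm(descriptors[idx] - target, axis=1)
    return idx[np.argmin(dists)]   # corpus-wide index of the best match in that chunk

print(query_in_section(np.array([0.5, 0.2, 0.8]), section=2))
```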

Beyond that you can get a bit more abstract with it. I've got a friend who likes building up a corpus by number of samples (e.g. samples 0-100 represent "section 1", samples 101-200 represent "section 2", etc.) and then navigating through those sub-selections of the larger corpus.
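(A quick sketch of that index-range approach, again in Python with made-up numbers: the "sections" are just index ranges over the already-analyzed corpus, so switching them on or off per query needs no re-analysis.)

```python
import numpy as np

rng = np.random.default_rng(2)
descriptors = rng.uniform(0, 1, (300, 2))   # whole corpus, already analyzed

# "Sections" defined purely by sample index: 0-99 = section 1, 100-199 = section 2, ...
section_ranges = {1: (0, 100), 2: (100, 200), 3: (200, 300)}

def query(target, active_sections):
    # Build the sub-selection from whichever sections are currently "on".
    idx = np.concatenate([np.arange(*section_ranges[s]) for s in active_sections])
    dists = np.linalg.norm(descriptors[idx] - target, axis=1)
    return idx[np.argmin(dists)]

# Navigating the piece is just changing which sections are active per query;
# the corpus itself (and its analysis) never changes.
print(query(np.array([0.3, 0.7]), active_sections=[2]))
print(query(np.array([0.3, 0.7]), active_sections=[1, 3]))
```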

There's also loads to be said about what descriptors you use, etc., as that can have a gigantic impact on the matching you get.
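(One way to see that impact in a toy sketch, nothing FluCoMa-specific: weight or drop descriptors in the distance calculation and the "same" target lands on different slices.)

```python
import numpy as np

rng = np.random.default_rng(3)
# Columns: pitch, spectral centroid, loudness (all normalised to 0-1 here for simplicity).
descriptors = rng.uniform(0, 1, (500, 3))
target = np.array([0.4, 0.9, 0.1])

def nearest(weights):
    # Weighting (or zeroing out) descriptors changes the distance metric.
    w = np.asarray(weights, dtype=float)
    dists = np.linalg.norm((descriptors - target) * w, axis=1)
    return int(np.argmin(dists))

print(nearest([1, 1, 1]))   # all descriptors matter equally
print(nearest([0, 1, 0]))   # centroid-only matching
print(nearest([5, 1, 1]))   # heavily pitch-weighted matching
```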

Thanks! I will explore these ideas...

Do you know of any other shared patchers (besides your sp.tools, for example) that use the FluCoMa objects for concatenative synthesis?

Would be nice to learn about different approaches/concepts...

// or resources in general, videos, etc.

There's some discussion (and patches from @tremblap) in this thread:

Hey @MartinMartin

I don't know if you have read the various articles on the FluCoMa Learn website (https://learn.flucoma.org/), but many people have indeed tried various ways, and @jacob.hart's article comes with demo patches... I did some implementations in here, @jamesbradbury too... there are so many ways to match two bunches-of-sounds :slight_smile:

There is also the thread about pedagogues here. In there you will find some ideas and teaching material, so you can find inspiration and start-up patches for audio mosaicking, in real time and non-real time... ideas for batch processing, for instance.

I don't know what you are looking for, as the subject is vast! Let me know what you are trying to do and what you have already done in any software, so I can maybe be a bit more specific.
