Where to start? / Tutorials

Hi all. First post: this is an absolutely mind-blowing project - love it!

I’m very interested in digging into the FluCoMa tools, but I’m also a little lost as to how to begin. I’m particularly interested in resynthesis using different types of descriptor datasets (for example, being able to weight towards spectral centroid as opposed to pitch, and so on), but I’m finding it hard to work out how to start tackling this, and how to combine these different objects (which I mostly understand from the Max help files) into musically usable systems/Max patches.

However, I’ve noticed a number of users posting obviously very advanced applications of the FluCoMa tools, so I was wondering what resources they used to learn. Any help or pointers would be much appreciated! Thanks!

Hello and welcome!

Your question sounds simple, but it is actually full of personal interests that peek through… the good news is that we are designing various entry points at the moment and for the next year, so understanding what tickles your interest helps us enable more people to join in.

You say:

which I wouldn’t want to read with my personal music hat on, but with yours. I also don’t understand all the missing parts that you assume in your question. So my questions are:

  • do you have a resynthesis idea in mind?

  • what other tools have you tried that gave you ideas of descriptor-driven synthesis? CataRT? AudioGuide? AudioStellar? Any others (there are many!)

  • what was missing from those that you want to see happen in your FluCoMa-in-Max experience?

If you tell me more about where you come from musically and in Max, and where you are trying to go with them, I can give you a clear pointer to the next few ideas on your path.

Looking forward to reading your answers

p


Hi,
Thanks for your reply. I don’t currently have an exact approach to resynthesis in mind - I suppose I’m interested in being able to select different features as having priority, or morphing between weightings of different features.

I have been using AudioGuide, which I like a lot. The issue I’m finding is that doing everything in Python really slows down the decision-making process.
I’m looking at FluCoMa as a way to get more hands-on, real-time control for auditioning sounds (it wouldn’t necessarily have to be super low latency, so long as the latency was predictable). The other downside of AudioGuide for my use is that, as far as I’m aware, you can’t evolve your parameters over time, which I’d imagine would be much more possible in Max/FluCoMa, and is definitely something that holds a lot of interest and potential sonic complexity. My initial usage of this will also eventually need to sync with video, which is easy with AudioGuide (parsing the JSON export) and with Max. So that’s why I’m looking at FluCoMa right now. I suppose what I’m looking to make is a real-time AudioGuide, with fairly adjustable resynthesis parameters.

Musically, I write for what would probably be termed contemporary classical musicians, often in combination with electronics and multimedia, and I’m fairly goodish with Max, having used it quite regularly for 10+ years.

Does any of what I’ve written make sense? I can probably start delving deeper into the help files and watching the plenary videos, but I thought there might be documentation with common building blocks. I know documentation is a massive additional effort, though, and the number of possible applications of FluCoMa is vast.

Thanks again for your reply!

Thanks again for this. There are still a lot of questions after your answer, so I will try to unpack them and propose avenues forward with what is currently available.

  • it is true that AudioGuide is amazing. I used it and still plan to use it, as it has a lot of affordances for what it does. One next step for you in that world is maybe to decide what is missing from it for your musicking. You speak of real-time control, but over what?

  • the idea of the relative weight of a descriptor in a query is very clearly implemented in CataRT as well as in AudioGuide, and I’ve used it a lot too. But in implementing it, one discovers that it raises the bigger question of what proximity is. I found out that I never want the same answer; it is very situated. Again, this is where the iterative querying of AudioGuide is helpful to understand what we care about. There are ways to do workflows similar to both CataRT (skewed Euclidean distances) and AudioGuide (same, plus consecutive sub-queries) in FluCoMa, but you need to know what you are missing (see the sketch after this list).

  • both of these take for granted that your resynthesis is a concatenation of samples, but these samples do not need to be audio. They could be settings of a complex synthesiser, for instance. Again, what you care about at a given moment to define proximity is quite important.
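
To make the ‘skewed Euclidean distance’ idea above concrete, here is a minimal sketch in plain Python/numpy (not FluCoMa code, and the corpus values are invented) showing how the same target can retrieve different segments depending on which descriptor the weights favour, and how an AudioGuide-style sub-query simply filters the corpus before ranking:

```python
import numpy as np

# Hypothetical corpus: one row per segment, columns = [pitch Hz, spectral centroid Hz, loudness dB].
corpus = np.array([
    [220.0, 1200.0, -18.0],
    [440.0,  900.0, -12.0],
    [330.0, 3500.0, -20.0],
    [196.0, 2500.0, -15.0],
])

def normalise(data):
    """Scale each descriptor column to 0..1 so the weights compare like with like."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), lo, hi

def weighted_query(corpus_norm, target_norm, weights):
    """Index of the nearest segment under a weighted (skewed) Euclidean distance."""
    diff = (corpus_norm - target_norm) * weights
    return int(np.argmin(np.sqrt((diff ** 2).sum(axis=1))))

corpus_norm, lo, hi = normalise(corpus)
target = (np.array([233.0, 3000.0, -16.0]) - lo) / (hi - lo)

# Biasing towards pitch or towards centroid is just a different weight vector,
# and the two queries pick different segments for the same target.
print(weighted_query(corpus_norm, target, np.array([1.0, 0.1, 0.3])))  # pitch-led match
print(weighted_query(corpus_norm, target, np.array([0.1, 1.0, 0.3])))  # centroid-led match

# Consecutive sub-query, AudioGuide-style: keep only the loud-enough segments, then rank those.
keep = np.where(corpus[:, 2] > -19.0)[0]
print(keep[weighted_query(corpus_norm[keep], target, np.array([0.1, 1.0, 0.3]))])
```

Normalising first matters: without it, a descriptor with a large numeric range (like centroid in Hz) would dominate the distance whatever weights you choose, which is one of the hidden biases mentioned below.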

So… how to get started? We don’t have easy big blocks in the early tutorials yet, but you can get started with the folder examples/dataset/1-learning examples, which explains a few key tools enabling various ways to find sounds in a corpus… but more importantly, it shows how everything is biased: the descriptors, how you deal with their evolution in time, how you query, and how you think about all that musically in context will all influence your decision making.
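
(A toy illustration of the ‘evolution in time’ bias, again in plain Python with invented numbers rather than the FluCoMa objects themselves: two quite different sounds can look identical once a descriptor trajectory is reduced to a single statistic.)

```python
import numpy as np

# Two invented spectral-centroid trajectories (Hz per analysis frame).
# Segment A hovers around 1500 Hz; segment B sweeps from 500 Hz up to 2500 Hz.
seg_a = 1500.0 + np.random.default_rng(0).normal(0.0, 20.0, 50)
seg_b = np.linspace(500.0, 2500.0, 50)

def summarise(traj):
    """Collapse a descriptor trajectory into a few statistics for storage in a dataset."""
    return {
        "mean": round(float(traj.mean()), 1),
        "std": round(float(traj.std()), 1),
        "delta": round(float(traj[-1] - traj[0]), 1),  # crude direction-of-movement measure
    }

print(summarise(seg_a))  # mean around 1500 Hz, small std and delta
print(summarise(seg_b))  # mean of 1500 Hz, large std and delta

# A query on the mean alone treats these very different sounds as near-identical;
# storing std or delta as well separates the static tone from the sweep.
# Which summary you keep is exactly one of the biases the learning examples expose.
```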

Agreed, the folder is messy for now: the order is a bit too ‘data science’ and gratification comes a little late in the process… but this is what we are working on for the next 15 months. So you could decide to park it for a few months (we should have more approachable handles by mid-autumn term), or you could embark on the current material and ask questions here, which will help us understand the various learning curves.

Hi. Thanks for your reply. I have to confess I couldn’t find the examples folder - I was looking at everything via the patch “Fluid Corpus Manipulation Toolkit” in the Max Extras menu (AFAIK the examples aren’t linked/referenced in the object help files). I probably should have had a little look, but I just dragged the FluCoMa folder into Packages and left it at that.

I’ll have a look through those, and thanks again for pointing me in the direction of those examples.
