Where to start? / Tutorials

Hi all. First post: this is an absolutely mind-blowing project - love it!

I’m very interested in digging into the FluCoMa tools, but I’m also a little lost as to how to begin. I’m particularly interested in resynthesis using different types of descriptor datasets (for example, being able to weight towards spectral centroid as opposed to pitch, and so on), but I’m finding it hard to work out how to start tackling this, and how to combine these different objects (which I mostly understand from the Max help files) into musically usable systems/Max patches.

However, I’ve noticed a number of users posting obviously very advanced applications of the Flucoma tools, so was wondering what resources they used to learn about this? Any help or pointers would be much appreciated! Thanks!

Hello and welcome!

Your question sounds simple, but it is actually full of personal interests that peek through… the good news is that we are designing various entry points at the moment and for the next year, so understanding what tickles your interest helps us enable more people to join in.

You say:

which I wouldn’t want to read with my personal music hat on, but with yours. I also don’t understand all the missing parts that you assume in your question. So my questions are:

  • do you have a resynthesis idea in mind?

  • what other tools have you tried that gave you ideas of descriptor-driven synthesis? CataRT? AudioGuide? AudioStellar? Any others (there are many!)

  • what was missing from those that you would like to see happen in your FluCoMa-in-Max experience?

If you tell me more about where you come from musically and in Max, and where you are trying to go with them, I can give you a clearer pointer to the next few ideas on your path.

Looking forward to reading your answers

p

Hi,
Thanks for your reply. I don’t currently have an exact approach to resynthesis in mind - I suppose I’m interested in being able to select different features as having priority, or morphing between weightings of different features.

I have been using AudioGuide, which I like a lot. The issue I’m finding is that doing everything in Python really slows down the decision-making process.
I’m looking at FluCoMa as a way to get more hands-on, real-time control for auditioning sounds (it wouldn’t necessarily have to be super low latency, as long as the latency was predictable). The other downside of AudioGuide for my use is that, as far as I’m aware, you can’t evolve your parameters over time, which I’d imagine would be much more possible in Max/FluCoMa, and which is definitely something that holds a lot of interest and potential sonic complexity for me. My initial usage of this will also eventually sync with video, which is easy with AudioGuide (parsing the JSON export) and should also be straightforward in Max. So that’s why I’m looking at FluCoMa right now. I suppose what I’m looking to make is a real-time AudioGuide with fairly adjustable resynthesis parameters.

Musically, I write for what would probably be termed contemporary classical musicians, often in combination with electronics and multimedia, and I’m fairly good with Max, having used it quite regularly for 10+ years.

Does any of what I’ve written make sense? I can probably start delving deeper into the help files and watching the plenary videos, but I thought there might be documentation covering common building blocks. I know documentation is a massive additional effort, though, and the number of possible applications of FluCoMa is vast.

Thanks again for your reply!

Thanks again for this. There are still a lot of questions after your answer, so I will try to unpack them and propose avenues forward with what is currently available.

  • it is true that AudioGuide is amazing. I used it and still plan to use it, as it has a lot of affordances for what it does. A next step for you in that world might be to decide what is missing from it for your musicking. You speak of real-time control, but over what?

  • the idea of giving descriptors relative weights in a query is very clearly implemented in CataRT as well as in AudioGuide, and I’ve used it a lot too. But when implementing it, one discovers that it raises the bigger question of what proximity is. I found out that I never want the same answer; it is very situated. Again, this is where the iterative querying of AudioGuide is helpful for understanding what we care about. There are ways to do workflows similar to both CataRT (skewed Euclidean distances) and AudioGuide (the same, plus consecutive sub-queries) in FluCoMa (see the sketch after this list), but you need to know what you are missing.

  • both of these take for granted that your resynthesis is a concatenation of samples, but these samples do not need to be audio. They could be settings of a complex synthesiser, for instance. Again, what you care about at a given moment to define proximity is quite important.
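
To make the ‘skewed Euclidean distance’ idea concrete, here is a minimal sketch, in Python rather than Max, of what weighting descriptors in a query amounts to. The corpus array and the three descriptor columns (pitch, centroid, loudness) are made up for illustration; this is not FluCoMa’s API, just the arithmetic behind the idea.

```python
import numpy as np

# Hypothetical corpus: one row per slice, columns = [pitch, centroid, loudness],
# all pre-scaled to comparable ranges (e.g. 0..1 after normalisation).
corpus = np.random.rand(200, 3)

def weighted_nearest(target, corpus, weights):
    """Index of the corpus entry closest to `target` under a
    weighted (skewed) Euclidean distance."""
    w = np.sqrt(np.asarray(weights, dtype=float))
    dists = np.linalg.norm((corpus - target) * w, axis=1)
    return int(np.argmin(dists)), float(dists.min())

# Favour spectral centroid over pitch: pitch 0.1, centroid 1.0, loudness 0.5.
target = np.array([0.4, 0.7, 0.5])
idx, dist = weighted_nearest(target, corpus, weights=[0.1, 1.0, 0.5])
print(f"best match: slice {idx}, distance {dist:.3f}")
```

The same effect can presumably be had in FluCoMa by scaling the columns of a dataset before handing it to a KD-tree: a plain Euclidean query on rescaled columns is mathematically the same as a weighted query on the original ones.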

So… how to get started? We don’t have easy big blocks in the early tutorials yet, but you can get started with the folder examples/dataset/1-learning examples, which explains a few key tools enabling various ways to find sounds in a corpus… but more importantly, they show how everything is biased: the descriptors, how you deal with their evolution over time, how you query, and how you think about all of that musically in context will influence your decision making.
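
As a tiny illustration of the ‘evolution over time’ bias, here is a sketch, again in Python with a made-up centroid track, of what summarising a time-varying descriptor per slice looks like (in FluCoMa this kind of per-slice summary is the job of the buffer statistics object). The numbers are invented; the point is simply that every choice of summary statistic hides something.

```python
import numpy as np

# Hypothetical per-frame spectral centroid track for one slice (Hz per analysis frame).
centroid_track = np.array([1200.0, 1450.0, 1600.0, 1580.0, 900.0, 700.0])

# One common way to deal with a descriptor's evolution over time is to reduce the
# track to a few statistics per slice. Each choice is a bias: the mean flattens
# the trajectory; adding the standard deviation keeps a sense of how much it moves.
summary = {
    "mean":   float(centroid_track.mean()),
    "std":    float(centroid_track.std()),
    "median": float(np.median(centroid_track)),
}
print(summary)
```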

Agreed, the folder is messy for now: the order is a bit too ‘data science’ and gratification comes a little late in the process… but this is what we are working on for the next 15 months. So you could decide to park it for a few months, by which point we should have more approachable handles around mid autumn term, or you could embark on the current material and ask questions here, which will help us understand various learning curves.

Hi. Thanks for your reply. I have to confess I couldn’t find the examples folder - I was looking at everything via the patch “Fluid Corpus Manipulation Toolkit” in the Max Extras menu (AFAIK the examples aren’t linked/referenced in the object help files). I should probably have had a little look around, but I just dragged the FluCoMa folder into Packages and left it at that.

I’ll have a look through those, and thanks again for pointing me in the direction of those examples.

Hi… I’m reviving this thread.
I’ve looked through the dataset examples, and I must say I generally feel quite confident with technology (my day job is interactive programming and electronic design for some of the world’s largest tech companies), but in this case I feel like a real idiot.

I’d be curious to know whether looking through the examples folder is generally how people are learning to use FluCoMa, or whether it’s mostly through an institutional network, with a lot of discussion and support available at the academic institutions?

This isn’t a complaint at all about the documentation, but I’d love to know how people are becoming so advanced in their usage of FluCoMa, and what resources they are using.

Please be critical, we’re redesigning it all right now! Feedback is very welcome.

Hey Jamonimo,

This thread makes my heart sing – not because you are struggling – but because it’s a confirmation that there is a serious need for documentation that is plain-speaking and verbose. I know my wonderful colleagues @weefuzzy, @tedmoore and @tremblap agree here, and it’s the aim of this final year of the project to make the FluCoMa ecosystem more accessible across all levels.

I guarantee in any case that it’s not your fault, but rather the lack of documentation :bookmark_tabs:

Many who roam here have been sticking with the software since version 0.1a (a privilege that others have not had access to), and so they’ve become accustomed to the edges and have also had a chance to benefit from other people’s usage over an extended period of time. I would also argue that while people seem to have ninja skills, my observation is that users find the corner of FluCoMa that resonates with their practice and explore around there for a long time. For me this was UMAP + descriptors for sample browsing, and I’m a total slob when it comes to tweaking neural networks, for example. So my hypothesis is that the documentation should help people find their happy place and also offer ways to challenge themselves.

Absolutely not lol! The examples are mostly an artefact of playing with musical ideas and questions, and of that crystallising into a patch. What we should really do, and it’s actually in my calendar for the immediate future, is to turn these patches into teaching vignettes. I’m just in the process of doing workshops right now.

So, my question to you is: what are you interested in doing musically/technologically, and how can I help? Can I make some examples for you, and we can see how that might filter upwards into a tutorial? Let’s turn this into a win-win scenario where we, as in FluCoMa, learn how to reach and engage potential users, and you learn something about the tools.

Hi James,

Thank you very much for your incredibly kind response. My apologies for leaving my reply to this for such a long time - I was called off on a work project just as it came in, and I also wanted to consider what to ask.

I’ve just watched your new YouTube tutorial about building a 2D corpus explorer. It’s very illuminating, has definitely helped me understand how the data structures work, and has allowed me to get the ball rolling.

I’d be interested in approaches towards combining the 2D datasets with analysis of live inputs - e.g. finding nearest neighbours for slices of live audio as opposed to drawing with the cursor. I suppose one issue is how to match realtime descriptors to the normalised 2D values (accepting that some sort of arbitrary mapping or calibration will have to happen to the input signal).
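
To make the kind of calibration I mean concrete, here is a rough sketch in Python with made-up ranges (none of this is FluCoMa’s API): capture the descriptor ranges used when the corpus was analysed, apply the same scaling to each incoming frame, clip anything outside that range, and then do a nearest-neighbour lookup in the normalised space.

```python
import numpy as np

# Hypothetical normalised corpus (one row per slice, two descriptor columns),
# plus the raw descriptor ranges recorded when the corpus was analysed.
corpus_norm = np.random.rand(500, 2)
desc_min = np.array([-40.0, 200.0])   # e.g. loudness (dB), spectral centroid (Hz)
desc_max = np.array([0.0, 8000.0])

def live_to_query(frame):
    """Scale a live descriptor frame into the corpus's 0..1 range,
    clipping values outside the range seen during analysis."""
    x = (np.asarray(frame, dtype=float) - desc_min) / (desc_max - desc_min)
    return np.clip(x, 0.0, 1.0)

def nearest(point, corpus):
    """Index of the nearest corpus slice to `point` (plain Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(corpus - point, axis=1)))

live_frame = [-12.0, 3100.0]          # one incoming analysis frame
print("play slice", nearest(live_to_query(live_frame), corpus_norm))
```

My guess is that the Max version of this is re-using the same normalisation fit on the live input as was used on the corpus, rather than normalising the live stream independently, but that is exactly the kind of thing a tutorial would clear up.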

I know this is probably a vast topic and my enquiries are quite general (and that making detailed tutorials is a lot of work!), but this is an area that I feel would open a lot of doors for me and, I think, for many other musicians.

Woo! That’s the goal.

This is coming in the next set of tutorials, which will be the sequel. I’ll be starting on it early January and hopefully finishing soon thereafter. Stay tuned!

As people who are in touch with the software all the time, it is easy for us to forget that people need musicianly anchors to grab on to; otherwise it’s all quite abstract and hard to see the value in learning. I hope the pedagogical content that we make over the next six months or so helps you to find a place that situates FluCoMa in your work :slight_smile: