Thanks for your reply. I don’t currently have an exact approach to resynthesis in mind; I suppose I’m interested in being able to give different features priority, or to morph between different feature weightings.
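To be concrete, here’s a rough Python sketch of what I mean by morphing between weightings. Everything here (the three features, the 0–1 normalisation, the names) is just illustrative, not a real matcher:

```python
import numpy as np

def weighted_match(target, corpus, weights):
    """Index of the corpus frame closest to the target under the given weights."""
    diffs = (corpus - target) ** 2          # squared per-feature differences
    dists = (diffs * weights).sum(axis=1)   # weighted squared distance per frame
    return int(np.argmin(dists))

def morph_weights(w_a, w_b, t):
    """Linear interpolation between two weightings; t runs from 0.0 to 1.0."""
    return (1.0 - t) * np.asarray(w_a) + t * np.asarray(w_b)

# Illustrative use: 1000 corpus frames x 3 features
# (say pitch, loudness, centroid), all pre-normalised to 0..1.
corpus = np.random.rand(1000, 3)
target = np.array([0.5, 0.2, 0.8])
pitch_first = [1.0, 0.2, 0.2]     # weighting that prioritises pitch
centroid_first = [0.2, 0.2, 1.0]  # weighting that prioritises centroid

for t in (0.0, 0.5, 1.0):
    w = morph_weights(pitch_first, centroid_first, t)
    print(t, weighted_match(target, corpus, w))
```

Driving `t` from a clock or an envelope is also roughly what I mean below by evolving the parameters over time.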
I have been using AudioGuide, which I like a lot. The issue I’m finding is that doing everything in Python is really slowing down the decision-making process.
I’m looking at FluCoMa as a way to get more hands-on, real-time control for auditioning sounds (it wouldn’t necessarily have to be super low latency, so long as the latency was predictable). The other downside of AudioGuide for my use is that, as far as I’m aware, you can’t evolve your parameters over time, which I’d imagine is much more feasible in Max/FluCoMa, and which definitely holds a lot of interest and potential for sonic complexity.

My initial usage of this will also eventually sync with video, which is easy with AudioGuide (by parsing the JSON export) and with Max. So that’s why I’m looking at FluCoMa right now. I suppose what I’m looking to make is a real-time AudioGuide with fairly adjustable resynthesis parameters.
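For the video side, what I do with the export is essentially just read it back as event data, something along these lines. The file name and field names here are placeholders, not AudioGuide’s actual schema:

```python
import json

# Placeholder structure: a list of note events with onset, duration,
# and source-file fields. Adapt the keys to the real export format.
with open("resynthesis_export.json") as f:
    events = json.load(f)

for ev in events:
    start = ev.get("start", 0.0)     # onset time in seconds (assumed key)
    dur = ev.get("duration", 0.0)    # duration in seconds (assumed key)
    src = ev.get("filename", "?")    # source sample path (assumed key)
    print(f"{start:8.3f}s  {dur:6.3f}s  {src}")
```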
Musically, I write for what would probably be termed contemporary classical musicians, often in combination with electronics and multimedia, and I’m reasonably proficient with Max, having used it quite regularly for 10+ years.
Does any of what I’ve written make sense? I can start delving deeper into the help files and watching the plenary videos, but I thought there might be documentation covering common building blocks. I know documentation is a massive additional effort, though, and the number of possible applications of FluCoMa is vast.
Thanks again for your reply!