Audio-video Max improviser with acoustic ensemble

Good evening to all!
I’m a young Italian composer who started learning a few things about programming in Max/MSP 4 years ago. Far from being an expert programmer, I have some compositional ideas that I can’t realize without some suggestions and help.

I’m planning to develop a patch to improvise along with an ensemble of 15 improvising musicians, for whom I need to write a piece by late March 2023. I found a project by Taylor Brook (Taylor Brook - Composer) in which he used, among many other things, Markov chain algorithms to obtain something like an “imitating improviser”, with a poly~ object playing back an audio buffer recorded in real time. I’d love to build on this approach, maybe implementing FluCoMa tools to optimize it and to open up new ways of making the Markov chains work (but I’m open to any suggestion, of course).
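To make my idea a bit more concrete, here is a rough Python sketch of the kind of Markov logic I have in mind. This is only my own illustration, not Taylor’s actual patch: I’m assuming the live buffer has already been sliced at onsets and each slice given a cluster label (for example via FluCoMa slicing and descriptor analysis), and that in Max the chosen slice would then be sent to poly~ for playback.

```python
import random
from collections import defaultdict

def build_transitions(labels):
    """Count first-order transitions between slice labels
    (e.g. cluster IDs assigned to onset slices of the live recording)."""
    transitions = defaultdict(list)
    for current, nxt in zip(labels, labels[1:]):
        transitions[current].append(nxt)
    return transitions

def next_label(transitions, current):
    """Sample the next label; fall back to any observed label if the state is unseen."""
    options = transitions.get(current)
    if not options:
        options = [l for opts in transitions.values() for l in opts]
    return random.choice(options)

# Hypothetical label sequence produced by slicing + clustering the live buffer
labels = [0, 2, 2, 1, 0, 2, 1, 1, 0]
table = build_transitions(labels)

state = labels[-1]
for _ in range(8):
    state = next_label(table, state)
    print("play a slice from cluster", state)  # in Max: send the slice's start/length to poly~
```

The point is just that the transition table is rebuilt continuously from what the ensemble has actually played, so the “improviser” keeps imitating the most recent material.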

Moreover, since it would be particularly fitting for this piece for many reasons, I want to build a second algorithm that could create a dialogue between video files (pre-recorded or not) and the real-time audio played by the ensemble: it seems to me that if the parameters to be matched are chosen wisely, the result could be very interesting. I haven’t yet found examples of FluCoMa tools used in real-time audio-visual environments.
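Just to illustrate what I mean by “matching parameters”: something along these lines, where a couple of audio descriptors from the live analysis are mapped onto controls of the video playback. The descriptor names, ranges, and video parameters here are made up by me; in Max the analysis could come from FluCoMa descriptor objects and the video side from something like jit.movie.

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping with clamping, like Max's [scale] object."""
    x = max(in_lo, min(in_hi, x))
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def audio_to_video(loudness_db, centroid_hz):
    """Turn two audio descriptors into two hypothetical video controls."""
    playback_rate = map_range(centroid_hz, 200.0, 4000.0, 0.25, 2.0)
    brightness = map_range(loudness_db, -60.0, 0.0, 0.1, 1.0)
    return {"rate": playback_rate, "brightness": brightness}

# Example analysis frame: a quiet, dark-sounding ensemble moment
print(audio_to_video(loudness_db=-35.0, centroid_hz=600.0))
```

Of course the interesting part is choosing which descriptors and which video parameters to pair, which is exactly where I’d love some input.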

Do you have any suggestions on where else to look to find people who have already done something similar, or about anything at all that could help me realize these ideas?

Thanks to all in advance.
Love,
Giovanni


Hi @giovannifalascone,

Regarding Taylor’s work, you might be able to ask him! @brookt

As for the video stuff: I have done some work with video and FluCoMa (see quartet), though the video wasn’t done in Max; it was done in openFrameworks instead. See the article for more. The end product was a fixed video file, but as I was creating the piece I was working with the materials in real time. Let me know if that article helps at all!

Best,

T


Sounds like a cool project! In fact, I’m currently working on something like what you are describing: a new version of my computer improviser that integrates the FluCoMa objects and also allows for non-realtime training. In any case, I think there’s a lot to be done and a lot of directions something like this can go, but I’m happy to dialogue with you here if it’s helpful.

As for video, there is a video module in my earlier improviser, but all it does is record video live as the improviser is trained and then play it back when the corresponding audio is recalled and sampled by the improviser.
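Roughly speaking, the bookkeeping is no more than this (written here as a Python sketch rather than the actual Max patch, and the numbers are just an example): every audio slice stored during training also keeps a timestamp into the video recorded at the same time, and recalling the slice returns both cues.

```python
segments = {}  # slice_id -> (audio_start_s, audio_len_s, video_start_s)

def record_segment(slice_id, audio_start, audio_len, video_start):
    """While training, note where each audio slice sits in the audio buffer
    and in the simultaneously recorded video."""
    segments[slice_id] = (audio_start, audio_len, video_start)

def recall(slice_id):
    """When the improviser chooses a slice, return the matching audio and video cues."""
    audio_start, audio_len, video_start = segments[slice_id]
    return {"audio": (audio_start, audio_len),
            "video": (video_start, video_start + audio_len)}

record_segment(slice_id=7, audio_start=12.4, audio_len=0.8, video_start=12.4)
print(recall(7))
```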
