Video presentation by the artists: second set of commissions

Dear all

Once again, as we did with the first tools, we asked the artists commissioned to work with the full pre-release toolbox to present in detail how they integrated it into their creative workflow.

All the premieres will take place at the Dialogues festival, in a mix of online and in-person performances, on 7–9 July, so we thought you might be interested to hear from the musicians themselves what they are cooking up.

Last week, @tutschku released his explanation of the piece he wrote for Mark Knoop. You can find it in this thread. I’ll add to this Topic as we release more of these videos.


PS: If you missed it, you can find all the information about the first cohort here: Commissioned Artworks. The pieces from this set of gigs will be documented the same way after the premieres.


@tutschku presented his work in progress earlier in the plenary, so for those who are interested in seeing the progress between the two videos, here it is:


And this week’s film is… me! I talk about the interaction between the toolset design, this discussion forum, the help files and examples, and the piece I worked on and premiered last week.

And this week it is @spluta, presenting how machine learning has helped him curate an expressive space for a chaotic synthesiser.


And this week it is @rdevine1, with his integration of FluCoMa-driven sound design within a modular performance.


btw this is the publishing schedule:
18 May: Hans Tutschku
25 May: (yours truly)
1 June: Sam Pluta
8 June: Richard Devine
15 June: Owen Green
22 June: Alex Harker
29 June: Gerard Roma
6 July: Alice Eldridge

Gigs are 7-9 July. I’ll post the link when I have it.

I hope you enjoy!


Oh, I forgot to post @weefuzzy’s video here last week, in which he presents his work on ‘Regulatory Capture’, a machine-learning-driven mashing engine for his trio Raw Green Rust.


And this week, @a.harker presents his work in progress for oboe and real-time electronics, to be premiered by Niamh Dell in two weeks!


Last week was our own @groma’s turn, and I posted everywhere except here! It is an updated video in which he talks about the latest developments in his research on Fluid Corpus Map V.2 and how it integrates into his performance of ‘Big Fry Up’ in two days!


Final instalment: this week @alicee and @Chriskiefer present the development process of their shared feedback instrument for the performance FeedbackFeedForward, to be premiered this Friday!


For info: the festival’s website is here:

All the gigs are streamed for those who cannot attend (there are FluCoMa pieces from tomorrow onwards).

Great to hear the detail of @alicee and @Chriskiefer’s approach in this, and how much is being done in real time with the ‘listen’ function. Enjoyed both the NN implementation and the real-time kd-tree matching too.

Can the cue-point searching use material from earlier in the same improvisation? It sounds great, by the way. Looking forward to Friday!

I’ll let them reply as to whether theirs does, but if not, you certainly could. Happy to talk about it if you want.


Thanks @Saguaro. We use pre-recorded material here (from the Halo installation we mentioned in the talk), but all the analysis and resynthesis is done live, so you could equally bring in your own material recorded earlier in an improvisation …
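For anyone curious what that matching idea looks like in code, here is a minimal pure-Python sketch: analysed segments accumulate in a corpus as (feature vector, cue point) pairs, and each new analysis frame is matched to its nearest neighbour with a tiny k-d tree. This is an illustration only, standing in for FluCoMa’s own KDTree object; the two-dimensional features and cue names are hypothetical.

```python
# Minimal k-d tree for nearest-neighbour cue matching (illustrative sketch).
# A corpus of analysed segments is stored as (feature_vector, cue_point)
# pairs; an incoming analysis frame is matched to the closest stored one.

class Node:
    def __init__(self, point, payload, axis):
        self.point, self.payload, self.axis = point, payload, axis
        self.left = self.right = None

def build(items, depth=0):
    """Recursively build a k-d tree from (feature_vector, payload) pairs."""
    if not items:
        return None
    axis = depth % len(items[0][0])          # cycle through dimensions
    items = sorted(items, key=lambda it: it[0][axis])
    mid = len(items) // 2                    # median point becomes the node
    node = Node(items[mid][0], items[mid][1], axis)
    node.left = build(items[:mid], depth + 1)
    node.right = build(items[mid + 1:], depth + 1)
    return node

def nearest(node, query, best=None):
    """Return (squared_distance, payload) of the nearest stored point."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node.point, query))
    if best is None or d2 < best[0]:
        best = (d2, node.payload)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:                  # other side may hold a closer point
        best = nearest(far, query, best)
    return best

# Usage: corpus of analysed segments -> match an incoming frame to a cue point.
corpus = [((0.2, 0.9), "cue A"), ((0.8, 0.1), "cue B"), ((0.5, 0.5), "cue C")]
tree = build(corpus)
_, cue = nearest(tree, (0.55, 0.45))         # cue is "cue C"
```

To use material from earlier in the same improvisation, as discussed above, you would simply rebuild (or incrementally grow) the corpus as new segments are analysed during the performance.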