Not clear what it’ll be mapped to, but this has some cool potential. This was pointed out to me by @spluta, and I’m fairly certain that @rodrigo.constanzo will drool over the potential and that @weefuzzy will want to poke at it:
That’s pretty cool.
I guess here’s an example of it doing some processing/synthesis based on the analysis:
I guess the idea is that it’s a smarter(?) version of Ben Carey’s _derivations thing.
I wonder if the electronic sounds are informed by more machine listening stuff, or if it’s simply driven off the ratio between those 10 faders.
A video of it in context as well:
Heretic is an artificially intelligent computer music system to be used within the context of collaborative human-machine free improvisation. Heretic is written in the SuperCollider programming language with specific aspects of the system implemented in the machine learning software Wekinator. The motivation behind Heretic’s inception is not to design a mirror into my own improvisational practice, but rather to use machine autonomy to explore a new form of human-machine collaboration. Heretic’s architecture consists of three interdependent modules: interpretive listening, contextual decision making, and musical synthesis. Interpretive listening is a collection of ten multi-layer perceptron neural networks that are organized according to my interpretation of Anthony Braxton’s Language Music System. The contextual decision making module uses Joe Morris’ Postures of Interaction as a framework for designing a series of cascading Markov models. The musical synthesis module uses a flexible laptop improvisation framework for Heretic to express its musical decisions. Heretic is trained on my approach to improvisation, but by interacting with a human performer its own improvisational voice and modes of musical expression emerge.
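For anyone curious what “cascading Markov models” might look like in practice, here’s a minimal Python sketch of the general idea: the state sampled by one chain selects which transition table the next chain consults. All the state names and probabilities below are invented for illustration — this is just one plausible reading of the abstract, not Heretic’s actual implementation (which is in SuperCollider/Wekinator):

```python
import random

# Top-level chain over coarse interaction "postures" (names invented here,
# loosely inspired by the abstract's mention of Postures of Interaction).
posture_transitions = {
    "lead":     {"lead": 0.5, "follow": 0.3, "contrast": 0.2},
    "follow":   {"lead": 0.3, "follow": 0.5, "contrast": 0.2},
    "contrast": {"lead": 0.4, "follow": 0.4, "contrast": 0.2},
}

# Lower-level chains over musical gestures, one per posture: which table is
# used "cascades" from the posture chosen above.
gesture_transitions = {
    "lead":     {"dense":  {"dense": 0.6, "sparse": 0.4},
                 "sparse": {"dense": 0.7, "sparse": 0.3}},
    "follow":   {"dense":  {"dense": 0.3, "sparse": 0.7},
                 "sparse": {"dense": 0.2, "sparse": 0.8}},
    "contrast": {"dense":  {"dense": 0.5, "sparse": 0.5},
                 "sparse": {"dense": 0.5, "sparse": 0.5}},
}

def step(table, state):
    """Sample the next state from one row of a transition table."""
    states, weights = zip(*table[state].items())
    return random.choices(states, weights=weights)[0]

def improvise(n_steps, posture="follow", gesture="sparse"):
    """Run the cascade: posture chain first, then the gesture chain it selects."""
    out = []
    for _ in range(n_steps):
        posture = step(posture_transitions, posture)           # chain 1
        gesture = step(gesture_transitions[posture], gesture)  # chain 2, conditioned on chain 1
        out.append((posture, gesture))
    return out

print(improvise(5))
```

In a real system the lower-level states would presumably drive synthesis parameters rather than print labels, and the transition tables could themselves be shaped by the machine-listening front end.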
I did the short course at IRCAM as part of manifeste 2016 with Hunter! Small world in geeky audio land it seems…