Not using FluCoMa stuff at the moment, but while working on the (epic) Kaizo Snare blog I'm recording and adding other bits of related media to the project.
This expands on some of the stuff I did in Kaizo Snare, where I derive a bunch of secondary controller streams from the fader (overall activity level, distance between direction changes, time between direction changes, "velocity" derived from those (distance/time), etc.) and use them to control the whole patch. So eight streams from the ES-8 controlling everything, all from the single fader. Something like the sketch below.
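For anyone curious what those derivations look like, here's a minimal sketch in Python (the actual patch is Max, and all the names here are mine, not from the patch): it watches for direction changes in the incoming fader value and spits out activity, distance, time, and velocity streams.

```python
import time

class FaderFeatures:
    """Derive secondary control streams from a single incoming fader value."""

    def __init__(self):
        self.prev_value = None
        self.prev_direction = 0   # +1 rising, -1 falling, 0 unknown
        self.turn_value = None    # fader value at the last direction change
        self.turn_time = None     # timestamp of the last direction change
        self.activity = 0.0       # leaky integrator of recent movement

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        out = {}
        if self.prev_value is not None:
            delta = value - self.prev_value
            direction = (delta > 0) - (delta < 0)
            # "Overall activity level": leaky sum of absolute movement.
            self.activity = 0.95 * self.activity + abs(delta)
            out["activity"] = self.activity
            if direction != 0 and direction != self.prev_direction:
                if self.turn_value is not None:
                    distance = abs(value - self.turn_value)  # distance between direction changes
                    elapsed = now - self.turn_time           # time between direction changes
                    out["distance"] = distance
                    out["time"] = elapsed
                    # "velocity" as distance/time between turns
                    out["velocity"] = distance / elapsed if elapsed > 0 else 0.0
                self.turn_value, self.turn_time = value, now
                self.prev_direction = direction
        self.prev_value = value
        return out
```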
Would be interesting to try some gestural decomposition, either in an NMF-y way like the CV stuff I did at the start (which I never got working in real time) or something like that Heretic thing from the other thread.
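To be clear about what I mean by NMF-y: something like learning a handful of gesture "templates" from recorded fader data and then reading the activations. A rough offline sketch, assuming sklearn (window size, hop, and component count are all arbitrary here):

```python
import numpy as np
from sklearn.decomposition import NMF

def gesture_bases(fader, win=64, hop=16, k=4):
    """Learn k gesture 'templates' from a recorded (0..1, non-negative) fader stream."""
    # Slice the recording into overlapping windows, one row per frame.
    frames = np.stack([fader[i:i + win] for i in range(0, len(fader) - win, hop)])
    model = NMF(n_components=k, init="nndsvda", max_iter=500)
    activations = model.fit_transform(frames)  # how much of each gesture, per frame
    bases = model.components_                  # the k gesture shapes themselves
    return bases, activations
```

Real time would mean keeping `bases` fixed and only estimating activations for the live window, which is a different (harder) problem than the offline fit.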
Even some “simple” novelty stuff to send triggers when new material is detected.
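That could be as dumb as comparing the current bundle of fader features to a running average and firing a trigger when it drifts far enough. A sketch, with a made-up threshold and window length:

```python
import numpy as np
from collections import deque

class NoveltyTrigger:
    def __init__(self, win=32, threshold=0.2):
        self.history = deque(maxlen=win)  # recent feature vectors
        self.threshold = threshold

    def step(self, features):
        """features: 1-D array, e.g. [activity, distance, velocity]."""
        features = np.asarray(features, dtype=float)
        novel = False
        if len(self.history) == self.history.maxlen:
            baseline = np.mean(self.history, axis=0)
            # Trigger when the current frame is far from the recent average.
            novel = np.linalg.norm(features - baseline) > self.threshold
        self.history.append(features)
        return novel
```

In the patch this would just be a bang out whenever `step()` returns True.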
The funny thing is I set out to try to get as much from a single fader as I could, and that shit fills up real quick!