This turned out longer than I initially thought, as it first gives some more general examples of how I use interaction between piano and electronics in previous compositions.
If you are mainly interested in the Max usage, and in how I combine sound processing with real-time symbolic note crunching with bach and cage, skip to 11:30.
Super clear video. Great to see the earlier works discussed too (I'm reminded how useful they were for me back when I was starting out with piano + electronics, and why so much of my instrument is attack-based stuff). On that subject: you stuck with the existing transient detection, correct? Curious if that is simply because it worked (so no need to 'fix' it)? The modules sound really productive for your material. Congrats and thank you!
To nitpick: is FluCoMa's use of NMF non-negative, or non-linear?
The latter is actually not a delay line. I'm recording into a buffer~. When an attack is detected, I store the current position and, depending on the delay duration, play a windowed groove~ loop. Pitch in groove~ is controlled with sig~, where I can run a line~ to change the transposition over time. This describes one voice of a poly~. I have 16 instances: each new attack can have its own rhythm (with slight acceleration or ritardando) and its own glissando curve.
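If anyone wants to experiment with this idea, here is a minimal gen~ codebox sketch of one such voice. It is only an illustration under assumptions, not the author's patch: the buffer name loopbuf and the params start, len and rate are hypothetical, and the recording side is left out.

```
// One voice of the windowed buffer-loop playback (sketch, not the original patch).
// Assumes a buffer~ named "loopbuf" is being recorded into elsewhere.
Buffer buf("loopbuf");

Param start(0);   // loop start in samples, captured when an attack is detected
Param len(4410);  // loop/window length in samples (~100 ms at 44.1 kHz)
Param rate(1);    // playback rate = transposition; ramp it for a glissando

History phase(0);                          // read position within the window
phase = wrap(phase + rate, 0, len);        // advance and wrap the loop phase
w = 0.5 - 0.5 * cos(phase / len * twopi);  // Hann window to avoid clicks
out1 = peek(buf, start + phase, 0) * w;    // nearest-sample read; a real patch would interpolate
```

In the setup as described, the transposition would arrive as a signal (the sig~ driven by line~ mentioned above) rather than a Param, and poly~ would host the 16 copies of this voice.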
For the curious minds: there is a previous video, extracted from the last plenary, which contains other interesting implementation details and investigations from earlier in the composition process. It is quite fascinating!
Thanks for sharing these! Very interesting and inspiring.
I have a noobish question regarding the video in the first post of this thread: what is the module used to choose samples, the one that looks like a jit.cellblock? I have been trying to find a way to select multiple simultaneous cells in a jit.cellblock, with no luck.
NMF = non-negative matrix factorization, but your slide said non-linear matrix factorization, which is also a thing. Just curious whether this was intended… Not a big deal.
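For anyone skimming the thread, the standard definition (textbook material, not from the video): NMF approximates a non-negative matrix $V \approx WH$ with $W \ge 0$ and $H \ge 0$ entrywise. For audio, $V$ is typically a magnitude spectrogram, the columns of $W$ are spectral templates, and the rows of $H$ their activations over time. The model itself is linear in each factor; "non-negative" is the defining constraint.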