HPSS/Transients-sliced "realtime" granularization

Actually, kind of like @jamesbradbury suggested, this can also impact how grains are recorded. So say there's a massive rolling buffer (an "everything that has happened" buffer): when the onset detection algorithm fires, it queries the buffer as it presently stands for novelty (or whatever), then uses loudness (derivatives) and/or spectral (spread) descriptors to determine how long a snippet to take, building up a buffer of grains this way.
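
If it helps to picture it, here's a rough sketch of that idea in Python/librosa terms rather than the actual Max objects. The function name and the "loudness derivative decides snippet length" heuristic are my assumptions, just to make the logic concrete:

```python
import numpy as np
import librosa

def grab_snippets(rolling_buffer, sr=44100, hop=512,
                  min_len_s=0.05, max_len_s=0.5):
    """Cut (start, length) snippets from `rolling_buffer` at detected onsets."""
    # Onset positions in samples, standing in for the realtime detector firing
    onsets = librosa.onset.onset_detect(y=rolling_buffer, sr=sr,
                                        hop_length=hop, units='samples')
    # Frame-wise loudness proxy (RMS) and its absolute derivative
    rms = librosa.feature.rms(y=rolling_buffer, hop_length=hop)[0]
    d_rms = np.abs(np.diff(rms, prepend=rms[0]))

    snippets = []
    for start in onsets:
        frame = start // hop
        # Heuristic (an assumption, not the post's exact rule): the more the
        # loudness is still changing after the onset, the longer the snippet
        activity = d_rms[frame:frame + 20].mean() if frame < len(d_rms) else 0.0
        length_s = np.interp(activity, [0.0, d_rms.max() + 1e-9],
                             [min_len_s, max_len_s])
        snippets.append((start, int(length_s * sr)))
    return snippets
```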

Another thing I tested a while back (before I ran into a @startchan bug) was doing something similar to this idea, but with NMFed buffers.

I never got it fully working in the end, and then moved on to other things, but I may return to the core idea.

Basically, I would run audio into record~ @loop 1, and whenever the loop was done, it would decompose the contents of the buffer (at that moment) with fluid.bufnmf~ @components 2. The two components were then sorted by spectral centroid into a "dark" one and a "bright" one. I then ran each into fluid.bufonsetslice~ and copied all the slices (100ms long) into another buffer, which served as the "final" buffer for granularization.
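
Roughly, the chain is: NMF into two components, sort by centroid, onset-slice, keep fixed 100ms snippets. Here's a hedged Python/librosa equivalent of that pipeline (not the FluCoMa API, and the soft-mask resynthesis step is my stand-in for bufnmf~'s resynthesis):

```python
import numpy as np
import librosa

def nmf_dark_bright_slices(y, sr=44100, slice_len_s=0.1):
    # STFT magnitude, then 2-component NMF (the fluid.bufnmf~ @components 2 step)
    D = librosa.stft(y)
    S = np.abs(D)
    comps, acts = librosa.decompose.decompose(S, n_components=2)

    # Soft-mask resynthesis of each component back to audio
    parts = []
    full = comps @ acts + 1e-9
    for k in range(2):
        mask = np.outer(comps[:, k], acts[k]) / full
        parts.append(librosa.istft(mask * D))

    # Sort the two components by mean spectral centroid: "dark" then "bright"
    centroids = [librosa.feature.spectral_centroid(y=p, sr=sr).mean()
                 for p in parts]
    dark, bright = (parts[i] for i in np.argsort(centroids))

    # Onset-slice each component and keep fixed 100ms snippets,
    # concatenated into a "final" buffer per component
    n = int(slice_len_s * sr)
    final = {}
    for name, p in (("dark", dark), ("bright", bright)):
        onsets = librosa.onset.onset_detect(y=p, sr=sr, units="samples")
        chunks = [p[o:o + n] for o in onsets if o + n <= len(p)]
        final[name] = np.concatenate(chunks) if chunks else p[:0]
    return final
```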

I was using a motion sensor on my left wrist, which tracked when I changed direction, to (re)create the NMF-y split of material I had been using in the past (i.e. moving forwards played from the "bright" buffer, moving backwards played from the "dark" one, and my absolute position scrubbed through the playback position of the buffer).
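
The mapping itself is simple enough to sketch; something like this toy version (sensor interface and normalised ranges are assumptions on my part):

```python
def map_gesture(position, prev_position, dark_buf, bright_buf):
    """Direction of motion picks the buffer; absolute position scrubs it."""
    moving_forward = position > prev_position
    buf = bright_buf if moving_forward else dark_buf
    # Clamp a normalised (0..1) absolute position to a sample index
    play_pos = int(min(max(position, 0.0), 1.0) * (len(buf) - 1))
    return buf, play_pos
```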

As I said, I got hung up on the @startchan bug from a while back, and then decided against the motion sensor since I have enough going on already, but a process like that could work well in this context too.