Looking inside BufNMFCross

Hi! I’ve been using FluCoMa for a few months and getting some fun results. I am, however, quite new to a lot of what powers it and learning a bit “on the go”, so I apologise if the question is a bit nonsensical.
Basically, how much of a “black box” is BufNMFCross? Is it possible to access some of the intermediate results of the process?
Specifically, is it possible to know which spectral frame is being used at which moment?
So instead of rendering straight into an audio Buffer, could it produce a DataSet of spectral frames and their locations in the output? That would allow me to fine-tune or modify the result in more ways, for example by assigning different spectral frames to different channels.
I use SuperCollider.

Hello and welcome!

Don’t worry - I’m 8 years in and I still think of new ways of using it :slight_smile:

The answer is “pretty much a closed box”, I’m afraid - the heavy lifting is slow even in optimised C++!

The ‘simple’ explanation is on the learn platform and, for the math-savvy, in this academic paper led by our DSP mastermind at the time, @groma.

The good news is that you can probably do something “similar” in FluCoMa itself. There is an example of just-in-time NMF in the examples folder, and with short grains it runs faster than real time, so you can do loads of fun stuff with it. I coded a classifier that way, for instance, but you could also compare your FFT bases in some way (sorry, there is no simple way to do that), and if you want to assign frames yourself you can do something crude there too, as in the sketch below. Hours of glitchy fun ahead!
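
To make the just-in-time idea concrete, here is a minimal non-real-time sketch (not the actual examples-folder patch): it runs FluidBufNMF on one short grain so that the bases and activations end up in buffers you can inspect, compare, or reassign yourself. The keyword arguments follow the SuperCollider package, but treat the exact parameter list, the bundled demo soundfile path and the plotting at the end as assumptions - check the FluidBufNMF help file.

```supercollider
// set up buffers (assumes the server is already booted)
(
~src     = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
~bases   = Buffer.new(s);   // spectral templates, one channel per component
~acts    = Buffer.new(s);   // per-component activations over time
~resynth = Buffer.new(s);   // optional per-component resynthesis
)

// analyse only a short grain: ~100 ms starting one second in
(
FluidBufNMF.processBlocking(s,
	source: ~src,
	startFrame: ~src.sampleRate,                      // one second in
	numFrames: (~src.sampleRate * 0.1).asInteger,     // ~100 ms grain
	resynth: ~resynth,
	bases: ~bases,
	activations: ~acts,
	components: 5,
	action: {
		// from here the intermediate data is yours: plot it, push it into a
		// FluidDataSet, compare bases against another sound's, or route each
		// component's resynthesis to its own output channel
		~bases.plot("bases");
		~acts.plot("activations");
	}
);
)
```

Doing that per grain in a loop is basically the just-in-time trick: because each analysis window is tiny, the decomposition finishes well within a grain’s duration, which is what makes the “assign this frame to that channel” kind of experiment feasible.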

p


Thank you for the answer!
Looks like a lot of fun learning is ahead. At the minute I’m still figuring out how each tool works and getting “Fluid” with it :wink:
