Somewhat inspired by @a.harker's posts on masks, I was wondering about using audio to mask, or punch spectral/temporal holes in, another bit of audio, and how that might be done.
So I'll post what I'm thinking about, and then see whether it's possible, and whether it's possible with NMF-y stuff.
Say I have some broadband audio in a buffer/loop/sample/whatever: a bowed cymbal, or, to make it simpler, let's say pink noise. I'd like to take some live audio input and use it to mask those frequencies out of the noise/buffer audio on a temporal basis. I can't imagine this is even remotely possible (or audible) in a real-time context, so the idea would be to create "subtractive" looping on the noise, where you start removing bits of it based on new playing.
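For what it's worth, here's a rough numpy sketch of the basic idea, outside FluCoMa entirely: STFT both signals, build a per-frame mask from the live input's magnitudes, and multiply it into the loop's frames, so the "holes" follow the playing. The `stft`/`istft` helpers, the `depth` parameter and the global normalisation are all my assumptions, not anything from the toolkit:

```python
import numpy as np

def stft(x, n_fft=1024, hop=256):
    win = np.hanning(n_fft)
    frames = [np.fft.rfft(win * x[i:i + n_fft])
              for i in range(0, len(x) - n_fft, hop)]
    return np.array(frames)

def istft(frames, n_fft=1024, hop=256):
    win = np.hanning(n_fft)
    out = np.zeros(hop * (len(frames) - 1) + n_fft)
    for k, f in enumerate(frames):
        out[k * hop:k * hop + n_fft] += win * np.fft.irfft(f, n_fft)
    return out

def punch_holes(loop, live, depth=1.0, n_fft=1024, hop=256):
    """Attenuate bins of the loop wherever the live input has energy,
    frame by frame, so the holes move in time with the playing."""
    L = stft(loop, n_fft, hop)
    X = stft(live, n_fft, hop)
    n = min(len(L), len(X))
    mag = np.abs(X[:n])
    # mask is 1 where the live input is silent, near 0 where it is loud
    mask = 1.0 - depth * mag / (mag.max() + 1e-9)
    return istft(L[:n] * mask, n_fft, hop)
```

Offline, this is just one pass over two buffers; the real-time question is whether the one-hop latency and the masking artefacts are tolerable.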
I kind of feel like this is something NMF can help with, but I’m not exactly sure what would be what, and what the sequence of events would be (presumably with fluid.nmfmorph~ being involved).
This is exactly what the vocoder example is doing, with a pink~ source. My helper patch does that with adc~, pink~ and noise~.
It sounds good; as you heard the other day, it is a spectral-domain filter with envelope. In my helper I even interpolate the activations for more pleasure, and you can loop any bits.
I mean the opposite of that, unless I’m misunderstanding you.
I don't want to hear the input as synthesized through the other thing; I want to "poke holes" in the noise, so it sounds like neither the input nor the noise.
To use a visual metaphor, I think you’re saying this:
Oh, and I think this is kind of different as well (which makes me think of fluid.nmfmorph~), but I want to retain the temporality of stuff, whereas in the vocoder the noise is static (as in, not changing).
So kind of like a reverse convolver, but that doesn’t sound like bubbly fft bullshit.
Another option: just invert the filters/bases and run the same process. You can take them, flip and normalise them (so the silences are now 1), or, sexier: take them, convert to dB, subtract from silence with a cap (a log inversion with a threshold), then convert back to linear. That should be fun!
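If it helps, here's a numpy sketch of both inversions as I read them; `floor_db` is my assumed cap, and how you'd apply the result back onto a bases buffer (fluid.bufscale~ or otherwise) is left out:

```python
import numpy as np

def invert_basis_linear(basis):
    """Flip and normalise: peaks become notches, silences become 1."""
    b = basis / (basis.max() + 1e-12)
    return 1.0 - b

def invert_basis_db(basis, floor_db=-60.0):
    """Log inversion with a threshold: convert to dB, cap at floor_db,
    mirror so loud bins end up at the floor and quiet bins at 0 dB,
    then convert back to linear."""
    b = basis / (basis.max() + 1e-12)
    db = 20.0 * np.log10(np.maximum(b, 1e-12))
    db = np.maximum(db, floor_db)          # the cap: nothing deeper than floor_db
    return 10.0 ** ((floor_db - db) / 20.0)  # "subtract from silence"
```

The dB version keeps the inversion perceptually sensible: a bin 40 dB down in the basis ends up 20 dB down in the inverted filter (with a -60 dB floor), rather than being slammed to nearly full scale as in the linear flip.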
As in, use fluid.nmffilter~, but fluid.bufscale~ the bases to “invert” them?
Like, doing some interpolation on them?
Do you mean back to vocoder-land, or also using fluid.nmffilter~?
Also, is there a way to do this as an offline process? I'm thinking of a "looper" context where the "noise" would be what's in the buffer, and the real-time audio would process the buffered material. I guess I could constantly play/re-record everything, but it will get a bit faffy to make sure things line up correctly with FFT window offsets and such.
Played around with this some today and I think NMF might not be what I’m after here.
Since the bases are static across any given NMF analysis, it's more like a sidechained (static) filter response than like masking out time- and frequency-variant spectral material.
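To make the "static bases" point concrete, here's a minimal NMF in numpy using the textbook Lee-Seung multiplicative updates (Euclidean cost; not FluCoMa's implementation). `W` is learned once for the whole analysis, so the reconstruction `W @ H` can only gate a fixed set of spectral shapes in time, which is exactly that sidechained-static-filter behaviour:

```python
import numpy as np

def nmf(V, rank=3, iters=300, seed=0):
    """Factor a nonnegative magnitude spectrogram V (bins x frames)
    as V ~ W @ H. W holds the static spectral bases for the whole
    analysis; only the per-frame activations H vary over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-3
    H = rng.random((rank, V.shape[1])) + 1e-3
    for _ in range(iters):
        # multiplicative updates keep both factors nonnegative
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

Whatever happens in `H`, every frame of the reconstruction is a nonnegative mix of the same `rank` spectra, so the time-variance of the original material that doesn't fit those shapes is flattened out.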
It (NMF) does sound good as a way to decompose vanilla bits of audio and then “peel back the layers” in an interesting way.
Yes, but this is where my JIT windowed approach is interesting: you can divide in time with new divisions, faking a continuous approach. I like to seed them with 3 DC bands, which gives some sort of BD-SN-HH starting point, a sort of crazy crossover…