Spectral "Compensation"?

Oh nice.

So one intended implementation of this idea would be to include it in C-C-Combine, where for any given analysis frame there would be an input frame (already analyzed) and its nearest match from the database (then using lookup to get the difference between the two).

So in that case there would be a known frame for both.
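Just to pin down what I mean by "the difference between the two", here's a rough numpy sketch (not Max, and the frame handling is simplified, so names and parameters are just placeholders): take the magnitude spectrum of the incoming frame and of its nearest database match, and the compensation is basically the dB ratio between them.

```python
import numpy as np

def spectral_difference(input_frame, match_frame, n_fft=1024, eps=1e-9):
    """dB difference between the incoming frame and its nearest database
    match, i.e. what the match is 'missing' relative to the input.
    Both arguments are assumed to be time-domain windows of length n_fft."""
    win = np.hanning(n_fft)
    in_mag = np.abs(np.fft.rfft(input_frame * win, n=n_fft))
    match_mag = np.abs(np.fft.rfft(match_frame * win, n=n_fft))
    # positive values = energy the database match lacks relative to the input
    return 20.0 * np.log10((in_mag + eps) / (match_mag + eps))
```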

If I understand you right, would that then entail something along the lines of using irtrimnorm~ (or fluid.bufcompose~) to get the audio from the incoming buffer (using a JIT approach in C-C-Combine) into its own buffer~, and then running irplapprox~ on it to get an IR, which I can then apply to the playback grain?
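In pseudo-code/numpy terms (very hand-wavy, and obviously not what irplapprox~ actually does internally), I'm imagining something like this: turn the magnitude ratio between the two frames into a short linear-phase IR and convolve it with the grain on playback.

```python
import numpy as np

def compensation_ir(input_frame, match_frame, n_fft=1024, eps=1e-9):
    """Turn the frame-to-frame magnitude ratio into a short linear-phase IR
    (a crude stand-in for the irplapprox~ step; the names here are made up)."""
    win = np.hanning(n_fft)
    ratio = (np.abs(np.fft.rfft(input_frame * win)) + eps) / \
            (np.abs(np.fft.rfft(match_frame * win)) + eps)
    ir = np.fft.irfft(ratio, n=n_fft)
    return np.roll(ir, n_fft // 2)  # centre the impulse -> linear phase

def apply_to_grain(grain, ir):
    """Apply the compensation to the playback grain by convolution."""
    return np.convolve(grain, ir)
```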

I meant more in the sense that it may involve buffer~ operations before/after it, which I guess have the same issues as the threading/overdrive stuff from this thread.

That would be quite cool.

I tried whipping something up using biquad~, taking the raw readings and going all wiggle-waggle on noise~ to see if it would work, and I guess I could kind of hear what was happening, but obviously this isn't a great way to go about it.
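For reference, the noise~ test was roughly this shape of thing (a Python/scipy stand-in rather than the actual patch, with made-up readings): sweep a peaking biquad over white noise based on per-frame values.

```python
import numpy as np
from scipy import signal

sr = 44100
noise = np.random.uniform(-1.0, 1.0, sr)           # one second of white noise

# crude stand-in for the biquad~-on-noise~ test: sweep a peak filter's
# centre frequency from some per-frame "reading" (hypothetical values here)
readings_hz = np.linspace(200, 4000, 100)
hop = len(noise) // len(readings_hz)
out = np.zeros_like(noise)
for i, f0 in enumerate(readings_hz):
    b, a = signal.iirpeak(f0 / (sr / 2), Q=5.0)    # biquad coefficients
    seg = slice(i * hop, (i + 1) * hop)
    # filter each block independently (no state carried over, so it clicks,
    # which is part of why this isn't a great way to go about it)
    out[seg] = signal.lfilter(b, a, noise[seg])
```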

Unless it’s crazy complicated, I’d be curious to see it if it’s handy.