Is it possible to create @antiranks? (fluid.nmfmatch~)

Yeah in this specific case the “reality” has been bent a bit to get accurate transient-based dicts.

The original version I built analyzed everything, but based on @weefuzzy’s intuition that was changed, since I don’t actually care whether I’m matching the decay, envelope, etc. of a given sound. In this context I only want it to match via the transient, because I want it to match quickly (and with a tiny FFT of 64), hence training it up on just that part of the sound.
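To make that concrete, here’s a minimal sketch (in Python/numpy rather than Max, and not using FluCoMa itself) of what “training on just the transient” means: slice out only the first few milliseconds after each onset, take a tiny-FFT magnitude spectrogram, and learn one NMF basis per sound from those frames alone. The 10 ms transient length, rank of 1, and all the helper names are my own assumptions for illustration, not what the actual patch does.

```python
import numpy as np

def stft_mag(x, n_fft=64, hop=16):
    """Magnitude spectrogram with a tiny FFT, matching the small window used for fast matching."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (n_bins, n_frames)

def nmf_bases(V, rank=1, n_iter=200, eps=1e-9):
    """Plain multiplicative-update NMF on the magnitudes; returns the learned basis W."""
    n_bins, n_frames = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n_bins, rank)) + eps
    H = rng.random((rank, n_frames)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W / (W.sum(axis=0, keepdims=True) + eps)

# Assumption: "transient" = the first ~10 ms after each detected onset.
sr = 44100
transient_len = int(sr * 0.010)

def train_dict(hits):
    """hits: list of 1-D arrays, each starting at a detected onset; train on transients only."""
    V = np.hstack([stft_mag(h[:transient_len]) for h in hits])
    return nmf_bases(V, rank=1)
```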

In reality nmfmatch~ will be getting the whole sound, but there’s a bit of post-processing that tries to grab the frame that is most closely synced to a parallel onset detection algorithm (or hopefully, in the future, the ability to just trigger an analysis frame manually).

So even though nmfmatch~ will hear the “whole sound”, the post-processing stops caring after the transient.
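A hedged sketch of that frame-picking logic (again in Python with made-up names; the real patch presumably does this with Max timing/gating objects): keep the stream of activations coming out of nmfmatch~, and when the parallel onset detector fires, read off the frame whose timestamp is closest to the onset, ignoring everything after it.

```python
import numpy as np

def pick_match(activation_frames, frame_times, onset_time):
    """
    activation_frames: (n_frames, n_dicts) activations streaming out of the matcher.
    frame_times: time (seconds) of each analysis frame.
    onset_time: time reported by the parallel onset detector.
    Returns the activations of the single frame closest to the onset.
    """
    idx = int(np.argmin(np.abs(np.asarray(frame_times) - onset_time)))
    return activation_frames[idx]

# Toy example: three dictionaries, onset detected at 12 ms.
frames = np.array([[0.1, 0.0, 0.2],
                   [0.9, 0.1, 0.3],   # frame nearest the onset: this one "wins"
                   [0.4, 0.6, 0.5]])
times = [0.000, 0.010, 0.020]
print(pick_match(frames, times, onset_time=0.012))  # -> [0.9 0.1 0.3]
```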

What I’ll do for now is try both versions (training it on the whole audio with mixed hits, as well as on just the transients) and see how it responds.

That sounds interesting, but it’s above my head in terms of maths/DSP!