Fancy that
So yeah, this sample (and many others in the library, and indeed most sounds of things being hit by other things) has these micro-flams at the start.
When I first ran a big batch setup after the last plenary, with suggested settings from @a.harker, it gave me very “wide” transients: it would catch all of these little flams, with gaps of silence between them. It would also often swallow the initial one (which at first I blamed on my imperfect editing of the starts of the files), so I built a thing to remove the zero padding from the start of each file after the batch process was done.
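For reference, the padding-removal thing does roughly this (a minimal numpy sketch, not my actual script; the function name and the amplitude threshold are mine, since the batch output isn't guaranteed to be exactly zero):

```python
import numpy as np

def trim_leading_silence(samples, threshold=1e-4):
    """Drop any leading below-threshold padding from a mono signal.

    `threshold` is a hypothetical amplitude floor, not a FluCoMa parameter.
    """
    above = np.flatnonzero(np.abs(samples) > threshold)
    if above.size == 0:
        return samples[:0]  # whole file is silent
    return samples[above[0]:]
```

Run over each output file after the batch pass, so the extracted transient actually starts at sample zero.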
To my ears, this extraction of transients from multiple “sections” sounds really bad. Without the rest of the material that contextualizes it as a flammy sound, it ends up sounding mushy, like a smeared transient.
Ideally, I’d want to tune the settings for the initial transient it meets, and then have some combination of settings (or a lockout-type thing) that ignores everything else for a while. I’ve run the batch process on variously cropped/faded sections of files and would still get these silent returns, or more often, a silent start with the second “transient” caught instead of the first.
So more concretely, with a file like this, I would want to have the biggest possible initial transient extracted, and then nothing else for the rest of the duration of the (short) file.
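By “lockout” I mean something like this (a toy Python sketch of peak-picking with a refractory period, not FluCoMa’s actual detection; `envelope` stands in for whatever per-frame detection function is used):

```python
def onsets_with_lockout(envelope, threshold, lockout):
    """Naive onset picking: after a frame crosses `threshold`,
    ignore further crossings for `lockout` frames."""
    onsets = []
    last = -lockout  # allow a hit on the very first frame
    for i, value in enumerate(envelope):
        if value > threshold and i - last >= lockout:
            onsets.append(i)
            last = i
    return onsets
```

Setting `lockout` to the length of the file would give exactly the behaviour above: grab the first big transient and nothing else for the rest of the (short) file.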
I’ve not tried running it through HPSS stuff, since my thinking was to create a database of “just transients” that would hopefully layer up nicely with subsequent bits, in a more synthetic way (perhaps the second chunk of playback would have had its transient extracted too). I would think this is an ideal use case for the algorithm.