Pre-processing for (Training for real-time NMF)

I just meant that you would be moving a single synthesis parameter, whereas an acoustic bass drum would have a lot more things changing along with it.

Though honestly I was mainly concerned with snare/hat synthesis, which tends to be based on broadband noise; a source that isn’t noise-based would (presumably) have a much more identifiable spectrum than a hat.

BUT you know what you’re doing more than me! So I look forward to seeing the results of the experiments.

Also curious how you deal with time. Do you use the “look for transients in the training set and then analyze 30ms from there” technique, the “analyze only the extracted transients and @filterupdate 1” technique @groma suggested, or are you just chucking whole audio samples at it (like my initial approach, which @weefuzzy recommended against)?

At the moment, I’m exploring the first one: detect as sample-accurately as possible, then process the next 256 samples. That goes for both training and sorting. A sort of ‘on-demand NMF’, as it was suggested :wink:
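For anyone following along, the “detect, then analyze the next 256 samples” idea can be sketched in a few lines of numpy. This is a toy illustration, not the actual implementation: the amplitude-jump detector below stands in for a proper sample-accurate transient detector, and the function names and threshold are mine.

```python
import numpy as np

def extract_transient_frames(signal, frame_len=256, threshold=0.1):
    """Toy sketch: find an onset, then grab the next `frame_len` samples.
    The onset test is a naive sample-to-sample amplitude jump; a real
    system would use a dedicated transient detector."""
    frames = []
    i = 1
    while i < len(signal) - frame_len:
        if abs(signal[i]) - abs(signal[i - 1]) > threshold:
            frames.append(signal[i:i + frame_len])
            i += frame_len  # skip past the window we just took
        else:
            i += 1
    return frames

def frames_to_nmf_input(frames):
    """Stack magnitude spectra column-wise (bins x frames), as NMF expects."""
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T
```

Each detected hit contributes exactly one 256-sample window, so the resulting matrix has one column per hit, which is what makes the “on-demand” training cheap.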

Check this out: since gen~ is so sparsely documented, I found this behaviour by accident while scavenging the examples…


----------begin_max5_patcher----------
1261.3oc6a80bahCD+Y7mBFd5tabSQfwN95byz6av8dZGOxfLQ8vBFP1M85z
7Y+z+fJgABwF6ISG8Pf3ckzt6OsZ42hS99LGus4Ogp7b+S2Gbcb99LGGgHt.
G0mc71CeJNCVIFlG4v9snxm8lK0UQ+VFRnnVR9AZFhR+VARtrdU3TBLyatq2
trbH0y8ypQtOOQLl.0mYKMlvlaktvc4D5NXrXf9JYEPZ7iXR5lRTLUZkEgqt
yetavp.9snkhOv98FiwWmJ7+IVG.ePRw3Dgymu8KuKXQSPgSElqYT7ISf6kA
5eWh4gSiOKCXCmtnDUgHTHEmS50I8UWb+LeR+X1L9k4ica.8UlOW6ETzSBS3
khHO69QlS4F39wDDwc4qeipAxL1O.uL1GJBKf7lA3qixQcCcfwAcgsfNcuBU
p.LEhw12vYninxJ1ZoYCGOXQglXGsovg4ujKVnUyaDgIRQgMhJQGw0yOpQJr
jgMTlSenTh0OsrNoRkuWRNfEthTHaCeV8BVGpK8umGdua8JQXtzWbaQjV753
gRvT9tv1z37Loy8.CQVOW6BPeBaSyxi+WTh14HVJPAhfI5Ptg5DzN3gL5ltO
6Xpu0YTSkcd1wwKsDmjS3NgwtCWbs4dvEDIxphzCFwHHvhNlLKGigj8nrhEj
Gp1BK4adayzq9vNDkmmYppYdYncTk5BLgzBEo4E8qrDm93.ycaNS49gVaglp
MGHRsaXk.napfGMQaJLKSUSvb4eBRv6gTDEK2BB7aThHPVf9XUbYdVlQ7J0b
rCMIr79XzWwIzGMJQx0vFNtnNIxqYWNAmhpnlxnvzJSImTehI5vV045MTz9h
LVTXN.iGdoeHVu5og7gphZVIESbA5JN0+5rFZS0ySqf5qonipnQhi4fE7qg9
sKg1tLpgu0pRZsBwiTTUYNS.JlUxhOxIEH.ZJ5pzwf.zZQ4f.YQQwMCC0Sop
VnWnWqozcApdeJkPSrh9h2+.Kg6cYOs82XUb+8O7oxOQjhXy8mh3+fIG4OT9
uDkleOeJMpXFAvTfI.2+fOQ1U4nEivaB2SGHom4DiIqu2cyd20BWKus5kxqW
zO162AF7yGIjgI8UKP3Ub8cCNU4GJiqSaUYGtlNHq1F6osMOf7gFe0Uwd6E2
Zds9.Xj9PXu9fRnTxqkc4aSR9gRlQWBIev8WaR9lN40ijOuPBv+FPuuNfjTu
W6OD6dv5yhcuoIrr6sr6sr6sr6sr6sr6sr6sr6mV18WNy54uA5v3WP18fUWN
69fqM69FmDb8eE92D180Ajhc+xAY2GdVr6aLAvxt2xt2xt2xt2xt2xt2xt2x
t29t6md186QUUvTzI7JeMjI6gFYv.zHAghRf.AGuf02E0OMxUmEKx.eA2z6W
cwDu6EgtK5phQxFaFEDs77HZuL5JCQfqJ.EFMZ.pu+Jhlt9vXMT87s3aWQzv
.PlaDFNT6WgSXL+F8K36heC.SvK.XZak+lkBo5fW8g95f+EpqXaE21JtsUba
q31VwsshaaE21Jt8KZqqVwEKnITIYI118TbEG1s5wk3+iRnMn7xDIwL+N4mN
kVl+GwUWVFb8sbX2VN3hrbKjrG3VKE3rsTKjqmf79ovRiBNCl.KsZDFJbBry
xajchtQ1YTaPcmtGdYoFiIa+j.TVSqUGcbSzpStVcwcZGb828V6N2XV9Gy9e
.KOoiwB
-----------end_max5_patcher-----------

Bumping this as I’m on a roll bumping threads!

A quick summary, with more detail in this post: I want to be able to do iZotope-style “spectral de-noise”, where I create an FIR from a noise example file. I’m not exactly sure how to go about doing that with HIRT.
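For context, one plausible way to get from “noise example file” to “FIR” is a frequency-sampling design: average the noise’s magnitude spectrum, turn it into an attenuation curve, and sample an impulse response from that curve. This is a hedged numpy sketch of that general idea, not how HIRT or iZotope actually do it; all names and parameters below are mine.

```python
import numpy as np

def denoise_fir_from_noise(noise, fir_len=128, n_fft=1024, floor_db=-40.0):
    """Sketch: build a de-noising FIR from a noise example by
    frequency-sampling an attenuation curve derived from the noise's
    average magnitude spectrum."""
    # average magnitude spectrum over whole frames of the noise example
    n_frames = len(noise) // n_fft
    frames = noise[:n_frames * n_fft].reshape(n_frames, n_fft)
    noise_mag = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    # attenuate where the noise is loud, capped at floor_db of cut
    floor = 10 ** (floor_db / 20)
    atten = 1.0 / np.maximum(noise_mag / noise_mag.max(), floor)
    atten /= atten.max()  # unity gain at the quietest noise bin
    # frequency-sampling design: IFFT, center for linear phase, window
    h = np.fft.irfft(atten)
    h = np.roll(h, fir_len // 2)[:fir_len] * np.hanning(fir_len)
    return h
```

The resulting taps can then be convolved with the signal to be cleaned. A real design would worry about smoothing the attenuation curve and choosing `fir_len` against the lowest hum frequency.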

I think you will have to contact @a.harker off-list, since he doesn’t reply here and that is his area of expertise. I’m not sure de-noising will help at all, although removing commonalities before classifying does make sense. Another idea would be to run bufhpss on the signal and take only the percussive part through the whole process. That would remove the hum for sure.
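To make the bufhpss suggestion concrete: median-filtering HPSS (in the spirit of Fitzgerald’s method, which is the usual basis for this kind of object) relies on harmonic energy being smooth across time and percussive energy being smooth across frequency, so a steady hum lands in the harmonic part and gets discarded. A minimal numpy sketch, with my own function names and parameters rather than the bufhpss interface:

```python
import numpy as np

def median_filter_1d(x, size, axis):
    """Running median along one axis (window clipped at the edges)."""
    pad = size // 2
    out = np.empty_like(x)
    n = x.shape[axis]
    for i in range(n):
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(max(0, i - pad), min(n, i + pad + 1))
        out_sl = [slice(None)] * x.ndim
        out_sl[axis] = i
        out[tuple(out_sl)] = np.median(x[tuple(sl)], axis=axis)
    return out

def percussive_part(signal, n_fft=512, hop=128, kernel=17):
    """Sketch of median-filtering HPSS: keep only bins where the
    frequency-smoothed (percussive) estimate beats the time-smoothed
    (harmonic) one. Returns the masked STFT (bins x frames)."""
    # STFT (no analysis window, for brevity; a real version would use Hann)
    frames = np.stack([signal[i:i + n_fft]
                       for i in range(0, len(signal) - n_fft, hop)])
    S = np.fft.rfft(frames, axis=1).T              # bins x frames
    mag = np.abs(S)
    harm = median_filter_1d(mag, kernel, axis=1)   # smooth across time
    perc = median_filter_1d(mag, kernel, axis=0)   # smooth across frequency
    return S * (perc >= harm)                      # hard percussive mask
```

A steady sine (the hum) survives the time-median but not the frequency-median, so its bins fail the mask; a click does the opposite and passes through.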
