I have to amend that: I was thinking of the difference between spectrum and mfcc in the noveltyslice object. I’m not sure, but I think melbands is a calculation that comes before mfcc as a prerequisite. I’m super fuzzy on this. If that’s the case, it will be slower. I dunno! Worth testing or deferring to someone who has coded these things by hand @groma @weefuzzy.
I get this error btw:
irtrimnorm~: requested start trim level / end trim level never reached in any given buffer
Okay this was an issue with me being a dummy, but also because the values that are stored and used for number crunching later weren’t initialised.
Yeah, the parametermode init thing doesn’t seem to properly loadbang, and it’s a confusing/misleading error.
But yeah, you got it.
Whoops, I realized I didn’t minphase that IR…
This subpatch does that too now.
Sadly it is even slower…
----------begin_max5_patcher----------
4080.3oc2cs0iaiaE94Y9UnZTf9xjAjTWnTep61sMawljtXSJVTDrXfrMsGs
QVxPRdtzEa9sWdQRl5potZ6cPxDEIQoycd3g7i52t8lEKCegDuP6up8Ysat4
2t8la3mhchaR++2rXm6Kq7ci421h.xygK+0E2ItTB4kD9o2q8u9o8QgqHwwd
Aayt7d2jUOR++ODQVkHdKPL3dvcZlN360sf5Hri3GS66zrcXWBgtGn8KoOgf
C67B7II7WN73ICOjT9rdq4TBk5dCDYkQBhaL408Dw6ewRWJ8k+74THIJkgS4
3aVrwym7DIJ1KLP5UbyB286kN8MRMgIl90P9Cx9t7S4EHNEL+TQjm7xZud9Y
cinxoDpP5PDmNW7hkwhiOlv0jnfCd7mj3j+9sYjDW0D3tiDu2cknwLMX1kOJ
4sMYRWCfA6ev5lB0f4QgMU0u0Ob0WHbAIH6jg6IAdA6iHwjfD2jTZO+xqIab
O3m7vlvfjXu+GmBfTUXcWeSJEV6EY7.m7+lHOW+bFXaj25v.FQTPSvNc1qiZ
Uw4FXAlgeGAt6qowTaDpXogKFSYxCwKciXJpk972.J6hIgg9EuTd67IaRRu7
duffRRwjv8MewHusO1RaWFRu3t1d17qD+vg.wUefZSj7Pr6SEk1It99o9uEe
7u3F3syMgj3ITAHP9EIAtTF8w3UQg99E3WwUdplqrlZiuh7r25jG4uHYiA5s
6sOyHZQtVds2VRbRwyk3tMt3YhSdUHzkN0gko9vOjP1s2mxEEugBQ3jcXkiz
U37sEwqXTukG1rgD8UMun6ot56ezMlnAkuyZB.hrgb+PSgIKBVNlWiw8pI1G
R5JRw+PxzPkHfa7CcSVbWdnvra82u81rCtqmxJZTnX2sjZEV4RHYoE833cgg
IORMokZTywCZRpR6OgIGwBopiBRUTiRUX8RUSiVEqKJ7pJGNbDExsXP5EwEq
ecLjk5hfpl3gXf1fnTucKzB8QO0hyUg61Q6YqV442EF7WRz9RP3yZOR+6tCq
dTKbiVxidwZqbCzVRsf2Qy64IxZsvHsHxN1gxOJeu.xpvCA7mmQSpkum3+DM
56J2SnZzEF1PH7dKcaKLzFaxTNhyBrTUII2YbyJOPCJOqEMnYPlykgdh1RsM
mPXYJBzhvxxpgYN2P7VHR+z1y2kE4c1h3RyV0+UsstdAZ+Y3fBIHxL2vxQVT
ZilffrPH9hOJ65kIgtCRdJLMohRY4oAbBhzBg1W7xy+zan4HNFxSZNURxScz
8lSf8oiBIWMWB0179YiiIlkfURjGsGt0ZzVxhELBBZjcgXpPcmIHRfk4UP5V
bg7WGCaWjQgto.SQr.DnUQpuWbxkfXUZXUY1tPk5fxDXWvtDhlhN6g5mqgW0
poHSTEDFsaLrFQEijhmDqQ3Ezf.ZKPJSnxJAGkQzXB4jPYKS4QyFtYi1av.M
.0RTM4LUlJmHuCdRRoBdpHoyjjLsBwxi5mdL8OJIrJMpGnt0THrlQQUqtytO
Qhnxwup82DhJ2crQQRSTBplvh1ChrvxdJbgsAp3AOhRL2jjnCdmP.vp9iYUy
kIoKUyBiElQdhbujTYynAEmQTx7vz9dw7tXiMxl9Ect.h86lEPftmygQGixm
UOF4p3xenAqIuHMEHihDSPOpIxnFPHLV2PmZOAn+XfTSl04Br.QER8sWBjiy
CBqBTMT.bNuxtd8Rp3vCQqxTTYksSqHmtlDm3EjOsPeVpKG1cpjppqjAKriR
jAqhsSFUj8vOIUjI0lDpfMJI0TIHvTJLrTjLrmRpfGrQMowjRE5JqSLmb5.p
j6JdRoCfxgMblR5H6oq.cXOozgsx1G5SJcfUVdLszgpcqXMotKY8Yof3XJin
CGdjzic9ymw7F58+XcANr1K7i7k8vCumDb3XRFYKPi5I8ka2346uJzOLR5Fj
tC5.ZRyzag3ttS5R4s6yzA0fbLntET94dcnNFYwOhdfoYoHXosCl0PCSCG.h
c6FVHrgI+HajtoN6HPMMEc7cBfNh2DvwFXHNhdJ537gkaJMM0zkBB935YPne
iB2GFkutXtW2oP6NjDtMxcsWZFb4Es3n953h4gasDQuypKzAVhqTEgm7ynNq
sbk56NHOahsqHcWsh9RKnPzQTAKWhXXRkN7ivhyUR1TTQZfMg5bs.xQ21Qbj
IDXUSCWtsPSc.FPL+cRamAFyOx.4f4ODSqBsk3S1Ullw1VVha1F.ssEGQIeD
p56VtvXTGp+8dRf1GcCh09HYm2xP+0EGTRL88shoiK9BQT9hKafVXjvhyF.f
FUeerQ1ya6CdArkPFI2JT2AAsDhoLSYD11PGVm.ScGtbitV745f8eQuN6beM
aLvz1VbTsDcYuNqLyIK1OXwQhm10uS26cSRT0oSdN3+bozzjqzI8Z7.YSJg+
ieSxC7GCX.wLTW0VzPzwjE9W3jafAbSRlEotJwLD1q0czIb4EUCuHU4Sbijd
97w8SefzfQlBFiZti06f2Mu8Y+5OhtycpoSg+bydMnI2q4m+z+b.tKl.rEWK
ArL0gb2EZlkPzo5hExxPRzmHyGg2iANKWo1cWfV5TOL9aBaX43jdDnlWYccv
ZXAg.QJVNHKCtwloCjpuOo212+55nvsjfOwMn+kxEbSrLojslY4Ax6QkJRLs
DohvLv3rqykoej5IMcwkKZytQ1StaDubrdqFfqjQZuE0cT6oq1gFVxWxV2AK
B2o6j18E6Tl5mzW3sj.xStKFgdPp22Yz5zogAsA6tHXP16kE6o4KW8nA4onh
yfyb4K7sGRRBCT0inZJK14xWP5Qv4hz+tBKxfqHB+stdAesGTN03Iue3pGMe
je7ydznA8NJJLMZwweMqzM5piv+gXeu0jn9ZqWhhatilVCmWaZZcnSs5KrwI
eByg78cxqj9tFGoKxKqJMXNXu26lD48xpjH+QhKmEhljPh9p5EHtagIGpc8b
HA9P3ZRbu5knCYtLX25yTZKen2gDAmIC5ObX2R0I35hVVMwkYj7+QuUIcSle
w5X8iLDFmbUl43GWEtmzuTGONqUN5reZbTaWdJrO4trerbGhiUm05IiiNKLO
c75K6zfzporEsloxEPFteJbKa7ySxPhNgRVGaYfghVYnC4EHvHcJcmCV++rq
CSD9Efp5mcehro3p+ehzVceDCf6MmXgvZ2nu7l.1tnva3kDa.9jcYxnpu7aF
4Qkv4Sna9T6ZyeTSP4qgmvyYxJH2jU555KHWdgRUdFnlTKueMc2m..jVW9Sw
iG0k4UVdZNW5Sn8RIVbCeaX3WZeJOacRL.JY+ebZfqbvzUJZ0o5qVK+xioyb
xMs8B9xPVN.TIAvNOEGcK.0q7DyGihso7Twn9jPWeAfTjRq3qbbpX4MQW9H0
52TwWcSdZ2U32Svb8zM57LzWzDOYnU5toMAwj5N9Ew37+Y1FE10Pho9rzw51
PjJWuPfiQ5JHzwQ2FmdD1VwAJZkm0kYdWAYVjSO+uy8EKochsdD.0t3XbwoN
CsFLrjZlm5c+ll4IW3TBsZcoBBHcSjSw93Y8.La50Jfbsch+bt7vjVjWYKJu
pGch.xXa1Jp9tZO5LkhzXtbOq5o1cKhx.IrqE9VZcrOa0wSf+3u0+.QF1yce
ohCwoKpriAbclsvroLQT3yAChKNucVH3h+9qtCiI.HGSwxxCi00QBebDjll5
7wDuMhPFHWv0B4kCGZwxue9Xf2SirGj3NHVv1.XYIJluEfx.bUhkCzwb93ie
hrdP7.NOkcX9XpAo87NW7v+k36G97vTEHSwhqEaCs4iEAXR8RLO0jlxV+vlN
.f8bxnn+Hyn7oZ8sg9CzrLurrlzLiMEqlY4TJlTlX+gn8pOkHUFPiAPj2JxD
X4v8sP1lVH6N.o.bNPGLflzPi7X7rHjyRnkHxJh2SpuBHpyNMuFMGY.fLNQl
VFPLLZlYXuxVyxfFX2IsiUbJJ0ndYN5mrLUmWK2XRv53qwdkOpvl13FMBOI.
FBvhon2vB3jhKQLbVrWSb22yJ++N2jPs2wpByhImFiSHaN36mz+ILlA.XLW8
XkOlQVo1pCaIkQ0hsoITLOaV5XcAXJsngTrlGEjf4OO7NxFaKRzTO20TG4n6
XeMTirD1l990HdIelUE1W6oi424Ek7p1+XanJXLFvLM3UbQO0hfFJKcNpUNm
gNfLy5moAniCPLJFpakQ5xrxB.zgUgiUQkgt03nLReFYOfErutAqeP7kb3A1
Fuk2xCIhs5.Ioe29hCr0Oboqe5Ttlqqq4CRvsGKtD+2oa4Bk+9fvHhReWPJ8
MAo52Cjl+VfT96.BWTTZGdP7Ib4D6rCkLVS+LuTS4FyDh0OIQ8YGbnm6dC8b
BhpuhhMWMwFpjXlhVH7p2rsnIqH1QIkwwcjg5U.8dWXnW6.C8c2WXH67BcXW
WXP63BCd2VPImgx0bueU9tmPwte6pBWVNDG2sDp2gngd7ar29ASPk2EDT1OU
Mweu10C54NdP661A8ZmNn6Si0UmajxMar7ip2hFMZVz46PAJaJq1tRPu1QB5
2tQvP1IBTZWHXX6.AmOabE2sAN24NUuIt8nYhWX2CPYybEAgPuPtP+1o.Nwt
DPOh91scGf9N.ftta.zW6wdgkNkshayX0YrsUkQ2e8VrcBdSiEYcD49WPDkD
p7OIUoHR7GMRSFw8JF4ookg2HSSnKJhp.R4Uy1pVzwqJx3GB7YGBh3GK4UNx
2U0OTUYP8HcerH6xHZuWT+nRPRnUuohd0GbP0ePzNVblDJzUHpnh8VNH2lIt
KyOzoPHfI1vRF030SLcFo3iEoUFQ30SdmcCXYzdeAksgLRtUvJ6zn2tWH2dr
3lbDZqBqnneduQj8nwTkPdsxC4qwdJmwLgjQT8XjNaKJkSgf5whkjPJcuRsa
LokRnfdDkvC.0yCl4pGcyJa2qZglGLZlGPov5BhwFkBHL5kAqXAD5GJkGrkR
EzHOFOJzoliFkPabuQZ7.PY7fKokhnJ9ZvhrAzCOBlb4nDV8oK6jfIsqfOsm
HBtmnAt+HAt2n.t6H.dnKnCEP767E6djmLhNip2A6lTE8tmujkpfLW0pMhZn
ws+qxvAyWEvwmxAiNMJaUEgscEcsiBCKghVUFkmBHmcTHqBHjsdBaNWNAcFE
r8DArSbW5CYY4Tz6PMsnLpVUsnasfj0AaYUEwppuT5TAkpiEAJiF0NrHhlif
jUQZp5EPQEzkNRDXATj1sR7zBFUFIhqDBQUuqFUPE5HQiRn+TY5SIDeNRzWQ
jcptHTEPN1N.GGWF.cMx.UPho5lHpfgsASfxnrTwjSUBYkCDUkCluJhdR0sY
TAwjCm3JiLRE5wWQzP1KjPNXFRBwiWN8fTEMiimuWsKyXUPu3f4pbTJpTUBq
fLwQ38WCBDUv9UQj20KDGNZL07vSJhjvy336OhVvKD7BHiBPkL7qh7ugg5ug
h3ugg1uFQ5W2DrBnywmAOlnJdepJbAazp29629+QKBASE
-----------end_max5_patcher-----------
(what an exciting time to be alive…)
I only code things with @groma’s hands.
Anyway, yes, getting MFCCs involves getting some melbands along the way, so they won’t ever be quicker to compute (given the same number of melbands).
IIRC, inverting truncated MFCCs back to a spectral envelope is a bit sketchy, but kind of doable (inverse DCT, exponentiate, and then somehow convert the mel warping back to linear frequency). I think the stuff we were trying out above was using linear cepstral coefficients rather than MFCCs, which are simpler to turn back into a spectral envelope.
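For what it’s worth, here’s a rough numpy sketch of that inversion (not FluCoMa code; the band/bin counts and the HTK-style mel formula are just assumptions): zero-pad the truncated cepstrum, inverse DCT, exponentiate, then interpolate the mel band centres back onto a linear frequency grid.

```python
# Rough sketch: invert truncated MFCCs back to an approximate spectral envelope,
# assuming the MFCCs were a DCT-II of log mel-band energies.
import numpy as np
from scipy.fft import idct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_to_envelope(mfccs, n_melbands=40, n_bins=257, sr=44100):
    # 1) zero-pad the truncated cepstrum back up to the full band count
    cepstrum = np.zeros(n_melbands)
    cepstrum[:len(mfccs)] = mfccs
    # 2) inverse DCT -> log mel-band energies, then exponentiate
    mel_energies = np.exp(idct(cepstrum, type=2, norm='ortho'))
    # 3) un-warp: interpolate the mel band centres back onto a linear frequency grid
    centres = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_melbands))
    lin_freqs = np.linspace(0.0, sr / 2, n_bins)
    return np.interp(lin_freqs, centres, mel_energies)
```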
it’s interactive, and my post must be at least 20 characters, apparently
How does that compare to fluid.bufmelbands~?
To my ears, the generated IRs/envelopes sound pretty good here. Enough for the intended purpose of nudging the sample playback spectrally a bit, rather than “full blown convolution™”.
I’ll try this properly in the studio tomorrow to see how it feels but the biggest concern for me at the moment is the speed of the thing. It jumps around all over the place in terms of latency, and hovers in the 8-15ms range, which isn’t ideal given it already comes after an 11ms analysis window latency.
Here’s a video of it in context with “real world” samples.
The first bit shows the same sample being filtered by the colored noise bursts and then drums, and then I follow it up with normal querying with varying amounts of compensation (even this early test is suuuper promising in terms of the timbral impact it has on the matching).
Sonically, or algorithmically? The first, you can probably check by re-jigging the patches we were throwing around at the top of the thread last summer…
I took a bit of a peek yesterday, but having a closer look again today and it’s definitely a different kind of thing.
It also strikes me that all the stuff @jamesbradbury and I did yesterday is unidirectional, or rather, only takes into account the spectrum of one source and applies that to the other. In the case of the corpus/analysis stuff, it stands to reason that the sample being selected already matches the spectrum as best as possible, so this is just to give it a bit of an oomph. Plus there’s the fundamental issue that I’m playing back samples (much) longer than my analysis window, so even if I do a correction/inversion that compensates for both spectra, there’s nothing to say it will be accurate (or musically useful) for the rest of the sample.
Now for the C-C-Combine use case, where it’s replacing/mosaicking, a per-grain convolution thing may be more interesting, especially since centroid isn’t an ideal timbral descriptor.
I guess where things left off, it wasn’t possible to do minphase stuff in framelib.land~, though that may be different now that @a.harker has added some filter template things.
I think ultimately this kind of thing (both the small-to-large and small-to-small variants) will benefit from happening in framelib.thelandof~ for the sake of latency and timing (latency for small-to-large, and lack of jitter and tightness for small-to-small), so some of the HIRT-specific questions in the post and patch won’t be relevant then, but some general stuff is still unknown (how to best thread super fast but time-consuming processes, and potentially standardizing/regularizing the input stuff).
I started playing with this a bit earlier when I realized that a choke point in the patch I posted is actually the initial fluid.melbands~ calculation. Regardless of the blocking mode, at the rate that the onsets can come in, poor .melbands~ can’t keep up.
So one option is wrapping all of the analysis (including the descriptors and associated stats) into a poly and kind of round-robin/busyflagging through however many voices make sense. I’d probably err on the side of having too many (available) voices, since too few would, in effect, lock out onsets, which would negate some of the benefits of having super fast/tight onset detection attached to it.
My initial, and still current, instinct is to wrap just the fluid.melbands~ (and whatever IR shit needs to happen after it) into a poly, but that would potentially make syncing things back up more of a pain in the ass than would be saved by not duplicating the descriptor analysis.
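To spell out the round-robin/busyflag idea, here’s a toy sketch (Python rather than a poly, and the voice count is just an assumption):

```python
# Toy sketch of the round-robin / busy-flag idea: cycle through voices and
# skip any that are still busy analysing, so a fresh onset never has to wait.
busy = [False] * 8      # assumed voice count: err on the side of too many
next_voice = 0

def claim_voice():
    """Return the index of a free voice, or None if everything is busy."""
    global next_voice
    for i in range(len(busy)):
        v = (next_voice + i) % len(busy)
        if not busy[v]:
            busy[v] = True
            next_voice = (v + 1) % len(busy)
            return v
    return None         # all voices busy: this onset effectively gets locked out

def release_voice(v):
    busy[v] = False     # called when that voice finishes its analysis
```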
On a hunch, while coding up and trying to optimize the multi-stage analysis from the hybrid stitching thread, I tried to do the fluid.bufmelbands~ part of the patch on just 256 samples, starting that process before the larger 512 analysis window finished, thinking that the 4-7ms it took to create and process the IRs could happen while already waiting for the rest of the descriptor analysis to happen.
Sadly, I was disappointed. I played with a few analysis window and fft settings, and it looks like 512 samples with 256/64 for fft settings is as low as I can go (with this approach) while still producing useful envelopes. Any lower and I would either lose some of the high end resolution, or get weird lumpy bass stuff.
So for now, doing it with HIRT is the way to go until (hopefully!) @a.harker has some magic Framelib-based filter/IR tools.
A shorter FFT size than window size? This shouldn’t be possible.
Ah right, should my @numframes always equal my FFT size? (so @numframes 512 @fftsettings 512 64 in this case)
Oh, right, your @numframes, not your window size – that’s a relief for me, at least.
@numframes can be whatever you want. I think if it’s shorter than your analysis window size, the analysis will be zero padded (which may well not be useful in some situations). I should probably verify that this is true, and that the remainder of the frame isn’t just garbage…
The fftsettings you quote are a window size of 512 and a hop of 64, leaving the FFT size at its default (which is equal to the window size). If you’ve not been experimenting with using a longer FFT than the window, then this may be a useful avenue to explore in addition to what you’ve already tried, e.g. @fftsettings 512 64 1024.
This will zero pad the window and run a longer FFT, which gives you some interpolation in the spectral domain, which in turn might provide a workable trade off between immediacy with short windows and hops, and useful analysis.
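If it helps to picture what that padding does, here’s a quick numpy illustration of one frame (nothing FluCoMa-specific, just the idea of running a longer FFT than the window):

```python
# Quick illustration of a frame analysed with something like @fftsettings 512 64 1024:
# a 512-sample window, zero padded out to a 1024-point FFT. Same time resolution,
# but the spectrum lands on a denser (interpolated) grid of bins.
import numpy as np

frame = np.hanning(512) * np.random.randn(512)   # stand-in analysis frame

spec_512  = np.abs(np.fft.rfft(frame, n=512))    # 257 bins, FFT size = window size
spec_1024 = np.abs(np.fft.rfft(frame, n=1024))   # 513 bins from the same 512 samples

print(len(spec_512), len(spec_1024))             # 257 513
```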
Phew!
I’ve been wrestling with a patch for the “hybrid” stuff which I’ll post as soon as I have it working. It’s just tricky getting the cocktail of @numframes and @fftsettings right for each analysis window while still keeping the latency of that step down, AND there’s the fun job of having to keep manually resizing buffer~s each time I decide on a new @blocking 2-worthy setting.
I think for what I want, the zero padding won’t be helpful as I’m trying to eke out as much time as I can while getting “good enough” results.
To be clear(er), a bigger FFT (rather than window) won’t affect the latency per se, but will add some processing load.
But, you should be aware that resizing buffers on the fly is going to screw up the temporal sleekness you’re trying to engineer. Buffer resizing, because it involves memory allocation, is always deferred by Max. So, at the very least you could end up getting unpredictable results if you try and process from the scheduler before a resize has completed.
Presumably there’s a pretty finite set of possible sizes you need? I would suggest that you have a pool of pre-sized buffers that encompasses this set and switch between these: that way you can stay safely in scheduler-thread-land where you want to be.
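Something like this, conceptually (a pseudo-Python sketch of the pool idea; the sizes here are made up, and the real thing would just be a handful of pre-sized buffer~ objects you switch between):

```python
# Pseudo-Python sketch of the pre-sized pool idea: allocate every size you will
# ever need once, up front, then pick from the pool at run time.
import numpy as np

SIZES = [64, 256, 512, 768]               # assumed finite set of frame counts
pool = {n: np.zeros(n) for n in SIZES}    # allocate everything once, up front

def buffer_for(num_frames):
    # pick the smallest pre-allocated buffer that fits: no allocation at run time,
    # so nothing gets deferred away from the scheduler thread
    for n in SIZES:
        if n >= num_frames:
            return pool[n]
    raise ValueError("no pre-sized buffer large enough")
```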
Sorry, yeah, that’s what I’m after. It’s the processing time I’m chasing down. For the 64 sample window I’m looking at around 0.2ms, the 256 jumps up to 0.8-1.0ms, and 768 is more like 3-4ms (of processing time).
I have noticed some funny business where sometimes I switch to @blocking 1 and things speed up, but I haven’t been able to isolate/narrow down when/why that’s happening.
It’s just tricky as there are so many objects and related buffers that changing all the @source / @features attributes for each fluid.buf..~ object takes some time, as does changing the buffer~ sizes based on the amount of analysis frames that get returned.
Don’t know if this is a viable option at all, but it would be amazing if an object set to @blocking 2, when presented with a buffer~ that is the wrong size or has no size, would just resize the buffer, once. So basically whatever loop throws up the “buffer is wrong size” error would resize the buffer instead (internally in @blocking 1 I guess) and then carry on in @blocking 2.
edit: made this a feature request here to not “cross streams”.
OR if there’s a better/faster way to just get the list of values that a buffer~ will require, rather than checking the attrui for samps (which never seems to line up with the “actual” size in ms) and then checking an info~ for the amount of channels.
I’m afraid not, but thanks for filing the request separately, and I’ll give an actual explanation there.
Perhaps? It seems like the kind of thing that should be abstractable-away in advance, without needing to interrogate buffers directly. How do the samps and ms fail to line up?