Max, poly~ and parallel processing

I had a first quick look yesterday evening. The good news is that there isn’t an obvious memory leak where the heap is just getting bigger and bigger on successive analyses. I haven’t yet tested against an especially large set because I have to use un-optimised builds for this, which makes everything verrry sloooow.

If you’re expecting to see the reported usage go down after an analysis, this is unlikely to happen: released memory isn’t (normally / often) handed straight back to the operating system, but remains on the application’s free list to service future allocations. This is all low-level stuff that we don’t control.
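To make the free-list behaviour concrete, here is a toy sketch in Python. It is purely illustrative and is not how Max’s real allocator works: a miniature allocator that recycles released blocks from its own free list rather than handing them back to a (simulated) operating system, so the footprint the OS reports never shrinks.

```python
# Toy illustration of why "freed" memory doesn't lower the reported footprint.
# Hypothetical allocator, not Max's actual one.

class ToyAllocator:
    def __init__(self):
        self.bytes_from_os = 0   # what the OS (and Activity Monitor) sees
        self.free_list = []      # released blocks, kept around for reuse

    def alloc(self, size):
        for i, block in enumerate(self.free_list):
            if block >= size:            # reuse a released block if possible
                return self.free_list.pop(i)
        self.bytes_from_os += size       # otherwise grow the heap
        return size

    def free(self, block):
        self.free_list.append(block)     # NOT returned to the OS

alloc = ToyAllocator()
b = alloc.alloc(1024)        # first analysis: heap grows
alloc.free(b)                # analysis done: block goes on the free list
alloc.alloc(1024)            # next analysis reuses the same block...
print(alloc.bytes_from_os)   # ...so the reported footprint stays at 1024
```

Successive analyses that allocate and release similar-sized blocks therefore keep the reported usage flat rather than making it fall back.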

However, there is scope to lower the memory footprint for your patch, which might well help. FluidBufMFCC uses quite a lot of memory internally because it needs to maintain a matrix of stuff big enough to cope with its maximum FFT size. Given that your FFT size is fixed, changing your boxes to specify 4096 as a maximum up front should help:
[fluid.bufmfcc~ 4096 @source htsound @features fluid_mfcc @numcoeffs 20 @fftsettings 4096 2048 4096]
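As a back-of-envelope sketch of why capping the maximum matters: the exact internals of FluidBufMFCC aren’t spelled out here, so assume (hypothetically) that its working buffers scale linearly with the maximum FFT size, holding (maxFFT/2 + 1) spectral bins as 64-bit floats per internal frame. The uncapped maximum of 16384 and the frame count are illustrative numbers, not documented values.

```python
# Hypothetical sizing sketch; all constants are illustrative assumptions.

def bins(max_fft):
    # number of spectral bins for a real FFT of this size
    return max_fft // 2 + 1

def frame_bytes(max_fft, frames=64, doubles=8):
    # assumed: 'frames' rows of spectral data kept internally, as doubles
    return bins(max_fft) * frames * doubles

default_max = 16384   # assumed uncapped maximum, purely for illustration
capped_max = 4096     # the maximum this patch actually needs

print(frame_bytes(default_max) // 1024)  # KiB at the assumed default maximum
print(frame_bytes(capped_max) // 1024)   # KiB with the cap: roughly 4x less
```

Under these assumptions the capped version uses about a quarter of the memory per object, which adds up quickly when poly~ instantiates many copies in parallel.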