Continuing to work on the ‘over-night’ analysis engine, and I made an interesting discovery today. As mentioned previously, I’m running several instances of the patch, since all the poly~ and parallel processing did not lead to faster calculations.
My setup for the past few weeks has been to copy the Max application several times and run 4 or 8 Max instances. Each app loads one copy of the patch, and I give each a portion of the large sound folder to analyze.
What I noticed was that memory crept up considerably, and the initially quick calculation would slow down to the point that a single short sound file would take 15–20 seconds to complete. The same file would analyze in less than one second if Max had just been freshly started.
SOLUTION: I packed the entire analysis portion of the patch into an abstraction. With scripting, I toss it after each analyzed subfolder and re-instantiate it before the next folder gets loaded. Memory behaves much better, and calculation time went down from a full night to about 2 hours.
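For anyone wanting to try the same trick, here is a minimal sketch of the delete-and-reinstantiate scripting, done with messages to a [thispatcher] object. The abstraction name `analysis` and the scripting name (varname) `analyzer` are hypothetical placeholders for whatever your own patch uses:

```
// messages sent to a [thispatcher] object in the parent patch:

// after a subfolder has been analyzed, throw the abstraction away
script delete analyzer

// before loading the next folder, create a fresh instance
// (@varname gives the new object the same scripting name again)
script newobject newobj @text "analysis" @varname analyzer @patching_position 30 120
```

Because the new instance gets the same varname, any scripted connections or subsequent `script` messages keep working across the delete/recreate cycle.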
I’m afraid not (unless I’ve misdiagnosed that particular problem).
Probably the only way to account for @tutschku’s issue here is to do runs with his whole patch on data sets of a comparable size, using a build with debug info, so that once performance has started to degrade we can attach a debugger and actually see what’s happening. When I last looked at this, I wasn’t able to reproduce it on a smaller set of data, which is why I think we’ll need to go all in.
Hello, I’m bumping this discussion because I recently experienced something similar to what happens in this thread.
When analyzing a large buffer with [fluid.bufnoveltyslice~], the object seems to take a big chunk of memory and not release it, even when the analysis is done. I made a little test patch; here’s an example of what happens:
I open the testing patch / Max takes 172 MB of memory
I load 337 MB of audio in a buffer / Max takes 860 MB of memory
I analyze the buffer with [fluid.bufnoveltyslice~] / Max takes 3652 MB of memory
OK, so this time, once the audio files are loaded into a buffer and [fluid.bufnoveltyslice~] has done its analysis on it, Max is taking 3554 MB of memory.
If I delete the audio-file and/or slice-point buffer objects, Max is still taking 3554 MB of memory.
If, instead of deleting the buffers, I send a “clear, size 11” message, I get:
Send “clear, size 11” message to audio files buffer: Max takes 2883 MB of memory
and then send “clear, size 11” message to slice points buffer: Max takes 2883 MB of memory
I also tried the reverse order:
Send “clear, size 11” message to slice points buffer: Max takes 3554 MB of memory
and then send “clear, size 11” message to audio files buffer: Max takes 2883 MB of memory
Thanks for the simple patch to look at this with. I’ve had a nose about in Visual Studio to see what’s going on memory-wise. Happy to say that it’s not a leak; however, you will see the memory grow markedly, and quite possibly descend again, although it still holds on to a bunch. This is because the internal buffers used by the fluid.buf~ objects aren’t junked after the first run, but kept alive for subsequent invocations, which is useful if you’re hitting them often.
Medium-term I think I can reduce some of this memory usage, because it does seem excessive.
No, it’s everything, but the OSes might differ in how much memory gets returned from Max to the OS once it’s been freed. In both cases though, there will be a marked net increase.
I just tried something else that feels like good news too (at least regarding what I want to do). I tried loading almost 2 GB of audio instead of 337 MB, just to see how it reacts. After the analysis (and a huge peak of memory), Max is taking 4084 MB (only about 500 MB more) and everything still feels very sharp.
So I guess that somehow, these objects are managing to keep their memory usage reasonable.