Vague Question on Performance Practice

I have well-established protocols for what I will/won’t do with Max during live performance (CPU limits, hot-swapping polys on the fly, etc.).

With FluCoMa, it still feels like a bit of a mystery to me. To economise on code, for example, I recently did some performances where I’d swap out datasets on the fly. They were small enough not to drop audio, but there was certainly some momentary, precarious glitching. Not an issue for what I was doing, but it felt risky. Perhaps hard-coding/pre-loading would be safer?

Maybe there’s already a thread here that addresses this, but I’m looking for some pointers towards best practices for live performance with the more intensive tools in the package (kNN searches, etc.). For now, I just go with extensive prior testing, so maybe that’s all one can do.

I should probably look more into threading à la this… um… thread:
Threading/blocking on vanilla fluid.dataset~ messages
…but I’m curious whether that is the only concern here?

Apologies for the vagueness of the question :cactus::sunny:

It depends on what you’re trying to do, but these days there’s plenty of RAM to go around, so you can always just load up as many parallel processes/datasets as you want and switch between them, rather than keeping the same objects and swapping datasets in and out. Even with really large datasets, I’d imagine that wouldn’t really be much of a problem.
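
Not Max code, but here’s a minimal sketch of that “keep everything in RAM, just switch” idea in Python, where `load_dataset()` and the file names are hypothetical stand-ins for however you actually read your datasets:

```python
# Sketch of the "pre-load in parallel, switch later" approach. The point:
# all loading happens before the set starts, and a mid-performance change
# is only a reference switch, never disk access.
import json

def load_dataset(path):
    with open(path) as f:          # stand-in for a dataset's 'read' message
        return json.load(f)

# done once, before the performance
DATASETS = {
    "corpus_a": load_dataset("corpus_a.json"),
    "corpus_b": load_dataset("corpus_b.json"),
}

live = DATASETS["corpus_a"]        # whatever queries currently point at

def switch_to(name):
    """Performance-time switch: just repoint, nothing is (re)loaded."""
    global live
    live = DATASETS[name]
```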

That being said, threading can be quite mysterious. My general rule of thumb is @blocking 2 for small/fast tasks, and @blocking 1 for everything else (where applicable).

(For my actual patches/abstractions, I do a funky loop thing where the first pass of everything is @blocking 1 and spawns the buffers, then switches to @blocking 2 afterwards for speed/latency purposes.)
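
Outside of Max syntax, that two-phase trick looks roughly like this (a hedged sketch, not Max code; `process_pass()` and its `blocking` argument are placeholders for the real objects and their @blocking attribute):

```python
# First pass runs in the slower/safer mode so buffers get allocated;
# every later pass runs in the low-latency mode.

def process_pass(chunk, blocking):
    ...                                   # placeholder for a buffer process

first_pass_done = False

def run(chunk):
    global first_pass_done
    if not first_pass_done:
        process_pass(chunk, blocking=1)   # allocate buffers; a hiccup is OK here
        first_pass_done = True
    else:
        process_pass(chunk, blocking=2)   # steady state: favour speed/latency
```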

Thanks, Rodrigo! That’s helpful. I’ll switch to pre-loading and explore threading a bit more…

Just to shade things in a bit more: any of those operations that involve disk access or model replacement (read, load, write, dump) are always deferred internally, IIRC. So, in principle (assuming your machine isn’t being thrashed), the only thing that should burp as a direct consequence is the GUI.

However, if you have audio or other high-priority-thread stuff using the models that are being swapped, then you probably will get a burp, and possibly run afoul of the current total lack of thread safety. In those cases, I’d suggest having two instances of any object you need for dynamic loading: keep one ‘live’ for querying, load in the background on the other, and swap over when the load is done.
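
As a loose sketch of that two-instance pattern (plain Python threading rather than Max; `load_model()` and `run_query()` are hypothetical stand-ins for the real read/query messages):

```python
# Two-slot ("A/B") model swap: queries always hit the slot marked live,
# loading happens in the other slot on a worker thread, and the live marker
# flips only once the load has finished, so a query never touches an object
# that is mid-load.
import threading

slots = {"a": None, "b": None}
live_key = "a"

def load_model(path):
    ...                                        # stand-in for 'read'/'load'

def run_query(model, point):
    ...                                        # stand-in for e.g. a kNN search

def query(point):
    return run_query(slots[live_key], point)   # only ever the loaded instance

def swap_in_background(path):
    def worker():
        global live_key
        spare = "b" if live_key == "a" else "a"
        slots[spare] = load_model(path)        # slow part stays off the query path
        live_key = spare                       # flip once loading is complete
    threading.Thread(target=worker, daemon=True).start()
```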

Thank you, that makes sense. I’m probably also running a few processes that shouldn’t be run in real-time. Need to sit with this and do a thorough audit.