Fluid.list2buf thread funny business?

So I'm doing some regression stuff and I keep getting these rare/intermittent errors from fluid.list2buf, which outputs terse and often malformed error messages:

Screenshot 2024-02-23 at 9.03.55 PM

Screenshot 2024-02-23 at 9.07.08 PM

(it’s worth mentioning that the buffer being used for this error message is 4458tempinputbuffer and not whatever Buffer,7 is)

Screenshot 2024-02-23 at 8.48.39 PM

The third one shows the messages going to fluid.list2buf when the errors happened. As far as I can tell there’s not really any pattern or reason (and the empty error there doesn’t really help).


After poking around a bit, it seems that having multiple threads coming into fluid.list2buf freaks it out. For context, I have it set to @autosize 0.

I (occasionally) have a line 1. upstream, which, when paired with controller input, means that different threads are knocking on fluid.list2buf's door.

Here’s the output of getthread when errors come up:
Screenshot 2024-02-23 at 9.18.35 PM

Interestingly/annoyingly, it doesn’t happen all the time.


I managed to make a patch that produces the error intermittently:


This produced this in my console after running for like 30s:

Screenshot 2024-02-23 at 9.31.27 PM

(p.s. it’s impossible to copy/paste the error as it’s empty)


Is there some internal thread stuff that is not safe (sometimes)?

Off the top of my head, messages coming from the high-priority thread that involve buffer access probably have to be deferred internally, which certainly doesn’t mean that hitting an instance from different threads won’t produce funniness. Garbage error messages are worrying because they suggest that stuff is getting overwritten that really shouldn’t be. The blank error message will be something different, I think, but should be zapped.

Can you put the last bit of this (the patch to reproduce, however intermittently, and a description of the error-message garbage) in the flucoma-max issue tracker?


I don’t really remember having this (kind of) error elsewhere with threading stuff, which is why it stood out so much here.

I’ll post it to the git issue thing tomorrow.

I have to say that this patch’s threading management gave me spasms… but hey, we can expect less-than-super FluCoMa coders to create such conflicting threading conditions, I reckon :)


Hehe, to be fair, the real patch doesn’t do this, but it was a compact way to get it to freak out consistently enough.

Ok, I’ve found the cause of the malformed error messages. Those familiar with the Max API can gather round and have a good chortle: I was sending a t_symbol rather than a string as a parameter to object_error. I realise this has only niche entertainment value.

So, now at least I get to see what the actual reported error is in @rodrigo.constanzo's patch, which is the rather unhelpful ‘Buffer test not found’. What I presume is happening is that, when the object is hit from two threads, it periodically can’t get the pointer to the buffer because Max has locked it whilst servicing the request from the other thread. I don’t know if we can detect this condition separately from simply being unable to get the object and thus concluding that it doesn’t exist.

Anyway, I’ll put a PR in for the message fix and we can worry about the quality of the message separately.


Can confirm that the garbled messages are fixed in 1.0.7, but it still has a hard time finding over-thread’d buffer~s.

Yes. Until someone can point me to a better diagnostic, I don’t know a way to distinguish between ‘this buffer doesn’t exist’ and ‘Max has locked this buffer’. Presumably what’s happening is that a call is getting deferred because it implies a buffer resize, and you’re talking to the object from the high priority thread.

So, to solve your actual underlying problem (besides the imprecise message), we could try to figure out a way to avoid ever having to re-allocate, like presetting an allocated buffer size. This would shovel some housekeeping responsibility back onto the user, but would at least be more predictable for use cases like yours.

In this case I’m already doing an @autosize 0 thing, so having to do an @autosizebutthistimeimeanit 0 @buffersize 2 wouldn’t be that much more of an issue (if I’m understanding you correctly).

The alternative in this use case would be to manually defer things above, which I’d like to avoid as it would (potentially) introduce jitter elsewhere.

Yeah. I should look over the code again, but the ideal is to try and make sure (if possible) that we can avoid any deferrals for your use case. It might be impossible (I don’t remember), insofar as Max itself will always defer certain buffer operations.