Instacrash with fluid.bufcompose~

Putting this in the secret forum cuz it might have to do with fluid.dataset~ instead, or as well.

I was working on a patch for this thread and got an instacrash when doing some fluid.bufcompose~-ing.

Attached is the crash report, but here’s the zesty bit (I think).

Thread 23 Crashed:
0 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a47478 fluid::client::Result fluid::client::BufComposeClient::process(fluid::client::FluidContext&) + 1000
1 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a46b70 fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> >::ThreadedTask::process(std::__1::promise<fluid::client::Result>) + 64
2 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a4699d fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> >::ThreadedTask::ThreadedTask(std::__1::shared_ptr<fluid::client::ClientWrapper<fluid::client::BufComposeClient> >, fluid::client::ParameterSet<fluid::client::ParameterDescriptorSet<std::__1::integer_sequence<unsigned long, 0ul, 0ul, 0ul, 0ul, 0ul, 0ul, 0ul, 0ul, 0ul, 0ul>, std::__1::tuple<std::__1::tuple<fluid::client::InputBufferT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<fluid::client::impl::MinImpl >, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<fluid::client::impl::MinImpl >, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::FloatT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::BufferT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::LongT, std::__1::tuple<>, fluid::client::Fixed >, std::__1::tuple<fluid::client::FloatT, std::__1::tuple<>, fluid::client::Fixed > > > const>&, bool) + 365
3 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a4d297 fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> >::process() + 231
4 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a4d04f fluid::client::impl::NonRealTime<fluid::client::FluidMaxWrapper<fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> > > >::process() + 95
5 org.flucoma.${PRODUCT_NAME:rfc1034identifier} 0x0000000124a4c1f6 fluid::client::impl::NonRealTime<fluid::client::FluidMaxWrapper<fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> > > >::deferProcess(fluid::client::FluidMaxWrapper<fluid::client::NRTThreadingAdaptor<fluid::client::ClientWrapper<fluid::client::BufComposeClient> > >*) + 486

I’m also getting a bunch of errors that I don’t think should be happening either.

fluid.bufcompose~: Source Buffer Not Found or Invalid
fluid.bufcompose~: Zero length segment requested

Actually looking at the relevant part of the patch:

I wonder if there’s something going on with how fluid.dataset~ “writes” to a buffer when it gets a getpoint message.

From the looks of this section of the patch, I should never get a “Buffer Not Found” error (or a crash).

I don’t know if it works this way at all, but how does fluid.dataset~ handle buffer writing? As in, does it do it in one of the @blocking modes? Is it possible that the @blocking 2 from the fluid.bufcompose~ further downstream is accessing the buffer at the same time that fluid.dataset~ is (if that’s happening in a lower-priority thread or whatever)?

(If that’s the case, I’ll make a separate feature request for threading options for fluid.dataset~, as getting data out in a timely manner is pretty important.)
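To illustrate the kind of collision I’m imagining, here’s a hypothetical Python sketch (nothing to do with FluCoMa’s actual internals): one thread plays the role of fluid.dataset~ writing a point into a shared buffer, another plays the downstream @blocking 2 reader. If access isn’t serialized, a reader could catch a half-written point; with a lock, every read is internally consistent.

```python
import threading

# Hypothetical stand-in for a shared buffer~ that one thread writes
# (like fluid.dataset~ answering a getpoint) while another reads
# (like a downstream fluid.bufcompose~ running @blocking 2).
class SharedBuffer:
    def __init__(self, size):
        self.data = [0.0] * size
        self.lock = threading.Lock()

    def write_point(self, values):
        # Without the lock, a reader could observe a half-written point.
        with self.lock:
            self.data = list(values)

    def read_point(self):
        with self.lock:
            return list(self.data)

buf = SharedBuffer(4)

def writer():
    for i in range(1000):
        buf.write_point([float(i)] * 4)

def reader(results):
    for _ in range(1000):
        point = buf.read_point()
        # With locking, all four values always come from the same write.
        results.append(len(set(point)) == 1)

results = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(all(results))  # True: no torn reads when access is serialized
```

Again, purely a mental model of the race I’m describing, not a claim about how the objects are actually implemented.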

Full crash report: (42.1 KB)

I can get the patch to crash fairly reliably now as well (with what’s shown on screen). I can post the code, but there are a lot of files/data bits surrounding it.

Here’s another crash report. (34.8 KB)

In working with some more objects, it seems that loads of the TB2 objects read/write from buffer~s directly.

Don’t know if this is something that is planned down the road, but will there be @blocking modes for these?

That’s the dream, but it’s not going to be immediate, because it’s complex. There are some functions that will be poor candidates for immediate mode (most of the fitting ones, e.g.), and others where it makes sense.

However, I wouldn’t be so quick to diagnose this as a threading issue. The actual exception in your crash report was a divide by zero (which is surprising), so it could be that a buffer~ is caught in some intermediate state. But, IIRC, everything in that patch will be on the scheduler thread anyway: I think the messages all run in the calling thread at the moment. Anyway, rest assured, it’s on the list (your crash).
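As a toy illustration of that failure mode (a hypothetical Python sketch, not the actual FluCoMa code): anything that divides by a segment length will blow up if a resize zeroes that length at just the wrong moment, unless it’s guarded.

```python
def segment_mean(samples):
    # Hypothetical example: if another thread resizes the buffer to
    # zero frames while we're mid-computation, len(samples) becomes 0
    # and the division faults, much like the divide-by-zero in the
    # crash report. Guarding turns the crash into a reportable error.
    if len(samples) == 0:
        raise ValueError("Zero length segment requested")
    return sum(samples) / len(samples)

print(segment_mean([1.0, 2.0, 3.0]))  # 2.0
```

The guard doesn’t fix the underlying race, of course; it just fails loudly instead of crashing the host.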

Now that I’m doing it more, the transforms will definitely be handy, but it seems like my workflow(s) are all about this getpoint kind of thing, in some cases running parallel(ish) with @blocking 2 elsewhere. So just vanilla getting buffer~s in and out would be super champ.

The way things are at the moment suits your purposes then, because everything runs in the thread it gets called from, like immediate mode. What’s probably less ideal is that buffers will get resized in the scheduler thread at present.

I’ll just prebake my buffer sizes to hopefully avoid shenanigans.

No crashes as of yet, but getting loads of Wrong Point Size and Invalid buffer messages from fluid.pca~ now. (edit, and occasionally fluid.normalize~)

I’m sending the same message every 150ms (calculating stats for this other thread), and every few seconds it throws up one of these errors.

All there is is a series of transformpoint messages from fluid.standardize~ -> fluid.pca~ -> fluid.normalize~.

defer appears to fix it. Obviously this is clunky and problematic, but hopefully that narrows things down some.
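For what it’s worth, my rough mental model of what defer buys here (a hypothetical Python analogy, not how Max actually implements it): every call gets pushed onto one queue serviced by a single thread, so calls can’t overlap, whatever thread they originally came from.

```python
import queue
import threading

# Rough analogy for deferring: instead of two threads poking an object
# concurrently, every call is pushed onto one FIFO queue and executed
# by a single worker thread, so calls are serialized.
tasks = queue.Queue()
log = []

def worker():
    while True:
        fn = tasks.get()
        if fn is None:  # sentinel: shut the worker down
            break
        fn()  # runs strictly one at a time, in arrival order

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    tasks.put(lambda i=i: log.append(i))
tasks.put(None)
t.join()
print(log)  # [0, 1, 2, 3, 4]: calls arrive serialized, in order
```

Which would explain why defer masks the errors even though it’s clunky for timing-sensitive stuff.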

I think it may be what @weefuzzy said the other day, that these objects execute things in the thread they’re called from. So perhaps some funny threading business is going on here.

I think @a.harker had an intuition about how to poke at it - @weefuzzy is aware something is piling up in the background sometimes…

My gut tells me all of these issues are related.

My gut is not smart, however.

I’m not sure that what is being discussed here is clearly related to anything I have pointed to exactly.

I can ping you in Slack on that thread, but I’m sure Rod will have done this by now. This was when you were talking about attaching the measurement tool and access to buffer resizing under the hood, but the issues might be unrelated. @rodrigo.constanzo reproducible code is quite essential, so when you have the time that would be ace.

I do not think these are clearly related enough to consider them as one thing.