I asked this question in this other thread, but figured I’d move it into its own thread.
Don’t know if this is a viable option at all, but it would be amazing if an object set to @blocking 2, when presented with a buffer~ that is the wrong size (or has no size), resized the buffer once. So basically, whatever loop throws up the “buffer is wrong size” error would just resize the buffer instead (internally in @blocking 1, I guess) and then carry on in @blocking 2.
I find that configuring a buffer for @blocking 2 takes more time than it seems like it should, every time. Don’t know if this is a catch-22 type problem, but it would make it much easier to use as a user.
I’m afraid it’s not possible. Max will always defer buffer resizes and the whole schtick of @blocking 2 is that it stays on the thread that it was called from, which could well be the scheduler if overdrive is on. As such, it becomes unsafe to call resize on a buffer (with potentially much more exciting consequences than you’d see in Max land, because we could start trying to write into memory that’s already been freed, or belongs to something else etc).
Hmm. I guess I meant more like an automatic version of setting an object to @blocking 1 for a moment, where the buffer gets resized, and then switching it back to @blocking 2 so it can carry on with that size. Obviously @blocking 2 can’t act like it can write to whatever memory it likes, but it was more wishful thinking that the error loop that runs when the buffer~ isn’t sized correctly could just internally jump threads to resize it before jumping back.
I think this would produce more undesirable side effects than solved problems if baked into the external. It can be done in Max land if you really need to have conditional resizing.
Bear in mind the buffer~ only needs to be big enough; it shouldn’t need to be the exact size. So one approach could be to allocate as much as you’d ever need, and then keep track of the extent of each analysis as it happens (so you’d be using @numframes et al., but needing to make sure you keep a record of that for when processing is done).
Yeah. Just feels real hacky to do it in Max…now what happens in Vegas (C++), stays in Vegas (C++), so that wouldn’t feel as dirty.
That’s good to know about the size thing. I am seeing some funky stuff (in the other thread) where it appears duplicate values are being written (or something else). Making buffers extra long is easy enough, though the channels dimension can get messier.
Seems like the principle should be the same for channels, unless I’m missing something. [buffer~ @samps x y] lets you declare both length and breadth in one go, so if you pre-declare something chunky and then have at it with @numframes and @numchans things ought to work satisfactorily (or be made to!)
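As a generic illustration of the over-allocate-and-track idea from the posts above, here's a sketch in plain Python standing in for the Max-side bookkeeping (the sizes and the helper name are assumptions for the example, not anything from the library):

```python
# Hypothetical sketch: pre-allocate a buffer big enough for the worst case,
# then record how much of it each analysis actually used -- analogous to
# declaring [buffer~ @samps 88200 16] up front and limiting each pass
# with @numframes / @numchans.

MAX_FRAMES = 88200   # worst-case length you'd ever need (assumption)
MAX_CHANS = 16       # worst-case channel count (assumption)

# Allocate once, never resized afterwards.
buf = [[0.0] * MAX_CHANS for _ in range(MAX_FRAMES)]

extents = []  # per-analysis record of (frames, chans) actually written

def run_analysis(frames: int, chans: int) -> None:
    """Stand-in for one analysis pass constrained to a sub-region."""
    assert frames <= MAX_FRAMES and chans <= MAX_CHANS, "exceeded pre-allocation"
    extents.append((frames, chans))  # remember the valid region for later

run_analysis(1024, 2)    # e.g. pitch + confidence
run_analysis(512, 12)    # e.g. 12 MFCCs
```

The point being: the buffer never changes size, so nothing needs to resize it from the wrong thread; only the record of the valid extent changes.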
Well, tbf, the channel count post-stats shouldn’t be different from the dimensionality of the feature(s) you were interested in to begin with, so if you’re only ever doing pitch + confidence then you only need 2 channels, or if 12 MFCCs (say) then 12 channels, and so on.
Fewer than you’d think, perhaps! The principle I’m reluctant to violate here is that of Least Surprise, i.e. that code (library code in particular) shouldn’t do surprising things. It feels to me like a mode that makes promises about how it’s going to run vis-à-vis the calling thread would be surprising if it sometimes didn’t do that.
I guess to me it seems more like the kind of thing where you correct weird FFT sizes. Like if someone asks for an FFT size of 1025, it would, perhaps, be surprising to them that it was instead set to 1024, but the alternative is that it doesn’t work (which is also what happens with @blocking 2 on a non-sized buffer).
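For comparison, the kind of parameter constraint being described could be sketched like this (a generic illustration of snapping to the nearest power of two at or below the request, not the library's actual code):

```python
def constrain_fft_size(requested: int) -> int:
    """Largest power of two that does not exceed the request,
    e.g. 1025 -> 1024. Hypothetical sketch, not FluCoMa's code."""
    size = 1
    while size * 2 <= requested:
        size *= 2
    return size

constrain_fft_size(1025)  # -> 1024
constrain_fft_size(1024)  # -> 1024
```

With @warnings 1 enabled, the thread below suggests you'd be told whenever a constraint like this kicks in.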
So in spirit I’m with you, but in practice those examples seem the same to me.
Yeah; for me constraining parameter values doesn’t seem like a violation of contract, plus with @warnings 1, you’ll be told whenever it happens. However, the entire reason for @blocking 2 is that the thread it runs on is unconditionally the one it was called from. If you’re relying on the potential speed gain of that, and then a bug in buffer size planning means that, essentially silently, that performance advantage goes away, I’d find it surprising, and perhaps even very annoying. So, in this instance, I’m going to insist that the dirt stays in the patch (but, as covered, I’m not sure it’s needed anyway!)