Max - Controversial automatic temporary buffer mode

I’m aware this is unlikely to be accepted, but it would be possible to have a more data flow like usage in Max without a new data structure - this might work as follows:

  • input (or main input) can be sent by message (not attribute) for immediate processing
  • if main outputs are not specified, the object hosts a uniquely named buffer and outputs its name when done.

I’m aware that this doesn’t fit perfectly with what is there right now, and that having both might end up nastier than just keeping what is there now.


awesome idea. it just has an internal default output buffer.

i also like the idea of me sending it an array to make into a point and the class method deals with turning it into a buffer for me. so easy. so much pain diminished.

I’m quite keen on this, not least because the same thing occurred to me this weekend – it’s similar to the paradigm used by ears.

My suggested approach would be to use a uid as you suggest where the main output isn’t set. I’d be tempted to start prefacing this with buffer, analogous to dictionary or jit_matrix. Then this selector in an object would set the source attribute and trigger processing. bang would continue to work as it is.

It comes apart a bit for multiple buffer i/o, so needs a bit more thought, but am planning to pitch this to the others, IAC.

I think it’s harder in SC, tbh, because there’s no way for plugins to make buffers. We’d want to think hard about what would ‘feel’ more right than what we currently have.

But the class can make a buffer on init.

Great! Yes - dictionaries/matrices would be the main thing to look at for interface.

You might want to add “flush/clear” calls to empty/free inner buffers, but that’s a next step. The key thing for me is being able to input the main buffer this way - so the question is whether there is anything that requires multiple ins to do anything (or has no primary input).

For outputs it’s a bit different but your output would need to select the buffers used, so you could send that tagged in several forms, or have an output per buffer.

Ah yes - us non-SC heads must always be reminded that the language is a thing, and a UGen relies on both parts.

In case it isn’t something known there’s something in the Max API for generating these easily. Happy to contribute to the wrapper side of things if helpful.

Yes, I assumed there was, because of the similarity between the default names you see elsewhere.

There’s quite a lot of spring cleaning that needs to be done across all the glue code before too long, which I’d prefer to get out of the way before bolting more stuff on, as it’s all getting a bit tangled. G and I are due a conversation about how to approach some of this (including trying to get shot of the two macros-of-shame)

Indeed. What makes me wary is doing things that surprise SC users (and, of course, I have little notion of what those things might be), so have tended to shy away from allocating capped resources like buffers, because I don’t see it happen in other places. My SOP is to bounce things off Gerard, and then you and Ted and see how things pan out :smiley:

My general feeling has been that the impedances in SC were slightly different to Max, and that creating buffers wasn’t The Worst Thing. Totally prepared to review this though. In both Max and SC things are different between the Buf* processing classes (which are pipeline-y), and the data classes with xPoint messages (which aren’t pipeline-y in the same way). And of course, in SC, there are now more server-bound ways of doing most things.

From a long-term perspective, of course; but from the perspective of what is in use for Feb, I’d personally advocate for external-facing changes first. I’d also possibly argue that it’s best to change big things together, because that’s when things break, but if there are any plans to get this in for Feb I’d say earlier is better than perfect (especially on the code front). As you will know, I spend a lot of time thinking about what the code looks like, and it definitely affects delivery times.

Totally. We’ve already embarked on some of this, with exactly these considerations in mind. The problem for me is that the wrapper code for Max and SC is becoming very hard to maintain because of the amount of things that have been glued on ad hoc since we started pedalling hard on TB2 last summer.

It may be that I can skate by with just some good ol’ fashioned putting-things-in-different-files to reduce the noise. But I need to sit and look at it for a bit. I’m certainly wary of doing things that are going to mess with hard-found stability, but also conscious that some things that people hanker for (e.g. thread-aware messaging) would involve messing with some of the more problematic bits.

Well, a class will certainly create its own buffer. At least I do this all the time. After all, there are 1024 available by default and you can increase that number if you like. Busses too. So, something like KDTree creating its own inbus, outbus, inbuf, and outbuf would be fine by me.


Let’s just pretend I didn’t make this thread:

And move the relevant discussion from there over to here.


Another massive perk of having something like this is the ability to have multiple processes cohabiting a single patch without having to rename or make everything unique.

For all the testing I’m doing in the thread about time travel I’m having to manage dozens of patches all with slightly different buffer~ sizes or processing chains because if I chuck them all in the same patch, the buffer~s would all be named the same and potentially break things. I could go through and give each version of this a unique name, and make sure all the from/to attributes for each object points to those, but that’s incredibly faffy, and a process I’d have to do for each variation of thing I wanted to test.

So if things followed the ears.stuff~ model, you could have processing chains that would all have unique buffer~s by default, and it wouldn’t require so much faff for every variation in testing.

Worth noting here (for myself as much as anything) that as nice as this idea is, it wouldn’t work completely simply with your preferred MO of running everything from the scheduler, because it would involve making new buffer~s, which has to happen on the main thread.

For fast/real-time stuff, figuring out the buffer~s and sizing everything up is no problem (well, it’s not great, but it’s no big deal). At the moment I’m in testing/comparing mode, which is where this really gets faffy: working with an indeterminate number of buffer~s and steps and transformations etc…

That’s where I’m feeling the buffer~ burn the most at the moment.
