So based on @danieleghisi's presentation at the Friday geek-out session, it struck me that the way `ears.stuff~` handles `buffer~`-based operations is suuper elegant. Even seeing him prototype an idea on the screen share took no time… and, more importantly, no manual creation of `buffer~`s.
This also relates a bit to what @a.harker brought up as a general interface-y thing.
But I guess the general idea is that, unless otherwise specified, every object creates and manages its own internal buffer (with random names à la Jitter's `u2535233523`) and then passes that reference out of its outlet. So if you want to do a series of processes, you just create a serial chain of objects, send a starting message/buffer in at the top, and at the bottom you get back a randomly generated `buffer~` reference.
You can obviously still specify a `buffer~` if you want or need one, as well as specify whether you want to do things 'in place', which @jamesbradbury brought up in this thread.
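Just to make the mental model concrete, here's a minimal sketch in Python (not actual ears or Max code; the `Slice`/`Fade`/`Reorder` stages and the `u`-style naming are hypothetical stand-ins): each process owns an internal, uniquely named buffer unless you hand it an explicit name or ask for in-place processing, and all it passes downstream is the buffer reference.

```python
import uuid

class BufferProcess:
    """One link in a serial processing chain that manages its own output buffer."""

    def __init__(self, target=None, in_place=False):
        # If no explicit buffer name is given, invent a unique one
        # (hypothetical "u"-style naming, like Jitter's auto-named matrices).
        self.target = target or f"u{uuid.uuid4().hex[:9]}"
        self.in_place = in_place

    def __call__(self, source):
        # Write into the source buffer if 'in place', otherwise into our own buffer.
        dest = source if self.in_place else self.target
        self.render(source, dest)
        return dest  # downstream objects only ever see a buffer reference

    def render(self, source, dest):
        raise NotImplementedError

class Slice(BufferProcess):
    def render(self, source, dest):
        print(f"slice {source} -> {dest}")

class Fade(BufferProcess):
    def render(self, source, dest):
        print(f"fade {source} -> {dest}")

class Reorder(BufferProcess):
    def render(self, source, dest):
        print(f"reorder {source} -> {dest}")

# Serial chain: drop a buffer name in at the top, get an auto-named
# buffer reference out at the bottom, with no manual buffer management.
result = Reorder()(Fade()(Slice()("mysample")))
print(result)  # e.g. "u3f2a91c04"
```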
For my own silly shenanigans I'd still have to create and manage `buffer~`s in order to `@blocking 2` everything, but for prototyping and/or offline processes, the workflow is so much nicer.
Really, seeing @danieleghisi whip up a patch that slices a `buffer~`, applies a fade, and reorders the segments (3-4 objects, maybe 10 seconds of patching, with not a `buffer~` in sight) looked so effortless! The equivalent patching in the `fluid.verse~` (presuming you could do fades at all) would require figuring out how many `buffer~`s you needed, then naming them all and remembering the names (as well as making sure you haven't already used a name elsewhere), etc…
So this is more of an interface discussion/prompt than a feature request, but there are some feature-request undertones to it!