What do you think of the possibility of normalizing with Fluid.BufCompose~? Bad idea?
Normalizing between -x and x, for one or all channels, by simply finding the maximum sample value.
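For illustration only (this is not FluCoMa or Max code), here is a minimal numpy sketch of what that kind of peak normalisation could look like, with the target range and per-channel behaviour as parameters; the function name and signature are made up for this example:

```python
# Minimal sketch of peak normalisation to [-x, x], either per channel or
# across all channels. Assumes a (frames, channels) numpy array; illustrative only.
import numpy as np

def peak_normalize(buf: np.ndarray, x: float = 1.0, per_channel: bool = False) -> np.ndarray:
    """Scale so the maximum absolute sample value becomes x."""
    if per_channel:
        peaks = np.max(np.abs(buf), axis=0, keepdims=True)  # one peak per channel
    else:
        peaks = np.max(np.abs(buf))                          # single global peak
    peaks = np.where(peaks == 0, 1.0, peaks)                  # avoid dividing by zero on silence
    return buf * (x / peaks)
```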
@tutschku has suggested some of these, as did @danieleghisi - with @weefuzzy we are trying to keep the feature list to a minimum, and to cater for what is not easily possible in other ways. Normalisation is available in the buffer~ object, and so is enveloping, so it will probably not make the cut. But the idea of complex compositing interests me, although maybe not as a priority… Let us think about it, and thanks for the proposal!
I guess it’s not easily possible to do that as it stands, but if the entire scope of the object is bufferA + bufferB = bufferC, that seems like too limited a scope to be part of a whole toolkit.
At minimum, I would think being able to do bufferA + bufferB + … + bufferN = bufferZ would be useful.
It's also possible I didn't understand the whole point of the object.
The problem with accepting many more buffer names is argument bloat, to allow for clever compositing. Because you can write and read to the same buffer, the current design lets you do what you want through a programmatic, iterative process, which also allows pre-summing; so it is in effect more flexible than a design that would simply let you pile up buffers. You can even make a quick abstraction around the object that does what you want transparently, but the other way round would not be possible…
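As a rough sketch of that iterative idea (plain Python/numpy, not the object's actual interface, and all names made up here), the same result as an N-buffer pile-up falls out of a simple loop that keeps adding into the destination:

```python
# Illustrative sketch: summing an arbitrary number of equal-length source
# buffers into one destination by repeatedly adding into it, rather than one
# call taking N buffer arguments. Not FluCoMa API.
import numpy as np

def compose_sum(sources, gains=None):
    """Mix a list of equal-length (frames, channels) arrays with optional gains."""
    gains = gains or [1.0] * len(sources)
    dest = np.zeros_like(sources[0])
    for src, g in zip(sources, gains):
        dest = dest + g * src   # each pass: destination = destination + gain * source
    return dest
```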
Yeah, Owen posted a fantastic bit of code in another thread for that.
There's tons that's possible through making abstractions and such; it's more about making the object more generally useful (more "medium" level rather than "low"-ish level). If one wanted, one could make appropriately sized buffers and peek/poke something like this too, but that's a bit of a pain to build.
I could be in the minority here, but it seems like more people have requested features for this object that are beyond the A+B=C paradigm.
Guys, the library I talked about (the one I was working on for buffer handling) does that, i.e. it creates buffers dynamically. There are some issues and it is not at all ready for release, but I can share some modules if you want (normalization, concatenation of folders, etc.); they work… Let me know if that can help.
@danieleghisi that’s really generous of you, thanks! It’d be great to pool resources.
@ everyone in this thread (or everyone in general), it would be valuable to get a handle on what people’s various desires for this object might be. In a making-no-promises kind of way. It’s interesting that it’s attracted the interest that it has, given that I threw it together to help @tremblap get the helpfiles together: plainly, there are things people want to do with buffers that they can’t at the moment.
Questions, off the top of my head:
- Syntax: the current syntax doesn't really have much scope for enrichment. I find it pretty hard to use as is, and I wrote the damn thing.
  - Break it out into multiple commands? (If so, what?)
  - Keep it as a single command?
  - Indexing syntax
  - Handling n inputs
- Functions
  - Normalisation: given that [buffer~] does peak normalising, do we need this? TBH, I don't use peak normalising very often in my workflow, but others might. For concatenating arbitrary sources, would loudness normalising be more helpful? (See the sketch after this list.)
  - Fades: how would this work? Does that not imply two types of operation: mixing (+) and modulating (*)?
  - General functions
- Only work on host buffers, or talk to the file system?
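On the loudness question above, here is a rough sketch of one simple reading, using RMS as a crude stand-in for loudness (a real implementation would more likely use something like EBU R128 / LUFS); none of this is FluCoMa code and the names are invented for the example:

```python
# Sketch of loudness-ish normalisation using overall RMS as a proxy.
# Assumes a (frames, channels) numpy array; illustrative only.
import numpy as np

def rms_normalize(buf: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    rms = np.sqrt(np.mean(buf ** 2))              # overall RMS across all channels
    return buf if rms == 0 else buf * (target_rms / rms)
```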
I guess it’s also an easy object to understand, and kind of does what it says right out of the box. So that might have something to do with the interest.
I’m curious what @danieleghisi has baked up as that sort of thing is more what I imagined out of the object.
In terms of syntax, I find the (HISS/HIRT/FluCoMa) style of endless arguments pretty unreadable, and fairly meaningless even with a fair amount of context. That being said, when working with lots of files, it’s difficult to avoid weird syntax.
Perhaps breaking it out into multiple messages/commands?
As far as functions go, I would lean towards just having peak and some windowing, since if you get into loudness (and other analysis-oriented things), feature bloat is surely around the corner. That being said, phase would probably be handy (and simple).
Similarly, for mixing I would go with straight summing, as modulating also opens up another can of worms (a single Max-message composition environment…).
For fades, I would go with the standard options, though (at the risk of steering dangerously close to overdoing things) it would be good to be able to specify them either over the length of the entire buffer OR over a fixed duration (e.g. applying a 20 ms fade in/out to a 10 s file, instead of a gigantic Hann window over the whole thing).
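To make those two fade behaviours concrete, here is a small illustrative numpy sketch (assuming a (frames, channels) buffer; not proposed syntax for the object, and the function names are made up):

```python
# Fixed-duration fade in/out (e.g. 20 ms on a 10 s file) versus a window
# spanning the whole buffer. Illustrative only.
import numpy as np

def fade_fixed(buf: np.ndarray, sr: int, fade_ms: float = 20.0) -> np.ndarray:
    """Apply a linear fade-in and fade-out of fade_ms milliseconds."""
    n = int(sr * fade_ms / 1000.0)
    out = buf.copy()
    ramp = np.linspace(0.0, 1.0, n)[:, None]   # column vector broadcasts over channels
    out[:n] *= ramp
    out[-n:] *= ramp[::-1]
    return out

def fade_whole(buf: np.ndarray) -> np.ndarray:
    """Apply a Hann window over the entire buffer length."""
    return buf * np.hanning(len(buf))[:, None]
```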
Quick question: when you say phase, you know you can multiply by negative gain at the moment, right? Check the MS example (which needs developing, but shows how to sum and difference two buffers).
Breaking the commands into multiple messages would indeed be great. I am thinking here of jit.mgraphics, with its simple command lists.
Peak normalization can still be useful if one wants to normalize within a series of processes. It also remains useful for material other than audio, for instance when normalizing bins within buffers in the frequency domain.
That library allows you to process buffers, online or offline. It provides a large variety of operations, up to wavelet analysis, MFCCs, filters, etc. But they live within their own internal system and, depending on where you want to go afterwards, I am not sure you want to go that far with buffers.
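For anyone who hasn't looked at the MS example, the idea amounts to this (illustrative numpy only, not the actual help patch): a difference is just a sum where one source is multiplied by a gain of -1.

```python
# Mid-side as sums and differences of two buffers; the "negative gain" trick.
import numpy as np

def ms_encode(left: np.ndarray, right: np.ndarray):
    mid  = left + right           # compose with gains  1 and  1
    side = left + (-1.0) * right  # compose with gains  1 and -1
    return mid, side

def ms_decode(mid: np.ndarray, side: np.ndarray):
    return 0.5 * (mid + side), 0.5 * (mid - side)  # back to left, right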
AND PiPo and MuBu have their PROBLEMS and are not SIMPLY connected to… Max.
I hesitated to show you that because it can be a bit depressing to see how far they went. But this is just a starting point and you wanna have a different paradigm. I feel I am the “Ircam guy” selling his shit.
@tremblap: “no caps ever” < this can be dangerous. ahaha
Thanks for this… and don't worry: our plan is not to reinvent the wheel, but to link to other toolboxes that do the things we won't implement. I will try pipo.onseg to see how it compares to our segmenter too (unless you have experience to share?).
Thanks @pasquetje! Good observation about the utility of peak normalising in more general use.
Yes, I know pipo quite well, and there's some stuff I really like about it. For Max, the way they have of chaining processes together is elegant, as is the way they use object attributes. This would be one possible way of dealing with more complex processing chains (e.g. allowing for other types of transform). As you say, it still has its issues though (typically IRCAM ones, like doing the beginnings of something really impressive, and then stopping before the documentation was ready!). How we'd achieve something like this that could also work in SC is an open question…
It also occurred to me that much of what fluid.bufcompose~ does (or that people want it to do) could already be done with jit.buffer~ / jit.matrix~ and a set of abstractions (again, problems for other platforms though). Maybe we can at least look at Jitter's interface as a possible inspiration too.
As usual, this kind of thing (onsets, formants, etc.) is case by case. It depends on what you are after, and consequently on how you define them. My two cents…
pipo.onseg is indeed pretty good, but only for detecting attacks.
I am nasty but I prefer using sqrt~ with onepole~ and sometimes average~ for dessert. ahahaha
Hi guys, as promised here's a rough, personal release of my "ears" library, operating on buffers. Most of the objects share their names with bach. and cage. objects, only they operate on buffers (e.g. ears.slice will slice a buffer, not a list like bach.slice, and not a score like cage.slice; ears.join will concatenate buffers, not lists; and so on).
Link:
It's more than likely to be buggy, not very tested, etc. I've managed to roughly document it in the last few days (more or less); you may start from the few examples included in docs > samplepatches, or from the helpfiles. It has nothing to do with DSP research or fancy algorithms; it's extremely stupid, just a toolbox for working with buffers offline. In principle, it should help with batch operations (e.g. concatenating the files in a folder, mixing files randomly, etc.).
If you have any feedback, that would be great. For the time being, please keep it to yourselves, because I still have no idea what I am really going to do with the library. I have been searching for some institutional funding over the last couple of years but couldn't find any, so I developed it anyway in my spare time for my own purposes. It's far from complete, and some cool things I had in mind are still missing.