Peak seems like a fairly redundant metric if you’re only going to take 4d of loudness overall.
Indeed. At the moment I’m just trying to rebuild a simple entrymatcher~-workflow replacement to see how it feels. Based on my previous testing, I’m leaning towards it being “too slow”, but I want to be able to work out the details of it and see how/where.
That is not yours to pick… buffers are interleaved the way they are. You can do an expensive transposition of the matrix if you want, but that is not useful.
As I keep saying, keep using entrymatcher~ until it no longer satisfies you; then you will be able to think differently about your problem. Using French to speak German is not productive in Spain.
Wait, you lost me here.
I mean, once I have the flattened points in the dataset, I want to pull out (via something like addcolumn) only the specific bits of information I want. So rather than taking the first four columns, I'd take the first two, then wherever the next two happen to fall (I'll have to do some "indices math" to figure that out).
Are you saying that’s not possible and/or expensive?
Or are you saying something else?
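For what it's worth, the "indices math" here is just an offset calculation. A minimal sketch, assuming (and this is an assumption, not the documented fluid.bufflatten~ layout) that each descriptor channel's stats land contiguously in the flattened vector:

```python
# Hypothetical index math for picking specific columns out of a flattened
# stats vector. Assumed layout: [d0s0, d0s1, ..., d0s6, d1s0, ...], i.e.
# each descriptor's stats are contiguous (check your actual flattening order).

def flat_index(descriptor: int, stat: int, n_stats: int = 7) -> int:
    """Column in the flattened vector for a given descriptor/stat pair."""
    return descriptor * n_stats + stat

# e.g. the mean (stat 0) and std (stat 1) of descriptors 0 and 1:
wanted = [flat_index(d, s) for d in (0, 1) for s in (0, 1)]
print(wanted)  # -> [0, 1, 7, 8]
```

If the actual interleaving is the other way round (stat-major rather than descriptor-major), the multiplication just flips.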
Or more specifically, are you taking the first four columns in the ‘loudness’ dataset because you think those are four useful dimensions to take, or because they are the first four?
Part of this is to show potential use cases, and where they may be problematic given the interface/structure. From my testing a while back, using fluid.datasetquery~ on anything over a few points was already >10ms, and for the more "real-world" test example I built, it was >30ms.
So I want to be able to build exactly what I want to do, potentially with several steps of fluid.datasetquery~, along with fitting a fluid.kdtree~ at the bottom, to see what the actual cost is for real-time use.
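The measurement itself is the easy part. As a rough, language-agnostic sketch of what's being measured (a filter pass over a set of points standing in for fluid.datasetquery~, with made-up point counts and thresholds), so the real-time budget is explicit:

```python
# Illustrative timing harness: time a single filter pass over N points.
# In Max you'd bracket the query with cpuclock instead; the numbers here
# are invented for illustration, not measurements of the FluCoMa objects.
import random
import time

points = [[random.random() for _ in range(10)] for _ in range(10_000)]

t0 = time.perf_counter()
# keep only points whose first column exceeds a threshold
subset = [p for p in points if p[0] > 0.5]
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"filtered {len(subset)} points in {elapsed_ms:.2f} ms")
```

The useful habit is timing each stage (each query step, then the kdtree fit) separately, so you can see which one blows the real-time budget.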
Would the same be the case for doing lots of small fluid.bufcompose~ or peek~ operations, rather than a single fluid.bufflatten~ at the end of each individual descriptor processing chain, to get only the bits you want into the fluid.dataset~?
Like, if you specifically wanted mean/std/min/max of the stats and their derivs, for example.
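For that specific case, the frame indices are easy to precompute. A sketch, assuming fluid.bufstats~'s seven-statistic order (mean, std, skewness, kurtosis, low, mid, high) with each derivative appending another block of seven — worth double-checking against the reference before relying on it:

```python
# Which stats frames to grab for mean/std/min/max of the stats and their
# derivatives. Assumes the 7-stat fluid.bufstats~ order with derivative
# blocks appended after the base block; "low"/"high" stand in for min/max.

STATS = ["mean", "std", "skewness", "kurtosis", "low", "mid", "high"]

def stat_frames(wanted, n_derivs=1, n_stats=7):
    """Frame indices for the wanted stats across the base block and derivs."""
    base = [STATS.index(w) for w in wanted]
    return [block * n_stats + i for block in range(n_derivs + 1) for i in base]

print(stat_frames(["mean", "std", "low", "high"]))
# -> [0, 1, 4, 6, 7, 8, 11, 13]
```

Those eight indices would then be the targets for whatever peek~/bufcompose~ cherry-picking happens per chain.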
This would make something like what I suggest here (@output attributes) even more compelling, so you can request only the things you want up front, rather than getting what you don't want and having to carefully sift/prune around it.
I think you are optimising too early. Get the patch to do music you want to hear, then think of optimising…
bufflatten is fast; column copying is fast-ish; many bufcomposes will not be as fast, I think, but feel free to try. At the moment, though, I'm much more curious about hearing the musical results of the right descriptors, which is what I'm exploring with our imperfect interface. That will be the only way to see how to optimise it for many use cases.
That's sort of what I'm trying. I didn't think the column copying stuff was an expensive-ish part of the process. I was just going through the patch, trying to rebuild the bits I want/need, and didn't understand the descriptor decisions.
But for now, it’s that. Trying to get the onset descriptors sample player working with fluid.dataset~s.
Actually, in rebuilding everything I realized that the real-time version is mainly buffers and bufcompose, with the dataset stuff happening at the batch analysis stage.
So you could potentially be bittier and fiddlier at the batch stage and then just bufcompose/peek the equivalent columns for the real-time version.