Found and played with the 7-making-subsets-of-datasets.maxpat
example and this looks like it would kind of do what I want.
I don’t really understand the syntax (or intended use case actually).
Say I have a fluid.dataset~ that has 5000 points (rows) with 300 features (columns), and I want to filter it and create a subset of those points (something like filter 0 > 0.128). Meaning, I want to keep the number of columns intact.
My intended use case here is to create a subset based on some metadata/criteria, which I would then query/match against. In this case I want all the actual features to stay intact so I can query them; I don't want to filter down to just a single column's worth of stuff.
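To make sure I'm describing it clearly, here's a minimal NumPy sketch of the behaviour I'm after (this is just an analogy for the dataset, not anything FluCoMa-specific — the array and threshold are made up):

```python
import numpy as np

# Stand-in for a 5000-point, 300-feature dataset (random values for illustration)
data = np.random.default_rng(0).random((5000, 300))

# Keep every point whose column-0 value exceeds 0.128,
# while retaining all 300 feature columns per point
subset = data[data[:, 0] > 0.128]

print(subset.shape)  # fewer rows, but still 300 columns
```

So the filter criterion only looks at one column, but the surviving points keep every feature.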
I don’t understand what addcolumn (or addrange) is supposed to do. I played with the messages a bit, but none of the examples show the dataset retaining its number of columns.
The process also isn’t terribly fast. Even with just 100 points, as in the example, it takes around 0.5 ms to transform one dataset into another. If queries are chained together, that can start adding up.
Granted, this process wouldn’t happen per grain/query, but it may need to happen often enough to be fluid (say, if I’m modulating the filter criteria with an envelope follower or something, where the louder I’m playing, the longer the samples I’m playing back are, etc.).
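As a rough back-of-envelope on why the chaining worries me (the chain length and update rate here are hypothetical, only the ~0.5 ms figure is from my measurement):

```python
# If one transform costs ~0.5 ms, a chain of N transforms costs N * 0.5 ms
# per update; what matters is how often the filter criteria change
# (e.g. at an envelope-follower rate), not the per-grain rate.
transform_ms = 0.5      # measured cost of one dataset transform
chain_length = 4        # hypothetical number of chained queries
update_rate_hz = 30     # hypothetical modulation/update rate

cost_per_update_ms = transform_ms * chain_length
budget_ms = 1000.0 / update_rate_hz

print(cost_per_update_ms, budget_ms)  # 2.0 ms of work inside a ~33 ms budget
```

At modest update rates that's fine, but faster modulation or longer chains eat into the budget quickly.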