Fluid.datasetplot~ abstraction for 2D/3D visualization

super great! I was thinking that (as we explained) since size and colour are not dims 4 and 5 of a 5D space but actually two other independent dimensions, you will really need datasetquery to assemble that, so you don't end up with a strange latent space in 3D+colour+size. For instance, a 3D latent space for timbre, with colour as classes and size as loudness (or length), could be fun and quite intuitive to navigate.

Congrats.

It’s all a conceptual mapping either way, as there’s no 3D space (just a “rotating” 2D space), but it’s more about the usefulness/readability. I’m certain this is a massive field of research as it stands, but with this approach the scale (size) of a point has three parameters, so those could be independently addressed too, though I’m not sure that would offer any meaningful (visual) information.

Hopefully it doesn’t come to that! ;p

That could be interesting, to have X/Y/Z be one “dataset” and color be another one, though I’m not sure I would have the perception to tell the difference between clustering and relative distances in an RGB space, particularly since (as discovered and outlined earlier in this thread) distances within it are not perceptually linear.

that is what I mean… because asking for 3 dims and navigating in 3 dims makes sense, especially because none of the 3 dims makes sense on its own: it is their position in 3D that means something… and that is why I find colour-as-dim difficult to make sense of in a 4D space where the other 3 dims are spatial: the ‘red-ness’ would have to be mapped in your head onto the ‘x-ness’, the ‘y-ness’ and the ‘z-ness’ to give a 4D vision…

so my proposal is 3D in 3D, then a class as colour (to see how the classes maybe relate to the 3D space), and then loudness or duration as size (to see, again, if that single-dimension parameter inhabits a particular spot in the colour space (classes) or in the 3D space)

does it make sense?

That’s where the perceptually linear color mapping helps. Greenish stuff is as similar to each other as “stuff on the left” is.

The first bit is what I was trying to do here, by just adding the labels that come from kmean-ing something into a dataset for mapping. That’s definitely not a simple task.

But any of these bits can be handy in that you just pack a single dataset with what you want (e.g. 3 dims of “timbre”, a 4th dim of “loudness” and a 5th dim of “duration”). It’s not possible to do everything this way as you can’t have symbols/text in a dataset; otherwise you could have a set of (text) labels as a dimension and map those onto differences in a color space or whatever.
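(For reference, assembling that kind of composite dataset is presumably the datasetquery step mentioned above. A rough sketch, assuming fluid.datasetquery~’s transformjoin and three already-existing datasets — timbre3d (3 columns), loud1d and dur1d (1 column each) sharing the same identifiers — with all the names invented here, and the exact argument/column ordering of transformjoin worth double-checking against the reference:)

```
to a [fluid.datasetquery~]:
  (addrange 0 3)                          <- select all 3 timbre columns
  (transformjoin timbre3d loud1d tmp4d)   <- join on shared identifiers -> 4D
  (clear), then (addrange 0 4)
  (transformjoin tmp4d dur1d plotdata)    <- join again -> the 5D plotting dataset
```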

I can break out the interface so you can feed it different kinds of things, as having to funnel things through a dataset complicates things a bit, but then it doesn’t become as easy to just drop in and inspect things.

don’t break things. I’ll just datasetquery what I need to show you what I mean.


Sadly it won’t be possible (without loads of work) to select points when it’s in a “proper” 3D space, so this kind of view will only be for looking, and no touching.

Actually, reading this, it does not sound that difficult. I am not certain how the mouse is reported in a 2D projection of a 3D space, which is the hard bit of giving you ‘where you are’, and then it should be a normal query.

This is what I was referring to as an approach: screentoworld and worldtoscreen messages - Jitter Forum | Cycling '74

The core thing there is a jit.pworld, and it spits out the same mouse/X/Y stuff as all the other objects. The handle thing also spits out its rotation stuff too. How to merge those is quite beyond me, but I’ll take a look at this thread.

I’ll also tidy up my code and post what I have later today. It won’t be as wrapped up as fluid.datasetplot~ but it will take data input. It’s just a bit faffy at the moment cuz I have to get things from a dataset into jit.matrix objects.
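(Roughly, the faff I mean is something like this — I haven’t actually wired it up yet, all the names are invented, and I’m going from memory of the dump format:)

```
(dump) -> [fluid.dataset~ plotdata]
       -> a dict roughly like { "cols": 5, "data": { "id-0": [x y z loud dur], ... } }
       -> [dict.iter] (or a [js]) to walk the "data" entries
       -> write each row into the cells/planes of a [jit.matrix] for the gl stuff to draw
```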

Ok, here’s what I have. I didn’t include any dataset->matrix stuff, so all the data is just in a coll at the moment, but this should give you an idea of where I’m going with it.

For the life of me, I couldn’t figure out how to use jit.gl.camera instead of jit.gl.handle like in that video you posted a bit ago. It would be nice to not see the circle/origin points, and the @tripod mode is nice, as it’s not useful to go “upside down” with this kind of plot.

I’ll look at that thread you linked now, but I just wanted to post this first. It’s possible that there’s another/different way to generate the points that still leaves them addressable; once you use jit.gl.multiple, it seems that the individual jit.gl.gridshape objects aren’t accessible anymore.

5D plot.maxpat.zip (153.8 KB)

That looks pretty good. I’m not entirely sure how a camera thing works vs a world thing, as it’s nice being able to navigate the space using the mouse + hotkeys (I think it would be terrible to resize/rotate things using 3 float boxes (though I’m sure that would be your preference!)).

so many years, and yet you know me so little…

lol, I’ll die on that hill that I can 100% see you moving camera position around with 3 float boxes!

Rewatching a bit of that vid above and I’m wondering if it’s better to shove points into a jit.gl.mesh instead. Though I guess it would have to dump all the color/size things one by one. I’ll try that again and see if I get anywhere.

Ok, so jit.gl.handle takes a @visible attribute, so that makes things look much nicer.

I bumped the thread on the c74 forum to see if it’s possible to use the worldtoscreen / screentoworld messages in the context of my original patch, or if there’s another way to get at the same kinds of info.

check this: a colleague was talking about using VR to visualise his ideas in a 3D mindmap - it made me think of this thread


Interesting…

I’m not entirely sure how useful something like that would be for mindmapping (and it doesn’t help that the video on their page looks like a bad parody or something off Adult Swim) but being able to VR/AR through point clouds for data stuff would definitely be useful.

Could be cool to set up a gamepad to control the camera in these plots as the two analog stick control paradigm is pretty ubiquitous and would allow for (potentially) faster/easier navigation of the “3D” space.

The music is so compelling, though

For a mind-map business, ‘inspired’ would be a good qualifier :slight_smile:

Unless you did some clever stuff I didn’t notice, this will only be returning the nearest neighbors in the 2D reduction (for mouse input).

I initially put it together such that the X/Y coordinates of the mouse just correspond to the first two dimensions in the dataset, so you can just choose from what you see on the screen (flattened), but once you have 3/4/5D, this will no longer be correct for the full multidimensional space.

Is there a way to query fluid.kdtree~ by giving it the label and asking for all the nearest points? As in, using the 2D mouse reduction to return point-72 (or whatever), then passing that on to a fluid.kdtree~ that has the entire dataset in it, and then asking for the nearest neighbors to that label, without having to manually ask for knearest?

A workaround would be to take the results from the first/mouse KDTree and dump out the corresponding values from the dataset, then feed that set of values into another KDTree and ask for the multidimensional nearest neighbor to that point. Faffy, but that’s the only way I can think of doing this with my understanding of the objects.
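(Something like this is what I mean — a sketch from memory of the fluid.dataset~ / fluid.kdtree~ messages, with all the object/buffer names made up:)

```
to a kdtree fit on the 2D/mouse dataset:
  (knearest mousepoint 1)            <- returns an id, e.g. "point-72"

to the full 5D dataset [fluid.dataset~ full5d]:
  (getpoint point-72 querypoint)     <- copy that entry into a 5-channel buffer~

to a second kdtree fit on full5d:
  (knearest querypoint 5)            <- the "true" 5D nearest neighbours
```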

you can getPoint to a buffer and use it to query, that is the simplest. So in your case you can query the NN in what you see (2D), then get that point…

but more interestingly, if your downward process is reversible you can take the 2D point and get guesstimated high-D values. The autoencoder example is doing that (8c) for instance. I’m told by @groma that maybe UMAP has that ability but we have not implemented it yet.