Fluid.datasetplot~ abstraction for 2D/3D visualization

For a mind-map business, ‘inspired’ would be a good qualifier :slight_smile:

Unless you did some clever stuff I didn’t notice, this will only be returning the nearest neighbors in the 2D reduction (for mouse input).

I initially put it together so that the X/Y coordinates of the mouse correspond to the first two dimensions in the dataset, so you can just choose from what you see on the screen (flattened), but once you have 3/4/5 dimensions this will no longer be correct for the full multidimensional space.

Is there a way to query fluid.kdtree~ by giving it the label and asking for all the nearest points? As in, using the 2D mouse reduction to return point-72 (or whatever), then passing that on to a fluid.kdtree~ that has the entire dataset in it, and then asking for the nearest neighbors to that label, without having to manually ask for knearest?

A workaround would be to take the result from the first/mouse KDtree, dump out the corresponding values from the dataset, then feed that set of values into another KDtree and ask for the multidimensional nearest neighbor to that point. Faffy, but that’s the only way I can think of doing this with my understanding of the objects.

You can getPoint to a buffer and use it to query; that is the simplest. So in your case you can query the NN in what you see (2D), then get that…
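For anyone following along outside Max, the two-stage lookup described here can be sketched in Python. A brute-force `nearest` stands in for `fluid.kdtree~`, and the toy `dataset` values are made up for illustration:

```python
import math

# Hypothetical toy dataset: id -> full 5-D point; the first two
# dimensions are what you see on screen in the 2D reduction.
dataset = {
    "point-0": [0.1, 0.2, 0.9, 0.4, 0.5],
    "point-1": [0.8, 0.1, 0.2, 0.7, 0.3],
    "point-2": [0.15, 0.25, 0.85, 0.35, 0.55],
}

def nearest(query, points):
    """Brute-force nearest neighbour; stands in for a kd-tree query."""
    return min(points, key=lambda k: math.dist(query, points[k]))

# Stage 1: query with the 2-D mouse position against only the
# first two dimensions (what's on the screen).
mouse = [0.12, 0.22]
flat = {k: v[:2] for k, v in dataset.items()}
picked = nearest(mouse, flat)

# Stage 2: fetch that point's full coordinates (getPoint in Max)
# and query the full-dimensional space, excluding the point itself.
full_query = dataset[picked]
others = {k: v for k, v in dataset.items() if k != picked}
neighbour = nearest(full_query, others)
```

The point is that the second query runs in the full space, so the neighbour it returns can differ from what looks closest in the flattened 2D view.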

But more interestingly, if your downward process is reversible you can take the 2D values and get guesstimated high-D values. The autoencoder example (8c) is doing that, for instance. I’m told by @groma that UMAP may have that ability, but we have not implemented it yet.

Ah yes. Ok, it can just stay in buffer-land.

Not sure if I follow the usefulness of this. Are you suggesting that it would be good to have a guesstimated higher dimensional projection of a 2D space (while being presented as a NN of the higher D space)?

I don’t know but I know it is possible to try. I love the interpolated space I get in the example I devised to compare the regressors… but hey, how they sound is what matters so have fun with the simple way out first.

Hey guys. Had a nice chat with @rodrigo.constanzo the other day. I think my raycasting muscles are getting stronger. :relaxed:
This version gets the point closest to where you click the mouse (in the jit.pworld) by casting a ray through where you click, then picking the point whose vector from the camera (kinda) makes the narrowest angle with that ray. Arguably not the most fail-proof method, but it seems to do the job:
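For the curious, the angle-comparison idea can be sketched outside Jitter like this (plain Python, with made-up camera and point values; in the patch the equivalent math lives in jit.gen):

```python
import math

def angle_between(u, v):
    # Angle in radians between two 3-D vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def pick_point(cam, ray_dir, points):
    """Return the index of the point whose camera-to-point vector
    makes the smallest angle with the click ray."""
    def ang(p):
        to_p = [p[i] - cam[i] for i in range(3)]
        return angle_between(ray_dir, to_p)
    return min(range(len(points)), key=lambda i: ang(points[i]))

cam = [0.0, 0.0, 5.0]
ray = [0.0, 0.0, -1.0]  # clicking straight ahead
pts = [[1.0, 1.0, 0.0], [0.1, 0.0, 0.0], [2.0, -1.0, 0.0]]
pick_point(cam, ray, pts)  # -> 1, the point nearly on the ray
```

As noted above, comparing angles rather than screen distances is what makes this work for any camera position and orientation.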

datasetplot-raycast

The good thing about it is that the camera can be anywhere and have any orientation, it will (supposedly) work the same way.

5D plot_mod.zip (136.8 KB)

That’s so badass!

At a quick look it seems super elegant too. I guess the xray object is doing a bit of heavy lifting, but it’s not nearly as complex-looking as I would have thought!

Is the gif low framerate, or does this start to push Max + OpenGL to its limits? I assume it’s the former, as I’ve done ~20k-point plots with GL stuff and it’s coped just fine. This is super great to have in Max; I ended up stepping out of it to do all my vis because I found Jitter too much to deal with at the time.

Yes, the quicksort is probably my favourite Jitter object ever. In my experience it can be surprisingly fast and lightweight even with huge matrices, though doing this with sorting instead of a kd-tree will eventually become problematic. (But I have no idea how to solve this with the kd-tree at the moment.)

It’s just the gif; in the patch it is basically instantaneous, using something like 2% CPU. But north of 20k points I imagine it will start to slow down a bit… The OpenGL part is only the visualization; the query happens in jit.gen + xray.jit.quicksort, so as you scale up, your CPU will ache more.

Ah okay, so are you sorting the space to get the closest point? I know the three.js raycaster does a bounding box optimisation so it only ever checks points that are realistically close, rather than the whole space. Might be worth doing - but literally no idea how to approach it in the Max world :slight_smile:
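For reference, a crude version of that bounding-box pre-filter (not the actual three.js implementation, just the idea) could look like this: cull everything outside an axis-aligned box around the ray segment, so the more expensive per-point angle test only runs on plausible candidates. The `length` and `margin` values here are arbitrary assumptions:

```python
def aabb_filter(cam, ray_dir, points, length=10.0, margin=1.0):
    """Keep only points inside an axis-aligned box enclosing the
    ray segment from the camera, expanded by a margin on all sides."""
    end = [cam[i] + ray_dir[i] * length for i in range(3)]
    lo = [min(cam[i], end[i]) - margin for i in range(3)]
    hi = [max(cam[i], end[i]) + margin for i in range(3)]
    return [p for p in points
            if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

cam = [0.0, 0.0, 5.0]
ray = [0.0, 0.0, -1.0]
pts = [[0.1, 0.0, 0.0], [50.0, 50.0, 50.0]]
aabb_filter(cam, ray, pts)  # keeps only the first point
```

In Jitter this kind of per-point test could conceivably live in jit.gen before the sort, so the sort (or angle comparison) only ever sees the survivors.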

Yeah it’s definitely real quick on the screen.

I’m adapting his patch slightly to match the behavior of the previous version, and trying to do it sans xray.jit.quicksort for ease of sharing/distribution (at the cost of speed/efficiency).

I think what might work best in the end is that it’s a single unified plot abstraction (fluid.datasetplot~) and if you want to turn/rotate the data, you can, but you still always interact/click/select things in a 2d space. (my version lets you rotate the camera/view around, with the mouse selection not doing anything to the camera).

I tried working out a thing using jit.gl.handle as well, but that seems to break the raycasting.

Yeah, something like that would definitely be clever. Even if it seems fast, it is not so nice to rely on sorting the entire point cloud for every mouse XY; that will scale very badly. It would be good to implement the bounding box in this Max implementation too; maybe that would fix this!

Yes, I think because with the handle you rotate the point cloud, but that won’t update the XYZ coords in the mesh, so (afaik) there is no way to know the new, rotated coords unless you calculate them once more alongside. Might be worth a try though. It could also be that you can “fake” it by orbiting the camera around the mesh with @locklook 1; then you have to tune the jit.anim.drive to give you consistent results with the mouse interaction. Another option is to make a little minimap in the corner, like some CAD apps, where you do the rotation, so you don’t confuse the mouse interaction on the mesh. Just brainstorming. :slight_smile:

Yeah, I understand. I guess jit.bsort would be worth a try, but it is at least 4x slower (plus I think it consumes more CPU too). I wish Max had something like PyPI or npm that could automatically install dependencies from other packages… (Wouldn’t that be great?)

It is one of the worst aspects of Max, and packages non-solve it in such a terrible way (differentiation between local and global things, no way to track the sources of such things, etc.). It really bugs me.

Yeah, it is obviously an “added feature” without really realizing what a package manager should do “in the real world”. Maybe they will give it an overhaul soon; it seems like they are adding “a lot” of new packages nowadays.

Ideally what I would want is a containerised runtime for each patch which draws dependencies from an environment à la Python. Maybe when I finish this PhD there would be scope to make something like this by shimming to different temporary envs à la virtualfish.

That’s a good idea. In general I’m not super familiar with the Jitter navigation stuff, so they all feel a bit awkward, in different ways. I think having some useful/tuned WASD would be a good complement to a vanilla 2D projection.

I guess the main difference is that with the other approach the cloud stays centered, so you can rotate it quickly and not worry about “losing” it, whereas when you move the camera, you can easily navigate outside of what you want to see, which can be the case if you want to see it from the top/down or something like that.

That would indeed be badass!

Indeed!

Based on Balint’s patch I’m exploring the navigation of the 3D space with a Leapmotion. I’m using the left hand to ‘navigate’ the space and the right index finger to point (replacing the mouse x y).
As I’m a total newbie to jitter, here are a few questions:
I learned how to use the Leapmotion left-hand position and movement to guide a shape in the space (the optional ‘left-hand-rotation-box’) - just for visualization.
I managed to use the position to change the camera angle. But when I also use the quaternion data for the camera, everything goes wild.
I’m scaling the right index finger position to the 600x600 jit.world (omitting the z axis for now) and using Balint’s math to find the closest point. That seems to work.

If anybody can point me to solutions for:

  • using hand inclination/rotation to tilt the ‘cloud of dots’
  • also using the z axis of the right index fingertip to find the closest dot

Thanks, Hans

The zip file also contains the Leapmotion external I’m using.

Archive.zip (969.8 KB)

I don’t know about jitter, but I used the Leap Motion for Processing library in processing to send OSC to SC when I was using it, and that worked great.

Sam

The Leapmotion is working fine. It’s more my lack of any math memory from 40 years ago…
Here is a short video showing what I’m after.

Hey @tutschku, this looks nice! I think the method to connect a hand position to camera position is good, and I especially like how intuitive the zoom gets this way (ie. it’s not really a separate “zoom” feature anymore).

I think it can be problematic to rotate the world, because then all the points will change position. So if you want to look up points from the dataset (or select them with the raycasting trick) you need to rotate the matrix of points to look up from alongside your world rotation, which to me feels like asking for problems later down the line. I would try to instead translate orientation to position, so that the orientation of your hand orbits the camera around the mesh instead of rotating the mesh itself. I know this sounds counter-intuitive (and I am also not a jitter wizard), but this way the position of the data points remains an unambiguous “truth”.
Translating (or faking) orientation into position should be straightforward if we assume that the camera is locked into orbiting around a point, let’s say the center of the world. Then the pitch of your hand can correspond to the elevation of the camera, and the yaw of your hand can correspond to the azimuth of the camera. I made a very simple spatial comp tool with a phone this way a few years ago, so that I point in a direction and a sound source goes exactly where I’m pointing (simple, but it was surprisingly useful for avoiding fooling myself with a visualization and starting to use my ears :slight_smile:). Note that distance stays constant (or is at least modulated in some other way) in this context.
Examples off the top of my head: if you tilt (or roll) your hand 45 degrees to the left, then instead of rolling the world 45 degrees to the left, you roll the camera 45 degrees to the right. The result will look the same, but now your point coordinates haven’t changed. Or if you turn (the yaw of) your hand 90 degrees to the right, then instead of rotating the world 90 degrees to the right, you orbit the camera 90 degrees of azimuth to the left. I guess you will need to either use spat5.translate (mind the different axes between spat5 and openGL) or just look up the conversion between AED and XYZ.
In your case you can actually control the distance with the Z of your hand position.
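In case it helps, the AED-to-XYZ conversion mentioned above is only a couple of lines. This sketch assumes Y-up, OpenGL-style axes; spat5 uses a different convention, so the axes may need swapping when porting:

```python
import math

def aed_to_xyz(azimuth_deg, elevation_deg, distance):
    """Map hand yaw/pitch (azimuth/elevation) plus a distance to a
    camera position orbiting the origin. Y-up axes assumed."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.sin(el)
    z = distance * math.cos(el) * math.cos(az)
    return [x, y, z]

aed_to_xyz(0.0, 0.0, 5.0)  # camera straight back on +z: [0.0, 0.0, 5.0]
```

With this, hand pitch drives `elevation_deg`, hand yaw drives `azimuth_deg`, and the Z of the hand position drives `distance`, while the point cloud itself never moves.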

That should already be easy without the rotation kung fu above, and you don’t even need the raycasting trick. You just put the data points into a fluid.kdtree, then query with your index fingertip. It of course gets trickier if you move and/or rotate the camera since then you have to have your XYZ point mapped into whatever area is in front of the camera. I just did this a few weeks ago, and I’m not sure I found the nicest implementation but it seems to work.

  • First you have to counter-rotate your point to the camera rotation. I used a spat5.transform for this. It takes some experimentation because spat5 axes are not the same as the openGL axes.
  • Then you cast a single ray from the camera looking straight ahead and define a “mount point”, a point that is always N units in front of the camera.
  • Finally, offset the transformed query point (your index finger xyz transformed respective to cam rotation) with the vector of the mount point.

And tada: now your query point will always be projected into whatever area is in front of the camera, no matter where the camera is or where it’s looking.
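As a rough illustration of those three steps (yaw-only rotation for simplicity; the sign conventions here are an assumption, so expect to flip things when porting to Jitter/spat5):

```python
import math

def rotate_y(p, deg):
    """Rotate a 3-D point around the Y axis (yaw)."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [p[0] * c + p[2] * s, p[1], -p[0] * s + p[2] * c]

def project_query(finger_xyz, cam_pos, cam_yaw_deg, mount_dist=2.0):
    """Map a fingertip XYZ into the area in front of the camera:
    1) counter-rotate the query point relative to the camera yaw,
    2) find the mount point N units straight ahead of the camera,
    3) offset the rotated point by that mount point."""
    rotated = rotate_y(finger_xyz, -cam_yaw_deg)
    # Camera forward vector (looks along -z at yaw 0), rotated by yaw.
    forward = rotate_y([0.0, 0.0, -1.0], cam_yaw_deg)
    mount = [cam_pos[i] + forward[i] * mount_dist for i in range(3)]
    return [rotated[i] + mount[i] for i in range(3)]

# Camera at z=5 looking forward: the fingertip lands 2 units ahead of it.
project_query([0.1, 0.2, 0.0], [0.0, 0.0, 5.0], 0.0)  # [0.1, 0.2, 3.0]
```

A full version would use the camera quaternion (or a spat5.transform, as above) instead of a single yaw angle, but the structure of the three steps stays the same.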

The issue with plugging the quaternion into the camera might be because the leapmotion object uses a different quaternion ordering than jit.gl.camera. Maybe the points didn’t go away; it’s just that the camera was looking at the “void”. Honestly, I don’t know how to convert between them (it might be as simple as flipping the sign or index of some numbers), but maybe there is a tool for this in spat5 or somewhere else.
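If the problem really is just the ordering, the fix is only a reshuffle. A hypothetical sketch, assuming one side emits (x, y, z, w) and the other expects (w, x, y, z) - which ordering each object actually uses would need checking:

```python
def xyzw_to_wxyz(q):
    """Reorder a quaternion from (x, y, z, w) to (w, x, y, z).
    The component values are unchanged; only their positions move."""
    x, y, z, w = q
    return [w, x, y, z]

xyzw_to_wxyz([0.0, 0.0, 0.0, 1.0])  # identity rotation: [1.0, 0.0, 0.0, 0.0]
```

If the conventions also differ in handedness (not just ordering), some components would additionally need their signs flipped, which is harder to guess without testing against a known rotation.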