Fluid.datasetplot~ abstraction for 2D/3D visualization

Ah yes. Ok, it can just stay in buffer-land.

Not sure if I follow the usefulness of this. Are you suggesting that it would be good to have a guesstimated higher dimensional projection of a 2D space (while being presented as a NN of the higher D space)?

I don't know, but I know it is possible to try. I love the interpolated space I get in the example I devised to compare the regressors… but hey, how they sound is what matters, so have fun with the simple way out first.

1 Like

Hey guys. Had a nice chat with @rodrigo.constanzo the other day. I think my raycasting muscles are getting stronger. :relaxed:
This version gets the point closest to where you click the mouse (in the jit.pworld) by casting a ray through where you click, then picking the point whose vector from the cam (kinda) makes the narrowest angle with that ray. Arguably not the most fail-proof method, but it seems to do the job:

datasetplot-raycast

The good thing about it is that the camera can be anywhere and have any orientation; it will (supposedly) work the same way.
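
In case the math is useful to anyone, here is roughly what that angle test boils down to, sketched in Python/numpy rather than jit.gen (all names and the toy data here are made up for illustration):

```python
import numpy as np

def pick_point(points, cam_pos, ray_dir):
    """Return the index of the point whose camera-to-point vector
    makes the smallest angle with the picking ray."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)              # normalize the ray
    to_points = points - cam_pos                             # vectors cam -> each point
    to_points /= np.linalg.norm(to_points, axis=1, keepdims=True)
    # cosine of the angle between each cam->point vector and the ray;
    # the largest cosine is the smallest angle
    cos_angles = to_points @ ray_dir
    return int(np.argmax(cos_angles))

# toy usage: 1000 random 3D points, camera at the origin, ray along -z
pts = np.random.uniform(-1, 1, (1000, 3))
idx = pick_point(pts, cam_pos=np.zeros(3), ray_dir=np.array([0.0, 0.0, -1.0]))
print(pts[idx])
```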

5D plot_mod.zip (136.8 KB)

2 Likes

That's so badass!

On a quick look it seems super elegant too. I guess the xray object is doing a bit of the heavy lifting, but it's not nearly as complex-looking as I would have thought!

1 Like

Is the gif low framerate, or does this start to push Max + OpenGL to its limits? I assume it's the former, as I've done plots of ~20k points with GL stuff and it's coped just fine. This is super great to have in Max; I ended up stepping out of it to do all my vis because I found Jitter too much to deal with at the time.

1 Like

Yes, the quicksort is probably my favourite Jitter object ever. In my experience it can be surprisingly fast and lightweight even with huge matrices, though doing this with sorting instead of a kdtree will eventually become problematic (but I have no idea how to solve this with a kdtree at the moment).

It's just the gif; in the patch it is basically instantaneous, using something like 2% CPU. But north of 20k points I imagine it will start to slow down a bit… The OpenGL part is only the visualization; the query happens in jit.gen + xray.jit.quicksort, so as you scale up, your CPU will ache more.
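
To make the scaling point concrete, here is a rough Python sketch of the two strategies, with scipy's cKDTree standing in for a kd-tree lookup (the patch itself sorts by angle via jit.gen + xray.jit.quicksort, so this is only an analogy):

```python
import numpy as np
from scipy.spatial import cKDTree

pts = np.random.uniform(-1, 1, (20_000, 3))   # a 20k-point cloud
query = np.array([0.1, -0.2, 0.3])            # e.g. an unprojected mouse position

# strategy 1: sort the whole cloud for every query (what sorting-based
# picking amounts to) -- roughly O(n log n) per mouse move
idx_sorted = int(np.argsort(np.linalg.norm(pts - query, axis=1))[0])

# strategy 2: build a kd-tree once, then each query is roughly O(log n)
tree = cKDTree(pts)
_, idx_tree = tree.query(query)

assert idx_sorted == int(idx_tree)
```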

1 Like

Ah okay, so are you sorting the space to get the closest point? I know the three.js raycaster does a bounding box optimisation so it only ever checks points that are realistically close, rather than the whole space. Might be worth doing, but I have literally no idea how to approach it in the Max world :slight_smile:

1 Like

Yeah it's definitely real quick on the screen.

I'm adapting his patch slightly to match the behavior of the previous version, and trying to do it sans xray.jit.quicksort for ease of sharing/distribution (at the cost of speed/efficiency).

I think what might work best in the end is a single unified plot abstraction (fluid.datasetplot~) where, if you want to turn/rotate the data, you can, but you still always interact/click/select things in a 2D space. (My version lets you rotate the camera/view around, with the mouse selection not doing anything to the camera.)

I tried working out a thing using the jit.gl.handle as well, but that seems to break the raycasting.

1 Like

Yeah, something like that would definitely be clever. Even if it seems fast, it is not so nice to rely on sorting the entire point cloud for every mouse xy; that will scale very badly. It would be good to implement the bounding box in this Max implementation too; maybe that would fix this!
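
For what it's worth, a bounding-box pre-filter could look something like this numpy sketch: a crude axis-aligned box around the ray segment (not three.js's actual bounding-volume logic; the far distance and margin are arbitrary), with the angle test only run on the surviving points:

```python
import numpy as np

def prefilter_then_pick(points, cam_pos, ray_dir, far=10.0, margin=0.5):
    """Keep only points inside an axis-aligned box around the ray segment,
    then run the (more expensive) angle test on that subset
    instead of on the whole cloud."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    ray_end = cam_pos + ray_dir * far
    lo = np.minimum(cam_pos, ray_end) - margin
    hi = np.maximum(cam_pos, ray_end) + margin
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    candidates = np.nonzero(mask)[0]
    if candidates.size == 0:
        return None                                   # nothing near the ray
    to_pts = points[candidates] - cam_pos
    to_pts /= np.linalg.norm(to_pts, axis=1, keepdims=True)
    return int(candidates[np.argmax(to_pts @ ray_dir)])
```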

Yes, I think because with the handle you rotate the point cloud, but that won't update the xyz coords in the mesh, so (afaik) there is no way to know the new, rotated coords unless you calculate them once more, alongside. Might be worth a try though. It could also be that you can "fake" it by orbiting the camera around the mesh with @locklook 1. Then you have to tune the jit.anim.drive to give you consistent results with the mouse interaction. Another option is to make a little minimap in the corner like some CAD apps, where you do the rotation, so you don't confuse it with the mouse interaction on the mesh. Just brainstorming. :slight_smile:
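
For completeness, here is the "calculate it once more, alongside" idea as a toy numpy sketch (not Jitter code; the 45-degree y-rotation is just an example):

```python
import numpy as np

def rotation_y(deg):
    """Rotation matrix for `deg` degrees around the y axis."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

pts = np.random.uniform(-1, 1, (1000, 3))   # original dataset coordinates
R = rotation_y(45.0)                        # rotation the handle applied to the mesh

# what actually ends up on screen after the handle rotates the mesh:
pts_on_screen = pts @ R.T

# so any lookup (raycast, nearest-neighbour, ...) must use pts_on_screen,
# or equivalently rotate the ray/query by R.T and keep pts untouched
```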

Yeah, I understand. I guess jit.bsort would be worth a try, but it is at least 4x slower (plus I think it consumes more CPU too). I wish Max had something like PyPI or npm that could automatically install dependencies from other packages… (Wouldn't that be great?)

1 Like

It is one of the worst aspects of Max, and packages non-solves it in such a terrible way (differentiation between local and global things, no way to track the sources of such things, etc.). It really bugs me.

1 Like

Yeah, it is obviously an "added feature" made without really realizing what a package manager should do "in the real world". Maybe they will give it an overhaul soon; it seems like they are adding "a lot" of new packages nowadays.

1 Like

Ideally what I would want is a containerised runtime for each patch which draws dependencies from an environment à la Python. Maybe when I finish this PhD there will be scope to make something like this by shimming to different temporary envs à la virtualfish.

2 Likes

That's a good idea. In general I'm not super familiar with the Jitter navigation stuff, so they all feel a bit awkward in different ways. I think having some useful/tuned WASD would be a good complement to a vanilla 2D projection.

I guess the main difference is that with the other approach the cloud stays centered, so you can rotate it quickly and not worry about "losing" it, whereas when you move the camera you can easily navigate outside of what you want to see, which can happen if you want to look at it from the top down or something like that.

That would indeed be badass!

Indeed!

1 Like

Based on Balint's patch I'm exploring the navigation of the 3D space with a Leapmotion. I'm using the left hand to 'navigate' the space and the right index finger to point (replacing the mouse x y).
As I'm a total newbie to Jitter, here is where I've got to so far, plus a few questions:
I learned how to use the Leapmotion left-hand position and movement to guide a shape in the space (the optional 'left-hand-rotation-box'), just for visualization.
I managed to use the position to change the camera angle. But when I'm also using the quaternion data for the camera, everything goes wild.
I'm scaling the right index finger position to the 600x600 jit.world (omitting the z axis for now) and using Balint's math to find the closest point. That seems to work.

If anybody can point me to solutions for:

  • using hand inclination/rotation to tilt the 'cloud of dots'
  • also using the z axis of the right index fingertip to find the closest dot

Thanks, Hans

The zip file also contains the leapmotion external I'm using.

Archive.zip (969.8 KB)

1 Like

I don't know about Jitter, but I used the Leap Motion for Processing library in Processing to send OSC to SC when I was using it, and that worked great.

Sam

The Leapmotion is working fine. It's more my lack of any math memory from 40 years ago…
Here is a short video showing what I'm after.

1 Like

Hey @tutschku, this looks nice! I think the method of connecting a hand position to the camera position is good, and I especially like how intuitive the zoom gets this way (i.e. it's not really a separate "zoom" feature anymore).

I think it can be problematic to rotate the world, because then all the points will change position. So if you want to look up points from the dataset (or select them with the raycasting trick) you need to rotate the matrix of points you look up from alongside your world rotation, which to me feels like asking for problems further down the line. I would instead try to translate orientation to position, so that the orientation of your hand orbits the camera around the mesh instead of rotating the mesh itself. I know this sounds counter-intuitive (and I am also not a Jitter wizard), but this way the position of the data points remains an unambiguous "truth".
Translating (or faking) orientation into position should be straightforward if we assume that the camera is locked into orbiting around a point, let's say the center of the world. Then the pitch of your hand can correspond to the elevation of the camera, and the yaw of your hand to the azimuth of the camera. I made a very simple spatial comp tool with a phone this way a few years ago, so that I point in a direction and a sound source goes exactly where I'm pointing (simple, but it was surprisingly useful to avoid fooling myself with a visualization and to start using my ears :slight_smile:). Note that the distance stays constant (or is at least modulated in some other way) in this context.
Examples off the top of my head: if you tilt (roll) your hand 45 degrees to the left, then instead of rolling the world 45 degrees to the left, you roll the camera 45 degrees to the right. The result will look the same, but now your point coordinates haven't changed. Or if you turn (yaw) your hand 90 degrees to the right, then instead of rotating the world 90 degrees to the right, you orbit the camera 90 degrees of azimuth to the left. I guess you will need to either use a spat5.translate (mind the different axes between spat5 and OpenGL) or just look up the conversion between AED and XYZ.
In your case you can actually control the distance with the Z of your hand position.
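
Here is the pitch/yaw-to-camera-position idea as a small Python sketch (the axis and sign conventions are an assumption; spat5 and OpenGL each have their own, so expect to flip some signs or swap axes):

```python
import numpy as np

def aed_to_xyz(azimuth_deg, elevation_deg, distance):
    """Camera position on a sphere around the origin (the point it orbits).
    Assumed convention: azimuth 0 / elevation 0 sits on +z looking at the
    origin, azimuth positive to the right, elevation positive upwards."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = distance * np.cos(el) * np.sin(az)
    y = distance * np.sin(el)
    z = distance * np.cos(el) * np.cos(az)
    return np.array([x, y, z])

# hand yaw -> azimuth, hand pitch -> elevation, hand z -> distance
cam_pos = aed_to_xyz(azimuth_deg=90, elevation_deg=30, distance=4.0)
# then keep the camera aimed at the origin (e.g. a lookat of 0 0 0)
```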

That should already be easy without the rotation kung fu above, and you don't even need the raycasting trick. You just put the data points into a fluid.kdtree, then query with your index fingertip. It of course gets trickier if you move and/or rotate the camera, since then you have to map your XYZ point into whatever area is in front of the camera. I just did this a few weeks ago, and I'm not sure I found the nicest implementation, but it seems to work.

  • First you have to counter-rotate your point to the camera rotation. I used a spat5.transform for this. It takes some experimentation because the spat5 axes are not the same as the OpenGL axes.
  • Then you cast a single ray from the camera looking straight ahead and define a "mount point": a point that is always N units in front of the camera.
  • Finally, offset the transformed query point (your index finger xyz transformed with respect to the cam rotation) by the mount point vector.

And tada, now your query point will always be projected into whatever area is in front of the camera, no matter where the camera is or where it's looking.
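
Roughly, those three steps in a numpy sketch (the quaternion convention and the -z "forward" direction are assumptions, and mount_dist is just an example value):

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Rotation matrix from a unit quaternion given as (w, x, y, z)."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def project_query(finger_xyz, cam_pos, cam_quat_wxyz, mount_dist=2.0):
    """Map a hand-space query point into the area in front of the camera:
    1) rotate it into the camera's coordinate frame, 2) find the 'mount
    point' mount_dist units straight ahead of the camera, 3) add the two."""
    R = quat_to_matrix(*cam_quat_wxyz)
    rotated = R @ np.asarray(finger_xyz)                        # step 1
    forward = R @ np.array([0.0, 0.0, -1.0])                    # camera's "straight ahead"
    mount_point = np.asarray(cam_pos) + mount_dist * forward    # step 2
    return mount_point + rotated                                # step 3

# the result can then go into a nearest-neighbour query (e.g. fluid.kdtree)
```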

The issue with plugging the quaternion into the camera might have been because the leapmotion object uses a different quaternion ordering than jit.gl.camera. Maybe the points didn't go away; it's just that the camera was looking into the "void". Honestly, I don't know how to convert between them (it might be as simple as flipping the sign or index of some numbers), but maybe there is a tool for this in spat5 or somewhere else.
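
If the ordering really is the culprit, the "tool" might be as mundane as reordering (and/or conjugating) the four numbers. A tiny Python sketch of what I mean (I don't know which convention the leapmotion object or jit.gl.camera actually uses, so treat this as a guess):

```python
def reorder_quat(q, from_order="xyzw", to_order="wxyz"):
    """Reorder quaternion components. Tools disagree on whether the scalar
    (w) comes first or last, so a 'wild' rotation is sometimes just the
    same quaternion read in the wrong order."""
    lookup = dict(zip(from_order, q))
    return tuple(lookup[axis] for axis in to_order)

def conjugate_quat(q_wxyz):
    """Conjugate (the inverse rotation for a unit quaternion): the other
    usual suspect when a mapped rotation goes the 'wrong way'."""
    w, x, y, z = q_wxyz
    return (w, -x, -y, -z)

print(reorder_quat((0.0, 0.0, 0.707, 0.707)))   # -> (0.707, 0.0, 0.0, 0.707)
```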

I realize now that this makes no sense; it was a bit late for my English faculty… So what we need is not to rotate the point itself, but to rotate the coordinate system where the point lives.

Hi there! (@balintlaczko in particular!)
I came across this thread and find it really interesting, since I've been focusing on similar stuff for at least a year now, namely browsing a database in 3D, in VR. I am currently brainstorming about how to take this further. The possibilities of interaction in VR are great, but I am kind of missing the back and forth allowed by the plotter if you stay within the Max world. I imagine it would be awesome to be able to modify the data visualisation in real time, but I don't think I'll be able to do that in VR. I am therefore thinking about some kind of middleware between "flucomax" and VR (patchXR). I've looked into plotly (Screen Recording 2023 06 30 at 00 33 24 - YouTube), but I think that would be really tough :slight_smile: . That being said, I already use JavaScript to generate VR patches, so perhaps some JS libraries could already give nice results in the browser? I'd be interested to know how it works in "The Hum": is it all Jitter? You've mentioned Unreal; have you made the leap in the end? I don't have very specific questions, I think I just wanted to share some experience, hear about your approach, and express that feeling of being really lost in all this weird 3D stuff :slight_smile:
Violin and Harp, Berio Sequenza remix - YouTube
madmax2 - YouTube
Screen Recording 2023 06 29 at 23 51 56 - YouTube
356787051 6515871615122608 6727110143786723469 n - YouTube
Screen Recording 2023 06 30 at 00 33 24 - YouTube

2 Likes

Hey there, @belljonathan50 :slight_smile:

Have you considered the VR package in Max? I only briefly worked with it a few years ago, but it is supposed to be an easy way to adapt a Jitter world to VR. I think I tried it with an HTC Vive in 2017-18; I can't remember the exact model details though. But I remember that the launch example worked well and the controllers worked too.

Sure, I think that should work too; just make sure you use the dedicated video card if you are on a Windows system.

Yes, indeed, it is fully Jitter. :slight_smile: The Unreal adaptation is still very basic and half-baked, but the OSC bridge between Max and UE5 is fairly simple to set up, and I can recommend it. My main motivation to look at UE5 is the performance optimization, stability, and of course the gorgeous Lumen lighting system that can make use of hardware-accelerated ray tracing. :slight_smile: When I made The Hum in Jitter, I had to spend a lot of time implementing performance-optimization tricks myself (like not trying to render things that are not in camera view, for one), which was a bit cumbersome, and I'm not really enough of a veteran to get very far with that anyway. UE5, being a game engine, gives you a lot of such optimizations "for free", which is really motivating.
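
For what it's worth, the "don't render what the camera can't see" kind of trick boils down to a frustum test like this generic Python sketch (not what The Hum actually does; the fov/aspect values are placeholders):

```python
import numpy as np

def in_frustum(point_cam_space, fov_y_deg=45.0, aspect=16 / 9, near=0.1, far=100.0):
    """True if a point (already transformed into camera space, camera
    looking down -z) falls inside the view frustum; anything outside
    can be skipped when rendering."""
    x, y, z = point_cam_space
    if not (near <= -z <= far):      # in front of the camera, within range
        return False
    half_h = -z * np.tan(np.radians(fov_y_deg) / 2)   # frustum half-height at depth -z
    half_w = half_h * aspect
    return abs(x) <= half_w and abs(y) <= half_h
```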

2 Likes