THREE.js Wrap 3D object in a square grid - three.js

I have looked and looked but have not come across anything close to what I'm trying to accomplish.
I am using THREE.js and I have a 3D object, say a human skull. I want to wrap a square grid around its surface while maintaining the unit spacing of the grid lines. Like a wireframe, but with uniform line spacing and straight lines.
The goal is to let the user easily see the surface length of an object. Real-world example: draw a line from the front of a skull, over the top, to the back, and measure the length of that line.
Any code example, or even a starting point for a process I could follow to solve this problem using THREE.js, would be appreciated.
Thanks

You can compute the bounding box of the object (the skull) and draw a cube using the bounding box's dimensions:
// 'object' here is the skull's geometry (for a mesh, use mesh.geometry)
object.computeBoundingBox();
// dimensions of the bounding box along each axis
var sizeX = object.boundingBox.max.x - object.boundingBox.min.x;
var sizeY = object.boundingBox.max.y - object.boundingBox.min.y;
var sizeZ = object.boundingBox.max.z - object.boundingBox.min.z;
// CubeGeometry was later renamed to BoxGeometry
var boundingBoxGeometry = new THREE.CubeGeometry(sizeX, sizeY, sizeZ);
Here is a sample fiddle for that: Bounding box fiddle
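To actually place that box in the scene as a wireframe around the object, a minimal sketch could look like the following (skullMesh is a placeholder name; in current three.js releases BoxGeometry replaces CubeGeometry):
// sketch: wireframe box that envelops the object's bounding box
skullMesh.geometry.computeBoundingBox();
const bbox = skullMesh.geometry.boundingBox;

const size = bbox.getSize(new THREE.Vector3());
const center = bbox.getCenter(new THREE.Vector3());

const boxMesh = new THREE.Mesh(
  new THREE.BoxGeometry(size.x, size.y, size.z),
  new THREE.MeshBasicMaterial({ color: 0x00ff00, wireframe: true })
);
boxMesh.position.copy(center); // center of the bounding box in local space
skullMesh.add(boxMesh);        // parenting keeps the box aligned with the skull
For a one-liner, THREE.BoxHelper(skullMesh, 0xffff00) draws essentially the same box.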

Related

three.js Draw bounding box from 3d-force-graph

I'm currently trying to calculate/draw the bounding box of a force graph built with 3d-force-graph and three.js, but I'm not having any luck creating the box. 3d-force-graph has a getGraphBbox() function but not much documentation for it, and what I have tried so far hasn't worked. Any help would be appreciated! Here is some non-working code I'm working with. Edit: my data is dynamic. I can add a mostly transparent cube to the scene around the points of a static graph, but I want the cube to always render at the appropriate size around the nodes, since some graphs in my data sets have 1000 nodes and others only 50. Hence the bounding box.
const Graph = ForceGraph3D()
(document.getElementById('3d-graph'))
.graphData(data)
.nodeLabel('id')
.nodeAutoColorBy('group');
//.getGraphBbox();//Not sure how to use this
let helper = new THREE.Box3Helper(Graph, 0xff0000);
helper.update();
Graph.scene().add(helper)
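One possible sketch (not from the original thread): the 3d-force-graph documentation describes getGraphBbox() as returning per-axis [min, max] ranges, which can be turned into a THREE.Box3 and visualized with Box3Helper. The return shape used below is an assumption based on those docs.
// assumption: getGraphBbox() returns { x: [min, max], y: [min, max], z: [min, max] }
const bbox = Graph.getGraphBbox();
const box = new THREE.Box3(
  new THREE.Vector3(bbox.x[0], bbox.y[0], bbox.z[0]),
  new THREE.Vector3(bbox.x[1], bbox.y[1], bbox.z[1])
);
// Box3Helper expects a Box3, not the graph object itself
const helper = new THREE.Box3Helper(box, 0xff0000);
Graph.scene().add(helper);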

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
1. Displaying the meshes as they are
2. Displaying the meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position).
In my app I'm trying to make my mouseover events work for the second state, but it's tricky: the GPU transforms the vertices, so the updated positions are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then, after computing the boundingSphere, I change its center to the same z position of -50.
I took this approach because I looked into the Raycaster and Mesh code in THREE.js, and it seems like they check both boundingSphere and boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
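A sketch of that boundingSphere adjustment, mirroring the bounding box change above:
// mirror the shader's z-displacement on the CPU-side bounds (sketch)
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;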
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that maybe I'm not projecting the mouse coordinates correctly. Since all the objects now lie on the plane z = -50, would I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each visible plane in the scene. These extra planes are positioned at z = -50, and I intersect against them when in state #2.
A bit hacky, but works for now.
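A minimal sketch of that proxy-plane idea (variable names like proxyPlanes are illustrative, and a scene variable is assumed to be in scope):
// one invisible proxy plane per visible plane, placed where the shader moves them
const proxyPlanes = planes.map(function (plane) {
  const proxy = new THREE.Mesh(
    plane.geometry.clone(),
    new THREE.MeshBasicMaterial({ transparent: true, opacity: 0, depthWrite: false })
  );
  proxy.position.copy(plane.position);
  proxy.position.z = -50; // match the vertex-shader displacement
  scene.add(proxy);
  return proxy;
});

// in state #2, raycast against the proxies instead of the displaced planes
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(proxyPlanes);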

How to calculate near plane of camera in three.js with model bounding box

I am creating a viewer using three.js and found that setting the camera's near and far planes to fixed values causes flickering for some 3D models.
I see that this is because the GPU runs out of depth precision for models with a bounding box length of around 4000-5000.
The near plane is currently set to 0.1 and the far plane to 20000.
You can move your near plane up to get more depth resolution. Maybe 1.0...
Another option to be aware of is the logarithmic depth buffer:
https://threejs.org/examples/webgl_camera_logarithmicdepthbuffer.html
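For reference, a sketch of enabling it (the flag has to be passed when the renderer is constructed):
// a logarithmic depth buffer trades some performance for much better depth precision
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  logarithmicDepthBuffer: true
});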
You can get the bounding box of the mesh via its geometry: geometry.boundingBox and geometry.boundingSphere. Sometimes you need to recalculate them first using mesh.geometry.computeBoundingBox() and computeBoundingSphere().
Getting the bounding box in camera space is a bit tricky. I don't know of a super optimal one-liner to do it, but someone else may weigh in.
A brute-force way would be to transform the mesh vertices to screen space.
Maybe something like:
// clone the geometry so the original vertices are left untouched
var gclone = mesh.geometry.clone();
for (var i = 0; i < gclone.vertices.length; i++) {
  // to world space, then projected into normalized device coordinates
  gclone.vertices[i].applyMatrix4(mesh.matrixWorld).project(camera);
}
gclone.computeBoundingBox();
var zExtent = gclone.boundingBox.max.z - gclone.boundingBox.min.z;
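A different sketch (my own, not part of the answer above) that derives near and far from the bounding sphere in world space; it assumes the camera is looking at the model and nothing closer needs to stay in the frustum:
// fit near/far around the model using its bounding sphere (sketch)
mesh.geometry.computeBoundingSphere();
var sphere = mesh.geometry.boundingSphere.clone().applyMatrix4(mesh.matrixWorld);
var dist = camera.position.distanceTo(sphere.center);

camera.near = Math.max(0.1, dist - sphere.radius);
camera.far = dist + sphere.radius;
camera.updateProjectionMatrix(); // required after changing near/far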

Three.js bounding box wrong alignment

In my program, I add points to a particle system and then calculate bounding box for it as:
var object = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
var box = new THREE.BoxHelper( object, 0xffff00 );
scene.add(box);
geometry is an instance of BufferGeometry, and it contains all the points constituting the particle system.
What I see is that the bounding box is wrongly aligned; it is oriented perpendicular to the expected direction.
I expect the wireframe structure to envelop the point cloud.
Do I need to do something extra here?
Edit:
Code I am working upon is in github repo:
github file
In the function ParticleSystemBatcher.prototype.push, points read from a file are pushed into the particle system. I have added the code above at the end of this function. The bounding box does appear, but it is wrongly aligned.
You have a THREE.ShaderMaterial that applies some logic to position these vertices. Hence, the rendered result is different from the result stored in main memory.
You can debug this by making a Mesh or Sprite for each particle and positioning it where you expect the particle to be, using just the scene graph (object.position.set()). The result will be a bunch of dots that are not in the same space as your particle system. These will also fit the bounding box.
The solution is to apply the same transformation that is being applied by the shader.
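A sketch of that idea, assuming purely for illustration that the shader adds a uniform offset to each vertex (shaderOffset is hypothetical; substitute whatever transform your vertex shader actually applies), and using Box3Helper to draw the resulting box directly:
// clone the CPU-side positions, apply the shader's transform, then build the box
var positions = geometry.getAttribute('position');
var displaced = new THREE.Vector3();
var box = new THREE.Box3();

for (var i = 0; i < positions.count; i++) {
  displaced.fromBufferAttribute(positions, i).add(shaderOffset); // hypothetical uniform
  box.expandByPoint(displaced);
}

scene.add(new THREE.Box3Helper(box, 0xffff00));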

three.js delay in updating local clipping planes

For realising a scrollable text container (using my own bitmap fonts, which are basically small sprite meshes), I am using local clipping planes.
When my text container moves, the clipping planes are updated according to the global boundaries of the container.
This works perfectly except for fast movements. In that case the clipping planes lag slightly behind the container, making the text shine through where it shouldn't.
My first thought was that the code needed for updating the clipping planes might cause the delay, but even when I apply this order:
1. update the text box position
2. update the clipping planes
3. render()
the delay still exists
Is the reason maybe located in the threejs framework in how the actual clipping is applied?
Here's a small code snippet that shows how I compute my upper clipping plane using two helper meshes. One is a plane mesh positioned orthogonally on my text object (the red plane in the picture). The other is a THREE.Object3D positioned in the middle of the upper edge, used for computing the right plane constant.
// get the world direction of a helper plane mesh that is located orthogonally on my text plane
var upperClippingPlaneRotationProxyMeshWorldDirection = _this.upperClippingPlaneRotationProxyMesh.getWorldDirection();
// get the world position of a helper 3d object that is located in the middle of the upper edge of my text plane
var upperClippingPlanePositionProxyObjPosition = _this.upperClippingPlanePositionProxyObj.getWorldPosition();
// a plane through origin which makes it easier for computing the plane constant
var upperPlaneInOrigin = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, 0);
var dist = upperPlaneInOrigin.distanceToPoint(upperClippingPlanePositionProxyObjPosition);
var upperClippingPlane = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, dist * -1);
// clipping plane update
_this.myUpperClippingPlane.copy(upperClippingPlane);
[picture showing the text object with clipping plane helpers]
I found the reason for the delay. In my matrix-updating code I only called updateMatrix() on the text object when it moves. To make sure that its child objects, including the helper meshes, update immediately, I had to call updateMatrixWorld(true); this ensures the clipping planes are computed from up-to-date world matrices.
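A sketch of that per-frame order (textContainer, scrollDelta and updateClippingPlanes are placeholders for the poster's own code):
// move the container, force its whole subtree's world matrices to update,
// then recompute the clipping planes before rendering
textContainer.position.y += scrollDelta;
textContainer.updateMatrixWorld(true); // updates the helper meshes too
updateClippingPlanes();
renderer.render(scene, camera);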
