Three.js bounding box wrong alignment

In my program, I add points to a particle system and then calculate a bounding box for it as follows:
var object = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
var box = new THREE.BoxHelper( object, 0xffff00 );
scene.add(box);
geometry is an instance of BufferGeometry and contains all the points constituting the particle system.
What I see is that the bounding box is wrongly aligned: it lies perpendicular to the expected direction, whereas I expect the wireframe box to envelop the point cloud.
Do I need to do something extra here?
Edit:
The code I am working on is in a github repo:
github file
In the function ParticleSystemBatcher.prototype.push, points read from a file are pushed into the particle system. I have added the code above at the end of this function. The bounding box does appear, but it is aligned wrongly.

You have a THREE.ShaderMaterial which applies some logic to position these vertices. Hence, the rendered result is different from the result stored in main memory.
You can debug this by making a Mesh or sprite for each particle and positioning it where you expect the particle to be, using just the scene graph (object.position.set()). The result will be a bunch of dots that are not in the same space as your particle system. These will, however, fit the bounding box.
The solution is to apply the same transformation that is being applied by the shader.
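For illustration, here is a minimal sketch of that idea, assuming the vertex shader simply adds a uniform offset to each position; the uniform name u_offset and the displacement logic are assumptions, not taken from the linked repo:

// Clone the geometry and mirror the shader's (assumed) displacement on the CPU,
// so the box is computed from the positions as they are actually rendered.
var boxGeometry = geometry.clone();
var positions = boxGeometry.attributes.position;
var offset = material.uniforms.u_offset.value; // hypothetical uniform

for (var i = 0; i < positions.count; i++) {
    positions.setXYZ(
        i,
        positions.getX(i) + offset.x,
        positions.getY(i) + offset.y,
        positions.getZ(i) + offset.z
    );
}

var boxObject = new THREE.Mesh(boxGeometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
scene.add(new THREE.BoxHelper(boxObject, 0xffff00));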

Related

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through this list (in the code we do this by moving the camera's y position)
In my app I'm trying to make my mouseover events work for the second state, but it's tricky since the GPU transforms the vertices, so the updated vertices are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
I took this approach because I looked into the Raycaster and Mesh code for THREE.js, and it seems like they check both boundingSphere and boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work fine, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that's wrong about this approach is maybe I'm not projecting the mouse coords right? Since all the objects now lie on the plane z = -50 would I need to project those mouse coordinates to that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes equal in number to the planes in the scene, set their z position to -50, and intersect with these when in state #2.
A bit hacky, but it works for now.
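A minimal sketch of that workaround, assuming planes holds the shader-displaced meshes; the transparent material settings are just one common way to keep a mesh unrendered but still raycastable:

// One invisible proxy per real plane, placed where the shader puts it in state #2.
var proxyPlanes = planes.map(function (plane) {
    var proxy = new THREE.Mesh(
        plane.geometry.clone(),
        new THREE.MeshBasicMaterial({ transparent: true, opacity: 0, depthWrite: false })
    );
    proxy.position.copy(plane.position);
    proxy.position.z = -50; // matches the vertex shader's displacement
    scene.add(proxy);
    return proxy;
});

// In state #2, raycast against the proxies instead of the displaced planes.
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(proxyPlanes);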

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (object3D's using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In the code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model and remove the points corresponding to those rays from the list of all points, leaving me with a list of the points the camera can see.
So far so good; however, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// create the rays from each point in the grid to the camera position
for (var point in gridPoints.geometry.vertices) {
    direction = gridPoints.geometry.vertices[point].clone();
    vector.subVectors(direction, startPoint);
    ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // pop() ignores its argument and removes the last element;
        // splice out the occluded point instead
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoints.geometry.vertices[point]), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better of way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points; you should know what color each sphere should be, by the same color calculation the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
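As a rough sketch of this idea; pickingScene, pickingMeshes and the index-to-color scheme are assumptions for illustration, and the render-to-target call has changed between three.js revisions:

// Render each sphere with a flat color that encodes its index, off-screen.
var size = 256; // a reduced resolution is fine for a visibility test
var pickingTarget = new THREE.WebGLRenderTarget(size, size);

pickingMeshes.forEach(function (mesh, i) {
    // index 0 is reserved for the background, so offset by 1
    mesh.material = new THREE.MeshBasicMaterial({ color: new THREE.Color(i + 1) });
});

renderer.setRenderTarget(pickingTarget); // older revisions: renderer.render(scene, camera, target)
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);

// Read the pixels back and decode the surviving (i.e. visible) indices.
var buffer = new Uint8Array(size * size * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, size, size, buffer);

var visibleIds = new Set();
for (var i = 0; i < buffer.length; i += 4) {
    var id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];
    if (id > 0) visibleIds.add(id - 1);
}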

three.js delay in updating local clipping planes

For realising a scrollable text container (using my own bitmap fonts, which are basically small sprite meshes) I am using local clipping planes.
When my text container moves, the clipping planes are updated according to the global boundaries of my container.
This works perfectly except for fast movements: in that case the clipping planes lag slightly behind the container, making the text shine through where it shouldn't.
My first thought was that the code for updating the clipping planes might cause the delay, but even when I apply this order:
1. update the text box position
2. update the clipping planes
3. render()
the delay still exists.
Is the reason maybe located in the three.js framework itself, in how the actual clipping is applied?
Here's a small code snippet that shows how I compute my upper clipping plane using two helper meshes. One is a plane that stands orthogonally on my text object (the red plane in the picture). The other is a THREE.Object3D positioned in the middle of the upper edge, used for computing the right plane constant.
// get the world direction of a helper plane mesh that stands orthogonally on my text plane
var upperClippingPlaneRotationProxyMeshWorldDirection = _this.upperClippingPlaneRotationProxyMesh.getWorldDirection();
// get the world position of a helper 3d object that is located in the middle of the upper edge of my text plane
var upperClippingPlanePositionProxyObjPosition = _this.upperClippingPlanePositionProxyObj.getWorldPosition();
// a plane through the origin, which makes it easier to compute the plane constant
var upperPlaneInOrigin = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, 0);
var dist = upperPlaneInOrigin.distanceToPoint(upperClippingPlanePositionProxyObjPosition);
var upperClippingPlane = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, dist * -1);
// clipping plane update
_this.myUpperClippingPlane.copy(upperClippingPlane);
picture showing the text object with clipping plane helpers
I found the reason for the delay. In my matrix-updating code I only called updateMatrix() on the text object when it moved. To make sure that its child objects, including the helper meshes, update instantly, I had to call updateMatrixWorld(true); this makes sure the clipping planes are computed correctly.
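In code, the fix boils down to something like this; textContainer, scrollOffset and updateClippingPlanes are hypothetical names for the container and the update routine described above:

// 1. update the text box position
textContainer.position.y = scrollOffset;

// 2. force a recursive world-matrix update so the helper children are current
//    for this frame, then recompute the clipping planes from them
textContainer.updateMatrixWorld(true); // updateMatrix() alone leaves the children stale
updateClippingPlanes();

// 3. render with up-to-date planes
renderer.render(scene, camera);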

Creating lines that can be modified from ends

So I'd like to create some lines that can be modified via the points connecting them.
An example of initial state
The first one has been moved down, the second one up, and the third one right and down.
On the implementation side I currently have two meshes. The first one is stretched out so that it covers the distance from its starting point to the next point, and the second one marks the starting point.
var meshLine = new THREE.Mesh(boxGeometry, material);
meshLine.position.set(x, y, z);
// scale is a Vector3 property, so use .set() rather than calling it as a function
meshLine.scale.set(1, 1, distancetonextpoint);

var meshPoint = new THREE.Mesh(sphereGeometry, material);
meshPoint.position.set(x, y, z);
meshPoint.scale.set(2, 2, 2);
What I want is that when the user drags a circular point, the connected lines stretch or change their position according to the point being dragged.
Is there some more reasonable solution for this? I feel mine is not quite good and clean, and I'd have to do quite a lot of heavy lifting to get the movement done.
I've also looked at this example which looks visually very nice but could not integrate it to my system.
You mean you need to edit the object's geometry by dragging its vertices (here a line).
Vertices can't be dragged by themselves, so:
1. Loop through the geometry and create little spheres at each vertex position;
2. Set up a raycaster to pick those spheres, as in the examples;
3. Your screen is 2D, so to drag objects in 3D you need a surface perpendicular to the screen that intersects the sphere position. For this, set an invisible plane at the vertex position and make it look at the camera;
4. Once you can correctly drag the spheres, tell the corresponding vertices on the object (your lines) that they must keep the same position as their spheres;
5. End with geometry.verticesNeedUpdate = true.
And you have your new geometry; a minimal sketch follows below.
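Here is a sketch of steps 1, 4 and 5, assuming an old-style THREE.Geometry line and leaving the picking and dragging of the handle spheres (steps 2 and 3) to something like DragControls; lineMesh, sphereGeometry and handleMaterial are hypothetical names:

// Step 1: one draggable handle sphere per vertex of the line mesh.
var handles = lineMesh.geometry.vertices.map(function (vertex, i) {
    var handle = new THREE.Mesh(sphereGeometry, handleMaterial);
    handle.position.copy(vertex);
    handle.userData.vertexIndex = i; // remember which vertex this handle edits
    scene.add(handle);
    return handle;
});

// Steps 4 and 5: while a handle is dragged, copy its position back into the
// geometry and flag the vertices for re-upload to the GPU.
function onHandleDrag(handle) {
    lineMesh.geometry.vertices[handle.userData.vertexIndex].copy(handle.position);
    lineMesh.geometry.verticesNeedUpdate = true;
}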
For code details on picking objects, look at the official picking example, draggable cubes.
This example shows how to use it for editing objects.
Comment if you need more explanations.

Obtaining normal of the mesh face using raycaster intersectObjects - Three.js

I tried to obtain the normal of the mesh face using these:
ray = new THREE.Raycaster(x, y);
var intersection = ray.intersectObjects(objectsOptical, true);
var vector = intersection[0].face.normal;
I added intersection[0].point + intersection[0].face.normal (multiplied by a constant) as one vertex, and intersection[0].point as the second vertex of a (gray) line. And I got this (red lines are rays, and gray should be normals, but they are not):
Illustrative image
Please help me to obtain the NORMALS of the mesh FACE.
Thank you.
The normals that you have plotted with red lines look like they might be correct when you take perspective projection into account.
The raycast test hits a single triangular face of your mesh. The normal you are referring to is the normal of the face the ray hit, i.e. the face from the original mesh.
In the source code for THREE.Raycaster the intersection calculations can be seen returning the face directly.
Elsewhere it is suggested that Ray.intersectObjects() requires face centroids. However I'm not sure about this since the source code doesn't refer to centroids.
Perhaps the normals in the original geometry weren't correct. Try this function first:
geometry.computeFaceNormals();
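If the plotted normals still look wrong after that, one common pitfall is that face.normal is expressed in the mesh's object space. Here is a minimal sketch of plotting a normal in world space, assuming the old Geometry-based line API and an arbitrary display length:

geometry.computeFaceNormals(); // make sure the face normals exist before raycasting

var hit = intersection[0];
// face.normal is in object space; transform it to world space before plotting
var normalMatrix = new THREE.Matrix3().getNormalMatrix(hit.object.matrixWorld);
var normalWorld = hit.face.normal.clone().applyMatrix3(normalMatrix).normalize();

var lineGeometry = new THREE.Geometry();
lineGeometry.vertices.push(
    hit.point.clone(),
    hit.point.clone().add(normalWorld.multiplyScalar(10)) // 10 is an arbitrary length
);
scene.add(new THREE.Line(lineGeometry, new THREE.LineBasicMaterial({ color: 0x888888 })));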
