I tried to obtain the normal of the mesh face using these:
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera); // mouse: normalized device coordinates (-1..1)
var intersection = raycaster.intersectObjects(objectsOptical, true);
var vector = intersection[0].face.normal;
I then drew a gray line from intersection[0].point to intersection[0].point plus intersection[0].face.normal (scaled by a constant). And I got this (red lines are rays and gray should be normals - but they are not):
Illustrative image
Please help me to obtain NORMALS of the mesh FACE.
Thank you.
The normals that you have plotted look like they might be correct once perspective projection is taken into account.
The raycast test hits a single triangular face of your mesh. The normal you are referring to is the normal of the face the ray hit, i.e. the face normal from the original mesh's geometry.
In the source code for THREE.Raycaster you can see the intersection calculations returning the face directly.
Elsewhere it has been suggested that Ray.intersectObjects() requires face centroids, but I'm not sure about that, since the source code doesn't refer to centroids.
Perhaps the normals in the original geometry weren't correct. Try this function first:
geometry.computeFaceNormals();
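One further pitfall worth checking (an aside, not necessarily the asker's bug): face.normal is expressed in the mesh's local space, so if the object is rotated you must transform the normal by the object's world rotation before drawing it. In three.js that would be applying a Matrix3 from getNormalMatrix(mesh.matrixWorld); the underlying math, sketched in plain JavaScript with a 90-degree rotation about X standing in for the object's world rotation:

```javascript
// Apply a 3x3 rotation matrix (row-major) to a local-space normal,
// yielding the world-space normal. For a pure rotation the normal
// matrix is the rotation matrix itself.
function rotateNormal(m, n) {
  return [
    m[0] * n[0] + m[1] * n[1] + m[2] * n[2],
    m[3] * n[0] + m[4] * n[1] + m[5] * n[2],
    m[6] * n[0] + m[7] * n[1] + m[8] * n[2],
  ];
}

// Rotation of 90 degrees about the X axis (row-major 3x3).
const rotX90 = [
  1, 0, 0,
  0, Math.cos(Math.PI / 2), -Math.sin(Math.PI / 2),
  0, Math.sin(Math.PI / 2),  Math.cos(Math.PI / 2),
];

// A face normal pointing along local +Z ends up along world -Y.
const worldNormal = rotateNormal(rotX90, [0, 0, 1]);
```

If the gray lines in the image look rotated relative to the faces, this local-vs-world mismatch is a likely cause.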
Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed out effect and the user can scroll through this list (in the code we do this by moving the camera y position)
In my app I'm trying to make my mouseover events work for the second state, but it's tricky: the GPU transforms the vertices, so the updated positions are never reflected in the buffer attributes on the JS side.
Note: I've looked into GPU picking and don't want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
I took this approach because I looked into the Raycaster and Mesh code in THREE.js, and it seems they check both the boundingSphere and the boundingBox for object intersections. I thought that if I modified both to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coords correctly. Since all the objects now lie on the plane z = -50, would I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene, set their z position to -50, and intersect against these when in state #2.
A bit hacky, but it works for now.
[Updated with a JSFiddle here]
If you hover slightly outside the plane the raycaster still thinks it's hovering over the object because we modified the z position in the vertex shader
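The test those transparent proxy planes perform can also be done analytically: intersect the picking ray with the plane z = -50, then check the hit point against each plane's x/y extents. A plain-JS sketch of the math (function names are hypothetical; in three.js, Raycaster plus THREE.Plane would do the same):

```javascript
// Intersect a ray (origin o, direction d, both [x, y, z]) with the
// plane z = planeZ. Returns the hit point, or null if the ray is
// parallel to the plane or the plane is behind the ray origin.
function intersectPlaneZ(o, d, planeZ) {
  if (Math.abs(d[2]) < 1e-12) return null;   // parallel to the plane
  const t = (planeZ - o[2]) / d[2];
  if (t < 0) return null;                    // plane is behind the ray
  return [o[0] + t * d[0], o[1] + t * d[1], planeZ];
}

// Is the hit inside an axis-aligned plane centered at (cx, cy)
// with the given width and height?
function hitsPlane(hit, cx, cy, width, height) {
  return hit !== null &&
    Math.abs(hit[0] - cx) <= width / 2 &&
    Math.abs(hit[1] - cy) <= height / 2;
}

// Ray from a camera at the origin, tilted slightly in x, looking down -Z.
const hit = intersectPlaneZ([0, 0, 0], [0.1, 0, -1], -50);
```

This also avoids the caveat above: a hover just outside a plane's extents fails the hitsPlane check instead of registering a false intersection.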
For my project I have a carousel of planes (PlaneBufferGeometry and ShaderMaterial) that I need hover effects on.
However, I have one state where the planes are shrunk by animating each vertex's z coordinate in the vertex shader. In this state my hover effects using THREE.Raycaster are broken, because the positions in the BufferGeometry attribute aren't updated, so the Raycaster still tests against the original-sized planes.
I already tried calling the following functions for every plane p after the vertex shader runs:
p.frustumCulled = false;
p.geometry.verticesNeedUpdate = true;
p.geometry.normalsNeedUpdate = true;
p.geometry.computeBoundingBox();
p.geometry.computeBoundingSphere();
p.geometry.computeFaceNormals();
p.geometry.computeVertexNormals();
p.geometry.attributes.position.needsUpdate = true;
I also know that if I just scaled each plane using THREE.Mesh's built-in scale, the raycast would work correctly, but I can't do that because there's a specific animation I can only achieve with the vertex shader.
Raycasting happens on the CPU. If you displace vertices on the GPU (in the vertex shader), raycasting can't work correctly, since the intersection test has no way to see the transformed vertices.
You have two options. You can apply the transformation on the CPU instead of the GPU before performing the raycast. The other option is to use a different approach, such as GPU picking, to detect interaction with a 3D object.
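A minimal sketch of the first option, assuming the shader's displacement is something you can reproduce in JS (here a flat z = -50, mirroring the earlier posts; the function name is hypothetical): copy the original positions, apply the same displacement the shader applies, and write the result back into the position attribute before raycasting.

```javascript
// Reproduce the vertex shader's displacement on the CPU so the
// raycaster sees the same geometry the GPU renders.
// positions is a flat [x0, y0, z0, x1, y1, z1, ...] array, the same
// layout as BufferGeometry's position attribute.
function displaceToZ(positions, z) {
  const out = positions.slice();
  for (let i = 2; i < out.length; i += 3) out[i] = z;
  return out;
}

// Two vertices of a plane originally at z = 0, displaced to z = -50.
const displaced = displaceToZ([0, 0, 0, 1, 1, 0], -50);
// In three.js you would then copy `displaced` into
// geometry.attributes.position, set needsUpdate = true, and
// recompute the bounding volumes before intersectObjects().
```

This keeps the CPU and GPU geometry in sync at the cost of duplicating the displacement logic in two places.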
I am building a 3d terrain in three.js. I do this by creating a PlaneGeometry, and setting the z values of the PlaneGeometry vertices to elevation data. I then load NAIP (National Agriculture Imagery Program) Imagery into a MeshPhongMaterial. The mesh is created from the PlaneGeometry and MeshPhongMaterial. This all works great.
Where I am stuck is trying to plot a trail on top of this mesh. I do this by creating a Geometry whose vertices are of type Vector3. I can't figure out how to set the z values of these vertices so that the Geometry lies on top of the mesh. In the attached images I have just hard-coded 2 for the z value, and you can see that in some places the trail (white line) is above the terrain, and in others it disappears below it.
I tried using the z value from the point where the trail intersects the PlaneGeometry; that didn't work.
Is there an easy way to clamp the trail to the mesh?
You can raycast straight down onto your mesh from each point in your path to get the point on the surface, then move the path point slightly above it.
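Since the terrain comes from a regular PlaneGeometry grid, an alternative to raycasting each trail point is to sample the elevation grid directly with bilinear interpolation and add a small offset. A plain-JS sketch (assumes the trail point has already been converted into the grid's own x/y coordinates):

```javascript
// Sample a heightfield with bilinear interpolation.
// heights is a rows x cols grid of elevations; x and y are in
// grid coordinates (0..cols-1, 0..rows-1).
function sampleHeight(heights, x, y) {
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const x1 = Math.min(x0 + 1, heights[0].length - 1);
  const y1 = Math.min(y0 + 1, heights.length - 1);
  const fx = x - x0, fy = y - y0;
  const top = heights[y0][x0] * (1 - fx) + heights[y0][x1] * fx;
  const bottom = heights[y1][x0] * (1 - fx) + heights[y1][x1] * fx;
  return top * (1 - fy) + bottom * fy;
}

// Clamp a trail point to sit slightly above the terrain.
const grid = [
  [0, 0],
  [10, 10],
];
const z = sampleHeight(grid, 0.5, 0.5) + 0.1; // halfway up the slope
```

This avoids a raycast per trail point and matches the surface exactly on a regular grid; raycasting is the more general answer if the mesh is irregular.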
Just trying some intersection experiments with three.js. I wanted to see if I could figure out when a cube intersects a PlaneGeometry. I can get the bbox of the cube easily enough with
var bbox = new THREE.Box3().setFromObject(mesh);
where mesh is the cube's mesh. I also have a PlaneBufferGeometry object. It looks like I might get the intersection info from the intersectsBox() method on the Plane object (or, vice versa, the intersectsPlane() method on Box3). But I can't see an easy way to get a Plane object from the PlaneBufferGeometry (or PlaneGeometry). I can create the Plane from three coplanar points, or from a point on the plane and the normal, but I don't see offhand how to get those from a given PlaneGeometry. If I had constructed the PlaneBufferGeometry myself, I could derive them from its inputs, but how do I obtain the Plane from an arbitrary PlaneGeometry object? Still looking, but suggestions are welcome.
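One way (a sketch, not the only approach): build the Plane from a point on it and its normal, i.e. THREE.Plane's setFromNormalAndCoplanarPoint, taking the normal (0, 0, 1) and any vertex of the PlaneGeometry after applying the mesh's world matrix. The box-versus-plane test itself reduces to checking whether the box's corners straddle the plane; the math in plain JS, using the same n . p + d = 0 convention THREE.Plane uses:

```javascript
// A plane stored as (unit normal n, constant d) with n . p + d = 0,
// built from a point on the plane and its normal.
function planeFromPointNormal(p, n) {
  return { n: n, d: -(n[0] * p[0] + n[1] * p[1] + n[2] * p[2]) };
}

function signedDistance(plane, p) {
  return plane.n[0] * p[0] + plane.n[1] * p[1] + plane.n[2] * p[2] + plane.d;
}

// An axis-aligned box intersects the plane iff its corners are not
// all strictly on the same side.
function boxIntersectsPlane(min, max, plane) {
  let neg = false, pos = false;
  for (const x of [min[0], max[0]])
    for (const y of [min[1], max[1]])
      for (const z of [min[2], max[2]]) {
        if (signedDistance(plane, [x, y, z]) < 0) neg = true;
        else pos = true;
      }
  return neg && pos;
}

const plane = planeFromPointNormal([0, 0, 1], [0, 0, 1]); // the plane z = 1
const hit = boxIntersectsPlane([0, 0, 0], [1, 1, 2], plane);   // straddles z = 1
const miss = boxIntersectsPlane([0, 0, 2], [1, 1, 3], plane);  // entirely above
```

Note that a PlaneGeometry is a bounded rectangle while THREE.Plane is infinite, so a positive plane test may still need a bounds check against the rectangle's extents.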
I have a geometry object, and I'm trying to add a Torus mesh that goes around that geometry. What I'm trying to do is have the original geometry, and then when the geometry is clicked, it adds a Torus shape on the line around the location that was clicked. However, I'm having trouble getting it to rotate correctly.
I get the torus to show up at the correct place, but I can't orient it around the line. I'm using a raycaster to get the point clicked, so I have the face and the faceindex of the point clicked. On every implementation I try using rotation (using setEulerFromRotationMatrix), it simply moves the location of the torus mesh, not actually rotate it to allow the line to go through the torus.
This seems like it would be trivial, but it's giving me a lot of trouble. What am I doing wrong? Here are two methods I tried, both unsuccessful and exhibiting the behavior above:
var rotationMatrix = new THREE.Matrix4();
rotationMatrix.makeRotationAxis(geometry.faces[fIndex].centroid.normalize(), Math.PI/2);
torusLoop.matrix.multiply(rotationMatrix);
torusLoop.rotation.setEulerFromRotationMatrix(torusLoop.matrix);
//attempt two, similar results to above attempt
tangent = geometry.tangents[segments/radiusSegments].normalize();
axis.crossVectors( up, tangent ).normalize();
var radians = Math.acos( up.dot( tangent ) );
matrix.makeRotationAxis( axis, radians );
torusLoop.rotation.setEulerFromRotationMatrix( matrix );
I need the torus knot to follow the curve of the spline, but it will only stay flat, and rotations simply cause it to move around, not change angles.
Never mind, I figured it out. For those wondering: I translated before rotating, which caused my figure to rotate around a different axis. My solution was to rotate first, then translate, and then, after creating the mesh, move it to the position I needed.
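The order dependence is easy to see with the transforms written out: rotating first spins the object about its own origin, while translating first makes the rotation swing it around the world origin. A plain-JS sketch in 2D (the same reasoning holds for the 3D case above):

```javascript
// Rotate a 2D point about the origin by theta radians.
function rotate(p, theta) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [c * p[0] - s * p[1], s * p[0] + c * p[1]];
}

function translate(p, t) {
  return [p[0] + t[0], p[1] + t[1]];
}

const p = [1, 0];
// Rotate first, then translate: the point turns about its own
// origin, then the whole result is moved into place -> (5, 1).
const rotateThenTranslate = translate(rotate(p, Math.PI / 2), [5, 0]);
// Translate first, then rotate: the rotation swings the point
// around the world origin instead -> (0, 6).
const translateThenRotate = rotate(translate(p, [5, 0]), Math.PI / 2);
```

The same rule explains the torus behaving as it did: applying the translation before the rotation made the rotation orbit the torus around the origin rather than reorienting it in place.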