Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial, evenly spaced along the y axis.
The list will have two states:
Displaying the meshes as they are
Displaying the meshes with every vertex transformed by the vertex shader to the same arbitrary depth, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through the list (in the code we do this by moving the camera's y position)
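For context, the flattening in the second state amounts to something like this (a minimal sketch; the uFlatten uniform and shader code are illustrative, not the actual app code):

// Hypothetical plane material: uFlatten = 0 renders state 1, uFlatten = 1 renders state 2
const material = new THREE.ShaderMaterial({
  uniforms: {
    uFlatten: { value: 0.0 }
  },
  vertexShader: `
    uniform float uFlatten;
    void main() {
      vec3 p = position;
      // In state 2 every vertex is pushed to the same depth on the GPU;
      // the CPU-side position attribute never changes.
      p.z = mix(p.z, -50.0, uFlatten);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});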
In my app I'm trying to make my mouseover events work in the second state, but it's tricky: the GPU transforms the vertices, so the updated positions are never reflected in the geometry attributes on the JS side.
Note: I've looked into GPU picking and don't want to use it, because I believe there should be a simpler way to do this that doesn't require render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
// Collapse the CPU-side bounds to the depth the GPU renders at
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
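In code, the bounding-sphere half of that looks roughly like this:

// Recompute the sphere from the untransformed geometry, then move its
// center to the depth the vertex shader pushes the vertices to
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;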
I took this approach because I looked at the Raycaster and Mesh code in THREE.js, and they appear to check both the boundingSphere and the boundingBox for object intersections. I figured that if I modified both of them to reflect the transforms done on the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coordinates correctly. Since all the objects now lie on the plane z = -50, do I need to project the mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes, one for each plane in the scene, set their z position to -50, and intersect against those when in state #2.
A bit hacky, but works for now.
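Roughly, the workaround looks like this (names are illustrative):

// One invisible hit-proxy per visible plane, positioned where the
// vertex shader actually draws the vertices in state 2
const proxies = planes.map(function (plane) {
  const proxy = new THREE.Mesh(
    plane.geometry.clone(),
    new THREE.MeshBasicMaterial({ visible: false })
  );
  proxy.position.copy(plane.position);
  proxy.position.z = -50; // match the z the vertex shader outputs
  scene.add(proxy);
  return proxy;
});

// In state 2, raycast against the proxies instead of the real planes
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(proxies);

Note that the Raycaster intersects meshes regardless of their visibility, which is exactly what makes invisible hit-proxies work.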
Related
I'm using ThreeJS, but this is a general math question.
My end goal is to position an object in my scene using 2D screen space coordinates; however, I want a specific z position in the perspective projection.
As an example, I have a sphere that I want to place towards the bottom left of the screen while having the sphere be 5 units away from the camera. If the camera were to move, the sphere would maintain its perceived size and position.
I can't use an orthographic camera because the sphere needs to be able to move around in the perspective projection. At some point the sphere will be undocked from the screen and interact with the scene using physics.
I'm sure the solution lies somewhere in the camera's inverse matrix; however, that is beyond my abilities at the moment.
Any help is greatly appreciated.
Thanks!
Your post includes too many questions, which is out of scope for StackOverflow. But I’ll try to answer just the main one:
Create a plane Mesh using PlaneGeometry.
Rotate it to face the camera and place it 5 units away from the camera.
Add it as a child with camera.add(plane); so whenever the camera moves, the plane moves with it.
Use a Raycaster's .setFromCamera(coords, camera) method followed by .intersectObject(plane) to convert the x, y screen coords into the x, y, z world position where the ray intersects the plane. You can read about both methods in the docs.
Once it's working, make the plane invisible with visible = false
You can see the raycaster working in this official example: https://threejs.org/examples/#webgl_geometry_terrain_raycast
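A minimal sketch of those steps (assuming a standard scene/camera setup; the sphere is the object being positioned):

// A large helper plane, 5 units in front of the camera, parented to it;
// as a child of the camera its default orientation already faces the camera
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(100, 100),
  new THREE.MeshBasicMaterial()
);
plane.position.z = -5; // the camera looks down its local -Z
camera.add(plane);
scene.add(camera); // the camera must be in the scene graph for its children to render

// Convert screen coords to a world position on that plane
const raycaster = new THREE.Raycaster();
const coords = new THREE.Vector2(-0.8, -0.8); // normalized device coords, bottom-left area
raycaster.setFromCamera(coords, camera);
const hit = raycaster.intersectObject(plane)[0];
if (hit) sphere.position.copy(hit.point);

// Once it's working, hide the helper plane (it will still be raycast)
plane.visible = false;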
[Updated with a JSFiddle here]
If you hover slightly outside the plane, the raycaster still thinks it's hovering over the object, because we modified the z position in the vertex shader.
For my project I have a carousel of planes (PlaneBufferGeometry and ShaderMaterial) that I need hover effects on.
However, I have one state where the planes are shrunk by animating each vertex's z coordinate in the vertex shader. In this state my hover effects using THREE.Raycaster are broken, because the positions in the BufferGeometry attribute aren't updated, so the Raycaster still uses the same uvs as the original-sized planes.
I already tried calling the following functions for every plane p after the vertex shader runs:
p.frustumCulled = false;
p.geometry.verticesNeedUpdate = true;
p.geometry.normalsNeedUpdate = true;
p.geometry.computeBoundingBox();
p.geometry.computeBoundingSphere();
p.geometry.computeFaceNormals();
p.geometry.computeVertexNormals();
p.geometry.attributes.position.needsUpdate = true;
I also know that if I just scaled each plane using THREE.Mesh's built-in scale, the uvs would raycast correctly, but I can't do that because there's a specific animation I can only achieve with the vertex shader.
Raycasting happens on the CPU. If you displace vertices on the GPU (in the vertex shader), raycasting can't work correctly, since the intersection test has no way of seeing the transformed vertices.
You now have two options. You can apply the transformation on the CPU instead of the GPU before performing the raycast. The other option is to use a different approach, such as GPU picking, to detect interaction with a 3D object.
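A sketch of the first option, mirroring the shader's displacement on the CPU (assuming, as in the question, that the shader just sets every vertex to a known z; the displacement would then be removed from the shader):

// Apply the same transform the vertex shader used to do, but on the
// CPU copy of the geometry, so the raycaster sees the real vertices
const pos = plane.geometry.attributes.position;
for (let i = 0; i < pos.count; i++) {
  pos.setZ(i, -50);
}
pos.needsUpdate = true;

// The cached bounds are stale after editing the attribute
plane.geometry.computeBoundingBox();
plane.geometry.computeBoundingSphere();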
I have a project for children (http://kinosura.kiev.ua/sova/) and I need to check the faceIndex of all the cubes on screen.
Right now I use the intersections array from the mouse, but that only works when the user points at a cube.
How can I cast a ray (or rays) from the camera to every object to check faceIndex?
I tried casting four rays to the cubes, but if I set cube.position as the origin, like this:
raycaster.setFromCamera( cube1.position, camera )
I get an empty intersections array.
I also tried using a static 2D vector as the origin (taking the coordinates from the mouse), but my renderer size is relative, so those coordinates change all the time... it doesn't work.
Thanks for any answer.
I suggest you try another approach. It appears that your cubes do not cover one another relative to the camera view, so you can use the surface normals: compare each one to the view direction to determine whether the face points toward or away from the camera, with a simple one-per-polygon dot product.
When you are creating your geometry, before adding it to a THREE.Mesh, call .computeFaceNormals() on it.
Instead of raycasting, iterate through all the faces, grab each face's surface normal, transform it relative to the view (the inverse transpose of the object's matrix), then dot() it with the view direction. It might sound complicated at first, but it's actually just a couple of steps and much faster than doing a lot of raycasts (which would probably include this test anyway!).
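A rough sketch of that test with the legacy Geometry API the question is using (assumes computeFaceNormals() has already run):

// A face is visible when its world-space normal points against the
// direction from the camera to the face
const normalMatrix = new THREE.Matrix3().getNormalMatrix(cube.matrixWorld);
const worldNormal = new THREE.Vector3();
const vertexWorld = new THREE.Vector3();

cube.geometry.faces.forEach(function (face, faceIndex) {
  worldNormal.copy(face.normal).applyMatrix3(normalMatrix).normalize();

  // approximate the view direction using the face's first vertex
  vertexWorld.copy(cube.geometry.vertices[face.a]).applyMatrix4(cube.matrixWorld);
  const viewDir = vertexWorld.sub(camera.position).normalize();

  if (worldNormal.dot(viewDir) < 0) {
    // faceIndex is facing the camera
  }
});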
I'm working on a simple Three.js demo that uses OrbitControls.js.
I'd like to change the behavior of panning in OrbitControls. Currently, when you pan the camera, it moves the camera in a plane that is perpendicular to the viewing direction. I'd like to change it so that the camera stays a constant distance from the ground plane and moves parallel to it. Google Earth uses a similar control setup.
Edit: I should have mentioned this detail in the first place, but I'd also like the point where you click and start dragging to remain directly under the cursor throughout the entire drag. There needs to be that solid connection between the mouse movement and what the user expects to happen on the screen. Otherwise, it feels as though I'm 'slipping' when I try to move around the scene.
Can someone give me a high-level explanation of how this might be done (with or without OrbitControls.js)?
EDIT: OrbitControls now supports panning parallel to the "ground plane", and it is the default.
To pan parallel to screen-space (the legacy behavior), set:
controls.screenSpacePanning = true;
Also available is MapControls, which has an API similar to that of Google Earth.
three.js r.94
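For reference, the map-style setup looks like this (the jsm import path assumes a current three.js build; in older builds MapControls is exported from OrbitControls.js):

import { MapControls } from 'three/examples/jsm/controls/MapControls.js';

// Pans parallel to the ground plane by default, Google-Earth style
const controls = new MapControls(camera, renderer.domElement);

// Or, with plain OrbitControls, pick the panning mode explicitly:
// controls.screenSpacePanning = false; // pan parallel to the ground plane
// controls.screenSpacePanning = true;  // legacy screen-space panning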
Some time ago I was working on exactly this issue, i.e. adaptation of OrbitControls.js to map navigation.
Here's the code of MapControls.js.
Here's the demo of the controls.
I figured it out. Here's the overview:
Store the mousedown position somewhere.
When the mouse moves, get the new mouse position.
For each of those positions, find the point on the plane where the click lands (you'll need to take each screen-space point into world space, then fire a ray from the camera through it to find its intersection with the plane; this page explains the ray-plane intersection test).
Subtract the world-space start intersection from the world-space end intersection to get the offset.
Subtract that offset from the camera's target point and you're done!
In the case of OrbitControls.js, the camera always looks at the target point, and its position is relative to that point, so when you change the target, the camera moves with it. Since the target always lies on the plane, the camera moves parallel to that plane (as long as you're only panning).
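A condensed sketch of those steps (assuming a ground plane at y = 0 and an OrbitControls-style camera/target pair):

// Drag-to-pan that keeps the grabbed ground point under the cursor
const ground = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0); // the plane y = 0
const raycaster = new THREE.Raycaster();
const start = new THREE.Vector3();
const end = new THREE.Vector3();

function groundPoint(ndc, out) {
  // ndc is the mouse position in normalized device coordinates
  raycaster.setFromCamera(ndc, camera);
  return raycaster.ray.intersectPlane(ground, out); // null if the ray misses
}

function onDragStart(ndc) {
  groundPoint(ndc, start);
}

function onDragMove(ndc) {
  if (!groundPoint(ndc, end)) return;
  // Move camera and target together so the motion stays parallel to the
  // ground plane and the grabbed point stays under the cursor
  const offset = start.clone().sub(end);
  controls.target.add(offset);
  camera.position.add(offset);
}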
You should set your camera's 'up' to the z axis:
camera.up.set(0,0,1)
Then the main problem with OrbitControls is its panUp() function, which needs fixing.
My pull request : https://github.com/mrdoob/three.js/pull/12727
The y axis is relative to the camera's axes, but it should be relative to a fixed plane in the world. To get the expected y axis, rotate the camera's x axis by 90° around the world z axis:
v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix
v.applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), Math.PI / 2 ); // rotate it 90° around world Z
v.multiplyScalar( distance );
panOffset.add( v );
Enjoy!
I have a geometry object, and I'm trying to add a Torus mesh that wraps around that geometry: when the geometry is clicked, a Torus shape is added on the line around the clicked location. However, I'm having trouble getting it to rotate correctly.
I get the torus to show up at the correct place, but I can't orient it around the line. I'm using a raycaster to get the clicked point, so I have the face and faceIndex of that point. Every implementation I've tried using rotation (via setEulerFromRotationMatrix) simply moves the torus mesh; it doesn't actually rotate it so that the line passes through the torus.
This seems like it should be trivial, but it's giving me a lot of trouble. What am I doing wrong? Here are two methods I tried, both unsuccessful and exhibiting the behavior above:
// attempt one
var rotationMatrix = new THREE.Matrix4();
rotationMatrix.makeRotationAxis(geometry.faces[fIndex].centroid.normalize(), Math.PI/2);
torusLoop.matrix.multiply(rotationMatrix);
torusLoop.rotation.setEulerFromRotationMatrix(torusLoop.matrix);
//attempt two, similar results to above attempt
tangent = geometry.tangents[segments/radiusSegments].normalize();
axis.crossVectors( up, tangent ).normalize();
var radians = Math.acos( up.dot( tangent ) );
matrix.makeRotationAxis( axis, radians );
torusLoop.rotation.setEulerFromRotationMatrix( matrix );
I need the torus to follow the curve of the spline, but it only stays flat; rotations simply cause it to move around, not change angles.
Never mind, I figured it out. For those wondering: I translated before rotating, which caused my figure to rotate around a different axis. The solution was to rotate first, then translate, and then, after creating the mesh, move it to the position where I needed it.
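In terms of transform order, the fix looks something like this (illustrative, reusing the axis/angle from attempt two; hit is the raycaster intersection):

// Rotate first, while the torus is still at the origin...
torusLoop.quaternion.setFromAxisAngle(axis, radians);
// ...then move it to the clicked point on the line
torusLoop.position.copy(hit.point);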