Get the new camera position from an onmousemove event on a sphere object and transform it onto a 2D map location - three.js

I have made a sphere object with the camera at position (0, 0, 1), using OrbitControls(camera). Now when I drag the mouse over my sphere to either side, how can I get the new camera position and transform it onto a 2D map? I have searched but could not find any relevant sources; if somebody can help, I would be really grateful.

Your camera position is stored as a Vector3 in camera.position; you can also get the viewing direction from controls.target. With the camera.position vector you can easily calculate the position on a 2D map from the camera's x and z coordinates.
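A minimal sketch of that mapping, assuming the camera orbits the origin (as it does with OrbitControls here) and that the 2D map is an equirectangular image of mapWidth x mapHeight pixels; all of these names are placeholders, not from the question:

// Convert the camera's orbit position to a pixel location on a 2D map.
function cameraToMap( camera, mapWidth, mapHeight ) {
    var r = camera.position.length();
    // Heading in the ground plane comes from the x and z coordinates.
    var lon = Math.atan2( camera.position.x, camera.position.z ); // -PI..PI
    var lat = Math.asin( camera.position.y / r );                 // -PI/2..PI/2
    return {
        x: ( lon / ( 2 * Math.PI ) + 0.5 ) * mapWidth,
        y: ( 0.5 - lat / Math.PI ) * mapHeight
    };
}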

Related

Can I convert camera rotation to sphere mesh rotation?

I have a very large SphereBufferGeometry that I project an equirectangular 360 image onto in order to create a 360 scene. My application has two functions: 1) set the initial view of which part of the scene should be viewed at scene load, and 2) always load the scene at those saved coordinates.
To set the initial view of which part of the scene should be viewed at scene load:
I can use OrbitControls to move the camera to look at a certain direction of this sphere, and I can save the position of the camera when I look at a 360 scene position I like.
To always load the scene at that saved coordinates:
I can set the position of the camera to this previously saved position and view the scene at my favorite starting location.
This works well, but I do not want to set the camera position each time a 360 scene loads. Rather, my requirement is to load the scene with the camera in a neutral position and instead rotate the sphere mesh itself so that I am looking at the [x,y,z] position of my favorite view set earlier.
Is it at all possible to take a camera position and rotation and use those values to rotate or position a mesh on my sphere?
Can I use OrbitControls to rotate the entire scene/sphere instead of the camera onClick?
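This thread doesn't include an accepted answer, but one plausible approach, sketched here as an assumption: leave the camera in its neutral pose and apply the inverse of the saved camera rotation to the sphere mesh, so the saved view direction ends up in front of the camera.

// Saved earlier, while looking at the favorite spot:
var savedQuaternion = camera.quaternion.clone();

// On scene load, rotate the sphere instead of the camera
// (.invert() in recent three.js releases, .inverse() in older ones):
sphereMesh.quaternion.copy( savedQuaternion ).invert();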

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through this list (in the code we do this by moving the camera's y position).
In my app I'm trying to make my mouseover events work for the second state, but it's tricky: since the GPU transforms the vertices, the updated positions are not reflected in the geometry attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
// Flatten the computed bounding box to the shader-displaced depth:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then to change the boundingSphere's center to have the same z position of -50 after computing it.
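Presumably something like this (my reconstruction of the step just described):

plane.geometry.computeBoundingSphere();
// Shift the center to where the vertex shader moves the mesh.
plane.geometry.boundingSphere.center.z = -50;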
I took this approach because I looked into the Raycaster and Mesh code in THREE.js, and it seems they check both boundingSphere and boundingBox for object intersections. I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work fine, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
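For reference, the usual way those normalized coordinates are produced from a DOM event; this snippet is my assumption, not code from the question:

var mouse = new THREE.Vector2();
window.addEventListener( 'mousemove', function ( event ) {
    // Convert pixel coordinates to normalized device coordinates (-1 to +1).
    mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
    mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
} );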
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coords correctly. Since all the objects now lie on the plane z = -50, would I need to project those mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create an additional transparent plane for each plane in the scene, set the z position of these planes to -50, and intersect with them when in state #2, as sketched below.
A bit hacky, but works for now.
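For illustration, a sketch of that workaround under the names used above (scene and planes are assumed to exist):

// One transparent hit-plane per visible plane, placed at the depth
// the vertex shader moves the real geometry to.
var hitPlanes = planes.map( function ( plane ) {
    var hit = new THREE.Mesh(
        plane.geometry.clone(),
        new THREE.MeshBasicMaterial( { transparent: true, opacity: 0, depthWrite: false } )
    );
    hit.position.copy( plane.position );
    hit.position.z = -50;
    scene.add( hit );
    return hit;
} );

// In state #2, raycast against the hit-planes instead:
raycaster.setFromCamera( mouse, camera );
var intersects = raycaster.intersectObjects( hitPlanes );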

NDC coordinates to Camera Coordinates

I have a scenario in which the mouse is over the screen, and I want to calculate the 3D coordinate the mouse represents in the XY-plane of the camera.
The nice way would be to convert the NDC mouse coordinates that I have into the matching camera coordinates, which is what I'm struggling with. Alternatively, I could create a fake mesh at the given distance and just use a raycaster, but that seems really ugly.
Since I was struggling, I tried looking up how the three.js Raycaster calculates 3D coordinates from NDC coordinates and found the matching lines for PerspectiveCamera:
this.ray.origin.setFromMatrixPosition( camera.matrixWorld );
this.ray.direction.set( coords.x, coords.y, 0.5 ).unproject( camera ).sub( this.ray.origin ).normalize();
That doesn't make much sense to me, since I would expect the ray origin to depend on the mouse coordinates as well and not just the camera; the ray direction, by contrast, I would expect to depend only on the camera and not on the coordinates.
I guess I am thinking too much in terms of orthogonal coordinate systems and not in terms of a perspective camera, but I can't make any sense of it. If somebody has a good reference on perspective cameras, that'd be great too. Cheers, Tom
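For what it's worth: with a perspective camera every picking ray starts at the camera position, so only the direction depends on the mouse, which is consistent with the raycaster lines quoted above. Building on the same unproject trick, here is a hedged sketch of the point the mouse represents on the camera's XY-plane at a given distance (dist and the function name are assumptions):

// Convert NDC mouse coords to the world-space point that lies on the
// plane parallel to the camera's XY-plane, dist units in front of it.
function ndcToCameraPlane( ndcX, ndcY, camera, dist ) {
    var point = new THREE.Vector3( ndcX, ndcY, 0.5 ).unproject( camera );
    var dir = point.sub( camera.position ).normalize();
    // Scale the ray so its component along the view axis equals dist.
    var viewDir = new THREE.Vector3( 0, 0, -1 ).applyQuaternion( camera.quaternion );
    var t = dist / dir.dot( viewDir );
    return camera.position.clone().addScaledVector( dir, t );
}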

Three.js place an object in rotating camera's fov

I have a fixed-position, perspective camera that rotates around all 3 axes via keyboard input. At random intervals, independent of user input, I need to place objects within the camera's field of view, no matter what direction the camera is looking. The objects will also need to be offset specific x and y distances from the center of the camera's fov and offset a specific z distance from the camera's position. I cannot use camera.add() because once the object is added I need to move it via tweening, independent of the camera's movements.
How can this be done?
You want to transform a point from camera space to world space.
In the camera's coordinate system, the camera is located at the origin, and is looking down its negative z-axis.
Place the object in front of the camera (in the camera's coordinate system).
object.position.set( x, y, - z ); // z is the distance in front of the camera, and is positive
Now, transform the object's position from camera space to world space:
object.position.applyMatrix4( camera.matrixWorld );
three.js r.69
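Put together, a minimal usage sketch (x, y, z stand for the desired offsets; calling updateMatrixWorld() first matters if the camera has just moved):

camera.updateMatrixWorld();                          // make sure matrixWorld is current
object.position.set( x, y, - z );                    // camera space: z units in front
object.position.applyMatrix4( camera.matrixWorld );  // now in world space
scene.add( object );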

Three.js Get Camera lookAt Vector

I'm looking to translate the camera along its lookAt vector. Once I have this vector, I can do a scalar translation along it, use that point to move the camera position in global coordinates, and re-render. The trick is getting the camera's current lookAt vector; I've looked at several other questions and solutions, but they don't seem to work for me.
You can't get the lookAt vector from the camera itself; you can, however, create a new vector and apply the camera's rotation to it.
var lookAtVector = new THREE.Vector3( 0, 0, -1 ); // camera looks down its local negative z-axis
lookAtVector.applyQuaternion( camera.quaternion );
The first choice, though, should be cam.translateZ(), which moves the camera along its local z-axis directly.
There is a second option as well: you can extract the lookAt vector from the matrix property of the camera object. You just need to take the elements corresponding to the local z-axis (they live in matrix.elements) and negate them, since the camera looks down its negative z-axis.
var lookAtVector = new THREE.Vector3( cam.matrix.elements[ 8 ], cam.matrix.elements[ 9 ], cam.matrix.elements[ 10 ] ).negate();
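With either form of the vector, the scalar translation the question describes is then a single line (distance being how far to move):

camera.position.addScaledVector( lookAtVector, distance );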
