I am using Mr Doob's periodic table code in my web app:
https://threejs.org/examples/css3d_periodictable.html
I am using it in Helix view. When I rotate the object left, I shift the camera's Y position by a certain amount too. My desire is to give the impression that the Helix is corkscrewing vertically up and down as you rotate the camera. This is the code I'm using, where angle and panAmount are constants that control how much rotation and vertical pan takes place per second:
let g_RotationMatrix = new THREE.Matrix4();
g_RotationMatrix.makeRotationY(angle);
// Apply matrix like this to rotate the camera.
self.object.position.applyMatrix4(g_RotationMatrix);
// Shift it vertically too.
self.object.position.y += panAmount;
// Make camera look at the target.
self.object.lookAt(self.target);
The problem I'm having is that, from the camera's perspective, the object appears to tilt towards you or away from you, depending on the rotation direction, as the camera shifts vertically. This makes sense to me because I'm guessing that the lookAt() function makes the camera look at the center of the target, and not at the point on the target closest to it, so the camera has to tilt to focus on the target's center of mass. I see the same effect when I pan vertically with the mouse using the OrbitControls. Another way to describe the effect is that the object appears to pitch up and down as the camera shifts vertically.
The effect I want instead is that of a window washer on an automated lift being raised up and down the side of a building, with the side of the building appearing perfectly flat to the camera regardless of the camera's current Y position.
How can I achieve this effect with the camera?
Make the camera lookAt the target, but at the same y level as the camera.
Assuming self.target is a Vector3 and self.object is the camera:
If, for example, self.target is the point the camera rotates around, you wouldn't want to change the actual self.target vector. Make a copy of it first.
const newTarget = new THREE.Vector3( );
function render( ){
// copy the target vector to newTarget so that the original self.target
// will not change and the camera can still rotate around the original
// target.
newTarget.copy( self.target );
// Set newTarget's Y from the camera's Y, so that the camera looks horizontally.
newTarget.y = self.object.position.y;
// Make the camera look at the adjusted newTarget.
self.object.lookAt( newTarget );
}
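Putting it together with the rotation code from the question, the render step would look roughly like this (a sketch, using the angle and panAmount constants from above):
let g_RotationMatrix = new THREE.Matrix4();
g_RotationMatrix.makeRotationY( angle );
const newTarget = new THREE.Vector3();
function render() {
  // Orbit the camera around the Y axis and shift it vertically.
  self.object.position.applyMatrix4( g_RotationMatrix );
  self.object.position.y += panAmount;
  // Look at the target at the camera's own height, so the helix stays
  // "flat" to the camera instead of pitching up and down.
  newTarget.copy( self.target );
  newTarget.y = self.object.position.y;
  self.object.lookAt( newTarget );
}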
Related
Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying the meshes with each object's vertices transformed by the vertex shader to the same arbitrary depth, let's say z = -50. This gives a zoomed-out effect, and the user can scroll through this list (in the code we do this by moving the camera's y position)
In my app I'm trying to make my mouseover events work for the second state but it's tricky since the GPU transforms the vertices so the updated vertices are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
I then change the boundingSphere's center to have the same z position of -50 after computing it.
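A rough sketch of that boundingSphere tweak (assuming plane is one of the meshes in the list, and the -50 matches the offset applied by the vertex shader):
// Recompute the bounding sphere, then move its center to the shader-applied depth.
plane.geometry.computeBoundingSphere();
plane.geometry.boundingSphere.center.z = -50;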
I took this approach because I looked into the Raycaster and Mesh code in THREE.js, and it seems like they check both the boundingSphere and the boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse is a Vector2 of normalized device coordinates, camera is a PerspectiveCamera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that might be wrong with this approach is that I'm not projecting the mouse coordinates correctly. Since all the objects now lie on the plane z = -50, would I need to project those mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes equal in number to the planes in the scene. I set these planes' z position to -50 and just intersect with them when in state #2.
A bit hacky, but works for now.
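A minimal sketch of that workaround (names like hitProxies are illustrative; planes, raycaster, mouse, camera and scene are assumed to exist as in the snippets above):
// One invisible "hit proxy" per visible plane, placed at the depth the vertex
// shader moves the real planes to in state #2.
const hitProxies = planes.map( function ( plane ) {
  const proxy = new THREE.Mesh(
    plane.geometry,
    new THREE.MeshBasicMaterial( { transparent: true, opacity: 0 } )
  );
  proxy.position.set( plane.position.x, plane.position.y, -50 );
  proxy.userData.source = plane; // remember which real plane this stands in for
  scene.add( proxy );
  return proxy;
} );

// In state #2, raycast against the proxies instead of the real planes.
raycaster.setFromCamera( mouse, camera );
const hits = raycaster.intersectObjects( hitProxies );
if ( hits.length > 0 ) {
  const hoveredPlane = hits[ 0 ].object.userData.source;
  // handle the mouseover for hoveredPlane here
}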
I'm currently working on a scene with an object at the centre (0,0,0) and 6 other objects orbiting it. I have a camera set up behind and slightly above an outer object, looking back at the centre (camera.lookAt(0,0,0) etc).
I want to click an object and have the camera arc towards and face the clicked object. I'm using Quaternion.slerp to achieve this, and it sort of works, but while the camera is moving it drifts about lazily and then slowly settles just off-centre of the object.
So if I click an object to the left of the camera's FOV, it sort of drifts right a little before swinging left, and vice versa.
I found some help regarding movement with slerp, and I was hoping to simply adapt it:
How to animate the camera in three.js to look at an object?
Setup
this._endQ = new Quaternion();
this._iniQ = new Quaternion().copy(this._camera.quaternion);
this._curQ = new Quaternion();
this._vec3 = new Vector3();
const euler = new Euler(this._cameraLookAtDestination.x, this._cameraLookAtDestination.y, this._cameraLookAtDestination.z);
this._endQ.setFromEuler(euler);
In the render loop
Quaternion.slerp(this._iniQ, this._endQ, this._curQ, pointInTime);
this._vec3.x = this._cameraLookAtDestination.x;
this._vec3.y = this._cameraLookAtDestination.y;
this._vec3.z = this._cameraLookAtDestination.z;
this._vec3.applyQuaternion(this._curQ);
this._camera.lookAt(this._vec3);
Is this normal slerp behaviour, or is there something I'm missing?
EDIT:
I also noticed that each time I click an object to look at, the camera seems to reset its orientation and then rotates.
I'm trying to use Three.js to create an ’asteroid field' using particle systems or point clouds or stuff like that. One of the problems I've bumped into with all of these is that when the camera rotates around the z axis, the particles rotate individually with the camera, preserving the same orientation no matter how the camera is turned. I want the simulation to look as if the user is flying through a bunch of asteroids, and obviously asteroids don't magically spin whenever you tilt your head, so I was wondering if there is any way to prevent them from turning when the camera turns. Must particles always be upright?
If you want to rotate sprites, you can use the SpriteMaterial.rotation property:
var sprite = new THREE.Sprite( new THREE.SpriteMaterial( { map: texture, rotation: Math.PI / 4 } ) );
See this example: http://threejs.org/examples/webgl_sprites.html
In your case, the rotation of all sprites should be equal to the camera rotation.
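A minimal sketch of that idea, assuming the camera's roll is held in camera.rotation.z and the sprites are kept in an array (the sign may need flipping depending on how you drive the camera):
// Keep every sprite's screen-space rotation in sync with the camera roll,
// so the asteroids no longer appear to spin when the view tilts.
function syncSpriteRotation( sprites, camera ) {
  for ( let i = 0; i < sprites.length; i ++ ) {
    sprites[ i ].material.rotation = camera.rotation.z;
  }
}
// Call this once per frame in the render loop, before rendering.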
I have a fixed-position, perspective camera that rotates around all 3 axes via keyboard input. At random intervals, independent of user input, I need to place objects within the camera's field of view no matter what direction the camera is looking. The objects will also need to be offset specific x and y distances from the center of the camera's fov and offset a specific z distance from the camera's position. I cannot use camera.addChild because once the object is added I need to move the object via tweening independent of the camera's movements.
How can this be done?
You want to transform a point from camera space to world space.
In the camera's coordinate system, the camera is located at the origin, and is looking down its negative z-axis.
Place the object in front of the camera (in the camera's coordinate system).
object.position.set( x, y, - z ); // z is the distance in front of the camera, and is positive
Now, transform the object's position from camera space to world space:
object.position.applyMatrix4( camera.matrixWorld );
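Putting both steps together (a sketch; offsetX, offsetY and distance are whatever offsets you need, and object / scene are assumed to exist):
// Make sure the camera's world matrix is current before using it.
camera.updateMatrixWorld();
// Position the object in camera space: -z is in front of the camera.
object.position.set( offsetX, offsetY, - distance );
// Transform that position into world space, then add the object to the scene
// so it can be tweened independently of the camera afterwards.
object.position.applyMatrix4( camera.matrixWorld );
scene.add( object );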
three.js r.69
I'm working on a simple Three.js demo that uses OrbitControls.js.
I'd like to change the behavior of panning in OrbitControls. Currently, when you pan the camera, it moves the camera in a plane that is perpendicular to the viewing direction. I'd like to change it so that the camera stays a constant distance from the ground plane and moves parallel to it. Google Earth uses a similar control setup.
Edit: I should have mentioned this detail in the first place, but I'd also like the point where you click and start dragging to remain directly under the cursor throughout the entire drag. There needs to be that solid connection between the mouse movement and what the user expects to happen on the screen. Otherwise, it feels as though I'm 'slipping' when I try to move around the scene.
Can someone give me a high-level explanation of how this might be done (with or without OrbitControls.js)?
EDIT: OrbitControls now supports panning parallel to the "ground plane", and it is the default.
To pan parallel to screen-space (the legacy behavior), set:
controls.screenSpacePanning = true;
Also available is MapControls, which has an API similar to that of Google Earth.
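With a recent three.js build, a minimal setup looks roughly like this (the import path has moved between releases, so check your version; camera and renderer are assumed to exist):
import { MapControls } from 'three/examples/jsm/controls/MapControls.js';

// Pan parallel to the ground plane, Google-Earth style.
const controls = new MapControls( camera, renderer.domElement );
controls.screenSpacePanning = false; // keep panning parallel to the ground plane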
three.js r.94
Some time ago I was working on exactly this issue, i.e. adapting OrbitControls.js to map navigation.
Here's the code of MapControls.js.
Here's the demo of the controls.
I figured it out. Here's the overview:
Store the mousedown event somewhere.
When the mouse moves, get the new mouse position.
For each of those points, find the point on the plane where that click lands (you'll need to put the points into camera space, transform them into world space, then fire a ray from the camera through each point to find its intersection with the plane. This page explains the ray-plane intersection test).
Subtract the world-space start intersection point from the world-space end intersection point to get the offset.
Subtract that offset from the camera's target point and you're done!
In the case of OrbitControls.js, the camera always looks at the target point, and its position is kept relative to that point. So when you change the target, the camera moves with it. Since the target always lies on the plane, the camera moves parallel to that plane (as long as you're panning).
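A condensed sketch of the intersection-and-offset steps above (assuming the ground plane is y = 0, the mouse positions are already normalized device coordinates, and controls / camera are your OrbitControls instance and camera):
const groundPlane = new THREE.Plane( new THREE.Vector3( 0, 1, 0 ), 0 ); // the y = 0 plane
const raycaster = new THREE.Raycaster();

// Fire a ray from the camera through an NDC mouse position and return the
// point where it hits the ground plane (or null if it misses).
function planePoint( ndc, out ) {
  raycaster.setFromCamera( ndc, camera );
  return raycaster.ray.intersectPlane( groundPlane, out );
}

const startPoint = new THREE.Vector3();
const currentPoint = new THREE.Vector3();

function onDrag( startNdc, currentNdc ) {
  if ( planePoint( startNdc, startPoint ) && planePoint( currentNdc, currentPoint ) ) {
    // Shift target and camera by (start - current) so the grabbed point
    // stays under the cursor; moving both keeps the pan parallel to the plane.
    const offset = startPoint.clone().sub( currentPoint );
    controls.target.add( offset );
    camera.position.add( offset );
  }
}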
You should set your camera 'up' to the z axis:
camera.up.set(0,0,1)
Then, the main problem with OrbitControls is its panUp() function. It needs to be fixed.
My pull request : https://github.com/mrdoob/three.js/pull/12727
The y axis there is relative to the camera's axes, but it should be relative to a fixed plane in the world. To get the expected y axis, rotate the camera's x axis by 90° around the world z axis.
v.setFromMatrixColumn( objectMatrix, 0 ); // get the X column of objectMatrix (the camera's x axis)
v.applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), Math.PI / 2 ); // rotate it 90° around the world z axis
v.multiplyScalar( distance );
panOffset.add( v );
Enjoy!