How to rotate with OrbitControls, without limits - three.js

Right now I'm using OrbitControls, and I can only rotate 180 degrees in the up-and-down direction. In the other direction I can rotate forever; I think that is the z direction. Anyway, how can I make rotation completely limitless in all directions?
Here's my code as it stands now; I tried it with and without Infinity:
this.scene_threeD = new THREE.Scene();
this.camera_threeD = new THREE.PerspectiveCamera( 75, width_threeD / height_threeD, 0.1, 1000 );
this.renderer_threeD = new THREE.WebGLRenderer({
    canvas: threeDCanvas,
    preserveDrawingBuffer: true,
    antialias: true
});
this.renderer_threeD.setSize( width_threeD, height_threeD);
controls = new THREE.OrbitControls(this.camera_threeD, this.renderer_threeD.domElement);
controls.maxPolarAngle = Infinity;
controls.minPolarAngle = -Infinity;
controls.maxAzimuthAngle = Infinity;
controls.minAzimuthAngle=-Infinity;
controls.update();

The problem with an "orbital camera" is that (by definition) it always tries to keep the camera "up" pointing upwards. This means the camera orientation is undefined when you are looking straight up or down. That is why three.js implements a makeSafe() method that keeps the polar angle just inside the 0 to 180 degree range, i.e. within +/- 90 degrees of the horizon.
If you were to remove this limitation, you would probably see the camera instantly flip directions when passing the 90 degrees angle (or worse). This is generally undesired behaviour in an application.
To sum things up: if you want limitless rotation, you don't want an orbital camera. This is not a technical but a conceptual limitation.
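If you really need unrestricted tumbling, the usual alternative is THREE.TrackballControls from the three.js examples, which does not clamp the polar angle (at the cost of a stable "up" direction). A minimal sketch, assuming the TrackballControls script from examples/js/controls is loaded the same way OrbitControls is above:
// Sketch: TrackballControls has no polar-angle clamp, so the camera can tumble freely.
controls = new THREE.TrackballControls(this.camera_threeD, this.renderer_threeD.domElement);
controls.rotateSpeed = 2.0;   // tune to taste
controls.noZoom = false;
controls.noPan = false;

// TrackballControls needs update() every frame:
const animate = () => {
    requestAnimationFrame(animate);
    controls.update();
    this.renderer_threeD.render(this.scene_threeD, this.camera_threeD);
};
animate();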

Related

THREEjs create an intersection plane for a raycast with negative origin

I have a THREEJS scene with an object that 'looks at my mouse'. This works fine and I am using a raycast to get the mouse position like so:
this.intersectionPlane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 10);
this.raycaster = new THREE.Raycaster();
this.mouse = new THREE.Vector2();
this.pointOfIntersection = new THREE.Vector3();
On the mouse-move event I lookAt the pointOfIntersection vector and the object rotates. This works really well.
onDocumentMouseMove = (event) => {
    event.preventDefault();
    this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    this.raycaster.setFromCamera(this.mouse, this.camera);
    this.raycaster.ray.intersectPlane(this.intersectionPlane, this.pointOfIntersection);
    let v3 = new THREE.Vector3(this.pointOfIntersection.x * 0.05, this.pointOfIntersection.y * 0.05, this.pointOfIntersection.z);
    if (this.pebbleLogo) {
        this.pebbleLogo.lookAt(v3);
        // console.log(v3);
    }
    if (this.videoWall) {
        this.videoWall.lookAt(v3);
    }
}
BUT, I want to do the same thing with another object that lives at a z-depth of -20 and the camera flies through to this position. At this point, it also flies through the intersectionPlane and the raycast no longer works.
The intersectionPlane is not added to the scene, so it doesn't have a position that I can move. How do I make sure that it stays with the camera?
I can see that the plane has two properties:
normal - (optional) a unit length Vector3 defining the normal of the plane. Default is (1, 0, 0).
constant - (optional) the signed distance from the origin to the plane. Default is 0.
I have been able to move the Plane using a translate but this is not ideal as I need the plane to be in a constant position in relation to the camera (just in front of it). I tried to make the plane a child of the camera but it didn't seem to make any difference to its position.
Any help appreciated.
When you perform renderer.render(scene, cam), the engine updates the transformation matrices of all objects that need to be rendered. However, since your camera and plane are not descendants of the scene, you'll have to update these matrices manually. The plane doesn't know that its parent camera has moved, so you might need to call plane.updateMatrix(). You can read about manually updating transformation matrices in the docs.
I think that since only the parent moves, you might need to use updateMatrixWorld() or updateWorldMatrix() instead. But one of these three options should work.
Edit
Upon re-reading your code, it looks like you're using a purely Mathematical THREE.Plane object. This is not an Object3D, which means it cannot be added as a child of anything, so it doesn't behave as a regular object.
My answer assumed you were using a Mesh with PlaneGeometry, which is an Object3D, and it can be added as a child of the camera.
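If you do want to keep the purely mathematical THREE.Plane, one option (a sketch of my own, not from the question's code) is to rebuild it every frame from the camera's current pose using setFromNormalAndCoplanarPoint, so it always sits a fixed distance in front of the camera; distanceInFront below is a hypothetical value to tune:
// Sketch: keep a THREE.Plane a fixed distance in front of the camera, every frame.
const distanceInFront = 10;               // hypothetical distance, tune for your scene
const camDir = new THREE.Vector3();
const planePoint = new THREE.Vector3();

function updateIntersectionPlane(camera, plane) {
    camera.getWorldDirection(camDir);     // unit vector the camera is looking along
    planePoint.copy(camera.position).addScaledVector(camDir, distanceInFront);
    plane.setFromNormalAndCoplanarPoint(camDir, planePoint);
}

// call once per frame (or at the top of onDocumentMouseMove), before the raycast:
updateIntersectionPlane(this.camera, this.intersectionPlane);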

three-globe SphereBufferGeometry/Mesh is offset on globe, but lines up if flat

I have a three-globe, and lat/long points perfectly go to the correct locations. The base (Earth) map is 1600x800.
However, I also have a RainViewer map (storm radar) which is square (4096x4096). If I scale that to 1600x1600 and overlay the Earth map, it fits perfectly lined up (top 800 and bottom 800 are outside the boundaries, but that is blank anyway, so perfect).
When I use the TextureLoader/SphereBufferGeometry/MeshPhongMaterial/Mesh, and add it to the scene, it locates itself completely in the wrong spot. No amount of rotateX/Y/Z, or phi/theta shifting seems to work to get it to position correctly.
How can one map this correctly on the globe?
Relevant code (url hardcoded to a timestamp for clarity):
this.myGlobe = new ThreeGlobe()
    .globeImageUrl(myImageUrl)
    .polygonsData(this.polyData)
    .pointsData(gData)
    .pointColor('color');
const renderer = new THREE.WebGLRenderer();
console.log('width=' + width);
renderer.setSize(width, width / 2);
document.getElementById('globeViz').appendChild(renderer.domElement);
const myScene = new THREE.Scene();
myScene.add(this.myGlobe);
myScene.add(new THREE.AmbientLight(0xbbbbbb));
myScene.add(new THREE.DirectionalLight(0xffffff, 0.6));
const camera = new THREE.PerspectiveCamera();
camera.aspect = 2; //window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
camera.translateZ(300);
const globeMaterial = new THREE.MeshPhongMaterial();
globeMaterial.bumpScale = 10;
new THREE.TextureLoader().load('//unpkg.com/three-globe/example/img/earth-water.png',
    texture => {
        globeMaterial.specularMap = texture;
        globeMaterial.specular = new THREE.Color('grey');
        globeMaterial.shininess = 15;
    });
this.myGlobe.globeMaterial = globeMaterial;
new THREE.TextureLoader().load('https://tilecache.rainviewer.com/v2/radar/1652860800/4096/2/0_1.png',
    cloudsTexture => {
        const geo = new THREE.SphereBufferGeometry(this.myGlobe.getGlobeRadius() * (1 + 0.004), 80, 80);
        const mesh = new THREE.MeshPhongMaterial({ map: cloudsTexture, transparent: true });
        const weather = new THREE.Mesh(geo, mesh);
        myScene.add(weather);
    });
Screenshots: correct placement; the same in color (harder to see) for an apples-to-apples comparison; incorrect placement when Globified.
I believe Marquizzo is correct in the comments, one of the projected images is rotated 90 degrees (plus or minus, but probably minus in your case) compared to the other. Since you said that your earth map is not rotated at all, this means the RainViewer map is.
This is consistent with how I recently had to handle other NASA weather maps projected onto my own Earth globe: in my case, the cloud-cover simulation movie applied to the globe started with the prime meridian (0 degrees of longitude) at the left edge of the image, instead of in the horizontal middle as is customary in nearly all maps. I'm guessing something similar is happening here, except for the direction of the angle needed to make it look right.
The assumption is supported by the fact that in your screenshots, the big orange spot that should be positioned close to the North American Great Lakes (i.e. 90 degrees West) is placed precisely on the prime meridian (i.e. 0 degrees of longitude). Yup, I know this thanks to my own globe... :)
To (partially, see below) fix this, you should construct your geometry so that the phiStart parameter of the constructor is set to the correct rotation angle, something like:
const geo = new THREE.SphereBufferGeometry(this.myGlobe.getGlobeRadius() * (1 + 0.004), 80, 80, - Math.PI / 2);
This will project the map so that it starts 90 degrees to the "left", i.e. that meridian becomes its left edge, if that makes sense.
That being said, I don't think this is the entire extent of the issue, because that orange spot is also displaced to around 23 degrees of latitude North (the Tropic of Cancer in your Globified screenshot) instead of the correct 46 degrees North (roughly where the western end of Lake Superior lies). This fits with the projected image being a 1600 x 1600 px square instead of the expected 1600 x 800 px rectangle, which is the most probable cause of the latitudinal (vertical) displacement. You might want to crop the RainViewer map to the 2:1 width-to-height ratio expected for a plane projection onto a sphere, or use the thetaStart and thetaLength parameters of the sphere geometry constructor to adjust things, if that yields what you want.
Or, it might just be that both the longitudinal and latitudinal displacements are somehow caused by using a 1600 x 1600 px square source image instead of a 1600 x 800 px one. The cause of the issue shouldn't affect the way it can be fixed, though.
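For reference, the full SphereBufferGeometry constructor exposes both the horizontal and the vertical sweep, so the longitudinal offset and any latitudinal adjustment can be expressed in one place. A sketch only; the exact thetaStart/thetaLength values depend on how the square RainViewer tile maps to latitude:
// Sketch of the full signature:
// SphereBufferGeometry(radius, widthSegments, heightSegments,
//                      phiStart, phiLength, thetaStart, thetaLength)
const radius = this.myGlobe.getGlobeRadius() * (1 + 0.004);
const geo = new THREE.SphereBufferGeometry(
    radius,
    80, 80,
    -Math.PI / 2,   // phiStart: shift the texture 90 degrees of longitude, as above
    Math.PI * 2,    // phiLength: full horizontal wrap
    0,              // thetaStart: raise this to crop from the top (north pole)
    Math.PI         // thetaLength: full pole-to-pole sweep by default
);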

Three.js - change camera POV on click

Some project background:
I have a Sprite particle field that is randomly generated. The camera is located at position 0, 0, 0. The particle field is all around the camera. I'm using Raycaster to be able to select the particle that is clicked on and change its color. Once clicked, I would like the camera to focus on this particle. I'm also attempting to use Tween to glide the particle into view.
I've attempted several different methods and none of them work. They are described here:
A traditional lookAt method that used Raycaster to pick up the intersect point from clicking.
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, this.camera);
var intersects = raycaster.intersectObjects( this.starfield.children );
this.camera.lookAt(intersects[0].object.position)
A distanceTo method where the distance between the camera and the intersect coordinates is used to move the camera. This only moves the camera along the z plane; it won't actually change its POV.
var cameraPosition = new THREE.Vector3(this.camera.position.x, this.camera.position.y, this.camera.position.z);
var intersectPosition = new THREE.Vector3(intersects[0].object.position.x, intersects[0].object.position.y , intersects[0].object.position.z );
var zoomPos = intersectPosition.distanceTo( cameraPosition );
const newCameraPosition = cameraPosition.addVectors(this.camera.position, vector.setLength(zoomPos));
I calculated the angle of rotation for each of the X, Y, and Z axes via tan and cos equations. I then attempted to rotate the camera by those degrees. I even tried converting them to radians to see if that would make a difference with the rotation method. It didn't :(
I don't know what else to do. At this stage I'm completely open to a different approach as long as I get this camera working. I'm very stuck; any help would be greatly appreciated!
Instead of using
intersects[0].object.position
try using
intersects[0].point
.point is the world space position of the hit.
.object is the object the triangle belongs to. .object.position is just the origin of that object, in this case the particle system. The particle positions themselves are relative to this origin.
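Putting that together, a click handler along these lines (a sketch using the question's names: this.camera, this.starfield) would aim the camera at the actual hit position:
// Sketch: raycast on click and look at the world-space hit point.
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

onDocumentClick = (event) => {
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

    raycaster.setFromCamera(mouse, this.camera);
    const intersects = raycaster.intersectObjects(this.starfield.children);
    if (intersects.length > 0) {
        // .point is the world-space hit position, not the particle system's origin
        this.camera.lookAt(intersects[0].point);
    }
};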

Bring point to nearest camera position by camera rotation

I have a scene in Three.js (r67) with a camera that is controlled by OrbitControls.
If I now select an arbitrary point (Vector3) in the scene, what would be the best way to bring this point (programmatically) to the nearest camera position just by rotating the camera?
Example Scenario
In the below picture the left side is the starting point. The camera rotates around the green sphere (like OrbitControls) where the center of the camera is the center of the sphere. I now like to automatically rotate the camera around the sphere (doing the minimum amount of moves) so that the red box is nearest to the camera (like on the right side).
Independently of the method used to select the point in the scene, there are several ways to understand what you mean by "bringing the camera just by rotating".
I suppose you want to rotate the camera so that the selected point ends up in the center of the screen.
This is simple:
camera.lookAt(your_point_in_scene);
You could also do this in a more elaborate way. First, find the current viewing direction. By default the camera looks down the negative Z axis, i.e. in direction (0, 0, -1). When we apply the camera's current rotation to that vector, we get the camera's viewing direction:
var vec = new THREE.Vector3(0, 0, -1);
vec.applyQuaternion(camera.quaternion);
Now we must determine the angle by which to rotate the camera, and the axis around which to rotate it.
The axis of rotation can be found as the cross product of the camera direction and the vector from the camera to the object; the angle can be extracted from the dot product:
var object_dir = object_world_point.clone().sub(camera.position);
var axis = vec.clone().cross(object_dir).normalize();
var angle = Math.acos(vec.dot(object_dir) / vec.length() / object_dir.length());
Having angle and axis, we could rotate camera:
camera.rotateOnAxis(axis, angle);
Or, if you want to make it smooth:
// before the animation starts
var total_rotation = 0,
    rotateon,
    avel = 0.01; // angular velocity, radians per second

// inside the render loop
if (total_rotation < angle) {
    rotateon = avel * time_delta;
    camera.rotateOnAxis(axis, rotateon); // rotate by this frame's increment, not the full angle
    total_rotation += rotateon;
}
Well that's not hard Oo
You have a center/target point for the camera. You calculate the difference from the target position to the point position and normalize that vector to the length of the camera-centerpoint-distance (i.e. something like pointdistance.multiplyScalar(cameradistance.length() / pointdistance.length()) ).
And that's it, if I understood your question correctly. All you do is "extend" the point's position onto your "camera movement dome" and then you have the ideal new camera position. The camera's rotation is handled automatically since you always target the center point.
Aaand if you want to smoothen the camera movement a bit you can just interpolate the angle (not the positions directly) with e.g. an exponential function, whatever you prefer.
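In code, the "extend onto the dome" idea might look like this (a sketch of my own; target stands for the orbit centre and point for the selected Vector3, both hypothetical names):
// Sketch: move the camera to the spot on its orbit "dome" closest to the point.
var cameraDistance = camera.position.distanceTo(target);      // radius of the dome
var direction = point.clone().sub(target).normalize();        // unit vector from target towards the point
var newCameraPosition = target.clone().add(direction.multiplyScalar(cameraDistance));

camera.position.copy(newCameraPosition);
camera.lookAt(target);           // rotation follows automatically from always facing the centre
controls.target.copy(target);    // keep OrbitControls in sync, if you are using it
controls.update();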

THREE.js checking how close the camera is to a mesh

I have a mesh landscape in THREE.js that the camera points down at, I'd like to maintain a certain distance from that mesh (so if there's peaks in the terrain the camera moves further away).
I thought raycasting would be the correct way to start going about this (by getting the intersection distance), but all the examples I find relate to using mouse coordinates. When I try to set the origin as the camera position and the direction coordinates to be the camera position but with 0 on the Y axis (so the camera is up in the air facing down), the intersect results come up empty.
For example, on the render event I have:
t.o.ray.vector = new THREE.Vector3(t.o.camera.position.x, 0, t.o.camera.position.z );
t.o.ray.cast = new THREE.Raycaster(t.o.camera.position,t.o.ray.vector );
t.o.ray.intersect = t.o.ray.cast.intersectObject(object, true);
console.log(t.o.ray.intersect);
This results in an empty array even when I'm moving the camera; the only way I can seem to get this to work is by using the examples that rely on mouse events.
Any ideas what I'm missing?
I realised it was because setting 0 as the Y property was not enough. I had assumed that the vector coordinate simply determined the direction in which the ray was pointing, but this doesn't seem to be the case, i.e.:
t.o.ray.vector = new THREE.Vector3(t.o.camera.position.x, -1000, t.o.camera.position.z );
t.o.ray.vector.normalize();
t.o.ray.cast = new THREE.Raycaster(t.o.camera.position,t.o.ray.vector );
t.o.ray.intersect = t.o.ray.cast.intersectObject(t.Terrain.terrain, true);
Produces the expected results.
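For reference, shooting the ray straight down with a unit direction vector (rather than deriving it from the camera position) is another way to read the camera-to-terrain distance; a sketch using the question's names, with desiredHeight as a hypothetical clearance value:
// Sketch: cast straight down from the camera and keep it a fixed height above the terrain.
var down = new THREE.Vector3(0, -1, 0);                    // unit direction, straight down
var ray = new THREE.Raycaster(t.o.camera.position, down);
var hits = ray.intersectObject(t.Terrain.terrain, true);

if (hits.length > 0) {
    var distanceToGround = hits[0].distance;               // distance from camera to the mesh below
    var desiredHeight = 50;                                // hypothetical clearance to maintain
    t.o.camera.position.y += desiredHeight - distanceToGround;
}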
What about this approach:
console.log( t.o.camera.position.distanceTo(t.o.vector.position) );