Moving the camera to look at a Plane Object

Hi, I have been working on a map in three.js.
I am having trouble moving the camera to face a Plane object that acts as a marker on the map.
My code is derived from /threejs.org/examples/#webgl_interactive_draggablecubes : it loads my Collada map model and plots Google-style markers on it.
I turned the cubes into markers by changing them to double-sided planes.
When I click a marker, the camera tweens to the plane's position, but it ends up facing the back of the plane, or the wrong side altogether. I just want the camera to end up face to face with the marker.
I would like it to work something like http://www.tweetopia.net/ : the Tweetopia faces would be my markers and its ground would be my map model.
Here's an illustration: http://i.imgur.com/AsFFe3B.png

I played around with the TrackballControls.js from the draggable-cubes example, and it seems to use controls.target = obj.position.
Here's my solution for flying the camera to the face of the plane object:
function toObj( obj ) {
  // Tween the controls' target so the camera keeps looking at the marker...
  var rotateTween = new TWEEN.Tween( controls.target )
      .to( { x: obj.position.x, y: obj.position.y, z: obj.position.z }, 4000 )
      .interpolation( TWEEN.Interpolation.CatmullRom )
      .easing( TWEEN.Easing.Quintic.InOut )
      .start();

  // ...and tween the camera itself to a point just in front of the marker.
  var goTween = new TWEEN.Tween( camera.position )
      .to( { x: obj.position.x, y: obj.position.y, z: obj.position.z + 10 }, 4000 )
      .interpolation( TWEEN.Interpolation.CatmullRom )
      .easing( TWEEN.Easing.Quintic.InOut );
  goTween.start();
}
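If the marker can face any direction, one variation on the tween above is to offset the camera along the plane's world-space front normal instead of always using z + 10. Here is a sketch of that idea (it assumes the marker plane is a direct child of the scene, so obj.position is already a world position; PlaneGeometry faces its local +Z axis):

function toObj( obj ) {
  // Front normal of the plane in world space (PlaneGeometry faces local +Z).
  var normal = new THREE.Vector3( 0, 0, 1 ).transformDirection( obj.matrixWorld );
  // A point 10 units in front of the marker, along that normal.
  var eye = obj.position.clone().add( normal.multiplyScalar( 10 ) );

  new TWEEN.Tween( controls.target )
      .to( { x: obj.position.x, y: obj.position.y, z: obj.position.z }, 4000 )
      .easing( TWEEN.Easing.Quintic.InOut )
      .start();

  new TWEEN.Tween( camera.position )
      .to( { x: eye.x, y: eye.y, z: eye.z }, 4000 )
      .easing( TWEEN.Easing.Quintic.InOut )
      .start();
}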

Related

Getting Z value on mesh for respective XY

I am trying to get the Z value on a mesh when I pass in an X & Y coordinate. Sorry, I am new to three.js.
I am using a raycaster for this. My plan is to set the origin just above the point and aim the direction just below it, so that the ray intersects the mesh and returns the respective values.
Here is my code:
for ( var i = 0; i < points.length; i++ ) {
  var pts = points[ i ];
  var top = new THREE.Vector3( pts.x, pts.y, 50 );
  var bottom = new THREE.Vector3( pts.x, pts.y, -50 );
  // start raycaster
  var raycaster = new THREE.Raycaster();
  raycaster.set( top, bottom );
  // calculate objects intersecting the picking ray
  var intersects = raycaster.intersectObjects( scene.getObjectByName( 'MyObj_s' ).children, false );
  if ( intersects.length > 0 ) {
    console.log( intersects[ 0 ].point );
  }
}
However, the above code returns totally different X & Y positions, and definitely inaccurate Z values.
top:    Object { x: 58.26593421875712, y: 63.505675324244834, z: 50 }
bottom: Object { x: 58.26593421875712, y: 63.505675324244834, z: -50 }
Result: Object { x: -2.9414508017947445, y: -13.236528362050667, z: -2.0969017881066634 }
raycaster.set( top, bottom );
It seems you are not using Raycaster.set() correctly. As you can see in the documentation, the method expects an origin and a direction vector. In your code, you just pass in two points.
The first parameter, origin, is the point from which the ray is cast.
The second parameter direction is a normalized (!) vector representing the direction of the ray.
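A minimal sketch of the corrected call, reusing the variable names from the question (pts, MyObj_s): the direction is the normalized vector from top towards bottom, which here is simply (0, 0, -1).

var top = new THREE.Vector3( pts.x, pts.y, 50 );   // origin above the mesh
var direction = new THREE.Vector3( 0, 0, -1 );     // cast straight down the z-axis
var raycaster = new THREE.Raycaster();
raycaster.set( top, direction );
var intersects = raycaster.intersectObjects( scene.getObjectByName( 'MyObj_s' ).children, false );
if ( intersects.length > 0 ) {
  console.log( intersects[ 0 ].point.z ); // z value of the mesh at (pts.x, pts.y)
}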
three.js R104

Why do Camera and Object3D look opposite directions?

I have a basic question, but I could not find the answer.
I noticed that the following code:
const p = new THREE.Vector3();
const q = new THREE.Quaternion();
const s = new THREE.Vector3();

function setPositionAndRotation( o ) {
  o.position.set( 1, 1, -1 );
  o.lookAt( 0, 0, 0 );
  o.updateMatrix();
  o.matrix.decompose( p, q, s );
  console.log( JSON.stringify( q ) );
}

const camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, .01, 1000 );
const mesh = new THREE.Mesh( new THREE.Geometry(), new THREE.MeshBasicMaterial() );

setPositionAndRotation( camera );
setPositionAndRotation( mesh );

<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/97/three.min.js"></script>
produces different quaternions for Camera and Object3D:
{"_x":-0.11591689595929515,"_y":0.8804762392171493,"_z":0.27984814233312133,"_w":0.36470519963100095}
{"_x":0.27984814233312133,"_y":-0.3647051996310009,"_z":0.11591689595929516,"_w":
(These are two quaternions pointing in opposite directions along the Z axis.)
The problem lies in the behavior of the lookAt function. I dug into the source code of Object3D and found this if statement:
https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L331
if ( this.isCamera ) {
  m1.lookAt( position, target, this.up );
} else {
  m1.lookAt( target, position, this.up );
}
As you can see, Object3D is handled differently from Camera: target and position are swapped.
Object3D's documentation says:
lookAt ( x : Float, y : Float, z : Float ) : null
Rotates the object to face a point in world space.
but the code does the opposite. It uses Matrix4's lookAt function
lookAt ( eye : Vector3, center : Vector3, up : Vector3, ) : this
Constructs a rotation matrix, looking from eye towards center oriented by the up vector.
putting target into eye, and position into center.
I can deal with that, but it is weird. Can anybody explain why it is so?
r.97
In three.js, an unrotated object is considered to face its local positive-z axis.
The exception is a camera, which faces its local negative-z axis.
This design decision followed OpenGL convention.
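A quick way to see the convention, as a small sketch against the same r97 build used in the snippet above: getWorldDirection() returns the axis an object is considered to face.

const obj = new THREE.Object3D();
const cam = new THREE.PerspectiveCamera();
console.log( obj.getWorldDirection( new THREE.Vector3() ) ); // (0, 0, 1)  - local +z
console.log( cam.getWorldDirection( new THREE.Vector3() ) ); // (0, 0, -1) - local -z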

Coordinate system issue in three.js

I am working on a program in three.js. I have a particle system loaded from a LAS file, which has its own coordinate system. I have added functionality that lets the user click on a particle, which adds a bounding box to the scene. My aim is to find which particles lie inside this bounding box.
Code for adding the bounding box at the point p clicked by the user:
var cubeGeometry = new THREE.BoxGeometry( 1, 1, 1 );
var cubeMaterial = new THREE.MeshLambertMaterial( { color: 0xffff00, wireframe: true } );
var cube = new THREE.Mesh( cubeGeometry, cubeMaterial );
cube.position.set( p.x, p.y, p.z );
scene.add( cube );
But I am facing an issue: the box has a different axis orientation than the particle system. Its Y axis is oriented along the particle system's Z axis, which causes the containsPoint method to give the wrong answer.
How can I solve this?
Try this:
var e = new THREE.Euler( - Math.PI / 2, 0, 0 );
p.applyEuler( e );
This applies a rotation of -90 degrees about the x-axis, which converts the point from Z-up to Y-up.
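As a sketch of how that conversion fits into the original snippet (particlePositions is a hypothetical array of THREE.Vector3 particle positions, not something from the question; the assumption is that the point cloud is Z-up while the box lives in three.js' default Y-up frame):

var zUpToYUp = new THREE.Euler( - Math.PI / 2, 0, 0 );

// Convert the clicked point before positioning the box.
var pYUp = p.clone().applyEuler( zUpToYUp );
cube.position.copy( pYUp );

// Test each (converted) particle against the box.
var box = new THREE.Box3().setFromObject( cube );
var inside = particlePositions.filter( function ( particle ) {
  return box.containsPoint( particle.clone().applyEuler( zUpToYUp ) );
} );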

Is there ANY way to have the three.js camera lookat being rendered off-center?

Is there a way to set up the Three.js renderer so that the lookAt point of the camera is not in the center of the rendered image?
To clarify: imagine a scene with just one 1x1x1 m cube at ( 0, 0, 0 ). The camera is located at ( 0, 0, 10 ) and the lookAt point is at the origin, coinciding with the center of the cube. If I render this scene as is, I might end up with something like this:
normal render
However I'd like to be able to render this scene in such a way that the lookat point is in the upper left corner, giving me something like this:
desired render
If the normal image is 800x600, then the result I envision would be as if I rendered a 1600x1200 image with the lookat in the center and then cropped that normal image so that only the lower right part remains.
Of course, I can change the lookat to make the cube go to the upper left corner, but then I view the cube under an angle, giving me an undesired result like this:
test.moobels.com/temp/cube_angle.jpg
I could also actually render the full 1600x1200 image and hide 3/4 of it, but one would hope there is a more elegant solution. Does anybody know one?
If you want your perspective camera to have an off-center view, the pattern you need to use is:
camera = new THREE.PerspectiveCamera( fov, aspect, near, far );
camera.setViewOffset( fullWidth, fullHeight, viewX, viewY, viewWidth, viewHeight );
See the docs: https://threejs.org/docs/#api/cameras/PerspectiveCamera
You can find examples of this usage in the official three.js examples.
three.js r.73
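As a hedged sketch of how setViewOffset maps onto the 800x600 crop described in the question (renderer is assumed to be your WebGLRenderer): treat the full image as 1600x1200 and show only its lower-right quadrant, so the lookAt point, the center of the notional full image, ends up in the upper-left corner of what is actually rendered.

camera.setViewOffset( 1600, 1200, 800, 600, 800, 600 );
renderer.setSize( 800, 600 );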
Here's a simple solution:
Assuming your cube is 4 x 4 x 4, at position 0, 0, 0:
var geometry = new THREE.BoxGeometry( 4, 4, 4 );
var material = new THREE.MeshBasicMaterial( { color: 0x777777 } );
var cube = new THREE.Mesh( geometry, material );
cube.position.set( 0, 0, 0 );
Get cube's position:
var Vx = cube.position.x,
Vy = cube.position.y,
Vz = cube.position.z;
Then subtract 2 from the x position, add 2 to the y and z positions, and use the values to create a new Vector3:
var newVx = Vx - 2,
    newVy = Vy + 2,
    newVz = Vz + 2;
var xyz = new THREE.Vector3( newVx, newVy, newVz );
Then camera lookAt:
camera.lookAt(xyz);
Logging xyz shows that the camera is now looking at ( -2, 2, 2 ), which is the upper-left corner of your cube:
console.log(xyz);

Three.js raycast from camera center

I have been trying for weeks to raycast from the center of a PerspectiveCamera, without event.clientX and event.clientY, because I am using device orientation:
var raycaster = new THREE.Raycaster();
var vector = new THREE.Vector3( 0, 0, 0 ); // instead of event.client.x and event.client.y
var direction = new THREE.Vector3( 0, 0, -1 ).transformDirection( camera.matrixWorld );
raycaster.set( vector, direction );
var intersects = raycaster.intersectObjects(objects);
I have a PerspectiveCamera at (0, 0, 0) … I use device orientation instead of keyboard and mouse to obtain the rotation … and I want to hit a cube with a ray cast from the center of my camera. I have tried a lot of three.js raycast examples, but with no success.
What is wrong with my raycast?
For your raycast origin and direction, use the camera's world vectors:
raycaster.set( camera.getWorldPosition(), camera.getWorldDirection() );
Or simply use:
raycaster.setFromCamera( new THREE.Vector2(), camera );
http://jsfiddle.net/gLbkg21e/
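For completeness, a sketch of how that one-liner fits into a picking routine, reusing camera and objects from the question (pick would be called each frame or whenever you want to test):

var raycaster = new THREE.Raycaster();
var center = new THREE.Vector2( 0, 0 ); // (0, 0) is the middle of the screen in normalized device coordinates

function pick() {
  raycaster.setFromCamera( center, camera );
  var intersects = raycaster.intersectObjects( objects );
  if ( intersects.length > 0 ) {
    console.log( 'hit', intersects[ 0 ].object );
  }
}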
