How to apply two-finger touch anywhere in the image, such that the point stays at the same location - three.js

I have a 2D texture image that is zoomed in/out via 2-finger touch and pinch.
Currently the image is not panned, i.e. the center of the image is always in the middle.
I want the center point between the two-finger touch to stay between the 2 fingers.
If I pinch exactly in the center of the image, then the center image point will stay between the fingers - good!
But if I pinch near the corner of the image the point will move away, relative to the 2 fingers, because of the zoom.
So I need to apply some pan in addition to the zoom, to make the point appear in the same place.
I basically need to transform the camera position such that, for every zoom level, the same world coordinate is projected to the same screen coordinate.
Figures 1-3 illustrate the problem.
Figure 1 is the original image.
Currently, when I zoom in, the camera stays in the same position, so the object between the 2 fingers (the cat's eye on the right) drifts away from between the 2 fingers as the image zooms (Figure 2).
I want to pan the camera such that the object between the 2 fingers stays between them even after zooming the image (Figure 3).
I used the code below, but the object still drifts as the image zooms in/out.
How should I calculate the amount of shift that needs to be applied to the camera?
Thanks
Code to calculate the amount of shift that needs to be applied to the camera
handleTwoFingerTouchMove( p3_inScreenCoord ) {
    // normalize the screen coord to be in the range [-1, 1]
    // (see method 1 in https://stackoverflow.com/questions/13542175/three-js-ray-intersect-fails-by-adding-div/)
    let point2dNormalizedX = ( ( p3_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1;
    let point2dNormalizedY = -( ( p3_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1;
    // calc p3 before the zoom (in world coords)
    let p3_beforeZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );
    // apply the zoom
    this.dollyInOut( this.getZoomScale(), true );
    // calc p3 after the zoom (in world coords)
    let p3_afterZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );
    // calc the required shift in the camera position
    let deltaX = p3_afterZoom.x - p3_beforeZoom.x;
    let deltaZ = p3_afterZoom.z - p3_beforeZoom.z;
    // shift the camera position
    this.pan( deltaX, deltaZ );
};

I was able to solve my problem. Here is my solution in the hope that it helps others.
When first applying a 2-finger touch (i.e. on the touchstart event), the code computes two values, as sketched below:
* centerPoint3d_inWorldCoord0 (Vector3) - the world coordinate of the object pointed at (e.g. the cat's eye on the right) when the two-finger touch starts
* centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized (Vector2) - the normalized screen-coordinate anchor for zooming via two-finger touch
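A minimal sketch of that touchstart step, reusing the normalization from the question above (the handler name and touch parameters are assumptions, not my exact code):
handleTwoFingerTouchStart( touch0, touch1 ) {
    // screen-space midpoint between the two fingers
    let centerX = ( touch0.clientX + touch1.clientX ) / 2;
    let centerY = ( touch0.clientY + touch1.clientY ) / 2;
    // normalize to [-1, 1], exactly as in handleTwoFingerTouchMove
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized = new THREE_Vector2(
        ( ( centerX - windowOffset.left ) / windowWidth ) * 2 - 1,
        -( ( centerY - windowOffset.top ) / windowHeight ) * 2 + 1 );
    // world coordinate under that screen point, at the zoom level in effect when the pinch starts
    centerPoint3d_inWorldCoord0 = new THREE_Vector3(
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
        -1 ).unproject( camera );
};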
While zooming in/out via two-finger pinch (on the touchmove event), in the event-listener function, immediately after applying the zoom, I call the following code:
// Calculate centerPoint3d_inWorldCoord2, the new world coordinate for
// centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized, given the new zoom setting.
let centerPoint3d_inWorldCoord2 = new THREE_Vector3(
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
    -1 ).unproject( camera );
// compute the shift in world coordinates between the new and the original world coordinate
let delta_inWorldCoords = new THREE_Vector2(
    centerPoint3d_inWorldCoord2.x - centerPoint3d_inWorldCoord0.x,
    centerPoint3d_inWorldCoord2.z - centerPoint3d_inWorldCoord0.z );
// pan the camera to compensate for the shift
pan_usingWorldCoords( delta_inWorldCoords );
The function pan_usingWorldCoords shifts the camera along the x axis (panLeft) and then along the y axis (panUp):
pan_usingWorldCoords( delta_inWorldCoord ) {
    panLeft( delta_inWorldCoord.x );
    panUp( delta_inWorldCoord.y );
};
The functions panLeft and panUp are similar to those used in three.js-r114/examples/jsm/controls/OrbitControls.js.
Initially, the object pointed at still drifted from between the 2 fingers as the image zoomed in/out.
To fix that, I added this.camera.updateProjectionMatrix() at the end of each function. This updates the projection matrix at the end of panLeft, before it is used again in panUp.
With the code above and the projection-matrix update at the end of panLeft and panUp, the object pointed at when the two-finger touch starts (e.g. the eye on the right) stays between the 2 fingers while zooming via two-finger pinch.
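For reference, a minimal sketch of what such pan helpers can look like (the bodies are a paraphrase of the OrbitControls-style pan code, not the library's exact source; the updateProjectionMatrix() calls at the end are the fix described above):
panLeft( distance ) {
    let v = new THREE_Vector3();
    v.setFromMatrixColumn( this.camera.matrix, 0 ); // camera's local X axis
    v.multiplyScalar( -distance );
    this.camera.position.add( v );
    this.camera.updateProjectionMatrix(); // keep the matrices fresh before panUp uses the camera
};
panUp( distance ) {
    let v = new THREE_Vector3();
    v.setFromMatrixColumn( this.camera.matrix, 1 ); // camera's local Y axis
    v.multiplyScalar( distance );
    this.camera.position.add( v );
    this.camera.updateProjectionMatrix();
};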

Related

How to draw a line with the mouse on a 3D surface in threejs

I have been looking for an example of how to draw a line with the mouse on a 3D surface within a scene in three.js, to achieve the following image, but I have not been able to find an example this specific. I was hoping someone could show how to do this: use mouse down to start, drag, then use mouse up to get the end position, and then use these two positions in 3D space to draw the line.
You need to raycast points onto the plane in both mousedown and mousemove. Update the end point on mousemove rather than mouseup, since the mouse may be released outside of your plane; keeping track of where the mouse was last during the drag operation is the best approach, imo.
renderer.domElement.addEventListener('mousedown', e => {
  if (e.button === 0) {
    isMouseDown = true;
    const x = (e.clientX / renderer.domElement.clientWidth) * 2 - 1;
    const y = -(e.clientY / renderer.domElement.clientHeight) * 2 + 1;
    rayCaster.setFromCamera({ x, y }, camera);
    const intersections = rayCaster.intersectObject(plane);
    if (intersections.length) {
      hitPoint.copy(intersections[0].point);
      // start and end coincide at first; mousemove will update the end point
      lineGeometry.setFromPoints([ hitPoint, intersections[0].point ]);
    }
  }
});
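The snippet above only covers mousedown; a matching mousemove/mouseup pair, as described, could look like this (a sketch; lineGeometry, hitPoint, and isMouseDown are assumed to be in scope as above):
renderer.domElement.addEventListener('mousemove', e => {
  if (!isMouseDown) return;
  const x = (e.clientX / renderer.domElement.clientWidth) * 2 - 1;
  const y = -(e.clientY / renderer.domElement.clientHeight) * 2 + 1;
  rayCaster.setFromCamera({ x, y }, camera);
  const intersections = rayCaster.intersectObject(plane);
  if (intersections.length) {
    // keep the start point fixed, move the end point with the mouse
    lineGeometry.setFromPoints([ hitPoint, intersections[0].point ]);
  }
});
renderer.domElement.addEventListener('mouseup', () => { isMouseDown = false; });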
Sample Pen

Three.js: How to rotate a Sphere on Axis using camera rotation values

I have a special control called SphericalControls. It's similar to OrbitControls, but it keeps the camera at position 0,0,0 and instead rotates the camera on x and y to look around a scene. It is placed in the middle of a SphereBufferGeometry which has a 360 equirectangular image projected onto it. The user can look around the 360 image, and as he does, the camera's x and y rotation values change.
When a user clicks a button, I need to take these x and y rotation values and rotate the sphere to the rotation of the camera. I then set camera back to x:0 and y:0.
The result is that the camera is reset and the 360 scene has now rotated to show the same rotation view that the camera was previously looking at. So to the user, the view stays basically static, just the values for camera.rotation and sphere rotation have swapped.
This works great if I offset the texture on the sphere:
sphereObj.material.map.wrapS = THREE.RepeatWrapping;
sphereObj.material.map.offset.x = ((camera.rotation.x) / (Math.PI * 2));
sphereObj.material.map.needsUpdate = true;
sphereObj.material.needsUpdate = true;
camera.rotation.set(0, 0, 0);
// Success!
But what I need to do is not offset the texture, but rotate the entire geometry. I have tried:
var axis = new THREE.Vector3(0, 1, 0).normalize();
var offsetRadian = ((camera.rotation.x) / (Math.PI * 2));
sphere.rotateOnAxis(axis, offsetRadian);
// Fail
But the result is that the sphere rotation is off by approx 30%. Any help is appreciated.
Every object's rotational data is stored in its .quaternion object. Both camera and sphereObj have a quaternion, so what you can do is copy the camera's rotational data into the sphere:
// Get the camera's rotation (clone it so the camera's own quaternion isn't modified)
targetRotation = camera.quaternion.clone();
// Invert the rotation
targetRotation.inverse();
// Set the sphere's rotation
sphereObj.quaternion.copy(targetRotation);
camera.rotation.set(0, 0, 0);
I'm not entirely sure if you need the .inverse() line... if you're noticing the sphere is rotating in the opposite direction, just get rid of it to get the desired result.
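For the button part of the question, the wiring can be as simple as the sketch below (the button id is an assumption; note that .inverse() matches the three.js releases of that era and was renamed to .invert() in later releases):
// hypothetical button with id "resetView"
document.getElementById('resetView').addEventListener('click', function () {
    // clone so the camera's own quaternion is left untouched
    sphereObj.quaternion.copy( camera.quaternion.clone().inverse() );
    camera.rotation.set(0, 0, 0);
});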

How to correctly position html elements in three js coordinate system?

I hopefully have a simple problem I can't get an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I assume I have to transform the CSS coordinates to three.js coordinates. I found something I thought did something like this (note: the overuse of "something" is to signify I am probably mistaken).
I can add an html element to the same scene/camera as the WebGL renderer (obviously using a different renderer), but I am unsure how to proceed from there.
Basically I want to know:
How should I change the size of the div, preserving aspect ratio if need be?
In essence I want the div or element to fill the screen at some camera depth.
How do I place the div at the centre of the scene by default?
Mine seems to be shifted 1000 in the z direction, but this might be the size of the div (img) which I have to bring into view.
How to draw a line between the webgl sphere and html div/img?
Thanks in advance!
Unfortunately you have asked 3 questions; it is tricky to address them all at once.
I will explain how to position a DIV element on top of a 3D object. My example is a tooltip that appears when you hover over the object with the mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started.
First of all you need to subscribe to mouse events and convert the 2D mouse coordinates to relative coordinates on the viewport. This is very well explained here: Get mouse clicked point's 3D coordinate in three.js
Having the 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();
function handleManipulationUpdate() {
    // clean up previous results; the mouse moved, so they're obsolete now
    latestMouseIntersection = undefined;
    hoveredObj = undefined;
    raycaster.setFromCamera(mouse, camera);
    {
        var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
        if (intersects.length > 0) {
            // keep the point in 3D for the next steps
            latestMouseIntersection = intersects[0].point;
            // remember which object was hovered, as we will need to extract the tooltip text from it
            hoveredObj = intersects[0].object;
        }
    }
    ... // do anything else
    // with some conditions it may show or hide the tooltip
    showTooltip();
}
// The following two functions convert mouse coordinates
// from screen space to the three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
    coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
    coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}
function onMouseMove(event) {
    updateMouseCoords(event, mouse);
    handleManipulationUpdate();
}
window.addEventListener('mousemove', onMouseMove, false);
window.addEventListener('mousemove', onMouseMove, false);
And finally, the most important part: DIV element placement. To understand the code, it is essential to get comfortable with the Vector3.project method.
The sequence of calculations is as follows:
1. Get the 2D mouse coordinates.
2. Raycast the object and remember the 3D coordinate of the intersection (if any).
3. Project the 3D coordinate back into 2D (this step may seem redundant here, but what if you want to trigger the object's tooltip programmatically? You won't have mouse coordinates).
4. Mess around to place the DIV centered above the 2D point, with a nice margin.
// This will move the tooltip to the current mouse position and show it by timer.
function showTooltip() {
    var divElement = $("#tooltip");
    // element found and mouse hovers some object?
    if (divElement && latestMouseIntersection) {
        // hide until the tooltip is ready (prevents some visual artifacts)
        divElement.css({
            display: "block",
            opacity: 0.0
        });
        //!!! === IMPORTANT ===
        // DIV element is positioned here
        var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
        var canvasHalfHeight = renderer.domElement.offsetHeight / 2;
        // project the stored 3D intersection point back into 2D screen space
        var tooltipPosition = latestMouseIntersection.clone().project(camera);
        tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
        tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;
        var tooltipWidth = divElement[0].offsetWidth;
        var tooltipHeight = divElement[0].offsetHeight;
        divElement.css({
            left: `${tooltipPosition.x - tooltipWidth / 2}px`,
            top: `${tooltipPosition.y - tooltipHeight - 5}px`
        });
        // get the text from the hovered object (we store it in .userData)
        divElement.text(hoveredObj.userData.tooltipText);
        divElement.css({
            opacity: 1.0
        });
    }
}

Rotate camera X on local axis using Three.js

I'm new to Three.js and fairly new to 3d space engines and what I'm trying to achieve is a 360 equirectangular image viewer.
What my script does so far is to create a camera at 0,0,0 and a sphere mesh at the same location with normals inverted and an emission map of my 360 image.
Representation of scene using Blender's viewport.
The user should be able to rotate the camera using mouse drag or keyboard arrows, so using mouse listeners I created the drag feature, which calculates the amount of rotation on the camera's Y axis (blue) and X axis (red) at each render frame. I also created min and max rotation limits on X (so the user can't spin over backward), as follows:
var render = function () {
    requestAnimationFrame( render );
    if ((camera.rotation.x < Math.PI / 6 && speedX >= 0) || (camera.rotation.x > -Math.PI / 6 && speedX <= 0))
        camera.rotation.x += speedX * (Math.PI / 180);
    camera.rotation.y += speedY * (Math.PI / 180);
    renderer.render(scene, camera);
};
Where speedX and speedY represent the amount of rotation in each axis.
So far so good, but since those rotation coordinates are relative to the world and not to the camera itself, the X rotation makes the camera go wild: after a couple of degrees of rotation on the Y axis, the camera's X axis is no longer the same as the world's X axis.
My question, finally, is: how do I rotate the camera on its own X axis at each frame?
If you want a camera's rotation to have meaning in terms of yaw (heading), pitch, and roll, you need to set:
camera.rotation.order = 'YXZ'; // default is 'XYZ'
For more information, see this stackoverflow answer.
three.js r.82
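With that order set, the render loop from the question behaves as intended, because yaw is applied around the world Y axis first and pitch around the camera's then-local X axis. A sketch reusing the question's speedX/speedY variables:
camera.rotation.order = 'YXZ'; // yaw (Y), then pitch (X), then roll (Z)
var render = function () {
    requestAnimationFrame( render );
    camera.rotation.y += speedY * (Math.PI / 180); // yaw
    // pitch, clamped to +/- 30 degrees so the user can't spin over backward
    camera.rotation.x = Math.max( -Math.PI / 6,
        Math.min( Math.PI / 6, camera.rotation.x + speedX * (Math.PI / 180) ) );
    renderer.render(scene, camera);
};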

three.js / web-vr-boilerplate / polyfill - HMD to controlled object: axis re-mapping does not also re-map rotation order/rules

I'm using your webvr-boilerplate and trying to map it to a human face mesh.
The way I do it is:
1) attach the camera to an eye bone
main js script:
//add camera to eye
mesh.skeleton.bones[ 22 ].add(camera);
//resets camera rotation
camera.rotation.set(0,0,0);
//looks at mesh up direction to face front
camera.lookAt( mesh.up );
//moves camera to middle of eyes
camera.position.set(10,10,0);
2) change webvr-manager.js to update the neck bone's position and rotation (the bone is passed as an argument on initialization), and in index.php I swap the axes to match the HMD's axes with the bone's:
webvr-manager.js:
if ( state.orientation !== null ) {
    object.quaternion.copy( state.orientation );
}
if ( state.position !== null ) {
    object.position.copy( state.position ).multiplyScalar( scope.scale );
}
main js script:
/* INSIDE UPDATE CYCLE */
// mesh.rotation.y+=0.1;
controls.update();
//resets bone position to default
mesh.skeleton.bones[ neckVRControlBone ].position.set(neckInitPosition.x,neckInitPosition.y,neckInitPosition.z) ;
//ROTATION SWAP
mesh.skeleton.bones[ neckVRControlBone ].rotation.x = pivot.rotation.y;
mesh.skeleton.bones[ neckVRControlBone ].rotation.y = - pivot.rotation.z;
mesh.skeleton.bones[ neckVRControlBone ].rotation.z = - tempRotation;
UPDATE 28/10/2015:
To simplify: after some extra debugging, I realised it is not a clamp problem.
The restated problem is:
how to map the VR controls to an object that has a different axis configuration than the HMD/Cardboard, while keeping the correct rotation rules.
Example of object axis:
* x - up
* y - depth
* z - side
Swapping the rotations with just object.rotation.x = object.rotation.z results in an undesired rotation after 45°, once the controls update and the head rotates to the side.
The rotation rules for each axis are different:
x rotates until PI, then inverts its sign and keeps changing in the direction it was going;
y rotates until PI/2, then inverts direction (when increasing, it starts decreasing);
z behaves the same as x.
I changed webvr-polyfill.js and got it fixed for keyboard/mouse with this:
MouseKeyboardPositionSensorVRDevice.prototype.getState = function() {
    // this.euler.set(this.phi, this.theta, 0, 'YXZ');
    this.euler.set( this.theta, 0, -this.phi, 'YXZ' );
But there is no similar line for the other controllers (HMD, Cardboard, etc.).
Maybe it would be nice if the rotation order and mapping were exposed to the user.
Thanks
Example - try to set swappedAxis = true in the JS console and rotate the neck.
The main problem you are running into is gimbal lock because you are using Euler rotations. Use Quaternions to avoid this problem.
Additionally, the axes on your mesh appear to be flipped, so you have to account for that.
Instead of setting components of the rotation, just set the quaternion:
mesh.skeleton.bones[neckVRControlBone].quaternion.set(
    pivot.quaternion.y,
    -pivot.quaternion.z,
    -pivot.quaternion.x,
    pivot.quaternion.w
);
