Draw a sprite where the mouse clicked - three.js

I am a newbie to three.js. After reading https://threejs.org/examples/?q=inter#canvas_interactive_cubes , I want to add a function to my project (which loads and displays STL files) that draws points where I click. I used the formula from that example,
mouse.x = ( event.clientX / renderer.domElement.clientWidth ) * 2 - 1;
mouse.y = - ( event.clientY / renderer.domElement.clientHeight ) * 2 + 1;
but I found that the sprite is not drawn where I clicked, but a little lower. You can see it at http://static.medi-tool.cn/share/ds1/index.html (the first click on the model draws a sprite, the second draws another sprite and calculates the distance between the two points, then both sprites are removed after 2 seconds). Can anyone tell me why? Thanks a lot.

I found out it is because I have a div above the canvas. When using the formula, I have to subtract the height of that div. In other words, when calculating mouse.x and mouse.y, 'event.clientX' and 'event.clientY' should be relative to the canvas, not the viewport.
The test page in the question works now.
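For reference, a minimal sketch of the corrected calculation, assuming the same renderer and mouse objects as in the formula above (the helper name updateMouseNDC is just illustrative):
function updateMouseNDC( event ) {
    // getBoundingClientRect() gives the canvas offset and size, so the
    // normalized device coordinates are relative to the canvas rather than
    // the viewport, and a div above the canvas no longer shifts the result
    const rect = renderer.domElement.getBoundingClientRect();
    mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
    mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
}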

Related

How to apply two-finger touch anywhere in the image, such that the point stays at the same location

I have a 2D texture image that is zoomed in/out via 2-finger touch and pinch.
Currently the image is not panned, i.e. the center of the image is always in the middle.
I want the center point between the two-finger touch to stay between the 2 fingers.
If I pinch exactly in the center of the image, then the center image point will stay between the fingers - good!
But if I pinch near the corner of the image the point will move away, relative to the 2 fingers, because of the zoom.
So I need to apply some pan in addition to the zoom, to make the point appear in the same place.
I basically need to transform the camera position such that, for every zoom, the same world coordinate is projected to the same screen coordinate.
Figures 1-3 illustrate the problem.
Figure1 is the original image.
Currently, when I zoom in, the camera stays in the same position, so the object between the 2 fingers (the cat's eye on the right) drifts from between the 2 fingers as the image zooms (Figure 2).
I want to pan the camera such that the object between the 2 fingers stays between the 2 fingers even after zooming the image (Figure 3).
I used the code below, but the object still drifts as the image zooms in/out.
How should I calculate the amount of shift that needs to be applied to the camera?
Thanks
Code to calculate the amount of shift that needs to be applied to the camera
handleTwoFingerTouchMove( p3_inScreenCoord ) {
    // normalize the screen coord to be in the range of [-1, 1]
    // (See method1 in https://stackoverflow.com/questions/13542175/three-js-ray-intersect-fails-by-adding-div/)
    let point2dNormalizedX = ( ( p3_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1;
    let point2dNormalizedY = -( ( p3_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1;

    // calc p3 before zoom (in world coords)
    let p3_beforeZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // Apply zoom
    this.dollyInOut( this.getZoomScale(), true );

    // calc p3 after zoom (in world coords)
    let p3_afterZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // calc the required shift in camera position
    let deltaX = p3_afterZoom.x - p3_beforeZoom.x;
    let deltaZ = p3_afterZoom.z - p3_beforeZoom.z;

    // shift in camera position
    this.pan( deltaX, deltaZ );
};
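For context, here is a rough sketch of how a handler like this might be wired up to the touch events; the controls object and the two-finger midpoint calculation are assumptions, not code from the question:
renderer.domElement.addEventListener( 'touchmove', function ( event ) {
    if ( event.touches.length === 2 ) {
        // screen-coordinate midpoint between the two fingers
        const mid = {
            x: ( event.touches[ 0 ].pageX + event.touches[ 1 ].pageX ) / 2,
            y: ( event.touches[ 0 ].pageY + event.touches[ 1 ].pageY ) / 2
        };
        controls.handleTwoFingerTouchMove( mid );
    }
} );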
I was able to solve my problem. Here is my solution in the hope that it helps others.
When first applying a 2-finger touch (i.e. on touchstart event), the code computes:
* centerPoint3d_inWorldCoord0 (Vector3) - the world-coordinate of the object pointed at (e.g. the cat's eye on the right) when starting the two-finger touch
* centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized (Vector2) - the screen-coordinate anchor for zooming via two-finger touch
While zooming in/out via two-finger pinch (on the touchmove event), in the event listener function, immediately after applying the zoom, I call the following code:
// Calculate centerPoint3d_inWorldCoord2, which is the new world-coordinate, for
// centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized given the new zoom setting.
let centerPoint3d_inWorldCoord2 = new THREE_Vector3(
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
    -1 ).unproject( camera );

// compute the shift in world-coordinates between the new and the original world-coordinate
let delta_inWorldCoords = new THREE_Vector2(
    centerPoint3d_inWorldCoord2.x - centerPoint3d_inWorldCoord0.x,
    centerPoint3d_inWorldCoord2.z - centerPoint3d_inWorldCoord0.z );

// pan the camera, to compensate for the shift
pan_usingWorldCoords( delta_inWorldCoords );
The function pan_usingWorldCoords shifts the camera along the x axis (panLeft) and then along the y axis (panUp):
pan_usingWorldCoords( delta_inWorldCoord ) {
    panLeft( delta_inWorldCoord.x );
    panUp( delta_inWorldCoord.y );
};
The functions panLeft, panUp are similar to the functions that are used in three.js-r114/examples/jsm/controls/OrbitControls.js
Initially the object pointed at was still drifting from between the 2 fingers as the image zoomed in/out.
I added this.camera.updateProjectionMatrix() at the end of each function, so the projection matrix is updated at the end of panLeft before it is used again in panUp.
With the code above, and the projection matrix updated at the end of panLeft and panUp, the object pointed at when starting the two-finger touch (e.g. the eye on the right) stays between the 2 fingers while zooming via two-finger pinch.
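For completeness, a rough sketch of what panLeft / panUp might look like with the extra update at the end, modelled on the OrbitControls (r114) pattern and using the same THREE_Vector3 alias as above; this is an illustration of the idea, not the exact code:
panLeft( distance ) {
    // move the camera along its local x axis, then refresh the matrices so
    // the next unproject()/panUp() call sees the updated camera
    const v = new THREE_Vector3();
    v.setFromMatrixColumn( this.camera.matrix, 0 ); // camera local x axis
    v.multiplyScalar( - distance );
    this.camera.position.add( v );
    this.camera.updateMatrixWorld();
    this.camera.updateProjectionMatrix();
};

panUp( distance ) {
    // same idea along the camera's local y axis
    const v = new THREE_Vector3();
    v.setFromMatrixColumn( this.camera.matrix, 1 ); // camera local y axis
    v.multiplyScalar( distance );
    this.camera.position.add( v );
    this.camera.updateMatrixWorld();
    this.camera.updateProjectionMatrix();
};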

Does the point coordinate in three.js change if the camera moves?

I'm using the raycaster function to get the coordinates of portions of a texture as a preliminary to creating areas that will link to other portions of my website. The model I'm using is hollow and I'm raycasting to the intersection with the skin of the model from a point on the interior. I've used the standard technique suggested here and elsewhere to determine the coordinates in 3d space from mouse position:
// 1. sets the mouse position with a coordinate system where the center
//    of the screen is the origin
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = -( event.clientY / window.innerHeight ) * 2 + 1;
console.log( "mouse position: (" + mouse.x + ", " + mouse.y + ")" );

// 2. set the picking ray from the camera position and mouse coordinates
raycaster.setFromCamera( mouse, camera );

// 3. compute intersections
var intersects = raycaster.intersectObjects( scene.children, true );
var intersect = null;
var point = null;
// console.log( intersects );
for ( var i = 0; i < intersects.length; i++ ) {
    console.log( intersects[ i ] );
    if ( i === intersects.length - 1 ) { // keep only the last (farthest) intersection
        intersect = intersects[ i ];
        point = intersect.point;
    }
}
This works, but I'm getting inconsistent results if the camera position changes. My assumption right now is that this is because the mouse coordinates are generated from the center of the screen, and that center has changed since I moved the camera. I know that getWorldPosition should stay consistent regardless of camera movement, but trying to call point.getWorldPosition returns "undefined". Is my thinking about why my results are inconsistent correct? If so, and getWorldPosition is what I'm looking for, how do I go about calling it so I can get the proper xyz coordinates for my intersect?
EDITED TO ADD:
When I target what should be the same point (or close to) on the screen I get very different results.
For example, this is my model (and forgive the janky code under the hood -- I'm still working on it):
http://www.minorworksoflydgate.net/Model/three/examples/clopton_chapel_dev.html
Hitting the upper left corner of the first panel of writing on the opposite wall (so the spot marked with the x in the picture) gets these results (you can capture them within that model by hitting C, escaping out of the pointerlock, and viewing in the console) with the camera at 0,0,0:
x: -0.1947601252025508, y: 0.15833788110908806, z: -0.1643094916216681
If I move in the space (so with a camera position of x: -6.140427450769398, y: 1.9021520960972597e-14, z: -0.30737391540643844) I get the following results for that same spot (as shown in the second picture):
x: -6.229400824609087, y: 0.20157559303778091, z: -0.5109691487471469
My understanding is that if these are the world coordinates for the intersect point, they should stay relatively similar, but the x coordinate is very different. That makes sense since that's the axis the camera moves along, but shouldn't the camera's movement make no difference to the point of intersection?
My comment is not related to the camera, but I also had an issue with the raycaster, and calculating the mouse position is more accurate when done the following way:
const rect = renderer.domElement.getBoundingClientRect();
mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
mouse.y = - ((event.clientY - rect.top) / rect.height) * 2 + 1;
The trick, when there is no mouse available due to a pointer lock, is to use the direction of the ray created by the controls object. It's actually pretty simple, but not well documented.
var ray_direction = new THREE.Vector3();
var ray = new THREE.Raycaster(); // create once and reuse
controls.getDirection( ray_direction );
ray.set( controls.getObject().position, ray_direction );
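From there the intersection test is the usual one; scene.children is just a placeholder for whatever objects should be hit-tested:
var intersects = ray.intersectObjects( scene.children, true );
if ( intersects.length > 0 ) {
    var point = intersects[ 0 ].point; // world-space position of the closest hit
}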

three.js raycaster in a container

For my internship I need to make an application with three.js that has to sit inside a container on a page, and the objects need an onclick function. The problem is that I cannot find anything about raycasting only inside a container, and right now clicking does not return the objects I need. Here is the code from my application:
onMouseDown( event ) {
    let s = this;
    // calculate mouse position in normalized device coordinates
    // (-1 to +1) for both components
    s.mouse.x = ( event.clientX / s.renderer.domElement.clientWidth ) * 2 - 1;
    s.mouse.y = - ( event.clientY / s.renderer.domElement.clientHeight ) * 2 + 1;
    s.intersects = s.raycaster.intersectObjects( s.blocks, true );
    for ( var i = 0; i < s.intersects.length; i++ ) {
        s.intersects[ i ].object.material.color.set( 0xff0000 );
        console.log( i );
        console.log( s.getScene().children );
        console.log( s.intersects );
        console.log( "test 123" );
    }
    if ( s.intersects.length === 0 ) {
        console.log( s.mouse.x );
        console.log( s.mouse.y );
    }
}
Edit: it is not the same as Detect clicked object in THREE.js - that question does not work inside a container, and it also has a small problem with the margins. For me, everywhere I click on the screen it does not detect what I need, and I need it to work only on the container, not the whole webpage. Also, the help there is outdated and no longer works.
If you are working with a canvas that is not at the top-left corner of the page, you need one more step to get to the normalized device coordinates. Note that the NDC in WebGL are relative to the canvas drawing-area, not the screen or document ([-1,-1] and [1,1] are the bottom-left and top-right corners of the canvas).
Ideally, you'd use ev.offsetX/ev.offsetY, but browser-support for that isn't there yet. Instead, you can do it like this:
const {top, left, width, height} = renderer.domElement.getBoundingClientRect();
mouse.x = -1 + 2 * (ev.clientX - left) / width;
mouse.y = 1 - 2 * (ev.clientY - top) / height;
See here for a working example: https://codepen.io/usefulthink/pen/PVjeJr
Another option is to statically compute the offset-position and size of the canvas on the page and compute the final values based on ev.pageX/ev.pageY. This has the benefit of being a bit more stable (as it is not scrolling-dependent) and would allow to cache the top/left/width/height values.
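A rough sketch of that second option (mouseToNDC is an illustrative helper name; the cached values must be recomputed if the canvas moves or resizes, e.g. in a resize handler):
// cache the page-relative offset and size of the canvas once
const rect = renderer.domElement.getBoundingClientRect();
const canvasLeft = rect.left + window.scrollX;
const canvasTop = rect.top + window.scrollY;
const canvasWidth = rect.width;
const canvasHeight = rect.height;

function mouseToNDC( ev, mouse ) {
    // pageX/pageY are document-relative, so scrolling doesn't invalidate the cache
    mouse.x = -1 + 2 * ( ev.pageX - canvasLeft ) / canvasWidth;
    mouse.y = 1 - 2 * ( ev.pageY - canvasTop ) / canvasHeight;
}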

Feature projection into equirectangular panorama with THREE.js

Relatively new to THREE.js. I am trying to figure out how to project a DIV with text into an equirectangular panorama.
I have this simple example working with my panorama images.
https://threejs.org/examples/webgl_panorama_equirectangular
The question: I have a latitude and longitude of a feature that's in my panorama, and I'd like to project a DIV labeling that item into 3D space. How do I convert longitude and latitude into X and Y on the canvas, so I can change the DIV's left and top style attributes and the label renders in 3D space, appearing fixed to its coordinates?
UPDATE:
For clarity, how does one take planet earth longitude and latitude, and convert it into X Y pixels inside a mesh? I know where the image was taken on earth, and I know where an item in that picture was taken on earth. I want to label that item in 3D space.
Any help would be much appreciated. Thanks.
Code from this question seems to do the trick:
3d coordinates to 2d screen position
function getCoordinates( element, camera ) {
    var screenVector = new THREE.Vector3();
    element.localToWorld( screenVector );
    screenVector.project( camera );
    var posx = Math.round( ( screenVector.x + 1 ) * renderer.domElement.offsetWidth / 2 );
    var posy = Math.round( ( 1 - screenVector.y ) * renderer.domElement.offsetHeight / 2 );
    console.log( posx, posy );
}
I updated the jsfiddle with the new version of Three.js
http://jsfiddle.net/L0rdzbej/409/
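For the latitude/longitude half of the question, here is a hedged sketch of the usual spherical mapping feeding into the same projection as getCoordinates above; latLongToScreen, sphereRadius and the axis/sign conventions are assumptions that depend on how the panorama sphere was built:
function latLongToScreen( lat, lon, sphereRadius, camera, renderer ) {
    // convert lat/long (degrees) to a point on the panorama sphere
    var phi = THREE.Math.degToRad( 90 - lat );  // polar angle measured from the top
    var theta = THREE.Math.degToRad( lon );     // azimuth
    var p = new THREE.Vector3(
        sphereRadius * Math.sin( phi ) * Math.cos( theta ),
        sphereRadius * Math.cos( phi ),
        sphereRadius * Math.sin( phi ) * Math.sin( theta )
    );
    // project the world point to normalized device coordinates, then to CSS pixels
    p.project( camera );
    return {
        left: Math.round( ( p.x + 1 ) * renderer.domElement.offsetWidth / 2 ),
        top: Math.round( ( 1 - p.y ) * renderer.domElement.offsetHeight / 2 )
    };
}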

three.js / web-vr-boilerplate / polyfill - HMD to controlled object: axis re-mapping does not also re-map rotation order/rules

I'm using your webvr-boilerplate and trying to map it to a human face mesh.
The way I do it is:
1) attach the camera to an eye bone
main js script:
//add camera to eye
mesh.skeleton.bones[ 22 ].add(camera);
//resets camera rotation
camera.rotation.set(0,0,0);
//looks at mesh up direction to face front
camera.lookAt( mesh.up );
//moves camera to middle of eyes
camera.position.set(10,10,0);
2) change webvr-manager.js to update the neck bone's position and rotation (the bone is passed as an argument on initialization), and in index.php swap the axes to match the HMD's axes with the bone's:
webvr-manager.js:
if ( state.orientation !== null ) {
    object.quaternion.copy( state.orientation );
}
if ( state.position !== null ) {
    object.position.copy( state.position ).multiplyScalar( scope.scale );
}
main js script:
/* INSIDE UPDATE CYCLE */
// mesh.rotation.y+=0.1;
controls.update();
//resets bone position to default
mesh.skeleton.bones[ neckVRControlBone ].position.set(neckInitPosition.x,neckInitPosition.y,neckInitPosition.z) ;
//ROTATION SWAP
mesh.skeleton.bones[ neckVRControlBone ].rotation.x = pivot.rotation.y;
mesh.skeleton.bones[ neckVRControlBone ].rotation.y = - pivot.rotation.z;
mesh.skeleton.bones[ neckVRControlBone ].rotation.z = - tempRotation;
UPDATE 28/10/2015:
To simplify, and after some extra debugging, I realised it is not a clamp problem.
The restated problem is:
To map the VR controls to an object that has a different axis configuration from the HMD/Cardboard, while keeping the correct rotation rules.
Example of object axis:
* x - up
* y - depth
* z - side
Swapping the rotations with just
object.rotation.x = object.rotation.z
results in an undesired rotation after 45º when rotating to the side, once the controls are updated.
The rotation rules for each axis are different:
x rotates until PI, then the sign flips and it keeps changing in the same direction it was going;
y rotates until PI/2, then the direction inverts (when increasing, it starts decreasing);
z behaves like x.
I changed webvr-polyfill.js and got it fixed for keyboard/mouse with this:
MouseKeyboardPositionSensorVRDevice.prototype.getState = function() {
// this.euler.set(this.phi, this.theta, 0, 'YXZ');
this.euler.set( this.theta , 0, - this.phi, 'YXZ');
But there is no similar line for the other controllers (HMD, Cardboard, etc.).
Maybe it would be nice if the rotation order and mapping were exposed to the user.
Thanks
Example - try setting swappedAxis = true in the js console and rotate the neck.
The main problem you are running into is gimbal lock because you are using Euler rotations. Use Quaternions to avoid this problem.
Additionally, the axes on your mesh appear to be flipped, so you have to account for that.
Instead of setting components of the rotation, just set the quaternion:
mesh.skeleton.bones[ neckVRControlBone ].quaternion.set(
    pivot.quaternion.y,
    -pivot.quaternion.z,
    -pivot.quaternion.x,
    pivot.quaternion.w
);
