Does the point coordinate in three.js change if the camera moves? - three.js

I'm using a raycaster to get the coordinates of portions of a texture as a first step toward creating areas that will link to other parts of my website. The model I'm using is hollow, and I'm raycasting from a point in the interior to the intersection with the model's skin. I've used the standard technique suggested here and elsewhere to determine the coordinates in 3D space from the mouse position:
// 1. Set the mouse position in normalized device coordinates, where the
//    center of the screen is the origin and x/y run from -1 to +1.
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
console.log("mouse position: (" + mouse.x + ", " + mouse.y + ")");
// 2. Set the picking ray from the camera position and mouse coordinates.
raycaster.setFromCamera( mouse, camera );
// 3. Compute intersections.
var intersects = raycaster.intersectObjects( scene.children, true );
var intersect = null;
var point = null;
//console.log(intersects);
for ( var i = 0; i < intersects.length; i++ ) {
    console.log(intersects[i]);
    // keep the last (farthest) intersection
    if (i === intersects.length - 1) {
        intersect = intersects[ i ];
        point = intersect.point;
    }
}
This works, but I'm getting inconsistent results if the camera position changes. My assumption right now is that this is because the mouse coordinates are generated from the center of the screen, and that center has changed since I moved the camera. I know that getWorldPosition should stay consistent regardless of camera movement, but calling point.getWorldPosition returns "undefined". Is my thinking about why my results are inconsistent correct? And if so, and getWorldPosition is what I'm looking for, how do I call it so I can get the proper xyz coordinates for my intersection?
EDITED TO ADD:
When I target what should be the same point (or close to it) on the screen, I get very different results.
For example, this is my model (and forgive the janky code under the hood -- I'm still working on it):
http://www.minorworksoflydgate.net/Model/three/examples/clopton_chapel_dev.html
Hitting the upper left corner of the first panel of writing on the opposite wall (so the spot marked with the x in the picture) gets these results (you can capture them within that model by hitting C, escaping out of the pointerlock, and viewing in the console) with the camera at 0,0,0:
x: -0.1947601252025508,
y: 0.15833788110908806,
z: -0.1643094916216681
If I move in the space (so with a camera position of x: -6.140427450769398, y: 1.9021520960972597e-14, z: -0.30737391540643844) I get the following results for that same spot (as shown in the second picture):
x: -6.229400824609087,
y: 0.20157559303778091,
z: -0.5109691487471469
My understanding is that if these are world coordinates for the intersection point, they should stay roughly the same, yet the x coordinate is very different. That makes sense given that x is the axis the camera moves along, but shouldn't camera movement make no difference to the point of intersection?

This isn't related to the camera, but I also ran into an issue with the raycaster: calculating the mouse position is more accurate when done the following way.
const rect = renderer.domElement.getBoundingClientRect();
mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
mouse.y = - ((event.clientY - rect.top) / rect.height) * 2 + 1;

The trick here, when there's no mouse position available because of the pointer lock, is to use the direction of the ray created by the controls. It's actually pretty simple, but not well documented.
var ray_direction = new THREE.Vector3();
var ray = new THREE.Raycaster(); // create once and reuse
controls.getDirection( ray_direction );
ray.set( controls.getObject().position, ray_direction );
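From there, the intersection test works the same as with a mouse-driven ray. A minimal sketch (assuming a scene variable, and taking the last hit because the question casts from inside a hollow model):
var hits = ray.intersectObjects( scene.children, true );
if ( hits.length > 0 ) {
    // intersection points are already expressed in world coordinates
    var worldPoint = hits[ hits.length - 1 ].point;
    console.log( worldPoint.x, worldPoint.y, worldPoint.z );
}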

Related

three.js raycaster in a container

For my internship I need to make an application with three.js that lives in a container on a page, but it needs an onclick function on the objects. The problem is that I cannot find anything on raycasting inside a container only, and right now clicking does not pick up the objects I need.
Application code:
onMouseDown(event) {
    let s = this;
    // calculate mouse position in normalized device coordinates
    // (-1 to +1) for both components
    s.mouse.x = ( event.clientX / s.renderer.domElement.clientWidth ) * 2 - 1;
    s.mouse.y = - ( event.clientY / s.renderer.domElement.clientHeight ) * 2 + 1;
    // update the ray before intersecting (assuming the camera is stored as s.camera)
    s.raycaster.setFromCamera( s.mouse, s.camera );
    s.intersects = s.raycaster.intersectObjects( s.blocks, true );
    for ( let i = 0; i < s.intersects.length; i++ ) {
        s.intersects[ i ].object.material.color.set( 0xff0000 );
        console.log(i);
        console.log(s.getScene().children);
        console.log(s.intersects);
        console.log("test 123");
    }
    if ( s.intersects.length === 0 ) {
        console.log(s.mouse.x);
        console.log(s.mouse.y);
    }
}
Edit: this is not the same as "Detect clicked object in THREE.js"; that question isn't working inside a container, and its answer has a small problem with margins. For me, clicking anywhere on the screen does not detect what I need; I need it to work only within the container, not the whole webpage. On top of that, the help there is outdated and no longer works.
If you are working with a canvas that is not at the top-left corner of the page, you need one more step to get to the normalized device coordinates. Note that the NDC in WebGL are relative to the canvas drawing-area, not the screen or document ([-1,-1] and [1,1] are the bottom-left and top-right corners of the canvas).
Ideally, you'd use ev.offsetX/ev.offsetY, but browser-support for that isn't there yet. Instead, you can do it like this:
const {top, left, width, height} = renderer.domElement.getBoundingClientRect();
mouse.x = -1 + 2 * (ev.clientX - left) / width;
mouse.y = 1 - 2 * (ev.clientY - top) / height;
See here for a working example: https://codepen.io/usefulthink/pen/PVjeJr
Another option is to statically compute the offset position and size of the canvas on the page and compute the final values based on ev.pageX/ev.pageY. This has the benefit of being a bit more stable (it doesn't depend on scrolling) and allows you to cache the top/left/width/height values.
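A rough sketch of that cached variant (assuming the canvas does not move or resize after these values are captured):
// capture the canvas geometry once, converted to document-relative coordinates
const rect = renderer.domElement.getBoundingClientRect();
const cached = {
  left: rect.left + window.scrollX,
  top: rect.top + window.scrollY,
  width: rect.width,
  height: rect.height
};
// in the event handler, ev.pageX/ev.pageY are also document-relative,
// so the cached values stay valid while the page scrolls
mouse.x = -1 + 2 * (ev.pageX - cached.left) / cached.width;
mouse.y = 1 - 2 * (ev.pageY - cached.top) / cached.height;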

In A-Frame/THREE.js, is there a method like the Camera.ScreenToWorldPoint() from Unity?

I know of a method from Unity which is very useful for converting a screen position to a world position: https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I haven't found anything.
Is there an easy way to convert a screen position to a world position on a plane positioned a given distance from the camera?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
// convert the screen-space coordinates to normalized device coordinates
// (x and y ranging from -1 to 1):
// screenWidth/screenHeight are assumed to be the size of the canvas in pixels
const ndc = new THREE.Vector2();
ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;
// `Raycaster` can be used to convert this into a ray:
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(ndc, camera);
// finally, apply the distance:
return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z you could also let three.js compute an intersection with any Object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and various examples linked here: https://threejs.org/docs/#api/core/Raycaster
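A rough sketch of that intersection-based variant (reusing the screenWidth/screenHeight and camera assumptions from above, plus a hypothetical targetObject to hit-test against):
function screenToPointOnObject(screenSpaceCoord, targetObject) {
  const ndc = new THREE.Vector2(
    2 * screenSpaceCoord.x / screenWidth - 1,
    1 - 2 * screenSpaceCoord.y / screenHeight   // y measured from the top of the canvas
  );
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);
  // intersections are sorted nearest-first; each one carries a world-space .point
  const hits = raycaster.intersectObject(targetObject, true);
  return hits.length > 0 ? hits[0].point : null;
}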

How to accelerate calculations when update messive position from 3d to screen (hud)

I want to update HUD positions from 3D to 2D screen coordinates when the mouse moves. Since there may be a large number of 3D objects to project to screen positions, I'm running into a performance problem.
Is there any way to accelerate the calculations? The following is how I calculate a 3D object's position on the 2D screen.
function toScreenPosition(obj) {
var vector = new THREE.Vector3();
//calculate screen half size
var widthHalf = 0.5 * renderer.context.canvas.width;
var heightHalf = 0.5 * renderer.context.canvas.height;
//get 3d object position
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.project(this.camera);
//get 2d position on screen
vector.x = (vector.x * widthHalf) + widthHalf;
vector.y = -(vector.y * heightHalf) + heightHalf;
return {
x: vector.x,
y: vector.y
};
}
Rather than repositioning your HUD in world space every time your camera moves, add your HUD object(s) to your camera object and position them only once. Then, when your camera moves, your HUD moves along with it, because the camera's transformation is cascaded to its children.
yourCamera.add(yourHUD);
yourHUD.position.z = 10;
Note that doing it this way (or even positioning it the way you were) may allow scene objects to clip through your HUD geometry, or even appear between your HUD and the camera, obscuring the HUD. If that's what you want, great! If not, you could move your HUD to a second render pass, allowing it to remain "on top."
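One possible shape for that second render pass (a minimal sketch: hudScene is an assumed extra scene, scene is the main scene, and it assumes the camera doesn't otherwise need to remain a child of the main scene, since an object can only have one parent):
var hudScene = new THREE.Scene();
hudScene.add( yourCamera );          // yourCamera.add( yourHUD ) from above still applies
renderer.autoClear = false;          // we clear manually in the render loop
function render() {
    yourCamera.updateMatrixWorld();            // keep the camera (and HUD) matrices current for both passes
    renderer.clear();
    renderer.render( scene, yourCamera );      // main scene first
    renderer.clearDepth();                     // so main-scene geometry can't occlude the HUD
    renderer.render( hudScene, yourCamera );   // HUD drawn on top
}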
First, here is your function rewritten for (almost) optimal performance, as discussed in the comments above. The render loop is obviously just an example to illustrate where to make which calls:
var width = renderer.context.canvas.width;
var height = renderer.context.canvas.height;
// has to be called whenever the canvas-size changes
function onCanvasResize() {
width = renderer.context.canvas.width;
height = renderer.context.canvas.height;
}
var projMatrix = new THREE.Matrix4();
// renderloop-function, called per animation-frame
function render() {
// just needed once per frame (even better would be
// once per camera-movement)
projMatrix.multiplyMatrices(
camera.projectionMatrix,
projMatrix.getInverse(camera.matrixWorld)
);
hudObjects.forEach(function(obj) {
toScreenPosition(obj, projMatrix);
});
}
// wrapped in IIFE to store the local vector-variable (this pattern
// is used everywhere in three.js)
var toScreenPosition = (function() {
var vector = new THREE.Vector3();
return function __toScreenPosition(obj, projectionMatrix) {
// this could potentially be left away, but isn't too
// expensive as there are 'needsUpdate'-checks in place
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.applyMatrix4(projectionMatrix);
vector.x = (vector.x + 1) * width / 2;
vector.y = (1 - vector.y) * height / 2;
// might want to consider returning a Vector3-instance
// instead, depends on how the result is used
return {x: vector.x, y: vector.y};
}
}) ();
But, considering you want to render a HUD, it would be better to do that independently of the main-scene, making all of the above computations obsolete and also allowing you to choose a different coordinate-system for sizing and positioning of HUD-elements.
I have an example for this here: https://codepen.io/usefulthink/pen/ZKPvPB. There I used an orthographic camera and a separate scene to render HUD elements on top of the 3D scene. No extra computations required. Plus, I can specify the size and position of HUD elements conveniently in pixel units (the same would work using a perspective camera, it just requires a bit more trigonometry to get right).
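The core of that setup might look roughly like this (a sketch of the idea, not the exact codepen code; scene/camera are the main scene and camera):
var w = renderer.domElement.clientWidth;
var h = renderer.domElement.clientHeight;
// orthographic camera spanning the canvas in pixel units; (0, 0) is the bottom-left corner
var hudCamera = new THREE.OrthographicCamera( 0, w, h, 0, -10, 10 );
var hudScene = new THREE.Scene();
// example element: a 100x40 pixel plane, 20px in from the top-left corner
var label = new THREE.Mesh(
    new THREE.PlaneGeometry( 100, 40 ),
    new THREE.MeshBasicMaterial( { color: 0xffffff } )
);
label.position.set( 20 + 50, h - 20 - 20, 0 );   // position is the plane's center
hudScene.add( label );
renderer.autoClear = false;
function render() {
    renderer.clear();
    renderer.render( scene, camera );          // the main 3D scene
    renderer.clearDepth();
    renderer.render( hudScene, hudCamera );    // HUD overlay in pixel coordinates
}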

Why isn't my raycast intersecting anything?

I have this code, designed to find the mesh the user is clicking on:
// scene and camera are defined outside of this code
var mousePoint = new THREE.Vector2();
var raycaster = new THREE.Raycaster();
var intersections;
function onClick(event) {
mousePoint.x = event.clientX;
mousePoint.y = event.clientY;
raycaster.setFromCamera(mousePoint, camera);
intersections = raycaster.intersectObjects(
scene.children);
}
Yet every time I click, intersections comes back as an empty array, with nothing getting intersected. What am I doing wrong?
From the three.js documentation for Raycaster (emphasis mine):
.setFromCamera ( coords, camera )
coords — 2D coordinates of the mouse, in normalized device coordinates (NDC). X and Y components should be between -1 and 1.
camera — camera from which the ray should originate
Updates the ray with a new origin and direction.
Therefore, when setting the coordinates of mousePoint, instead of setting x and y directly to event.clientX and event.clientY, they should be converted to this coordinate space:
// calculate mouse position in normalized device coordinates
// (-1 to +1) for both components
mousePoint.x = (event.clientX / window.innerWidth) * 2 - 1;
mousePoint.y = (event.clientY / window.innerHeight) * -2 + 1;
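Putting that together with the question's variables, the corrected handler might look like this (a sketch; only the coordinate conversion changes):
function onClick(event) {
    // convert to normalized device coordinates before casting the ray
    mousePoint.x = (event.clientX / window.innerWidth) * 2 - 1;
    mousePoint.y = (event.clientY / window.innerHeight) * -2 + 1;
    raycaster.setFromCamera(mousePoint, camera);
    intersections = raycaster.intersectObjects(scene.children);
    if (intersections.length > 0) {
        console.log('clicked', intersections[0].object);
    }
}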

How to click an object in THREE.js

I'm working my way through this book, and I'm doing okay I guess, but I've hit something I do not really get.
Below is how you can log to the console an object in 3D space that you click on:
renderer.domElement.addEventListener('mousedown', function(event) {
var vector = new THREE.Vector3(
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
);
projector.unprojectVector(vector, camera);
var raycaster = new THREE.Raycaster(
camera.position,
vector.sub(camera.position).normalize()
);
var intersects = raycaster.intersectObjects(OBJECTS);
if (intersects.length) {
console.log(intersects[0]);
}
}, false);
Here's the book's explanation on how this code works:
The previous code listens to the mousedown event on the renderer's canvas.
Got that: we're finding the domElement the renderer is using via renderer.domElement. We're then binding an event listener to it with addEventListener, specifying that we want to listen for a mousedown. When the mouse is clicked, we launch an anonymous function and pass the event variable into it.
Then,
it creates a new Vector3 instance with the mouse's coordinates on the screen
relative to the center of the canvas as a percent of the canvas width.
What? I get how we're creating a new instance with new THREE.Vector3, and I get that the three arguments Vector3 takes are its x, y and z coordinates, but that's where my understanding completely and utterly breaks down.
Firstly, I'm making an assumption here, but to plot a vector, surely you need two points in space in order to project? If you give it just one set of coords, how does it know what direction to project from? My guess is that you actually use the Raycaster to plot the "vector"...
Now onto the arguments we're passing to Vector3... I get how z is 0. Because we're only interested in where we're clicking on the screen. We can either click up or down, left or right, but not into or out of the screen, so we set that to zero. Now let's tackle x:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
We're getting the pixel ratio of the device, multiplying it by where we clicked along the x axis, dividing by the renderer's domElement width, multiplying this by two and taking away one.
When you don't get something, you need to say what you do get so people can best help you out. So I feel like such a fool when I say:
I don't get why we even need the pixel ratio. I don't get why we multiply that by where we've clicked along the x axis.
I don't get why we divide that by the width
I utterly do not get why we need to multiply by 2 and take away 1. Multiply by 2, take away 1. That could genuinely be multiply by an elephant, take away a peanut, and it would make as much sense.
I get y even less:
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
Why are we now randomly using -devicePixelRatio? Why are we now deciding to add one rather than subtract one?
That vector is then un-projected (from 2D into 3D space) relative to the camera.
What?
Once we have the point in 3D space representing the mouse's location,
we draw a line to it using the Raycaster. The two arguments that it
receives are the starting point and the direction to the ending point.
Okay, I get that, it's what I was mentioning above. How we need two points to plot a "vector". In THREE talk, a vector appears to be called a "raycaster".
However, the two points we're passing to it as arguments don't make much sense. If we were passing in the camera's position and the vector's position and drawing the projection from those two points I'd get that, and indeed we are using camera.position for the first point, but
vector.sub(camera.position).normalize()
Why are we subtracting the camera.position? Why are we normalizing? Why does this useless f***** book not think to explain anything?
We get the direction by subtracting the mouse and camera positions and
then normalizing the result, which divides each dimension by the
length of the vector to scale it so that no dimension has a value
greater than 1.
What? I'm not being lazy; not a word past "by" makes sense here.
Finally, we use the ray to check which objects are located in the
given direction (that is, under the mouse) with the intersectObjects
method. OBJECTS is an array of objects (generally meshes) to check; be
sure to change it appropriately for your code. An array of objects
that are behind the mouse are returned and sorted by distance, so the
first result is the object that was clicked. Each object in the
intersects array has an object, point, face, and distance property.
Respectively, the values of these properties are the clicked object
(generally a Mesh), a Vector3 instance representing the clicked
location in space, the Face3 instance at the clicked location, and the
distance from the camera to the clicked point.
I get that. We grab all the objects the vector passes through, put them into an array in distance order, and log the first one, i.e. the nearest one:
console.log(intersects[0]);
And, in all honesty, do you think I should give up on THREE? I mean, I've certainly gotten somewhere with it, and I understand all the programming aspects of it: creating new instances, using data structures such as arrays, using anonymous functions and passing in variables. But whenever I hit something mathematical I seem to grind to a soul-crushing halt.
Or is this actually difficult? Did you find this tricky? It's just that the book doesn't feel it's necessary to explain in much detail, and neither do other answers, as though this stuff is just normal for most people. I feel like such an idiot. Should I give up? I want to create 3D games. I really, really want to. But I am drawn to the poetic idea of creating an entire world, not math. If I said I didn't find math difficult, I would be lying.
I understand your troubles and I'm here to help. It seems you have one principal question: what operations are performed on the vector to prepare it for click detection?
Let's look back at the original declaration of vector:
var vector = new THREE.Vector3(
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
);
renderer.devicePixelRatio is the ratio of physical device pixels to virtual site (CSS) pixels
event.pageX and .pageY are mouseX, mouseY
The this context is renderer.domElement, so .width, .height, .offsetLeft/.offsetTop relate to that
The trailing * 2 - 1 (or * 2 + 1) remaps a 0-to-1 fraction of the canvas onto the -1-to-+1 range the projector expects
We don't care about the z-value, THREE will handle that for us. X and Y are our chief concern. Let's derive them:
We first find the distance of the mouse to the edge of the canvas: event.pageX - this.offsetLeft
We divide that by this.width to get the mouseX as a fraction of the canvas width (a value between 0 and 1)
We multiply by renderer.devicePixelRatio to account for displays (such as retina displays) where site pixels and device pixels differ
We multiply by 2 and subtract 1 to remap that 0-to-1 fraction onto the -1-to-+1 range: 0 becomes -1 (left edge), 0.5 becomes 0 (center), 1 becomes +1 (right edge)
For y, we multiply the whole expression by -1 to compensate for the inverted coordinate system (0 is the top, this.height is the bottom), which is also why we add 1 at the end instead of subtracting it
Thus you get the following arguments for the vector:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
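As a quick worked example (illustrative numbers, assuming a devicePixelRatio of 1, no offset, and a 1000px-wide canvas):
// click at pageX = 750:
// 1 * (750 - 0) / 1000 * 2 - 1  =  0.75 * 2 - 1  =  0.5   (halfway between the center and the right edge)
// click at pageX = 0    gives -1 (left edge)
// click at pageX = 1000 gives +1 (right edge)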
Now, for the next bit, a few terms:
Normalizing a vector means scaling it so that its magnitude (length) becomes 1 while its direction stays the same. To do so, you simply divide the x, y, and z components of the vector by the magnitude of the vector. It seems useless, but it's important because it creates a unit vector (magnitude = 1) pointing in the direction of the mouse vector!
A Raycaster casts a vector through the 3D landscape produced in the canvas. Its constructor is THREE.Raycaster( origin, direction )
With these terms in mind, I can explain why we do this: vector.sub(camera.position).normalize(). First, we get the vector describing the distance from the mouse position vector to the camera position vector, vector.sub(camera.position). Then, we normalize it to make it a direction vector (again, magnitude = 1). This way, we're casting a vector from the camera to the 3D space in the direction of the mouse position! This operation allows us to then figure out any objects that are under the mouse by comparing the object position to the ray's vector.
I hope this helps. If you have any more questions, feel free to comment and I will answer them as soon as possible.
Oh, and don't let the math discourage you. THREE.js is by nature a math-heavy language because you're manipulating objects in 3D space, but experience will help you get past these kinds of understanding roadblocks. I would continue learning and return to Stack Overflow with your questions. It may take some time to develop an aptitude for the math, but you won't learn if you don't try!
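As a side note, the Projector-based approach in the book has since been deprecated; in current versions of three.js the same click detection is usually written with Raycaster.setFromCamera, roughly like this (a sketch, assuming the camera, OBJECTS and renderer from the book's example):
renderer.domElement.addEventListener('mousedown', function (event) {
    var rect = renderer.domElement.getBoundingClientRect();
    var mouse = new THREE.Vector2(
        ((event.clientX - rect.left) / rect.width) * 2 - 1,
        -((event.clientY - rect.top) / rect.height) * 2 + 1
    );
    var raycaster = new THREE.Raycaster();
    raycaster.setFromCamera(mouse, camera);   // does the unproject + direction math for you
    var intersects = raycaster.intersectObjects(OBJECTS);
    if (intersects.length) {
        console.log(intersects[0]);
    }
}, false);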
This is more universal: it works regardless of where the renderer's DOM element is located, and regardless of the padding and margins of that element and its ancestors.
var rect = renderer.domElement.getBoundingClientRect();
mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
Here is a demo; scroll to the bottom to click the cube.
<!DOCTYPE html>
<html>
<head>
<script src="http://threejs.org/build/three.min.js"></script>
<link rel="stylesheet" href="http://libs.baidu.com/bootstrap/3.0.3/css/bootstrap.min.css" />
<style>
body {
font-family: Monospace;
background-color: #fff;
margin: 0px;
}
#canvas {
background-color: #000;
width: 200px;
height: 200px;
border: 1px solid black;
margin: 10px;
padding: 0px;
top: 10px;
left: 100px;
}
.border {
padding:10px;
margin:10px;
height:3000px;
overflow:scroll;
}
</style>
</head>
<body>
<div class="border">
<div style="min-height:1000px;"></div>
<div class="border">
<div id="canvas"></div>
</div>
</div>
<script>
// Three.js ray.intersects with offset canvas
var container, camera, scene, renderer, mesh,
objects = [],
count = 0,
CANVAS_WIDTH = 200,
CANVAS_HEIGHT = 200;
// info
info = document.createElement( 'div' );
info.style.position = 'absolute';
info.style.top = '30px';
info.style.width = '100%';
info.style.textAlign = 'center';
info.style.color = '#f00';
info.style.backgroundColor = 'transparent';
info.style.zIndex = '1';
info.style.fontFamily = 'Monospace';
info.innerHTML = 'INTERSECT Count: ' + count;
info.style.userSelect = "none";
info.style.webkitUserSelect = "none";
info.style.MozUserSelect = "none";
document.body.appendChild( info );
container = document.getElementById( 'canvas' );
renderer = new THREE.WebGLRenderer();
renderer.setSize( CANVAS_WIDTH, CANVAS_HEIGHT );
container.appendChild( renderer.domElement );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, CANVAS_WIDTH / CANVAS_HEIGHT, 1, 1000 );
camera.position.y = 250;
camera.position.z = 500;
camera.lookAt( scene.position );
scene.add( camera );
scene.add( new THREE.AmbientLight( 0x222222 ) );
var light = new THREE.PointLight( 0xffffff, 1 );
camera.add( light );
mesh = new THREE.Mesh(
new THREE.BoxGeometry( 200, 200, 200, 1, 1, 1 ),
new THREE.MeshPhongMaterial( { color : 0x0080ff }
) );
scene.add( mesh );
objects.push( mesh );
// find intersections
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
// mouse listener
document.addEventListener( 'mousedown', function( event ) {
var rect = renderer.domElement.getBoundingClientRect();
mouse.x = ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1;
mouse.y = - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1;
raycaster.setFromCamera( mouse, camera );
var intersects = raycaster.intersectObjects( objects );
if ( intersects.length > 0 ) {
info.innerHTML = 'INTERSECT Count: ' + ++count;
}
}, false );
function render() {
mesh.rotation.y += 0.01;
renderer.render( scene, camera );
}
(function animate() {
requestAnimationFrame( animate );
render();
})();
</script>
</body>
</html>
