Three.js, unexpected position shift when scaling object

I'm trying to create a zoom box. So far I've managed to translate the cursor position from local to world coordinates and create a box object around the cursor with the right UVs.
Here is the fiddle of my attempt: https://jsfiddle.net/2ynfedvk/2/
Without scaling, the box is perfectly centered around the cursor, but if you toggle the scaling checkbox to set the scale zoomMesh.scale.set(1.5, 1.5, 1), the box position shifts further the more you move the cursor away from the scene center.
Am I missing some three.js equivalent of the CSS "transform-origin" to center the scale around the object, and is this the right approach to get this kind of zoom effect?
I'm new to three.js and 3D in general, so thanks for any help.

When you scale your mesh by 1.5, a transform matrix is applied that scales the vertex coordinates.
The issue comes from how the vertices change. Vertices live in the local space of the mesh. When you set the top-left vertex of the square to, say, [10, 10, 0] and then apply .scale.set(1.5, 1.5, 1) to the mesh, that vertex ends up at [15, 15, 0]. The same happens to the other three vertices. That's why the center of the square ends up 1.5 times as far from the center of the picture as the mouse pointer.
So one option is not to scale the mesh, but to change the size of the square itself.
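To see that effect in isolation, here is a tiny sketch I added (with made-up numbers, not code from the fiddle):
// Sketch: mesh.scale multiplies local vertex coordinates.
const geo = new THREE.BufferGeometry();
geo.setAttribute('position', new THREE.BufferAttribute(new Float32Array([10, 10, 0]), 3));

const mesh = new THREE.Mesh(geo, new THREE.MeshBasicMaterial());
mesh.scale.set(1.5, 1.5, 1);
mesh.updateMatrixWorld();

// read the vertex back and apply the mesh's world matrix
const v = new THREE.Vector3().fromBufferAttribute(geo.attributes.position, 0);
v.applyMatrix4(mesh.matrixWorld);
console.log(v.x, v.y); // 15 15 — the vertex has moved away from where it was placed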
I changed your fiddle a bit, so maybe it will be more explanatory:
const
  [width, height] = [500, 300],
  canvas = document.querySelector('canvas'),
  scaleCheckBox = document.querySelector('input');

console.log(scaleCheckBox)

canvas.width = width;
canvas.height = height;

const
  scene = new THREE.Scene(),
  renderer = new THREE.WebGLRenderer({canvas}),
  camDistance = 5,
  camFov = 2 * Math.atan(height / (2 * camDistance)) * (180 / Math.PI),
  camera = new THREE.PerspectiveCamera(camFov, width / height, 0.1, 1000);

camera.position.z = camDistance;

const
  texture = new THREE.TextureLoader().load("https://picsum.photos/500/300"),
  imageMaterial = new THREE.MeshBasicMaterial({map: texture, side: 0});

texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;

const
  planeGeometry = new THREE.PlaneGeometry(width, height),
  planeMesh = new THREE.Mesh(planeGeometry, imageMaterial);

const
  zoomGeometry = new THREE.BufferGeometry(),
  zoomMaterial = new THREE.MeshBasicMaterial({map: texture, side: 0}),
  zoomMesh = new THREE.Mesh(zoomGeometry, zoomMaterial);

zoomMaterial.color.set(0xff0000);
zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  0, 0, 0,
  0, 0, 0,
  0, 0, 0,
  0, 0, 0
]), 3));
zoomGeometry.setIndex([
  0, 1, 2,
  2, 1, 3
]);

scene.add(planeMesh);
scene.add(zoomMesh);
var zoom = 1.;

function setZoomBox(e) {
  const
    size = 50 * zoom,
    x = e.clientX - (size / 2),
    y = -(e.clientY - height) - (size / 2),
    coords = [
      x,
      y,
      x + size,
      y + size
    ];

  const [x1, y1, x2, y2] = [
    coords[0] - (width / 2),
    coords[1] - (height / 2),
    coords[2] - (width / 2),
    coords[3] - (height / 2)
  ];
  zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
    x1, y1, 0,
    x2, y1, 0,
    x1, y2, 0,
    x2, y2, 0
  ]), 3));

  const [u1, v1, u2, v2] = [
    coords[0] / width,
    coords[1] / height,
    coords[2] / width,
    coords[3] / height
  ];
  // four UV pairs, one per indexed vertex
  zoomGeometry.setAttribute('uv', new THREE.BufferAttribute(new Float32Array([
    u1, v1,
    u2, v1,
    u1, v2,
    u2, v2
  ]), 2));
}

function setScale(e) {
  //zoomMesh.scale.set(...(scaleCheckBox.checked ? [1.5, 1.5, 1] : [1, 1, 1]));
  zoom = scaleCheckBox.checked ? 1.5 : 1;
}

function render() {
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
render();

canvas.addEventListener('mousemove', setZoomBox);
scaleCheckBox.addEventListener('change', setScale);
html, body {
  margin: 0;
  height: 100%;
}
body {
  background: #333;
  color: #FFF;
  font: bold 16px arial;
}
<script src="https://threejs.org/build/three.min.js"></script>
<canvas></canvas>
<div>Toggle scale <input type="checkbox" /></div>

Thanks for the answer. It's not quite what I was looking for (I want to not only resize the square but also zoom in on the image), but you pointed me in the right direction.
Like you said, the position coordinates shift with the scale, so I have to recalculate the new position relative to the scale.
I added these new lines, using new scale and offset variables:
if (scaleCheckBox.checked) {
  const offset = scale - 1;
  // shift the mesh back so the scaled square stays centered on the cursor
  zoomMesh.position.set(
    -(x1 * offset) - (size * scale) / 2 - (size / 2),
    -(y1 * offset) - (size * scale) / 2 - (size / 2),
    0
  );
}
Here is the working fiddle: https://jsfiddle.net/dc9f5v0m/
It's a bit messy, with a lot of recalculation (especially to center the square around the cursor), but it gets the job done, and the zoom effect can be achieved with any shape, not only a square.
Thanks again for your help.

Related

How to make a not squared texture fit in a "background-size:cover" way to a geometry plane in Three.js?

I want my texture to have the same behaviour as the "background-size: cover" CSS property.
I'd like to work with UV coordinates.
I looked at this answer and started to work on a solution: Three.js Efficiently Mapping Uvs to Plane
I'm trying to give the planes the same dimensions/positions as some divs in my DOM.
This is what I want:
And this is the result I get with this code: the dimensions and positions are good, and the texture's ratio looks good too, but it seems like there's a scale issue:
let w = domElmt.clientWidth / window.innerHeight;
let h = domElmt.clientHeight / window.innerHeight;
geometry = new THREE.PlaneGeometry(w, h);

var uvs = geometry.faceVertexUvs[0];
uvs[0][0].set(0, h);
uvs[0][1].set(0, 0);
uvs[0][2].set(w, h);
uvs[1][0].set(0, 0);
uvs[1][1].set(w, 0);
uvs[1][2].set(w, h);

tex = new THREE.TextureLoader().load('image.jpg');
tex.wrapS = tex.wrapT = THREE.RepeatWrapping;
material = new THREE.MeshBasicMaterial({map: tex});
mesh = new THREE.Mesh(geometry, material);
Should I play with the repeat attribute of my texture, or can I fully achieve this behaviour using UVs? Thank you.
https://en.wikipedia.org/wiki/UV_mapping
UV mapping values range from 0 to 1, inclusive, and represent a percentage mapping across your texture image.
You're using a ratio of the div size vs the window size, which is likely much smaller than 1, and would result in the "zoomed in" effect you're seeing.
For example, if your w and h result in the value 0.5, then the furthest top-right corner of the mapped texture will be the exact center of the image.
background-size: cover:
Scales the image as large as possible without stretching the image. If the proportions of the image differ from the element, it is cropped either vertically or horizontally so that no empty space remains.
In other words, it will scale the image based on the size of the short side, and crop the rest. So let's assume you have a nice 128x512 image and a 64x64 space. cover would scale the 128px width down to 64 (a scale factor of 0.5). At that scale, the 64px-tall container spans 64 / 0.5 = 128 of the image's 512 vertical pixels. Now your w would still be 1, but your h will be 128 / 512 = 0.25. Your texture will now fit to the width, and crop the height.
You'll need to perform this calculation for each image-to-container size relationship to find the proper UVs, keeping in mind that the scaling is always relative to the short side.
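As a sketch of that calculation (a hypothetical helper I added, not code from either answer):
// Minimal sketch: UV extents for "background-size: cover" behaviour.
// imageW/imageH and containerW/containerH are pixel sizes.
function coverUVs(imageW, imageH, containerW, containerH) {
  // scale factor that makes the image just cover the container
  const scale = Math.max(containerW / imageW, containerH / imageH);
  // fraction of the image that remains visible along each axis
  const u = containerW / (imageW * scale);
  const v = containerH / (imageH * scale);
  // center the crop
  return {
    u0: (1 - u) / 2, v0: (1 - v) / 2,
    u1: (1 + u) / 2, v1: (1 + v) / 2
  };
}

// The 128x512 image in a 64x64 container from the example above:
console.log(coverUVs(128, 512, 64, 64)); // { u0: 0, v0: 0.375, u1: 1, v1: 0.625 }
Those extents can then be written into faceVertexUvs exactly as in the question's code.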
You don't need to generate UVs, you can just use texture.repeat and texture.offset
const aspectOfPlane = planeWidth / planeHeight;
const aspectOfImage = image.width / image.height;
let yScale = 1;
let xScale = aspectOfPlane / aspectOfImage;
if (xScale > 1) { // it doesn't cover, so base it on x instead
  xScale = 1;
  yScale = aspectOfImage / aspectOfPlane;
}
texture.repeat.set(xScale, yScale);
texture.offset.set((1 - xScale) / 2, (1 - yScale) / 2);
'use strict';
/* global THREE */

async function main() {
  const canvas = document.querySelector('#c');
  const renderer = new THREE.WebGLRenderer({canvas});

  const fov = 75;
  const aspect = 2;  // the canvas default
  const near = 0.1;
  const far = 50;
  const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
  camera.position.z = 4;

  const scene = new THREE.Scene();

  const loader = new THREE.TextureLoader();
  function loadTexture(url) {
    return new Promise((resolve, reject) => {
      loader.load(url, resolve, undefined, reject);
    });
  }

  const textures = await Promise.all([
    "https://i.imgur.com/AyOufBk.jpg",
    "https://i.imgur.com/ZKMnXce.png",
    "https://i.imgur.com/TSiyiJv.jpg",
    "https://i.imgur.com/v38pV.jpg",
  ].map(loadTexture));

  const geometry = new THREE.PlaneBufferGeometry(1, 1);
  const material = new THREE.MeshBasicMaterial({map: textures[0]});
  const planeMesh = new THREE.Mesh(geometry, material);
  scene.add(planeMesh);

  let texIndex = 0;
  function setTexture() {
    const texture = textures[texIndex];
    texIndex = (texIndex + 1) % textures.length;

    // pick a random width and height for the plane
    const planeWidth = rand(1, 4);
    const planeHeight = rand(1, 4);
    planeMesh.scale.set(planeWidth, planeHeight, 1);

    const image = texture.image;
    const aspectOfPlane = planeWidth / planeHeight;
    const aspectOfImage = image.width / image.height;
    let yScale = 1;
    let xScale = aspectOfPlane / aspectOfImage;
    if (xScale > 1) { // it doesn't cover, so base it on x instead
      xScale = 1;
      yScale = aspectOfImage / aspectOfPlane;
    }
    texture.repeat.set(xScale, yScale);
    texture.offset.set((1 - xScale) / 2, (1 - yScale) / 2);

    material.map = texture;
  }
  setTexture();
  setInterval(setTexture, 1000);

  function resizeRendererToDisplaySize(renderer) {
    const canvas = renderer.domElement;
    const width = canvas.clientWidth;
    const height = canvas.clientHeight;
    const needResize = canvas.width !== width || canvas.height !== height;
    if (needResize) {
      renderer.setSize(width, height, false);
    }
    return needResize;
  }

  function render(time) {
    time *= 0.001;

    if (resizeRendererToDisplaySize(renderer)) {
      const canvas = renderer.domElement;
      camera.aspect = canvas.clientWidth / canvas.clientHeight;
      camera.updateProjectionMatrix();
    }

    renderer.render(scene, camera);
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
}

function rand(min, max) {
  if (max === undefined) {
    max = min;
    min = 0;
  }
  return Math.random() * (max - min) + min;
}

main();
body {
  margin: 0;
}
#c {
  width: 100vw;
  height: 100vh;
  display: block;
}
<canvas id="c"></canvas>
<script src="https://threejsfundamentals.org/threejs/resources/threejs/r105/three.min.js"></script>
texture.repeat and texture.offset are really just applied to the UVs, so if you really want UVs, it's:
newU = u * repeat.x + offset.x;
newV = v * repeat.y + offset.y;
so using the code above
offsetX = (1 - xScale) / 2;
offsetY = (1 - yScale) / 2;
u0 = offsetX;
v0 = offsetY;
u1 = offsetX + xScale;
v1 = offsetY + yScale;
so
var uvs = geometry.faceVertexUvs[ 0 ];
uvs[ 0 ][ 0 ].set( u0, v1 );
uvs[ 0 ][ 1 ].set( u0, v0 );
uvs[ 0 ][ 2 ].set( u1, v1 );
uvs[ 1 ][ 0 ].set( u0, v0 );
uvs[ 1 ][ 1 ].set( u1, v0 );
uvs[ 1 ][ 2 ].set( u1, v1 );

ThreeJS: Getting world coordinates from camera view

I want to animate a Plane's vertices to fill the screen. (Vertices, as this is the effect I want; I'm hoping to animate each vertex with a short delay to then fill the screen.)
As a proof of concept, I've got a vertex to animate off to a random point, using the function below:
tileClick() {
  var geo = this.SELECTED.geometry;
  var mat = this.SELECTED.material as THREE.MeshBasicMaterial;

  TweenMax.TweenLite.to(geo.vertices[0], 0.3, {
    x: -5,
    y: 5,
    onUpdate: () => {
      mat.needsUpdate = true;
      geo.colorsNeedUpdate = true;
      geo.elementsNeedUpdate = true;
    },
    ease: TweenMax.Elastic.easeOut.config(1, 0.5)
  });
}
However, now I need to work out the corner points of the current camera view. In pseudo code: camera.view.getBoundingClientRect();
Plnkr of WIP - https://next.plnkr.co/edit/Jm4D2zgLtiKBGghC
I believe what you need is THREE.Vector3.unproject. With this method, you can set the vector to x, y, z in normalized screen coordinates, and it'll give you back x, y, z in world coordinates:
var vector = new THREE.Vector3();
var zNearPlane = -1;
var zFarPlane = 1;
// Top left corner
vector.set( -1, 1, zNearPlane ).unproject( camera );
// Top right corner
vector.set( 1, 1, zNearPlane ).unproject( camera );
// Bottom left corner
vector.set( -1, -1, zNearPlane ).unproject( camera );
// Bottom right corner
vector.set( 1, -1, zNearPlane ).unproject( camera );
Notice that all inputs are in the [-1, 1] range:
x:-1 = left side of screen
x: 1 = right side of screen
y: 1 = top
y:-1 = bottom
z: 1 = far plane
z: -1 = near plane
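To go from those near-plane points to the visible region on a given plane, you can extend each ray from the camera. Here's a sketch I added (assuming the target plane is z = 0 in world space, and reusing the unproject-and-extend approach from the mouse-coordinates question further down):
// Sketch: world-space corners of the camera view on the z = 0 plane.
function viewCornerAtZ0(ndcX, ndcY, camera) {
  const v = new THREE.Vector3(ndcX, ndcY, 0.5).unproject(camera);
  const dir = v.sub(camera.position).normalize();
  const distance = -camera.position.z / dir.z; // extend until z hits 0
  return camera.position.clone().add(dir.multiplyScalar(distance));
}

const topLeft     = viewCornerAtZ0(-1,  1, camera);
const topRight    = viewCornerAtZ0( 1,  1, camera);
const bottomLeft  = viewCornerAtZ0(-1, -1, camera);
const bottomRight = viewCornerAtZ0( 1, -1, camera);
These corners could then serve as the tween targets for the plane's vertices.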

Flickering of THREE.Points based on camera position and texture coordinates, but only on Nvidia cards

I have a problem with flickering of THREE.Points depending on their UV coordinates, as seen in the following codepen: http://codepen.io/anon/pen/qrdQeY?editors=0010
The code in the codepen is condensed down as much as possible (171 lines), but to summarize what I'm doing:
Rendering sprites using THREE.Points
BufferGeometry contains spritesheet index and position for each sprite
RawShaderMaterial with custom vertex and pixel shader to lookup up the UV coordinates of the sprite for the given index
a 128x128px spritesheet with 4x4 cells contains the sprites
Here's the code:
/// FRAGMENT SHADER ===========================================================
const fragmentShader = `
  precision highp float;

  uniform sampler2D spritesheet;

  // number of spritesheet subdivisions both vertically and horizontally
  // e.g. for a 4x4 spritesheet this number is 4
  uniform float spritesheetSubdivisions;

  // vParams[i].x = sprite index
  // vParams[i].z = sprite alpha
  varying vec3 vParams;

  /**
   * Maps regular UV coordinates spanning the entire spritesheet
   * to a specific sprite within the spritesheet based on the given index,
   * which points into a spritesheet cell (depending on spritesheetSubdivisions
   * and assuming that the spritesheet is regular and square).
   */
  vec2 spriteIndexToUV(float idx, vec2 uv) {
    float cols = spritesheetSubdivisions;
    float rows = spritesheetSubdivisions;

    float x = mod(idx, cols);
    float y = floor(idx / cols);

    return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
  }

  void main() {
    vec2 uv = spriteIndexToUV(vParams.x, gl_PointCoord);
    vec4 diffuse = texture2D(spritesheet, uv);

    float alpha = diffuse.a * vParams.z;
    if (alpha < 0.5) discard;

    gl_FragColor = vec4(diffuse.xyz, alpha);
  }
`
// VERTEX SHADER ==============================================================
const vertexShader = `
  precision highp float;

  uniform mat4 modelViewMatrix;
  uniform mat4 projectionMatrix;
  uniform float size;
  uniform float scale;

  attribute vec3 position;
  attribute vec3 params; // x = sprite index, y = unused, z = sprite alpha
  attribute vec3 color;

  varying vec3 vParams;

  void main() {
    vParams = params;

    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * mvPosition;

    gl_PointSize = size * (scale / -mvPosition.z);
  }
`
// THREEJS CODE ===============================================================
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("#mycanvas")});
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setClearColor(0xf0f0f0);

const pointGeometry = new THREE.BufferGeometry();
pointGeometry.addAttribute("position", new THREE.BufferAttribute(new Float32Array([
  -1.5, -1.5, 0,
  -0.5, -1.5, 0,
   0.5, -1.5, 0,
   1.5, -1.5, 0,
  -1.5, -0.5, 0,
  -0.5, -0.5, 0,
   0.5, -0.5, 0,
   1.5, -0.5, 0,
  -1.5,  0.5, 0,
  -0.5,  0.5, 0,
   0.5,  0.5, 0,
   1.5,  0.5, 0,
  -1.5,  1.5, 0,
  -0.5,  1.5, 0,
   0.5,  1.5, 0,
   1.5,  1.5, 0,
]), 3));
pointGeometry.addAttribute("params", new THREE.BufferAttribute(new Float32Array([
   0, 0, 1, // sprite index 0 (row 0, column 0)
   1, 0, 1, // sprite index 1 (row 0, column 1)
   2, 0, 1, // sprite index 2 (row 0, column 2)
   3, 0, 1, // sprite index 3 (row 0, column 3)
   4, 0, 1, // sprite index 4 (row 1, column 0)
   5, 0, 1, // sprite index 5 (row 1, column 1)
   6, 0, 1, // ...
   7, 0, 1,
   8, 0, 1,
   9, 0, 1,
  10, 0, 1,
  11, 0, 1,
  12, 0, 1,
  13, 0, 1,
  14, 0, 1,
  15, 0, 1
]), 3));

const img = document.querySelector("img");
const texture = new THREE.TextureLoader().load(img.src);

const pointMaterial = new THREE.RawShaderMaterial({
  transparent: true,
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  uniforms: {
    spritesheet: { type: "t", value: texture },
    spritesheetSubdivisions: { type: "f", value: 4 },
    size: { type: "f", value: 1 },
    scale: { type: "f", value: window.innerHeight / 2 }
  }
});

const points = new THREE.Points(pointGeometry, pointMaterial);
scene.add(points);

const render = function (timestamp) {
  requestAnimationFrame(render);

  camera.position.z = 5 + Math.sin(timestamp / 1000.0);

  renderer.render(scene, camera);
};
render();

// resize viewport
window.addEventListener('resize', onWindowResize, false);
function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
}
If you have an Nvidia card, you will see three sprites flicker while the camera is moving back and forth along the Z axis. On integrated Intel graphics chips the problem does not occur.
I'm not sure how to solve this problem. The affected uv coordinates seem kind of random. I'd be grateful for any pointers.
The mod()/floor() calculations inside your spriteIndexToUV() function are causing problems in certain configurations (when the sprite index is a multiple of spritesheetSubdivisions).
I could fix it by tweaking the cols variable with a small epsilon:
vec2 spriteIndexToUV(float idx, vec2 uv)
{
  float cols = spritesheetSubdivisions - 1e-6; // subtract epsilon
  float rows = spritesheetSubdivisions;

  float x = mod(idx, cols);
  float y = floor(idx / cols);

  return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
}
PS: That codepen stuff is really cool, didn't know that this existed :-)
edit: It might be even better/clearer to write it like this:
float cols = spritesheetSubdivisions;
float rows = spritesheetSubdivisions;

float y = floor((idx + 0.5) / cols);
float x = idx - cols * y;
That way, we keep totally clear of any critical situations in the floor operation -- plus we get rid of the mod() call.
As to why floor (idx/4) is sometimes producing 0 instead of 1 when idx should be exactly 4.0, I can only speculate that the varying vec3 vParams is subjected to some interpolation when it goes from the vertex-shader to the fragment-shader stage, thus leading to the fragment-shader seeing e.g. 3.999999 instead of exactly 4.0.
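If that interpolation theory is right, another option (a sketch I added, not from the original answer) is to snap the index back to an integer at the top of the lookup, instead of biasing cols:
// Sketch: round the interpolated sprite index before the cell math.
// Drop-in replacement for spriteIndexToUV() in the fragment shader above.
const spriteIndexToUVSnippet = `
  vec2 spriteIndexToUV(float idx, vec2 uv) {
    idx = floor(idx + 0.5); // guard against idx arriving as e.g. 3.999999

    float cols = spritesheetSubdivisions;
    float rows = spritesheetSubdivisions;

    float x = mod(idx, cols);
    float y = floor(idx / cols);

    return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
  }
`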

Molecule angles building

I'm trying to build a CH4 molecule with three.js, but I run into trouble when I try to build the 109.5° angle:
methanum = function(x, y, z) {
  molecule = new THREE.Object3D();

  var startPosition = new THREE.Vector3(0, 0, 0);
  molecule.add(atom(startPosition, "o"));

  var secondPosition = new THREE.Vector3(-20, 10, 0);
  molecule.add(atom(secondPosition, "h"));

  var angle = 109.5;
  var matrix = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 1, 0), angle * (Math.PI / 180));

  var thirdPosition = secondPosition.applyMatrix4(matrix);
  molecule.add(atom(thirdPosition, "h"));

  var fourthPosition = thirdPosition.applyMatrix4(matrix);
  molecule.add(atom(thirdPosition, "h"));

  molecule.position.set(x, y, z);
  molecule.rotation.set(x, y, z);

  scene.add(molecule);
}
Demo: https://dl.dropboxusercontent.com/u/6204711/3d/ch4.html
But my atoms are not uniformly distributed as in the drawing
Some ideas?
Well, there are three errors in your molecule code:
1. You place an oxygen at the center of the CH4 instead of a carbon.
2. When you add your fourth hydrogen, you pass thirdPosition even though you have created fourthPosition.
3. You are rotating around the wrong axis when you place your third hydrogen.
My hints are the following: first, place your carbon; then move along the Z-axis and place your first hydrogen; rotate around the X-axis by 109.5° and place your second hydrogen; rotate your second hydrogen's position around the Z-axis by 120° and place your third hydrogen; finally, rotate your third hydrogen's position around the Z-axis by another 120° and place your last hydrogen.
Here is the CH4 I tried:
methanum3 = function(x, y, z) {
  molecule = new THREE.Object3D();

  var startPosition = new THREE.Vector3(0, 0, 0);
  molecule.add(atom(startPosition, "c"));

  var axis = new THREE.AxisHelper(50);
  axis.position.set(0, 0, 0);
  molecule.add(axis);

  var secondPosition = new THREE.Vector3(0, 0, -40);
  molecule.add(atom(secondPosition, "h"));

  var angle = 109.5;
  var matrixX = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), angle * (Math.PI / 180));
  var thirdPosition = secondPosition.applyMatrix4(matrixX);
  molecule.add(atom(thirdPosition, "h"));

  var matrixZ = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), 120 * (Math.PI / 180));
  var fourthPosition = thirdPosition.applyMatrix4(matrixZ);
  molecule.add(atom(fourthPosition, "h"));

  var fifthPosition = fourthPosition.applyMatrix4(matrixZ);
  molecule.add(atom(fifthPosition, "h"));

  molecule.position.set(x, y, z);
  //molecule.rotation.set(x, y, z);
  scene.add(molecule);
}

//water(0,0,0);
//water(30,60,0);
methanum3(-30, 60, 0);
Explanation:
Let's call one hydrogen H1 and another one H2. The given angle of 109.5° is defined in the plane spanned by the vectors CH1 and CH2. Therefore, when you look along the normal of that plane, you see the 109.5° angle (cf. the right part of the image below), BUT when you look along the normal of another plane, you get the projection of that angle onto that plane. In your case, when you look along the Z-axis, you see an angle of 120° (cf. the left part of the image below).
The two angles are different according to the direction of the camera.
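As a quick numeric check (a sketch I added, not part of the original answer), the construction above can be verified with Vector3.angleTo:
// Sketch: verify the H-C-H angles produced by methanum3's rotations
// (carbon at the origin, first hydrogen along -Z).
const h1 = new THREE.Vector3(0, 0, -40);

const rotX = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(1, 0, 0), 109.5 * Math.PI / 180);
const h2 = h1.clone().applyMatrix4(rotX);

const rotZ = new THREE.Matrix4().makeRotationAxis(new THREE.Vector3(0, 0, 1), 120 * Math.PI / 180);
const h3 = h2.clone().applyMatrix4(rotZ);

console.log(h1.angleTo(h2) * 180 / Math.PI); // 109.5 by construction
console.log(h2.angleTo(h3) * 180 / Math.PI); // ≈ 109.44 — the ideal tetrahedral angle is acos(-1/3) ≈ 109.47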
Hope this helps.

Mouse / Canvas X, Y to Three.js World X, Y, Z

I've searched around for an example that matches my use case but cannot find one. I'm trying to convert screen mouse co-ordinates into 3D world co-ordinates taking into account the camera.
Solutions I've found all do ray intersection to achieve object picking.
What I am trying to do is position the center of a Three.js object at the co-ordinates that the mouse is currently "over".
My camera is at x:0, y:0, z:500 (although it will move during the simulation) and all my objects are at z = 0 with varying x and y values, so I need to know the world X, Y assuming z = 0 for the object that will follow the mouse position.
This question looks like a similar issue but doesn't have a solution: Getting coordinates of the mouse in relation to 3D space in THREE.js
Given the mouse position on screen with a range of "top-left = 0, 0 | bottom-right = window.innerWidth, window.innerHeight", can anyone provide a solution to move a Three.js object to the mouse co-ordinates along z = 0?
You do not need to have any objects in your scene to do this.
You already know the camera position.
Using vector.unproject( camera ) you can get a ray pointing in the direction you want.
You just need to extend that ray, from the camera position, until the z-coordinate of the tip of the ray is zero.
You can do that like so:
var vec = new THREE.Vector3(); // create once and reuse
var pos = new THREE.Vector3(); // create once and reuse
vec.set(
( event.clientX / window.innerWidth ) * 2 - 1,
- ( event.clientY / window.innerHeight ) * 2 + 1,
0.5 );
vec.unproject( camera );
vec.sub( camera.position ).normalize();
var distance = - camera.position.z / vec.z;
pos.copy( camera.position ).add( vec.multiplyScalar( distance ) );
The variable pos is the position of the point in 3D space, "under the mouse", and in the plane z=0.
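For the original use case (an object that follows the mouse), a minimal usage sketch, assuming a mesh of your own called followerMesh:
// Sketch: keep an object centered under the mouse on the z = 0 plane.
window.addEventListener('mousemove', function (event) {
  vec.set(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1,
    0.5);
  vec.unproject(camera);
  vec.sub(camera.position).normalize();

  var distance = -camera.position.z / vec.z;
  pos.copy(camera.position).add(vec.multiplyScalar(distance));

  followerMesh.position.copy(pos); // followerMesh is your own mesh
});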
EDIT: If you need the point "under the mouse" and in the plane z = targetZ, replace the distance computation with:
var distance = ( targetZ - camera.position.z ) / vec.z;
three.js r.98
This worked for me when using an orthographic camera:
let vector = new THREE.Vector3();
vector.set(
(event.clientX / window.innerWidth) * 2 - 1,
- (event.clientY / window.innerHeight) * 2 + 1,
0
);
vector.unproject(camera);
WebGL three.js r.89
In r.58 this code works for me:
var planeZ = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0);
var mv = new THREE.Vector3(
(event.clientX / window.innerWidth) * 2 - 1,
-(event.clientY / window.innerHeight) * 2 + 1,
0.5 );
var raycaster = projector.pickingRay(mv, camera);
var pos = raycaster.ray.intersectPlane(planeZ);
console.log("x: " + pos.x + ", y: " + pos.y);
Below is an ES6 class I wrote based on WestLangley's reply, which works perfectly for me in THREE.js r77.
Note that it assumes your render viewport takes up your entire browser viewport.
class CProjectMousePosToXYPlaneHelper
{
  constructor()
  {
    this.m_vPos = new THREE.Vector3();
    this.m_vDir = new THREE.Vector3();
  }

  Compute( nMouseX, nMouseY, Camera, vOutPos )
  {
    let vPos = this.m_vPos;
    let vDir = this.m_vDir;

    vPos.set(
      -1.0 + 2.0 * nMouseX / window.innerWidth,
      -1.0 + 2.0 * nMouseY / window.innerHeight,
      0.5
    ).unproject( Camera );

    // Calculate a unit vector from the camera to the projected position
    vDir.copy( vPos ).sub( Camera.position ).normalize();

    // Project onto z=0
    let flDistance = -Camera.position.z / vDir.z;
    vOutPos.copy( Camera.position ).add( vDir.multiplyScalar( flDistance ) );
  }
}
You can use the class like this:
// Instantiate the helper and output pos once.
let Helper = new CProjectMousePosToXYPlaneHelper();
let vProjectedMousePos = new THREE.Vector3();
...
// In your event handler/tick function, do the projection.
Helper.Compute( e.clientX, e.clientY, Camera, vProjectedMousePos );
vProjectedMousePos now contains the projected mouse position on the z=0 plane.
To get the mouse coordinates of a 3D object, use projectVector:
var width = 640, height = 480;
var widthHalf = width / 2, heightHalf = height / 2;
var projector = new THREE.Projector();
var vector = projector.projectVector( object.matrixWorld.getPosition().clone(), camera );
vector.x = ( vector.x * widthHalf ) + widthHalf;
vector.y = - ( vector.y * heightHalf ) + heightHalf;
To get the three.js 3D coordinates that relate to specific mouse coordinates, use the opposite, unprojectVector:
var elem = renderer.domElement,
boundingRect = elem.getBoundingClientRect(),
x = (event.clientX - boundingRect.left) * (elem.width / boundingRect.width),
y = (event.clientY - boundingRect.top) * (elem.height / boundingRect.height);
var vector = new THREE.Vector3(
( x / WIDTH ) * 2 - 1,
- ( y / HEIGHT ) * 2 + 1,
0.5
);
projector.unprojectVector( vector, camera );
var ray = new THREE.Ray( camera.position, vector.subSelf( camera.position ).normalize() );
var intersects = ray.intersectObjects( scene.children );
There is a great example here. However, to use project vector, there must be an object where the user clicked. intersects will be an array of all objects at the location of the mouse, regardless of their depth.
I had a canvas that was smaller than my full window, and needed to determine the world coordinates of a click:
// get the position of a canvas event in world coords
function getWorldCoords(e) {
  // get x,y coords into canvas where click occurred
  var rect = canvas.getBoundingClientRect(),
      x = e.clientX - rect.left,
      y = e.clientY - rect.top;

  // convert x,y to clip space; coords from top left, clockwise:
  // (-1,1), (1,1), (-1,-1), (1, -1)
  var mouse = new THREE.Vector3();
  mouse.x = ((x / canvas.clientWidth) * 2) - 1;
  mouse.y = (-(y / canvas.clientHeight) * 2) + 1;
  mouse.z = 0.5; // set to z position of mesh objects

  // reverse projection from 3D to screen
  mouse.unproject(camera);

  // convert from point to a direction
  mouse.sub(camera.position).normalize();

  // scale the projected ray
  var distance = -camera.position.z / mouse.z,
      scaled = mouse.multiplyScalar(distance),
      coords = camera.position.clone().add(scaled);

  return coords;
}
var canvas = renderer.domElement;
canvas.addEventListener('click', getWorldCoords);
Here's an example. Click the same region of the donut before and after sliding and you'll find the coords remain constant (check the browser console):
// three.js boilerplate
var container = document.querySelector('body'),
    w = container.clientWidth,
    h = container.clientHeight,
    scene = new THREE.Scene(),
    camera = new THREE.PerspectiveCamera(75, w/h, 0.001, 100),
    controls = new THREE.MapControls(camera, container),
    renderConfig = {antialias: true, alpha: true},
    renderer = new THREE.WebGLRenderer(renderConfig);

controls.panSpeed = 0.4;
camera.position.set(0, 0, -10);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(w, h);
container.appendChild(renderer.domElement);

window.addEventListener('resize', function() {
  w = container.clientWidth;
  h = container.clientHeight;
  camera.aspect = w/h;
  camera.updateProjectionMatrix();
  renderer.setSize(w, h);
})

function render() {
  requestAnimationFrame(render);
  renderer.render(scene, camera);
  controls.update();
}

// draw some geometries
var geometry = new THREE.TorusGeometry(10, 3, 16, 100);
var material = new THREE.MeshNormalMaterial({color: 0xffff00});
var torus = new THREE.Mesh(geometry, material);
scene.add(torus);

// convert click coords to world space
// get the position of a canvas event in world coords
function getWorldCoords(e) {
  // get x,y coords into canvas where click occurred
  var rect = canvas.getBoundingClientRect(),
      x = e.clientX - rect.left,
      y = e.clientY - rect.top;

  // convert x,y to clip space; coords from top left, clockwise:
  // (-1,1), (1,1), (-1,-1), (1, -1)
  var mouse = new THREE.Vector3();
  mouse.x = ((x / canvas.clientWidth) * 2) - 1;
  mouse.y = (-(y / canvas.clientHeight) * 2) + 1;
  mouse.z = 0.0; // set to z position of mesh objects

  // reverse projection from 3D to screen
  mouse.unproject(camera);

  // convert from point to a direction
  mouse.sub(camera.position).normalize();

  // scale the projected ray
  var distance = -camera.position.z / mouse.z,
      scaled = mouse.multiplyScalar(distance),
      coords = camera.position.clone().add(scaled);

  console.log(mouse, coords.x, coords.y, coords.z);
}

var canvas = renderer.domElement;
canvas.addEventListener('click', getWorldCoords);

render();
html,
body {
  width: 100%;
  height: 100%;
  background: #000;
}
body {
  margin: 0;
  overflow: hidden;
}
canvas {
  width: 100%;
  height: 100%;
}
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/97/three.min.js'></script>
<script src=' https://threejs.org/examples/js/controls/MapControls.js'></script>
Three.js is slowly moving away from Projector.(Un)ProjectVector, and the solution with projector.pickingRay() doesn't work anymore. I just finished updating my own code, so the most recent working version should be as follows:
var rayVector = new THREE.Vector3(0, 0, 0.5);
var camera = new THREE.PerspectiveCamera(fov, this.offsetWidth / this.offsetHeight, 0.1, farFrustum);
var raycaster = new THREE.Raycaster();
var scene = new THREE.Scene();

//...

function intersectObjects(x, y, planeOnly) {
  rayVector.set((x / this.offsetWidth) * 2 - 1, 1 - (y / this.offsetHeight) * 2, 1).unproject(camera);
  raycaster.set(camera.position, rayVector.sub(camera.position).normalize());
  var intersects = raycaster.intersectObjects(scene.children);
  return intersects;
}
For those using #react-three/fiber (aka r3f and react-three-fiber), I found this discussion and its associated code samples by Matt Rossman helpful. In particular, many examples using the methods above are for simple orthographic views, not for when OrbitControls are in play.
Discussion: https://github.com/pmndrs/react-three-fiber/discussions/857
Simple example using Matt's technique: https://codesandbox.io/s/r3f-mouse-to-world-elh73?file=/src/index.js
More generalizable example: https://codesandbox.io/s/react-three-draggable-cxu37?file=/src/App.js
Here is my take on creating an ES6 class out of it, working with Three.js r83. The method of using a Raycaster comes from mrdoob here: Three.js Projector and Ray objects
export default class RaycasterHelper
{
  constructor (camera, scene) {
    this.camera = camera
    this.scene = scene
    this.rayCaster = new THREE.Raycaster()
    this.tapPos3D = new THREE.Vector3()
    this.getIntersectsFromTap = this.getIntersectsFromTap.bind(this)
  }

  // objects arg below needs to be an array of Three objects in the scene
  getIntersectsFromTap (tapX, tapY, objects) {
    this.tapPos3D.set((tapX / window.innerWidth) * 2 - 1,
      -(tapY / window.innerHeight) * 2 + 1, 0.5) // z = 0.5 important!
    this.tapPos3D.unproject(this.camera)
    this.rayCaster.set(this.camera.position,
      this.tapPos3D.sub(this.camera.position).normalize())
    return this.rayCaster.intersectObjects(objects, false)
  }
}
You would use it like this if you wanted to check against all your objects in the scene for hits. I made the recursive flag false above because for my uses I did not need it to be.
var helper = new RaycasterHelper(camera, scene)
var intersects = helper.getIntersectsFromTap(tapX, tapY, this.scene.children)
...
Although the provided answers can be useful in some scenarios, I can hardly imagine those scenarios (maybe games or animations), because they are not precise at all (guessing around the target's NDC z?). You can't use those methods to precisely unproject screen coordinates to world coordinates on a known target z-plane.
For example, if you draw a sphere by center (a known point in model space) and radius, you need to get the radius as the delta of unprojected mouse coordinates, but you can't! With all due respect, @WestLangley's method with targetZ doesn't work; it gives incorrect results (I can provide a jsfiddle if needed). Another example: you need to set the orbit controls target on a double click, but without "real" raycasting against scene objects (when you have nothing to pick).
The solution for me is to create a virtual plane through the target point along the z-axis and raycast against that plane afterward. The target point can be the current orbit controls target, or a vertex of the object you need to draw step by step in the existing model space, etc. This works perfectly and is simple (example in TypeScript):
screenToWorld(v2D: THREE.Vector2, camera: THREE.PerspectiveCamera = null, target: THREE.Vector3 = null): THREE.Vector3 {
  const self = this;
  const vNdc = self.toNdc(v2D);
  return self.ndcToWorld(vNdc, camera, target);
}

// get normalized device cartesian coordinates (NDC) with center (0, 0) and ranging from (-1, -1) to (1, 1)
toNdc(v: THREE.Vector2): THREE.Vector2 {
  const self = this;
  const canvasEl = self.renderers.WebGL.domElement;
  const bounds = canvasEl.getBoundingClientRect();

  let x = v.x - bounds.left;
  let y = v.y - bounds.top;
  x = (x / bounds.width) * 2 - 1;
  y = -(y / bounds.height) * 2 + 1;

  return new THREE.Vector2(x, y);
}

ndcToWorld(vNdc: THREE.Vector2, camera: THREE.PerspectiveCamera = null, target: THREE.Vector3 = null): THREE.Vector3 {
  const self = this;
  if (!camera) {
    camera = self.camera;
  }
  if (!target) {
    target = self.getTarget();
  }

  const position = camera.position.clone();
  const origin = self.scene.position.clone();
  const v3D = target.clone();

  self.raycaster.setFromCamera(vNdc, camera);

  const normal = new THREE.Vector3(0, 0, 1);
  const distance = normal.dot(origin.sub(v3D));
  const plane = new THREE.Plane(normal, distance);

  self.raycaster.ray.intersectPlane(plane, v3D);
  return v3D;
}
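A usage sketch I added (hedged: `viewer`, `canvasEl`, and `controls` stand in for the author's surrounding class and your own setup, which are assumed here):
// Sketch: set the orbit controls target from a double click,
// assuming `viewer` exposes screenToWorld() as defined above.
canvasEl.addEventListener('dblclick', (e) => {
  const worldPoint = viewer.screenToWorld(new THREE.Vector2(e.clientX, e.clientY));
  controls.target.copy(worldPoint);
  controls.update();
});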
