Molecule angles building - three.js

I am trying to build a CH4 molecule with three.js, but I am struggling to build the 109.5° angle:
methanum = function(x, y, z) {
molecule = new THREE.Object3D();
var startPosition = new THREE.Vector3( 0, 0, 0 );
molecule.add(atom(startPosition, "o"));
var secondPosition = new THREE.Vector3( -20, 10, 00 );
molecule.add(atom(secondPosition, "h"));
var angle = 109.5;
var matrix = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 0, 1, 0 ), angle * ( Math.PI / 180 ));
var thirdPosition = secondPosition.applyMatrix4( matrix );
molecule.add(atom(thirdPosition, "h"));
var fourthPosition = thirdPosition.applyMatrix4( matrix );
molecule.add(atom(thirdPosition, "h"));
molecule.position.set(x, y, z);
molecule.rotation.set(x, y, z);
scene.add( molecule );
}
Demo: https://dl.dropboxusercontent.com/u/6204711/3d/ch4.html
But my atoms are not uniformly distributed as in the drawing. Any ideas?

Well, there are three errors in your molecule code:
You place an oxygen at the center of the CH4 instead of a carbon.
When you add your fourth hydrogen, you pass thirdPosition even though you have just created fourthPosition.
You are rotating around the wrong axis when you place your third hydrogen. My hints are the following: first place your carbon, then move along the Z-axis and place your first hydrogen. Rotate by 109.5° around the X-axis and place your second hydrogen. Rotate the position of your second hydrogen by 120° around the Z-axis and place your third hydrogen. Finally, rotate the position of your third hydrogen by another 120° around the Z-axis and place your last hydrogen.
Here is the CH4 I tried:
methanum3 = function(x, y, z) {
molecule = new THREE.Object3D();
var startPosition = new THREE.Vector3( 0, 0, 0 );
molecule.add(atom(startPosition, "c"));
var axis = new THREE.AxisHelper( 50 );
axis.position.set( 0, 0, 0 );
molecule.add( axis );
var secondPosition = new THREE.Vector3( 0, 0, -40 );
molecule.add(atom(secondPosition, "h"));
var angle = 109.5;
var matrixX = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 1, 0, 0 ), angle * ( Math.PI / 180 ));
var thirdPosition = secondPosition.applyMatrix4( matrixX );
molecule.add(atom(thirdPosition, "h"));
var matrixZ = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 0, 0, 1 ), 120 * ( Math.PI / 180 ));
var fourthPosition = thirdPosition.applyMatrix4( matrixZ );
molecule.add(atom(fourthPosition, "h"));
var fifthPosition = fourthPosition.applyMatrix4( matrixZ );
molecule.add(atom(fifthPosition, "h"));
molecule.position.set(x, y, z);
//molecule.rotation.set(x, y, z);
scene.add( molecule );
}
//water(0,0,0);
//water(30,60,0);
methanum3(-30,60,0);
Explanation:
Let's call H1 one hydrogen and H2 another one. The given angle of 109.5° is defined in the plane spanned by the vectors CH1 and CH2.
Therefore, when you look along the normal of that plane, you can see the 109.5° angle (cf. the right part of the image below). BUT when you look along the normal of another plane, you get the projection of that angle onto that plane. In your case, when you look along the Z-axis, you see an angle of 120° (cf. the left part of the image below).
The two angles are different according to the direction of the camera.
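If you want to check this numerically, you can compare the 3D angle between two C-H bond vectors with the angle between their projections onto the XY plane (a small sketch reusing matrixX and matrixZ from the code above; THREE.Math.radToDeg is only there for readability and is called THREE.MathUtils.radToDeg in recent versions):
var h2 = new THREE.Vector3( 0, 0, -40 ).applyMatrix4( matrixX ); // second hydrogen
var h3 = h2.clone().applyMatrix4( matrixZ );                     // third hydrogen
console.log( THREE.Math.radToDeg( h2.angleTo( h3 ) ) );          // ~109.5, the real bond angle
var p2 = h2.clone().setZ( 0 );                                   // projections onto the XY plane
var p3 = h3.clone().setZ( 0 );
console.log( THREE.Math.radToDeg( p2.angleTo( p3 ) ) );          // ~120, the projected angle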
Hope this helps.

Related

Three.js, unexpected position shift when scaling object

I'm trying to create a zoom box. So far I have managed to translate the cursor position from local to world coordinates and create a box object around the cursor with the right UVs.
Here is the fiddle of my attempt: https://jsfiddle.net/2ynfedvk/2/
Without scaling, the box is perfectly centered around the cursor, but if you toggle the scaling checkbox to set the scale zoomMesh.scale.set(1.5, 1.5, 1), the box position shifts the further you move the cursor from the scene center.
Am I missing some CSS-like "transform-origin" setting for three.js to center the scale around the object, or is this even the right approach to get this kind of zoom effect?
I'm new to three.js and 3D in general, so thanks for any help.
When you scale your mesh by 1.5, you apply a transform matrix that scales the vertex values.
The issue comes from this change of the vertices. Vertices are in the local space of the mesh. When you set the top-left vertex of the square to, say, [10, 10, 0] and then apply .scale.set(1.5, 1.5, 1) to the mesh, that vertex's coordinate becomes [15, 15, 0]. The same goes for the other three vertices. That's why the center of the square ends up at 1.5 times the distance from the center of the picture instead of under the mouse pointer.
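To see this concretely, you can check where a local vertex position ends up in world space after scaling (a minimal sketch, not taken from the fiddle):
// a 20x20 plane: its top-right vertex sits at (10, 10, 0) in local space
const mesh = new THREE.Mesh( new THREE.PlaneGeometry( 20, 20 ), new THREE.MeshBasicMaterial() );
mesh.scale.set( 1.5, 1.5, 1 );
mesh.updateMatrixWorld( true );
// the same local point is now at (15, 15, 0) in world space:
// the square grew around its own origin, not around the cursor
console.log( mesh.localToWorld( new THREE.Vector3( 10, 10, 0 ) ) );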
So, an option is not to scale a mesh, but change the size of the square.
I changed your fiddle a bit, so maybe it will be more explanatory:
const
[width, height] = [500, 300],
canvas = document.querySelector('canvas'),
scaleCheckBox = document.querySelector('input')
;
console.log(scaleCheckBox)
canvas.width = width;
canvas.height = height;
const
scene = new THREE.Scene(),
renderer = new THREE.WebGLRenderer({canvas}),
camDistance = 5,
camFov = (2 * Math.atan( height / ( 2 * camDistance ) ) * ( 180 / Math.PI )),
camera = new THREE.PerspectiveCamera(camFov, width/height, 0.1, 1000 )
;
camera.position.z = camDistance;
const
texture = new THREE.TextureLoader().load( "https://picsum.photos/500/300" ),
imageMaterial = new THREE.MeshBasicMaterial( { map: texture , side : 0 } )
;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
const
planeGeometry = new THREE.PlaneGeometry( width, height ),
planeMesh = new THREE.Mesh( planeGeometry, imageMaterial )
;
const
zoomGeometry = new THREE.BufferGeometry(),
zoomMaterial = new THREE.MeshBasicMaterial( { map: texture , side : 0 } ),
zoomMesh = new THREE.Mesh( zoomGeometry, zoomMaterial )
;
zoomMaterial.color.set(0xff0000);
zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0
]), 3));
zoomGeometry.setIndex([
0, 1, 2,
2, 1, 3
]);
scene.add( planeMesh );
scene.add( zoomMesh );
var zoom = 1.;
function setZoomBox(e){
const
size = 50 * zoom,
x = e.clientX - (size/2),
y = -(e.clientY - height) - (size/2),
coords = [
x,
y,
x + size,
y + size
]
;
const [x1, y1, x2, y2] = [
coords[0] - (width/2),
coords[1] - (height/2),
coords[2] - (width/2),
coords[3] - (height/2)
];
zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
x1, y1, 0,
x2, y1, 0,
x1, y2, 0,
x2, y2, 0
]), 3));
const [u1, v1, u2, v2] = [
coords[0]/width,
coords[1]/height,
coords[2]/width,
coords[3]/height
]
zoomGeometry.setAttribute('uv',
new THREE.BufferAttribute(new Float32Array([
u1, v1,
u2, v1,
u1, v2,
u2, v2,
u1, v1,
u1, v2
]), 2));
}
function setScale(e){
//zoomMesh.scale.set(...(scaleCheckBox.checked ? [1.5, 1.5, 1] : [1, 1, 1]));
zoom = scaleCheckBox.checked ? 1.5 : 1 ;
}
function render(){
renderer.render(scene, camera);
requestAnimationFrame(render);
}
render();
canvas.addEventListener('mousemove', setZoomBox);
scaleCheckBox.addEventListener('change', setScale);
html, body {
margin: 0;
height: 100%;
}
body{
background: #333;
color: #FFF;
font: bold 16px arial;
}
canvas{
}
<script src="https://threejs.org/build/three.min.js"></script>
<canvas></canvas>
<div>Toggle scale <input type="checkbox" /></div>
Thanks for the answer; it's not quite what I was looking for (I want to not only resize the square but also zoom in on the image), but you pointed me in the right direction.
Like you said, the position coordinates shift with the scale, so I have to recalculate the new position relative to the scale.
I added these new lines, with new scale and offset variables:
if(scaleCheckBox.checked){
const offset = scale - 1;
zoomMesh.position.set(
-((x1 * offset) + (size*scale)/2) - (size/2),
-((y1 * offset) + (size*scale)/2) - (size/2),
0
);
}
Here is the working fiddle : https://jsfiddle.net/dc9f5v0m/
It's a bit messy, with a lot of recalculation (especially to keep the square centered around the cursor), but it gets the job done, and the zoom effect can be achieved with any shape, not only a square.
Thanks again for your help.
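As an aside on the "transform-origin" part of the question: the usual three.js equivalent is to put the mesh under a pivot Object3D placed at the desired origin and scale the pivot instead (a hedged sketch, not taken from either post above; cursorX/cursorY stand for the cursor's world coordinates):
var pivot = new THREE.Object3D();
pivot.position.set( cursorX, cursorY, 0 );      // the point to scale around
scene.add( pivot );
pivot.add( zoomMesh );                          // zoomMesh is now expressed relative to the pivot
zoomMesh.position.set( -cursorX, -cursorY, 0 ); // keep its world position unchanged
pivot.scale.set( 1.5, 1.5, 1 );                 // grows the square around the cursor point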

Three js raycaster WITHOUT camera

I can only seem to find examples that use the raycaster with the camera, but none that simply cast a ray from point A to point B.
I have a working raycaster: it hits my helpers, lines, etc., but it does not seem to recognize my sphere.
My first thought was that my points were off, so I decided to create a line from my pointA to my pointB with a direction, like so:
var pointA = new Vector3( 50, 0, 0 );
var direction = new Vector3( 0, 1, 0 );
direction.normalize();
var distance = 100;
var pointB = new Vector3();
pointB.addVectors ( pointA, direction.multiplyScalar( distance ) );
var geometry = new Geometry();
geometry.vertices.push( pointA );
geometry.vertices.push( pointB );
var material = new LineBasicMaterial( { color : 0xff0000 } );
var line = new Line( geometry, material );
This shows a line from my point (50, 0, 0) to (50, 100, 0), right through my sphere, which is at point (50, 50, 0), so my pointA and direction values are correct.
Next I add a raycaster.
To avoid conflicts with any side effects, I recreated my points here:
var raycaster = new Raycaster(new Vector3( 50, 0, 0 ), new Vector3( 0, 1, 0 ).normalize());
var intersects = raycaster.intersectObject(target);
console.log(intersects);
Seems pretty straightforward to me. I also tried raycaster.intersectObjects(scene.children), but it returns the lines, helpers, etc., and not my sphere.
What am I doing wrong? I am surely missing something here.
IMG of the line and the sphere:
What you see is explained in the following github issue:
https://github.com/mrdoob/three.js/issues/11449
The problem is that the ray emitted from THREE.Raycaster does not directly hit a face but its vertex which results in no intersection.
There are several workarounds to solve this issue e.g. slightly shift the geometry or the ray. For your case:
var raycaster = new THREE.Raycaster( new THREE.Vector3( 50, 0, 0 ), new THREE.Vector3( 0, 1, 0.01 ).normalize() );
However, a better solution is to fix the engine and make the test more robust.
Demo: https://jsfiddle.net/kzwmoug2/3/
three.js R106
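As a side note on the original question (casting from point A to point B rather than from the camera): you can limit the ray to that segment by passing the segment length as the Raycaster's far parameter. A small sketch, with sphere standing in for the mesh to test:
var pointA = new THREE.Vector3( 50, 0, 0 );
var pointB = new THREE.Vector3( 50, 100, 0 );
var direction = new THREE.Vector3().subVectors( pointB, pointA ).normalize();
var length = pointA.distanceTo( pointB );
// near = 0 and far = length restrict hits to the A-B segment
var raycaster = new THREE.Raycaster( pointA, direction, 0, length );
var intersects = raycaster.intersectObject( sphere );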

Solar system using Three js animation

I am creating a solar system using three.js. I want to display some details when clicking on any of the objects, so I used the object-picking approach and tried to get the objects that intersect with the click. But I am not getting any intersected objects: when I try to print the objects in the intersects array I get "undefined", and the length of the intersects array is 0.
function mous(event) {
var vector = new THREE.Vector3(( event.clientX / window.innerWidth ) * 2 - 1, -( event.clientY / window.innerHeight ) * 2 + 1, 0.5);
vector = vector.unproject(camera);
raycaster = new THREE.Raycaster(camera.position, vector);
var intersects = raycaster.intersectObjects([orbitDir1,orbitDir2,orbitDir3,orbitDir4,orbitDir5]);
alert(intersects[0]);
alert(intersects.length);
}
And here is the code for orbitDir.
geometry = new THREE.CircleGeometry(2.3, 100);
geometry.vertices.shift();
circle = new THREE.Line(
geometry,
new THREE.LineDashedMaterial({color: 'red'})
);
circle.rotation.x = Math.PI * 0.5 ;
tex = new THREE.ImageUtils.loadTexture("Mercury.jpeg") ;
planet = new THREE.Mesh(
new THREE.SphereBufferGeometry(0.3, 32, 32),
new THREE.MeshPhongMaterial({ map : tex})
);
planet.position.set(2.3, 0, 0);
scene.add(planet);
orbit = new THREE.Group();
orbit.add(circle);
orbit.add(planet);
orbitDir = new THREE.Group();
orbitDir.add(orbit);
//orbitDir.position.x += 0.1 ;
orbitDir.position.y += 4 ;
orbitDir.position.z += 5 ;
orbitDir.rotation.x +=2.3 ;
scene.add(orbitDir);
The code for »unprojection« and raycasting looks fine, so I guess that the x and y values might not be right. You are using clientX and clientY, which are the mouse coordinates relative to the upper left corner of the window. Those are only valid if your <canvas> fills the whole page. If that is not the case, make sure to use mouse coordinates relative to the upper left corner of the <canvas>.
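A minimal sketch of that correction (assuming renderer.domElement is your canvas; not part of the original answer):
function getNdc( event ) {
  var rect = renderer.domElement.getBoundingClientRect();
  return new THREE.Vector2(
    ( ( event.clientX - rect.left ) / rect.width ) * 2 - 1,
    - ( ( event.clientY - rect.top ) / rect.height ) * 2 + 1
  );
}
// with a recent three.js you could then simply do:
// raycaster.setFromCamera( getNdc( event ), camera );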
I think you can do the raycasting like that:
raycaster.intersectObjects(scene.children, true) // scan the whole scene recursively
docs
Probably the answer you are looking for is here
projector.unprojectVector( vector, camera.position );

three.js: rotational matrix to place THREE.group along new axis

(Please also refer to my illustration of the problem: http://i.stack.imgur.com/SfwwP.png)
problem description and ideas
I am creating several objects in the standard XYZ coordinate system.
Those are added to a THREE.group.
Please think of the group as a wall with several frames and images hung on it.
I want to create my frame objects with e.g. dimensions of (40, 20, 0.5), so I get a rather flat, landscape-format frame/artwork. I create and place several of those, then I add them to the group, which I want to be able to rotate freely in the world along two vectors, start and end.
The problem I am struggling with is how to rotate and position the group from a given vector start to a given vector end.
So far I tried to solve it with a THREE.Matrix4().lookAt :
var group = new THREE.Group();
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
group.matrixAutoUpdate = false;
var object1 = new THREE.Mesh(new THREE.BoxGeometry(0.5, 20, 40), mat);
// etc. -> notice the swapping of X and Z coordinates I have to do.
group.add(object1);
group.applyMatrix(matrix);
You can see the example on jsfiddle:
http://jsfiddle.net/y6b9Lumw/1/
If you open the jsfiddle, you can see that the objects are not placed along the line from start to end, although they are placed along the group's internal X-axis like: addBox(new THREE.Vector3( i * 30, 0 , 0 ));
Full code:
<html>
<head>
<title>testing a rotation matrix</title>
<style>body { margin: 0; } canvas { width: 100%; height: 100% } </style>
</head> <body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r70/three.min.js"></script>
<script>
var scene, camera, renderer, light, matrix;
var startVec, endVec;
var boxes;
function addBox(v) {
var boxmesh;
var boxgeom = new THREE.BoxGeometry( 15, 5, 1 );
var boxmaterial = new THREE.MeshLambertMaterial( {color: 0xdd2222} );
boxmesh = new THREE.Mesh( boxgeom, boxmaterial );
//boxmesh.matrix.makeRotationY(Math.PI / 2);
boxmesh.matrix.setPosition(v);
boxmesh.matrixAutoUpdate = false;
boxes.add(boxmesh);
}
function init() {
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x222222 );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
camera.position.set( -20, 30, 300 );
var light = new THREE.PointLight (0xCCCCCC, 0.5 );
scene.add(light);
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
boxes = new THREE.Group();
for (var i = -100; i < 100; i++) {
addBox(new THREE.Vector3( i * 30, 0 , 0 ));
}
boxes.matrixAutoUpdate = false;
boxes.applyMatrix(matrix);
scene.add(boxes);
var linegeometry = new THREE.Geometry();
linegeometry.vertices.push( startVec, endVec);
var line = new THREE.Line(linegeometry, new THREE.LineBasicMaterial({color: 0x33eeef}));
scene.add(line);
render();
}
function render(){
requestAnimationFrame(render);
renderer.render(scene, camera);
}
init();
</script>
</body> </html>
This only works nicely to some extent, because the look vector is oriented along the Z-axis (I think it is (0, 0, 1)), so unfortunately the objects inside the group get rotated like that as well.
This is actually what you would expect from a lookAt() rotational transformation. It's not what I would like to have, though, as it places all the children in the group along their Z-axis instead of their X-axis.
In order to have things look properly, I had to initialize my group's children with X and Z swapped.
Instead of:
var object1 = new THREE.BoxGeometry( 40, 20, 0.5 );
I have to do:
var object1 = new THREE.BoxGeometry( 0.5, 20, 40 );
The same goes for translations: if I want to translate objects in the group along the X-axis, I have to use the Z-axis, as that is the look vector along which the whole wall is oriented by the matrix transformation.
my question is:
How does my matrix have to be constructed to accomplish what I want: create the objects normally, and then have their X-axis placed along the line from vector start to vector end, like placing artworks on a wall that can be moved around?
I thought about creating a matrix whose X-axis is end.sub(start), i.e. the vector from start to end. Might this be what I need to do? If so, how would I construct it?
problem illustration with an image
I tried to illustrate my situation in two images: one being the wall on its own, and one being the wall inside the world with the same objects attached to it (see top of the post).
In the first figure you see the local coordinate system of the group, with two added children, one translated along X.
In the second figure, you can see the same local system inside the world, the way I would like it to be. The green axes are the world axes. The start and end vectors are shown as well. You can see that both boxes are properly placed along that line.
I would like to answer my own question, disregarding the idea of manipulating the matrix myself. Thanks to @WestLangley I adapted my approach and now set the group's quaternion via .setFromUnitVectors.
The rotation is derived from the rotation of the X-axis onto the direction vector from start to end, as explained in the three.js documentation:
"Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo."
(http://threejs.org/docs/#Reference/Math/Quaternion.setFromUnitVectors)
Below is the relevant part of my solution:
// define the starting and ending vector of the wall
start = new THREE.Vector3( -130, -40, 300 );
end = new THREE.Vector3( 60, 20, -100 );
// dir is the direction from start to end, normalized
var dir = new THREE.Vector3().copy(end).sub(start).normalize();
// position wall in the middle of start and end
var middle = new THREE.Vector3().copy(start).lerp(end, 0.5);
wall.position.copy(middle);
// rotate wall by applying rotation from X-Axis to dir
wall.quaternion.setFromUnitVectors( new THREE.Vector3(1, 0, 0), dir );
The result can be seen here: http://jsfiddle.net/L9dmqqvy/1/
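For completeness, the basis idea from the question (an X-axis equal to end.sub(start)) can also be built explicitly with Matrix4.makeBasis. A hedged sketch, not the approach used above; it reuses start, end and wall from the snippet and assumes the wall direction is never parallel to the chosen up vector:
var xAxis = new THREE.Vector3().subVectors( end, start ).normalize(); // wall direction
var up = new THREE.Vector3( 0, 1, 0 );
var zAxis = new THREE.Vector3().crossVectors( xAxis, up ).normalize();
var yAxis = new THREE.Vector3().crossVectors( zAxis, xAxis );         // already unit length
var basis = new THREE.Matrix4().makeBasis( xAxis, yAxis, zAxis );
wall.quaternion.setFromRotationMatrix( basis );                       // keeps the wall's local Y close to world up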

Mouse / Canvas X, Y to Three.js World X, Y, Z

I've searched around for an example that matches my use case but cannot find one. I'm trying to convert screen mouse co-ordinates into 3D world co-ordinates taking into account the camera.
Solutions I've found all do ray intersection to achieve object picking.
What I am trying to do is position the center of a Three.js object at the co-ordinates that the mouse is currently "over".
My camera is at x: 0, y: 0, z: 500 (although it will move during the simulation) and all my objects are at z = 0 with varying x and y values, so I need to know the world X, Y assuming z = 0 for the object that will follow the mouse position.
This question looks like a similar issue but doesn't have a solution: Getting coordinates of the mouse in relation to 3D space in THREE.js
Given the mouse position on screen with a range of "top-left = 0, 0 | bottom-right = window.innerWidth, window.innerHeight", can anyone provide a solution to move a Three.js object to the mouse co-ordinates along z = 0?
You do not need to have any objects in your scene to do this.
You already know the camera position.
Using vector.unproject( camera ) you can get a ray pointing in the direction you want.
You just need to extend that ray, from the camera position, until the z-coordinate of the tip of the ray is zero.
You can do that like so:
var vec = new THREE.Vector3(); // create once and reuse
var pos = new THREE.Vector3(); // create once and reuse
vec.set(
( event.clientX / window.innerWidth ) * 2 - 1,
- ( event.clientY / window.innerHeight ) * 2 + 1,
0.5 );
vec.unproject( camera );
vec.sub( camera.position ).normalize();
var distance = - camera.position.z / vec.z;
pos.copy( camera.position ).add( vec.multiplyScalar( distance ) );
The variable pos is the position of the point in 3D space, "under the mouse", and in the plane z=0.
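For the original goal (an object following the mouse in the z = 0 plane), the same computation can be dropped into a mousemove handler. A sketch, with follower standing in for your object:
window.addEventListener( 'mousemove', function ( event ) {
  vec.set(
    ( event.clientX / window.innerWidth ) * 2 - 1,
    - ( event.clientY / window.innerHeight ) * 2 + 1,
    0.5 );
  vec.unproject( camera );
  vec.sub( camera.position ).normalize();
  var distance = - camera.position.z / vec.z;
  follower.position.copy( camera.position ).add( vec.multiplyScalar( distance ) );
} );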
EDIT: If you need the point "under the mouse" and in the plane z = targetZ, replace the distance computation with:
var distance = ( targetZ - camera.position.z ) / vec.z;
three.js r.98
This worked for me when using an orthographic camera
let vector = new THREE.Vector3();
vector.set(
(event.clientX / window.innerWidth) * 2 - 1,
- (event.clientY / window.innerHeight) * 2 + 1,
0
);
vector.unproject(camera);
WebGL three.js r.89
In r.58 this code works for me:
var planeZ = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0);
var mv = new THREE.Vector3(
(event.clientX / window.innerWidth) * 2 - 1,
-(event.clientY / window.innerHeight) * 2 + 1,
0.5 );
var raycaster = projector.pickingRay(mv, camera);
var pos = raycaster.ray.intersectPlane(planeZ);
console.log("x: " + pos.x + ", y: " + pos.y);
Below is an ES6 class I wrote based on WestLangley's reply, which works perfectly for me in THREE.js r77.
Note that it assumes your render viewport takes up your entire browser viewport.
class CProjectMousePosToXYPlaneHelper
{
constructor()
{
this.m_vPos = new THREE.Vector3();
this.m_vDir = new THREE.Vector3();
}
Compute( nMouseX, nMouseY, Camera, vOutPos )
{
let vPos = this.m_vPos;
let vDir = this.m_vDir;
vPos.set(
-1.0 + 2.0 * nMouseX / window.innerWidth,
-1.0 + 2.0 * nMouseY / window.innerHeight,
0.5
).unproject( Camera );
// Calculate a unit vector from the camera to the projected position
vDir.copy( vPos ).sub( Camera.position ).normalize();
// Project onto z=0
let flDistance = -Camera.position.z / vDir.z;
vOutPos.copy( Camera.position ).add( vDir.multiplyScalar( flDistance ) );
}
}
You can use the class like this:
// Instantiate the helper and output pos once.
let Helper = new CProjectMousePosToXYPlaneHelper();
let vProjectedMousePos = new THREE.Vector3();
...
// In your event handler/tick function, do the projection.
Helper.Compute( e.clientX, e.clientY, Camera, vProjectedMousePos );
vProjectedMousePos now contains the projected mouse position on the z=0 plane.
to get the mouse coordinates of a 3d object use projectVector:
var width = 640, height = 480;
var widthHalf = width / 2, heightHalf = height / 2;
var projector = new THREE.Projector();
var vector = projector.projectVector( object.matrixWorld.getPosition().clone(), camera );
vector.x = ( vector.x * widthHalf ) + widthHalf;
vector.y = - ( vector.y * heightHalf ) + heightHalf;
to get the three.js 3D coordinates that relate to specific mouse coordinates, use the opposite, unprojectVector:
var elem = renderer.domElement,
boundingRect = elem.getBoundingClientRect(),
x = (event.clientX - boundingRect.left) * (elem.width / boundingRect.width),
y = (event.clientY - boundingRect.top) * (elem.height / boundingRect.height);
var vector = new THREE.Vector3(
( x / WIDTH ) * 2 - 1,
- ( y / HEIGHT ) * 2 + 1,
0.5
);
projector.unprojectVector( vector, camera );
var ray = new THREE.Ray( camera.position, vector.subSelf( camera.position ).normalize() );
var intersects = ray.intersectObjects( scene.children );
There is a great example here. However, to use project vector, there must be an object where the user clicked. intersects will be an array of all objects at the location of the mouse, regardless of their depth.
I had a canvas that was smaller than my full window, and needed to determine the world coordinates of a click:
// get the position of a canvas event in world coords
function getWorldCoords(e) {
// get x,y coords into canvas where click occurred
var rect = canvas.getBoundingClientRect(),
x = e.clientX - rect.left,
y = e.clientY - rect.top;
// convert x,y to clip space; coords from top left, clockwise:
// (-1,1), (1,1), (-1,-1), (1, -1)
var mouse = new THREE.Vector3();
mouse.x = ( (x / canvas.clientWidth ) * 2) - 1;
mouse.y = (-(y / canvas.clientHeight) * 2) + 1;
mouse.z = 0.5; // set to z position of mesh objects
// reverse projection from 3D to screen
mouse.unproject(camera);
// convert from point to a direction
mouse.sub(camera.position).normalize();
// scale the projected ray
var distance = -camera.position.z / mouse.z,
scaled = mouse.multiplyScalar(distance),
coords = camera.position.clone().add(scaled);
return coords;
}
var canvas = renderer.domElement;
canvas.addEventListener('click', getWorldCoords);
Here's an example. Click the same region of the donut before and after sliding and you'll find the coords remain constant (check the browser console):
// three.js boilerplate
var container = document.querySelector('body'),
w = container.clientWidth,
h = container.clientHeight,
scene = new THREE.Scene(),
camera = new THREE.PerspectiveCamera(75, w/h, 0.001, 100),
controls = new THREE.MapControls(camera, container),
renderConfig = {antialias: true, alpha: true},
renderer = new THREE.WebGLRenderer(renderConfig);
controls.panSpeed = 0.4;
camera.position.set(0, 0, -10);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(w, h);
container.appendChild(renderer.domElement);
window.addEventListener('resize', function() {
w = container.clientWidth;
h = container.clientHeight;
camera.aspect = w/h;
camera.updateProjectionMatrix();
renderer.setSize(w, h);
})
function render() {
requestAnimationFrame(render);
renderer.render(scene, camera);
controls.update();
}
// draw some geometries
var geometry = new THREE.TorusGeometry( 10, 3, 16, 100, );
var material = new THREE.MeshNormalMaterial( { color: 0xffff00, } );
var torus = new THREE.Mesh( geometry, material, );
scene.add( torus );
// convert click coords to world space
// get the position of a canvas event in world coords
function getWorldCoords(e) {
// get x,y coords into canvas where click occurred
var rect = canvas.getBoundingClientRect(),
x = e.clientX - rect.left,
y = e.clientY - rect.top;
// convert x,y to clip space; coords from top left, clockwise:
// (-1,1), (1,1), (-1,-1), (1, -1)
var mouse = new THREE.Vector3();
mouse.x = ( (x / canvas.clientWidth ) * 2) - 1;
mouse.y = (-(y / canvas.clientHeight) * 2) + 1;
mouse.z = 0.0; // set to z position of mesh objects
// reverse projection from 3D to screen
mouse.unproject(camera);
// convert from point to a direction
mouse.sub(camera.position).normalize();
// scale the projected ray
var distance = -camera.position.z / mouse.z,
scaled = mouse.multiplyScalar(distance),
coords = camera.position.clone().add(scaled);
console.log(mouse, coords.x, coords.y, coords.z);
}
var canvas = renderer.domElement;
canvas.addEventListener('click', getWorldCoords);
render();
html,
body {
width: 100%;
height: 100%;
background: #000;
}
body {
margin: 0;
overflow: hidden;
}
canvas {
width: 100%;
height: 100%;
}
<script src='https://cdnjs.cloudflare.com/ajax/libs/three.js/97/three.min.js'></script>
<script src=' https://threejs.org/examples/js/controls/MapControls.js'></script>
Three.js is slowly moving away from Projector.(un)projectVector, and the solution with projector.pickingRay() doesn't work anymore. I just finished updating my own code, so the most recent working version should be as follows:
var rayVector = new THREE.Vector3(0, 0, 0.5);
var camera = new THREE.PerspectiveCamera(fov,this.offsetWidth/this.offsetHeight,0.1,farFrustum);
var raycaster = new THREE.Raycaster();
var scene = new THREE.Scene();
//...
function intersectObjects(x, y, planeOnly) {
rayVector.set(((x/this.offsetWidth)*2-1), (1-(y/this.offsetHeight)*2), 1).unproject(camera);
raycaster.set(camera.position, rayVector.sub(camera.position ).normalize());
var intersects = raycaster.intersectObjects(scene.children);
return intersects;
}
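Since the function reads this.offsetWidth / this.offsetHeight, it apparently expects this to be the canvas element. A hedged usage sketch that binds it explicitly (renderer is assumed):
renderer.domElement.addEventListener( 'mousedown', function ( e ) {
  var rect = this.getBoundingClientRect();
  // bind 'this' to the canvas so this.offsetWidth / this.offsetHeight resolve inside intersectObjects
  var hits = intersectObjects.call( this, e.clientX - rect.left, e.clientY - rect.top );
  if ( hits.length > 0 ) console.log( hits[ 0 ].object, hits[ 0 ].point );
} );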
For those using @react-three/fiber (aka r3f and react-three-fiber), I found this discussion and its associated code samples by Matt Rossman helpful. In particular, many examples using the methods above are for simple orthographic views, not for when OrbitControls are in play.
Discussion: https://github.com/pmndrs/react-three-fiber/discussions/857
Simple example using Matt's technique: https://codesandbox.io/s/r3f-mouse-to-world-elh73?file=/src/index.js
More generalizable example: https://codesandbox.io/s/react-three-draggable-cxu37?file=/src/App.js
Here is my take on creating an ES6 class out of it, working with three.js r83. The method of using the raycaster comes from mrdoob here: Three.js Projector and Ray objects
export default class RaycasterHelper
{
constructor (camera, scene) {
this.camera = camera
this.scene = scene
this.rayCaster = new THREE.Raycaster()
this.tapPos3D = new THREE.Vector3()
this.getIntersectsFromTap = this.getIntersectsFromTap.bind(this)
}
// objects arg below needs to be an array of Three objects in the scene
getIntersectsFromTap (tapX, tapY, objects) {
this.tapPos3D.set((tapX / window.innerWidth) * 2 - 1, -(tapY /
window.innerHeight) * 2 + 1, 0.5) // z = 0.5 important!
this.tapPos3D.unproject(this.camera)
this.rayCaster.set(this.camera.position,
this.tapPos3D.sub(this.camera.position).normalize())
return this.rayCaster.intersectObjects(objects, false)
}
}
You would use it like this if you wanted to check against all your objects in the scene for hits. I made the recursive flag false above because for my uses I did not need it to be.
var helper = new RaycasterHelper(camera, scene)
var intersects = helper.getIntersectsFromTap(tapX, tapY,
this.scene.children)
...
Although the provided answers can be useful in some scenarios, I can hardly imagine those scenarios (maybe games or animations), because they are not precise at all (guessing around the target's NDC z?). You can't use those methods to unproject screen coordinates to world coordinates even when you know the target z-plane.
For example, if you draw a sphere by center (a known point in model space) and radius, you need to get the radius as the delta of unprojected mouse coordinates, but you can't! With all due respect, @WestLangley's method with targetZ doesn't work; it gives incorrect results (I can provide a jsfiddle if needed). Another example: you need to set the orbit controls target on mouse double-click, but without "real" raycasting against scene objects (when you have nothing to pick).
The solution for me is to just create a virtual plane at the target point, perpendicular to the z-axis, and use raycasting against this plane afterwards. The target point can be the current orbit controls target, or a vertex of the object you need to draw step by step in the existing model space, etc. This works perfectly and it is simple (example in TypeScript):
screenToWorld(v2D: THREE.Vector2, camera: THREE.PerspectiveCamera = null, target: THREE.Vector3 = null): THREE.Vector3 {
const self = this;
const vNdc = self.toNdc(v2D);
return self.ndcToWorld(vNdc, camera, target);
}
//get normalized device cartesian coordinates (NDC) with center (0, 0) and ranging from (-1, -1) to (1, 1)
toNdc(v: THREE.Vector2): THREE.Vector2 {
const self = this;
const canvasEl = self.renderers.WebGL.domElement;
const bounds = canvasEl.getBoundingClientRect();
let x = v.x - bounds.left;
let y = v.y - bounds.top;
x = (x / bounds.width) * 2 - 1;
y = - (y / bounds.height) * 2 + 1;
return new THREE.Vector2(x, y);
}
ndcToWorld(vNdc: THREE.Vector2, camera: THREE.PerspectiveCamera = null, target: THREE.Vector3 = null): THREE.Vector3 {
const self = this;
if (!camera) {
camera = self.camera;
}
if (!target) {
target = self.getTarget();
}
const position = camera.position.clone();
const origin = self.scene.position.clone();
const v3D = target.clone();
self.raycaster.setFromCamera(vNdc, camera);
const normal = new THREE.Vector3(0, 0, 1);
const distance = normal.dot(origin.sub(v3D));
const plane = new THREE.Plane(normal, distance);
self.raycaster.ray.intersectPlane(plane, v3D);
return v3D;
}
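Typical usage might be a double-click handler that re-targets the controls, as suggested above. A plain-JS sketch where helper is an object exposing the methods above and controls are e.g. OrbitControls (both assumptions):
renderer.domElement.addEventListener( 'dblclick', function ( e ) {
  var worldPoint = helper.screenToWorld( new THREE.Vector2( e.clientX, e.clientY ) );
  controls.target.copy( worldPoint );
  controls.update();
} );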
