THREE.js: convert one world coordinate unit to screen pixels - three.js

For example, I have a unit cube. When there is no scaling, I want to get one coordinate unit represented by how many screen pixels.
When I zoom in, then this one coordinate unit is represented by how many screen pixels?

I'm not entirely sure if I understood your question, but to get the distance between any two THREE vectors you can do this:
const distanceAB = new THREE.Vector3(1,1,1).sub( new THREE.Vector3(2,2,2) ).length()
Your unit cube would have vertices like (-1,-1,-1) and (1,1,1) among others (or actually ±0.5 for a true unit cube). Either way you need to obtain these values (easier with THREE.Geometry than with BufferGeometry).
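With BufferGeometry the positions live in one flat typed array rather than in a list of Vector3s; a minimal sketch of pulling vertex i out of such an array (in three.js the array would come from geometry.attributes.position.array, names here are illustrative):

```javascript
// Read vertex i from a BufferGeometry-style flat position array
// (x, y, z triples stored consecutively in a Float32Array).
function getVertex(positions, i) {
  return {
    x: positions[3 * i],
    y: positions[3 * i + 1],
    z: positions[3 * i + 2],
  };
}
```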
Then you project these vertices
const vertexA = new THREE.Vector3() // set this from cube
const vertexB = new THREE.Vector3()
const screenSpaceVector = new THREE.Vector3().subVectors(vertexA.project(myCamera),vertexB.project(myCamera))
The result is now in something called NDC which is a cube going from -1 to 1. To normalize it:
screenSpaceVector.multiplyScalar(0.5).add(new THREE.Vector3(0.5,0.5,0.5))
Finally to figure out how many pixels this is
screenSpaceVector.x *= renderer.getSize().width
screenSpaceVector.y *= renderer.getSize().height
screenSpaceVector.z = 0
const pixelLength = screenSpaceVector.length()
I think that should do the trick.
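One observation about the steps above: when you take the difference of two projected points, the +0.5 normalization offset cancels, so only the half-viewport scaling matters. A plain-JS sketch of that final step (assumes you already have the two NDC points from .project()):

```javascript
// Pixel distance between two points given in NDC (-1..1 on each axis),
// for a viewport of `width` x `height` pixels. The +0.5 offset used to
// normalize a single NDC point cancels out in the difference.
function ndcDistanceInPixels(a, b, width, height) {
  const dx = (a.x - b.x) * 0.5 * width;
  const dy = (a.y - b.y) * 0.5 * height;
  return Math.hypot(dx, dy); // ignore z, as above
}
```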

Related

PIXI.js - Canvas Coordinate to Container Coordinate

I have initiated a PIXI js canvas:
g_App = new PIXI.Application(800, 600, { backgroundColor: 0x1099bb });
Set up a container:
container = new PIXI.Container();
g_App.stage.addChild(container);
Put a background texture (2000x2000) into the container:
var texture = PIXI.Texture.fromImage('picBottom.png');
var back = new PIXI.Sprite(texture);
container.addChild(back);
Set the global:
var g_Container = container;
I do various pivot points and rotations on container and canvas stage element:
// Set the focus point of the container
g_App.stage.x = Math.floor(400);
g_App.stage.y = Math.floor(500); // Note this one is not central
g_Container.pivot.set(1000, 1000);
g_Container.rotation = 1.5; // radians
Now I need to be able to convert a canvas pixel to the pixel on the background texture.
g_Container has an element transform which in turn has several elements localTransform, pivot, position, scale ands skew. Similarly g_App.stage has the same transform element.
In maths this is simple: you have a vector point and do matrix operations on it. To go back the other way, you find the inverses of those matrices and multiply in reverse order.
So what do I do here in pixi.js?
How do I convert a pixel on the canvas and see what pixel it is on the background container?
Note: The following is written using the USA convention for matrices: row vectors on the left, multiplied by the matrix on the right. (Us pesky Brits in the UK do the opposite: column vectors on the right, multiplied by the matrix on the left. This means UK and USA matrices for the same job will look slightly different.)
Now I have confused you all, on with the answer.
g_Container.transform.localTransform - this matrix takes the world coords to the scaled/transposed/rotated COORDS
g_App.stage.transform.localTransform - this matrix takes the rotated world coords and outputs screen (or more accurately) html canvas coords
So for example the Container matrix is:
MatContainer = [g_Container.transform.localTransform.a, g_Container.transform.localTransform.b, 0]
[g_Container.transform.localTransform.c, g_Container.transform.localTransform.d, 0]
[g_Container.transform.localTransform.tx, g_Container.transform.localTransform.ty, 1]
and the rotated container matrix to screen is:
MatToScreen = [g_App.stage.transform.localTransform.a, g_App.stage.transform.localTransform.b, 0]
[g_App.stage.transform.localTransform.c, g_App.stage.transform.localTransform.d, 0]
[g_App.stage.transform.localTransform.tx, g_App.stage.transform.localTransform.ty, 1]
So to get from World Coordinates to Screen Coordinates (noting our vector will be a row on the left, so the first operation matrix that acts first on the World coordinates must also be on the left), we would need to multiply the vector by:
MatAll = MatContainer * MatToScreen
So if you have a world coordinate vector vectWorld = [worldX, worldY, 1.0] (I'll explain the 1.0 at the end), then to get to the screen coords you would do the following:
vectScreen = vectWorld * MatAll
So to go back from screen coords to world coords, we first need to calculate the inverse matrix of MatAll; call it invMatAll. (There are loads of places that tell you how to do this, so I will not do it here.)
So if we have screen (canvas) coordinates screenX and screenY, we need to create a vector vectScreen = [screenX, screenY, 1.0] (again I will explain the 1.0 later), then to get to world coordinates worldX and worldY we do:
vectWorld = vectScreen * invMatAll
And that is it.
So what about the 1.0?
In a 2D system you can do rotations and scaling with 2x2 matrices. Unfortunately you cannot do a 2D translation with a 2x2 matrix, so you need 3x3 matrices to fully describe all 2D scaling, rotations and translations. This means you need to make your vector 3D as well, with a 1.0 in the third position, in order to do the translations properly. That third component remains 1.0 after any of these matrix operations.
Note: If we were working in a 3D system we would need 4x4 matrices and put a dummy 1.0 in our 4D vectors for exactly the same reasons.
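A minimal plain-JS sketch of the row-vector convention described above (the third homogeneous component is implicit; the inverse assumes a non-zero determinant):

```javascript
// Apply a matrix with rows [a,b,0], [c,d,0], [tx,ty,1] to the
// row vector [x, y, 1]: v' = v * M.
function applyRowVector(m, x, y) {
  return { x: x * m.a + y * m.c + m.tx, y: x * m.b + y * m.d + m.ty };
}

// Invert such an affine matrix, so that vectWorld = vectScreen * invMatAll.
function invertAffine(m) {
  const det = m.a * m.d - m.b * m.c; // assumed non-zero
  return {
    a: m.d / det,  b: -m.b / det,
    c: -m.c / det, d: m.a / det,
    tx: (m.c * m.ty - m.d * m.tx) / det,
    ty: (m.b * m.tx - m.a * m.ty) / det,
  };
}
```

A round trip through a matrix and its inverse should return the original point, which is exactly the canvas-to-texture mapping the question asks for.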

Get screen coordinates of a vertex in a THREE.js Points object using bufferGeometry

I want to have a DOM node track a particle in my THREE.js simulation. My simulation is built with the Points object, using a bufferGeometry. I'm setting the positions of each vertex in the render loop. Over the course of the simulation I'm moving / rotating both the camera and the Points object (through its parent Object3d).
I can't figure out how to get reliable screen coordinates for any of my particles. I've followed the instructions on other questions, like Three.JS: Get position of rotated object, and Converting World coordinates to Screen coordinates in Three.js using Projection, but none of them seem to work for me. At this point I can see that the calculated projections of the vertices are changing with my camera movements and object rotations, but not in a way that I can actually map to the screen. Also, sometimes two particles that neighbor each other on the screen will yield wildly different projected positions.
Here's my latest attempt:
const { x, y, z } = layout.getNodePosition(nodes[nodeHoverTarget].id)
var m = camera.matrixWorldInverse.clone()
var mw = points.matrixWorld.clone()
var p = camera.projectionMatrix.clone()
var modelViewMatrix = m.multiply(mw)
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(p.multiply(modelViewMatrix))
console.log(projectedPosition)
Essentially I've replicated the operations in my shader to derive gl_Position.
projectedPosition is where I'd like to store the screen coordinates.
I'm sorry if I've missed something obvious... I've tried a lot of things but so far nothing has worked :/
Thanks in advance for any help.
I figured it out...
var position = new THREE.Vector3(x, y, z)
var projectedPosition = position.applyMatrix4(points.matrixWorld).project(camera)
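Note that the result of .project() is still in NDC. To actually position a DOM node over the particle you still need to map it to pixels; a sketch of that mapping, assuming the canvas fills width x height CSS pixels:

```javascript
// Map an NDC point (-1..1, y pointing up) to canvas pixel
// coordinates (origin top-left, y pointing down).
function ndcToPixels(ndc, width, height) {
  return {
    x: (ndc.x * 0.5 + 0.5) * width,
    y: (-ndc.y * 0.5 + 0.5) * height, // flip y: NDC is y-up, screen is y-down
  };
}
```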

How to plot country names on the globe, so the mesh will be aligned with the surfaces

I'm trying to plot country names on the globe so that the text meshes are aligned with the surface, but I'm failing to calculate the proper rotations. For the text I'm using THREE.TextGeometry. The name appears on click on the mesh of the country, at the point of intersection found with raycasting. I'm lacking the knowledge of how to turn these coordinates into proper rotation angles. I'm not posting my code, as it's a complete mess, and I believe it will be easier for a knowledgeable person to explain how to achieve this in general.
Here is the desired result:
The other solution, which I tried (and which, of course, is not the ultimate one), is based on this SO answer. The idea is to use the normal of the face you intersect with the raycaster.
Obtain the point of intersection.
Obtain the face of intersection.
Obtain the normal of the face (2).
Get the normal (3) in world coordinates.
Set position of the text object as sum of point of intersection (1) and the normal in world coordinates (4).
Set lookAt() vector of the text object as sum of its position (5) and the normal in world coordinates (4).
Seems long, but actually it doesn't take much code:
var PGHelper = new THREE.PolarGridHelper(...); // let's imagine it's your text object ;)
var PGlookAt = new THREE.Vector3(); // point of lookAt for the "text" object
var normalMatrix = new THREE.Matrix3();
var worldNormal = new THREE.Vector3();
and in the animation loop:
for ( var i = 0; i < intersects.length; i++ ) {
  normalMatrix.getNormalMatrix( intersects[i].object.matrixWorld );
  worldNormal.copy( intersects[i].face.normal ).applyMatrix3( normalMatrix ).normalize();
  PGHelper.position.addVectors( intersects[i].point, worldNormal );
  PGlookAt.addVectors( PGHelper.position, worldNormal );
  PGHelper.lookAt( PGlookAt );
}
jsfiddle example
The method works with meshes of any geometry (checked with spheres and boxes though ;) ). And I'm sure there are other, better methods.
Very interesting question. I have tried this approach: we can regard the text as a plane. Let's define a normal vector n from your sphere's center (or position) to the point on the sphere surface where you want to display the text. I have a simple way to get the normal right:
1. Put the text mesh at the sphere's center: text.position.copy(sphere.position)
2. Make the text face the point on the sphere surface: text.lookAt(point)
3. Relocate the text to the point: text.position.copy(point)
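The normal that this trick aligns to is just the direction from the sphere's center to the surface point; a plain-JS sketch of that vector (assumes the point is not at the center):

```javascript
// Outward unit normal of a sphere at a surface point: the normalized
// direction from the sphere's center to that point.
function sphereNormal(center, point) {
  const dx = point.x - center.x;
  const dy = point.y - center.y;
  const dz = point.z - center.z;
  const len = Math.hypot(dx, dy, dz); // assumed non-zero
  return { x: dx / len, y: dy / len, z: dz / len };
}
```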

Setting the projectionMatrix of a Perspective Camera in Three.js

I'm trying to set the projectionMatrix of a Three.js PerspectiveCamera to match a projection matrix I calculated with a different program.
So I set the camera's position and rotation like this:
self.camera.position.x = 0;
self.camera.position.y = 0;
self.camera.position.z = 142 ;
self.camera.rotation.x = 0.0;// -0.032
self.camera.rotation.y = 0.0;
self.camera.rotation.z = 0;
Next I created a 4x4 Matrix (called Matrix4 in Three.js) like this:
var projectionMatrix = new THREE.Matrix4(-1426.149, -145.7176, -523.0170, 225.07519, -42.40711, -1463.2367, -23.6839, 524.3322, -0.0174, -0.11928, -0.99270, 0.43826, 0, 0, 0, 1);
and changed the camera's projection Matrix entries like this:
for ( var i = 0; i < 16; i++) {
self.camera.projectionMatrix.elements[i] = projectionMatrix.elements[i];
}
When I now render the scene I just get a black screen and can't see any of the objects I inserted. Turning the angle of the camera doesn't help either; I still can't see any objects.
If I insert a
self.camera.updateProjectionMatrix();
after setting the camera's projection matrix to the values of my projectionMatrix, the camera is set back to its original position (x=0, y=0, z=142, looking at the origin where I created some objects) and the values I set in the camera's matrix seem to have been overwritten. I checked that by printing the camera's projection matrix to the console. If I do not call the updateProjectionMatrix() function, the values stay as I set them.
Does somebody have an idea how to solve this problem?
If I do not call the updateProjectionMatrix() function the values stay as I set them.
Correct: updateProjectionMatrix() calculates those 16 numbers you pasted into your projection matrix based on a bunch of parameters. Those parameters are the position and rotation you set above, plus the parameters you passed (or defaults) for the camera (these actually make the matrixWorld and its inverse).
In the case of a perspective camera you don't have much: near, far, fov and aspect. Left, right, top and bottom are derived from these; with an orthographic camera you set them directly. These are then used to compose the projection matrix.
Scratchapixel has a really good tutorial on this subject. The next lesson there, on the OpenGL projection matrix, is actually more relevant to WebGL. Left, right, top and bottom are made from your fov and your aspect ratio. Add near and far and you've got yourself a projection matrix.
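To make that derivation concrete, here is a sketch of how an OpenGL-style perspective projection matrix can be composed from fov/aspect/near/far, returned row-major as a flat array (a symmetric frustum, so the off-center terms vanish; this is a sketch of the standard formulas, not three.js's internal code):

```javascript
// Compose a perspective projection matrix from fov (degrees),
// aspect ratio, near and far plane distances.
function makePerspective(fovDeg, aspect, near, far) {
  const top = near * Math.tan((fovDeg * Math.PI) / 360); // half-height at near plane
  const right = top * aspect;                            // half-width at near plane
  // Row-major 4x4; for a symmetric frustum 2n/(r-l) reduces to n/r.
  return [
    near / right, 0, 0, 0,
    0, near / top, 0, 0,
    0, 0, -(far + near) / (far - near), (-2 * far * near) / (far - near),
    0, 0, -1, 0,
  ];
}
```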
Now, in order for this to work, you either have to know what you're doing, or get really lucky. Pasting these numbers from somewhere else and getting it to work is little short of winning the lottery. Best case scenario, you have your scale all wrong and are clipping your scene. Worst case, you've mixed in a completely different matrix with a different XYZ convention, and there's no way you'll get it to work, or at least make sense of it.
Out of curiosity, what are you trying to do? Are you trying to match your camera to a camera from somewhere else?

Three.js—rotation around arbitrary line

I am starting with Three.js so I might have misunderstood some basics of the concept. I have a usual 3d scene with a hierarchy like this:
.
+-container #(0,0,0) (Object3d, no own geometry)
+-child 1 #(1,1,1)
+-child 2 #(1, -2, 5)
+-child 3 #(-4, -2, -3)
.
.
. more should come
all »children« of the »container« are imported models from Blender. What I would like to do is to rotate the whole container around a pivot axis based on the current selection, which should be one of the children.
Imagine three cubes in Blender, all selected, with the 3d cursor at the center of the first one set as the center of transformation. A rotation transforms all cubes, but the rotation is relative to the first one in the selection.
In terms of three.js, what would like to do is to rotate the container, so that the rotation is applied to all children.
To do that I think that the following steps should do the trick:
create a matrix,
translate that matrix by the negative of the selected objects position
rotate that matrix
translate the matrix back to the selected objects position
apply the transform to the container
I have tried the following code but the result is just wrong:
var sp = selection.position.clone(),
m = new THREE.Matrix4();
selection.localToWorld(sp);
m.setPosition(sp.clone().negate());
//I've used makeRotationX for testing purposes, should be replaced with quaternion rotation later on…
m = m.multiply(new THREE.Matrix4().makeRotationX(2*180/Math.PI));
m = m.multiply(new THREE.Matrix4().makeTranslation(sp.x,sp.y,sp.z));
this._container.applyMatrix(m);
Thanks for help!
UPDATE
sign error—this works:
var sp = selection.position.clone(),
m = new THREE.Matrix4();
m.makeTranslation(sp.x,sp.y,sp.z);
m.multiply(new THREE.Matrix4().makeRotationX(0.1));
m.multiply(new THREE.Matrix4().makeTranslation(-sp.x,-sp.y,-sp.z));
this._container.applyMatrix(m);
BUT that code does not really look that good; creating three matrices for this single operation seems a bit of overhead. What is the usual »three.js way«?
UPDATE #2
Due to the comment here is an image describing what I would like to do:
The »arrows« at the origin stand for the parent container and the cube, the sphere and the cone are its »children«. The red line shows the line I would like rotate the parent around, this way the rotation is applied to all children.
rotateOnAxis() takes a vector as the axis, so the line the object rotates around passes through its own origin.
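Per point, the translate-rotate-translate composition from the update computes the following; a plain-JS sketch for rotation about a line parallel to the X axis through a pivot point (names are illustrative):

```javascript
// Rotate point p by `angle` radians around the X-parallel line
// passing through `pivot` (translate pivot to origin, rotate,
// translate back - the same composition as the three matrices above).
function rotateAroundPivotX(p, pivot, angle) {
  const y = p.y - pivot.y; // move pivot to origin
  const z = p.z - pivot.z;
  const c = Math.cos(angle), s = Math.sin(angle);
  return {
    x: p.x, // rotation about X leaves x unchanged
    y: pivot.y + y * c - z * s, // rotate, then move back
    z: pivot.z + y * s + z * c,
  };
}
```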