I have to construct the object's world matrix manually because this is faster and a lot more optimal than using the methods to set position. But the problem I have is that I need to rotate the matrix 180 degrees. makeRotationY(alpha) just sets the rotation to 180°, disregarding the pre-existing rotation. I need to be able to retain the pre-existing rotation, so if the matrix has a rotation of 10° and I apply a 180° rotation, the total is 190°. How can I do this?
It currently works like this:
var inverse_view_to_source = new THREE.Matrix4().getInverse(camera.matrix)
    .multiply(src_portal.matrix);
inverse_view_to_source.makeRotationY(Math.PI);
inverse_view_to_source = dst_portal.matrix.clone().multiply(inverse_view_to_source);
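A minimal sketch of the fix, assuming the goal is simply to compose an extra 180° turn with whatever Y rotation the matrix already holds: instead of calling makeRotationY() on the existing matrix (which resets it), multiply it by a fresh rotation matrix, i.e. `inverse_view_to_source.multiply(new THREE.Matrix4().makeRotationY(Math.PI))`. The plain-array version below mimics three.js's column-major element layout so the arithmetic can be checked standalone:

```javascript
// column-major 4x4 rotation about Y, same layout as THREE.Matrix4.elements
function makeRotationY(theta) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [
    c, 0, -s, 0,
    0, 1,  0, 0,
    s, 0,  c, 0,
    0, 0,  0, 1,
  ];
}

// c = a * b for column-major 4x4 matrices
function multiply4(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

// read the Y angle back out of a pure Y-rotation matrix
function rotationYAngle(m) {
  return Math.atan2(m[8], m[0]); // atan2(sin, cos)
}

const deg = d => d * Math.PI / 180;
// compose instead of overwrite: 10° already present, then apply 180° more
const composed = multiply4(makeRotationY(deg(10)), makeRotationY(deg(180)));
const total = (rotationYAngle(composed) * 180 / Math.PI + 360) % 360;
console.log(total); // ≈ 190: the pre-existing 10° is kept
```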
For example, I have a unit cube. When there is no scaling, I want to get one coordinate unit represented by how many screen pixels.
When I zoom in, then this one coordinate unit is represented by how many screen pixels?
I'm not entirely sure if I understood your question, but to get the distance between any two THREE vectors you do this:
const distanceAB = new THREE.Vector3(1,1,1).sub( new THREE.Vector3(2,2,2) ).length()
Your unit cube would have vertices like (-1,-1,-1) and (1,1,1) among others (or actually ±0.5 if it has unit edge length). Either way you need to obtain these values (easier when using THREE.Geometry than BufferGeometry).
Then you project these vertices
const vertexA = new THREE.Vector3() // set this from cube
const vertexB = new THREE.Vector3()
const screenSpaceVector = new THREE.Vector3().subVectors(vertexA.project(myCamera), vertexB.project(myCamera))
The result is now in something called NDC (normalized device coordinates), a cube going from -1 to 1. To remap it to the 0–1 range:
screenSpaceVector.multiplyScalar(0.5).add(new THREE.Vector3(0.5,0.5,0.5))
Finally, to figure out how many pixels this is:
screenSpaceVector.x *= renderer.getSize().width
screenSpaceVector.y *= renderer.getSize().height
screenSpaceVector.z = 0
const pixelLength = screenSpaceVector.length()
I think that should do the trick.
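For reference, the projection steps above collapse into one small function (plain JS, no renderer needed; `ndcDistanceInPixels` is just an illustrative name). Note the +0.5 offset cancels when you take a difference, so only the 0.5 scale actually matters:

```javascript
// a and b are points in NDC (what Vector3.project() returns, each axis
// in [-1, 1]); width and height are the canvas size in pixels.
function ndcDistanceInPixels(a, b, width, height) {
  // remap the NDC difference to pixels: the constant +0.5 offset used
  // above cancels in a difference, so only the 0.5 scale survives
  const dx = (a.x - b.x) * 0.5 * width;
  const dy = (a.y - b.y) * 0.5 * height;
  // z is dropped: we only want the on-screen length
  return Math.sqrt(dx * dx + dy * dy);
}

// two points spanning the full NDC width of an 800x600 canvas
console.log(ndcDistanceInPixels({ x: -1, y: 0 }, { x: 1, y: 0 }, 800, 600)); // 800
```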
I am writing a mesh editor where I have manipulators with the help of which I change the vertices of the mesh. The task is to render the manipulators with constant dimensions, which would not change when changing the camera and viewport parameters. The projection matrix is perspective. I will be grateful for ideas how to implement the invariant scale geometry.
If I got it right, you want to render some markers (for example vertex-drag editing handles) with the same visual size at any depth they are rendered at.
There are 2 approaches for this:
scale with depth
compute the perpendicular distance to the camera view plane (a simple dot product) and scale the marker size so it has the same visual size regardless of depth.
So if P0 is your camera position and Z is your camera view direction unit vector (usually the Z axis), then for any position P compute the depth like this:
depth = dot(P-P0,Z)
Now the scale depends on the wanted visual size0 at some specified depth0. Using triangle similarity we want:
size/depth = size0/depth0
size = size0*depth/depth0
so render your marker with size, or scale by depth/depth0. If you use scaling, you need to scale around your target position P, otherwise your marker would shift to the side (so translate, scale, translate back).
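A minimal sketch of approach #1 in plain JavaScript (the names markerScale, p0 and z are just for illustration):

```javascript
// p0 is the camera position, z the unit view direction, size0 the
// wanted visual size at the calibration depth depth0.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function markerScale(p, p0, z, size0, depth0) {
  // perpendicular distance of p from the camera plane: dot(P - P0, Z)
  const depth = dot({ x: p.x - p0.x, y: p.y - p0.y, z: p.z - p0.z }, z);
  return size0 * depth / depth0; // size = size0*depth/depth0
}

// marker calibrated to size 10 at depth 5; at depth 20 it must be 4x bigger
const s = markerScale(
  { x: 0, y: 0, z: -20 },  // marker position P
  { x: 0, y: 0, z: 0 },    // camera position P0
  { x: 0, y: 0, z: -1 },   // camera looks down -Z
  10, 5
);
console.log(s); // 40
```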
compute screen position and use non-perspective rendering
so you transform the target coordinates the same way the graphics pipeline does until you get the screen x,y position. Remember it, and in the pass that renders your markers just use that instead of the real position. For this rendering pass either use some constant depth (distance from camera) or a non-perspective view matrix.
For more info see Understanding 4x4 homogeneous transform matrices
[Edit1] pixel size
you need the FOVx,FOVy projection angles and the view/screen resolution (xs,ys) for that. That means if depth is znear and a coordinate is at half of the angle, then the projected coordinate will land at the edge of the screen:
tan(FOVx/2) = (xs/2)*pixelx/znear
tan(FOVy/2) = (ys/2)*pixely/znear
---------------------------------
pixelx = 2*znear*tan(FOVx/2)/xs
pixely = 2*znear*tan(FOVy/2)/ys
Where pixelx,pixely is the size (per axis) representing a single pixel visually at depth znear. In case both sizes are the same (so the pixel is square) you have all you need. If they are not equal (the pixel is not square) you need to render the markers in screen-axis-aligned coordinates, so approach #2 is more suitable for that case.
So if you choose depth0=znear then you can set size0 as n*pixelx and/or n*pixely to get a visual size of n pixels. Or use any depth0 and rewrite the computation to:
pixelx = 2*depth0*tan(FOVx/2)/xs
pixely = 2*depth0*tan(FOVy/2)/ys
Just to be complete:
size0x = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)
size0y = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)
-------------------------------------------------
sizex = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)*(depth/depth0)
sizey = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)*(depth/depth0)
---------------------------------------------------------------
sizex = size_in_pixels*(2*tan(FOVx/2)/xs)*(depth)
sizey = size_in_pixels*(2*tan(FOVy/2)/ys)*(depth)
---------------------------------------------------------------
sizex = size_in_pixels*2*depth*tan(FOVx/2)/xs
sizey = size_in_pixels*2*depth*tan(FOVy/2)/ys
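The final formulas above as a standalone helper (FOV angles in radians, xs/ys in pixels; a sketch under the answer's assumptions, not a drop-in three.js utility):

```javascript
// world-space size covered by one pixel at a given perpendicular depth
function worldSizePerPixel(depth, fov, screenSize) {
  return 2 * depth * Math.tan(fov / 2) / screenSize;
}

function markerWorldSize(sizeInPixels, depth, fovx, fovy, xs, ys) {
  return {
    x: sizeInPixels * worldSizePerPixel(depth, fovx, xs), // sizex
    y: sizeInPixels * worldSizePerPixel(depth, fovy, ys), // sizey
  };
}

// 90° horizontal FOV on an 800px-wide view: at depth 400 the frustum
// half-width is 400*tan(45°) = 400, so 800 world units span 800 pixels
// and one pixel covers one world unit
console.log(markerWorldSize(1, 400, Math.PI / 2, Math.PI / 2, 800, 600).x); // ≈ 1
```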
I have initiated a PIXI js canvas:
g_App = new PIXI.Application(800, 600, { backgroundColor: 0x1099bb });
Set up a container:
container = new PIXI.Container();
g_App.stage.addChild(container);
Put a background texture (2000x2000) into the container:
var texture = PIXI.Texture.fromImage('picBottom.png');
var back = new PIXI.Sprite(texture);
container.addChild(back);
Set the global:
var g_Container = container;
I do various pivot points and rotations on container and canvas stage element:
// Set the focus point of the container
g_App.stage.x = Math.floor(400);
g_App.stage.y = Math.floor(500); // Note this one is not central
g_Container.pivot.set(1000, 1000);
g_Container.rotation = 1.5; // radians
Now I need to be able to convert a canvas pixel to the pixel on the background texture.
g_Container has an element transform which in turn has several elements localTransform, pivot, position, scale ands skew. Similarly g_App.stage has the same transform element.
In maths this is simple: you just have a vector point and do matrix operations on it. Then to go back the other way you find the inverses of those matrices and multiply backwards.
So what do I do here in pixi.js?
How do I convert a pixel on the canvas and see what pixel it is on the background container?
Note: The following is written using the USA convention for matrices. They have row vectors on the left and multiply them by the matrix on the right. (Us pesky Brits in the UK do the opposite: we have column vectors on the right and multiply them by the matrix on the left. This means UK and USA matrices that do the same job will look slightly different.)
Now I have confused you all, on with the answer.
g_Container.transform.localTransform - this matrix takes the world coords to the scaled/translated/rotated coords
g_App.stage.transform.localTransform - this matrix takes the rotated world coords and outputs screen (or, more accurately, HTML canvas) coords
So for example the Container matrix is:
MatContainer = [g_Container.transform.localTransform.a, g_Container.transform.localTransform.b, 0]
[g_Container.transform.localTransform.c, g_Container.transform.localTransform.d, 0]
[g_Container.transform.localTransform.tx, g_Container.transform.localTransform.ty, 1]
and the rotated container matrix to screen is:
MatToScreen = [g_App.stage.transform.localTransform.a, g_App.stage.transform.localTransform.b, 0]
[g_App.stage.transform.localTransform.c, g_App.stage.transform.localTransform.d, 0]
[g_App.stage.transform.localTransform.tx, g_App.stage.transform.localTransform.ty, 1]
So to get from World Coordinates to Screen Coordinates (noting our vector will be a row on the left, so the first operation matrix that acts first on the World coordinates must also be on the left), we would need to multiply the vector by:
MatAll = MatContainer * MatToScreen
So if you have a world coordinate vector vectWorld = [worldX, worldY, 1.0] (I'll explain the 1.0 at the end), then to get to the screen coords you would do the following:
vectScreen = vectWorld * MatAll
So to go back from screen coords to world coords we first need to calculate the inverse matrix of MatAll, call it invMatAll. (There are loads of places that tell you how to do this, so I will not do it here.)
So if we have screen (canvas) coordinates screenX and screenY, we need to create a vector vectScreen = [screenX, screenY, 1.0] (again I will explain the 1.0 later), then to get to world coordinates worldX and worldY we do:
vectWorld = vectScreen * invMatAll
And that is it.
So what about the 1.0?
In a 2D system you can do rotations and scaling with 2x2 matrices. Unfortunately you cannot do a 2D translation with a 2x2 matrix. Consequently you need 3x3 matrices to fully describe all 2D scaling, rotations and translations. This means you need to make your vector 3D as well, and you need to put a 1.0 in the third position in order to do the translations properly. This 1.0 will still be 1.0 after any of these matrix operations.
Note: If we were working in a 3D system we would need 4x4 matrices and put a dummy 1.0 in our 4D vectors for exactly the same reasons.
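To make the row-vector convention concrete, here is the whole vectScreen = vectWorld * MatAll pipeline with plain nested arrays. The helper names are hypothetical; in real code the a, b, c, d, tx, ty values come from the PIXI localTransform matrices as shown above:

```javascript
// v' = v * M in the row-vector (USA) convention, v = [x, y, 1]
function mulVecMat(v, m) {
  return [
    v[0] * m[0][0] + v[1] * m[1][0] + v[2] * m[2][0],
    v[0] * m[0][1] + v[1] * m[1][1] + v[2] * m[2][1],
    v[0] * m[0][2] + v[1] * m[1][2] + v[2] * m[2][2],
  ];
}
function mulMatMat(a, b) { return a.map(row => mulVecMat(row, b)); }

// translation and rotation written for the row-vector convention
const translate = (tx, ty) => [[1, 0, 0], [0, 1, 0], [tx, ty, 1]];
const rotate = t => [[ Math.cos(t), Math.sin(t), 0],
                     [-Math.sin(t), Math.cos(t), 0],
                     [ 0,           0,           1]];

// MatAll = MatContainer * MatToScreen, acting left to right on the vector
const matAll = mulMatMat(rotate(Math.PI / 2), translate(100, 50));
const vectScreen = mulVecMat([10, 0, 1], matAll);
console.log(vectScreen); // ≈ [100, 60, 1]: (10,0) rotated 90°, then shifted
```

Note how the trailing 1.0 comes out as 1.0 again, exactly as described above.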
I'm trying to set the ProjectionMatrix of a Three.js Perspective Camera to match a projection Matrix I calculated with a different program.
So I set the camera's position and rotation like this:
self.camera.position.x = 0;
self.camera.position.y = 0;
self.camera.position.z = 142 ;
self.camera.rotation.x = 0.0;// -0.032
self.camera.rotation.y = 0.0;
self.camera.rotation.z = 0;
Next I created a 4x4 Matrix (called Matrix4 in Three.js) like this:
var projectionMatrix = new THREE.Matrix4(-1426.149, -145.7176, -523.0170, 225.07519, -42.40711, -1463.2367, -23.6839, 524.3322, -0.0174, -0.11928, -0.99270, 0.43826, 0, 0, 0, 1);
and changed the camera's projection Matrix entries like this:
for ( var i = 0; i < 16; i++) {
self.camera.projectionMatrix.elements[i] = projectionMatrix.elements[i];
}
When I now render the scene I just get a black screen and can't see any of the objects I inserted. Turning the camera doesn't help either; I still can't see any objects.
If I insert a
self.camera.updateProjectionMatrix();
after setting the camera's projection matrix to the values of my projectionMatrix, the camera is set back to the original position (x=0, y=0, z=142, looking at the origin where I created some objects) and the values I set in the camera's matrix seem to have been overwritten. I checked that by printing the camera's projection matrix to the console. If I do not call the updateProjectionMatrix() function, the values stay as I set them.
Does somebody have an idea how to solve this problem?
If I do not call the updateProjectionMatrix() function the values stay as I set them.
Correct. updateProjectionMatrix() recalculates those 16 numbers you pasted into your projection matrix from the parameters you passed (or the defaults) when constructing the camera, overwriting your values. (The position and rotation you set above go into matrixWorld and its inverse, not the projection matrix.)
In the case of a perspective camera you don't have much: near, far, fov and aspect. Left, right, top and bottom are derived from these; with an orthographic camera you set them directly. These are then used to compose the projection matrix.
Scratchapixel has a REALLY good tutorial on this subject, and the next lesson there, on the OpenGL projection matrix, is actually more relevant to WebGL. Left, right, top and bottom are made from your FOV and your aspect ratio; add near and far and you've got yourself a projection matrix.
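For reference, that recipe looks roughly like this in plain JavaScript. This is the standard OpenGL-style perspective matrix in column-major order (the layout of THREE.Matrix4.elements), essentially what updateProjectionMatrix() composes for a PerspectiveCamera:

```javascript
// fov is the vertical field of view in radians
function perspectiveMatrix(fov, aspect, near, far) {
  // left/right/top/bottom derived from fov and aspect, as described above
  const top = near * Math.tan(fov / 2);
  const bottom = -top;
  const right = top * aspect;
  const left = -right;
  // column-major 4x4
  return [
    2 * near / (right - left), 0, 0, 0,
    0, 2 * near / (top - bottom), 0, 0,
    (right + left) / (right - left), (top + bottom) / (top - bottom),
      -(far + near) / (far - near), -1,
    0, 0, -2 * far * near / (far - near), 0,
  ];
}

const m = perspectiveMatrix(Math.PI / 2, 1.0, 1, 100);
console.log(m[0], m[5], m[11]); // ≈ 1 1 -1 for a 90° fov, square aspect
```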
Now, in order for this thing to work, you either have to know what you're doing or get really lucky. Pasting these numbers from somewhere else and getting it to work is little short of winning the lottery. Best case scenario, your scale is all wrong and you're clipping your scene. Worst case, you've pasted in a completely different kind of matrix, with a different XYZ convention, and there's no way you'll get it to work, or at least make sense of it.
Out of curiosity, what are you trying to do? Are you trying to match your camera to a camera from somewhere else?
I am starting with Three.js so I might have misunderstood some basics of the concept. I have a usual 3d scene with a hierarchy like this:
.
+-container #(0,0,0) (Object3d, no own geometry)
+-child 1 #(1,1,1)
+-child 2 #(1, -2, 5)
+-child 3 #(-4, -2, -3)
.
.
. more should come
all »children« of the »container« are imported models from Blender. What I would like to do is to rotate the whole container around a pivot axis based on the current selection, which should be one of the children.
Imagine three cubes in Blender, all selected, with the 3D cursor at the center of the first one acting as the pivot of the transformation. A rotation transforms all cubes, but the rotation is relative to the first one in the selection.
In terms of three.js, what I would like to do is rotate the container so that the rotation is applied to all children.
To do that I think that the following steps should do the trick:
create a matrix,
translate that matrix by the negative of the selected objects position
rotate that matrix
translate the matrix back to the selected objects position
apply the transform to the container
I have tried the following code but the result is just wrong:
var sp = selection.position.clone(),
m = new THREE.Matrix4();
selection.localToWorld(sp);
m.setPosition(sp.clone().negate());
//I've used makeRotationX for testing purposes, should be replaced with quaternion rotation later on…
m = m.multiply(new THREE.Matrix4().makeRotationX(2*180/Math.PI));
m = m.multiply(new THREE.Matrix4().makeTranslation(sp.x,sp.y,sp.z));
this._container.applyMatrix(m);
Thanks for help!
UPDATE
sign error—this works:
var sp = selection.position.clone(),
m = new THREE.Matrix4();
m.makeTranslation(sp.x,sp.y,sp.z);
m.multiply(new THREE.Matrix4().makeRotationX(0.1));
m.multiply(new THREE.Matrix4().makeTranslation(-sp.x,-sp.y,-sp.z));
this._container.applyMatrix(m);
BUT that code does not really look that good; creating three matrices for that single operation seems a bit of overhead. What is the usual »three.js way«?
UPDATE #2
Due to the comment here is an image describing what I would like to do:
The »arrows« at the origin stand for the parent container and the cube, the sphere and the cone are its »children«. The red line shows the line I would like rotate the parent around, this way the rotation is applied to all children.
rotateOnAxis() takes a vector as the axis, so the line the object rotates around crosses its origin.
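As a sanity check on the translate, rotate, translate-back pattern from the first update, here is the same composition in plain 2D homogeneous math (column vectors here, as in three.js). The defining property is that the pivot point itself maps to itself:

```javascript
// apply a 3x3 row-major nested matrix m to a column vector v = [x, y, 1]
function apply(m, v) {
  return m.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
}
// 3x3 matrix product a * b
function mul(a, b) {
  return a.map((row, i) => row.map((_, j) =>
    row.reduce((sum, _, k) => sum + a[i][k] * b[k][j], 0)));
}
const T = (x, y) => [[1, 0, x], [0, 1, y], [0, 0, 1]];
const R = t => [[Math.cos(t), -Math.sin(t), 0],
                [Math.sin(t),  Math.cos(t), 0],
                [0, 0, 1]];

const p = [3, 4]; // pivot = selected object's position
// M = T(p) * R * T(-p): translate pivot to origin, rotate, translate back
const M = mul(mul(T(p[0], p[1]), R(0.1)), T(-p[0], -p[1]));
console.log(apply(M, [3, 4, 1])); // ≈ [3, 4, 1]: the pivot maps to itself
```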