Keep oriented projectiles facing the camera - matrix

I'm trying to render a 2D image that represents a projectile in a 3D world, and I'm having difficulty making the projectile face the camera without changing its direction. I'm using the JOML math library.
My working code to orient the projectile along its direction of travel:
public Quaternionf findRotation(Vector3f objectRay, Vector3f targetRay) {
    Vector3f oppositeVector = new Vector3f(-objectRay.x, -objectRay.y, -objectRay.z);
    // special case: the two vectors are exactly opposite
    if (oppositeVector.x == targetRay.x && oppositeVector.y == targetRay.y && oppositeVector.z == targetRay.z) {
        AxisAngle4f axis = new AxisAngle4f((float) Math.toRadians(180), 0, 0, 1);
        Quaternionf result = new Quaternionf(axis);
        return result;
    }
    // note: normalize() modifies the vectors in place
    objectRay = objectRay.normalize();
    targetRay = targetRay.normalize();
    double angleDif = Math.acos(new Vector3f(targetRay).dot(objectRay));
    if (angleDif != 0) {
        Vector3f orthoRay = new Vector3f(objectRay).cross(targetRay);
        orthoRay = orthoRay.normalize();
        AxisAngle4f deltaQ = new AxisAngle4f((float) angleDif, orthoRay.x, orthoRay.y, orthoRay.z);
        Quaternionf result = new Quaternionf(deltaQ);
        return result.normalize();
    }
    return new Quaternionf();
}
Now I want to add a Vector3f cameraPosition parameter and rotate the projectile only around its X axis so that it faces the camera, but I don't know how to do it.
For example, with the following code the projectile correctly rotates around its X axis but does not face the camera, so I want to know how to find the correct angle.
this.lasers[i].getModel().rotate((float) Math.toRadians(5), 1, 0, 0);
I tried this to rotate around the X axis, transforming the vectors before computing the angle:
this.lasers[i] = new VisualEffect(this.position, new Vector3f(1,1,1), color, new Vector2f(0,0.33f));
this.lasers[i].setModel(new Matrix4f().scale(this.lasers[i].getScale()));
this.lasers[i].getModel().rotate(rotation);
this.lasers[i].getModel().translateLocal(this.lasers[i].getPosition());
Vector3f vec = new Vector3f(cameraPosition).sub(this.position);
Vector4f vecSpaceModel = this.lasers[i].getModel().transform(new Vector4f(vec, 1.0f));
Vector4f normalSpaceModel = this.lasers[i].getModel().transform(new Vector4f(normal, 1.0f));
float angleX = new Vector2f(vecSpaceModel.y, vecSpaceModel.z).angle(new Vector2f(normalSpaceModel.y, normalSpaceModel.z));
this.lasers[i].getModel().rotate(angleX, 1, 0, 0);

Since you are using JOML, you can massively simplify your whole setup.
Let's assume that:
projectilePosition is the position of the projectile,
targetPosition is the position the projectile is flying at/towards, and
cameraPosition is the position of the "camera" (which we ultimately want the projectile to face)
We will also assume that the local coordinate system of the projectile is such that its +X axis points along the projectile's path (like how you depicted it) and the +Z axis points away from the projectile towards the viewer when the viewer is "facing" the projectile. So, the projectile itself is defined as a quad on the XY plane within its own local coordinate system.
What we must do now is create a basis transformation that will effectively transform the projectile such that its X axis points towards the "target" and its Z axis points "as best as we can" towards the camera.
This is very reminiscent of what we know as the "lookAt" transformation in OpenGL, and in fact we are just going to use that. However, since the common "lookAt" is the inverse of what we want to do, we will also just invert it.
So, all in all, your complete model matrix/transformation for a single projectile will look like this (in JOML):
Vector3f projectilePosition = ...;
Vector3f cameraPosition = ...;
Vector3f targetPosition = ...;
Vector3f projectileToCamera = new Vector3f(cameraPosition).sub(projectilePosition);
modelMatrix
    .setLookAt(projectilePosition, targetPosition, projectileToCamera)
    .invert()
    .rotateXYZ((float) Math.toRadians(-90), 0, (float) Math.toRadians(90));
In case you do not want to use lookAt and invert, you can also do:
Vector3f projectileToTarget = new Vector3f(targetPosition).sub(projectilePosition);
modelMatrix
    .translation(projectilePosition)
    .rotateTowards(projectileToTarget, projectileToCamera)
    .rotateXYZ((float) Math.toRadians(-90), 0, (float) Math.toRadians(-90));
yielding the same result as the above code.
Note that nowhere do we actually need angles or trigonometric functions. This is very common when you already have all positions/directions given as vectors, you can simply use linear algebra without converting from/to angles.
The final rotateXYZ(±90°, 0°, ±90°) call expresses that we do not want the -Z axis of the projectile to point towards the target (which is what lookAt does by default); instead we want the X axis to point at the target.
Yet another way is to realize that what we do here is also known as a "cylindrical" or "axial" billboard, and can also be expressed like so:
Vector3f projectileToTarget = new Vector3f(targetPosition).sub(projectilePosition).normalize();
modelMatrix
    .billboardCylindrical(projectilePosition, cameraPosition, projectileToTarget)
    .rotateZ((float) Math.toRadians(90));
(Note that in this case projectileToTarget needs to be a unit vector!)
A test scene with 24 projectiles, all targeting "the center" and the camera hovering over them, looks as expected; the corresponding simple LWJGL 3 / JOML demo generates this image.

Related

Having a point seen from 3 static cameras' perspectives, how to restore its position in 3D space?

We have three web cameras of the same type, statically installed (say around a flat basketball court) and not on the same line. They all live in one 3D space, and the position (x, y, z) and orientation (ax, ay, az) of each of them is known.
We know the ball's color and have found its position in all three images im1, im2, im3. Now, having its position in the 2D frames, (p1x, p1y), (p2x, p2y), (p3x, p3y), and the cameras' positions/orientations, how do we get the ball's position in 3D space?
You need to unproject the 2D screen coordinates into rays in 3D space, one per camera.
Then you need to solve a system of equations to find the real 3D point from the rays you obtained in the first step.
You can find the source code for gluUnProject online; I also provide my own code for it here:
public Vector4 Unproject(float x, float y, Matrix4 View)
{
    var ndcX = x / Viewport.Width * 2 - 1.0f;
    var ndcY = y / Viewport.Height * 2 - 1.0f;
    var invVP = Matrix4.Invert(View * ProjectionMatrix);
    // We don't know the z-coordinate of the point, so we choose 0.0f for it.
    // We will find it out later.
    var screenPos = new Vector4(ndcX, -ndcY, 0.0f, 1.0f);
    var res = Vector4.Transform(screenPos, invVP);
    return res / res.W;
}
Ray ComputeRay(Camera camera, Vector2 p)
{
    var worldPos = Unproject(p.X, p.Y, camera.View);
    var dir = new Vector3(worldPos) - camera.Eye;
    return new Ray(camera.Eye, Vector3.Normalize(dir));
}
Now you need to find the intersection of the three rays. Theoretically two rays would be enough; whether that suffices depends on the positions of your cameras.
If we had infinite-precision floating-point arithmetic and noise-free input, this would be trivial. In reality the rays will not intersect exactly, so you need a simple numerical scheme to find the point with appropriate precision.
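For instance, a common numerical scheme is to take the least-squares point that minimizes the summed squared distance to all rays. Below is a minimal sketch of that idea, written here with JOML (as used elsewhere on this page) rather than the C# types above; closestPointToRays is a hypothetical helper, not part of the original answer.
import org.joml.Matrix3f;
import org.joml.Vector3f;
public final class RayIntersection {
    /** Least-squares point closest to all rays (origins[i], unit directions dirs[i]). */
    public static Vector3f closestPointToRays(Vector3f[] origins, Vector3f[] dirs) {
        // Accumulate A = sum(I - d*d^T) and b = sum((I - d*d^T) * o);
        // (I - d*d^T) projects onto the plane perpendicular to the ray direction d.
        float a00 = 0, a01 = 0, a02 = 0, a11 = 0, a12 = 0, a22 = 0;
        Vector3f b = new Vector3f();
        for (int i = 0; i < origins.length; i++) {
            Vector3f d = new Vector3f(dirs[i]).normalize();
            Vector3f o = origins[i];
            float m00 = 1 - d.x * d.x, m01 = -d.x * d.y, m02 = -d.x * d.z;
            float m11 = 1 - d.y * d.y, m12 = -d.y * d.z, m22 = 1 - d.z * d.z;
            a00 += m00; a01 += m01; a02 += m02; a11 += m11; a12 += m12; a22 += m22;
            b.add(m00 * o.x + m01 * o.y + m02 * o.z,
                  m01 * o.x + m11 * o.y + m12 * o.z,
                  m02 * o.x + m12 * o.y + m22 * o.z);
        }
        // Solve A * p = b; A is symmetric and invertible for two or more non-parallel rays.
        Matrix3f A = new Matrix3f(a00, a01, a02, a01, a11, a12, a02, a12, a22);
        return A.invert().transform(b);
    }
}
The returned point minimizes the summed squared distance to the rays; with three cameras the redundancy also averages out some of the measurement noise.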

In A-Frame/THREE.js, is there a method like the Camera.ScreenToWorldPoint() from Unity?

I know a method from Unity which is very useful to convert a screen position to a world position: https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I haven't found anything.
Is there an easy way to convert a screen position to a world position on a plane positioned a given distance from the camera?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
  // convert the screen-space coordinates to normalized device coordinates
  // (x and y ranging from -1 to 1):
  const ndc = new THREE.Vector2();
  ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
  ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;
  // `Raycaster` can be used to convert this into a ray:
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);
  // finally, apply the distance:
  return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z you could also let three.js compute an intersection with any Object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and various examples linked here: https://threejs.org/docs/#api/core/Raycaster

ARCore: decompose view matrix

I'm trying to get the eye coordinates (the camera's position, direction and up vector) from the pose's view matrix, but what I get is not what I expected.
First, my goal is a Y-up coordinate system.
I'm not sure ARCore uses the same system; I did not find precise information about the coordinate system it uses.
Next, I'm decomposing the view matrix, and while the results look mathematically plausible (direction and up seem to point the right way, the position seems to have the right scale), the result is very chaotic: my camera moves strangely around my scene.
// Get camera matrix and draw.
float[] viewmtx = new float[16];
frame.getViewMatrix(viewmtx, 0);
Vector3 pos = new Vector3(viewmtx[12], viewmtx[13], viewmtx[14]);
Vector3 camDir = new Vector3(viewmtx[8], viewmtx[9], viewmtx[10]).nor().scl(-1);
Vector3 camUp = new Vector3(viewmtx[4], viewmtx[5], viewmtx[6]).nor();
Does anything seem strange to you?
Here is the answer:
The device/configuration I'm using is in landscape mode, so the coordinate frame is not the one I assumed.
Here is my fix:
Vector3 pos = new Vector3(viewmtx[12], viewmtx[13], viewmtx[14]).scl(1000);
Vector3 camDir = new Vector3(viewmtx[8], viewmtx[9], viewmtx[10]).nor();
Vector3 camUp = new Vector3(viewmtx[4], viewmtx[5], viewmtx[6]).nor();
if (isLandscape) {
    pos = new Vector3(viewmtx[12], viewmtx[13], viewmtx[14]).scl(1000);
    camDir = new Vector3(-viewmtx[8], -viewmtx[9], -viewmtx[10]).nor();
    camUp = new Vector3(viewmtx[0], viewmtx[1], viewmtx[2]).nor();
}
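As an aside (this is not the author's fix, and it does not address the landscape/display-rotation issue by itself): a more explicit way to recover the camera pose is to invert the view matrix and read the camera's world-space axes from the inverse. A minimal sketch, assuming a column-major OpenGL-style view matrix and the same Vector3 class used in the snippets above:
import android.opengl.Matrix;
static void extractCamera(float[] viewmtx, Vector3 outPos, Vector3 outDir, Vector3 outUp) {
    // The view matrix maps world -> eye space; its inverse is the camera's pose in world space.
    float[] camMtx = new float[16];
    Matrix.invertM(camMtx, 0, viewmtx, 0);
    outPos.set(camMtx[12], camMtx[13], camMtx[14]);         // column 3: camera position
    outUp.set(camMtx[4], camMtx[5], camMtx[6]).nor();       // column 1: camera up axis
    outDir.set(-camMtx[8], -camMtx[9], -camMtx[10]).nor();  // the camera looks along its -Z axis
}
Reading the inverted matrix avoids having to guess which rows or columns of the view matrix correspond to which camera axis.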

three.js - Set the rotation of an object in relation to its own axes

I'm trying to make a static 3D prism out of point clouds with specific numbers of particles in each. I've got the corner coordinates of each side of the prism based on the angle of turn, and tried spawning the particles in the area bounded by these coordinates. Instead, the resulting point clouds have kept only the bottom-left coordinate.
Screenshot: http://i.stack.imgur.com/uQ7Q8.png
I've tried to set the rotation of each cloud object such that their edges meet, but they will rotate only around the world centre. I gather this is something to do with rotation matrices and Euler angles, but, having been trying to work them out for 3 solid days, I've despaired. (I'm a sociologist, not a dev, and haven't touched graphics before this project.)
Please help? How do I set the rotation on each face of the prism? Or maybe there is a more sensible way to get the particles to spawn in the correct area in the first place?
The code:
// draw the particles
var n = 0;
do {
  var geom = new THREE.Geometry();
  var material = new THREE.PointCloudMaterial({size: 1, vertexColors: true, color: 0xffffff});
  for (i = 0; i < group[n]; i++) {
    if (geom.vertices.length < group[n]) {
      var particle = new THREE.Vector3(
        Math.random() * screens[n].bottomrightback.x + screens[n].bottomleftfront.x,
        Math.random() * screens[n].toprightback.y + screens[n].bottomleftfront.y,
        Math.random() * screens[n].bottomrightfront.z + screens[n].bottomleftfront.z);
      geom.vertices.push(particle);
      geom.colors.push(new THREE.Color(Math.random() * 0x00ffff));
    }
  }
  var system = new THREE.PointCloud(geom, material);
  scene.add(system);
  // something something matrix Euler something?
  n++;
}
while (n < numGroups);
I've tried to set the rotation of each cloud object such that their
edges meet, but they will rotate only around the world centre.
It is true they only rotate around 0,0,0. The simple solution then is to move the object to the center, rotate it, and then move it back to its original position.
For example (code not tested, so it might take a bit of tweaking):
var m = new THREE.Matrix4();
var movetocenter = new THREE.Matrix4();
movetocenter.makeTranslation(-x, -y, -z);
var rotate = new THREE.Matrix4();
rotate.makeRotationFromEuler(new THREE.Euler(rx, ry, rz)); // build your rotation here (angles in radians)
var moveback = new THREE.Matrix4();
moveback.makeTranslation(x, y, z);
// multiply() post-multiplies, so the factor that should hit the vertices first goes last:
// the combined matrix is moveback * rotate * movetocenter
m.multiply(moveback);
m.multiply(rotate);
m.multiply(movetocenter);
//Now you can use geometry.applyMatrix(m)

3D Rotation Matrix deforms over time in Processing/Java

I'm working on a project where I want to generate a 3D mesh to represent a certain amount of data.
To create this mesh I want to use transformation matrices, so I created a class based on the mathematical algorithms found on a couple of websites.
Everything seems to work (scale, translation), but as soon as I rotate a mesh on its x-axis it starts to deform after 2 to 3 complete rotations. It feels like my scale values are increasing, which distorts the mesh. I've been struggling with this problem for a couple of days but I can't figure out what's going wrong.
To make things clearer, you can download my complete setup here.
I defined the coordinates of a box and put them through the transformation matrix before writing them to the screen.
This is the routine for rotating my object:
void appendRotation(float inXAngle, float inYAngle, float inZAngle, PVector inPivot) {
  boolean setPivot = false;
  if (inPivot.x != 0 || inPivot.y != 0 || inPivot.z != 0) {
    setPivot = true;
  }
  // If setPivot is true, translate the position
  if (setPivot) {
    // Translations for the different axes need to be set separately
    if (inPivot.x != 0) { this.appendTranslation(inPivot.x, 0, 0); }
    if (inPivot.y != 0) { this.appendTranslation(0, inPivot.y, 0); }
    if (inPivot.z != 0) { this.appendTranslation(0, 0, inPivot.z); }
  }
  // Create a rotation matrix
  Matrix3D rotationMatrix = new Matrix3D();
  // x sine and cosine
  float xSinCal = sin(radians(inXAngle));
  float xCosCal = cos(radians(inXAngle));
  // y sine and cosine
  float ySinCal = sin(radians(inYAngle));
  float yCosCal = cos(radians(inYAngle));
  // z sine and cosine
  float zSinCal = sin(radians(inZAngle));
  float zCosCal = cos(radians(inZAngle));
  // Rotate around x
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[1][1] = xCosCal;
  rotationMatrix.matrix[1][2] = xSinCal;
  rotationMatrix.matrix[2][1] = -xSinCal;
  rotationMatrix.matrix[2][2] = xCosCal;
  // Add rotation to the basis matrix
  this.multiplyWith(rotationMatrix);
  // Rotate around y
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[0][0] = yCosCal;
  rotationMatrix.matrix[0][2] = -ySinCal;
  rotationMatrix.matrix[2][0] = ySinCal;
  rotationMatrix.matrix[2][2] = yCosCal;
  // Add rotation to the basis matrix
  this.multiplyWith(rotationMatrix);
  // Rotate around z
  rotationMatrix.setIdentity();
  rotationMatrix.matrix[0][0] = zCosCal;
  rotationMatrix.matrix[0][1] = zSinCal;
  rotationMatrix.matrix[1][0] = -zSinCal;
  rotationMatrix.matrix[1][1] = zCosCal;
  // Add rotation to the basis matrix
  this.multiplyWith(rotationMatrix);
  // Undo the pivot translation
  if (setPivot) {
    // Translations for the different axes need to be set separately
    if (inPivot.x != 0) { this.appendTranslation(-inPivot.x, 0, 0); }
    if (inPivot.y != 0) { this.appendTranslation(0, -inPivot.y, 0); }
    if (inPivot.z != 0) { this.appendTranslation(0, 0, -inPivot.z); }
  }
}
Does anyone have a clue?
You never want to cumulatively multiply transformation matrices. This introduces error into your matrices and causes problems such as scaling or skewing of the orthogonal components.
The correct method is to keep track of the cumulative pitch, yaw and roll angles, then reconstruct the transformation matrix from those angles every update.
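For example, in Processing that per-frame reconstruction could look like the sketch below (pitch/yaw/roll, the ...Speed increments and pivot are assumed names, not part of the original code):
// Accumulate angles, not matrices; rebuild the matrix from scratch every frame.
float pitch = 0, yaw = 0, roll = 0;
float pitchSpeed = 0.01f, yawSpeed = 0.02f, rollSpeed = 0.0f; // radians per frame
PVector pivot = new PVector(0, 0, 0);
PMatrix3D model = new PMatrix3D();

void updateModelMatrix() {
  pitch += pitchSpeed;
  yaw   += yawSpeed;
  roll  += rollSpeed;
  model.reset();  // start from identity, so no error can accumulate
  // rotate about the pivot: conceptually, vertices are moved to the origin,
  // rotated, and moved back; PMatrix3D post-multiplies, hence this call order
  model.translate(pivot.x, pivot.y, pivot.z);
  model.rotateX(pitch);
  model.rotateY(yaw);
  model.rotateZ(roll);
  model.translate(-pivot.x, -pivot.y, -pivot.z);
}
Because the matrix is rebuilt from the angles every frame, floating-point error from repeated multiplications never accumulates in it.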
If there is any chance, avoid multiplying rotation matrices cumulatively: keep track of the cumulative rotation and compute a new rotation matrix at each step.
If it is impossible to avoid multiplying the rotation matrices, then renormalize them (the procedure starts on page 16 of the linked document). It has worked fine for me for more than ten thousand multiplications.
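For reference, one common renormalization is a Gram-Schmidt pass over the rotation axes. A minimal sketch in Processing, assuming the rotation lives in the upper-left 3x3 block of a row-major float[][] (a guess about the Matrix3D layout above):
void renormalize(float[][] m) {
  PVector x = new PVector(m[0][0], m[0][1], m[0][2]);
  PVector y = new PVector(m[1][0], m[1][1], m[1][2]);
  x.normalize();
  // remove the component of y that lies along x, then normalize y
  y.sub(PVector.mult(x, x.dot(y)));
  y.normalize();
  // rebuild z as x cross y so the basis stays orthogonal and right-handed
  PVector z = x.cross(y);
  m[0][0] = x.x; m[0][1] = x.y; m[0][2] = x.z;
  m[1][0] = y.x; m[1][1] = y.y; m[1][2] = y.z;
  m[2][0] = z.x; m[2][1] = z.y; m[2][2] = z.z;
}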
However, I suspect that it will not help you: numerical errors usually require more than 2 steps to manifest themselves. It seems to me the cause of your problem lies somewhere else.
Yaw, pitch and roll are not good for arbitrary rotations: Euler angles suffer from singularities and instability. Look at 38:25 of this presentation by David Sachs:
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
Good luck!
As #don mentions, try to avoid cumulative transformations, as you can run into all sorts of problems. Rotating one axis at a time might lead you into gimbal lock issues. Try to do all rotations in one go.
Also, bear in mind that Processing comes with its own matrix class, PMatrix3D, which has a rotate() method that takes an angle (in radians) and x, y, z values for the rotation axis.
Here is an example function that would rotate a bunch of PVectors:
PVector[] rotateVerts(PVector[] verts, float angle, PVector axis) {
  int vl = verts.length;
  PVector[] clone = new PVector[vl];
  for (int i = 0; i < vl; i++) clone[i] = verts[i].get();
  // rotate using a matrix
  PMatrix3D rMat = new PMatrix3D();
  rMat.rotate(angle, axis.x, axis.y, axis.z);
  PVector[] dst = new PVector[vl];
  for (int i = 0; i < vl; i++) {
    dst[i] = new PVector();
    rMat.mult(clone[i], dst[i]);
  }
  return dst;
}
and here is an example using it.
HTH
A shot in the dark: I don't know the rules or the name of the programming language you are using, but this procedure looks suspicious:
void setIdentity() {
  this.matrix = identityMatrix;
}
Are you sure you are taking a copy of identityMatrix? If it is just a reference you are copying, then identityMatrix will be modified by later operations, and soon nothing makes sense.
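If that is indeed the problem, here is a sketch of a setIdentity() that writes the values in place instead of aliasing the shared array (this assumes this.matrix is a float[4][4], which is a guess about the Matrix3D internals):
void setIdentity() {
  // fill the existing array instead of pointing this.matrix at the shared identityMatrix
  for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 4; j++) {
      this.matrix[i][j] = (i == j) ? 1.0f : 0.0f;
    }
  }
}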
Though the suggested matrix renormalization probably works fine in practice, it is a bit ad hoc from a mathematical point of view. A better way is to represent the cumulative rotation using a quaternion, which is only converted to a rotation matrix upon application. The quaternion will also drift slowly away from a pure rotation (though more slowly), but the important thing is that it has a well-defined renormalization.
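To make this concrete, here is a minimal, library-agnostic quaternion sketch (illustrative only, not tied to the Matrix3D class above): accumulate per-frame rotations into the quaternion, renormalize it cheaply, and convert to a matrix only when drawing.
class Quat {
  float w = 1, x = 0, y = 0, z = 0;

  // q = q * (rotation of 'angle' radians around the unit axis (ax, ay, az))
  void rotate(float angle, float ax, float ay, float az) {
    float s = sin(angle * 0.5f), c = cos(angle * 0.5f);
    mult(c, ax * s, ay * s, az * s);
    normalize(); // cheap, well-defined drift correction
  }

  void mult(float rw, float rx, float ry, float rz) {
    float nw = w * rw - x * rx - y * ry - z * rz;
    float nx = w * rx + x * rw + y * rz - z * ry;
    float ny = w * ry - x * rz + y * rw + z * rx;
    float nz = w * rz + x * ry - y * rx + z * rw;
    w = nw; x = nx; y = ny; z = nz;
  }

  void normalize() {
    float n = sqrt(w * w + x * x + y * y + z * z);
    w /= n; x /= n; y /= n; z /= n;
  }

  // Convert to a row-major 3x3 rotation matrix only when applying the rotation.
  float[][] toMatrix() {
    return new float[][] {
      { 1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y) },
      { 2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x) },
      { 2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y) }
    };
  }
}
Renormalizing a quaternion is just a division by its length, whereas renormalizing a matrix requires re-orthogonalizing three axes, which is why the quaternion route is numerically tidier.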
Good starting information for implementing this can be:
http://www.cprogramming.com/tutorial/3d/quaternions.html
http://www.scheib.net/school/library/quaternions.pdf
A useful academic reference can be:
K. Shoemake, "Animating rotation with quaternion curves," ACM SIGGRAPH Computer Graphics, vol. 19, no. 3, pp. 245–254, 1985. DOI: 10.1145/325165.325242
