Rotation of an object in the tangent space of a globe - matrix

Given the following two inputs:
a point on a sphere (like an observer on Earth);
and the world matrix of an object in space (the position and attitude of a satellite),
how do I get the azimuth and elevation of the object in the tangent space of that point (the elevation and azimuth at which the observer should look)? In particular, when the object is exactly at the zenith, the yaw rotation (rotation around the vertical axis) should still account for the azimuth (so that, even though the observer is looking straight up, his shoulders face the same azimuth as the object).
The math I've tried so far:
Put the satellite in tangent space (multiply its world matrix by the inverse of the matrix of the tangent space on the globe), or do the same with quaternions. An Euler rotation is then deduced from the resulting matrix (or quaternion) with "ZXY" order, and the Z and X angles are interpreted as azimuth and elevation. But this gives incorrect numbers, as part of the rotation often ends up interpreted as roll (Y-axis rotation), which I want to be zero.
An intuitive approach is to compute the angle between the observer-to-object vector and the vertical axis to deduce the elevation, while the azimuth is given by the angle between the tangent north and the object's position projected onto the "tangent ground" (plus some more math to refine this deduction). But this approach breaks down when the object is at the zenith, as the sketch below illustrates.
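For illustration only, a minimal sketch of that second approach, assuming the direction from the observer to the object has already been expressed as a unit vector in an east/north/up tangent frame (x = east, y = north, z = up); the zenith degeneracy shows up in the atan2 call:

// Sketch, not the actual program: angles from a unit direction expressed
// in the observer's east/north/up tangent frame.
function anglesFromTangentDirection(dirTangent: THREE.Vector3): { azimuth: number, elevation: number } {
  // angle above the tangent ground
  const elevation = Math.asin(Math.max(-1, Math.min(1, dirTangent.z)));
  // angle east of north; at the zenith dirTangent.x and dirTangent.y are ~0,
  // so atan2(0, 0) is meaningless, which is exactly the failing case
  const azimuth = Math.atan2(dirTangent.x, dirTangent.y);
  return { azimuth, elevation };
}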
Resources exist online, but not with these specific inputs nor with the requirement to support the zenith case.
Incidentally, the program is written in TypeScript with Three.js, so the code for the first approach described above goes as follows:
function getRotationAtPoint(
  object: THREE.Object3D,
  point: THREE.Vector3
): { azimuth: number, elevation: number } {
  // 1. Get the matrix of the tangent space of the observer.
  const tangentSpaceMatrix = new THREE.Matrix4();
  const baseTangentSpaceAxes = getBaseTangentAxesOnSphere(point);
  tangentSpaceMatrix.makeBasis(...baseTangentSpaceAxes);
  // 2. Transform the object's matrix into the tangent space of the observer.
  const inverseMatrix = new THREE.Matrix4().getInverse(tangentSpaceMatrix);
  const objectMatrix = object.matrixWorld.clone().multiply(inverseMatrix);
  // 3. Get the angles.
  const euler = new THREE.Euler().setFromRotationMatrix(objectMatrix, 'ZXY');
  return {
    azimuth: euler.z,
    elevation: euler.x
  };
}
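The getBaseTangentAxesOnSphere helper is not shown; for reference, a possible shape for it, as a sketch only, assuming a sphere centred at the world origin with +Y as its polar axis (it is degenerate at the poles, where "east" is undefined):

// Hypothetical implementation: east/north/up axes at a point on the sphere.
function getBaseTangentAxesOnSphere(
  point: THREE.Vector3
): [THREE.Vector3, THREE.Vector3, THREE.Vector3] {
  const up = point.clone().normalize();                           // local vertical
  const east = new THREE.Vector3(0, 1, 0).cross(up).normalize();  // polar axis x up
  const north = up.clone().cross(east).normalize();
  return [east, north, up];
}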
Also, Three.js keeps an up axis on THREE.Object3D instances; however, the program I'm dealing with computes everything directly in the objects' matrices, so the up axis can't be trusted.

Related

Problem while rotating a 3D object with a rotating parent to face a given direction in Three.js

I am trying to plot a scene where there is an Earth that rotates independently of the camera. On this planet I plot random bezier curves, just like in this example: https://pubnub.github.io/webgl-visualization/
Therefore, I add my bezier line as:
var origin = latLonToVector3(lat_source, lon_source, earth_radius);
var destination = latLonToVector3(lat_destination, lon_destination, earth_radius);
var bezierline = bezierCurveBetween(origin, destination);
earth.add(bezierline);
so that the plotted line rotates along with the Earth. Then, I managed to load a 3D model of a plane and make it follow the bezier curve as it is being drawn. So far so good. Finally, I would like to rotate the plane so that its belly always follows a tangent line to the bezier curve. To that end, I computed the tangent vectors for every pair of consecutive points of the line as:
var tangent_vectors = [];
for (var i = 0; i < pnts.length - 1; i++) {
  var aux = new THREE.Vector3();
  aux.subVectors(pnts[i+1], pnts[i]);
  tangent_vectors[i] = aux.normalize();
}
return tangent_vectors;
Just to check that these vectors are ok, I used a THREE.ArrowHelper to see if they are tangent to every segment of the curve and indeed they are. Since I add them to the scene with earth.add( arrowHelper ); they also rotate with the planet and are consistent. I repeat this process over and over as the planet rotates by plotting and erasing the same bezier curve (same origin and destination).
However, the 3D Model behaves fine for the first bezier curve but as the planet rotates and a new bezier curve is plotted at the same place (with the same origin and destination coordinates) I can see that the plane is still following (plane.lookAt(tangent_vectors[point_index]);) the tangent lines of the original bezier curve (even though I am recomputing the tangent lines).
I think that the problem is that the latitudes and longitudes (lat_source, lon_source, etc.) are fixed in the real-life reference frame. This means the origin and destination variables always hold the same values even though the planet is rotating. The new bezier curves therefore have essentially the same points, but since I add those points using earth.add(bezier_line);, Three.js internally takes care of rotating them into the new orientation, and this is not done with my tangent vectors.
I think this is the problem, but I do not know how to solve it. I guess I need to also rotate the tangent vectors of the curve according to the new rotation, but I can't find how to do it.
Thanks for your help
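If the diagnosis above is right, one way to test it is to re-express each tangent vector in world space before calling lookAt. The following is only a sketch, reusing the earth, plane and point_index variables from the question; lookAt expects a world-space position, so it aims at a point just ahead of the plane:

// Bring the tangent direction (computed from points in the earth's local space)
// into world space, so it stays valid as the earth rotates.
var worldTangent = tangent_vectors[point_index].clone()
    .transformDirection(earth.matrixWorld); // applies the rotation only, result is normalized
var target = plane.getWorldPosition(new THREE.Vector3()).add(worldTangent);
plane.lookAt(target);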

Having a 3D point projected onto a 3D plane, find the 2D coord based on the plane two axis

I have a THREE.Plane plane which is intersected by a number of THREE.Line3 lines[].
Using only this information, how can I acquire a 2D coordinate set of points?
Edit for better understanding the problem:
The 2D coordinate is related to the plane, so imagine the 3D plane becomes a Cartesian plane drawn on a blackboard. It is pretty much a 3D drawing of a 2D plane. What I want to find is the X, Y values of points previously projected onto this Cartesian plane. But they are 3D, just like the 3D plane.
You don't have enough information. In this answer I'll explain why, and show how to achieve what you want once you can provide the missing piece.
First, let's create a plane. Like you, I'm using Plane.setFromNormalAndCoplanarPoint. I'm considering the co-planar point as the origin ((0, 0)) of the plane's Cartesian space.
let normal = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let origin = new Vector3(Math.random(), Math.random(), Math.random()).normalize().setLength(10)
let plane = new Plane().setFromNormalAndCoplanarPoint(normal, origin)
Now, we create a random 3D point, and project it onto the plane.
let point1 = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let projectedPoint1 = new Vector3()
plane.projectPoint(point1, projectedPoint1)
The projectedPoint1 variable is now co-planar with your plane. But this plane is infinite, with no discrete X/Y axes. So currently we can only get the distance from the origin to the projected point.
let distance = origin.distanceTo(projectedPoint1)
In order to turn this into a Cartesian coordinate, you need to define at least one axis. To make this truly random, let's compute a random +Y axis:
let tempY = new Vector3(Math.random(), Math.random(), Math.random())
let pY = new Vector3()
plane.projectPoint(tempY, pY)
pY.normalize()
Now that we have +Y, let's get +X:
let pX = new Vector3().crossVectors(pY, normal)
pX.normalize()
Now, we can project the plane-projected point onto the axis vectors to get the Cartesian coordinates.
let x = projectedPoint1.clone().projectOnVector(pX).distanceTo(origin)
if(!projectedPoint1.clone().projectOnVector(pX).normalize().equals(pX)){
  x = -x
}
let y = projectedPoint1.clone().projectOnVector(pY).distanceTo(origin)
if(!projectedPoint1.clone().projectOnVector(pY).normalize().equals(pY)){
  y = -y
}
Note that in order to get negative values, I check a normalized copy of the axis-projected vector against the normalized axis vector. If they match, the value is positive. If they don't match, the value is negative.
Also, all the clone-ing I did above was to be explicit with the steps. This is not an efficient way to perform this operation, but I'll leave optimization up to you.
EDIT: My logic for determining the sign of the value was flawed. I've corrected the logic to normalize the projected point and check against the normalized axis vector.
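For comparison, once you have orthonormal in-plane axes, the coordinates can also be read off directly as signed projections measured from the plane's origin, which avoids the separate sign check. A sketch using the variables defined above, assuming pX and pY are unit vectors lying in the plane:

// Signed 2D coordinates of projectedPoint1 in the (pX, pY) basis, with the
// co-planar point `origin` taken as (0, 0).
let rel = new Vector3().subVectors(projectedPoint1, origin)
let xSigned = rel.dot(pX)
let ySigned = rel.dot(pY)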

PIXI.js - Canvas Coordinate to Container Coordinate

I have initialized a PIXI.js canvas:
g_App = new PIXI.Application(800, 600, { backgroundColor: 0x1099bb });
Set up a container:
container = new PIXI.Container();
g_App.stage.addChild(container);
Put a background texture (2000x2000) into the container:
var texture = PIXI.Texture.fromImage('picBottom.png');
var back = new PIXI.Sprite(texture);
container.addChild(back);
Set the global:
var g_Container = container;
I apply various pivots and rotations to the container and the canvas stage element:
// Set the focus point of the container
g_App.stage.x = Math.floor(400);
g_App.stage.y = Math.floor(500); // Note this one is not central
g_Container.pivot.set(1000, 1000);
g_Container.rotation = 1.5; // radians
Now I need to be able to convert a canvas pixel to the pixel on the background texture.
g_Container has a transform element, which in turn has several elements: localTransform, pivot, position, scale and skew. Similarly, g_App.stage has the same transform element.
In maths this is simple: you just have a point vector and do matrix operations on it. Then to go back the other way you find the inverses of those matrices and multiply in reverse.
So what do I do here in pixi.js?
How do I convert a pixel on the canvas and see what pixel it is on the background container?
Note: The following is written using the USA convention for matrices: row vectors on the left, multiplied by the matrix on the right. (Us pesky Brits in the UK do the opposite: we have column vectors on the right and multiply them by the matrix on the left. This means UK and USA matrices doing the same job will look slightly different.)
Now I have confused you all, on with the answer.
g_Container.transform.localTransform - this matrix takes the world coords to the scaled/translated/rotated coords
g_App.stage.transform.localTransform - this matrix takes the rotated world coords and outputs screen (or, more accurately, HTML canvas) coords
So for example the Container matrix is:
MatContainer = [g_Container.transform.localTransform.a,  g_Container.transform.localTransform.b,  0]
               [g_Container.transform.localTransform.c,  g_Container.transform.localTransform.d,  0]
               [g_Container.transform.localTransform.tx, g_Container.transform.localTransform.ty, 1]
and the rotated container matrix to screen is:
MatToScreen = [g_App.stage.transform.localTransform.a,  g_App.stage.transform.localTransform.b,  0]
              [g_App.stage.transform.localTransform.c,  g_App.stage.transform.localTransform.d,  0]
              [g_App.stage.transform.localTransform.tx, g_App.stage.transform.localTransform.ty, 1]
So to get from world coordinates to screen coordinates (noting our vector will be a row on the left, so the matrix that acts first on the world coordinates must also be the leftmost one), we would need to multiply the vector by:
MatAll = MatContainer * MatToScreen
So if you have a world coordinate vector vectWorld = [worldX, worldY, 1.0] (I'll explain the 1.0 at the end), then to get to the screen coords you would do the following:
vectScreen = vectWorld * MatAll
To go back from screen coords to world coords, we first need to calculate the inverse matrix of MatAll; call it invMatAll. (There are loads of places that tell you how to do this, so I will not do it here.)
So if we have screen (canvas) coordinates screenX and screenY, we need to create a vector vectScreen = [screenX, screenY, 1.0] (again I will explain the 1.0 later), then to get to world coordinates worldX and worldY we do:
vectWorld = vectScreen * invMatAll
And that is it.
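As a practical shortcut, PIXI can do the same conversion for you through DisplayObject.toLocal, which applies the inverse of the object's accumulated world transform. A sketch using the objects from the question; it assumes the scene has been rendered at least once so the transforms are up to date, and canvasX/canvasY stand for the pixel you are converting:

// Convert a canvas (screen) pixel into g_Container's local space,
// i.e. a pixel on the 2000x2000 background texture.
var screenPoint = new PIXI.Point(canvasX, canvasY);
var texturePoint = g_Container.toLocal(screenPoint);
// texturePoint.x / texturePoint.y are now coordinates on the background sprite.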
So what about the 1.0?
In a 2D system you can do rotations and scaling with 2x2 matrices. Unfortunately you cannot do a 2D translation with a 2x2 matrix. Consequently you need 3x3 matrices to fully describe all 2D scaling, rotations and translations. This means you need to make your vector 3D as well, and you need to put a 1.0 in the third position in order to do translations properly. That third component will still be 1.0 after any of these matrix operations.
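As a tiny worked example of why the 1.0 is needed, a pure translation by (tx, ty) in this row-vector convention is:

[x, y, 1] * [1   0   0]   =   [x + tx,  y + ty,  1]
            [0   1   0]
            [tx  ty  1]

Without the third component there would be nothing for tx and ty to multiply against, which is why a 2x2 matrix on its own can only rotate, scale and skew.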
Note: If we were working in a 3D system we would need 4x4 matrices and put a dummy 1.0 in our 4D vectors for exactly the same reasons.

openGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color of the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
This way I'll be able to change the shape of the image from a polygon into whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math to get the pixel's new position).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
        (planarDistanceToPerimeter/planarLengthOfNormal)*normal;

// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
        (linearDistanceToEdge/linearLengthOfNormal)*normal;

// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));

// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
        coordinateFromCylindricalPosition(cylindricalIntersection);
With a common implementation of coordinateFromCylindricalPosition possibly being simply return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.

Collision detection problem (intersection with plane)

I'm building a scene (a house) using OpenGL. I want to do some collision detection, mainly with the walls of the house.
I have tried the following code:
// a plane is represented with a normal and a position in space
Vector planeNor(0,0,1);
Vector position(0,0,-10);
Plane p(planeNor, position);

Vector vel(0,0,-1);
double lamda;    // this is the intersection point
Vector pNormal;  // the normal of the intersection

// this method is from Nehe's Lesson 30
coll = p.TestIntersionPlane(vel, Z, lamda, pNormal);

glPushMatrix();
glBegin(GL_QUADS);
if (coll)
    glColor3f(1,0,0);
else
    glColor3f(1,1,1);
glVertex3d(0,0,-10);
glVertex3d(3,0,-10);
glVertex3d(3,3,-10);
glVertex3d(0,3,-10);
glEnd();
glPopMatrix();
Nehe's method:
#define EPSILON 1.0e-8
#define ZERO EPSILON

bool Plane::TestIntersionPlane(const Vector3& position, const Vector3& direction, double& lamda, Vector3& pNormal)
{
    double DotProduct = direction.scalarProduct(normal); // Dot Product Between Plane Normal And Ray Direction
    double l2;

    // Determine If Ray Parallel To Plane
    if ((DotProduct < ZERO) && (DotProduct > -ZERO))
        return false;

    l2 = (normal.scalarProduct(position)) / DotProduct; // Find Distance To Collision Point
    if (l2 < -ZERO) // Test If Collision Behind Start
        return false;

    pNormal = normal;
    lamda = l2;
    return true;
}
Z is initially (0,0,0) and every time I move the camera towards the plane, I reduce its z component by 0.1 (i.e. Z.z-=0.1 ).
I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
You're passing "vel" (which I suppose is velocity of the moving thing) as "Position", and Z (which I suppose is position) as "Direction".
Your calculation of "Distance to Collision Point" makes no sense. It doesn't take position of the plane into account at all (or maybe it does, if the variables are misnamed).
You define pNormal, but I can't see any use for it. Is it supposed to mean something else?
It's almost impossible to get something like this working without understanding the math. Try a simpler version of the test, maybe assuming a z=0 plane and +z-axis movement, get that working and then take another look at the general case.
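For instance, in the simple case suggested above (a plane at z = 0 with normal (0, 0, 1) and a ray starting at z = startZ moving along the z axis with direction dz, where startZ and dz are just illustrative names), the test reduces to lamda = (0 - startZ) / dz, with an intersection only when dz is not (nearly) zero and lamda >= 0. Checking a few values of that by hand makes it much easier to spot which argument or sign is wrong in the general routine.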
Thank you for your help.
I looked into the code again and I changed the collision detection method into the following:
//startPoint: the ray's starting point.
//EndPoint: the ray's ending point.
//lamda: the intersection point.
bool Plane::TestIntersionPlane(const Vector3& startPoint, const Vector3& Endpoint, double& lamda)
{
    double cosAlpha = Endpoint.scalarProduct(normal); // calculates the angle between the plane's normal and the ray vector

    // Determine If Ray Parallel To Plane
    if ((cosAlpha < ZERO) && (cosAlpha > -ZERO))
        return false;

    // delta D is the plane's distance from the origin minus the ray's distance from the origin.
    double deltaD = distance - startPoint.scalarProduct(normal); // distance is a double representing the plane's distance from the origin.

    lamda = deltaD / cosAlpha; // distance between the plane and the vector

    // if the distance between the ray and the plane is greater than zero then they haven't intersected.
    if (lamda > ZERO)
        return false;

    return true;
}
This seems to work with all planes except when the ray is too far from the plane. For example, if the plane is at z = -10 and the ray's starting point is (0,0,3) and its ending point is (0,0,2), then this is detected as a collision; but when I move the ray to start (0,0,2) and end (0,0,1) it's not detected as a collision.
The math seems correct to me, so I'm not sure how to handle this.
