Problem while rotating a 3D object with a rotating parent to face a given direction in Three.js

I am trying to plot a scene with an Earth that rotates independently of the camera. On this planet I plot random bezier curves, just like in this example: https://pubnub.github.io/webgl-visualization/
Therefore, I add my bezier line as:
var origin = latLonToVector3(lat_source, lon_source, earth_radius);
var destination = latLonToVector3(lat_destination, lon_destination, earth_radius);
var bezierline = bezierCurveBetween(origin, destination);
earth.add(bezierline);
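(The latLonToVector3 and bezierCurveBetween helpers come from the linked example. For completeness, a typical latLonToVector3 looks something like the following sketch; this is an assumption on my part, as axis-orientation conventions vary:)
function latLonToVector3(lat, lon, radius) {
    // standard spherical conversion, latitude measured from the equator
    var phi = (90 - lat) * Math.PI / 180;    // polar angle from the +Y pole
    var theta = (lon + 180) * Math.PI / 180; // azimuthal angle
    return new THREE.Vector3(
        -radius * Math.sin(phi) * Math.cos(theta),
        radius * Math.cos(phi),
        radius * Math.sin(phi) * Math.sin(theta)
    );
}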
so that the plotted line rotates along with the Earth. Then, I managed to load a 3D model of a plane and make it follow the bezier curve as it is being drawn. So far so good. Finally, I would like to rotate the plane so that its belly always follows a line tangent to the bezier curve. To that end, I computed the tangent vectors between every two consecutive points of the line as:
var tangent_vectors = [];
for (var i = 0; i < pnts.length - 1; i++) {
    var aux = new THREE.Vector3();
    aux.subVectors(pnts[i + 1], pnts[i]);
    tangent_vectors[i] = aux.normalize();
}
return tangent_vectors;
Just to check that these vectors are OK, I used a THREE.ArrowHelper to see whether they are tangent to every segment of the curve, and indeed they are. Since I add them to the scene with earth.add( arrowHelper ); they also rotate with the planet and stay consistent. I repeat this process over and over as the planet rotates, plotting and erasing the same bezier curve (same origin and destination).
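(The check itself can be done with something like this sketch; the variable names are assumed, not from the original post:)
var dir = tangent_vectors[i]; // already normalized
var arrowHelper = new THREE.ArrowHelper(dir, pnts[i], 0.5, 0xffff00);
earth.add(arrowHelper); // parented to the Earth, so it rotates with it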
However, the 3D model behaves fine for the first bezier curve, but as the planet rotates and a new bezier curve is plotted in the same place (with the same origin and destination coordinates), I can see that the plane is still following the tangent lines of the original bezier curve via plane.lookAt(tangent_vectors[point_index]);, even though I am recomputing the tangent lines.
I think the problem is that latitudes and longitudes (lat_source, lon_source, etc.) are fixed in the real-life reference frame. As a result, the origin and destination variables always hold the same values even though the planet is rotating. The new bezier curves therefore have essentially the same points, but since I add these points using earth.add(bezier_line);, Three.js internally takes care of rotating them into the planet's new orientation, and this is not done for my tangent vectors.
I think this is the problem, but I do not know how to solve it. I guess I need to also rotate the tangent vectors of the curve according to the new rotation, but I can't find how to do it.
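(In code, rotating the tangent vectors by the Earth's current rotation might look like the following minimal sketch, reusing the names from the snippets above and assuming a reasonably recent three.js; note that lookAt() expects a world-space target point, not a direction:)
// Rotate the local-space tangent into world space with the Earth's
// current orientation, then aim the plane at a world-space point.
var worldTangent = tangent_vectors[point_index].clone()
    .transformDirection(earth.matrixWorld); // applies only the rotation part, then normalizes
var planeWorldPos = new THREE.Vector3();
plane.getWorldPosition(planeWorldPos);
plane.lookAt(planeWorldPos.clone().add(worldTangent));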
Thanks for your help

Related

Rotation of an object in the tangent space of a globe

Given the two following inputs:
a point on a sphere (like an observer on Earth);
and the world matrix of an object in space (the position and attitude of a satellite),
how to get the azimuth and elevation of the object in the tangent space of the point on the sphere (the elevation and azimuth of where the observer should look)? In particular, when the object is exactly at the zenith, the yaw rotation (rotation around the vertical axis) should account for the azimuth (so that, even though the observer is looking straight up, his shoulders would be facing the same azimuth as the object).
The math I've tried so far is:
To put the satellite in tangent space (multiplying its world matrix by the inverse of the matrix of the tangent space on the globe), or the same with quaternions. An Euler rotation is then deduced from the resulting matrix (or the resulting quaternion) with a "ZXY" priority, and the Z and X are interpreted as azimuth and elevation. But this gives incorrect numbers, as part of the rotation often seems to be interpreted as roll (Y-axis rotation), which I want to be zero.
An intuitive approach is to compute the angle between the vector from the observer to the object's position and the vertical axis, to deduce the elevation; the azimuth is then given by the angle between the tangent north and the projected position of the object on the "tangent ground" (plus some more math to hone this particular deduction). But this approach does not work when the object is at the zenith (a sketch of it appears at the end of this question).
Resources exist online but not with these specific inputs and the necessity of supporting the zenith case.
Incidentally, the program is in TypeScript for three.js, and so the code goes as follows for the first solution described above:
function getRotationAtPoint(
    object: THREE.Object3D,
    point: THREE.Vector3
): { azimuth: number, elevation: number } {
    // 1. Get the matrix of the tangent space of the observer.
    const tangentSpaceMatrix = new THREE.Matrix4();
    const baseTangentSpaceAxes = getBaseTangentAxesOnSphere(point);
    tangentSpaceMatrix.makeBasis(...baseTangentSpaceAxes);
    // 2. Transform the object's matrix into the tangent space of the observer.
    const inverseMatrix = new THREE.Matrix4().getInverse(tangentSpaceMatrix);
    const objectMatrix = object.matrixWorld.clone().multiply(inverseMatrix);
    // 3. Get the angles (with the "ZXY" priority mentioned above).
    const euler = new THREE.Euler().setFromRotationMatrix(objectMatrix, 'ZXY');
    return {
        azimuth: euler.z,
        elevation: euler.x
    };
}
Also, Three.js exposes an up axis on THREE.Object3D instances; however, the program I deal with computes everything directly in the objects' matrices, so the up axis can't be trusted.
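For reference, here is a minimal sketch of the second, intuitive approach described above (plain JavaScript; the helper and variable names are assumed rather than taken from the question). It also shows exactly where the zenith case breaks down: the projection onto the tangent plane degenerates to the zero vector when the object is straight up.
// observer: point on the sphere (sphere assumed centred at the world origin)
// objectPos: world position of the object (the satellite)
// north: unit vector of the tangent north at the observer
function intuitiveAzEl(observer, objectPos, north) {
    var up = observer.clone().normalize();               // local vertical
    var toObject = objectPos.clone().sub(observer).normalize();
    var elevation = Math.PI / 2 - up.angleTo(toObject);  // 90 degrees minus the zenith angle
    // Project onto the tangent plane to get the azimuth; at the zenith this
    // projection is the zero vector and the azimuth is undefined.
    var flat = toObject.clone().projectOnPlane(up);
    var east = new THREE.Vector3().crossVectors(north, up).normalize();
    var azimuth = Math.atan2(flat.dot(east), flat.dot(north));
    return { azimuth: azimuth, elevation: elevation };
}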

Having a 3D point projected onto a 3D plane, find the 2D coord based on the plane two axis

I have a THREE.Plane plane which is intersected by a number of THREE.Line3 lines[].
Using only this information, how can I acquire a 2D coordinate set of points?
Edit, for a better understanding of the problem:
The 2D coordinates are relative to the plane, so imagine the 3D plane becomes a Cartesian plane drawn on a blackboard. It is pretty much a 3D drawing of a 2D plane. What I want to find are the X, Y values of points previously projected onto this Cartesian plane. They are 3D points, just like the 3D plane itself.
You don't have enough information. In this answer I'll explain why, and provide more information to achieve what you want, should you be able to supply the missing pieces.
First, let's create a plane. Like you, I'm using Plane.setFromNormalAndCoplanarPoint. I'm considering the coplanar point as the origin (0, 0) of the plane's Cartesian space.
let normal = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let origin = new Vector3(Math.random(), Math.random(), Math.random()).normalize().setLength(10)
let plane = new Plane().setFromNormalAndCoplanarPoint(normal, origin)
Now, we create a random 3D point, and project it onto the plane.
let point1 = new Vector3(Math.random(), Math.random(), Math.random()).normalize()
let projectedPoint1 = new Vector3()
plane.projectPoint(point1, projectedPoint1)
The projectedPoint1 variable is now co-planar with your plane. But this plane is infinite, with no discrete X/Y axes. So currently we can only get the distance from the origin to the projected point.
let distance = origin.distanceTo(projectedPoint1)
In order to turn this into a Cartesian coordinate, you need to define at least one axis. To make this truly random, let's compute a random +Y axis:
let tempY = new Vector3(Math.random(), Math.random(), Math.random())
let pY = new Vector3()
plane.projectPoint(tempY, pY)
pY.sub(origin).normalize() // subtract the origin to get an in-plane direction, not a point
Now that we have +Y, let's get +X:
let pX = new Vector3().crossVectors(pY, normal)
pX.normalize()
Now, we can project the plane-projected point onto the axis vectors to get the Cartesian coordinates, working relative to the plane's origin so the projections measure along the axes.
let rel = projectedPoint1.clone().sub(origin) // position relative to the plane's origin
let x = rel.clone().projectOnVector(pX).length()
if(!rel.clone().projectOnVector(pX).normalize().equals(pX)){
    x = -x
}
let y = rel.clone().projectOnVector(pY).length()
if(!rel.clone().projectOnVector(pY).normalize().equals(pY)){
    y = -y
}
Note that in order to get negative values, I check a normalized copy of the axis-projected vector against the normalized axis vector. If they match, the value is positive. If they don't match, the value is negative.
Also, all the cloning I did above was to be explicit about the steps. This is not an efficient way to perform this operation, but I'll leave optimization up to you.
EDIT: My logic for determining the sign of the value was flawed. I've corrected it to normalize the axis-projected vector and check it against the normalized axis vector.
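As an aside (my addition, not part of the original answer): since pX and pY are orthonormal, the signed coordinates can also be read off directly with dot products, avoiding the sign check entirely:
let rel = projectedPoint1.clone().sub(origin) // position relative to the plane's origin
let x = rel.dot(pX) // signed coordinate along +X
let y = rel.dot(pY) // signed coordinate along +Y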

How to plot country names on the globe, so the mesh will be aligned with the surfaces

I'm trying to plot country names on the globe so that the text meshes are aligned with the surface, but I'm failing to calculate the proper rotations. For the text I'm using THREE.TextGeometry. The name appears on click on the mesh of the country, at the point of intersection, using raycasting. I'm lacking knowledge of how to turn these coordinates into proper rotation angles. I'm not posting my code, as it's a complete mess, and I believe it will be easier for a knowledgeable person to explain how to achieve this in general.
Here is the desired result:
The other solution I tried (which, of course, is not the ultimate one) is based on this SO answer. The idea is to use the normal of the face you intersect with the raycaster:
1. Obtain the point of intersection.
2. Obtain the face of intersection.
3. Obtain the normal of the face (2).
4. Get the normal (3) in world coordinates.
5. Set the position of the text object as the sum of the point of intersection (1) and the normal in world coordinates (4).
6. Set the lookAt() vector of the text object as the sum of its position (5) and the normal in world coordinates (4).
Seems long, but actually it makes not so much of code:
var PGHelper = new THREE.PolarGridHelper(...); // let's imagine it's your text object ;)
var PGlookAt = new THREE.Vector3(); // point of lookAt for the "text" object
var normalMatrix = new THREE.Matrix3();
var worldNormal = new THREE.Vector3();
and in the animation loop:
for (var i = 0; i < intersects.length; i++) {
    normalMatrix.getNormalMatrix(intersects[i].object.matrixWorld);
    worldNormal.copy(intersects[i].face.normal).applyMatrix3(normalMatrix).normalize();
    PGHelper.position.addVectors(intersects[i].point, worldNormal);
    PGlookAt.addVectors(PGHelper.position, worldNormal);
    PGHelper.lookAt(PGlookAt);
}
jsfiddle example
The method works with meshes of any geometry (only checked with spheres and boxes, though ;) ). And I'm sure there are other, better methods.
Very interesting question. I have tried this approach: we can regard the text as a plane. Let's define a normal vector n from your sphere's center (or position) to the point on the sphere's surface where you want to display the text. I have a simple way to orient the text along that normal:
1. Put the text mesh at the sphere's center: text.position.copy(sphere.position)
2. Make the text look at the point on the sphere's surface: text.lookAt(point)
3. Relocate the text to the point: text.position.copy(point)
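Put together, the three steps amount to this minimal sketch (assuming text is the text mesh, sphere is the globe, and point is the picked surface point):
text.position.copy(sphere.position); // 1. start at the sphere's center
text.lookAt(point);                  // 2. orient the text toward the surface point
text.position.copy(point);           // 3. move the text out to the point itself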

OpenGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color from the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
Using this approach, I'll be able to change the shape of the image from a polygon shape to whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math and get the new position for the pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if the ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
    (planarDistanceToPerimeter / planarLengthOfNormal) * normal;

// get intersection as if the ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
    (linearDistanceToEdge / linearLengthOfNormal) * normal;

// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));

// ... do something to map the cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
    coordinateFromCylindricalPosition(cylindricalIntersection);
With a common implementation of coordinateFromCylindricalPosition possibly being simply:
vec2 coordinateFromCylindricalPosition(vec3 cylindricalIntersection) {
    return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959,
                cylindricalIntersection.z * 0.5);
}

Ray.intersectObjects() always comes up no hits

I've created a sphere and the centre of the sphere is located at 0,0,0.
The radius of the sphere is 9.
I've created a cube that is positioned above the surface/faces of the sphere.
When I click on the cube and then click on any point on the surface of the sphere, my cube rotates relative to the point clicked on the sphere (to look in the direction of the point, so to say) and then moves along the surface of the sphere towards it. The rotation and movement all happen within a render loop.
What I want to do is cast a ray from a point relative to the cube's position, but at a greater distance from the centre of the sphere. For instance, the distance to any given point on any given face of my sphere is ~8.8 - 9 (the vertices would be at a distance of 9 and the centre of any face would be ~8.8 - 8.9), and the distance of my cube from the centre of the sphere is 9.1. I want to cast a ray from about a distance of 12 towards the centre of my sphere.
So, if my cube is located at 0,0,9.1 then I want to cast a ray whose origin would be 0,0,12 and whose destination would be 0,0,0. Then, targeting only the sphere as the object to intersect, I would determine the distance to the intersected point on the face and set the distance of the cube to 12 - someDistance. That way it would seem as though the cube is actually moving along the surface of the sphere, and if I modify the features of the sphere, the cube would appear to move along the contours of the surface.
Here is my code, which runs inside a looping render function.
Unfortunately, it turns up nothing.
var direction = new THREE.Vector3(0,0,0);
var origin = new THREE.Vector3(object_cubi[x-1].posiX, object_cubi[x-1].posiY, object_cubi[x-1].posiZ);
origin.normalize();
origin.x *= 12;
origin.y *= 12;
origin.z *= 12;
var disRay = new THREE.Raycaster();
disRay.ray.set(origin, direction);
var rayIntersect = disRay.intersectObjects( targetList );
document.getElementById("test7").value = rayIntersect.length;
rayIntersect.length is always 0.
What am I missing?
To select the cube and pick a point on the surface I had to use a raycaster, and that code works fine. However, it does incorporate Projector().
All I needed to do was cast the ray from the centre of the sphere and make the material of the sphere double-sided:
material.side = THREE.DoubleSide;
Now my cube moves along the contours of my sphere.
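A minimal sketch of that fix (object names assumed, not from the original post); casting from the centre outward through the cube also sidesteps the zero-length direction vector in the snippet above, which can never intersect anything:
// Rays cast from inside the sphere hit the inner (back) faces, so the
// material must be double-sided for the raycaster to report them.
sphereMesh.material.side = THREE.DoubleSide;
var centre = new THREE.Vector3(0, 0, 0);          // sphere centre
var outward = cube.position.clone().normalize();  // direction through the cube
var raycaster = new THREE.Raycaster(centre, outward);
var hits = raycaster.intersectObject(sphereMesh);
if (hits.length > 0) {
    // hits[0].distance is the centre-to-surface distance along this ray;
    // keep the cube hovering just above the surface.
    cube.position.copy(outward).multiplyScalar(hits[0].distance + 0.2);
}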
