Program to convert points on a 3D plane to 2D points - algorithm

I have the normal vector of the plane. I want to convert the 3D points onto a 2D plane while maintaining the same distances between them. Basically, I want to make the z coordinate of all the points on the plane equal.
How do I go about achieving this and writing a program for it (preferably in C#)? Are there any good libraries that I can use?
Would this library be useful: Point Cloud Library?
My objective is that I have several lines (all on the same plane) in 3D space, and I want to represent these lines in 2D along with their measurements.
An example plane of my problem.
I am doing this for an application I am developing in Unity using Google ARCore.

OK, I have invested a fair amount of time in finding a solution to this problem. I figured out a simple solution using ARCore (the augmented reality SDK provided by Google), since that is what I am building with. For those who want to achieve this without ARCore, refer to Question 1 and Question 2, where a new orthonormal basis has to be created or the plane has to be rotated to align with the default planes.
For those who are using ARCore in Unity, there is a simpler solution given in this issue on GitHub, created by me. Basically, we can easily create new axes on the 3D plane and record coordinates in this newly created coordinate system.
If you want to project 3D points on a plane, you need to have a 2D coordinate system in mind. A natural one is the one defined by the global axes, but that will work well with one kind of plane (say, horizontal) and not another (say, vertical).
Another choice of coordinates is the one defined by CenterPose, but it can change every frame. So if you need the 2D points for only one frame, this can be written as:
Vector3 x_local_axis = DetectedPlane.CenterPose.rotation * Vector3.forward;
Vector3 z_local_axis = DetectedPlane.CenterPose.rotation * Vector3.right;
// loop over your points
float x = Vector3.Dot(your_3d_point, x_local_axis);
float z = Vector3.Dot(your_3d_point, z_local_axis);
If you need a 2d coordinate system that is consistent between frames, you probably would want to attach an anchor to any plane of interest, maybe at DetectedPlane.CenterPose, and do the same math as above, but with anchor rotation instead of plane rotation. The x and z axes of the anchor will provide a 2d frame of coordinates that is consistent between frames.
So here, new local axes are created at the center of the plane, and the points obtained have only two coordinates.
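For readers not using ARCore, here is a minimal sketch of the orthonormal-basis approach referenced above: pick any point on the plane as the origin, build two perpendicular unit vectors u and v lying in the plane, and express every point by its dot products with them. The class and method names below are illustrative, not from any library.

using UnityEngine;

public static class PlaneProjection
{
    // Build an orthonormal basis (u, v) on the plane with unit normal n,
    // then express each point as (Dot(p - origin, u), Dot(p - origin, v)).
    // Distances are preserved because u and v are unit length and perpendicular.
    public static Vector2[] PlanePointsTo2D(Vector3[] points, Vector3 origin, Vector3 normal)
    {
        Vector3 n = normal.normalized;
        // Seed with any direction not parallel to the normal.
        Vector3 seed = Mathf.Abs(n.y) < 0.9f ? Vector3.up : Vector3.right;
        Vector3 u = Vector3.Cross(n, seed).normalized;
        Vector3 v = Vector3.Cross(n, u); // unit length, since n and u are orthonormal

        Vector2[] result = new Vector2[points.Length];
        for (int i = 0; i < points.Length; i++)
        {
            Vector3 d = points[i] - origin;
            result[i] = new Vector2(Vector3.Dot(d, u), Vector3.Dot(d, v));
        }
        return result;
    }
}

Because u and v are orthonormal, distances between the resulting 2D points equal the distances between the original 3D points, which is exactly the requirement in the question.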

I needed this in Unity C#, so here's some code that I used.
First off, project the point onto the plane.
Then, using the dot products with the target transform's right and forward vectors, I got the local 2D coordinates.
So if you want standard world-aligned coordinates, replace transform.right with (1, 0, 0) and transform.forward with (0, 0, 1).
public static Vector3 GetClosestPointOnPlane(Vector3 point, Plane3D plane)
{
    Vector3 dir = point - plane.position; // vector from the plane's position to the point
    plane.normal = plane.normal.normalized;
    float dotVal = Vector3.Dot(dir.normalized, plane.normal); // cosine of the angle between dir and the normal
    return point + plane.normal * dir.magnitude * (-dotVal); // dotVal * dir.magnitude is the signed distance from the point to the plane along the normal
}

Intersection.Plane3D tempPlane = new Intersection.Plane3D(transform.position, transform.up); // Plane3D is just a point and a normal.
Vector3 closestPoint = Intersection.GetClosestPointOnPlane(testPoint.position, tempPlane);
float xPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.right);
float yPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.forward);
float dist = (closestPoint - transform.position).magnitude;
planePos = new Vector2(xPos * dist, yPos * dist); // This is the value you're looking for.
FYI (William Martens): The GetClosestPointOnPlane function is part of some old code I wrote over a decade ago back in school, originally in C++ and converted to C#. It might be based on something in my old school book, but I can't say for sure. The rest I made myself after looking around for a while and not finding something that worked.

Related

Three.js rotating object around xyz world axis using quaternions

I've been struggling with this for the past 3 days, so here we go:
I am building a virtual photo studio using Three.js. The rotation is set with three sliders, one for each axis. Rotation needs to happen around the world axes. So far, I can get the object to rotate around the world x axis; however, y and z rotation only happens locally. Here is the code for one of the rotation sliders, using quaternions.
let rotationX = new THREE.Quaternion();
sliderX.oninput = function () {
    let newVec = new THREE.Vector3(1, 0, 0);
    let newRad = THREE.Math.degToRad(this.value);
    rotationX.setFromAxisAngle(newVec, newRad);
    pivot.quaternion.multiplyQuaternions(rotationX, rotationY).multiply(rotationZ);
};
This approach has gotten me the farthest. The problem is that the rotation always happens around the axis of the first quaternion in the multiplication chain, which would be the quaternion rotationX in the following line:
pivot.quaternion.multiplyQuaternions(rotationX, rotationY).multiply(rotationZ)
Because I am working with quaternions, switching around the order of multiplication is also not an option, as it changes the outcome.
Any help would be greatly appreciated.
Here is a link to the dependency free repo in case you want to recreate the situation: https://github.com/maxibenner/exporter
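No answer is quoted here, but for reference, the usual fix for this class of problem is to compose the world-axis rotation on the left of the current orientation (Three.js has Quaternion.premultiply for this). Below is a hedged Unity C# sketch of the same idea, since the rest of this page is Unity-centric; the class and method names are made up:

using UnityEngine;

public class WorldAxisRotator : MonoBehaviour
{
    // Rotating about a WORLD axis means composing the delta on the left
    // of the current orientation; composing on the right rotates about
    // the object's LOCAL axis instead, which is the behaviour described
    // in the question.
    public void RotateAroundWorldAxis(Vector3 worldAxis, float degrees)
    {
        Quaternion delta = Quaternion.AngleAxis(degrees, worldAxis.normalized);
        transform.rotation = delta * transform.rotation; // world-space premultiply
    }
}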

ThreeJS flip reversed X axis (left handed coordinate system)

I'm trying to create a 3D tile-system world merged from smaller 3D objects. To create these, we use another application made in Unity, which loads all the small 3D assets separately and can be used to create your new model. Upon saving these model files, a JSON file is created which contains the scales, positions, rotations, etc. of all the 3D models used.
We have decided to use this system of 'North, East, South, West' to make sure everything will look good in production. However, when we try to render these same JSON files in ThreeJS, we notice the X axis is reversed compared to the Unity application that we're using.
What we want is this:
North is increasing Z value (north and south are fine)
East is increasing X value
West is decreasing X value
At the moment this is what's going wrong in ThreeJS:
East is decreasing X value
West is increasing X value
What we have already tried is this:
mirror / flip the camera view
when a coordinate drops below 0 we make it absolute (-10 will be 10)
when a coordinate is above 0 we make it negative (10 will be -10)
But none of the above had the desired effect. Reversing the coordinates in code brings other problems when it comes to scaled or rotated objects that are smaller or larger than 1x1x1. Ideally, we wouldn't have to change our coordinates and could still use them as a solid reference, by changing the direction of the X axis from the left side to the right side of 0,0,0.
Currently ThreeJS uses a right-handed coordinate system, and what we want is a left-handed coordinate system. Is this something that can be configured within ThreeJS?
Does anyone have an idea what I can try, other than flipping all X coordinates?
It's not something you can configure in three.js or Unity. Different file formats typically have a notional coordinate system built into them. GLTF, for example, is represented in a right-handed coordinate system. It's the responsibility of the format importers and exporters to handle the conversion -- this is what the builtin three.js importers do.
I would suggest using an existing format such as GLTF to represent your scene (there is an existing Unity exporter available and an importer available for three.js).
Or if you'd like to retain control over your own file format, you can do the left-to-right handed coordinate system conversion yourself, either at export from Unity or at import to three.js. Looking at your image, it looks like you'll want to multiply all of the X values by -1.0 to get them to look the same. You'll want to save your rotations as quaternions as well, to avoid rotation-order differences.
Of course you could always just scale the whole scene by -1.0 on X but that may make it difficult to work with other parts of three.js.
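As a sketch of the "convert at export" route: mirroring across the YZ plane means negating the X component of every position and, for rotations stored as quaternions, negating the Y and Z components (conjugating a rotation R by the reflection M = diag(-1, 1, 1) gives M R M, which has exactly that effect on the quaternion). A hedged Unity C# example of the export-side conversion; the class and method names are made up:

using UnityEngine;

public static class HandednessExport
{
    // Mirror a pose across the YZ plane (flip the X axis). Under the
    // reflection M = diag(-1, 1, 1) a rotation R becomes M * R * M,
    // which for a quaternion (x, y, z, w) means negating y and z.
    public static void MirrorX(ref Vector3 position, ref Quaternion rotation)
    {
        position = new Vector3(-position.x, position.y, position.z);
        rotation = new Quaternion(rotation.x, -rotation.y, -rotation.z, rotation.w);
    }
}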
I would consider applying a (-1, 1, 1) scale to the root of your "Unity exported scene"; this way you can keep the rest of your scene unchanged.
obj3d.scale.set(-1, 1, 1);

Flip 3D object around point/line

I'm making a 3D monster maker. I recently added a feature to flip parts along the x and y axes. This works perfectly fine on its own; however, I also have a feature that allows users to combine parts (it sets flags, it doesn't combine meshes), which means that simply flipping the individual objects won't flip the "shape" of the combined object. I have had two ideas of how to do this, which didn't work, and I'll list them below. I have access to the origin of the objects and the centre of mass of all instances that are combined - the 0, 0, 0 point on a theoretical number plane.
In these examples we're flipping across the y axis, the axis plane is X = width, Y = height, Z = depth
Attempt #1 - simply flipping the individual object's X scale, getting the X distance from the centre of mass, and subtracting that from the centre of mass for the position. This works when the direction of the object is (0, 0, 1) and the right vector is (1, 0, 0) or (-1, 0, 0); in any other direction, X isn't the exact "left/right" of the object. Here's a video to clarify: https://youtu.be/QXdEF4ScP10
code:
modelInstance[i].scale.x *= -1;
modelInstance[i].basePosition.set(centre.x - modelInstance[i].distFromCentre.x, modelInstance[i].basePosition.y, modelInstance[i].basePosition.z);
modelInstance[i].transform.set(modelInstance[i].basePosition, modelInstance[i].baseRotation, modelInstance[i].scale);
Attempt #2 - rotate the objects 180° around the Y axis at the centre of mass and then flip their z value. As far as I understand this is a solution, but I don't think I can do it. The way to rotate an object around a point, AFAIK, involves translating the matrix to the point, rotating it, and then translating it back, which I can't use. Because of the ability to rotate, join, flip, and scale objects, I keep the rotation, position, and scale completely separate, since issues with scaling/rotating and movement occur otherwise. I have a Vector3 for the position, a matrix for the rotation, and a Vector3 for the scale; whenever I change any of these I use object.transform.set(position, matrix.getRotation(), scale); So when I attempt this method (translating the rotation matrix to the point, etc.), the objects individually flip but remain in the same place, and translating the object's transform matrix has weird results and doesn't work. Video of both variations: https://youtu.be/5xzTAHA1vCU
code:
modelInstance[i].scale.z *= -1;
modelInstance[i].baseRotationMatrix.translate(modelInstance[i].distFromCentre).rotate(Vector3.Y, 180).translate( modelInstance[i].distFromCentre.scl(-1));
modelInstance[i].transform.set(modelInstance[i].basePosition, modelInstance[i].baseRotation, modelInstance[i].scale);
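For what it's worth, the translate-rotate-translate pattern described in attempt #2 can be collapsed into a single expression on positions, without touching the rotation matrix itself. A minimal sketch, written in Unity-style C# for illustration rather than the asker's libGDX types:

using UnityEngine;

public static class PivotRotation
{
    // Translate so the pivot sits at the origin, rotate, translate back -
    // collapsed into a single expression on the position.
    public static Vector3 RotateAround(Vector3 point, Vector3 pivot, Quaternion rotation)
    {
        return pivot + rotation * (point - pivot);
    }
}

// Example: rotate a part's position 180 degrees around the centre of mass.
// Vector3 flippedPos = PivotRotation.RotateAround(partPos, centreOfMass, Quaternion.AngleAxis(180f, Vector3.up));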
OK, since no one else has helped, I'll give you some code that you can either use directly or use to help you alter your code so that it is done in a similar way.
First of all, I tend to just deal with matrices and pass them to shaders as projection matrices, i.e. I don't really know what modelInstance[i] is - is it an actor (I never use them), or some other libGDX class? Whatever it is, if you use this code to generate your matrices, you should be able to overwrite your modelInstance[i] matrix at the end of it. If not, maybe it'll give you pointers on how to alter your code.
First, rotate or flip your object without any translation. Don't translate or scale first, because when you rotate you'll also rotate the translation you've performed. I use this function to generate a rotation matrix; it rotates around the y axis first, which I think is better than other rotation orders. Alternatively, you could create an identity matrix and use the libGDX rotation functions on it to create a similar matrix.
public static void setYxzRotationMatrix(double xRotation, double yRotation, double zRotation, Matrix4 matrix)
{
    // yxz - y rotation performed first
    float c1 = (float)Math.cos(yRotation);
    float c2 = (float)Math.cos(xRotation);
    float c3 = (float)Math.cos(zRotation);
    float s1 = (float)Math.sin(yRotation);
    float s2 = (float)Math.sin(xRotation);
    float s3 = (float)Math.sin(zRotation);
    matrix.val[0]  = -c1*c3 - s1*s2*s3;  matrix.val[1]  = c2*s3;  matrix.val[2]  = c1*s2*s3 - c3*s1;  matrix.val[3]  = 0;
    matrix.val[4]  = -c3*s1*s2 + c1*s3;  matrix.val[5]  = c2*c3;  matrix.val[6]  = c1*c3*s2 + s1*s3;  matrix.val[7]  = 0;
    matrix.val[8]  = -c2*s1;             matrix.val[9]  = -s2;    matrix.val[10] = c1*c2;             matrix.val[11] = 0;
    matrix.val[12] = 0;                  matrix.val[13] = 0;      matrix.val[14] = 0;                 matrix.val[15] = 1.0f;
}
I use the above function to rotate my object to the correct orientation, then translate it to the correct location, then multiply by the camera's matrix, with scaling as the final operation. This will definitely work if you can do it that way, but I just pass my final matrix to the shader; I'm not sure how you use your matrices. If you want to flip the model using the scale, you should try it immediately after the rotation matrix has been created. I'd recommend getting it working without flipping with scale first, so you can test both matrix.scl() and matrix.scale() as the final step. Offhand, I'm not sure which scale function you'll need.
Matrix4 matrix1 = new Matrix4();
setYxzRotationMatrix(xRotationInRadians, yRotationInRadians, zRotationInRadians, matrix1);
// matrix1 will rotate your model to the correct orientation, around the origin.
// Here is where you may wish to use matrix1.scl(-1,1,1) or matrix1.scale(-1,1,1).
// Get the anchor position here if required - see notes later.
// Now translate to the correct location. I alter the matrix directly so I know exactly
// what is going on. I think matrix1.trn(x, y, z) would do the same.
matrix1.val[12] = x;
matrix1.val[13] = y;
matrix1.val[14] = z;
// Combine with your camera. This may be part of your stage or scene, but I don't use
// these, so I can't help there.
Matrix4 matrix2 = new Matrix4();
// Set matrix2 to an identity matrix, multiply it by the camera's projection matrix, then
// finally by the rotation/flip/transform matrix1 you've created.
matrix2.idt().mul(yourCamera.combined).mul(matrix1);
matrix2.scale(-1,1,1); // Flipping like this will work, but may screw up any anchor
                       // position if you calculated one earlier.
// matrix2 is the final projection matrix for your model, i.e. you just pass that matrix
// to a shader and it should be used to multiply with each vertex position vector to
// create the fragment positions.
Hopefully you'll be able to adapt the above to your needs. I suggest trying one operation at a time and making sure your next operation doesn't screw up what you've already done.
The above code assumes you know where you want to translate the model to, that is, you know where the center is going to be. If you have an anchor point, let's say -3 units in the x direction, you need to find out where that anchor point has moved to after the rotation and maybe the flip. You can do that by multiplying a vector by matrix1; I'd suggest doing this before any translation to the correct location.
Vector3 anchor = new Vector3(-3, 0, 0);
anchor.mul(matrix1); // After this operation, anchor is set to the correct location
                     // for the new rotation and flipping of the model. This offset
                     // should be applied to your translation if your anchor point
                     // is not at 0,0,0 of the model.
This can all be a bit of a pain, particularly if you don't like matrices. It doesn't help that everything here is done differently from what you've tried so far, but this is the method I use to display all the 3D models in my game, and it will work if you can adapt it to your code. Hopefully it'll help someone anyway.

webgl - model coordinates to screen coordinates

I'm having an issue with calculating the screen coordinates of a vertex. This is not specifically a WebGL issue, more of a general 3D graphics issue.
The sequence of matrix transformations that I'm using is:
result_vec4 = perspective_matrix * camera_matrix * model_matrix * vertex_coords_vec4
model_matrix being the transformation of a vertex in its local coordinate system into the global scene coordinate system.
So my understanding is that the final result_vec4 is in clip space, which should then be in the [-1, 1] range. That is not what I'm getting: result_vec4 just ends up containing values for the coords that don't correspond to the correct screen position of the vertex.
Does anyone have any ideas as to what might be the issue here?
Thank you very much for any thoughts.
result_vec4 as computed above is in clip space; to get normalized device coordinates you need to project it onto the hyperplane w = 1 using:
result_vec4 /= result_vec4.w
After applying this perspective division, result_vec4.xyz will be in [-1, 1].
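To make the full chain explicit, here is a hedged sketch of the steps after clip space: perspective division to NDC, then the viewport transform to pixels. It is written in C# (System.Numerics) for neutrality since the math is API-agnostic; viewportWidth and viewportHeight are assumed inputs, and the Y flip targets a top-left pixel origin rather than GL's bottom-left convention.

using System.Numerics;

public static class ScreenProjection
{
    // clip = perspective_matrix * camera_matrix * model_matrix * vertex (clip space).
    // Perspective division gives normalized device coordinates in [-1, 1];
    // the viewport transform then maps NDC to pixel coordinates.
    public static Vector2 ClipToScreen(Vector4 clip, float viewportWidth, float viewportHeight)
    {
        float ndcX = clip.X / clip.W; // perspective division
        float ndcY = clip.Y / clip.W;
        float screenX = (ndcX + 1f) * 0.5f * viewportWidth;
        float screenY = (1f - ndcY) * 0.5f * viewportHeight; // flip Y for a top-left origin
        return new Vector2(screenX, screenY);
    }
}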

3D space: following the direction that an object is pointing towards, using the mouse pointer

Given the 3D vector of the direction the camera is facing and the orientation/direction vector of a 3D object in 3D space, how can I calculate the 2-dimensional slope that the mouse pointer must follow on the screen in order to visually move along the direction of said object?
Basically I'd like to be able to click on an arrow and make it move back and forth by dragging it, but only if the mouse pointer drags (roughly) along the length of the arrow, i.e. in the direction that it's pointing to.
Thank you.
I'm not sure I 100% understand your question. Would you mind posting a diagram?
You might find these of interest. I answered previous questions on calculating a local X Y Z axis given a camera direction (look-at) vector, and also a question on translating an object in a plane parallel to the camera.
Both of these examples use the vector dot product and vector cross product to compute the required vectors. In your example, the vector dot product can also be used to obtain the angle between two vectors once you have found them.
It depends to an extent on the transformation you are using to convert your 3D real-world coordinates to 2D screen coordinates, e.g. perspective, isometric, etc. You will typically have a forward (3D -> 2D) and a backward (2D -> 3D) transformation in play, where the backward transformation loses information (i.e. going forward, each 3D point maps to a unique 2D point, but going back from that 2D point may not yield the same 3D point). You can often project the mouse point onto the object to get the missing dimension.
For mouse dragging, you typically get the user to specify an operation (translation on the plane of projection, zooming in or out, or rotating about an anchor point). Your input is the mouse coordinate at the start and end of the drag, which you transform into your 3D coordinate system to get two 3D coordinates, which will give you dx, dy, dz for dragging/translation, etc.
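As a concrete illustration of the projection idea in the answers above, here is a hedged Unity C# sketch: project the arrow's base and tip to screen space, and compare the mouse drag against the resulting 2D direction with a dot product. Names like arrowOrigin and DragAmount are assumptions for illustration, not from the question.

using UnityEngine;

public static class DragAlongAxis
{
    // How far a mouse drag moves along the arrow, as seen on screen:
    // project the arrow's base and tip to screen space and take the dot
    // product of the mouse delta with the resulting 2D direction.
    public static float DragAmount(Camera cam, Vector3 arrowOrigin, Vector3 arrowDirection, Vector2 mouseDelta)
    {
        Vector2 a = cam.WorldToScreenPoint(arrowOrigin);
        Vector2 b = cam.WorldToScreenPoint(arrowOrigin + arrowDirection);
        Vector2 screenDir = (b - a).normalized;
        return Vector2.Dot(mouseDelta, screenDir); // positive = dragging with the arrow
    }
}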
