I'm having an issue with calculating the screen coordinates of a vertex. This is not specifically a WebGL issue, more of a general 3D graphics issue.
The sequence of matrix transformations I'm using is:
result_vec4 = perspective_matrix * camera_matrix * model_matrix * vertex_coords_vec4
model_matrix being the transformation of a vertex from its local coordinate system into the global scene coordinate system.
So my understanding is that the final result_vec4 is in clip space, which should then be in the [-1,1] range? That is not what I'm getting... result_vec4 just ends up containing values that don't correspond to the correct screen position of the vertex.
Does anyone have any ideas as to what might be the issue here?
Thank you very much for any thoughts.
result_vec4 is already in clip space; to get normalized device coordinates you need to project it onto the hyperplane w = 1 using:
result_vec4 /= result_vec4.w
After this perspective division, result_vec4.xyz will be in [-1,1].
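For what it's worth, here is a minimal CPU-side sketch of the remaining steps (perspective division followed by a viewport transform), written in C# just to illustrate the math; the ClipToScreen helper, the viewport arguments, and the top-left screen origin are assumptions, not something from the question:

using System.Numerics;

// clipPos = perspective_matrix * camera_matrix * model_matrix * vertex, as in the question.
// Hypothetical helper: maps a clip-space position to pixel coordinates, assuming
// an OpenGL-style [-1,1] NDC cube and a screen origin in the top-left corner.
static Vector2 ClipToScreen(Vector4 clipPos, float viewportWidth, float viewportHeight)
{
    // Perspective division: clip space -> normalized device coordinates.
    Vector3 ndc = new Vector3(clipPos.X, clipPos.Y, clipPos.Z) / clipPos.W;

    // Viewport transform: [-1,1] -> pixel coordinates (Y flipped for a top-left origin).
    float screenX = (ndc.X + 1f) * 0.5f * viewportWidth;
    float screenY = (1f - ndc.Y) * 0.5f * viewportHeight;
    return new Vector2(screenX, screenY);
}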
I am trying to solve a question related to the transformation of coordinates in 3D space but am not sure how to approach it.
Say a vertex point named P is drawn at the origin with a 4x4 transformation matrix. It is then viewed through a camera that is positioned with a model-view matrix and then through a simple projective transform matrix.
How do I calculate the new screen coordinates of P' (x, y, z)?
Before explaining the answer, you need to know how the pipeline processes a vertex before anything is drawn on screen.
Every step in the process is just a matrix multiplied with a vector:
Model - World - Camera - Projection (or Normalized Coordinates) - Screen
First, we call it 'model space' because (0,0,0) is the model's own origin.
Second, we need to move from model space to world space, because we are going to place the model somewhere in the world. The transform is TRS * Model(Vector4), i.e. translate, rotate and scale; each model gets its own world transform.
After doing that, the model is placed in the world.
Third, we need to move into camera space, because what we see is seen through the camera. In the world, the camera also has a position, a viewport size and a rotation, and the vertex then has to be projected by the camera; see:
General Formula for Perspective Projection Matrix
After this is done, you get normalized device coordinates, which technically run from -1 to 1 (Unity's viewport space then remaps them to 0-1).
Finally, screen space. Suppose we are making a video game for mobile. Mobile devices come with many different screen resolutions, so how do we handle that? Simple: scale and translate to get the result in screen-space coordinates, because the origin and the screen size differ per device.
So what you are trying to do is step 4.
If you want the screen position of P1 from world space, the formula is "Screen Matrix * Projection Matrix * Camera Matrix * P1".
If you want the position starting from camera space, it is "Screen Matrix * Projection Matrix * P1".
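As a rough Unity C# illustration of those two formulas (a sketch only; cam is assumed to be a Camera reference and worldPos a world-space point, neither of which comes from the question):

// Clip space: projection matrix * camera (view) matrix * world-space position.
Vector4 clip = cam.projectionMatrix * cam.worldToCameraMatrix
             * new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);

// Perspective division: clip space -> normalized device coordinates in [-1,1].
Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w;

// "Screen matrix" step: scale and translate NDC into pixel coordinates.
Vector2 screen = new Vector2((ndc.x + 1f) * 0.5f * cam.pixelWidth,
                             (ndc.y + 1f) * 0.5f * cam.pixelHeight);

// Unity's built-in Camera.WorldToScreenPoint(worldPos) performs the same chain in one call.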
There are useful links below to help understand the matrices and the calculation:
https://answers.unity.com/questions/1359718/what-do-the-values-in-the-matrix4x4-for-cameraproj.html
https://www.google.com/search?q=unity+camera+to+screen+matrix&newwindow=1&rlz=1C5CHFA_enKR857KR857&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjk5qfQ18DlAhUXfnAKHabECRUQ_AUIEigB&biw=1905&bih=744#imgrc=Y8AkoYg3wS4PeM:
I have the normal vector of the plane. I want to convert the 3D points onto a 2D plane while maintaining the same distances between them. Basically, what I want to do is make the z coordinate of all the points on the plane equal.
How do I go about achieving this and writing a program for it (preferably in C#)? Are there any good libraries that I can use?
Would this library be useful: Point Cloud Library?
My objective in doing this is that I have several lines (on the same plane) in 3D space and I want to represent these lines in 2D along with their measurements.
An example plane of my problem.
I am doing this for an application I am developing in Unity using Google ARCore.
OK, I have invested a fair amount of time in finding a solution to this problem, and I figured out a simple one using ARCore (the augmented reality SDK provided by Google), since that is what I am working with. For those who want to achieve this without ARCore, refer to these questions: Question 1 and Question 2, where a new orthonormal basis has to be created or the plane has to be rotated to align with the default planes.
For those who are using ARCore in Unity, there is a simpler solution given in an issue on GitHub created by me. Basically, we can easily create new axes on the 3D plane and record coordinates in this newly created coordinate system.
If you want to project 3D points onto a plane, you need to have a 2D coordinate system in mind. A natural one is the one defined by the global axes, but that will work well with one kind of plane (say horizontal) and not with another (say vertical).
Another choice of coordinates is the one defined by CenterPose, but it can change every frame. So if you need the 2D points for only one frame, this can be written as:
Vector3 x_local_axis = DetectedPlane.CenterPose.rotation * Vector3.forward;
Vector3 z_local_axis = DetectedPlane.CenterPose.rotation * Vector3.right;
// Loop over your points and project each one onto the two in-plane axes.
float x = Vector3.Dot(your_3d_point, x_local_axis);
float z = Vector3.Dot(your_3d_point, z_local_axis);
If you need a 2D coordinate system that is consistent between frames, you probably want to attach an anchor to any plane of interest, maybe at DetectedPlane.CenterPose, and do the same math as above but with the anchor's rotation instead of the plane's rotation (sketched below). The x and z axes of the anchor will provide a 2D frame of coordinates that is consistent between frames.
So here, new local axes are created at the center of the plane, and the points obtained have only two coordinates.
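For that frame-consistent variant, a rough Unity C# sketch, assuming an ARCore Anchor component named anchor has already been created on the plane (the variable names are placeholders):

// Assumes 'anchor' is an Anchor attached near DetectedPlane.CenterPose.
Vector3 xAxis = anchor.transform.rotation * Vector3.forward;
Vector3 zAxis = anchor.transform.rotation * Vector3.right;

// Express the point relative to the anchor so the 2D coordinates stay stable across frames.
Vector3 local = your_3d_point - anchor.transform.position;
float x = Vector3.Dot(local, xAxis);
float z = Vector3.Dot(local, zAxis);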
I needed this in Unity C#, so here's some code that I used.
First off, project the point onto the plane.
Then, using the dot products with the target transform's right and forward vectors, I got the local 2D coordinates.
So if you want the standard coordinates, replace right with (1, 0, 0) and forward with (0, 0, 1).
public static Vector3 GetClosestPointOnPlane(Vector3 point, Plane3D plane){
    Vector3 dir = point - plane.position; // Direction from the plane's reference point to the query point.
    plane.normal = plane.normal.normalized;
    float dotVal = Vector3.Dot(dir.normalized, plane.normal); // Check whether the two are facing each other.
    return point + plane.normal * dir.magnitude * (-dotVal); // dir.magnitude * dotVal is the signed distance to the plane.
}
Intersection.Plane3D tempPlane = new Intersection.Plane3D(transform.position, transform.up);//Plane3D is just a point and a normal.
Vector3 closestPoint = Intersection.GetClosestPointOnPlane(testPoint.position, tempPlane);
float xPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.right);
float yPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.forward);
float dist = (closestPoint - transform.position).magnitude;
planePos = new Vector2(xPos * dist, yPos * dist); // This is the value that you're looking for.
@William Martens
FYI: The GetClosestPointOnPlane function is part of some old stuff I made over a decade ago back in school, written in C++ and converted to C#. It might be based on something in my old school book, but I can't say for sure. The rest I made myself after looking around for a while and not finding something that worked.
I have a 2D camera defined by a Direct2D 3x2 matrix like this:
ViewMatrix = ScaleMatrix * TranslationMatrix;
But when trying to do hit testing, I get to a point where I need to know my X,Y camera coordinates. I tried to keep track of the hit in a vector, but without success; scaling around an offset center complicates the work a lot.
So I guess it should be possible to find my camera coordinates from these two matrices, right? But how?
Thanks a lot for the help.
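One common approach (a sketch, not from the original thread): invert the view matrix and run the screen-space hit point through the inverse, which gives the corresponding world-space coordinate. Shown here with System.Numerics types as stand-ins for the Direct2D 3x2 matrix; the scale, offset, and screenHit values are placeholders:

using System.Numerics;

// Placeholder camera parameters, for illustration only.
float scale = 2.0f;
Vector2 offset = new Vector2(100f, 50f);
Vector2 screenHit = new Vector2(400f, 300f); // the point being hit-tested

// Same construction as in the question: view = scale matrix * translation matrix.
Matrix3x2 view = Matrix3x2.CreateScale(scale) * Matrix3x2.CreateTranslation(offset);

// A non-zero scale plus a translation is always invertible, so map the
// screen-space point back through the inverse of the view matrix.
if (Matrix3x2.Invert(view, out Matrix3x2 viewInverse))
{
    Vector2 worldHit = Vector2.Transform(screenHit, viewInverse);
}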
I am using Three.js version 65.
I am displaying a set of points at time t=0 in 3D space using ParticleSystem, and I also have the next set of points at time t=1. Now I want to animate between them, as in the JSONLoader morphTarget animation. Could anybody suggest the best way to achieve this?
(or)
Could I use WebGL shader programming for this? Please suggest.
Thanks in advance.
Yes, you can do that with shaders. You'd e.g. create a custom shader for your particle system with the attributes vec3 position and vec3 nextPosition and a uniform float scale which goes from 0 to 1.
Then you can add some logic to the shader where you calculate the new position like vec3 pos = position * (1.0 - scale) + nextPosition * scale (along with the usual billboard / GL_Point code, of course). And when you've reached scale 1, you swap position with nextPosition and fill nextPosition with the following set of positions.
Good luck have fun :)
PS: My code mentioned above is just linear interpolation. In your case you might consider other interpolations; maybe even add another two attribute vectors holding the preceding and following points so that you can calculate the new position with a Bézier curve.
Lastly, you'll have to give some thought to performance sooner or later. If you have 10k particles and 1k "states", you might run into performance issues.
I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinates of each particle and then draw a correctly scaled, textured rectangle at that screen location.
It works fine, but I have trouble calculating the correct depth value so that the sprite will correctly overdraw, and be overdrawn by, 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at that screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I have tried various approaches without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading directly from that, you'd transform as:
z_clip = -( (far + near) / (far - near) ) * z_eye - ( (2 * far * near) / (far - near) )
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
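In the question's MonoTouch/OpenTK C# setup, that works out to something like the sketch below; it assumes screenPos is the eye-space position from the earlier Vector3.Transform call and that near and far match the projection (1 and 10000 in the question):

// zEye is negative for points in front of the camera.
float zEye = screenPos.Z;
float zClip = -((far + near) / (far - near)) * zEye
              - ((2f * far * near) / (far - near));
float wClip = -zEye;
float zDevice = zClip / wClip; // depth in [-1,1], usable as the sprite's z coordinate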
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.