Get position from Translation and Scale Matrix (Direct2D)

I have a 2D camera defined by a Direct2D 3x2 matrix like this:
ViewMatrix = ScaleMatrix * TranslationMatrix;
But when trying to do hit testing, I reach a point where I need to know my camera's X,Y coordinates. I tried to keep track of them in a vector, but without success; scaling around an offset center complicates the work a lot.
So I guess it should be possible to find my camera coordinates from these two matrices, right? But how?
Thanks a lot for the help.
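A minimal sketch of one way to recover it, assuming Direct2D's row-vector convention and a view matrix built exactly as ViewMatrix = ScaleMatrix * TranslationMatrix: the composed 3x2 matrix then keeps the scale in _11/_22 and the translation in _31/_32, so a screen point can be mapped back into world space by undoing the translation and then the scale (or by inverting the matrix with D2D1::Matrix3x2F::Invert and calling TransformPoint). JavaScript-style pseudocode for illustration, not the Direct2D API:

// view = Scale(sx, sy) * Translate(tx, ty)  =>  _11 = sx, _22 = sy, _31 = tx, _32 = ty
function screenToWorld(view, screenX, screenY) {
  // undo the translation, then undo the scale
  return {
    x: (screenX - view._31) / view._11,
    y: (screenY - view._32) / view._22
  };
}

For example, screenToWorld(view, 0, 0) gives the world-space point currently at the top-left of the view, which is one common notion of the camera's position.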

Related

How to calculate screen coordinates after transformations?

I am trying to solve a question related to transformation of coordinates in 3-D space, but I'm not sure how to approach it.
Let's say a vertex point named P is drawn at the origin with a 4x4 transformation matrix. It's then viewed through a camera that's positioned with a model-view matrix, and then through a simple projective transform matrix.
How do I calculate the new screen coordinates of P' (x,y,z)?
Before explaining the pipeline, you need to know how the pipeline processes a vertex to draw it on screen.
Every step in between is just a matrix multiplied with a vector:
Model - World - Camera - Projection (or Normalized Coordinates) - Screen
First step: we call it 'model space' because (0,0,0) is relative to the model itself.
Second, we need to move from model space to world space, because we are going to place the model in the world. The transform is TRS (translate, rotation, scale): TRS * Model(Vector4). The world transform is different for each object. After this, the model is placed in the world.
Third, we need to move into camera space, because what we see is seen through the camera. In the world, the camera also has a position, a viewport size, and a rotation. The scene then needs to be projected from the camera; see
General Formula for Perspective Projection Matrix
After this is done, you get normalized coordinates, which are technically 0-1 coordinates.
Finally, screen space. Suppose we are making a video game for mobile: mobile devices come in many screen resolutions, so how do we handle that?
Simple: scale and translate to get the result in screen-space coordinates, because the origin and the screen size differ per device.
So what you are trying to do is 'step 4'.
If you want to get the screen position of a world-space point P1, the formula is "Screen Matrix * Projection Matrix * Camera Matrix * P1".
If you want to get the position of a point already in camera space, it is "Screen Matrix * Projection Matrix * P1".
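A minimal sketch of that chain in JavaScript (the 4x4 matrix layout, the helper names, and the [-1, 1] NDC range are assumptions here; swap in your own math library and conventions):

// Multiply a 4x4 matrix (row-major, m[row][col]) with a column vector [x, y, z, w].
function mulMat4Vec4(m, v) {
  var r = [0, 0, 0, 0];
  for (var i = 0; i < 4; i++)
    for (var j = 0; j < 4; j++)
      r[i] += m[i][j] * v[j];
  return r;
}

// World-space point -> pixel coordinates (camera, projection, then the "screen matrix" step).
function worldToScreen(cameraMatrix, projectionMatrix, worldPoint, screenWidth, screenHeight) {
  var eye  = mulMat4Vec4(cameraMatrix, worldPoint);      // world -> camera space
  var clip = mulMat4Vec4(projectionMatrix, eye);         // camera -> clip space
  var ndcX = clip[0] / clip[3];                          // perspective divide -> [-1, 1]
  var ndcY = clip[1] / clip[3];
  return {
    x: (ndcX * 0.5 + 0.5) * screenWidth,                 // scale + translate into pixels
    y: (1 - (ndcY * 0.5 + 0.5)) * screenHeight           // flip Y for a top-left origin
  };
}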
There are useful links below to help understand the matrices and the calculation:
https://answers.unity.com/questions/1359718/what-do-the-values-in-the-matrix4x4-for-cameraproj.html
https://www.google.com/search?q=unity+camera+to+screen+matrix&newwindow=1&rlz=1C5CHFA_enKR857KR857&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjk5qfQ18DlAhUXfnAKHabECRUQ_AUIEigB&biw=1905&bih=744#imgrc=Y8AkoYg3wS4PeM:

Rotate camera along "eye" direction in rgl

In rgl, you can set up the camera direction with rgl.viewpoint. It accepts theta and phi, polar coordinates: they specify the position of the camera looking at the origin. However, there is still one more degree of freedom: the angle of rotation of the camera around the "eye" vector. I.e. one can imagine two vectors associated with the camera: an "eye" vector and an "up" vector; theta and phi allow one to adjust the "eye" vector, but I then want to adjust the "up" vector as well. Is it possible to do this?
I guess it might be possible with the userMatrix parameter («4x4 matrix specifying user point of view»), but I found no information on how to use it.
The ?par3d help topic documents the rendering process in the "Rendering" section. It's often tricky to accomplish what you're asking for, but in this case it's not too hard:
par3d(userMatrix = rotationMatrix(20*pi/180, 0, 0, 1) %*% par3d("userMatrix"))
will rotate by 20 degrees around the user's z-axis, i.e. line of sight.

how get homography matrix from intrinsic and extrinsic parameters to obtain top view image

I'm trying to get a top-view image from the image captured by a camera in perspective. I already have the intrinsic and extrinsic parameters of the camera with respect to the plane (the ground). The camera is positioned on a robot, pointed at the ground at a certain height and tilt. For this camera position I got:
Intrinsic matrix (3X3):
KK= [624.2745,0,327.0749;0,622.0777,232.3448;0,0,1]
The translation vector and rotation matrix:
T = [-323.708466;-66.960728;1336.693284]
R =[0.0029,1.0000,-0.0034;0.3850,-0.0042,-0.9229;-0.9229,0.0013,-0.3850]
How can I get a homography matrix that gives a top view of the ground (a chessboard pattern laid on the ground) with the information I have?
I'm using MATLAB. I've written the code to apply the matrix H to the captured image, but I still haven't found a way to get this H matrix properly.
Thanks in advance.
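For reference, for points on the ground plane (taking the ground as Z = 0 in the calibration frame), the general pinhole projection reduces to a 3x3 homography built from the intrinsics, the first two columns of R, and T; this is standard textbook math, not specific to your calibration:

s * [u; v; 1] = KK * [r1, r2, T] * [X; Y; 1]   =>   H = KK * [r1, r2, T]

where r1 and r2 are the first two columns of R and s is an arbitrary scale. Warping the captured image with inv(H) (optionally composed with an extra scale and translation so the result lands in positive pixel coordinates) gives the top view of the ground plane.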

webgl - model coordinates to screen coordinates

I'm having an issue with calculating the screen coordinates of a vertex. This is not specifically a WebGL issue, more of a general 3D graphics issue.
The sequence of matrix transformations that I'm using is:
result_vec4 = perspective_matrix * camera_matrix * model_matrix * vertex_coords_vec4
model_matrix being the transformation of a vertex from its local coordinate system into the global scene coordinate system.
So my understanding is that the final result_vec4 is in clip space, which should then be in the [-1,1] range? That is not what I'm getting: result_vec4 just ends up containing values that don't correspond to the correct screen position of the vertex.
Does anyone have any ideas as to what might be the issue here?
Thank you very much for any thoughts.
To get normalized device coordinates, you need to project result_vec4 onto the hyperplane w = 1 using:
result_vec4 /= result_vec4.w
By applying this perspective division, result_vec4.xyz will be in [-1,1] (for points inside the view frustum).
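And if the goal is an actual pixel position, the divided x and y just need the viewport scale and translate afterwards (the canvas size names here are placeholders):

// after result_vec4 /= result_vec4.w, x and y are in NDC ([-1, 1])
var pixelX = (result_vec4.x * 0.5 + 0.5) * canvasWidth;
var pixelY = (1 - (result_vec4.y * 0.5 + 0.5)) * canvasHeight;  // canvas origin is top-left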

Can't center a 3D object on screen

Currently, I'm taking each corner of my object's bounding box, converting it to Normalized Device Coordinates (NDC), and keeping track of the maximum and minimum NDC values. I then calculate the middle of the NDC range, find the corresponding point in the world, and have my camera look at it.
<Determine max and minimum NDCs>
centerX = (maxX + minX) / 2;                      // centre of the bounding box in NDC
centerY = (maxY + minY) / 2;
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);          // NDC -> world space
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance)); // target at a fixed distance
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal (ignoring the sign) when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers, and as I step closer and closer the 'error' between the numbers grows bigger and bigger. I.e. the errors for the first few centers are: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and an elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point, though, I look too far to the right, and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is 1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move that distance in that direction, i.e. 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane through that point that is parallel to your camera's image plane, i.e. perpendicular to the direction the camera is facing.
Next, raycast from your camera to that plane in the direction the camera is facing; the difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This should work irrespective of the direction or position of the camera and object.
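A rough sketch of that approach with three.js (camera and objectCenter are assumed to already exist, and the exact calls may need adjusting for your three.js version):

// Plane through the object's centre, parallel to the camera's image plane.
forward = camera.getWorldDirection(new THREE.Vector3());
plane = new THREE.Plane().setFromNormalAndCoplanarPoint(forward, objectCenter);

// Ray from the camera along its viewing direction, intersected with that plane.
hit = new THREE.Ray(camera.position.clone(), forward).intersectPlane(plane, new THREE.Vector3());

// The offset between the object's centre and the hit point is how far the camera
// has to slide (sideways/up-down) to put the object in the middle of the view.
if (hit) {
  camera.position.add(objectCenter.clone().sub(hit));
  camera.updateMatrixWorld();
}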
You are running into a "which came first" problem: the chicken or the egg. Every time you change the camera attributes, you are effectively changing where your object is projected in NDC space, so even though you think you are getting close, you will never quite get there.
Look at the problem from a different angle: place your camera somewhere, try to make it as canonical as possible (i.e. give it a 1:1 aspect ratio), and place your object along the camera's z-axis. Is this not possible?
