Axis transformation in Python

I am trying to get the position of a target picked up by my aircraft radar in the ECEF frame, from the azimuth, elevation and slant range data measured with respect to the body axes of my aircraft. Can someone help?
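One common approach is measurement → body frame → local NED → ECEF. Below is a minimal Python/numpy sketch of that chain, assuming azimuth is measured from the nose toward the right wing, elevation is positive upward, the body axes are x-forward / y-right / z-down, and the aircraft's attitude (roll, pitch, yaw) and WGS-84 geodetic position are known; all numeric inputs are placeholders.

import numpy as np

def geodetic_to_ecef(lat, lon, alt):
    """WGS-84 geodetic coordinates (radians, metres) to ECEF (metres)."""
    a, f = 6378137.0, 1.0 / 298.257223563
    e2 = f * (2.0 - f)
    n = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
    return np.array([
        (n + alt) * np.cos(lat) * np.cos(lon),
        (n + alt) * np.cos(lat) * np.sin(lon),
        (n * (1.0 - e2) + alt) * np.sin(lat),
    ])

def body_to_ned(roll, pitch, yaw):
    """Rotation matrix taking body-frame vectors to local NED (ZYX Euler angles)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def ned_to_ecef(lat, lon):
    """Rotation matrix taking local NED vectors to ECEF at the given latitude/longitude."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([
        [-sl * co, -so, -cl * co],
        [-sl * so,  co, -cl * so],
        [ cl,      0.0, -sl],
    ])

# Illustrative inputs (angles in radians, distances in metres).
az, el, rng = np.radians(30.0), np.radians(-5.0), 15000.0    # radar measurement
roll, pitch, yaw = np.radians([2.0, 3.0, 90.0])              # aircraft attitude
lat, lon, alt = np.radians(40.0), np.radians(-75.0), 9000.0  # aircraft position

# 1. Radar measurement to a Cartesian line-of-sight vector in the body frame.
los_body = rng * np.array([
    np.cos(el) * np.cos(az),
    np.cos(el) * np.sin(az),
    -np.sin(el),
])

# 2. Body -> local NED using the aircraft attitude, then NED -> ECEF offset.
los_ecef = ned_to_ecef(lat, lon) @ body_to_ned(roll, pitch, yaw) @ los_body

# 3. Add the aircraft's own ECEF position to get the target's ECEF position.
target_ecef = geodetic_to_ecef(lat, lon, alt) + los_ecef
print(target_ecef)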

Related

How to calculate screen coordinates after transformations?

I am trying to solve a question related to the transformation of coordinates in 3-D space, but I'm not sure how to approach it.
Let's say a vertex point named P is drawn at the origin with a 4x4 transformation matrix. It's then viewed through a camera that is positioned with a model-view matrix, and then through a simple projective transform matrix.
How do I calculate the new screen coordinates of P' (x,y,z)?
Before explaining the pipeline, you need to know how the pipeline processes a vertex in order to draw it on screen.
Every step in between is just a matrix multiplied with a vector:
Model - World - Camera - Projection (or Normalized Coordinates) - Screen
First step: we call it 'model space' because (0,0,0) is defined relative to the model itself.
Second, we need to move from model space to world space, because we are going to place the model in the world. The transform is TRS * Model(Vector4), where TRS is the combined translate, rotate and scale, and each model's world transform will be different.
After doing that, the model is placed in the world.
Third, we need to move to camera space, because what we see is seen through the camera. In the world, the camera also has a position, a viewport size and a rotation, and the scene needs to be projected from the camera; see
General Formula for Perspective Projection Matrix
Once that is done, you get normalized device coordinates, which lie in a small fixed range (0 to 1, or -1 to 1, depending on the convention).
Finally, screen space. Suppose we are making a video game for mobile: mobile devices come in many screen resolutions, so how do we handle that? Simple: scale and translate to get the result in screen-space coordinates, because the origin and the screen size differ from device to device.
So what you are trying to do is step 4.
If you want to get the screen position of P1 from world space, the formula is "Screen Matrix * Projection Matrix * Camera Matrix * P1".
If you want its position from camera space, it is "Screen Matrix * Projection Matrix * P1".
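To make those formulas concrete, here is a minimal Python/numpy sketch of the whole chain; the projection convention and matrix values are illustrative stand-ins, not any specific engine's.

import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """An OpenGL-style perspective projection matrix (one common convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

model = np.eye(4)                      # model -> world (the object's TRS)
view = np.eye(4)                       # world -> camera (inverse camera transform)
view[2, 3] = -5.0                      # camera sits 5 units back from the origin
proj = perspective(60.0, 16 / 9, 0.1, 100.0)

p_model = np.array([0.0, 0.0, 0.0, 1.0])       # vertex P at the model origin
p_clip = proj @ view @ model @ p_model         # steps 1-3: one matrix per space
p_ndc = p_clip[:3] / p_clip[3]                 # perspective divide -> normalized coordinates

# Step 4: viewport (screen) transform - scale and translate to pixel coordinates.
width, height = 1920, 1080
x_screen = (p_ndc[0] * 0.5 + 0.5) * width
y_screen = (1.0 - (p_ndc[1] * 0.5 + 0.5)) * height   # screen origin at the top-left
print(x_screen, y_screen)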
Here are some useful links for understanding the matrices and the calculations:
https://answers.unity.com/questions/1359718/what-do-the-values-in-the-matrix4x4-for-cameraproj.html
https://www.google.com/search?q=unity+camera+to+screen+matrix&newwindow=1&rlz=1C5CHFA_enKR857KR857&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjk5qfQ18DlAhUXfnAKHabECRUQ_AUIEigB&biw=1905&bih=744#imgrc=Y8AkoYg3wS4PeM:

How to translate WKT expression to D3 options on Albers projection?

This is the standard WKT expression (here also translated to a Proj.4 string) of the Albers conicEqualArea projection for the official Statistical Grid of Brazil:
PROJCS["Conica_Equivalente_de_Albers_Brasil",
GEOGCS["GCS_SIRGAS2000",
DATUM["D_SIRGAS2000",
SPHEROID["Geodetic_Reference_System_of_1980",6378137,298.2572221009113]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Albers"],
PARAMETER["standard_parallel_1",-2],
PARAMETER["standard_parallel_2",-22],
PARAMETER["latitude_of_origin",-12],
PARAMETER["central_meridian",-54],
PARAMETER["false_easting",5000000],
PARAMETER["false_northing",10000000],
UNIT["Meter",1]]
The DATUM is WGS 84 ("SIRGAS2000" is an alias for it).
How to translate all details to the D3.js v5 parametrization?
I tried the obvious, such as center and parallels, but it was not sufficient:
var projection = d3.geoConicEqualArea()
.parallels([-2,-22]) // IS IT?
.scale(815)
//.rotate([??,??]) // HERE THE PROBLEM...
.center([-54, -12]) // IS IT?
PS: where is the D3 documentation for it? The D3 source code of geoConicEqualArea() has no clues.
The parts that translate to a d3 Albers projection are as follows:
PROJECTION["Albers"],
PARAMETER["standard_parallel_1",-2],
PARAMETER["standard_parallel_2",-22],
PARAMETER["latitude_of_origin",-12],
PARAMETER["central_meridian",-54],
You have the parallels; now you need to rotate. Also note that for any D3 projection, the rotation is applied before centering, so the centering coordinates are relative to the rotation. Generally, you'll want to rotate on the x and center on the y:
d3.geoAlbers()
.parallels([-2,-22])
.center([0,-12])
.rotate([54,0])
.translate([width/2,height/2])
.scale(k)
I've rotated in the opposite direction along the x axis (rotating the earth under me so that I end up over the central meridian, hence the rotation by minus the central meridian). I've then centered on the y. Lastly, I translate so that the intersection of the central meridian and the latitude of origin is centered in the map, and apply an appropriate scale value.
If I want to center on a different area but keep the projection the same, I can modify projection.center(), but keep in mind that the coordinates provided here are relative to the rotation. I can also use projection.fitSize() or projection.fitExtent(), both of which set 'translate' and 'scale' values for the projection. None of center/scale/translate change the distortion in the D3 projection.
Of course this isn't a true replication of your projection, as the coordinate space units are pixels; you won't be able to measure distances in meters directly without some extra work.
See also

Can I get the TangoPose relative to gravity?

I'm using Tango motion tracking and it is very easy to get the pose of the device relative to TANGO_START_OF_SERVICE. The translation works fine for me, but I'd like my orientation to be aligned with gravity, so that the pitch and roll angles are relative to gravity rather than to the arbitrary orientation at which the Tango service started. I'm fine with an arbitrary azimuth angle.
I can do this by using the accelerometer data to get the absolute orientation at one point in time and then use that going forward, but is there an easier way?
I think the Z axis of TANGO_COORDINATE_FRAME_CAMERA_DEPTH frame is always aligned with gravity.
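For the accelerometer fallback described in the question, here is a minimal sketch of the standard tilt-from-gravity formulas, assuming a single static reading in the device frame (axis conventions differ between devices, so treat the signs as illustrative):

import numpy as np

# One static accelerometer sample in the device frame (m/s^2), placeholder values.
ax, ay, az = 0.3, 0.1, 9.7

# Standard tilt-from-gravity formulas; only valid while the device is not accelerating.
roll = np.arctan2(ay, az)
pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
# Yaw/azimuth is unobservable from gravity alone (consistent with accepting an
# arbitrary azimuth): keep it arbitrary and apply this one-time correction to later poses.
print(np.degrees(roll), np.degrees(pitch))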

Is it possible to know the amount of rotation one made so far in three.js?

I've built a scene with a cube and a camera placed in the center of the cube. I am able to rotate the cube, and ideally I would like to limit the rotation to horizontal.
Is there any way to count the amount of rotation? For example, being able to know at any moment that I have rotated 30 radians in one direction, with the amount updated accordingly if I rotate back.

Camera matching application

I am trying to build a simple camera matching (or match moving) application. The functionality is the same as that in most 3d applications like 3ds Max or Maya. Given an image of a cube and a 3d model of the cube, the user selects points on the image corresponding to each vertex of the model. The application must then generate a camera view that displays the 3d cube model from the same angle as shown in the image.
Can anyone point me in the direction of an algorithm for that?
PS: The camera is calibrated and the camera calibration matrix is available to the program.
You can try with the algorithm illustrated step-by-step on http://www.offbytwo.net/camera-matching/. The Octave source code is provided, too.
As a plus, you don't need to start with a cube, but just with any two edges parallel to the x axis and two in the y direction.
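The linked write-up walks through its own Octave construction; as an alternative sketch of the same calibrated 2D-3D correspondence problem, OpenCV's solvePnP recovers the camera pose directly from the clicked vertices. The point values and intrinsics below are placeholders.

import numpy as np
import cv2

# 3D vertices of the cube in model coordinates (unit cube, illustrative).
object_points = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
], dtype=np.float64)

# Pixel positions the user clicked for each vertex (placeholder values).
image_points = np.array([
    [320, 400], [480, 410], [470, 300], [315, 295],
    [330, 240], [485, 250], [475, 150], [325, 140],
], dtype=np.float64)

# Known camera calibration matrix (focal lengths and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)       # model-to-camera rotation
camera_position = -R.T @ tvec    # camera centre expressed in model coordinates
print(ok, camera_position.ravel())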
