I really have no problem (the app is running); what I want is help understanding this.
The problem
Pick from isometric tiles
Conditions
Use transformation matrices
Reference
Reference tutorial
My understanding problem (lol)
I don't understand the final part:
touch.mul(invIsotransform);
Why the inverted matrix?
That tutorial describes the math for a transform that converts a point in Cartesian coordinates to a point in isometric coordinates. But when you touch the screen, the touch point is visually in isometric coordinates, and you want to convert it back to Cartesian coordinates to easily pick the correct tile. Inverting the matrix produces a new matrix that performs the opposite transformation, going from isometric back to Cartesian.
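To make that concrete, here is a minimal NumPy sketch (not the tutorial's libGDX code; the matrix assumes the common 2:1 isometric convention, and all names are illustrative):

import numpy as np

# Cartesian -> isometric transform, assuming a 2:1 isometric view:
# rotate the grid 45 degrees and halve the vertical axis.
iso = np.array([
    [1.0, -1.0],
    [0.5,  0.5],
])

# Inverting it yields the isometric -> Cartesian transform used for picking.
iso_inv = np.linalg.inv(iso)

# A touch point in (isometric) screen space...
touch = np.array([120.0, 80.0])

# ...maps back to Cartesian grid coordinates; flooring gives tile indices.
cart = iso_inv @ touch
tile = np.floor(cart).astype(int)
print(cart, tile)

This mirrors touch.mul(invIsotransform): the touch vector is multiplied by the inverse of the Cartesian-to-isometric matrix.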
Related
With a camera inside a cylinder I capture an image. I want to transform that image onto a 2D plane. The image inside the cylinder has a lot of dots which form a grid.
What I tried to do was estimate the transformation. With blob analysis I can detect the center of each dot and obtain its coordinates in pixels. I save these in a matrix called ImCilynder. After that I create a matrix with the coordinates of those points in the plane, named Im2d.
I calculate the transformation H by solving the equation:
ImCilynder * H = Im2d
where H is a [9x1] matrix, so:
H = pinv(ImCilynder) * Im2d
But when I test with the same points, the result is completely random, so I'm doing something wrong.
Is there a better way to solve this? Can you help me?
To explain better: I'm trying to find the transformation which maps the image above to this image:
So, to clarify, I want the projection of the points I see in the first image onto a plane. Basically, I want to unwrap the cylinder.
After calculating the transformation matrix, I expect to multiply the first image by the transformation matrix and obtain the points in the plane, or to multiply the coordinates of the centers of the black dots and obtain the coordinates of those dots in the plane. Is this possible?
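For comparison, the usual least-squares formulation for a 3x3 projective transform H stacks two equations per point pair and solves the homogeneous system with an SVD, rather than applying pinv directly; the key detail is the divide by the homogeneous coordinate. A minimal NumPy sketch (all names are mine, and note that a single projective H is only an approximation for a cylinder-to-plane mapping):

import numpy as np

def estimate_projective(src, dst):
    # src, dst: (N, 2) arrays of corresponding points, N >= 4.
    # Build A so that A @ h = 0 for the true h (the DLT formulation).
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows)
    # h is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def apply_projective(H, pts):
    # Apply H in homogeneous coordinates, then divide by the last coordinate.
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

Testing apply_projective(H, src) against dst on the same points is the sanity check described above; if the fit is still poor, the mapping is simply not projective.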
Thank you very much,
Afonso
Well, what do you wish to have in the plane? The circles forming a grid? If that is the case, you need to remove the radial distortion; these kinds of models are represented by some parameters and are non-linear, by the way. If you can find a very good algorithm, you are going to obtain something like this:
If this is not your idea, you need to apply an elastic transformation, and this kind of transformation needs a grid that serves as the model of the transformation, which you need to propose. If you want to do this automatically, you need to resort to elastic registration algorithms, and you can use a model like this one:
Anyway, this is not a trivial task; there is a lot of research on complex transformations, of course, if you want to obtain the transformation automatically. Otherwise you can use Photoshop ;).
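If the camera looks roughly along the cylinder axis, so the dotted wall appears as an annulus around the image center, a plain polar unwarp is a cheap first step before worrying about distortion models. A hedged OpenCV sketch (the file name, center, and radius are placeholders you would measure in your own image):

import cv2

img = cv2.imread("cylinder.png")        # placeholder input image
h, w = img.shape[:2]
center = (w / 2.0, h / 2.0)             # assumed: cylinder axis at image center
max_radius = min(w, h) / 2.0            # assumed: annulus fits inside the frame

# Map the annulus onto a rectangle: angle along one axis, radius along
# the other. This only unwraps; it does not correct lens distortion.
unwrapped = cv2.warpPolar(img, (w, h), center, max_radius,
                          cv2.WARP_POLAR_LINEAR | cv2.INTER_LINEAR)
cv2.imwrite("unwrapped.png", unwrapped)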
I am working on a Perspective camera. The constructor must be:
PerspectiveCamera::PerspectiveCamera(Vec3f &center, Vec3f &direction, Vec3f &up, float angle)
This construction is different from most others, as it lacks near and far clipping planes. I know what to do with center, direction, and up -- the standard look-at algorithm.
We can construct the view matrix and translation matrix accordingly:
Thus, the viewing transformation is:
For an orthographic camera (which is working correctly for me), the inverse transformation is used to go from screen space to world space. The camera coordinates go from (-1,-1,0) --> (1,1,0) in screen space.
For perspective transformation, only the field of view is given. The Wikipedia 3D projection article gives a perspective projection matrix using the field of view angle and assuming camera coordinates go from (-1,-1) --> (1,1):
In my code, (ex,ey,ez) are the camera coordinates that go from (-1,-1, ez) --> (1,1, ez). Note that the 1 in the (3,3) spot of K isn't in the Wikipedia article -- I put it in to make the matrix invertible. So that may be a problem.
But anyways, for perspective projection, I used this transformation:
K inverse is multiplied with p to map the canonical view volume to a view frustum, and the result of that is multiplied with M inverse to move into world coordinates.
I get the wrong results; my rendered output does not match the expected one.
Am I using the right algorithm for perspective projection, given my constraint (no near and far plane inputs)?
Just in case somebody else runs into this issue: the method presented in the question is not the proper way to create a viewing frustum. The perspective matrix (K) is for projecting the far plane onto the near plane, and we don't have those planes in this case.
To create a frustum, do the inverse transformation on (x, y, ez) (as opposed to (x, y, 0) for orthographic projection). Find a new direction by subtracting the transformed point from the center of projection. Shoot the ray.
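For anyone implementing this, here is a small NumPy sketch of an equivalent direct ray construction (all names are mine; it assumes a square image, screen coordinates in [-1, 1], and that angle is the full field of view):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def make_ray(center, direction, up, angle, x, y):
    # Orthonormal camera basis from the look-at inputs.
    d = normalize(direction)
    right = normalize(np.cross(d, up))
    true_up = np.cross(right, d)
    # The field of view alone fixes the frustum's opening angle;
    # no near/far planes are needed just to generate rays.
    half = np.tan(angle / 2.0)
    target = d + x * half * right + y * half * true_up
    return center, normalize(target)    # ray origin and direction

origin, ray_dir = make_ray(
    np.array([0.0, 0.0, 0.0]),    # center of projection
    np.array([0.0, 0.0, -1.0]),   # view direction
    np.array([0.0, 1.0, 0.0]),    # up vector
    np.pi / 3, 0.25, -0.5)        # 60-degree fov, a sample screen point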
My problem involves matching a set of 2D points to a set of 3D points, with known correspondences between the two. Basically, I have points on an image, and I need the optimal translation and rotation to fit the points to a known 3D point cloud. The Kabsch algorithm is originally meant for finding the best fit of one 3D point cloud to another, and there are implementations out there for 2D to 2D, but not something I can use. I do know it's possible, but I just don't know how to go about it. I searched for code out there and came up empty. I'm programming in MATLAB at the moment, but any language would do.
Thank you.
Edit: The goal is to get a rotation and translation of the 3D point cloud that best matches the 2D points when it is projected onto the image plane.
I should also mention that the 3D-to-2D projection is done using a weak perspective.
So basically, you have a "plane" or a "line" of points, as if the third dimension were 0. You could treat them like this and use the typical Kabsch algorithm of squared-distance minimization, couldn't you?
EDIT: Maybe it's nonsense, but what about projecting the 3D body to 2D coordinates and doing a 2D comparison? It's computationally expensive, since it involves exploring all the angles of the 3D object plus the projection, but it's easier to lose one dimension by applying a projection than to add a new dimension to a 2D point.
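Following that idea, a hedged NumPy sketch: under weak perspective, projection is just rotate, drop z, and scale, so for each candidate 3D rotation you can solve the leftover 2D alignment with a 2D Kabsch/Procrustes step and keep the rotation with the smallest residual. All names are made up, and the search over rotations is deliberately left out:

import numpy as np

def kabsch_2d(P, Q):
    # Best rotation R and translation t with R @ P + t ~ Q.
    # P, Q: (2, N) arrays of corresponding 2D points.
    pc = P.mean(axis=1, keepdims=True)
    qc = Q.mean(axis=1, keepdims=True)
    H = (P - pc) @ (Q - qc).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, qc - R @ pc

def weak_perspective_residual(R3, scale, pts3d, pts2d):
    # pts3d: (3, N), pts2d: (2, N). Rotate, drop z, scale, align in 2D.
    proj = scale * (R3 @ pts3d)[:2]
    R2, t = kabsch_2d(proj, pts2d)
    return np.linalg.norm(R2 @ proj + t - pts2d)

Sampling candidate rotations R3 (e.g. over Euler angles) and minimizing this residual gives a best-fit pose; a proper solver would then refine it with nonlinear least squares.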
I am learning camera matrix stuff. I already know that I can get the homography of the camera (a 3x3 matrix) using four points that lie in a plane in object space. I want to know whether we can get the homography with four points not in a plane. If yes, how can I get the matrix? What formulas should I look at?
I have also confused homography with another concept: I only need to know three points if I want to convert points from one coordinate system to another. So why do we need four points to compute a homography?
A homography maps:
1. Points on one plane to points on another plane.
2. Projections of points in 3D (not necessarily lying on the same plane) during a pure camera rotation or zoom.
The latter can be easily verified if you look at the rays that connect the points while the sensor plane rotates: green shows two sensor positions and black is a 3D object.
Since a homography is between projections and not between objects in 3D, you don't care what these projections represent. But this can be confusing, I agree. For example, you can point your camera at a 3D scene (that is not flat!), then rotate your camera, and the two resulting pictures of the scene will be related by a homography. This is, by the way, a foundation for image panoramas.
The three point correspondences you mentioned may relate to a transformation called affine (which happens at large zooms, when perspective effects disappear) or to finding a rigid rotation and translation in 3D space. Both require 3 point correspondences, but the former needs only 2D points while the latter needs 3D points. The latter case has 6 DOF (3 for rotation and 3 for translation) while each correspondence provides 2 DOF, hence 6/2 = 3 correspondences. A homography has 8 DOF, so there should be 8/2 = 4 correspondences.
Below is a little diagram that explains the difference between affine and homography transformations when the original square tilts forward. In the affine case the perspective effect is negligible, that is, the far side has the same length as the near one. In the case of a homography the far side is shorter.
If you only have 4 points - and they're not on the same plane - then computing a homography will not work.
If you have loads of points, and 4 of them do lie on a plane but some don't, there are filters you can use to try to remove the ones not lying on the plane. The filters implemented by OpenCV are called RANSAC and LMeDS.
Also, as Hammer says in a comment under your question: the 4th point is there to figure out perspective.
A homography is a 3x3 matrix with 8 independent unknowns, and each point correspondence provides 2 equations, which means it requires at least 4 correspondences (8 equations) to solve for these unknowns. So, in order to calculate a homography, we need at least 4 points.
In a homography we assume that Z = 0 in the world scene, so the projected image is assumed to be 2D. In the well-known ORB-SLAM paper, the authors formulated a scene-selective approach that depends on the motion parallax in the scene.
A homography is the relation between two planes, and the homography transform has 8 degrees of freedom; hence you need a minimum of 4 corresponding points.
4 points give you 4 pairs of (x, y), i.e., 8 equations, with which you can calculate the 8 unknowns. A homography is a homogeneous transform, defined only up to scale, hence the (3,3) value of the homography matrix is conventionally normalized to 1.
So, for your first question, whether you can calculate a homography with 3 points on the plane and the 4th not on the plane: it's not possible. You need the projection of that point onto the plane, and then you can calculate the homography.
For your 2nd question, about how to calculate the homography matrix, you can look at the implementation of findHomography() in OpenCV.
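A minimal usage sketch (the point values are made up; with more than 4 points, cv2.RANSAC rejects correspondences that don't fit the plane, per the earlier answer):

import cv2
import numpy as np

# Four or more corresponding points, assumed to lie on one plane.
src = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst = np.float32([[10, 5], [105, 12], [98, 110], [3, 95]])

# method=cv2.RANSAC with a reprojection threshold filters off-plane outliers.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(H / H[2, 2])   # H is defined up to scale; normalize so H[2][2] == 1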
I have two points that describe a line. The problem is that I know the coordinates of one for an orthographic matrix (i.e. 150x250x0), and the coordinates of the second for a perspective matrix (0.5x0.5x20.0f). I would like to translate the orthographic coordinates to perspective ones so I can draw the line using a GLSL shader :). How do I accomplish this task?
You need to move one of your vertices into the other matrix's space. For example, let's move 150x250x0 from orthographic to perspective space. To do this you need to transform your vertex by the inverted orthographic matrix. I don't know what math library you use; maybe it already has a function for matrix inversion. Otherwise use the code from this link: http://www.gamedev.net/topic/180189-matrix-inverse/ . After this step your vertex is in world space. From there, apply your perspective matrix (and the perspective divide) to get it into perspective space.
PS: Matrix inversion takes significant computation time. If you can track the transformation steps (translation, rotation, and scale), an easier approach is to invert those steps separately and compose a matrix from them afterwards.
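A small NumPy sketch of those steps (both matrices are placeholders standing in for your real orthographic and perspective matrices):

import numpy as np

def ortho_to_perspective(v_ortho, ortho, persp):
    v = np.append(v_ortho, 1.0)           # homogeneous coordinates
    world = np.linalg.inv(ortho) @ v      # 1) undo the orthographic transform
    clip = persp @ world                  # 2) apply the perspective transform
    return clip[:3] / clip[3]             # 3) perspective divide

# Placeholder glOrtho(0, 800, 0, 600, 1, 100)-style matrix; use your own.
ortho = np.array([
    [2 / 800, 0,       0,        -1],
    [0,       2 / 600, 0,        -1],
    [0,       0,       -2 / 99,  -101 / 99],
    [0,       0,       0,         1],
])
# Placeholder perspective matrix (near = 1, far = 100); use your own.
persp = np.array([
    [1, 0,  0,          0],
    [0, 1,  0,          0],
    [0, 0, -101 / 99,  -200 / 99],
    [0, 0, -1,          0],
])
print(ortho_to_perspective(np.array([150.0, 250.0, 0.0]), ortho, persp))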