skew matrix algorithm

I'm looking for a skew algorithm, just like the one in Photoshop (Edit -> Transform -> Skew).
Is there any simple matrix that could do that?
What I've seen so far is the basic skew (shear) matrix, but it lacks control points; it's not like Photoshop, which has a control point on each corner of the rectangle, each of which can be moved freely.
I need to implement it to transform a plane.

Looking at http://www.w3.org/TR/SVG11/coords.html, which talks about SVG, it says:
A skew transformation along the x-axis is equivalent to the matrix [1 0 tan(a) 1 0 0], which has the effect of skewing X coordinates by angle a.
A skew transformation along the y-axis is equivalent to the matrix [1 tan(a) 0 1 0 0], which has the effect of skewing Y coordinates by angle a.
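Reading those six values in SVG's [a b c d e f] column order, here is a minimal sketch of applying the two skews to a point (the function names are mine, just for illustration):

```python
import math

def skew_x(px, py, angle_deg):
    """Apply the SVG skewX matrix [1 0 tan(a) 1 0 0]:
    x' = x + tan(a)*y, y' = y."""
    t = math.tan(math.radians(angle_deg))
    return (px + t * py, py)

def skew_y(px, py, angle_deg):
    """Apply the SVG skewY matrix [1 tan(a) 0 1 0 0]:
    x' = x, y' = y + tan(a)*x."""
    t = math.tan(math.radians(angle_deg))
    return (px, py + t * px)

# Skewing the unit square's top-right corner by 45 degrees along x
# slides it one unit to the right:
print(skew_x(1.0, 1.0, 45.0))  # approximately (2.0, 1.0)
```

Note that a single shear only slides points along one axis. Letting all four corners of a rectangle move independently, as Photoshop's free distort does, requires a projective transform (homography) rather than a plain shear matrix.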
Hope that helps! :)

Related

Algorithm for generating triangle meshes from vertex array

Let's say I have a set of points in 3D. The points are uniformly spaced on the x and y axes, so one can think of them as a function z = f(x,y). As an example, x can be from {0,1,2} and y from {0,1,2}, giving us a total of nine 3D points on a square grid. I am trying to implement a simple algorithm to generate a triangle mesh from these points, given their coordinates. I do not know much about mesh generation, but I do know that my points are evenly spaced in the x and y dimensions on a grid. So if my points were of the form:
0 0 0
0 5 0
0 0 0
Where the row number represents the y coordinate, the column number represents the x coordinate, and the value represents the z coordinate. This set of points should generate a triangular mesh that looks like a square-based pyramid with its peak at (1,1,5). I am looking for a simple algorithm that I could code up to generate such a mesh, given the specifics of this problem.
I have heard of Delaunay triangulation, but am not sure if it is applicable to this problem. Thanks.
A very easy solution is to consider the four vertices of every grid cell and create two triangles from them, splitting the cell along one diagonal.
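As a sketch of that idea (the row-major index layout and the choice of diagonal are my own, not from the answer), the triangle indices for a grid of vertices can be generated like this:

```python
def grid_triangles(rows, cols):
    """Triangulate a rows x cols grid of vertices (row-major indexing).
    Each cell is split along one diagonal into two triangles."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c  # top-left vertex of the cell
            tris.append((i, i + 1, i + cols))              # first triangle
            tris.append((i + 1, i + cols + 1, i + cols))   # second triangle
    return tris

# A 3x3 grid (nine vertices, four cells) yields eight triangles:
print(len(grid_triangles(3, 3)))  # 8
```

The triangle tuples index into the flat vertex array, so the pyramid example above needs no extra bookkeeping: Delaunay triangulation is unnecessary when the points already lie on a regular grid.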

Understanding Matrices - Reading Rotation

I am trying to learn more about matrices. If I have a 4x4 matrix such as:
0.005 0.978 -0.20 60.62
-0.98 -0.027 0.15 -18.942
-0.15 0.20 0.96 -287.13
0 0 0 1
Which part of the matrix tells me the rotation that is applied to an object? I know that column 4 is the position of the object, and I suspect rows 1, 2 and 3 are the x, y and z rotation?
Thanks in advance.
The first three columns are directional vectors in the x, y, z directions, possibly including scaling of the object. If you imagine a cube, the first column's vector points in the direction of the positive-x-face of the cube, the second in the direction of the positive-y-face and the third in the direction of the positive-z-face.
Note that if object scaling was applied to the matrix (which doesn't appear to be the case in your example), those direction vectors will not be normalized.
But this isn't "rotation" in the Euler-angle or quaternion sense. In fact, extracting angles from this matrix is pretty tricky.
Here are some links that explain how to do it, but this comes with a lot of pitfalls and you should avoid it if it's not absolutely necessary:
http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToEuler/index.htm
http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToEuler/index.htm
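As a small illustration of the column layout described above (assuming the column-vector convention with translation in the last column, as the answer does), using the matrix from the question:

```python
import math

# The example 4x4 matrix from the question (row-major nested lists).
M = [
    [ 0.005,  0.978, -0.20,    60.62],
    [-0.98,  -0.027,  0.15,   -18.942],
    [-0.15,   0.20,   0.96,  -287.13],
    [ 0.0,    0.0,    0.0,      1.0],
]

# Columns 0..2 are the rotated basis vectors; column 3 is the position.
x_axis   = [M[r][0] for r in range(3)]
y_axis   = [M[r][1] for r in range(3)]
z_axis   = [M[r][2] for r in range(3)]
position = [M[r][3] for r in range(3)]

# Per-axis scale is the length of each basis column; a length close to
# 1 means no scaling was baked into the matrix.
scale_x = math.sqrt(sum(v * v for v in x_axis))
print(position)
print(round(scale_x, 2))  # close to 1.0, so no scaling on this axis
```

If the matrix instead uses the row-vector convention (common in some engines), the roles of rows and columns are swapped, so check which convention your library uses before reading axes out of it.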

Bounding ellipse constrained to horizontal/vertical axes

Context: I'm trying to clip a topographic map into the minimum-size ellipse around a number of wind turbines, to minimize the size of the map. The program doing this map clipping can clip in ellipses, but only ellipses with axes aligned along the x and y axes.
I know the algorithm for the bounding ellipse problem (finding the smallest-area ellipse that encloses a set of points).
But how do I constrain this algorithm (or make a different algorithm) such that the resulting ellipse is required to have its major axis oriented either horizontally or vertically, whichever gives the smallest ellipse -- and never at an angle?
Of course, this constraint makes the resulting ellipse larger than it "needs" to be to enclose all the points, but that's the constraint nonetheless.
The algorithm described here (referenced in the link you provided) solves the following optimization problem:
minimize log(det(A^-1))
s.t. (P_i - c)'*A*(P_i - c) <= 1
One can extend this system of inequalities with the following constraint (V is the ellipse rotation matrix; for detailed info refer to the link above):
V == [[1, 0], [0, 1]] // horizontal ellipse
or
V == [[0, -1], [1, 0]] // vertical ellipse
Solving the optimization problem with each of these constraints and comparing the areas of the resulting ellipses will give you the required result.
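In practice you would solve the constrained log-det program with a convex solver. As a rough, purely illustrative stand-in, here is a brute-force sketch that fixes the center at the bounding-box midpoint (a simplification, not part of the real algorithm) and grid-searches the semi-axes of an axis-aligned ellipse:

```python
import math

def min_axis_aligned_ellipse(points, steps=60):
    """Toy sketch: smallest-area axis-aligned ellipse covering all points,
    found by grid search over semi-axes (a, b). The center is fixed at
    the bounding-box midpoint, which is a simplification."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx = (min(xs) + max(xs)) / 2.0
    cy = (min(ys) + max(ys)) / 2.0
    # Upper bounds: sqrt(2) times the half-extents always suffice,
    # since then each term of the ellipse equation is at most 1/2.
    a_max = max(abs(x - cx) for x in xs) * math.sqrt(2) + 1e-9
    b_max = max(abs(y - cy) for y in ys) * math.sqrt(2) + 1e-9
    best = None
    for i in range(1, steps + 1):
        a = a_max * i / steps
        for j in range(1, steps + 1):
            b = b_max * j / steps
            # Feasibility: every point inside ((x-cx)/a)^2+((y-cy)/b)^2 <= 1
            if all(((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
                   for x, y in points):
                if best is None or a * b < best[0] * best[1]:
                    best = (a, b)
    return cx, cy, best

# Corners of a 4x2 rectangle: center (2, 1), semi-axes about (2.83, 1.41).
print(min_axis_aligned_ellipse([(0, 0), (4, 0), (0, 2), (4, 2)]))
```

Running this once with the axes as given and once with x and y swapped (the two V constraints above) and keeping the smaller area mirrors the comparison described in the answer.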

Implement 3D Sobel operator

I am currently working on inhomogeneity removal from MRI data volumes made up of voxels.
I want to apply the Sobel operator to those volumes to find the gradient. I am familiar with the 2D Sobel masks and the neighbourhood of 2D images.
sobel masks (one per derivative direction):
 1  2  1
 0  0  0
-1 -2 -1

 1  0 -1
 2  0 -2
 1  0 -1
neighbourhood of (x,y):
(x+1,y-1) (x+1,y) (x+1,y+1)
(x,y-1)   (x,y)   (x,y+1)
(x-1,y-1) (x-1,y) (x-1,y+1)
Now I want to apply it in 3D.
Please suggest how I should proceed.
Thank you.
Wikipedia has a nice introduction to this: http://en.wikipedia.org/wiki/Sobel_operator
Basically, since the Sobel filter is separable, you can apply 1D filters along each of the x, y and z directions consecutively. These filters are h(x) and h'(x) as given on Wikipedia. Doing so gives you the edges in the direction where you applied h'(x).
For example, if you apply h(x)*h(y)*h'(z), you'll get the edges in the z direction.
Alternatively (and more expensively), you can compute the whole 3D 3x3x3 kernel and apply the convolution in 3D. The kernel for the z direction is given on Wikipedia as well.
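A sketch of that construction: the 3x3x3 kernel built from h and h' by outer product, then correlated at a single voxel. The function name `sobel_z_at` is mine, boundary handling is omitted, and the sign depends on whether you correlate or convolve:

```python
import numpy as np

# 1D smoothing and derivative filters (h and h' on the Wikipedia page).
h  = np.array([1.0, 2.0, 1.0])   # smoothing
hp = np.array([1.0, 0.0, -1.0])  # central-difference derivative

# Full 3x3x3 Sobel kernel for the z direction: h(x) * h(y) * h'(z).
kz = np.einsum('i,j,k->ijk', h, h, hp)

def sobel_z_at(vol, x, y, z):
    """Correlate the 3x3x3 z-direction kernel with the neighbourhood
    of voxel (x, y, z). Valid only away from the volume boundary."""
    patch = vol[x-1:x+2, y-1:y+2, z-1:z+2]
    return float(np.sum(kz * patch))

# A volume that increases linearly along z has a constant z-gradient:
vol = np.fromfunction(lambda x, y, z: z, (5, 5, 5), dtype=float)
print(sobel_z_at(vol, 2, 2, 2))  # -32.0 with this correlation convention
```

Because the filter is separable, a real implementation would run the three 1D passes instead of this dense 3D correlation; the result is the same up to floating-point order of operations.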
Good question! For 3D images you have to use three different 3x3x3 Sobel operators, one for each direction: x, y and z. Be aware that in digital image processing the x axis points right, the y axis downwards, and the z axis into the screen!
I visualized all three 3D Sobel operators to make them more intuitive: the Sobel filters in the X, Y and Z directions.
Furthermore, if you want to see the equations behind them (basically what your code has to compute), here you go: SobelFilterEquations

Is there any algorithm for determining 3D position in such a case? (images below)

So first of all I have this image (and of course I have all the point coordinates in 2D, so I can regenerate the lines and check where they cross each other)
(source: narod.ru)
But I also have another image of the same lines (I know they are the same) and the new coordinates of my points, as in this image
(source: narod.ru)
So... now, having the point coordinates in the first image, how can I determine the plane rotation and Z depth in the second image (assuming the first one's center was at (0,0,0) with no rotation)?
What you're trying to find is called a projection matrix. Determining precise inverse projection usually requires that you have firmly established coordinates in both source and destination vectors, which the images above aren't going to give you. You can approximate using pixel positions, however.
This thread will give you a basic walkthrough of the techniques you need to use.
Let me say this up front: this problem is hard. There is a reason Dan Story's linked question has not been answered. Let me provide an explanation for people who want to take a stab at it. I hope I'm wrong about how hard it is, though.
I will assume that the 2D screen coordinates and the projection/perspective matrix are known to you. You need to know at least this much (if you don't know the projection matrix, essentially you are using a different camera to look at the world). Let's call each pair of 2D screen coordinates (a_i, b_i), and I will assume the projection matrix is of the form
P = [ px 0 0 0 ]
[ 0 py 0 0 ]
[ 0 0 pz pw]
[ 0 0 s 0 ], s = +/-1
Almost any reasonable projection has this form. Working through the rendering pipeline, you find that
a_i = px x_i / (s z_i)
b_i = py y_i / (s z_i)
where (x_i, y_i, z_i) are the original 3D coordinates of the point.
Now, let's assume you know your shape in a set of canonical coordinates (whatever you want), so that the vertices are (x0_i, y0_i, z0_i). We can arrange these as columns of a matrix C. The actual coordinates of the shape are a rigid transformation of these coordinates. Let's similarly organize the actual coordinates as columns of a matrix V. Then these are related by
V = R C + v 1^T (*)
where 1^T is a row vector of ones with the right length, R is an orthogonal rotation matrix of the rigid transformation, and v is the offset vector of the transformation.
Now, you have an expression for each column of V from above: the first column is { s a_1 z_1 / px, s b_1 z_1 / py, z_1 } and so on.
You must solve the set of equations (*) for the set of scalars z_i and the rigid transformation defined by R and v.
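As a numerical sanity check on the expressions above (px, py and s are placeholder values here, not from the question), the per-point back-projection and its round trip:

```python
def backproject(a, b, z, px=1.0, py=1.0, s=1.0):
    """Invert a = px*x/(s*z) and b = py*y/(s*z) for x and y, given a
    hypothesized depth z. This is the column-of-V expression above."""
    x = s * a * z / px
    y = s * b * z / py
    return (x, y, z)

def project(x, y, z, px=1.0, py=1.0, s=1.0):
    """Forward projection from the rendering pipeline above."""
    a = px * x / (s * z)
    b = py * y / (s * z)
    return (a, b)

# Round trip: projecting the back-projected point recovers (a, b).
p = backproject(0.5, -0.25, 4.0)
print(project(*p))  # (0.5, -0.25)
```

This only resolves the per-point ambiguity up to the unknown depths z_i; the hard part, as noted below, is solving (*) jointly for all z_i together with R and v.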
Difficulties
The equation is nonlinear in the unknowns, involving quotients of R and z_i
We have assumed up to now that you know which 2D coordinates correspond to which vertices of the original shape (if your shape is a square, this is slightly less of a problem).
We assume there is even a solution at all; if there are errors in the 2D data, then it's hard to say how well equation (*) will be satisfied; the transformation will be nonrigid or nonlinear.
It's called (digital) photogrammetry. Start Googling.
If you are really interested in this kind of problem (common in computer vision, tracking objects with cameras, etc.), the following book contains a detailed treatment:
Ma, Soatto, Kosecka, Sastry, An Invitation to 3-D Vision, Springer 2004.
Beware: this is an advanced engineering text, and uses many techniques which are mathematical in nature. Skim through the sample chapters featured on the book's web page to get an idea.
