edit: I decided to split this question into two parts, because it was really two questions: 1. how to make a polar surface plot in MATLAB (this question) and 2. how to fit polar data points into a coarse (and non-polar) matrix.
I have a matrix that contains certain grey values (values between zero and one). These points are stored in a rectangular matrix, but really the data points are acquired by rotating the detector. This means that I actually have polar coordinates (I know the polar coordinates for every single pixel in my starting matrix).
I want to make a polar plot of the data points. An example is shown below.
Because MATLAB stores images as matrices, the polar coordinates I have do not exactly match the 'bins' of the matrix. Therefore, we currently use an interpolation algorithm to put the polar coordinates into a square matrix. However, this is extremely slow. I see two methods to solve this issue:
1. Let MATLAB plot the data points directly in polar form.
2. Calculate once how to convert from the start matrix to the end matrix, and let MATLAB do this through matrix multiplication (a sketch of this follows the basic information below).
Some basic information:
Input matrix size: 512×960
Current output matrix size: 1024×1024
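For the second method, one possible approach is a precomputed sparse scatter matrix, so each new frame only costs a single sparse multiply. This is a minimal sketch, assuming a nearest-neighbour mapping; idxOut and the random placeholder mapping are illustrative stand-ins for the mapping you would compute once from the known polar coordinates.
nIn = 512*960; nOut = 1024*1024;
% idxOut(k) = linear index in the output grid that input pixel k lands on.
% Computed once from the known polar coordinates; randi is only a placeholder.
idxOut = randi(nOut, nIn, 1);
S = sparse(idxOut, (1:nIn)', 1, nOut, nIn); % build once
in = rand(512, 960);                        % one acquired frame
out = reshape(S * in(:), 1024, 1024);       % fast conversion for every frame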
I think there is a built-in function for polar plots in MATLAB.
Z = [2+3i 2 -1+4i 3-4i 5+2i -4-2i -2+3i -2 -3i 3i-2i];
polarplot(Z,'*')
this command plots:
[figure: polar plot of the points in Z]
See this link:
http://www.mathworks.com/help/matlab/ref/polarplot.html
To plot in grayscale, use pcolor and set the colormap to gray:
http://www.mathworks.com/help/matlab/ref/pcolor.html
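For example, a minimal sketch (the angle/radius grids and grey values here are placeholders for the real detector data):
[theta, r] = meshgrid(linspace(0, 2*pi, 960), linspace(0, 1, 512));
C = rand(512, 960);          % placeholder grey values
[X, Y] = pol2cart(theta, r); % polar grid to Cartesian
pcolor(X, Y, C)
shading flat                 % suppress the cell edge lines
colormap(gray)
axis equal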
The question was solved (apart from a minor flaw), partly because K.M. Shihab Uddin pointed me in the right direction. Unfortunately, using surf means actually rendering the image in a figure every time, and this is slow as well.
So I have X and Y values both in separate matrices and greyscale values (in a matrix called C) for every X and Y combination.
I found out that pcolor is just surf with a viewpoint from the top. So I used the following code to plot my graph.
surf(X,Y,C*255)
view([0,0,500])
However, this gave me a completely black image. This is because surf (and pcolor) draw edge lines between cells, which in my case means 960 radial grid lines covering the whole image. The solution is to use:
surf(X,Y,C*255,'EdgeColor','none')
view([0,0,500])
Now I have an almost perfect image, like I had before. Only one of my 960 radial lines is left white, so I still have to solve that. However, I feel this is a technical detail of the surf function, and answering that part does not belong in this question.
[figure: the resulting image]
OK, this might be a little dumb to ask, but I'm really having a hard time understanding image coordinates in Matlab.
So, in a mathematical equation f(x,y), f is the image function where x and y are the coordinates of the image. For example, in Matlab code, we can write:
img = imread('autumn.tif');
img(1,4); %f(x,y)
where img(1,4) is equivalent to the function f(x,y). Now, in Matlab, there is an option to convert the Cartesian coordinates (x,y) to polar coordinates (rho,theta) with the cart2pol() function.
Now, here's what I don't understand: is it possible to apply f(rho,theta), i.e. use the polar coordinates instead of the Cartesian coordinates, in Matlab?
I tried doing something like:
img(2.14,1.5)
But I get an error message saying that array indexing is only supported with integers or logical values.
Could anyone clear up my understanding on this? Because I'm required to apply f(rho,theta) instead of the conventional f(x,y).
An image in Matlab is basically just a 2D array (if you consider just a greyscale image). Therefore you also need integer indices, just as for all other arrays, to access the pixels of the image.
% i,j integers, not doubles, floats, etc.
pixel = img(i,j);
The polar coordinates (theta, rho) from your last question can therefore not be used to access the image array. That is also the exact reason for the error message. At the least you'd need to round them, or find some other way to use them as indices (converting them back to Cartesian coordinates would be best, given how matrix indexing works).
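A minimal sketch of that workaround, assuming round-and-clamp is acceptable (interp2 would give sub-pixel accuracy instead; the coordinate values are just the ones from the question):
img = imread('autumn.tif');
img = rgb2gray(img);           % work on a single channel
theta = 1.5; rho = 2.14;       % example polar coordinates
[x, y] = pol2cart(theta, rho); % back to Cartesian
r = min(max(round(y), 1), size(img, 1)); % row index, clamped to the image
c = min(max(round(x), 1), size(img, 2)); % column index, clamped to the image
pixel = img(r, c);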
Regarding your application: As far as I have found out, these polar coordinates are used as parameters for the Zernike polynomials. So why would you use them to access the image?
I am using Matlab's built-in function procrustes to find the rotation, translation, and scale between two images. But I am just using the coordinates of the brightest points in the image and rotating these coordinates about the center of the image. Procrustes compares two matrices and gives you the rotation, translation, and scale. However, procrustes only works correctly if the points in the two matrices are in the same order.
I am given an image and a separate comparison coordinate matrix. The end goal is to find how much the image has been rotated, translated, and scaled compared to the coordinate matrix. I can just use Procrustes for this, but I need to correctly order the coordinates found in the image to match the order in the comparison coordinate matrix. My thought was to compare the distance between every possible combination of points in the coordinate matrix and compare it to the coordinates that I find in the picture. I just do not know how to write this code, because if there are n coordinates, there will be n! possible orderings.
Just searching for the shortest distance is not so hard.
A = rand(1E4,2);
B = rand(1E4,2);
tic
idx = nan(1,1E4);
for ct = 1:size(A,1)
    d = sum((A(ct,:) - B).^2, 2); % squared distance from A(ct,:) to every point of B
    [~, idx(ct)] = min(d);        % nearest point in B (picks one point on ties)
end
toc
plot(A(1:10,1),A(1:10,2),'.r',B(idx(1:10),1),B(idx(1:10),2),'.b')
takes half a second on my PC.
The problems can start when two points in set A are matched to the same location in set B. You can check whether all matches are unique with:
length(unique(idx))==length(idx)
This can be solved in several ways. The best (imho) is to assign each pairing a probability based on the distance (usually something that decreases exponentially) and solve for the most probable overall assignment.
A simpler method (but more error prone) is to remove each matched point from set B, as sketched below.
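A minimal sketch of that simpler greedy variant (same placeholder data as above):
A = rand(1E4,2);
B = rand(1E4,2);
idx = nan(1,size(A,1));
avail = true(size(B,1),1);        % points of B not yet matched
for ct = 1:size(A,1)
    d = sum((A(ct,:) - B).^2, 2);
    d(~avail) = inf;              % exclude already-matched points
    [~, idx(ct)] = min(d);
    avail(idx(ct)) = false;
end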
On a discrete grid-based plane (think: pixels of an image), I have a closed contour that can be expressed either by:
a set of 2D points (x1,y1);(x2,y2);(x3,y3);...
or a 4-connected Freeman code, with a starting point: (x1,y1) + 00001112...
I know how to switch from one to the other of these representations. This will be the input data.
I want to get the set of grid coordinates that are bounded by the contour.
Consider this example, where the red coordinates are the contour, and the gray one the starting point:
If the gray coordinate is, say, at (0,0), then I want a vector holding:
(1,1),(2,1),(3,1),(3,2)
Order is not important, and the output vector can also hold the contour itself.
Language of choice is C++, but I'm open to any existing code, algorithm, library, pointer, whatever...
I thought that maybe CGAL would have something like this, but I am unfamiliar with it and couldn't find my way through the manual, so I'm not even sure.
I also looked toward OpenCV, but I think it does not provide this algorithm (though I could be wrong?).
I was thinking about finding the bounding rectangle, then checking each of the points in the rectangle to see whether they are inside or outside the contour, but this seems suboptimal. Any ideas?
One way to solve this is with drawContours, since you already have the contour points.
Create a blank Mat and draw the contour with thickness = 1 (the boundary).
Create another blank Mat and draw the contour with thickness = CV_FILLED (the whole area, including the boundary).
Now XOR the two masks (or subtract the first from the second): you get the filled area excluding the boundary.
Finally, collect the non-zero pixels; their coordinates are the bounded grid points.
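Since the question is open to any language, here is the same fill idea sketched in MATLAB; poly2mask needs the Image Processing Toolbox, and the contour and grid size below are made up for illustration:
x = [1 4 4 1]; y = [1 1 3 3];   % example closed contour vertices
m = 10; n = 10;                 % grid (image) size
filled = poly2mask(x, y, m, n); % logical mask of the enclosed pixels
[r, c] = find(filled);          % the bounded grid coordinates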
As a followup to my previous question about determining camera parameters I have formulated a new problem.
I have two pictures of the same rectangle:
The first is an image without any transformations and shows the rectangle as it is.
The second image shows the rectangle after some 3D transformation (XYZ-rotation, scaling, XY-translation) is applied. This has caused the rectangle to look like a trapezoid.
I hope the following picture describes my problem:
[figure: illustration of the transformation] http://wilco.menge.nl/application.data/cms/upload/transformation%20matrix.png
How do I determine what transformations (more specifically: what transformation matrix) have caused this transformation?
I know the pixel locations of the corners in both images, hence I also know the distances between the corners.
I'm confused. Is this a 2d or a 3d problem?
The way I understand it, you have a flat rectangle embedded in 3d space, and you're looking at two 2d "pictures" of it - one of the original version and one based on the transformed version. Is this correct?
If this is correct, then there is not enough information to solve the problem. For example, suppose the two pictures look exactly the same. This could be because the transformation is the identity, or it could be because the transformation moves the rectangle twice as far away from the camera and doubles its size (thus making it look exactly the same).
This is a math problem, not a programming one:
you need to define a set of equations (your transformation matrix; my guess is three equations) and then solve it using the four corner-point correspondences.
I've only ever described this using German words, so the above may sound strange.
Based on the information you have, this is not that easy. I will give you some ideas to play with, however. If you had the 3D coordinates of the corners, you'd have an easier time. Here's the basic idea.
1. Move a corner to the origin. Thereafter, rotations will take place about the origin.
2. Determine vectors for the axes. Do this by subtracting the adjacent corners from the origin point. These will be a local x and y axis for your world.
3. Determine angles using the vectors. You can use the dot and cross products to determine the angle between the local x axis and the global x axis (1, 0, 0); see the sketch after this list.
4. Rotate by the angle in step 3. This will give you a new x axis which should match the global x axis, and a new local y axis. You can then determine another rotation about the x axis which will bring the y axis into alignment with the global y axis.
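A small sketch of how the step-3 angle could be computed in MATLAB (the vectors here are made-up examples):
v = [0.8 0.6 0];               % local x axis from step 2
gx = [1 0 0];                  % global x axis
cosang = dot(v, gx) / norm(v); % gx is already unit length
sinang = norm(cross(v, gx)) / norm(v);
ang = atan2(sinang, cosang);   % rotation angle for step 4, in radians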
Without the z coordinates, you can see that this will be difficult, but this is the general process. I hope this helps.
The solution will not be unique, as Alex319 points out.
If the second image is really a trapezoid as you say, then this won't be too hard. It is a trapezoid (not a parallelogram) because of perspective, so it must be an isosceles trapezoid.
Draw the two diagonals. They intersect at the center of the rectangle, so that takes care of the translation.
Rotate the trapezoid until its parallel sides are parallel to two sides of the original rectangle. (Which two? It doesn't matter.)
Draw a third parallel through the center. Scale this to the sides of the rectangle you chose.
Now for the rotation out of the plane. Measure the distance from the center to one of the parallel sides and use the law of sines.
If it's not a trapezoid, just a general quadrilateral, then it will be harder: you'll have to use the angles between the diagonals to find the axis of rotation.
Where can I find algorithms for image distortions? There is so much information on blur and other classic algorithms, but so little on more complex ones. In particular, I am interested in the swirl-effect image distortion algorithm.
I can't find any references, but I can give a basic idea of how distortion effects work.
The key to the distortion is a function which takes two coordinates (x,y) in the distorted image and transforms them to coordinates (u,v) in the original image. This specifies the inverse function of the distortion, since it takes the distorted image back to the original image.
To generate the distorted image, one loops over x and y, calculates the point (u,v) from (x,y) using the inverse distortion function, and sets the colour components at (x,y) to be the same as those at (u,v) in the original image. One usually uses interpolation (e.g. bilinear interpolation, http://en.wikipedia.org/wiki/Bilinear_interpolation) to determine the colour at (u,v), since (u,v) usually does not lie exactly on the centre of a pixel, but rather at some fractional point between pixels.
A swirl is essentially a rotation, where the angle of rotation is dependent on the distance from the centre of the image. An example would be:
a = amount of rotation
b = size of effect
angle = a*exp(-(x*x+y*y)/(b*b))
u = cos(angle)*x + sin(angle)*y
v = -sin(angle)*x + cos(angle)*y
Here, I assume for simplicity that the centre of the swirl is at (0,0). The swirl can be put anywhere by subtracting the swirl position coordinates from x and y before the distortion function, and adding them to u and v after it.
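Putting the pieces together, here is a minimal MATLAB sketch of this inverse mapping (a and b follow the pseudocode above; cameraman.tif is a stock greyscale demo image):
img = im2double(imread('cameraman.tif'));
[h, w] = size(img);
[X, Y] = meshgrid(1:w, 1:h);
cx = w/2; cy = h/2;                          % swirl centre
x = X - cx; y = Y - cy;
a = 2; b = 60;                               % amount and size of the effect
angle = a * exp(-(x.^2 + y.^2) / b^2);
U = cos(angle).*x + sin(angle).*y + cx;      % source coordinates in the original
V = -sin(angle).*x + cos(angle).*y + cy;
out = interp2(X, Y, img, U, V, 'linear', 0); % bilinear lookup, 0 outside
imshow(out)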
There are various swirl effects around: some (like the above) swirl only a localised area, and have the amount of swirl decreasing towards the edge of the image. Others increase the swirling towards the edge of the image. This sort of thing can be done by playing about with the angle= line, e.g.
angle = a*(x*x+y*y)
There is a Java implementation of a lot of image filters/effects at Jerry's Java Image Filters. Maybe you can take inspiration from there.
The swirl, and others like it, are a matrix transformation on the pixel locations: you make a new image and take the colour from the position in the original image that you get by multiplying the current position by a matrix.
The matrix depends on the current position.
Here is a good CodeProject article showing how to do it:
http://www.codeproject.com/KB/GDI-plus/displacementfilters.aspx
There is a new graphics library with many features:
http://code.google.com/p/picasso-graphic/
Take a look at ImageMagick. It's an image conversion and editing toolkit and has interfaces for all popular languages.
The -displace operator can create swirls with the correct displacement map.
If you are for some reason not satisfied with the ImageMagick interface, you can always take a look at the source code of the filters and go from there.