Rectify an Image with MATLAB's "cameraParams" (Computer Vision System Toolbox)

I'm working on a PIV workflow and I'm currently pre-processing the images. I need to get rid of the perspective distortion in the images. I have the Image Processing Toolbox and the Camera Calibrator. I already got rid of the lens distortion with undistortImage() and the cameraParams object, which was estimated from a chessboard pattern.
First Question: Is it possible to use the cameraParams object to warp the image perspectively, so that my chessboard is rectified in the image?
Second Question: Since I was not able to use the cameraParams object directly, I tried to use the transformation functions manually. I used pairs of control points (selected with the cpselect tool on the original image and a generated chessboard image) and the fitgeotrans(movingPoints, fixedPoints, 'projective'); function to get my tform object. However, I always get this error message:
Error using fitgeotrans>findProjectiveTransform (line 189)
At least 4 non-collinear points needed to infer projective transform.
Error in fitgeotrans (line 102)
tform = findProjectiveTransform(movingPoints,fixedPoints);
I have tried many different sets of control points (4 pairs or more), but I still get this error. I must be overlooking something here.
Any help is appreciated, thank you.
Stephan

If you are using one of the calibration images, then all the information you need is in the cameraParams object.
Let's say you are using calibration image 1, and let's call it I.
First, undistort the image:
I = undistortImage(I, cameraParams);
Get the extrinsics (rotation and translation) for your image:
R = cameraParams.RotationMatrices(:,:,1);
t = cameraParams.TranslationVectors(1, :);
Then combine the rotation and translation into one matrix (replacing the third row of R works because the checkerboard's world points all have z = 0):
R(3, :) = t;
Now compute the homography between the checkerboard and the image plane:
H = R * cameraParams.IntrinsicMatrix;
Transform the image using the inverse of the homography:
J = imwarp(I, projective2d(inv(H)));
imshow(J);
You should see a "bird's eye" view of the checkerboard. If you are not using one of the calibration images, then you can compute R and t using the extrinsics function.
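A minimal sketch of that case (imagePoints and worldPoints are assumed names here, standing for checkerboard corners detected in the undistorted image and the corresponding board coordinates):
% Sketch: extrinsics for an image that was not in the calibration set.
% imagePoints: checkerboard corners detected in the undistorted image.
% worldPoints: the matching ideal board points, in the calibration's world units.
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);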
Another way to do this is to use detectCheckerboardPoints and generateCheckerboardPoints, and then compute the homography using fitgeotrans.
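A minimal sketch of that alternative (assuming squareSize is your checkerboard square size in world units; the board points may need scaling to pixels if you want a similarly sized output):
[imagePoints, boardSize] = detectCheckerboardPoints(I);
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
tform = fitgeotrans(imagePoints, worldPoints, 'projective'); % image -> board plane
J = imwarp(I, tform);
imshow(J);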

Related

Rotation and translation in 3D reconstruction using 2D images

I am trying to do 3D model reconstruction using 2D images from different views. I am following this example code from MATLAB to get the desired results:
Structure From Motion From Two Views.
The following are the test images taken with the camera:
Images taken manually, with a 1 cm translation between the first and second image:
Overlay with the matched features of the first and second images:
Images taken manually, with a 2 cm translation between the first and second image:
Overlay with the matched features of the first and second images:
These are the translation vectors and rotation matrices I get for each case:
1 cm translation:
translation vector: [0.0245537412606279 -0.855696925927505 -0.516894461905255]
rotation matrix:
[0.999958322438693 0.00879926762261436 0.00243439415451741;
-0.00887800587357739 0.999365801035702 0.0344844418829408;
-0.00212941243132160 -0.0345046172211024 0.999402269855899]
2 cm translation:
translation vector: [-0.215835469166982 -0.228607603749042 -0.949291111175908]
rotation matrix:
[0.999989695803078 -0.00104036790630347 -0.00441881457943975;
0.00149220346018613 0.994626852476622 0.103514238930121;
0.00428737874479779 -0.103519766069424 0.994618156086259]
The documentation says these are the relative rotation and translation between the two images.
But I am unable to understand what these numbers mean or what their units are.
Can anyone at least tell me in what units the translation and rotation are given, or how to extract rotation and translation values that are comparable to real-world quantities like cm/mm and radians/degrees respectively?
You can convert the rotation matrix into an axis-angle representation, where the angle is given in radians. This can be done using the vrrotmat2vec function, or by implementing the conversion yourself by following this if you don't have access to the package.
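With vrrotmat2vec, v = vrrotmat2vec(R) returns [axis_x axis_y axis_z angle], with the angle in radians. A minimal toolbox-free sketch of the same conversion (only valid when the angle is neither 0 nor pi, since it divides by sin(theta)):
theta = acos((trace(R) - 1) / 2); % rotation angle in radians
rotAxis = [R(3,2) - R(2,3); R(1,3) - R(3,1); R(2,1) - R(1,2)] / (2 * sin(theta)); % unit rotation axis
thetaDeg = rad2deg(theta); % if you prefer degrees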
When it comes to translation, however, you won't get it in a unit that makes sense in the real world, since you don't know the scale. This is unfortunately a problem with structure from motion in general: it is impossible to tell whether you took an image close to something small or far away from something large.
When using structure from motion to construct a 3D model this is fortunately not a problem, since you still get relative distances correctly. You will therefore be able to capture the scene (by following the rest of the tutorial), but you won't be able to say whether something is 2 cm or 2 km tall, unless you have something in the image whose real-life size you know.
Hope it helps :)

MATLAB: Set points within plot polygon equal to zero

I am currently doing some seismic modelling and processing in MATLAB, and would like to come up with an easy way of muting parts of various datasets. If I plot the frequency-wavenumber spectrum of some of my data, for instance, I obtain the following result:
Now, say that I want to mute some of the data present here. I could of course attempt to run through the entire matrix represented here and specify a threshold value above which everything should be set equal to zero, but this will be very difficult and time-consuming when I later work with more complicated f-k spectra. I recently learned that MATLAB has a built-in function called impoly which allows me to interactively draw a polygon in plots. So say I, for instance, draw the following polygon in my plot with the impoly function:
Is there anything I can do now to set all points within this polygon equal to zero? After defining the polygon as illustrated above, I haven't found out how to proceed in order to mute the information contained in it, so if anybody can give me some help here, I would greatly appreciate it!
Yes, you can use the createMask function that's part of the impoly interface once you delineate the polygon in your figure. Once you have created this mask, you can use it to index into your data and set the right regions to zero.
Here's a quick example using the pout.tif image in MATLAB:
im = imread('pout.tif');
figure; imshow(im);
h = impoly;
I get this figure and I draw a polygon inside this image:
Now, use the createMask function with the handle to the impoly call to create a binary mask that encapsulates this polygon:
mask = createMask(h);
I get this mask:
imshow(mask);
You can then use this mask to index into your data and set the right regions to 0. First make a copy of the original data then set the data accordingly.
im_zero = im;
im_zero(mask) = 0;
I now get this:
imshow(im_zero);
Note that this only applies to single-channel (2D) data. If you want to apply this to multi-channel (3D) data, then a channel-wise multiplication with the complement of the mask may be prudent.
Something like this:
im_zero = bsxfun(@times, im, cast(~mask, class(im)));
The above code takes the complement of the polygon mask, converts it into the same class as the original input im, then performs an element-wise multiplication of this mask with each channel of the input separately. The result zeroes out every spatial location defined by the mask, across all channels.
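On newer MATLAB releases (R2016b and later), implicit expansion lets you drop bsxfun entirely; a one-line equivalent:
im_zero = im .* cast(~mask, class(im)); % expands the 2D mask across all channels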

Polydata .vtk generated from a NIfTI .nii volume through marching cubes cannot be superposed on the original volume

Problem:
I am trying to construct a VTK polydata model from a CT NIfTI volume using the marching cubes method.
What I did:
So far I can produce a perfectly-to-scale skull model using VTK's polydata writer. However, the skull.vtk is rigidly rotated and translated compared to the original ct.nii volume. I understand that NIfTI files have a qform matrix to map voxel data to real-world coordinates and that vtkPolyData does not store this information explicitly. However, applying the qform matrix to the vtkPolyData does not come close to a perfect overlap.
Does anyone know why this happens?
I think the reason is that the y-axis is inverted in VTK. Most likely applying a rotation matrix of [-1 0 0; 0 -1 0; 0 0 1] (row-major) will solve your problem.
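If you can get the mesh vertices into an N-by-3 array (V is an assumed name here), applying that matrix is just a sign flip on x and y; a minimal MATLAB-style sketch:
flipXY = [-1 0 0; 0 -1 0; 0 0 1]; % 180-degree rotation about z
V = V * flipXY.'; % equivalent to V(:,1:2) = -V(:,1:2)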
A quick and easy check is to use an external tool, e.g. 3D Slicer or ParaView, to apply the rotation. For example, in ParaView you can do the following:
- Load the polydata and your image in ParaView.
- Change the rotation to (180, 180, 0) degrees (equivalent to the rotation matrix above).
- See if the polydata and image line up.
I have attached an example image that shows how to do this in ParaView. The red is the result of applying the rotation to the original blue polydata. The location of the rotation parameters is shown in the green box.

What is this effect called and how does one achieve it using Matlab?

I am trying to generate the following "effect" from a basic shape in MATLAB:
But I don't even know what this process is called. Let's say I have an image containing the brown shape; what I want is to generate the contours outside of it, which get smoother as they get bigger.
Is there either a name for this effect, a function to do this in MATLAB or an algorithm that does it from scratch?
Thanks.
I think you are looking for bwdist.
The image you are displaying looks like the positive part of a distance function from the boundary of your shape. You can produce this easily in MATLAB using the examples on the aforementioned manual page.
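Before the dilation-based variant below, here is a minimal sketch of the bwdist idea itself (the file name is a placeholder, as in the snippet below):
I = imread('brown_image.png');
I_bw = rgb2gray(I) > 0; % 1 inside the shape
D = bwdist(I_bw); % distance to the shape, 0 inside it
figure; imagesc(D); axis image; colormap gray;
hold on; contour(D, 10:10:100, 'w'); % bands that round off as distance grows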
Try this:
I = imread('brown_image.png');
I_bw = (rgb2gray(I) > 0); % or whatever, just so I_bw is 1 in the 'brown' region
r = 10;
se1 = strel('disk', r);
se2 = strel('disk', r-1);
imshow(imdilate(I_bw, se1) - imdilate(I_bw, se2))
This requires the Image Processing Toolbox, but the basic idea is to dilate the image twice with structuring elements whose radii differ by 1 (or however thick you want the contours to be) and subtract the result of the smaller one from the bigger one. You could then color them however you want.

Get POINT CLOUD through 360 Degree Rotation and Image Processing

My question is in two parts.
QUESTION (IN SHORT):
• I want to generate a point cloud of a real-world object
• through a 360-degree rotation of it on a rotating table,
• getting 360 images, one image at each degree (1° to 360°).
• I know how to process an image and get its pixel values.
• See one sample image below. You can see the image is black and white, because I have to deal with objects that are very shiny (glittery): diamonds. So I have set up the background in such a way that the shiny object (diamond) is converted into a black-and-white object, and I can easily scan the outer edge of the object (e.g. the diamond).
• One thing to consider is that I am not using any laser; I am just using one rotating table and one camera for taking images. You can see one sample project over here, but there MATLAB hides all the details, because that person is using MATLAB's built-in functionality.
• What I am actually looking for is a math routine, algorithm, or technique that helps me work out how to get the point cloud using the approach I have described.
MORE ELABORATION:
I need a point cloud of a real-world object, so that I can display it on a computer screen.
For that I am using a rotating table. I will put my object on it, rotate the table through a complete 360° rotation, and take 360 images, one image at each degree (1° to 360°).
The camera used for taking the images is well calibrated. I have given one sample image below. I also know how to scan an image and get its pixel values.
Also take into consideration that my images are silhouettes: just black and white, no color images.
But the problem, or where I am stuck, is in
getting the point cloud of the object from the data I obtain by processing the images.
I found one project of the same kind over here,
but it just uses built-in MATLAB functions. I am using Microsoft Visual C#.NET, so I have to build the entire algorithm myself, because MATLAB hides all the things I want to know.
Is there anyone who knows this whole process well and can get me out of this trap?
Thanks.
I have no experience with this, but if I wanted to do something like this I would try the following:
1. Use a single-color light source.
2. If possible, create a light source that falls on a thin vertical slice of the object.
3. Take 360 B/W images; those images will be images of a vertical line with varying intensity. If you use MATLAB, your matrix will have a few columns with some values.
4. Now assume a vertical line (your axis of rotation).
5. Plot or convert (imageNo, rowNoOfMatrix, ValueInPopulatedColumnInSameRow)... [assuming images are numbered from 0 to 360].
6. Under ideal conditions, a simple way to get X and Y is K1 * cos(imgNo) * ValInCol and K1 * sin(imgNo) * ValInCol, and Z will be some K2 * rowNum. K1 and K2 can be calibrated knowing the actual size of the object. (A code sketch of this conversion is at the end of this answer.)
I mean something like this:
http://fab.cba.mit.edu/content/processes/structured_light/
but instead of using structured light, using a single vertical light.
This link might help with the triangulation: http://www.geom.uiuc.edu/~samuelp/del_project.html
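A minimal MATLAB-style sketch of the conversion in step 6 (all names are assumptions: silhouetteEdge(row, k) is the measured edge distance from the rotation axis, in pixels, in image k; K1 and K2 are the calibration scale factors):
numImages = 360;
points = zeros(0, 3); % N-by-3 point cloud, grown row by row
for k = 1:numImages
    theta = deg2rad(k); % one image per degree
    for row = 1:size(silhouetteEdge, 1)
        r = silhouetteEdge(row, k); % radial distance for this slice
        if r > 0 % 0 means no edge found in this row
            points(end+1, :) = [K1*r*cos(theta), K1*r*sin(theta), K2*row]; %#ok<AGROW>
        end
    end
end
The resulting points array can then be displayed or triangulated (e.g. with the Delaunay material linked above).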
