Crop and transform image in Matlab

I'm trying to crop an image, but with a polygon that has four corners rather than with a rectangle (as in imcrop()). I've searched a lot and found that I need to apply a homography to readjust the cropped polygon into a rectangle.
So I've used impoly() to select a polygon in an image:
img = imread('pout.tif');
imshow(img);
h = impoly;
position = wait(h);
x1 = min(position(:, 1));
x2 = max(position(:, 1));
y1 = min(position(:, 2));
y2 = max(position(:, 2));
BW = createMask(h);
How could I use these two things to crop out an area in the shape of a four-cornered polygon?

First of all, it is a bad idea to transform the image just to crop it. Applying the homography changes the content of the ROI and introduces interpolation artifacts. In addition, if one day you want to use an ROI defined by more than 4 points, this approach no longer applies.
Second, I made some minor changes to your script, like this:
img = imread('circuit.tif');
imshow(img);
h = impoly;
position = wait(h);
boundbox = [min(position(:,1)), ...
            min(position(:,2)), ...
            max(position(:,1)) - min(position(:,1)), ...
            max(position(:,2)) - min(position(:,2))];
BW = createMask(h);
img = imcrop(uint8(BW).*img, boundbox);
imshow(img)
You were almost there: just mask the image with the ROI and crop with the ROI's bounding box. Here the pixels outside the mask are set to 0; you can adapt that differently if you want.
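For example, here is a minimal sketch of one such adaptation (assuming the grayscale uint8 image above), filling everything outside the mask with white instead of zero; run it in place of the last two lines:
masked = img;                      % copy of the original grayscale image
masked(~BW) = 255;                 % set pixels outside the ROI mask to white
img = imcrop(masked, boundbox);    % crop to the ROI's bounding box as before
imshow(img)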

Try "impoly" function in MATLAB
refer http://www.mathworks.in/help/images/ref/impoly.html

Related

Plot image on dome

I am struggling to plot a round image on the surface of the dome in Matlab. Here is what I have. This is the png image:
And this is the dome:
Now, I need to project this image onto the dome. I've written some code to place the image on the surface:
r = 10;
r2 = 9;
cdata = imread('circle_image.png');
props.EdgeColor = 'none';
figure();
n = 50;
[X,Y,Z] = sphere(n) ;
X1 = X * r;
Y1 = Y * r;
Z1 = Z * r;
for i = 1:n+1
    for j = 1:n+1
        if Z1(i,j) < r2
            X1(i,j) = NaN;
            Y1(i,j) = NaN;
            Z1(i,j) = NaN;
        end
    end
end
my_dome = surf(X1,Y1,Z1,props) ;
alpha = 1;
set(my_dome, 'FaceColor', 'texturemap', 'CData', cdata, 'FaceAlpha', alpha, 'EdgeColor', 'none');
axis equal
What I am getting looks like this:
It seems like the image is centred in the wrong place, or even in the wrong axis. How can I fix that?
Yes, the image is centered in the wrong place.
The texture mapping is applied to the whole surface, not just the "active" points (the ones you didn't set to NaN). Basically, what you are getting is the image being spread onto the full sphere, and then when you crop the top of the sphere to get your dome, the image is cropped as well:
What you need to do is actually remove all those points you converted to NaN, so they are not part of the surface at all and the texture mapping is applied only to the top dome surface.
So replace your nested for loop with the following code:
idx_Raws2crop = Z1(:,1) < r2 ;
X1(idx_Raws2crop,:) = [] ;
Y1(idx_Raws2crop,:) = [] ;
Z1(idx_Raws2crop,:) = [] ;
Then continue with your code, you'll get:
Actually I also added the instruction:
cdata = flipud(cdata) ;
in order to have the "THE CIRCLE" writing in the proper orientation (otherwise it appears upside down).
Edit:
To render the picture on the dome in the way you'd like according to your comment, I can see 2 options:
Option 1: Build upon what we have already
This will consist of:
Extend the [X,Y] domain on a square grid (so the picture is not distorted when texture mapped onto the surface).
Extend the picture itself (add some transparent margin), so the margin will cover the domain extension we introduced and the actual visible part of the picture will be nicely centered on the dome.
With the same X1, Y1 and Z1 as we obtained with the code above:
% Create a square grid large enough to cover the dome and a bit more:
[X2,Y2] = meshgrid(linspace(-5,5,100),linspace(-5,5,100)) ;
% reinterpolate Z1 over this new grid
Z2 = griddata(X1,Y1,Z1,X2,Y2) ;
Now your surface looks like this:
As I warned you, the image is now properly applied but the edge of the dome looks rather ugly. To mitigate that, you could replace the NaNs with the base value of your dome; this makes the transition between the dome and the flat domain a lot smoother:
Z2(isnan(Z2)) = 9 ;
Will now yield:
The next problem, as you can see, is that (as in your first question) the image is stretched over the whole surface, so part of the writing now falls on the flat part of the surface. To alleviate that, you can modify the picture (add a bit of transparent margin on every side) until the visible part of the image matches the dome size.
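As a rough sketch of that padding step using padarray (the margin size is an arbitrary value to tune; here the margin is simply white, and for a genuinely transparent margin you would pad the PNG's alpha channel the same way and pass it as texture-mapped AlphaData):
pad = 100;                                        % margin in pixels, tune to taste
cdata = imread('circle_image.png');               % picture used as the texture
cdata = padarray(cdata, [pad pad], 255, 'both');  % add a white margin on every side
cdata = flipud(cdata);                            % same orientation fix as before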
Option 2: Build your surface differently
This is actually easier and more straightforward. We will build a dome out of a square surface in the first place (no trimming and no NaN-ing).
This code does not require any part of your previous code, it is self contained:
r = 10 ;
% Build a square grid
[X2,Y2] = meshgrid(linspace(-5,5,100),linspace(-5,5,100)) ;
% build Z2 according to the sphere equation.
Z2 = sqrt( r^2 - X2.^2 - Y2.^2) ;
figure();
my_dome = surf(X2,Y2,Z2,'EdgeColor', 'none', 'FaceColor', 'texturemap', 'CData', cdata, 'FaceAlpha', 1) ;
axis equal
Which yields a nicely centered texture map:
Note: The texture mapping works so well because your actual surface is still a square, only bent a bit to conform to a sphere. The texture mapping does not display the part of the surface where the picture is transparent, so it is invisible, but if you look at your surface without the texture mapping you'll understand what went on in the background:
hs=surf(X2,Y2,Z2) ; shading interp ; axis equal

Rotating an image matrix around its center in MATLAB

Assume I have a 2-D matrix filled with values which will represent a plane. Now I want to rotate the plane around itself in a 3-D way, in the "z-Direction". For a better understanding, see the following image:
I wondered whether this is possible with a simple affine matrix, so I created the following simple script:
%Create a random value matrix
A = rand*ones(200,200);
%Make a box in the image
A(50:200-50,50:200-50) = 1;
Now I can apply transformations in 2-D space simply with an affine matrix like this:
tform = affine2d([1 0 0; .5 1 0; 0 0 1]);
transformed = imwarp(A, tform);
However, this will not produce the desired output above, and I am not quite sure how to create the 2-D affine matrix to create such behavior.
I guess that a 3-D affine matrix can do the trick. However, if I define a 3-D affine matrix I cannot work with the 2-D representation of the matrix anymore, since MATLAB will throw the error:
The number of dimensions of the input image A must be 3 when the specified geometric transformation is 3-D.
So how can I code the desired output with an affine matrix?
The answer from m3tho correctly addresses how you would apply the transformation you want: using fitgeotrans with a 'projective' transform, thus requiring that you specify 4 control points (i.e. 4 pairs of corresponding points in the input and output image). You can then apply this transform using imwarp.
The issue, then, is how you select these pairs of points to create your desired transformation, which in this case is to create a perspective projection. As shown below, a perspective projection takes into account that a viewing position (i.e. "camera") will have a given view angle defining a conic field of view. The scene is rendered by taking all 3-D points within this cone and projecting them onto the viewing plane, which is the plane located at the camera target which is perpendicular to the line joining the camera and its target.
Let's first assume that your image is lying in the viewing plane and that the corners are described by a normalized reference frame such that they span [-1 1] in each direction. We need to first select the degree of perspective we want by choosing a view angle and then computing the distance between the camera and the viewing plane. A view angle of around 45 degrees can mimic the sense of perspective of normal human sight, so using the corners of the viewing plane to define the edge of the conic field of view, we can compute the camera distance as follows:
camDist = sqrt(2)./tand(viewAngle./2);
Now we can use this to generate a set of control points for the transformation. We first apply a 3-D rotation to the corner points of the viewing plane, rotating around the y axis by an amount theta. This rotates them out of plane, so we now project the corner points back onto the viewing plane by defining a line from the camera through each rotated corner point and finding the point where it intersects the plane. I'm going to spare you the mathematical derivations (you can implement them yourself from the formulas in the above links), but in this case everything simplifies down to the following set of calculations:
term1 = camDist.*cosd(theta);
term2 = camDist-sind(theta);
term3 = camDist+sind(theta);
outP = [-term1./term2 camDist./term2; ...
term1./term3 camDist./term3; ...
term1./term3 -camDist./term3; ...
-term1./term2 -camDist./term2];
And outP now contains your normalized set of control points in the output image. Given an image of size s, we can create a set of input and output control points as follows:
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
And you can apply the transformation like so:
tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
outputView = imref2d(s);
newImage = imwarp(oldImage, tform, 'OutputView', outputView);
The only issue you may come across is that a rotation of 90 degrees (i.e. looking end-on at the image plane) would create a set of collinear points that would cause fitgeotrans to error out. In such a case, you would technically just want a blank image, because you can't see a 2-D object when looking at it edge-on.
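If you want to handle that case explicitly, here is a rough sketch of one way to guard the fit, using the same variable names as the animation loop below; the mod-based check and the blank-frame fallback are just one possible choice:
% Guard the degenerate edge-on views: at theta = 90 or 270 degrees the
% projected corners become collinear and fitgeotrans would error out,
% so just show a blank frame for those angles.
if mod(theta, 180) == 90
    spinImage = zeros(s, 'like', img);   % blank image of the same size and type
else
    tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
    spinImage = imwarp(img, tform, 'OutputView', outputView);
end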
Here's some code illustrating the above transformations by animating a spinning image:
img = imread('peppers.png');
s = size(img);
outputView = imref2d(s);
scaledInP = [1 s(1); s(2) s(1); s(2) 1; 1 1];
viewAngle = 45;
camDist = sqrt(2)./tand(viewAngle./2);
for theta = linspace(0, 360, 360)
    term1 = camDist.*cosd(theta);
    term2 = camDist-sind(theta);
    term3 = camDist+sind(theta);
    outP = [-term1./term2 camDist./term2; ...
            term1./term3 camDist./term3; ...
            term1./term3 -camDist./term3; ...
            -term1./term2 -camDist./term2];
    scaledOutP = bsxfun(@times, outP+1, s([2 1])-1)./2+1;
    tform = fitgeotrans(scaledInP, scaledOutP, 'projective');
    spinImage = imwarp(img, tform, 'OutputView', outputView);
    if (theta == 0)
        hImage = image(spinImage);
        set(gca, 'Visible', 'off');
    else
        set(hImage, 'CData', spinImage);
    end
    drawnow;
end
And here's the animation:
You can apply a projective transformation estimated from the positions of the corners in the first and second images.
originalP='peppers.png';
original = imread(originalP);
imshow(original);
s = size(original);
matchedPoints1 = [1 1;1 s(1);s(2) s(1);s(2) 1];
matchedPoints2 = [1 1;1 s(1);s(2) s(1)-100;s(2) 100];
transformType = 'projective';
tform = fitgeotrans(matchedPoints1, matchedPoints2, transformType);
outputView = imref2d(size(original));
Ir = imwarp(original,tform,'OutputView',outputView);
figure; imshow(Ir);
This is the result of the code above:
Original image:
Transformed image:

Merging 2 images by showing one next to the other separated by a diagonal line

I have 2 images ("before" and "after"). I would like to show a final image where the left half is taken from the before image and the right half is taken from the after image.
The images should be separated by a white diagonal line of predefined width (2 or 3 pixels), where the diagonal is specified either by a certain angle or by 2 start and end coordinates. The diagonal should overwrite a part of the final image such that the size is the same as the sources'.
Example:
I know it can be done by looping over all pixels to recombine and create the final image, but is there an efficient way, or better yet, a built-in function that can do this?
Unfortunately I don't believe there is a built-in solution to your problem, but I've developed some code to do this; it does require the Image Processing Toolbox. As mentioned in your comments, you have this already, so we should be fine.
The logic behind this is relatively simple. We will assume that your before and after pictures are the same size and share the same number of channels. The first part is to declare a blank image and draw a straight line of a certain thickness down the middle. The subtlety is that this blank image must be slightly bigger than the originals: I'm going to draw the line down the middle and then rotate the blank image by a certain angle (using imrotate, which rotates an image by any angle you desire) to achieve the first part of what you want. The first instinct is to declare an image the same size as either of the originals, draw a line down the middle and rotate it. However, if you do this the line ends up disconnected and does not run from the top to the bottom of the image. That makes sense, because a line drawn at an angle covers more pixels than the same line drawn vertically.
Using the Pythagorean theorem, we know that the longest line that can ever be drawn in your image is the diagonal. Therefore we declare an image whose rows and columns are both sqrt(rows*rows + cols*cols), where rows and cols are the rows and columns of the original image. We then take the ceiling to make sure we've covered as much as possible and add a bit of extra room to accommodate the width of the line. We draw a line on this image, rotate it, then crop it so that it's the same size as the input images. This ensures that the line is fully drawn from top to bottom at whatever angle you choose.
That logic is the hardest part. Once you have it, declare two logical masks: use imfill to fill in the left side of the line as one mask, and invert it to get the other. You also need to use the rotated line image to index into both masks and set those values to false, so the pixels lying on the line are ignored.
Finally, take each mask, index into the corresponding image and copy over the portion you want. Then use the line image to index into the output and set those values to white.
Without further ado, here's the code:
% Load some example data
load mandrill;
% im is the image before
% im2 is the image after
% Before image is a colour image
im = im2uint8(ind2rgb(X, map));
% After image is a grayscale image
im2 = rgb2gray(im);
im2 = cat(3, im2, im2, im2);
% Declare line image
rows = size(im, 1); cols = size(im, 2);
width = 5;
m = ceil(sqrt(rows*rows + cols*cols + width*width));
ln = false([m m]);
mhalf = floor(m / 2); % Find halfway point width wise and draw the line
ln(:,mhalf - floor(width/2) : mhalf + floor(width/2)) = true;
% Rotate the line image
ang = 20; % 20 degrees
lnrotate = imrotate(ln, ang, 'crop');
% Crop the image so that it's the same dimensions as the originals
mrowstart = mhalf - floor(rows/2);
mcolstart = mhalf - floor(cols/2);
lnfinal = lnrotate(mrowstart : mrowstart + rows - 1, mcolstart : mcolstart + cols - 1);
% Make the masks
mask1 = imfill(lnfinal, [1 1]);
mask2 = ~mask1;
mask1(lnfinal) = false;
mask2(lnfinal) = false;
% Make sure the masks have as many channels as the original
mask1 = repmat(mask1, [1 1 size(im,3)]);
mask2 = repmat(mask2, [1 1 size(im,3)]);
% Do the same for the line
lnfinal = repmat(lnfinal, [1 1 size(im, 3)]);
% Specify output image
out = zeros(size(im), class(im));
out(mask1) = im(mask1);
out(mask2) = im2(mask2);
out(lnfinal) = 255;
% Show the image
figure;
imshow(out);
We get:
If you want the line to go in the other direction, simply make the angle ang negative. In the example script above, I've made the angle 20 degrees counter-clockwise (i.e. positive). To reproduce the example you gave, specify -20 degrees instead. I now get this image:
Here's a solution using polygons:
function q44310306
% Load some image:
I = imread('peppers.png');
B = rgb2gray(I);
lt = I; rt = B;
% Specify the boundaries of the white line:
width = 2; % [px]
offset = 13; % [px]
sz = size(I);
wlb = [floor(sz(2)/2)-offset+[0,width]; ceil(sz(2)/2)+offset-[width,0]];
% [top-left, top-right; bottom-left, bottom-right]
% Configure two polygons:
leftPoly = struct('x',[1 wlb(1,2) wlb(2,2) 1], 'y',[1 1 sz(1) sz(1)]);
rightPoly = struct('x',[sz(2) wlb(1,1) wlb(2,1) sz(2)],'y',[1 1 sz(1) sz(1)]);
% Define a helper grid:
[XX,YY] = meshgrid(1:sz(2),1:sz(1));
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y)) = intmin('uint8');
lt(repmat(inpolygon(XX,YY,rightPoly.x,rightPoly.y),1,1,3)) = intmin('uint8');
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y) & ...
   inpolygon(XX,YY,rightPoly.x,rightPoly.y)) = intmax('uint8');
final = bsxfun(@plus,lt,rt);
% Plot:
figure(); imshow(final);
The result:
One solution:
im1 = imread('peppers.png');
im2 = repmat(rgb2gray(im1),1,1,3);
imgsplitter(im1,im2,80) %imgsplitter(image1,image2,angle [0-100])
function imgsplitter(im1,im2,p)
    s1 = size(im1,1); s2 = size(im1,2);
    pix = floor(p*size(im1,2)/100);
    val = abs(pix - (s2-pix));
    dia = imresize(tril(ones(s1)),[s1 val]);
    len = min(abs([0-pix, s2-pix]));
    if p > 50
        ind = [ones(s1,len) fliplr(~dia) zeros(s1,len)];
    else
        ind = [ones(s1,len) dia zeros(s1,len)];
    end
    ind = uint8(ind);
    imshow(ind.*im1 + uint8(~ind).*im2)
    hold on
    plot([pix, s2-pix],[0,s1],'w','LineWidth',1)
end
OUTPUT:

Crop the largest square inside a circle object - Matlab

I am trying to find a way to crop from a circle object (Image A) the largest square that can fit inside it.
Can someone please explain/show me how to find the biggest square fit parameters of the white space inside the circle (Image I) and based on them crop the square in the original image (Image A).
Script:
A = imread('E:/CirTest/Test.jpg');
%imshow(A)
level = graythresh(A);
BW = im2bw(A,level);
%imshow(BW)
I = imfill(BW, 'holes');
imshow(I)
d = imdistline;
[centers, radii, metric] = imfindcircles(A,[1 500]);
imageCrop=imcrop(A, [BoxBottomX BoxBottomY NewX NewY]);
I have a solution for you but it requires a bit of extra work. What I would do first is use imfill but directly on the grayscale image. This way, noisy pixels in uniform areas get inpainted with the same intensities so that thresholding is easier. You can still use graythresh or Otsu's thresholding and do this on the inpainted image.
Here's some code to get you started:
figure; % Open up a new figure
% Read in image and convert to grayscale
A = rgb2gray(imread('http://i.stack.imgur.com/vNECg.jpg'));
subplot(1,3,1); imshow(A);
title('Original Image');
% Find the optimum threshold via Otsu
level = graythresh(A);
% Inpaint noisy areas
I = imfill(A, 'holes');
subplot(1,3,2); imshow(I);
title('Inpainted image');
% Threshold the image
BW = im2bw(I, level);
subplot(1,3,3); imshow(BW);
title('Thresholded Image');
The above code does the three operations that I mentioned, and we see this figure:
Notice that the thresholded image has border pixels that need to be removed so we can concentrate on the circular object. You can use the imclearborder function to remove the border pixels. When we do that:
% Clear off the border pixels and leave only the circular object
BW2 = imclearborder(BW);
figure; imshow(BW2);
... we now get this image:
Unfortunately, there are some noisy pixels, but we can very easily use morphology, specifically the opening operation with a small circular disk structuring element to remove these noisy pixels. Using strel with the appropriate structuring element in addition to imopen should help do the trick:
% Clear out noisy pixels
SE = strel('disk', 3, 0);
out = imopen(BW2, SE);
figure; imshow(out);
We now get:
This mask contains the locations of the circular object that we now need to use to crop our original image. The last part is to use this mask to determine the row and column locations of the object, find its top-left and bottom-right corners, and crop the original image accordingly:
% Find row and column locations of circular object
[row,col] = find(out);
% Find top left and bottom right corners
top_row = min(row);
top_col = min(col);
bottom_row = max(row);
bottom_col = max(col);
% Crop the image
crop = A(top_row:bottom_row, top_col:bottom_col);
% Show the cropped image
figure; imshow(crop);
We now get:
It's not perfect, but it will of course get you started. If you want to copy and paste this in its entirety and run this on your computer, here we are:
figure; % Open up a new figure
% Read in image and convert to grayscale
A = rgb2gray(imread('http://i.stack.imgur.com/vNECg.jpg'));
subplot(2,3,1); imshow(A);
title('Original Image');
% Find the optimum threshold via Otsu
level = graythresh(A);
% Inpaint noisy areas
I = imfill(A, 'holes');
subplot(2,3,2); imshow(I);
title('Inpainted image');
% Threshold the image
BW = im2bw(I, level);
subplot(2,3,3); imshow(BW);
title('Thresholded Image');
% Clear off the border pixels and leave only the circular object
BW2 = imclearborder(BW);
subplot(2,3,4); imshow(BW2);
title('Cleared Border Pixels');
% Clear out noisy pixels
SE = strel('disk', 3, 0);
out = imopen(BW2, SE);
% Show the final mask
subplot(2,3,5); imshow(out);
title('Final Mask');
% Find row and column locations of circular object
[row,col] = find(out);
% Find top left and bottom right corners
top_row = min(row);
top_col = min(col);
bottom_row = max(row);
bottom_col = max(col);
% Crop the image
crop = A(top_row:bottom_row, top_col:bottom_col);
% Show the cropped image
subplot(2,3,6);
imshow(crop);
title('Cropped Image');
... and our final figure is:
You can use bwdist with the L_inf distance (a.k.a. 'chessboard') to get the axis-aligned distance to the edges of the region, from which you can deduce the dimensions of the largest inscribed box:
bw = imread('http://i.stack.imgur.com/7yCaD.png');
lb = bwlabel(bw);
reg = lb==2; %// pick largest area
d = bwdist(~reg,'chessboard'); %// compute the axis aligned distance from boundary inward
r = max(d(:)); %// find the largest distance to boundary
[cy cx] = find(d==r,1); %// find the location most distant
boundedBox = [cx-r, cy-r, 2*r, 2*r];
And the result is
figure;
imshow(bw,'border','tight');
hold on;
rectangle('Position', boundedBox, 'EdgeColor','r');
Once you have the bounding box, you can use imcrop to crop the original image
imageCrop = imcrop(A, boundedBox);
Alternatively, you can index into the image directly:
imageCrop = A(cy + (-r:r-1), cx + (-r:r-1), :);

Image Cropping using Predefined ROI Matlab

Basically, what I'm trying to do is use predefined ROIs to crop an image into multiple new images.
The longer version is that I have a map of a brain with sections of it defined. Using that, I define many ROIs in MATLAB with imfreehand or roipoly. I also have stained slides of these sections, and I want to use the ROIs I defined on the map to crop the image of the real brain into many new images.
I'm just having trouble finding something that uses the ROIs as the cropping area and not just a rectangle.
If I need to explain a bit more let me know.
Simple example of what I think you want using imfreehand:
I = imread('pout.tif');
imshow(I);
h = imfreehand; % now pick ROI
BW = createMask(h); % get BW mask for that ROI
pos = getPosition(h); % get position for that ROI
% define bounding box
x1 = round(min(pos(:,2)));
y1 = round(min(pos(:,1)));
x2 = round(max(pos(:,2)));
y2 = round(max(pos(:,1)));
I2 = I.*uint8(BW); % apply mask to image
I2 = I2(x1:x2,y1:y2);
figure;
subplot(1,2,1);
imshow(I);
subplot(1,2,2);
imshow(I2);
If you have the ROIs already saved in some way and don't want to run imfreehand again, all you really need is to compute BW (a mask with ones inside the ROI and zeros elsewhere) and the bounding box (to crop tightly around the ROI).
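For instance, here is a minimal sketch assuming each ROI was saved as an N-by-2 list of polygon vertices (the variable name pos is just illustrative); poly2mask rebuilds the mask without any interactive tool, and the rest mirrors the code above:
% pos: saved N-by-2 ROI vertices as [x y] columns (illustrative name)
BW = poly2mask(pos(:,1), pos(:,2), size(I,1), size(I,2)); % mask from the polygon
% bounding box of the ROI, then mask and crop as above
x1 = round(min(pos(:,2))); x2 = round(max(pos(:,2)));
y1 = round(min(pos(:,1))); y2 = round(max(pos(:,1)));
I2 = I .* uint8(BW);
I2 = I2(x1:x2, y1:y2);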
