I want to crop the detected faces in my code. Here is my code:
function DisplayDetections(im, dets)
imshow(im);
k = size(dets,1);
hold on;
for i=1:k
rectangle('Position', dets(i,:),'LineWidth',2,'EdgeColor', 'r');
end
imcrop(rectangle);
hold off;
There is a syntax error in the cropping.
Can anybody help with cropping the rectangle boxes detected in the code above?
That code only draws the rectangles in your image. If you actually want to crop out portions of the image with the defined rectangles, use imcrop.
As such, you would do something like this to store all of your cropped rectangles. This is assuming that im and dets are already defined in your code from your function:
k = size(dets,1);
cropped = cell(1,k);
for i=1:k
cropped{i} = imcrop(im, dets(i,:));
end
cropped would be a cell array where each element will store a cropped image defined by each rectangle within your dets array. This is assuming that dets is a 2D array where there are 4 columns, and the number of rows determines how many rectangles you have. Each row of dets should be structured like:
[xmin ymin width height]
xmin and ymin are the horizontal and vertical coordinates of the top-left corner of the rectangle, and width and height are the width and height of the rectangle.
If you want to access a cropped portion in the cell array, simply do:
crp = cropped{k};
k would be the kth rectangle detected in your image.
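For reference, the same crop can be expressed with zero-based array slicing. This is a Python/NumPy sketch, not part of the MATLAB answer; the array and rectangle values are invented for illustration:

```python
import numpy as np

im = np.arange(64).reshape(8, 8)   # stand-in 8x8 grayscale image
xmin, ymin, w, h = 3, 2, 4, 3      # MATLAB-style 1-based [xmin ymin width height]

# Convert the 1-based MATLAB rectangle to 0-based slices: rows are y, columns are x
crop = im[ymin - 1:ymin - 1 + h, xmin - 1:xmin - 1 + w]

print(crop.shape)  # (3, 4): h rows by w columns
```

The key point is the row/column order: the rectangle stores x first, but array indexing takes the row (y) first.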
I have an image which has three classes. Each class is labelled by a number {2,3,4} and the background is {1}. I want to draw the contours of each class in the image. I tried the MATLAB code below, but the contours overlap (blue and green, yellow and green). How can I draw one contour per class?
Img=ones(128,128);
Img(20:end-20,20:end-20)=2;
Img(30:end-30,30:end-30)=3;
Img(50:end-50,50:end-50)=4;
%%Img(60:end-60,60:end-60)=3; %% Add one more rectangle
imagesc(Img);colormap(gray);hold on; axis off;axis equal;
[c2,h2] = contour(Img==2,[0 1],'g','LineWidth',2);
[c3,h3] = contour(Img==3,[0 1],'b','LineWidth',2);
[c4,h4] = contour(Img==4,[0 1],'y','LineWidth',2);
hold off;
This is my expected result
This is happening because each "class" is defined as a hollow square in terms of its shape. Therefore, when you use contour it traces over all boundaries of the square. Take for example just one class when you plot this on the figure. Specifically, take a look at your first binary image you create with Img == 2. We get this image:
Therefore, if you called contour on this shape, you'd actually be tracing the boundaries of this object. It makes more sense now doesn't it? If you repeated this for the rest of your classes, this is the reason why the contour lines are overlapping in colour. The innermost part of the hollow square is overlapping with the outermost part of another square. Now when you call contour the first time you actually will get this:
As you can see, "class 2" is actually defined to be the hollowed out grey square. If you want to achieve what you desire, one way is to fill in each hollow square then apply contour to this result. Assuming you have the image processing toolbox, use imfill with the 'holes' option at each step:
Img=ones(128,128);
Img(20:end-20,20:end-20)=2;
Img(30:end-30,30:end-30)=3;
Img(50:end-50,50:end-50)=4;
imagesc(Img);colormap(gray);hold on; axis off;axis equal;
%// New
%// Create binary mask with class 2 and fill in the holes
im = Img == 2;
im = imfill(im, 'holes');
%// Now draw contour
[c2,h2] = contour(im,[0 1],'g','LineWidth',2);
%// Repeat for the rest of the classes
im = Img == 3;
im = imfill(im, 'holes');
[c3,h3] = contour(im,[0 1],'b','LineWidth',2);
im = Img == 4;
im = imfill(im, 'holes');
[c4,h4] = contour(im,[0 1],'y','LineWidth',2);
hold off;
We now get this:
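The fill-then-trace idea can also be checked outside MATLAB. Here is a rough Python/NumPy sketch using SciPy's binary_fill_holes as a stand-in for imfill with 'holes', with the label image rebuilt from the question (0-based slices approximate the MATLAB ranges):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Rebuild the nested label image from the question
Img = np.ones((128, 128))
Img[20:-20, 20:-20] = 2
Img[30:-30, 30:-30] = 3
Img[50:-50, 50:-50] = 4

mask2 = Img == 2                    # hollow square frame for class 2
filled2 = binary_fill_holes(mask2)  # solid square: what contour should trace

print(mask2[64, 64], filled2[64, 64])  # False True: the centre is filled in
```

Because each class mask is a hollow frame, only the filled version gives a single outer contour per class.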
The background of my question is the following.
I have a picture and a crop rectangle which describes how the picture should be cropped to produce the resulting picture. The crop rectangle is always smaller than, or at most the size of, the picture.
Now it should be possible to rotate the crop rectangle.
This means that when rotating the crop rectangle inside the picture, the crop must be scaled so that its extent does not exceed the photo.
Can anybody help me with a formula for computing the scale of the crop rectangle based on the axis-aligned photo rectangle?
My first attempt was to compute an axis-aligned bounding box of the crop rectangle and then make it fit the photo rectangle, but somehow I got stuck with this approach.
Edited:
One more thing to note:
- The crop rectangle can have different dimensions and a different centre point inside the surrounding rectangle. This means the crop rectangle can be much smaller but located, for example, at the lower-left bound of the picture rectangle. So when rotating the smaller crop it can still exceed its limits.
Thanks in advance
Sebastian
When you rotate an axis-aligned rectangle of width w and height h by an angle φ, the width and height of the rotated rectangle's axis-aligned bounding box are:
W = w·|cos φ| + h·|sin φ|
H = w·|sin φ| + h·|cos φ|
(The notation |x| denotes an absolute value.) This is the bounding box of the rotated crop rectangle which you can scale to fit the original rectangle of width wo and height ho with the factor
a = min(wo / W, ho / H)
If a is at least 1, the rotated crop rectangle already fits inside the original rectangle and you don't have to scale. Otherwise, reduce the crop rectangle to the scaled dimensions
W′ = a·W
H′ = a·H
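As a quick numeric check of the formulas above, here is a Python sketch with made-up crop and photo dimensions:

```python
import math

w, h = 100.0, 50.0      # crop rectangle (invented values)
wo, ho = 120.0, 80.0    # photo rectangle (invented values)
phi = math.radians(45)

# Axis-aligned bounding box of the rotated crop rectangle
W = w * abs(math.cos(phi)) + h * abs(math.sin(phi))
H = w * abs(math.sin(phi)) + h * abs(math.cos(phi))

# Scale factor that makes the bounding box fit the photo
a = min(wo / W, ho / H)
scale = min(a, 1.0)     # only shrink; if a >= 1 the rotated crop already fits

print(round(W, 3), round(H, 3), round(scale, 3))  # 106.066 106.066 0.754
```

At 45 degrees the bounding box is largest relative to the crop, so the scale factor is smallest there.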
You could start by checking whether the dimensions of the cropped rectangle fit in the old rectangle:
bound_x = a * cos(theta) + b * sin(theta)
bound_y = b * cos(theta) + a * sin(theta)
where a and b are the new dimensions, theta is the angle, and bound_x and bound_y should be smaller than the original rectangle's width and height.
I have a four-element position vector [xmin ymin width height] that specifies the size and position of a crop rectangle from image I. How can I find the new position and size for the resized image I?
It is not entirely clear, what you want, as we don't know your coordinate system. Assuming x is the horizontal axis and y is the vertical axis and your point (1,1) is at the top left corner, you can use the following snippet:
p = [xmin ymin width height];
I = I_orig(p(2):p(2)+p(4)-1,p(1):p(1)+p(3)-1);
The size is of course your specified width and height.
You can convert your original bounding box to relative values (that is assuming the image size is 1x1)
[origH origW] = size( origI(:,:,1) );
relativeBB = [xmin / origW, ymin / origH, width / origW, height / origH];
Now, no matter how you resized your origI, you can recover the bounding box w.r.t the new size from the relative representation:
[currH currW] = size(I(:,:,1));
currBB = relativeBB .* [currW, currH, currW, currH];
You might need to round things a bit: you might find floor better for xmin and ymin and ceil more suitable for width and height.
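The same relative-representation trick in a Python sketch (the image sizes and the box are invented for illustration):

```python
import math

orig_w, orig_h = 640, 480
xmin, ymin, width, height = 100, 50, 200, 120   # [xmin ymin width height]

# Normalise the box to a 1x1 image
rel = (xmin / orig_w, ymin / orig_h, width / orig_w, height / orig_h)

# Recover the box after the image is resized to 320x240
cur_w, cur_h = 320, 240
cur_bb = (math.floor(rel[0] * cur_w), math.floor(rel[1] * cur_h),
          math.ceil(rel[2] * cur_w), math.ceil(rel[3] * cur_h))

print(cur_bb)  # (50, 25, 100, 60): the box halves along with the image
```

As the answer suggests, floor is used for the corner and ceil for the extent so the recovered box never undershoots the region.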
I have to transform pixels from one image onto another image, by feature detection. I have calculated the projective transformation matrix. One image is the base image, and the other is a linearly translated image.
Now I have to define a larger grid and assign pixels from the base image to it. For example, if the base image is 20 at (1,1), on the larger grid I will have 20 at (1,1). and assign zeroes to all the unfilled values of the grid. Then I have to map the linearly translated image onto the base image and write my own algorithm based on "delaunay triangulation" to interpolate between the images.
My question is that when I map the translated image to the base image, I use the concept
(w, z) = inv(T) * (x, y)
A = inv(T) * B
where (w,z) are coordinates of the base image, (x,y) are coordinates of the translated image, A is a matrix containing coordinates (w z 1) and B is a matrix containing coordinates (x y 1). (This is a matrix product, *, not the element-wise .* operator.)
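To make the homogeneous-coordinate mapping concrete, here is a small Python/NumPy check using the row-vector convention of the code below ([x y 1] times T, translation in the third row); the transform is a made-up pure translation, not the question's actual matrix:

```python
import numpy as np

# Made-up pure translation by (10, 20), row-vector convention as in the code below
T = np.array([[1.0,  0.0, 0.0],
              [0.0,  1.0, 0.0],
              [10.0, 20.0, 1.0]])

B = np.array([13.0, 24.0, 1.0])   # point (x, y) in the translated image
A = B @ np.linalg.inv(T)          # corresponding point in the base image
w, z = A[0] / A[2], A[1] / A[2]   # divide by the homogeneous coordinate

print(w, z)  # close to (3.0, 4.0): the translation is undone
```

The division by the third homogeneous component matters for general projective transforms, even though it is 1 for a pure translation.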
If I use the following code I get the new coordinates, but how do I relate these things to the image? Are my pixels from the second image also translated onto the first image? If not, how can I do this?
close all; clc; clear all;
image1_gray=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
figure; imshow(image1_gray); axis on; grid on;
title('Base image');
impixelinfo
hold on
image2_gray =imread('C:\Users\Javeria Farooq\Desktop\project images\j.pgm');
figure(2); imshow(image2_gray); axis on; grid on;
title('Unregistered image1');
impixelinfo
% Detect and extract features from both images
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 5, 'MetricThreshold', 500 );
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);
% Match feature vectors
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true) ;
% Get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));
figure; showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
legend('matched points 1','matched points 2');
%figure(5); showMatchedFeatures(image1_gray,image3_gray,matched_pts4,matched_pts3,'montage'); % image3_gray and matched_pts3/4 are not defined above
%legend('matched points 1','matched points 3');
% Compute the transformation matrix using RANSAC
[tform, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts1, matched_pts2, 'projective')
figure(6); showMatchedFeatures(image1_gray,image2_gray,inlierPanoPoints,inlierFramePoints,'montage');
[m n] = size(image1_gray);
image1_gray = double(image1_gray);
[x1g,x2g]=meshgrid(m,n) % A MESH GRID OF 2X2
k=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
ind = sub2ind( size(k),x1g,x2g);
%[tform1, inlierFramepPoints, inlierPanopPoints, status] = estimateGeometricTransform(matched_pts4, matched_pts3, 'projective')
%figure(7); showMatchedFeatures(image1_gray,image3_gray,inlierPanopPoints,inlierFramepPoints,'montage');
%invtform=invert(tform)
%x=invtform
%[xq,yq]=meshgrid(1:0.5:200.5,1:0.5:200.5);
r=[];
A=[];
k=1;
% I did not know how to refer to the variable tform, so I wrote out the
% transformation matrix from the tform structure
T=[0.99814272,-0.0024304502,-1.2932052e-05;2.8876773e-05,0.99930143,1.6285858e-06;0.029063907,67.809265,1]
%lets take i=1:400 so my r=2 and resulting grid is 400x400
for i=1:200
for j=1:200
A=[A; i j 1];
z=A*T;
r=[r;z(k,1)/z(k,3),z(k,2)/z(k,3)];
k=k+1;
end
end
%i have transformed the coordinates but how to assign values??
%r(i,j)=c(i,j)
d1=[];
d2=[];
for l=1:40000
d1=[d1;A(l,1)];
d2=[d2;r(l,1)];
X=[d1 d2];
X=X(:);
end
c1=[];
c2=[];
for l=1:40000
c1=[c1;A(l,2)];
c2=[c2;r(l,2)];
Y=[c1 c2];
Y=Y(:);
end
% this Delaunay triangulation is of the vertices; as far as I understand, it
% does not carry any pixel values from either image
DT=delaunayTriangulation(X,Y);
triplot(DT,X,Y);
I solved this problem using these two steps:
Use the transformPointsForward command to transform the coordinates of the image, using the tform object returned by estimateGeometricTransform.
Use the scatteredInterpolant class in MATLAB to assign the transformed coordinates their respective pixel values:
F = scatteredInterpolant(P, z)
Here P is an n-by-2 matrix containing all the transformed coordinates, and z is an n-by-1 vector containing the pixel values of the image being transformed; it is obtained by converting the image to a column vector with image = image(:).
Finally, all the transformed coordinates are present along with their pixel values on the base image and can be interpolated.
You are doing way too much work here, and I don't think you need the Delaunay Triangulation at all. Use the imwarp function from the Image Processing Toolbox to transform the image. It takes the original image and the tform object returned by estimateGeometricTransform.
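For intuition, the accepted answer's two steps (forward-transform the coordinates, then interpolate pixel values) can be sketched in Python, with SciPy's griddata standing in for transformPointsForward plus scatteredInterpolant. The image and the homography are made up for illustration:

```python
import numpy as np
from scipy.interpolate import griddata

img = np.arange(100, dtype=float).reshape(10, 10)  # stand-in image

# Made-up homography: a pure translation by (5, 3), column-vector convention
T = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])

# Step 1: forward-transform every pixel coordinate
ys, xs = np.mgrid[0:10, 0:10]
pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
wx, wy, wz = T @ pts
wx, wy = wx / wz, wy / wz

# Step 2: interpolate the scattered (coordinate, value) pairs at a query point
val = griddata(np.column_stack([wx, wy]), img.ravel(),
               [(7.0, 4.0)], method='linear')

print(val[0])  # ~12.0: pixel (x=2, y=1) of img, shifted to (7, 4)
```

In practice imwarp (MATLAB) or skimage.transform.warp (Python) do all of this for you, and far more efficiently, which is the point of the answer above.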
I'm looking to create an image in MATLAB of a large black rectangle with 9 small circles arranged as a 3x3 array aligned in the centre of the rectangle, i.e. the centre circle will have its midpoint in the centre of the square.
I need the circles evenly spaced, with some distance between each circle and between the outer circles and the border of the rectangle (think of a square piece of paper with 9 holes punched in it). I need this so that I can see how image convolution with a 2D Gaussian will distort the image.
However, I'm relatively new to MATLAB and have been trying to create this image. I have successfully made a black/white square and a white circle in a black square which takes up most of the square itself, but I can't seem to make a small white circle in any desired location in a black square, let alone multiple small circles in a specific alignment.
This is what I have used to create the black square with a large circle:
X = ones([100,1])*([-50:49]);
Y = ([-50:49]')*(ones([1,100]));
Z = (X.^2)+(Y.^2);
image = zeros([100 100]);
image(find(Z<=50^2)) = 1;
imshow(image)
If I understood correctly, try this:
% size of each small box. Final image will be 3Nx3N
N = 100;
% create a circle mask
t = linspace(0,2*pi,50); % circle outline approximated by 50 points
r = (N-10)/2; % circles will be separated by a 10-pixel border
circle = poly2mask(r*cos(t)+N/2+0.5, r*sin(t)+N/2+0.5, N, N);
% replicate to build image
img = repmat(circle, 3,3);
subplot(121), imshow(img)
% after applying Gaussian filter
h = fspecial('gaussian', [15 15], 2.5);
img2 = imfilter(im2double(img), h);
subplot(122), imshow(img2)