Edge orientation - image

Hi, I am trying to get the boundary orientation of an image from the image gradient or the Canny edge detector, as in equation 11 of http://www.cs.swan.ac.uk/~csjason/papers/xxmm-pami2008.pdf
I currently have:
clear all
Img = imread('littlecircle.png');
Img = Img(:,:,1);
Img = double(Img);
w = size(Img,1); % width size
h = size(Img,2); % height size
[Ix,Iy] = gradient(Img); %gradient of image
i=1; %iteration for magnetic field loop
b=0; %initialize b to zero
% Magnetic Field
for pxRow = 1:h % fixed pixel row
    for pxCol = 1:w % fixed pixel column
        for r = 1:h % row of distant pixel
            for c = 1:w % column of distant pixel
                O(c,r) = [-Iy(c,r),Ix(c,r)]; % O(x) = (-1).^lambda(-Iy(x),Ix(x)) --ERROR HERE
            end
        end
        B(i) = {O}; % filling a cell array with results. read below
        i = i+1;
    end
end
However, I am getting a subscript indices mismatch error when storing into O(c,r). Why is this? Also, if anyone thinks there is a better way to do this from the paper, I would love to hear it. Thanks.

You could do the Canny + orientation detection in one step, by modifying MATLAB's Canny edge detection code or by modifying an alternative like this. Canny works by determining the orientation at each step, so you could modify the Canny code to also return an orientation map for each pixel.
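As a side note on the error itself: O(c,r) is a single element of O, so assigning the 1 x 2 vector [-Iy(c,r), Ix(c,r)] to it fails. A minimal sketch of one way to build the whole orientation field without the nested loops (variable names follow the question; storing the two components along a third dimension, and the atan2 angle map, are my own choices rather than the paper's exact formulation):
Img = imread('littlecircle.png');
Img = double(Img(:,:,1));
[Ix, Iy] = gradient(Img);      % image gradient
O = cat(3, -Iy, Ix);           % O(:,:,1) = -Iy and O(:,:,2) = Ix at every pixel
theta = atan2(Ix, -Iy);        % orientation angle per pixel, treating -Iy as the x-component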

Related

Merging 2 images by showing one next to the other separated by a diagonal line

I have 2 images ("before" and "after"). I would like to show a final image where the left half is taken from the before image and the right half is taken from the after image.
The images should be separated by a white diagonal line of predefined width (2 or 3 pixels), where the diagonal is specified either by a certain angle or by 2 start and end coordinates. The diagonal should overwrite a part of the final image such that the size is the same as the sources'.
Example:
I know it can be done by looping over all pixels to recombine and create the final image, but is there an efficient way, or better yet, a built-in function that can do this?
Unfortunately I don't believe there is a built-in solution to your problem, but I've developed some code to do this; it does require the Image Processing Toolbox to play nicely. As mentioned in your comments, you have this already, so we should be fine.
The logic behind this is relatively simple. We will assume that your before and after pictures are the same size and also share the same number of channels. The first part is to declare a blank image and draw a straight line down the middle of a certain thickness. The intricacy behind this is to declare an image that is slightly bigger than the original size of the image. The reason is that I'm going to draw a line down the middle, then rotate this blank image by a certain angle to achieve the first part of what you desire. I'll be using imrotate to rotate the image by any angle you desire. The first instinct is to declare an image that's the same size as either of the originals, draw a line down the middle and rotate it. However, if you do this you'll end up with the line being disconnected and not drawn from the top to the bottom of the image. That makes sense, because a line drawn at an angle covers more pixels than one drawn vertically.
Using the Pythagorean theorem, we know that the longest line that can ever be drawn on your image is the diagonal. Therefore we declare an image that is sqrt(rows*rows + cols*cols) in both the rows and columns, where rows and cols are the rows and columns of the original image. After, we take the ceiling to make sure we've covered as much as possible and add a bit of extra room to accommodate the width of the line. We draw a line on this image, rotate it, then crop the image so that it's the same size as the input images. This ensures that the line, at whatever angle you wish, is fully drawn from top to bottom.
That logic is the hardest part. Once you do that, you declare two logical masks: use imfill to fill in the left side of the line as one mask, and invert it to get the other mask. You will also need to use the line image that we created earlier with imrotate to index into both masks and set those values to false, so that we ignore the pixels that lie on the line.
Finally, you take each mask, index into your image and copy over each portion of the image you desire. Then use the line image to index into the output and set those values to white.
Without further ado, here's the code:
% Load some example data
load mandrill;
% im is the image before
% im2 is the image after
% Before image is a colour image
im = im2uint8(ind2rgb(X, map));
% After image is a grayscale image
im2 = rgb2gray(im);
im2 = cat(3, im2, im2, im2);
% Declare line image
rows = size(im, 1); cols = size(im, 2);
width = 5;
m = ceil(sqrt(rows*rows + cols*cols + width*width));
ln = false([m m]);
mhalf = floor(m / 2); % Find halfway point width wise and draw the line
ln(:,mhalf - floor(width/2) : mhalf + floor(width/2)) = true;
% Rotate the line image
ang = 20; % 20 degrees
lnrotate = imrotate(ln, ang, 'crop');
% Crop the image so that it's the same dimensions as the originals
mrowstart = mhalf - floor(rows/2);
mcolstart = mhalf - floor(cols/2);
lnfinal = lnrotate(mrowstart : mrowstart + rows - 1, mcolstart : mcolstart + cols - 1);
% Make the masks
mask1 = imfill(lnfinal, [1 1]);
mask2 = ~mask1;
mask1(lnfinal) = false;
mask2(lnfinal) = false;
% Make sure the masks have as many channels as the original
mask1 = repmat(mask1, [1 1 size(im,3)]);
mask2 = repmat(mask2, [1 1 size(im,3)]);
% Do the same for the line
lnfinal = repmat(lnfinal, [1 1 size(im, 3)]);
% Specify output image
out = zeros(size(im), class(im));
out(mask1) = im(mask1);
out(mask2) = im2(mask2);
out(lnfinal) = 255;
% Show the image
figure;
imshow(out);
We get:
If you want the line to go in the other direction, simply make the angle ang negative. In the example script above, I've made the angle 20 degrees counter-clockwise (i.e. positive). To reproduce the example you gave, specify -20 degrees instead. I now get this image:
Here's a solution using polygons:
function q44310306
% Load some image:
I = imread('peppers.png');
B = rgb2gray(I);
lt = I; rt = B;
% Specify the boundaries of the white line:
width = 2; % [px]
offset = 13; % [px]
sz = size(I);
wlb = [floor(sz(2)/2)-offset+[0,width]; ceil(sz(2)/2)+offset-[width,0]];
% [top-left, top-right; bottom-left, bottom-right]
% Configure two polygons:
leftPoly = struct('x',[1 wlb(1,2) wlb(2,2) 1], 'y',[1 1 sz(1) sz(1)]);
rightPoly = struct('x',[sz(2) wlb(1,1) wlb(2,1) sz(2)],'y',[1 1 sz(1) sz(1)]);
% Define a helper grid:
[XX,YY] = meshgrid(1:sz(2),1:sz(1));
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y)) = intmin('uint8');
lt(repmat(inpolygon(XX,YY,rightPoly.x,rightPoly.y),1,1,3)) = intmin('uint8');
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y) & ...
inpolygon(XX,YY,rightPoly.x,rightPoly.y)) = intmax('uint8');
final = bsxfun(@plus,lt,rt);
% Plot:
figure(); imshow(final);
The result:
One solution:
im1 = imread('peppers.png');
im2 = repmat(rgb2gray(im1),1,1,3);
imgsplitter(im1,im2,80) %imgsplitter(image1,image2,angle [0-100])
function imgsplitter(im1,im2,p)
    s1 = size(im1,1); s2 = size(im1,2);
    pix = floor(p*size(im1,2)/100);
    val = abs(pix -(s2-pix));
    dia = imresize(tril(ones(s1)),[s1 val]);
    len = min(abs([0-pix,s2-pix]));
    if p>50
        ind = [ones(s1,len) fliplr(~dia) zeros(s1,len)];
    else
        ind = [ones(s1,len) dia zeros(s1,len)];
    end
    ind = uint8(ind);
    imshow(ind.*im1+uint8(~ind).*im2)
    hold on
    plot([pix,s2-pix],[0,s1],'w','LineWidth',1)
end
OUTPUT:

Extract a page from a uniform background in an image

If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
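For reference, a minimal sketch of that naive global-threshold approach (the threshold value of 128 is an arbitrary assumption, and the URL is the image from the question):
im   = imread('http://i.stack.imgur.com/MEcaz.jpg');
gray = rgb2gray(im);
bw   = gray > 128;   % one fixed threshold for the whole image
imshow(bw);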
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it, an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within a window in O(1) time. Computing the integral image itself takes a full pass over the image, but you only have to do that once. With the integral image, you scan neighbourhoods of pixels of size s x s and check whether each pixel's intensity is more than t% below the average intensity within its s x s window; if it is, the pixel is classified as background, otherwise as foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a single global threshold.
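To make the O(1) window sum concrete, here is a small sketch using an arbitrary image and an interior window (boundary clamping is omitted; the full function below handles it):
im = double(rgb2gray(imread('peppers.png')));
intImage = cumsum(cumsum(im, 2), 1);      % integral image: computed once, reused for every window
y1 = 10; y2 = 20; x1 = 30; x2 = 45;       % an example window in the interior of the image
S = intImage(y2,x2) - intImage(y1-1,x2) - intImage(y2,x1-1) + intImage(y1-1,x1-1);
% S equals sum(sum(im(y1:y2, x1:x2))), but needs only four lookups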
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to the foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s is the size of the thresholding window and t is the percentage we talked about above. You can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Too few or too many arguments?
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
im = double(rgb2gray(im));
elseif size(im, 3) == 1
im = double(im);
else
error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom of the image, and there are some holes inside the paper that we need to fill in. As such, let's use some morphology: declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!

How to compute horizontal gradient value?

So I want to measure the vertical edges of an image to use it later as depth cue for 2D to 3D conversion.
To do so I will have to compute the horizontal gradient value for each block to measure the vertical edges, as follows:
ḡ(x,y) = (1/N) · Σ_{(x',y') ∈ Ω(x,y)} g(x',y')
Where:
g(x',y') is the horizontal gradient at a pixel location (x',y'),
Ω(x,y) is the neighbourhood of the pixel location (x,y),
and N is the number of pixels in Ω(x,y).
So Here is what I did on matlab:
I = im2double(imread('landscape.jpg'));
% convert RGB to gray
gI = rgb2gray(I);
[nrow, ncol] = size(gI);
% divide the image into 4-by-4 blocks
gI = mat2tiles((gI),[4,4]);
N = 4*4; % block size
% For each block, compute the horizontal gradient
gI = reshape([gI{:}],4*4, []);
mask = fspecial('sobel');
g = imfilter(gI, mask);
g_bar = g./N;
g_bar = reshape(g_bar,nrow, ncol);
I'm new to MATLAB, so I'm not sure if my code expresses the equation in the right way.
Can you please let me know if you think it is correct, as I'm not sure how to test the output?
There's no need for you to decompose your image into 4 x 4 blocks. The horizontal gradient can be computed with a Sobel or Prewitt filter, which is 3 x 3 and can be passed directly to imfilter. imfilter performs 2D convolution / filtering with a specified mask / kernel for you, so tiling is not necessary. As such, you can just use imfilter with the mask defined through fspecial, and define N = 9. Therefore:
I = im2double(imread('landscape.jpg'));
% convert RGB to gray
gI = rgb2gray(I);
N = 9;
mask = fspecial('sobel');
g = imfilter(gI, mask);
g_bar = g./N;
From experience, increasing the size of your gradient mask won't give you much better results. You want to ensure that the mask is as small as possible to capture as many local changes as possible.
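If you do want the explicit neighbourhood average ḡ from your equation (rather than relying on the 3 x 3 extent of the Sobel mask alone), here is a minimal sketch assuming a 4 x 4 window to match your original block size; note that fspecial('sobel') emphasizes horizontal edges, so transpose it if you specifically want the horizontal gradient that responds to vertical edges:
I  = im2double(imread('landscape.jpg'));
gI = rgb2gray(I);
g  = imfilter(gI, fspecial('sobel')');   % horizontal gradient at each pixel
N  = 16;                                 % number of pixels in a 4 x 4 neighbourhood
g_bar = imfilter(g, ones(4)/N);          % local average of the gradient over the window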

Plot images as axis labels in MATLAB

I am plotting a 7x7 pixel 'image' in MATLAB, using the imagesc command:
imagesc(conf_matrix, [0 1]);
This represents a confusion matrix, between seven different objects. I have a thumbnail picture of each of the seven objects that I would like to use as the axes tick labels. Is there an easy way to do this?
I don't know an easy way. The axes property XTickLabel, which determines the labels, can only contain strings.
If you want a not-so-easy way, you could do something in the spirit of the following incomplete code, which creates one label:
h = imagesc(rand(7,7));
axh = gca;
figh = gcf;
xticks = get(gca,'xtick');
yticks = get(gca,'ytick');
set(gca,'XTickLabel','');
set(gca,'YTickLabel','');
pos = get(axh,'position'); % position of current axes in parent figure
pic = imread('coins.png');
x = pos(1);
y = pos(2);
dlta = (pos(3)-pos(1)) / length(xticks); % square size in units of parent figure
% create image label
lblAx = axes('parent',figh,'position',[x+dlta/4,y-dlta/2,dlta/2,dlta/2]);
imagesc(pic,'parent',lblAx)
axis(lblAx,'off')
One problem is that the label will have the same colormap of the original image.
@Itmar Katz gives a solution very close to what I want to do, which I've marked as 'accepted'. In the meantime, I made this dirty solution using subplots, which I've given here for completeness. It only works up to a certain size of input matrix, though, and only displays well when the figure is square.
conf_mat = randn(5);
A = imread('peppers.png');
tick_images = {A, A, A, A, A};
n = length(conf_mat) + 1;
% plotting axis labels at left and top
for i = 1:(n-1)
    subplot(n, n, i + 1);
    imshow(tick_images{i});
    subplot(n, n, i * n + 1);
    imshow(tick_images{i});
end
% generating logical array for where the confusion matrix should be
idx = 1:(n*n);
idx(1:n) = 0;
idx(mod(idx, n)==1) = 0;
% plotting the confusion matrix
subplot(n, n, find(idx~=0));
imshow(conf_mat);
axis image
colormap(gray)

How can I draw a circle on an image in MATLAB?

I have an image in MATLAB:
im = rgb2gray(imread('some_image.jpg'));
% normalize the image to be between 0 and 1
im = im/max(max(im));
And I've done some processing that resulted in a number of points that I want to highlight:
points = some_processing(im);
Where points is a matrix the same size as im with ones in the interesting points.
Now I want to draw a circle on the image in all the places where points is 1.
Is there any function in MATLAB that does this? The best I can come up with is:
[x_p, y_p] = find (points);
[x, y] = meshgrid(1:size(im,1), 1:size(im,2))
r = 5;
circles = zeros(size(im));
for k = 1:length(x_p)
circles = circles + (floor((x - x_p(k)).^2 + (y - y_p(k)).^2) == r);
end
% normalize circles
circles = circles/max(max(circles));
output = im + circles;
imshow(output)
This seems more than somewhat inelegant. Is there a way to draw circles similar to the line function?
You could use the normal PLOT command with a circular marker point:
[x_p,y_p] = find(points);
imshow(im); %# Display your image
hold on; %# Add subsequent plots to the image
plot(y_p,x_p,'o'); %# NOTE: x_p and y_p are switched (see note below)!
hold off; %# Any subsequent plotting will overwrite the image!
You can also adjust these other properties of the plot marker: MarkerEdgeColor, MarkerFaceColor, MarkerSize.
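For example (the colours and size here are arbitrary choices):
plot(y_p, x_p, 'o', 'MarkerEdgeColor', 'r', 'MarkerFaceColor', 'y', 'MarkerSize', 10);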
If you then want to save the new image with the markers plotted on it, you can look at this answer I gave to a question about maintaining image dimensions when saving images from figures.
NOTE: When plotting image data with IMSHOW (or IMAGE, etc.), the normal interpretation of rows and columns essentially becomes flipped. Normally the first dimension of data (i.e. rows) is thought of as the data that would lie on the x-axis, and is probably why you use x_p as the first set of values returned by the FIND function. However, IMSHOW displays the first dimension of the image data along the y-axis, so the first value returned by FIND ends up being the y-coordinate value in this case.
This file by Zhenhai Wang from Matlab Central's File Exchange does the trick.
%----------------------------------------------------------------
% H=CIRCLE(CENTER,RADIUS,NOP,STYLE)
% This routine draws a circle with center defined as
% a vector CENTER, radius as a scalar RADIUS. NOP is
% the number of points on the circle. As to STYLE,
% use it the same way as you use the routine PLOT.
% Since the handle of the object is returned, you
% use routine SET to get the best result.
%
% Usage Examples,
%
% circle([1,3],3,1000,':');
% circle([2,4],2,1000,'--');
%
% Zhenhai Wang <zhenhai@ieee.org>
% Version 1.00
% December, 2002
%----------------------------------------------------------------
Funny! There are 6 answers here, and none of them gives the obvious solution: the rectangle function.
From the documentation:
Draw a circle by setting the Curvature property to [1 1]. Draw the circle so that it fills the rectangular area between the points (2,4) and (4,6). The Position property defines the smallest rectangle that contains the circle.
pos = [2 4 2 2];
rectangle('Position',pos,'Curvature',[1 1])
axis equal
So in your case:
imshow(im)
hold on
[y, x] = find(points);
for ii = 1:length(x)
    pos = [x(ii), y(ii)];
    pos = [pos-0.5, 1, 1];
    rectangle('position', pos, 'curvature', [1 1])
end
As opposed to the accepted answer, these circles will scale with the image; you can zoom in and they will always mark the whole pixel.
Hmm I had to re-switch them in this call:
k = convhull(x,y);
figure;
imshow(image); %# Display your image
hold on; %# Add subsequent plots to the image
plot(x,y,'o'); %# NOTE: x_p and y_p are switched (see note below)!
hold off; %# Any subsequent plotting will overwrite the image!
In reply to the comments:
x and y are created using the following code:
temp_hull = stats_single_object(k).ConvexHull;
for k2 = 1:length(temp_hull)
i = i+1;
[x(i,1)] = temp_hull(k2,1);
[y(i,1)] = temp_hull(k2,2);
end;
it might be that the ConvexHull is the other way around and therefore the plot is different. Or that I made a mistake and it should be
[x(i,1)] = temp_hull(k2,2);
[y(i,1)] = temp_hull(k2,1);
However, the documentation is not clear about which column is x or y:
Quote: "Each row of the matrix contains the x- and y-coordinates of one vertex of the polygon. "
I read this as x being the first column and y the second column.
In newer versions of MATLAB (I have 2013b) the Computer Vision System Toolbox contains the vision.ShapeInserter System object which can be used to draw shapes on images. Here is an example of drawing yellow circles from the documentation:
yellow = uint8([255 255 0]); %// [R G B]; class of yellow must match class of I
shapeInserter = vision.ShapeInserter('Shape','Circles','BorderColor','Custom','CustomBorderColor',yellow);
I = imread('cameraman.tif');
circles = int32([30 30 20; 80 80 25]); %// [x1 y1 radius1;x2 y2 radius2]
RGB = repmat(I,[1,1,3]); %// convert I to an RGB image
J = step(shapeInserter, RGB, circles);
imshow(J);
With MATLAB and Image Processing Toolbox R2012a or newer, you can use the viscircles function to easily overlay circles over an image. Here is an example:
% Plot 5 circles at random locations
X = rand(5,1);
Y = rand(5,1);
% Keep the radius 0.1 for all of them
R = 0.1*ones(5,1);
% Make them blue
viscircles([X,Y],R,'EdgeColor','b');
Also, check out the imfindcircles function which implements the Hough circular transform. The online documentation for both functions (links above) have examples that show how to find circles in an image and how to display the detected circles over the image.
For example:
% Read the image into the workspace and display it.
A = imread('coins.png');
imshow(A)
% Find all the circles with radius r such that 15 ≤ r ≤ 30.
[centers, radii, metric] = imfindcircles(A,[15 30]);
% Retain the five strongest circles according to the metric values.
centersStrong5 = centers(1:5,:);
radiiStrong5 = radii(1:5);
metricStrong5 = metric(1:5);
% Draw the five strongest circle perimeters.
viscircles(centersStrong5, radiiStrong5,'EdgeColor','b');
Here's the method I think you need:
[x_p, y_p] = find (points);
% convert the subscripts to indices, transposed into a row vector
a = sub2ind(size(im), x_p, y_p)';
% assign all the values in the image that correspond to the points to a value of zero
im([a]) = 0;
% show the new image
imshow(im)
