Extract a page from a uniform background in an image

If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.

One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
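For reference, a global threshold in MATLAB would look something like the sketch below; the 0.5 level and the filename page.jpg are only placeholders for illustration, not tuned values:
%// Minimal sketch of the global-threshold approach (for comparison only)
gray = rgb2gray(imread('page.jpg')); %// 'page.jpg' is a placeholder filename
bw = im2bw(gray, 0.5); %// single global threshold at 50% of the intensity range
imshow(bw);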
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within any window in O(1) time. Computing the integral image itself costs O(n^2) for an n x n image, but you only have to do it once. With the integral image, you scan s x s neighbourhoods of pixels and check whether the current pixel's intensity is more than t% below the average intensity within that s x s window. If it is, the pixel is classified as background; otherwise it is classified as foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a single global threshold.
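To illustrate why the window sum is O(1) with the integral image, any rectangular sum can be recovered from just its four corners. Here is a small sketch, where gray is assumed to be a grayscale image cast to double and (y1,x1), (y2,x2) are assumed to be in-bounds window corners with y1 > 1 and x1 > 1 (boundary handling omitted):
%// Sketch: O(1) window sum using the integral image (illustration only)
intImage = cumsum(cumsum(gray, 2), 1);
windowSum = intImage(y2, x2) - intImage(y1-1, x2) ...
    - intImage(y2, x1-1) + intImage(y1-1, x1-1);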
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to the foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s is the size of the thresholding window and t is the percentage we talked about above. You can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Too few or too many arguments?
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
    im = double(rgb2gray(im));
elseif size(im, 3) == 1
    im = double(im);
else
    error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom of the image, and there are some holes inside the paper that we need to fill in. As such, let's use some morphology: declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!

Related

Separating the components of an image and saving them as new image

I have a black and white image as shown below:
I want to separate the white components of this image and then save them as separate images. This image has four white parts. I want to separate them and save four new images, each containing one white part of the image.
To achieve this, I wrote the following code:
BW=imread('img11_Inp.jpg');
imshow(BW);
BW=imbinarize(BW);
[L, num] = bwlabel(BW);
for k = 1 : num
    thisBlob = ismember(L, k);
    h = int2str(k);
    filname = strcat(h,'_Out.jpg');
    imwrite(thisBlob,filname);
    figure
    imshow(thisBlob, []);
end
Problem
This code separates the white parts and saves them, but the size of the white part saved in the new image is the same as in the original image. See the output images below:
Output images
Desired Output images
I want the output images to show the white part of the original image at an increased size. The following images are the ones I want:
Question
How can I modify the above code so that I can get the desired output images ?
Steps:
Find the boundary of the white portion.
To include a black margin, subtract a constant from the top-left corner coordinates. If a result is less than or equal to zero, it means we have reached or exceeded the edge of the actual image, so set it to 1. If it is greater than zero then all is fine.
Make similar adjustments for the bottom-right corner.
Crop to the desired size.
Code:
%Finding the boundary of the white
[~, c1] = find(thisBlob, 1); [~, r1] = find(thisBlob.', 1);
[~, c2] = find(thisBlob, 1, 'last'); [~, r2] = find(thisBlob.', 1, 'last');
%Making adjustments to include the black portion
margin = 10; %constant defining max number of black pixels (renamed so it doesn't clash with the loop index k)
mxlim = size(thisBlob); %used to confirm that we don't exceed the boundary of the image
r1 = r1-margin; r1(r1<=0) = 1; c1 = c1-margin; c1(c1<=0) = 1;
r2 = r2+margin; r2(r2>mxlim(1)) = mxlim(1); c2 = c2+margin; c2(c2>mxlim(2)) = mxlim(2);
%Extracting the desired portion
thisBlob = thisBlob(r1:r2, c1:c2);
Output for the provided images:
You can change the number of black pixels by changing the constant margin in the code.
Test Case when the white portion is on the Edge:
To verify if it also works if the white portion is on the edge like this image:
The code gives the following output for the above image:
Actually, what you want to perform is a crop with a little bit of span around the object. This can be easily achieved using imcrop that you must call providing the rectangle you want to keep.
In order to identify the rectangle:
Find the minimum and maximum rows that contain a white pixel (y-axis);
Find the minimum and maximum columns that contain a white pixel (x-axis);
Calculate width and height of the rectangle using maximum - minimum.
Since you want to crop using a little margin (in my example I set its value to 10 but you have full control over it), you must subtract that margin from the minimum values and add it to the maximum values, paying attention not to go out of the boundaries of the image (that's where the little min-max game comes into play).
Here is the full working code:
img = imread('img11_Inp.jpg');
imshow(img);
img_bin = imbinarize(img);
[lab,num] = bwlabel(img_bin);
span = 10;
for k = 1:num
    file = [num2str(k) '_Out.jpg'];
    blob = ismember(lab,k);
    blob_size = size(blob);
    col_idx = find(any(blob == true,1));
    x1 = max([1 (min(col_idx) - span)]);
    x2 = min([blob_size(2) (max(col_idx) + span)]);
    width = x2 - x1;
    row_idx = find(any(blob == true,2));
    y1 = max([1 (min(row_idx) - span)]);
    y2 = min([blob_size(1) (max(row_idx) + span)]);
    height = y2 - y1;
    blob_crop = imcrop(blob,[x1 y1 width height]);
    imwrite(blob_crop,file);
    figure();
    imshow(blob_crop,[]);
end
Also, don't use int2str(k) in order to obtain a string representation of your index. Your index is actually a double so you are forcing a double (no pun intended) cast: double -> int and then int -> char array. Just use num2str.
Result:

Place image in black pixels of another image

I have an image (white background with 1-5 black dots) that is called main.jpg (main image).
I am trying to place another image (secondary.jpg) in every black dot that is found in main image.
In order to do that:
I found the black pixels in main image
resize the secondary image to specific size that I want
plot the image in every coordinate that I found in step one. (the black pixel should be the center coordinates of the secondary image)
Unfortunately, I don't know how to do the third step.
for example:
main image is:
secondary image is:
output:
(The dots are behind the chairs. They are the image center points)
This is my code:
mainImage=imread('main.jpg')
secondaryImage=imread('secondary.jpg')
secondaryImageResized = resizeImage(secondaryImage)
[m n]=size(mainImage)
for i=1:n
    for j=1:m
        % if it's black pixel
        if (mainImage(i,j)==1)
            outputImage = plotImageInCoordinates(secondaryImageResized, i, j)
            % save this image
            imwrite(outputImage,map,'clown.bmp')
        end
    end
end
% resize the image to (250,350) width, height
function [ Image ] = resizeImage(img)
image = imresize(img, [250 350]);
end
function [outputImage] = plotImageInCoordinates(image, x, y)
% Do something
end
Any help appreciated!
Here's an alternative without convolution. One intricacy that you must take into account is that if you want to place each image at the centre of each dot, you must determine where the top left corner is and index into your output image so that you draw the desired object from the top left corner to the bottom right corner. You can do this by taking each black dot location and subtracting by half the width horizontally and half the height vertically.
Now onto your actual problem. It's much more efficient if you loop through the set of points that are black, not the entire image. You can do this by using the find command to determine the row and column locations that are 0. Once you do this, loop through each pair of row and column coordinates, do the subtraction of the coordinates and then place it on the output image.
I will impose an additional requirement where the objects may overlap. To accommodate for this, I will accumulate pixels, then find the average of the non-zero locations.
Your code modified to accommodate for this is as follows. Take note that because you are using JPEG compression, you will have compression artifacts, so regions that are 0 may not necessarily be 0. I will threshold with an intensity of 128 to ensure that zero regions are actually zero. You will also have the situation where objects may go outside the boundaries of the image. Therefore, to accommodate for this, pad the image with half the object's width on each side horizontally and half its height on each side vertically, then crop it after you're done placing the objects.
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
secondaryImage=imread('https://i.stack.imgur.com/P0meM.png');
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Find half height and width
rows = size(secondaryImageResized, 1);
cols = size(secondaryImageResized, 2);
halfHeight = floor(rows / 2);
halfWidth = floor(cols / 2);
% Create a padded image that contains our main image. Pad with white
% pixels.
rowsMain = size(mainImage, 1);
colsMain = size(mainImage, 2);
outputImage = 255*ones([2*halfHeight + rowsMain, 2*halfWidth + colsMain, size(mainImage, 3)], class(mainImage));
outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :) = mainImage;
% Find a mask of the black pixels
mask = outputImage(:,:,1) < 128;
% Obtain black pixel locations
[row, col] = find(mask);
% Reset the output image so that they're all zeros now. We use this
% to output our final image. Also cast to ensure accumulation is proper.
outputImage(:) = 0;
outputImage = double(outputImage);
% Keeps track of how many times each pixel was hit by the object
% This is so that we can find the average at each location.
counts = zeros([size(mask), size(mainImage, 3)]);
% For each row and column location in the image
for i = 1 : numel(row)
    % Get the row and column locations
    r = row(i); c = col(i);
    % Offset to get the top left corner
    r = r - halfHeight;
    c = c - halfWidth;
    % Place onto final image
    outputImage(r:r+rows-1, c:c+cols-1, :) = outputImage(r:r+rows-1, c:c+cols-1, :) + double(secondaryImageResized);
    % Accumulate the counts
    counts(r:r+rows-1,c:c+cols-1,:) = counts(r:r+rows-1,c:c+cols-1,:) + 1;
end
% Find average - Any values that were not hit, change to white
outputImage = outputImage ./ counts;
outputImage(counts == 0) = 255;
outputImage = uint8(outputImage);
% Now crop and show
outputImage = outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :);
close all; imshow(outputImage);
% Write the final output
imwrite(outputImage, 'finalimage.jpg', 'Quality', 100);
We get:
Edit
I wasn't told that your images had transparency. Therefore what you need to do is use imread but ensure that you read in the alpha channel. We then check to see if one exists and, if it does, we set any fully transparent pixels to white. You can do that with the following code. Ensure this gets placed at the very top of your code, replacing the images being loaded in:
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
% Change - to accommodate for transparency
[secondaryImage, ~, alpha] = imread('https://i.imgur.com/qYJSzEZ.png');
if ~isempty(alpha)
    m = alpha == 0;
    for i = 1 : size(secondaryImage,3)
        m2 = secondaryImage(:,:,i);
        m2(m) = 255;
        secondaryImage(:,:,i) = m2;
    end
end
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Rest of your code follows...
% ...
The code above has been modified to read in the basketball image. The rest of the code remains the same and we thus get:
You can use convolution to achieve the desired effect. This will place a copy of im everywhere there is a black dot in imz.
% load secondary image
im = double(imread('secondary.jpg'))/255.0;
% create some artificial image with black indicators
imz = ones(500,500,3);
imz(50,50,:) = 0;
imz(400,200,:) = 0;
imz(200,400,:) = 0;
% create output image
imout = zeros(size(imz));
imout(:,:,1) = conv2(1-imz(:,:,1),1-im(:,:,1),'same');
imout(:,:,2) = conv2(1-imz(:,:,2),1-im(:,:,2),'same');
imout(:,:,3) = conv2(1-imz(:,:,3),1-im(:,:,3),'same');
imout = 1-imout;
% output
imshow(imout);
Also, you probably want to avoid saving main.jpg as a .jpg since it results in lossy compression and will likely cause issues with any method that relies on exact pixel values. I would recommend using .png which is lossless and will also likely compress better than .jpg for synthetic images where the same colors repeat many times.

Merging 2 images by showing one next to the other separated by a diagonal line

I have 2 images ("before" and "after"). I would like to show a final image where the left half is taken from the before image and the right half is taken from the after image.
The images should be separated by a white diagonal line of predefined width (2 or 3 pixels), where the diagonal is specified either by a certain angle or by 2 start and end coordinates. The diagonal should overwrite a part of the final image such that the size is the same as the sources'.
Example:
I know it can be done by looping over all pixels to recombine and create the final image, but is there an efficient way, or better yet, a built-in function that can do this?
Unfortunately I don't believe there is a built-in solution to your problem, but I've developed some code to help you do this. It does require the Image Processing Toolbox, but as mentioned in your comments, you have this already, so we should be fine.
The logic behind this is relatively simple. We will assume that your before and after pictures are the same size and also share the same number of channels. The first part is to declare a blank image and draw a straight line of a certain thickness down the middle. The intricacy behind this is to declare an image that is slightly bigger than the original size of the image. The reason why is because I'm going to draw a line down the middle, then rotate this blank image by a certain angle to achieve the first part of what you desire. I'll be using imrotate to rotate an image by any angle you desire. The first instinct is to declare an image that's the same size as either of the originals, draw a line down the middle and rotate it. However, if you do this you'll end up with the line being disconnected and not drawn from the top to the bottom of the image. That makes sense because the line being drawn on an angle covers more pixels than if you were to draw it vertically.
Using the Pythagorean theorem, we know that the longest line that can ever be drawn on your image is the diagonal. Therefore we declare an image that is sqrt(rows*rows + cols*cols) in both the rows and columns, where rows and cols are the rows and columns of the original image. After that, we take the ceiling to make sure we've covered as much as possible and we add a bit of extra room to accommodate for the width of the line. We draw a line on this image, rotate it, then crop the image so that it's the same size as the input images. This ensures that the line drawn at whatever angle you wish is fully drawn from top to bottom.
That logic is the hardest part. Once you do that, you declare two logical masks: use imfill seeded at the top-left corner to fill in one side of the line as the first mask, and invert it to get the other mask. You will also need to use the line image that we created earlier with imrotate to index into the masks and set those values to false so that we ignore the pixels that lie on the line.
Finally, you take each mask, index into your image and copy over each portion of the image you desire. You finally use the line image to index into the output and set the values to white.
Without further ado, here's the code:
% Load some example data
load mandrill;
% im is the image before
% im2 is the image after
% Before image is a colour image
im = im2uint8(ind2rgb(X, map));
% After image is a grayscale image
im2 = rgb2gray(im);
im2 = cat(3, im2, im2, im2);
% Declare line image
rows = size(im, 1); cols = size(im, 2);
width = 5;
m = ceil(sqrt(rows*rows + cols*cols + width*width));
ln = false([m m]);
mhalf = floor(m / 2); % Find halfway point width wise and draw the line
ln(:,mhalf - floor(width/2) : mhalf + floor(width/2)) = true;
% Rotate the line image
ang = 20; % 20 degrees
lnrotate = imrotate(ln, ang, 'crop');
% Crop the image so that it's the same dimensions as the originals
mrowstart = mhalf - floor(rows/2);
mcolstart = mhalf - floor(cols/2);
lnfinal = lnrotate(mrowstart : mrowstart + rows - 1, mcolstart : mcolstart + cols - 1);
% Make the masks
mask1 = imfill(lnfinal, [1 1]);
mask2 = ~mask1;
mask1(lnfinal) = false;
mask2(lnfinal) = false;
% Make sure the masks have as many channels as the original
mask1 = repmat(mask1, [1 1 size(im,3)]);
mask2 = repmat(mask2, [1 1 size(im,3)]);
% Do the same for the line
lnfinal = repmat(lnfinal, [1 1 size(im, 3)]);
% Specify output image
out = zeros(size(im), class(im));
out(mask1) = im(mask1);
out(mask2) = im2(mask2);
out(lnfinal) = 255;
% Show the image
figure;
imshow(out);
We get:
If you want the line to go in the other direction, simply make the angle ang negative. In the example script above, I've made the angle 20 degrees counter-clockwise (i.e. positive). To reproduce the example you gave, specify -20 degrees instead. I now get this image:
Here's a solution using polygons:
function q44310306
% Load some image:
I = imread('peppers.png');
B = rgb2gray(I);
lt = I; rt = B;
% Specify the boundaries of the white line:
width = 2; % [px]
offset = 13; % [px]
sz = size(I);
wlb = [floor(sz(2)/2)-offset+[0,width]; ceil(sz(2)/2)+offset-[width,0]];
% [top-left, top-right; bottom-left, bottom-right]
% Configure two polygons:
leftPoly = struct('x',[1 wlb(1,2) wlb(2,2) 1], 'y',[1 1 sz(1) sz(1)]);
rightPoly = struct('x',[sz(2) wlb(1,1) wlb(2,1) sz(2)],'y',[1 1 sz(1) sz(1)]);
% Define a helper grid:
[XX,YY] = meshgrid(1:sz(2),1:sz(1));
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y)) = intmin('uint8');
lt(repmat(inpolygon(XX,YY,rightPoly.x,rightPoly.y),1,1,3)) = intmin('uint8');
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y) & ...
inpolygon(XX,YY,rightPoly.x,rightPoly.y)) = intmax('uint8');
final = bsxfun(@plus,lt,rt);
% Plot:
figure(); imshow(final);
The result:
One solution:
im1 = imread('peppers.png');
im2 = repmat(rgb2gray(im1),1,1,3);
imgsplitter(im1,im2,80) %imgsplitter(image1,image2,angle [0-100])
function imgsplitter(im1,im2,p)
s1 = size(im1,1); s2 = size(im1,2);
pix = floor(p*size(im1,2)/100);
val = abs(pix -(s2-pix));
dia = imresize(tril(ones(s1)),[s1 val]);
len = min(abs([0-pix,s2-pix]));
if p>50
    ind = [ones(s1,len) fliplr(~dia) zeros(s1,len)];
else
    ind = [ones(s1,len) dia zeros(s1,len)];
end
ind = uint8(ind);
imshow(ind.*im1+uint8(~ind).*im2)
hold on
plot([pix,s2-pix],[0,s1],'w','LineWidth',1)
end
OUTPUT:

How to apply Thresholding in image processing

This is sample code for K means algorithm.
k = 5;
[Centroid,new_cluster]=kmeans_algorithm(inv_trans_img,k);
for i_loop = 1:k
    cluster = zeros(size(inv_trans_img));
    pos = find(new_cluster==i_loop);
    cluster(pos) = new_cluster(pos);
    figure; imshow(cluster,[]); title('K-means');
end
I need to get the final image from this K-means algorithm and pass that image to the thresholding process. I did it like below.
tumour_image=cluster;
n = 512;
binarized_img = zeros(n,n);
sort_val = sort(tumour_image(:));
mid_val = ceil(length(sort_val)/2);
threshold = tumour_image(mid_val);
binarized_img(find(tumour_image>=threshold)) = 1;
binarized_img(find(tumour_image<threshold)) = 0;
imshow(binarized_img);title('binarized image');
But now the problem is that only a white image is coming out as a result. How can I solve this?
Your threshold should be:
threshold = sort_val(mid_val);
You need to get the median of the sorted values, not the center element of tumour_image.
As @NeilSlater mentions in the comments, the reason that you're getting an all-white image from your existing code is that you are, by chance, selecting a black pixel from the original image, so when you threshold, the entire image is greater than or equal to that pixel in value.
In the case of images in which the majority of the pixels are 0, this will still give you an all-white image as a result. One way around this, and the most analogous to what you're currently doing, is to take the median of the nonzero pixels.
mid_val = ceil((find(sort_val, 1)+length(sort_val))/2);
Alternatively, if you know which clusters you're interested in you can simply keep only those clusters.
binarized_image = tumour_image >= 3; % keep clusters 3 and above

Convert angle quantitative data into qualitative images

I am a crystallographer trying to analyse crystal orientations from up to 5000 files. Can MATLAB convert angle values in a table that looks like this:
Into a table that looks like this?:
Here's a more concrete example based on Lakesh's idea. However, this will handle any amount of rotation. First start off with a base circular image with a strip in the middle. Once you do this, simply run a for loop that rotates this base image by every angle in your rotation values matrix and stacks the rotated images in a grid that matches the layout of that matrix.
The trick is to figure out how to define the base orientation image. As such, let's define a white square, with a black circle in the middle. We will also define a red strip in the middle. For now, let's assume that the base orientation image is 51 x 51. Therefore, we can do this:
%// Define a grid of points between -25 to 25 for both X and Y
[X,Y] = meshgrid(-25:25,-25:25);
%// Define radius
radius = 22;
%// Generate a black circle that has the above radius
base_image = (X.^2 + Y.^2) <= radius^2;
%// Make into a 3 channel colour image
base_image = ~base_image;
base_image = 255*cast(repmat(base_image, [1 1 3]), 'uint8');
%// Place a strip in the middle of the circle that's red
width_strip = 44;
height_strip = 10;
strip_locs = (X >= -width_strip/2 & X <= width_strip/2 & Y >= -height_strip/2 & Y <= height_strip/2);
base_image(strip_locs) = 255;
With the above, this is what I get:
Now, all you need to do is create a final output image which has as many images as we have rows and columns in your matrix. Given that your rotation matrix values are stored in M, we can use imrotate from the image processing toolbox and specify the 'crop' flag to ensure that the output image is the same size as the original. However, with imrotate, any pixels that fall outside the image after you rotate it default to 0. You want these to appear white in your example, so we're going to have to do a bit of work. What you'll need to do is create a logical matrix that is the same size as the input image, then rotate it in the same way you did with the base image. Whatever pixels appear black (which are also false) in this rotated white image are the values we need to set to white. As such:
%// Get size of rotation value matrix
[rows,cols] = size(M);
%// For storing the output image
output_image = zeros(rows*51, cols*51, 3, 'uint8'); %// uint8 so that imshow/imwrite interpret 0-255 correctly
%// For each value in our rotation value matrix...
for row = 1 : rows
    for col = 1 : cols
        %// Rotate the image
        rotated_image = imrotate(base_image, M(row,col), 'crop');
        %// Take a completely white image and rotate this as well.
        %// Invert so we know which values were outside of the image
        Mrot = ~imrotate(true(size(base_image)), M(row,col), 'crop');
        %// Set these values outside of each rotated image to white
        rotated_image(Mrot) = 255;
        %// Store in the right slot.
        output_image((row-1)*51 + 1 : row*51, (col-1)*51 + 1 : col*51, :) = rotated_image;
    end
end
Let's try a few angles to be sure this is right:
M = [0 90 180; 35 45 60; 190 270 55];
With the above matrix, this is what I get for my image. This is stored in output_image:
If you want to save this image to file, simply do imwrite(output_image, 'output.png');, where output.png is the name of the file you want to save to your disk. I chose PNG because it's lossless and has a relatively low file size compared to other lossless standards (save JPEG 2000).
Edit to show no line when the angle is 0
If you wish to use the above code where you want to only display a black circle if the angle is around 0, it's just a matter of inserting an if statement inside the for loop as well as creating another image that contains a black circle with no strip through it. When the if condition is satisfied, you'd place this new image in the right grid location instead of the circle with the red strip.
Therefore, using the above code as a baseline do something like this:
%// Define matrix of sample angles
M = [0 90 180; 35 45 60; 190 270 55];
%// Define a grid of points between -25 to 25 for both X and Y
[X,Y] = meshgrid(-25:25,-25:25);
%// Define radius
radius = 22;
%// Generate a black circle that has the above radius
base_image = (X.^2 + Y.^2) <= radius^2;
%// Make into a 3 channel colour image
base_image = ~base_image;
base_image = 255*cast(repmat(base_image, [1 1 3]), 'uint8');
%// NEW - Create a black circle image without the red strip
black_circle = base_image;
%// Place a strip in the middle of the circle that's red
width_strip = 44;
height_strip = 10;
strip_locs = (X >= -width_strip/2 & X <= width_strip/2 & Y >= -height_strip/2 & Y <= height_strip/2);
base_image(strip_locs) = 255;
%// Get size of rotation value matrix
[rows,cols] = size(M);
%// For storing the output image
output_image = zeros(rows*51, cols*51, 3, 'uint8'); %// uint8 so that imshow/imwrite interpret 0-255 correctly
%// NEW - define tolerance
tol = 5;
%// For each value in our rotation value matrix...
for row = 1 : rows
    for col = 1 : cols
        %// NEW - If the angle is around 0, then draw a black circle only
        if M(row,col) >= -tol && M(row,col) <= tol
            rotated_image = black_circle;
        else %// This is the logic if the angle is not around 0
            %// Rotate the image
            rotated_image = imrotate(base_image, M(row,col), 'crop');
            %// Take a completely white image and rotate this as well.
            %// Invert so we know which values were outside of the image
            Mrot = ~imrotate(true(size(base_image)), M(row,col), 'crop');
            %// Set these values outside of each rotated image to white
            rotated_image(Mrot) = 255;
        end
        %// Store in the right slot.
        output_image((row-1)*51 + 1 : row*51, (col-1)*51 + 1 : col*51, :) = rotated_image;
    end
end
The variable tol in the above code defines a tolerance where anything within -tol <= angle <= tol has the plain black circle drawn. This is to allow for floating point tolerances when comparing, because it's never a good idea to perform equality operations with floating point values directly. Accepted practice is to compare within some tolerance of the value you would like to test for equality.
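For instance, the range check used in the loop above could equivalently be written with abs, which is the same tolerance idiom in a more compact form (M and tol are the variables defined above):
%// Equivalent check: treat the angle as "zero" if it lies within tol of 0
isZeroAngle = abs(M(row,col)) <= tol;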
Using the above modified code with the matrix of angles M as seen in the previous example, I get this image now:
Notice that the top left entry of the matrix has an angle of 0, which is thus visualized as a black circle with no strip through it as we expect.
General idea to solve your problem:
1. Store two images: one for 0 and 180 degrees and another for 90 and 270 degrees.
2. Read the data from the file.
3. if angle == 0 || angle == 180
       image = image1
   else
       image = image2
   end
To handle any angle:
1. Store one image, e.g. image = imread('yourfile.png')
2. angle = Read the data from the file
3. B = imrotate(image,angle)
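A minimal sketch of that idea, assuming the angle values are stored one per line in a plain text file called angles.txt and the base picture is marker.png (both file names and the file layout are assumptions):
%// Minimal sketch: rotate a base image by each angle read from a file
img = imread('marker.png'); %// placeholder base image
angles = dlmread('angles.txt'); %// placeholder file with one angle per line
for k = 1 : numel(angles)
    B = imrotate(img, angles(k)); %// add 'crop' to keep the original size, as in the longer answer above
    figure; imshow(B);
end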
