Place image in black pixels of another image

I have an image (white background with 1-5 black dots) that is called main.jpg (main image).
I am trying to place another image (secondary.jpg) in every black dot that is found in main image.
In order to do that:
1. I found the black pixels in the main image.
2. I resized the secondary image to the specific size that I want.
3. I need to plot the image at every coordinate found in step one (the black pixel should be the centre coordinates of the secondary image).
Unfortunately, I don't know how to do this third step.
for example:
main image is:
secondary image is:
output:
(The dots are behind the chairs. They are the image center points)
This is my code:
mainImage = imread('main.jpg');
secondaryImage = imread('secondary.jpg');
secondaryImageResized = resizeImage(secondaryImage);
[m, n] = size(mainImage);
for i = 1:n
    for j = 1:m
        % if it's a black pixel
        if (mainImage(i,j) == 1)
            outputImage = plotImageInCoordinates(secondaryImageResized, i, j);
            % save this image
            imwrite(outputImage, map, 'clown.bmp');
        end
    end
end

% resize the image to (250,350) width, height
function [ Image ] = resizeImage(img)
    Image = imresize(img, [250 350]);
end

function [outputImage] = plotImageInCoordinates(image, x, y)
    % Do something
end
Any help appreciated!

Here's an alternative without convolution. One intricacy that you must take into account is that if you want to place each image at the centre of each dot, you must determine where the top left corner is and index into your output image so that you draw the desired object from the top left corner to the bottom right corner. You can do this by taking each black dot location and subtracting by half the width horizontally and half the height vertically.
Now onto your actual problem. It's much more efficient if you loop through the set of points that are black, not the entire image. You can do this by using the find command to determine the row and column locations that are 0. Once you do this, loop through each pair of row and column coordinates, do the subtraction of the coordinates and then place it on the output image.
I will also handle the situation where the objects may overlap. To accommodate this, I accumulate pixel values, then take the average at each location according to how many objects covered it.
Your code, modified to accommodate this, is as follows. Take note that because you are using JPEG compression, you will have compression artifacts, so regions that should be 0 may not be exactly 0. I threshold at an intensity of 128 to ensure that zero regions are treated as zero. You will also have situations where objects go outside the boundaries of the image. To accommodate this, pad the image with half the object's width on each side horizontally and half its height on each side vertically, then crop the result after you're done placing the objects.
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
secondaryImage=imread('https://i.stack.imgur.com/P0meM.png');
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Find half height and width
rows = size(secondaryImageResized, 1);
cols = size(secondaryImageResized, 2);
halfHeight = floor(rows / 2);
halfWidth = floor(cols / 2);
% Create a padded image that contains our main image. Pad with white
% pixels.
rowsMain = size(mainImage, 1);
colsMain = size(mainImage, 2);
outputImage = 255*ones([2*halfHeight + rowsMain, 2*halfWidth + colsMain, size(mainImage, 3)], class(mainImage));
outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :) = mainImage;
% Find a mask of the black pixels
mask = outputImage(:,:,1) < 128;
% Obtain black pixel locations
[row, col] = find(mask);
% Reset the output image so that they're all zeros now. We use this
% to output our final image. Also cast to ensure accumulation is proper.
outputImage(:) = 0;
outputImage = double(outputImage);
% Keeps track of how many times each pixel was hit by the object
% This is so that we can find the average at each location.
counts = zeros([size(mask), size(mainImage, 3)]);
% For each row and column location in the image
for i = 1 : numel(row)
    % Get the row and column locations
    r = row(i); c = col(i);
    % Offset to get the top left corner
    r = r - halfHeight;
    c = c - halfWidth;
    % Place onto final image
    outputImage(r:r+rows-1, c:c+cols-1, :) = outputImage(r:r+rows-1, c:c+cols-1, :) + double(secondaryImageResized);
    % Accumulate the counts
    counts(r:r+rows-1, c:c+cols-1, :) = counts(r:r+rows-1, c:c+cols-1, :) + 1;
end
% Find average - Any values that were not hit, change to white
outputImage = outputImage ./ counts;
outputImage(counts == 0) = 255;
outputImage = uint8(outputImage);
% Now crop and show
outputImage = outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :);
close all; imshow(outputImage);
% Write the final output
imwrite(outputImage, 'finalimage.jpg', 'Quality', 100);
We get:
Edit
I wasn't told that your images had transparency. Therefore what you need to do is use imread but ensure that you read in the alpha channel. We then check to see if one exists and if one does, we will ensure that the background of any values with no transparency are set to white. You can do that with the following code. Ensure this gets placed at the very top of your code, replacing the images being loaded in:
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
% Change - to accommodate for transparency
[secondaryImage, ~, alpha] = imread('https://i.imgur.com/qYJSzEZ.png');
if ~isempty(alpha)
    m = alpha == 0;
    for i = 1 : size(secondaryImage, 3)
        m2 = secondaryImage(:,:,i);
        m2(m) = 255;
        secondaryImage(:,:,i) = m2;
    end
end
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Rest of your code follows...
% ...
The code above has been modified to read in the basketball image. The rest of the code remains the same and we thus get:

You can use convolution to achieve the desired effect. This will place a copy of im everywhere there is a black dot in imz.
% load secondary image
im = double(imread('secondary.jpg'))/255.0;
% create some artificial image with black indicators
imz = ones(500,500,3);
imz(50,50,:) = 0;
imz(400,200,:) = 0;
imz(200,400,:) = 0;
% create output image
imout = zeros(size(imz));
imout(:,:,1) = conv2(1-imz(:,:,1),1-im(:,:,1),'same');
imout(:,:,2) = conv2(1-imz(:,:,2),1-im(:,:,2),'same');
imout(:,:,3) = conv2(1-imz(:,:,3),1-im(:,:,3),'same');
imout = 1-imout;
% output
imshow(imout);
Also, you probably want to avoid saving main.jpg as a .jpg since it results in lossy compression and will likely cause issues with any method that relies on exact pixel values. I would recommend using .png which is lossless and will also likely compress better than .jpg for synthetic images where the same colors repeat many times.
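For instance, a minimal sketch using the variables from the snippet above (the file names are just examples):
% PNG is lossless, so the black indicator pixels stay exactly 0
% and the composited result keeps its exact values.
imwrite(imz, 'main.png');
imwrite(imout, 'output.png');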

Related

Separating the components of an image and saving them as new image

I have a black and white image as shown below:
I want to separate the white components of this image and save each as a separate image. This image has four white parts, so I want to save four new images, each containing one white part of the image.
To achieve this, I wrote the following code:
BW=imread('img11_Inp.jpg');
imshow(BW);
BW=imbinarize(BW);
[L, num] = bwlabel(BW);
for k = 1 : num
    thisBlob = ismember(L, k);
    h = int2str(k);
    filname = strcat(h, '_Out.jpg');
    imwrite(thisBlob, filname);
    figure
    imshow(thisBlob, []);
end
Problem
This code separates the white parts and saves them, but the size of the white part saved in each new image is the same as in the original image. See the output images below:
Output images
Desired Output images
I want each output image to be cropped around its white part so that the white part appears larger. The following images are the ones I want:
Question
How can I modify the above code so that I can get the desired output images?
Steps:
Find the boundary of the white portion.
To include some of the black portion, subtract a constant from the top-left corner coordinates. If the result is less than or equal to zero, we have reached or gone past the edge of the actual image, so set it to 1. If it is greater than zero, all is fine.
Make similar adjustments for the bottom-right corner.
Crop to the desired size.
Code:
%Finding the boundary of the white
[~, c1] = find(thisBlob, 1);         [~, r1] = find(thisBlob.', 1);
[~, c2] = find(thisBlob, 1, 'last'); [~, r2] = find(thisBlob.', 1, 'last');
%Making adjustments to include the black portion
k = 10; %constant defining max number of black pixels
mxlim = size(thisBlob); %to be used to confirm that we don't exceed the boundary of the image
r1 = r1-k; r1(r1<=0) = 1;              c1 = c1-k; c1(c1<=0) = 1;
r2 = r2+k; r2(r2>mxlim(1)) = mxlim(1); c2 = c2+k; c2(c2>mxlim(2)) = mxlim(2);
%Extracting the desired portion
thisBlob = thisBlob(r1:r2, c1:c2);
Output for the provided images:
You can change the number of black pixels by changing the constant k in the code.
Test Case when the white portion is on the Edge:
To verify if it also works if the white portion is on the edge like this image:
The code gives the following output for the above image:
Actually, what you want to perform is a crop with a little bit of span around the object. This can be easily achieved using imcrop, which you call with the rectangle you want to keep.
In order to identify the rectangle:
Find the minimum and maximum rows that contain a white pixel (y-axis);
Find the minimum and maximum columns that contain a white pixel (x-axis);
Calculate the width and height of the rectangle using maximum - minimum.
Since you want to crop with a little margin (in my example I set its value to 10, but you have full control over it), you must subtract that margin from the minimum values and add it to the maximum values, paying attention not to go out of the boundaries of the image (that's where the little min-max game comes into play).
Here is the full working code:
img = imread('img11_Inp.jpg');
imshow(img);
img_bin = imbinarize(img);
[lab,num] = bwlabel(img_bin);
span = 10;
for k = 1:num
    file = [num2str(k) '_Out.jpg'];
    blob = ismember(lab, k);
    blob_size = size(blob);

    col_idx = find(any(blob == true, 1));
    x1 = max([1 (min(col_idx) - span)]);
    x2 = min([blob_size(2) (max(col_idx) + span)]);
    width = x2 - x1;

    row_idx = find(any(blob == true, 2));
    y1 = max([1 (min(row_idx) - span)]);
    y2 = min([blob_size(1) (max(row_idx) + span)]);
    height = y2 - y1;

    blob_crop = imcrop(blob, [x1 y1 width height]);
    imwrite(blob_crop, file);
    figure();
    imshow(blob_crop, []);
end
Also, don't use int2str(k) in order to obtain a string representation of your index. Your index is actually a double so you are forcing a double (no pun intended) cast: double -> int and then int -> char array. Just use num2str.
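For example, both calls below build the same file name, but num2str converts the double index directly (k here is just an illustrative loop index):
k = 3;                                    % loop indices are doubles in MATLAB
oldName = strcat(int2str(k), '_Out.jpg'); % double -> int -> char array
newName = [num2str(k) '_Out.jpg'];        % double -> char array directly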
Result:

Merging 2 images by showing one next to the other separated by a diagonal line

I have 2 images ("before" and "after"). I would like to show a final image where the left half is taken from the before image and the right half is taken from the after image.
The images should be separated by a white diagonal line of predefined width (2 or 3 pixels), where the diagonal is specified either by a certain angle or by 2 start and end coordinates. The diagonal should overwrite a part of the final image such that the size is the same as the sources'.
Example:
I know it can be done by looping over all pixels to recombine and create the final image, but is there an efficient way, or better yet, a built-in function that can do this?
Unfortunately I don't believe there is a built-in solution to your problem, but I've developed some code to help you do this. It does require the Image Processing Toolbox; as mentioned in your comments, you have this already, so we should be fine.
The logic behind this is relatively simple. We will assume that your before and after pictures are the same size and also share the same number of channels. The first part is to declare a blank image and draw a straight line of a certain thickness down the middle. The intricacy is that this blank image must be slightly bigger than the originals, because I'm going to draw a line down the middle and then rotate the blank image by a certain angle (using imrotate) to achieve the first part of what you desire. The first instinct is to declare an image that's the same size as either of the originals, draw a line down the middle and rotate it. However, if you do this you'll end up with the line being disconnected and not drawn from the top to the bottom of the image. That makes sense, because a line drawn at an angle covers more pixels than one drawn vertically.
By the Pythagorean theorem, we know that the longest line that can ever be drawn on your image is the diagonal. Therefore we declare an image that is sqrt(rows*rows + cols*cols) in both dimensions, where rows and cols are the rows and columns of the original image. Afterwards, we take the ceiling to make sure we've covered as much as possible, and we add a bit of extra room to accommodate the width of the line. We draw a line on this image, rotate it, then crop it so that it's the same size as the input images. This ensures that the line drawn at whatever angle you wish is fully drawn from top to bottom.
That logic is the hardest part. Once you have that, you declare two logical masks: use imfill to fill in the left side of the line to get one mask, and invert it to get the other. You will also need to use the line image that we created earlier to index into both masks and set those values to false, so that pixels on the line are ignored.
Finally, you take each mask, index into your image and copy over each portion of the image you desire. You finally use the line image to index into the output and set the values to white.
Without further ado, here's the code:
% Load some example data
load mandrill;
% im is the image before
% im2 is the image after
% Before image is a colour image
im = im2uint8(ind2rgb(X, map));
% After image is a grayscale image
im2 = rgb2gray(im);
im2 = cat(3, im2, im2, im2);
% Declare line image
rows = size(im, 1); cols = size(im, 2);
width = 5;
m = ceil(sqrt(rows*rows + cols*cols + width*width));
ln = false([m m]);
mhalf = floor(m / 2); % Find halfway point width wise and draw the line
ln(:,mhalf - floor(width/2) : mhalf + floor(width/2)) = true;
% Rotate the line image
ang = 20; % 20 degrees
lnrotate = imrotate(ln, ang, 'crop');
% Crop the image so that it's the same dimensions as the originals
mrowstart = mhalf - floor(rows/2);
mcolstart = mhalf - floor(cols/2);
lnfinal = lnrotate(mrowstart : mrowstart + rows - 1, mcolstart : mcolstart + cols - 1);
% Make the masks
mask1 = imfill(lnfinal, [1 1]);
mask2 = ~mask1;
mask1(lnfinal) = false;
mask2(lnfinal) = false;
% Make sure the masks have as many channels as the original
mask1 = repmat(mask1, [1 1 size(im,3)]);
mask2 = repmat(mask2, [1 1 size(im,3)]);
% Do the same for the line
lnfinal = repmat(lnfinal, [1 1 size(im, 3)]);
% Specify output image
out = zeros(size(im), class(im));
out(mask1) = im(mask1);
out(mask2) = im2(mask2);
out(lnfinal) = 255;
% Show the image
figure;
imshow(out);
We get:
If you want the line to go in the other direction, simply make the angle ang negative. In the example script above, I've made the angle 20 degrees counter-clockwise (i.e. positive). To reproduce the example you gave, specify -20 degrees instead. I now get this image:
Here's a solution using polygons:
function q44310306
% Load some image:
I = imread('peppers.png');
B = rgb2gray(I);
lt = I; rt = B;
% Specify the boundaries of the white line:
width = 2; % [px]
offset = 13; % [px]
sz = size(I);
wlb = [floor(sz(2)/2)-offset+[0,width]; ceil(sz(2)/2)+offset-[width,0]];
% [top-left, top-right; bottom-left, bottom-right]
% Configure two polygons:
leftPoly = struct('x',[1 wlb(1,2) wlb(2,2) 1], 'y',[1 1 sz(1) sz(1)]);
rightPoly = struct('x',[sz(2) wlb(1,1) wlb(2,1) sz(2)],'y',[1 1 sz(1) sz(1)]);
% Define a helper grid:
[XX,YY] = meshgrid(1:sz(2),1:sz(1));
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y)) = intmin('uint8');
lt(repmat(inpolygon(XX,YY,rightPoly.x,rightPoly.y),1,1,3)) = intmin('uint8');
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y) & ...
inpolygon(XX,YY,rightPoly.x,rightPoly.y)) = intmax('uint8');
final = bsxfun(@plus,lt,rt);
% Plot:
figure(); imshow(final);
The result:
One solution:
im1 = imread('peppers.png');
im2 = repmat(rgb2gray(im1),1,1,3);
imgsplitter(im1,im2,80) %imgsplitter(image1,image2,angle [0-100])
function imgsplitter(im1, im2, p)
    s1 = size(im1,1); s2 = size(im1,2);
    pix = floor(p*size(im1,2)/100);
    val = abs(pix - (s2-pix));
    dia = imresize(tril(ones(s1)), [s1 val]);
    len = min(abs([0-pix, s2-pix]));
    if p > 50
        ind = [ones(s1,len) fliplr(~dia) zeros(s1,len)];
    else
        ind = [ones(s1,len) dia zeros(s1,len)];
    end
    ind = uint8(ind);
    imshow(ind.*im1 + uint8(~ind).*im2)
    hold on
    plot([pix, s2-pix], [0, s1], 'w', 'LineWidth', 1)
end
OUTPUT:

Re-sizing a rectangular image to a square image

I have an image, size 213 x 145 pixels. I want to resize it to 128 x 128 pixels for example. I've already tried the code below:
i = imread ('alif1.png');
I = imresize (i, [128 128], 'bilinear');
OR
i = imread ('alif1.png');
I = imresize (i, [128 128], 'lanczos3');
it gave me a square image, but the image became disproportionate because the aspect ratio was not preserved.
I want to resize the image to a square shape without distorting or stretching the image, rather to pad/crop the white background instead. I still can't figure out the right code. I hope anyone could help.
any help will be very much appreciated :)
I = imread('alifi.png');
Crop image, specifying crop rectangle.
I2 = imcrop(I,[75 68 128 128]);
Size and position of the crop rectangle, specified as a four-element position vector of the form [xmin ymin width height].
For more understanding, follow the MATLAB documentation for imcrop and the linked blog post.
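As a minimal sketch (assuming the part you want to keep is roughly centred in I), you could compute the crop rectangle instead of hard-coding it:
[h, w, ~] = size(I);
xmin = floor((w - 128)/2) + 1;         % top-left corner of a centred 128 x 128 window
ymin = floor((h - 128)/2) + 1;
I2 = imcrop(I, [xmin ymin 127 127]);   % width/height of 127 because imcrop includes both endpoints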
If you want to resize (not crop) the image and keep the aspect ratio (so you don't loose any part of the image AND it doesn't get distorted), you can first add margins to make the image squared.
You can achieve this using the function padarray, or by creating a new image of zeros and then adding your image at the appropriate coordinates.
Once your image is squared, you can resize it to 128x128 using imresize.
In order to add margins, you will have to see where to add them (top & bottom OR left & right).
Also, since padarray adds the same amount of margin on both sides, you have to check whether the number you need is even. If it's odd, first add an extra row (or column) of zeros to your image.
So basically you have three options:
Make the image squared by not preserving aspect ratio (which is what you already tried)
Cropping the image as suggested by @ShvetChakra and @bla (but you will lose some image info)
Add margins to the image and resize (but you will end up with a squared image with margins)
Magic doesn't exist so "you must choose, but choose wisely"
(Quote from Indiana Jones and the Last Crusade).
EDIT:
% Example with a 5x2 image, so an extra column will be added
% in order to use padarray.
im = [1 2; 3 4; 5 6; 7 8; 9 10];
nrows = size(im, 1);
ncols = size(im, 2);
d = abs(ncols - nrows); % difference between ncols and nrows
if (mod(d, 2) == 1)      % if the difference is an odd number
    if (ncols > nrows)   % we add a row at the end
        im = [im; zeros(1, ncols)];
        nrows = nrows + 1;
    else                 % we add a col at the end
        im = [im zeros(nrows, 1)];
        ncols = ncols + 1;
    end
end
if ncols > nrows
    im = padarray(im, [(ncols-nrows)/2 0]);
else
    im = padarray(im, [0 (nrows-ncols)/2]);
end
% Here im is a 5x5 matrix, not perfectly centered
% because we added an odd number of columns: 3
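Applied to an actual image rather than the toy matrix, a minimal sketch could look like this (the file name is a placeholder; padding with 255 keeps the margins white so they match the question's background):
I = imread('alif1.png');
[nrows, ncols, ~] = size(I);
d = abs(ncols - nrows);
if ncols > nrows   % pad rows (top/bottom), splitting an odd difference
    I = padarray(I, [floor(d/2) 0], 255, 'pre');
    I = padarray(I, [ceil(d/2) 0], 255, 'post');
else               % pad columns (left/right)
    I = padarray(I, [0 floor(d/2)], 255, 'pre');
    I = padarray(I, [0 ceil(d/2)], 255, 'post');
end
Isquare = imresize(I, [128 128], 'bilinear');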

Extract a page from a uniform background in an image

If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it, an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within a window in O(1) complexity. Computing the integral image itself is O(n^2) for an n x n image, but you only have to do it once. With the integral image, you examine an s x s neighbourhood centred on each pixel: if the pixel's intensity is less than (100 - t)% of the average intensity within that window, the pixel is classified as background; otherwise it is classified as foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a single global threshold.
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s being the size of the thresholding window and t the percentage we talked about above and can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Too few or too many arguments?
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
    im = double(rgb2gray(im));
elseif size(im, 3) == 1
    im = double(im);
else
    error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom white of the image, and there are some holes we need to fill in inside the paper. As such, let's use some morphology and declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!

How to remove non-barcode region in an image? - MATLAB

After I did a 'imclearborder', there are still a bit of unwanted object around the barcode. How can I remove those objects to isolate the barcode? I have pasted my code for your reference.
rgb = imread('barcode2.jpg');
% Resize Image
rgb = imresize(rgb,0.33);
figure(),imshow(rgb);
% Convert from RGB to Gray
Igray = double(rgb2gray(rgb));
% Calculate the Gradients
[dIx, dIy] = gradient(Igray);
B = abs(dIx) - abs(dIy);
% Low-Pass Filtering
H = fspecial('gaussian', 20, 10);
C = imfilter(B, H);
C = imclearborder(C);
figure(),imagesc(C);colorbar;
Well, I have already explained it in your previous question, How to find the location of red region in an image using MATLAB?, but with OpenCV code and output images.
Instead of asking for code, try to implement it yourself.
Below is what to do next.
1) Convert image 'C' in your code to binary.
2) Apply some erosion to remove small noise (the barcode region also shrinks at this step).
3) Apply dilation to compensate for the previous erosion (most of the noise was already removed by the erosion, so it won't come back).
4) Find contours in the image.
5) Find their areas. Most probably, the contour with the maximum area will be the barcode, because other things like letters and words will be small (you can see this in the grayscale image you provided).
6) Select the contour with the maximum area and draw a bounding rectangle around it.
Its result is already provided in your previous question, and it works very nicely. Try to implement it yourself with the help of the MATLAB documentation. Come back only when you get an error which you don't understand.
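In MATLAB terms, a minimal sketch of those six steps might look like the following (the structuring-element size is an assumption you would tune for your image, and regionprops stands in for OpenCV's contours):
bw = imbinarize(mat2gray(C));                    % 1) binarise the filtered image C
bw = imerode(bw, strel('square', 5));            % 2) erosion removes small noise
bw = imdilate(bw, strel('square', 5));           % 3) dilation restores the barcode region
stats = regionprops(bw, 'Area', 'BoundingBox');  % 4)-5) connected regions and their areas
[~, idx] = max([stats.Area]);                    % 6) the largest region is the barcode
figure, imshow(rgb); hold on;
rectangle('Position', stats(idx).BoundingBox, 'EdgeColor', 'r', 'LineWidth', 2);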
%%% hi, I am adding my code to yours at the end of your code %%%
clear all;
rgb = imread('barcode.jpeg');
% Resize Image
rgb = imresize(rgb,0.33);
figure(),imshow(rgb);
% Convert from RGB to Gray
Igray = double(rgb2gray(rgb));
Igrayc = Igray;
% Calculate the Gradients
[dIx, dIy] = gradient(Igray);
B = abs(dIx) - abs(dIy);
% Low-Pass Filtering
H = fspecial('gaussian', 10, 5);
C = imfilter(B, H);
C = imclearborder(C);
imshow(Igray,[]);
figure(),imagesc(C);colorbar;
%%%%%%%%%%%%%%%%%%%%%%%%from here my code starts%%%%%%%%%%%%%%%%
bw = im2bw(C);%%%binarising the image
% imshow(bw);
%%%%if there are letters or any other noise is present around the barcode
%%Note: the size of the noise and letters should be smaller than the
%%barcode size
labelImage = bwlabel(bw,8);
len=0;labe=0;
for i = 1:max(max(labelImage))
    a = find(labelImage == i);
    if (len < length(a))
        len = length(a);
        labe = i;
    end
end
imag = zeros(size(labelImage));
imag(find(labelImage == labe)) = 255;
% imtool(imag);
%%%if Necessary do errossion
% se2 = strel('line',10,0);
% imag= imerode(imag,se2);
% imag= imerode(imag,se2);
[r c]= find(imag==255);
minr = min(r);
maxc = max(c);
minc = min(c);
maxr = max(r);
imag1 = zeros(size(labelImage));
for i = minr:maxr
    for j = minc:maxc
        imag1(i, j) = 255;
    end
end
% figure,imtool(imag1);
varit = find(imag1==0);
Igrayc(varit)=0;
%%%%%result image having only barcode
imshow(Igrayc,[]);
%%%%%original image
figure(),imshow(Igray,[]);
Hope it is useful
