Blue patch appears after subtracting image background in MATLAB

I have tried image subtraction in MATLAB, but realised that there is a big blue patch on the image. Please see the image for more details.
Here is another image showing roughly how far the blue patch extends.
The picture on the left in the top two images shows the picture after subtraction. You can ignore the picture on the right of the top two images. This is one of the original images:
and this is the background I am subtracting.
The purpose is to get the foreground image and blob it, then count the number of blobs to see how many books are stacked vertically from their sides. I am experimenting with how the blob method works in MATLAB.
Does anybody have any idea? Below is the code showing how I carry out my background subtraction and display it. Thanks.
[filename, user_canceled] = imgetfile;
fullFileName = filename;
rgbImage = imread(fullFileName);
folder = fullfile('C:\Users\Aaron\Desktop\OPENCV\Book Detection\Sample books');
baseFileName = 'background.jpg';
fullFileName = fullfile(folder, baseFileName);
backgroundImage = imread(fullFileName);
rgbImage = rgbImage - backgroundImage;
% display foreground image after background subtraction
subplot(1,2,1);
imshow(rgbImage, []);

Because the foreground objects (i.e. the books) are opaque, the background does not affect those pixels at all. In other words, you are subtracting out something that is not there. What you need is a method of detecting which pixels in your image correspond to foreground, and which correspond to background. Unfortunately, solving this problem might be at least as difficult as the problem you set out to solve in the first place.
If you just want a pixel-by-pixel comparison with the background you could try something like this:
thresh = 0.01;  % tune this; values are in [0,1] after im2double
% Convert to double first: uint8 subtraction clamps at zero, which
% silently discards every pixel where the background is brighter.
fg = im2double(rgbImage);
bg = im2double(backgroundImage);
imdiff = sum((fg - bg).^2, 3);
mask = uint8(imdiff > thresh);
maskedImage = rgbImage .* cat(3, mask, mask, mask);
imshow(maskedImage, []);
You will have to play around with the threshold value until you get the desired masking. The problem you are going to have is that the background is poorly suited for the task. If you had the books in front of a green screen for example, you could probably do a much better job.

You are getting blue patches because you are subtracting two color RGB images. Ideally, in the difference image you expect to get zeros for the background pixels, and non-zeros for the foreground pixels. Since you are in RGB, the foreground pixels may end up having some weird color, which does not really matter. All you care about is that the absolute value of the difference is greater than 0.
By the way, your images are probably uint8, which is unsigned. You may want to convert them to double using im2double before you do the subtraction.
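A minimal sketch of that idea, assuming `rgbImage` and `backgroundImage` are same-size RGB images already loaded as in the question (the 0.1 threshold is a guess to tune):

```matlab
% Convert to double so the subtraction can go negative instead of
% saturating at zero (uint8 arithmetic clamps to [0, 255]).
fg = im2double(rgbImage);
bg = im2double(backgroundImage);

% Absolute difference per channel, collapsed to a single channel.
diffMag = sum(abs(fg - bg), 3);

% Pixels that changed noticeably are foreground; 0.1 is a guess.
mask = diffMag > 0.1;
imshow(mask);
```

The sign of the difference carries no meaning here; only its magnitude tells you whether a pixel differs from the background.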

Related

How to whiten the white parts and blacken the black parts of a scanned image in MATLAB or Photoshop

I have a scanned image, scanned from a printed Word (docx) file. I want the scanned image to look like the original Word file, i.e. to remove noise and enhance it: to fully whiten the white parts and fully blacken the black parts without changing the colorful parts of the file.
There are a number of ways you could approach this. The simplest would be to apply a levels filter with the black point raised a bit and the white point lowered a bit. This can be done to all 3 color channels or selectively to a subset. Since you're aiming for pure black and white and there's no color cast on the image, I would apply the same settings to all 3 color channels. It works like this (clamping the result to the valid range):
destVal = (srcVal - blackPt) / (whitePt - blackPt);
This will slightly change the colored parts of the image, probably resulting in making them slightly more or less saturated.
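In MATLAB the same levels adjustment can be sketched with `imadjust`; the black/white points below are guesses to tune per scan, and `'scan.jpg'` is a placeholder filename:

```matlab
img = im2double(imread('scan.jpg'));  % placeholder filename

% Raise the black point and lower the white point; applied
% identically to all three channels since there is no color cast.
blackPt = 0.10;
whitePt = 0.66;
leveled = imadjust(img, ...
    repmat([blackPt; whitePt], 1, 3), ...  % input range per channel
    repmat([0; 1], 1, 3));                 % output range per channel
imshow(leveled);
```

`imadjust` clamps values outside the input range, which is exactly the "fully whiten / fully blacken" behavior wanted here.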
I tried this in a photo editing app and was disappointed with the results. I was able to remove most of the noise by bringing the white point down to about 66%. However, the logo in the upper left is so wispy that it also ended up turning it very white. The black point didn't really need to be moved.
I think you're going to have a tough time with that logo. You could isolate it from your other changes, though, and that might help. A simple circular area around it where you just ignore any processing would probably do the trick.
But I got to thinking - this was made with Word. Do you have a copy of Word? It probably wouldn't be too difficult to put together a layout that's nearly identical. It still wouldn't help with the logo. But what you could do is lay out the text the same and export it to a PDF or other image format. (Or if you can find the original template, just use it directly.) Then you could write some code to process your scanned copy and wherever a pixel is grayscale (red = green = blue), use the corresponding pixel from the version you made, otherwise use the pixels from the scan. That would get you all the stamps and signatures, while having the text nice and sharp. Perhaps you could even find the organization's logo online. In fact, Wikipedia even has a copy of their logo.
You'd probably need to have some sort of threshold for the grayscale because some pixels might be close but have a slight color cast. One option might be something like this:
if ((fabs(red - green) < threshold) && (fabs(red - blue) < threshold))
{
    destVal = recreationVal; // The output is the same as the copy you made manually
}
else
{
    destVal = scannedVal;    // The output is the same as the scan
}
You may find this eats away at some of the colored marks, so you could do a second pass over the output where any pixel that's adjacent to a colored pixel brings in the corresponding pixel from the original scan.
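The per-pixel test above can be vectorized in MATLAB; a sketch, where `scan` and `recreation` are hypothetical same-size RGB images and the threshold is a guess to tune:

```matlab
scan = im2double(imread('scan.jpg'));              % placeholder filenames
recreation = im2double(imread('recreation.jpg'));

r = scan(:,:,1); g = scan(:,:,2); b = scan(:,:,3);
threshold = 0.08;  % a guess; tune until colored marks survive

% "Grayscale" pixels: red, green, and blue all within threshold.
isGray = abs(r - g) < threshold & abs(r - b) < threshold;

% Take gray pixels from the clean recreation, colored ones from the scan.
out = scan;
mask3 = repmat(isGray, [1 1 3]);   % expand mask to all three channels
out(mask3) = recreation(mask3);
imshow(out);
```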

How to identify and crop a rectangle inside an image in MATLAB

I have an image which has a rectangle drawn in it. The rectangle can be of any design, but the background isn't a single color. It's a photo taken with a phone camera, like this one.
I want to crop to the inner picture (scenery) in the image.
How can I do this in MATLAB?
I tried this code
img = im2double(imread('https://i.stack.imgur.com/iS2Ht.jpg'));
BW = im2bw(img);
dim = size(BW)
col = round(dim(2)/2)-90;
row = min(find(BW(:,col)))
boundary = bwtraceboundary(BW,[row, col],'N');
r = [min(boundary) , max(boundary)];
img_cropped = img(r(1) : r(3) , r(2) : r(4) , :);
imshow(img_cropped);
but it works only for one image, this one
and not the one above or this one
I need code that works for any image with this specific rectangle design. Any help will be appreciated. Thank you.
The processing below considers the following:
background is monochrome, possibly with a gradient
the border is monochrome (possibly with a gradient), easily distinguishable from the background, and not baroque/rococo style
the picture is a real-world picture with lots of detail, not Malevich's Black Square
So we first search for the picture and flatten the background by entropy filtering:
img=imread('http://i.stack.imgur.com/KMBRg.jpg');
%dimensions for neighbourhood are just a guess
cross_nhood = false(11,11); cross_nhood(:,6)=1;cross_nhood(6,:)=1;
img_ent = entropyfilt(im2double(img), repmat(cross_nhood, [1 1 3])); % ./255 on uint8 would round to 0 or 1
img_ent_gray = rgb2gray(img_ent);
Then we find corners using the Harris detector and choose 4 points: the two leftmost and the two rightmost, and crop the image, thus removing the background (up to the inclination). I use R2011a; you may have slightly different functions, so refer to the MATLAB help.
harris_pts = corner(img_ent_gray);
corn_pts = sortrows(harris_pts,1);
corn_pts = [corn_pts(1:2,:);...
corn_pts(size(corn_pts,1)-1:size(corn_pts,1),:)];
crop_img=img(min(corn_pts(:,2)):max(corn_pts(:,2)),...
min(corn_pts(:,1)):max(corn_pts(:,1)),:);
corn_pts(:,1)=corn_pts(:,1) - min(corn_pts(:,1));
corn_pts(:,2)=corn_pts(:,2) - min(corn_pts(:,2));
corn_pts = corn_pts + 1;
And here is a problem: the lines between the corner points are inclined at slightly different angles. It can be a problem of both corner detection and image capture (lens distortion and/or a slightly wrong acquisition angle). There is no straightforward, always-right solution. I would choose the biggest inclination (it will crop the picture a little) and process the image line by line (to traverse lines, use the Bresenham algorithm) until all pixels (or most, your choice) belong to the picture rather than the inner border. The distinguishing feature can be local entropy, the standard deviation of colour values, a specific colour threshold, various statistical methods, whatever.
Another approach is colour segmentation; I like Gram-Schmidt orthogonalization or a*-b* colour segmentation most. However, you'll run into all the same problems if the image is skewed or if part of the picture matches the colour of the border (see the last picture, bottom left corner).

Applying an image as a mask in matlab

I am a new user of image processing in MATLAB. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: In Figure 7, masks are defined, but I couldn't find the mask data set, so I use a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to Lena, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1s and the black lines (or text) are 0s.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
First of all, you should know that this paper was published on arXiv. When a paper is published on arXiv, it is always a good idea to find out more about the author and/or the university that published it.
TRUST me on this: you do not need to waste your time on this paper.
I understand your need, but it is not a good idea to get the mask from a print screen. The pixel values you get from a print screen may not be the same as the original values, and the zoom may change the size, so you need to make sure the sizes are the same.
1) Print screen and paste the image.
2) Crop the mask.
3) Convert RGB to grayscale.
4) Threshold the grayscale image to get the binary mask.
Note that if you save the image as JPEG, compression distortions around high-frequency edges will change the edge shape.
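Those steps might look like this in MATLAB; the filename and crop rectangle are placeholders you would pick by hand once:

```matlab
shot = imread('screenshot.png');          % the pasted print-screen capture
maskRGB = imcrop(shot, [100 50 256 256]); % [x y w h]; a made-up rectangle
maskGray = rgb2gray(maskRGB);
Mask = im2bw(maskGray, 0.5);              % threshold to binary
imwrite(Mask, 'mask.png');                % PNG, not JPEG, to avoid edge artifacts
```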

imshow() not showing changed pixel values

I converted an RGB image to gray and then to binary. I wanted the white pixels of the binary image to be replaced by the corresponding pixel values of the gray image. Although the command window shows that all 1s are replaced with gray pixel values, the same is not reflected in the image.
The binary image (bw) and the new image (newbw) look exactly the same. Why?
clc; clear all; close all;
i = imread('C:\Users\asus\Documents\Academics 2014 (SEM 7)\DIP\matlabTask\im1.jpg');
igray = rgb2gray(i);
bw = im2bw(igray);
[m,n] = size(bw);
newbw = zeros(m,n);
for i = 1:m
    for j = 1:n
        if bw(i,j) == 1
            newbw(i,j) = igray(i,j);
        else
            newbw(i,j) = bw(i,j);
        end
    end
end
subplot(311), imshow(igray), subplot(312), imshow(bw), subplot(313), imshow(newbw)
The reason is that when you create your new blank image, it is automatically created as type double. When you call imshow on a double image, the dynamic range of the pixel intensities is expected to be between [0,1], where 0 is black and 1 is white. Anything less than 0 (negative) will be shown as black, and anything greater than 1 will be shown as white.
Because this is surely not the case in your image, and a lot of the values are going to be > 1, you will get an output of either black or white. I suspect your image is of type uint8, and so its dynamic range is between [0,255].
As such, what you need to do is cast your output image so that it is of the same type as your input image. Once you do this, you should be able to see the gray values displayed properly. All you have to do now is simply change your newbw statement so that the variable is of the same class as the input image. In other words:
newbw = zeros(m,n,class(igray));
Your code should now work. FWIW, you're not the first to encounter this problem. Almost all of the imshow questions I answer come down to people forgetting that images of type double and type uint* behave differently.
Minor note
For efficiency purposes, I personally would not use a for loop. You can achieve the above behaviour by using indexing with your boolean array. As such, I would replace your for loop with this statement:
newbw(bw) = igray(bw);
Whichever locations in bw are true or logical 1, you copy those locations from igray over to newbw.

Find and crop defined image areas automatically

I want to process an image in MATLAB.
The image consists of a solid background and two specimens (top and bottom side). I already have code that separates the top and bottom into two images. The part I can't get working is cropping the image to the glued area only (the red box in the image; I've only marked the top one). The cropped image should be a rectangle just like the red box (the yellow background can be discarded afterwards).
I know this can be done with imcrop, but that requires manual input from the user. The code needs to be automated so that more images can be processed without user input. All images will have the same colors (red for glue, black for material).
Can someone help me with this?
Edit: Thanks for the help. I used the following code to solve the problem. However, I couldn't get rid of the black part to the right of the red box. This can be fixed by taping that part off before taking pictures. The code I used looks a bit weird, but it succeeds in counting the black region in the picture and getting a percentage.
a = imread('testim0.png');
level = graythresh(a);
bw2 = im2bw(a, level);
rgb2 = bw2rgb(bw2);            % bw2rgb and ait_imgneg are custom/third-party functions
IM2 = imclearborder(rgb2, 4);
pic_negative = ait_imgneg(IM2);
%% figures
% figure()
% image(rgb2)
%
% figure()
% imshow(pic_negative)
%% Counting percentage
g = 0;
for j = 1:size(rgb2,2)
    for i = 1:size(rgb2,1)
        if rgb2(i,j,1) <= 0 ...
                & rgb2(i,j,2) <= 0 ...
                & rgb2(i,j,3) <= 0
            g = g + 1;
        end
    end
end
h = 0;
for j = 1:size(pic_negative,2)
    for i = 1:size(pic_negative,1)
        if pic_negative(i,j) == 0
            h = h + 1;
        end
    end
end
per = g/(g+h)
If anyone has some suggestions to improve the code, I'm happy to hear it.
For a straightforward segmentation of the image into 2 regions (background, foreground) based on color (yellow and black are prominent in your case), one option is clustering the image color values with the k-means algorithm. For additional robustness you can transform the image from RGB to the L*a*b* color space.
The example for your case follows the MATLAB Image Processing Toolbox example here.
% read and transform to L*a*b* space
im_rgb = im2double(imread('testim0.png'));
im_lab = applycform(im_rgb, makecform('srgb2lab'));
% keep only the a,b channels and form the feature vector
ab = double(im_lab(:,:,2:3));
[nRows, nCols, ~] = size(ab);
ab = reshape(ab, nRows * nCols, 2);
% apply k-means for 2 regions, repeated several times, e.g. 5
nRegions = 2;
[cluster_idx, cluster_center] = kmeans(ab, nRegions, 'Replicates', 5);
% get the foreground-background mask
im_regions = reshape(cluster_idx, nRows, nCols);
You can use the resulting binary image to index the regions of interest (or find the boundary) in the original reference image.
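For example, a bounding-box crop from the cluster mask could be sketched like this; which cluster index is the specimen is arbitrary, so inspect `im_regions` first:

```matlab
% Assume cluster 1 came out as the foreground (check im_regions first;
% k-means assigns labels arbitrarily).
mask = im_regions == 1;

% Bounding box of the mask, then an index-based crop of the original.
[rows, cols] = find(mask);
crop = im_rgb(min(rows):max(rows), min(cols):max(cols), :);
imshow(crop);
```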
Images are stored as matrices. If you know the pixel bounds of the crop box you want, you can perform the crop using indexing.
M = rand(100); % create a 100x100 matrix or load it from an image
left = 50;
right = 75;
top = 80;
bottom = 10;
croppedM = M(bottom:top, left:right);
% save croppedM
You can easily get the unknown crop bounds by:
1) contour-plotting the image,
2) using find() on the result for the max/min x/y values,
3) using #slayton's method to perform the actual crop.
EDIT: Just looked at your actual image - it won't be so easy. But color-enhance/threshold your image first, and the contours should work with reasonable accuracy. Needless to say, this requires tweaking for your specific situation.
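A rough sketch of those steps, using a color threshold in place of contouring and assuming the glue is the reddest region (the channel limits are guesses to tune):

```matlab
im = imread('testim0.png');   % filename from the question
% Step 1-ish: threshold to keep strongly red pixels (guessed limits).
red = im(:,:,1) > 150 & im(:,:,2) < 100 & im(:,:,3) < 100;
% Step 2: find() the extremes of the mask.
[rows, cols] = find(red);
% Step 3: index-based crop.
cropped = im(min(rows):max(rows), min(cols):max(cols), :);
imshow(cropped);
```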
Since you've already been able to separate the top and bottom, and to segment the region you want (including the small part on the right side that you don't want), I propose you just add a fix at the end of the code, as follows.
After segmentation, sum the blue intensity values of each column, so that you compress the image from 2D to 1D. If the original region is width=683, height=59, the new matrix/image will simply be width=683, height=1.
Now you can apply a small threshold to determine where the edge should lie, and crop the image at that location. Then you get your stats.
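A sketch of that column-sum idea, where `seg` is a hypothetical RGB image holding the segmented region and the 5% threshold is a guess:

```matlab
% Compress the blue channel to one row by summing each column.
colSum = sum(double(seg(:,:,3)), 1);   % 1 x width vector

% Columns whose summed intensity clears a small threshold are "inside".
keep = colSum > 0.05 * max(colSum);    % 5% of the peak; tune it
first = find(keep, 1, 'first');
last  = find(keep, 1, 'last');

cropped = seg(:, first:last, :);       % crop to those columns
```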
