Need to automatically eliminate noise in an image and the outer boundary of an object

I am a mechanical engineering student working on a project to automatically detect the weld seam (the seam is the edge that is to be welded) present in a workshop. This image gives the basic terminology involved in welding (http://i.imgur.com/Hfwjq0w.jpg).
To separate the weldment from the other objects, I have taken the background image and subtracted the foreground image containing the weldment from it, to obtain only the weldment (http://i.imgur.com/v7yBWs1.jpg). After the subtraction, shadows, glare and remnant noise from the subtracted background are still present.
As I want to automatically identify only the weld seam, without the outer boundary of the weldment, I have tried to detect the edges in the weldment image using the Canny algorithm and to eliminate the isolated noise using the function bwareaopen. I have somehow obtained the approximate boundary of the weldment and the weld seam. The thresholds I have used were found purely by trial and error, as I don't know a way to set them automatically.
The problem I am now facing is that I can't specify a fixed threshold, as the algorithm should be able to identify the seam of any material regardless of its surface texture and of the glare and shadow present. I need some assistance to remove the glare, shadow and isolated points from the background-subtracted image.
I also need help to get rid of the outer boundary and obtain only a smooth weld seam from start point to end point.
I have tried the following code:
a=imread('imageofworkpiece.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('background.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
figure;imshow(c) % %http://i.imgur.com/1UQ8E3D.jpg
bw = bwareaopen(c, 100); % by trial and error
figure;imshow(bw) %http://i.imgur.com/Gnjy2aS.jpg
Can anybody please suggest an adaptive way to set a threshold and remove the outer boundary so that only the seam is detected? Thank you.

This doesn't solve your problem of finding an automatic thresholding algorithm, but I can help with isolating the seam. The seam runs along the y axis (will this always be the case?), so I used the Hough transform to isolate only near-vertical lines. Normally it finds all lines, but I restricted the theta search range. The code I'm using (taken directly from the MATLAB documentation) happens to highlight the longest line segment, which is coincidentally the weld seam. This was purely coincidental. But using your bwareaopen-ed image as input, the Hough line detector is able to find the seam. Of course, it required a bit of playing around to work, so you are still stuck with your original problem of finding optimal settings.
Maybe this can be a springboard for someone else.
a=imread('weldment.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('weld_bg.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
bw = bwareaopen(c, 100); % by trial and error
figure(1);imshow(c) ;title('canny') % %http://i.imgur.com/1UQ8E3D.jpg
figure(2);imshow(bw);title('bw area open') %http://i.imgur.com/Gnjy2aS.jpg
[H,T,R] = hough(bw,'RhoResolution',1,'Theta',-15:5:15);
figure(3)
imshow(H,[],'XData',T,'YData',R,...
'InitialMagnification','fit');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
P = houghpeaks(H,5,'threshold',ceil(0.5*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
plot(x,y,'s','color','white');
% Find lines and plot them
lines = houghlines(BW,T,R,P,'FillGap',2,'MinLength',30);
figure(4), imshow(BW), hold on
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    % Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
% highlight the longest line segment
plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','blue');
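As a possible starting point for the automatic-threshold part (my suggestion, not something verified on these images): edge picks the Canny thresholds itself when you omit them, and graythresh gives an Otsu level you could scale.
% Hedged sketch: let MATLAB choose the Canny thresholds, or derive them from Otsu.
[cAuto, autoThresh] = edge(BW, 'canny');   % thresholds chosen automatically, returned in autoThresh
t = graythresh(BW);                        % Otsu threshold in [0,1]
cOtsu = edge(BW, 'canny', [0.4*t t]);      % the 0.4 low/high ratio is just a guess
bwAuto = bwareaopen(cAuto, 100);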

From your image it looks like the weld seam will usually be very dark with a sharp intensity edge, so why don't you use that?
do not use the background image
create a derivative image
dx[y][x]=pixel[y][x]-pixel[y][x-1]
do this for the whole image (if done in place, then x must decrease in the loop!)
filter out all derivatives lower than the threshold
if (|dx[y][x]|<threshold) dx[y][x]=0; else pixel[y][x]=255; // or whatever values you use
how to obtain the threshold value?
compute the min and max derivative and set the threshold as min+(max-min)*scale, where scale is a value lower than 1.0 (start with 0.02 or 0.1, for example)
do this also for the y axis
so compute dy[][] ... and combine dx[][] and dy[][] together, either with OR or with AND logical functions
filter out artifacts
you can use morphological filters or a smoothed threshold for this. After all this you will have a mask of the weld-seam pixels
if you need a bounding box, then just loop through all pixels and remember the min/max x,y coordinates ...
[Notes]
if your images have good lighting, then you can ignore the derivative and threshold the intensity directly with something like:
threshold = 0.5*(average_intensity+lowest_intensity)
if you want to really fully automate this, then you have to use adaptive thresholds: try several thresholds in a loop and keep the result closest to the desired output, based on geometry, size, position, etc. ...
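Since the original question is in MATLAB, a minimal MATLAB sketch of the derivative-threshold steps above might look like this (my own translation, untested on the actual images; the 10% / 1.5% scale factors are only starting guesses):
% MATLAB sketch of the derivative thresholding (assumed translation, tune the scales)
g   = double(rgb2gray(imread('imageofworkpiece.jpg')));
dx  = abs(diff(g, 1, 2));                               % |dI/dx|
dy  = abs(diff(g, 1, 1));                               % |dI/dy|
trX = min(dx(:)) + 0.10  * (max(dx(:)) - min(dx(:)));   % threshold for dx (10%)
trY = min(dy(:)) + 0.015 * (max(dy(:)) - min(dy(:)));   % threshold for dy (1.5%)
mask = dx(1:end-1, :) >= trX & dy(:, 1:end-1) >= trY;   % AND-combine both
mask = bwareaopen(mask, 50);                            % drop small artifacts
imshow(mask)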
[edit1] I finally have some time/mood for this, so:
Intensity image threshold
You provided just a single image, which is far from enough to build a reliable algorithm on. This is the result:
As you can see, without further processing this is not a good approach.
Derivative image threshold
threshold of the derivative by x (10%)
threshold of the derivative by y (5%)
AND combination of both: 10% dI/dx and 1.5% dI/dy
The code in C++ looks like this (sorry, I do not use MATLAB):
int x,y,i,i0,i1,tr2,tr3;
pic1=pic0;      // copy input image pic0 to pic1
pic2=pic0;      // copy input image pic0 to pic2 (just to resize it to the size needed for the derivative)
pic3=pic0;      // copy input image pic0 to pic3 (just to resize it to the size needed for the derivative)
pic1.rgb2i();   // RGB -> grayscale intensity
// absolute derivative by x
for (y=pic1.ys-1;y>0;y--)
 for (x=pic1.xs-1;x>0;x--)
    {
    i0=pic1.p[y][x  ].dd;
    i1=pic1.p[y][x-1].dd;
    i=i0-i1; if (i<0) i=-i;
    pic2.p[y][x].dd=i;
    }
// compute min,max of the derivative
i0=pic2.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
 for (x=1;x<pic1.xs;x++)
    {
    i=pic2.p[y][x].dd;
    if (i0>i) i0=i;
    if (i1<i) i1=i;
    }
tr2=i0+((i1-i0)*100/1000);   // 10% threshold for dI/dx
// absolute derivative by y
for (y=pic1.ys-1;y>0;y--)
 for (x=pic1.xs-1;x>0;x--)
    {
    i0=pic1.p[y  ][x].dd;
    i1=pic1.p[y-1][x].dd;
    i=i0-i1; if (i<0) i=-i;
    pic3.p[y][x].dd=i;
    }
// compute min,max of the derivative
i0=pic3.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
 for (x=1;x<pic1.xs;x++)
    {
    i=pic3.p[y][x].dd;
    if (i0>i) i0=i;
    if (i1<i) i1=i;
    }
tr3=i0+((i1-i0)*15/1000);    // 1.5% threshold for dI/dy
// threshold the derivative images and combine them
for (y=1;y<pic1.ys;y++)
 for (x=1;x<pic1.xs;x++)
    {
    // fill thresholded areas with green, copy the original (pic0) pixel elsewhere
    if ((pic2.p[y][x].dd>=tr2)&&(pic3.p[y][x].dd>=tr3)) i=0x00FF00;
     else i=pic0.p[y][x].dd;
    pic1.p[y][x].dd=i;
    }
pic0 is the input image
pic1 is the output image
pic2, pic3 are just temporary storage for the derivatives
pic?.xs, pic?.ys is the size of pic?
pic?.p[y][x].dd is pixel access (dd means access the pixel as a DWORD ...)
As you can see there is a lot of stuff around (not visible in the first image you provided), so you need to process this further:
segment and separate ...
use the Hough transform ...
filter out small artifacts ...
identify the object by its expected geometric properties (aspect ratio, position, size); a small sketch of this follows below
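In MATLAB, this kind of geometric filtering could look roughly like the following (my sketch; mask is assumed to be your thresholded binary image, and the area and aspect-ratio limits are placeholders you would have to tune):
% Keep only connected components whose size and shape look like a seam (limits are guesses).
stats = regionprops(mask, 'Area', 'BoundingBox', 'PixelIdxList');
seamMask = false(size(mask));
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;            % [x y width height]
    aspect = bb(4) / max(bb(3), 1);       % height / width
    if stats(k).Area > 200 && aspect > 3  % tall, thin, reasonably large
        seamMask(stats(k).PixelIdxList) = true;
    end
end
imshow(seamMask)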
Adaptive thresholds:
For this you need to know the desired output image properties (which is not possible to reliably deduce from a single input image). Then create a function that does the above processing with variable tr2, tr3. Try several options of tr2, tr3 in a loop (loop through all values, or iterate towards better results) and remember the best output (so you also need some function that measures the quality of the output). For example:
quality=0.0; param=0.0;
for (a=0.2;a<=0.8;a+=0.1)
    {
    pic1=process_image(pic0,a);
    q=detect_quality(pic1);
    if (q>quality) { quality=q; param=a; pic_best=pic1; }
    }
After this, pic_best should hold the relatively best thresholded image ... You should handle each threshold separately like this; inside process_image the targeted threshold must be scaled by a, for example tr2=i0+((i1-i0)*a);

Related

How to detect defective/missing pills in a blister pack? (Matlab)

How do I detect defective/missing tablets in the tablet strips, assuming there is one missing tablet in the strip? I've tried stdfilt(), but the image contains a lot of noise. I've also tried average and median filtering, as well as edge detectors such as Canny and Prewitt. I also added noise such as salt and pepper to the image.
Is there any other segmentation method? Any code will be helpful.
I2=rgb2gray(I);
J = imnoise(I2,'salt & pepper',0.02);
figure
imshow(J)
Kaverage = filter2(fspecial('average',3),J)/255;
figure
imshow(Kaverage)
Kmedian = medfilt2(J);
imshowpair(Kaverage,Kmedian,'montage')
BW1 = edge(Kmedian,'Canny');
BW2 = edge(Kmedian,'Prewitt');
Here's an approach based on my comment
% reduce to grayscale
d=rgb2gray(your_img);
% find edge and blur a bit so we can find circles
d2=conv2(edge(d),ones(9),'same');
d2=max(d2(:))-d2;
% find circles
Rmin = 71; Rmax = 80;
[center, radius] = imfindcircles(d2,[Rmin Rmax],'Sensitivity',0.98);
% Display what we found
imagesc(d);axis square
hold on;
viscircles(center,radius);
plot(center(:,1),center(:,2),'yx','LineWidth',2);
hold off;
% histogram of each circle content:
[x, y]=meshgrid(1:size(d,2),1:size(d,1));
for n=1:numel(radius)
    circle_pixels{n} = find((x-center(n,1)).^2 + (y-center(n,2)).^2 <= radius(n).^2);
    h(:,n) = histcounts(d(circle_pixels{n}), 0:max(d(:)));
    subplot(2,5,n); plot(h(:,n)); title(['circle # ' num2str(n)]);
end
Now we can see how the intensity is distributed in each circle and choose a metric to discriminate the missing pills. For the missing pills (#5, #9, #10) we have a less simple distribution of intensities (more than one peak), and in particular a saturation at the maximal intensity, probably caused by glare reflecting off the foil. So you can now choose a threshold based on that, or any other statistical metric you want (number of peaks in the distributions, etc.)...
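For example, one possible metric (my own suggestion; the intensity cutoff and fraction are guesses you would have to tune) is the fraction of near-saturated pixels inside each circle:
% Hedged example metric: flag circles with many near-saturated pixels (exposed foil glare).
for n = 1:numel(radius)
    vals = d(circle_pixels{n});
    satFrac(n) = mean(vals > 250);   % fraction of very bright pixels (250 is a guess)
end
missing = find(satFrac > 0.05)       % candidate empty blister pockets (5% is a guess)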

Average set of color images and standard deviation

I am learning image analysis and am trying to average a set of color images and get the standard deviation at each pixel.
I have done the following, but it is not done by averaging the RGB channels separately (e.g. rchannel = I(:,:,1)).
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i=1:length(filelist)
    imname = ['dir1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    ims = ims + rgbim;
end
avgset1 = ims/length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't work out how to get the matrix holding the standard deviation.
Any help is appreciated.
If you are concerned with finding the mean RGB image, then your code is correct. What I like is that you converted the images using im2double before accumulating the mean, so everything is in double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to find the mean image of a set of images before doing image classification, as it allows the dynamic range of each pixel to be within a normalized range. This allows the training of the learning algorithm to converge quickly to the optimum solution and provides the best set of parameters to facilitate the best classification accuracy.
If you want to find the mean RGB colour, that is, the single average colour over all images, then no, your code is not correct.
You have summed each channel individually into sumrgbims, so the last step is to take this image and sum over each channel, then divide by the total number of pixels and images. Two chained calls to sum, over the first and second dimensions, will do it. This produces a 1 x 1 x 3 vector; using squeeze after this removes the singleton dimensions, and dividing by the pixel and image counts gives a 3 x 1 vector representing the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(sum(sum(sumrgbims, 1), 2)) / (size(sumrgbims,1) * size(sumrgbims,2) * length(filelist));
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image, in addition to accumulating each image, inside the loop. After that, recall that the standard deviation is the square root of the variance, and that the variance equals the mean of the squares minus the square of the mean. We have the mean image, so you just take the average of the accumulated squares and subtract the square of the mean image. To be sure our math is right, suppose we have a signal X with mean mu and N values; the variance is then:
var(X) = (1/N) * sum(x_i^2) - mu^2   (Source: Science Buddies)
The standard deviation is simply the square root of the above result. We calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - for standard deviation
for i=1:length(filelist)
    imname = ['set1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    sumrgbims = sumrgbims + rgbim;
    sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims/length(filelist);
% New - find standard deviation
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display standard deviation image
figure;
imshow(rgbstdset1, []);
Also, note that I've scaled the display in each imshow call so that the smallest value gets mapped to 0 and the largest value gets mapped to 1. This does not change the actual contents of the images; it is just for display purposes.

Compute curvature of a bent pipe using image processing (Hough transform parabola detection)

I'm trying to design a way to detect this pipe's curvature. I tried applying the Hough transform and detected lines, but they don't lie along the surface of the pipe, so smoothing them out to fit a Bézier curve is not working. Please suggest a good way to start for an image like this.
The image obtained by Hough-transform line detection is as follows:
I'm using standard MATLAB code for probabilistic Hough transform line detection, which generates line segments surrounding the structure. Essentially the shape of the pipe resembles a parabola, but for Hough parabola detection I need to provide the eccentricity of the points prior to detection. Please suggest a good way of finding discrete points along the curvature that can be fitted to a parabola. I have tagged opencv and ITK, so if there is a function that can be applied to this particular picture, please suggest it and I will try it out to see the results.
img = imread('test2.jpg');
rawimg = rgb2gray(img);
bwtu = edge(rawimg, 'canny'); % edge map (assumption: bwtu was not defined in the original snippet)
[accum, axis_rho, axis_theta, lineprm, lineseg] = Hough_Grd(bwtu, 8, 0.01);
figure(1); imagesc(axis_theta*(180/pi), axis_rho, accum); axis xy;
xlabel('Theta (degree)'); ylabel('Pho (pixels)');
title('Accumulation Array from Hough Transform');
figure(2); imagesc(bwtu); colormap('gray'); axis image;
DrawLines_2Ends(lineseg);
title('Raw Image with Line Segments Detected');
The edge map of the image is as follows, and the result generated after applying the Hough transform on the edge map is also not good. I was thinking of a solution that does general parametric shape detection: this curve can be expressed as a member of a family of parabolas, so we could do curve fitting to estimate the coefficients as it bends, in order to analyse its curvature. I need to design a real-time procedure, so please suggest anything in this direction.
I suggest the following approach:
First stage: generate a segmentation of the pipe.
perform thresholding on the image.
find connected components in the thresholded image.
search for a connected component which represents the pipe.
The connected component which represents the pipe should have an edge map which is divided into top and bottom edges (see attached image).
The top and bottom edges should have similar size, and they should have a relatively constant distance from one another. In other words, the variance of their per-pixel distances should be low.
Second stage - extract curve
At this stage, you should extract the points of the curve in order to perform Bézier fitting.
You can perform this calculation either on the top edge or on the bottom edge.
Another option is to do it on the skeleton of the pipe segmentation.
Results
The pipe segmentation. Top and bottom edges are marked in blue and red respectively.
Code
I = mat2gray(imread('ILwH7.jpg'));
im = rgb2gray(I);
%constant values to be used later on
BW_THRESHOLD = 0.64;
MIN_CC_SIZE = 50;
VAR_THRESHOLD = 2;
SIMILAR_SIZE_THRESHOLD = 0.85;
%stage 1 - thresholding & noise cleaning
bwIm = im>BW_THRESHOLD;
bwIm = imfill(bwIm,'holes');
bwIm = imopen(bwIm,strel('disk',1));
CC = bwconncomp(bwIm);
%iterates over the CC list, and searches for the CC which represents the
%pipe
for ii=1:length(CC.PixelIdxList)
    %ignore small CC
    if(length(CC.PixelIdxList{ii}) < MIN_CC_SIZE)
        continue;
    end
    %extract CC edges
    ccMask = zeros(size(bwIm));
    ccMask(CC.PixelIdxList{ii}) = 1;
    ccMaskEdges = edge(ccMask);
    %find connected components in the edge mask (there should be two).
    %these are the top and bottom parts of the pipe.
    CC2 = bwconncomp(ccMaskEdges);
    if length(CC2.PixelIdxList)~=2
        continue;
    end
    %test that the top and bottom edges have similar sizes
    s1 = length(CC2.PixelIdxList{1});
    s2 = length(CC2.PixelIdxList{2});
    if(min(s1,s2)/max(s1,s2) < SIMILAR_SIZE_THRESHOLD)
        continue;
    end
    %calculate the masks of these two connected components
    topEdgeMask = false(size(ccMask));
    topEdgeMask(CC2.PixelIdxList{1}) = true;
    bottomEdgeMask = false(size(ccMask));
    bottomEdgeMask(CC2.PixelIdxList{2}) = true;
    %test that the variance of the distances between the edges is low
    topEdgeDists = bwdist(topEdgeMask);
    bottomEdgeDists = bwdist(bottomEdgeMask);
    var1 = std(topEdgeDists(bottomEdgeMask));
    var2 = std(bottomEdgeDists(topEdgeMask));
    %if the variances are low - we have found the CC of the pipe. break!
    if(var1<VAR_THRESHOLD && var2<VAR_THRESHOLD)
        pipeMask = ccMask;
        break;
    end
end
%perform median filtering on the top and bottom boundaries.
MEDIAN_SIZE = 5;
[topCurveY, topCurveX] = find(topEdgeMask);
topCurveX = medfilt1(topCurveX, MEDIAN_SIZE);
topCurveY = medfilt1(topCurveY, MEDIAN_SIZE);
[bottomCurveY, bottomCurveX] = find(bottomEdgeMask);
bottomCurveX = medfilt1(bottomCurveX, MEDIAN_SIZE);
bottomCurveY = medfilt1(bottomCurveY, MEDIAN_SIZE);
%display results
imshow(pipeMask); hold on;
plot(topCurveX,topCurveY,'.-');
plot(bottomCurveX,bottomCurveY,'.-');
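To get from these curve points to the parabola the question asks about, a minimal follow-up sketch (my addition, not part of the original answer) would be an ordinary polynomial fit with polyfit; a Bézier or spline fit could be plugged in the same way:
% Fit a parabola y = p(1)*x^2 + p(2)*x + p(3) to the top edge points
% (sketch; works the same for the bottom edge or the skeleton points).
p = polyfit(topCurveX, topCurveY, 2);
xFit = linspace(min(topCurveX), max(topCurveX), 200);
yFit = polyval(p, xFit);
plot(xFit, yFit, 'g-', 'LineWidth', 2);
% The curvature of the fitted parabola at any x is then
% kappa = abs(2*p(1)) ./ (1 + (2*p(1)*xFit + p(2)).^2).^(3/2);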
Comments
In this specific example, acquiring the pipe segmentation by thresholding was relatively easy. In some scenes it may be more complex; in those cases, you may want to use a region-growing algorithm to generate the pipe segmentation.
Detecting the connected component which represents the pipe can be done using some more heuristics. For example, the local curvature of its boundaries should be low.
You can find the connected components (CCs) of your inverted edge-map image. Then you can filter those components, for example based on their pixel count, using region properties. Here are the connected components I obtained using the Octave code below.
Now you can fit a model to each of these CCs using something like nlinfit or any suitable method.
im = imread('uFBtU.png');
gr = rgb2gray(uint8(im));
er = imerode(gr, ones(3)) < .5;
[lbl, n] = bwlabel(er, 8);
imshow(label2rgb(lbl))

Resize an image with bilinear interpolation without imresize

I've found some methods to enlarge an image but there is no solution to shrink an image. I'm currently using the nearest neighbor method. How could I do this with bilinear interpolation without using the imresize function in MATLAB?
In your comments, you mentioned you wanted to resize an image using bilinear interpolation. Bear in mind that the bilinear interpolation algorithm is size independent. You can very well use the same algorithm for enlarging an image as well as shrinking an image. The right scale factors to sample the pixel locations are dependent on the output dimensions you specify. This doesn't change the core algorithm by the way.
Before I start with any code, I'm going to refer you to Richard Alan Peters' II digital image processing slides on interpolation, specifically slide #59. It has a great illustration as well as pseudocode on how to do bilinear interpolation that is MATLAB friendly. To be self-contained, I'm going to include his slide here so we can follow along and code it:
Please be advised that this only resamples the image. If you actually want to match MATLAB's output, you need to disable anti-aliasing.
MATLAB by default will perform anti-aliasing on the images to ensure the output looks visually pleasing. If you'd like to compare apples with apples, make sure you disable anti-aliasing when comparing between this implementation and MATLAB's imresize function.
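For reference, one way to do that comparison (as far as I know, imresize accepts an 'Antialiasing' name-value pair; it only takes effect when shrinking):
% Compare against imresize with anti-aliasing turned off (only relevant when shrinking).
ref = imresize(im, out_dims, 'bilinear', 'Antialiasing', false);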
Let's write a function that will do this for us. This function will take in an image (read in through imread), which can be either colour or grayscale, as well as a two-element array containing the output dimensions of the final resized image. The first element of this array will be the rows and the second element will be the columns. We will simply go through the algorithm and calculate the output pixel colours / grayscale values using that pseudocode:
function [out] = bilinearInterpolation(im, out_dims)
%// Get some necessary variables first
in_rows = size(im,1);
in_cols = size(im,2);
out_rows = out_dims(1);
out_cols = out_dims(2);
%// Let S_R = R / R'
S_R = in_rows / out_rows;
%// Let S_C = C / C'
S_C = in_cols / out_cols;
%// Define grid of co-ordinates in our image
%// Generate (x,y) pairs for each point in our image
[cf, rf] = meshgrid(1 : out_cols, 1 : out_rows);
%// Let r_f = r'*S_R for r = 1,...,R'
%// Let c_f = c'*S_C for c = 1,...,C'
rf = rf * S_R;
cf = cf * S_C;
%// Let r = floor(rf) and c = floor(cf)
r = floor(rf);
c = floor(cf);
%// Any values out of range, cap
r(r < 1) = 1;
c(c < 1) = 1;
r(r > in_rows - 1) = in_rows - 1;
c(c > in_cols - 1) = in_cols - 1;
%// Let delta_R = rf - r and delta_C = cf - c
delta_R = rf - r;
delta_C = cf - c;
%// Final line of algorithm
%// Get column major indices for each point we wish
%// to access
in1_ind = sub2ind([in_rows, in_cols], r, c);
in2_ind = sub2ind([in_rows, in_cols], r+1,c);
in3_ind = sub2ind([in_rows, in_cols], r, c+1);
in4_ind = sub2ind([in_rows, in_cols], r+1, c+1);
%// Now interpolate
%// Go through each channel for the case of colour
%// Create output image that is the same class as input
out = zeros(out_rows, out_cols, size(im, 3));
out = cast(out, class(im));
for idx = 1 : size(im, 3)
    chan = double(im(:,:,idx)); %// Get i'th channel
    %// Interpolate the channel
    tmp = chan(in1_ind).*(1 - delta_R).*(1 - delta_C) + ...
          chan(in2_ind).*(delta_R).*(1 - delta_C) + ...
          chan(in3_ind).*(1 - delta_R).*(delta_C) + ...
          chan(in4_ind).*(delta_R).*(delta_C);
    out(:,:,idx) = cast(tmp, class(im));
end
Take the above code, copy and paste it into a file called bilinearInterpolation.m and save it. Make sure you change your working directory to the folder where you've saved this file.
Except for sub2ind and perhaps meshgrid, everything seems to be in accordance with the algorithm. meshgrid is very easy to explain. All you're doing is specifying a 2D grid of (x,y) co-ordinates, where each location in your image has a pair of (x,y) or column and row co-ordinates. Creating a grid through meshgrid avoids any for loops as we will have generated all of the right pixel locations from the algorithm that we need before we continue.
How sub2ind works is that it takes in a row and column location in a 2D matrix (well... it can really be any amount of dimensions you want), and it outputs a single linear index. If you're not aware of how MATLAB indexes into matrices, there are two ways you can access an element in a matrix. You can use the row and column to get what you want, or you can use a column-major index. Take a look at this matrix example I have below:
A =
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
If we want to access the number 9, we can do A(2,4) which is what most people tend to default to. There is another way to access the number 9 using a single number, which is A(11)... now how is that the case? MATLAB lays out the memory of its matrices in column-major format. This means that if you were to take this matrix and stack all of its columns together in a single array, it would look like this:
A =
1
6
11
2
7
12
3
8
13
4
9
14
5
10
15
Now, if you want to access element number 9, you would need to access the 11th element of this array. Going back to the interpolation bit, sub2ind is crucial if you want to vectorize accessing the elements in your image to do the interpolation without doing any for loops. As such, if you look at the last line of the pseudocode, we want to access elements at r, c, r+1 and c+1. Note that all of these are 2D arrays, where each element in each of the matching locations in all of these arrays tell us the four pixels we need to sample from in order to produce the final output pixel. The output of sub2ind will also be 2D arrays of the same size as the output image. The key here is that each element of the 2D arrays of r, c, r+1, and c+1 will give us the column-major indices into the image that we want to access, and by throwing this as input into the image for indexing, we will exactly get the pixel locations that we want.
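As a quick sanity check of that mapping (a small illustrative snippet, not part of the original explanation):
A = [1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15];
ind = sub2ind(size(A), 2, 4);   % row 2, column 4
% ind is 11, and A(ind) returns 9, the same element as A(2,4)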
There are some important subtleties I'd like to add when implementing the algorithm:
You need to make sure that any indices to access the image when interpolating outside of the image are either set to 1 or the number of rows or columns to ensure you don't go out of bounds. Actually, if you extend to the right or below the image, you need to set this to one below the maximum as the interpolation requires that you are accessing pixels to one over to the right or below. This will make sure that you're still within bounds.
You also need to make sure that the output image is cast to the same class as the input image.
I ran through a for loop to interpolate each channel on its own. You could do this intelligently using bsxfun, but I decided to use a for loop for simplicity, and so that you are able to follow along with the algorithm.
As an example to show this works, let's use the onion.png image that is part of MATLAB's system path. The original dimensions of this image are 135 x 198. Let's interpolate this image by making it larger, going to 270 x 396 which is twice the size of the original image:
im = imread('onion.png');
out = bilinearInterpolation(im, [270 396]);
figure;
imshow(im);
figure;
imshow(out);
The above code will interpolate the image by increasing each dimension by twice as much, then show a figure with the original image and another figure with the scaled up image. This is what I get for both:
Similarly, let's shrink the image down to half its size:
im = imread('onion.png');
out = bilinearInterpolation(im, [68 99]);
figure;
imshow(im);
figure;
imshow(out);
Note that half of 135 is 67.5 for the rows, but I rounded up to 68. This is what I get:
One thing I've noticed in practice is that upsampling with bilinear interpolation performs decently in comparison to other schemes like bicubic... or even Lanczos. However, when you're shrinking an image, because you're removing detail, nearest neighbour is very much sufficient; I find bilinear or bicubic to be overkill. I'm not sure what your application is, but play around with the different interpolation algorithms and see what you like in the results. Bicubic is another story, and I'll leave that to you as an exercise. The slides I referred you to do have material on bicubic interpolation if you're interested.
Good luck!

How to filter binary image with unwanted region and hole region

I have a hard problem that needs your help. I have a binary image that contains some unwanted regions (small white dots) and hole regions (in figure 1). My idea is to first remove the unwanted regions by calculating their areas and filtering out those with a small area value. In the second step, I fill in the hole regions to get a clean image. What do you think is the best method to fill in the hole regions? Do you have any idea how to resolve it? Could you help me implement it in MATLAB? Thank you so much. This is my reference code for removing unwanted regions, but it needs a threshold term. You can download the test image here.
function exImage=rmUnwantedRegion(Img,threshold)
lb = bwlabel(Img);
st = regionprops(lb, 'Area', 'PixelIdxList' );
toRemove = [st.Area] <threshold; % fix your threshold here
exImage = Img;
exImage( vertcat(st(toRemove).PixelIdxList ) ) = 0; % remove
end
Here is an example implementation based on my comment:
subplot(1,3,1), imshow(input);
title('Original Image');
Calculating the opening of the image:
openInput = bwareaopen(input, 20);
subplot(1,3,2), imshow(openInput);
title('Opened Image');
And the subsequent closing:
ClosedInput = imclose(openInput,ones(10));
subplot(1,3,3), imshow(ClosedInput);
title('Closed Image');
Result:
Assuming a white pixel is 1 and black is 0:
Step 1:
Use a convolution kernel (http://en.wikipedia.org/wiki/Kernel_%28image_processing%29) with a blur filter.
Step 2:
Threshold each pixel against some static value (for example 0.5):
if pixel > 0.5, pixel = 1
else pixel = 0
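A minimal MATLAB sketch of those two steps (my interpretation; the kernel size and the 0.5 cutoff are just the example values above, and bw is an assumed name for the input binary image):
% Step 1: blur the binary image with an averaging (box) kernel.
blurred = conv2(double(bw), ones(5)/25, 'same');
% Step 2: threshold each pixel against a static value.
cleaned = blurred > 0.5;
imshow(cleaned)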
This looks like a job for binary dilation and erosion. Generally an erosion is done first to remove unwanted noise and then dilation is performed with the same structuring element to fill in the gaps left by the erosion. Matlab uses strel to create structuring elements for morphological operations. You can also read about morphological operators here
Example:
SE=strel('square',5);
im_eroded=imerode(im,SE);
im_dilated=imdilate(im_eroded,SE);
You need to do an erosion (Wikipedia or Matlab) followed by a dilation (Wikipedia or Matlab). This is done using the imerode and the imdilate functions in Matlab.
Doing this requires specifying the structuring element used for eroding and dilating, via the strel function, with a shape ('square', 'disk', 'octagon', etc.) and a size.
SE=strel('disk',5);
im_eroded=imerode(im,SE);
im_dilated=imdilate(im_eroded,SE);
