How can I implement a region filling (conditional dilation) algorithm in Matlab?

How can I implement the region filling (conditional dilation) algorithm in Matlab, where the algorithm terminates at step k if X_k = X_(k-1)?
I want to fill this image using the region filling algorithm in Matlab.

You can use the imfill function from the Image Processing toolbox.
You can either specify the points where the filling should start, or use the 'holes' option to fill all holes:
I = imread('http://i.stack.imgur.com/BkHkg.png');
I = I>0; % convert to binary image
J = imfill(I,'holes');
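If you do want to see the conditional-dilation formulation from the question in code, here is a minimal sketch (it assumes a 3x3 cross-shaped structuring element and a seed pixel (r,c) known to lie inside the hole; both are assumptions for illustration, not part of imfill):
A  = I;                          % binary image containing the region boundary
Ac = ~A;                         % complement of A constrains the dilation
B  = strel('diamond',1);         % 3x3 cross-shaped structuring element (assumed)
X  = false(size(A));
X(r,c) = true;                   % seed pixel inside the hole ((r,c) assumed known)
while true
    Xnew = imdilate(X,B) & Ac;   % conditional dilation: X_k = (X_(k-1) dilated by B) restricted to A^c
    if isequal(Xnew, X)          % terminate at step k when X_k = X_(k-1)
        break;
    end
    X = Xnew;
end
J = X | A;                       % filled interior united with the original boundary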
--
If you want to implement the algorithm yourself, then please specify what algorithm you are using, add the code you have and tell us exactly what problems you are having. Nobody here will write the code for you from scratch, but we are glad to help with problems.

Related

Shape detection in image using Matlab

I have an image with many shapes and I need to write some Matlab code that removes all the shapes except the rectangle. Is it possible to do this using only strel, imclose and bwareaopen? If you think so, I would be very happy to hear your opinion.
Image:
If I understood your comment correctly, the rectangle may have any size. I think this can only be asked if the other shapes have a fixed size, since you are asked to use strel, imclose and bwareaopen. To briefly explain:
strel creates a structuring element of a given size and shape (rectangle, disk, or any other shape appearing in the picture you added).
imclose is used to connect shapes similar to the structuring element you pass in (basically the one you built with strel).
bwareaopen deletes objects that have fewer than P pixels, where P is given as input.
So, if the rectangle can have any size in this image, the other shapes should stay fixed so that they can be described by a strel structuring element and connected with imclose. In this way, you can connect them all, take the inverse, remove them with bwareaopen, and take the inverse once again to end up with the rectangle.
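A minimal sketch of that workflow (the file name, structuring-element size and the pixel threshold P are all assumptions made for illustration and would have to be tuned to the actual shapes):
bw   = im2bw(imread('shapes.png'));   % hypothetical binary input image
se   = strel('disk', 15);             % sized to bridge the non-rectangle shapes (assumed)
conn = imclose(bw, se);               % connect the similar shapes into larger blobs
inv  = ~conn;                         % take the inverse
inv  = bwareaopen(inv, 200);          % remove objects with fewer than P = 200 pixels (assumed)
out  = ~inv;                          % invert back, ideally leaving the rectangle
imshow(out)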
I could not think of any other solution; hope it helps!

How to detect texture or non-texture regions in an image

I know this question may not be a perfect fit for the forum, but I think I can find help from the many smart image processing people here. My question is this: I have an image that contains both texture and non-texture regions. How can I detect which regions are texture regions? Could you suggest an algorithm or a parameter that distinguishes non-texture regions from texture regions?
Thank you so much
UPDATE:
Based on the suggestion about the gray-level co-occurrence matrix, I used a tool to extract those texture features. However, I don't know which one is best for my case. Please look at my results and help me decide which feature to choose.
@rayryeng: Could you tell me what the purpose of the Neighboring Gray-Level Dependence Matrix (NGLDM) is, and how to use it in my case?
You can use texture descriptors such as those used in MPEG-7:
Homogeneous Texture Descriptor (HTD)
Texture Browsing Descriptor (TBD)
Edge Histogram Descriptor (EHD)
You can find the details in scientific papers such as 'Evaluation and comparison of texture descriptors proposed in MPEG-7' or 'Texture Descriptors in MPEG-7'.
A basic way to compute texture descriptors is to use Gabor filters; some of the MPEG-7 descriptors are based on them.
You can also take a look at Grey-Level Co-occurrence Matrix (GLCM) texture measurements.
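As a rough illustration of the GLCM route, here is a minimal sketch using the Image Processing Toolbox functions graycomatrix and graycoprops (the file name and offset are assumptions; in practice you would compute these statistics per block or per region to separate textured from flat areas):
I     = rgb2gray(imread('scene.png'));      % hypothetical input image
glcm  = graycomatrix(I, 'Offset', [0 1]);   % co-occurrence matrix, horizontal neighbour
stats = graycoprops(glcm, {'Contrast','Homogeneity','Energy','Correlation'})
% high Contrast / low Homogeneity values usually indicate a textured region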
I am not sure if this is a valid approach, or whether anybody else uses it (I could not find any scholarly papers), but I have an intuitive method that I have used a couple of times and that worked fine for me.
I count the number of valid SURF features in an image and sort images by that count. My intuition is that as the number of features increases, the texture level also increases. Below is my Matlab function that extracts the number of features:
function [num_pts] = im2surf_feature(im)
% Returns the number of valid SURF features in an image (array or file name).
if nargin>=1 && ischar(im) && exist(im, 'file')
    im = imread(im);       % allow passing a file name instead of an image array
end
if size(im,3)==3
    im = rgb2gray(im);     % SURF detection works on grayscale images
end
ptsI1 = detectSURFFeatures(im);
[~, validPtsI1] = extractFeatures(im, ptsI1);  % keep only points with valid descriptors
num_pts = size(validPtsI1,1);
end
detectSURFFeatures and extractFeatures are Matlab functions.
Note: I know this is a very late answer, but maybe someone can use it or give me feedback as to why this method is good or bad.

Detect the position of an object in an image using Matlab

I am trying to implement the 2D correlation algorithm to detect the position of an object in an image; I don't want to use any built-in function that estimates 2D correlation.
Here is my code:
I = imread('image.tif');    % a black image containing white letters
h = imread('template.tif'); % a small crop of the original image containing one white letter
I = double(I);
h = double(h);
[nrows, ncolumns]   = size(I);
[nrows2, ncolumns2] = size(h);
C = zeros(nrows, ncolumns);
for u = 1:(nrows-nrows2+1)
    for v = 1:(ncolumns-ncolumns2+1)
        for x = 1:nrows2
            for y = 1:ncolumns2
                C(u,v) = C(u,v) + h(x,y)*I(u+x-1,v+y-1);
            end
        end
    end
end
[maxC, ind] = max(C(:));
[m, n] = ind2sub(size(C), ind)  % the index gives the position of the letter
output_image = (3.55/4).*C./100000;
imshow(uint8(output_image));
I think it is working, but it is very slow.
How can I replace the following code with something better to speed up the algorithm?
for x = 1:nrows2
    for y = 1:ncolumns2
        C(u,v) = C(u,v) + h(x,y)*I(u+x-1,v+y-1);
    end
end
I am thinking that at every step I have the following two matrices:
h(1:nrows2,1:ncolumns2) and I(u:u+nrows2-1,v:v+ncolumns2-1)
Another question: are there any other improvements?
thanks.
Whenever you can, try to use matrix ops. So try something like:
rowInds = (1:nrows2)-1;
colInds = (1:ncolumns2)-1;
temp = h.*I(u+rowInds,v+colInds);
C(u,v) = sum(temp(:));
Instead of:
for x = 1:nrows2
    for y = 1:ncolumns2
        C(u,v) = C(u,v) + h(x,y)*I(u+x-1,v+y-1);
    end
end
Yes, there are many improvements: you don't need a for loop at all. Since you do not want to use Matlab's xcorr2 function, you can use conv2. See the answer I gave here.
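A minimal sketch of that idea, reusing I and h from the question (an assumption about the variables, not code from the linked answer): rotating the template by 180 degrees turns convolution into cross-correlation, so the whole loop nest collapses to one conv2 call.
C = conv2(I, rot90(h,2), 'valid');   % 'valid' keeps the same output range as the original loops
[~, ind] = max(C(:));
[m, n]   = ind2sub(size(C), ind);    % top-left corner of the best match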
How about determining the cross correlation in the Fourier domain, following the cross-correlation theorem? That should guarantee a dramatic speed increase.
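A sketch of the Fourier-domain version, again reusing I, h, nrows and ncolumns from the question (assumed): multiplying the image spectrum by the conjugate of the zero-padded template spectrum gives the circular cross-correlation.
F = fft2(I);
H = fft2(h, nrows, ncolumns);        % zero-pad the template to the image size
C = real(ifft2(F .* conj(H)));       % circular cross-correlation
[~, ind] = max(C(:));
[m, n]   = ind2sub(size(C), ind);    % position of the best match (modulo the image size)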

Determining if an image is more or less similar to a goal image

I'm trying to think of a fast algorithm for the following issue.
Given a goal image G, and two images A and B, determine which of A or B is more similar to G. Note that images A, B, and G are all the same dimension.
By more similar, I mean it looks more like image G overall.
Any ideas for algorithms? I am doing this in Objective-C, and have the capability to scan each and every single pixel in images A, B, and G.
I implemented the following: scan each and every pixel and compute the absolute error in the red, green, and blue values for A versus G and for B versus G; the one with less error is more similar. It works okay, but it is extremely slow.
It is not possible to do better than X*Y, where X and Y are the image dimensions, since you need to scan every pixel of the input anyway.
However, one technique you can try is to scan random pixels and accumulate the differences. Once one image looks considerably more similar or dissimilar than the other, you can stop:
# X, Y are the dimensions
sim_A = 0
sim_B = 0
while abs(sim_A - sim_B) < MAX_DISSIMILARITY:
    rand_x = random(X)
    rand_y = random(Y)
    sim_A += dissimilar(img_G, img_A, rand_x, rand_y)
    sim_B += dissimilar(img_G, img_B, rand_x, rand_y)
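For reference, the full per-pixel comparison described in the question is just a sum of absolute differences; a minimal sketch (written in Matlab only for brevity, since the question itself targets Objective-C, and with placeholder file names):
G = double(imread('goal.png'));   % hypothetical file names
A = double(imread('A.png'));
B = double(imread('B.png'));
errA = sum(abs(A(:) - G(:)));     % total absolute error over all pixels and channels
errB = sum(abs(B(:) - G(:)));
% the image with the smaller error is the one more similar to G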
You may try using the SIFT algorithm (Scale Invariant Feature Transform). Since you mention that you want to find which image is MORE similar to the goal image, I think this is the best fit. It extracts the invariant features of the image (features that don't change with changes in luminous intensity, scale, perspective, etc.) and builds a feature vector from them. You can then use this feature vector to compare the image with others. You may check this and this for further reference.
Ideally you would use a computer vision library that makes things much simpler (I guess it might be difficult to read and write images in Objective-C without one). OpenCV (an open-source computer vision library) is well suited for tasks like this; it has many built-in functions for common image and video operations.
Hope this helps :)
I would recommend checking out OpenCV, which is an image processing library. I don't think it has Objective-C support, but it is a better starting place than writing your own algorithm. It is usually better not to reinvent the wheel unless you are doing it for personal practice.
The best way I found is the following.
First, invert all pixels in the image to produce its opposite; this is the most dissimilar image possible.
Then, to compare an image with the target image, compute how far it is from that most dissimilar image: the farther away it is, the better the match.

Smoothing the lines of a segmented image

Hello,
I have a segmented image as shown. Is there a way to smooth the lines so that they do not look so wavy? Thanks.
The following code requires Image Processing Toolbox:
url = 'http://i182.photobucket.com/albums/x11/veronicafmy/FYP/picture5segmentedimage.jpg';
rgb = imread(url);
bw = im2bw(rgb2gray(rgb), 0.5);
se = strel('line',50,74); % 74 degrees determined by inspection
bw2 = imclose(bw,se);
se2 = strel('line',50,74+90);
bw3 = imclose(bw2,se2);
Here's the result:
Optional step: postprocess by thinning:
bw4 = bwmorph(bw3,'thin',inf);
I think you should ask yourself why it has to be smoother. If you have segmented an image and got that result, are you sure that smoothing will give you a correct result?
If it does, then Steve Eddins' answer seems to do the trick.
If, on the other hand, the object you are trying to segment is much smoother than the result, I'd suggest one of two approaches.
If the target object is a cross (two lines), I'd probably fit the lines and change the representation to two line segments, which can then be rendered at whatever precision and smoothness you like. To do this you could either find the center and rotation with some kind of feature detection algorithm, or use Hough transforms to find the lines; the latter is probably much simpler, as sketched below.
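A minimal sketch of the Hough route with Image Processing Toolbox functions (the URL is the one from the answer above; the peak count and the FillGap/MinLength values are assumptions):
rgb = imread('http://i182.photobucket.com/albums/x11/veronicafmy/FYP/picture5segmentedimage.jpg');
bw  = im2bw(rgb2gray(rgb), 0.5);
[H, theta, rho] = hough(bw);
P     = houghpeaks(H, 2);                                        % expect two dominant lines for a cross
lines = houghlines(bw, theta, rho, P, 'FillGap', 50, 'MinLength', 40);
% each element of 'lines' holds the endpoints of a fitted segment,
% which can be redrawn at any desired smoothness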
If the target can have any form, I'd look into a better segmentation algorithm. There are segmentation algorithms that are not based on hard thresholds; I have used graph partitioning algorithms for this, and while they are slow, they work well.
