Smoothing the lines of a segmented image

Hello,
I have a segmented image as shown. Is there a way to smooth the lines so that they do not look so wavy? Thanks.

The following code requires Image Processing Toolbox:
url = 'http://i182.photobucket.com/albums/x11/veronicafmy/FYP/picture5segmentedimage.jpg';
rgb = imread(url);
bw = im2bw(rgb2gray(rgb), 0.5);   % binarize the segmentation
se = strel('line',50,74);         % 74 degrees determined by inspection
bw2 = imclose(bw,se);             % close along the 74-degree direction
se2 = strel('line',50,74+90);     % perpendicular direction
bw3 = imclose(bw2,se2);
Here's the result:
Optional step: postprocess by thinning:
bw4 = bwmorph(bw3,'thin',inf);

I think you should ask yourself why it has to be smoother. If you have segmented an image and gotten that result, are you sure that smoothing will give you a correct result?
If it does, then Steve Eddins' answer seems to do the trick.
If, on the other hand, the object you are trying to segment is much smoother than the result I'd suggest one of two approaches.
If the target object is a cross (two lines), I'd probably calculate the lines and change the representation to two line segments. These can then be rendered at whatever precision and smoothness you like. To do this you could either find the center and rotation using some kind of feature-detection algorithm, or you could use Hough transforms to find the lines; the latter is probably much simpler (a sketch follows below).
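For reference, a minimal sketch of the Hough route with the Image Processing Toolbox; the binary image bw and the FillGap/MinLength values are assumptions, not tested settings:
[H,theta,rho] = hough(bw);
peaks = houghpeaks(H,2);                 % expect two dominant lines
segs  = houghlines(bw,theta,rho,peaks,'FillGap',20,'MinLength',40);
% Each element of segs carries point1/point2 endpoints, which can then
% be rendered at any desired precision and smoothness.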
If the target can have any form, then I'd look into a better segmentation algorithm. There are segmentation algorithms that are not based on hard thresholds. I have used graph-partitioning algorithms for this, and while slow, they work well.

Related

extract motion blur of an image in matlab

I found some papers saying that you can analyze the gradient histogram (a blurred image has gradients that follow a heavy-tailed distribution) or use the FFT (a blurred image has less high-frequency content) to detect blur in an image, e.g. Is there a way to detect if an image is blurry?
But I am not quite sure how to implement this in MATLAB: how to define the threshold value, and so on.
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2 + Gy.^2);   % gradient magnitude
What should I do after running these commands and finding G? And how can I plot a graph of the number of pixels versus G?
I am new to MATLAB and image processing, so could anyone kindly provide more details on how to implement this?
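A minimal sketch of the plotting step, assuming a grayscale image a is already loaded:
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2 + Gy.^2);   % gradient magnitude per pixel
histogram(G(:));           % number of pixels versus G
xlabel('G'); ylabel('pixel count');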
Preparation: we read the cameraman image, which is often used for visualizing image processing algorithms, and add some motion blur.
origIm = imread('cameraman.tif');
littleBlurredIm = imfilter(origIm,fspecial('motion',5,45),'replicate');
muchBlurredIm = imfilter(origIm,fspecial('motion',20,45),'replicate');
which gives us the following images to start with:
To calculate the gradient magnitude (which serves the same purpose here as a Laplacian-based sharpness measure), you can use the imgradient function, which returns magnitude and angle, so we'll simply discard the angle:
[lpOrigIm,~] = imgradient(origIm);
[lpLittleBlurredIm,~] = imgradient(littleBlurredIm);
[lpMuchBlurredIm,~] = imgradient(muchBlurredIm);
which gives:
You can visually see that the original image has very sharp and clear edges. The image with a little blur still has some features, and the image with much blur only contains a few non-zero values.
As proposed in the answer by nikie to this question, we can now create some measure for the blurriness. A (more or less) robust measure would, for example, be the median of the top 0.1% of the values:
% Number of pixels to look at: 0.1%
nPx = round(0.001*numel(origIm));
% Sort values to pick top values
sortedOrigIm = sort(lpOrigIm(:));
sortedLittleBlurredIm = sort(lpLittleBlurredIm(:));
sortedMuchBlurredIm = sort(lpMuchBlurredIm(:));
% Calculate measure
measureOrigIm = median(sortedOrigIm(end-nPx+1:end));
measureLittleBlurredIm = median(sortedLittleBlurredIm(end-nPx+1:end));
measureMuchBlurredIm = median(sortedMuchBlurredIm(end-nPx+1:end));
Which gives the following results:
Original image: 823.7
Little Blurred image: 593.1
Much Blurred image: 490.3
Here is a comparison of this blurriness measure for different motion blur angles and blur amplitudes.
Finally, I tried it on the test images from the answer linked above:
which gives
Interpretation: As you can see, it is possible to detect whether an image is blurred. It appears difficult, however, to detect how strongly blurred the image is, since this also depends on the angle of the blur relative to the scene and on the imperfect gradient calculation. Furthermore, the absolute value is very scene-dependent, so you might have to bring some prior knowledge about the scene into the interpretation of this value.
This is a very interesting topic.
Although gradient magnitude can be used as a good feature for blur detection, this feature fails when dealing with uniform regions in images. In other words, it cannot distinguish between blurred and flat regions. There are many other solutions; some of them detect flat regions to avoid classifying them as blurred. If you want more information, you can check these links:
You can find many good recent papers from the CVPR conference; many of them have project websites where the authors discuss the details and provide code.
This one, http://www.cse.cuhk.edu.hk/leojia/projects/dblurdetect/, is one of the papers I worked on, and the code is available there.
You can also check other CVPR papers; most of them come with code. Here is another one:
http://shijianping.me/jnb/index.html

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics in the vertical vs horizontal direction of the rotated image (for example: standard deviation of summed rows or summed columns). I reasoned that when the strands are oriented exactly vertically or exactly horizontally the difference in statistics would be greatest, and so that angle of rotation is the correct direction in the original image. However, for at least several kinds of statistical properties I tried, this did not work.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case, there is always a very clear preferred direction (different for each neighborhood), but it is not the direction the fibers really go... I can post my sample code, but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, the image can shade from light to dark so edge detectors don't work well, and the image may not always be in focus. Because of that, it is not always possible even for a human to see individual fibers (see the top right of the example); they have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier transform is orientation-dependent, so if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
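In MATLAB, a rough sketch of this FFT-plus-PCA idea might look like the following; the grayscale image img and the 0.8 threshold are assumptions, not the exact ImageJ processing shown above:
F = fftshift(abs(fft2(im2double(img))));
F = log(1 + F);                          % compress the dynamic range
mask = F > 0.8*max(F(:));                % keep only the strongest coefficients
[y, x] = find(mask);
[V, D] = eig(cov([x y]));                % PCA on the cluster coordinates
[~, k] = max(diag(D));
clusterAngle = atan2d(V(2,k), V(1,k));   % major axis of the FFT cluster
yarnAngle = clusterAngle + 90;           % the yarn runs orthogonal to the cluster
% note: image rows grow downward, so angles are mirrored vs. math convention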
An alternative approach is to try a bank of Gabor filters (see here), pre-built with a selection of orientations and frequencies, and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples here.
UPDATE
Just playing with ImageJ to give an idea of some possible approaches to this: I started with the FFT shown above; then, in the following image, I performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower left blob using this ImageJ Plugin which handily calculates the orientation of the longest axis based on the 2nd order moment. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
The imaginary parts of the convolved images are shown below:
As you mentioned, most orientations lie between 30 and 34 degrees, so the filter with phi = 5*pi/16 (bottom left) yields the best contrast among the four.
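For reference, here is a sketch of how one such kernel might be built by hand; the formula is the standard complex Gabor, and the interpretation of theta as the wavelength and filterSize as the kernel half-width is my assumption, not necessarily the code used above:
phi = 5*pi/16;                  % the orientation that matched best
theta = 3;                      % interpreted here as the wavelength
sigma = 0.65*theta;
filterSize = 3;                 % interpreted here as the kernel half-width
[x, y] = meshgrid(-filterSize:filterSize);
xr =  x*cos(phi) + y*sin(phi);  % rotate coordinates by phi
yr = -x*sin(phi) + y*cos(phi);
g = exp(-(xr.^2 + yr.^2)/(2*sigma^2)) .* exp(1i*2*pi*xr/theta);
response = imag(conv2(im2double(img), g, 'same'));  % img is the yarn image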
I would consider using a Hough transform for this type of problem; there is a nice write-up here.

image enhancement - cleaning writing from a given image

I need to clean this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
I may also use conv2, fft2, ifft2, fftshift, etc., as well as median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes; however, this gives a very synthetic look to the picture.
Can you please give me a direction on what to do? A median filter will not give good results.
The general technique for doing this kind of thing is called inpainting, but in order to do it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image using a morphological dilation of that mask:
To get that mask, we don't need anything too fancy: start with a binarization of the difference between the original image and the result of median filtering it:
You can remove isolated pixels; join the pixels representing the stars of your flag by a horizontal dilation followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask above.
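A rough sketch of that pipeline in MATLAB; the binary image bw from the thresholded difference and all structuring-element sizes are assumptions:
bw2 = bwareaopen(bw, 4);                        % remove isolated pixels
joined = imdilate(bw2, strel('line', 9, 0));    % horizontal dilation joins the stars
joined = imdilate(joined, strel('square', 3));  % then a small square dilation
cc = bwconncomp(joined);
[~, big] = max(cellfun(@numel, cc.PixelIdxList));
joined(cc.PixelIdxList{big}) = false;           % remove the largest component
mask = imreconstruct(joined & bw2, bw2);        % geodesic dilation against the initial mask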
Now, to inpaint there are many algorithms, but one of the simplest I've found is described in Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it myself, but you could try it and see what results you obtain.
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image, without making the brighter areas even brighter, is by applying a gamma factor < 1. Being more specific to your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case, the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (left one is simply a gamma 0.6):
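A minimal sketch of both variants; the image variable Im, the Gaussian size, and the final scalar are assumptions:
Im = im2double(Im);
bright1 = Im.^0.6;              % plain gamma < 1 brightens the midtones
illum = imgaussfilt(Im, 40);    % relatively large lowpass
weighted = Im .* (1 - illum);   % negate the lowpass, multiply into the image
bright2 = 1.8 * weighted.^0.6;  % gamma, then a simple scalar rescale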
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter, maybe 10 pixels long
For both solutions you of course have to exclude the star part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness; maybe a high-pass before equalization can fix that.
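A quick sketch of that idea; Im, the Gaussian size, and the exact recentering are assumptions:
Im = im2double(Im);
lowpass = imgaussfilt(Im, 50);                % estimate of the uneven illumination
flattened = Im - lowpass + mean(lowpass(:));  % crude high-pass, recentered
equalized = histeq(mat2gray(flattened));      % histogram equalization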
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you need to keep a backup of that part and restore it after the median filter has run. To do this, you can use ginput to mark the lower right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and do a normalization. If you want to do this, I suggest looking into fit, although this can be a bit technical if you have not worked with model fitting before.
My found 2-D gaussian looks something like this:
Putting these two things together, gives:
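For reference, a sketch of what such a fit might look like with the Curve Fitting Toolbox; the variable Im, the subsampling step, and the start point are assumptions:
Im = im2double(Im);
[h, w] = size(Im);
[X, Y] = meshgrid(1:w, 1:h);
step = 10;                                % subsample to keep the fit fast
xs = X(1:step:end, 1:step:end);
ys = Y(1:step:end, 1:step:end);
zs = Im(1:step:end, 1:step:end);
gauss2 = fittype('a*exp(-((x-x0)^2/(2*sx^2) + (y-y0)^2/(2*sy^2))) + c', ...
                 'independent', {'x','y'}, 'dependent', 'z');
% StartPoint follows the alphabetical coefficient order: a, c, sx, sy, x0, y0
f = fit([xs(:), ys(:)], zs(:), gauss2, ...
        'StartPoint', [max(zs(:))-min(zs(:)), min(zs(:)), w/4, h/4, w/2, h/2]);
illum = reshape(f(X(:), Y(:)), h, w);     % evaluate the model everywhere
ImNorm = mat2gray(Im ./ illum);           % normalize out the uneven exposure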
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));
Pic_usa_g = Pic_usa_g + 0.6;
Pic_usa_g = Pic_usa_g .* 2;
Pic_usa_g = Pic_usa_g';
C = repmat(Pic_usa_g, size(Pic_usa,1),1);
and after multiplying the image by the mask, you get the fixed image.

Detecting thin lines in blurry image

I'm looking for some ideas to detect lines in the attached image. The lines are assumed to be vertical, but they are of very poor quality and there are only 2-3 pixels between each blurry line.
I tried these methods already:
Erosion & dilation with a vertical structuring element -> good for enhancement
CLAHE -> good for enhancement
Hough -> failed, since converting the image to black & white leaves too many broken lines or bridges
I also tried a vertical line mask.
Basically, methods based on black & white conversion won't be applicable here.
I would collapse the image along the lines to get a 1-D profile and do the detection there (e.g., by looking at the peaks above the median).
Here is the collapsed image
The objects to detect are obvious there.
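A minimal sketch of the collapse-and-detect idea, assuming roughly vertical lines in a grayscale image img (findpeaks requires the Signal Processing Toolbox):
profile = sum(im2double(img), 1);   % collapse the rows: one value per column
[pks, locs] = findpeaks(profile, 'MinPeakHeight', median(profile));
% locs holds the column positions of the candidate lines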
Very promising work exists on detecting faint edges in noisy images:
Basic version for straight lines:
http://www.wisdom.weizmann.ac.il/~meirav/EdgesGalunBasriBrandt.pdf
More advanced version:
http://www.wisdom.weizmann.ac.il/~meirav/Curves_Alpert_Galun_Nadler_Basri.pdf
I'm not sure whether the authors made their code publicly available; it might be worthwhile to contact them directly.
These works propose a well-studied and principled method for faint-edge detection.
Here's an alternative approach that will find the lines, assuming the peak is apparent within ~5 pixels. It is tolerant to small rotations of the image.
img = imread('http://i.stack.imgur.com/w7qMT.jpg');
img = rgb2gray(img);
%# smooth the image a little with an anisotropic Gaussian
fimg = imfilter(double(img),fspecial('gaussian',[3 1]));
%# find the lines as local maxima
msk = ones(5);
msk(:,2:4) = 0;
lines = fimg > imdilate(fimg,msk);

Algorithm for following the path of ridges on a 3D image

I'm trying to find an algorithm (or algorithm ideas) for following a ridge on a 3D image, derived from a digital elevation model (DEM). I've managed to get a very basic program working which just iterates across each row of the image, marking a ridge line wherever it finds a large change in aspect (i.e. from < 180 degrees to > 180 degrees).
However, the lines this produces aren't brilliant: there are often gaps and various strange artefacts. I'm hoping to extend this by using some sort of algorithm to follow the ridge lines, thus producing lines that are complete (that is, no gaps) and more accurate.
A number of people have mentioned snake algorithms to me, but they don't seem to be quite what I'm looking for. I've also done a lot of searching about path-finding algorithms, but again, they don't seem to be quite the right thing.
Does anyone have any suggestions for types or algorithms or specific algorithms I should look at?
Update: I've been asked to add some more detail on the exact area I'll be applying this to. It's gridded elevation data of sand dunes. I'm trying to extract the crests of these sand dunes, which look similar to the boundaries between drainage basins but can be far more complex (for example, there can be multiple sand dunes very close to each other with gradually merging crests).
You can get a good estimate of the ridges using sign changes of the curvature. Note that the radius of curvature (1/curvature) will be near infinity at flat regions and near zero at sharp ridges. Hence possible pseudo-code for a ridge detection algorithm could be:
for each face in the mesh
    compute radius = 1/curvature
    if abs(radius) < zeroTolerance
        flag face as ridge
    else
        continue
(zeroTolerance is a small number near but not equal to zero, e.g. 0.003)
Also, MeshLab provides a module for normal & curvature estimation on most mesh formats; you can test the idea with it before you code it up.
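If you are working on the gridded DEM directly rather than a mesh, a rough MATLAB transliteration of the idea could look like this; the matrix Z is assumed, del2 is only a crude curvature proxy, and the tolerance would need tuning for your units:
L = del2(Z);                               % discrete Laplacian as a curvature proxy
zeroTolerance = 0.003;                     % as in the pseudo-code above
ridgeMask = abs(1 ./ L) < zeroTolerance;   % small radius of curvature
ridgeMask = ridgeMask & (L < 0);           % keep only convex-up cells (crests)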
I don't know what your data is like or how much automation you need. This won't work if it consists of peaks without clear ridges (but then you probably wouldn't be asking the question.)
startPoint = highest point in DEM (or on ridge)
curPoint = startPoint;
line += curPoint;
Loop
    curPoint = highest point adjacent to curPoint not in line; // (Don't backtrack)
    line += curPoint;
Repeat
Curious what the real solution turns out to be.
Edited to add: depending on the coarseness of your data set, 'point' can be a single point or a smoothed average of a local region of points.
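A minimal MATLAB sketch of this greedy walk, assuming the DEM is a matrix Z and capping the number of steps:
[~, idx] = max(Z(:));                      % start at the highest point
[r, c] = ind2sub(size(Z), idx);
visited = false(size(Z));
visited(r, c) = true;
path = [r c];
for step = 1:1000                          % hard cap to avoid endless walks
    rows = max(r-1,1):min(r+1,size(Z,1));  % 8-neighborhood, clipped at borders
    cols = max(c-1,1):min(c+1,size(Z,2));
    [rr, cc] = ndgrid(rows, cols);
    nbr = [rr(:) cc(:)];
    nbr = nbr(~visited(sub2ind(size(Z), nbr(:,1), nbr(:,2))), :);
    if isempty(nbr), break; end            % nowhere left to go
    [~, k] = max(Z(sub2ind(size(Z), nbr(:,1), nbr(:,2))));
    r = nbr(k,1); c = nbr(k,2);            % move to the highest unvisited neighbor
    visited(r, c) = true;
    path(end+1, :) = [r c];                %#ok<AGROW>
end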
http://en.wikipedia.org/wiki/Ridge_detection
You can treat the elevation as you would a grayscale color and then use a 2D edge-detection filter. There are lots of edge-detection methods available; the best would depend on your specific needs.
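A two-line sketch of that idea, assuming the elevation grid is a matrix Z and picking Canny arbitrarily:
Zg = mat2gray(Z);           % rescale elevations to [0, 1] like a grayscale image
edges = edge(Zg, 'canny');  % any of MATLAB's edge methods could be swapped in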
