How to detect wrinkles on a face in Matlab? - image

I would like to know something about wrinkle detection in Matlab.
I thought of using the Hough transform, but it did not work for this. Is there any idea of how I could proceed further?
I even thought of using Sobel, Canny and other edge detectors, but when I read their documentation I realized they are general edge detectors, not wrinkle detectors.
close all
clear
clc
Image = imread('imagename.jpg');       % load the input image
GrayImage = rgb2gray(Image);           % convert to grayscale
FiltImage = edge(GrayImage, 'sobel');  % Sobel edge map (logical image)
imshow(FiltImage)
I want all the wrinkles as white pixels and the rest of the image as black.

I borrowed the method used in vessel detection from the paper Hessian-based Multiscale Vessel Enhancement Filtering by Frangi et al. There is a Matlab implementation, FrangiFilter2D, that works on 2D vessel images, and I tried to apply it to wrinkle detection.
options = struct('FrangiScaleRange', [5 5], 'FrangiScaleRatio', 1, 'FrangiBetaOne', 1, ...
    'FrangiBetaTwo', 500, 'verbose', true, 'BlackWhite', true);
[outIm, whatScale, Direction] = FrangiFilter2D(double(GrayImage), options);
imshow(uint8(outIm / max(outIm(:)) * 255))   % normalize to [0, 255] for display
It looks better than pure edge extraction, though some improvement is still needed by (i) tuning the parameters and (ii) combining it with other image processing strategies.

Matlab has a ton of fun tools that you can essentially play with in combination to detect the wrinkles. Here are some things to look at.
1) Study thresholding and see how it applies to your situation (this will help you a lot because of the contrast that exists between the wrinkles and the rest of the face); see the sketch below.
2) Remember you can add and subtract images.
3) Study the watershed algorithm if you feel adventurous.
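For example, here is a minimal thresholding sketch (imbinarize needs R2016a or later, use im2bw on older releases; whether wrinkles come out dark or bright depends on your image):
GrayImage = rgb2gray(imread('imagename.jpg'));
level = graythresh(GrayImage);       % Otsu's method picks a global threshold
BW = imbinarize(GrayImage, level);   % dark wrinkles map to 0
imshow(~BW)                          % invert so wrinkles show as white pixels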

Related

How to detect a crack in an image?

How can I detect cracks like the ones you can see in the attached images? I have tried some OpenCV algorithms like blob detection (cv::SimpleBlobDetector) but couldn't get any results.
This is a cropped image; the full image has some other features as well, so I am not sure thresholding will work, because I have to get the bounding box of the detected crack. One way would be to assign several regions of interest (ROIs) and try to detect within those, but the crack doesn't appear at the same location in every image. Any ideas?
Can this problem be solved with machine/deep learning (like object detection) if I train a model on a crack dataset? The crack part of the image doesn't have many features, so I am not sure this method will work. Please guide.
Thanks.
These cracks are difficult to detect because the image is noisy (presumably X-ray) and the contrast is poor, so the signal-to-noise ratio is low.
I would try applying a Gaussian filter for denoising, but only in the horizontal direction, to preserve the horizontal edges, and then detecting the horizontal edges.
This is roughly what a Gabor filter does; you can try different orientations.
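A minimal sketch of the directional smoothing plus Gabor idea (imgaborfilt needs R2015b or later; the kernel size, sigma and wavelength below are guesses to tune):
a = imread('in.png');
if size(a,3) == 3, a = rgb2gray(a); end
h = fspecial('gaussian', [1 9], 2);      % 1-row kernel: smooths horizontally only
b = imfilter(double(a), h, 'replicate');
for theta = [0 45 90 135]                % try different orientations
    mag = imgaborfilt(b, 8, theta);      % wavelength 8 px
    figure, imshow(mag, []), title(sprintf('orientation %d deg', theta))
end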
Alternatively, use mathematical morphology operations. Example Matlab code:
a = imread('in.png');
se = strel('disk', 7);
b = imgaussfilt(a, 1.3);   % Gaussian denoising
c = b - imopen(b, se);     % top-hat: keeps structures thinner than the disk
c = 3*c;                   % boost contrast
d = imclearborder(c);      % suppress structures touching the image border
imwrite(d, 'out.png');

Extract motion blur of an image in Matlab

I found some papers saying that you can analyse the gradient histogram
(a blurred image has gradients that follow a heavy-tailed distribution)
or use the FFT (a blurred image has less high-frequency content), as in
Is there a way to detect if an image is blurry?
to detect blur in an image.
But I am not quite sure how to implement this in Matlab: how to define the threshold value, and so on.
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2 + Gy.^2);   % gradient magnitude per pixel
What should I do after running these commands and computing G?
What should I do if I want to plot a graph of number of pixels versus G?
I am new to Matlab and image processing. Could anyone kindly provide more details of how to implement this?
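For the plotting part, a minimal sketch (the bin count is arbitrary; histogram needs R2014b or later, use hist on older releases):
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2 + Gy.^2);       % gradient magnitude per pixel
histogram(G(:), 100)           % number of pixels versus G, in 100 bins
xlabel('gradient magnitude G'), ylabel('pixel count')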
Preparation: we read the cameraman image, which is often used for visualizing image processing algorithms, and add some motion blur.
origIm = imread('cameraman.tif');
littleBlurredIm = imfilter(origIm,fspecial('motion',5,45),'replicate');
muchBlurredIm = imfilter(origIm,fspecial('motion',20,45),'replicate');
which gives us the following images to start with:
To calculate the gradient magnitude, you can use the imgradient function, which returns magnitude and angle, so we'll simply discard the angle:
[lpOrigIm,~] = imgradient(origIm);
[lpLittleBlurredIm,~] = imgradient(littleBlurredIm);
[lpMuchBlurredIm,~] = imgradient(muchBlurredIm);
which gives:
You can visually see that the original image has very sharp and clear edges. The image with a little blur still has some features, and the image with much blur only contains a few non-zero values.
As proposed in the answer by nikie to this question, we can now create some measure for the blurriness. A (more or less) robust measure would for example be the median of the top 0.1% of the values:
% Number of pixels to look at: 0.1%
nPx = round(0.001*numel(origIm));
% Sort values to pick top values
sortedOrigIm = sort(lpOrigIm(:));
sortedLittleBlurredIm = sort(lpLittleBlurredIm(:));
sortedMuchBlurredIm = sort(lpMuchBlurredIm(:));
% Calculate measure
measureOrigIm = median(sortedOrigIm(end-nPx+1:end));
measureLittleBlurredIm = median(sortedLittleBlurredIm(end-nPx+1:end));
measureMuchBlurredIm = median(sortedMuchBlurredIm(end-nPx+1:end));
which gives the following results:
Original image: 823.7
Little Blurred image: 593.1
Much Blurred image: 490.3
Here is a comparison of this blurriness measure for different motion blur angles and blur amplitudes.
Finally, I tried it on the test images from the answer linked above:
which gives
Interpretation: As you see, it is possible to detect whether an image is blurred. However, it appears difficult to detect how strongly blurred the image is, as this also depends on the angle of the blur relative to the scene, and on the imperfect gradient calculation. Further, the absolute value is very scene-dependent, so you might have to put some prior knowledge about the scene into the interpretation of this value.
This is a very interesting topic.
Although gradient magnitude can be used as a good feature for blur detection, it fails when dealing with uniform regions in images. In other words, it cannot distinguish between blurred and flat regions. There are many other solutions; some of them detect flat regions to avoid classifying them as blurred. If you want more information, you can check these links:
You can find many good recent papers at the CVPR conference. Many of them have websites where they discuss the details and provide the code.
This one, http://www.cse.cuhk.edu.hk/leojia/projects/dblurdetect/, is one of the papers that I worked on; you can find the code available there.
You can also check other papers from CVPR; most of them provide code. This is another one:
http://shijianping.me/jnb/index.html

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics along the vertical vs. horizontal direction of the rotated image (for example, the standard deviation of summed rows or summed columns). I reasoned that when the strands are oriented exactly vertically or exactly horizontally, the difference in these statistics would be greatest, so that angle of rotation gives the correct direction in the original image. However, this did not work for any of the several statistical properties I tried.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case there is always a very clear preferred direction (different for each neighborhood), but it is not the direction the fibers really follow... I can post my sample code, but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, the image can shade from light to dark so edge detectors don't work well, and the image may not even be in focus sometimes. Because of that, it is not always possible even for a human to see individual fibers (see top-right in the example); they rather have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier transform is orientation-dependent, so if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
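A rough Matlab sketch of this FFT + PCA idea (the file name and the 0.1% cutoff are assumptions; note that image rows grow downward, which flips angle signs relative to the usual convention):
img = im2double(rgb2gray(imread('yarn.jpg')));
F = fftshift(abs(fft2(img - mean(img(:)))));  % centred magnitude spectrum, DC removed
Fs = sort(F(:));
thr = Fs(round(0.999*numel(Fs)));             % keep the strongest 0.1% of coefficients
[r, c] = find(F > thr);
xy = [c - mean(c), r - mean(r)];              % cluster coordinates, centred
[V, D] = eig(cov(xy));                        % PCA: principal axes of the cluster
[~, k] = max(diag(D));
specAngle = atan2d(V(2,k), V(1,k));           % major axis of the spectral cluster
yarnAngle = mod(specAngle + 90, 180)          % yarn runs orthogonal to the cluster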
An alternative approach is to try a series of Gabor filters, pre-built with a selection of orientations and frequencies, and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples here.
UPDATE
Just playing with ImageJ to give an idea of some possible approaches to this: I started with the FFT shown above, then, in the following image, performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower left blob using this ImageJ Plugin which handily calculates the orientation of the longest axis based on the 2nd order moment. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
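In Matlab, the same second-order-moment orientation can be read from regionprops (a sketch, assuming BW is the binary blob mask obtained above):
stats = regionprops(BW, 'Orientation', 'Area');  % major-axis orientation per blob
[~, k] = max([stats.Area]);                      % keep the largest blob
blobAngle = stats(k).Orientation                 % degrees from the x-axis, CCW positive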
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
The imaginary parts of the convolved images are shown below:
As you mentioned, most orientations lie between 30 and 34 degrees; thus the filter with phi = 5*pi/16 (bottom left) yields the best contrast among the four.
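For reference, newer Matlab releases (R2015b+) ship gabor/imgaborfilt, so a filter bank like this can be built without hand-rolled kernels; a minimal sketch (the wavelength is a guess, and note that gabor's Orientation is the normal to the sinusoid, so the stripe direction is offset by 90 degrees):
img = im2double(rgb2gray(imread('yarn.jpg')));   % hypothetical file name
bank = gabor(8, 0:11.25:168.75);                 % wavelength 8 px, 16 orientations
mag = imgaborfilt(img, bank);                    % one magnitude response per filter
meanMag = squeeze(mean(mean(mag, 1), 2));        % average response per orientation
[~, k] = max(meanMag);
bestOrientation = bank(k).Orientation            % strongest-responding orientation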
I would consider using a Hough transform for this type of problem; there is a nice write-up here.
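A minimal sketch of that, using Matlab's built-in Hough functions (the file name and peak count are assumptions; edge detection may struggle on out-of-focus regions, as noted above):
img = rgb2gray(imread('yarn.jpg'));
BW = edge(img, 'canny');
[H, theta, rho] = hough(BW);
P = houghpeaks(H, 20);                  % the 20 strongest lines
normals = theta(P(:,2));                % theta is the angle of each line's normal
lineAngles = mod(normals + 90, 180);    % the lines themselves run orthogonal to it
dominantAngle = median(lineAngles)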

Detecting thin lines in a blurry image

I'm looking for some ideas to detect lines in the attached image. The lines are assumed to be vertical, but they are of very poor quality and there are only 2-3 pixels between each blurry line.
I tried these methods already:
Erosion & dilation in the vertical direction -> good result for enhancement
CLAHE -> good for enhancement
Hough -> failed, since converting the images to black & white produces too many broken lines or bridges.
I also tried a vertical line mask.
Basically, methods based on black & white image conversion won't be applicable here.
I would collapse the image along the lines to get a 1D profile and do the detection there (e.g., by looking at the peaks above the median).
Here is the collapsed image.
Detecting the lines there is obvious.
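A minimal sketch of that idea, assuming vertical bright lines on a darker background (invert the image for dark lines; the file name is hypothetical):
img = im2double(imread('lines.png'));
if size(img,3) == 3, img = rgb2gray(img); end
profile = sum(img, 1);                          % collapse rows into a 1D profile
isPeak = profile > median(profile) & ...        % above the median, and ...
         profile >= [profile(2:end) -inf] & ... % ... a local maximum
         profile >= [-inf profile(1:end-1)];
cols = find(isPeak);                            % candidate line positions
plot(profile), hold on, plot(cols, profile(cols), 'r*')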
Very promising work regarding faint-edge detection in noisy images:
Basic version for straight lines:
http://www.wisdom.weizmann.ac.il/~meirav/EdgesGalunBasriBrandt.pdf
More advanced version:
http://www.wisdom.weizmann.ac.il/~meirav/Curves_Alpert_Galun_Nadler_Basri.pdf
I'm not sure whether the authors made their code publicly available; it might be worthwhile contacting them directly.
These works propose a well-studied and principled method for faint-edge detection.
Here's an alternative approach that will find the lines for you, assuming that each peak is apparent within ~5 pixels. It is tolerant to small rotations of the image.
img = imread('http://i.stack.imgur.com/w7qMT.jpg');
img = rgb2gray(img);
%# smooth the image a little with an anisotropic Gaussian
fimg = imfilter(double(img),fspecial('gaussian',[3 1]));
%# find the lines as local maxima
msk = ones(5);
msk(:,2:4) = 0;
lines = fimg > imdilate(fimg,msk);

How to remove a scratch from an image using Matlab

Let's say I have this image:
with a black scratch, and I want to remove it from my image. I know it is noise. I have tried a neighbourhood filter and also a Gaussian filter, but with no success.
If you know the location of the scratch, this problem is known as inpainting, and there are very sophisticated algorithms for it. So one approach would be to detect the scratch as well as you can, then use a standard inpainting algorithm on it. I've played with your image in Mathematica a little:
First I applied a median filter to the image. As you found out yourself, this removes the scratch, but also removes a lot of detail. The difference between median and original image is a good indicator for your scratch, though:
When I binarize this image with a manually selected threshold, I get a quick-and-dirty scratch detector:
If you have more knowledge about what your scratches look like, you can improve this detector a lot. E.g., are the scratches always dark? Do they always have high contrast? Are they always smooth curves, i.e. is their curvature always low? Each of these properties can be measured somehow, so you'd combine these measurements into a single image and binarize that.
One small improvement is to remove small components:
This is still not perfect, but the result is good enough to use it as an inpainting mask:
This will remove some detail, too, but the differences are harder to spot.
Full Mathematica code:
difference = ImageDifference[sourceImage, MedianFilter[sourceImage, 2]];
mask = DeleteSmallComponents[Binarize[difference, 0.15], 15];
Inpaint[sourceImage, mask]
EDIT:
If you don't have access to a standard inpainting algorithm (like Navier-Stokes or Telea), a poor man's algorithm would be to use the median-filtered image in those regions where the mask is 1 (probably something like mask.*medianFilteredImage + (1-mask).*sourceImage in Matlab). Depending on the image data, the difference might not be worth the extra effort of a "real" inpainting algorithm:
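A minimal Matlab sketch of that poor man's approach, assuming a grayscale image (the file name, threshold and component size are guesses; newer Matlab releases also offer regionfill for "real" inpainting):
src = im2double(imread('scratched.png'));
med = medfilt2(src, [5 5]);                % the median filter removes the scratch
mask = imbinarize(abs(src - med), 0.15);   % large differences mark the scratch
mask = bwareaopen(mask, 15);               % delete small components
out = mask .* med + (~mask) .* src;        % median values only inside the mask
imshow(out)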
There is a filter for Avisynth and a plugin for VirtualDub (my two favourite video editing tools). It will hardly get better than these two; you can learn from them if you really need to implement it yourself.
My result using a median filter with ImageJ:
