Identify small defects in objects contour - algorithm

I want to measure the symmetry of object contours. I tried matchShapes(), computeDistance(), and HuMoments() from the OpenCV 3.0 library, but none of them comes close to what I want.
Following are the images on which I am working.
Good Shape-1
Defected
I expect to get the highest asymmetry value for image 2 (named Defected).

You can do this yourself using several simple tools:
Find the centroid
Use PCA (http://docs.opencv.org/master/d1/dee/tutorial_introduction_to_pca.html#gsc.tab=0) to find the main axis
Rotate the shape so that the main axis points up
In each row, count the number of pixels on either side of the center and compare. (There should be a single center coordinate, probably the median of all the row centers.)
You can tune your own thresholds to fit the problem.
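A minimal Python/OpenCV sketch of those steps, assuming a binary mask with the shape in white; the filename and the rotation sign convention are placeholders you will need to adapt:

import cv2
import numpy as np

# Load a binary mask, white shape on black ("shape.png" is a placeholder).
mask = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
pts = np.column_stack(np.nonzero(mask)).astype(np.float64)  # (row, col) pairs

# Steps 1 and 2: centroid and main axis via PCA.
mean = np.empty(0)
mean, eigvecs = cv2.PCACompute(pts, mean)
angle = np.degrees(np.arctan2(eigvecs[0, 0], eigvecs[0, 1]))  # axis angle from x-axis

# Step 3: rotate so the main axis is vertical (sign may need flipping).
h, w = mask.shape
center = (float(mean[0, 1]), float(mean[0, 0]))
M = cv2.getRotationMatrix2D(center, 90 - angle, 1.0)
rotated = cv2.warpAffine(mask, M, (w, h))

# Step 4: compare pixel counts left and right of the median center column.
row_centers = [np.mean(np.nonzero(r)[0]) for r in rotated if r.any()]
center_col = np.median(row_centers)
asymmetry = 0
for r in rotated:
    cols = np.nonzero(r)[0]
    if cols.size:
        asymmetry += abs(int(np.sum(cols < center_col)) - int(np.sum(cols > center_col)))
print(asymmetry)  # tune a threshold on this value to flag defects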

Related

matlab find peak images

I have a binary image below:
It's a random abstract picture, and using MATLAB I want to detect how many peaks it has, so that I know there are roughly 5 objects in it.
As you can see, there are 5 peaks, which means there are 5 objects.
I've tried imregionalmax(), but I don't find it useful since my image is already binary. I also tried regionprops('Area'), but it returns the wrong number since there is no clear whitespace between the objects. Thanks in advance.
An easy way to do this would be to simply sum down the rows of each column and find the peaks of the result using findpeaks. In the example below, I have opted to use the inverse of the image, which results in positive peaks where the objects are.
rowSum = sum(1 - image, 1);
If we plot this, it looks like the bottom plot
We can then use findpeaks to identify the peaks in this plot. We will apply a 5-point moving average to it to help eliminate false peaks.
[peaks, locations, widths, prominences] = findpeaks(smooth(rowSum));
You can then select the "true" peaks by thresholding based on any of these outputs. For this example we can use prominences and find the more prominent peaks.
isPeak = prominences > 50;
nPeaks = sum(isPeak)
% nPeaks = 5
Then we can plot the peak locations to confirm
plot(locations(isPeak), peaks(isPeak), 'r*');
If you have some prior knowledge about the expected widths of the peaks, you could adjust the smooth span to match this expected width and obtain some cleaner peaks when using findpeaks.
Using an expected width of 40 for your image, findpeaks was able to perfectly detect all 5 peaks with no false positives.
findpeaks(smooth(rowSum, 40));
Since they are peaks, they are vertical structures. So in this particular case you can use projection histograms (also known as the histogram projection function): you make all the black pixels fall as if they were affected by gravity. You then get a curve of black pixels along the bottom of your image, and you can count the number of peaks in it.
Here is the algorithm:
Invert the image (black is normally the absence of information)
Histogram projection
Closing and opening in order to clean the signal and get the final result.
You can add maxima detection to find the tops of the peaks.
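A rough sketch of that pipeline in Python/OpenCV terms (the question uses MATLAB, but the steps map one-to-one; the filename, kernel size, and peak threshold are illustrative):

import cv2
import numpy as np

img = cv2.imread("peaks.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
binary = img < 128                                   # invert: black pixels carry the information
profile = binary.sum(axis=0).astype(np.float32)      # per-column counts = projection histogram

# closing then opening on the profile, treated as a one-row image
kernel = np.ones((1, 9), np.uint8)
cleaned = cv2.morphologyEx(profile[None, :], cv2.MORPH_CLOSE, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)[0]

# simple local-maxima detection on the cleaned signal
peaks = [i for i in range(1, len(cleaned) - 1)
         if cleaned[i] > cleaned[i - 1] and cleaned[i] >= cleaned[i + 1]
         and cleaned[i] > 0.3 * cleaned.max()]
print(len(peaks))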

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics in the vertical vs. horizontal direction of the rotated image (for example: the standard deviation of summed rows or summed columns). I reasoned that the difference in statistics would be greatest when the strands are oriented exactly vertically or exactly horizontally, so that angle of rotation would give the correct direction in the original image. However, this did not work for any of the several statistical properties I tried.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case there is always a very clear preferred direction (different for each neighborhood), but it is not the direction the fibers really go... I could post my sample code, but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, the image can shade from light to dark so edge detectors don't work well, and the image may not even be in focus sometimes. Because of that, it is not always possible even for a human to see individual fibers (see the top right of the example); they more or less have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier transform is orientation-dependent, so if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing an FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
An alternative approach is to try a series of Gabor filters (see here), pre-built with a selection of orientations and frequencies, and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples here.
UPDATE
Just playing with ImageJ to give an idea of some possible approaches to this: I started with the FFT shown above, then, in the following image, performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower-left blob using this ImageJ plugin, which handily calculates the orientation of the longest axis based on the 2nd-order moments. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
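For those working outside ImageJ, here is a hedged NumPy/OpenCV sketch of the same pipeline: FFT, keep the strongest magnitudes, then PCA on the surviving frequency coordinates (the filename and percentile threshold are placeholders):

import cv2
import numpy as np

gray = cv2.imread("yarn.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder filename
spectrum = np.fft.fftshift(np.abs(np.fft.fft2(gray)))
h, w = spectrum.shape
spectrum[h // 2, w // 2] = 0  # suppress the DC component

# keep only the strongest coefficients and fit their major axis with PCA
ys, xs = np.nonzero(spectrum > np.percentile(spectrum, 99.9))
pts = np.column_stack([xs - w / 2.0, ys - h / 2.0]).astype(np.float64)
mean = np.empty(0)
mean, eigvecs = cv2.PCACompute(pts, mean)
angle = np.degrees(np.arctan2(eigvecs[0, 1], eigvecs[0, 0]))

# the FFT cluster lies orthogonal to the yarn; image y runs downward,
# so check the sign convention on a known test image
print("approximate yarn orientation:", angle + 90)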
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
The imaginary parts of the convolved images are shown below:
As you mentioned, most orientations lie between 30 and 34 degrees, so the filter with phi = 5*pi/16 (bottom left) yields the best contrast among the four.
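If you want to automate picking the best orientation, here is a small OpenCV sketch using getGaborKernel (the filename, kernel size, and frequency are illustrative, not tuned to this image):

import cv2
import numpy as np

gray = cv2.imread("yarn.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder filename

best_phi, best_energy = 0.0, -1.0
for phi in np.arange(0, np.pi, np.pi / 16):
    # arguments: ksize, sigma, theta, lambd, gamma -- illustrative values
    kernel = cv2.getGaborKernel((21, 21), 4.0, phi, 10.0, 0.5)
    response = cv2.filter2D(gray, cv2.CV_32F, kernel)
    energy = float((response ** 2).mean())
    if energy > best_energy:
        best_phi, best_energy = phi, energy

# In OpenCV's convention theta is the normal to the filter's stripes, so
# verify against a known image whether you need best_phi or best_phi + 90.
print("strongest response at", np.degrees(best_phi), "degrees")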
I would consider using a Hough Transform for this type of problem, there is a nice write-up here.

Invoice / OCR: Detect two important points in invoice image

I am currently working on OCR software, and my idea is to use templates to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices based on a single template are correctly aligned under the scanner.
People can write on invoices
etc.
Example of an invoice: (I had to google this one; sadly I cannot add a more concrete version, as client data is obviously confidential.)
I find my data in the invoices based on the x-values of the text.
However, I need to know the scale of the invoice and the offset from left/right before I can do any real calculations with all the data I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails because people can write on invoices.
2) Dividing the invoice into vertical sections and using the sections with the highest number of black pixels. This fails because the distribution is not always uniform among similar templates.
I could really use your help on (1) how to identify important points in invoices and (2) which points I should focus on as the important ones.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair number of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least-squares method. Drop the outliers and fit another line to the remaining points. Iterate this as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this for a number of regions, and you should get a weighted list of lines.
For each line, you can compute the direction of the line itself and the direction orthogonal to it. One of these numbers can be chosen from the interval [0°, 90°); the other will be 90° plus that value, so storing one is enough. Take all these directions and find the single angle which best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) range and find the position where the maximal number of lines fall within the window, then compute the average or median of the angles within that window. All of this computation can take the weights of the lines into account.
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
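A hedged sketch of a shortcut for this rotation step, using OpenCV's probabilistic Hough transform instead of the hand-rolled line fitting described above (the filename and thresholds are placeholders; it assumes a small skew and does not handle the wrap-around at 0/90 degrees):

import cv2
import numpy as np

gray = cv2.imread("invoice.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=100, maxLineGap=5)

# fold every line angle into [0, 90) and take the median as the skew estimate
angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 90.0
          for x1, y1, x2, y2 in lines[:, 0]] if lines is not None else []
if angles:
    skew = float(np.median(angles))
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
    deskewed = cv2.warpAffine(gray, M, (w, h), borderValue=255)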
Detecting translation
Assuming the image wasn't scaled at any point, you can then try an FFT-based correlation of the image to match it to the template. Convert both images to gray, and pad them with zeros until the originals take up at most 1/2 the edge length of the padded image, which should preferably be a power of two. FFT both images in both directions, multiply one by the complex conjugate of the other element-wise, and iFFT back. The resulting image encodes how well the two images agree for a given shift relative to one another. Simply find the maximum, and you know how to make them match.
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and assumed no scaling or shearing, so you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
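To make the correlation step concrete, a minimal NumPy sketch, assuming scan and template are same-size, zero-padded grayscale arrays prepared as described above:

import numpy as np

# `scan` and `template` are same-size, zero-padded grayscale float arrays
F1 = np.fft.fft2(scan)
F2 = np.fft.fft2(template)
corr = np.fft.ifft2(F1 * np.conj(F2)).real

# the argmax encodes the shift that best aligns the two images;
# shifts past the halfway point wrap around to negative offsets
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
if dy > corr.shape[0] // 2:
    dy -= corr.shape[0]
if dx > corr.shape[1] // 2:
    dx -= corr.shape[1]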
If rotation is solved already, I'd just sum all pixel color values horizontally and vertically into a single horizontal/vertical "line". This should produce clear spikes wherever the form has horizontal and vertical lines.
P.S. I generated a corresponding horizontal image with GIMP's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high and may get scaled down because it's more than 700 px wide; the URL is http://i.stack.imgur.com/Zy8zO.png).

How to extract a linear slice from an image in OpenCV / EMGU

I have an image and two points, and I want to read the pixels between these two points and resample them into a small 1x40 array.
I'm using EMGU which is a C# wrapper for OpenCV.
thanks,
SW
What you are looking for is Bresenham's line algorithm. It will give you the points in the pixel array that best approximate a straight line. The Wikipedia link also contains pseudocode to get you started.
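For reference, here is a direct Python transcription of the all-octant pseudocode from that Wikipedia page (the question is in C#, but the logic carries over unchanged):

def bresenham(x0, y0, x1, y1):
    """Integer points on the segment from (x0, y0) to (x1, y1)."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points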
Emgu CV includes a method in the Image class, called Sample, for sampling colors along a line.
Refer to the manual for the definition. Here's the link to Image.Sample in version 2.3.0.
You will still have to re-sample/interpolate the points in the array returned from Sample to end up with a 40-element array. Since there are a number of ways to re-sample, I suggest you look at other questions for that.
Rotate and crop
I'd first try to do it like this:
calculate the rotation matrix (GetRotationMatrix2D)
warp the image so that this line is horizontal (WarpAffine)
calculate new positions of two of your points (you can use Transform)
get image rectangle of suitable width and 1 px high (GetRectSubPix)
Interpolation here and there may affect the results, but you have to interpolate anyway. You may consider cropping the image before rotation.
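A sketch of those steps in Python/OpenCV terms (EMGU wraps the same functions; the points and filename are examples):

import cv2
import numpy as np

# p1, p2 are the two endpoints (example values); we want 40 output samples.
p1, p2 = np.array([30.0, 80.0]), np.array([200.0, 140.0])
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

angle = np.degrees(np.arctan2(p2[1] - p1[1], p2[0] - p1[0]))
center = tuple((p1 + p2) / 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

# after the warp the segment is horizontal and centred on `center`
length = int(np.linalg.norm(p2 - p1))
strip = cv2.getRectSubPix(rotated, (length, 1), center)
resampled = cv2.resize(strip, (40, 1))  # the 1x40 array the question asks for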
Iterate over the 8-connected pixels of the line
Otherwise you may use the line iterator to iterate over the pixels between two points. See the documentation for InitLineIterator (sorry, the link is to the Python version of OpenCV; I've never heard of EMGU). I suppose that in this case you iterate over the pixels of a line that was not antialiased, but this should be much faster.
Interpolate manually
Finally, you may convert the image to an array, calculate which elements the line passes through, and subsample and interpolate manually.

Best approach for specific Object/Image Recognition task?

I'm searching for a certain object in my photograph:
Object: the outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all: no fill, just lines. The rectangle always has the same length-to-width ratio, but it could appear at any size or rotation in the photograph.
I've looked at a whole bunch of image recognition approaches, but I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You'll have to try whether that is good enough for your application. If you get too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check whether there are lines (i.e. high contrast in the right direction) where you would expect them, and whether there is low contrast where the pattern has no lines. This should be relatively straightforward once you know which points in the image correspond to the corners/line endings of the object you're looking for.
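To illustrate the first step, a hedged OpenCV sketch that collects Harris corner candidates (the filename and thresholds are placeholders); the pair-wise geometric check described above would then run on this point set:

import cv2
import numpy as np

gray = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder filename
response = cv2.cornerHarris(gray, 2, 3, 0.04)  # blockSize, ksize, k
ys, xs = np.nonzero(response > 0.01 * response.max())
candidates = np.column_stack([xs, ys])

# For each candidate pair (A, B) hypothesised as opposite rectangle corners,
# predict the remaining seven points and look for nearby candidates.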
I'd suggest the Generalized Hough Transform. You have a fairly simple, fixed shape, and the generalized Hough transform should be able to detect it at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image (see the feature detection wiki).
For example, you can compute features on a reference image that contains only the object you're looking for and save the results, say, to a plain text file. After that you can search for the object just by comparing newly computed features (on images of complex scenes containing the object) against the reference ones.
Here's a good resource on local features:
Local Invariant Feature Detectors: A Survey
