So I want to detect lines in grayscale images. I have a lot of data: 9x9 matrices of pixel intensities (integers from 1 to 256) and 1x4 matrices of point coordinates (X, Y, X, Y). There is one line per 9x9 image, or no line at all. What structure should my NN have?
Assuming that you're using the most common variety of neural networks, multilayer perceptrons, you'll have exactly as many input nodes as there are features.
The inputs may include transformed variables, in addition to the raw variables. The number of hidden nodes is selected by you, but you should have enough to permit the neural network to adequately make the mapping.
The number of output nodes will be determined by the number of classes and the representation you choose. Assuming two classes ("line", "not line" seems likely), you may use 1 output node, which indicates the estimated probability of one class (the probability of the remaining class being 1 minus the probability of the first class).
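For concreteness, a minimal sketch in Python with scikit-learn, assuming the 9x9 patches are flattened to 81 input features with a single "line"/"not line" label; the data here is random placeholder data:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: 100 flattened 9x9 patches with 0/1 ("not line"/"line") labels.
X = np.random.randint(0, 256, size=(100, 81)) / 255.0
y = np.random.randint(0, 2, size=100)

# 81 input nodes (one per pixel), one hidden layer you size yourself,
# 1 output node giving the estimated probability of "line".
clf = MLPClassifier(hidden_layer_sizes=(20,), activation='logistic', max_iter=500)
clf.fit(X, y)
p_line = clf.predict_proba(X)[:, 1]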
Detecting simple lines in a grayscale image is a well-known problem. A Hough transform would suffice for the job. See http://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html?highlight=hough%20line#cv-houghlines for a function that implements line finding using the Hough transform.
Can you try the above function and see if it works?
If it doesn't, please update your question with a sample image.
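If you don't have that C++ environment handy, here is a minimal sketch with the Python binding of the same function (the filename and all thresholds are placeholders to tune):

import cv2
import numpy as np

img = cv2.imread('lines.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input image
edges = cv2.Canny(img, 50, 150)                      # Hough works best on an edge map
# rho resolution 1 px, theta resolution 1 degree, accumulator threshold 100
lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print('line at rho=%.1f, theta=%.2f rad' % (rho, theta))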
Edit: this is not a duplicate of Determine if an image exists within a larger image, and if so, find it, using Python since I do not know the pattern beforehand
Suppose I have a big image (usually a picture taken with a camera, so it might be a bit noisy, but let's assume it's not for now) made up of multiple smaller images, all equal among themselves, something like
I need to find the contour of each one of those. The first step is recognizing that there's a recurring image (or unknown pattern) in the 2D image. How can I achieve this first step?
I did read around that I might use a FFT of the original image and search for duplicate frequencies, would that be a feasible approach?
To elaborate a bit on the problem: I do not know the image beforehand, nor its size or how many of them there will be in the big image. The images can be shot by a camera, so they might be noisy. The images won't overlap.
You can try to use keypoint descriptors (SIFT/SURF/ORB/etc.) to find features in the image and then detect recurrences of the same features elsewhere in the image.
You can see such a result in How to find euclidean distance between keypoints of a single image in opencv where 3x the same image is present and features are detected and linked between those subimages automatically.
In your image the result looks like
so you can see that the different occurrences of the same pattern are indeed automatically detected and linked.
Next steps would be to group features into objects, so that the "whole" pattern can be extracted. Once you have a candidate for a pattern, you can extract a homography for each occurrence of the pattern (against one reference candidate pattern) to verify that it is a pattern. One open problem is how to find such candidates. Maybe it is worth trying to find "parallel features", i.e. keypoint matches that form parallel and/or same-length lines (see image). Or maybe there is some graph theory approach.
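For the feature-detection step, a minimal sketch with OpenCV's ORB (SIFT/SURF would work the same way; the filename and the distance cutoff are illustrative guesses):

import cv2

img = cv2.imread('pattern_grid.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
orb = cv2.ORB_create(nfeatures=2000)
kps, des = orb.detectAndCompute(img, None)

# Match the image against itself with k=2, so the trivial self-match
# (distance 0) can be skipped in favour of the second-best match.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des, des, k=2)
repeated = [m[1] for m in matches if len(m) == 2 and m[1].distance < 40]
print('keypoints with a near-duplicate elsewhere:', len(repeated))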
All in all, this whole approach has some advantages and disadvantages:
Advantages:
real world applicability - SIFT and other keypoints work quite well even with noise and some perspective effects, so the chances of finding such patterns are increased.
Disadvantages
slow
parametric (you must define what it means for two features to be successfully matched)
not suitable for all kind of patterns - your pattern must have some extractable keypoints
Those are some thoughts and probably not complete ;)
Unfortunately no full code yet for your concrete task, but I hope the idea is clear.
For such a clean image, it suffices to segment the patterns by blob analysis and to compare the segments or the ROIs that contain them. Size is a first matching criterion; SAD, SSD or correlation similarity scores can do the finer comparison.
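A minimal skimage sketch of that idea, assuming dark patterns on a white background (the filename, the binarization threshold and the pairwise comparison are placeholders):

import numpy as np
from skimage import io, measure

img = io.imread('patterns.png', as_gray=True)  # hypothetical input
binary = img < 0.5                             # dark patterns on white background
labels = measure.label(binary)                 # blob analysis: connected components
regions = measure.regionprops(labels)

def ssd(a, b):
    # Sum of squared differences between two equally-sized binary crops.
    return np.sum((a.astype(float) - b.astype(float)) ** 2)

r0, r1 = regions[0], regions[1]
if r0.image.shape == r1.image.shape:           # size as the first matching criterion
    print('SSD between the first two blobs:', ssd(r0.image, r1.image))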
In practice you will face more difficulties such as
not possible to segment the patterns
geometric variations in size/orientation
partial occlusion
...
Handling these is out of the scope of this answer; it makes things much harder than in the "toy" case.
The goal is to find several equal or very similar patterns, which are not known beforehand, in a picture. As it stands, this problem is still a bit ill-posed.
Are the patterns exactly equal or only similar (added noise maybe)?
Do you want the largest possible patterns, or are smaller subpatterns okay too, or are all possible patterns needed? The reason for asking is that each pattern could of course consist of equal subpatterns itself.
Is the background always that simple (completely white) or can it be much more difficult? What do we know about it?
Are the patterns always equally oriented, equally scaled, non-overlapping?
For the simple case of non-overlapping patterns on a simple background, Yves Daoust's answer using segmentation performs well, but fails if patterns are very close or overlapping.
For other cases, Micka's keypoint idea will help, but it might not perform well if there is noise, and it might be slow.
I have one alternative: look at correlations of subblocks of the image.
In pseudocode:
Divide the image in overlapping areas of size MxN for a suitable M,N (pixel width and height chosen to be approximately the size of the desired pattern)
Correlate each subblock with the whole image. Look for local maxima in the correlation; the positions of these maxima denote the positions of similar regions.
Choose a global threshold on all correlations (smartly somehow) and find sets of equal patterns.
Determine the fine structure of these patterns by changing the shape from rectangular (bounding box) to a more sophisticated shape (maybe by looking at the shape of the peaks in the correlation)
In case the approximate size of the desired patterns is not known before, try with large values of M, N and go down to smaller ones.
To speed up the whole process start on a coarse scale (downscaled version of the image) and then process finer scales only where needed. Needs balancing of zooming in and performing correlations.
Sorry, I cannot make this a full Matlab project right now, but I hope this helps you.
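As a rough Python substitute for such a project, steps 1-3 above map onto skimage's normalized cross-correlation; in this sketch the block size, the threshold and the peak spacing are all guesses:

from skimage import io
from skimage.feature import match_template, peak_local_max

img = io.imread('repeated_patterns.png', as_gray=True)  # hypothetical input
M, N = 64, 64                 # trial block size, roughly the expected pattern size
block = img[:M, :N]           # one sub-block; in practice, slide this over the image

corr = match_template(img, block, pad_input=True)  # normalized cross-correlation
peaks = peak_local_max(corr, min_distance=M // 2, threshold_abs=0.8)
print('similar regions at:', peaks)                # each peak = one candidate occurrence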
Objective: Digit recognition by using Neural Networks
Description: images are normalized to 8 x 13 pixels. For each row, every black pixel is represented by 1 and every white pixel by 0. Every image is thus represented by a vector of vectors as follows:
Problem: is it possible to use a vector of vectors in neural networks? If not, how can the image be represented?
Combine rows into 1 vector?
Convert every row to its decimal format. Example: Row 1: 11111000 = 248, etc.
Combining them into one vector simply by concatenation is certainly possible. In fact, you should notice that arbitrary reordering of the data doesn't change the results, as long as it's consistent between training and classification.
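For example, the concatenation is a one-liner (assuming 13 rows of 8 pixels, giving an input layer of 104 nodes; only two rows are shown here):

import numpy as np

rows = [[1, 1, 1, 1, 1, 0, 0, 0],   # the image as a vector of row vectors
        [1, 0, 0, 0, 0, 0, 0, 1]]   # (2 of the 13 rows, for brevity)
flat = np.asarray(rows).ravel()     # one flat input vector, 8 * 13 = 104 values in full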
As to your second approach, I think (I am really not sure) you might lose some information that way.
To use multidimensional input, you'd need multidimensional neurons (which I suppose your formalism doesn't support). Sadly you didn't give any info on your network structure, which I think is your main source of problems and confusion. Whenever you evaluate a feature representation, you need to know how the input layer will be structured: if it's impractical, you probably need a different representation.
Your multidimensional vector:
A network that accepts 1 image as input has only 1 (!) input node containing multiple vectors (one per row). This is the worst possible representation of your data. If we:
flatten the input hierarchy: We get 1 input neuron for every row.
flatten the input hierarchy completely: we get 1 input neuron for every pixel.
Think about all 3 approaches and what each does to your data. The latter approach is almost always as bad as the first approach. Neural networks work best with features. Features are not restructurings of the pixels (your row vectors). They should be META-data you can derive from the pixels: brightness, locations where we go from black to white, bounding boxes, edges, shapes, centers of gravity, ... there's tons of stuff that can be chosen as features in image processing. You have to think about your problem and choose one (or more).
In the end, when you ask how to "combine rows into 1 vector", you're just rephrasing "find a feature vector for the whole image". You definitely don't want to "concatenate" your vectors and feed raw data into the network; you need to extract information before you use the network. This is what pre-processing is for.
For further information on which features might be viable for OCR, just read into some papers. The most successful network at the moment is the Convolutional Neural Network. A starting point for the topic of feature extraction is here.
1) Yes, combining into one vector is suitable; I use this approach:
http://vimeo.com/52775200
2) No, it is not suitable, because after normalization from the range (0-255) to the range (0-1), different rows give approximately the same values, so you lose data.
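A quick illustration of that information loss:

# Two rows that differ in every pixel, yet become nearly identical
# after decimal conversion and normalization to the 0-1 range:
row_a = 0b10000000   # 128 -> 128/255 ~ 0.502
row_b = 0b01111111   # 127 -> 127/255 ~ 0.498
print(row_a / 255.0, row_b / 255.0)  # the network can barely tell these apart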
I got a school task again. This time, my teacher gave me the task of creating an algorithm to count how many ducks are in a picture.
The picture is similar to this one:
I think I should use pattern recognition to find how many ducks are in it. But I don't know which pattern matches each duck.
I think that you can solve this problem by segmenting the ducks' beaks and counting the number of connected components in the binary image.
To segment the ducks' beaks, first convert the image to HSV color space and then perform a binarization using the hue component. Note that the hue of the ducks' beaks is different from that of other parts of the image.
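A minimal OpenCV sketch of that idea; the hue/saturation/value bounds for the beak colour are guesses you would tune on the actual picture:

import cv2

img = cv2.imread('ducks.jpg')                    # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels whose hue falls in the orange "beak" band.
mask = cv2.inRange(hsv, (5, 100, 100), (20, 255, 255))
n_labels, _ = cv2.connectedComponents(mask)
print('ducks counted:', n_labels - 1)            # label 0 is the background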
Here's one way:
Hough transform for circles:
Initialize an accumulator array indexed by (x,y,radius)
For each pixel:
calculate the edge response (e.g. the Sobel operator will provide both magnitude and direction); if the magnitude exceeds some threshold, then:
increment every accumulator for which this edge could possibly lend evidence (only the (x,y) in the direction of the edge, only radii between min_duck_radius and max_duck_radius)
Now smooth and threshold the accumulator array; the coordinates of the highest accumulators show you where the heads are. The threshold may leap out at you if you histogram the accumulator values (there may be a clear difference between "lots of evidence" and "noise").
So that's very terse, but it can get you started.
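OpenCV packages exactly this accumulator scheme as cv2.HoughCircles; a sketch where the radii bounds and vote thresholds are placeholder guesses:

import cv2

img = cv2.imread('ducks.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical input
img = cv2.medianBlur(img, 5)                         # smooth before voting

# minRadius/maxRadius play the role of min_duck_radius/max_duck_radius above.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=30, minRadius=10, maxRadius=40)
if circles is not None:
    print('heads found:', circles.shape[1])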
It might be just because I'm working with SIFT right now, but to me it looks like it could be good for your problem.
It is an algorithm that matches the same object on two different pictures, where the objects can have different orientations, scales and be viewed from different perspectives on the two pictures. It can also work when an object is partially hidden (as your ducks are) by another object.
I'd suggest finding a good clear picture of a rubber ducky ( :D ) and then use some SIFT implementation (VLFeat - C library with SIFT but no visualization, SIFT++ - based on VLFeat, but in C++ , Rob Hess in C with OpenCV...).
You should bear in mind that matching with SIFT (and anything else) is not perfect - so you might not get the exact number of rubber duckies in the picture.
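A sketch of that workflow; note that SIFT has moved around between OpenCV versions, and in recent builds it is available as cv2.SIFT_create(). Filenames and the ratio-test threshold are illustrative:

import cv2

duck = cv2.imread('clean_duck.png', cv2.IMREAD_GRAYSCALE)  # reference ducky
scene = cv2.imread('ducks.jpg', cv2.IMREAD_GRAYSCALE)      # picture to count

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(duck, None)
kp2, des2 = sift.detectAndCompute(scene, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print('good matches:', len(good))  # clusters of these hint at duck instances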
I have some map files consisting of 'polylines' (each line is just a list of vertices) representing tunnels, and I want to try and find the tunnel 'center line' (shown, roughly, in red below).
I've had some success in the past using Delaunay triangulation but I'd like to avoid that method as it does not (in general) allow for easy/frequent modification of my map data.
Any ideas on how I might be able to do this?
An "algorithm" that works well with localized data changes.
The critic's view
The Good
The nice part is that it uses a mixture of image processing and graph operations available in most libraries, may be parallelized easily, is reasonably fast, may be tuned to use a relatively small memory footprint, and doesn't have to be recalculated outside the modified area if you store the intermediate results.
The Bad
I wrote "algorithm", in quotes, just because I developed it and surely is not robust enough to cope with pathological cases. If your graph has a lot of cycles you may end up with some phantom lines. More on this and examples later.
And The Ugly
The ugly part is that you need to be able to flood fill the map, which is not always possible. I posted a comment a few days ago asking if your graphs can be flood filled, but didn't receive an answer. So I decided to post it anyway.
The Sketch
The idea is:
Use image processing to get a fine line of pixels representing the center path
Partition the image into chunks commensurate with the tunnel's thinnest passages
In each partition, place a point at the "center of mass" of the contained pixels
Use those pixels to represent the Vertices of a Graph
Add Edges to the Graph based on a "near neighbour" policy
Remove spurious small cycles in the induced Graph
End - the remaining Edges represent your desired path
The parallelization opportunity arises from the fact that the partitions may be computed in standalone processes, and the resulting graph may be partitioned to find the small cycles that need to be removed. These factors also make it possible to reduce the memory needed by serializing instead of doing the calculations in parallel, but I didn't go through this.
The Plot
I'll not provide pseudocode, as the difficult part is just the one not covered by your libraries. Instead of pseudocode I'll post the images resulting from the successive steps.
I wrote the program in Mathematica, and I can post it if it is of any use to you.
A- Start with a nice flood filled tunnel image
B- Apply a Distance Transformation
The Distance Transformation gives the distance transform of the image, where the value of each pixel is replaced by its distance to the nearest background pixel.
You can see that our desired path is the Local Maxima within the tunnel
C- Convolve the image with an appropriate kernel
The selected kernel is a Laplacian-of-Gaussian kernel of pixel radius 2. It has the magic property of enhancing the gray level edges, as you can see below.
D- Cutoff gray levels and Binarize the image
To get a nice view of the center line!
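Steps B through D map directly onto SciPy primitives; a hedged sketch where the filename, the kernel sigma and the cutoff are placeholders:

from scipy import ndimage
from skimage import io

tunnel = io.imread('tunnel_filled.png', as_gray=True) > 0.5  # step A: flood-filled tunnel

dist = ndimage.distance_transform_edt(tunnel)     # step B: distance transform
log = ndimage.gaussian_laplace(dist, sigma=2)     # step C: LoG kernel, radius ~2 px
center = log < 0.5 * log.min()                    # step D: cutoff + binarize (the ridge
                                                  # is strongly negative under the LoG)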
Comment
Perhaps that is enough for you, as you may know how to transform a thin line into an approximate sequence of piecewise segments. As that is not the case for me, I continued down this path to get the desired segments.
E- Image Partition
Here is where some advantages of the algorithm show up: you may start using parallel processing, or decide to process each segment one at a time. You may also compare the resulting segments with those of the previous run and re-use the previous results.
F- Center of Mass detection
All the white points in each sub-image are replaced by only one point at the center of mass
X_CM = (Σ_{i ∈ Points} X_i) / NumPoints
Y_CM = (Σ_{i ∈ Points} Y_i) / NumPoints
The white pixels are difficult to see, but there they are.
G- Graph setup from Vertices
Form a Graph using the selected points as Vertices. Still no Edges.
H- Select Candidate Edges
Using the Euclidean distance between points, select candidate edges. A cutoff is used to select an appropriate set of Edges; here we use 1.5 times the sub-image size.
As you can see, the resulting Graph has a few small cycles that we are going to remove in the next step.
I- Remove Small Cycles
Using a cycle-detection routine, we remove the small cycles up to a certain length. The cutoff length depends on a few parameters, and you should determine it empirically for your family of graphs.
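A hedged networkx sketch of steps G through I; the centers, the sub-image size and the small-cycle heuristic are all placeholders:

import itertools
import networkx as nx
from scipy.spatial.distance import euclidean

centers = [(0, 0), (10, 2), (20, 1), (11, 12)]  # placeholder centers of mass (step F)
cutoff = 1.5 * 10                               # 1.5 x the sub-image size

G = nx.Graph()
G.add_nodes_from(range(len(centers)))                        # step G: vertices only
for i, j in itertools.combinations(range(len(centers)), 2):  # step H: candidate edges
    if euclidean(centers[i], centers[j]) < cutoff:
        G.add_edge(i, j)

# Step I: one simple heuristic -- drop the longest edge of each small cycle.
for cycle in nx.cycle_basis(G):
    if len(cycle) <= 4:                                      # "small" is empirical
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        longest = max(edges, key=lambda e: euclidean(centers[e[0]], centers[e[1]]))
        if G.has_edge(*longest):
            G.remove_edge(*longest)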
J- That's it!
You can see that the resulting center line is shifted a little upwards. The reason is that I'm superimposing images of different types in Mathematica ... and I gave up trying to convince the program to do what I want :)
A Few Shots
As I did the testing, I collected a few images. They are probably the most un-tunnelish things in the world, but my Tunnels-101 went astray.
Anyway, here they are. Remember that I have a displacement of a few pixels upwards ...
HTH !
Update
Just in case you have access to Mathematica 8 (I got it today) there is a new function Thinning. Just look:
This is a pretty classic skeletonization problem; there are lots of algorithms available. Some algorithms work in principle on outline contours, but since almost everyone uses them on images, I'm not sure how available such things will be. Anyway, if you can just plot and fill the sewer outlines and then use a skeletonization algorithm, you could get something close to the midline (within pixel resolution).
Then you could walk along those lines and do a binary search with circles until you hit at least two separate line segments (three if you're at a branch point). The midpoint of the two spots you first hit, or the center of a circle touching the three points you first hit, is a good estimate of the center.
Well, in Python, using the package skimage, it is an easy task, as follows.
import pylab as pl
from skimage import morphology as mp

tun = 1 - pl.imread('tunnel.png')[..., 0]  # your tunnel image, inverted so the tunnel is white
skl = mp.medial_axis(tun > 0.5)            # skeleton (medial_axis expects a binary image)
pl.subplot(121)
pl.imshow(tun, cmap=pl.cm.gray)            # original
pl.subplot(122)
pl.imshow(skl, cmap=pl.cm.gray)            # medial axis / center line
pl.show()
How do I segment a 2D image into blobs of similar values efficiently? The given input is an array of integers, which holds the hue for non-gray pixels and the brightness for gray pixels.
I am writing a virtual mobile robot using Java, and I am using segmentation to analyze the map and also the image from the camera. This is a well-known problem in Computer Vision, but when it's on a robot performance does matter so I wanted some inputs. Algorithm is what matters, so you can post code in any language.
Wikipedia article: Segmentation (image processing)
[PPT] Stanford CS-223-B Lecture 11 Segmentation and Grouping (which says Mean Shift is perhaps the best technique to date)
Mean Shift Pictures (paper is also available from Dorin Comaniciu)
I would downsample, in colour space and in number of pixels, use a vision method (probably mean shift), and upscale the result.
This is good because downsampling also increases the robustness to noise, and makes it more likely that you get meaningful segments.
You could use floodfill to smooth edges afterwards if you need smoothness.
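A minimal OpenCV sketch of that pipeline; the filename and the spatial/colour radii sp and sr are placeholders to tune:

import cv2

img = cv2.imread('scene.png')                  # hypothetical input
small = cv2.pyrDown(img)                       # downsample: fewer pixels, less noise

# Mean shift in joint (position, colour) space; sp/sr also coarsen the colour space.
seg = cv2.pyrMeanShiftFiltering(small, sp=10, sr=20)

up = cv2.resize(seg, (img.shape[1], img.shape[0]))  # upscale the result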
Some more thoughts (in response to your comment).
1) Did you blend as you downsampled? y[i] = (x[2i] + x[2i+1]) / 2. This should suppress noise.
2)How fast do you want it to be?
3) Have you tried dynamic mean shift? (Also google "dynamic X" for any algorithm X.)
Not sure if it is too efficient, but you could try using a Kohonen neural network (or self-organizing map; SOM) to group the similar values, where each pixel contains the original color and position and only the color is used for the Kohonen grouping.
You should read up before you implement this though, as my knowledge of the Kohonen network goes as far as that it is used for grouping data - so I don't know what the performance/viability options are for your scenario.
There are also Hopfield Networks. They can be mangled into grouping from what I read.
What I have now:
Make a buffer of the same size as the input image, initialized to UNSEGMENTED.
For each pixel in the image where the corresponding buffer value is still UNSEGMENTED, flood the buffer using the pixel value.
a. The border checking of the flooding is done by checking if pixel is within EPSILON (currently set to 10) of the originating pixel's value.
b. Flood filling algorithm.
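The question is in Java, but here is a direct Python transcription of those steps to make the flow concrete (EPSILON = 10 as stated; 4-connected flooding assumed):

import numpy as np

EPSILON = 10
UNSEGMENTED = -1

def segment(img):
    # img: 2D integer array; returns an equally-sized array of blob labels.
    labels = np.full(img.shape, UNSEGMENTED, dtype=int)
    h, w = img.shape
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != UNSEGMENTED:
                continue
            origin = int(img[sy, sx])        # border check is against the seed pixel
            stack = [(sy, sx)]
            while stack:
                y, x = stack.pop()
                if not (0 <= y < h and 0 <= x < w):
                    continue
                if labels[y, x] != UNSEGMENTED:
                    continue
                if abs(int(img[y, x]) - origin) > EPSILON:
                    continue
                labels[y, x] = next_label
                stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
            next_label += 1
    return labels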
Possible issue:
The border check in 2.a is called many times in the flood-filling algorithm. I could turn it into a lookup if I could precalculate the borders using edge detection, but that may add more time than the current check costs.
private boolean isValuesCloseEnough(int a_lhs, int a_rhs) {
return Math.abs(a_lhs - a_rhs) <= EPSILON;
}
Possible Enhancement:
Instead of checking every single pixel for UNSEGMENTED, I could randomly pick a few starting points. If you are expecting around 10 blobs, picking random points of that order may suffice. The drawback is that you might miss a useful but small blob.
Check out Eyepatch (eyepatch.stanford.edu). It should help you during the investigation phase by providing a variety of possible filters for segmentation.
An alternative to flood fill is the connected-components algorithm. So,
Cheaply classify your pixels. e.g. divide pixels in colour space.
Run the connected-components algorithm to find the blobs
Retain the blobs of significant size
This approach is widely used in early vision approaches. For example in the seminal paper "Blobworld: A System for Region-Based Image Indexing and Retrieval".
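A sketch of those three steps with skimage; the quantisation step and the size threshold are placeholders:

import numpy as np
from skimage import measure

img = np.random.randint(0, 256, (100, 100))      # placeholder hue/brightness array

classes = img // 32                              # 1) cheap classification in colour space
blobs = measure.label(classes, background=-1,    # 2) connected components: neighbours
                      connectivity=1)            #    with the same class form one blob
sizes = np.bincount(blobs.ravel())
significant = [b for b in range(1, blobs.max() + 1) if sizes[b] > 50]  # 3) keep big blobs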