thinning/skeletonization algorithm with 4 known neighbors - algorithm

I am searching for a thinning/skeletonization algorithm that works if I only know the 4 neighbors, not 8.
All algorithms I could find assume knowledge of the diagonal neighbors as well.
So does anybody know of a thinning algorithm that also works if I only know the top, right, bottom, and left neighbors?
The outcome should be like this:
http://www.cs.ru.nl/~ths/rt2/col/h9/thinning.GIF
This is not what I am looking for:
http://upload.wikimedia.org/wikipedia/commons/thumb/9/93/Skel.png/220px-Skel.png
The shape should be maintained, as in the first example.

I'd suggest using one of the 8-neighbour algorithms, but feeding it dummy information for the diagonal cells or otherwise modifying the part of the algorithm that considers neighbours.
Since you're not too specific about the kinds of things you're looking at, it's hard to offer concrete suggestions. Most algorithms will contain a part that looks like this:
    for n in neighbours:
        do stuff

in which case you need to edit neighbours.
Others will apply some kind of mask or kernel function. Edit that kernel.
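As a minimal sketch (assuming a binary NumPy image, and my own convention of treating out-of-bounds pixels as background), the restricted neighbour lookup might look like this:

    import numpy as np

    # 4-connected neighbour offsets: top, right, bottom, left.
    NEIGHBOURS_4 = [(-1, 0), (0, 1), (1, 0), (0, -1)]

    def neighbour_values(img, r, c, offsets=NEIGHBOURS_4):
        """Values of the known neighbours of pixel (r, c) in a binary image,
        treating out-of-bounds pixels as background (0)."""
        h, w = img.shape
        vals = []
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            vals.append(img[rr, cc] if 0 <= rr < h and 0 <= cc < w else 0)
        return vals

Any condition in the thinning rule that needs a diagonal neighbour then has to be dropped or replaced with a conservative assumption; treating the unknown diagonals as background is one option, but whether that preserves connectivity depends on the particular thinning rule you adapt.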

Related

Divide an image by grouping similar pixels into rectangles

Consider an image like this:
By grouping pixels by color into distinct rectangles, different configurations can be achieved, for example:
The goal is to find one of the best configurations, i.e. a configuration that has the least possible number of rectangles (rectangle sizes are not important).
Any idea on how to design an efficient algorithm that is able to solve this problem?
EDIT:
I think the best answer is the one by #dshin, as they proved that this problem is NP-hard, so there probably isn't any efficient solution that is able to guarantee an optimal result.
Other answers provide reasonable compromises that give an acceptable solution, but it won't always be the optimal one.
Each connected colored region is a rectilinear polygon that can be considered independently, and so your problem amounts to solving the minimum rectangle covering for rectilinear polygons. This is a well-studied problem that finds applications in some fields, like VLSI.
For convex rectilinear polygons, there is an algorithm that finds the optimal solution in polynomial time, described in this 1984 thesis.
The non-convex case is NP-hard (reference), so an efficient optimal solution likely does not exist. But there are several algorithms that produce good empirical results. This 1990 publication describes three separate algorithms, each of which is guaranteed to use at most twice as many rectangles as the optimal solution. This 2016 publication describes an algorithm that uses the common IP + LP relaxation technique, which apparently produces better results on real-life problem instances, although it lacks theoretical guarantees. Unfortunately, both publications are behind paywalls, and I haven't been able to find free resources that describe the algorithms.
If you are just looking for something reasonable, and your problem instances are not pathological in nature, then the algorithms described in other answers are probably good enough.
I don't have a proof but my feeling is a greedy approach should solve this problem:
1. Start in the upper-left corner (or whichever corner you like).
2. Expand the rectangle 1 px to the right as long as the colors match.
3. Expand the rectangle 1 px to the bottom as long as all colors in that row match.
4. Line by line and column by column, find the next pixel that is not already part of a rectangle (perhaps keep track of visited pixels in a second array) and repeat steps 2 and 3.
You can switch rows and columns, or sweep up and to the left instead, and end up with different configurations, but from playing this through in my mind I think the number of rectangles should always be the same.
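A minimal sketch of this greedy sweep, assuming the image is a 2-D NumPy array of color labels (the function name and representation are just illustrative):

    import numpy as np

    def greedy_rectangles(img):
        """Greedy cover: scan row-major for the first unvisited pixel, grow a
        rectangle to the right while colors match, then downward while whole
        rows match. Returns (top, left, bottom, right) tuples, inclusive."""
        h, w = img.shape
        visited = np.zeros((h, w), dtype=bool)
        rects = []
        for r in range(h):
            for c in range(w):
                if visited[r, c]:
                    continue
                color = img[r, c]
                # Step 2: expand right while the color matches and cells are free.
                right = c
                while (right + 1 < w and img[r, right + 1] == color
                       and not visited[r, right + 1]):
                    right += 1
                # Step 3: expand down while the entire row segment matches.
                bottom = r
                while (bottom + 1 < h
                       and np.all(img[bottom + 1, c:right + 1] == color)
                       and not visited[bottom + 1, c:right + 1].any()):
                    bottom += 1
                visited[r:bottom + 1, c:right + 1] = True
                rects.append((r, c, bottom, right))
        return rects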
The idea here is based on the following links: Link 1 and Link 2.
In both cases, the largest possible rectangle is computed within a given polygon/shape. Check both of the above links for details.
We can extend the idea above to the problem at hand.
Steps:
1. Filter the image by color (say red).
2. Find the largest possible rectangle in the red region; after doing so, mask it out.
3. Repeat, finding the next biggest rectangle, until the entire red region has been covered.
4. Repeat the above for every unique color.
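A sketch of steps 2 and 3 for one color, using the classic largest-rectangle-in-a-binary-mask (histogram) technique; this is my own assumption and may differ in detail from the linked approaches, and all helper names are made up:

    import numpy as np

    def largest_rectangle(mask):
        """Largest all-True axis-aligned rectangle in a boolean mask.
        Returns (top, left, height, width)."""
        h, w = mask.shape
        heights = np.zeros(w, dtype=int)
        best, best_area = (0, 0, 0, 0), 0
        for r in range(h):
            heights = np.where(mask[r], heights + 1, 0)
            stack = []  # (start column, height), heights increasing
            for c in range(w + 1):
                cur = heights[c] if c < w else 0
                start = c
                while stack and stack[-1][1] > cur:
                    start, hgt = stack.pop()
                    area = hgt * (c - start)
                    if area > best_area:
                        best_area = area
                        best = (r - hgt + 1, start, hgt, c - start)
                stack.append((start, cur))
        return best

    def cover_color(img, color):
        """Greedily cover one color region with rectangles (steps 2 and 3)."""
        mask = (img == color)
        rects = []
        while mask.any():
            top, left, hgt, wid = largest_rectangle(mask)
            rects.append((top, left, hgt, wid))
            mask[top:top + hgt, left:left + wid] = False  # mask out covered area
        return rects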

PathFinding algorithm within this implementation of a grid

Reading about Dijkstra's algorithm for pathfinding, I see that every example applicable to a "grid based" game covers the case in which you have a "cell" that is either passable or not passable. I can explain better with an image:
I need to implement a pathfinding algorithm from A to B (returning a list of cells to follow) for case II. As you can see from the image, in this model there are no cells that are impassable; instead, every cell stores four flags that determine whether, while inside that cell, you can go up, down, left, or right.
Searching the net, I found a lot of implementations of Dijkstra's algorithm for case I.
Is it possible to implement it for case II?
If yes, can you please give me some advice?
Should I use another algorithm for this case (the grid will be 32x14)?
Yes, it is possible. Transform your cells into a graph by modelling cells as nodes, and connect two cells with an edge only if no wall separates them.
However, Dijkstra is not the best algorithm to use for such an easy example. If all edges in the graph have a distance of one, you can simply use a BFS to find the shortest path.
Additionally, the fact that the map is a grid may mean you could find even faster algorithms for the problem. However, this only makes sense if your grid is really big; for your 32x14 grid, I highly doubt that a sophisticated algorithm will be faster than BFS.
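A minimal BFS sketch along these lines; the grid representation (a dict of movement flags per cell) is an assumption for illustration, and the flags are assumed to be consistent on both sides of each wall:

    from collections import deque

    # Movement directions keyed by flag name: (row delta, column delta).
    DIRS = {"up": (-1, 0), "right": (0, 1), "down": (1, 0), "left": (0, -1)}

    def bfs_path(grid, start, goal):
        """Shortest path on a grid where grid[r][c] is a dict such as
        {"up": True, "right": False, ...} saying which moves are allowed.
        Returns a list of (row, col) cells from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            if (r, c) == goal:
                # Reconstruct the path by walking predecessor links backwards.
                path, node = [], goal
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for name, (dr, dc) in DIRS.items():
                nr, nc = r + dr, c + dc
                if (grid[r][c][name] and 0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in prev):
                    prev[(nr, nc)] = (r, c)
                    queue.append((nr, nc))
        return None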

opencv: Best way to detect corners on chessboard

BACKGROUND
So I'm creating a program that recognizes chess moves. So far, I have implemented a fair number of algorithms to get the best results possible. What I've found so far is that the combination of undistorting the image (using undistort), then applying histogram equalization, and finally the goodFeaturesToTrack algorithm (which I've found works better than Harris corner detection) yields pretty decent results. The goal here is to have every corner of every square accounted for with a point. That way, when I apply Canny edge detection, I can process individual squares.
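For reference, a rough sketch of that pipeline in OpenCV's Python API; the file name and calibration values below are placeholders, since camera_matrix and dist_coeffs would normally come from cv2.calibrateCamera:

    import cv2
    import numpy as np

    # Placeholder input and calibration data.
    img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    equalized = cv2.equalizeHist(undistorted)

    # Ask for up to 81 strong corners: a chessboard has a 9 x 9 lattice of
    # square corners if the outer border corners are counted.
    corners = cv2.goodFeaturesToTrack(equalized, maxCorners=81,
                                      qualityLevel=0.01, minDistance=20)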
EXAMPLE
WHAT I'VE CONSIDERED
http://www.nandanbanerjee.com/index.php?option=com_content&view=article&id=71:buttercup-chess-robot&catid=78&Itemid=470
To summarize the link above, the idea is to find the upper-leftmost, upper-rightmost, lower-leftmost, and lower-rightmost points and divide the distance between them by eight. From there you would come up with probable points and compare them to the points that are actually on the board. If one of the points doesn't match, simply replace the point.
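A sketch of that lattice-extrapolation idea using plain bilinear interpolation between the four extreme corners (a real board seen in perspective would need a homography instead; the function and its parameters are made up for illustration):

    import numpy as np

    def predicted_lattice(tl, tr, bl, br, n=9):
        """Interpolate an n x n lattice of expected corner positions from the
        four extreme board corners, each given as an (x, y) pair."""
        tl, tr, bl, br = (np.asarray(p, dtype=float) for p in (tl, tr, bl, br))
        points = np.empty((n, n, 2))
        for i in range(n):              # rows, top to bottom
            for j in range(n):          # columns, left to right
                u, v = j / (n - 1), i / (n - 1)
                top = (1 - u) * tl + u * tr
                bottom = (1 - u) * bl + u * br
                points[i, j] = (1 - v) * top + v * bottom
        return points

Each predicted point can then be compared against the detected corners and replaced when no detection falls nearby, as the linked approach suggests.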
I've also considered some sort of mode, like finding the distance between neighboring points and storing them in a list. Then I would perform a mode operation to figure out the most probable distance and use that to draw points.
QUESTION
As you can see, the points are fairly accurate over most of the squares (though there are stray points that do not do what I want). My question is: what do you think is the best way to find all the corners on the chessboard (I'm open to all ideas), and could you give me a somewhat detailed description (just enough to steer me in the right direction, or more if you choose)? Also (and this is a secondary question), do you have any recommendations on how best to recognize a move? I'm implementing multiple approaches and will compare them to obtain the best results. Thank you.
Please read these two links:
http://www.aishack.in/tutorials/sudoku-grabber-opencv-plot/
How to remove convexity defects in a Sudoku square?

Find start and end position of "cover-all" path then connect them

I have a shape in my 2D array like this (for example):
It is known that two points A and B exist (I do not know where) and that a path covering the entire shape (it must pass through every cell) exists between them. Can you give me some help on how to determine points A and B and then the "cover-all" path? Maybe there are known algorithms for such a case, or some help with a pseudo-code algorithm. Thanks in advance.
Check nhahdth's link to see that your problem is NP-hard in general: this MathOverflow article cites a paper establishing the result for graphs on grids with holes. You won't fare significantly better than brute force unless you can come up with more constraints.
You may be lucky in identifying at least one of your start and end nodes by searching for vertices of degree 1 in the underlying grid cell graph.
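A small sketch of that degree check, assuming the shape is given as a boolean NumPy array; any filled cell with exactly one filled neighbour can only be an endpoint of a cover-all path, so such cells are natural candidates for A and B:

    import numpy as np

    DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # 4-connectivity

    def degree_one_cells(shape):
        """Return cells of the shape that have exactly one filled neighbour."""
        h, w = shape.shape
        candidates = []
        for r in range(h):
            for c in range(w):
                if not shape[r, c]:
                    continue
                deg = sum(1 for dr, dc in DIRS
                          if 0 <= r + dr < h and 0 <= c + dc < w
                          and shape[r + dr, c + dc])
                if deg == 1:
                    candidates.append((r, c))
        return candidates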

Algorithm to align and compare two sets of vectors which may be incomplete and ignoring scaling?

Here is the problem:
I have many sets of points and want to come up with a function that can take one set and rank the others by their similarity to it. Scaling, translation, and rotation do not matter, and some points may be missing from any of the sets. The best match is the one that, when scaled, rotated, and translated in the ideal way, has the least mean squared error between points (maybe with a cap on the penalty, or considering only the best fraction of points, to handle missing points).
I am trying to come up with a good way to do this and am wondering if there are any well-known algorithms that can handle this type of problem. Just the name of something would be awesome! I lack a formal CS or math education and am doing my best to teach myself.
A few things I have tried
The first thing that comes to mind is to normalize the points somehow, but I don't think this is helpful, because the missing points may throw things off.
The best way I can think of is to estimate a starting point by translating the sets so their centroids match and scaling so that the largest distances from the centroids match. From there, do an A*-style search, scaling, rotating, and translating until I reach a maximum, and then compare the two sets. (I hope I am using the term A* correctly; I mean trying small translations, rotations, and scalings and selecting the move that gives the best match.) I think this will find the global maximum most of the time, but it is not guaranteed to. I am looking for a better way that is always correct.
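As a concrete example, the centroid/scale normalisation in that starting alignment might look like this (just a sketch; as noted, missing points will bias both the centroid and the scale):

    import numpy as np

    def normalise(points):
        """Translate a point set so its centroid is at the origin and scale it
        so the farthest point lies at distance 1 from the centroid."""
        pts = np.asarray(points, dtype=float)
        pts = pts - pts.mean(axis=0)
        return pts / np.linalg.norm(pts, axis=1).max()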
Thanks a ton for the help! It has been fun and interesting trying to figure this out so far, so I hope it is for you as well.
There's a very clever algorithm for identifying star fields. You find 4 points in a diamond shape, and then, using the two stars farthest apart, you define a coordinate system that locates the other two stars. This is scale- and rotation-invariant because the locations are relative to the first two stars. This forms a hash. You generate several of these hashes and use them to generate candidates. Once you have the candidates, you look for ones where multiple hashes have the correct relationships.
This is described in a paper and a presentation on http://astrometry.net/ .
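A toy version of such a quad hash; it omits the canonical ordering and mirror handling the real astrometry.net scheme uses, so treat it only as an illustration of the translation/rotation/scale invariance idea:

    import numpy as np
    from itertools import combinations

    def quad_hash(points):
        """Hash of 4 points: the farthest-apart pair (A, B) defines a local
        frame, and the hash is the coordinates of the other two points in that
        frame, making it invariant to translation, rotation, and scale."""
        pts = np.asarray(points, dtype=float)
        i, j = max(combinations(range(4), 2),
                   key=lambda ij: np.linalg.norm(pts[ij[0]] - pts[ij[1]]))
        a, b = pts[i], pts[j]
        u = b - a                          # frame x-axis (unnormalised)
        u_perp = np.array([-u[1], u[0]])   # frame y-axis
        scale = np.dot(u, u)               # dividing by |AB|^2 removes scale
        code = []
        for k in range(4):
            if k not in (i, j):
                d = pts[k] - a
                code.append((np.dot(d, u) / scale, np.dot(d, u_perp) / scale))
        return tuple(sorted(code))         # order-independent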
This paper may be useful: Shape Matching and Object Recognition Using Shape Contexts
Edit:
There are a couple of relatively simple methods to solve the problem:
Combine all possible pairs of points (one from each set) into nodes, connect two nodes with an edge where the distances in both sets match, then solve the maximum clique problem for this graph. Since the maximum clique problem is NP-hard, the complexity is probably O(exp(n^2)), so if you have too many points, don't use this algorithm directly; use some approximation.
Use the Generalised Hough transform to match the two sets of points. This approach has lower complexity (O(n^4)), but it is more involved, so I cannot explain it here.
You can find the details in computer vision books, for example "Machine vision: theory, algorithms, practicalities" by E. R. Davies (2005).
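For the first method, a brute-force sketch of the correspondence graph plus clique search (using networkx's maximal-clique enumeration purely for illustration; it is exponential, only practical for small sets, and as written it matches distances exactly rather than up to scale, so pre-normalise the sets if scale invariance is needed):

    import itertools
    import numpy as np
    import networkx as nx   # assumed available; any max-clique routine would do

    def match_by_clique(set_a, set_b, tol=1e-3):
        """Each node is a pairing (i, j) of a point from set_a with one from
        set_b; two nodes are connected when the corresponding intra-set
        distances agree within tol. The maximum clique is the largest mutually
        consistent set of pairings."""
        a, b = np.asarray(set_a, float), np.asarray(set_b, float)
        g = nx.Graph()
        nodes = list(itertools.product(range(len(a)), range(len(b))))
        g.add_nodes_from(nodes)
        for (i1, j1), (i2, j2) in itertools.combinations(nodes, 2):
            if i1 == i2 or j1 == j2:
                continue   # a point cannot be matched twice
            da = np.linalg.norm(a[i1] - a[i2])
            db = np.linalg.norm(b[j1] - b[j2])
            if abs(da - db) <= tol:
                g.add_edge((i1, j1), (i2, j2))
        return max(nx.find_cliques(g), key=len)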
