Greedy algorithm for active contours - contour won't shrink

I am studying and implementing the Greedy algorithm for active contours as described in the paper by Donna Williams, "A Fast Algorithm for Active Contours and Curvature Estimation".
One of the advantages over the original implementation (by Kass et al.) is supposed to be a uniform distribution of points along the contour curve. In every iteration, each point tries to move so that its distance to the previous point is as close to the average as possible.
The contour is expected to be drawn around an object in the image and then to shrink around it until it is "attached" to the object's edges.
But the problem is that the contour won't shrink. It evolves so that the points become equally spaced along the contour, but it cannot shrink around the image object, because the distances between points would drop below the average and the algorithm would move the points back.
Do you have any thoughts on this? What am I missing? Other implementations of active contours do shrink, but they have other drawbacks, and the Greedy algorithm is supposed to be better and more stable.
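For reference, here is a stripped-down sketch of my per-point update, keeping only the continuity term described above (the full algorithm also adds curvature and image-energy terms; the helper names are mine):

import numpy as np

def continuity_energy(candidate, prev_point, avg_dist):
    # Penalize deviation of the spacing from the current average spacing.
    return (avg_dist - np.linalg.norm(candidate - prev_point)) ** 2

def greedy_step(points, search_offsets, avg_dist):
    # points: (N, 2) array of contour points; search_offsets: e.g. a 3x3 window of offsets.
    new_points = points.copy()
    for i in range(len(points)):
        prev_point = new_points[i - 1]              # closed contour, so index -1 wraps around
        candidates = points[i] + search_offsets
        energies = [continuity_energy(c, prev_point, avg_dist) for c in candidates]
        new_points[i] = candidates[int(np.argmin(energies))]
    return new_points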

Researchers rarely emphasize the disadvantages of their new solutions.
Don't trust the paper too much unless you have heard from other sources that
this algorithm works.
I would only implement an algorithm if it is well accepted in the literature (or if I had invented it myself ;-) ).
Companies need a robust solution that works; a researcher must publish something new,
which may be less usable in practice and sometimes only works well on specific test sets.

opencv: Best way to detect corners on chessboard

BACKGROUND
So I'm creating a program that recognizes chess moves. So far, I have implemented a fair number of algorithms to come up with the best results possible. What I've found so far is that the combination of undistorting the image (using undistort), then applying histogram equalization, and finally the goodFeaturesToTrack algorithm (which I've found to be better than Harris corner detection) yields pretty decent results. The goal here is to have every corner of every square accounted for with a point. That way, when I apply Canny edge detection, I can process individual squares.
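Roughly, the pipeline looks like this (a simplified sketch; camera_matrix and dist_coeffs come from a separate calibration step, and the goodFeaturesToTrack parameters are just the values I'm currently experimenting with):

import cv2
import numpy as np

def find_candidate_corners(img, camera_matrix, dist_coeffs):
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                    # histogram equalization
    # maxCorners=100, qualityLevel=0.01, minDistance=20
    corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 20)
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)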
EXAMPLE
WHAT I'VE CONSIDERED
http://www.nandanbanerjee.com/index.php?option=com_content&view=article&id=71:buttercup-chess-robot&catid=78&Itemid=470
To summarize the link above, the idea is to find the upper-leftmost, upper-rightmost, lower-leftmost, and lower-rightmost points and divide the distance between them by eight. From there you would come up with probable points and compare them to the points that are actually on the board. If one of the points doesn't match, simply replace the point.
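A rough sketch of how I imagine that step (this assumes I already have the four extreme corners as numpy points and the detected corners as an (M, 2) array; the 9x9 size and the snapping threshold are my own choices):

import numpy as np

def expected_grid(tl, tr, bl, br, n=9):
    # Interpolate an n x n grid of probable corner positions from the four extremes.
    grid = np.zeros((n, n, 2))
    for r in range(n):
        left = tl + (bl - tl) * r / (n - 1)
        right = tr + (br - tr) * r / (n - 1)
        for c in range(n):
            grid[r, c] = left + (right - left) * c / (n - 1)
    return grid

def snap_to_detected(grid, detected, max_dist=15.0):
    # Replace each probable point by the nearest detected point, if one is close enough.
    snapped = grid.copy()
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            d = np.linalg.norm(detected - grid[r, c], axis=1)
            if d.min() < max_dist:
                snapped[r, c] = detected[d.argmin()]
    return snapped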
I've also considered some sort of mode-based approach: find the distances between neighboring points, store them in a list, then take the mode to figure out the most probable distance and use that to draw points.
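And a tiny sketch of that mode idea (binning the nearest-neighbour distances before taking the mode, since pixel distances rarely repeat exactly; the bin size is my own choice):

from collections import Counter
import numpy as np

def probable_spacing(points, bin_size=2.0):
    # points: (N, 2) array of detected corners.
    nearest = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        d[i] = np.inf                              # ignore the point itself
        nearest.append(d.min())
    binned = [round(d / bin_size) * bin_size for d in nearest]
    return Counter(binned).most_common(1)[0][0]    # the most frequent (binned) spacing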
QUESTION
As you can see, the points are fairly accurate over most of the squares (though there are stray points that do not do what I want). My question is: what do you think is the best way to find all corners on the chessboard (I'm open to all ideas), and could you give me a somewhat detailed description (just enough to steer me in the right direction, or more if you choose :)? Also, as a secondary question, do you have any recommendations on how best to recognize a move? I'm implementing multiple approaches and am going to compare them to obtain the best results. Thank you.
Please read these two links:
http://www.aishack.in/tutorials/sudoku-grabber-opencv-plot/
How to remove convexity defects in a Sudoku square?

Cutting Heuristic Solving Algorithm for n-edged polygons

I want to program a tool that can place objects on a rectangle with a minimum of waste; this problem is also known as the cutting problem.
So I looked around for algorithms and found that there are a few for rectangles, but not that many for n-edged polygons.
My first approach was to get a bounding box for the polygon, then run the normal rectangle algorithm. After that you could slowly try to increase the number of edges, but still using only axis-aligned lines (only vertical and horizontal), to approximate the polygon.
I wonder if there is any good, established algorithm that implements such a thing, rather than me creating my own.
The other way I've come up with would be something like a two-dimensional knapsack with some sorting heuristics that order the best-fitting polygons and try to place them on the rectangle.
But everything I come up with handles special polygons well (such as a square or ordinary rectangle) yet does not work on general polygons.

Placing 2D shapes in a rectangle efficiently. How to approach it?

I've been searching far and wide on the seven internets, to no avail. The closest to what I need seems to be the cutting stock problem, only in 2D (which is disappointing, since Wikipedia doesn't provide any directions on how to solve that one). Another look-alike problem would be UV unwrapping. There are solutions there, but only ones you get from add-ons in various 3D software.
To cut the long story short, what I want is this: given a rectangle of known width and height, I have to find out how many shapes (polygons) of known sizes (which may be rotated at will) I can fit inside that rectangle.
For example, I could choose a T-shaped piece, and in the same rectangle I could either pack it in an efficient way, resulting in 4 shapes per rectangle,
or tile the pieces based on their bounding boxes, in which case I could only fit 3.
But of course, this is only an example... and I don't think there would be much use in solving only this particular case. The only approaches I can think of right now are either backtracking-like in their complexity or solve only particular cases of this problem. So... any ideas?
Anybody up for a game of Tetris (a subset of your problem)?
This is known as the packing problem. Without knowing what kind of shapes you are likely to face ahead of time, it can be very difficult, if not impossible, to come up with an algorithm that will give you the best answer. Unless your polygons are "nice" polygons (circles, squares, equilateral triangles, etc.), you will probably have to settle for a heuristic that gives you an approximately best solution most of the time.
One general heuristic (though far from optimal depending on the shape of the input polygon) would be to simplify the problem by drawing a rectangle around the polygon so that the rectangle would be just big enough to cover the polygon. (As an example in the diagram below we draw a red rectangle around a blue polygon.)
Once we have done this, we can then take that rectangle and try to fit as many copies of it into the large rectangle as possible. This simplifies the problem into a rectangle packing problem, which is easier to solve and wrap your head around. An example of an algorithm for this is at the following link:
An Effective Recursive Partitioning Approach for the Packing of Identical Rectangles in a Rectangle.
Now obviously this heuristic is not optimal when the polygon in question is not close to being the same shape as a rectangle, but it does give you a minimum baseline to work with especially if you don't have much knowledge of what your polygon will look like (or there is high variance in what the polygon will look like). Using this algorithm, it would fill up a large rectangle like so:
Here is the same image without the intermediate rectangles:
For the case of these T-shaped polygons, the heuristic is not the best it could be (in fact it may be almost a worst case scenario for this proposed approximation), but it would work very well for other types of polygons.
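Here is a minimal sketch of that heuristic, using a plain grid tiling of the bounding box rather than the recursive partitioning from the linked paper (the helper names are mine):

def bounding_box(polygon):
    # polygon: list of (x, y) vertices; returns the width and height of its bounding box.
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return max(xs) - min(xs), max(ys) - min(ys)

def count_bounding_boxes(rect_w, rect_h, polygon):
    # Count how many copies of the bounding box fit in the big rectangle,
    # trying both orientations of the box (naive grid tiling).
    w, h = bounding_box(polygon)
    upright = (rect_w // w) * (rect_h // h)
    rotated = (rect_w // h) * (rect_h // w)
    return int(max(upright, rotated))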
Consider what the other answer said about placing the T's into a square, but instead of just leaving it as a square, set the shape up as a nested list of booleans, e.g. [[True, True, True], [False, True, False]] for your T shape. Then use a function to place the shapes on the grid. To optimize the results, create a tracker that pays attention to how many False cells in a new shape overlap with True cells that are already on the grid from previous shapes; the function then places the shape in the position with the most such overlaps. There will have to be modifications to push the optimization higher and higher, but that is the general premise you are looking for.
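A small sketch of that idea (the grid is a nested list of booleans, and the "tracker" here is a snugness score counting how many empty cells of the shape's bounding box sit over already-filled grid cells; all names are mine):

T_SHAPE = [[True, True, True],
           [False, True, False]]

def fits(grid, shape, top, left):
    # True if no filled cell of the shape lands on an occupied grid cell.
    return all(not (filled and grid[top + r][left + c])
               for r, row in enumerate(shape)
               for c, filled in enumerate(row))

def snugness(grid, shape, top, left):
    # How many empty (False) cells of the shape overlap cells already filled on the grid.
    return sum(grid[top + r][left + c]
               for r, row in enumerate(shape)
               for c, filled in enumerate(row) if not filled)

def place_best(grid, shape):
    best = None
    for top in range(len(grid) - len(shape) + 1):
        for left in range(len(grid[0]) - len(shape[0]) + 1):
            if fits(grid, shape, top, left):
                score = snugness(grid, shape, top, left)
                if best is None or score > best[0]:
                    best = (score, top, left)
    if best is None:
        return False                                # shape does not fit anywhere
    _, top, left = best
    for r, row in enumerate(shape):
        for c, filled in enumerate(row):
            if filled:
                grid[top + r][left + c] = True
    return True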

Fill arbitrary 2D shape with given set of rectangles

I have a set of rectangles and an arbitrary shape in 2D space. The shape is not necessarily a polygon (it may be a circle), and the rectangles have different widths and heights. The task is to approximate the shape with the rectangles as closely as possible. I can't change the rectangles' dimensions, but rotation is permitted.
It sounds very similar to the packing problem and the covering problem, but the area to cover is not rectangular...
I guess it's an NP-hard problem, and I'm pretty sure there should be some papers that show good heuristics for solving it, but I don't know what to google. Where should I start?
Update: One idea just came to my mind, but I'm not sure if it's worth investigating. What if we consider the bounding shape as a physical mold filled with water? Each rectangle is treated as a positively charged particle with a size. Now drop the smallest rectangle into it, then drop the next one by size at a random point. If rectangles get too close, they repel each other. Keep adding rectangles until all are used. Could this method work?
I think you could look for packing and automatic layout generation algorithms. Automatic VLSI layout generation algorithms might need similar things, as might textile layout questions...
The paper Hegedüs, "Algorithms for covering polygons by rectangles", seems to address a similar problem. And since this paper is from 1982, it might be interesting to look at the papers that cite it. Additionally, this meeting seems to discuss related research problems, so it might be a starting point for keywords or names of people who do research in this area.
I don't know whether computational geometry research has algorithms for your specific problem, or whether such algorithms are easy or practical enough to implement. Here is how I would approach it if I had to do it without being able to look up previous work. This is just a direction, far from a solution...
Formulate it as an optimization problem. You have discrete variables for which rectangles you choose (yes or no) and continuous variables (the location and orientation of the rectangles). Now you can set up two optimizations: a discrete optimization that picks the rectangles, and a continuous one that optimizes the locations and orientations once the rectangles are given. Interleave these two optimizations. Of course, the difficulty lies in formulating the optimizations and designing your error energy so that it does not get stuck in strange configurations (local minima). I'd try to cast the continuous part as a least-squares problem so that I can use standard optimization libraries.
I think this problem is suitable for solving with a genetic algorithm and/or an evolution strategy. I've solved a similar box-packing problem with the help of an evolution strategy of some kind; check it out on my blog.
So if you use such an approach, encode each box into the chromosome as:
x coordinate
y coordinate
angle
Then try to minimize a fitness function such as:
y = w1 * box_intersection_area +
w2 * box_area_out_of_shape +
w3 * average_circle_radius_in_free_space
Choose the weights w1, w2, w3 to reflect the importance of each factor. When the genetic algorithm finds a partial solution, remove the boxes that still overlap each other or lie outside the shape, and you will have at least a legal (though not necessarily optimal) solution.
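A minimal sketch of the first two fitness terms, assuming the shapely library and a chromosome that stores (x, y, angle) per box (the third term, the free-space circle measure, needs a free-space analysis that I leave out here):

from shapely.affinity import rotate, translate
from shapely.geometry import box

def decode(chromosome, sizes):
    # chromosome: list of (x, y, angle) per box; sizes: list of (w, h).
    placed = []
    for (x, y, angle), (w, h) in zip(chromosome, sizes):
        b = rotate(box(0, 0, w, h), angle, origin='center')
        placed.append(translate(b, x, y))
    return placed

def fitness(chromosome, sizes, shape, w1=1.0, w2=1.0):
    boxes = decode(chromosome, sizes)
    # w1 * box_intersection_area: total pairwise overlap between boxes
    overlap = sum(boxes[i].intersection(boxes[j]).area
                  for i in range(len(boxes))
                  for j in range(i + 1, len(boxes)))
    # w2 * box_area_out_of_shape: area of the boxes sticking out of the target shape
    outside = sum(b.difference(shape).area for b in boxes)
    return w1 * overlap + w2 * outside              # lower is better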
Good luck with this interesting problem!
It is indeed NP-hard, and since it has high-tech applications, reasonably efficient approximate strategies are not even in patents, let alone published papers.
The best you can do on a limited budget is to start by limiting the problem. Assume that all rectangles are exactly the same. Assume that all rectangles which are binary sub-divisions of your standard rectangle are also allowed, since you can efficiently pre-pack them to fit your core subdivision. For extra points you can also define several fixed schemas for gluing core rectangles together, to cover a few larger shapes with substantially different proportions. Assume that you can change the dimensions of your standard rectangle/cell as long as the rest (pre-packing and gluing schemas) remains the same; this gives you parameters for deciding the approximate size of the core rectangle based on the rectangles you are given.
Now you can play with aspect ratios to estimate the error such a limited system could guarantee. For the first iterations, assume that it can have 50% error with a simple sub-division schema, and then change the schema to reduce the error without increasing the asymptotic complexity of the pre-packing. At the end of the day you are always just assigning the given rectangles to your pre-calculated, now fixed grid and binary sub-divisions, meaning you are not trying to do layout or backtracking at all; you are always happy with the first approximate fit into the grid.
Work on defining classes of rectangles that pack well with your schema. That again keeps the whole process inverted: you are never trying to actually fit whatever you are given; you are defining what you have to be given in order to fit it well, and you punt on the rest as error, since this is an approximation.
Then you can try to do a bit more, but not much more: any slip into backtracking or into chasing an arbitrarily small error, and it becomes exponential.
If you are at a research facility and can get some supercomputer time, run a set of exhaustive searches with pathological mixes, just to see what optimal packings look like and whether you can derive a few more sub-division schemas and/or classes of rectangle sets.
That should be enough for the first two years of research :-)

Algorithm for simplifying 3d surface?

I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains a lot more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon me for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one, so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick-and-dirty results with an algorithm that just merges adjacent triangles that have similar normals.
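A sketch of just the triangulation step (a 2.5D Delaunay triangulation of the xy projection via scipy, which assumes the points form roughly a height field), plus the per-triangle normals you would compare when merging nearly coplanar neighbours:

import numpy as np
from scipy.spatial import Delaunay

def triangulate_with_normals(points):
    # points: (N, 3) array of surface samples, assumed roughly a height field over xy.
    tri = Delaunay(points[:, :2])                   # triangulate the xy projection
    a, b, c = (points[tri.simplices[:, k]] for k in range(3))
    normals = np.cross(b - a, c - a)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return tri.simplices, normals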
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the center, the one closest to the average, or the average itself, etc.). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase/decrease detail on the fly.
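A minimal sketch of that idea with a uniform voxel grid as the sub-volumes, using the average point of each occupied cell as its representative (numpy; the cell size is the detail knob mentioned above):

import numpy as np

def simplify_point_cloud(points, cell_size):
    # points: (N, 3) array; returns one representative (average) point per occupied cell.
    cells = np.floor(points / cell_size).astype(np.int64)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    reps = np.empty((len(counts), 3))
    for dim in range(3):
        reps[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return reps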
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the sides emerging from each vertex and take the dot products of pairs (?) of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and have small dot products.
Those vertices with the smallest numbers would then be candidates for removal. The vertices around them will then form a plane.
Or something like that.
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to find a meshed surface that fits the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization '02, pages 163-170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of "near") can be considered to contain similar information, and so be reduced to a single representative for each such group.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud you should first get rid of noise artefacts by building a profile of the kind of noise you have, its frequency and directional characteristics, and then doing a noise-profile-based reduction. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
Here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp
