Algorithm for recursively offsetting a polygon inward without requiring the shape to be rigidly maintained

I start with a polygon, the blue contour, and want to keep shrinking it inwards for, say, 4 depths. In the next iteration, I get something like the green contour. Notice that the shape of the polygon is maintained; at least, that is what I want to signify. This step can easily be done using the Polygon Clipping and Offsetting MATLAB wrapper by Emmett for the C++ Clipper library by Angus.
In the next step, I want to shrink the green polygon further. However, the green polygon cannot be shrunk further while maintaining its shape, so the algorithm should adaptively shrink it to something like the magenta contour. That is what I have in mind. The motivation comes from drawing contours in maps. As long as the contour generated by the adaptive process looks realistic enough, I am not particular about what shape modification is done to get to the magenta contour.
I am working in MATLAB and have looked for existing tools in this direction rather than coding something from scratch. Most inward offsetting tools for polygons maintain the shape while shrinking, like this:
Suggestions for shrinking a polygon down to a single smaller polygon would be helpful. Shape modification is allowed.
The code used to create the above image is:
p=[110 80;65 391;438 425;120 329];   % vertices of the input quadrilateral
ngon.x=p(:,1);
ngon.y=p(:,2);
scale=2^15;                          % Clipper works on integer coordinates, so scale up
ngon.x=int64(ngon.x*scale);
ngon.y=int64(ngon.y*scale);
outtie=clipper(ngon,ngon,1);         % perform redundant self-intersection (required)
outtie.x=int64(outtie.x);
outtie.y=int64(outtie.y);
offset=clipper(outtie,-10*scale,1);  % offset inward by 10 units (negative delta shrinks)
innie.x=double(offset.x)/scale;      % scale back down (convert to double to avoid integer rounding)
innie.y=double(offset.y)/scale;
innie.x=[innie.x;innie.x(1)];        % close the polygon for plotting
innie.y=[innie.y;innie.y(1)];
figure;
hold on
fill(ngon.x,ngon.y,'y')
plot(innie.x,innie.y,'r','linewidth',2)
To mex the clipper function, please see here.
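For what it's worth, the same repeated inward offset can be sketched in Python with Shapely's negative buffer (Shapely is an assumption here, not the Clipper wrapper used above); the point where the result becomes empty or splits into pieces is exactly where an adaptive strategy such as the magenta contour becomes necessary:

# Sketch only: repeated inward offset using Shapely (assumed available), not the Clipper MEX above.
from shapely.geometry import Polygon

poly = Polygon([(110, 80), (65, 391), (438, 425), (120, 329)])
rings = [poly]
for depth in range(4):                      # shrink for, say, 4 depths
    shrunk = rings[-1].buffer(-10)          # negative distance offsets inward
    if shrunk.is_empty:
        break                               # nothing left: an adaptive fallback is needed here
    if shrunk.geom_type == "MultiPolygon":  # the offset split the shape: keep the largest piece
        shrunk = max(shrunk.geoms, key=lambda g: g.area)
    rings.append(shrunk)

for ring in rings:
    print(list(ring.exterior.coords))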

Related

Find the corner points of a set of pixels that make up a quadrilateral boundary

I have a situation where I have a set of pixels that make up the border of a quadrilateral (very close to square). I'm trying to determine the location of the corners as best as possible and have been struggling for a while now. My first thought was to determine the straight lines of the border and then calculate the corner points, but I don't have access to OpenCV or other image processing libraries, unfortunately.
Below are three cases where the black outline is the image boundary and the red outline is the quadrilateral boundary. I have a list of all of the pixels that make up the red boundary and the red boundary thickness may vary.
My initial thought was that I could just find the pixel that is closest to each of the four image boundaries; however, this won't quite work for the first case, where the inner quadrilateral isn't tilted.
Any thoughts on how to tackle this problem would be great. I'm coding in Dart, but am looking for a pseudocode answer that I can implement myself.
(I have seen this post, which is similar to my problem, but I think there should be a simpler solution in my case, since I have access to all of the boundary points of the quadrilateral.)
Having a list of all the rectangle's boundary pixels, you can use a simple method like this:
Calculate the center of gravity of the rectangle (just sum the X and Y coordinates of the pixels and divide by their number) - it is the intersection of the diagonals.
Find the pixels farthest from that center - they are the corners (see the sketch below).
If the data set is of poor quality (gaps, spurious pixels), the center calculation might be inexact. In that case you can apply a Hough transform to extract the sides (as lines) and calculate their intersections.
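A minimal Python/NumPy sketch of that idea; the greedy min_separation rule for keeping the four picks apart is my own assumption, since "farthest pixels" is left unspecified above:

import numpy as np

def find_corners(boundary_pixels, min_separation=10.0):
    # boundary_pixels: (N, 2) array of (x, y) pixels on the red boundary
    pts = np.asarray(boundary_pixels, dtype=float)
    center = pts.mean(axis=0)                       # center of gravity
    order = np.argsort(-np.linalg.norm(pts - center, axis=1))
    corners = []
    for p in pts[order]:                            # greedily take the farthest pixels...
        if all(np.linalg.norm(p - c) > min_separation for c in corners):
            corners.append(p)                       # ...that are not near an already chosen corner
        if len(corners) == 4:
            break
    return center, corners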

Closest distance to border of shape

I have a shape (in black below) and a point inside the shape (red below). What's the algorithm to find the closest distance between my red point and the border of the shape (which is the green point on the graph)?
The shape border is not a series of lines but a randomly drawn shape.
Thanks.
So your shape is defined as a bitmap and you can access the pixels.
You could scan ever-growing squares around your point for border pixels. First, check the pixel itself. Then check a square of width 2 that covers the point's eight adjacent pixels, then width 4 for the next 16 pixels, and so on. When you find a border pixel, record its distance and check it against the minimum distance found so far. You can stop searching when half the width of the square is greater than the current minimum distance.
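A rough Python/NumPy sketch of that ring search; the grey > 0.5 border test from the anti-aliasing note further down is folded in as an assumption:

import numpy as np

def nearest_border_distance(img, px, py):
    # img: 2D greyscale array, 0.0 = black (inside the shape), 1.0 = white (outside).
    # Border criterion (an assumption, per the anti-aliasing note below): grey value > 0.5.
    h, w = img.shape
    best = np.inf
    for r in range(max(h, w)):
        if r > best:                             # every pixel in ring r is at least r away
            break
        ring = set()                             # the square "ring" at Chebyshev radius r
        for x in range(px - r, px + r + 1):
            ring.add((x, py - r))
            ring.add((x, py + r))
        for y in range(py - r, py + r + 1):
            ring.add((px - r, y))
            ring.add((px + r, y))
        for x, y in ring:
            if 0 <= x < w and 0 <= y < h and img[y, x] > 0.5:
                best = min(best, np.hypot(x - px, y - py))
    return best                                  # np.inf if the bitmap is entirely black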
An alternative is to draw Bresenham circles of growing radius around the point. The method is similar to the square method, but you can stop immediately when you have a hit, because all points on the circle have (nominally) the same distance to your point. The drawback is that this method is somewhat inaccurate, because the circle is only an approximation. You will also miss some pixels along the diagonals, because Bresenham circles have artefacts.
(Both methods are still quite brute-force, and in the worst case of a fully black bitmap they will visit every pixel.)
You need a criterion for what counts as a border pixel. Your shape is anti-aliased, so pixels on the border are smoothed into shades of grey. If your criterion is any pixel that isn't black, you will choose a point a bit inside the shape; if you require pure white, you'll land a bit outside. Perhaps it's best to choose a pixel with a grey value greater than 0.5 as the border.
If you have to find the closest border point for many points in the same shape, you can preprocess the data and use other methods of nearest-neighbour search.
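For example, a minimal sketch using a k-d tree (SciPy is an assumption; the border pixels come from a one-time scan of the bitmap):

import numpy as np
from scipy.spatial import cKDTree

def build_border_index(img, threshold=0.5):
    # img: 2D greyscale array, 0.0 = black (inside the shape), 1.0 = white (outside).
    outside = img > threshold
    inside = ~outside
    # A border pixel is an outside pixel with at least one inside 4-neighbour.
    nbr_inside = np.zeros_like(inside)
    nbr_inside[1:, :] |= inside[:-1, :]
    nbr_inside[:-1, :] |= inside[1:, :]
    nbr_inside[:, 1:] |= inside[:, :-1]
    nbr_inside[:, :-1] |= inside[:, 1:]
    ys, xs = np.nonzero(outside & nbr_inside)
    return cKDTree(np.column_stack([xs, ys]))

# Tiny usage example: a 32x32 black square inside a white 64x64 image.
img = np.ones((64, 64))
img[16:48, 16:48] = 0.0
tree = build_border_index(img)                  # one-time preprocessing
dist, _ = tree.query((32, 32))                  # red point at the centre of the square
print(dist)                                     # about 16 pixels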
As always, it depends on the data, in this case, what your shapes are like and any useful information about your starting point (will it often be close to a border, will it often be near the center of mass, etc).
If they are similar to what you show, I'd probably test the border points individually against the start. Now the problem is how you find the border without having to edge detect the entire shape.
The problem is it appears you can have sharply concave borders (think of a circle with a tiny spike-like sliver jutting into it). In this case you just need to edge detect the shape and test every point.
I think these will work, but don't hold me to it. Computational geometry seems to be very well understood, so you can probably find a pro at this somewhere:
Method One
If the shape is well behaved or you don't mind being wrong try this:
1- Draw 4 lines (dividing the shape into four quadrants) and check the distance to each border (a sketch of this step follows below). What I mean by "draw" is: keep going north until you hit a white pixel, then do the same going south, west, and east.
2- Take the two lines you have drawn so far that have the closest intersection points, bisect the angle they create and add the new line to your set.
3- Keep repeating step two until you are at a tolerance you can be happy with.
Actually you can stop before this and on a small enough interval just trace the border between two close points checking each point between them to refine the final answer.
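A minimal sketch of step 1 (the four axis-aligned rays), assuming a boolean NumPy array where True means a pixel is inside the shape; the bisection refinement of steps 2-3 is left out:

import numpy as np

def cast_ray(inside, x, y, dx, dy):
    # Walk from (x, y) in direction (dx, dy) until we leave the shape or the image;
    # return the number of steps taken (the distance along this ray).
    h, w = inside.shape
    steps = 0
    while 0 <= x < w and 0 <= y < h and inside[y, x]:
        x, y = x + dx, y + dy
        steps += 1
    return steps

def four_ray_distances(inside, x, y):
    # Step 1: distances to the border going north, south, east and west.
    directions = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
    return {name: cast_ray(inside, x, y, dx, dy) for name, (dx, dy) in directions.items()}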
Method Two (this will work with poorly behaved shapes and plays well with anti-aliasing):
1- Draw a line in any direction until you hit the border (black to white). This will be your starting distance.
2- Draw a circle at this distance, noting every time you go from black to white or white to black. These are your intersection points.
As long as you have more than two points, divide the radius in half and try again.
If you have no points increase your radius by 50% and try again (basically binary search until you get to two points - if you get one, you got lucky and found your answer).
3- Your closest point lies in the region between your two points. Run along the border, checking each one.
If you want to reduce the cost of step 3, you can keep doing step 2 until you get a small enough range to brute-force in step 3.
Also, to prevent a very unlucky start, draw four initial lines (east, south, and west as well) and start with the smallest distance. Those are easy to draw and greatly reduce your chance of picking the exact longest distance and accidentally thinking that single pixel is the answer.
Edit: one last optimization: because of the symmetry, you only need to calculate the circle points (those points that make up the border of the circle) for the first quadrant, then mirror them. Should greatly cut down on computation time.
If you define the distance as "the minimum number of steps needed to reach any pixel on the margin from the start pixel", then this problem can be solved with any shortest-path search algorithm, such as breadth-first search, or even better with A* search.
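A minimal breadth-first-search sketch under that step-count definition (4-connected moves; a boolean NumPy array for the shape is an assumption about the input format):

from collections import deque
import numpy as np

def steps_to_border(inside, start):
    # inside: 2D boolean array, True = pixel belongs to the shape
    # start:  (x, y) of the red point; returns the minimum number of 4-connected steps
    h, w = inside.shape
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), d = queue.popleft()
        if not inside[y, x]:                 # left the shape: this pixel is on/over the margin
            return d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(((nx, ny), d + 1))
    return None                              # shape fills the whole image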

Algorithm to compute set of bins bounded by a discrete contour

On a discrete grid-based plane (think: pixels of an image), I have a closed contour that can be expressed either by:
a set of 2D points (x1,y1);(x2,y2);(x3,y3);...
or a 4-connected Freeman code, with a starting point: (x1,y1) + 00001112...
I know how to switch from one to the other of these representations. This will be the input data.
I want to get the set of grid coordinates that are bounded by the contour.
Consider this example, where the red coordinates are the contour, and the gray one the starting point:
If the gray coordinate is, say, at (0,0), then I want a vector holding:
(1,1),(2,1),(3,1),(3,2)
Order is not important, and the output vector can also hold the contour itself.
Language of choice is C++, but I'm open to any existing code, algorithm, library, pointer, whatever...
I thought that maybe CGAL would have something like this, but I am unfamiliar with it and couldn't find my way through the manual, so I'm not even sure.
I also looked toward OpenCV, but I think it does not provide this algorithm (though I could be wrong?).
I was thinking about finding the bounding rectangle, then checking each of the points in the rectangle to see if they are inside/outside, but this seems suboptimal. Any ideas?
One way to solve this is with drawContours, since you already have the contour points (a short sketch follows the steps below).
Create a blank Mat and draw the contour with thickness = 1 (boundary only).
Create another blank Mat and draw the contour with thickness = CV_FILLED (whole area including the boundary).
Now subtract the boundary image from the filled image (e.g. bitwise_xor, since the boundary is a subset of the filled area); you get the filled area excluding the boundary.
Finally, collect the non-zero pixels.
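A small Python/OpenCV sketch of those steps; the contour coordinates and grid size are made-up illustration data, not the figure from the question:

import numpy as np
import cv2

# Illustrative contour points in grid coordinates (example data only).
contour = np.array([[1, 1], [4, 1], [4, 3], [1, 3]], dtype=np.int32).reshape(-1, 1, 2)

h, w = 6, 6                                           # grid size (an assumption)
boundary = np.zeros((h, w), dtype=np.uint8)
filled = np.zeros((h, w), dtype=np.uint8)
cv2.drawContours(boundary, [contour], -1, 255, thickness=1)
cv2.drawContours(filled, [contour], -1, 255, thickness=cv2.FILLED)

interior = cv2.subtract(filled, boundary)             # filled area excluding the boundary
ys, xs = np.nonzero(interior)                         # the bounded grid coordinates
print(list(zip(xs.tolist(), ys.tolist())))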

Remove incorrect pixels from polygons

I have a generated set of points that form the border of a polygonal area. The image below shows an example of what I mean. The black "spots" should not be there and the line should be "clear". I need to remove those points.
Now the problem is twofold. First, I don't know what this situation is called. It's not aliasing or a jagged edge, because those points are not produced by a line-generating algorithm but by a contour generator.
And if not the name, then at least a push in the right direction on how to solve this would help me.
So far, I have tried converting this to a chain code and simplifying it, but that didn't work very well and it was rather slow. Converting those dots to geometry and using the Ramer algorithm to simplify the geometry works better, but it destroys some "fine" detail that should be there.
You can try the following:
First search for these spots. From your figure it seems that the spots look something like the following:
1 1
1 1
That is, a small square block (here 2x2) of colored pixels. Such spots can easily be found by traversing the pixel matrix once (see the sketch below).
Once you identify these spots, check the neighbouring pixels to see what pattern the curve/line is following, and delete the unnecessary pixels accordingly.
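A minimal NumPy sketch of the first step, finding every fully set 2x2 block; deciding which of the four pixels to delete is the second, neighbourhood-dependent step:

import numpy as np

def find_2x2_spots(mask):
    # mask: 2D boolean array, True where a border pixel is set.
    # Returns the top-left (x, y) coordinate of every fully set 2x2 block.
    block = mask[:-1, :-1] & mask[1:, :-1] & mask[:-1, 1:] & mask[1:, 1:]
    ys, xs = np.nonzero(block)
    return list(zip(xs.tolist(), ys.tolist()))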
Separate the contour curves and clean each one by itself (a rough sketch follows the steps below).
For each contour:
If the curve is not closed, close it with a temporary line.
Flood-fill the contour curve to get a solid monochrome figure.
Run contour detection on the result. The edge of a monochrome figure will be a clean line.
Flood-fill the area outside the new contour curve.
Run contour detection one last time to restore the original contour.
Re-assemble the contours into a single bitmap.
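A condensed Python/OpenCV sketch of that idea for a single contour; it collapses the flood-fill steps into the standard corner flood-fill trick and assumes the noisy border is already closed on the pixel grid (the "close it with a temporary line" step is skipped):

import numpy as np
import cv2

def clean_contour(points, shape):
    # points: (N, 2) int array of noisy border pixels for ONE contour, assumed closed;
    # shape:  (height, width) of the working bitmap.
    pts = np.asarray(points, dtype=int)
    canvas = np.zeros(shape, dtype=np.uint8)
    canvas[pts[:, 1], pts[:, 0]] = 255                      # rasterise the noisy border
    # Flood-fill the background from a corner (assumed to lie outside the figure), then invert:
    # everything not reachable from outside becomes the solid monochrome figure.
    h, w = shape
    filled = canvas.copy()
    flood_mask = np.zeros((h + 2, w + 2), dtype=np.uint8)   # floodFill wants a 2-pixel-larger mask
    cv2.floodFill(filled, flood_mask, (0, 0), 255)
    solid = canvas | cv2.bitwise_not(filled)
    # Contour detection on the solid figure yields a clean single-pixel boundary.
    contours, _ = cv2.findContours(solid, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x
    return max(contours, key=cv2.contourArea)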

How to find this kind of geometry in images

Suppose I have an image of a scene as depicted above: a sort of pole with a blob on it, next to possibly similar objects with no blobs.
How can I find the blob marked by the red circle (i.e. produce a binary image indicating which pixels belong to the blob)?
Note that the pole together with the blob may be rotated arbitrarily, and its size may also vary.
Can you try to do it in the 4 steps below? (A rough OpenCV sketch of steps 1-2 follows the list.)
Circle detection like: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features)
Line detection, like: Finding location of rectangles in an image with OpenCV
Identify the rectangle position by combining neighboring lines. (For each line segment you have the start and end point positions, and you also know its direction, so you can figure out whether two connecting line segments (whose endpoints are close) are orthogonal. Your goal is to find 3 such segments for each rectangle.)
Check the relative position of each circle and rectangle to see if any pair can form the knob shape.
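A minimal sketch of steps 1-2 with OpenCV's Hough routines; all thresholds, radii and the input filename are placeholder assumptions to be tuned for the actual images:

import numpy as np
import cv2

img = cv2.imread("scene.png")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)

# Step 1: circle (blob) candidates via the Hough transform.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

# Step 2: line segments for the pole / rectangle sides.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)

# Steps 3-4 (grouping orthogonal segments into rectangles and pairing them
# with nearby circles) would then work on `circles` and `lines`.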
One approach could be using the Viola-Jones object detection framework.
Though the framework is mostly used for face detection, it is actually designed for generic objects that you feed to the algorithm.
The basic idea is to feed samples of the "good object" (what you are looking for) and "bad objects" to a machine-learning algorithm, which generates patterns from the images as its features.
During classification, the algorithm slides a window over the image and searches for a "match" to the object (i.e. a window for which the classifier returns a positive answer).
The algorithm uses supervised learning and thus requires a labeled set of examples (both positive and negative ones).
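Assuming a cascade has already been trained for the blob-on-pole object (e.g. with OpenCV's opencv_traincascade tool) and saved as blob_cascade.xml (a hypothetical filename), the detection side would look roughly like this:

import numpy as np
import cv2

# Hypothetical cascade trained beforehand on positive samples of the blob-on-pole
# object and negative background samples.
cascade = cv2.CascadeClassifier("blob_cascade.xml")

img = cv2.imread("scene.png")                        # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Sliding-window search over scales; returns bounding boxes of positive matches.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Rough binary mask of blob pixels (boxes are only an approximation of the blob region).
mask = np.zeros(gray.shape, dtype=np.uint8)
for (x, y, w, h) in boxes:
    mask[y:y + h, x:x + w] = 255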
I'm sure there is some boundary-map algorithm in image processing to do this. Otherwise, here is a quick fix: pick a pixel at the center of the "undiscovered zone", which initially is the whole image. Trace horizontal and vertical lines in 4 directions, each ending at the borders of the zone, and find where the value changes from 0 to 1 or vice versa. Trace each such value switch and complete the boundary of each figure (Step A).
Do the same for the zones that are still undiscovered: start at some center point and skim through the lines connecting the center to the image border or to a pixel at the boundary of a known zone.
In Step A, you can also check whether the boundary you traced is a line or a curve. Whenever it is a curve, you only need two points on it - points at some distance from one another, for the accuracy of the calculation. The lines perpendicular to the tangents at these two points intersect at the center of the red circle in your figure.
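A small sketch of that last observation: given two boundary points on the circle and the local tangent directions, the two normals can be intersected with a 2x2 linear solve (the sample numbers are made up):

import numpy as np

def circle_center(p1, t1, p2, t2):
    # p1, p2: points on the circle; t1, t2: tangent directions at those points.
    # The normals n_i are perpendicular to the tangents; solve p1 + a*n1 == p2 + b*n2.
    n1 = np.array([-t1[1], t1[0]], dtype=float)
    n2 = np.array([-t2[1], t2[0]], dtype=float)
    A = np.column_stack([n1, -n2])
    a, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + a * n1

# Example: two points on the unit circle with their tangents; center is ~ (0, 0).
print(circle_center((1, 0), (0, 1), (0, 1), (-1, 0)))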
You can segment the image, then use only the pixels in the segments to contribute to a Hough transform to find the circles.
Then you will only have segments with circles in them. You can use a modified Hough transform to find rectangles. The "best" rectangle and circle combination will then be your match. This is very computationally intensive.
Another approach, if you already have these binary pictures, is to transform the shape into a (for example, 256-bin) signature by taking the distance to the centroid as a function of the distance travelled along the edge. If you start at the point furthest from the centroid, you have a fairly rotation-robust feature vector.
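A rough sketch of that centroid-distance signature; OpenCV is used only to extract an ordered boundary, and the final scale normalisation is my own assumption:

import numpy as np
import cv2

def shape_signature(mask, bins=256):
    # mask: binary uint8 image of a single segmented shape (255 = shape, 0 = background).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # OpenCV 4.x
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centroid = pts.mean(axis=0)                         # centroid of the boundary points
    dists = np.linalg.norm(pts - centroid, axis=1)      # distance to centroid along the edge
    # Resample to a fixed number of bins so shapes of different sizes are comparable.
    idx = np.linspace(0, len(dists) - 1, bins)
    sig = np.interp(idx, np.arange(len(dists)), dists)
    sig = np.roll(sig, -int(np.argmax(sig)))            # start at the point furthest from the centroid
    return sig / sig.max()                              # scale normalisation (an assumption)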
