I am trying to do shape analysis of an object in an image, specifically in MATLAB. To do so, I have found the boundary pixels. For each boundary pixel I determine its neighbors using 8-connectivity. I then calculate the tangent from a point to exactly one of its neighbors (which one depends on whether I traverse clockwise or the other way). My algorithm works fine as long as every pixel has exactly two neighbors, as in the shape shown in this picture (9 x 15 pixels).
But if a pixel has more than two neighbors, my algorithm gets confused, for example in the shape shown here (9 x 15 pixels).
I want to take the tangent of each boundary pixel with its neighbor pixel in a clockwise or anti-clockwise direction. If you look at the second image, which is a valid boundary image: moving clockwise, the neighbor of the red pixel is the green one, the neighbor of the green pixel is "1", and the neighbor of "1" is "2", but then I can never get back to the blue and brown pixels, so I cannot visit every boundary pixel and take its tangent with its neighbor.
I know graph node-visiting algorithms that maintain a queue or a stack, but in this case I don't just want to visit each pixel; I also want to take the tangent of every pixel with the correct neighbor according to the direction in which I am moving.
This is one example, and similar problems can occur in other ways, so I am trying to come up with a generic algorithm for it. I would appreciate your help. Thank you.
As btilly already said, the solution is to trace the boundaries between the pixels rather than the pixels themselves. I recommend part of the Potrace algorithm, which is an algorithm for vectorizing binary images. The interesting part for you is the decomposition of the path. Here is the idea of the path decomposition:
You can find the Potrace algorithm here.
Another algorithm is by Wilhelm Burger and Mark Burge, in the book "Digital Image Processing: An Algorithmic Introduction" (see the link, where you can read part of the book). The interesting part for you is the function "TraceContour" on page 538. This algorithm works the way you intended and walks around the "inner" pixels. I found an explanation of the algorithm here, under inner boundary tracing. You can run it with either four- or eight-neighbor connectivity.
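For illustration, here is a minimal Python sketch of that kind of inner boundary tracing with 8-connectivity (not the book's TraceContour code, just the same idea; the function and variable names are my own):

```python
import numpy as np

# 8-neighbour direction codes, counter-clockwise: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
# offsets are (dy, dx) with the y axis pointing down, as usual for images
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_inner_boundary(mask):
    """Walk around the inner boundary pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return []
    # starting pixel: first foreground pixel in raster-scan order
    order = np.lexsort((xs, ys))
    start = (int(ys[order[0]]), int(xs[order[0]]))
    boundary = [start]
    cur, direction = start, 7
    while True:
        # resume the counter-clockwise search just "behind" the direction we came from
        d = (direction + 7) % 8 if direction % 2 == 0 else (direction + 6) % 8
        for i in range(8):
            dd = (d + i) % 8
            ny, nx = cur[0] + OFFSETS[dd][0], cur[1] + OFFSETS[dd][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                cur, direction = (ny, nx), dd
                break
        boundary.append(cur)
        # stop when we re-enter the first two boundary pixels in the same order
        if len(boundary) >= 4 and boundary[-2:] == boundary[:2]:
            return boundary[:-2]
```

Because every step remembers the direction it came from, the trace also handles pixels with more than two neighbors: it always takes the next foreground pixel in a consistent rotational order, which is exactly the tangent neighbor you want.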
I have a situation where I have a set of pixels that make up the border of a quadrilateral (very close to square). I'm trying to determine the location of the corners as best as possible and have been struggling for a while now. My first thought was to determine the straight lines of the border and then calculate the corner points, but I don't have access to OpenCV or other image processing libraries, unfortunately.
Below are three cases where the black outline is the image boundary and the red outline is the quadrilateral boundary. I have a list of all of the pixels that make up the red boundary and the red boundary thickness may vary.
My initial thought was that I could just find the pixel closest to each of the four image boundaries; however, this won't quite work for the first case, where the inner quadrilateral isn't tilted.
Any thoughts on how to tackle this problem would be great. I'm coding in Dart, but am looking for a pseudocode answer that I can implement myself.
(I have seen this post, which is similar to my problem, but I think there should be a simpler solution for my problem since I have access to all of the boundary points of the quadrilateral)
Having a list of all rectangle boundary pixels, you can use simple methods like this:
Calculate the center of gravity of the rectangle (just sum the X and Y coordinates of the pixels and divide by their number) - it is the intersection of the diagonals.
Find the pixels farthest from that center - they are the corners.
If the data set is of poor quality (gaps, spurious pixels), the center calculation might be inexact. In that case, you can apply a Hough transform to extract the sides (as lines) and compute their intersections.
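A minimal Python sketch of the centroid-plus-farthest-pixels idea (the `min_separation` parameter is my own addition, used to keep the four picked corners apart):

```python
import numpy as np

def find_corners(boundary_pixels, min_separation=10.0):
    """boundary_pixels: (N, 2) array of (x, y) coordinates of the red boundary pixels."""
    pts = np.asarray(boundary_pixels, dtype=float)
    centroid = pts.mean(axis=0)                    # approximately the diagonal intersection
    dist = np.linalg.norm(pts - centroid, axis=1)
    corners = []
    for i in np.argsort(dist)[::-1]:               # farthest pixels first
        p = pts[i]
        if all(np.linalg.norm(p - c) > min_separation for c in corners):
            corners.append(p)
        if len(corners) == 4:
            break
    return centroid, corners
```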
I have a set of 3d points that lie in a plane. Somewhere on the plane, there will be a hole (which is represented by the lack of points), as in this picture:
I am trying to find the contour of this hole. Other solutions out there involve finding convex/concave hulls but those apply to the outer boundaries, rather than an inner one.
Is there an algorithm that does this?
If you know the plane (which you could determine by PCA), you can project all points into this plane and continue with the 2D coordinates. Thus, your problem reduces to finding boundary points in a 2D data set.
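A short sketch of that projection step, assuming NumPy (the first two right singular vectors give the two in-plane directions):

```python
import numpy as np

def project_to_plane(points_3d):
    """Project (approximately) coplanar 3D points onto 2D in-plane coordinates via PCA."""
    pts = np.asarray(points_3d, dtype=float)
    center = pts.mean(axis=0)
    # principal axes of the point cloud; the first two span the plane
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    basis = vt[:2]
    return (pts - center) @ basis.T, center, basis
```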
Your data looks as if it might be uniformly sampled (independently per axis). Then, a very simple check might be sufficient: Calculate the centroid of the - let's say 30 - nearest neighbors of a point. If the centroid is very far away from the original point, you are very likely on a boundary.
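Sketched in Python with a k-d tree; the factor 0.5 used for the cutoff below is just an assumed default you would tune:

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_by_centroid_shift(points_2d, k=30, threshold=None):
    """Flag points whose k-nearest-neighbour centroid lies far away from them."""
    pts = np.asarray(points_2d, dtype=float)
    dists, idx = cKDTree(pts).query(pts, k=k + 1)   # k+1 because each point also finds itself
    centroids = pts[idx[:, 1:]].mean(axis=1)
    shift = np.linalg.norm(centroids - pts, axis=1)
    if threshold is None:
        threshold = 0.5 * dists[:, 1:].mean()       # assumed default: half the mean neighbour spacing
    return shift > threshold                        # True = likely boundary point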
A second approach might be recording the directions in which you have neighbors. I.e. keep something like a bit field for the discretized directions (e.g. angles in 10° steps, which will give you 36 entries). Then, for every neighbor, calculate its direction and mark that direction, including a few of the adjacent directions, as occupied. E.g. if your neighbor is in the direction of 27.4°, you could mark the direction bits 1, 2, and 3 as occupied. This additional surrounding space will influence how fine-grained the result will be. You might also want to make it depend on the distance of the neighbor (i.e. treat the neighbors as circles and find the angular range that is spanned by the circle). Finally, check if all directions are occupied. If not, you are on a boundary.
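And a rough sketch of the direction-occupancy idea (fixed 10° bins, one adjacent bin marked on each side; the distance-dependent angular ranges are left out for brevity):

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_by_direction_coverage(points_2d, k=30, bin_deg=10, spread=1):
    """A point whose neighbours do not cover all directions is a boundary candidate."""
    pts = np.asarray(points_2d, dtype=float)
    n_bins = 360 // bin_deg
    _, idx = cKDTree(pts).query(pts, k=k + 1)
    on_boundary = np.zeros(len(pts), dtype=bool)
    for i, neighbours in enumerate(idx[:, 1:]):
        vecs = pts[neighbours] - pts[i]
        angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0])) % 360
        occupied = np.zeros(n_bins, dtype=bool)
        for b in (angles // bin_deg).astype(int):
            for s in range(-spread, spread + 1):     # mark the bin and its adjacent bins
                occupied[(b + s) % n_bins] = True
        on_boundary[i] = not occupied.all()
    return on_boundary
```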
Alpha shapes can give you both the inner and outer boundaries.
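If you don't have an alpha-shape library at hand, here is a sketch of the same idea via a Delaunay triangulation: keep triangles with a small circumradius; edges used by exactly one kept triangle lie on a boundary, which includes the hole contour as well as the outer one.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points_2d, alpha):
    """Return the boundary edges (index pairs) of the alpha shape of a 2D point set."""
    pts = np.asarray(points_2d, dtype=float)
    edge_count = {}
    for ia, ib, ic in Delaunay(pts).simplices:
        a, b, c = pts[ia], pts[ib], pts[ic]
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
        area = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2.0
        if area == 0:
            continue
        circumradius = la * lb * lc / (4.0 * area)
        if circumradius < 1.0 / alpha:               # keep only "small" triangles
            for e in ((ia, ib), (ib, ic), (ic, ia)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    # edges belonging to exactly one kept triangle lie on a boundary (outer or hole)
    return [e for e, n in edge_count.items() if n == 1]
```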
1. convert to 2D by projecting the points onto your plane
see related QA dealing with this:
C++ plane interpolation from a set of points
2. find holes in 2D point set
simply apply this related QA:
Finding holes in 2d point sets?
3. project found holes back to 3D
again see the link in #1
Sorry for the almost link-only answer, but both links are here on SO/SE and, combined, they deal exactly with your issue. At first I was tempted to flag your question as a duplicate and leave this as a comment, but this is more readable.
I have reached a dead end with my university project and I can't find a way to solve it. The problem is:
Given a circular robot (green circle) with radius r
I need to find a path (any path, not necessarily the best one) to the end point, which is the blue dot (image below).
The obstacles are the red polygons, and the cyan lines around them represent the Minkowski sum.
The black dots represent the Voronoi diagram.
The blue box around everything is the outside border.
So first I thought I should find the Voronoi diagram points closest to the start point (robot) and to the end point. Those points are shown in the image (cyan dots).
Then I was thinking of using some kind of algorithm like A* to search for a path between the cyan dots found above, along the Voronoi points; that way I would find something like the safest path.
The problem is that I don't have a way of knowing which points are neighbors of each point in the Voronoi diagram, because, as you can see, in some parts of the diagram there are big gaps.
So what do you suggest?
Thank you for your time.
Create a black and white image that represents your map. It should start as all black.
Render your obstacles as white.
Dilate the image using a circular filter the size of your circular robot.
Find the shortest path from A to B that only traverses black pixels.
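A minimal Python sketch of those four steps, assuming SciPy is available (a boolean obstacle mask stands in for the white-on-black image, and a plain BFS stands in for "shortest path over black pixels"):

```python
import numpy as np
from collections import deque
from scipy.ndimage import binary_dilation

def plan_path(obstacles, start, goal, robot_radius):
    """obstacles: 2D bool array, True where an obstacle pixel is; start/goal: (y, x) tuples."""
    r = int(np.ceil(robot_radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = xx ** 2 + yy ** 2 <= r ** 2                  # circular structuring element
    blocked = binary_dilation(obstacles, structure=disk)  # grow obstacles by the robot radius
    h, w = blocked.shape
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                                  # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        y, x = cur
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not blocked[ny, nx] and (ny, nx) not in prev:
                prev[(ny, nx)] = cur
                queue.append((ny, nx))
    return None                                          # no path exists
```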
The problem is that I don't have a way of knowing which points are neighbors of each point in the Voronoi diagram, because, as you can see, in some parts of the diagram there are big gaps.
There's probably a better solution, but here's a simple algorithm you could try:
Find (or set manually) the longest distance between "connected" points, D.
Make a table of distances between each pair of points of your Voronoi diagram, keeping only the pairs whose distance is less than or equal to D.
Connect the pairs starting with the shortest distances, except when a path shorter than D already exists between the two points (to avoid unnecessary small loops and corner-cutting).
Find closest point to your robot and closest point to your destination.
Run shortest path algorithm on the graph you built in step 3.
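A small Python sketch of steps 2-5, simplified to a plain distance threshold for the edges (the loop-avoidance rule from step 3 is omitted) and with Dijkstra standing in for A*:

```python
import heapq
from math import dist   # Python 3.8+

def build_graph(points, D):
    """Connect Voronoi vertices that are at most D apart; edges carry their length."""
    graph = {i: [] for i in range(len(points))}
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if d <= D:
                graph[i].append((j, d))
                graph[j].append((i, d))
    return graph

def shortest_path(graph, start, goal):
    """Plain Dijkstra over the graph built above; returns a list of vertex indices."""
    best, prev = {start: 0.0}, {start: None}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        if cost > best.get(node, float("inf")):
            continue
        for nxt, w in graph[node]:
            c = cost + w
            if c < best.get(nxt, float("inf")):
                best[nxt], prev[nxt] = c, node
                heapq.heappush(heap, (c, nxt))
    return None
```

Pick `start` and `goal` as the indices of the Voronoi points closest to the robot and to the destination (step 4), then call `shortest_path(build_graph(points, D), start, goal)`.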
I have a shape (in black below) and a point inside the shape (red below). What's the algorithm to find the closest distance between my red point and the border of the shape (which is the green point on the graph) ?
The shape border is not a series of lines but a randomly drawn shape.
Thanks.
So your shape is defined as a bitmap and you can access the pixels.
You could scan ever growing squares around your point for border pixels. First, check the pixel itself. Then check a square of width 2 that covers the point's eight adjacent pixels. Next, width 4 for the next 16 pixels and so on. When you find a border pixel, record its distance and check against the minimum distance found. You can stop searching when half the width of the square is greater than the current minimum distance.
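Here is a rough Python sketch of that growing-square scan; `is_border(x, y)` is a hypothetical pixel test (see the grey-value criterion discussed below) and is assumed to handle out-of-image coordinates:

```python
from math import hypot

def closest_border_distance(is_border, px, py, max_halfwidth=10_000):
    """Scan growing square rings around (px, py) and return the closest border distance."""
    if is_border(px, py):
        return 0.0
    best = None
    for k in range(1, max_halfwidth):
        if best is not None and k > best:            # the whole ring is farther than the best hit
            return best
        ring = [(x, py - k) for x in range(px - k, px + k + 1)]          # top edge
        ring += [(x, py + k) for x in range(px - k, px + k + 1)]         # bottom edge
        ring += [(px - k, y) for y in range(py - k + 1, py + k)]         # left edge (no corners)
        ring += [(px + k, y) for y in range(py - k + 1, py + k)]         # right edge (no corners)
        for x, y in ring:
            if is_border(x, y):
                d = hypot(x - px, y - py)
                if best is None or d < best:
                    best = d
    return best
```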
An alternative is to draw Bresenham circles of growing radius around the point. The method is similar to the square method, but you can stop immediately when you have a hit, because all points on the circle have (roughly) the same distance to your point. The drawback is that this method is somewhat inaccurate, because the circle is only an approximation. You will also miss some pixels along the diagonals, because Bresenham circles have artefacts.
(Both methods are still quite brute-force and, in the worst case of a fully black bitmap, will visit every pixel.)
You need a criterion for a pixel being on the border. Your shape is anti-aliased, so pixels on the border are smoothed by making them a shade of grey. If your criterion is "any pixel that isn't black", you will choose a point a bit inside the shape. If you choose pure white, you'll land a bit outside. Perhaps it's best to choose a pixel with a grey value greater than 0.5 as the border.
If you have to find the closest border point for many points on the same shape, you can preprocess the data and use other methods of nearest-neighbour search.
As always, it depends on the data, in this case, what your shapes are like and any useful information about your starting point (will it often be close to a border, will it often be near the center of mass, etc).
If they are similar to what you show, I'd probably test the border points individually against the start. Now the problem is how you find the border without having to edge detect the entire shape.
The problem is it appears you can have sharply concave borders (think of a circle with a tiny spike-like sliver jutting into it). In this case you just need to edge detect the shape and test every point.
I think these will work, but don't hold me to it. Computational geometry seems to be very well understood, so you can probably find a pro at this somewhere:
Method One
If the shape is well behaved or you don't mind being wrong try this:
1- Draw 4 lines (dividing the shape into four quadrants) and check the distance to each border. What I mean by "draw" is: keep going north until you hit a white pixel, then do the same going south, west, and east.
2- Take the two lines you have drawn so far that have the closest intersection points, bisect the angle they create and add the new line to your set.
3- Keep repeating step two until you reach a tolerance you are happy with.
Actually, you can stop before that: once the interval is small enough, just trace the border between the two closest hits, checking each point between them to refine the final answer. A small ray-casting helper for this is sketched below.
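A minimal sketch of the ray-casting used in step 1 (the `is_inside` pixel test is assumed, e.g. "grey value > 0.5"; angles are in radians):

```python
from math import cos, sin

def ray_distance(is_inside, px, py, angle, step=0.5, max_dist=10_000.0):
    """Walk from (px, py) along `angle` until we leave the shape; return the distance."""
    d = 0.0
    while d < max_dist:
        x, y = px + d * cos(angle), py + d * sin(angle)
        if not is_inside(int(round(x)), int(round(y))):
            return d
        d += step
    return max_dist

# step 1: the four initial rays (east, north, west, south), then bisect between the two closest
# distances = [ray_distance(is_inside, px, py, a) for a in (0.0, 1.5708, 3.1416, 4.7124)]
```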
Method Two (this will work with poorly behaved shapes and plays well with anti-aliasing):
1- Draw a line in any direction until you hit the border (black to white). This will be your starting distance.
2- Draw a circle at this distance, noting every time you go from black to white or from white to black. These are your intersection points.
As long as you have more than two points, divide the radius in half and try again.
If you have no points, increase your radius by 50% and try again (basically a binary search until you get down to two points - if you get one, you got lucky and found your answer).
3- Your closest point lies in the region between your two points. Run along the border, checking each one.
If you want to reduce the cost of step 3, you can keep doing step 2 until you get a small enough range to brute-force in step 3.
Also to prevent a very unlucky start, draw four initial lines (also east, south, and west) and start with the smallest distance. Those are easy to draw and greatly reduce your chance of picking the exact longest distance and accidentally thinking that single pixel is the answer.
Edit: one last optimization: because of the symmetry, you only need to calculate the circle points (those points that make up the border of the circle) for the first quadrant, then mirror them. Should greatly cut down on computation time.
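A rough sketch of the radius search from step 2, sampling the circle at fixed angular steps rather than with true Bresenham points (again `is_inside` is an assumed pixel test, and the sampling density is a guess):

```python
from math import cos, sin, pi

def circle_crossings(is_inside, px, py, radius, samples=720):
    """Count black/white transitions along a circle of the given radius."""
    crossings, prev = 0, None
    for i in range(samples + 1):                     # the extra sample closes the circle
        a = 2.0 * pi * i / samples
        v = is_inside(int(round(px + radius * cos(a))), int(round(py + radius * sin(a))))
        if prev is not None and v != prev:
            crossings += 1
        prev = v
    return crossings

def bracket_radius(is_inside, px, py, start_radius, max_iters=60):
    """Shrink/grow the radius until the circle cuts the border in (at most) two places."""
    r = start_radius
    for _ in range(max_iters):
        c = circle_crossings(is_inside, px, py, r)
        if c > 2:
            r /= 2.0                                 # too many intersections: halve the radius
        elif c == 0:
            r *= 1.5                                 # no intersection: grow by 50 %
        else:
            break                                    # one or two crossings: bracketed
    return r
```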
If you define the distance as 'the minimum number of steps needed to get from the start pixel to any pixel on the margin', then this problem can be solved with any shortest-path search algorithm, such as breadth-first search or, even better, A* search.
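A small Python sketch of that step-count formulation with plain breadth-first search (4-connected steps; `is_border` is an assumed pixel test):

```python
from collections import deque

def steps_to_border(is_border, start, width, height):
    """BFS over the pixel grid; the first border pixel reached gives the minimum step count."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y), steps = queue.popleft()
        if is_border(x, y):
            return (x, y), steps
        for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append(((nx, ny), steps + 1))
    return None, None
```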
Suppose I have an image of a scene as depicted above: a sort of pole with a blob on it, next to possibly similar objects with no blobs.
How can I find the blob marked by the red circle (i.e. produce a binary image indicating which pixels belong to the blob)?
Note that the pole together with the blob may be rotated arbitrarily and also size may vary.
Can you try to do it in the 4 steps below?
Circle detection like: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features)
Line detection, like: Finding location of rectangles in an image with OpenCV
Identify rectangle position by combining neighboring lines (For each line segment you have the start and end point position, you also know the direction of each line segment. So that you can figure out if two connecting line segments (whose endpoints are close) are orthogonal. Your goal is to find 3 such segments for each rectangle.)
Check the relative position of each circle and rectangle to see if any pair can form the knob shape.
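Steps 1 and 2 could look roughly like this with OpenCV's Hough transforms (the file name and all threshold/radius parameters below are placeholder values you would have to tune for your images):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
blur = cv2.medianBlur(img, 5)

# step 1: circle candidates (the blob) via the Hough circle transform
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

# step 2: line candidates (the pole / rectangle sides) via the probabilistic Hough transform
edges = cv2.Canny(blur, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)

# steps 3-4 (not shown): group nearly-orthogonal line segments with close endpoints into
# rectangles, then keep circle/rectangle pairs whose relative position matches the knob shape.
```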
One approach could be to use the Viola-Jones object detection framework.
Though the framework is mostly used for face detection, it is actually designed for generic objects that you feed to the algorithm.
The basic idea is to feed samples of the "good object" (what you are looking for) and of "bad objects" to a machine learning algorithm, which generates patterns from the images as its features.
During classification, the algorithm slides a window over the image and searches for a "match" to the object (a window for which the classifier returns a positive answer).
The algorithm uses supervised learning and thus requires a labeled set of examples (both positive and negative ones).
I'm sure there is some boundary-map algorithm in image processing to do this.
Otherwise, here is a quick fix: pick a pixel at the center of the "undiscovered zone", which initially is the whole image.
Trace horizontal and vertical lines in the 4 directions, each ending at the border of the zone, and find where the value changes from 0 to 1 or vice versa. Follow each such value switch and complete the boundary of each figure (Step A).
Do the same for the zones that are still undiscovered: start at some center point and skim through the lines connecting that center to the image border or to a pixel on the boundary of a known zone.
In Step A, you can also check whether the boundary you traced is a line or a curve. Whenever it is a curve, you only need two points on it - points at some distance from one another, for the accuracy of the calculation. The lines perpendicular to the tangents at these two points intersect at the center of the circle (red in your figure).
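A tiny Python sketch of that last step, intersecting the two normals (here `p1`, `p2` are the two boundary points and `t1`, `t2` their local tangent directions; this fails if the two normals are parallel):

```python
import numpy as np

def circle_center(p1, t1, p2, t2):
    """Intersect the lines through p1 and p2 that are perpendicular to t1 and t2."""
    p1, t1, p2, t2 = (np.asarray(v, dtype=float) for v in (p1, t1, p2, t2))
    n1 = np.array([-t1[1], t1[0]])                   # tangent rotated by 90 degrees
    n2 = np.array([-t2[1], t2[0]])
    # solve p1 + a * n1 == p2 + b * n2 for a and b
    a, _ = np.linalg.solve(np.column_stack([n1, -n2]), p2 - p1)
    return p1 + a * n1
```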
You can segment the image. Then use only the pixels in the segments to contribute to a Hough-transform to find the circles.
Then you will only have segments with circles in them. You can use a modified Hough transform to find rectangles. The 'best' rectangle and circle combination will then be your match. This is very computationally intensive.
Another approach, if you already have these binary pictures, is to transform the shape into a (for example, 256-bin) signature by plotting the distance to the centroid against the distance travelled along the edge. If you start at the point furthest away from the centroid, you have a fairly rotation-robust feature vector.
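A short sketch of that signature, assuming the boundary points are already ordered along the edge and roughly evenly spaced (otherwise resample by arc length first):

```python
import numpy as np

def boundary_signature(boundary_points, centroid, n_bins=256):
    """Distance-to-centroid signature along the ordered boundary, resampled to n_bins samples
    and rotated so it starts at the point farthest from the centroid (rotation robustness)."""
    pts = np.asarray(boundary_points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(centroid, dtype=float), axis=1)
    start = int(np.argmax(d))                 # begin at the point farthest from the centroid
    d = np.roll(d, -start)
    # resample to a fixed number of bins so shapes of different sizes are comparable
    src = np.linspace(0.0, 1.0, len(d), endpoint=False)
    dst = np.linspace(0.0, 1.0, n_bins, endpoint=False)
    sig = np.interp(dst, src, d)
    return sig / sig.max()                    # optional normalisation for scale invariance
```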