Input: a body and some closed space. Both are represented as meshes (or BReps, if you like). Initially the body does not intersect the boundary of the space.
The problem is to find all possible directions in which the body can move. For example, in the following picture, the body can move only in directions from (-1,0) to (0,1). If the body has a circular (or spherical) surface, it is OK to return directions with some step (for example, for the picture below, the output could be (-1,0), (-pi/4,pi/4), (0,1) with step = 3).
Output: the set of directions in which the body can move.
The problem must be solved in both 2D and 3D space.
You want to work in the configuration space. Basically increase the size of your boundary based on the shape of your body, then treat the body as a point object. What's left are all the valid positions of the body. Of course, if your body is not a circle and can rotate, then your configuration space is no longer 2D or 3D. It has as many dimensions as your body has degrees of freedom, so 6 for a rigid body that can translate and rotate.
This is a well known problem in robotic motion planning. Google for "configuration space" or "c-space", and "motion planning".
This is a good set of slides from a class at Carnegie Mellon:
Configuration Space Lecture
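To make the idea concrete in the simplest case: if the body is a disc of radius r and can only translate, the C-space obstacle is just the original obstacle grown by r. Here is a minimal sketch with shapely; the obstacle coordinates and r are made up for illustration:

    from shapely.geometry import Polygon, Point

    # Hypothetical obstacle boundary and a disc-shaped body of radius r.
    obstacle = Polygon([(0, 0), (4, 0), (4, 1), (0, 1)])
    r = 0.5

    # For a translating disc, the C-space obstacle is the obstacle dilated by the
    # disc radius; the body itself becomes a point.
    c_obstacle = obstacle.buffer(r)

    # The body (now a point) placed at position p collides iff p lies inside the
    # grown obstacle.
    p = Point(1.0, 1.2)
    print(c_obstacle.contains(p))   # True -> this position is not free

For a general polygonal body the dilation becomes a Minkowski sum of the obstacle with the reflected body, and once rotation is allowed you get one such slice per orientation.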
Displace the object (temporarily) by a vector (with a very small magnitude) representing the direction you want to test. Then run a collision detection algorithm between the object and the environment.
If there are no collisions, then the object can move in that direction. If there is a collision, then it can't.
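A minimal sketch of that test with shapely, assuming both the body and the environment boundary are available as polygons (the geometry, the step eps and the angular sampling are placeholders):

    from shapely.affinity import translate
    from shapely.geometry import Polygon
    import math

    def can_move(body, environment, angle, eps=1e-3):
        """True if nudging 'body' a tiny step eps along 'angle' causes no penetration."""
        moved = translate(body, xoff=eps * math.cos(angle), yoff=eps * math.sin(angle))
        # Use the overlap area as the collision test so that mere touching
        # (zero-area contact) does not count as a collision.
        return moved.intersection(environment).area < 1e-12

    # Hypothetical body and environment wall, both shapely polygons; the body
    # starts in contact with the wall on its right side.
    body = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
    wall = Polygon([(1, -5), (2, -5), (2, 5), (1, 5)])

    # Sample candidate directions with some angular step and keep the free ones.
    free_dirs = [a for a in (i * math.pi / 16 for i in range(32)) if can_move(body, wall, a)]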
I assume the body initially doesn't intersect the boundary of your space.
As long as the body does not touch the boundary (or is closer to it than some epsilon), your body can move freely.
So start with the full range [0, 2 * pi] of valid directions.
Iterate over all vertices of your body and for each one, check if it touches the boundary. If so, compute the normal of the boundary segment touching the body and remove the 180-degree interval centered at your negative normal direction from the set of valid directions.
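In code, the remaining bookkeeping is just an intersection of half-planes of directions. A rough 2D sketch, assuming you already have, for each touching vertex, the unit normal of the touched boundary segment pointing away from the boundary (towards the body):

    import math

    def valid_directions(contact_normals, step_deg=1.0):
        """contact_normals: unit normals of the touched boundary segments, pointing
        away from the boundary towards the body. Returns the sampled directions
        (in radians) in which the body can still move."""
        dirs = []
        n_steps = int(360 / step_deg)
        for i in range(n_steps):
            a = math.radians(i * step_deg)
            d = (math.cos(a), math.sin(a))
            # d is blocked if it points into any touched boundary segment,
            # i.e. has a negative component along that segment's normal.
            if all(d[0] * nx + d[1] * ny >= 0 for nx, ny in contact_normals):
                dirs.append(a)
        return dirs

    # Example: touching a horizontal floor below the body (normal points up);
    # only the upper half-plane of directions survives.
    print(valid_directions([(0.0, 1.0)], step_deg=45))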
I have a set of 3d points that lie in a plane. Somewhere on the plane, there will be a hole (which is represented by the lack of points), as in this picture:
I am trying to find the contour of this hole. Other solutions out there involve finding convex/concave hulls but those apply to the outer boundaries, rather than an inner one.
Is there an algorithm that does this?
If you know the plane (which you could determine by PCA), you can project all points into this plane and continue with the 2D coordinates. Thus, your problem reduces to finding boundary points in a 2D data set.
Your data looks as if it might be uniformly sampled (independently per axis). Then, a very simple check might be sufficient: Calculate the centroid of the - let's say 30 - nearest neighbors of a point. If the centroid is very far away from the original point, you are very likely on a boundary.
A second approach might be recording the directions in which you have neighbors. I.e. keep something like a bit field for the discretized directions (e.g. angles in 10° steps, which will give you 36 entries). Then, for every neighbor, calculate its direction and mark that direction, including a few of the adjacent directions, as occupied. E.g. if your neighbor is in the direction of 27.4°, you could mark the direction bits 1, 2, and 3 as occupied. This additional surrounding space will influence how fine-grained the result will be. You might also want to make it depend on the distance of the neighbor (i.e. treat the neighbors as circles and find the angular range that is spanned by the circle). Finally, check if all directions are occupied. If not, you are on a boundary.
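As a rough sketch of the first (centroid-of-neighbors) check, assuming the points have already been projected into the plane and sit in an (N, 2) numpy array, and using scipy's cKDTree for the neighbor queries (k and the threshold factor are guesses you would tune):

    import numpy as np
    from scipy.spatial import cKDTree

    def boundary_mask(points_2d, k=30, factor=0.5):
        """Flag points whose k nearest neighbors are lopsided to one side.
        A point counts as a boundary point when its neighborhood centroid is
        offset by more than factor * (mean neighbor distance)."""
        tree = cKDTree(points_2d)
        dists, idx = tree.query(points_2d, k=k + 1)   # +1: the first hit is the point itself
        neighbors = points_2d[idx[:, 1:]]             # (N, k, 2)
        centroids = neighbors.mean(axis=1)            # (N, 2)
        offset = np.linalg.norm(centroids - points_2d, axis=1)
        mean_dist = dists[:, 1:].mean(axis=1)
        return offset > factor * mean_dist

    # Example on a uniformly sampled square with a disc-shaped "hole" punched out.
    pts = np.random.uniform(-1, 1, size=(4000, 2))
    pts = pts[np.linalg.norm(pts, axis=1) > 0.4]
    on_boundary = boundary_mask(pts)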
Alpha shapes can give you both the inner and outer boundaries.
1. Convert to 2D by projecting the points onto your plane. See this related QA dealing with that: C++ plane interpolation from a set of points
2. Find holes in the 2D point set. Simply apply this related QA: Finding holes in 2d point sets?
3. Project the found holes back to 3D. Again, see the link in #1.
Sorry for the almost link-only answer, but both links are here on SO/SE and, combined, deal exactly with your issue. At first I wanted to flag your question as a duplicate and leave this in a comment, but this is more readable.
I am trying to make a 2D collision detection and resolution engine without physics elements like impulses, forces, etc. This is a good illustration of what I need, with two-polygon collision:
2D Polygon Collision Detection
I've learned how the Separating Axis Theorem works and how to use it to detect an intersection of two polygons and resolve it. However, I have a problem implementing multiple-body collision resolution. Here is an example of such a case:
Rectangular body A moves up with velocity V and intersects with two static triangles.
Which algorithm can I use to resolve such collisions and find amount and direction of displacement that I need to apply to the body to prevent the penetration?
We can simplify the problem if we add the following restriction: the body can move only along its velocity vector.
Then, there are at least two options.
If you have an algorithm which tells whether two bodies collide, you can run a binary search on the amount of movement (from 0.0*v to 1.0*v) after which there is still no collision with any of the bodies.
Then, move the body (red rectangle in your example) only by that amount.
If you have a finer algorithm which can tell you for two bodies the maximal amount of movement along the vector v after which there is still no collision, just take the minimum of that value among all the bodies (two triangles in your example).
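A sketch of the first (binary-search) option, assuming shapely polygons for the body and the obstacles, and assuming the free stretch along v is contiguous up to the first contact; all names and geometry are placeholders:

    from shapely.affinity import translate
    from shapely.geometry import Polygon

    def max_free_fraction(body, obstacles, v, iters=32):
        """Largest t in [0, 1] such that body translated by t*v overlaps no obstacle."""
        def collides(t):
            moved = translate(body, xoff=t * v[0], yoff=t * v[1])
            return any(moved.intersection(ob).area > 1e-12 for ob in obstacles)

        if collides(0.0):
            return 0.0          # already penetrating; nothing to do here
        if not collides(1.0):
            return 1.0          # the full step is free
        lo, hi = 0.0, 1.0
        for _ in range(iters):  # shrink [lo, hi] around the first contact
            mid = 0.5 * (lo + hi)
            if collides(mid):
                hi = mid
            else:
                lo = mid
        return lo

    # body: the red rectangle, obstacles: the two static triangles, v: velocity * dt.
    body = Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])
    tri1 = Polygon([(0, 2), (1, 2), (0.5, 1.2)])
    tri2 = Polygon([(1.5, 2), (2.5, 2), (2.0, 1.2)])
    t = max_free_fraction(body, [tri1, tri2], v=(0.0, 1.0))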
Suppose I have an image of a scene as depicted above. A sort of a pole with a blob on it next to possibly similar objects with no blobs.
How can I find the blob marked by the red circle (as a binary image indicating which pixels belong to the blob)?
Note that the pole together with the blob may be rotated arbitrarily, and its size may also vary.
Can you try to do it in the 4 steps below?
Circle detection like: writing robust (color and size invariant) circle detection with opencv (based on Hough transform or other features)
Line detection, like: Finding location of rectangles in an image with OpenCV
Identify the rectangle position by combining neighboring lines. (For each line segment you have the start and end point positions, and you also know its direction, so you can figure out whether two connecting line segments (whose endpoints are close) are orthogonal. Your goal is to find 3 such segments for each rectangle.)
Check the relative position of each circle and rectangle to see if any pair can form the knob shape.
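Steps 1 and 2 might look roughly like this with OpenCV; all parameter values are guesses that would need tuning for your image, and "scene.png" is a placeholder:

    import cv2
    import numpy as np

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
    img = cv2.medianBlur(img, 5)

    # Step 1: circle candidates (the blob) via the Hough circle transform.
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=60)

    # Step 2: line candidates (the pole's sides) via the probabilistic Hough transform.
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=30, maxLineGap=5)

    # Steps 3 and 4 would then group nearly orthogonal, nearly touching segments
    # into a rectangle and test whether a detected circle sits at one of its ends.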
One approach could be using Viola-Jones object detection framework.
Though the framework is mostly used for face detection, it is actually designed for generic objects that you feed to the algorithm.
The basic idea is to feed samples of the "good object" (what you are looking for) and "bad objects" to a machine learning algorithm, which generates patterns from the images as its features.
During classification, the algorithm slides a window over the image and searches for a "match" with the object (i.e. a window for which the classifier returns a positive answer).
The algorithm uses supervised learning and thus requires a labeled set of examples (both positive and negative ones).
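If you use OpenCV's implementation of this framework, the detection side (after a cascade has been trained separately from your labeled samples, e.g. with opencv_traincascade) looks roughly like this; the cascade and image file names are placeholders:

    import cv2

    # 'blob_cascade.xml' is a hypothetical cascade trained on your positive/negative samples.
    cascade = cv2.CascadeClassifier("blob_cascade.xml")

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    # Sliding-window detection over several scales.
    hits = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=4)

    for (x, y, w, h) in hits:
        print("candidate object at", x, y, w, h)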
I'm sure there is some boundary-map algorithm in image processing to do this.
Otherwise, here is a quick fix: pick a pixel at the center of the "undiscovered zone", which initially is the whole image. Trace horizontal and vertical lines in the 4 directions, each ending at the borders of the zone, and find where the value changes from 0 to 1 or vice versa. Trace each such value switch and complete the boundary of each figure (Step-A). Do the same for the zones that are still undiscovered: start at some center point and skim through the lines connecting the center to the image border or to a pixel at the boundary of a known zone.
In Step-A, you can also check whether the boundary you traced is a line or a curve. Whenever it is a curve, you need only two points on it, at some distance from one another for the accuracy of the calculation. The lines perpendicular to the curve at these two points of tangency intersect at the center of the red circle in your figure.
You can segment the image, then use only the pixels in the segments to contribute to a Hough transform to find the circles.
Then you will only have segments with circles in them. You can use a modified Hough transform to find rectangles. The 'best' rectangle and square combination will then be your match. This is very computationally intensive.
Another approach, if you already have these binary pictures, is to transform them into a (for example 256-bin) sample by taking the distance to the centroid as a function of the distance travelled along the edge. If you start at the point furthest away from the centroid, you have a fairly rotationally robust feature vector.
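The centroid-distance signature from the last paragraph could be sketched like this, assuming you already have the blob as a binary mask and OpenCV 4's findContours return signature:

    import cv2
    import numpy as np

    def centroid_distance_signature(mask, n_bins=256):
        """Distance from the shape's centroid, sampled at n_bins points along its contour."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
        centroid = contour.mean(axis=0)
        dists = np.linalg.norm(contour - centroid, axis=1)
        # Start at the point furthest from the centroid for some rotation robustness.
        start = int(dists.argmax())
        dists = np.roll(dists, -start)
        # Resample to a fixed number of bins so shapes of different sizes are comparable.
        idx = np.linspace(0, len(dists) - 1, n_bins).astype(int)
        return dists[idx]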
Does anybody know of a good algorithm for this task:
a multi-polygon contains the reserved areas
find an empty position for the given polygon which is closest to its original position but does not intersect the reserved areas
I have implemented a very basic algorithm which does the job, but far from optimally.
Thank You!
Edit:
My solution basically does the following:
move the given polygon in all possible directions dx and dy
check whether the new intersection is less than the previous intersection
if so, use the new position and make sure that you don't move your polygon back and forth between the same positions
repeat these steps a maximum of N times
Example: this is intended for placing pieces of text which should not overlap with each other.
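A sketch of that greedy search with shapely, where the quantity being reduced is the overlap area with the reserved areas; requiring a strict decrease at every step is what prevents the back-and-forth oscillation mentioned above. The step size and geometry are just placeholders:

    from shapely.affinity import translate
    from shapely.geometry import Polygon

    def nudge_free(poly, reserved, step=1.0, max_iter=100):
        """Greedily shift 'poly' until it no longer overlaps 'reserved' (a (multi)polygon)."""
        candidates = [(step, 0), (-step, 0), (0, step), (0, -step),
                      (step, step), (step, -step), (-step, step), (-step, -step)]
        current = poly
        overlap = current.intersection(reserved).area
        for _ in range(max_iter):
            if overlap == 0:
                break
            best = None
            for dx, dy in candidates:
                moved = translate(current, xoff=dx, yoff=dy)
                a = moved.intersection(reserved).area
                if a < overlap and (best is None or a < best[0]):
                    best = (a, moved)
            if best is None:           # stuck in a local minimum; bail out
                break
            overlap, current = best
        return current

    label = Polygon([(0, 0), (4, 0), (4, 1), (0, 1)])        # the text box to place
    reserved = Polygon([(2, -1), (6, -1), (6, 2), (2, 2)])   # already occupied area
    placed = nudge_free(label, reserved, step=0.5)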
One method that immediately pops into my mind is to shoot a ray (i.e. measure a line segment) from the original position to every vertex of the polygon. Compare those distances and, based on that, narrow it down to the closest edge of the polygon. Drop a perpendicular from the original position onto that edge, and you'll get the closest point. If the vertex comparisons don't lead you down the right path, just shoot off lines in random directions and stop when you're happy with the result. It doesn't sound like you require optimality.
Let's look at the original problem: making sure that one piece of text doesn't overlap another. Presumably this is for labelling a map. The way I do it is this: draw the text invisibly, checking for overlap (by using a specialised graphics context that instead of drawing a pixel, checks whether a pixel is already there) then try another position along the line on which the text is to be placed - usually a street. I try the middle of the line first, then successive positions further and further left and right of the middle. If that fails I try again with a condensed (narrower) font.
I have a map that is cut up into a number of regions by borders (contours) like countries on a world map. Each region has a certain surface-cover class S (e.g. 0 for water, 0.03 for grass...). The borders are defined by:
what value of S is on either side of it (0.03 on one side, 0.0 on the other, in the example below)
how many points the border is made of (n=7 in example below), and
n coordinate pairs (x, y).
This is one example.
0.0300 0.0000 7
2660607.5 6332685.5 2660565.0 6332690.5 2660541.5 6332794.5
2660621.7 6332860.5 2660673.8 6332770.5 2660669.0 6332709.5
2660607.5 6332685.5
I want to make a raster map in which each pixel has the value of S corresponding to the region in which the center of the pixel falls.
Note that the borders represent step changes in S. The various values of S represent discrete classes (e.g. grass or water), and are not values that can be averaged (i.e. no wet grass!).
Also note that not all borders are closed loops like the example above. This is a bit like country borders: e.g. the US-Canada border isn't a closed loop, but rather a line joining up at each end with two other borders: the Canada-ocean and the US-ocean "borders". (Closed-loop borders do exist nevertheless!)
Can anyone point me to an algorithm that can do this? I don't want to reinvent the wheel!
The general case for processing this sort of geometry in vector form can be quite difficult, especially since nothing about the structure you describe requires the geometry to be consistent. However, since you just want to rasterize it, then treating the problem as a Voronoi diagram of line segments can be more robust.
Approximating the Voronoi diagram can be done graphically in OpenGL by drawing each line segment as a pair of quads making a tent shape. The z-buffer is used to make the closest quad take precedence, and thus color the pixel based on whichever line is closest. The difference here is that you will want to color the polygons based on which side of the line they are on, instead of which line they represent. A good paper discussing a similar algorithm is Hoff et al.'s Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware.
The 3d geometry will look something like this sketch with 3 red/yellow segments and 1 blue/green segment:
This procedure doesn't require you to convert anything into a closed loop, and doesn't require any fancy geometry libraries. Everything is handled by the z-buffer, and should be fast enough to run in real time on any modern graphics card. A refinement would be to use homogeneous coordinates to make the bases project to infinity.
I implemented this algorithm in a Python script at http://www.pasteall.org/9062/python. One interesting caveat is that using cones to cap the ends of the lines didn't work without distorting the shape of the cone, because the cones representing the end points of the segments were z-fighting. For the sample geometry you provided, the output looks like this:
I'd recommend using a geometry algorithm library like CGAL. In particular, the second example on the "2D Polygons" page of the reference manual should provide what you need. You can define each "border" as a polygon and check whether certain points are inside the polygons. So basically it would be something like
    for every y in raster grid
        for every x in raster grid
            for each defined polygon p
                if point(x,y) is inside polygon p
                    pixel[X][Y] = inside_color[p]
I'm not so sure about what to do with the outside_color because the outside regions will overlap, won't they? Anyway, looking at your example, every outside region could be water, so you just could do a final
if pixel[X][Y] still undefined then pixel[X][Y] = water_value
(or as an alternative, set pixel[X][Y] to water_value before iterating through the polygon list)
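The same loop in Python with shapely instead of CGAL, just to make the structure concrete (the regions, grid and values are placeholders):

    import numpy as np
    from shapely.geometry import Point, Polygon

    # Hypothetical regions: (polygon, S value) pairs; water is the fallback.
    regions = [(Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]), 0.03)]
    water_value = 0.0

    nx, ny = 100, 100
    x0, y0, dx, dy = 0.0, 0.0, 0.2, 0.2        # raster origin and pixel size
    raster = np.full((ny, nx), water_value)

    for j in range(ny):
        for i in range(nx):
            # Use the pixel center, as the question asks.
            p = Point(x0 + (i + 0.5) * dx, y0 + (j + 0.5) * dy)
            for poly, s in regions:
                if poly.contains(p):
                    raster[j, i] = s
                    break

For anything but small grids you would replace the brute-force inner loop with a spatial index or a proper rasterizer (e.g. rasterio.features.rasterize), but the structure stays the same.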
First, convert all your borders into closed loops (possibly including the edges of your map), and identify the inside colour. This has to be possible; otherwise you have an inconsistency in your data.
Use Bresenham's algorithm to draw all the border lines on your map, in a single unused colour.
Store a list of all the "border pixels" as you do this.
Then, for each border:
triangulate it (Delaunay)
iterate through the triangles till you find one whose centre is inside your border (point-in-polygon test)
flood-fill your map at that point in the border's interior colour
Once you have filled in all the interior regions, iterate through the list of border pixels, seeing which colour each one should be.
Choose two unused colors as markers, "empty" and "border".
Fill the whole area with the "empty" color.
Draw all region borders in the "border" color.
Iterate through the points to find the first one with the "empty" color.
Determine which region it belongs to (google "point inside polygon"; you will probably need to make your borders closed, as Martin DeMello suggested).
Perform a flood-fill from this point with the color of that region.
Go to the next "empty" point (no need to restart the search, just continue).
And so on, until no "empty" points remain.
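A compact sketch of the flood-fill part on a numpy grid, assuming the borders have already been drawn into the array with the "border" marker and that each region gets an integer id that you later map to its S value:

    from collections import deque
    import numpy as np

    EMPTY, BORDER = -1, -2            # marker "colors"

    def flood_fill(grid, start, value):
        """4-connected flood fill of the EMPTY area reachable from 'start'."""
        h, w = grid.shape
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            if 0 <= y < h and 0 <= x < w and grid[y, x] == EMPTY:
                grid[y, x] = value
                queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

    # grid: pre-filled with EMPTY, with BORDER pixels drawn in (toy example below).
    grid = np.full((50, 50), EMPTY)
    grid[:, 25] = BORDER              # a toy vertical border
    flood_fill(grid, (0, 0), 1)       # left region gets region id 1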
The way I've solved this is as follows:
March along each segment; stop at regular intervals L.
At each stop, place a tracer point immediately to the left and to the right of the segment (at a certain small distance d from the segment). The tracer points are attributed the left and right S-value, respectively.
Do a nearest-neighbour interpolation. Each point on the raster grid is attributed the S of the nearest tracer point.
This works even when there are non-closed lines, e.g. at the edge of the map.
This is not a "perfect" analytical algorithm. There are two parameters: L and d. The algorithm works beautifully as long as d << L. Otherwise you can get inaccuracies (usually single-pixel) near segment junctions, especially those with acute angles.
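A sketch of the tracer-point idea with numpy/scipy; the marching is simplified to stepping along each straight segment, and L, d and the raster grid are placeholder values:

    import numpy as np
    from scipy.spatial import cKDTree

    def tracer_points(p0, p1, s_left, s_right, L=5.0, d=0.5):
        """Tracer points a small distance d either side of segment p0->p1, every L units."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        seg = p1 - p0
        length = np.linalg.norm(seg)
        t = np.arange(0.0, length, L) / max(length, 1e-12)
        pts = p0 + np.outer(t, seg)
        normal = np.array([-seg[1], seg[0]]) / max(length, 1e-12)   # unit left normal
        left = pts + d * normal
        right = pts - d * normal
        coords = np.vstack([left, right])
        values = np.concatenate([np.full(len(left), s_left), np.full(len(right), s_right)])
        return coords, values

    # Collect tracer points from every border segment, then classify pixel centers
    # by their nearest tracer point (toy single-segment example).
    coords, values = tracer_points((0, 0), (100, 0), s_left=0.03, s_right=0.0)
    tree = cKDTree(coords)
    xs, ys = np.meshgrid(np.arange(0, 100, 1.0) + 0.5, np.arange(-50, 50, 1.0) + 0.5)
    _, nearest = tree.query(np.column_stack([xs.ravel(), ys.ravel()]))
    raster = values[nearest].reshape(xs.shape)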