Find intersections in GPS tracks - algorithm

I'm embarking on a personal project (planning to start in Python, if that turns out to be relevant), and I'm looking for some algorithms advice.
One part of my project will entail finding intersections between GPS tracks. Not like identifying road intersections, but identifying places where two different tracks cross.
I can't think of a way to do it that isn't heinously computationally intensive, but I feel like this is a problem that has probably been studied/solved in the past. Is there a name for this type of problem? Or are there algorithms that I should look into?

No "clever" algorithm there. You use all positions of a given mobile object to make the track (really, it's more the "history of movement" rather than a track), you obtain a polyline.
To check intersections between two mobile elements, you'll have to find where the two polylines intersects. You simply check each segment of the first track with each segment of the second track.
You can optimize a lot by using a bounding rectangle around the polylines: if the two bounding rectangles do not intersect, no need to check, there is no intersections. Otherwise, get the intersection, and you can directly ignore all segments that are outside this intersection area.
Once an intersection is found, you can then compare the plot timestamp in order to see if the two mobile objects did really meet, or if they simply crossed the other one's path at a different time.
If needed, once you found an intersection on projected polyline, you can also check the altitude, too, because the polyline is indeed in 3D and not only in 2D - GPS give altitude, too.
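For illustration, here is a minimal Python sketch of the approach above: a bounding-rectangle prefilter followed by the brute-force segment-against-segment check. The names are illustrative, the orientation test only reports proper crossings (collinear overlaps and shared endpoints need extra care), and it treats coordinates as planar, which is fine for short local tracks.

    # Sketch: pairwise segment intersection with a bounding-box prefilter.
    # A track is a list of (x, y) points; timestamps/altitude are omitted.

    def bbox(points):
        xs, ys = zip(*points)
        return min(xs), min(ys), max(xs), max(ys)

    def bboxes_overlap(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def segments_intersect(p1, p2, p3, p4):
        # Orientation tests: the segments cross iff each segment's endpoints
        # lie on opposite sides of the other segment (proper crossings only).
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
        d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    def track_intersections(track_a, track_b):
        # O(n*m) pairwise check after the cheap bounding-rectangle rejection.
        if not bboxes_overlap(bbox(track_a), bbox(track_b)):
            return []
        return [(i, j)
                for i in range(len(track_a) - 1)
                for j in range(len(track_b) - 1)
                if segments_intersect(track_a[i], track_a[i + 1],
                                      track_b[j], track_b[j + 1])]

The returned index pairs tell you which segments crossed, so you can then compare the timestamps of those segments' endpoints as described.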

Related

Algorithm for creating a specific geometric structure

I have observed some applications that create a geometric structure apparently from just a set of touch points, like this example:
I wonder which algorithms can possibly help me to recreate such geometric structures?
UPDATE
In 3D printing, sometimes a support structure is needed:
Support is needed because some regions of the 3D object, i.e. overhangs, would collapse while printing. The support structure is supposed to connect overhangs either to the print floor or to the 3D object itself. The geometric structure shown in the screenshot above is actually a sample support structure.
I am not a specialist in that matter and I may be missing important issues. So here is what I would naively do.
The triangles having an external normal pointing downward will reveal the overhangs. When projected vertically and merged along common edges, they define polygonal regions of the base plane. You first have to build those projected polygons, find their intersections, and order the intersections by Z. (You might also want to consider the facing polygons to take the surface thickness into account.)
Now, for every intersection polygon, you draw verticals down to the one just below. The positions of the verticals might be sampled from a regular grid or in some other way, to tune the density. You might also consider running those pillars continuously from the base up to the upper surface, possibly stopping some of them earlier.
The key ingredient in this procedure is a good polygon intersection algorithm.
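As an illustration of that last point, here is a tiny sketch using the shapely library (my choice, not something this answer prescribes; any robust polygon-clipping library would do) to intersect two hypothetical projected overhang polygons:

    # Sketch: intersecting two projected overhang regions with shapely.
    from shapely.geometry import Polygon

    lower = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])   # hypothetical regions
    upper = Polygon([(2, 1), (6, 1), (6, 4), (2, 4)])

    overlap = lower.intersection(upper)
    if not overlap.is_empty:
        print(overlap.area)                     # footprint shared by both
        print(list(overlap.exterior.coords))    # where pillars could connect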

Can an unsorted binary tree be balanced using a distance-function?

I've got the following theoretical problem:
I have some number n of cuboids in 3-dimensional space.
They are aligned to the coordinate system, so that each cuboid can be described by a point (x,y,z) and dimensions (dimX,dimY,dimZ).
I want to organize these cuboids in a way that lets me check whether a newly inserted cuboid intersects with one of the existing ones (collision detection).
To do this I decided to use hierarchical bounding boxes.
So in sum, I have a binary-tree structure of bounding volumes.
Insertion is then done by recursively determining the distance to both children (= the distance between the centers of the two cuboids) and descending into the path with the smaller distance.
Collision detection works similarly, but we take all bounding volumes in a sub-path which intersect a given cuboid.
The tricky part is how to balance this tree to get better performance when some cuboids are very close to each other and others are far away.
So far, I've found no way to use e.g. an AVL tree, because then I'd have to be able to compare two cuboids in some way that does not break the conditions on which collision detection depends.
P.S.: I know there are libraries for this, but I want to understand the principles of collision detection (e.g. in games) in detail, and therefore want to implement it myself.
I've now tried it with space partitioning instead of object partitioning. That's not exactly what I wanted to do, but I found much more helpful information about it, e.g.: https://en.wikipedia.org/wiki/Kd-tree
With this information it should be possible to implement it.
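For reference, the basic overlap test the whole structure is built on is cheap; here is a short Python sketch using the question's (x,y,z) + (dimX,dimY,dimZ) representation (names are illustrative):

    # Sketch: axis-aligned cuboid overlap test.
    from dataclasses import dataclass

    @dataclass
    class Cuboid:
        # (x, y, z) is the minimum corner; (dx, dy, dz) are the extents.
        x: float
        y: float
        z: float
        dx: float
        dy: float
        dz: float

    def cuboids_intersect(a: Cuboid, b: Cuboid) -> bool:
        # Axis-aligned boxes intersect iff their ranges overlap on all three
        # axes; strict '<' means merely touching faces don't count as a hit.
        return (a.x < b.x + b.dx and b.x < a.x + a.dx and
                a.y < b.y + b.dy and b.y < a.y + a.dy and
                a.z < b.z + b.dz and b.z < a.z + a.dz)

Whatever tree you settle on (BVH or kd-tree), this per-pair test is what gets called at the leaves.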

How to fit more than one line to data points

I am trying to fit more than one line to a list of points in 2D. My points are quite low in number (16 or 32).
These points come from a simulated environment of a robot with laser range finders attached to its side. If the points lie on a line, they detected a wall; if not, they detected an obstacle. I am trying to detect the walls and calculate their intersection, and for this I thought the best idea would be to fit lines to the dataset.
Fitting one line to a set of points is not a problem, if we know all those points lie on or around a line.
My problem is that I don't know how to detect which sets of points should be classified as belonging to the same line and which should not, for each line. Also, I don't even know the number of points on a line, though naturally it would be best to detect the longest possible line segment.
How would you solve this problem? If I look at all the possibilities, for example at groups of 5 points out of all 32 points, that gives 32 choose 5 = 201376 possibilities. I think it would take way too much time to try all the possibilities and fit a line to every 5-tuple.
So what would be a better algorithm, one that would run much faster? I could connect points within a distance limit and create polylines. But even connecting the points is a hard task, as the distances between neighboring points change even within a single line.
Do you think it is possible to do some kind of Hough transform on a discrete dataset with such a low number of entries?
Note: if this problem is too hard to solve, I was thinking about using the order of the sensors for filtering. That way the algorithm would be easier, but if there is a small obstacle in front of a wall, it would disrupt the continuity of the line and thus break the wall into two halves.
A good way to find lines in noisy point data like this is to use RANSAC. Standard RANSAC usage is to pick the best hypothesis (= a line in this case), but you can just as easily pick the best 2 or 4 lines given your data. Have a look at the example here:
http://www.janeriksolem.net/2009/06/ransac-using-python.html
Python code is available here
http://www.scipy.org/Cookbook/RANSAC
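In case those links rot, here is a rough self-contained sketch of the idea in Python/NumPy (the iteration count and tolerance are placeholders you'd tune): sample two points, count how many points fall within a tolerance band of the line through them, and keep the best model. To get several walls, re-run it on the points left over after removing each detected line's inliers.

    # Sketch: single-line RANSAC; run repeatedly for multiple walls.
    import numpy as np

    def ransac_line(points, iters=200, tol=0.05, seed=None):
        rng = np.random.default_rng(seed)
        pts = np.asarray(points, dtype=float)
        best, best_mask, best_count = None, None, -1
        for _ in range(iters):
            i, j = rng.choice(len(pts), size=2, replace=False)
            d = pts[j] - pts[i]
            norm = np.linalg.norm(d)
            if norm == 0:
                continue                       # duplicate sample points
            d = d / norm
            # Perpendicular distance of every point to the candidate line.
            off = pts - pts[i]
            dist = np.abs(off[:, 0] * d[1] - off[:, 1] * d[0])
            mask = dist < tol
            if mask.sum() > best_count:
                best, best_mask, best_count = (pts[i], d), mask, mask.sum()
        return best, best_mask                 # (point, direction), inliers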
The first thing I would point out is that you seem to be ignoring a vital aspect of the data: you know which sensors (or readings) are adjacent to each other. If you have N laser sensors, you know where they are bolted to the robot, and if you are rotating a sensor, you know the order (and position) in which the measurements are taken. So connecting points together to form a piecewise linear fit (polylines) should be trivial. Once you have these correspondences, you could take each set of four points and determine whether they can be modeled effectively by only 2 lines, or something similar.
Secondly, it's well known that finding a globally optimal fit of even two lines to an arbitrary set of points is NP-hard, since it is essentially k-means clustering with line models instead of centroids, so I would not expect to find an efficient exact algorithm for this. When I used the Hough transform it was for finding corners based on pixel intensities; in theory it is probably applicable to this sort of problem, but it is just an approximation, and it will probably take a fair bit of work to figure out and implement.
I hate to suggest it, but it seems like you might benefit from looking at the problem in a slightly different way. When I was working in autonomous navigation with a laser range finder, we solved this problem by discretizing the space into an occupancy grid, which is the default approach. Of course, this treats the walls as just another object, which is not a particularly outrageous idea IMO. Beyond that, can you assume you have a map of the walls and you are just trying to find the obstacles and localize (find the pose of) the robot? If so, there are a large number of papers and tech reports on this problem.
Part of the solution could be (it's where I would start) to investigate the robot itself. Questions such as:
How fast is the robot turning/moving?
At what interval is the robot shooting the laser?
Where was the robot and what was its orientation when a point was found?
The answers to these questions might help you more than trying to find some algorithm that relies on the points only, especially when there are so very few.
The answers also give you more information/points. E.g., if you know the robot was at a specific position when it detected a point, there are no points in between the position of the robot and the detected point. That is very valuable.
For all the triads, fit a line through the three points and compute how much the line deviates from them.
Then use only the good (little-deviating) triads and merge them if they have two points in common; then grow the sets by appending all triads that have at least two points in the set.
As the last step, you can ditch the triads that haven't been merged with any other but share an element with a result set of at least 4 elements (to cope with crossing lines), while keeping those triads that did not merge with anyone and have no element in common with the larger sets (this would yield the three points on the right side of your example as one of the lines).
I presume this would find the left line and the bottom line in the second step, and the right line in the third.
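A rough Python sketch of the triad-scoring step (the tolerance is a placeholder; merging the surviving triads is plain set bookkeeping):

    # Sketch: score every 3-point subset by its deviation from a best-fit line.
    import numpy as np
    from itertools import combinations

    def triad_deviation(pts):
        # Total-least-squares line via the principal direction of the triad.
        pts = np.asarray(pts, dtype=float)
        centered = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered)
        d = vt[0]                                    # best-fit line direction
        perp = centered - np.outer(centered @ d, d)  # residual components
        return np.sqrt((perp ** 2).sum(axis=1).mean())

    def good_triads(points, tol=0.05):
        points = np.asarray(points, dtype=float)
        return [t for t in combinations(range(len(points)), 3)
                if triad_deviation(points[list(t)]) < tol]

With 32 points there are only C(32,3) = 4960 triads, so the brute-force pass is cheap.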

Reflection paths between points in 2D

Just wondering if there was a nice (already implemented/documented) algorithm to do the following
(illustration: http://img697.imageshack.us/img697/7444/sdfhbsf.jpg)
Given any shape (without crossing edges) and two points inside that shape, compute all the paths between the two points such that all reflections are perfect reflections. The path lengths should be limited to a certain maximum, otherwise there are infinitely many solutions. I'm not interested in just shooting out rays to try to guess how close I can get; I'm interested in algorithms that can do it perfectly. Search based, not guess/improvement based.
I think you can do better than computing fans. Call your points A and B. You want to find paths of reflections from A to B.
Start off by reflecting A in an edge, and call the reflection A1. Can you draw a line from A1 to B that only hits that edge? If yes, that means you have a path from A to B that reflects on the edge. Do this for all the edges and you'll get all the single reflection paths that exist. It should be easy to construct these paths using the properties of reflections. Along the way, you need to check that the paths are legal, i.e. they do not cross other edges.
You can continue to find paths consisting of two reflections by reflecting all the first generation reflections of A in all the edges, and checking to see whether a line can be drawn from those points through the reflecting edge to B. Keep on doing this search until the distance of the reflected points from B exceeds a threshold.
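The mirroring primitive here is simple; a small Python sketch (names illustrative):

    # Sketch: mirror a point across the infinite line through an edge.
    import numpy as np

    def reflect_point(p, a, b):
        p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
        d = (b - a) / np.linalg.norm(b - a)     # unit direction of the edge
        foot = a + d * np.dot(p - a, d)         # foot of the perpendicular
        return 2 * foot - p                     # mirror image of p

    # A single-reflection path via edge (a, b) exists if the segment from
    # reflect_point(A, a, b) to B crosses that edge: a standard
    # segment-intersection test, plus checks against the other edges.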
I hope this makes sense. It should be easier than chasing fans and dealing with their breakups, even though you're still going to have to do some work.
By the way, this is a corner of a well studied field of billiards on tables of various geometries. Of course, a billiard ball bounces off the side of a table the same way light bounces off a mirror, so this is just another way of thinking of reflections. You can delve into this with search terms like polygonal billiards unfolding illumination, although the mathematicians tend to dwell on finding cases where there are no pool shots between two points on a polygonal table, as opposed to directly solving the problem you've posed.
Think not in terms of rays but of fans. A fan would be all the rays emanating from one point and hitting a wall. You can then check whether the fan contains the second point, and if it does, you can determine which ray hits it. Once a fan hits a wall, you can compute the reflected fan by transposing its origin onto the outside of the wall; by doing this, all fans are basically triangle shaped. There are some complications when a fan partially hits a wall and has to be broken into pieces to continue. Anyway, this tree of reflected fans can be traversed breadth first or depth first, since you're limiting the total distance.
You may also want to look into radiosity methods, which is probably similar to what I've just described, but is usually done in 3d.
I do not know of any existing solutions to such a problem. Good luck to you if you find one, but in case you don't, the first step toward a complete but exponential (with regard to the wall count) algorithm would be to break it into two parts:
Given an ordered subset of walls A, B, C and points P1, P2, calculate whether a route is possible (either no solution or a single unique one).
Then generate permutations of your walls until you exceed whatever limit you had in mind.
The first part can be solved by a simple set of equations to find the necessary angles for each ray bounce. Then checking each line against existing lines for collisions would tell you if the path is possible.
The parameters to the system of equations would be
angle_1 = normal of line A with P1
angle_2 = normal of line B with intersection of line A
angle_3 = normal of line C with ...
angle_n = normal of line N-1 with P2
Each parameter is bounded by the constraint that the ray must hit the next line, and the constraints may not be linear (I have not checked). If they are not, then you would probably have to pick a suitable numerical non-linear solver.
In response to brainjam
You still need wedges....
(illustration: http://img72.imageshack.us/img72/6959/ssdgk.jpg)
In this situation, how would you know not to do the second reflection? How do you know what walls make sense to reflect over?

Compare three-dimensional structures

I need to evaluate whether two sets of 3D points are the same (ignoring translations and rotations) by finding and comparing a proper geometric hash. I did some research on geometric hashing techniques in the literature, and I found a couple of algorithms that, however, tend to be complicated by "vision requirements" (e.g. 2D to 3D, occlusions, shadows, etc.).
Moreover, I would love it if, when the two geometries are slightly different, the hashes were also not very different.
Does anybody know some algorithm that fits my need, and can provide some link for further study?
Thanks
Your first thought may be to find the rotation that maps one object onto the other, but that is a very, very complex topic... and it is not actually necessary! You're not asking how to best match the two; you're just asking whether they are the same or not.
Characterize your model by a list of all interpoint distances. Sort the list by that distance. Now compare the list for each object. They should be identical, since interpoint distances are not affected by translation or rotation.
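A sketch of that signature in Python (the tolerance is a placeholder; see issue 3 below for the mirror-image caveat):

    # Sketch: sorted pairwise-distance signature, invariant to rigid motion.
    import numpy as np
    from scipy.spatial.distance import pdist

    def distance_signature(points):
        return np.sort(pdist(np.asarray(points, dtype=float)))

    def same_shape(pts_a, pts_b, tol=1e-6):
        sig_a = distance_signature(pts_a)
        sig_b = distance_signature(pts_b)
        return sig_a.shape == sig_b.shape and np.allclose(sig_a, sig_b, atol=tol)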
Three issues:
1) What if the number of points is large? That's a long list of pairs (N*(N-1)/2 of them). In this case you may elect to keep only the longest distances, or even better, keep the 1 or 2 longest ones for each vertex so that every part of your model makes some contribution. Dropping information like this, however, changes the problem from deterministic to probabilistic.
2) This only uses vertices to define the shape, not edges. This may be fine (and in practice it will be), but it can fail if you expect figures with identical vertices but different connecting edges. If so, test for vertex similarity first. If that passes, then assign a unique label to each vertex by using the sorted distances. The longest edge has two vertices; for each of THOSE vertices, find the vertex with the longest remaining edge. Label the first vertex 0 and the next vertex 1. Repeat for the other vertices in order, and you'll have assigned labels which are shift and rotation independent. Now you can compare edge topologies exactly (check that for every edge in object 1 between two vertices, there's a corresponding edge between the same two vertices in object 2). Note: this starts getting really complex if you have multiple identical interpoint distances, in which case you need tiebreaker comparisons to make the assignments stable and unique.
3) There's a possibility that two figures have identical edge-length populations but still aren't identical: this happens when one object is the mirror image of the other. It is quite annoying to detect! One way to do it is to take four non-coplanar points (perhaps the ones labeled 0 to 3 from the previous step) and compare the "handedness" of the coordinate system they define. If the handedness doesn't match, the objects are mirror images.
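A sketch of that handedness test (assuming you've already produced matching labels for the two objects):

    # Sketch: handedness of four non-coplanar labeled points.
    import numpy as np

    def handedness(p0, p1, p2, p3):
        # Sign of the scalar triple product of the three edge vectors from
        # p0; opposite signs for matching labelings indicate mirror images.
        m = np.array([np.subtract(p1, p0),
                      np.subtract(p2, p0),
                      np.subtract(p3, p0)], dtype=float)
        return np.sign(np.linalg.det(m))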
Note the list-of-distances gives you easy rejection of non-identical objects. It also allows you to add "fuzzy" acceptance by allowing a certain amount of error in the orderings. Perhaps taking the root-mean-squared difference between the two lists as a "similarity measure" would work well.
Edit: Looks like your problem is a point cloud with no edges. Then the annoying problem of edge correspondence (#2) doesn't even apply and can be ignored! You still have to be careful of the mirror-image problem #3 though.
There are a bunch of SIGGRAPH publications which may prove helpful to you.
e.g. "Global Non-Rigid Alignment of 3-D Scans" by Brown and Rusinkiewicz:
http://portal.acm.org/citation.cfm?id=1276404
A general search that can get you started:
http://scholar.google.com/scholar?q=siggraph+point+cloud+registration
Spin images are one way to go about it.
Seems like a numerical optimisation problem to me. You want to find the parameters of the transform which maps one set of points as closely as possible onto the other. Define some sort of residual or "energy" which is minimised when the points are coincident, and chuck it at some least-squares optimiser or similar. If it manages to optimise the score to zero (or as near as can be expected given floating-point error), then the points are the same.
Googling
least squares rotation translation
turns up quite a few papers building on this technique (e.g "Least-Squares Estimation of Transformation Parameters Between Two Point Patterns").
Update following comment below: If a one-to-one correspondence between the points isn't known (as assumed by the paper above), then you just need to make sure the score being minimised is independent of point ordering. For example, if you treat the points as small masses (finite radius spheres to avoid zero-distance blowup) and set out to minimise the total gravitational energy of the system by optimising the translation & rotation parameters, that should work.
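For the case the cited paper assumes, where the one-to-one correspondence is known, the least-squares rotation and translation actually have a closed form (the Kabsch/Procrustes solution); a sketch:

    # Sketch: closed-form least-squares rigid fit with known correspondence.
    import numpy as np

    def best_rigid_transform(src, dst):
        # Returns R, t minimizing sum ||R @ src[i] + t - dst[i]||^2.
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        h = (src - mu_s).T @ (dst - mu_d)       # cross-covariance matrix
        u, _, vt = np.linalg.svd(h)
        d = np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))])
        r = vt.T @ d @ u.T                      # d guards against reflections
        return r, mu_d - r @ mu_s

    def alignment_residual(src, dst):
        r, t = best_rigid_transform(src, dst)
        return np.linalg.norm(np.asarray(src, float) @ r.T + t - dst)

If correspondences aren't known, you need an ordering-independent score as described above instead.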
If you want to estimate the rigid transform between two similar point clouds, you can use the well-established Iterative Closest Point (ICP) method. This method starts with a rough estimate of the transformation and then iteratively optimizes it by computing nearest neighbors and minimizing an associated cost function. It can be implemented efficiently (even in real time), and implementations are available for Matlab, C++, and other languages. The method has been extended and has several variants, including ones that estimate non-rigid deformations; if you are interested in extensions, you should look at computer graphics papers on the scan registration problem, where your problem is a crucial step. For a starting point, see the Wikipedia page on Iterative Closest Point, which has several good external links.
(Teaser image from a Matlab implementation designed to match two point clouds; source: mathworks.com)
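A minimal ICP loop in Python (using scipy's KD-tree for the nearest-neighbour step; the fixed iteration count stands in for a proper convergence test):

    # Sketch: bare-bones ICP for two 3-D point clouds.
    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        # Closed-form least-squares R, t with known correspondence (Kabsch).
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        d = np.diag([1.0, 1.0, np.sign(np.linalg.det(vt.T @ u.T))])
        r = vt.T @ d @ u.T
        return r, mu_d - r @ mu_s

    def icp(src, dst, iters=50):
        src = np.asarray(src, dtype=float).copy()
        dst = np.asarray(dst, dtype=float)
        tree = cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(src)            # nearest neighbour in dst
            r, t = rigid_fit(src, dst[idx])     # fit to current matches
            src = src @ r.T + t                 # apply and iterate
        return src                              # aligned source cloud

Remember that ICP only converges to the right transform when started reasonably close to it.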
After aligning, you could use the final error measure to say how similar the two point clouds are, but this is very much an ad hoc solution; there should be a better one.
Using shape descriptors, one can compute fingerprints of shapes which are often invariant under translations/rotations. In most cases they are defined for meshes rather than point clouds; nevertheless, there is a multitude of shape descriptors, so depending on your input and requirements you might find something useful. For this, you would want to look into the field of shape analysis, and this 2004 SIGGRAPH course presentation can probably give a feel for what people do to compute shape descriptors.
This is how I would do it:
Position the sets at the center of mass
Compute the inertia tensor. This gives you three coordinate axes. Rotate to them. [*]
Write down the list of points in a given order (for example, top to bottom, left to right) with your required precision.
Apply any algorithm you'd like for a resulting array.
To compare two sets, unless you need to store the hash results in advance, just apply your favorite comparison algorithm to the sets of points of step 3. This could be, for example, computing a distance between two sets.
I'm not sure I can recommend an algorithm for step 4, since it appears that your requirements are contradictory: anything called hashing usually has the property that a small change in input results in a very different output. Anyway, now I've reduced the problem to an array of numbers, so you should be able to figure things out.
[*] If two or three of your axes coincide, select the coordinates by some other means, e.g. using the longest distance. But this is extremely rare for random points.
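A sketch of steps 1-3 in Python (using the covariance matrix as a stand-in for the inertia tensor, which shares its eigenvectors outside the degenerate cases noted above; the rounding precision and row-ordering convention are placeholders):

    # Sketch: bring a point cloud into a canonical frame via principal axes.
    import numpy as np

    def canonical_form(points, decimals=6):
        pts = np.asarray(points, dtype=float)
        centered = pts - pts.mean(axis=0)           # step 1: center of mass
        _, eigvecs = np.linalg.eigh(centered.T @ centered)
        aligned = centered @ eigvecs                # step 2: rotate to axes
        # Note: eigenvector signs are ambiguous; a full solution must fix
        # them (e.g. by the sign of each axis's third moment).
        rounded = np.round(aligned, decimals)       # step 3: fixed precision
        order = np.lexsort(rounded.T)               # an agreed point order
        return rounded[order]

Two sets are then compared (or hashed) via their canonical arrays, e.g. by the distance between them.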
Maybe you should also read up on the RANSAC algorithm. It's commonly used for stitching together panorama images, which seems to be a bit similar to your problem, only in 2 dimensions. Just google for RANSAC, panorama and/or stitching to get a starting point.
