Speeding up segment/set of segment intersection - computational-geometry

Suppose you have a set of segments in R^2 (call it S). Every segment is contained in a box of dimension WxH (so the set S has four additional segments, one for each side of the box), and a segment s is to be added to S. The segment s starts from a point A (which lies on one of the segments in S) and ends at a point B. What I want to compute is the point B' such that B' belongs to one of the segments in S and AB' does not intersect any other segment in S. Is there a way to compute B' without using a brute-force algorithm (that is, intersecting AB with every other segment in S)?

"Real-time Collision Detection" by Ericson (table of contents) is a great resource for solving problems like this. Chapter 7 spatial partitioning lists a number of methods suitable for solving such problems.
Consider starting with Octrees, KD-Trees or spatial hashing. They are all reasonably easy to implement, and will make the problem go from O(n^2) to (from memory) O(n log n)
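To give a concrete feel for what such a partitioning buys you on the original problem, here is a rough uniform-grid ("spatial hash") sketch in Python. Everything in it, the cell size, the sample-based cell walk, and the intersection routine, is my own assumption rather than Ericson's code; a real implementation would walk the cells with a DDA-style traversal instead of sampling.

```python
# Minimal sketch (assumptions, not Ericson's code): register each segment of S in the
# grid cells it passes through, then shoot A->B and test only segments in those cells.
from math import floor, inf

def seg_intersect(p, r, q, s):
    """Return t in [0,1] such that p + t*(r-p) hits segment q-s, or None."""
    rx, ry = r[0] - p[0], r[1] - p[1]
    sx, sy = s[0] - q[0], s[1] - q[1]
    denom = rx * sy - ry * sx
    if abs(denom) < 1e-12:
        return None                      # parallel (collinear overlap ignored here)
    qpx, qpy = q[0] - p[0], q[1] - p[1]
    t = (qpx * sy - qpy * sx) / denom    # parameter along p->r
    u = (qpx * ry - qpy * rx) / denom    # parameter along q->s
    return t if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0 else None

class SegmentGrid:
    def __init__(self, segments, cell=1.0):
        self.cell = cell
        self.segments = segments
        self.buckets = {}                # (i, j) -> list of segment indices
        for idx, (a, b) in enumerate(segments):
            for key in self._cells_on_segment(a, b):
                self.buckets.setdefault(key, []).append(idx)

    def _cells_on_segment(self, a, b, samples_per_cell=2):
        # Conservative rasterisation by dense sampling; a DDA grid walk is the real thing.
        n = max(2, int(samples_per_cell * (abs(b[0]-a[0]) + abs(b[1]-a[1])) / self.cell) + 1)
        keys = set()
        for k in range(n + 1):
            t = k / n
            x, y = a[0] + t * (b[0]-a[0]), a[1] + t * (b[1]-a[1])
            keys.add((floor(x / self.cell), floor(y / self.cell)))
        return keys

    def first_hit(self, A, B):
        """Earliest intersection of A->B with any stored segment, assuming B is chosen
        far enough out (e.g. on the bounding box) that the ray is fully covered."""
        best_t, best_pt = inf, None
        seen = set()
        for key in self._cells_on_segment(A, B):
            for idx in self.buckets.get(key, ()):
                if idx in seen:
                    continue
                seen.add(idx)
                a, b = self.segments[idx]
                t = seg_intersect(A, B, a, b)
                if t is not None and 1e-9 < t < best_t:   # skip the segment A lies on
                    best_t = t
                    best_pt = (A[0] + t * (B[0]-A[0]), A[1] + t * (B[1]-A[1]))
        return best_pt   # this is B' under the assumptions above
```

With the segments of S registered once, each new query only tests the handful of segments stored in the cells the ray from A actually passes through, which is where the speedup over brute force comes from.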

Related

Efficiently finding out which 3D lines intersect which 3D points

Given
A set of many points in 3D space
(each represented as 3 floating-point coordinates (x, y, z))
A set of many infinite lines in 3D space
(each represented by an arbitrary point on the line, and a 3D direction vector)
Is there a way to find out which of the points lie on which of the lines (with a little tolerance to account for floating-point errors), that is more efficient than the trivial O(n²) approach of testing every point against every line in a nested loop?
I'm thinking along the lines of storing one of the two sets in a special data structure that helps with the intersection tests. But what would such a data structure look like?
(Links to relevant academic literature are also appreciated.)
This is a complement to Pinhead's answer.
Consider the bounding box of the P points, assumed roughly cubic, and subdivide it into C x C x C = C³ cells. If the distribution of the points is uniform, every cell will contain P/C³ points on average.
Now, for every line, find all the cells it intersects by a process similar to drawing a digital line; on average a line crosses αC cells, where α is a small constant. Since each cell holds about P/C³ points, the total workload for the search is proportional to L P/C².
You also have to account for the initialization time of the grid, which is proportional to C³. Hence the total including initialization is of the form C³ + β L P/C², which is minimized by C ~ (L P)^(1/5), giving time and space complexity O((L P)^(3/5)), a significant saving over O(L P).
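As a rough illustration of the grid idea (not the answerer's code), the sketch below buckets the points into a C x C x C grid and, for each line, visits only the cells along it. The cell walk is approximated by sampling; a 3D-DDA traversal, plus a look at neighbouring cells for points within tolerance just across a cell border, would be used in a real implementation.

```python
# Hedged sketch of the uniform-grid search for points lying (near) a 3D line.
import numpy as np

def build_grid(points, C):
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    size = np.maximum(hi - lo, 1e-9)
    cells = {}                                   # (i, j, k) -> list of point indices
    idx = np.minimum((C * (pts - lo) / size).astype(int), C - 1)
    for i, key in enumerate(map(tuple, idx)):
        cells.setdefault(key, []).append(i)
    return cells, lo, size

def points_on_line(points, cells, lo, size, C, origin, direction, tol=1e-6):
    pts = np.asarray(points, dtype=float)
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Clip the infinite line to the (slightly inflated) grid box with the slab method.
    t0, t1 = -np.inf, np.inf
    for k in range(3):
        if abs(d[k]) > 1e-12:
            a = (lo[k] - tol - o[k]) / d[k]
            b = (lo[k] + size[k] + tol - o[k]) / d[k]
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
        elif not (lo[k] - tol <= o[k] <= lo[k] + size[k] + tol):
            return []                            # parallel to this axis, outside the box
    if t0 > t1:
        return []
    hits = set()
    for t in np.linspace(t0, t1, 4 * C + 2):     # dense sampling stands in for 3D-DDA
        p = o + t * d
        key = tuple(np.clip((C * (p - lo) / size).astype(int), 0, C - 1))
        for i in cells.get(key, ()):
            v = pts[i] - o
            dist = np.linalg.norm(v - np.dot(v, d) * d)   # point-to-line distance
            if dist <= tol:
                hits.add(i)
    return sorted(hits)
```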
You can also think of a binary tree that contains a hierarchy of bounding spheres, starting from the tolerance radius around each point. You could create the tree the same way one creates a k-d tree, by recursive subdivisions along some dimension.
It is much harder to make precise estimates of the speedup, but you could expect a time complexity like O((L + P) log P).
Divide 3D space into cubes of side N, so that a cube can contain more than one point. An infinite line intersects an unbounded number of cubes, but checking whether a given cube contains any points takes O(1) (e.g. with a hash table). When a line intersects a cube containing more than one point, you only check the points in that cube, which will be far fewer than the total number of points provided that 1) the points are evenly distributed and 2) you use a constant-amortized-time lookup (Constant Amortized Time). In the average scenario you would only end up checking all points at once, in O(n^2), if all the points were packed inside one cube. You can also have cubes inside the cubes, but that means more bookkeeping.
This idea is inspired by a quadtree:
https://en.wikipedia.org/wiki/Quadtree

Checking which side of a contour (defined by a vector of points) a 2D point is on

I have a contour defined by a vector of points i.e. I have a vector of (x_i ,y_i).
Now, given any point (a, b), is there a fast way to determine whether (a, b) lies on the same side as the origin (0, 0) or not?
The vector which defines the contour has roughly 7000 points, so a point-by-point determination may be extremely slow. It would be very kind if someone could give any pointers.
(I am using visual C++ for my computations)
Thanks in advance.
I understand that preprocessing is allowed (otherwise you cannot avoid the "point by point" procedure).
A simple way is to scan the polyline and decompose it into monotone sections, i.e. runs in which the vertices appear in increasing or decreasing x. This takes linear time.
Then, when you want to compare a point to the polyline, compare it to every monotone section independently and find the x interval it faces by binary (dichotomic) search. For a section of length m, this takes O(log m) operations. When you know the interval, you can immediately tell on which side of the section the point lies.
Repeat for all sections and count the number of sections lying below the point; the parity of that count gives you the answer.
This procedure takes O(s log m) time for s sections of length m (on geometric average). It is not worst-case optimal, but it is simple and will behave very decently in most cases.
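A hedged Python sketch of this idea follows; the function names and the handling of vertical edges are my own choices, not part of the answer. The preprocessing builds the x-monotone sections once, and each query then does one binary search per section.

```python
# Sketch of monotone decomposition + per-section binary search (assumed helper names).
from bisect import bisect_left

def monotone_sections(contour):
    """Split a closed contour (list of (x, y), first vertex not repeated) into x-monotone runs."""
    pts = list(contour) + [contour[0]]                  # close the loop
    sections, start = [], 0
    for i in range(1, len(pts) - 1):
        rising_prev = pts[i][0] >= pts[i - 1][0]
        rising_next = pts[i + 1][0] >= pts[i][0]
        if rising_prev != rising_next:                  # x direction changes at vertex i
            sections.append(pts[start:i + 1])
            start = i
    sections.append(pts[start:])
    # Store every section with x increasing so bisect applies, and precompute the x
    # arrays so each query costs only O(log m) per section.
    sections = [sec if sec[0][0] <= sec[-1][0] else sec[::-1] for sec in sections]
    return [([p[0] for p in sec], sec) for sec in sections]

def crossings_below(sections, x, y):
    """Number of section edges lying strictly below the point (x, y) at abscissa x."""
    count = 0
    for xs, sec in sections:
        if not (xs[0] <= x < xs[-1]):                   # half-open to avoid double counting
            continue
        i = max(1, bisect_left(xs, x))                  # edge (i-1, i) spans abscissa x
        (x0, y0), (x1, y1) = sec[i - 1], sec[i]
        y_edge = y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        if y_edge < y:
            count += 1
    return count

def same_side_as_origin(sections, a, b):
    """True if (a, b) and the origin are both inside or both outside the contour."""
    return crossings_below(sections, a, b) % 2 == crossings_below(sections, 0.0, 0.0) % 2
```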

Covering N line segments with unit disks

Given a set of N disjoint horizontal line segments (parallel to the X-axis) of variable length and variable perpendicular distances in between, we need to place a minimum number of unit disks, each intersecting at least one line segment or another unit disk, such that the union of the line segments and disks is connected. Is there any existing algorithm or anything I can use to solve this?
Finding an optimal solution is possibly NP-hard, as @j_random_hacker says. But perhaps a greedy algorithm produces a reasonable approximation.
Sort your segments by their left endpoints, and process the segments in that order, left-to-right. At any stage, you have a connected set of segments and disks to the left of the current not-yet-incorporated segment s. Find the closest object (segment or disk) to the left endpoint of s. Suppose this distance is d. Then connect s by a sequence of ceiling(d/2) disks along the line realizing that minimum distance d. s is now incorporated, and you can move on to the next segment.
One could easily create examples where this performs poorly, but there is quite a bit of room to improve via heuristics.
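Here is one possible (untuned) rendering of that greedy procedure in Python. The helper names and the exact disk placement, first centre one unit from the gap's start and then one every two units, are my own choices consistent with unit-radius disks, not part of the answer.

```python
# Hedged sketch of the greedy left-to-right connection with ceil(d/2) unit disks per gap.
from math import ceil, hypot, inf

def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b, plus the closest point on the segment."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / L2))
    c = (a[0] + t * dx, a[1] + t * dy)
    return hypot(p[0] - c[0], p[1] - c[1]), c

def greedy_disks(segments):
    """segments: list of ((x1, y), (x2, y)) horizontal segments; returns unit-disk centres."""
    segments = sorted(segments, key=lambda s: min(s[0][0], s[1][0]))
    connected = [segments[0]]                 # segments already in the connected component
    disks = []                                # centres of unit disks placed so far
    for seg in segments[1:]:
        left = min(seg, key=lambda p: p[0])   # left endpoint of the new segment
        # Closest already-connected object: a segment, or a previously placed disk.
        best_d, target = inf, None
        for a, b in connected:
            d, c = point_segment_distance(left, a, b)
            if d < best_d:
                best_d, target = d, c
        for c in disks:
            d = max(0.0, hypot(left[0]-c[0], left[1]-c[1]) - 1.0)   # disks have radius 1
            if d < best_d:
                best_d, target = d, c
        # Bridge the gap with ceil(d/2) disks: first centre 1 unit from the gap's start,
        # then one every 2 units, so consecutive disks (and both ends) always touch.
        n = ceil(best_d / 2.0)
        gap = hypot(target[0] - left[0], target[1] - left[1])
        if n > 0 and gap > 0:
            ux, uy = (target[0] - left[0]) / gap, (target[1] - left[1]) / gap
            for k in range(1, n + 1):
                s = min(2 * k - 1, best_d)
                disks.append((left[0] + s * ux, left[1] + s * uy))
        connected.append(seg)
    return disks
```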

Find closest 2d point on polyline in constant time

Is there an algorithm that for a given 2d position finds the closest point on a 2d polyline consisting of n - 1 line segments (n line vertices) in constant time? The naive solution is to traverse all segments, test the minimum distance of each segment to the given position and then for the closest segment, calculate the exact closest point to the given position, which has a complexity of O(n). Unfortunately, hardware constraints prevent me from using any type of loop or pointers, meaning also no optimizations like quadtrees for a hierarchical lookup of the closest segment in O(log n).
I have theoretically unlimited time to pre-calculate any datastructure that can be used for a lookup and this pre-calculation can be arbitrarily complex, only the lookup at runtime itself needs to be in O(1). However, the second constraint of the hardware is that I only have very limited memory, meaning that it is not feasible to find the closest point on the line for each numerically possible position of the domain and storing this in a huge array. In other words, the memory consumption should be in O(n^x).
So it comes down to the question how to find the closest segment of a polyline or its index given a 2d position without any loops. Is this possible?
Edit: About the given position … it can be quite arbitrary, but it is reasonable to consider only positions in the closer neighborhood of a line, given by a constant maximum distance.
Create a single axis-aligned box that contains all of your line segments with some padding. Discretize it into a WxH grid of integer indexes. For each grid cell, compute the nearest line segment, and store its index in that grid cell.
To query a point, in O(1) time compute which grid cell it falls in. Lookup the index of the nearest line segment. Do the standard O(1) algorithm to compute exactly the nearest point on the line.
This is an O(1) almost-exact algorithm that will take O(WH) space, where WH is the number of cells in the grid.
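A small sketch of that precompute/lookup scheme follows; the grid resolution, padding, and names are assumptions of mine. The precomputation is slow, O(WHn), but the query is loop-free apart from O(1) arithmetic.

```python
# Hedged sketch: a WxH grid where each cell stores the index of its nearest segment.
import numpy as np

def point_segment_distance(px, py, ax, ay, bx, by):
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else np.clip(((px - ax) * dx + (py - ay) * dy) / L2, 0.0, 1.0)
    cx, cy = ax + t * dx, ay + t * dy
    return np.hypot(px - cx, py - cy), (cx, cy)

class NearestSegmentGrid:
    def __init__(self, polyline, W, H, pad=1.0):
        pts = np.asarray(polyline, dtype=float)
        self.segs = list(zip(pts[:-1], pts[1:]))              # n - 1 segments
        self.x0, self.y0 = pts.min(axis=0) - pad
        x1, y1 = pts.max(axis=0) + pad
        self.W, self.H = W, H
        self.dx, self.dy = (x1 - self.x0) / W, (y1 - self.y0) / H
        self.table = np.zeros((H, W), dtype=np.int32)
        for j in range(H):                                    # offline precomputation
            for i in range(W):
                cx = self.x0 + (i + 0.5) * self.dx
                cy = self.y0 + (j + 0.5) * self.dy
                dists = [point_segment_distance(cx, cy, a[0], a[1], b[0], b[1])[0]
                         for a, b in self.segs]
                self.table[j, i] = int(np.argmin(dists))

    def closest_point(self, px, py):
        """O(1) lookup: cell index -> candidate segment -> exact projection."""
        i = min(max(int((px - self.x0) / self.dx), 0), self.W - 1)
        j = min(max(int((py - self.y0) / self.dy), 0), self.H - 1)
        a, b = self.segs[self.table[j, i]]
        return point_segment_distance(px, py, a[0], a[1], b[0], b[1])[1]
```

The refinement described below (also inspecting the 8 neighbouring cells) only changes closest_point to collect up to 9 candidate segment indices and keep the best of them.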
For example, picture the subdivision of the plane induced by a few line segments, and a coarse 9x7 tiling of that space in which each cell is colored by the index of its nearest edge: red (0), green (1), blue (2), purple (3). Discretizing the space introduces some error. You would of course use a much finer subdivision to reduce that error to as little as you want, at the cost of having to store a larger grid; a coarse tiling like this is meant for illustration only.
You can keep your algorithm O(1) and bring it even closer to exact by taking your query point, identifying which cell it lies in, and then also looking at the 8 neighboring cells. Determine the set of edges referenced by those 9 cells (the set contains at most 9 edges), find the closest point on each of them, and keep the closest of those candidates.
In any case, this approach will always fail for some pathological case, so you'll have to factor that into deciding whether you want to use this.
You can find the closest geometric point on a line in O(1) time, but that won't tell you which of the given vertices is closest to it. The best you can do for that is a binary search, which is O(log n), but of course requires a loop or recursion.
If you're designing VLSI or FPGA, you can evaluate all the vertices in parallel. Then, you can compare neighbors, and do a big wired-or to encode the index of the segment that straddles the closest geometric point. You'll technically get some sort of O(log n) delay based on the number of elements in the wired-or, but that kind of thing is usually treated as near-constant.
You can optimize this type of search using an R-tree, a general-purpose spatial data structure that supports fast searches. It's not a constant-time algorithm; its average case is O(log n).
You said that you can pre-calculate the data structure but cannot use any loops at query time. Is there really some limitation that prevents all loops? Arbitrary queries are unlikely to hit an existing data point exactly, so the search must at least look left and right in a tree.
This SO answer contains some links to libraries:
Java commercial-friendly R-tree implementation?

Fast geospatial search geofencing irregular polygons

I have possibly billions of polygonal regions with many vertices (sometimes more than 20), of varying sizes, intersecting and nested, distributed throughout the span of a country. The polygonal regions or 'fences' will not usually be too large, and most of them will be negligible in size compared to the span. The vertices of the polygons are described by their GPS coordinates. I need a fast algorithm to check whether the current GPS location of the device falls within any of the enclosed 'fences'.
I have thought of using the RAY CASTING ALGORITHM (point-in-polygon test) after sufficiently narrowing down the candidates with some search algorithm. The question now is what method would serve as an efficient 2D SEARCH ALGORITHM, keeping in mind the following two considerations:
Speed of processing
Data structure should be compact as the storage space is limited (flash storage)
The regions are static and constant. Hence, the 2-dimensional data can be pre-sorted to any requirement before being stored.
The unit is to be put on a vehicle, and hence the search need not be repeated for regions sufficiently far from the vehicle's proximity. It's acceptable to have an initial boot-up time provided the successive real-time look-ups are extremely fast.
I initially thought of using k-d trees to perform a bounding-box query after reducing each 'fence' to a point (approximated by a point within it), followed by the ray casting algorithm.
I have also been looking at Hilbert curves and geohashes for the same.
Another method I'm considering is to use RECTANGLES that just enclose the fences. We choose these rectangles for each fence such that it is aligned to the GRID (sides aligned to the latitude and longitude).
That is, find the maximum and minimum values of latitude and longitude for each fence individually; call them LAT1, LAT2, LONG1, LONG2.
Now the points (LAT1, LONG1), (LAT1, LONG2), (LAT2, LONG1), (LAT2, LONG2) are the corners of the rectangle (aligned to the GRID) that the fence must necessarily be contained in. (I do understand that it will not be a rectangle in the geometrical sense, nonetheless.)
Now use an R-tree to search for the RECTANGLES that the current GPS location falls within. This will narrow the search down to very few results, each of which can be individually tested using the RAY CASTING ALGORITHM.
I could also use a hybrid method, using a k-d tree for the initial chunk and then applying an R-tree search.
I am more than open to any other method as well.
What do you think is the best way to approach this problem statement?
EDIT - Removed the size restriction on the polygonal fences.
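For reference, here is a rough Python sketch of the rectangle-filter plus ray-casting pipeline described above. The linear scan in candidate_fences merely stands in for the R-tree query and would be replaced by a real spatial index; all names are mine, not from any particular library.

```python
# Hedged sketch: bounding-rectangle filter followed by a ray-casting point-in-polygon test.
def bounding_box(polygon):
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return min(xs), min(ys), max(xs), max(ys)         # the grid-aligned rectangle

def point_in_polygon(x, y, polygon):
    """Classic ray casting: shoot a ray to the right and count edge crossings."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def candidate_fences(x, y, boxes):
    """Stand-in for the R-tree query: which bounding rectangles contain (x, y)?"""
    return [i for i, (x0, y0, x1, y1) in enumerate(boxes)
            if x0 <= x <= x1 and y0 <= y <= y1]

def fences_containing(x, y, fences, boxes):
    return [i for i in candidate_fences(x, y, boxes)
            if point_in_polygon(x, y, fences[i])]

# fences = [list of (lon, lat) vertices per fence]; boxes = [bounding_box(f) for f in fences]
```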
It sounds like your set of polygons is static. If so, one of the fastest ways is likely to be binary space partitioning. Very briefly, building a BSP tree means picking a single edge from among your billions of polygons, splitting any polygons that cross this (infinite) line into two halves, partitioning the polygons into those on each side of the line, and recursively building a BSP tree on each side. At the bottommost level of the tree are convex polygons that are wholly inside some set of original polygons -- so e.g. if two original polygons A and B intersect but one doesn't fully contain the other, then this will produce at least 3 "basic" polygons at the lowest level, one for the region A but not B, one for B but not A, and one for A and B. (More may be needed if any of these are not convex.) Each of these can be tagged with the list of original polygons that it is part of.
In the ideal case where no polygons are split, finding the complete set of polygons that you are inside is O(log(total number of edges)) once the BSP tree has been built, since deciding which child to visit next in the tree is just an O(1) test of which side of a line the query point is on.
Building the BSP tree takes quite a bit of work up front, especially if you want to make sure it does as little polygon subdivision as possible. It also requires that your polygons are convex, but if that's not the case, you can easily make them so by triangulating them first.
If your set of polygons grows slowly over time, you can of course complement the above technique with regular point-in-polygon testing for the new polygons, occasionally rebuilding the BSP tree when some threshold is reached.
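To make the query cost concrete, here is a minimal sketch of just the look-up side of such a BSP tree. The node layout and names are assumptions of mine, and the hard part, building the tree with as little polygon splitting as possible, is assumed to have been done offline.

```python
# Hedged sketch of BSP point location: one O(1) side-of-line test per level, O(depth) total.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BSPNode:
    # Splitting line a*x + b*y + c = 0, taken from one of the original polygon edges.
    a: float
    b: float
    c: float
    front: Optional["BSPNode"] = None        # side where a*x + b*y + c >= 0
    back: Optional["BSPNode"] = None
    fence_ids: List[int] = field(default_factory=list)   # leaves: fences containing this cell

def locate(node: BSPNode, x: float, y: float) -> List[int]:
    """Descend the prebuilt tree and return the ids of the fences containing (x, y)."""
    while node is not None and (node.front is not None or node.back is not None):
        node = node.front if node.a * x + node.b * y + node.c >= 0 else node.back
    return node.fence_ids if node is not None else []
```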
You can try a hierarchical point-in-polygon test, for example Kirkpatrick's point-location data structure, but it is very complicated. Personally, I would try rectangles, k-d trees, Hilbert curves, or quadtrees.
