Finding number of lattice points inside a region - computational-geometry

Given a set of points in the 2-D plane, how do I find the number of points lying on or inside an arbitrary query triangle?
One method is to check every point individually for containment in the given triangle.
But I read that a k-d tree can be used to find the number of points lying within a region in O(log n) time, where n is the number of points, and I did not understand how to implement that.
Is there any other, simpler method to do this?
Or will a k-d tree work? If so, can someone explain how?

It can be done by recursively checking the position of the tree's partitions relative to the triangle. To count the points of a tree node that lie in the triangle, classify each of the node's partitions (there are 2 per node in a k-d tree): is it wholly inside the triangle, wholly outside it, or intersecting its boundary? If a partition is inside the triangle, add the number of points in that partition to the result; if it is outside, do nothing for that partition; if it intersects the boundary, apply the same check to its sub-partitions.
For this, each tree node has to store the number of points in its subtree, which is easy to maintain during tree construction.
The running time depends on the number of intersections of the triangle's edges with the partition boundaries; I'm not sure it is O(log n) in the worst case.
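A minimal Python sketch of this idea, assuming points are (x, y) tuples and leaves hold small buckets of points; the box-versus-triangle classification uses a separating-axis test, and each node stores its point count at build time as described above. The names (`Node`, `count_in_triangle`) are only for illustration.

```python
def point_in_triangle(p, tri):
    # Sign-based containment test; points on an edge count as inside.
    (ax, ay), (bx, by), (cx, cy) = tri
    d1 = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
    d2 = (cx - bx) * (p[1] - by) - (cy - by) * (p[0] - bx)
    d3 = (ax - cx) * (p[1] - cy) - (ay - cy) * (p[0] - cx)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def convex_overlap(a, b):
    # Separating-axis test for two convex polygons given as vertex lists.
    for poly in (a, b):
        for i in range(len(poly)):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
            nx, ny = y1 - y2, x2 - x1                  # normal of this edge
            pa = [nx * x + ny * y for x, y in a]
            pb = [nx * x + ny * y for x, y in b]
            if max(pa) < min(pb) or max(pb) < min(pa):
                return False                           # found a separating axis
    return True

class Node:
    def __init__(self, pts, depth=0, leaf_size=8):
        self.count = len(pts)                          # subtree point count, stored at build time
        xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
        self.box = [(min(xs), min(ys)), (max(xs), min(ys)),
                    (max(xs), max(ys)), (min(xs), max(ys))]
        if len(pts) <= leaf_size:
            self.pts, self.left, self.right = pts, None, None
            return
        pts = sorted(pts, key=lambda p: p[depth % 2])  # split on x and y alternately
        mid = len(pts) // 2
        self.pts = None
        self.left = Node(pts[:mid], depth + 1, leaf_size)
        self.right = Node(pts[mid:], depth + 1, leaf_size)

    def count_in_triangle(self, tri):
        if all(point_in_triangle(c, tri) for c in self.box):
            return self.count                          # partition wholly inside: take stored count
        if not convex_overlap(self.box, tri):
            return 0                                   # partition wholly outside: nothing to do
        if self.pts is not None:                       # leaf: test the few remaining points
            return sum(point_in_triangle(p, tri) for p in self.pts)
        return self.left.count_in_triangle(tri) + self.right.count_in_triangle(tri)
```

For example, `Node(points).count_in_triangle(((0, 0), (4, 0), (0, 4)))` counts the points on or inside that triangle.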

Related

Merging overlapping axis-aligned rectangles

I have a set of axis-aligned rectangles. When two rectangles overlap (partly or completely), they shall be merged into their common bounding box. This process works recursively.
Detecting all overlaps first and using union-find to form groups that are merged at the end will not work, because merging two rectangles covers a larger area and can therefore create new overlaps (after two overlapping rectangles have been merged, the merged box may overlap a rectangle that neither of them touched before).
As the number of rectangles in my case is moderate (say N < 100), a brute-force solution is usable (try all pairs; if an overlap is found, merge and restart from the beginning). Still, I would like to reduce the complexity, which is probably O(N³) in the worst case.
Any suggestions on how to improve this?
I think an R-tree will do the job here. An R-tree indexes rectangular regions and lets you insert, delete and query (e.g., intersection queries) in O(log n) for "normal" queries in low dimensions.
The idea is to process your rectangles one by one; for each rectangle you do the following (a code sketch is given after the complexity notes below):
1. Perform an intersection query on the current R-tree (empty in the beginning).
2. If there are results, delete them from the R-tree, merge the current rectangle with all result rectangles, and continue with the newly merged rectangle at step 1.
3. If there are no results, just insert the rectangle into the R-tree.
In total you will perform:
O(n) intersection queries in step 1: O(n log n).
O(n) insert operations in step 3: O(n log n).
At most n delete and n insert operations in step 2, because each execution of step 2 decreases the number of stored rectangles by at least 1: O(n log n).
In theory you should get away with O(n log n); however, the merge steps at the end (with large rectangles) might have low selectivity and need more than O(log n) per query, but depending on the data distribution this should not ruin the overall runtime.
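Here is a short Python sketch of that loop, assuming the third-party `rtree` package (a wrapper around libspatialindex) and rectangles given as (minx, miny, maxx, maxy) tuples; the function name `merge_rectangles` is only for illustration.

```python
from rtree import index   # assumes the third-party `rtree` package is installed

def merge_rectangles(rects):
    """rects: iterable of (minx, miny, maxx, maxy). Returns merged, pairwise disjoint boxes."""
    idx = index.Index()
    boxes = {}                                  # id -> box currently stored in the R-tree
    next_id = 0
    for box in rects:
        # Keep merging until the current box no longer hits anything in the tree (steps 1-2).
        while True:
            hits = list(idx.intersection(box))
            if not hits:
                break
            for h in hits:                      # delete overlapping boxes and grow `box`
                other = boxes.pop(h)
                idx.delete(h, other)
                box = (min(box[0], other[0]), min(box[1], other[1]),
                       max(box[2], other[2]), max(box[3], other[3]))
        idx.insert(next_id, box)                # step 3
        boxes[next_id] = box
        next_id += 1
    return list(boxes.values())
```

The inner while loop is the "jump back to step 1" above: every pass removes at least one stored rectangle, so at most n extra delete/insert rounds happen overall.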
Use a balanced normalized quad-tree.
Normalized: Gather all the x coordinates, sort them and replace them with the index in the sorted array. Same for the y coordinates.
Balanced: When building the quad-tree always split at the middle coordinate.
When you insert a rectangle, mark the covered nodes in the tree with the rectangle's id. If you find the ids of other rectangles underneath (which means they overlap the new one), gather them in a set. When done, if that set is not empty (you found overlapping rectangles), create a new rectangle representing the union of the sub-rectangles. If the computed rectangle is bigger than the one you just inserted, apply the algorithm again with the new computed rectangle. Repeat until it no longer grows, then move on to the next input rectangle.
For performance, have every node in the quad-tree store all the rectangles overlapping that node, in addition to marking it as an end node.
Complexity: the initial normalization is O(N log N). Inserting a rectangle and checking for overlaps is O(log²N). You need to do this for the original N rectangles and also for the merged ones; every time you find an overlap you eliminate at least one of the original rectangles, so there are at most N−1 merges, i.e. roughly 2N insertions overall. So the total complexity is O(N log²N).
This is better than the other approaches because you never have to check every rectangle against every other rectangle for overlap.
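The normalization step by itself is easy to sketch in Python (assuming rectangles are given as (x1, y1, x2, y2) tuples; names are illustrative):

```python
def normalize(rects):
    # Replace every raw coordinate by its rank among the distinct coordinate values,
    # so the quad-tree always works on a compact integer range [0, 2N).
    xs = sorted({v for x1, y1, x2, y2 in rects for v in (x1, x2)})
    ys = sorted({v for x1, y1, x2, y2 in rects for v in (y1, y2)})
    xi = {v: i for i, v in enumerate(xs)}
    yi = {v: i for i, v in enumerate(ys)}
    norm = [(xi[x1], yi[y1], xi[x2], yi[y2]) for x1, y1, x2, y2 in rects]
    return norm, xs, ys        # keep xs/ys to map results back to the original coordinates
```

The quad-tree is then built over this compact index range, always splitting at the middle index, which is what keeps it balanced.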
This can be solved using a combination of plane sweep and a spatial data structure: we merge intersecting rectangles along the sweep line and put any rectangles left behind the sweep line into the spatial data structure. Every time we get a newly merged rectangle, we check the spatial data structure for rectangles intersecting this new rectangle and merge them if found.
If any rectangle R behind the sweep line intersects some rectangle S crossing the sweep line, then one of the two corners of R nearest to the sweep line is inside S. This means the spatial data structure should store points (two points for each rectangle) and answer queries about points lying inside a given rectangle. One obvious way to implement such a data structure is a segment tree where each leaf corresponds to the rectangle with top and bottom sides at the corresponding y-coordinates and each internal node points to the descendant containing the rightmost rectangle (the one whose right side is nearest to the sweep line).
To use such a segment tree we should compress (normalize) the y-coordinates of the rectangles' corners. The algorithm:
1. Compress the y-coordinates.
2. Sort the rectangles by the x-coordinate of their left sides.
3. Move the sweep line to the next rectangle; if it passes the right sides of some rectangles, move them to the segment tree.
4. Check if the current rectangle intersects anything along the sweep line. If not, go to step 3.
5. Check if the union of the rectangles found in step 4 intersects anything in the segment tree and merge recursively, then go to step 4.
6. When step 3 reaches the end of the list, take all rectangles under the sweep line and all rectangles in the segment tree and uncompress their coordinates.
The worst-case time complexity is determined by the segment tree: O(n log n). The space complexity is O(n).

Efficient checking of whether a point is inside a large number of triangles in 2D

I have implemented an algorithm to check whether a point is inside a triangle in 2D, using the first algorithm in this question: how to determine if a point is inside a 2d triangle.
This algorithm works, but I think that if I preprocess the triangles and use a smart data structure, I can make this more efficient. I'm using about 10 million triangles. One way of making this fast is to compute bounding rectangles for the triangles and do a point-in-rectangle check first, but I feel even that can be made faster by using a data structure to only check the rectangles that are close by.
Does such a data structure and algorithm exist?
The classical approach from computational geometry is to triangulate and then do point location, but it's a lot easier to just use a k-d tree, storing the triangles that intersect an interior node's partitioning line in that node so that they can be checked on the way down to the leaves.
Constructing the k-d tree is a recursive process. Given a list of triangles and a coordinate to focus on (x or y, alternating with each level), find a separating line that is roughly in the middle, by sampling randomly, by taking a median, or by some combination of sampling and taking a median. Gather the triangles whose points are all strictly less than the separating coordinate and use them to construct the left subtree. Gather the triangles whose points are all strictly greater than the separating coordinate and use them to construct the right subtree. Store the remaining triangles (the ones that intersect the separating line) in the root.
To query, test whether the point belongs to any of the triangles stored in the root. Then, if the point is less than the separating coordinate, check the left subtree; if it is greater, check the right subtree.
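A compact Python sketch of this construction and query, assuming triangles are 3-tuples of (x, y) vertices; `build` and `locate` are illustrative names, and the containment predicate is the usual sign test from the linked question.

```python
def point_in_triangle(p, tri):
    # Sign-based containment test; points on an edge count as inside.
    (ax, ay), (bx, by), (cx, cy) = tri
    d1 = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
    d2 = (cx - bx) * (p[1] - by) - (cy - by) * (p[0] - bx)
    d3 = (ax - cx) * (p[1] - cy) - (ay - cy) * (p[0] - cx)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def build(triangles, depth=0):
    if not triangles:
        return None
    axis = depth % 2                                   # alternate between x and y
    coords = sorted(v[axis] for t in triangles for v in t)
    split = coords[len(coords) // 2]                   # rough median as the separating line
    left, right, here = [], [], []
    for t in triangles:
        if all(v[axis] < split for v in t):
            left.append(t)
        elif all(v[axis] > split for v in t):
            right.append(t)
        else:
            here.append(t)                             # triangle touches the separating line
    if not left and not right:                         # cannot split further: make a leaf
        return {'leaf': True, 'tris': triangles}
    return {'leaf': False, 'axis': axis, 'split': split, 'tris': here,
            'left': build(left, depth + 1), 'right': build(right, depth + 1)}

def locate(node, p):
    # Return some triangle containing p, or None.
    while node is not None:
        for t in node['tris']:
            if point_in_triangle(p, t):
                return t
        if node['leaf']:
            return None
        node = node['left'] if p[node['axis']] < node['split'] else node['right']
    return None
```

Usage: `tree = build(triangles)` once, then `locate(tree, (x, y))` per query point.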

Find furthest point in O(1) time

Consider a set S of n points in the plane such that the farthest pair has distance at most 1. I would like to find the point of S farthest from a given query point q (not in S) in O(1) time. How do I preprocess the points in S to achieve the desired query time bound?
Is this even possible?
It is not possible, strictly speaking. This is a point location problem in a planar straight-line graph, which is known to require O(log N) query time.
Anyway, it can be addressed approximately by gridding.
Overlay a square grid on the furthest-point Voronoi diagram, and for every cell note the regions it covers. Make sure that the number of covered regions per cell is bounded; this can be approximately achieved by taking a grid pitch smaller than the distance between the two closest vertices of the diagram.
For a query point, finding the containing cell is done in constant time. Then finding the right region among a bounded number of candidates takes constant time as well.
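A rough Python sketch of the gridding idea, under the simplifying assumption (in the spirit of the pitch condition above) that the grid is fine enough for the farthest point to be the same everywhere inside a cell, so storing one answer per cell suffices; all names are illustrative and this is an approximation, not the exact Voronoi-overlay construction.

```python
def build_grid(points, pitch):
    # Precompute the point of S farthest from the centre of every grid cell.
    # Assumes the pitch is small enough that no furthest-point Voronoi edge
    # crosses a cell, so the answer is constant within each cell.
    def farthest_from(q):
        return max(points, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    x0 = min(p[0] for p in points)
    y0 = min(p[1] for p in points)
    nx = int((max(p[0] for p in points) - x0) / pitch) + 1
    ny = int((max(p[1] for p in points) - y0) / pitch) + 1
    cells = [[farthest_from((x0 + (i + 0.5) * pitch, y0 + (j + 0.5) * pitch))
              for j in range(ny)] for i in range(nx)]
    return cells, x0, y0, pitch

def query_farthest(grid, q):
    cells, x0, y0, pitch = grid
    # Constant-time cell lookup; clamp so queries slightly outside the grid still map to a cell.
    i = min(max(int((q[0] - x0) / pitch), 0), len(cells) - 1)
    j = min(max(int((q[1] - y0) / pitch), 0), len(cells[0]) - 1)
    return cells[i][j]
```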
Assuming there is no structure relating the points, there is no single operation that will give you the farthest point. So the only way to do it is to compute the answers in advance: you need a mapping from each point to the point farthest from it.
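A literal version of that precomputation (assuming points are hashable (x, y) tuples; note it only answers queries for points that are themselves in S):

```python
def farthest_map(points):
    # O(n^2) precompute: map each point of S to the point of S farthest from it.
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return {p: max(points, key=lambda q: d2(p, q)) for p in points}
```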

Segment Intersection

Here is a question from CLRS.
A disk consists of a circle plus its interior and is represented by its center point and radius. Two disks intersect if they have any point in common. Give an O(n lg n)-time algorithm to determine whether any two disks in a set of n intersect.
It's not my homework. I think we can take the horizontal diameter of every circle as its representative line segment. Whenever two segments become adjacent in the sweep-line order, we check the distance between the two centers: if it is less than or equal to the sum of the radii of the circles, they intersect.
Please let me know if I'm correct.
Build a Voronoi diagram of the disk centers; this is an O(n log n) job.
Now, for each edge of the diagram, take the corresponding pair of centers and check whether their disks intersect.
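A short sketch of that recipe in Python using SciPy (assuming it is available): `Voronoi.ridge_points` lists exactly the pairs of input sites whose Voronoi cells share an edge, so those are the pairs to test. The function name is illustrative.

```python
import numpy as np
from scipy.spatial import Voronoi   # assumes SciPy is installed

def disks_intersect_voronoi(centers, radii):
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    vor = Voronoi(centers)
    # ridge_points holds the index pairs of sites whose Voronoi cells share an edge.
    for i, j in vor.ridge_points:
        if np.hypot(*(centers[i] - centers[j])) <= radii[i] + radii[j]:
            return True
    return False
```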
Build a k-d tree with the centres of the circles.
For every circle (p, r), use the k-d tree to find the set S of circles whose centres are within 2r of p.
Check if any of the circles in S touches the current circle.
I think the average cost for this algorithm is O(NlogN).
The logic is that we loop over the set (O(N)) and for every element retrieve the subset of nearby circles, which in the worst case costs O(N log N); so, a priori, the complexity is O(N² log N). But we also have to consider that the probability of two random circles being less than 2r apart and not touching is less than 3/4 (if they do touch, we can short-circuit the algorithm).
That means that the average size of S is probabilistically limited to a small value.
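A sketch of this approach in Python using SciPy's cKDTree (assuming SciPy is available; names are illustrative). One detail worth spelling out: querying only within 2r per circle is enough, because a touching pair is always detected while processing the larger of the two circles, since 2·max(r, s) ≥ r + s.

```python
import numpy as np
from scipy.spatial import cKDTree   # assumes SciPy is installed

def disks_intersect_kdtree(centers, radii):
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    tree = cKDTree(centers)
    for i, (c, r) in enumerate(zip(centers, radii)):
        # Candidates whose centre lies within 2r of this centre.
        for j in tree.query_ball_point(c, 2 * r):
            if j != i and np.hypot(*(centers[j] - c)) <= r + radii[j]:
                return True                     # short-circuit as soon as two disks touch
    return False
```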
Another approach to solve the problem:
Divide the plane using a grid whose cell diameter is that of the biggest circle.
Use a hashing algorithm to classify the grid cells into N groups.
For every circle calculate the grid cells it overlaps and the corresponding groups.
Get all the circles in a group and...
Check if the biggest circle touches any other circle in the group.
Recurse applying this algorithm to the remaining circles in the group.
The same algorithm implemented in Scala: https://github.com/salva/simplering/blob/master/touching/src/main/scala/org/vesbot/simplering/touching/Circle.scala

Intersection of N rectangles

I'm looking for an algorithm to solve this problem:
Given N rectangles in the Cartesian plane, find out whether the intersection of those rectangles is empty or not. Each rectangle can have any orientation (its edges need not be parallel to Ox and Oy).
Do you have any suggestions? :) I can think of testing the intersection of each rectangle pair, but that is O(N*N) and quite slow :(
Abstract
Either sort the rectangles by their smallest X value, or store your rectangles in an R-tree and search it.
Straight-forward approach (with sorting)
Let us denote low_x() - the smallest (leftmost) X value of a rectangle, and high_x() - the highest (rightmost) X value of a rectangle.
Algorithm:
Sort the rectangles according to low_x(). # O(n log n)
For each rectangle in the sorted array: # O(n)
Find its highest X point. # O(1)
Compare it with all rectangles whose low_x() is smaller than this.high_x(). # O(log n)
(A code sketch of this sweep follows the complexity analysis below.)
Complexity analysis
This should work in O(n log n) on uniformly distributed rectangles.
The worst case would be O(n²), for example when the rectangles don't overlap but are stacked one above another (so all their X-extents coincide). In this case, generalize the algorithm to use low_y() and high_y() as well.
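A Python sketch of this sweep under stated assumptions: rectangles may be rotated, so the actual pair test uses a separating-axis check, while low_x()/high_x() refer to the axis-aligned X-extent used for sorting and pruning; all names are illustrative. It reports whether some pair of rectangles overlaps, which is what this approach checks.

```python
def separated(a, b):
    # Separating-axis test for two convex polygons given as vertex lists.
    for poly in (a, b):
        for i in range(len(poly)):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
            nx, ny = y1 - y2, x2 - x1                 # normal of this edge
            pa = [nx * x + ny * y for x, y in a]
            pb = [nx * x + ny * y for x, y in b]
            if max(pa) < min(pb) or max(pb) < min(pa):
                return True
    return False

def any_pair_intersects(rects):
    # rects: list of rectangles, each a list of four (x, y) vertices, any orientation.
    rects = sorted(rects, key=lambda r: min(x for x, _ in r))     # sort by low_x()
    active = []
    for r in rects:
        low_x = min(x for x, _ in r)
        # Keep only earlier rectangles whose high_x() reaches past this low_x():
        # exactly the candidates whose X-extents overlap the current one.
        active = [s for s in active if max(x for x, _ in s) >= low_x]
        for s in active:
            if not separated(r, s):
                return True
        active.append(r)
    return False
```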
Data-structure approach: R-Trees
R-trees (a spatial generalization of B-trees) are one of the best ways to store geospatial data, and they can be useful in this problem. Simply store your rectangles in an R-tree and you can spot intersections with a straightforward O(n log n) complexity (n searches, O(log n) time each).
Observation 1: given a polygon A and a rectangle B, the intersection A ∩ B can be computed by 4 intersections with the half-planes corresponding to the edges of B.
Observation 2: cutting a convex polygon with a half-plane gives you a convex polygon. The first rectangle is a convex polygon. Each such cut increases the number of vertices by at most 1.
Observation 3: the signed distance of the vertices of a convex polygon to a straight line is a unimodal function.
Here is a sketch of the algorithm:
Maintain the current partial intersection D in a balanced binary tree in a CCW order.
When cutting with a half-plane defined by a line L, find the two edges in D that intersect L. This can be done in logarithmic time through some clever binary or ternary search exploiting the unimodality of the signed distance to L. (This is the part I don't exactly remember.) Remove all the vertices on one side of L from D, and insert the intersection points into D.
Repeat for all edges L of all rectangles.
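Here is a Python sketch of the incremental clipping, but using a plain linear-time cut (a Sutherland–Hodgman step) instead of the logarithmic-time search described above; since each cut adds at most one vertex, this simpler variant is roughly O(n²) overall rather than O(n log n). Rectangles are assumed to be given as CCW lists of four (x, y) vertices; names are illustrative.

```python
def clip(poly, a, b):
    # Keep the part of convex polygon `poly` lying on or to the left of the directed line a->b.
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    def crossing(p, q):
        t = side(p) / (side(p) - side(q))
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if side(p) >= 0:
            out.append(p)
            if side(q) < 0:
                out.append(crossing(p, q))       # edge leaves the half-plane
        elif side(q) >= 0:
            out.append(crossing(p, q))           # edge enters the half-plane
    return out

def common_intersection(rects):
    # rects: list of rectangles as CCW vertex lists (any orientation).
    if not rects:
        return []
    region = list(rects[0])
    for rect in rects[1:]:
        for i in range(4):                       # one half-plane cut per edge of the rectangle
            region = clip(region, rect[i], rect[(i + 1) % 4])
            if not region:
                return []                        # intersection is already empty
    return region
```

An empty return value means the common intersection of all the rectangles is empty.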
This seems related to Klee's measure problem. If you read http://en.wikipedia.org/wiki/Klee%27s_measure_problem, there is an O(n log n) lower bound on the runtime of the best algorithms that can be found for such rectilinear intersection problems.
I think you should use something like a sweep line algorithm: finding intersections is one of its applications. Also, have a look at the answers to this question.
Since the rectangles need not be axis-parallel, it is easier to transform the problem into one that is already solved: computing the intersections among the rectangles' borders.
Build a set S which contains all borders together with the rectangle they belong to; you get tuples of the form ((x_start, y_start), (x_end, y_end), r_n), where r_n is the ID of the corresponding rectangle.
Now use a sweep line algorithm to find the intersections of those segments.
The sweep line stops at every x-coordinate in S, i.e. at all start values and all end values. For every new start coordinate, put the corresponding segment into a temporary set I; for every new end coordinate, remove the corresponding segment from I.
Besides adding new segments to I, you check for each new segment whether it intersects one of the segments currently in I. If it does, the corresponding rectangles intersect, too.
You can find a detailed explanation of this algorithm here.
The runtime is O(n*log(n) + c*log(n)), where c is the number of intersection points of the lines in I.
Pick the smallest rectangle from the set (or any rectangle) and go over each point within it. If one of its points also lies in all the other rectangles, the intersection is not empty. If none of its points lies in all the other rectangles, the intersection is empty.
