Each rectangle consists of 4 doubles like this: (x0, y0, x1, y1)
The edges are parallel to the x and y axes
They are randomly placed - they may be touching at the edges, overlapping, or not touching at all
I need to find the area that is formed by their overlap - all the area in the canvas that more than one rectangle "covers" (for example with two rectangles, it would be the intersection)
I understand I need to use sweep line algorithm. Do I have to use a tree structure? What is the easiest way of using sweep line algorithm for this problem?
At first blush it seems that an O(n^2) algorithm should be straightforward, since we can just check all pairwise intersections. However, that would create the problem of double counting, as all points that are in 3 rectangles would get counted 3 times! After realizing that, an O(n^2) algorithm doesn't look bad to me now. If you can think of a trivial O(n^2) algorithm, please post.
Here is an O(n^2 log^2 n) algorithm.
Data structure: Point (p) {x_value, isBegin, isEnd, y_low, y_high, rectid}
[For each point, we have a single x_value, two y_values, and the ID of the rectangle which this point came from]
Given n rectangles, first create 2n points as above using the x_left and x_right values of the rectangle.
Create a list of points, and sort it on x_value. This takes O(n log n) time
Start from the left (index 0), use a map to put when you see a begin, and remove when you see an end point.
In other words:
Map m = new HashMap(); // rectangles currently "open" along the x-axis, keyed by rectangle id
for (Point p in the sorted list) {
    if (p.isBegin()) {
        m.put(p.rectid, p);
        if (m.size() >= 2) {
            checkOverlappingRectangles(m.values());
        }
    } else {
        m.remove(p.rectid); // so this takes O(log n) time (O(1) expected for a HashMap)
    }
}
Next, we need a function that takes a list of rectangles, knowing that all the rectangles overlap along the x-axis but may or may not overlap along the y-axis. That is in fact the same algorithm again; we just use the transposed data structure, since we are now interested in the y-axis.
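For reference, here is a deliberately naive coordinate-compression sketch of the quantity being asked for (not the sweep line above, and only about O(n^3), but easy to verify against): overlay the grid induced by all rectangle edges, count how many rectangles cover each cell, and sum the area of the cells covered more than once. All names are illustrative.

def overlap_area(rects):
    # rects: list of (x0, y0, x1, y1) with x0 < x1 and y0 < y1
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    area = 0.0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            # center of the grid cell; a rectangle covers the whole cell iff it covers the center
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            cover = sum(1 for (x0, y0, x1, y1) in rects
                        if x0 < cx < x1 and y0 < cy < y1)
            if cover >= 2:
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

print(overlap_area([(0, 0, 1, 1), (0.5, 0, 1.5, 1)]))  # two unit squares sharing a 0.5 x 1 strip -> 0.5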
Input
You have a list of points which represents a 2D point cloud.
Output
You have to generate a list of triangles (with as few triangles as possible) so that the following restrictions are fulfilled:
Each point from the cloud should be a vertex of a triangle or be inside a triangle.
Triangles can be built only on the points from the original point cloud.
Triangles should not intersect with each other.
One point of the cloud can be a vertex for several triangles.
If a triangle vertex lies on the side of another triangle, we assume such triangles do not intersect.
If a point lies on the side of a triangle, we assume the point is inside the triangle.
Investigation
I came up with a way to find the convex hull of the given set of points and divide that convex hull into triangles, but it is not the right solution.
Any guesses how to solve it?
Here is my opinion.
Create a Delaunay Triangulation of the point cloud.
Do a Mesh Simplification by Half Edge Collapse.
For step 1, the boundary of the triangulation will be the convex hull. You can also use a Constrained Delaunay Triangulation (CDT) if you need to honor a non-convex boundary.
For step 2, the half-edge collapse operation preserves existing vertices, so no new vertices will be added. Note that in your case the collapses are not removing vertices, they are only removing edges. Before applying an edge collapse you should check that you are not introducing triangle inversions (which produce self-intersections) and that no point ends up outside every triangle. The order of collapses matters, but you can follow the usual rule that measures the "cost" of a collapse in terms of introducing poor-quality triangles (i.e. thin triangles with very small angles). So you should choose collapses that produce triangles as close to equilateral as possible.
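If it helps, here is a minimal sketch of step 1 only, assuming SciPy is available (the half-edge collapse of step 2 is not shown; the sample points are illustrative):

import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0, 0], [2, 0], [1, 2], [1, 1], [2, 2], [0, 2]], dtype=float)
tri = Delaunay(points)      # Delaunay triangulation of the cloud
print(tri.simplices)        # each row is one triangle, given as indices into `points`
# The boundary of this triangulation is the convex hull of the input, and every
# input point is a vertex of some triangle, so the restrictions above are
# already satisfied before any simplification.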
Edit:
The order of collapses guides the simplification toward different results. It can be guided by criteria other than triangle quality. I think the number of empty triangles can be reduced by choosing collapses that produce triangles containing as many points as possible rather than empty ones. Still, all such criteria are heuristics.
Some musings about triangles and convex hulls
Ignoring any set with 2 or fewer points; 3 points always gives 1 triangle.
Make a convex hull.
Select any random internal point (unless all points are in the hull ...).
All points in the hull must be part of a triangle, as by definition of the convex hull they can't be internal.
Now we have an upper bound of triangles, namely the number of points in the hull.
Another upper bound is the number of points / 3, rounded up, as you can make that many independent triangles.
So the upper bound is the minimum of the two above.
We can also guess at a lower bound of roundup(hull points / 3), as every 3 neighboring hull points can make a triangle and any surplus can reuse 1-2 of them.
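A small sketch of just these bound estimates, assuming SciPy for the hull (illustrative only):

import math
import numpy as np
from scipy.spatial import ConvexHull

def triangle_count_bounds(points):
    n = len(points)
    h = len(ConvexHull(points).vertices)     # number of hull points
    upper = min(h, math.ceil(n / 3))         # minimum of the two upper bounds above
    lower = math.ceil(h / 3)                 # guessed lower bound
    return lower, upper

pts = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2], [1, 1]], dtype=float)
print(triangle_count_bounds(pts))            # (2, 2): 4 hull points, 2 inner points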
Now the difficult part: reducing the upper bound.
Walk through the inner points, using them as centers for triangles.
If any such triangle is empty we can save a triangle by removing the hull edge.
If two or more adjacent triangles are empty we will have to keep every other triangle, or join the 3 points into a new triangle, as the middle point can be left out.
Note the best result.
Is this proof that no better result exists? No.
If there exists a triangle that envelops all remaining points, that would be better.
N = number of points
U = upper bound
L = lower bound
T = set of triangles
R = set of remaining points
A = set of all points
B = best solution
BestSolution(A)
    if size A < 3 return NoSolution
    if size A == 3 return A
    if not Sorted(A)                   // O(N)
        SortByX(A)                     // O(N lg N), or radix sort if possible O(N)
    H = ConvexHull(A)
    noneHull = A - H
    B = HullTriangles(H, noneHull)     // removing empty triangles
    U = size B
    if noneHull == 0
        return U                       // make triangles of 3 successive points in H and add the remaining to the last
    if U > Roundup(N/3)
        U = Roundup(N/3)
        B = MakeIndependentTriangles(A)
    AddTriangle(empty, A)
    return                             // B is the best solution, size B is the number of triangles

AddTriangle(T, R)
    if size T+1 >= U return 0          // no reason to test if we just end up with another U solution
    ForEach r in R                     // O(N)
        ForEach p2 in A-r              // O(N)
            ForEach p3 in A-r-p2       // O(N)
                t = Triangle(r, p2, p3)
                c = Candidate(t, T, R)
                if c < 0
                    return c+1         // found better solution
    return 0

Candidate(t, T, R)
    if not Overlap(t, T)               // pt. 3, O(T), T < U
        left = R - t
        left -= ContainedPoints(t)     // O(R) -> O(N)
        if left is empty
            u = U
            U = size T + 1
            B = T + t
            return U - u               // found better solution
        return AddTriangle(T+t, left)
    return 0
So ... total runtime ...
Candidate O(N)
AddTriangle O(N^3)
recursion is limited to the current best solution U
O((N N^3)^U) -> O((N^4)^U)
space is O(U N)
So reducing U before we go to brute force is essential.
- Reducing R quickly should reduce recursion
- so starting with bigger and hopefully more enclosing triangles would be good
- any 3 points in the hull should make some good candidates
- these split the remaining points in 3 parts which can be investigated independently
- treat each part as a hull where its 2 base points are part of a triangle but the 3rd is not in the set.
- if possible make this a BFS so we can select the most enclosing first
- space might be a problem
- O(H U N)
- else start with points that are a 1/3 around the hull relative to each other first.
AddTriangle really sucks performance-wise, so how many triangles can we really make?
Selecting 3 out of N is
N!/(N-3)!
And we don't care about order so
N!/(3!(N-3)!)
N!/(6(N-3)!)
N (N-1) (N-2) / 6
Which is still O(N^3) for the loops, but it makes us feel better. The loops might still be faster if the permutation takes too long.
AddTriangle(T, R)
    if size T+1 >= U return 0               // no reason to test if we just end up with another U solution
    while t = LazySelectUnordered(3, R, A)  // always select one from R first, O(R (N-1)(N-2) / 6) aka O(N^3)
        c = Candidate(t, T, R)
        if c < 0
            return c+1                      // found better solution
    return 0
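One possible reading of LazySelectUnordered(3, R, A) - a lazy generator of unordered triples from A that use at least one point of R - might look like this (the name and exact behaviour are my assumptions):

from itertools import combinations

def lazy_select_unordered(R, A):
    # yields each unordered triple of points from A that contains at least one point of R
    R = set(R)
    for t in combinations(A, 3):
        if any(p in R for p in t):
            yield t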
Input:
A set of points
Coordinates are non-negative integers.
Integer k
Output:
A point P(x, y) (which may or may not be in the given set) whose Manhattan distance to the closest point of the set is maximal, with max(x, y) <= k
My (naive) solution:
For every (x, y) in the grid which contains the given set
BFS to find closest point to (x, y)
...
return maximum;
But I feel it runs very slowly for a large grid; please help me design a better algorithm (or the code / pseudocode) to solve this problem.
Instead of looping over every (x, y) in the grid, should I just loop over every median x, y?
P.S: Sorry for my English
EDIT:
example:
Given P1(x1,y1), P2(x2,y2), P3(x3,y3). Find P(x,y) such that min{dist(P,P1), dist(P,P2), dist(P,P3)} is maximal
Yes, you can do it better. I'm not sure if my solution is optimal, but it's better than yours.
Instead of doing a separate BFS for every point in the grid, do a 'cumulative' BFS from all the input points at once.
You start with a 2-dimensional array dist[k][k] with cells initialized to +inf, and to zero wherever there is an input point for that cell; then from every point P in the input you try to go in every possible direction. The further you are from the start point, the bigger the integer you put in the array dist. If there is already a value in dist for a specific cell, but you can get there with a smaller number of steps (smaller integer), you overwrite it.
In the end, when no more moves can be done, you scan the array dist to find the cell with maximum value. This is your point.
I think this would work quite well in practice.
For k = 3, assuming 1 <= x,y <= k, P1 = (1,1), P2 = (1,3), P3 = (2,2)
dist would initially be:
0, +inf, +inf,
+inf, 0, +inf,
0, +inf, +inf,
in the next step it would be:
0, 1, +inf,
1, 0, 1,
0, 1, +inf,
and in the next step it would be:
0, 1, 2,
1, 0, 1,
0, 1, 2,
so the output is P = (3,1) or (3,3)
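A hedged Python sketch of this cumulative BFS (coordinates assumed to run from 1 to k, as in the example above):

from collections import deque

def farthest_point(points, k):
    INF = float("inf")
    dist = [[INF] * (k + 1) for _ in range(k + 1)]   # dist[x][y], 1-based
    q = deque()
    for (x, y) in points:                            # every input point is a BFS source at distance 0
        dist[x][y] = 0
        q.append((x, y))
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 1 <= nx <= k and 1 <= ny <= k and dist[nx][ny] > dist[x][y] + 1:
                dist[nx][ny] = dist[x][y] + 1
                q.append((nx, ny))
    # the cell with the maximum value is farthest from its nearest input point
    best = max((dist[x][y], x, y) for x in range(1, k + 1) for y in range(1, k + 1))
    return (best[1], best[2]), best[0]

print(farthest_point([(1, 1), (1, 3), (2, 2)], 3))   # ((3, 3), 2), matching the example above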
If k is not too large and you need to find a point with integer coordinates, you should do as another answer suggested - calculate minimum distances for all points on the grid using BFS, starting from all given points at once.
A faster solution, for large k, and probably the only one which can find a point with float coordinates, is the following. It has complexity O(n log n log k).
Search for the resulting maximum distance using dichotomy (binary search). You have to check whether there is any point inside the square [0, k] x [0, k] which is at least a given distance away from all points in the given set. Suppose you can check that fast enough for any distance. It is obvious that if there is such a point for some distance R, there will also be some point for every smaller distance r < R; for example, the same point would do. Thus you can search for the maximum distance using a binary search procedure.
Now, how to quickly check for the existence of (and also find) a point which is at least r units away from all given points. You should draw "Manhattan spheres of radius r" around all given points. These are the sets of points at most r units away from a given point: squares tilted by 45 degrees, with diagonal equal to 2r. Now turn the picture by 45 degrees, and all the squares will be axis-parallel. Now you can check for the existence of any point outside such squares using a sweeping line algorithm. You have to sort all vertical edges of the squares, and then process them one by one from left to right. Left borders add a segment mark to the sweeping line, right borders erase it. And you have to check whether there is any unmarked point on the line. You can implement it using a segment tree. Then, you have to check whether there is any unmarked point on the line inside the initial square [0,k] x [0,k].
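The 45-degree rotation can be summarized in two lines: in (u, v) = (x + y, x - y) coordinates, Manhattan distance becomes Chebyshev distance, so every "Manhattan sphere of radius r" becomes an axis-aligned square of side 2r. A tiny sketch:

def rotate(p):
    x, y = p
    return (x + y, x - y)

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (1, 5), (4, 2)
assert manhattan(p, q) == chebyshev(rotate(p), rotate(q))   # both are 6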
So, again, the overall solution is a binary search over r. Inside of it you have to check whether there is any point at least r units away from all given points. Do that by constructing the "Manhattan spheres of radius r" and then sweeping them with a diagonal line from the top-left corner to the bottom-right. While moving the line you store the number of open spheres at each point of the line in a segment tree. Between the opening and closing of any sphere the line does not change, and if there is any free point there, it means that you have found it for distance r.
Binary search contributes log k to the complexity. Each checking procedure is n log n for sorting the square borders, and n log k (n log n?) for processing them all.
A Voronoi diagram would be another fast solution and could also find a non-integer answer. But it is much, much harder to implement, even for the Manhattan metric.
First try
We can turn the 2D problem into a 1D problem by projecting onto the lines y=x and y=-x. If the points are (x1,y1) and (x2,y2) then the Manhattan distance is abs(x1-x2)+abs(y1-y2). Change coordinates to a u-v system with basis U = (1,1), V = (1,-1). The coords of the two points in this basis are u1 = x1+y1, v1 = x1-y1, u2 = x2+y2, v2 = x2-y2. And the Manhattan distance is the larger of abs(u1-u2) and abs(v1-v2).
How does this help? We can just work with the 1D u-values of the points: sort by u-value, loop through the points and find the largest difference between pairs of consecutive points. Do the same for the v-values.
Calculating the u,v coords is O(n), quicksorting is O(n log n), looping through the sorted list is O(n).
Alas, this does not work well. It fails if we have the points (-10,0), (10,0), (0,-10), (0,10). Let's try a
Voronoi diagram
Construct a Voronoi diagram
using the Manhattan distance. This can be calculated in O(n log n) using https://en.wikipedia.org/wiki/Fortune%27s_algorithm
The vertices of the diagram are points which have locally maximal distance from their nearest sites. There is pseudo-code for the algorithm on the Wikipedia page. You might need to adapt it for Manhattan distance.
I was at a high frequency trading firm interview, and they asked me:
Find a square whose side length is R, given n points in the 2D plane,
conditions:
--sides parallel to the axes
--it contains at least 5 of the n points
--the running time does not depend on R
They told me to give them an O(n) algorithm.
Interesting problem, thanks for posting! Here's my solution. It feels a bit inelegant but I think it meets the problem definition:
Inputs: R, P = {(x_0, y_0), (x_1, y_1), ..., (x_N-1, y_N-1)}
Output: (u,v) such that the square with corners (u,v) and (u+R, v+R) contains at least 5 points from P, or NULL if no such (u,v) exists
Constraint: asymptotic run time should be O(n)
Consider tiling the plane with RxR squares. Construct a sparse matrix, B defined as
B[i][j] = {(x,y) in P | floor(x/R) = i and floor(y/R) = j}
As you are constructing B, if you find an entry that contains at least five elements stop and output (u,v) = (i*R, j*R) for i,j of the matrix entry containing five points.
If the construction of B did not yield a solution then either there is no solution or else the square with side length R does not line up with our tiling. To test for this second case we will consider points from four adjacent tiles.
Iterate the non-empty entries in B. For each non-empty entry B[i][j], consider the collection of points contained in the tile represented by the entry itself and in the tiles above and to the right. These are the points in entries: B[i][j], B[i+1][j], B[i][j+1], B[i+1][j+1]. There can be no more than 16 points in this collection, since each entry must have fewer than 5. Examine this collection and test if there are 5 points among the points in this collection satisfying the problem criteria; if so stop and output the solution. (I could specify this algorithm in more detail, but since (a) such an algorithm clearly exists, and (b) its asymptotic runtime is O(1), I won't go into that detail).
If after iterating the entries in B no solution is found then output NULL.
The construction of B involves just a single pass over P and hence is O(N). B has no more than N elements, so iterating it is O(N). The algorithm for each element in B considers no more than 16 points and hence does not depend on N and is O(1), so the overall solution meets the O(N) target.
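A hedged Python sketch of this approach (I gather the 3x3 neighbourhood of each non-empty tile rather than just the 2x2 block, which is still O(1) work per tile and also covers squares whose lower-left tile happens to be empty; all names are illustrative):

from collections import defaultdict
from math import floor

def find_square(points, R):
    B = defaultdict(list)                        # the sparse tiling matrix
    for (x, y) in points:
        i, j = floor(x / R), floor(y / R)
        B[(i, j)].append((x, y))
        if len(B[(i, j)]) >= 5:                  # a whole tile already works
            return (i * R, j * R)
    for (i, j) in list(B.keys()):
        cand = [p for di in (-1, 0, 1) for dj in (-1, 0, 1)
                for p in B[(i + di, j + dj)]]
        # O(1) brute force: a valid square can be slid so that its left and
        # bottom edges touch candidate points, so try those anchors
        for (u, _) in cand:
            for (_, v) in cand:
                if sum(1 for (x, y) in cand
                       if u <= x <= u + R and v <= y <= v + R) >= 5:
                    return (u, v)
    return None

print(find_square([(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)], 3))   # (0, 0)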
Run through the set once, keeping the 5 largest x values in a (sorted) local array. Maintaining the sorted local array is O(N) (constant work performed at most N times).
Define xMax and xMin as the x-coordinates of the points with the largest and 5th largest x values respectively (i.e. a[0] and a[4]).
Sort a[] again on y value, and set yMax and yMin as above, again in constant time.
Define deltaX = xMax - xMin, deltaY = yMax - yMin, and R = the larger of deltaX and deltaY.
The square of side length R located with its upper-right corner at (xMax, yMax) meets the criteria.
Observation if R is fixed in advance:
O(N) complexity means no sort is allowed except on a fixed number of points, as only a Radix sort would meet the criteria and it requires a constraint on the values of xMax-xMin and of yMax-yMin, which was not provided.
Perhaps the trick is to start with the point furthest down and left, and move up and right. The lower-left-most point can be determined in a single pass of the input.
Moving up and right in steps and counting points in the square requires sorting the points on X and Y in advance, which, to be done in O(N) time, requires that the radix sort constraint be met.
Input: list of 2d points (x,y) where x and y are integers.
Distance: distance is defined as the Manhattan distance.
ie:
def dist(p1, p2):
    return abs(p1.x - p2.x) + abs(p1.y - p2.y)
What is an efficient algorithm to find the point that is closest to all other points?
I can only think of a brute force O(n^2) solution:
minDist = inf
bestPoint = null
for p1 in points:
    dist = 0
    for p2 in points:
        dist += distance(p1, p2)
    minDist = min(dist, minDist)
    bestPoint = argmin(p1, bestPoint)
basically look at every pair of points.
Note that in 1-D the point that minimizes the sum of distances to all the points is the median.
In 2-D the problem can be solved in O(n log n) as follows:
Create a sorted array of x-coordinates and for each element in the array compute the "horizontal" cost of choosing that coordinate. The horizontal cost of an element is the sum of distances to all the points projected onto the X-axis. This can be computed in linear time by scanning the array twice (once from left to right and once in the reverse direction). Similarly create a sorted array of y-coordinates and for each element in the array compute the "vertical" cost of choosing that coordinate.
Now for each point in the original array, we can compute the total cost to all other points in O(1) time by adding the horizontal and vertical costs. So we can compute the optimal point in O(n). Thus the total running time is O(n log n).
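A hedged Python sketch of this O(n log n) approach (two prefix-sum passes per axis; names are illustrative):

def axis_costs(values):
    # for each value v, the sum of |v - w| over all w in `values`
    order = sorted(range(len(values)), key=lambda i: values[i])
    cost = [0] * len(values)
    total = 0
    for k, i in enumerate(order):                  # left-to-right pass: distances to smaller values
        cost[i] += k * values[i] - total
        total += values[i]
    total = 0
    for k, i in enumerate(reversed(order)):        # right-to-left pass: distances to larger values
        cost[i] += total - k * values[i]
        total += values[i]
    return cost

def best_point(points):
    cx = axis_costs([p[0] for p in points])        # "horizontal" costs
    cy = axis_costs([p[1] for p in points])        # "vertical" costs
    totals = [cx[i] + cy[i] for i in range(len(points))]
    i = min(range(len(points)), key=totals.__getitem__)
    return points[i], totals[i]

print(best_point([(1, 2), (1, 1), (1, 0)]))        # ((1, 1), 2)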
What you are looking for is the center of mass.
You basically add all the x's and y's together and divide by the mass of the whole system.
Now, you have particles of uniform mass; let their mass be 1.
Then all you have to do is sum the locations of the particles and divide by the number of particles.
Say we have p1(1,2), p2(1,1), p3(1,0)
// we sum the x's
bestXcord = (1+1+1)/3 = 1
// we sum the y's
bestYcord = (2+1+0)/3 = 1
so p2 is the closest.
solved in O(n)
Starting from your initial algorithm, there is an optimization possible:
minDist = inf
bestPoint = null
for p1 in points:
    dist = 0
    for p2 in points:
        dist += distance(p1, p2)
        // This will weed out bad points rather fast
        if dist >= minDist then continue(p1)
    /*
    // Unnecessary because of the new line above
    minDist = min(dist, minDist)
    bestPoint = argmin(p1, bestPoint)
    */
    minDist = dist
    bestPoint = p1
The idea is to throw away outliers as fast as possible. This can be improved further:
Start the p1 loop with a heuristic "inner" point (this creates a good minDist first, so worse points get thrown away faster).
Start the p2 loop with heuristic "outer" points (this makes dist rise fast, possibly triggering the exit condition faster).
If you trade size for speed, you can go another route:
// assumes points are numbered 0..n
dist[] = int[n+1]; // initialized to 0
for (i = 0; i < n; i++)
    for (j = i+1; j <= n; j++) {
        d = distance(p[i], p[j]); // each pair is computed only once
        dist[i] += d;
        dist[j] += d;
    }
minDist = min(dist);
bestPoint = p[argmin(dist)];
which needs more space for the dist array, but calculates each distance only once. This has the advantage of being better suited to a "get the N best points" sort of problem and to calculation-intensive metrics. I suspect it would bring nothing for the Manhattan metric on an x86 or x64 architecture, though: the memory accesses would heavily dominate the calculation.
Given points in Euclidean space, is there a fast algorithm to count the number of points 'under' one arbitrary hyperplane? Fast means time complexity lower than O(n)
Time for preprocessing or sorting the points is okay
And, even if not in high dimensions, I'd like to know whether there exists one that can be used in 2-dimensional space.
If you're willing to preprocess the points, then you have to visit each one at least once, which is O(n). If you consider a test of which side the point is on as part of the preprocessing then you've got an O(0) algorithm (with O(n) preprocessing). So I don't think this question makes sense as stated.
Nevertheless, I'll attempt to give a useful answer, even if it's not precisely what the OP asked for.
Choose a hyperplane unit normal and root point. If the plane is given in parametric form
(P - O).N == 0
then you have these already, just make sure the normal is unitized.
If it's given in analytic form: Sum(i = 1 to n: a[i] x[i]) + d = 0, then the vector A = (a[1], ..., a[n]) is a normal of the plane, and N = A/||A|| is the unit plane normal. A point O (for origin) on the plane is -(d/||A||) N.
You can test which side each point P is on by projecting it onto N and checking the sign of the parameter:
Let V = P - O. V is the vector from the chosen origin O to P.
Let s N be the projection of V onto N. If s is negative, then P is "under" the hyperplane.
You should go to the link on vector projection if you're rusty on the subject, but I'll summarize here using my notation. Or, you can take my word for it, and just skip to the formula at the end.
If alpha is the angle between V and N, then from the definition of cosine we have cos(alpha) = s||N||/||V|| = s/||V||, since N is a unit normal. But we also know from vector algebra that cos(alpha) = (V.N)/(||V|| ||N||) = (V.N)/||V||, where "." is the scalar product (a.k.a. dot product, or Euclidean inner product).
Equating these two expressions for cos(alpha) we have
s = V.N
So your preprocessing work is to compute N and O, and your test is:
bool is_under = (dot(V, N) < 0.);
I don't believe it can be done any faster.
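A minimal numeric sketch of the preprocessing and the test above, for the analytic form of the plane (assumes NumPy; names are illustrative):

import numpy as np

def make_plane(a, d):
    # plane given as a[0]*x[0] + ... + a[n-1]*x[n-1] + d = 0
    a = np.asarray(a, dtype=float)
    N = a / np.linalg.norm(a)            # unit normal
    O = -(d / np.linalg.norm(a)) * N     # a point on the plane
    return N, O

def is_under(P, N, O):
    V = np.asarray(P, dtype=float) - O
    return float(np.dot(V, N)) < 0.0     # negative projection means "under"

N, O = make_plane([0.0, 1.0], -1.0)      # the 2D line y = 1
print(is_under([3.0, 0.5], N, O))        # True: below the line
print(is_under([3.0, 2.0], N, O))        # False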
When setting the point values, check the condition for each point as you set it, and increment (or don't increment) the counter accordingly. O(n)
I found an O(log N) algorithm in 2 dimensions using divide-and-conquer and binary search, with O(N log N) preprocessing time and O(N log N) memory.
The basic idea is that the points can be divided into the left N/2 points and the right N/2 points, and the number of points under the line (in 2D) is the sum of the number of left points under the line and the number of right points under the line. I'll call the infinite line that divides the whole point set into 'left' and 'right' the 'dividing line'. The dividing line will look like 'x = k'.
If the 'left points' and the 'right points' are each sorted in y order, then the number of specific points - the points in the lower-right corner - can be quickly found by binary searching for the number of points whose y values are lower than the y value of the intersection point of the query line and the dividing line.
Therefore time complexity is
T(N) = 2T(N/2) + O(log N)
and finally the time complexity is O(log N)