How to calculate total volume of multiple overlapping cuboids - algorithm

I have a list of cuboids, defined by the coordinates of their lower-left-back and upper-right-front corners, with edges parallel to the axes. Coordinates are double values. These cuboids are densely packed: each may overlap with one or more others, or even fully contain others.
I need to calculate the total volume encompassed by all the given cuboids. Areas which overlap (even multiple times) should be counted exactly once.
For example, the volumes:
((0,0,0) (3,3,3))
((0,1,0) (2,2,4))
((1,0,1) (2,5,2))
((6,6,6) (8,8,8))
The total volume is 27 + 2 + 2 + 8 = 39: 27 for the first cuboid, plus the parts of the second and third cuboids that stick out of it, plus the disjoint fourth cuboid.
Is there an easy way to do this (in O(n^3) time or better)?

How about maintaining a collection of non-intersecting cuboids as each one is processed?
This collection would start empty.
The first cuboid would be added to the collection – it would be the only element, therefore guaranteed not to intersect anything else.
The second and subsequent cuboids would be checked against the elements in the collection. For each new cuboid N, for each element E already in the collection:
If N is totally contained by E, discard N and resume processing at the next new cuboid.
If N totally contains E, remove E from the collection and continue testing N against the other elements in the collection.
If N intersects E, split N into up to six smaller cuboids (depending on how they intersect) representing the volume that does not intersect, and continue testing these smaller cuboids against the other elements in the collection.
If we get to the end of the tests against the non-intersecting elements with one or more cuboids generated from N (representing the volume contributed by N that wasn't in any of the previous cuboids), then add them all to the collection and process the next cuboid.
Once all the cuboids have been processed, the total volume will be the sum of the volumes in the collection of non-intersecting cuboids that has been built up.
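To make the bookkeeping concrete, here is a minimal Python sketch of this approach (the names and box representation are my own): a helper that subtracts one axis-aligned box from another, producing at most six pieces, and a loop that maintains the collection of non-intersecting boxes.

def volume(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def intersects(a, b):
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def subtract(n, e):
    # Split box n into at most six boxes covering the part of n outside box e.
    pieces = []
    lo, hi = list(n[0]), list(n[1])
    for axis in range(3):
        if lo[axis] < e[0][axis]:                  # slab of n below e on this axis
            cut_hi = hi.copy(); cut_hi[axis] = e[0][axis]
            pieces.append((tuple(lo), tuple(cut_hi)))
            lo = lo.copy(); lo[axis] = e[0][axis]
        if hi[axis] > e[1][axis]:                  # slab of n above e on this axis
            cut_lo = lo.copy(); cut_lo[axis] = e[1][axis]
            pieces.append((tuple(cut_lo), tuple(hi)))
            hi = hi.copy(); hi[axis] = e[1][axis]
    return pieces                                  # the clipped remainder equals the overlap and is dropped

def union_volume(cuboids):
    disjoint = []                                  # collection of non-intersecting boxes
    for new in cuboids:
        fragments = [new]
        for e in disjoint:
            fragments = [piece for f in fragments
                         for piece in (subtract(f, e) if intersects(f, e) else [f])]
        disjoint.extend(fragments)
    return sum(volume(b) for b in disjoint)

boxes = [((0,0,0), (3,3,3)), ((0,1,0), (2,2,4)), ((1,0,1), (2,5,2)), ((6,6,6), (8,8,8))]
print(union_volume(boxes))                         # 39 for the example in the question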

This can be solved efficiently using a plane-sweep algorithm, which is a straightforward extension of the line-sweep algorithm suggested here for finding the total area of overlapping rectangles.
For each cuboid, add its left and right x-coordinates to an event queue and sort the queue. Now sweep a yz-plane (of constant x value) through the cuboids and record the volume between any two successive events in the event queue. We do this by maintaining the list of rectangles that intersect the plane at any stage.
As we sweep the plane we encounter two types of events:
(1) We see the beginning of a new cuboid that starts intersecting the sweeping plane. In this case a new rectangle intersects the plane, and we update the area of the rectangles intersecting the sweeping plane.
(2) We see the end of an existing cuboid that was intersecting the plane. In this case we have to remove the corresponding rectangle from the list of rectangles currently intersecting the plane and update the area of the remaining rectangles.
The volume of the cuboids between any two successive events q_i and q_(i+1) is equal to the distance along the x-axis between the two events times the area of the rectangles intersecting the sweep plane at q_i.
By using the O(n log n) algorithm for computing the area of a union of rectangles as a subroutine, we obtain an O(n^2 log n) algorithm for computing the total volume of the cuboids. But there may be a better way of maintaining the rectangles (since we only add or delete one rectangle at each event) that is more efficient.
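Here is a rough Python sketch of this sweep (names are mine). For brevity the 2D subroutine below computes the union area of the active rectangles by simple coordinate compression rather than the O(n log n) sweep mentioned above, but the overall structure is the same.

def union_area(rects):
    # Area of the union of axis-aligned rectangles given as (y0, z0, y1, z1).
    if not rects:
        return 0.0
    ys = sorted({v for r in rects for v in (r[0], r[2])})
    zs = sorted({v for r in rects for v in (r[1], r[3])})
    area = 0.0
    for i in range(len(ys) - 1):
        for j in range(len(zs) - 1):
            covered = any(r[0] <= ys[i] and r[2] >= ys[i+1] and
                          r[1] <= zs[j] and r[3] >= zs[j+1] for r in rects)
            if covered:
                area += (ys[i+1] - ys[i]) * (zs[j+1] - zs[j])
    return area

def total_volume(cuboids):
    # cuboids: list of ((x0, y0, z0), (x1, y1, z1))
    events = sorted({c[0][0] for c in cuboids} | {c[1][0] for c in cuboids})
    volume = 0.0
    for x0, x1 in zip(events, events[1:]):
        # yz-rectangles of all cuboids whose x-extent spans the slab [x0, x1]
        active = [(c[0][1], c[0][2], c[1][1], c[1][2])
                  for c in cuboids if c[0][0] <= x0 and c[1][0] >= x1]
        volume += (x1 - x0) * union_area(active)
    return volume

boxes = [((0,0,0), (3,3,3)), ((0,1,0), (2,2,4)), ((1,0,1), (2,5,2)), ((6,6,6), (8,8,8))]
print(total_volume(boxes))                         # 39.0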

I recently had the same problem and found the following approach easy to implement and working for n dimensions.
First build a grid and then check for each cell in the grid whether it overlaps with a cuboid or not. The volume of overlapping cuboids is the sum of the volumes for those cells which are included in one or more cuboids.
Describe your cuboids with their min/max value for each dimension.
For each dimension store min/max values of each cuboid in an array. Sort this array and remove duplicates.
Now you have the grid points of a non-equidistant grid. Each cell of this grid is either completely inside one or more cuboids or completely outside all of them.
Iterate over the grid cells and count the volume for those cells which overlap with one or more cuboids.
You can get all grid cells by using the Cartesian Product.
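A minimal n-dimensional sketch of this (with names of my own choosing), where each cuboid is a pair (mins, maxs) of coordinate tuples:

from itertools import product

def grid_union_volume(cuboids, dims=3):
    # 1. the distinct, sorted min/max values per dimension form the grid lines
    coords = [sorted({c[0][d] for c in cuboids} | {c[1][d] for c in cuboids})
              for d in range(dims)]
    total = 0.0
    # 2. iterate over all grid cells (Cartesian product of per-axis intervals)
    for cell in product(*(list(zip(axis, axis[1:])) for axis in coords)):
        # cell is a tuple of (lo, hi) intervals, one per dimension
        inside = any(all(c[0][d] <= cell[d][0] and c[1][d] >= cell[d][1]
                         for d in range(dims)) for c in cuboids)
        if inside:
            cell_volume = 1.0
            for lo, hi in cell:
                cell_volume *= (hi - lo)
            total += cell_volume
    return total

boxes = [((0,0,0), (3,3,3)), ((0,1,0), (2,2,4)), ((1,0,1), (2,5,2)), ((6,6,6), (8,8,8))]
print(grid_union_volume(boxes))                    # 39.0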

I tried the cellular approach suggested by @ccssmnn; it worked but was way too slow. The problem is that the size of the array used for "For each dimension store min/max values of each cuboid in an array" is O(n), so the number of cells (and hence the execution time) is n^d, e.g., n^3 for three dimensions.
Next, I tried a nested sweep-line algorithm, as suggested by @krjampani; much faster but still too slow. I believe the complexity is n^2*log^3(n).
So now, I'm wondering if there's any recourse. I've read several postings that mention the use of interval trees or augmented interval trees; might this approach have better complexity, e.g., n*log^3(n)?
Also, I'm trying to get my head around what the augmenting value would be in this case. For point or range queries, I can see sorting the cuboids by their (xlo,ylo,zlo) and using max(xhi,yhi,zhi) of each subtree as the augmenting value, but I can't figure out how to extend this to keep track of the union of the cuboids and its volume.

Related

How to index nearby 3D points on the fly?

In physics simulations (for example, n-body systems) it is sometimes necessary to keep track, in some kind of index, of which particles (points in 3D space) are close enough to interact (within some cutoff distance d). However, particles move around, so it is necessary to update the index, ideally on the fly without recomputing it entirely. Also, for efficiency in calculating interactions it is necessary to keep the list of interacting particles in the form of tiles: a tile is a fixed-size array (e.g., 32x32) where the rows and columns are particles, and almost every row particle is close enough to interact with almost every column particle (and the array keeps track of which ones actually do interact).
What algorithms may be used to do this?
Here is a more detailed description of the problem:
Initial construction: Given a list of points in 3D space (on the order of a few thousand to a few million, stored as an array of floats), produce a list of tiles of a fixed size (NxN), where each tile has two lists of points (N row points and N column points) and an NxN boolean array which describes whether the interaction between each row and column particle should be calculated, and for which:
a. every pair of points p1,p2 for which distance(p1,p2) < d is found in at least one tile and marked as being calculated (no missing interactions), and
b. if any pair of points is in more than one tile, it is only marked as being calculated in the boolean array in at most one tile (no duplicates),
and also the number of tiles is relatively small if possible (but this is less important than being able to update the tiles efficiently)
Update step: If the positions of the points change slightly (by much less than d), update the list of tiles in the fastest way possible so that they still meet the same conditions a and b (this step is repeated many times)
It is okay to keep any necessary data structures that help with this, for example the bounding boxes of each tile, or a spatial index like a quadtree. It is probably too slow to calculate all pairwise particle distances for every update step (and in any case we only care about particles which are close, so we can skip most possible pairs of distances just by sorting along a single dimension, for example). It is probably also too slow to keep a full (quadtree or similar) index of all particle positions. On the other hand it is perfectly fine to construct the tiles on a regular grid of some kind. The density of particles per unit volume in 3D space is roughly constant, so the tiles can probably be built from (essentially) fixed-size bounding boxes.
To give an example of the typical scale/properties of this kind of problem, suppose there are 1 million particles, arranged as a random packing of spheres of diameter 1 unit into a cube of size roughly 100x100x100. Suppose the cutoff distance is 5 units, so typically each particle would be interacting with (2*5)**3 or ~1000 other particles or so. The tile size is 32x32. There are roughly 1e+9 interacting pairs of particles, so the minimum possible number of tiles is ~1e+6. Now assume each time the positions change, the particles move a distance around 0.0001 unit in a random direction, but always in a way such that they are at least 1 unit away from any other particle and the typical density of particles per unit volume stays the same. There would typically be many millions of position update steps like that. The number of newly created pairs of interactions per step due to the movement is (back of the envelope) (10**2 * 6 * 0.0001 / 10**3) * 1e+9 = 60000, so one update step could in principle be handled by marking 60000 pairs as non-interacting in their original tiles, and adding at most 60000 new tiles (mostly empty - one per pair of newly interacting particles). This would rapidly get to a point where most tiles are empty, so it is definitely necessary to combine/merge tiles somehow pretty often - but how to do it without a full rebuild of the tile list?
P.S. It is probably useful to describe how this differs from the typical spatial index (e.g., octrees) scenario: a. we only care about grouping nearby points together into tiles, not looking up which points are in an arbitrary bounding box or which points are closest to a query point - a bit closer to clustering than querying; b. the density of points in space is pretty constant; and c. the index has to be updated very often, but most moves are tiny.
Not sure my reasoning is sound, but here's an idea:
Divide your space into a regular grid of 3D cubes with a side length of d. Then do the following:
Assign all points to the cubes that contain them; this is fast since you can derive a point's cube from just its coordinates
Now check the following:
Mark all pairs of points in the top-left quarter of each cube as colliding; they're less than d apart. Further, every "quarter cube" in space is the top-left quarter of exactly one cube, so you won't check the same pair twice.
Check for collisions of type (p, q), where p is a point in the top-left quarter and q is a point not in the top-left quarter. In this way, you will again check each pair of points at most once, because every pair of quarters is checked exactly once.
Since every pair of points is either in the same quarter or in neighbouring quarters, it will be checked by the first or the second step. Further, since points are approximately evenly distributed, your runtime is much less than n^2 (n = number of points); in aggregate it's about k^2 (k = number of points per quarter, which appears to be approximately constant).
In an update step, you only need to check:
if a point crossed the boundary of a box, which should be fast since you can look at one coordinate at a time, and the box boundaries are simple multiples of d/2
check for collisions of the points as above
To create the tiles, divide the space into a second grid of (non-overlapping) cubes whose width is chosen such that the average number of interaction centers (midpoints of pairs of particles that interact or nearly interact) falling into a given cube is less than the width of your tiles (i.e. 32). Since each particle is expected to interact with 300-500 particles, this width will be much smaller than d.
Then, while checking for interactions in steps 1 and 2, assign particle interactions to these new cubes according to the coordinates of the center of their interaction. Assign one tile per cube, and mark interacting particles assigned to that cube in the tile.
Further optimizations might be to consider the distance of a point's closest neighbour within a cube, and derive from that how many update steps are needed at least to change the collision status of that point; then ignore that point for this many steps.
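For reference, here is a small Python sketch of the underlying grid idea in its plainest form (a standard cell list rather than the exact quarter-cube bookkeeping above; names are mine): with cell side d, any pair within distance d lies in the same cell or in one of its 26 neighbours.

from collections import defaultdict
from itertools import product
from math import dist, floor

def build_cells(points, d):
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple(floor(c / d) for c in p)].append(i)
    return cells

def close_pairs(points, d):
    cells = build_cells(points, d)
    pairs = set()
    for key, members in cells.items():
        # compare against this cell and its 26 neighbouring cells
        for offset in product((-1, 0, 1), repeat=3):
            nkey = tuple(k + o for k, o in zip(key, offset))
            for i in members:
                for j in cells.get(nkey, ()):
                    if i < j and dist(points[i], points[j]) < d:
                        pairs.add((i, j))
    return pairs

pts = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (2.0, 2.0, 2.0)]
print(close_pairs(pts, d=0.5))                     # {(0, 1)}

In an update step, only points that cross a cell boundary need to have their cell assignment (and the pairs involving them) recomputed.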
I suggest the following algorithm. E.g., we have a 1x1x1 cube and the cutoff distance is 0.001.
Let's choose three base anchor points: (0,0,0), (0,1,0), (1,0,0).
Associate an array of size 1000 (1 / 0.001) with each anchor point.
Add three fields to each regular point. We will store the distance between the given point and each anchor point in these fields.
At the same time this distance is used as an index into the anchor point's array, e.g. 0.4324 means index 432.
Each slot of the three arrays stores the set of points whose distance falls into that bucket.
Recalculate the distance between a regular point and each anchor point every time the point is updated.
Move the point between the sets in the arrays during the update.
These structures give you an easy way to find all nearby points: it is the intersection of three sets, and we choose these sets based on the distances between the point and the anchor points.
In short, it is the intersection of three spherical shells. You may need to apply additional filtering to the result if you want to trim the corners of this intersection.
Consider using the Barnes-Hut algorithm or something similar. A simulation in 2D would use a quadtree data structure to store particles, and a 3D simulation would use an octree.
The benefit of using a tree structure is that it stores the particles in a way that lets nearby particles be found quickly by traversing the tree, while far-away particles lie on traversal paths that can be ignored.
Wikipedia has a good description of the algorithm:
The Barnes–Hut tree
In a three-dimensional n-body simulation, the Barnes–Hut algorithm recursively divides the n bodies into groups by storing them in an octree (or a quad-tree in a 2D simulation). Each node in this tree represents a region of the three-dimensional space. The topmost node represents the whole space, and its eight children represent the eight octants of the space. The space is recursively subdivided into octants until each subdivision contains 0 or 1 bodies (some regions do not have bodies in all of their octants). There are two types of nodes in the octree: internal and external nodes. An external node has no children and is either empty or represents a single body. Each internal node represents the group of bodies beneath it, and stores the center of mass and the total mass of all its children bodies.
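As a rough illustration of the octree construction described in the quoted passage, here is a compact Python sketch (names are mine; the centre-of-mass and total-mass bookkeeping that Barnes-Hut adds to internal nodes is omitted):

class OctNode:
    def __init__(self, center, half):
        self.center, self.half = center, half      # cube centre and half-width
        self.point = None                          # payload of an external node
        self.children = None                       # list of 8 children once split

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _child_for(self, p):
        if self.children is None:                  # split this node on demand
            h = self.half / 2
            cx, cy, cz = self.center
            self.children = [OctNode((cx + (h if i & 1 else -h),
                                      cy + (h if i & 2 else -h),
                                      cz + (h if i & 4 else -h)), h)
                             for i in range(8)]
        return self.children[self._child_index(p)]

    def insert(self, p):
        if self.children is None and self.point is None:
            self.point = p                         # empty external node: store the body
        else:
            if self.point is not None:             # occupied leaf: push its body down
                old, self.point = self.point, None
                self._child_for(old).insert(old)
            self._child_for(p).insert(p)           # recurse into the right octant

root = OctNode(center=(0.0, 0.0, 0.0), half=50.0)
for body in [(1.0, 2.0, 3.0), (10.0, -4.0, 7.0), (1.1, 2.1, 3.1)]:
    root.insert(body)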

Merging overlapping axis-aligned rectangles

I have a set of axis aligned rectangles. When two rectangles overlap (partly or completely), they shall be merged into their common bounding box. This process works recursively.
Detecting all overlaps and using union-find to form groups, which you merge at the end, will not work, because merging two rectangles covers a larger area and can create new overlaps. (For example, after two overlapping rectangles have been merged, their bounding box may overlap a third rectangle that neither of them overlapped before.)
As in my case the number of rectangles is moderate (say N<100), a brute force solution is usable (try all pairs and if an overlap is found, merge and restart from the beginning). Anyway I would like to reduce the complexity, which is probably O(N³) in the worst case.
Any suggestion how to improve this ?
I think an R-Tree will do the job here. An R-Tree indexes rectangular regions and lets you insert, delete and query (e.g., intersection queries) in O(log n) for "normal" queries in low dimensions.
The idea is to process your rectangles successively; for each rectangle you do the following:
1. Perform an intersect query on the current R-Tree (empty in the beginning).
2. If there are results, then delete the results from the R-Tree, merge the current rectangle with all result rectangles and insert the newly merged rectangle (for the last step jump to step 1).
3. If there are no results, just insert the rectangle into the R-Tree.
In total you will perform
O(n) intersect queries in step 1. (O(n log n))
O(n) insert steps in step 3. (O(n log n))
at most n delete and n insert steps in step 2. This is because each time you perform step 2 you decrease the number of rectangles by at least 1 (O(n log n))
In theory you should get away with O(n log n); however, the merging steps at the end (with large rectangles) might have low selectivity and need more than O(log n), but depending on the data distribution this should not ruin the overall runtime.
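Here is a sketch of that merge-until-stable loop in Python. To keep it self-contained the "index" is just a list with linear scans; in practice you would back it with a real R-Tree so that the query, delete and insert steps take O(log n).

def overlaps(a, b):
    # rectangles are (x0, y0, x1, y1); touching rectangles count as overlapping here
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def bounding_box(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_rectangles(rects):
    index = []                                     # stand-in for the R-Tree
    for rect in rects:
        current = rect
        while True:
            hits = [r for r in index if overlaps(r, current)]   # step 1: query
            if not hits:
                break
            for h in hits:                         # step 2: delete and merge
                index.remove(h)
                current = bounding_box(current, h)
            # loop back to step 1 with the enlarged rectangle
        index.append(current)                      # step 3: insert
    return index

print(merge_rectangles([(0, 0, 2, 2), (1, 1, 3, 3), (2.5, 2.5, 4, 4), (10, 10, 11, 11)]))
# the first three merge into (0, 0, 4, 4); (10, 10, 11, 11) stays separate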
Use a balanced normalized quad-tree.
Normalized: Gather all the x coordinates, sort them and replace them with the index in the sorted array. Same for the y coordinates.
Balanced: When building the quad-tree always split at the middle coordinate.
So when you get a rectangle, you go and mark the correct nodes in the tree with some id of the rectangle. If you find any other rectangles underneath (that means they overlap), gather them in a set. When done, if the set is not empty (you found overlapping rectangles), create a new rectangle representing the union of the sub-rectangles. If the computed rectangle is bigger than the one you just inserted, apply the algorithm again using the new computed rectangle. Repeat this until it no longer grows, then move on to the next input rectangle.
For performance, every node in the quad-tree stores all the rectangles overlapping that node, in addition to being marked as an end node.
Complexity: the initial normalization is O(N log N). Inserting and checking for overlaps is O(log(N)^2). You need to do this for the original N rectangles and also for the overlaps. Every time you find an overlap you eliminate at least one of the original rectangles, so you can find at most N-1 overlaps. So overall you need about 2N such operations, and the overall complexity is O(N log(N)^2).
This is better than the other approaches because you don't need to check every rectangle against every other rectangle for overlap.
This can be solved using a combination of plane sweep and spatial data structure: we merge intersecting rectangles along the sweep line and put any rectangles behind the sweep line to spatial data structure. Every time we get a newly merged rectangle we check spatial data structure for any rectangles intersecting this new rectangle and merge it if found.
If any rectangle (R) behind the sweep line intersects some rectangle (S) under sweep line then either of two corners of R nearest to the sweep line is inside S. This means that spatial data structure should store points (two points for each rectangle) and answer queries about any point lying inside a given rectangle. One obvious way to implement such data structure is segment tree where each leaf contains the rectangle with top and bottom sides at corresponding y-coordinate and each other node points to one of its descendants containing the rightmost rectangle (where its right side is nearest to the sweep line).
To use such segment tree we should compress (normalize) y-coordinates of rectangles' corners.
1. Compress y-coordinates.
2. Sort rectangles by the x-coordinate of their left sides.
3. Move the sweep line to the next rectangle; if it passes the right sides of some rectangles, move them to the segment tree.
4. Check if the current rectangle intersects anything along the sweep line. If not, go to step 3.
5. Check if the union of the rectangles found in step 4 intersects anything in the segment tree and merge recursively, then go to step 4.
6. When step 3 reaches the end of the list, get all rectangles under the sweep line and all rectangles in the segment tree and uncompress their coordinates.
Worst-case time complexity is determined by segment tree: O(n log n). Space complexity is O(n).

Minimum amount of rectangles from multi-colored grid

I've been working for some time on an XNA roguelike game, and I can't get my head around the following problem: developing an algorithm to divide a matrix of non-binary values into the fewest rectangles grouping identical values.
Example: given the following matrix
  01234567
0 ---##*##
1 ---##*##
2 --------
The algorithm should return:
3x3 rectangle of '-'s starting at (0,0)
2x2 rectangle of '#'s starting at (3, 0)
1x2 rectangle of '*'s starting at (5, 0)
2x2 rectangle of '#'s starting at (6, 0)
5x1 rectangle of '-'s starting at (3, 2)
Why am I doing this: I've got a pretty big dungeon map with a size of approximately 500x500. If I were to individually call the "Draw" method for each tile's Sprite, my FPS would be far too low. It is possible to optimize this process by grouping similar-textured tiles and applying texture repetition to them, which would dramatically decrease the number of GPU draw calls. For example, if my map were the previous matrix, instead of calling draw 24 times, I'd call it only 5 times.
I've looked at some algorithms which can give you the biggest rectangle of a type inside a given binary matrix, but that doesn't fit my problem.
Thanks in advance!
You can use breadth first searches to separate each area of different tile type.
Picking a partitioning within the individual shapes is an NP-hard problem (see https://en.wikipedia.org/wiki/Graph_partition), so you can't find an efficient solution that guarantees the minimum number of rectangles. However if you don't mind an extra rectangle or two for each shape and your shapes are relatively small, you can come up with algorithms that split the shape into a number of rectangles close to the minimum.
An off-the-top-of-my-head guess for something that could potentially work would be to pick a tile with the maximum number of connecting tiles and start growing a rectangle from it, using a recursive algorithm to maximize the size. Remove the resulting rectangle from the shape, then repeat until there are no more tiles not included in a rectangle. Again, this won't produce perfect results; there are inputs on which this will return more than the minimum number of rectangles, but it's an easy-to-implement ballpark solution. With a little more effort I'm sure you will be able to find better heuristics and get better results too.
One possible building block is a routine to check, given two points, whether the rectangle formed by using those points as opposite corners is all of the same type. I think a fast (but unreliable) means of testing this can be based on mapping each type to a large random number and then working out the sum of the numbers within a rectangle modulo a large prime. Take one of the numbers within the rectangle. If the sum of the numbers within the rectangle is the size of the rectangle times the one number sampled, assume that all of the numbers in the rectangle are the same.
In one dimension we can work out all of the cumulative sums a, a+b, a+b+c, a+b+c+d,... in time O(N) and then, for any two points, work out the sum for the interval between them by subtracting cumulative sums: b+c+d = a+b+c+d - a. In two dimensions, we can use cumulative sums to work out, for each point, the sum of all of the numbers from positions which have x and y co-ordinates no greater than the (x, y) coordinate of that position. For any rectangle we can work out the sum of the numbers within that rectangle by working out A-B-C+D where A,B,C,D are two-dimensional cumulative sums.
So with pre-processing O(N) we can work out a table which allows us to compute the sum of the numbers within a rectangle specified by its opposite corners in time O(1). Unless we are very unlucky, checking this sum against the size of the rectangle times a number extracted from within the rectangle will tell us whether the rectangle is all of the same type.
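A small Python sketch of this check (names are mine; in Python the modulus can be skipped since integers do not overflow): each tile type gets a large random value, a 2D cumulative-sum table is built once, and a rectangle is declared "probably uniform" if its sum equals its area times one sampled value.

import random

def build_prefix(grid, salt):
    h, w = len(grid), len(grid[0])
    pre = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            pre[y + 1][x + 1] = (salt[grid[y][x]]
                                 + pre[y][x + 1] + pre[y + 1][x] - pre[y][x])
    return pre

def rect_sum(pre, x0, y0, x1, y1):
    # inclusive corners (x0, y0)-(x1, y1); the A - B - C + D inclusion-exclusion
    return pre[y1 + 1][x1 + 1] - pre[y0][x1 + 1] - pre[y1 + 1][x0] + pre[y0][x0]

def probably_uniform(grid, pre, salt, x0, y0, x1, y1):
    area = (x1 - x0 + 1) * (y1 - y0 + 1)
    return rect_sum(pre, x0, y0, x1, y1) == area * salt[grid[y0][x0]]

grid = ["---##*##",
        "---##*##",
        "--------"]
salt = {t: random.getrandbits(64) for t in set("".join(grid))}
pre = build_prefix(grid, salt)
print(probably_uniform(grid, pre, salt, 0, 0, 2, 2))   # True: the 3x3 block of '-'
print(probably_uniform(grid, pre, salt, 0, 0, 3, 1))   # False: mixes '-' and '#'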
Based on this, repeatedly start with a random point not covered. Take a point just to its left and move that point left as long as the interval between the two points is of the same type. Then move that point up as long as the rectangle formed by the two points is of the same type. Now move the first point to the right and down as long as the rectangle formed by the two points is of the same type. Now you think you have a large rectangle covering the original point. Check it. In the unlikely event that it is not all of the same type, add that rectangle to a list of "fooled me" rectangles you can check against in future and try again. If it is all of the same type, count that as one extracted rectangle and mark all of the points in it as covered. Continue until all points are covered.
This is a greedy algorithm that makes no attempt at producing the optimal solution, but it should be reasonably fast - the most expensive part is checking that the rectangle really is all of the same type, and - assuming you pass that test - each cell checked is also a cell covered so the total cost for the whole process should be O(N) where N is the number of cells of input data.

neighbor searching algorithm

I have an STL file that contains the coordinates (x,y,z) of the 3 points (p0, p1, p2) of each triangle. These triangles represent a 3D surface f(x,y,z). The STL file might have over 1000 triangles to represent a complex geometry.
For my application, I need to know the neighboring triangles of each triangle entry from the STL file, meaning that for each triangle I have to pick 3 pairs of points pair1=(p0,p1), pair2=(p0,p2), pair3=(p1,p2) and compare them with the pairs of points of the other triangles in the set.
What is the best and most efficient algorithm to achieve this? Can I use a hash tree or a hashmap?
Change the mesh representation to a point table and a triangle (face) table. STL demands that all triangles are joined at their vertices, so no edge is cut, which means neighboring triangles always share one complete edge.
double pnt[points][3];
int tri[triangles][3];
The pnt table should be a list of all distinct points (index-sort it to improve speed for high point counts). The tri table should contain the 3 indexes of the points used in each triangle; sort them (ascending or descending) to improve match speed.
Now if any triangle tri[i] shares an edge with tri[j], those two are neighboring triangles.
if ((tri[i][0]==tri[j][0])&&(tri[i][1]==tri[j][1])
  ||(tri[i][0]==tri[j][1])&&(tri[i][1]==tri[j][2])) { /* triangles i,j are neighbors */ }
Add all combinations ...
If you need just neighboring points, then find all triangles containing that point; all the other points used in those triangles are its neighbors.
To load an STL file into such a structure, do this:
clear the pnt[] and tri[] lists/tables
process each triangle of the STL
for each point of the triangle
look whether it is already in pnt[]; if yes, use its index for the new triangle; if not, add the new point to pnt[] and use its index. When all 3 points are done, add the new triangle to tri[].
Improving pnt[] performance
Add an index sort for pnt[], sorted by any coordinate (for example x), to improve the performance of checking whether a point is already present in pnt[].
So while adding (xi,yi,zi) into pnt[], find via binary search the index i0 of the last point with pnt[i0][0] <= xi, and then scan the following points in pnt[] only until x crosses xi (that is, until xi < pnt[i1][0]); this way you do not need to check all points.
If this is still too slow (usually when the number of points is bigger than 40000), you can improve performance further with segmented index sorting (dividing the index sort into segment pages of finite size, like 8192 points).
Improving tri[] performance
You can also sort the tri[] by tri[i][0] so you can use binary search similarly to pnt[].
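A minimal Python sketch of building those two tables from raw "triangle soup" (hypothetical names; a dict is used instead of the index-sorted array to deduplicate vertices, and real STL data may need coordinates rounded before being used as keys):

def build_tables(triangle_soup):
    # triangle_soup: list of triangles, each a tuple of three (x, y, z) points
    pnt = []                             # distinct points
    tri = []                             # triangles as triples of point indices
    index_of = {}                        # point -> its index in pnt
    for triangle in triangle_soup:
        ids = []
        for p in triangle:
            if p not in index_of:        # new point: append and remember its index
                index_of[p] = len(pnt)
                pnt.append(p)
            ids.append(index_of[p])
        tri.append(tuple(sorted(ids)))   # sorted indices, as suggested above
    return pnt, tri

soup = [((0,0,0), (1,0,0), (0,1,0)),
        ((1,0,0), (0,1,0), (1,1,0))]     # two triangles sharing an edge
pnt, tri = build_tables(soup)
print(pnt)                               # 4 distinct points
print(tri)                               # [(0, 1, 2), (1, 2, 3)] -- they share edge (1, 2)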
I would suggest going with a hashmap where the values are sets (tree-based) of references to Triangles, the keys are those pairs of Points (let's call these pairs simply Sides), and the hashing function takes into account the property that the hash of Side (a,b) should be equal to the hash of (b,a).
Some kind of algorithm:
1. Read 3 Points and create from them 3 Sides and a Triangle.
2. Add all of that to the hashmap: map[side[i]].insert(triangle)
3. Repeat 1-2 until you have read all the data.
Now you have a map filled with data. About the complexity of filling: insertions into a hashmap are constant-time on average (it also depends on the hash function) and insertion into a set is logarithmic, so the complete complexity of filling the data is O(n*log m), where n is the number of Sides and m is the average number of Triangles with the same Side.
Normally each set would contain around 4 Triangles: 1 + 3 side-neighbours, so log m is relatively small (compared to n) and can be treated as a constant. These assumptions lead to the following conclusion: the best-case complexity of filling is O(n) (no collisions, no rehashing, etc.) and the worst case is O(n*log n) (average-case insertion of n Sides into the map, with a log n insertion into one set, meaning all Triangles share the same Side).
Now to get all side-neighbours for some Triangle:
Get the 3 sets for the Sides of that Triangle (e.g. set[i] = map[triangle.sides[i]]).
Take the union of those 3 sets (excluding the triangle itself) to get only its side-neighbours.
Done.
About the complexity of getting side-neighbours: it is linearly dependent on the sizes of the sets, which are relatively small compared to n in the normal case.
Note: to get point-neighbours rather than side-neighbours (calling neighbours any 2 Triangles with a common Point rather than a common Side), simply use Points instead of Sides as keys. The above time-complexity estimates hold, except for the constants.
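A short Python sketch of this Side-to-Triangles map (names are mine). Using a frozenset of the two vertex indices as the key automatically makes (a,b) and (b,a) hash identically:

from collections import defaultdict

def side_neighbours(tri):
    # tri: list of triangles given as triples of vertex indices
    by_side = defaultdict(set)                     # Side -> set of triangle ids
    for t, (a, b, c) in enumerate(tri):
        for side in (frozenset((a, b)), frozenset((a, c)), frozenset((b, c))):
            by_side[side].add(t)
    neighbours = defaultdict(set)
    for members in by_side.values():               # triangles sharing this side
        for t in members:
            neighbours[t] |= members - {t}
    return neighbours

tri = [(0, 1, 2), (1, 2, 3), (3, 4, 5)]
print(dict(side_neighbours(tri)))
# {0: {1}, 1: {0}, 2: set()} -- triangles 0 and 1 share edge (1, 2); triangle 2 has no edge-neighbour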

Querying a collection of rectangles for the overlap of an input rectangle

In a multi-dimensional space, I have a collection of rectangles, all of which are aligned to the grid. (I am using the word "rectangles" loosely - in a three dimensional space, they would be rectangular prisms.)
I want to query this collection for all rectangles that overlap an input rectangle.
What is the best data structure for holding the collection of rectangles? I will be adding rectangles to and removing rectangles from the collection from time to time, but these operations will be infrequent. The operation I want to be fast is the query.
One solution is to keep the corners of the rectangles in a list, and do a linear scan over the list, finding which rectangles overlap the query rectangle and skipping over the ones that don't.
However, I want the query operation to be faster than linear.
I've looked at the R-tree data structure, but it holds a collection of points, not a collection of rectangles, and I don't see any obvious way to generalize it.
The coordinates of my rectangles are discrete, in case you find that helpful.
I am interested in the general solution, but I will also tell you the properties of my specific problem: my problem space has three dimensions, and their multiplicity varies wildly. The first dimension has two possible values, the second dimension has 87 values, and the third dimension has 1.8 million values.
You can probably use KD-Trees, which can be used for rectangles according to the wiki page:
Variations: instead of points
Instead of points, a kd-tree can also contain rectangles or hyperrectangles. A 2D rectangle is considered a 4D object (xlow, xhigh, ylow, yhigh). Thus range search becomes the problem of returning all rectangles intersecting the search rectangle. The tree is constructed the usual way with all the rectangles at the leaves. In an orthogonal range search, the opposite coordinate is used when comparing against the median. For example, if the current level is split along xhigh, we check the xlow coordinate of the search rectangle. If the median is less than the xlow coordinate of the search rectangle, then no rectangle in the left branch can ever intersect with the search rectangle and so can be pruned. Otherwise both branches should be traversed. See also interval tree, which is a 1-dimensional special case.
Let's call the original problem by PN - where N is number of dimensions.
Suppose we know the solution for P1 - 1-dimensional problem: find if a new interval is overlapping with a given collection of intervals.
Once we know how to solve it, we can check whether the new rectangle overlaps the collection of rectangles in each of the x/y/z projections.
So the solution of P3 is equivalent to P1_x AND P1_y AND P1_z.
In order to solve P1 efficiently we can use a sorted list. Each node of the list will contain a coordinate and the number of open intervals up to this coordinate.
Suppose we have the following intervals:
[1,5]
[2,9]
[3,7]
[0,2]
then the list will look as follows:
{0,1} , {1,2} , {2,2}, {3,3}, {5,2}, {7,1}, {9,0}
If we receive a new interval, say [6,7], we find the largest item in the list that is smaller than 6: {5,2}, and the smallest item that is greater than 7: {9,0}.
Since the count in {5,2} is non-zero (two intervals are still open at x=6), it is easy to say that the new interval does overlap with the existing ones.
And the search in the sorted list is faster than linear :)
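A small Python sketch of this structure using bisect (names are mine). The list holds pairs (coordinate, number of intervals open just after it), exactly as in the example above; endpoints that merely touch are not counted as overlapping here.

from bisect import bisect_left, bisect_right
from itertools import accumulate

def build_open_counts(intervals):
    events = sorted({x for lo, hi in intervals for x in (lo, hi)})
    deltas = [sum((lo == x) - (hi == x) for lo, hi in intervals) for x in events]
    return list(zip(events, accumulate(deltas)))

def overlaps_any(open_counts, lo, hi):
    coords = [c for c, _ in open_counts]
    i = bisect_right(coords, lo) - 1               # largest coordinate <= lo
    if i >= 0 and open_counts[i][1] > 0:           # something is still open at lo
        return True
    j = bisect_left(coords, hi)                    # any endpoint strictly inside (lo, hi)?
    return i + 1 < j

counts = build_open_counts([(1, 5), (2, 9), (3, 7), (0, 2)])
print(counts)                         # [(0, 1), (1, 2), (2, 2), (3, 3), (5, 2), (7, 1), (9, 0)]
print(overlaps_any(counts, 6, 7))     # True, as in the example
print(overlaps_any(counts, 9.5, 10))  # False, beyond the last interval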
You have to use some sort of a partitioning technique. However, because your problem is constrained (you use only rectangles), the data-structure can be a little simplified. I haven't thought this through in detail, but something like this should work ;)
Using the discrete value constraint - you can create a secondary table-like data-structure where you store the discrete values of second dimension (the 87 possible values). Assume that these values represent planes perpendicular to this dimension. For each of these planes you can store, in this secondary table, the rectangles that intersect these planes.
Similarly, for the third dimension you can use another table with as many equally spaced values as you need (1.8 million is too many, so you would probably want to make this at least a couple of orders of magnitude smaller), and map to each slab the rectangles that lie between two adjacent chosen values.
Given a query rectangle, you can query the first table in constant time to determine a set of rectangles which possibly intersect the query. Then you can do another query on the second table and intersect the results of the first and second queries. This should narrow down the number of actual intersection tests that you have to perform.
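A tiny Python sketch of the bucketing idea for one dimension (hypothetical names): each discrete value of the chosen dimension acts as a plane, each plane stores the ids of the rectangles whose extent crosses it, and a query only collects candidates from the planes it spans.

from collections import defaultdict

def build_plane_index(rects, dim):
    # rects: id -> (mins, maxs) with integer coordinates; index them by the planes they cross
    index = defaultdict(set)
    for rid, (mins, maxs) in rects.items():
        for value in range(mins[dim], maxs[dim] + 1):
            index[value].add(rid)
    return index

def candidates(index, query_mins, query_maxs, dim):
    ids = set()
    for value in range(query_mins[dim], query_maxs[dim] + 1):
        ids |= index.get(value, set())
    return ids

rects = {1: ((0, 3, 10), (1, 5, 20)), 2: ((0, 40, 0), (1, 60, 5))}
idx = build_plane_index(rects, dim=1)                 # index along the 87-value dimension
print(candidates(idx, (0, 4, 0), (0, 6, 1), dim=1))   # {1}: only rectangle 1 crosses planes 4..6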
