Data structure and algorithm to detect collisions of irregularly shaped moving objects

I came across this interview question
Many irregularly shaped objects are moving in random directions. Provide a data structure and algorithm to detect collisions. Remember that the number of objects is in the millions.
I am assuming that every object would have an x and y coordinate. Other assumptions are most welcome. I also suppose a certain kind of tree should be used, but I am clueless about the algorithm.
Any suggestions?

I would have a look at the plane sweep approach, specifically the Bentley-Ottmann algorithm. It uses a sweep line to report all k intersections among n line segments in the Euclidean plane in O((n + k) log n) time and O(n) space.

Most likely what you want is to subdivide the plane with a space-filling curve such as a Z-curve or a Hilbert curve, reducing the 2D problem to a 1D problem. Also look into quadtrees.
Link: http://dmytry.com/texts/collision_detection_using_z_order_curve_aka_Morton_order.html

There are many solutions to this problem. First: use bounding boxes or circles (balls in 3D). If the bounding boxes do not intersect, no further tests are needed. Second: subdivide your space. You do not have to test every object against all other objects (that would be O(n^2)); with a quadtree you can get an average complexity of O(n).
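For illustration, here is a minimal quadtree sketch in C++; the AABB type, the integer object ids, and the kCapacity/kMaxDepth thresholds are placeholders, not part of the original answer. The broad phase collects candidate ids from overlapping boxes, and only those candidates need an exact test against the irregular shapes:

#include <cstddef>
#include <memory>
#include <vector>

// Axis-aligned bounding box used for the cheap broad-phase test.
struct AABB {
    float x, y, w, h;
    bool overlaps(const AABB& o) const {
        return x < o.x + o.w && o.x < x + w &&
               y < o.y + o.h && o.y < y + h;
    }
    bool contains(const AABB& o) const {
        return o.x >= x && o.x + o.w <= x + w &&
               o.y >= y && o.y + o.h <= y + h;
    }
};

// Quadtree node: boxes that straddle a child boundary stay in the
// parent node; queries therefore check every node they touch.
class Quadtree {
public:
    Quadtree(AABB bounds, int depth = 0) : bounds_(bounds), depth_(depth) {}

    void insert(int id, const AABB& box) {
        if (!children_[0] && objects_.size() >= kCapacity && depth_ < kMaxDepth)
            subdivide(); // entries already stored here simply stay put
        if (children_[0]) {
            for (auto& c : children_)
                if (c->bounds_.contains(box)) { c->insert(id, box); return; }
        }
        objects_.push_back({id, box}); // leaf, or box straddles a boundary
    }

    // Collect ids whose boxes may overlap `box`; exact narrow-phase tests
    // against the irregular shapes run only on these candidates.
    void query(const AABB& box, std::vector<int>& out) const {
        for (const auto& e : objects_)
            if (e.box.overlaps(box)) out.push_back(e.id);
        if (children_[0])
            for (const auto& c : children_)
                if (c->bounds_.overlaps(box)) c->query(box, out);
    }

private:
    struct Entry { int id; AABB box; };
    static constexpr std::size_t kCapacity = 8; // split threshold (arbitrary)
    static constexpr int kMaxDepth = 12;        // depth cap (arbitrary)

    void subdivide() {
        float hw = bounds_.w / 2, hh = bounds_.h / 2;
        children_[0].reset(new Quadtree({bounds_.x,      bounds_.y,      hw, hh}, depth_ + 1));
        children_[1].reset(new Quadtree({bounds_.x + hw, bounds_.y,      hw, hh}, depth_ + 1));
        children_[2].reset(new Quadtree({bounds_.x,      bounds_.y + hh, hw, hh}, depth_ + 1));
        children_[3].reset(new Quadtree({bounds_.x + hw, bounds_.y + hh, hw, hh}, depth_ + 1));
    }

    AABB bounds_;
    int depth_;
    std::vector<Entry> objects_;
    std::unique_ptr<Quadtree> children_[4];
};

Each moving object is inserted once per frame and queried with its own box, so the pair tests scale with the number of nearby objects rather than with all n.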

I guess there should be a loop that takes one object as a reference, finds its coordinates, and then checks it against all the other objects to see if there is any collision. I am not sure how well my solution scales to millions of objects.
Pseudo-code:
For each irregularly shaped object1
{
    int left1   = object1->x;
    int right1  = object1->x + object1->width;
    int top1    = object1->y;
    int bottom1 = object1->y + object1->height;
    For each other irregularly shaped object2
    {
        int left2   = object2->x;
        int right2  = object2->x + object2->width;
        int top2    = object2->y;
        int bottom2 = object2->y + object2->height;
        bool bCollision = true;
        if (bottom1 < top2)  bCollision = false; // object1 entirely above object2
        if (top1 > bottom2)  bCollision = false; // object1 entirely below object2
        if (right1 < left2)  bCollision = false; // object1 entirely left of object2
        if (left1 > right2)  bCollision = false; // object1 entirely right of object2
        if (bCollision)
            report a collision between object1 and object2;
    }
}

Related

Can I efficiently construct a Voronoi diagram / Delaunay mesh from a subset of points?

I have a problem where I have a large number (~10,000) of points (in 2D) from which I need to repeatedly pick a small number (~100) and construct a Voronoi diagram.
I can pre-compute the Voronoi diagram / Delaunay mesh for the 10000 points which always remain the same. Is there then a way to efficiently compute the Voronoi diagram for a small subset of these points? Or do I need to start from scratch every time?
Many thanks!
Generally speaking, you see the term "dynamic algorithms" used to describe the process of taking an algorithm where the input is typically always known up front and modify it to handle the case where the underlying data change. In your case, you're looking for "dynamic Voronoi diagrams," which are data structures that maintain Voronoi diagrams even as nodes are added and deleted.
I am not particularly familiar with dynamic computational geometry algorithms, but a bit of Googling turned up a couple of hits for "dynamic Voronoi diagrams," including this paper by Gowda, Kirkpatrick, et al describing one approach. It may not end up being faster in your case than just computing the full Voronoi diagram, but it could be useful as a starting point for a search.
For such a small input set (100 vertices), just building a Delaunay mesh/Voronoi diagram from scratch should be reasonably fast. While I don't pretend to have exhaustive knowledge of the algorithms that exist today, my experience with the Delaunay is that the logic for removing vertices from a mesh tends to be more expensive than adding them. So the approach of creating a mesh of 100 points from a mesh of 10,000 by removing vertices would probably not succeed.
Here's a sample of Java code using the Tinfour library. Building 1000 Voronoi diagrams required only 277 milliseconds on a middle-of-the-road laptop computer. The focus of the Tinfour library is the Delaunay rather than the Voronoi, but it does include a so-so class for building a Voronoi from a Delaunay mesh. I think that any good computational geometry library you find should yield similar performance.
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;
import java.util.Random;
// Tinfour classes used below: Vertex, IncrementalTin, BoundedVoronoiDiagram,
// plus the TestVertices helper from the library's test utilities.

public static void main(String[] args) {
    int seed = 0;
    List<Vertex> masterList = TestVertices.makeRandomVertices(10000, seed);
    int nTrials = 1000;
    int nVerticesInSubset = 100;
    int nPolygons = 0;
    Random random = new Random(seed);
    long time0 = System.nanoTime();
    for (int iTrial = 0; iTrial < nTrials; iTrial++) {
        // We wish to build a subset of N unique vertices.
        // The bitSet allows us to avoid randomly selecting
        // a vertex more than once.
        BitSet bitSet = new BitSet(10000);
        ArrayList<Vertex> subList = new ArrayList<>();
        for (int i = 0; i < nVerticesInSubset; i++) {
            while (true) {
                int index = random.nextInt(masterList.size());
                // The random index is a value in the range 0 to 9999.
                // See if the corresponding vertex has already been selected;
                // if not, add it to the subList. If so, keep looking.
                if (!bitSet.get(index)) {
                    subList.add(masterList.get(index));
                    bitSet.set(index);
                    break;
                }
            }
        }
        IncrementalTin tin = new IncrementalTin(0.001);
        tin.add(subList, null);
        BoundedVoronoiDiagram voronoi = new BoundedVoronoiDiagram(tin);
        nPolygons += voronoi.getPolygons().size();
    }
    long time1 = System.nanoTime();
    System.out.println("Elapsed time in milliseconds " + (time1 - time0) / 1.0e+6);
    System.out.println("Avg. polygons in Voronoi " + ((double) nPolygons / nTrials));
}

Nearest Neighbors in CUDA Particles

Edit 2: Please take a look at this crosspost for a TL;DR.
Edit: Given that the particles are segmented into grid cells (say a 16^3 grid), is it better to run one work-group per grid cell, with as many work-items per work-group as the maximum possible number of particles per grid cell?
In that case I could load all particles from neighboring cells into local memory and iterate through them, computing some properties. Then I could write a specific value into each particle in the current grid cell.
Would this approach be beneficial over running the kernel for all particles, with each one iterating over (most of the time the same) neighbors?
Also, what is the ideal ratio of number of particles/number of grid cells?
I'm trying to reimplement (and modify) CUDA Particles for OpenCL and use it to query nearest neighbors for every particle. I've created the following structures:
Buffer P holding all particles' 3D positions (float3)
Buffer Sp storing int2 pairs of particle ids and their spatial hashes. Sp is sorted according to the hash. (The hash is just a simple linear mapping from 3D to 1D – no Z-indexing yet.)
Buffer L storing int2 pairs of starting and ending positions of particular spatial hashes in buffer Sp. Example: L[12] = (int2)(0, 50).
L[12].x is the index (in Sp) of the first particle with spatial hash 12.
L[12].y is the index (in Sp) one past the last particle with spatial hash 12 (so L[12] covers 50 particles, consistent with the half-open loop in the kernel below).
Now that I have all these buffers, I want to iterate through all the particles in P and for each particle iterate through its nearest neighbors. Currently I have a kernel that looks like this (pseudocode):
__kernel void process_particles(__global float3* P, __global int2* Sp,
                                __global int2* L, __global int* Out) {
    size_t gid = get_global_id(0);
    float3 curr_particle = P[gid];
    int processed_value = 0;
    for (int x = -1; x <= 1; x++)
        for (int y = -1; y <= 1; y++)
            for (int z = -1; z <= 1; z++) {
                float3 neigh_position = curr_particle + (float3)(x, y, z) * GRID_CELL_SIDE;
                // ugly boundary checking
                if ( dot(neigh_position < 0, (float3)(1)) +
                     dot(neigh_position > BOUNDARY, (float3)(1)) != 0)
                    continue;
                int neigh_hash = spatial_hash( neigh_position );
                int2 particles_range = L[ neigh_hash ];
                for (int p = particles_range.x; p < particles_range.y; p++)
                    processed_value += heavy_computation( P[ Sp[p].y ] );
            }
    Out[gid] = processed_value;
}
The problem with that code is that it's slow. I suspect the nonlinear GPU memory access (particularly P[Sp[p].y] in the innermost loop) is causing the slowness.
What I want to do is to use a Z-order curve as the spatial hash. That way I could have a single for loop iterating through a contiguous range of memory when querying neighbors. The only problem is that I don't know what the start and stop Z-index values should be.
The holy grail I want to achieve:
__kernel void process_particles(__global float3* P, __global int2* Sp,
                                __global int2* L, __global int* Out) {
    size_t gid = get_global_id(0);
    float3 curr_particle = P[gid];
    int processed_value = 0;
    // How to accomplish this??
    // `get_neighbors_range()` returns start and end Z-index values
    // representing the range of near-neighbor cells
    int2 nearest_neighboring_cells_range = get_neighbors_range(curr_particle);
    int first_particle_id = L[ nearest_neighboring_cells_range.x ].x;
    int last_particle_id  = L[ nearest_neighboring_cells_range.y ].y;
    for (int p = first_particle_id; p < last_particle_id; p++) {
        processed_value += heavy_computation( P[ Sp[p].y ] );
    }
    Out[gid] = processed_value;
}
You should study the Morton code algorithms closely. Ericson's Real-Time Collision Detection explains them very well.
Ericson - Real-Time Collision Detection
Here is another nice explanation including some tests:
Morton encoding/decoding through bit interleaving: Implementations
A Z-order curve only defines the path along which the coordinates are visited, i.e. how you hash 2D or 3D coordinates down to a single integer. Although the curve can be refined deeper at every iteration, you have to set the limits yourself. Usually the stop index is denoted by a sentinel, and where the sentinel stops tells you at which level the particle is placed. So the maximum level you define tells you the number of cells per dimension: for example, with a maximum level of 6 you have 2^6 = 64, i.e. 64x64x64 cells in your (3D) system. That also means that you have to use integer-based coordinates; if you use floats, you have to convert first, e.g. coord.x = 64 * float_x, and so on. A sketch of the bit-interleaving encoder follows below.
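For illustration, a minimal 3D Morton encoder in C++ using the standard bit-interleaving masks from the "Morton encoding/decoding through bit interleaving" article linked above (10 bits per axis); the hash_position helper and its 64-cell grid are an assumption matching the level-6 example:

#include <cstdint>

// Spread the lower 10 bits of v so there are two zero bits between
// each original bit: bit i of the input lands at bit 3*i of the output.
static uint32_t part1by2(uint32_t v) {
    v &= 0x3ff;                        // keep 10 bits
    v = (v | (v << 16)) & 0x030000ff;
    v = (v | (v <<  8)) & 0x0300f00f;
    v = (v | (v <<  4)) & 0x030c30c3;
    v = (v | (v <<  2)) & 0x09249249;
    return v;
}

// Interleave integer cell coordinates into a single Z-order index.
// Cells that are close in space tend to be close in this 1D index.
uint32_t morton3(uint32_t x, uint32_t y, uint32_t z) {
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2);
}

// Example (assumed setup): floats in [0, boundary) mapped to a 64^3 grid
// (level 6), converting to integer coordinates as described above.
uint32_t hash_position(float x, float y, float z, float boundary) {
    const float cells = 64.0f;
    return morton3((uint32_t)(cells * x / boundary),
                   (uint32_t)(cells * y / boundary),
                   (uint32_t)(cells * z / boundary));
}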
If you know how many cells you have in your system you can define your limits. Are you trying to use a binary octree?
Since particles are in motion (in that CUDA example) you should try to parallelize over the number of particles instead of cells.
If you want to build lists of nearest neighbours, you have to map the particles to cells. This is done through a table that is afterwards sorted from cells to particles (see the sketch below). You should still iterate through the particles and access their neighbours.
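A host-side C++ sketch of that table, mirroring the Sp and L buffers from the question (the buildCellTable name and CellRange type are hypothetical): pair each particle with its cell hash, sort by hash, then record where each cell's run starts and ends.

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct CellRange { int start = 0, end = 0; }; // half-open [start, end) into the sorted table

// Build the (hash, particleId) table sorted by hash, plus per-cell ranges.
void buildCellTable(const std::vector<uint32_t>& particleHash,  // hash per particle
                    int numCells,
                    std::vector<std::pair<uint32_t, int>>& sorted, // (hash, id), like Sp
                    std::vector<CellRange>& ranges)                // per-hash run, like L
{
    sorted.clear();
    sorted.reserve(particleHash.size());
    for (int id = 0; id < (int)particleHash.size(); ++id)
        sorted.emplace_back(particleHash[id], id);
    std::sort(sorted.begin(), sorted.end()); // groups particles cell by cell

    ranges.assign(numCells, CellRange{});
    for (int i = 0; i < (int)sorted.size(); ++i) {
        uint32_t h = sorted[i].first;
        if (i == 0 || sorted[i - 1].first != h) ranges[h].start = i;
        ranges[h].end = i + 1;
    }
}

Since the particles move every frame, this table is rebuilt (or re-sorted) each step, which is exactly why parallelizing over particles rather than cells pays off.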
About your code:
The problem with that code is that it's slow. I suspect the nonlinear GPU memory access (particularly P[Sp[p].y] in the innermost loop) is causing the slowness.
Remember Donald Knuth: premature optimization is the root of all evil. You should measure where the bottleneck is. You can use NVIDIA's profiler to look for bottlenecks; I'm not sure what OpenCL offers as a profiler.
// ugly boundary checking
if ( dot(neigh_position < 0, (float3)(1)) +
     dot(neigh_position > BOUNDARY, (float3)(1)) != 0)
    continue;
I think you should not branch that way; how about having heavy_computation return zero instead? I'm not sure, but you may be suffering from branch divergence here. Try to remove it somehow.
Running parallel over the cells is a good idea only if you have no write accesses to the particle data; otherwise you will have to use atomics. If you go over the particle range instead, you only have read accesses to the cells and neighbours, you build your sum in parallel, and you are not forced into any race-condition handling.
Also, what is the ideal ratio of number of particles/number of grid cells?
It really depends on your algorithm and the particle packing within your domain, but in your case I would set the cell size equal to the particle diameter and just use the number of cells that results.
So if you want to use Z-order and achieve your holy grail, try to use integer coordinates and hash them.
Also try to use larger numbers of particles. You should consider about 65,000 particles, as the CUDA example uses, because that way the parallelisation is most efficient: the processing units are fully exploited (fewer idle threads).

Intersection of axis-aligned rectangular cuboids (MBR) in one dimension

Currently I'm doing benchmarks on time-series indexing algorithms. Since reference implementations are mostly unavailable, I have to write my own (all in Java). At the moment I am stuck on section 6.2 of a paper called Indexing multi-dimensional time-series with support for multiple distance measures, available here as a PDF: http://hadjieleftheriou.com/papers/vldbj04-2.pdf
An MBR (minimum bounding rectangle) is basically a rectangular cuboid with some coordinates and directions. As an example, P and Q are two MBRs with P.coord = {0,0,0}, P.dir = {1,1,3}, Q.coord = {0.5,0.5,1} and Q.dir = {1,1,1}, where the first entries represent the time dimension.
Now I would like to calculate MINDIST(Q, P) between Q and P.
However, I am not sure how to implement the "intersection of two MBRs in the time dimension" (dim 1), since I am not sure what the intersection in the time dimension actually means. It is also not clear what h_Q, l_Q, l_P, h_P mean, since this notation is not explained (my guess is that they mean something like the highest or lowest value of a dimension in the intersection).
I would highly appreciate it if someone could explain how to calculate the intersection of two MBRs in the first dimension, and maybe enlighten me with an interpretation of the notation. Thanks!
Well, Figure 14 in your paper explains the time intersection. And since the rectangles are axis-aligned, it makes sense to use a high (h) and a low (l) value on each coordinate, which is what that notation stands for.
The multiplication sign you see is not a cross product, just a normal multiplication, because on both sides of it you have scalars, not vectors.
However, I must agree that the discussion on page 14 is rather fuzzy, but it seems to say that both types of intersection (complete and partial), when they have a t subscript, mean the length of the intersection along the t coordinate.
Thus it seems you could factor out the time intersection to get a formula along the lines of

MINDIST(Q, P) = sqrt( L_t(Q, P) * (x_2^2 + ... + x_D^2) )

where L_t(Q, P) is the length of the time intersection and x_d is the gap between the two extents in dimension d (0 if they overlap).
It is worth noting that, maybe counter-intuitively, when your objects don't intersect on the time plane, their MINDIST is defined to be 0.
Hence the following pseudo-code:
mindist(P, Q)
{
    if ( Q.coord[0] + Q.dir[0] < P.coord[0] ||
         Q.coord[0] > P.coord[0] + P.dir[0] )
        return 0;
    time = min(Q.coord[0] + Q.dir[0], P.coord[0] + P.dir[0]) - max(Q.coord[0], P.coord[0]);
    sum = 0;
    for (d = 1; d < D; ++d)
    {
        if ( Q.coord[d] + Q.dir[d] < P.coord[d] )
            x = Q.coord[d] + Q.dir[d] - P.coord[d];
        else if ( P.coord[d] + P.dir[d] < Q.coord[d] )
            x = P.coord[d] + P.dir[d] - Q.coord[d];
        else
            x = 0;
        sum += x*x;
    }
    return sqrt(time * sum);
}
Note that the absolute values in the paper are unnecessary: since we just checked which values were bigger, we know we only add non-negative numbers.
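For completeness, a direct C++ translation of the pseudo-code above (the MBR struct follows the P.coord/P.dir convention from the question; dimension 0 is time):

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct MBR {
    std::vector<double> coord; // lower corner; index 0 is the time dimension
    std::vector<double> dir;   // extent along each dimension
};

double mindist(const MBR& P, const MBR& Q) {
    // No overlap in the time dimension: MINDIST is defined as 0.
    if (Q.coord[0] + Q.dir[0] < P.coord[0] ||
        Q.coord[0] > P.coord[0] + P.dir[0])
        return 0.0;
    // Length of the time intersection.
    double time = std::min(Q.coord[0] + Q.dir[0], P.coord[0] + P.dir[0]) -
                  std::max(Q.coord[0], P.coord[0]);
    // Squared gap in each spatial dimension (0 where the extents overlap).
    double sum = 0.0;
    for (size_t d = 1; d < P.coord.size(); ++d) {
        double x = 0.0;
        if (Q.coord[d] + Q.dir[d] < P.coord[d])
            x = Q.coord[d] + Q.dir[d] - P.coord[d];
        else if (P.coord[d] + P.dir[d] < Q.coord[d])
            x = P.coord[d] + P.dir[d] - Q.coord[d];
        sum += x * x;
    }
    return std::sqrt(time * sum);
}

int main() {
    // The example MBRs from the question: their extents overlap in both
    // spatial dimensions, so MINDIST comes out as 0.
    MBR P{{0, 0, 0}, {1, 1, 3}};
    MBR Q{{0.5, 0.5, 1}, {1, 1, 1}};
    std::printf("MINDIST = %f\n", mindist(P, Q));
    return 0;
}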

Drawing shapes based on points, making sure lines don't cross

Say I have 100 points and want to draw a closed curve through them (I'm using C# and Graphics), like this:
Graphics g = this.CreateGraphics();
Pen pen = new Pen(Color.Black, 2);
Point[] points = new Point[DrawingPoints];
for (int x = 0; x < DrawingPoints; x++)
{
    int px = r.Next(0, MaxXSize);
    int py = r.Next(0, MaxYSize);
    points[x] = new Point(px, py);
}
g.DrawClosedCurve(pen, points);
It connects the points in the order they are placed into points[], so the lines cross; you will not get a solid figure this way.
Is there an algorithm that will help me reorder the points to get a solid figure? Here's a picture below (tried as hard as I could, hehe) to help visualize what I'm asking for.
Well, in O(n log n) time, you could compute the centroid of the points and sort them in order of angle about that centroid, leaving a star-shaped polygon. That's efficient but probably messes up the order of your points too much.
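A sketch of that centroid/angle-sort idea in C++ (the Pt type and sortStarShaped name are placeholders; the question's C# points would be converted accordingly):

#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Order the points counter-clockwise by angle about their centroid.
// Connecting them in this order yields a simple (non-self-intersecting)
// polygon, star-shaped as seen from the centroid.
void sortStarShaped(std::vector<Pt>& pts) {
    Pt c{0, 0};
    for (const Pt& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size();
    c.y /= pts.size();
    std::sort(pts.begin(), pts.end(), [&](const Pt& a, const Pt& b) {
        return std::atan2(a.y - c.y, a.x - c.x) <
               std::atan2(b.y - c.y, b.x - c.x);
    });
}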
I think you'd be happier with the results of a 2-OPT method for TSP (description here). 2-OPT is worst-case exponential but polynomial-time in practice.

3D Connected Points Labeling based on Euclidean distances

Currently, I am working on a project that tries to group 3D points from a dataset by specifying connectivity as a minimum Euclidean distance. My algorithm right now is simply a 3D adaptation of naive flood fill.
size_t PointSegmenter::growRegion(size_t & seed, size_t segNumber) {
    size_t numPointsLabeled = 0;
    // alias for points to avoid retyping
    vector<Point3d>& points = _img.points;
    deque<size_t> ptQueue;
    ptQueue.push_back(seed);
    points[seed].setLabel(segNumber);
    while (!ptQueue.empty()) {
        size_t currentIdx = ptQueue.front();
        ptQueue.pop_front();
        points[currentIdx].setLabel(segNumber);
        numPointsLabeled++;
        vector<int> newPoints = _img.queryRadius(currentIdx, SEGMENT_MAX_DISTANCE, MATCH_ACCURACY);
        for (int i = 0; i < (int)newPoints.size(); i++) {
            int newIdx = newPoints[i];
            Point3d& newPoint = points[newIdx];
            if (!newPoint.labeled()) {
                newPoint.setLabel(segNumber);
                ptQueue.push_back(newIdx);
            }
        }
    }
    // NOTE to whoever wrote the other code: the compiler optimizes i++
    // to ++i in cases like these, so please don't change them just for speed :)
    for (size_t i = seed; i < points.size(); i++) {
        if (!points[i].labeled()) {
            // search for an unlabeled point to serve as the next seed
            seed = i;
            return numPointsLabeled;
        }
    }
    return numPointsLabeled;
}
where this code snippet is run again for the new seed, and _img.queryRadius() is a fixed-radius search using the ANN library:
vector<int> Image::queryRadius(size_t index, double range, double epsilon) {
    int k = kdTree->annkFRSearch(dataPts[index], range * range, 0);
    ANNidxArray nnIdx = new ANNidx[k];
    kdTree->annkFRSearch(dataPts[index], range * range, k, nnIdx);
    vector<int> outPoints;
    outPoints.reserve(k);
    for (int i = 0; i < k; i++) {
        outPoints.push_back(nnIdx[i]);
    }
    delete[] nnIdx;
    return outPoints;
}
My problem with this code is that it runs way too slow for large datasets. If I'm not mistaken, this code does a search for every single point; with each search costing O(N log N), that gives a time complexity of O(N^2 log N).
In addition, if I remember right, deletions are relatively expensive in k-d trees; but not deleting points also creates problems, in that each point can be searched hundreds of times, by every neighbor close to it.
So my question is, is there a better way to do this? Especially in a way that will grow linearly with the dataset?
Thanks for any help you may be able to provide
EDIT
I have tried using a simple sorted list as dash-tom-bang suggested, but the result was even slower than what I was using before. I'm not sure if it was the implementation, or if it was simply too slow to iterate through every point and check the Euclidean distance (even when using just the squared distance).
Are there any other ideas people may have? I'm honestly stumped right now.
I propose the following algorithm:
1. Compute the 3D Delaunay triangulation of your data points.
2. Remove all edges that are longer than your threshold distance; O(N) when combined with step 3.
3. Find the connected components in the resulting graph, which is typically O(N) in size; this is done in O(N α(N)), where α is the inverse Ackermann function. A union-find sketch of steps 2 and 3 follows below.
The bottleneck is step 1, which can be done in O(N^2), or even in O(N log N) according to this page: http://www.ncgia.ucsb.edu/conf/SANTA_FE_CD-ROM/sf_papers/lattuada_roberto/paper.html. However, it is definitely not a 100-line algorithm.
When I did something along these lines, I chose an "origin" somewhere outside of the dataset and sorted all of the points by their distance to that origin. Then I had a much smaller set of points to choose from at each step, and I only had to go through the "onion skin" region around the point being considered: you check neighboring points until the distance to the closest point found so far is less than the width of the range you're checking.
While that worked well for me, a similar effect can be achieved by sorting all of your points along one axis (which amounts to putting the "origin" infinitely far away) and then checking points until your "search width" exceeds the distance to the closest point found so far, as in the sketch below.
The points should be organized better. To search more efficiently, instead of a vector<Point3d> you need some sort of hash map where hash collisions imply that two points are close to each other (so you use hash collisions to your advantage). You could, for instance, divide the space into cubes with sides equal to SEGMENT_MAX_DISTANCE and use a hash function that returns a triplet of ints instead of a single int, where each part of the triplet is calculated as point.<corresponding_dimension> / SEGMENT_MAX_DISTANCE.
Then for each point you search only the points in its own cube and in the adjacent cubes of space. This greatly reduces the search space; a sketch follows below.
