Rapid update of bounding box when one of a thousand items moves?

I have thousands of objects represented by axis-aligned bounding boxes. I can calculate the smallest axis-aligned bounding box containing all of the items easily, but it's a rather expensive operation. I'd like to have a structure where I can insert/update/delete items in at most logarithmic time, and then get the current bounding box containing all items in constant time.
What data structure is a good fit for this problem?

Here's a simpler idea: 4 binary trees.
So, 4 power-of-two-sized arrays. The root is in element 1, and the bounding-box coordinates of the objects are the leaves (in the last half of the array). The parent of two nodes holds the min (or max) of its two children. Unused leaves should hold int.MinValue (in the max trees) or int.MaxValue (in the min trees). Updating is pretty simple in O(log n); see below.
You'd have
a tree with min for the left coordinate
a tree with max for the right coordinate
a tree with min for the top coordinate
a tree with max for the bottom coordinate
The total bounding box is described by the roots, available in O(1).
The update procedure is something like this (untested; shown for a min tree, a max tree uses max instead):
update(array, item, newvalue)
    index = (array.length >> 1) + item   // index of the leaf
    array[index] = newvalue
    while (index > 1)
        parent = index >> 1
        sibling = index ^ 1              // the other child of the same parent
        array[parent] = min(array[index], array[sibling])
        index = parent
There is a possibility for an early exit: if the update doesn't change the value of the parent, nothing above it can change either.
This generalizes to trees of higher degree, making them shallower at the expense of taking a min or max over more than 2 children. In my experience, that can be worth it for larger trees, but only up to a point; increasing the degree quickly runs into diminishing returns. You could try degrees 4 and 8 (power-of-two degrees are best, since they avoid complicating the index math).
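For concreteness, here's a minimal Java sketch of one of the four trees (the min variant, with the early exit); the class and method names are mine, and the max variant is identical with max() and Integer.MIN_VALUE:

class MinTree {
    final int[] a; // a[1] is the root; leaves live in a[n .. 2n-1]
    final int n;   // capacity, must be a power of two

    MinTree(int capacity) {
        n = capacity;
        a = new int[2 * n];
        java.util.Arrays.fill(a, Integer.MAX_VALUE); // unused leaves of a min tree
    }

    int min() { return a[1]; } // one edge of the total bounding box, O(1)

    void update(int item, int value) { // O(log n)
        int i = n + item;              // index of the leaf
        a[i] = value;
        while (i > 1) {
            int parent = i >> 1;
            int m = Math.min(a[i], a[i ^ 1]); // i ^ 1 is the sibling
            if (a[parent] == m) break;        // early exit: nothing above can change
            a[parent] = m;
            i = parent;
        }
    }
}

You'd keep four of these (two min, two max) and update all four whenever an item moves.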

There is a class of data structures called kinetic data structures that support efficient multidimensional queries under the assumption that the underlying elements are moving. I'm not much of an expert on them, but this set of lecture slides seems like a great starting point. The STAR-tree might be a good data structure to look at; it appears to be a kinetic version of an R-tree.
Hope this helps!

Related

Bin Packing with Overflow

I have N bins (homogeneous or heterogeneous in size, depending on the variant of the task) in which I am trying to fit M items (always heterogeneous in size). Items can be larger than a single bin and are allowed to overflow into the next bin(s) (but with no wrap-around from bin N-1 to 0).
The more bins an item spans, the higher its allocation cost.
I want to minimize the overall allocation cost of fitting all M into N bins.
Everything is static. It is guaranteed that all M fit in N.
I think I am looking for a variant of the Bin Packing algorithm. But any hints towards an existing solution/approximation/good heuristic are appreciated.
My current approach looks like this:
sort items by size
for i in items:
    for b in bins:
        try allocation of i starting at b
        if allocation valid:
            record cost
    do allocation of i in b with lowest recorded cost
    update all b fill level
So it is basically a greedy-by-size approach with O(MxNxC) runtime, where C ~ "longest allocation across bins" (trying an allocation takes C time).
I would suggest dynamic programming for the exact solution. To make it easier to visualize, assume each bin's size is the length of an array of cells. Since they are contiguous, you can visualize them as a contiguous set of arrays, e.g.
|xxx|xx|xxx|xxxx|
The delimiters of the arrays are the | characters and the positions in the arrays are given by x. So this has 4 arrays: arr_0 of size 3, arr_1 of size 2, and so on.
If you place an item at position i, it will occupy positions i to i+(h-1), where h is the size of the item. E.g. if items are of size h=5 and you place an item at position 1, you would get
|xbb|bb|bxx|xxxx|
One trick to use: if we introduce the additional constraint that the items must be inserted "in order", i.e. the first inserted item is the leftmost inserted item, the second is the second-leftmost, etc., then this problem has the same optimal solution as the original one (since we can take any optimal solution and insert it in order).
Define pos(k, i) to be the optimal position at which to insert the k-th object, given that the (k-1)-th object ended at position i-1, and opt(k, i) to be the optimal extra cost of inserting objects k…N-1 under the same condition.
For the last object, pos(N-1, i) is easy to calculate: run through the cells x = i…NumCells-h and evaluate the extra cost of inserting at each x. (It is even easier if you note that at least one border of the object should line up with a bin border, but to make the analysis easier we evaluate every x.) opt(N-1, i) equals this minimal extra cost.
Similarly,
pos(N-2, i) = argmin_x [ extra_cost(x, i, pos(N-1, x+h)) + opt(N-1, x+h) ]
where extra_cost(x, i, j) is the extra cost of inserting at x, given that the previously inserted object ended at i-1 and the next inserted object will start at j. And by substituting x = pos(N-2, i):
opt(N-2, i) = extra_cost(pos(N-2, i), i, pos(N-1, pos(N-2, i)+h)) + opt(N-1, pos(N-2, i)+h)
By induction, for all 0 <= W < N-1:
pos(W, i) = argmin_x [ extra_cost(x, i, pos(W+1, x+h)) + opt(W+1, x+h) ]
and
opt(W, i) = extra_cost(pos(W, i), i, pos(W+1, pos(W, i)+h)) + opt(W+1, pos(W, i)+h)
And your final result is the minimum over i of opt(0, i).
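A minimal memoized Java sketch of this recurrence (the names are mine; it assumes the left-to-right order of the items is fixed up front, and uses the simplest cost from the question, the number of bins an item spans, so the neighbor arguments of extra_cost are not needed):

import java.util.Arrays;

class OverflowPacking {
    final int[] binStart; // binStart[b] = first cell of bin b
    final int numCells;   // total cells over all bins
    final int[] h;        // item sizes, in placement order
    final long[][] memo;  // memo[k][i] = opt(k, i), -1 = unknown

    OverflowPacking(int[] binSizes, int[] itemSizes) {
        h = itemSizes;
        binStart = new int[binSizes.length];
        int c = 0;
        for (int b = 0; b < binSizes.length; b++) { binStart[b] = c; c += binSizes[b]; }
        numCells = c;
        memo = new long[h.length][numCells + 1];
        for (long[] row : memo) Arrays.fill(row, -1);
    }

    int binOf(int cell) { // index of the bin containing a cell
        int b = 0;
        while (b + 1 < binStart.length && binStart[b + 1] <= cell) b++;
        return b;
    }

    int cost(int x, int len) { // bins spanned by cells [x, x+len)
        return binOf(x + len - 1) - binOf(x) + 1;
    }

    long opt(int k, int i) { // cheapest placement of items k..N-1 from cell i on
        if (k == h.length) return 0;
        if (memo[k][i] >= 0) return memo[k][i];
        long best = Long.MAX_VALUE; // stays MAX_VALUE if nothing fits
        for (int x = i; x + h[k] <= numCells; x++) {
            long rest = opt(k + 1, x + h[k]);
            if (rest != Long.MAX_VALUE) best = Math.min(best, cost(x, h[k]) + rest);
        }
        return memo[k][i] = best;
    }
}

For the |xxx|xx|xxx|xxxx| example, new OverflowPacking(new int[]{3, 2, 3, 4}, new int[]{5, 2}).opt(0, 0) returns 3: the 5-cell item must span two bins, and the 2-cell item fits in one.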

Two-dimensional acceleration structure

I am writing a little game in which many (order 1000-10000) cars are driving around you and need to dodge each other smoothly. Their position can be represented by their 2D XY-coordinates.
Each frame, every car needs to ensure it doesn't bump into another one, which naively implemented would be an O(N^2) algorithm. What is the best 2D acceleration structure I can employ to make this more efficient?
To add some more context:
Since cars can move anywhere, the structure needs to constantly keep track of all cars. So it needs to be quick to update (not O(N) for each car for example).
Every car moves on every frame
Cars are not allowed to hit any other car. They obviously have knowledge of their own size, and of their current direction and speed. Their size is non-zero (because, you know, it's a car), and collision checks need this data.
I considered a regular grid with a set structure per cell, which works, but each car still has to check its own cell and all its (8) neighbours, so I imagine there are still better ways.
So, you have to check whether there is a collision. For there to be a collision, if I understand your example, there have to be two cars in the same spot after the cars move.
I think we can use the concept of a spot, or position, here. You could save the positions that have a car on them at a given time in a set (coordinates of a matrix, or whatever suits your problem best). When a car moves, delete its old position from the set and insert its new one.
Once we have this, you only have to check whether the set contains a car's next position to know if there will be a collision, and in a HashSet that lookup is O(1). Apply that to your number of cars (n) and you do your collision management in O(n).
Example for further clarification:
You have this matrix of positions, with cars on the "O"s and empty spaces on the "X"s, meaning you have a car at position 1 and another one at position 8 if you number the positions from the top-left corner:
O X X
X X X
X O X
This implies you have a set of Integers with positions 1 and 8 in it. Now, an example of moving one of the cars:
The car at position 1 moves to position 2: you check whether position 2 is in the set [O(1)]. It isn't. You remove position 1 from the set [O(1)] and add position 2 to the set [O(1)], because now position 1 doesn't have a car on it but position 2 does.
Repeat for every car (n) => O(1) * O(n) = O(n).
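A small Java sketch of this idea (the names are mine), packing the (x, y) cell into a long as the set key:

import java.util.HashSet;
import java.util.Set;

class OccupancyGrid {
    private final Set<Long> occupied = new HashSet<>();

    private static long key(int x, int y) {
        return ((long) x << 32) | (y & 0xffffffffL); // pack both coordinates
    }

    void add(int x, int y) { occupied.add(key(x, y)); }

    boolean isOccupied(int x, int y) { return occupied.contains(key(x, y)); } // O(1)

    // Returns false and leaves the grid untouched if the target cell is taken.
    boolean move(int fromX, int fromY, int toX, int toY) {
        if (occupied.contains(key(toX, toY))) return false; // collision
        occupied.remove(key(fromX, fromY));
        occupied.add(key(toX, toY));
        return true;
    }
}

Note this only works under the answer's simplification that cars snap to grid cells and a collision means two cars in the same cell.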
You could use something like a quadtree. A quadtree is a 2D indexing scheme with O(log n) update (insert/remove) time.
The easiest possibility is to store a rectangle (axis-aligned bounding box) for each car. Then, each time before you reinsert a car at its new position, you perform a window query (a query for a rectangle) at the new position to see whether it overlaps with any other car. If you have the source code of the tree you can even implement a combined operation updateAndCheck() that detects collisions and updates in a single call. (I assume the structure is updated often enough that the old and new positions overlap, so collisions are detected more or less reliably; otherwise cars could 'teleport' through one another if they are fast enough.)
Complexity:
Update: O(log n) per car
Query: O(log n) per car
Space: About O(log n) per car
Total computational complexity: detecting all collisions for n car: O(n * 2 * log n) = O(n log n)
Total space complexity: about O(n * log n)
There are many open source implementations for quadtrees available, if you are using Java, you may want to check out mine.
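To give a flavor of the approach, here is a minimal point quadtree in Java (my own sketch, not a library): it stores car centres only, ignores car extents, and omits removal; in a game you could simply rebuild it each frame, or reinsert moved cars as described above.

import java.util.ArrayList;
import java.util.List;

class QuadTree {
    static final int CAPACITY = 8;     // max points per leaf before splitting
    final double x, y, w, h;           // region of this node (min corner + size)
    final List<double[]> pts = new ArrayList<>(); // each entry is {px, py, carId}
    QuadTree[] kids;                   // null while this node is a leaf

    QuadTree(double x, double y, double w, double h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    void insert(double px, double py, double id) {
        if (kids != null) { child(px, py).insert(px, py, id); return; }
        pts.add(new double[]{px, py, id});
        if (pts.size() > CAPACITY && w > 1e-9) { // split the leaf
            kids = new QuadTree[]{
                new QuadTree(x, y, w / 2, h / 2),
                new QuadTree(x + w / 2, y, w / 2, h / 2),
                new QuadTree(x, y + h / 2, w / 2, h / 2),
                new QuadTree(x + w / 2, y + h / 2, w / 2, h / 2)};
            for (double[] p : pts) child(p[0], p[1]).insert(p[0], p[1], p[2]);
            pts.clear();
        }
    }

    QuadTree child(double px, double py) {
        return kids[(px < x + w / 2 ? 0 : 1) + (py < y + h / 2 ? 0 : 2)];
    }

    // Window query: collect ids of points inside [qx, qx+qw] x [qy, qy+qh].
    void query(double qx, double qy, double qw, double qh, List<Double> out) {
        if (qx > x + w || qx + qw < x || qy > y + h || qy + qh < y) return; // disjoint
        if (kids != null) { for (QuadTree k : kids) k.query(qx, qy, qw, qh, out); return; }
        for (double[] p : pts)
            if (p[0] >= qx && p[0] <= qx + qw && p[1] >= qy && p[1] <= qy + qh)
                out.add(p[2]);
    }
}

To approximate the rectangle-overlap test from the answer with centre points, query a window of the car's bounding box grown by the maximum car size.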

If I store a binary tree in an array, how do I avoid the wasted space?

Often I need trees in algorithms, and I end up with a tree made of lots of pointers and recursion.
Sometimes I need more speed, and then I put the tree into a 2D array like so:
Example of a binary tree stored in an array
+-----------------+
|0eeeeeeeeeeeeeeee| // no pointers needed: parent/child is the y dimension,
|11       dddddddd| // sibling is the x dimension of the array.
|2222         cccc| // The 123 tree is stored root up.
|33333333       bb| // Notice how the abc tree is stored upside down.
|4444444444444444a| // The wasted space in the middle is offset by the fact
+-----------------+ // that you do not need up, down, and sibling pointers.
I love this structure because it gives me optimization options that I don't have when using pointers and recursion.
But notice that wasted space in the middle....
How do I get rid of/reuse that wasted space?
Requirements
I only use this structure if I need every last bit of speed, so a solution with lots of translations and address calculations to get to that space will not be helpful.
A binary tree can be stored in an array more efficiently as explained here: http://en.wikipedia.org/wiki/Binary_tree#Arrays:
From Wikipedia:
In this compact arrangement, if a node has an index i, its children are found at indices (2i + 1) (for the left child) and (2i + 2) (for the right), while its parent (if any) is found at index floor((i-1)/2) (assuming the root has index zero).
This method benefits from more compact storage and better locality of reference, particularly during a preorder traversal. However, it is expensive to grow and wastes space proportional to 2^H - n for a tree of height H with n nodes.
In other words, it will waste space if your tree is not a complete binary tree, but it will still be a bit more compact than a plain 2D array.
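For reference, the zero-based index arithmetic from the quote, as tiny Java helpers:

static int leftChild(int i)  { return 2 * i + 1; }
static int rightChild(int i) { return 2 * i + 2; }
static int parent(int i)     { return (i - 1) / 2; } // root is index 0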

Parabolic knapsack

Let's say I have a parabola. Now I also have a bunch of sticks that are all of the same width (yes, my drawing skills are amazing!). How can I stack these sticks within the parabola such that I am minimizing the space used as much as possible? I believe that this falls under the category of knapsack problems, but this Wikipedia page doesn't appear to bring me closer to a real-world solution. Is this an NP-hard problem?
In this problem we are trying to minimize the amount of area consumed (i.e. the integral), which includes vertical area.
I cooked up a solution in JavaScript using processing.js and HTML5 canvas.
This project should be a good starting point if you want to create your own solution. I added two algorithms: one that sorts the input blocks from largest to smallest, and another that shuffles the list randomly. Each item is then placed in the first bucket that has enough space for it, starting from the bottom (the smallest bucket) and moving up.
Depending on the type of input the sort algorithm can give good results in O(n^2). Here's an example of the sorted output.
Here's the insert in order algorithm.
function solve(buckets, input) {
    var buckets_length = buckets.length,
        results = [];
    for (var b = 0; b < buckets_length; b++) {
        results[b] = [];
    }
    input.sort(function(a, b) { return b - a; }); // largest first
    input.forEach(function(blockSize) {
        var b = buckets_length - 1;
        while (b >= 0) { // check every bucket, including the top one
            if (blockSize <= buckets[b]) {
                results[b].push(blockSize);
                buckets[b] -= blockSize;
                break;
            }
            b--;
        }
    });
    return results;
}
Project on github - https://github.com/gradbot/Parabolic-Knapsack
It's a public repo so feel free to branch and add other algorithms. I'll probably add more in the future as it's an interesting problem.
Simplifying
First I want to simplify the problem. To do that:
I switch the axes and add the two halves to each other; this results in 2x growth
I assume the parabola is on a closed interval [a, b], where a = 0 and, for this example, b = 3
Let's say you are given b (the second endpoint of the interval) and w (the width of a segment); then you can find the total number of segments by n = Floor[b/w]. In this case there exists a trivial way to maximize the Riemann sum, and the function giving the i-th segment height is f(b - (b*i)/(n+1)). Actually this is an assumption and I'm not 100% sure.
A maxed-out example for 17 segments on the closed interval [0, 3] for the function Sqrt[x], real values:
The segment-height function in this case is Re[Sqrt[3 - 3*Range[1, 17]/18]], and the values are:
Exact form:
{Sqrt[17/6], 2 Sqrt[2/3], Sqrt[5/2],
Sqrt[7/3], Sqrt[13/6], Sqrt[2],
Sqrt[11/6], Sqrt[5/3], Sqrt[3/2],
2/Sqrt[3], Sqrt[7/6], 1, Sqrt[5/6],
Sqrt[2/3], 1/Sqrt[2], 1/Sqrt[3],
1/Sqrt[6]}
Approximated form:
{1.6832508230603465, 1.632993161855452, 1.5811388300841898,
1.5275252316519468, 1.4719601443879744, 1.4142135623730951,
1.35400640077266, 1.2909944487358056, 1.224744871391589,
1.1547005383792517, 1.0801234497346435, 1,
0.9128709291752769, 0.816496580927726, 0.7071067811865475,
0.5773502691896258, 0.4082482904638631}
What you have arrived at is a bin packing problem with a partially filled bin.
Finding b
Suppose b is unknown and our task is to find the smallest possible b under which all sticks from the initial bunch fit. Then we can at least bound the b values:
lower limit: the b at which the sum of the segment heights equals the sum of the stick heights
upper limit: the b at which the number of segments equals the number of sticks and the longest stick fits (longest stick < longest segment height)
One of the simplest ways to find b is to take a pivot at (lower limit + upper limit)/2 and check whether a solution exists. The pivot then becomes the new upper or lower limit, and you repeat the process until the required precision is met (a sketch of this bisection follows the example below).
When you are looking for b you do not need an exact result, only a suboptimal one, and it is much faster if you use an efficient algorithm to find a pivot relatively close to the actual b.
For example:
sort the sticks by length, largest to smallest
start putting the largest items into the first bin they fit
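A minimal Java sketch of that bisection (feasible is a stand-in for whatever fast packing check you use, e.g. the greedy above; it assumes feasibility is monotone in b, i.e. a wider parabola never makes packing harder):

static double findSmallestB(double lo, double hi, double eps,
                            java.util.function.DoublePredicate feasible) {
    while (hi - lo > eps) {
        double mid = (lo + hi) / 2;       // pivot between the limits
        if (feasible.test(mid)) hi = mid; // everything fits: try a smaller b
        else lo = mid;                    // doesn't fit: b must be larger
    }
    return hi; // smallest feasible b, up to the requested precision
}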
This is equivalent to having multiple knapsacks (assuming these blocks are the same 'height', this means there's one knapsack for each 'line'), and is thus an instance of the bin packing problem.
See http://en.wikipedia.org/wiki/Bin_packing
How can I stack these sticks within the parabola such that I am minimizing the (vertical) space it uses as much as possible?
Just deal with it like any other Bin Packing problem. I'd throw meta-heuristics on it (such as tabu search, simulated annealing, ...) since those algorithms aren't problem specific.
For example, I'd start from my Cloud Balance problem (= a form of bin packing) in Drools Planner. If all the sticks have the same height and there's no vertical space between 2 sticks on top of each other, there's not much I'd have to change:
Rename Computer to ParabolicRow. Remove its properties (cpu, memory, bandwidth). Give it a unique level (where 0 is the lowest row). Create a number of ParabolicRows.
Rename Process to Stick
Rename ProcessAssignement to StickAssignment
Rewrite the hard constraints so they check that there's enough room for the sum of all Sticks assigned to a ParabolicRow.
Rewrite the soft constraints to minimize the highest level of all ParabolicRows.
I'm very sure it is equivalent to bin-packing:
informal reduction
Let x be the width of the widest row, make the bins 2x big, and create for every row a placeholder element which is 2x - rowWidth big, so that two placeholder elements cannot be packed into one bin.
To reduce bin packing to parabolic knapsack you just create placeholder elements, of size rowWidth - binsize, for all rows that are bigger than the needed bin size. Furthermore, add placeholders that fill the whole row for all rows that are smaller than the bin size.
This would obviously mean your problem is NP-hard.
For other ideas look here maybe: http://en.wikipedia.org/wiki/Cutting_stock_problem
Most likely this is the 0-1 knapsack problem or a bin packing problem. It is NP-hard, and most likely I don't understand it well enough to explain it to you, but you can optimize with greedy algorithms. Here is a useful article about it, http://www.developerfusion.com/article/5540/bin-packing, which I used to make my bin-packing PHP class at phpclasses.org.
Props to those who mentioned the fact that the levels could be at varying heights (e.g., assuming the sticks are 1 unit 'thick', level 1 could go from 0.1 units to 1.1 units, or from 0.2 to 1.2 units instead).
You could of course expand the "multiple bin packing" methodology and test arbitrarily small increments (e.g., run the multiple bin packing methodology with levels starting at 0.0, 0.1, 0.2, …, 0.9) and then choose the best result. But it seems like you would get stuck calculating for an infinite amount of time unless you had some methodology to verify that you had gotten it 'right' (or more precisely, that you had all the 'rows' correct as to what they contained, at which point you could shift them down until they met the edge of the parabola).
Also, the OP did not specify that the sticks had to be laid horizontally - although perhaps the OP implied it with those sweet drawings.
I have no idea how to solve such a problem optimally, but I bet there are certain cases where you could randomly place sticks, then test whether they are 'inside' the parabola, and that would beat any of the methodologies relying only on horizontal rows.
(Consider the case of a narrow parabola that we are trying to fill with 1 long stick.)
I say just throw them all in there and shake them ;)

How to quickly count the number of neighboring voxels?

I have got a 3D grid (voxels), where some of the voxels are filled and some are not. The 3D grid is sparsely filled, so I have got a set filledVoxels with the coordinates (x, y, z) of the filled voxels. What I am trying to do is find out, for each filled voxel, how many neighboring voxels are filled too.
Here is an example:
filledVoxels contains the voxels (1, 1, 1), (1, 2, 1), and (1, 3, 1).
Therefore, the neighbor counts are:
(1,1,1) has 1 neighbor
(1,2,1) has 2 neighbors
(1,3,1) has 1 neighbor.
Right now I have this algorithm:
voxelCount = new Map<Voxel, Integer>();
for (voxel v in filledVoxels)
    count = checkAllNeighbors(v, filledVoxels);
    voxelCount[v] = count;
end
checkAllNeighbors() looks up all 26 surrounding voxels. So in total I am doing 26*filledVoxels.size() lookups, which is quite slow.
Is there any way to cut down the number of required lookups? When you look at the above example you can see that I am checking the same voxels several times, so it might be possible to get rid of lookups with some clever caching.
If this helps in any way, the voxels represent a voxelized 3D surface (but there might be holes in it). I usually want to get a list of all voxels that have 5 or 6 neighbors.
You can transform your voxel space into an octree in which every node contains a flag that specifies whether it contains filled voxels at all.
When a node does not contain filled voxels, you don't need to check any of its descendants.
I'd say that if each of your lookups is slow (O(size)), you should optimize it with binary search in an ordered list (O(log size)).
As for the constant 26, I wouldn't worry much. If you iterate smarter, you could cache something and get 26 down to 10 or so, I think, but unless you have profiled the whole application and found out decisively that it is the bottleneck, I would concentrate on something else.
As ilya states, there's not much you can do to get around the 26 neighbor look-ups. You have to make your biggest gains in efficiently identifying whether a given neighbor is filled or not. Given that the brute force solution is essentially O(N^2), you have a lot of possible ground to gain in that area. Since you have to iterate over all filled voxels at least once, I would take an approach similar to the following:
voxelCount = new Map<Voxel, Integer>();
visitedVoxels = new EfficientSpatialDataType();
for (voxel v in filledVoxels)
    for (voxel n in neighbors(v))
        if (visitedVoxels.contains(n))
            voxelCount[v]++;
            voxelCount[n]++;
        end
    next
    visitedVoxels.add(v);
next
For your efficient spatial data type, a kd-tree, as Zifre suggested, might be a good idea. In any case, you're going to want to reduce your search space by binning visited voxels.
If you're marching along the voxels one at a time, you can keep a lookup table corresponding to the grid, so that after you've checked a voxel once using IsFullVoxel() you put the value in this grid. For each voxel you're marching to, you can check whether its lookup-table value is valid, and only call IsFullVoxel() if it isn't.
OTOH it seems like you can't avoid iterating over all neighboring voxels, either using IsFullVoxel() or the LUT. If you had some more a priori information it could help. For instance, if you knew that there were at most x neighboring filled voxels, or you knew that there were at most y neighboring filled voxels in each direction. For instance, if you know you're looking for voxels with 5 to 6 neighbors, you can stop after you've found 7 full neighbors or 22 empty neighbors.
I'm assuming that a function IsFullVoxel() exists that returns true if a voxel is full.
If most of the moves in your iteration were to neighbors, you could reduce your checking by around 25% by not looking back at the ones you just checked before you made the step.
You may find a Z-order curve a useful concept here. It lets you (with certain provisos) keep a sliding window of data around the point you're currently querying, so that when you move to the next point, you don't have to throw away many of the queries you've already performed.
Um, your question is not very clear. I'm assuming you just have a list of the filled points. In that case, this is going to be very slow, because you have to iterate through it (or use some kind of tree structure such as a kd-tree, but this will still be O(log n)).
If you can (i.e. the grid is not too big), just make a 3d array of bools. 26 lookups in a 3d array shouldn't really take that long (and there really is no way to cut down on the number of lookups).
Actually, now that I think of it, you could make it a 3d array of longs (64 bits). Each 64 bit block would hold 64 (4 x 4 x 4) voxels. When you are checking the neighbors of a voxel in the middle of the block, you could do a single 64 bit read (which would be much faster).
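A sketch of that packing in Java (the bit layout is my own choice): voxel (x, y, z) lives in block (x/4, y/4, z/4) at bit (x%4) + 4*(y%4) + 16*(z%4), so for a voxel in the interior of a block all 26 neighbor bits sit in the same long.

class PackedVoxels {
    final long[][][] blocks; // each long holds a 4 x 4 x 4 chunk of voxels

    PackedVoxels(int maxX, int maxY, int maxZ) {
        blocks = new long[(maxX + 3) / 4][(maxY + 3) / 4][(maxZ + 3) / 4];
    }

    static int bit(int x, int y, int z) {
        return (x & 3) | ((y & 3) << 2) | ((z & 3) << 4);
    }

    void set(int x, int y, int z) {
        blocks[x >> 2][y >> 2][z >> 2] |= 1L << bit(x, y, z);
    }

    boolean get(int x, int y, int z) {
        return ((blocks[x >> 2][y >> 2][z >> 2] >>> bit(x, y, z)) & 1L) != 0;
    }
}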
Is there any way to cut down the number of required lookups?
You will have to perform at least 1 lookup per voxel, and since that's the minimum, any algorithm that performs exactly one lookup per voxel will meet your requirement.
One simplistic idea is to initialize an array to hold the count for each voxel, then look at each voxel and increment the neighbors of that voxel in the array.
Pseudo C might look something like this:
#define MAXX 100
#define MAXY 100
#define MAXZ 100
int x, y, z;
char countArray[MAXX][MAXY][MAXZ];
initializeCountArray(MAXX, MAXY, MAXZ); // Set all array elements to 0
for (x = 0; x < MAXX; x++)
    for (y = 0; y < MAXY; y++)
        for (z = 0; z < MAXZ; z++)
            if (VoxelExists(x, y, z))
                incrementNeighbors(x, y, z);
You'll need to write initializeCountArray so it sets all array elements to 0.
More importantly, you'll also need to write incrementNeighbors so that it won't increment outside the array (a bounds-checked sketch follows). A slight speed increase here is to run the above algorithm only on the interior voxels, then do a separate run on the outside edge voxels with a modified incrementNeighbors routine that understands there won't be neighbors on one side.
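To make the edge handling concrete, here's a bounds-checked incrementNeighbors as a Java sketch (the language aside, the pseudo C above would do the same thing):

static void incrementNeighbors(byte[][][] count, int x, int y, int z) {
    for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dz = -1; dz <= 1; dz++) {
                if (dx == 0 && dy == 0 && dz == 0) continue; // skip the voxel itself
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx >= 0 && nx < count.length &&
                    ny >= 0 && ny < count[0].length &&
                    nz >= 0 && nz < count[0][0].length)
                    count[nx][ny][nz]++;
            }
}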
This algorithm results in 1 lookup per voxel, and at most 26 byte additions per voxel. If your voxel space is sparse then this will result in very few (relative) additions. If your voxel space is very dense, you might consider reversing the algorithm - initialize the array to the value of 26 for each entry, then decrement the neighbors when a voxel doesn't exist.
The results for a given voxel (ie, how many neighbors do I have?) reside in the array. If you need to know how many neighbors voxel 2,3,5 has, just look at the byte in countArray[2][3][5].
The array will consume 1 byte per voxel. You could use less space, and possibly increase the speed a little bit by packing the bytes.
There are better algorithms if you know details about your data. For instance, a very sparse voxel space will benefit greatly from an octree, where you can skip large blocks of lookups when you already know there are no filled voxels inside. Most of these algorithms, however, would still require at least one lookup per voxel to fill their matrix, but if you are performing several operations then they may benefit more than this one operation.
