Meaning of Parameters in BackgroundSubtractorGMG Algorithm in OpenCV 3.0

I am studying the GMG background subtraction algorithm as described in this paper. Since OpenCV 3.0 also has an implementation of the GMG algorithm (in the additional opencv_contrib package), I am trying to study the two together. However, I am not quite sure about the meaning of the two parameters maxFeatures and quantizationLevels, as I want to map them to the descriptions in the paper.
Quoting the source code in OpenCV 3.0 (file modules\bgsegm\src\bgfg_gmg.cpp):
//! Total number of distinct colors to maintain in histogram.
int maxFeatures;
and
//! Number of discrete levels in each channel to be used in histograms.
int quantizationLevels;
And quoting the paper (Section II B) (with some mathematical symbols and variable names modified since LaTeX is not supported here):
"... Of the T observed features, select the F_tot <= F_max most recently observed unique features; let I, a subset of {1, 2, ..., T} with |I| = F_tot, be the corresponding time index set. (If T > F_max, it is possible that F_tot, the number of distinct features observed, exceeds the limit F_max. In that case, we throw away the oldest observations so that F_tot <= F_max). Then, we calculate an average to generate the initial histogram: H(T) = (1/F_tot) ∑_{r ∈ I} f(r). This puts equal weight, 1/F_tot, in F_tot unique bins of the histogram."
From the above description, I was originally convinced that maxFeatures in OpenCV 3.0 refers to F_max in the paper, and quantizationLevels refers to F_tot. However, this does not sound right, for two reasons: (1) the paper says that F_tot is the number of distinct features observed, and (2) the OpenCV source code does not impose any relationship between maxFeatures and quantizationLevels, while the paper clearly suggests that the former should be larger than or equal to the latter.
So, what are the meanings of maxFeatures and quantizationLevels? And is quantizationLevels a parameter introduced by OpenCV for the calculation of the histogram?

After further studying the source code in OpenCV, I believe that maxFeatures refers to F_max in the paper, while quantizationLevels controls how a pixel value is mapped to a histogram bin (it is the number of discrete levels per channel, and hence determines the number of possible bins). The reasoning is as follows:
Consider the function insertFeature(), which contains the following code:
static bool insertFeature(unsigned int color, float weight, unsigned int* colors, float* weights, int& nfeatures, int maxFeatures)
{
    int idx = -1;
    for (int i = 0; i < nfeatures; ++i) {
        if (color == colors[i]) {
            // feature in histogram
            weight += weights[i];
            idx = i;
            break;
        }
    }
    if (idx >= 0) { // case 1
        // move feature to beginning of list
        ::memmove(colors + 1, colors, idx * sizeof(unsigned int));
        ::memmove(weights + 1, weights, idx * sizeof(float));
        colors[0] = color;
        weights[0] = weight;
    }
    else if (nfeatures == maxFeatures) { // case 2
        // discard oldest feature
        ::memmove(colors + 1, colors, (nfeatures - 1) * sizeof(unsigned int));
        ::memmove(weights + 1, weights, (nfeatures - 1) * sizeof(float));
        colors[0] = color;
        weights[0] = weight;
    }
    else { // case 3
        colors[nfeatures] = color;
        weights[nfeatures] = weight;
        ++nfeatures;
        return true;
    }
    return false;
}
Case 1: The color matches an existing item in the array colors[]: its weight is accumulated and the feature is moved to the front of the arrays (most recently observed first).
Case 2: The color does not match any item in colors[] and maxFeatures has been reached (nfeatures holds the number of items currently stored): the oldest feature (the last entry) is discarded to make room for the new color at the front.
Case 3: The color does not match any item in colors[] and maxFeatures has not been reached yet: the color is appended as a new item and nfeatures is incremented by 1.
Hence maxFeatures should correspond to F_max (the maximum number of features) in the paper.
In addition, consider the function apply() (here T is the pixel's channel type; in bgfg_gmg.cpp the function is a member of a templated quantization helper):
static unsigned int apply(const void* src_, int x, int cn, double minVal, double maxVal, int quantizationLevels)
{
    const T* src = static_cast<const T*>(src_);
    src += x * cn;
    unsigned int res = 0;
    for (int i = 0, shift = 0; i < cn; ++i, ++src, shift += 8)
        res |= static_cast<int>((*src - minVal) * quantizationLevels / (maxVal - minVal)) << shift;
    return res;
}
This function maps the pixel's color intensity pointed to by src_ to a bin according to the values of maxVal, minVal and quantizationLevels, such that if quantizationLevels = q, the result of the expression
static_cast<int>((*src - minVal) * quantizationLevels / (maxVal - minVal))
is an integer in the range [0, q-1]. However, there can be cn channels (for example, cn = 3 for RGB), hence the shift operation, and the number of possible bins (call it b) is therefore quantizationLevels to the power of cn. So if b > maxFeatures, some of the oldest features may have to be discarded as new quantized colors arrive.
Hence, in OpenCV, if we set maxFeatures to be >= quantizationLevels ^ cn, we never have to discard the oldest features, because we allow at least as many histogram slots as there are distinct quantized colors.
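To make the relationship concrete, here is a minimal sketch (assuming OpenCV 3.x with the opencv_contrib bgsegm module; the setters used below are the ones exposed by cv::bgsegm::BackgroundSubtractorGMG) that chooses maxFeatures = quantizationLevels ^ cn so that, by the reasoning above, no per-pixel histogram bin ever has to be discarded:

// Minimal sketch: pick maxFeatures >= quantizationLevels^cn (assumes OpenCV 3.x + opencv_contrib).
#include <opencv2/core.hpp>
#include <opencv2/bgsegm.hpp>
#include <cmath>

int main()
{
    const int cn = 3;                  // e.g. 8-bit BGR input
    const int quantizationLevels = 4;  // discrete levels per channel
    const int maxFeatures =
        static_cast<int>(std::pow(quantizationLevels, cn));   // 4^3 = 64 possible bins

    cv::Ptr<cv::bgsegm::BackgroundSubtractorGMG> gmg =
        cv::bgsegm::createBackgroundSubtractorGMG(/*initializationFrames=*/120,
                                                  /*decisionThreshold=*/0.8);
    gmg->setQuantizationLevels(quantizationLevels);
    gmg->setMaxFeatures(maxFeatures);

    cv::Mat frame = cv::Mat::zeros(240, 320, CV_8UC3), fgmask;
    gmg->apply(frame, fgmask);   // per-pixel histograms can now hold every distinct quantized color
    return 0;
}

With a smaller maxFeatures the subtractor still works; it simply starts evicting the oldest colors per pixel, exactly as insertFeature() shows.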

Related

Fast random/mutation algorithms (vector to vector) [duplicate]

I've been trying to create a generalized Gradient Noise generator (which doesn't use the hash method to get gradients). The code is below:
class GradientNoise {
std::uint64_t m_seed;
std::uniform_int_distribution<std::uint8_t> distribution;
const std::array<glm::vec2, 4> vector_choice = {glm::vec2(1.0, 1.0), glm::vec2(-1.0, 1.0), glm::vec2(1.0, -1.0),
glm::vec2(-1.0, -1.0)};
public:
GradientNoise(uint64_t seed) {
m_seed = seed;
distribution = std::uniform_int_distribution<std::uint8_t>(0, 3);
}
// 0 -> 1
// just passes the value through, originally was perlin noise activation
double nonLinearActivationFunction(double value) {
//return value * value * value * (value * (value * 6.0 - 15.0) + 10.0);
return value;
}
// 0 -> 1
//cosine interpolation
double interpolate(double a, double b, double t) {
double mu2 = (1 - cos(t * M_PI)) / 2;
return (a * (1 - mu2) + b * mu2);
}
double noise(double x, double y) {
std::mt19937_64 rng;
//first get the bottom left corner associated
// with these coordinates
int corner_x = std::floor(x);
int corner_y = std::floor(y);
// then get the respective distance from that corner
double dist_x = x - corner_x;
double dist_y = y - corner_y;
double corner_0_contrib; // bottom left
double corner_1_contrib; // top left
double corner_2_contrib; // top right
double corner_3_contrib; // bottom right
std::uint64_t s1 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y) + m_seed);
std::uint64_t s2 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y + 1) + m_seed);
std::uint64_t s3 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y + 1) + m_seed);
std::uint64_t s4 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y) + m_seed);
// each xy pair turns into distance vector from respective corner, corner zero is our starting corner (bottom
// left)
rng.seed(s1);
corner_0_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y});
rng.seed(s2);
corner_1_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y - 1});
rng.seed(s3);
corner_2_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y - 1});
rng.seed(s4);
corner_3_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y});
double u = nonLinearActivationFunction(dist_x);
double v = nonLinearActivationFunction(dist_y);
double x_bottom = interpolate(corner_0_contrib, corner_3_contrib, u);
double x_top = interpolate(corner_1_contrib, corner_2_contrib, u);
double total_xy = interpolate(x_bottom, x_top, v);
return total_xy;
}
};
I then generate an OpenGL texture to display with like this:
int width = 1024;
int height = 1024;
unsigned char *temp_texture = new unsigned char[width*height * 4];
double octaves[5] = {2,4,8,16,32};
for( int i = 0; i < height; i++){
for(int j = 0; j < width; j++){
double d_noise = 0;
d_noise += temp_1.noise(j/octaves[0], i/octaves[0]);
d_noise += temp_1.noise(j/octaves[1], i/octaves[1]);
d_noise += temp_1.noise(j/octaves[2], i/octaves[2]);
d_noise += temp_1.noise(j/octaves[3], i/octaves[3]);
d_noise += temp_1.noise(j/octaves[4], i/octaves[4]);
d_noise/=5;
uint8_t noise = static_cast<uint8_t>(((d_noise * 128.0) + 128.0));
temp_texture[j*4 + (i * width * 4) + 0] = (noise);
temp_texture[j*4 + (i * width * 4) + 1] = (noise);
temp_texture[j*4 + (i * width * 4) + 2] = (noise);
temp_texture[j*4 + (i * width * 4) + 3] = (255);
}
}
Which give good results:
But gprof is telling me that the Mersenne Twister is taking up 62.4% of my time, and that share grows with larger textures. Nothing else individually takes anywhere near as much time. While the Mersenne Twister is fast after initialization, the fact that I initialize it every time I use it seems to make it pretty slow.
This initialization is required to make sure that the same x and y always generate the same gradient at each integer point (so you need either a hash function or to seed the RNG each time).
I attempted to change the PRNG to both the linear congruential generator and Xorshiftplus, and while both ran orders of magnitude faster, they gave odd results:
LCG (one time, then running 5 times before using)
Xorshiftplus
After one iteration
After 10,000 iterations.
I've tried:
Running the generator several times before using its output; this results in slow execution or simply different artifacts.
Using the output of two consecutive runs after the initial seed to seed the PRNG again and using the value afterwards; no difference in the result.
What is happening? What can I do to get faster results that are of the same quality as the Mersenne Twister?
OK BIG UPDATE:
I don't know why this works, I know it has something to do with the prime number utilized, but after messing around a bit, it appears that the following works:
Step 1: incorporate the x and y values as seeds separately (and incorporate some other offset or additional seed value with them; this number should be a prime/non-trivial factor).
Step 2: use those two seed results to seed the generator again inside the function (so, like geza said, the seeds I was making were bad).
Step 3: when getting the result, instead of taking it modulo the number of items wanted (4), or & 3, take the result modulo a prime number first and then apply & 3. I'm not sure whether it matters if the prime is a Mersenne prime.
Here is the result with prime = 257 and xorshiftplus being used! (note I used 2048 by 2048 for this one, the others were 256 by 256)
LCG is known to be inadequate for your purpose.
Xorshift128+'s results are bad, because it needs good seeding. And providing good seeding defeats the whole purpose of using it. I don't recommend this.
However, I recommend using an integer hash. For example, one from Bob's page.
Here's a result of the first hash of that page, it looks OK to me, and it is fast (I think it is much faster than Mersenne Twister):
Here's the code I've written to generate this:
#include <cmath>
#include <stdio.h>
unsigned int hash(unsigned int a) {
a = (a ^ 61) ^ (a >> 16);
a = a + (a << 3);
a = a ^ (a >> 4);
a = a * 0x27d4eb2d;
a = a ^ (a >> 15);
return a;
}
unsigned int ivalue(int x, int y) {
return hash(y<<16|x)&0xff;
}
float smooth(float x) {
return 6*x*x*x*x*x - 15*x*x*x*x + 10*x*x*x;
}
float value(float x, float y) {
int ix = floor(x);
int iy = floor(y);
float fx = smooth(x-ix);
float fy = smooth(y-iy);
int v00 = ivalue(iy+0, ix+0);
int v01 = ivalue(iy+0, ix+1);
int v10 = ivalue(iy+1, ix+0);
int v11 = ivalue(iy+1, ix+1);
float v0 = v00*(1-fx) + v01*fx;
float v1 = v10*(1-fx) + v11*fx;
return v0*(1-fy) + v1*fy;
}
unsigned char pic[1024*1024];
int main() {
for (int y=0; y<1024; y++) {
for (int x=0; x<1024; x++) {
float v = 0;
for (int o=0; o<=9; o++) {
v += value(x/64.0f*(1<<o), y/64.0f*(1<<o))/(1<<o);
}
int r = rint(v*0.5f);
pic[y*1024+x] = r;
}
}
FILE *f = fopen("x.pnm", "wb");
fprintf(f, "P5\n1024 1024\n255\n");
fwrite(pic, 1, 1024*1024, f);
fclose(f);
}
If you want to understand, how a hash function work (or better yet, which properties a good hash have), check out Bob's page, for example this.
You (unknowingly?) implemented a visualization of PRNG non-random patterns. That looks very cool!
Except for the Mersenne Twister, none of the PRNGs you tested seems fit for your purpose. As I have not done further tests myself, I can only suggest trying out and measuring further PRNGs.
The randomness of LCGs is known to be sensitive to the choice of their parameters. In particular, the period of an LCG depends on the m parameter: at most it will be m (your prime factor), and for many values it can be less.
Similarly, careful parameter selection is required to get a long period from Xorshift PRNGs.
You've noted that some PRNGs give good procedural generation results while others do not. In order to isolate the cause, I would factor out the procedural generation stuff and examine the PRNG output directly. An easy way to visualize the data is to build a greyscale image where each pixel value is a (possibly scaled) random value. For image-based work, I find this an easy way to spot problems that may lead to visual artifacts. Any artifacts you see with this are likely to cause issues with your procedural generation output.
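For example, a minimal sketch of that image test (my own illustration; std::mt19937 is only a placeholder for whatever generator you want to inspect), which dumps raw, scaled PRNG output to a greyscale PGM file:

// Dump one scaled PRNG sample per pixel to a PGM so structural artifacts become visible.
#include <cstdio>
#include <random>

int main()
{
    const int W = 512, H = 512;
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> byte(0, 255);

    std::FILE* f = std::fopen("prng.pgm", "wb");
    std::fprintf(f, "P5\n%d %d\n255\n", W, H);
    for (int i = 0; i < W * H; ++i) {
        unsigned char v = static_cast<unsigned char>(byte(rng)); // one sample per pixel
        std::fwrite(&v, 1, 1, f);
    }
    std::fclose(f);
    return 0;
}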
Another option is to try something like the Diehard tests. If the aforementioned image test failed to reveal any problems, I might use this just to be sure my PRNG techniques were trustworthy.
Note that your code seeds the PRNG, then generates one pseudorandom number from the PRNG. The reason for the non-randomness in xorshift128+ that you discovered is that xorshift128+ simply adds the two halves of the seed (and uses the result mod 2^64 as the generated number) before changing its state (review its source code). This makes that PRNG considerably different from a hash function.
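For reference, here is a sketch of one common xorshift128+ reference parameterization (shift constants 23, 18 and 5; other published variants use different constants), which makes the point above explicit: the first value returned after seeding is simply the sum of the two seed halves, computed before the state is scrambled.

#include <cstdint>

static std::uint64_t s[2];  // the 128-bit state; seed to anything non-zero

std::uint64_t xorshift128plus_next() {
    std::uint64_t s1 = s[0];
    const std::uint64_t s0 = s[1];
    const std::uint64_t result = s0 + s1;        // output = sum of the two state halves...
    s[0] = s0;
    s1 ^= s1 << 23;                              // ...the scrambling happens only afterwards
    s[1] = s1 ^ s0 ^ (s1 >> 18) ^ (s0 >> 5);
    return result;
}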
What you see is a practical demonstration of PRNG quality. The Mersenne Twister is one of the best PRNGs with good performance; it passes the DIEHARD tests. One should know that generating random numbers is not an easy computational task, so looking for better performance will inevitably cost some quality. LCG is known to be the simplest and worst PRNG ever designed, and it clearly shows two-dimensional correlation as in your picture. The quality of Xorshift generators largely depends on bitness and parameters. They are definitely worse than the Mersenne Twister, but some (xorshift128+) may work well enough to pass the BigCrush battery of the TestU01 tests.
In other words, if you are doing an important physical-modelling numerical experiment, you had better continue to use the Mersenne Twister, as it is known to be a good trade-off between speed and quality and it comes in many standard libraries. In less important cases you may try the xorshift128+ generator. For ultimate results you would need a cryptographic-quality PRNG (none of the generators mentioned here can be used for cryptographic purposes).

Algorithm to match sets with overlapping members

Looking for an efficient algorithm to match sets among a group of sets, ordered by the most overlapping members. Two identical sets, for example, are the best match, while sets with no overlapping members are the worst.
So, the algorithm takes as input a list of sets and returns matching set pairs ordered by the sets with the most overlapping members.
I would be interested in ideas to do this efficiently. The brute-force approach is to try all combinations and sort, which obviously is not very performant when the number of sets is very large.
Edit: Use case - Assume a large number of sets already exist. When a new set arrives, the algorithm is run and the output includes matching sets (with at least one element overlap) sorted by the most matching to least (doesn't matter how many items are in the new/incoming set). Hope that clarifies my question.
If you can afford an approximation algorithm with a chance of error, then you should probably consider MinHash.
This algorithm allows estimating the similarity between 2 sets in constant time. For any constructed set, a fixed-size signature is computed, and then only the signatures are compared when estimating the similarities. The similarity measure used is the Jaccard index, which ranges from 0 (disjoint sets) to 1 (identical sets) and is defined as the ratio of the intersection to the union of the two given sets.
With this approach, any new set has to be compared against all existing ones (in linear time), and then the results can be merged into the top list (you can use a bounded search tree/heap for this purpose).
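As an illustrative sketch (my own, not from any particular library; the 64 signature slots and the splitmix64-style mixer are assumptions), the core of MinHash is just: keep the minimum hash per "hash function", and estimate the Jaccard similarity as the fraction of positions where two signatures agree.

#include <cstdint>
#include <limits>
#include <unordered_set>
#include <vector>

static std::uint64_t mix(std::uint64_t z) {                  // splitmix64-style mixer
    z += 0x9e3779b97f4a7c15ULL;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

// Signature = for each of k "hash functions", the minimum hash over the set's members.
std::vector<std::uint64_t> minhashSignature(const std::unordered_set<int>& s, int k = 64) {
    std::vector<std::uint64_t> sig(k, std::numeric_limits<std::uint64_t>::max());
    for (int x : s)
        for (int i = 0; i < k; ++i) {
            std::uint64_t h = mix(static_cast<std::uint64_t>(x) + 0x1234567ULL * (i + 1));
            if (h < sig[i]) sig[i] = h;
        }
    return sig;
}

// Estimated Jaccard similarity: fraction of positions where the two signatures match.
double estimateJaccard(const std::vector<std::uint64_t>& a, const std::vector<std::uint64_t>& b) {
    int same = 0;
    for (std::size_t i = 0; i < a.size(); ++i) same += (a[i] == b[i]);
    return static_cast<double>(same) / a.size();
}

A new set then costs one signature computation plus a linear scan over the stored signatures, and the best matches can be kept in a bounded heap as described above.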
Since the number of possible different values is not very large, you get a fairly efficient hashing if you simply set the nth bit in a "large integer" when the nth number is present in your set. You can then look for overlap between sets with a simple bitwise AND followed by a "count set bits" operation. On 64 bit architecture, that means that you can look for the similarity between two numbers (out of 1000 possible values) in about 16 cycles, regardless of the number of values in each cluster. As the cluster gets more sparse, this becomes a less efficient algorithm.
Still - I implemented some of the basic functions you might need in some code that I attach here - not documented but reasonably understandable, I think. In this example I made the numbers small so I can check the result by hand - you might want to change some of the #defines to get larger ranges of values, and obviously you will want some dynamic lists etc to keep up with the growing catalog.
#include <stdio.h>
// biggest number you will come across: want this to be much bigger
#define MAXINT 25
// use the biggest type you have - not int
#define BITSPER (8*sizeof(int))
#define NWORDS (MAXINT/BITSPER + 1)
// max number in a cluster
#define CSIZE 5
typedef struct{
unsigned int num[NWORDS]; // want to use longest type but not for demo
int newmatch;
int rank;
} hmap;
// convert number to binary sequence:
void hashIt(int* t, int n, hmap* h) {
int ii;
for(ii=0;ii<n;ii++) {
int a, b;
a = t[ii]%BITSPER;
b = t[ii]/BITSPER;
h->num[b]|=1<<a;
}
}
// print binary number:
void printBinary(int n) {
unsigned int jj;
jj = 1<<31;
while(jj!=0) {
printf("%c",((n&jj)!=0)?'1':'0');
jj>>=1;
}
printf(" ");
}
// print the array of binary numbers:
void printHash(hmap* h) {
unsigned int ii, jj;
for(ii=0; ii<NWORDS; ii++) {
jj = 1<<31;
printf("0x%08x: ", h->num[ii]);
printBinary(h->num[ii]);
}
//printf("\n");
}
// find the maximum overlap for set m of n
int maxOverlap(hmap* h, int m, int n) {
int ii, jj;
int overlap, maxOverlap = -1;
for(ii = 0; ii<n; ii++) {
if(ii == m) continue; // don't compare with yourself
else {
overlap = 0;
for(jj = 0; jj< NWORDS; jj++) {
// just to see what's going on: take these print statements out
printBinary(h->num[ii]);
printBinary(h->num[m]);
int bc = countBits(h->num[ii] & h->num[m]);
printBinary(h->num[ii] & h->num[m]);
printf("%d bits overlap\n", bc);
overlap += bc;
}
if(overlap > maxOverlap) maxOverlap = overlap;
}
}
return maxOverlap;
}
int countBits (unsigned int b) {
int count;
for (count = 0; b != 0; count++) {
b &= b - 1; // this clears the LSB-most set bit
}
return count;
}
int main(void) {
int cluster[20][CSIZE];
int temp[CSIZE];
int ii,jj;
static hmap H[20]; // make them all 0 initially
for(jj=0; jj<20; jj++){
for(ii=0; ii<CSIZE; ii++) {
temp[ii] = rand()%MAXINT;
}
hashIt(temp, CSIZE, &H[jj]);
}
for(ii=0;ii<20;ii++) {
printHash(&H[ii]);
printf("max overlap: %d\n", maxOverlap(H, ii, 20));
}
}
See if this helps at all...

The Maximum Volume of Trapped Rain Water in 3D

A classic algorithm question in 2D version is typically described as
Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after raining.
For example, Given the input
[0,1,0,2,1,0,1,3,2,1,2,1]
the return value would be
6
The algorithm that I used to solve the above 2D problem is
int trapWaterVolume2D(vector<int> A) {
    int n = A.size();
    vector<int> leftmost(n, 0), rightmost(n, 0);
    // left exclusive scan, O(n): the highest bar to the left of each point
    int leftMaxSoFar = 0;
    for (int i = 0; i < n; i++) {
        leftmost[i] = leftMaxSoFar;
        if (A[i] > leftMaxSoFar) leftMaxSoFar = A[i];
    }
    // right exclusive scan, O(n): the highest bar to the right of each point
    int rightMaxSoFar = 0;
    for (int i = n - 1; i >= 0; i--) {
        rightmost[i] = rightMaxSoFar;
        if (A[i] > rightMaxSoFar) rightMaxSoFar = A[i];
    }
    // summation, O(n)
    int vol = 0;
    for (int i = 0; i < n; i++) {
        vol += max(0, min(leftmost[i], rightmost[i]) - A[i]);
    }
    return vol;
}
My question is how to extend the above algorithm to the 3D version of the problem, i.e. to compute the maximum volume of water trapped in a real-world 3D terrain. That is, to implement
int trapWaterVolume3D(vector<vector<int> > A);
Sample graph:
We know the elevation at each (x, y) point and the goal is to compute the maximum volume of water that can be trapped in the shape. Any thoughts and references are welcome.
For each point on the terrain, consider all paths from that point to the border of the terrain. The water level at the point is the minimum over those paths of the maximum height along the path. To find it we need to perform a slightly modified Dijkstra's algorithm, filling the water level matrix starting from the border (a compact code sketch follows the pseudocode below).
For every point on the border set the water level to the point height
For every point not on the border set the water level to infinity
Put every point on the border into the set of active points
While the set of active points is not empty:
Select the active point P with minimum level
Remove P from the set of active points
For every point Q adjacent to P:
Level(Q) = max(Height(Q), min(Level(Q), Level(P)))
If Level(Q) was changed:
Add Q to the set of active points
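Below is a compact C++ sketch of that border-inward flood (my own illustration of the pseudocode above, not code from the answer); it assumes a rectangular grid of integer heights:

#include <algorithm>
#include <queue>
#include <tuple>
#include <vector>

long long trapWaterVolume3D(const std::vector<std::vector<int>>& h) {
    int m = h.size(), n = m ? (int)h[0].size() : 0;
    if (m < 3 || n < 3) return 0;
    std::vector<std::vector<bool>> seen(m, std::vector<bool>(n, false));
    // min-heap of (water level, row, col), seeded with the border cells
    using Cell = std::tuple<int, int, int>;
    std::priority_queue<Cell, std::vector<Cell>, std::greater<Cell>> pq;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
            if (i == 0 || j == 0 || i == m - 1 || j == n - 1) {
                pq.emplace(h[i][j], i, j);
                seen[i][j] = true;
            }
    long long volume = 0;
    const int di[] = {1, -1, 0, 0}, dj[] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [level, i, j] = pq.top();
        pq.pop();
        for (int d = 0; d < 4; ++d) {
            int ni = i + di[d], nj = j + dj[d];
            if (ni < 0 || nj < 0 || ni >= m || nj >= n || seen[ni][nj]) continue;
            seen[ni][nj] = true;
            volume += std::max(0, level - h[ni][nj]);        // water standing on this cell
            pq.emplace(std::max(level, h[ni][nj]), ni, nj);  // the level at which it can escape
        }
    }
    return volume;
}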
user3290797's "slightly modified Dijkstra algorithm" is closer to Prim's algorithm than Dijkstra's. In minimum spanning tree terms, we prepare a graph with one vertex per tile, one vertex for the outside, and edges with weights equal to the maximum height of their two adjoining tiles (the outside has height "minus infinity").
Given a path in this graph to the outside vertex, the maximum weight of an edge in the path is the height that the water has to reach in order to escape along that path. The relevant property of a minimum spanning tree is that, for every pair of vertices, the maximum weight of an edge in the path in the spanning tree is the minimum possible among all paths between those vertices. The minimum spanning tree thus describes the most economical escape paths for water, and the water heights can be extracted in linear time with one traversal.
As a bonus, since the graph is planar, there's a linear-time algorithm for computing the minimum spanning tree, consisting of alternating Boruvka passes and simplifications. This improves on the O(n log n) running time of Prim.
This problem can be solved using the Priority-Flood algorithm. It's been discovered and published a number of times over the past few decades (and again by other people answering this question), though the specific variant you're looking for is not, to my knowledge, in the literature.
You can find a review paper of the algorithm and its variants here. Since that paper was published an even faster variant has been discovered (link), as well as methods to perform this calculation on datasets of trillions of cells (link). A method for selectively breaching low/narrow divides is discussed here. Contact me if you'd like copies of any of these papers.
I have a repository here with many of the above variants; additional implementations can be found here.
A simple script to calculate volume using the RichDEM library is as follows:
#include "richdem/common/version.hpp"
#include "richdem/common/router.hpp"
#include "richdem/depressions/Lindsay2016.hpp"
#include "richdem/common/Array2D.hpp"
/**
  @brief  Calculates the volume of depressions in a DEM
  @author Richard Barnes (rbarnes@umn.edu)

  Priority-Flood starts on the edges of the DEM and then works its way inwards
  using a priority queue to determine the lowest cell which has a path to the
  edge. The neighbours of this cell are added to the priority queue if they
  are higher. If they are lower, then they are members of a depression and the
  elevation of the flooding minus the elevation of the DEM times the cell area
  is the flooded volume of the cell. The cell is flooded, total volume
  tracked, and the neighbors are then added to a "depressions" queue which is
  used to flood depressions. Cells which are higher than a depression being
  filled are added to the priority queue. In this way, depressions are filled
  without incurring the expense of the priority queue.

  @param[in,out]  &elevations   A grid of cell elevations

  @pre
    1. **elevations** contains the elevations of every cell or a value _NoData_
       for cells not part of the DEM. Note that the _NoData_ value is assumed to
       be a negative number less than any actual data value.

  @return
    Returns the total volume of the flooded depressions.

  @correctness
    The correctness of this command is determined by inspection. (TODO)
*/
template <class elev_t>
double improved_priority_flood_volume(const Array2D<elev_t> &elevations){
GridCellZ_pq<elev_t> open;
std::queue<GridCellZ<elev_t> > pit;
uint64_t processed_cells = 0;
uint64_t pitc = 0;
ProgressBar progress;
std::cerr<<"\nPriority-Flood (Improved) Volume"<<std::endl;
std::cerr<<"\nC Barnes, R., Lehman, C., Mulla, D., 2014. Priority-flood: An optimal depression-filling and watershed-labeling algorithm for digital elevation models. Computers & Geosciences 62, 117–127. doi:10.1016/j.cageo.2013.04.024"<<std::endl;
std::cerr<<"p Setting up boolean flood array matrix..."<<std::endl;
//Used to keep track of which cells have already been considered
Array2D<int8_t> closed(elevations.width(),elevations.height(),false);
std::cerr<<"The priority queue will require approximately "
<<(elevations.width()*2+elevations.height()*2)*((long)sizeof(GridCellZ<elev_t>))/1024/1024
<<"MB of RAM."
<<std::endl;
std::cerr<<"p Adding cells to the priority queue..."<<std::endl;
//Add all cells on the edge of the DEM to the priority queue
for(int x=0;x<elevations.width();x++){
open.emplace(x,0,elevations(x,0) );
open.emplace(x,elevations.height()-1,elevations(x,elevations.height()-1) );
closed(x,0)=true;
closed(x,elevations.height()-1)=true;
}
for(int y=1;y<elevations.height()-1;y++){
open.emplace(0,y,elevations(0,y) );
open.emplace(elevations.width()-1,y,elevations(elevations.width()-1,y) );
closed(0,y)=true;
closed(elevations.width()-1,y)=true;
}
double volume = 0;
std::cerr<<"p Performing the improved Priority-Flood..."<<std::endl;
progress.start( elevations.size() );
while(open.size()>0 || pit.size()>0){
GridCellZ<elev_t> c;
if(pit.size()>0){
c=pit.front();
pit.pop();
} else {
c=open.top();
open.pop();
}
processed_cells++;
for(int n=1;n<=8;n++){
int nx=c.x+dx[n];
int ny=c.y+dy[n];
if(!elevations.inGrid(nx,ny)) continue;
if(closed(nx,ny))
continue;
closed(nx,ny)=true;
if(elevations(nx,ny)<=c.z){
if(elevations(nx,ny)<c.z){
++pitc;
volume += (c.z-elevations(nx,ny))*std::abs(elevations.getCellArea());
}
pit.emplace(nx,ny,c.z);
} else
open.emplace(nx,ny,elevations(nx,ny));
}
progress.update(processed_cells);
}
std::cerr<<"t Succeeded in "<<std::fixed<<std::setprecision(1)<<progress.stop()<<" s"<<std::endl;
std::cerr<<"m Cells processed = "<<processed_cells<<std::endl;
std::cerr<<"m Cells in pits = " <<pitc <<std::endl;
return volume;
}
template<class T>
int PerformAlgorithm(std::string analysis, Array2D<T> elevations){
elevations.loadData();
std::cout<<"Volume: "<<improved_priority_flood_volume(elevations)<<std::endl;
return 0;
}
int main(int argc, char **argv){
std::string analysis = PrintRichdemHeader(argc,argv);
if(argc!=2){
std::cerr<<argv[0]<<" <Input>"<<std::endl;
return -1;
}
return PerformAlgorithm(argv[1],analysis);
}
It should be straightforward to adapt this to whatever 2D array format you are using.
In pseudocode, the following is equivalent to the foregoing:
Let PQ be a priority-queue which always pops the cell of lowest elevation
Let Closed be a boolean array initially set to False
Let Volume = 0
Add all the border cells to PQ.
For each border cell, set the cell's entry in Closed to True.
While PQ is not empty:
Select the top cell from PQ, call it C.
Pop the top cell from PQ.
For each neighbor N of C:
If Closed(N):
Continue
If Elevation(N)<Elevation(C):
Volume += (Elevation(C)-Elevation(N))*Area
Add N to PQ, but with Elevation(C)
Else:
Add N to PQ with Elevation(N)
Set Closed(N)=True
This problem is very close to the construction of the morphological watershed of a grayscale image.
One approach is as follows (flooding process):
sort all pixels by increasing elevation.
work incrementally, by increasing elevations, assigning labels to the pixels per catchment basin.
for a new elevation level, you need to label a new set of pixels:
Some have no labeled neighbor; they form a local minimum configuration and begin a new catchment basin.
Some have only neighbors with the same label; they can be labeled similarly (they extend a catchment basin).
Some have neighbors with different labels. They do not belong to a specific catchment basin and they define the watershed lines.
You will need to enhance the standard watershed algorithm to be able to compute the volume of water. You can do that by determining the maximum water level in each basin and deduce the ground height on every pixel. The water level in a basin is given by the elevation of the lowest watershed pixel around it.
You can act every time you discover a watershed pixel: if a neighboring basin has not been assigned a level yet, that basin can stand the current level without leaking.
In order to solve the trapped water problem in 3D, i.e. to calculate the maximum volume of trapped rain water, you can do something like this:
#include<bits/stdc++.h>
using namespace std;
#define MAX 10
int new2d[MAX][MAX];
int dp[MAX][MAX],visited[MAX][MAX];
int dx[] = {1,0,-1,0};
int dy[] = {0,-1,0,1};
int boundedBy(int i,int j,int k,int in11,int in22)
{
if(i<0 || j<0 || i>=in11 || j>=in22)
return 0;
if(new2d[i][j]>k)
return new2d[i][j];
if(visited[i][j]) return INT_MAX;
visited[i][j] = 1;
int r = INT_MAX;
for(int dir = 0 ; dir<4 ; dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
r = min(r,boundedBy(nx,ny,k,in11,in22));
}
return r;
}
void mark(int i,int j,int k,int in1,int in2)
{
if(i<0 || j<0 || i>=in1 || j>=in2)
return;
if(new2d[i][j]>=k)
return;
if(visited[i][j]) return ;
visited[i][j] = 1;
for(int dir = 0;dir<4;dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
mark(nx,ny,k,in1,in2);
}
dp[i][j] = max(dp[i][j],k);
}
struct node
{
int i,j,key;
node(int x,int y,int k)
{
i = x;
j = y;
key = k;
}
};
bool compare(node a,node b)
{
return a.key>b.key;
}
vector<node> store;
int getData(int input1, int input2, int input3[])
{
int row=input1;
int col=input2;
int temp=0;
int count=0;
for(int i=0;i<row;i++)
{
for(int j=0;j<col;j++)
{
if(count==(col*row))
break;
new2d[i][j]=input3[count];
count++;
}
}
store.clear();
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
store.push_back(node(i,j,new2d[i][j]));
}
}
memset(dp,0,sizeof(dp));
sort(store.begin(),store.end(),compare);
for(int i = 0;i<store.size();i++)
{
memset(visited,0,sizeof(visited));
int aux = boundedBy(store[i].i,store[i].j,store[i].key,input1,input2);
if(aux>store[i].key)
{
memset(visited,0,sizeof(visited));
mark(store[i].i,store[i].j,aux,input1,input2);
}
}
long long result =0 ;
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
result = result + max(0,dp[i][j]-new2d[i][j]);
}
}
return result;
}
int main()
{
cin.sync_with_stdio(false);
cout.sync_with_stdio(false);
int n,m;
cin>>n>>m;
int inp3[n*m];
store.clear();
for(int j = 0;j<n*m;j++)
{
cin>>inp3[j];
}
int k = getData(n,m,inp3);
cout<<k;
return 0;
}
class Solution(object):
    def trapRainWater(self, heightMap):
        """
        :type heightMap: List[List[int]]
        :rtype: int
        """
        m = len(heightMap)
        if m == 0:
            return 0
        n = len(heightMap[0])
        if n == 0:
            return 0
        visited = [[False for i in range(n)] for j in range(m)]
        from Queue import PriorityQueue
        q = PriorityQueue()
        for i in range(m):
            visited[i][0] = True
            q.put([heightMap[i][0], i, 0])
            visited[i][n-1] = True
            q.put([heightMap[i][n-1], i, n-1])
        for j in range(1, n-1):
            visited[0][j] = True
            q.put([heightMap[0][j], 0, j])
            visited[m-1][j] = True
            q.put([heightMap[m-1][j], m-1, j])
        S = 0
        while not q.empty():
            cell = q.get()
            for (i, j) in [(1,0), (-1,0), (0,1), (0,-1)]:
                x = cell[1] + i
                y = cell[2] + j
                if x in range(m) and y in range(n) and not visited[x][y]:
                    S += max(0, cell[0] - heightMap[x][y])  # how much water at the cell
                    q.put([max(heightMap[x][y], cell[0]), x, y])
                    visited[x][y] = True
        return S
Here is simple code for the same (note that this solves the original 2D bar version of the problem, not the 3D one):
#include<iostream>
using namespace std;
int main()
{
int n,count=0,a[100];
cin>>n;
for(int i=0;i<n;i++)
{
cin>>a[i];
}
for(int i=1;i<n-1;i++)
{
///computing left most largest and Right most largest element of array;
int leftmax=0;
int rightmax=0;
///left most largest (include index 0 so the boundary bar counts as a wall)
for(int j=i-1;j>=0;j--)
{
if(a[j]>leftmax)
{
leftmax=a[j];
}
}
///rightmost largest
for(int k=i+1;k<=n-1;k++)
{
if(a[k]>rightmax)
{
rightmax=a[k];
}
}
///computing hight of the water contained-
int x=(min(rightmax,leftmax)-a[i]);
if(x>0)
{
count=count+x;
}
}
cout<<count;
return 0;
}

Random sample of values with the specified resulting probabilities

Imagine we have four symbols - 'a', 'b', 'c', 'd'. We also have four given probabilities of those symbols appearing in the function output - P1, P2, P3, P4 (the sum of which is equal to 1). How would one implement a function which would generate a random sample of three of those symbols, such that the resulting symbols are present in it with the specified probabilities?
Example: 'a', 'b', 'c' and 'd' have the probabilities of 9/30, 8/30, 7/30 and 6/30 respectively. The function outputs various random samples of any three out of those four symbols: 'abc', 'dca', 'bad' and so on. We run this function many times, counting the amount of times each of the symbols is encountered in its output. At the end, the value of counts stored for 'a' divided by the total amount of symbols output should converge to 9/30, for 'b' to 8/30, for 'c' to 7/30, and for 'd' to 6/30.
E.g. the function generates 10 outputs:
adc
dab
bca
dab
dba
cab
dcb
acd
cab
abc
which out of 30 symbols contains 9 of 'a', 8 of 'b', 7 of 'c' and 6 of 'd'. This is an idealistic example, of course, as the values would only converge when the number of samples is much larger - but it should hopefully convey the idea.
Obviously, this all is only possible when none of the probabilities is larger than 1/3, since each single sample output always contains three distinct symbols. It is OK for the function to enter an infinite loop or otherwise behave erratically if it's impossible to satisfy the values provided.
Note: the function should obviously use an RNG, but should otherwise be stateless. Each new invocation should be independent from any of the previous ones, except for the RNG state.
EDIT: Even though the description mentions choosing 3 out of 4 values, ideally the algorithm should be able to cope with any sample size.
Your problem is underdetermined.
If we assign a probability to each string of three letters that we allow - p(abc), p(abd), p(acd), etc. - we can generate a series of equations:
eqn1: p(abc) + p(abd) + ... others with a "a" ... = p1
...
...
eqn2: p(abd) + p(acd) + ... others with a "d" ... = p4
This has more unknowns than equations, so there are many ways of solving it. Once a solution is found, by whatever method you choose (use the simplex algorithm if you are me), sample from the probabilities of each string using the roulette method that @alestanis describes.
from numpy import *
# using cvxopt-1.1.5
from cvxopt import matrix, solvers

###########################
# Functions to do some parts

# function to find all valid outputs
def perms(alphabet, length):
    if length == 0:
        yield ""
        return
    for i in range(len(alphabet)):
        val1 = alphabet[i]
        for val2 in perms(alphabet[:i]+alphabet[i+1:], length-1):
            yield val1 + val2

# roulette sampler
def roulette_sampler(values, probs):
    # Create cumulative prob distro
    probs_cum = [sum(probs[:i+1]) for i in range(n_strings)]
    def fun():
        r = random.rand()
        for p, s in zip(probs_cum, values):
            if r < p:
                return s
        # in case of rounding error
        return values[-1]
    return fun

############################
# Main Part

# create list of all valid strings
alphabet = "abcd"
string_length = 3
alpha_probs = [string_length*x/30. for x in range(9,5,-1)]

# show probs
for a, p in zip(alphabet, alpha_probs):
    print "p("+a+") =", p

# all valid outputs for this particular case
strings = [perm for perm in perms(alphabet, string_length)]
n_strings = len(strings)

# constraints from probabilities p(abc) + p(abd) ... = p(a)
contains = array([[1. if s.find(a) >= 0 else 0. for a in alphabet] for s in strings])
#both = concatenate((contains,wons), axis=1).T # hacky, but whatever
#A = matrix(both)
#b = matrix(alpha_probs + [1.])
A = matrix(contains.T)
b = matrix(alpha_probs)

# also need to constrain to [0,1]
wons = array([[1. for s in strings]])
G = matrix(concatenate((eye(n_strings),wons,-eye(n_strings),-wons)))
h = matrix(concatenate((ones(n_strings+1),zeros(n_strings+1))))

## target matrices for approx KL divergence
# uniform prob over valid outputs
u = 1./len(strings)
P = matrix(eye(n_strings))
q = -0.5*u*matrix(ones(n_strings))
# will minimise p^2 - pq for each p val equally

# Do convex optimisation
sol = solvers.qp(P,q,G,h,A,b)
probs = array(sol['x'])

# Print output
for s, p in zip(strings, probs):
    print "p("+s+") =", p
checkprobs = [0. for char in alphabet]
for a, i in zip(alphabet, range(len(alphabet))):
    for s, p in zip(strings, probs):
        if s.find(a) > -1:
            checkprobs[i] += p
    print "p("+a+") =", checkprobs[i]
print "total =", sum(probs)

# Create the sampling function
rndstring = roulette_sampler(strings, probs)

###################
# Verify
print "sampling..."
test_n = 1000
output = [rndstring() for i in xrange(test_n)]

# find which one it is
sampled_freqs = []
for char in alphabet:
    n = 0
    for val in output:
        if val.find(char) > -1:
            n += 1
    sampled_freqs += [n]

print "plotting histogram..."
import matplotlib.pyplot as plt
plt.bar(range(0,len(alphabet)), array(sampled_freqs)/float(test_n), width=0.5)
plt.show()
EDIT: Python code
I think this is a pretty interesting problem. I don't know the general solution, but it's easy enough to solve in the case of samples of size n-1 (if there is a solution), since there are exactly n possible samples, each of which corresponds to the absence of one of the elements.
Suppose we are seeking Fa = 9/30, Fb = 8/30, Fc = 7/30, Fd = 6/30 in samples of size 3 from a universe of size 4, as in the OP. We can translate each of those frequencies directly into a frequency of samples by selecting the samples which do not contain the given object. For example, we wish 9/30 of the selected objects to be a's; we cannot have more than one a in a sample, and we always have three symbols in a sample; consequently, 9/10 of the samples must contain a and 1/10 cannot contain a. But there is only one possible sample which doesn't contain a: bcd. So 10% of the samples must be bcd. Similarly, 20% must be acd; 30% abd and 40% abc. (Or, more generally, Fā = 1 - (n-1)Fa where Fā is the frequency of the (unique) sample which does not include a)
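To make that concrete, here is a short sketch of the k = n-1 case (my own C++ illustration; the symbols and probabilities are the ones from the question): compute the probability of omitting each symbol, pick which symbol to leave out, and emit the rest in random order.

#include <algorithm>
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main() {
    const std::string symbols = "abcd";
    const std::vector<double> p = {9.0/30, 8.0/30, 7.0/30, 6.0/30};
    const int k = symbols.size() - 1;                 // sample size n-1

    std::vector<double> q;                            // probability of omitting each symbol
    for (double pi : p) q.push_back(1.0 - k * pi);    // 1/10, 2/10, 3/10, 4/10 here

    std::mt19937 rng(std::random_device{}());
    std::discrete_distribution<int> omit(q.begin(), q.end());

    for (int t = 0; t < 10; ++t) {
        std::string word;
        int skip = omit(rng);
        for (int i = 0; i < (int)symbols.size(); ++i)
            if (i != skip) word += symbols[i];
        std::shuffle(word.begin(), word.end(), rng);  // order within a sample is uniform
        std::cout << word << "\n";
    }
    return 0;
}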
I can't help thinking that this observation combined with one of the classic ways of generating unique samples can solve the general problem. But I don't have that solution. For what it's worth, the algorithm I'm thinking of is the following:
To select a random sample of size k out of a universe U of n objects:
1) Set needed = k; available = n.
2) For each element in U, select a random number in the range [0, 1).
3) If the random number is less than needed/available:
3a) Add the element to the sample.
3b) Decrement needed by 1. If it reaches 0, we're finished.
4) Decrement available, and continue with the next element in U.
So my idea is that it should be possible to manipulate the frequency of each element by changing the threshold in step 3, making it somehow a function of the desired frequency of the corresponding element.
Assuming that the length of a word is always one less than the number of symbols, the following C# code does the job:
using System;
using System.Collections.Generic;
using System.Linq;
using MathNet.Numerics.Distributions;
namespace RandomSymbols
{
class Program
{
static void Main(string[] args)
{
// Sample case: Four symbols with the following distribution, and 10000 trials
double[] distribution = { 9.0/30, 8.0/30, 7.0/30, 6.0/30 };
int trials = 10000;
// Create an array containing all of the symbols
char[] symbols = Enumerable.Range('a', distribution.Length).Select(s => (char)s).ToArray();
// We're assuming that the word length is always one less than the number of symbols
int wordLength = symbols.Length - 1;
// Calculate the probability of each symbol being excluded from a given word
double[] excludeDistribution = Array.ConvertAll(distribution, p => 1 - p * wordLength);
// Create a random variable using the MathNet.Numerics library
var randVar = new Categorical(excludeDistribution);
var random = new Random();
randVar.RandomSource = random;
// We'll store all of the words in an array
string[] words = new string[trials];
for (int t = 0; t < trials; t++)
{
// Start with a word containing all of the symbols
var word = new List<char>(symbols);
// Remove one of the symbols
word.RemoveAt(randVar.Sample());
// Randomly permute the remainder
for (int i = 0; i < wordLength; i++)
{
int swapIndex = random.Next(wordLength);
char temp = word[swapIndex];
word[swapIndex] = word[i];
word[i] = temp;
}
// Store the word
words[t] = new string(word.ToArray());
}
// Display words
Array.ForEach(words, w => Console.WriteLine(w));
// Display histogram
Array.ForEach(symbols, s => Console.WriteLine("{0}: {1}", s, words.Count(w => w.Contains(s))));
}
}
}
Update: The following is a C implementation of the method that rici outlined. The tricky part is calculating the thresholds that he mentions, which I did with recursion.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
// ****** Change the following for different symbol distributions, word lengths, and number of trials ******
double targetFreqs[] = {10.0/43, 9.0/43, 8.0/43, 7.0/43, 6.0/43, 2.0/43, 1.0/43 };
const int WORDLENGTH = 4;
const int TRIALS = 1000000;
// *********************************************************************************************************
const int SYMBOLCOUNT = sizeof(targetFreqs) / sizeof(double);
double inclusionProbs[SYMBOLCOUNT];
double probLeftToIncludeTable[SYMBOLCOUNT][SYMBOLCOUNT];
// Calculates the probability that there will be n symbols left to be included when we get to the ith symbol.
double probLeftToInclude(int i, int n)
{
if (probLeftToIncludeTable[i][n] == -1)
{
// If this is the first symbol, then the number of symbols left to be included is simply the word length.
if (i == 0)
{
probLeftToIncludeTable[i][n] = (n == WORDLENGTH ? 1.0 : 0.0);
}
else
{
// Calculate the probability based on the previous symbol's probabilities.
// To get to a point where there are n symbols left to be included, either there were n+1 symbols left
// when we were considering that previous symbol and we included it, leaving n,
// or there were n symbols left and we didn't included it, also leaving n.
// We have to take into account that the previous symbol may have been mandatorily included.
probLeftToIncludeTable[i][n] = probLeftToInclude(i-1, n+1) * (n == SYMBOLCOUNT-i ? 1.0 : inclusionProbs[i-1])
+ probLeftToInclude(i-1, n) * (n == 0 ? 1.0 : 1 - inclusionProbs[i-1]);
}
}
return probLeftToIncludeTable[i][n];
}
// Calculates the probability that the ith symbol won't *have* to be included or *have* to be excluded.
double probInclusionIsOptional(int i)
{
// The probability that inclusion is optional is equal to 1.0
// minus the probability that none of the remaining symbols can be included
// minus the probability that all of the remaining symbols must be included.
return 1.0 - probLeftToInclude(i, 0) - probLeftToInclude(i, SYMBOLCOUNT - i);
}
// Calculates the probability with which the ith symbol should be included, assuming that
// it doesn't *have* to be included or *have* to be excluded.
double inclusionProb(int i)
{
// The following is derived by simple algebra:
// Unconditional probability = (1.0 * probability that it must be included) + (inclusionProb * probability that inclusion is optional)
// therefore...
// inclusionProb = (Unconditional probability - probability that it must be included) / probability that inclusion is optional
return (targetFreqs[i]*WORDLENGTH - probLeftToInclude(i, SYMBOLCOUNT - i)) / probInclusionIsOptional(i);
}
int main(int argc, char* argv[])
{
srand(time(NULL));
// Initialize inclusion probabilities
for (int i=0; i<SYMBOLCOUNT; i++)
for (int j=0; j<SYMBOLCOUNT; j++)
probLeftToIncludeTable[i][j] = -1.0;
// Calculate inclusion probabilities
for (int i=0; i<SYMBOLCOUNT; i++)
{
inclusionProbs[i] = inclusionProb(i);
}
// Histogram
int histogram[SYMBOLCOUNT];
for (int i=0; i<SYMBOLCOUNT; i++)
{
histogram[i] = 0;
}
// Scratchpad for building our words
char word[WORDLENGTH+1];
word[WORDLENGTH] = '\0';
// Run trials
for (int t=0; t<TRIALS; t++)
{
int included = 0;
// Build the word by including or excluding symbols according to the problem constraints
// and the probabilities in inclusionProbs[].
for (int i=0; i<SYMBOLCOUNT && included<WORDLENGTH; i++)
{
if (SYMBOLCOUNT - i == WORDLENGTH - included // if we have to include this symbol
|| (double)rand()/(double)RAND_MAX < inclusionProbs[i]) // or if we get a lucky roll of the dice
{
word[included++] = 'a' + i;
histogram[i]++;
}
}
// Randomly permute the word
for (int i=0; i<WORDLENGTH; i++)
{
int swapee = rand() % WORDLENGTH;
char temp = word[swapee];
word[swapee] = word[i];
word[i] = temp;
}
// Uncomment the following to show each word
// printf("%s\r\n", word);
}
// Show the histogram
for (int i=0; i<SYMBOLCOUNT; i++)
{
printf("%c: target=%d, actual=%d\r\n", 'a'+i, (int)(targetFreqs[i]*WORDLENGTH*TRIALS), histogram[i]);
}
return 0;
}
To do this you have to use a temporary array storing the cumulative sums of your probabilities.
In your example, probabilities are 9/30, 8/30, 7/30 and 6/30 respectively.
You should then have an array:
values = {'a', 'b', 'c', 'd'}
proba = {9/30, 17/30, 24/30, 1}
Then you pick a random number r in [0, 1] and walk the cumulative array like this:
char chooseRandom() {
    double r = rand() / (double)RAND_MAX; // random number in [0, 1]
    int i = 0;
    while (r > proba[i])
        ++i;
    return values[i];
}

Calculate the Hilbert value of a point for use in a Hilbert R-Tree?

I have an application where a Hilbert R-Tree (wikipedia) (citeseer) would seem to be an appropriate data structure. Specifically, it requires reasonably fast spatial queries over a data set that will experience a lot of updates.
However, as far as I can see, none of the descriptions of the algorithms for this data structure even mentions how to actually calculate the requisite Hilbert value, which is the distance along a Hilbert curve to the point.
So any suggestions for how to go about calculating this?
Fun question!
I did a bit of googling, and the good news is, I've found an implementation of Hilbert Value.
The potentially bad news is, it's in Haskell...
http://www.serpentine.com/blog/2007/01/11/two-dimensional-spatial-hashing-with-space-filling-curves/
It also proposes a Lebesgue distance metric you might be able to compute more easily.
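For completeness, here is a compact C++ sketch of computing the Hilbert index directly with the standard quadrant-rotation method (the same scheme as the well-known C snippet on the Wikipedia "Hilbert curve" article); n is the grid side length and must be a power of two:

#include <cstdint>

static void rot(int n, int& x, int& y, int rx, int ry) {
    if (ry == 0) {
        if (rx == 1) {           // flip the quadrant
            x = n - 1 - x;
            y = n - 1 - y;
        }
        int t = x; x = y; y = t; // swap x and y
    }
}

// Convert (x, y) in an n-by-n grid to its distance d along the Hilbert curve.
std::uint64_t xy2d(int n, int x, int y) {
    std::uint64_t d = 0;
    for (int s = n / 2; s > 0; s /= 2) {
        int rx = (x & s) > 0;
        int ry = (y & s) > 0;
        d += static_cast<std::uint64_t>(s) * s * ((3 * rx) ^ ry);
        rot(n, x, y, rx, ry);
    }
    return d;
}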
Below is my java code adapted from C code in the paper "Encoding and decoding the Hilbert order" by Xian Lu and Gunther Schrack, published in Software: Practice and Experience Vol. 26 pp 1335-46 (1996).
Hope this helps. Improvements welcome !
Michael
/**
 * Find the Hilbert order (=vertex index) for the given grid cell
 * coordinates.
 * @param x cell column (from 0)
 * @param y cell row (from 0)
 * @param r resolution of Hilbert curve (grid will have Math.pow(2,r)
 *          rows and cols)
 * @return Hilbert order
 */
public static int encode(int x, int y, int r) {
    int mask = (1 << r) - 1;
    int hodd = 0;
    int heven = x ^ y;
    int notx = ~x & mask;
    int noty = ~y & mask;
    int temp = notx ^ y;
    int v0 = 0, v1 = 0;
    for (int k = 1; k < r; k++) {
        v1 = ((v1 & heven) | ((v0 ^ noty) & temp)) >> 1;
        v0 = ((v0 & (v1 ^ notx)) | (~v0 & (v1 ^ noty))) >> 1;
    }
    hodd = (~v0 & (v1 ^ x)) | (v0 & (v1 ^ noty));
    return interleaveBits(hodd, heven);
}
/**
 * Interleave the bits from two input integer values
 * @param odd integer holding bit values for odd bit positions
 * @param even integer holding bit values for even bit positions
 * @return the integer that results from interleaving the input bits
 *
 * @todo: I'm sure there's a more elegant way of doing this !
 */
private static int interleaveBits(int odd, int even) {
    int val = 0;
    // Replaced this line with the improved code provided by Tuska
    // int n = Math.max(Integer.highestOneBit(odd), Integer.highestOneBit(even));
    int max = Math.max(odd, even);
    int n = 0;
    while (max > 0) {
        n++;
        max >>= 1;
    }
    for (int i = 0; i < n; i++) {
        int bitMask = 1 << i;
        int a = (even & bitMask) > 0 ? (1 << (2*i)) : 0;
        int b = (odd & bitMask) > 0 ? (1 << (2*i+1)) : 0;
        val += a + b;
    }
    return val;
}
See uzaygezen.
The code and java code above are fine for 2D data points. But for higher dimensions you may need to look at Jonathan Lawder's paper: J.K.Lawder. Calculation of Mappings Between One and n-dimensional Values Using the Hilbert Space-filling Curve.
I figured out a slightly more efficient way to interleave bits. It can be found at the Stanford Graphics Website. I included a version that I created that can interleave two 32 bit integers into one 64 bit long.
public static long spreadBits32(int y) {
    long[] B = new long[] {
        0x5555555555555555L,
        0x3333333333333333L,
        0x0f0f0f0f0f0f0f0fL,
        0x00ff00ff00ff00ffL,
        0x0000ffff0000ffffL,
        0x00000000ffffffffL
    };
    int[] S = new int[] { 1, 2, 4, 8, 16, 32 };
    long x = y;
    x = (x | (x << S[5])) & B[5];
    x = (x | (x << S[4])) & B[4];
    x = (x | (x << S[3])) & B[3];
    x = (x | (x << S[2])) & B[2];
    x = (x | (x << S[1])) & B[1];
    x = (x | (x << S[0])) & B[0];
    return x;
}

public static long interleave64(int x, int y) {
    return spreadBits32(x) | (spreadBits32(y) << 1);
}
Obviously, the B and S local variables should be class constants but it was left this way for simplicity.
Michael,
thanks for your Java code! I tested it and it seems to work fine, but I noticed that the bit-interleaving function overflows at recursion level 7 (at least in my tests, but I used long values), because the "n"-value is calculated using highestOneBit()-function, which returns the value and not the position of the highest one bit; so the loop does unnecessarily many interleavings.
I just changed it to the following snippet, and after that it worked fine.
int max = Math.max(odd, even);
int n = 0;
while (max > 0) {
n++;
max >>= 1;
}
If you need a spatial index with fast delete/insert capabilities, have a look at the PH-tree. It is partly based on quadtrees but faster and more space efficient. Internally it uses a Z-curve, which has slightly worse spatial properties than an H-curve but is much easier to calculate.
Paper: http://www.globis.ethz.ch/script/publication/download?docid=699
Java implementation: http://globis.ethz.ch/files/2014/11/ph-tree-2014-11-10.zip
Another option is the X-tree, which is also available here:
https://code.google.com/p/xxl/
Suggestion: A good simple efficient data structure for spatial queries is a multidimensional binary tree.
In a traditional binary tree, there is one "discriminant"; the value that's used to determine whether you take the left branch or the right branch. This can be considered to be the one-dimensional case.
In a multidimensional binary tree, you have multiple discriminants; consecutive levels use different discriminants. For example, for two-dimensional spatial data, you could use the X and Y coordinates as discriminants. Consecutive levels would use X, Y, X, Y...
For spatial queries (for example finding all nodes within a rectangle) you do a depth-first search of the tree starting at the root, and you use the discriminant at each level to avoid searching down branches that contain no nodes in the given rectangle.
This allows you to potentially cut the search space in half at each level, making it very efficient for finding small regions in a massive data set. (BTW, this data structure is also useful for partial-match queries, i.e. queries that omit one or more discriminants. You just search down both branches at levels with an omitted discriminant.)
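As an illustration only (my own sketch, not from the papers below), here is a minimal 2-D version with insertion and a rectangle query, alternating X and Y as the discriminant and pruning branches that cannot contain matches:

#include <memory>
#include <vector>

struct Point { double x, y; };

struct KdNode {
    Point p;
    std::unique_ptr<KdNode> left, right;
};

// Insert a point; depth picks the discriminant (even levels = X, odd levels = Y).
void insert(std::unique_ptr<KdNode>& node, Point p, int depth = 0) {
    if (!node) { node = std::make_unique<KdNode>(); node->p = p; return; }
    bool goLeft = (depth % 2 == 0) ? (p.x < node->p.x) : (p.y < node->p.y);
    insert(goLeft ? node->left : node->right, p, depth + 1);
}

// Collect all points inside the rectangle [xlo,xhi] x [ylo,yhi],
// skipping whole subtrees that lie outside the query range.
void rangeQuery(const KdNode* node, double xlo, double xhi, double ylo, double yhi,
                std::vector<Point>& out, int depth = 0) {
    if (!node) return;
    if (node->p.x >= xlo && node->p.x <= xhi && node->p.y >= ylo && node->p.y <= yhi)
        out.push_back(node->p);
    double key = (depth % 2 == 0) ? node->p.x : node->p.y;
    double lo  = (depth % 2 == 0) ? xlo : ylo;
    double hi  = (depth % 2 == 0) ? xhi : yhi;
    if (lo < key)  rangeQuery(node->left.get(),  xlo, xhi, ylo, yhi, out, depth + 1);
    if (hi >= key) rangeQuery(node->right.get(), xlo, xhi, ylo, yhi, out, depth + 1);
}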
A good paper on this data structure: http://portal.acm.org/citation.cfm?id=361007
This article has good diagrams and algorithm descriptions: http://en.wikipedia.org/wiki/Kd-tree
