Seeded Pseudo Random Number Generator - random

I am developing a Perlin Noise generator which works based on a seed integer and on two other integers: x and y.
Right now, the pseudo-random number generator looks like this:
private float noise(int x, int y) {
    int n = x + y * seed;
    return (1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824f);
}
But there are some problems with this implementation: first of all, the return interval is not constant (ideally, I would like to work with [-1, 1] or [0, 1]), and for negative x and y values the pattern gets blocky, not looking organic at all.
Is there a way to change my formula (or maybe use a totally new one) that would make it fit my needs?

I use this one (I found it on the net, but I don't have the original link anymore):
private double noise(int x, int y) {
    int n = (int) x * 331 + (int) y * 337; // add your seed on this line
    n = (n << 13) ^ n;
    int nn = (n * (n * n * 41333 + 53307781) + 1376312589) & 0x7fffffff;
    return ((1.0 - ((double) nn / 1073741824.0)) + 1) / 2.0;
}
You can easily add your seed to it; one possible way is sketched below.
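For instance, a minimal C++ sketch of mixing a seed into the same generator (the third prime, 347, is an illustrative choice rather than a canonical constant, and unsigned arithmetic avoids the undefined behaviour of signed overflow):

#include <cstdint>

// Same generator as above, with a seed folded into the input hash.
double seededNoise(int x, int y, int seed) {
    std::uint32_t n = static_cast<std::uint32_t>(x) * 331u
                    + static_cast<std::uint32_t>(y) * 337u
                    + static_cast<std::uint32_t>(seed) * 347u; // illustrative prime
    n = (n << 13) ^ n;
    std::uint32_t nn = (n * (n * n * 41333u + 53307781u) + 1376312589u) & 0x7fffffffu;
    return ((1.0 - nn / 1073741824.0) + 1.0) / 2.0; // result stays within (0, 1]
}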

Related

C++ srand() repeating the same string of numbers [duplicate]

So, I'm trying to create a random vector (think geometry, not an expandable array), and every time I call my random vector function I get the same x value, though y and z are different.
int main() {
    srand((unsigned)time(NULL));
    Vector<double> a;
    a.randvec();
    cout << a << endl;
    return 0;
}
using the function
// random Vector
template <class T>
void Vector<T>::randvec()
{
    const int min = -10, max = 10;
    int randx, randy, randz;
    const int bucket_size = RAND_MAX / (max - min);
    do randx = (rand() / bucket_size) + min;
    while (randx <= min && randx >= max);
    x = randx;
    do randy = (rand() / bucket_size) + min;
    while (randy <= min && randy >= max);
    y = randy;
    do randz = (rand() / bucket_size) + min;
    while (randz <= min && randz >= max);
    z = randz;
}
For some reason, randx will consistently return 8, whereas the other numbers seem to be following the (pseudo) randomness perfectly. However, if I put the call to define, say, randy before randx, randy will always return 8.
Why is my first random number always 8? Am I seeding incorrectly?
The issue is that the random number generator is being seeded with values that are very close together - each run of the program only changes the return value of time() by a small amount - maybe 1 second, maybe even none! The rather poor standard random number generator then uses these similar seed values to generate apparently identical initial random numbers. Basically, you need a better initial seed generator than time() and a better random number generator than rand().
The actual looping algorithm used is, I think, lifted from Accelerated C++ and is intended to produce a better spread of numbers over the required range than, say, using the mod operator would. But it can't compensate for always being (effectively) given the same seed. One modern way to follow that advice is sketched below.
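If C++11's <random> is available, a sketch of "better seed, better generator" looks like this (an illustration, not code from the book): std::random_device supplies the seed and std::mt19937 replaces rand().

#include <random>

// Seed a good engine once from a non-deterministic source, then draw
// uniformly distributed ints from it (both bounds inclusive).
int randomInRange(int min, int max) {
    static std::mt19937 engine{std::random_device{}()};
    std::uniform_int_distribution<int> dist(min, max);
    return dist(engine);
}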
I don't see any problem with your srand(), and when I tried running extremely similar code, I did not repeatedly get the same number with the first rand(). However, I did notice another possible issue.
do randx = (rand()/bucket_size)+min;
while (randx <= min && randx >= max);
This line probably does not do what you intended. As long as min < max (and it always should be), it's impossible for randx to be both less than or equal to min and greater than or equal to max. Plus, you don't need to loop at all. Instead, you can get a value in [min, max) using:
randx = rand() % (max - min) + min;
I had exactly the same problem. I fixed it by moving the srand() call so it was only called once in my program (previously I had been seeding it at the top of a function call).
I don't really understand the technicalities - but the problem was solved.
Also worth mentioning: you can get rid of that strange bucket_size variable and use the following method to generate numbers from a to b inclusive:
srand ((unsigned)time(NULL));
const int a = -1;
const int b = 1;
int x = rand() % ((b - a) + 1) + a;
int y = rand() % ((b - a) + 1) + a;
int z = rand() % ((b - a) + 1) + a;
A simple quickfix is to call rand a few times after seeding.
int main()
{
    srand((unsigned)time(NULL));
    rand(); rand(); rand();
    Vector<double> a;
    a.randvec();
    cout << a << endl;
    return 0;
}
Just to explain better, the first call to rand() in four sequential runs of a test program gave the following output:
27592
27595
27598
27602
Notice how similar they are? For example, if you divide rand() by 100, you will get the same number 3 times in a row. Now take a look at the second result of rand() in four sequential runs:
11520
22268
248
10997
This looks much better, doesn't it? I really don't see any reason for the downvotes.
Your implementation, through the integer division, throws away the lowest-order bits of the random number. Since your RNG is seeded with the system time, the first value you get out of it will change only (on average) every 20 seconds.
This should work:
randx = (min) + (int) ((max - min) * rand() / (RAND_MAX + 1.0));
where
rand() / (RAND_MAX + 1.0)
is a random double value in [0, 1) and the rest is just shifting it around.
Not directly related to the code in this question, but I had the same issue: using srand((unsigned)time(NULL)) and still getting the same sequence of values from subsequent calls to rand().
It turned out that srand needs to be called separately on each thread you use it on. I had a loading thread that was generating random content (which wasn't random because of the seed issue). I had only called srand in the main thread, not the loading thread, so adding another srand((unsigned)time(NULL)) at the start of the loading thread fixed the issue.
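For what it's worth, a per-thread sketch using C++11 facilities instead: a thread_local engine gives every thread its own independently seeded generator, which avoids the shared-state problem entirely.

#include <random>

// Each thread lazily constructs and seeds its own engine on first use.
int threadLocalRand(int min, int max) {
    thread_local std::mt19937 engine{std::random_device{}()};
    return std::uniform_int_distribution<int>(min, max)(engine);
}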

Fast random/mutation algorithms (vector to vector) [duplicate]

I've been trying to create a generalized Gradient Noise generator (which doesn't use the hash method to get gradients). The code is below:
class GradientNoise {
    std::uint64_t m_seed;
    std::uniform_int_distribution<std::uint8_t> distribution;
    const std::array<glm::vec2, 4> vector_choice = {glm::vec2(1.0, 1.0), glm::vec2(-1.0, 1.0),
                                                    glm::vec2(1.0, -1.0), glm::vec2(-1.0, -1.0)};
public:
    GradientNoise(uint64_t seed) {
        m_seed = seed;
        distribution = std::uniform_int_distribution<std::uint8_t>(0, 3);
    }
    // 0 -> 1
    // just passes the value through; originally was the Perlin noise activation
    double nonLinearActivationFunction(double value) {
        //return value * value * value * (value * (value * 6.0 - 15.0) + 10.0);
        return value;
    }
    // 0 -> 1
    // cosine interpolation
    double interpolate(double a, double b, double t) {
        double mu2 = (1 - cos(t * M_PI)) / 2;
        return (a * (1 - mu2) + b * mu2);
    }
    double noise(double x, double y) {
        std::mt19937_64 rng;
        // first get the bottom left corner associated with these coordinates
        int corner_x = std::floor(x);
        int corner_y = std::floor(y);
        // then get the respective distance from that corner
        double dist_x = x - corner_x;
        double dist_y = y - corner_y;
        double corner_0_contrib; // bottom left
        double corner_1_contrib; // top left
        double corner_2_contrib; // top right
        double corner_3_contrib; // bottom right
        std::uint64_t s1 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y) + m_seed);
        std::uint64_t s2 = ((std::uint64_t(corner_x) << 32) + std::uint64_t(corner_y + 1) + m_seed);
        std::uint64_t s3 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y + 1) + m_seed);
        std::uint64_t s4 = ((std::uint64_t(corner_x + 1) << 32) + std::uint64_t(corner_y) + m_seed);
        // each xy pair turns into a distance vector from the respective corner;
        // corner zero is our starting corner (bottom left)
        rng.seed(s1);
        corner_0_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y});
        rng.seed(s2);
        corner_1_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x, dist_y - 1});
        rng.seed(s3);
        corner_2_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y - 1});
        rng.seed(s4);
        corner_3_contrib = glm::dot(vector_choice[distribution(rng)], {dist_x - 1, dist_y});
        double u = nonLinearActivationFunction(dist_x);
        double v = nonLinearActivationFunction(dist_y);
        double x_bottom = interpolate(corner_0_contrib, corner_3_contrib, u);
        double x_top = interpolate(corner_1_contrib, corner_2_contrib, u);
        double total_xy = interpolate(x_bottom, x_top, v);
        return total_xy;
    }
};
I then generate an OpenGL texture to display it with, like this:
int width = 1024;
int height = 1024;
unsigned char *temp_texture = new unsigned char[width * height * 4];
double octaves[5] = {2, 4, 8, 16, 32};
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        double d_noise = 0;
        d_noise += temp_1.noise(j / octaves[0], i / octaves[0]);
        d_noise += temp_1.noise(j / octaves[1], i / octaves[1]);
        d_noise += temp_1.noise(j / octaves[2], i / octaves[2]);
        d_noise += temp_1.noise(j / octaves[3], i / octaves[3]);
        d_noise += temp_1.noise(j / octaves[4], i / octaves[4]);
        d_noise /= 5;
        uint8_t noise = static_cast<uint8_t>((d_noise * 128.0) + 128.0);
        temp_texture[j * 4 + (i * width * 4) + 0] = noise;
        temp_texture[j * 4 + (i * width * 4) + 1] = noise;
        temp_texture[j * 4 + (i * width * 4) + 2] = noise;
        temp_texture[j * 4 + (i * width * 4) + 3] = 255;
    }
}
This gives good results:
But gprof is telling me that the Mersenne Twister is taking up 62.4% of my time, and growing with larger textures. Nothing else individually takes anywhere near as much time. While the Mersenne Twister is fast after initialization, the fact that I initialize it every time I use it seems to make it pretty slow.
This initialization is 100% required to make sure that the same x and y generate the same gradient at each integer point (so you need either a hash function or to seed the RNG each time).
I attempted to change the PRNG to both the linear congruential generator and Xorshiftplus, and while both ran orders of magnitude faster, they gave odd results:
[Images omitted: LCG (seeded once, then run 5 times before use); Xorshift+ after one iteration; Xorshift+ after 10,000 iterations.]
I've tried:
Running the generator several times before using its output; this results in slow execution or simply different artifacts.
Using the output of two consecutive runs after the initial seed to seed the PRNG again and using the value afterwards; this made no difference in the result.
What is happening? What can I do to get faster results that are of the same quality as the Mersenne Twister?
OK BIG UPDATE:
I don't know why this works - I know it has something to do with the prime number utilized - but after messing around a bit, it appears that the following works:
Step 1: incorporate the x and y values as seeds separately (and incorporate some other offset value or additional seed value with them; this number should be a prime/non-trivial factor).
Step 2: use those two seed results to seed the generator again, back into the function (so, like geza said, the seeds made were bad).
Step 3: when getting the result, instead of taking it modulo the number of items (4) you are choosing from, or & 3, take the result modulo a prime number first and then apply & 3. I'm not sure if it matters whether the prime is a Mersenne prime.
Here is the result with prime = 257 and Xorshift+ being used! (Note I used 2048 by 2048 for this one; the others were 256 by 256.) A rough sketch of these steps in code follows.
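Roughly, in code (a sketch; the two mixing primes are just ones picked for illustration, and 257 is the prime from above - not a definitive implementation):

#include <cstdint>

std::uint32_t gradientIndex(int x, int y, std::uint64_t seed) {
    // Step 1: fold x and y into two separate seed words via (illustrative) primes,
    // each offset by the additional seed value.
    std::uint64_t s0 = static_cast<std::uint32_t>(x) * 2654435761ull + seed;
    std::uint64_t s1 = static_cast<std::uint32_t>(y) * 2246822519ull + seed;
    // Step 2: re-seed the generator with those words and run one xorshift128+ step
    // (23/18/5 variant); the post-mix word is used, not the raw first output
    // (which would just be s0 + s1).
    std::uint64_t a = s0, b = s1;
    a ^= a << 23;
    std::uint64_t r = (a ^ b ^ (a >> 18) ^ (b >> 5)) + b;
    // Step 3: reduce modulo a prime first, then mask down to the four gradients.
    return static_cast<std::uint32_t>((r % 257) & 3);
}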
LCG is known to be inadequate for your purpose.
Xorshift128+'s results are bad because it needs good seeding - and providing good seeding defeats the whole purpose of using it. I don't recommend this.
However, I recommend using an integer hash. For example, one from Bob's page.
Here's a result using the first hash on that page; it looks OK to me, and it is fast (I think it is much faster than the Mersenne Twister):
Here's the code I've written to generate this:
#include <cmath>
#include <stdio.h>

unsigned int hash(unsigned int a) {
    a = (a ^ 61) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2d;
    a = a ^ (a >> 15);
    return a;
}

unsigned int ivalue(int x, int y) {
    return hash(y << 16 | x) & 0xff;
}

float smooth(float x) {
    return 6*x*x*x*x*x - 15*x*x*x*x + 10*x*x*x;
}

float value(float x, float y) {
    int ix = floor(x);
    int iy = floor(y);
    float fx = smooth(x - ix);
    float fy = smooth(y - iy);
    int v00 = ivalue(iy + 0, ix + 0);
    int v01 = ivalue(iy + 0, ix + 1);
    int v10 = ivalue(iy + 1, ix + 0);
    int v11 = ivalue(iy + 1, ix + 1);
    float v0 = v00 * (1 - fx) + v01 * fx;
    float v1 = v10 * (1 - fx) + v11 * fx;
    return v0 * (1 - fy) + v1 * fy;
}

unsigned char pic[1024 * 1024];

int main() {
    for (int y = 0; y < 1024; y++) {
        for (int x = 0; x < 1024; x++) {
            float v = 0;
            for (int o = 0; o <= 9; o++) {
                v += value(x / 64.0f * (1 << o), y / 64.0f * (1 << o)) / (1 << o);
            }
            int r = rint(v * 0.5f);
            pic[y * 1024 + x] = r;
        }
    }
    FILE *f = fopen("x.pnm", "wb");
    fprintf(f, "P5\n1024 1024\n255\n");
    fwrite(pic, 1, 1024 * 1024, f);
    fclose(f);
}
If you want to understand how a hash function works (or better yet, which properties a good hash has), check out Bob's page, for example this one.
You (unknowingly?) implemented a visualization of PRNG non-random patterns. That looks very cool!
Apart from the Mersenne Twister, none of the PRNGs you tested seem fit for your purpose. As I have not done further tests myself, I can only suggest trying out and measuring further PRNGs.
The randomness of LCGs is known to be sensitive to the choice of their parameters. In particular, the period of an LCG is tied to the m parameter - at most it will be m (your prime factor), and for many values it can be less.
Similarly, careful parameter selection is required to get a long period out of Xorshift PRNGs.
You've noted that some PRNGs give good procedural generation results while others do not. In order to isolate the cause, I would factor out the proc-gen stuff and examine the PRNG output directly. An easy way to visualize the data is to build a greyscale image where each pixel value is a (possibly scaled) random value. For image-based work, I find this an easy way to spot things that may lead to visual artifacts; anything you see with this test is likely to cause issues in your proc-gen output. A minimal sketch of that test follows.
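For example (a sketch that deliberately dumps the weak low byte of an LCG into a PGM file so the artifacts are easy to see; swap in any PRNG you want to inspect):

#include <cstdio>
#include <cstdint>

int main() {
    const int W = 512, H = 512;
    static unsigned char pic[W * H];
    std::uint32_t state = 12345u;
    for (int i = 0; i < W * H; i++) {
        state = state * 1664525u + 1013904223u; // LCG constants from Numerical Recipes
        pic[i] = state & 0xff;                  // low byte: an LCG's weakest bits
    }
    std::FILE *f = std::fopen("prng.pnm", "wb");
    std::fprintf(f, "P5\n%d %d\n255\n", W, H);
    std::fwrite(pic, 1, sizeof pic, f);
    std::fclose(f);
}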
Another option is to try something like the Diehard tests. If the aforementioned image test failed to reveal any problems, I might use this just to be sure my PRNG techniques were trustworthy.
Note that your code seeds the PRNG, then generates one pseudorandom number from it. The reason for the non-randomness in xorshift128+ that you discovered is that xorshift128+ simply adds the two halves of the seed (and uses the result mod 2^64 as the generated number) before changing its state (review its source code; the step is sketched below). This makes that PRNG considerably different from a hash function.
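For reference, the usual xorshift128+ step looks like the following (a sketch after Vigna's published code; the shift constants differ between versions). The return value is formed from the pre-update state, so the very first output is literally the sum of the two seed halves:

#include <cstdint>

std::uint64_t s[2]; // the 128-bit state; on the first call this is the seed itself

std::uint64_t next() {
    std::uint64_t x = s[0];
    const std::uint64_t y = s[1];
    const std::uint64_t result = x + y; // first call: just seed_low + seed_high
    s[0] = y;
    x ^= x << 23;
    s[1] = x ^ y ^ (x >> 18) ^ (y >> 5);
    return result; // the state mixing happens after the output is chosen
}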
What you see is a practical demonstration of PRNG quality. The Mersenne Twister is one of the best PRNGs with good performance; it passes the DIEHARD tests. One should know that generating random numbers is not an easy computational task, so looking for better performance will inevitably cost quality. LCG is known to be the simplest and worst PRNG ever designed, and it clearly shows two-dimensional correlation, as in your picture. The quality of Xorshift generators depends largely on bitness and parameters. They are definitely worse than the Mersenne Twister, but some (xorshift128+) may work well enough to pass the BigCrush battery of TestU01 tests.
In other words, if you are running an important physical-modelling numerical experiment, you had better continue to use the Mersenne Twister, as it is known to be a good trade-off between speed and quality and comes in many standard libraries. In less critical cases you may try the xorshift128+ generator. For the most demanding applications you need a cryptographic-quality PRNG (none of those mentioned here is suitable for cryptographic purposes).

What is the 'roughness constant' of this midpoint displacement algorithm, and how can I modify it?

I've taken code from "Midpoint displacement algorithm example", cleaned it up a bit, and repurposed it to work as a 1D linear terrain generator. Below is my new version of the doMidpoint() method:
public boolean setMidpointDisplacement(int x1, int x2) {
    // Exit recursion if points are next to each other
    if (x2 - x1 < 2) {
        return false;
    }
    final int midX = (x1 + x2) / 2;
    final int dist = x2 - x1;
    final int distHalf = dist / 2;
    final int y1 = map[x1];
    final int y2 = map[x2];
    final int delta = random.nextInt(dist) - distHalf; // +/- half the distance
    final int sum = y1 + y2;
    map[midX] = (sum + delta) / 2; // Sets the midpoint
    // Divide and repeat
    setMidpointDisplacement(x1, midX);
    setMidpointDisplacement(midX, x2);
    return true;
}
The code seems to work well and produces workable terrain (you can see how I've tested it with a rudimentary GUI).
After reading "Generating Random Fractal Terrain" and "Mid Point Displacement Algorithm", my question is:
How can I identify the 'roughness constant' implicitly utilized by this code? And then, how can I change it?
Additionally - and this may or may not be directly related to my major question - I've noticed that the code adds the sum of the y-values to the "delta" (change amount) and divides this by 2, although this is the same as averaging the sum and then adding delta/2. Does this have any bearing on the 'roughness constant'? I'm thinking that I could do
map[midX] = sum/2 + delta/K;
and K would now be representative of the 'roughness constant', but I'm not sure if this is accurate, since it seems to let me control smoothing but doesn't directly control "how much the random number range is reduced each time through the loop", as defined by "Generating Random Fractal Terrain".
Like I said before, I ported the 2D MDP noise generator I found into a 1D version - but I'm fairly certain I did it accurately, so that is not the source of any problems.
How can I identify the 'roughness constant' implicitly utilized by this code?
In the cited article, roughness is the amount by which you diminish the maximum random displacement. As your displacement is random.nextInt(dist), i.e. roughly dist*random.nextDouble(), your dist = x2-x1, and you go from one recursion step to the next with half of this dist, it follows that roughness == 1 (in the cited terminology).
And then, how can I change it?
public boolean setMidpointDisplacement(int x1, int x2, int roughness) {
    // Exit recursion if points are next to each other
    if (x2 - x1 < 2) {
        return false;
    }
    // this is 2^-roughness as per the cited article;
    // you could pass it in precalculated as a param - computing it here
    // only serves to relate it to the cited article
    double factor = 1.0 / (1 << roughness);
    final int midX = (x1 + x2) / 2;
    final int dist = x2 - x1;
    final int distHalf = dist / 2;
    final int y1 = map[x1];
    final int y2 = map[x2];
    // and you apply it here (the cast back to int is necessary)
    final int delta = (int) (factor * (random.nextInt(dist) - distHalf)); // +/- up to factor * half the distance
    final int sum = y1 + y2;
    map[midX] = (sum + delta) / 2; // Sets the midpoint
    // Divide and repeat
    setMidpointDisplacement(x1, midX, roughness);
    setMidpointDisplacement(midX, x2, roughness);
    return true;
}
Additionally, and this may or may not be directly related to my major question, but I've noticed that the code adds the sum of the y-values to the "delta" (change amount) and divides this by 2
Their way has the advantage of doing it with a single division. As you work with ints, the accumulated truncation errors will be smaller with a single division (not to mention it's slightly faster).
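If you would rather control "how much the random number range is reduced each time through the loop" directly, here is a sketch (in C++ for concreteness, assuming a map of doubles) that carries the range along explicitly and shrinks it by 2^-roughness per level, as the cited article describes:

#include <cmath>
#include <random>
#include <vector>

void midpoint(std::vector<double>& map, int x1, int x2, double range,
              double roughness, std::mt19937& rng) {
    if (x2 - x1 < 2) return; // points are next to each other
    const int midX = (x1 + x2) / 2;
    std::uniform_real_distribution<double> d(-range, range);
    map[midX] = (map[x1] + map[x2]) / 2.0 + d(rng);        // average + displacement
    const double next = range * std::pow(2.0, -roughness); // the per-level reduction
    midpoint(map, x1, midX, next, roughness, rng);
    midpoint(map, midX, x2, next, roughness, rng);
}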

Least Squares solution to simultaneous equations

I am trying to fit a transformation from one set of coordinates to another.
x' = R + Px + Qy
y' = S - Qx + Py
Where P, Q, R, S are constants, P = scale*cos(rotation) and Q = scale*sin(rotation).
There is a well-known 'by hand' formula for fitting P, Q, R, S to a set of corresponding points.
But I need to have an error estimate on the fit - so I need a least squares solution.
I've read 'Numerical Recipes', but I'm having trouble working out how to do this for data sets with both x and y in them.
Can anyone point to an example/tutorial/code sample of how to do this?
I'm not too bothered about the language.
But "just use a built-in feature of Matlab/LAPACK/numpy/R" is probably not helpful!
edit:
I have a large set of old (x, y) to new (x, y) pairs to fit to. The problem is overdetermined (more data points than unknowns), so simple matrix inversion isn't enough - and as I said, I really need the error on the fit.
The following code should do the trick. I used the following formula for the residuals:
residual[i] = (computed_x[i] - actual_x[i])^2
            + (computed_y[i] - actual_y[i])^2
And then derived the least-squares formulae based on the general procedure described at Wolfram's MathWorld.
I tested this algorithm in Excel, and it performs as expected. I used a collection of ten random points which were then rotated, translated and scaled by a randomly generated transformation matrix.
With no random noise applied to the output data, this program produces four parameters (P, Q, R, and S) which are identical to the input parameters, and an rSquared value of zero.
As more and more random noise is applied to the output points, the constants start to drift away from the correct values, and the rSquared value increases accordingly.
Here is the code:
// test data
const int N = 1000;
float oldPoints_x[N] = { ... };
float oldPoints_y[N] = { ... };
float newPoints_x[N] = { ... };
float newPoints_y[N] = { ... };

// compute various sums and sums of products
// across the entire set of test data
float Ex = Sum(oldPoints_x, N);
float Ey = Sum(oldPoints_y, N);
float Exn = Sum(newPoints_x, N);
float Eyn = Sum(newPoints_y, N);
float Ex2 = SumProduct(oldPoints_x, oldPoints_x, N);
float Ey2 = SumProduct(oldPoints_y, oldPoints_y, N);
float Exxn = SumProduct(oldPoints_x, newPoints_x, N);
float Exyn = SumProduct(oldPoints_x, newPoints_y, N);
float Eyxn = SumProduct(oldPoints_y, newPoints_x, N);
float Eyyn = SumProduct(oldPoints_y, newPoints_y, N);

// compute the transformation constants
// using least-squares regression
float divisor = Ex*Ex + Ey*Ey - N*(Ex2 + Ey2);
float P = (Exn*Ex + Eyn*Ey - N*(Exxn + Eyyn))/divisor;
float Q = (Exn*Ey + Eyn*Ex + N*(Exyn - Eyxn))/divisor;
float R = (Exn - P*Ex - Q*Ey)/N;
float S = (Eyn - P*Ey + Q*Ex)/N;

// compute the rSquared error value
// low values represent a good fit
float rSquared = 0;
float x;
float y;
for (int i = 0; i < N; i++)
{
    x = R + P*oldPoints_x[i] + Q*oldPoints_y[i];
    y = S - Q*oldPoints_x[i] + P*oldPoints_y[i];
    // note: ^ is XOR in C/C++, so square explicitly
    rSquared += (x - newPoints_x[i]) * (x - newPoints_x[i]);
    rSquared += (y - newPoints_y[i]) * (y - newPoints_y[i]);
}
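The Sum and SumProduct helpers aren't shown above; plausible implementations (my assumption, since the originals weren't posted) would be:

float Sum(const float *values, int n) {
    float total = 0;
    for (int i = 0; i < n; i++)
        total += values[i];
    return total;
}

float SumProduct(const float *a, const float *b, int n) {
    float total = 0;
    for (int i = 0; i < n; i++)
        total += a[i] * b[i];
    return total;
}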
To find P, Q, R, and S, you can use least squares. I think the confusing thing is that the usual description of least squares uses x and y, but they don't match the x and y in your problem. You just need to translate your problem carefully into the least-squares framework. In your case the independent variables are the untransformed coordinates x and y, the dependent variables are the transformed coordinates x' and y', and the adjustable parameters are P, Q, R, and S. (If this isn't clear enough, let me know and I'll post more detail.)
Once you've found P, Q, R, and S, then scale = sqrt(P^2 + Q^2), and you can find the rotation from sin(rotation) = Q/scale and cos(rotation) = P/scale.
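In code, that recovery step might look like this (a small sketch; atan2 resolves the quadrant directly, which is equivalent to using the sin/cos pair above):

#include <cmath>

void recoverScaleAndRotation(float P, float Q, float &scale, float &rotation) {
    scale    = std::sqrt(P * P + Q * Q);
    rotation = std::atan2(Q, P); // angle whose sin is Q/scale and cos is P/scale
}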
You can use the levmar program to calculate this. It's tested and integrated into multiple products, including mine. It's licensed under the GPL, but if this is a non-open-source project, the author will change the license for you (for a fee).
Define the 3x3 matrix T(P,Q,R,S) such that (x', y', 1) = T (x, y, 1). Then compute

A = sum_i |T (x_i, y_i, 1) - (x'_i, y'_i, 1)|^2

and minimize A against (P, Q, R, S).
Coding this yourself is a medium-to-large-sized project unless you can guarantee that the data are well conditioned, especially when you want good error estimates out of the procedure. You're probably best off using an existing minimizer that supports error estimates.
Particle physics types would use MINUIT, either directly from CERNLIB (with the coding most easily done in Fortran 77) or from ROOT (with the coding in C++, or it should be accessible through the Python bindings). But that is a big installation if you don't already have one of these tools.
I'm sure that others can suggest other minimizers.
Thanks eJames, that's almost exactly what I have. I coded it from an old army surveying manual that was based on an earlier "Instructions to Surveyors" note that must be 100 years old! (It uses N and E for North and East rather than x/y.)
The goodness of fit parameter will be very useful - I can interactively throw out selected points if they make the fit worse.
void FindTransformation(vector<Point2D> known, vector<Point2D> unknown)
{
    double sum_e = 0, sum_n = 0, sum_E = 0, sum_N = 0;
    int n = 0;
    // sums
    for (unsigned int ii = 0; ii < known.size(); ii++) {
        sum_e += unknown[ii].x;
        sum_n += unknown[ii].y;
        sum_E += known[ii].x;
        sum_N += known[ii].y;
        ++n;
    }
    // mean position
    double me = sum_e/(double)n;
    double mn = sum_n/(double)n;
    double mE = sum_E/(double)n;
    double mN = sum_N/(double)n;
    // differences
    double sum_deE = 0, sum_dnN = 0, sum_dee = 0, sum_dnn = 0;
    double sum_dnE = 0, sum_deN = 0;
    for (unsigned int ii = 0; ii < known.size(); ii++) {
        double de = unknown[ii].x - me;
        double dn = unknown[ii].y - mn;
        // for P
        sum_deE += (de*known[ii].x);
        sum_dnN += (dn*known[ii].y);
        sum_dee += (de*unknown[ii].x);
        sum_dnn += (dn*unknown[ii].y);
        // for Q
        sum_dnE += (dn*known[ii].x);
        sum_deN += (de*known[ii].y);
    }
    double P = (sum_deE + sum_dnN) / (sum_dee + sum_dnn);
    double Q = (sum_dnE - sum_deN) / (sum_dee + sum_dnn);
    double R = mE - (P*me) - (Q*mn);
    double S = mN + (Q*me) - (P*mn);
    // ... return or store P, Q, R, S
}
One issue is that numeric work like this is often tricky. Even when the algorithms are straightforward, problems often show up in the actual computation.
For that reason, if there is a system you can get easily that has a built-in feature, it might be best to use that.

Calculate the Hilbert value of a point for use in a Hilbert R-Tree?

I have an application where a Hilbert R-Tree (wikipedia) (citeseer) would seem to be an appropriate data structure. Specifically, it requires reasonably fast spatial queries over a data set that will experience a lot of updates.
However, as far as I can see, none of the descriptions of the algorithms for this data structure even mention how to actually calculate the requisite Hilbert value, which is the distance along the Hilbert curve to the point.
So any suggestions for how to go about calculating this?
Fun question!
I did a bit of googling, and the good news is, I've found an implementation of Hilbert Value.
The potentially bad news is, it's in Haskell...
http://www.serpentine.com/blog/2007/01/11/two-dimensional-spatial-hashing-with-space-filling-curves/
It also proposes a Lebesgue distance metric you might be able to compute more easily.
Below is my java code adapted from C code in the paper "Encoding and decoding the Hilbert order" by Xian Lu and Gunther Schrack, published in Software: Practice and Experience Vol. 26 pp 1335-46 (1996).
Hope this helps. Improvements welcome!
Michael
/**
 * Find the Hilbert order (= vertex index) for the given grid cell
 * coordinates.
 * @param x cell column (from 0)
 * @param y cell row (from 0)
 * @param r resolution of Hilbert curve (grid will have Math.pow(2,r)
 *          rows and cols)
 * @return Hilbert order
 */
public static int encode(int x, int y, int r) {
    int mask = (1 << r) - 1;
    int hodd = 0;
    int heven = x ^ y;
    int notx = ~x & mask;
    int noty = ~y & mask;
    int temp = notx ^ y;
    int v0 = 0, v1 = 0;
    for (int k = 1; k < r; k++) {
        v1 = ((v1 & heven) | ((v0 ^ noty) & temp)) >> 1;
        v0 = ((v0 & (v1 ^ notx)) | (~v0 & (v1 ^ noty))) >> 1;
    }
    hodd = (~v0 & (v1 ^ x)) | (v0 & (v1 ^ noty));
    return interleaveBits(hodd, heven);
}
/**
 * Interleave the bits from two input integer values.
 * @param odd integer holding bit values for odd bit positions
 * @param even integer holding bit values for even bit positions
 * @return the integer that results from interleaving the input bits
 *
 * @todo: I'm sure there's a more elegant way of doing this!
 */
private static int interleaveBits(int odd, int even) {
    int val = 0;
    // Replaced this line with the improved code provided by Tuska
    // int n = Math.max(Integer.highestOneBit(odd), Integer.highestOneBit(even));
    int max = Math.max(odd, even);
    int n = 0;
    while (max > 0) {
        n++;
        max >>= 1;
    }
    for (int i = 0; i < n; i++) {
        int bitMask = 1 << i;
        int a = (even & bitMask) > 0 ? (1 << (2 * i)) : 0;
        int b = (odd & bitMask) > 0 ? (1 << (2 * i + 1)) : 0;
        val += a + b;
    }
    return val;
}
See uzaygezen.
The code and Java code above are fine for 2D data points. But for higher dimensions you may need to look at Jonathan Lawder's paper: J.K. Lawder, Calculation of Mappings Between One and n-dimensional Values Using the Hilbert Space-filling Curve.
I figured out a slightly more efficient way to interleave bits. It can be found at the Stanford Graphics website. I included a version that I created that can interleave two 32-bit integers into one 64-bit long.
public static long spreadBits32(int y) {
    long[] B = new long[] {
        0x5555555555555555L,
        0x3333333333333333L,
        0x0f0f0f0f0f0f0f0fL,
        0x00ff00ff00ff00ffL,
        0x0000ffff0000ffffL,
        0x00000000ffffffffL
    };
    int[] S = new int[] { 1, 2, 4, 8, 16, 32 };
    long x = y;
    x = (x | (x << S[5])) & B[5];
    x = (x | (x << S[4])) & B[4];
    x = (x | (x << S[3])) & B[3];
    x = (x | (x << S[2])) & B[2];
    x = (x | (x << S[1])) & B[1];
    x = (x | (x << S[0])) & B[0];
    return x;
}

public static long interleave64(int x, int y) {
    return spreadBits32(x) | (spreadBits32(y) << 1);
}
Obviously, the B and S local variables should be class constants, but it was left this way for simplicity.
Michael,
thanks for your Java code! I tested it and it seems to work fine, but I noticed that the bit-interleaving function overflows at recursion level 7 (at least in my tests, where I used long values), because the "n" value is calculated using the highestOneBit() function, which returns the value and not the position of the highest one bit - so the loop does unnecessarily many interleavings.
I just changed it to the following snippet, and after that it worked fine.
int max = Math.max(odd, even);
int n = 0;
while (max > 0) {
    n++;
    max >>= 1;
}
If you need a spatial index with fast delete/insert capabilities, have a look at the PH-tree. It is partly based on quadtrees but is faster and more space-efficient. Internally it uses a Z-curve, which has slightly worse spatial properties than an H-curve but is much easier to calculate.
Paper: http://www.globis.ethz.ch/script/publication/download?docid=699
Java implementation: http://globis.ethz.ch/files/2014/11/ph-tree-2014-11-10.zip
Another option is the X-tree, which is also available here:
https://code.google.com/p/xxl/
Suggestion: a good, simple, efficient data structure for spatial queries is a multidimensional binary tree.
In a traditional binary tree, there is one "discriminant"; the value that's used to determine whether you take the left branch or the right branch. This can be considered to be the one-dimensional case.
In a multidimensional binary tree, you have multiple discriminants; consecutive levels use different discriminants. For example, for two-dimensional spatial data, you could use the X and Y coordinates as discriminants. Consecutive levels would use X, Y, X, Y...
For spatial queries (for example finding all nodes within a rectangle) you do a depth-first search of the tree starting at the root, and you use the discriminant at each level to avoid searching down branches that contain no nodes in the given rectangle.
This allows you to potentially cut the search space in half at each level, making it very efficient for finding small regions in a massive data set. (BTW, this data structure is also useful for partial-match queries, i.e. queries that omit one or more discriminants. You just search down both branches at levels with an omitted discriminant.)
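As a sketch of the 2-D case (an illustrative node layout, not code from the cited paper):

#include <memory>
#include <vector>

struct Node {
    double p[2];                 // the point (x, y)
    std::unique_ptr<Node> left, right;
};

// Depth-first rectangle query: even depths discriminate on x, odd depths on y,
// and whole subtrees are skipped when the rectangle cannot reach them.
void rangeQuery(const Node *n, const double lo[2], const double hi[2],
                int depth, std::vector<const Node*> &out) {
    if (!n) return;
    const int axis = depth % 2;
    if (lo[0] <= n->p[0] && n->p[0] <= hi[0] &&
        lo[1] <= n->p[1] && n->p[1] <= hi[1])
        out.push_back(n);
    if (lo[axis] <= n->p[axis]) // rectangle reaches into the left half-plane
        rangeQuery(n->left.get(), lo, hi, depth + 1, out);
    if (hi[axis] >= n->p[axis]) // rectangle reaches into the right half-plane
        rangeQuery(n->right.get(), lo, hi, depth + 1, out);
}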
A good paper on this data structure: http://portal.acm.org/citation.cfm?id=361007
This article has good diagrams and algorithm descriptions: http://en.wikipedia.org/wiki/Kd-tree
