Fast noise/PRNG from 3d input - performance

For a noise shader I'm looking for a pseudo-random number algorithm that takes a 3D vector argument, i.e., for every integer vector it returns a value in [0,1].
It should be as fast as possible, without producing visual artifacts, and it should give the same results on every GPU.
Two variants (pseudo code) I found are
rand1(vec3(x, y, z)) {
    return xorshift32(x ^ xorshift32(y ^ xorshift32(z)));
}
which already uses 20 arithmetic operations, and the result still has to be cast and normalized, and
rand2(vec3 v) {
    return fract(sin(dot(v, vec3(12.9898, 78.233, ?))) * 43758.5453);
}
which might be faster but uses sin, causing precision problems and different results on different GPUs.
Do you know any other algorithms requiring fewer arithmetic operations?
Thanks in advance.
Edit: Another standard PRNG is XORWOW, which in C (with the state made static so it persists across calls) is

#include <stdint.h>

uint32_t xorwow(void) {
    /* state must persist across calls; these are the customary seed values */
    static uint32_t x = 123456789, y = 362436069, z = 521288629,
                    w = 88675123, v = 5783321, d = 6615241;
    uint32_t t = x ^ (x >> 2);
    x = y; y = z; z = w; w = v;
    v = (v ^ (v << 4)) ^ (t ^ (t << 1));
    return (d += 362437) + v;
}
Can we rewrite it to fit in our context?
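One stateless alternative, rather than literally rewriting XORWOW: since the coordinates themselves are the seed, a pure integer mixing function can replace the stateful generator. Below is a sketch in C that uses MurmurHash3's 32-bit finalizer (fmix32) as the mixer; the way the three coordinates are combined and the normalization step are my own assumptions, so the result should be checked for visual artifacts before relying on it.

#include <stdint.h>

/* MurmurHash3's 32-bit finalizer: a cheap full-avalanche integer mixer. */
static uint32_t fmix32(uint32_t h) {
    h ^= h >> 16;  h *= 0x85ebca6bu;
    h ^= h >> 13;  h *= 0xc2b2ae35u;
    h ^= h >> 16;
    return h;
}

/* Hash three integer coordinates to a float in [0,1).
   Using the top 24 bits makes the int-to-float conversion exact,
   so the result is bit-identical on every IEEE-754 GPU. */
static float rand3(int32_t x, int32_t y, int32_t z) {
    uint32_t h = fmix32((uint32_t)x ^ fmix32((uint32_t)y ^ fmix32((uint32_t)z)));
    return (float)(h >> 8) * (1.0f / 16777216.0f);  /* exact scale by 2^-24 */
}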


map range of IEEE 32bit float [1:2) to some arbitrary [a:b)

Back story: uniform PRNG with arbitrary endpoints
I've got a fast uniform pseudo-random number generator that creates uniform float32 numbers in the range [1:2), i.e., u satisfies 1 <= u <= 2 - eps. Unfortunately, mapping the endpoints of [1:2) to those of an arbitrary range [a:b) is non-trivial in floating-point math. I'd like to exactly match the endpoints with a simple affine calculation.
Formally stated
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps
1 -> a and nextlower(2) -> nextlower(b)
where nextlower(q) is the next lower FP representable number (e.g. in C++ std::nextafter(float(q),float(q-1)))
What I've tried
The simple mapping f(x,a,b) = (x-1)*(b-a) + a always achieves the f(1) condition but sometimes fails the f(2) condition due to floating-point rounding: at x = nextlower(2) the factor x - 1 is slightly less than one, and after rounding, (x-1)*(b-a) + a can land on b itself or fall short of nextlower(b).
I've tried replacing the 1 with a free design parameter to cancel FP errors in the spirit of Kahan summation.
i.e. with
f(x,c0,c1,c2) = (x-c0)*c1 + c2
one mathematical solution is c0=1,c1=(b-a),c2=a (the simple mapping above),
but the extra parameter lets me play around with the constants c0, c1, c2 to match the endpoints. I'm not sure I understand the principles behind Kahan summation well enough to apply them to determine the parameters, or even to be confident a solution exists. It feels like I'm bumping around in the dark where others might've found the light already.
Aside: I'm fine assuming the following
a < b
both a and b are far from zero, i.e. OK to ignore subnormals
a and b are far enough apart (measured in representable FP values) to mitigate non-uniform quantization and avoid degenerate cases
Update
I'm using a modified form of Chux's answer to avoid the division.
While I'm not 100% certain my refactoring kept all the magic, it does still work in all my test cases.
float lerp12(float x, float a, float b)
{
    const float scale = 1.0000001f;
    // scale = 1/(nextlower(2) - 1);
    const float ascale = a*scale;
    const float bscale = nextlower(b)*scale;
    return (nextlower(2) - x)*ascale + (x - 1.0f)*bscale;
}
Note that only the last line (5 floating-point operations) depends on x, so the other lines can be reused as long as (a, b) remain the same.
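For instance, the split might look like this (a sketch under the same assumptions as the snippet above; Lerp12, lerp12_setup and lerp12_eval are hypothetical names, and nextlowerf() is the float version of nextlower() from the question):

#include <math.h>

static float nextlowerf(float q) { return nextafterf(q, q - 1.0f); }

/* Per-(a,b) setup: everything that does not depend on x. */
typedef struct { float ascale, bscale; } Lerp12;

static Lerp12 lerp12_setup(float a, float b) {
    const float scale = 1.0000001f;   /* ~ 1/(nextlower(2) - 1) */
    Lerp12 s = { a * scale, nextlowerf(b) * scale };
    return s;
}

/* Per-x evaluation: the 5-FLOP last line of lerp12(). */
static float lerp12_eval(Lerp12 s, float x) {
    return (nextlowerf(2.0f) - x) * s.ascale + (x - 1.0f) * s.bscale;
}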
OP's goal
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps 1 -> a and nextlower(2) -> nextlower(b)
This differs slightly from "map range of IEEE 32bit float [1:2) to some arbitrary [a:b)".
General case
Map x0 to y0, x1 to y1, and various x in between to y:
m = (y1 - y0)/(x1 - x0);
y = m*(x - x0) + y0;
OP's case
// x0 = 1.0f;
// x1 = nextafterf(2.0f, 1.0f);
// y0 = a;
// y1 = nextafterf(b, a);
#include <math.h> // for nextafterf()
float x = random_number_1_to_almost_2();
float m = (nextafterf(b, a) - a)/(nextafterf(2.0f, 1.0f) - 1.0f);
float y = m*(x - 1.0f) + a;
nextafterf(2.0f, 1.0f) - 1.0f, x - 1.0f and nextafterf(b, a) are exact, incurring no calculation error.
nextafterf(2.0f, 1.0f) - 1.0f is a value a little less than 1.0f.
Recommendation
Other re-formulations are possible, with better symmetry and numerical stability at the end-points.
float x = random_number_1_to_almost_2();
float afactor = nextafterf(2.0f, 1.0f) - x; // exact
float bfactor = x - 1.0f; // exact
float xwidth = nextafterf(2.0f, 1.0f) - 1.0f; // exact
// Do not re-order next line of code, perform 2 divisions
float y = (afactor/xwidth)*a + (bfactor/xwidth)*nextafterf(b, a);
Notice afactor/xwidth and bfactor/xwidth are both exactly 0.0 or 1.0 at the end-points, thus meeting "maps 1 -> a and nextlower(2) -> nextlower(b)". Extended precision not needed.
OP's (x-c0)*c1 + c2 has trouble as it divides (x-c0)*c1 by (2.0 - 1.0) or 1.0 (implied), when it should divide by nextafterf(2.0f, 1.0f) - 1.0f.
Simple lerping based on fused multiply-add can reliably hit the endpoints for interpolation factors 0 and 1. For x in [1, 2) the interpolation factor x - 1 does not reach unity; this can be fixed by stretching it slightly, multiplying x - 1 by 2.0f / nextlower(2.0f). The upper endpoint then also needs to be adjusted, to nextlower(b). For the C code below I have used the definition of nextlower() provided in the question, which may not be what the asker desires, since for floating-point q sufficiently large in magnitude, q == (q - 1).
The asker stated in comments that it is understood that this kind of mapping is not going to result in an exactly uniform distribution of the pseudo-random numbers in the interval [a, b), only approximately so, and that pathological mappings may occur when a and b are extremely close together. I have not mathematically proved that the implementation of map() below guarantees the desired behavior, but it seems to do so for a large number of random test cases.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

float nextlowerf (float q)
{
    return nextafterf (q, q - 1);
}

float map (float a, float b, float x)
{
    float t = (x - 1.0f) * (2.0f / nextlowerf (2.0f));
    return fmaf (t, nextlowerf (b), fmaf (-t, a, a));
}

float uint32_as_float (uint32_t a)
{
    float r;
    memcpy (&r, &a, sizeof(r));
    return r;
}

// George Marsaglia's KISS PRNG, period 2**123. Newsgroup sci.math, 21 Jan 1999
// Bug fix: Greg Rose, "KISS: A Bit Too Simple" http://eprint.iacr.org/2011/007
static uint32_t kiss_z=362436069, kiss_w=521288629;
static uint32_t kiss_jsr=123456789, kiss_jcong=380116160;
#define znew (kiss_z=36969*(kiss_z&65535)+(kiss_z>>16))
#define wnew (kiss_w=18000*(kiss_w&65535)+(kiss_w>>16))
#define MWC ((znew<<16)+wnew )
#define SHR3 (kiss_jsr^=(kiss_jsr<<13),kiss_jsr^=(kiss_jsr>>17), \
              kiss_jsr^=(kiss_jsr<<5))
#define CONG (kiss_jcong=69069*kiss_jcong+1234567)
#define KISS ((MWC^CONG)+SHR3)

int main (void)
{
    float a, b, x, r;
    float FP32_MIN_NORM = 0x1.000000p-126f;
    float FP32_MAX_NORM = 0x1.fffffep+127f;
    do {
        do {
            a = uint32_as_float (KISS);
        } while ((fabsf (a) < FP32_MIN_NORM) || (fabsf (a) > FP32_MAX_NORM) || isnan (a));
        do {
            b = uint32_as_float (KISS);
        } while ((fabsf (b) < FP32_MIN_NORM) || (fabsf (b) > FP32_MAX_NORM) || isnan (b) || (b < a));
        x = 1.0f;
        r = map (a, b, x);
        if (r != a) {
            printf ("lower bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
        x = nextlowerf (2.0f);
        r = map (a, b, x);
        if (r != nextlowerf (b)) {
            printf ("upper bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
    } while (1);
    return EXIT_SUCCESS;
}

Solving a "nearly" Hermitian linear system

So I need to solve the linear system (A + i*mu*I) x = b, where A is a dense Hermitian matrix (6x6, complex), mu is a real scalar, and I is the identity matrix.
Obviously, if mu = 0, I should just use Cholesky and be done with it. With non-zero mu, though, the matrix ceases to be Hermitian and Cholesky fails.
Possible solutions:
Solve via the normal equations: multiply by the conjugate transpose and use Cholesky
Solve directly using LU decomposition
This is in a time-critical performance routine, where I need the most efficient method. Any thoughts on the optimal approach, or is there a specific method for solving the above shifted Hermitian system?
This is to be deployed in a CUDA kernel, where I'll be solving many linear systems in parallel, e.g., one per thread. This means that I need a solution that minimizes thread divergence. Given the small system size, pivoting can be ignored without too much issue: this removes a possible source of thread divergence. I've already implemented an in-place Cholesky normal method, and while it's working ok, the performance isn't great in double precision.
I can't vouch for the stability of the method below, but if your matrix is reasonably well conditioned, it might be worth a try.
We want to solve
A*X = B
If we pick out the first row and column, say
A = ( a  y  )
    ( z  A_ )
X = ( x  )
    ( X_ )
B = ( b  )
    ( B_ )
The requirement is
a*x + y*X_ = b
z*x + A_*X_ = B_
so
x = (b - y*X_)/a
(A_ - z*y/a)*X_ = B_ - (b/a)*z
The solution goes in two stages. First use the second equation to transform A and B, then use the first to recover x.
In C:
static void nhsol( int dim, complx* A, complx* B, complx* X)
{
    int i, j, k;
    complx a, fb, fa;
    complx* z;
    complx* acol;
    // forward elimination: update A and B (A is stored column-major)
    for( i=0; i<dim; ++i)
    {
        z = A + i*dim;   // column i
        a = z[i];        // pivot; no pivoting, as discussed above
        // update B
        fb = B[i]/a;
        for( j=i+1; j<dim; ++j)
        {
            B[j] -= fb*z[j];
        }
        // update A
        for( k=i+1; k<dim; ++k)
        {
            acol = A + k*dim;
            fa = acol[i]/a;
            for( j=i+1; j<dim; ++j)
            {
                acol[j] -= fa*z[j];
            }
        }
    }
    // back substitution: compute X
    i = dim-1;
    X[i] = B[i] / A[i+dim*i];
    while( --i>=0)
    {
        complx s = B[i];
        for( j=i+1; j<dim; ++j)
        {
            s -= A[i+j*dim]*X[j];
        }
        X[i] = s/A[i+i*dim];
    }
}
where
typedef _Complex double complx;
If code space is not at a premium, it might be worth unrolling the loops. Personally I would do this by writing a program whose sole job was to write the code.
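For concreteness, here is a hypothetical driver for the shifted system from the question, assuming the column-major layout used by nhsol() and C99 complex arithmetic (solve_shifted and DIM are illustrative names):

#include <complex.h>

enum { DIM = 6 };

/* Form A = H + i*mu*I from a Hermitian H (column-major, DIM x DIM),
   copy the right-hand side, and solve A*X = B.
   I is the imaginary unit from <complex.h>. */
static void solve_shifted(const complx *H, double mu,
                          const complx *B0, complx *X)
{
    complx A[DIM * DIM], B[DIM];
    for (int k = 0; k < DIM; ++k) {
        for (int j = 0; j < DIM; ++j)
            A[j + k * DIM] = H[j + k * DIM] + ((j == k) ? mu * I : 0.0);
        B[k] = B0[k];
    }
    nhsol(DIM, A, B, X);   /* nhsol() overwrites A and B */
}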

Expand Right Bitwise Algorithm

Originally this post requested an inverse sheep-and-goats operation, but I realized that was more than I really needed, so I edited the title; I only need an expand-right algorithm, which is simpler. The example that I described below is still relevant.
Original Post:
I'm trying to figure out how to do either an inverse sheep-and-goats operation or, even better, an expand-right-flip.
According to Hacker's Delight, a sheep-and-goats operation can be represented by:
SAG(x, m) = compress_left(x, m) | compress(x, ~m)
According to this site, the inverse can be found by:
INV_SAG(x, m, sw) = expand_left(x, ~m, sw) | expand_right(x, m, sw)
However, I can't find any code for the expand_left and expand_right functions. They are, of course, the inverse functions for compress, but compress is kind of hard to understand in itself.
Example:
To better explain what I'm looking for, consider a set of 8 bits like:
0000abcd
The variables a, b, c and d may be either ones or zeros. In addition, there is a mask which repositions the bits. So for example, if the mask were 01100101, the resulting bits would be repositioned as follows:
0ab00c0d
This can be done with an inverse sheep-and-goats operation. However, according to this section of the site mentioned above, there is a more efficient way, which he refers to as the expand-right-flip. Looking at his site, I was unable to figure out how that can be done.
Here's the expand_right from Hacker's Delight; the book just calls it expand, but it is the right version.
unsigned expand(unsigned x, unsigned m) {
    unsigned m0, mk, mp, mv, t;
    unsigned array[5];
    int i;

    m0 = m;          // Save original mask.
    mk = ~m << 1;    // We will count 0's to right.
    for (i = 0; i < 5; i++) {
        mp = mk ^ (mk << 1);              // Parallel suffix.
        mp = mp ^ (mp << 2);
        mp = mp ^ (mp << 4);
        mp = mp ^ (mp << 8);
        mp = mp ^ (mp << 16);
        mv = mp & m;                      // Bits to move.
        array[i] = mv;
        m = (m ^ mv) | (mv >> (1 << i));  // Compress m.
        mk = mk & ~mp;
    }
    for (i = 4; i >= 0; i--) {
        mv = array[i];
        t = x << (1 << i);
        x = (x & ~mv) | (t & mv);
    }
    return x & m0;   // Clear out extraneous bits.
}
You can use expand_left(x, m) == expand_right(x >> (32 - popcnt(m)), m) to make the left version, but that's probably not the best way.
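As a sanity check, here is a small driver for the example from the question: mask 01100101 scatters the low bits abcd into 0ab00c0d. With a=1, b=0, c=1, d=1 the expected output is 01000101 (0x45).

#include <stdio.h>

int main(void) {
    unsigned x = 0x0B;   /* 0000abcd with a=1, b=0, c=1, d=1 */
    unsigned m = 0x65;   /* mask 01100101 */
    /* expand() deposits the low popcount(m) bits of x into the
       positions selected by m, lowest bit first. */
    printf("%02x\n", expand(x, m));   /* prints 45 */
    return 0;
}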

Least Squares solution to simultaneous equations

I am trying to fit a transformation from one set of coordinates to another.
x' = R + Px + Qy
y' = S - Qx + Py
where P, Q, R, S are constants, P = scale*cos(rotation), Q = scale*sin(rotation).
There is a well-known 'by hand' formula for fitting P, Q, R, S to a set of corresponding points.
But I need to have an error estimate on the fit - so I need a least squares solution.
I've read 'Numerical Recipes' but I'm having trouble working out how to do this for data sets with both x and y in them.
Can anyone point to an example/tutorial/code sample of how to do this?
Not too bothered about the language.
But "just use a built-in feature of Matlab/LAPACK/numpy/R" is probably not helpful!
Edit:
I have a large set of old(x,y), new(x,y) pairs to fit to. The problem is overdetermined (more data points than unknowns), so simple matrix inversion isn't enough; and as I said, I really need the error on the fit.
The following code should do the trick. I used the following formula for the residuals:
residual[i] = (computed_x[i] - actual_x[i])^2 + (computed_y[i] - actual_y[i])^2
And then derived the least-squares formulae based on the general procedure described at Wolfram's MathWorld.
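In other words, the constants below minimize the total squared residual. As a sketch of the setup, the objective is

E(P,Q,R,S) = \sum_i [ (R + P*x_i + Q*y_i - x'_i)^2 + (S - Q*x_i + P*y_i - y'_i)^2 ]

and setting the partial derivatives dE/dP = dE/dQ = dE/dR = dE/dS = 0 gives four linear normal equations in P, Q, R, and S, whose solution is the set of formulas in the code.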
I tested out this algorithm in Excel and it performs as expected. I used a collection of ten random points, which were then rotated, translated, and scaled by a randomly generated transformation matrix.
With no random noise applied to the output data, this program produces four parameters (P, Q, R, and S) which are identical to the input parameters, and an rSquared value of zero.
As more and more random noise is applied to the output points, the constants start to drift away from the correct values, and the rSquared value increases accordingly.
Here is the code:
// test data
const int N = 1000;
float oldPoints_x[N] = { ... };
float oldPoints_y[N] = { ... };
float newPoints_x[N] = { ... };
float newPoints_y[N] = { ... };

// compute various sums and sums of products
// across the entire set of test data
// (Sum and SumProduct are assumed helpers returning
//  sum(a[i]) and sum(a[i]*b[i]) respectively)
float Ex = Sum(oldPoints_x, N);
float Ey = Sum(oldPoints_y, N);
float Exn = Sum(newPoints_x, N);
float Eyn = Sum(newPoints_y, N);
float Ex2 = SumProduct(oldPoints_x, oldPoints_x, N);
float Ey2 = SumProduct(oldPoints_y, oldPoints_y, N);
float Exxn = SumProduct(oldPoints_x, newPoints_x, N);
float Exyn = SumProduct(oldPoints_x, newPoints_y, N);
float Eyxn = SumProduct(oldPoints_y, newPoints_x, N);
float Eyyn = SumProduct(oldPoints_y, newPoints_y, N);

// compute the transformation constants
// using least-squares regression
float divisor = Ex*Ex + Ey*Ey - N*(Ex2 + Ey2);
float P = (Exn*Ex + Eyn*Ey - N*(Exxn + Eyyn))/divisor;
float Q = (Exn*Ey + Eyn*Ex + N*(Exyn - Eyxn))/divisor;
float R = (Exn - P*Ex - Q*Ey)/N;
float S = (Eyn - P*Ey + Q*Ex)/N;

// compute the rSquared error value
// low values represent a good fit
// (note: ^ is XOR in C, so the squares must be written as products)
float rSquared = 0;
for (int i = 0; i < N; i++)
{
    float x = R + P*oldPoints_x[i] + Q*oldPoints_y[i];
    float y = S - Q*oldPoints_x[i] + P*oldPoints_y[i];
    float dx = x - newPoints_x[i];
    float dy = y - newPoints_y[i];
    rSquared += dx*dx + dy*dy;
}
To find P, Q, R, and S, you can use least squares. I think the confusing thing is that the usual description of least squares uses x and y, but they don't match the x and y in your problem. You just need to translate your problem carefully into the least-squares framework. In your case the independent variables are the untransformed coordinates x and y, the dependent variables are the transformed coordinates x' and y', and the adjustable parameters are P, Q, R, and S. (If this isn't clear enough, let me know and I'll post more detail.)
Once you've found P, Q, R, and S, then scale = sqrt(P^2 + Q^2), and you can then find the rotation from sin(rotation) = Q/scale and cos(rotation) = P/scale.
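In C, for instance (atan2 recovers the angle more robustly than separate asin/acos calls):

#include <math.h>

double scale    = hypot(P, Q);    /* sqrt(P*P + Q*Q) */
double rotation = atan2(Q, P);    /* since Q/scale = sin, P/scale = cos */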
You can use the levmar program to calculate this. It's tested and integrated into multiple products, including mine. It's licensed under the GPL, but if this is a non-open-source project, he will change the license for you (for a fee).
Define the 3x3 matrix T(P,Q,R,S) such that (x',y',1) = T (x,y,1). Then compute
A = \sum_i |(T (x_i,y_i,1)) - (x'_i,y'_i,1)|^2
and minimize A against (P,Q,R,S).
Coding this yourself is a medium-to-large-sized project unless you can guarantee that the data are well conditioned, especially when you want good error estimates out of the procedure. You're probably best off using an existing minimizer that supports error estimates.
Particle physics types would use MINUIT, either directly from CERNLIB (with the coding most easily done in Fortran 77) or from ROOT (with the coding in C++; it should also be accessible through the Python bindings). But that is a big installation if you don't have one of these tools already.
I'm sure that others can suggest other minimizers.
Thanks eJames, that's almost exactly what I have. I coded it from an old army surveying manual that was based on an earlier "Instructions to Surveyors" note that must be 100 years old! (It uses N and E for North and East rather than x/y.)
The goodness-of-fit parameter will be very useful; I can interactively throw out selected points if they make the fit worse.
// assumes #include <vector>, using std::vector;
// and struct Point2D { double x, y; };
// Transform is a helper struct added so the function can return its results.
struct Transform { double P, Q, R, S; };

Transform FindTransformation(const vector<Point2D>& known, const vector<Point2D>& unknown)
{
    // sums
    double sum_e = 0, sum_n = 0, sum_E = 0, sum_N = 0;
    const size_t n = known.size();
    for (size_t ii = 0; ii < n; ii++) {
        sum_e += unknown[ii].x;
        sum_n += unknown[ii].y;
        sum_E += known[ii].x;
        sum_N += known[ii].y;
    }
    // mean position
    double me = sum_e / n;
    double mn = sum_n / n;
    double mE = sum_E / n;
    double mN = sum_N / n;
    // differences from the mean
    double sum_deE = 0, sum_dnN = 0, sum_dee = 0, sum_dnn = 0;
    double sum_dnE = 0, sum_deN = 0;
    for (size_t ii = 0; ii < n; ii++) {
        double de = unknown[ii].x - me;
        double dn = unknown[ii].y - mn;
        // for P
        sum_deE += de * known[ii].x;
        sum_dnN += dn * known[ii].y;
        sum_dee += de * unknown[ii].x;
        sum_dnn += dn * unknown[ii].y;
        // for Q
        sum_dnE += dn * known[ii].x;
        sum_deN += de * known[ii].y;
    }
    double P = (sum_deE + sum_dnN) / (sum_dee + sum_dnn);
    double Q = (sum_dnE - sum_deN) / (sum_dee + sum_dnn);
    double R = mE - P * me - Q * mn;
    double S = mN + Q * me - P * mn;
    return Transform{ P, Q, R, S };
}
One issue is that numeric stuff like this is often tricky. Even when the algorithms are straightforward, there are often problems that show up in actual computation.
For that reason, if there is a system you can get easily that has a built-in feature, it might be best to use that.

Calculate the Hilbert value of a point for use in a Hilbert R-Tree?

I have an application where a Hilbert R-Tree (wikipedia) (citeseer) would seem to be an appropriate data structure. Specifically, it requires reasonably fast spatial queries over a data set that will experience a lot of updates.
However, as far as I can see, none of the descriptions of the algorithms for this data structure even mention how to actually calculate the requisite Hilbert value, which is the distance along the Hilbert curve to the point.
So any suggestions for how to go about calculating this?
Fun question!
I did a bit of googling, and the good news is I've found an implementation of the Hilbert value.
The potentially bad news is that it's in Haskell...
http://www.serpentine.com/blog/2007/01/11/two-dimensional-spatial-hashing-with-space-filling-curves/
It also proposes a Lebesgue distance metric you might be able to compute more easily.
Below is my Java code, adapted from the C code in the paper "Encoding and decoding the Hilbert order" by Xian Lu and Gunther Schrack, published in Software: Practice and Experience Vol. 26, pp. 1335-1346 (1996).
Hope this helps. Improvements welcome !
Michael
/**
 * Find the Hilbert order (= vertex index) for the given grid cell
 * coordinates.
 * @param x cell column (from 0)
 * @param y cell row (from 0)
 * @param r resolution of Hilbert curve (grid will have Math.pow(2,r)
 *          rows and cols)
 * @return Hilbert order
 */
public static int encode(int x, int y, int r) {
    int mask = (1 << r) - 1;
    int hodd = 0;
    int heven = x ^ y;
    int notx = ~x & mask;
    int noty = ~y & mask;
    int temp = notx ^ y;
    int v0 = 0, v1 = 0;
    for (int k = 1; k < r; k++) {
        v1 = ((v1 & heven) | ((v0 ^ noty) & temp)) >> 1;
        v0 = ((v0 & (v1 ^ notx)) | (~v0 & (v1 ^ noty))) >> 1;
    }
    hodd = (~v0 & (v1 ^ x)) | (v0 & (v1 ^ noty));
    return interleaveBits(hodd, heven);
}
/**
 * Interleave the bits from two input integer values.
 * @param odd integer holding bit values for odd bit positions
 * @param even integer holding bit values for even bit positions
 * @return the integer that results from interleaving the input bits
 *
 * @todo: I'm sure there's a more elegant way of doing this!
 */
private static int interleaveBits(int odd, int even) {
    int val = 0;
    // Replaced this line with the improved code provided by Tuska
    // int n = Math.max(Integer.highestOneBit(odd), Integer.highestOneBit(even));
    int max = Math.max(odd, even);
    int n = 0;
    while (max > 0) {
        n++;
        max >>= 1;
    }
    for (int i = 0; i < n; i++) {
        int bitMask = 1 << i;
        int a = (even & bitMask) > 0 ? (1 << (2*i)) : 0;
        int b = (odd & bitMask) > 0 ? (1 << (2*i+1)) : 0;
        val += a + b;
    }
    return val;
}
See uzaygezen.
The code above is fine for 2D data points. But for higher dimensions you may need to look at Jonathan Lawder's paper: J. K. Lawder, Calculation of Mappings Between One and n-dimensional Values Using the Hilbert Space-filling Curve.
I figured out a slightly more efficient way to interleave bits. It can be found at the Stanford Graphics website. I've included a version that I created that can interleave two 32-bit integers into one 64-bit long.
public static long spreadBits32(int y) {
    long[] B = new long[] {
        0x5555555555555555L,
        0x3333333333333333L,
        0x0f0f0f0f0f0f0f0fL,
        0x00ff00ff00ff00ffL,
        0x0000ffff0000ffffL,
        0x00000000ffffffffL
    };
    int[] S = new int[] { 1, 2, 4, 8, 16, 32 };
    long x = y;
    x = (x | (x << S[5])) & B[5];
    x = (x | (x << S[4])) & B[4];
    x = (x | (x << S[3])) & B[3];
    x = (x | (x << S[2])) & B[2];
    x = (x | (x << S[1])) & B[1];
    x = (x | (x << S[0])) & B[0];
    return x;
}

public static long interleave64(int x, int y) {
    return spreadBits32(x) | (spreadBits32(y) << 1);
}
Obviously, the B and S local variables should be class constants, but they were left this way for simplicity.
Michael,
thanks for your Java code! I tested it and it seems to work fine, but I noticed that the bit-interleaving function overflows at recursion level 7 (at least in my tests, where I used long values), because the "n" value is calculated using the highestOneBit() function, which returns the value and not the position of the highest one bit; so the loop does unnecessarily many interleavings.
I just changed it to the following snippet, and after that it worked fine.
int max = Math.max(odd, even);
int n = 0;
while (max > 0) {
    n++;
    max >>= 1;
}
If you need a spatial index with fast delete/insert capabilities, have a look at the PH-tree. It is partly based on quadtrees, but it is faster and more space-efficient. Internally it uses a Z-curve, which has slightly worse spatial properties than an H-curve but is much easier to calculate.
Paper: http://www.globis.ethz.ch/script/publication/download?docid=699
Java implementation: http://globis.ethz.ch/files/2014/11/ph-tree-2014-11-10.zip
Another option is the X-tree, which is also available here:
https://code.google.com/p/xxl/
Suggestion: A good simple efficient data structure for spatial queries is a multidimensional binary tree.
In a traditional binary tree, there is one "discriminant"; the value that's used to determine whether you take the left branch or the right branch. This can be considered to be the one-dimensional case.
In a multidimensional binary tree, you have multiple discriminants; consecutive levels use different discriminants. For example, for two-dimensional spatial data, you could use the X and Y coordinates as discriminants. Consecutive levels would use X, Y, X, Y...
For spatial queries (for example finding all nodes within a rectangle) you do a depth-first search of the tree starting at the root, and you use the discriminant at each level to avoid searching down branches that contain no nodes in the given rectangle.
This allows you to potentially cut the search space in half at each level, making it very efficient for finding small regions in a massive data set. (BTW, this data structure is also useful for partial-match queries, i.e. queries that omit one or more discriminants. You just search down both branches at levels with an omitted discriminant.)
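For illustration, here is a minimal sketch of that range search in C (the Node layout and the alternating-axis convention are my own assumptions):

#include <stdio.h>

typedef struct Node {
    double pt[2];                /* the point stored at this node */
    struct Node *left, *right;   /* points with pt[axis] below this node go left */
} Node;

/* Report all points inside [lo[0],hi[0]] x [lo[1],hi[1]].
   'axis' is 0 (x) at the root and alternates with depth. */
static void range_search(const Node *n, const double lo[2],
                         const double hi[2], int axis) {
    if (!n) return;
    if (n->pt[0] >= lo[0] && n->pt[0] <= hi[0] &&
        n->pt[1] >= lo[1] && n->pt[1] <= hi[1])
        printf("(%g, %g)\n", n->pt[0], n->pt[1]);
    /* Prune: descend only into branches that can intersect the box. */
    if (lo[axis] <= n->pt[axis])
        range_search(n->left, lo, hi, 1 - axis);
    if (hi[axis] >= n->pt[axis])
        range_search(n->right, lo, hi, 1 - axis);
}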
A good paper on this data structure: http://portal.acm.org/citation.cfm?id=361007
This article has good diagrams and algorithm descriptions: http://en.wikipedia.org/wiki/Kd-tree
