Any faster method to move things in a circle? - performance

Currently I'm using Math.cos and Math.sin to move objects in a circle in my game. However, after reading a bit about it, I suspect it's slow (I haven't done proper tests yet though).
Are there any ways to calculate this faster? I've been reading that one alternative could be a sort of lookup table with stored pre-calculated results, like people used in the old times before the computer age.
Any input is appreciated.

Expanding on my comment, if you don't have any angular acceleration (the angular velocity stays constant -- this is a requirement for the object to remain traveling in a circle with constant radius without changing the center-pointing force, e.g. via tension in a string), then you can use the following strategy:
1) Compute B = angular_velocity * time_step_size. This is how much angle change the object needs to go through in a single time step.
2) Compute sinb = sin(B) and cosb = cos(B).
3) Note that we want to change the angle from A to A+B (the object is going counterclockwise). In this derivation, the center of the circle we're orbiting is at the origin.
Since the radius of the circle is constant, we know (by the angle-addition formulas):
r*sin(A+B) = y_new = r*sin(A)*cos(B) + r*cos(A)*sin(B) = y_old*cos(B) + x_old*sin(B)
r*cos(A+B) = x_new = r*cos(A)*cos(B) - r*sin(A)*sin(B) = x_old*cos(B) - y_old*sin(B)
We've removed the cosine and sine of anything we don't already know, so the Cartesian coordinates can be written as
x_new = x_old*cosb - y_old*sinb
y_new = x_old*sinb + y_old*cosb
No more cos or sin calls except in an initialization step which is called once. Obviously, this won't save you anything if B keeps changing for whatever reason (either angular velocity or time step size changes).
You'll notice this is the same as multiplying the position vector by a fixed rotation matrix. You can translate by the circle center and translate back if you don't want to only consider circles with a center at the origin.
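For concreteness, here is a minimal C++ sketch of that update loop (the step angle B = 0.1 rad and the unit radius are just placeholder values):
#include <cmath>
#include <cstdio>

int main() {
    const double r = 1.0;             // circle radius (placeholder)
    const double B = 0.1;             // angle advanced per time step, radians (placeholder)
    const double cosb = std::cos(B);  // initialization: the only sin/cos calls
    const double sinb = std::sin(B);

    double x = r, y = 0.0;            // start at angle A = 0
    for (int step = 0; step < 1000; ++step) {
        const double x_new = x * cosb - y * sinb;
        const double y_new = x * sinb + y * cosb;
        x = x_new;
        y = y_new;
    }
    std::printf("x = %f, y = %f\n", x, y);
    return 0;
}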
First Edit
As #user5428643 mentions, this method is numerically unstable over time due to drift in the radius. You can probably correct this by periodically renormalizing x and y (x_new = x_old * r_const / sqrt(x_old^2 + y_old^2) and similarly for y every few thousand steps -- if you implement this, save the factor r_const / sqrt(x_old^2 + y_old^2) since it is the same for both x and y). I'll think about it some more and edit this answer if I come up with a better fix.
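Continuing the sketch above, the periodic correction might look like this (the interval of 4096 steps is an arbitrary choice):
// Renormalize back onto the circle of radius r_const; the shared factor is
// computed once and applied to both coordinates.
if (step % 4096 == 0) {
    const double k = r_const / std::sqrt(x * x + y * y);
    x *= k;
    y *= k;
}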
Second Edit
Some more comments on the numerical drift over time:
I did a couple of tests in C++ and Python. In C++ with single-precision floats, there is sizable drift even after 1 million time steps with B = 0.1 on a circle of radius 1. In double precision, I didn't notice any drift visually after 100 million steps, but checking the radius shows that its lower few digits are contaminated. Renormalizing on every step (which is unnecessary if you're just doing visualization) makes it roughly 4 times slower than the drifty version, but even that version is about 2-3 times faster than calling sin and cos on every iteration. I used full optimization (-O3) in g++. In Python (using the math package) I only got a factor of 2 between the drifty and normalized versions, and the sin/cos version lands almost exactly halfway between them in run time. Renormalizing only once every few thousand steps would still make this faster, but the difference isn't nearly as big as my C++ results would indicate.
I didn't do much scientific testing to get the timings, just a few runs with 1 million to 1 billion steps, going up by factors of 10.

Sorry, not enough rep to comment.
The answers by #neocpp and #oliveryas01 would both be perfectly correct without roundoff error.
The answer by #oliveryas01, just using sine and cosine directly, and precalculating and storing many values if necessary, will work fine.
However, #neocpp's answer, repeatedly rotating by small angles using a rotation matrix, is numerically unstable; over time, the roundoff error in the radius will tend to grow exponentially, so if you run your programme for a long time the objects will slowly move off the circle, spiralling either inwards or outwards.
You can see this mathematically with a little numerical analysis: at each stage, the squared radius is multiplied by a factor which is approximately constant and approximately equal to 1, but almost certainly not exactly equal to 1 due to the inexactness of floating-point representations.
Of course, if you're using double precision numbers and only trying to achieve a simple visual effect, this error may not be large enough to matter to you.

I would stick with sine and cosine if I were you. They're the most efficient way to do what you're trying to do. If you really want maximum performance, then you should generate an array of x and y values from the sine and cosine values, then plug that array's values into the circle's position. This way you aren't running sine and cosine repeatedly, only once for a full cycle.
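For illustration, a minimal C++ sketch of that precomputation (the table size of 360 entries, i.e. one-degree resolution, is an arbitrary choice):
// Precompute one revolution of positions, then just index into the table per frame.
#include <cmath>
#include <vector>

struct Point { float x, y; };

std::vector<Point> makeCircleTable(float radius, int steps = 360) {
    std::vector<Point> table(steps);
    for (int i = 0; i < steps; ++i) {
        const float a = 2.0f * 3.14159265f * i / steps;   // angle for entry i
        table[i] = { radius * std::cos(a), radius * std::sin(a) };
    }
    return table;
}
// In the game loop: position = table[frame % table.size()]; no sin/cos per frame.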

Another possibility, completely avoiding the trig functions, would be to use a polar-coordinate model, where you set the distance and angle. For example, you could set the x coordinate to be the distance, and the rotation to be the angle, as in...
var gameBoardPin:Sprite = new Sprite();
var gameEntity:Sprite = new YourGameEntityHere();
gameBoardPin.addChild( gameEntity );
...and in your loop...
// move gameEntity relative to the center of gameBoardPin
gameEntity.x = circleRadius;
// rotate gameBoardPin from its center causes gameEntity to rotate at the circleRadius
gameBoardPin.rotation = desiredAngleForMovingObject;
gameBoardPin's x,y coordinates would be set to the center of rotation for gameEntity. So, if you wanted the gameEntity to rotate with a 100 pixel tether around the center of the stage, you might...
gameBoardPin.x = stage.stageWidth / 2;
gameBoardPin.y = stage.stageHeight / 2;
gameEntity.x = 100;
...and then in the loop you might...
desiredAngleForMovingObject += 2;
gameBoardPin.rotation = desiredAngleForMovingObject;
With this method you're using degrees instead of radians.

Related

Algorithm for evenly arranging steps in 2 directions

I am currently programming the controller for a CNC machine, and therefore I need to get the amount of stepper motor steps in each direction when I get from point A to B.
For example, point A's coordinates are x=0 and y=0, and B's coordinates are x=15 and y=3. So I have to go 15 steps on the x axis and 3 on the y axis.
But how do I interleave those two kinds of steps in a way that is smooth (i.e. not all x steps first and then all y steps, which results in really ugly lines)?
In my example with x=15 and y=3 I want it arranged like that:
for 3 times do:
x:4 steps y:0 steps
x:1 steps y:1 step
But how can I get these numbers from an algorithm?
I hope you get what my problem is, thanks for your time,
Luca
There are two major issues here:
trajectory
This can be handled by any interpolation/rasterization algorithm, such as:
DDA
Bresenham
The DDA is your best option, as it easily handles any number of dimensions and can be computed with either integer or floating-point arithmetic. It's also faster (that was not true in the x386 days, but CPU architectures have changed since then).
And even if you have just a 2D machine, the interpolation itself will most likely be multidimensional, as you will probably add other quantities like holding force, tool rpm, pressures for whatever, etc., which have to be interpolated along your line in the same way.
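As a rough illustration, here is a minimal integer 2D DDA sketch in C++ (the printf stands in for the actual motor pulses):
#include <cstdlib>
#include <cstdio>

// Step from (x0,y0) to (x1,y1): one iteration per step of the longest axis,
// the other axis follows via an accumulated error term.
void dda2d(int x0, int y0, int x1, int y1) {
    const int dx = std::abs(x1 - x0), dy = std::abs(y1 - y0);
    const int sx = (x1 > x0) - (x1 < x0);   // step direction on x: -1, 0 or +1
    const int sy = (y1 > y0) - (y1 < y0);   // step direction on y
    const int n = (dx > dy) ? dx : dy;      // number of iterations = longest axis
    int ex = 0, ey = 0;                     // error accumulators
    for (int i = 0; i < n; ++i) {
        ex += dx; ey += dy;
        int stepx = 0, stepy = 0;
        if (ex >= n) { ex -= n; stepx = sx; }
        if (ey >= n) { ey -= n; stepy = sy; }
        std::printf("step x:%d y:%d\n", stepx, stepy);   // pulse the motors here
    }
}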
speed
This one is much more complicated. You need to drive your motors smoothly from the start position to the end while taking these into account:
line start/end speeds so you can smoothly connect more lines together
top speed (dependent on the manufacturing process, usually constant for each tool)
motor/mechanics resonance
motor speed limits: start/stop and top
When I write about speed I mean the step frequency of the motor [Hz], or the physical speed of the tool [m/s] or [mm/s].
Linear interpolation is not good for this; I use cubics instead, as they can be smoothly connected and provide a good shape for the speed change. See:
How can i produce multi point linear interpolation?
The interpolation cubic (a form of Catmull-Rom) is exactly what I use for tasks like this (and I derived it for this very purpose).
The main problem is the startup of the motor. You need to drive from 0 Hz up to some frequency, but the usual stepper motor has resonances in the lower frequencies, and as they cannot be avoided on multidimensional machines, you need to spend as little time at such frequencies as possible. There are also other means of handling this: shifting the resonance of the kinematics by adding weights or changing the shape, or adding inertial dampeners to the motors themselves (rotary motors only).
So the usual speed control for a single start/stop line ramps the speed up at the start and back down at the end.
So you should have 2 cubics, one for starting up and one for stopping, dividing your line into 2 joined ones. You have to do it so that the start and stop frequencies are configurable ...
Now how to merge speed and time? I use discrete non-linear time for this:
Find start point (time) of each cycle in a sine wave
It's the same process, but instead of time there is angle. There the frequency of the sine wave changes linearly, so that is the part you need to replace with the cubic. Also, you do not have a sine wave, so instead use the resulting time as the interpolation parameter for the DDA ... or compare it with the time of the next step, and if it is greater or equal, do the step and compute the next one ...
Here is another example of this technique:
how to control the speed of animation, using a Bezier curve?
This one actually does exactly what you should be doing: interpolate the DDA with the speed controlled by a cubic curve.
When that is done, you need to build another layer on top of this which configures the speeds for each line of the trajectory, so the result is as fast as possible while matching your machine's speed limits and, if possible, also the tool speed. This part is the most complicated one...
To give you an idea of what is ahead of you: when I put all this together, my CNC interpolator came to ~166 KByte of pure C++ code, not counting dependency libraries for vector math, dynamic lists, communication, etc. The whole control code is ~2.2 MByte.
If your controller can issue commands faster than the steppers can actually turn, you probably want to use some kind of event-driven, timer-based system. You need to calculate when to trigger each of the motors so that the motion is distributed evenly on both axes.
The longer motion should be programmed as fast as it can go (that is, if the motor can do 100 steps per second, pulse it every 1/100th of a second) and the other motion at longer intervals.
Edit: the paragraph above assumes that you want to move the tool as fast as possible. This is not normally the case. Usually, the tool speed is given, so you need to calculate the speed along X and Y (and maybe also Z) axes separately from that. You also should know what tool travel distance corresponds to one step of the motor. So you can calculate the number of steps you need to do per time unit, and also duration of the entire movement, and thus time intervals between successive stepper pulses along each axis.
So you program your timer to fire after the smallest of the calculated time intervals, pulse the corresponding motor, program the timer for the next pulse, and so on.
This is a simplification because motors, like all physical objects, have inertia and need time to accelerate/decelerate. So you need to take this into account if you want to produce smooth movement. There are more considerations to be taken into account. But this is more about physics than programming. The programming model stays the same. You model your machine as a physical object that reacts to known stimuli (stepper pulses) in some known way. Your program calculates timings for stepper pulses from the model, and sits in an event loop, waiting for the next time event to occur.
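To make the calculation described in the edit above concrete, here is a small C++ sketch deriving per-axis pulse intervals from a given tool speed (all numeric values and the mm_per_step figure are placeholders for your machine; acceleration is ignored, as discussed above):
#include <cmath>
#include <cstdio>

int main() {
    const double dx_mm = 15.0, dy_mm = 3.0;   // movement along each axis [mm]
    const double tool_speed = 20.0;           // given tool speed [mm/s]
    const double mm_per_step = 0.01;          // tool travel per motor step [mm]

    const double length = std::sqrt(dx_mm * dx_mm + dy_mm * dy_mm);  // path length [mm]
    const double duration = length / tool_speed;                     // total move time [s]
    const double steps_x = dx_mm / mm_per_step;
    const double steps_y = dy_mm / mm_per_step;
    const double interval_x = duration / steps_x;   // time between successive X pulses [s]
    const double interval_y = duration / steps_y;   // time between successive Y pulses [s]
    std::printf("X: %.0f steps every %.6f s, Y: %.0f steps every %.6f s\n",
                steps_x, interval_x, steps_y, interval_y);
    return 0;
}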
Consider Bresenham's line drawing algorithm - he invented it for plotters many years ago. (Also consider the DDA.)
In your case the X/Y displacements have a common divisor GCD = 3 > 1, so the steps will alternate evenly, but in the general case they won't be distributed so uniformly.
You should take the ratio between the distances along the two coordinates, and then alternate between steps along the coordinate that has the longer distance and steps that move a single unit along both coordinates.
Here is an implementation in JavaScript -- using only the simplest of its syntax:
function steps(a, b) {
    const dx = Math.abs(b.x - a.x);
    const dy = Math.abs(b.y - a.y);
    const sx = Math.sign(b.x - a.x); // sign = -1, 0, or 1
    const sy = Math.sign(b.y - a.y);
    const longest = Math.max(dx, dy);
    const shortest = Math.min(dx, dy);
    const ratio = shortest / longest;
    const series = [];
    let longDone = 0;
    let remainder = 0;
    for (let shortStep = 0; shortStep < shortest; shortStep++) {
        // how many steps along the longest axis before the next diagonal step
        const steps = Math.ceil((0.5 - remainder) / ratio);
        if (steps > 1) {
            if (dy === longest) {
                series.push( {x: 0, y: (steps-1)*sy} );
            } else {
                series.push( {x: (steps-1)*sx, y: 0} );
            }
        }
        series.push( {x: sx, y: sy} ); // the diagonal step (one unit on both axes)
        longDone += steps;
        remainder += steps*ratio - 1;
    }
    if (longest > longDone) {
        // remaining steps along the longest axis only (note the sign factor)
        if (dy === longest) {
            series.push( {x: 0, y: (longest-longDone)*sy} );
        } else {
            series.push( {x: (longest-longDone)*sx, y: 0} );
        }
    }
    return series;
}
// Demo
console.log(steps({x: 0, y: 0}, {x: 3, y: 15}));
Note that the first segment is shorter than all the others, so that it is more symmetrical with how the sequence ends near the second point. If you don't like that, then replace the occurrence of 0.5 in the code with either 0 or 1.

How to use raw gyroscope data (°/s) for calculating 3D rotation?

My question may seem trivial, but the more I read about it - the more confused I get... I have started a little project where I want to roughly track the movements of a rotating object. (A basketball to be precise)
I have a 3-axis accelerometer (low-pass-filtered) and a 3-axis gyroscope measuring °/s.
I know about the issues of a gyro, but as the measurements will only last several seconds and the angles tend to be huge, I don't care about drift and gimbal lock right now.
My gyro gives me the rotation speed around all 3 axes. As I want to integrate the acceleration twice to get the position at each timestep, I wanted to convert the sensor's coordinate system into an earthbound system.
For the first try, I want to keep things simple, so I decided to go with the big standard rotation matrix.
But as my results are horrible, I wonder if this is the right way to do it. If I understood correctly, the matrix is simply 3 matrices multiplied in a certain order. As the rotation of a basketball doesn't have any "natural" order, this may not be a good idea. My sensor measures 3 angular velocities at once. If I throw them into my system "step by step" it will not be correct, since my second matrix calculates the rotation around the "new y-axis", but my sensor actually measured an angular velocity around the "old y-axis". Is that correct so far?
So how can I correctly calculate the 3D rotation?
Do I need to go for quaternions? But how do I get one from 3 different rotations? And don't I have the same issue here again?
I start with an identity matrix ((1, 0, 0)(0, 1, 0)(0, 0, 1)) multiplied with the acceleration vector to give me the first movement.
Then I want to use the rotation matrix to find out where the next acceleration is really heading, so I can simply add the accelerations together.
But right now I am just too confused to find a proper way.
Any suggestions?
btw. sorry for my poor english, I am tired and (obviously) not a native speaker ;)
Thanks,
Alex
Short answer
Yes, go for quaternions and use a first order linearization of the rotation to calculate how orientation changes. This reduces to the following pseudocode:
float pose_initial[4]; // quaternion describing original orientation
float g_x, g_y, g_z; // gyro rates
float dt; // time step. The smaller the better.
// quaternion with "pose increment", calculated from the first-order
// linearization of continuous rotation formula
delta_quat = {1, 0.5*dt*g_x, 0.5*dt*g_y, 0.5*dt*g_z};
// final orientation at start time + dt
pose_final = quaternion_hamilton_product(pose_initial, delta_quat);
This solution is used in PixHawk's EKF navigation filter (it is open source, check out formulation here). It is simple, cheap, stable and accurate enough.
Unit matrix (describing a "null" rotation) is equivalent to quaternion [1 0 0 0]. You can get the quaternion describing other poses using a suitable conversion formula (for example, if you have Euler angles you can go for this one).
Notes:
Quaternions follow the [w, i, j, k] notation.
These equations assume angular speeds in SI units, that is, radians per second.
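To make the pseudocode above concrete, here is one possible C++ sketch of the Hamilton product and the pose update (the struct and function names are just illustrative):
#include <cmath>

struct Quat { float w, x, y, z; };   // [w, i, j, k] order, as above

// Hamilton product of two quaternions.
Quat hamilton(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

Quat normalize(Quat q) {
    const float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return { q.w/n, q.x/n, q.y/n, q.z/n };
}

// First-order pose update: gyro rates gx, gy, gz in rad/s, time step dt in s.
// Renormalizing keeps the pose a unit quaternion despite the linearization.
Quat integrateGyro(const Quat& pose, float gx, float gy, float gz, float dt) {
    const Quat delta = { 1.0f, 0.5f*dt*gx, 0.5f*dt*gy, 0.5f*dt*gz };
    return normalize(hamilton(pose, delta));
}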
Long answer
A gyroscope describes the rotational speed of an object as a decomposition into three rotational speeds around the orthogonal local axes XYZ. However, you could equivalently describe the rotational speed as a single rate around a certain axis, either in a reference system that is local to the rotated body or in a global one.
The three rotational speeds affect the body simultaneously, continuously changing the rotation axis.
Here we have the problem of switching from the continuous-time real world to a simpler discrete-time formulation that can be easily solved using a computer. When discretizing, we are always going to introduce errors. Some approaches will lead to bigger errors, while others will be notably more accurate.
Your approach of concatenating three simultaneous rotations around orthogonal axes works reasonably well with small integration steps (let's say smaller than 1/1000 s, although it depends on the application), so that you are simulating the continuous change of the rotation axis. However, this is computationally expensive, and the error grows as you make the time steps bigger.
As an alternative to first-order linearization, you can calculate pose increments as a small delta of angular speed gradient (also using quaternion representation):
quat_gyro = {0, g_x, g_y, g_z};
q_grad = 0.5 * quaternion_product(pose_initial, quat_gyro);
// Important to normalize result to get unit quaternion!
pose_final = quaternion_normalize(pose_initial + q_grad*dt);
This technique is used in the Madgwick rotation filter (here is an implementation), and it works pretty well for me.

How to tilt-compensate my magnetometer? Tried a lot

I'm trying to tilt-compensate a magnetometer (BMX055) reading and have tried various approaches I found online; not a single one works.
I actually tried almost every result I found on Google.
I run this on an AVR, so it would be extra awesome to find something that works without complex functions (trigonometry etc.) for angles up to 50 degrees.
I have a fused gravity vector (int16 signed in a float) from gyro+acc (1g gravity=16k).
attitude.vect_mag.x/y/z is a float but contains a 16bit integer ranging from around -250 to +250 per axis.
Currently I try this code:
float rollRadians = attitude.roll * DEG_TO_RAD / 10;
float pitchRadians = attitude.pitch * DEG_TO_RAD / 10;
float cosRoll = cos(rollRadians);
float sinRoll = sin(rollRadians);
float cosPitch = cos(pitchRadians);
float sinPitch = sin(pitchRadians);
float Xh = attitude.vect_mag.x * cosPitch + attitude.vect_mag.z * sinPitch;
float Yh = attitude.vect_mag.x * sinRoll * sinPitch + attitude.vect_mag.y * cosRoll - attitude.vect_mag.z *sinRoll * cosPitch;
float heading = atan2(Yh, Xh);
attitude.yaw = heading*RAD_TO_DEG;
The result is meaningless, but the values without tilt compensation are correct.
The uncompensated formula:
atan2(attitude.vect_mag.y,attitude.vect_mag.x);
works fine (when not tilted)
I am sort of clueless what is going wrong; the plain atan2 returns a good result (when level), but using the widespread formulas for tilt compensation completely fails.
Do I have to keep the mag vector values within a specific range for the trigonometry to work ?
Any way to do the compensation without trig functions ?
I'd be glad for some help.
Update:
I found that the BMX055 magnetometer has X and Y inverted, and the Y axis is additionally multiplied by -1.
The sin/cos functions now seem to lead to a better result.
I am trying to implement the suggested vector algorithms, struggling so far :)
Let us see.
(First, forgive me a bit of style nagging. The keyword volatile means that the variable may change even if we do not change it ourselves in our code. This may happen with a memory location that is written by another process (an interrupt request in the AVR context). For the compiler, volatile means that the variable always has to be loaded from and stored to memory when used. See:
http://en.wikipedia.org/wiki/Volatile_variable
So, most likely you do not want to have any such qualifiers on your floats.)
Your input:
three 12-bit (11 bits + sign) integers representing accelerometer data
three approximately 9-bit (8 bits + sign) integers representing the magnetic field
The good news (well...) is that your resolution is not that big, so you can use integer arithmetic, which is much faster. The bad news is that there is no simple magical one-liner that would solve your problem.
First of all, what would you like to have as the compass bearing when the device is tilted? Should the device act as if it was not tilted, or should it actually show the correct projection of the magnetic field lines on the screen? The latter is how an ordinary compass acts (if the needle moves at all when tilted). In that case you should not compensate for anything, and the device can show the fancy vertical tilt of the magnetic lines when rolled sideways.
In any case, try to avoid trigonometry; it takes a lot of code space and time. Vector arithmetic is much simpler, and most of the time you can make do with multiplies and adds.
Let us try to define your problem in vector terms. Actually you have two space vectors to start with, m pointing to the direction of the magnetic field, g to the direction of gravity. If I have understood your intention correctly, you need to have vector d which points along some fixed direction in the device. (If I think of a mobile phone, d would be a vector parallel to the screen left or right edges.)
With vector mathematics this looks rather simple:
g is a normal to a horizontal (truly horizontal) plane
the projection of m on this plane defines the direction a horizontal compass would show
the projection of d on the plane defines the "north" on the compass face
the angle between these two projections gives the compass bearing
Since we are not interested in the magnitude of the magnetic field, we can scale everything as we like. This reduces the need for unit vectors, which are expensive to calculate.
So, the maths will be something along these lines:
# projection of m on g (. represents dot product)
mp := m - g (m.g) / (g.g)
# projection of d on g
dp := d - g (d.g) / (g.g)
# angle between mp and dp
cos2 := (mp.dp)^2 / (mp.mp * dp.dp)
sgn1 := sign(mp.dp)
# create a vector rotated 90 degrees from dp in the plane defined by g (x is the cross product)
drot := dp x g
sin2 := (mp.drot)^2 / (mp.mp * drot.drot)
sgn2 := sign(mp.drot)
After this you will have a sin^2 and cos^2 of the compass directions. You need to create a resolving function for one quadrant and then determine the correct quadrant by using the signs. The resolving function may sound difficult, but actually you just need to create a table lookup function for sin2/cos2 or cos2/sin2 (whichever is smaller). It is relatively fast, and only a few points are required in the lookup (with bilinear approximation even fewer).
So, as you can see, there are no trig functions around, and not even square roots. Vector dots and crosses are just multiplies. The only slightly challenging trick is to scale the fixed-point arithmetic to the correct scale in each calculation.
You might notice that there is a lot of room for optimization, as the same values are used several times. The first step is to get the algorithm run on a PC with floating point with the correct results. The optimizations come later.
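As a starting point for that floating-point check, a rough C++ sketch of the projection formulas above might look like this (names are illustrative; the fixed-point scaling and the lookup-table resolving function are left out, with atan2 standing in for the latter):
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 scale(const Vec3& a, float s) { return { a.x*s, a.y*s, a.z*s }; }

// m = magnetic field, g = gravity, d = fixed device direction (all raw, unscaled).
// Returns the compass bearing in radians; the sign convention depends on your axes.
float bearing(const Vec3& m, const Vec3& g, const Vec3& d) {
    const float gg = dot(g, g);
    const Vec3 mp = sub(m, scale(g, dot(m, g) / gg));   // m projected onto the horizontal plane
    const Vec3 dp = sub(d, scale(g, dot(d, g) / gg));   // d projected onto the horizontal plane
    const Vec3 drot = cross(dp, g);                     // dp rotated 90 degrees within the plane
    // |drot| = |dp|*|g| because dp is perpendicular to g, hence the division by |g|.
    return std::atan2(dot(mp, drot) / std::sqrt(gg), dot(mp, dp));
}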
(Sorry, I am not going to write the actual code here, but if there is something that needs clarifying, I'll be glad to help.)

XNA 2D Camera losing precision

I have created a 2D camera (code below) for a top-down game. Everything works fine when the player's position is close to 0.0x and 0.0y.
Unfortunately, as the distance increases the transform seems to have problems. At around 0.0x, 30e7y (yup, that's 30 million y) the camera starts to shudder when the player moves (the camera gets updated with the player's position at the end of each update). At really big distances, a billion plus, the camera won't even track the player; I'm guessing whatever error is in the matrix is amplified by too much.
My question is: is there a problem in the matrix, or is this standard behavior for extreme numbers?
Camera Transform Method:
public Matrix getTransform()
{
    Matrix transform;
    transform = Matrix.CreateTranslation(new Vector3(-position.X, -position.Y, 0)) *
                Matrix.CreateRotationZ(rotation) *
                Matrix.CreateScale(new Vector3(zoom, zoom, 1.0f)) *
                Matrix.CreateTranslation(new Vector3(viewport.Width / 2.0f, viewport.Height / 2.0f, 0));
    return transform;
}
Camera Update Method:
This requests the object's position given its ID; it returns a basic Vector2 which is then set as the camera's position.
if (camera.CameraMode == Camera2D.Mode.Track && cameraTrackObject != Guid.Empty)
{
    camera.setFocus(quadTree.getObjectPosition(cameraTrackObject));
}
If anyone can see an error or enlighten me as to why the matrix struggles, I would be most grateful.
I have actually found the reason for this, it was something I should have thought of.
I'm using single-precision floating-point numbers, which only have about 7 digits of precision. That's fine for smaller numbers (up to around the 2.5 million mark, I have found). Anything over this and the multiplication functions in the matrix start to accumulate precision errors as the floats start to truncate.
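A tiny C++ illustration of the effect (the value of 30 million matches the question; the behavior follows from float's 24-bit mantissa):
#include <cstdio>

int main() {
    float position = 30000000.0f;   // 30 million, roughly where the shuddering starts
    float moved = position + 1.0f;  // a 1-unit player move
    // At this magnitude adjacent floats are 2 units apart, so the move is lost:
    std::printf("%f\n", moved - position);   // prints 0.000000, not 1.000000
    return 0;
}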
The best solution for my particular problem is to introduce some artificial scaling (I need the very large numbers as the simulation is set in space). I have limited my worlds to 5 million units squared (+/- 2.5 million units) and will come up with another way of granulating the world.
I also found a good answer about this here:
Vertices shaking with large camera position values
And a good article that discusses floating points in more detail:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Thank you for the views and comments!!

Is point A nearby point B in 3D - distance check

I am looking for an efficient algorithm for checking whether one point is near another in 3D:
sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) < radius
This doesn't seem to be very fast, and I actually don't need such high accuracy. How else could I do this?
Square the distance, and drop the call to sqrt(), that's much faster:
(x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 < radius * radius
Of course, in many cases at least radius * radius can be computed ahead of time and stored as e.g. squaredRadius.
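A minimal C++ sketch of that check, with the squared radius supplied by the caller:
// True if (x2,y2,z2) lies strictly inside the sphere of the given squared radius
// centered at (x1,y1,z1). No sqrt needed.
inline bool isNearby(float x1, float y1, float z1,
                     float x2, float y2, float z2,
                     float squaredRadius) {
    const float dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
    return dx*dx + dy*dy + dz*dz < squaredRadius;
}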
Well, if you can be content with a cube distance rather than a spherical distance, a pretty naive implementation would be like this:
Math.Abs(x2-x1) < radius && Math.Abs(y2-y1) < radius && Math.Abs(z2-z1) < radius
You can use your own favourite methods of optimising Math.Abs if it proves a bottleneck.
I should also add that if one of the dimensions generally varies less than other dimensions then putting that one last should lead to a performance gain. For example if you are mainly dealing with objects on a "ground" x-y plane then check the z axis last, as you should be able to rule out collisions earlier by using the x and y checks.
If you do not need high accuracy, you can check whether the 2nd point is inside a cube (side length 2a) centered on the 1st point, rather than a sphere:
|x2-x1|<a && |y2-y1|<a && |z2-z1|<a
Because of pipelined processor architectures it is nowadays in most cases more efficient to do the FPU calculation twice than to branch once. In the case of a branch misprediction you stall for ages (in CPU terms).
So, I would rather go the calculation-way, not the branching-way.
If you don't need the accuracy, you can check whether you are in a cube rather than a sphere.
There are options here as well: you can pick the cube that encloses the sphere (no false negatives), the cube with the same volume as the sphere (some false positives and negatives, but the maximum error is minimized), or the cube that is contained within the sphere (no false positives).
This technique also extends well to higher dimensions.
If you want to get all the points near another one, some form of spatial indexing may also be appropriate (a k-d tree, perhaps).
If you have to check against many other points, you could consider using a spatial ordering method to quickly discover points, that are near a certain location. Have a look at this link:
wiki link
If we were going to optimise this because it was being run billions of times, I would solve it by using unwind's method and then parallelizing it using SIMD. There are a few different ways to do that. You might simply do all the subtractions (x2-x1, y2-y1, z2-z1) in one op, and then the multiplies in one op also. That way you parallelize inside the method without redesigning your algorithm.
Or you could write a bulk version which calculates (x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2 - r^2 on many elements and stores the results in an array. You can maybe get better throughput, but it means redesigning your algorithm, and it depends on what the tests are used for.
You could also easily optimize this using something like OpenMP if you were really doing lots of tests in a row.
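For example, a bulk version might look like the following C++ sketch (the array layout and function name are illustrative; the OpenMP pragma parallelizes the loop, and a plain loop like this is also easy for compilers to auto-vectorize):
#include <cstddef>

// Test many points against one center, writing one result per point.
void nearbyBulk(const float* xs, const float* ys, const float* zs, std::size_t n,
                float cx, float cy, float cz, float squaredRadius, bool* out) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(n); ++i) {
        const float dx = xs[i] - cx, dy = ys[i] - cy, dz = zs[i] - cz;
        out[i] = dx*dx + dy*dy + dz*dz < squaredRadius;
    }
}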
Use max(abs(x1-x2), abs(y1-y2), abs(z1-z2))
After dropping the square root, if the values get larger, it's better to apply a log.
This does the cube-distance, and if you are doing a lot of points, most of the time it only does the first test.
close = (abs(x2-x1) < r && abs(y2-y1) < r && abs(z2-z1) < r);
