I am modelling a particle in 3D space.
{0} The particle starts at time t0 from a known position P0 with a velocity V0. The velocity is computed using its known previous position of P-1 at t-1.
{1} The particle is targeted to go to P1 at t1 with a known velocity of V1.
{..} The particle moves as fast as it can, without jerks (C1 continuous) bound by a set of constraints that clamp the acceleration along x, y and z independently. The maximum acceleration/deceleration along x, y and z are known and are Xa, Ya, and Za. The max rate of change of acceleration along x, y and z are defined by Xr, Yr, and Zr.
{n} After an unknown number of time steps it reaches Pn at some time (say tn) with a velocity of Vn.
{n+1} It moves to Pn+1 at tn+1.
The problem I have is to compute the minimum time for the transition from P0 to Pn and to generate the intermediate positions and velocity directions thereof. A secondary goal is to accelerate smoothly instead of applying acceleration that results in jerks.
Current Approach:
find the dimension {x, y or z} that will take the longest to align from start P0 to end Pn. This will be the critical dimension and will determine the total time. This is fairly straightforward and I can write something to this effect.
interpolate smoothly without jitters from P0 to Pn in all dimensions such that the velocity at Pn is as expected. I am not sure, how to approach this.
Any inputs/physics engines that already do this will be useful. It is a commercial project and I cannot put dependencies on large 3rd party libraries with restrictive licenses.
Note: Particle at P0 and Pn have little or no acceleration.
If I understand correctly, you have a point (P0, V0), with V0 = P0 - P-1, and a point (Pn, Vn), with Vn = Pn - Pn-1, and you want to find the fewest intermediate points by adjusting the acceleration at each time step.
Let's define the acceleration at ti: Ai = Vi - Vi-1, with abs(Ai) <= mA. Here, since the problem is axis-independent, abs is the member-wise absolute value instead of the norm (or vector magnitude), and mA is the maximum acceleration vector, positive in each dimension. Let's also consider that Pn > P0 (member-wise).
From that, we get Vi = Vi-1 + Ai and so Pi = Pi-1 + Vi-1 + Ai.
If you need to go from some point to another as fast as possible, the obvious thing to do, whatever the initial velocity, is to accelerate as much as possible until you reach the goal. However, since your problem is discrete and you have a terminal velocity Vn, that method will probably overshoot the goal and arrive with the wrong terminal velocity.
However, you can do the same thing in reverse, starting from the end point. And if you start simultaneously from both points, you will make two paths crossing each other in each dimension (not necessarily crossing in 3D, but, in each dimension, the relative direction of both paths changes at some "crossing" point).
Let's take a one-dimensional example. (P0, V0) = (0, -2) and (Pn, Vn) = (35, -1), and mA = 1.
The first path, with Ai = mA, goes like this:
(0, -2) -> (-1, -1) -> (-1, 0) -> (0, 1) -> (2, 2) ->
(5, 3) -> (9, 4) -> (14, 5) -> (20, 6) -> (27, 7) -> ...
The second path, with Ai = -mA but in reverse, goes like this:
(35, -1) <- (36, 0) <- (36, 1) <- (35, 2) <- (33, 3) <-
(30, 4) <- (26, 5) <- (21, 6) <- (15, 7) <- ...
You can see the paths cross with the same velocity somewhere between 20 and 21. That gives you the fastest acceleration and deceleration parts of the path you need, but the two parts aren't connected. However, it's easy to connect them by finding the closest points of same velocity; let's call these points Pq and Pr. Here, Pq = (20, 6) and Pr = (21, 6). Since that velocity is calculated between current and previous points, take the point before Pq (Pq-1, or (14, 5) in the example) and the point Pr, and try connecting them.
If Pq >= Pr >= Pq - 2mA, then you can connect them directly by taking Pq-1 unchanged, and Pr with Vr = Pr - Pq-1.
Else, take Pq-2 and Pr-1 (where Vr-1 = Vr - mA, because it's in reverse) and try connecting those by adding intermediate points. Since these points have a velocity difference of mA, you can search only for intermediate points with the same velocity Vs such that Vq-2 <= Vs <= Vr-1.
If you still can't find a solution, then take Pq-3 and Pr-2 and repeat the process with more intermediate points.
In the example I took, Pq < Pr, so we have to try with Pq-2 = (9, 4) and Pr-1 = (26, 5). We can connect those with a sequence of 3 points, for example (9, 4) -> (13, 4) -> (17, 4) -> (21, 4) -> (26, 5).
In any case, this method will give you the smallest number of intermediate points, meaning the fastest path between P0 and Pn.
If you then want to reduce jerk, you can forget the points calculated previously and do an interpolation with the number of points you now know to be minimal.
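For concreteness, here is a minimal 1-D sketch in C of generating the two paths from the example above, with the forward path accelerating by mA at every step and the reverse path built backwards from the end point; finding the crossing and stitching the halves together is left out.

#include <stdio.h>

// Sketch of the forward/backward construction for the example:
// (P0, V0) = (0, -2), (Pn, Vn) = (35, -1), mA = 1.
int main(void)
{
    int pF = 0,  vF = -2;   // forward state (always accelerating)
    int pB = 35, vB = -1;   // backward state (always decelerating, walked in reverse)
    const int mA = 1;

    for (int step = 0; step < 10; ++step) {
        printf("forward (%3d, %2d)    backward (%3d, %2d)\n", pF, vF, pB, vB);
        vF += mA; pF += vF;   // next forward point: V_i = V_{i-1} + mA, P_i = P_{i-1} + V_i
        pB -= vB; vB += mA;   // previous backward point: P_{i-1} = P_i - V_i, V_{i-1} = V_i + mA
    }
    return 0;
}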
After playing around with some ideas, I came up with another solution, more accurate and probably faster, if done correctly, than that of my previous answer. It is however quite complicated and requires quite a bit of maths, although not very complex maths. Moreover, this is a work in progress: I am still investigating some areas. Nonetheless, from what I've tried, it does already produce very good results.
The problem
Definitions and goal
Throughout this answer, p[n] refers to the position of the nth point, v[n] to its velocity, a[n] to its acceleration, and j[n] to its jerk (the derivative of acceleration). The velocity of the nth point depends only on its position and that of the previous point. Similarly for acceleration and jerk, but with the point's velocity and acceleration, respectively.
We have a start point and an end point, respectively p[0] and p[n], both with associated velocities v[0] and v[n]. The goal is to place n-1 points in between, with an arbitrary n, such that, along the X, Y, and Z axes, the absolute values of acceleration and jerk at any of these points (and at p[n]) are below some limits, respectively aMaxX, aMaxY, and aMaxZ for acceleration, and jMaxX, jMaxY, and jMaxZ for jerk.
What we want to find is the values of p[i] for all i ∈ [1; n-1]. Because p[i] = p[i-1] + v[i], this is the same as finding v[i]. By the same reasoning, with v[i] = v[i-1] + a[i] and a[i] = a[i-1] + j[i], it is also the same as finding a[i] or j[i].
a[0] and a[n+1] are assumed to be zero.
Observations and simplifications
Because the problem's constraints are independent of the dimension, we can solve for each of the three dimensions separately, as long as the number of points obtained in each case is the same. Therefore, I am only going to solve the one-dimensional version of the problem, using aMax and jMax, irrespective of the axis.
*[WIP]* Determine the worst case to solve first, then solve the other ones, knowing the number of points.
The actual positions of the two given points are irrelevant, what matters is the relative distance between them, which we can define as P = p[n] - p[0]. Let's also define the ranges R = [1; n] and R* = [1; n+1].
Because of the discrete nature of the problem, we can obtain the following equations. Note that ∑{i∈R}(x[i]) is the sum of all x[i] for i∈R.
Ⓐ ∑{i∈R}(v[i]) = P
Ⓑ ∑{i∈R}(a[i]) = v[n] - v[0]
Ⓧ ∑{i∈R*}(j[i]) = 0
Ⓧ comes from the assumption that a[0] = a[n+1] = 0.
From Ⓐ and v[i] = v[i-1] + a[i], i∈R, we can deduce:
Ⓒ ∑{i∈R}((n+1-i)*a[i]) = P - n*v[0]
By the same logic, from Ⓑ, Ⓒ, and a[i] = a[i-1] + j[i], i∈R, we can deduce:
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
Here, T[n] is the nth triangular number, defined by T[n] = n*(n+1)/2.
The equations Ⓧ, Ⓨ, and Ⓩ are the relevant ones for the next parts.
The approach
In order to minimize n, we can start with a small value of n (1, 2?) and find a solution. Then, if max{i∈R}(abs(a[i])) > aMax or max{i∈R}(abs(j[i])) > jMax, we can increment n and repeat the process.
*[WIP]* Find a lower bound for n to avoid unnecessary calculations from small values of n. Or estimate the correct value of n and pinpoint it by testing solutions.
Finding a solution requires finding the values of j[i] for all i∈R*. I have yet to find an optimal form for j[i], but defining j*[i], r[i] and s[i] such that
j[i] = j*[i] + r[i]v[0] + s[i]v[n]
works quite well.
*[WIP]* Find a better form for j[i]
By doing that, we transform our n-1 unknowns (j[i], i∈R, note that j[n+1] = -∑{i∈R}(j[i])) into 3(n-1) easier to find unknowns. Here are a few things we can deduce right now from Ⓧ, Ⓨ, and Ⓩ.
∑{i∈R*}(r[i]) = 0
∑{i∈R*}(s[i]) = 0
∑{i∈R}((n+1-i)*r[i]) = -1
∑{i∈R}((n+1-i)*s[i]) = 1
∑{i∈R}(T(n+1-i)*r[i]) = -n
∑{i∈R}(T(n+1-i)*s[i]) = 0
As a reminder, here are Ⓧ, Ⓨ, and Ⓩ.
Ⓧ ∑{i∈R*}(j[i]) = 0
Ⓨ ∑{i∈R}((n+1-i)*j[i]) = v[n] - v[0]
Ⓩ ∑{i∈R}(T[n+1-i]*j[i]) = P - n*v[0]
The goal now is to find adequate special cases to help us determine these unknowns.
The special cases
v[0] = v[n] = 0
By playing with values of jerk, I observed that taking all of j[i], i∈R* as part of a parabola yields excellent results for minimizing both jerk and acceleration. Although it isn't the best possible fit, I haven't found better yet.
The intuition behind values of jerk coming from a parabola is that, if the values of position are to follow a polynomial, then its degree must be at least 5, and can be 5. This is easier to understand if you think about the values of velocity following a 4th degree polynomial. Given the constraints that v[0] and v[n] are set, that a[0] = a[n+1] = 0, and that its integral over [0; n] must equal P, this polynomial must have a degree of at least 4. This holds for both continuous and discrete cases. Finally, it seems that taking the smallest degree leads to a smoother jerk as well as making it easier to calculate.
Here is an example of a continuous case where the position is in purple, the velocity in blue, the acceleration in yellow and the jerk in red.
In case you want to play with this, here is how to define the position curve in terms of n, p[0], p[n], v[0], and v[n] (the other ones are simply derivatives).
a = (-3n(v[n]+v[0]) + 6(p[n]-p[0])) / n^5
b = (n(7v[n]+8v[0]) - 15(p[n]-p[0])) / n^4
c = (-n(4v[n]+6v[0]) + 10(p[n]-p[0])) / n^3
p[x] = ax^5 + bx^4 + cx^3 + v[0]x + p[0]
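In case a numeric check helps, here is a small sketch in C that evaluates this curve; the values n = 10, p[0] = 0, p[n] = 35, v[0] = -2, v[n] = -1 are just sample inputs.

#include <stdio.h>
#include <math.h>

// Position curve of degree 5 defined above; a, b, c come from the boundary
// conditions p(0)=p[0], p'(0)=v[0], p''(0)=0, p(n)=p[n], p'(n)=v[n], p''(n)=0.
double position(double x, double n, double p0, double pn, double v0, double vn)
{
    double a = (-3*n*(vn + v0) + 6*(pn - p0)) / pow(n, 5);
    double b = ( n*(7*vn + 8*v0) - 15*(pn - p0)) / pow(n, 4);
    double c = (-n*(4*vn + 6*v0) + 10*(pn - p0)) / pow(n, 3);
    return a*pow(x, 5) + b*pow(x, 4) + c*pow(x, 3) + v0*x + p0;
}

int main(void)
{
    double n = 10, p0 = 0, pn = 35, v0 = -2, vn = -1;
    for (int i = 0; i <= (int)n; ++i)
        printf("p[%2d] = %8.3f\n", i, position(i, n, p0, pn, v0, vn));
    return 0;
}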
If v[0] = v[n] = 0, then j[i] = j*[i], i∈R*. That means that the values j*[i] follow a quadratic polynomial. So we want to find α, β, and γ such that Ⓟ holds.
Ⓟ j*[i] = αi^2 + βi + γ, i∈R*
From Ⓧ, Ⓨ, and Ⓩ follow these equations.
α*∑{i∈R*}(i^2) + β*∑{i∈R*}(i) + γ*∑{i∈R*}(1) = 0
α*∑{i∈R}((n+1-i)*i^2) + β*∑{i∈R}((n+1-i)*i) + γ*∑{i∈R}(n+1-i) = 0
α*∑{i∈R}(T[n+1-i]*i^2) + β*∑{i∈R}(T[n+1-i]*i) + γ*∑{i∈R}(T[n+1-i]) = P
Solving this system gives α, β, and γ, which can be used with Ⓟ to calculate j*[i], i∈R*. Note that j*[i] = j*[n+2-i], so only the upper half of the calculations need to be done.
v[0] = v[n] = 1/n
If v[0] = v[n] = 1/n, then j[i] = 0, i∈R*. This means that Ⓠ holds.
Ⓠ r[i] + s[i] = -n*j*[i], i∈R*
v[0] = 0, j[i∈L] = J, j[h] = 0, j[i∈U] = -J
L and U are respectively the lower and upper halves of R*, and h is the value in between, if n+1 is odd. In other words:
if n is odd:
L = [1; (n+1)/2]
U = [(n+3)/2; n+1]
if n is even:
L = [1; n/2]
h = n/2+1
U = [n/2+2; n+1]
This special case corresponds to the maximum overall acceleration between p[0] and p[n] while minimizing abs(j[i]), i∈R*. Here, Ⓩ gives us the following equation.
∑{i∈R}(T[n+1-i]*j[i]) = P
∑{i∈L}(T[n+1-i])*j[1] + ∑{i∈U}(T[n+1-i])*j[n+1] = P
j[1] = P / [ ∑{i∈L}(T[n+1-i]) - ∑{i∈U}(T[n+1-i]) ]
This gives j[1], and so every j[i], i∈R*. We can then calculate v[n] using Ⓨ.
Putting the pieces together
Each special case gives us, for some values of v[0], v[n] and P, a relation of the form
αj*[i] + βr[i] + γs[i] = δ.
By treating three special cases (assuming they are not similar, meaning they do not give the same relation), we have a system of three equations that, once solved, gives the values of j*[i], r[i] and s[i] for all i∈R*.
As a result, we can calculate, for each value of n, values of j[i] depending on v[0], v[n] and P. They can be precalculated, which means testing them for any value of n can be very fast. Thereby, we can very quickly find a good estimate of the fewest number of points needed in the trajectory, as well as a good approximation of the best trajectory possible, as long as we have precalculated values up to a sufficiently large value of n.
I suggest taking the following function:
X(n) = Xstart + Vxstart n + (-6Xstart - 3Vxstart + 6Xend - 3Vxend + c/2) n^2 + (8Xstart + 3Vxstart - 8Xend + 5Vxend - c) n^3 + (-3Xstart - Vxstart + 3Xend - 2Vxend + c/2) n^4
(for each coordinate X,Y,Z)
Here are some graphs of what this gives; I took c=2 for each sample:
For xstart=1, vstart=1, xend=3, vend=-2, this gives:
X(n)= 1 + n + 16 n^2 -25 n^3 + 10 n^4
For xstart = -4, vstart =-4, xend = 4, vend = 0, this gives :
X(n) = -4 - 4n + 61n^2 - 78n^3 + 29n^4
where c is a number from 0.1 to 5 that is up to you to choose: the higher c is, the faster the function approaches the end point (but it may have to turn back if c > 4). (See the graphs below.)
The polynomial comes from the following calculation, where a = x0, b = v0, c = xe, d = ve, and e is the magic constant.
Explanation
Based on Nelfeal's answer, my idea was to try to solve the given problem with polynomials.
We can change the problem by defining a new axis along the direction P[last] - P[0], reducing the problem to one dimension.
We can think about the problem in continuous mathematics instead of discrete mathematics (eg use functions instead of sequences), and go back to the discrete world which is just a special case of the continuous.
We can change the unit for time and space so that the time is 1 and the distance is 1, so that the problem is simplified to
Find a function 𝒇 which satisfies the following :
𝒇(0) = 0 and 𝒇(1) = 1
𝒇'(0) = 0 and 𝒇'(1) = 0
For x∈ℝ, |𝒇''(x)| < c, where c is the maximum acceleration
We have
P(X) = ∑{i∈ℕ} a[i] X^i
P'(X) = ∑{i∈ℕ} (i+1) a[i+1] X^i
P''(X) = ∑{i∈ℕ} (i+2)(i+1) a[i+2] X^i
We need :
P(0) = 0
P(1) = 1
P'(0) = 0
P'(1) = 0
-c <= P''(x) <= c
Thus it means :
a[0] = 0 (from 1.)
a[1] = 0 (from 3.)
P(1) = ∑{i∈ℕ} a[i] = 1
P'(1) = ∑{i∈ℕ} (i+1) a[i+1] = 0
P''(x) = ∑{i∈ℕ} (i+2)(i+1) a[i+2] x^i ∈ [-c, c]
The third equation is the most complex one, and can be simplified by saying that P''(1) = c.
We will vary c to see what changes.
After inverting a 3x3 matrix, we get the following result:
P(x) = (c/2+6) x^2 - (c+8) x^3 + (c/2+3) x^4
For c=0.15, this gives :
For c=1, this gives:
For c=4, we see a bounce back :
If we take c from 0.1 to 6, we get following 3d graph :
Note that we have solved this for polynomials of degree 4, but you might do the same thing with higher degrees (up to 10 if you want to) to get more possibilities in your functions.
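If you want to experiment with the effect of c numerically, here is a small sketch in C of the normalized degree-4 solution; the value c = 1 is just a sample.

#include <stdio.h>

// Normalized solution from above: P(x) = (c/2+6) x^2 - (c+8) x^3 + (c/2+3) x^4,
// with P(0)=0, P(1)=1, P'(0)=P'(1)=0 and P''(1)=c.
double P(double x, double c)
{
    return (c/2 + 6)*x*x - (c + 8)*x*x*x + (c/2 + 3)*x*x*x*x;
}

int main(void)
{
    double c = 1.0;                       // try 0.15, 1, 4 as in the graphs above
    for (int i = 0; i <= 10; ++i) {
        double x = i / 10.0;
        printf("x = %.1f   P(x) = %.4f\n", x, P(x, c));
    }
    return 0;
}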
Since this is about remapping a uniform distribution to another with a different range, this is not a PHP question specifically although I am using PHP.
I have a cryptographicaly secure random number generator that gives me evenly distributed integers (uniform discrete distribution) between 0 and PHP_INT_MAX.
How do I remap these results to fit into a different range in an efficient manner?
Currently I am using $mappedRandomNumber = $randomNumber % ($range + 1) + $min where $range = $max - $min, but that obviously doesn't work since the first PHP_INT_MAX % $range integers from the range have a higher chance to be picked, breaking the uniformity of the distribution.
Well, having zero knowledge of PHP definitely qualifies me as an expert, so
mentally converting to float U[0,1)
f = r / PHP_INT_MAX
then doing
mapped = min + f*(max - min)
going back to integers
mapped = min + (r * max - r * min)/PHP_INT_MAX
if the computation is done via 64-bit math, with PHP_INT_MAX being 2^31, it should work
This is what I ended up doing. PRNG 101 (if it does not fit, ignore and generate again). Not very sophisticated, but simple:
public function rand($min = 0, $max = null){
    // pow(2,$numBits-1) calculated as (pow(2,$numBits-2)-1) + pow(2,$numBits-2)
    // to avoid overflow when $numBits is the number of bits of PHP_INT_MAX
    $maxSafe = (int) floor(
        ((pow(2,8*$this->intByteCount-2)-1) + pow(2,8*$this->intByteCount-2))
        /
        ($max - $min)
    ) * ($max - $min);
    // discards anything above the last interval N * {0 .. max - min -1}
    // that fits in {0 .. 2^(intBitCount-1)-1}
    do {
        $chars = $this->getRandomBytesString($this->intByteCount);
        $n = 0;
        for ($i = 0; $i < $this->intByteCount; $i++) {
            $n |= (ord($chars[$i]) << (8 * ($this->intByteCount - $i - 1)));
        }
    } while (abs($n) > $maxSafe);
    return (abs($n) % ($max - $min + 1)) + $min;
}
Any improvements are welcomed.
(Full code on https://github.com/elcodedocle/cryptosecureprng/blob/master/CryptoSecurePRNG.php)
Here is the sketch how I would do it:
Consider you have uniform random integer distribution in range [A, B) that's what your random number generator provide.
Let L = B - A.
Let P be the highest power of 2 such that P <= L.
Let X be a sample from this range.
First calculate Y = X - A.
If Y >= P, discard it and start with new X until you get an Y that fits.
Now Y is uniformly distributed over [0, P), so it contains log2(P) uniformly random bits (zero-extend it to exactly log2(P) bits if needed).
Now we have uniform random bit generator that can be used to provide arbitrary number of random bits as needed.
To generate a number in the target range, let [A_t, B_t) be the target range. Let L_t = B_t - A_t.
Let P_t be the smallest power of 2 such that P_t >= L_t.
Read log2(P_t) random bits and make an integer from it, let's call it X_t.
If X_t >= L_t, discard it and try again until you get a number that fits.
Your random number in the desired range will be X_t + A_t.
Implementation considerations: if your L_t and L are powers of 2, you never have to discard anything. If not, then even in the worst case you should get the right number in less than 2 trials on average.
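Here is a rough sketch of that scheme in C; rand_bits(k), which returns k uniformly random bits, is a hypothetical helper standing in for the bit source described above.

#include <stdint.h>

uint64_t rand_bits(unsigned k);   // hypothetical: returns k uniformly random bits

// Draw a uniform integer from the target range [A_t, B_t).
uint64_t uniform_in_range(uint64_t A_t, uint64_t B_t)
{
    uint64_t L_t = B_t - A_t;              // size of the target range
    unsigned k = 0;
    while (((uint64_t)1 << k) < L_t)       // smallest power of two P_t = 2^k >= L_t
        ++k;
    uint64_t X_t;
    do {
        X_t = rand_bits(k);                // read log2(P_t) random bits
    } while (X_t >= L_t);                  // discard and retry if it doesn't fit
    return A_t + X_t;
}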
John Carmack has a special function in the Quake III source code which calculates the inverse square root of a float, 4x faster than regular (float)(1.0/sqrt(x)), including a strange 0x5f3759df constant. See the code below. Can someone explain line by line what exactly is going on here and why this works so much faster than the regular implementation?
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;
    x2 = number * 0.5F;
    y = number;
    i = * ( long * ) &y;
    i = 0x5f3759df - ( i >> 1 );
    y = * ( float * ) &i;
    y = y * ( threehalfs - ( x2 * y * y ) );
#ifndef Q3_VM
#ifdef __linux__
    assert( !isnan(y) );
#endif
#endif
    return y;
}
FYI. Carmack didn't write it. Terje Mathisen and Gary Tarolli both take partial (and very modest) credit for it, as well as crediting some other sources.
How the mythical constant was derived is something of a mystery.
To quote Gary Tarolli:
Which actually is doing a floating point computation in integer - it took a long time to figure out how and why this works, and I can't remember the details anymore.
A slightly better constant, developed by an expert mathematician (Chris Lomont) trying to work out how the original algorithm worked is:
float InvSqrt(float x)
{
    float xhalf = 0.5f * x;
    int i = *(int*)&x;              // get bits for floating value
    i = 0x5f375a86 - (i >> 1);      // gives initial guess y0
    x = *(float*)&i;                // convert bits back to float
    x = x * (1.5f - xhalf * x * x); // Newton step, repeating increases accuracy
    return x;
}
In spite of this, his initial attempt at a mathematically 'superior' version of id's sqrt (which came to almost the same constant) proved inferior to the one initially developed by Gary, despite being mathematically much 'purer'. He couldn't explain why id's was so excellent iirc.
Of course these days, it turns out to be much slower than just using an FPU's sqrt (especially on 360/PS3), because swapping between float and int registers induces a load-hit-store, while the floating point unit can do reciprocal square root in hardware.
It just shows how optimizations have to evolve as the nature of underlying hardware changes.
Greg Hewgill and IllidanS4 gave links with excellent mathematical explanations.
I'll try to sum it up here for those who don't want to go too deep into the details.
Any mathematical function, with some exceptions, can be represented by a polynomial sum:
y = f(x)
can be exactly transformed into:
y = a0 + a1*x + a2*(x^2) + a3*(x^3) + a4*(x^4) + ...
Where a0, a1, a2,... are constants. The problem is that for many functions, like square root, this sum has an infinite number of terms for the exact value; it does not end at some x^n. But if we stop at some x^n, we would still have a result up to some precision.
So, if we have:
y = 1/sqrt(x)
In this particular case they decided to discard all polynomial terms above the second, probably because of calculation speed:
y = a0 + a1*x + [...discarded...]
The task has now come down to calculating a0 and a1 so that y has the least difference from the exact value. They calculated that the most appropriate values are:
a0 = 0x5f375a86
a1 = -0.5
So when you put this into equation you get:
y = 0x5f375a86 - 0.5*x
Which is the same as the line you see in the code:
i = 0x5f375a86 - (i >> 1);
Edit: actually, here y = 0x5f375a86 - 0.5*x is not the same as i = 0x5f375a86 - (i >> 1), since shifting the float's bits as an integer not only divides the value by two but also halves the exponent and causes some other artifacts; still, it comes down to calculating some coefficients a0, a1, a2...
At this point they've found out that this result's precision is not enough for the purpose. So they additionally did only one step of Newton's iteration to improve the result accuracy:
x = x * (1.5f - xhalf * x * x)
They could have done some more iterations in a loop, each one improving the result, until the required accuracy is met. This is exactly how it works in a CPU/FPU! But it seems that only one iteration was enough, which was also a blessing for the speed. The CPU/FPU does as many iterations as needed to reach the accuracy of the floating point type in which the result is stored, and it has a more general algorithm which works for all cases.
So in short, what they did is:
Use (almost) the same algorithm as the CPU/FPU, exploit the improved initial conditions for the special case of 1/sqrt(x), and don't calculate all the way to the precision the CPU/FPU will go to but stop earlier, thus gaining in calculation speed.
I was curious to see what the constant was as a float so I simply wrote this bit of code and googled the integer that popped out.
#include <stdio.h>
int main(void) {
    int i = 0x5F3759DF;          // 32-bit integer so the bit pattern lines up with a float
    float* fp = (float*)&i;
    printf("(2^127)^(1/2) = %f\n", *fp);
}
//Output
//(2^127)^(1/2) = 13211836172961054720.000000
It looks like the constant is "An integer approximation to the square root of 2^127 better known by the hexadecimal form of its floating-point representation, 0x5f3759df" https://mrob.com/pub/math/numbers-18.html
On the same site it explains the whole thing. https://mrob.com/pub/math/numbers-16.html#le009_16
According to this nice article written a while back...
The magic of the code, even if you can't follow it, stands out as the i = 0x5f3759df - (i>>1); line. Simplified, Newton-Raphson is an approximation that starts off with a guess and refines it with iteration. Taking advantage of the nature of 32-bit x86 processors, i, an integer, is initially set to the value of the floating point number you want to take the inverse square of, using an integer cast. i is then set to 0x5f3759df, minus itself shifted one bit to the right. The right shift drops the least significant bit of i, essentially halving it.
It's a really good read. This is only a tiny piece of it.
The code consists of two major parts. Part one calculates an approximation for 1/sqrt(y), and part two takes that number and runs one iteration of Newton's method to get a better approximation.
Calculating an approximation for 1/sqrt(y)
i = * ( long * ) &y;
i = 0x5f3759df - ( i >> 1 );
y = * ( float * ) &i;
Line 1 takes the floating point representation of y and treats it as an integer i. Line 2 shifts i over one bit and subtracts it from a mysterious constant. Line 3 takes the resulting number and converts it back to a standard float32. Now why does this work?
Let g be a function that maps a floating point number to its floating point representation, read as an integer. Line 1 above is setting i = g(y).
The following good approximation of g exists(*):
g(y) ≈ C*log_2(y) + D for some constants C and D. An intuition for why such a good approximation exists is that the floating point representation of y is roughly linear in the exponent.
The purpose of line 2 is to map from g(y) to g(1/sqrt(y)), after which line 3 can use g^-1 to map that number to 1/sqrt(y). Using the approximation above, we have g(1/sqrt(y)) ≈ C*log_2(1/sqrt(y)) + D = -C/2 * log_2(y) + D. We can use these formulas to calculate the map from g(y) to g(1/sqrt(y)), which is g(1/sqrt(y)) ≈ 3D/2 - 1/2 * g(y). In line 2, we have 0x5f3759df ≈ 3D/2, and i >> 1 ≈ 1/2 * g(y).
The constant 0x5f3759df is slightly smaller than the constant that gives the best possible approximation for g(1/sqrt(y)). That is because this step is not done in isolation. Due to the direction that Newton's method tends to miss in, using a slightly smaller constant tends to yield better results. The exact optimal constant to use in this setting depends on your input distribution of y, but 0x5f3759df is one such constant that gives good results over a fairly broad range.
A more detailed description of this process can be found on Wikipedia: https://en.wikipedia.org/wiki/Fast_inverse_square_root#Algorithm
(*) More explicitly, let y = 2^e*(1+f). Taking the log of both sides, we get log_2 y = e + log_2(1+f), which can be approximated as log_2 y ≈ e + f + σ for a small constant sigma. Separately, the float32 encoding of y expressed as an integer is g(y) ≈ 2^23 * (e+127) + f * 2^23. Combining the two equations, we get g(y) ≈ 2^23 * log_2 y + 2^23 * (127 - σ).
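As a quick sanity check of that footnote, the following sketch compares g(y), the bits of a float read as an integer, with 2^23 * (log_2(y) + 127 - σ); the value σ ≈ 0.045 used here is only an assumed ballpark figure.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

int main(void)
{
    const double sigma = 0.045;                // assumed small constant from the footnote
    for (double y = 0.5; y <= 64.0; y *= 2.7) {
        float f = (float)y;
        uint32_t g;
        memcpy(&g, &f, sizeof g);              // g(y): float bits reinterpreted as an integer
        double approx = 8388608.0 * (log2(y) + 127.0 - sigma);   // 2^23 = 8388608
        printf("y = %8.3f   g(y) = %10u   approx = %12.0f\n", y, g, approx);
    }
    return 0;
}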
Using Newton's method
y = y * ( threehalfs - ( x2 * y * y ) );
Consider the function f(y) = 1/y^2 - num. The positive zero of f is y = 1/sqrt(num), which is what we are interested in calculating.
Newton's method is an iterative algorithm for taking an approximation y_n for the zero of a function f, and calculating a better approximation y_n+1, using the following equation: y_n+1 = y_n - f(y_n)/f'(y_n).
Calculating what that looks like for our function f gives the following equation: y_n+1 = y_n - (-y_n+y_n^3*num)/2 = y_n * (3/2 - num/2 * y_n * y_n). This is exactly what the line of code above is doing.
You can learn more about the details of Newton's method here: https://en.wikipedia.org/wiki/Newton%27s_method
I've been profiling some of our core math on an Intel Core Duo, and while looking at various approaches to square root I've noticed something odd: using the SSE scalar operations, it is faster to take a reciprocal square root and multiply it to get the sqrt, than it is to use the native sqrt opcode!
I'm testing it with a loop something like:
inline float TestSqrtFunction( float in );

void TestFunc()
{
  #define ARRAYSIZE 4096
  #define NUMITERS 16386
  float flIn[ ARRAYSIZE ];   // filled with random numbers ( 0 .. 2^22 )
  float flOut [ ARRAYSIZE ]; // filled with 0 to force fetch into L1 cache
  cyclecounter.Start();
  for ( int i = 0 ; i < NUMITERS ; ++i )
    for ( int j = 0 ; j < ARRAYSIZE ; ++j )
    {
      flOut[j] = TestSqrtFunction( flIn[j] );
      // unrolling this loop makes no difference -- I tested it.
    }
  cyclecounter.Stop();
  printf( "%d loops over %d floats took %.3f milliseconds",
          NUMITERS, ARRAYSIZE, cyclecounter.Milliseconds() );
}
I've tried this with a few different bodies for the TestSqrtFunction, and I've got some timings that are really scratching my head. The worst of all by far was using the native sqrt() function and letting the "smart" compiler "optimize". At 24ns/float, using the x87 FPU this was pathetically bad:
inline float TestSqrtFunction( float in )
{ return sqrt(in); }
The next thing I tried was using an intrinsic to force the compiler to use SSE's scalar sqrt opcode:
inline void SSESqrt( float * restrict pOut, float * restrict pIn )
{
    _mm_store_ss( pOut, _mm_sqrt_ss( _mm_load_ss( pIn ) ) );
    // compiles to movss, sqrtss, movss
}
This was better, at 11.9ns/float. I also tried Carmack's wacky Newton-Raphson approximation technique, which ran even better than the hardware, at 4.3ns/float, although with an error of 1 in 2^10 (which is too much for my purposes).
The doozy was when I tried the SSE op for reciprocal square root, and then used a multiply to get the square root ( x * 1/√x = √x ). Even though this takes two dependent operations, it was the fastest solution by far, at 1.24ns/float and accurate to 2^-14:
inline void SSESqrt_Recip_Times_X( float * restrict pOut, float * restrict pIn )
{
    __m128 in = _mm_load_ss( pIn );
    _mm_store_ss( pOut, _mm_mul_ss( in, _mm_rsqrt_ss( in ) ) );
    // compiles to movss, movaps, rsqrtss, mulss, movss
}
My question is basically what gives? Why is SSE's built-in-to-hardware square root opcode slower than synthesizing it out of two other math operations?
I'm sure that this is really the cost of the op itself, because I've verified:
All data fits in cache, and
accesses are sequential
the functions are inlined
unrolling the loop makes no difference
compiler flags are set to full optimization (and the assembly is good, I checked)
(edit: stephentyrone correctly points out that operations on long strings of numbers should use the vectorizing SIMD packed ops, like rsqrtps — but the array data structure here is for testing purposes only: what I am really trying to measure is scalar performance for use in code that can't be vectorized.)
sqrtss gives a correctly rounded result. rsqrtss gives an approximation to the reciprocal, accurate to about 11 bits.
sqrtss is generating a far more accurate result, for when accuracy is required. rsqrtss exists for the cases when an approximation suffices, but speed is required. If you read Intel's documentation, you will also find an instruction sequence (reciprocal square-root approximation followed by a single Newton-Raphson step) that gives nearly full precision (~23 bits of accuracy, if I remember properly), and is still somewhat faster than sqrtss.
edit: If speed is critical, and you're really calling this in a loop for many values, you should be using the vectorized versions of these instructions, rsqrtps or sqrtps, both of which process four floats per instruction.
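For reference, here is a sketch of that sequence using intrinsics (an rsqrtss estimate refined by a single Newton-Raphson step); treat it as illustrative rather than tuned code.

#include <xmmintrin.h>

// y' = y * (1.5 - 0.5 * x * y * y): one Newton-Raphson step applied to the
// ~11-bit rsqrtss estimate, which brings it close to full single precision.
static inline float rsqrt_nr( float x )
{
    __m128 v     = _mm_set_ss( x );
    __m128 y     = _mm_rsqrt_ss( v );              // initial approximation
    __m128 halfx = _mm_mul_ss( _mm_set_ss( 0.5f ), v );
    __m128 yy    = _mm_mul_ss( y, y );
    __m128 r     = _mm_sub_ss( _mm_set_ss( 1.5f ), _mm_mul_ss( halfx, yy ) );
    return _mm_cvtss_f32( _mm_mul_ss( y, r ) );
}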
There are a number of other answers to this already from a few years ago. Here's what the consensus got right:
The rsqrt* instructions compute an approximation to the reciprocal square root, good to about 11-12 bits.
It's implemented with a lookup table (i.e. a ROM) indexed by the mantissa. (In fact, it's a compressed lookup table, similar to mathematical tables of old, using adjustments to the low-order bits to save on transistors.)
The reason why it's available is that it is the initial estimate used by the FPU for the "real" square root algorithm.
There's also an approximate reciprocal instruction, rcp. Both of these instructions are a clue to how the FPU implements square root and division.
Here's what the consensus got wrong:
SSE-era FPUs do not use Newton-Raphson to compute square roots. It's a great method in software, but it would be a mistake to implement it that way in hardware.
The N-R algorithm to compute reciprocal square root has this update step, as others have noted:
x' = 0.5 * x * (3 - n*x*x);
That's a lot of data-dependent multiplications and one subtraction.
What follows is the algorithm that modern FPUs actually use.
Given b[0] = n, suppose we can find a series of numbers Y[i] such that b[n] = b[0] * Y[0]^2 * Y[1]^2 * ... * Y[n]^2 approaches 1. Then consider:
x[n] = b[0] * Y[0] * Y[1] * ... * Y[n]
y[n] = Y[0] * Y[1] * ... * Y[n]
Clearly x[n] approaches sqrt(n) and y[n] approaches 1/sqrt(n).
We can use the Newton-Raphson update step for reciprocal square root to get a good Y[i]:
b[i] = b[i-1] * Y[i-1]^2
Y[i] = 0.5 * (3 - b[i])
Then:
x[0] = n Y[0]
x[i] = x[i-1] * Y[i]
and:
y[0] = Y[0]
y[i] = y[i-1] * Y[i]
The next key observation is that b[i] = x[i-1] * y[i-1]. So:
Y[i] = 0.5 * (3 - x[i-1] * y[i-1])
= 1 + 0.5 * (1 - x[i-1] * y[i-1])
Then:
x[i] = x[i-1] * (1 + 0.5 * (1 - x[i-1] * y[i-1]))
     = x[i-1] + x[i-1] * 0.5 * (1 - x[i-1] * y[i-1])
y[i] = y[i-1] * (1 + 0.5 * (1 - x[i-1] * y[i-1]))
     = y[i-1] + y[i-1] * 0.5 * (1 - x[i-1] * y[i-1])
That is, given initial x and y, we can use the following update step:
r = 0.5 * (1 - x * y)
x' = x + x * r
y' = y + y * r
Or, even fancier, we can set h = 0.5 * y. This is the initialisation:
Y = approx_rsqrt(n)
x = Y * n
h = Y * 0.5
And this is the update step:
r = 0.5 - x * h
x' = x + x * r
h' = h + h * r
This is Goldschmidt's algorithm, and it has a huge advantage if you're implementing it in hardware: the "inner loop" is three multiply-adds and nothing else, and two of them are independent and can be pipelined.
In 1999, FPUs already needed a pipelined add/subtract circuit and a pipelined multiply circuit, otherwise SSE would not be very "streaming". Only one of each circuit was needed in 1999 to implement this inner loop in a fully-pipelined way without wasting a lot of hardware just on square root.
Today, of course, we have fused multiply-add exposed to the programmer. Again, the inner loop is three pipelined FMAs, which are (again) generally useful even if you're not computing square roots.
This is also true for division. MULSS(a,RCPSS(b)) is way faster than DIVSS(a,b). In fact it's still faster even when you increase its precision with a Newton-Raphson iteration.
Intel and AMD both recommend this technique in their optimisation manuals. In applications which don't require IEEE-754 compliance, the only reason to use div/sqrt is code readability.
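For illustration, here is a scalar software sketch of the Goldschmidt square-root iteration described above; hardware would use its own initial estimate and a fixed number of steps.

// Goldschmidt iteration seeded with a reciprocal-square-root estimate Y
// (for example the rsqrtss result). x converges to sqrt(n), h to 1/(2*sqrt(n)).
float goldschmidt_sqrt( float n, float Y )
{
    float x = Y * n;
    float h = Y * 0.5f;
    for ( int i = 0; i < 3; ++i ) {
        float r = 0.5f - x * h;   // the three multiply-adds of the inner loop
        x = x + x * r;
        h = h + h * r;
    }
    return x;                     // 2*h approximates 1/sqrt(n)
}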
Instead of supplying an answer that might actually be incorrect (I'm also not going to check or argue about cache and other stuff; let's say they are identical), I'll try to point you to a source that can answer your question.
The difference might lie in how sqrt and rsqrt are computed. You can read more here http://www.intel.com/products/processor/manuals/. I'd suggest starting by reading about the processor functions you are using; there is some info, especially about rsqrt (the CPU uses an internal lookup table with a rough approximation, which makes it much simpler to get the result). It may seem that rsqrt is so much faster than sqrt that one additional mul operation (which isn't too costly) might not change the situation here.
Edit: A few facts that might be worth mentioning:
1. Once I was doing some micro-optimizations for my graphics library and I used rsqrt for computing the length of vectors (instead of sqrt, I multiplied my sum of squares by its rsqrt, which is exactly what you've done in your tests), and it performed better.
2. Computing rsqrt using a simple lookup table might be easier: for rsqrt, when x goes to infinity, 1/sqrt(x) goes to 0, so for big x's the function values don't change (much), whereas sqrt goes to infinity, so it's not that simple a case ;).
Also, a clarification: I'm not sure where I found it in the books I've linked, but I'm pretty sure I've read that rsqrt uses some lookup table, and that it should be used only when the result doesn't need to be exact, although I might be wrong as well, as it was some time ago :).
Newton-Raphson converges to the zero of f(x) using increments equal to -f/f', where f' is the derivative.
For x=sqrt(y), you can try to solve f(x) = 0 for x using f(x) = x^2 - y;
Then the increment is: dx = -f/f' = 1/2 (y/x - x) = -1/2 (x^2 - y) / x
which has a slow divide in it.
You can try other functions (like f(x) = 1/y - 1/x^2) but they will be equally complicated.
Let's look at 1/sqrt(y) now. You can try f(x) = x^2 - 1/y, but it will be equally complicated: dx = (1 - y*x^2) / (2*x*y), for instance.
One non-obvious alternate choice for f(x) is: f(x) = y - 1/x^2
Then: dx = -f/f' = -(y - 1/x^2) / (2/x^3) = 1/2 * x * (1 - y * x^2)
Ah! It's not a trivial expression, but you only have multiplies in it, no divide. => Faster!
And: the full update step new_x = x + dx then reads:
x *= 3/2 - y/2 * x * x which is easy too.
It is faster because these instructions ignore rounding modes and do not handle floating point exceptions or denormalized numbers. For these reasons it is much easier to pipeline, speculate, and execute other FP instructions out of order.
Suppose you have a list of floating point numbers that are approximately multiples of a common quantity, for example
2.468, 3.700, 6.1699
which are approximately all multiples of 1.234. How would you characterize this "approximate gcd", and how would you proceed to compute or estimate it?
Strictly related to my answer to this question.
You can run Euclid's gcd algorithm with anything smaller than 0.01 (or a small number of your choice) being a pseudo 0. With your numbers:
3.700 = 1 * 2.468 + 1.232,
2.468 = 2 * 1.232 + 0.004.
So the pseudo gcd of the first two numbers is 1.232. Now you take the gcd of this with your last number:
6.1699 = 5 * 1.232 + 0.0099.
So 1.232 is the pseudo gcd, and the multiples are 2, 3, 5. To improve this result, you may take the linear regression on the data points:
(2,2.468), (3,3.7), (5,6.1699).
The slope is the improved pseudo gcd.
Caveat: the first part of this algorithm is numerically unstable - if you start with very dirty data, you are in trouble.
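For reference, here is a minimal sketch of that pseudo-gcd in C, with eps playing the role of the "pseudo 0" (0.01 above):

#include <stdio.h>
#include <math.h>

// Euclid's algorithm with a tolerance: any remainder smaller than eps counts as zero.
double pseudo_gcd( double a, double b, double eps )
{
    while ( b > eps ) {
        double r = fmod( a, b );
        a = b;
        b = r;
    }
    return a;
}

int main(void)
{
    double g = pseudo_gcd( 3.700, 2.468, 0.01 );   // -> 1.232
    g = pseudo_gcd( 6.1699, g, 0.01 );             // still 1.232
    printf( "pseudo gcd = %.4f\n", g );
    return 0;
}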
Express your measurements as multiples of the lowest one. Thus your list becomes 1.00000, 1.49919, 2.49996. The fractional parts of these values will be very close to 1/Nths, for some value of N dictated by how close your lowest value is to the fundamental frequency. I would suggest looping through increasing N until you find a sufficiently refined match. In this case, for N=1 (that is, assuming X=2.468 is your fundamental frequency) you would find a standard deviation of 0.3333 (two of the three values are .5 off of X * 1), which is unacceptably high. For N=2 (that is, assuming 2.468/2 is your fundamental frequency) you would find a standard deviation of virtually zero (all three values are within .001 of a multiple of X/2), thus 2.468/2 is your approximate GCD.
The major flaw in my plan is that it works best when the lowest measurement is the most accurate, which is likely not the case. This could be mitigated by performing the entire operation multiple times, discarding the lowest value on the list of measurements each time, then use the list of results of each pass to determine a more precise result. Another way to refine the results would be adjust the GCD to minimize the standard deviation between integer multiples of the GCD and the measured values.
This reminds me of the problem of finding good rational-number approximations of real numbers. The standard technique is a continued-fraction expansion:
def rationalizations(x):
    assert 0 <= x
    ix = int(x)
    yield ix, 1
    if x == ix: return
    for numer, denom in rationalizations(1.0/(x-ix)):
        yield denom + ix * numer, numer
We could apply this directly to Jonathan Leffler's and Sparr's approach:
>>> a, b, c = 2.468, 3.700, 6.1699
>>> b/a, c/a
(1.4991896272285252, 2.4999594813614263)
>>> list(itertools.islice(rationalizations(b/a), 3))
[(1, 1), (3, 2), (925, 617)]
>>> list(itertools.islice(rationalizations(c/a), 3))
[(2, 1), (5, 2), (30847, 12339)]
picking off the first good-enough approximation from each sequence. (3/2 and 5/2 here.) Or instead of directly comparing 3.0/2.0 to 1.499189..., you could notice that 925/617 uses much larger integers than 3/2, making 3/2 an excellent place to stop.
It shouldn't much matter which of the numbers you divide by. (Using a/b and c/b you get 2/3 and 5/3, for instance.) Once you have integer ratios, you could refine the implied estimate of the fundamental using shsmurfy's linear regression. Everybody wins!
I'm assuming all of your numbers are integer multiples of a common value. For the rest of my explanation, A will denote the "root" frequency you are trying to find and B will be an array of the numbers you have to start with.
What you are trying to do is superficially similar to linear regression. You are trying to find a linear model y=mx+b that minimizes the average distance between a linear model and a set of data. In your case, b=0, m is the root frequency, and y represents the given values. The biggest problem is that the independent variables X are not explicitly given. The only thing we know about X is that all of its members must be integers.
Your first task is to determine these independent variables. The best method I can think of at the moment assumes that the given frequencies have nearly consecutive indexes (x_1 = x_0 + n). So B_0/B_1 = x_0/(x_0 + n) for a (hopefully) small integer n. You can then take advantage of the fact that x_0 = n*B_0/(B_1 - B_0), start with n=1, and keep ratcheting it up until x_0 - rnd(x_0) is within a certain threshold. After you have x_0 (the initial index), you can approximate the root frequency (A = B_0/x_0). Then you can approximate the other indexes by finding x_n = rnd(B_n/A). This method is not very robust and will probably fail if the error in the data is large.
If you want a better approximation of the root frequency A, you can use linear regression to minimize the error of the linear model now that you have the corresponding dependent variables. The easiest method to do so uses least squares fitting. Wolfram's MathWorld has an in-depth mathematical treatment of the issue, but a fairly simple explanation can be found with some googling.
Interesting question...not easy.
I suppose I would look at the ratios of the sample values:
3.700 / 2.468 = 1.499...
6.1699 / 2.468 = 2.4999...
6.1699 / 3.700 = 1.6675...
And I'd then be looking for a simple ratio of integers in those results.
1.499 ~= 3/2
2.4999 ~= 5/2
1.6675 ~= 5/3
I haven't chased it through, but somewhere along the line, you decide that an error of 1:1000 or something is good enough, and you back-track to find the base approximate GCD.
The solution which I've seen and used myself is to choose some constant, say 1000, multiply all numbers by this constant, round them to integers, find the GCD of these integers using the standard algorithm and then divide the result by the said constant (1000). The larger the constant, the higher the precision.
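A straightforward sketch of that idea in C; the scaling constant is a parameter, and with noisy measurements its choice matters.

// Scale by a constant, round to integers, take the ordinary integer gcd,
// then scale the result back down.
static long igcd( long a, long b )
{
    while ( b != 0 ) { long t = a % b; a = b; b = t; }
    return a;
}

double approx_gcd( const double *xs, int n, double scale )
{
    long g = 0;                                        // gcd(0, x) == x
    for ( int i = 0; i < n; ++i )
        g = igcd( g, (long)( xs[i] * scale + 0.5 ) );  // round to nearest integer
    return (double)g / scale;
}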
This is a reformulation of shsmurfy's solution when you choose 3 positive tolerances (e1, e2, e3) a priori.
The problem is then to search for the smallest positive integers (n1, n2, n3), and thus the largest root frequency f, such that:
f1 = n1*f +/- e1
f2 = n2*f +/- e2
f3 = n3*f +/- e3
We assume 0 <= f1 <= f2 <= f3
If we fix n1, then we get these relations:
f is in interval I1=[(f1-e1)/n1 , (f1+e1)/n1]
n2 is in interval I2=[n1*(f2-e2)/(f1+e1) , n1*(f2+e2)/(f1-e1)]
n3 is in interval I3=[n1*(f3-e3)/(f1+e1) , n1*(f3+e3)/(f1-e1)]
We start with n1 = 1, then increment n1 until the intervals I2 and I3 each contain an integer - that is, floor(I2min) differs from floor(I2max), and the same for I3.
We then choose the smallest integer n2 in interval I2, and the smallest integer n3 in interval I3.
Assuming normal distribution of floating point errors, the most probable estimate of root frequency f is the one minimizing
J = (f1/n1 - f)^2 + (f2/n2 - f)^2 + (f3/n3 - f)^2
That is
f = (f1/n1 + f2/n2 + f3/n3)/3
If there are several integers n2, n3 in intervals I2, I3, we could also choose the pair that minimizes the residue
min(J)*3/2=(f1/n1)^2+(f2/n2)^2+(f3/n3)^2-(f1/n1)*(f2/n2)-(f1/n1)*(f3/n3)-(f2/n2)*(f3/n3)
Another variant could be to continue the iteration and try to minimize another criterion, like min(J(n1))*n1, until f falls below a certain frequency (n1 reaches an upper limit)...
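A small numeric sketch of this search on the question's numbers, with the three tolerances chosen arbitrarily as 0.01:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double f1 = 2.468, f2 = 3.700, f3 = 6.1699;
    double e1 = 0.01,  e2 = 0.01,  e3 = 0.01;          // a priori tolerances (assumed)

    for (int n1 = 1; n1 < 100; ++n1) {
        // intervals I2 and I3 implied by this choice of n1
        double lo2 = n1 * (f2 - e2) / (f1 + e1), hi2 = n1 * (f2 + e2) / (f1 - e1);
        double lo3 = n1 * (f3 - e3) / (f1 + e1), hi3 = n1 * (f3 + e3) / (f1 - e1);
        int n2 = (int)ceil(lo2), n3 = (int)ceil(lo3);
        if (n2 <= hi2 && n3 <= hi3) {                  // both intervals contain an integer
            double f = (f1/n1 + f2/n2 + f3/n3) / 3.0;  // most probable estimate of f
            printf("n1=%d n2=%d n3=%d  f=%.6f\n", n1, n2, n3, f);
            break;
        }
    }
    return 0;
}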
I found this question looking for answers for mine in MathStackExchange (here and here).
So far I've only managed to measure the appeal of a fundamental frequency given a list of harmonic frequencies (following the sound/music nomenclature), which can be useful if you have a reduced number of options and it is feasible to compute the appeal of each one and then choose the best fit.
C&P from my question in MSE (there the formatting is prettier):
with v being the list {v_1, v_2, ..., v_n}, ordered from lowest to highest
mean_sin(v, x) = sum(sin(2*pi*v_i/x), for i in {1, ...,n})/n
mean_cos(v, x) = sum(cos(2*pi*v_i/x), for i in {1, ...,n})/n
gcd_appeal(v, x) = 1 - sqrt(mean_sin(v, x)^2 + (mean_cos(v, x) - 1)^2)/2, which yields a number in the interval [0,1].
The goal is to find the x that maximizes the appeal. Here is the (gcd_appeal) graph for your example [2.468, 3.700, 6.1699], where you find that the optimum GCD is at x = 1.2337899957639993
Edit:
You may find this Java code handy for calculating the (fuzzy) divisibility (aka gcd_appeal) of a divisor relative to a list of dividends; you can use it to test which of your candidates makes the best divisor. The code looks ugly because I tried to optimize it for performance.
//returns the mean divisibility of dividend/divisor as a value in the range [0 and 1]
// 0 means no divisibility at all
// 1 means full divisibility
public double divisibility(double divisor, double... dividends) {
    double n = dividends.length;
    double factor = 2.0 / divisor;
    double sum_x = -n;
    double sum_y = 0.0;
    double[] coord = new double[2];
    for (double v : dividends) {
        coordinates(v * factor, coord);
        sum_x += coord[0];
        sum_y += coord[1];
    }
    double err = 1.0 - Math.sqrt(sum_x * sum_x + sum_y * sum_y) / (2.0 * n);
    //Might happen due to approximation error
    return err >= 0.0 ? err : 0.0;
}

private void coordinates(double x, double[] out) {
    //Bhaskara performant approximation to
    //out[0] = Math.cos(Math.PI*x);
    //out[1] = Math.sin(Math.PI*x);
    long cos_int_part = (long) (x + 0.5);
    long sin_int_part = (long) x;
    double rem = x - cos_int_part;
    if (cos_int_part != sin_int_part) {
        double common_s = 4.0 * rem;
        double cos_rem_s = common_s * rem - 1.0;
        double sin_rem_s = cos_rem_s + common_s + 1.0;
        out[0] = (((cos_int_part & 1L) * 8L - 4L) * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (((sin_int_part & 1L) * 8L - 4L) * sin_rem_s) / (sin_rem_s + 5.0);
    } else {
        double common_s = 4.0 * rem - 4.0;
        double sin_rem_s = common_s * rem;
        double cos_rem_s = sin_rem_s + common_s + 3.0;
        double common_2 = ((cos_int_part & 1L) * 8L - 4L);
        out[0] = (common_2 * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (common_2 * sin_rem_s) / (sin_rem_s + 5.0);
    }
}
}