Determining Floating Point Square Root - algorithm

How do I determine the square root of a floating point number? Is the Newton-Raphson method a good way? I have no hardware square root either. I also have no hardware divide (but I have implemented floating point divide).
If possible, I would prefer to reduce the number of divides as much as possible since they are so expensive.
Also, what should be the initial guess to reduce the total number of iterations???
Thank you so much!

When you use Newton-Raphson to compute a square-root, you actually want to use the iteration to find the reciprocal square root (after which you can simply multiply by the input--with some care for rounding--to produce the square root).
More precisely: we use the function f(x) = x^-2 - n. Clearly, if f(x) = 0, then x = 1/sqrt(n). This gives rise to the newton iteration:
x_(i+1) = x_i - f(x_i)/f'(x_i)
= x_i - (x_i^-2 - n)/(-2x_i^-3)
= x_i + (x_i - nx_i^3)/2
= x_i*(3/2 - 1/2 nx_i^2)
Note that (unlike the iteration for the square root), this iteration for the reciprocal square root involves no divisions, so it is generally much more efficient.
I mentioned in your question on divide that you should look at existing soft-float libraries, rather than re-inventing the wheel. That advice applies here as well. This function has already been implemented in existing soft-float libraries.
Edit: the questioner seems to still be confused, so let's work an example: sqrt(612). 612 is 1.1953125 x 2^9 (or b1.0011001 x 2^9, if you prefer binary). Pull out the even portion of the exponent (9) to write the input as f * 2^(2m), where m is an integer and f is in the range [1,4). Then we will have:
sqrt(n) = sqrt(f * 2^(2m)) = sqrt(f) * 2^m
applying this reduction to our example gives f = 1.1953125 * 2 = 2.390625 (b10.011001) and m = 4. Now do a newton-raphson iteration to find x = 1/sqrt(f), using a starting guess of 0.5 (as I noted in a comment, this guess converges for all f, but you can do significantly better using a linear approximation as an initial guess):
x_0 = 0.5
x_1 = x_0*(3/2 - 1/2 * 2.390625 * x_0^2)
= 0.6005859...
x_2 = x_1*(3/2 - 1/2 * 2.390625 * x_1^2)
= 0.6419342...
x_3 = 0.6467077...
x_4 = 0.6467616...
So even with a (relatively bad) initial guess, we get rapid convergence to the true value of 1/sqrt(f) = 0.6467616600226026.
Now we simply assemble the final result:
sqrt(f) = x_n * f = 1.5461646...
sqrt(n) = sqrt(f) * 2^m = 24.738633...
And check: sqrt(612) = 24.738633...
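To make the recipe concrete, here is a minimal Python sketch of the whole scheme (math.frexp/ldexp stand in for the exponent handling a real soft-float library would do with bit operations, and the iteration count is a conservative choice, not tuned):

import math

def soft_sqrt(n, iters=5):
    # Reduce to n = f * 2**(2m) with f in [1, 4) and m an integer.
    mant, e = math.frexp(n)            # n = mant * 2**e, mant in [0.5, 1)
    if e % 2:
        f, e = mant * 2.0, e - 1       # f in [1, 2), exponent now even
    else:
        f, e = mant * 4.0, e - 2       # f in [2, 4), exponent now even
    # Newton-Raphson for x = 1/sqrt(f); x0 = 0.5 converges for all f in [1, 4).
    x = 0.5
    for _ in range(iters):
        x = x * (1.5 - 0.5 * f * x * x)
    # sqrt(n) = (x * f) * 2**m -- a multiply, no divisions.
    return math.ldexp(x * f, e // 2)

print(soft_sqrt(612.0))                # 24.7386...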
Obviously, if you want correct rounding, careful analysis is needed to ensure that you carry sufficient precision at each stage of the computation. This requires careful bookkeeping, but it isn't rocket science. You simply keep careful error bounds and propagate them through the algorithm.
If you want correct rounding without explicitly checking a residual, you need to compute sqrt(f) to a precision of 2p + 2 bits (where p is the precision of the source and destination type). However, you can also take the strategy of computing sqrt(f) to a little more than p bits, squaring that value, and adjusting the trailing bit by one if necessary (which is often cheaper).
sqrt is nice in that it is a unary function, which makes exhaustive testing for single-precision feasible on commodity hardware.
You can find the OS X soft-float sqrtf function on opensource.apple.com, which uses the algorithm described above (I wrote it, as it happens). It is licensed under the APSL, which may or may not be suitable for your needs.

Probably (still) the fastest implementation for finding the inverse square root, and the 10 lines of code that I adore the most: the famous fast inverse square root from Quake III.
It's based on Newton approximation, but with a few quirks. There's even a great story around it.
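For reference, a rough Python transcription of that trick (struct plays the role of the C pointer cast, and this models 32-bit floats only):

import struct

def fast_inv_sqrt(x):
    # Reinterpret the float32 bit pattern as a 32-bit integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)                  # the magic-constant first guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)         # one Newton-Raphson refinement

print(fast_inv_sqrt(612.0))                    # ~0.0404; exact is 0.04042...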

Easiest to implement (you can even implement this in a calculator):
def sqrt(x, TOL=0.000001):
    y = 1.0
    while abs(x/y - y) > TOL:
        y = (y + x/y)/2.0
    return y
This is exactly the Newton-Raphson iteration:
y(new) = y - f(y)/f'(y)
f(y) = y^2-x and f'(y) = 2y
Substituting these values:
y(new) = y - (y^2-x)/2y = (y^2+x)/2y = (y+x/y)/2
If division is expensive you should consider: http://en.wikipedia.org/wiki/Shifting_nth-root_algorithm .
Shifting algorithms:
Let us assume you have two numbers a and b such that the least significant set bit of a is larger than b, and b has only one bit set (e.g. a = 1000 and b = 10 in binary). Let s(b) = log_2(b) (which is just the position of the 1 bit in b).
Assume we already know the value of a^2. Now (a+b)^2 = a^2 + 2ab + b^2. a^2 is already known, 2ab: shift a by s(b)+1, b^2: shift b by s(b).
Algorithm:
Initialize a such that a has only one bit equal to one and a^2<= n < (2*a)^2.
Let q=s(a).
b = a
sqra = a*a
For i = q-1 to -10 (or whatever significance you want):
    b = b/2
    sqrab = sqra + 2ab + b^2
    if sqrab > n:
        continue
    sqra = sqrab
    a = a+b
n=612
a=10000 (16)
sqra = 256
Iteration 1:
b=01000 (8)
sqrab = (a+b)^2 = 24^2 = 576
sqrab < n => a=a+b = 24
Iteration 2:
b = 4
sqrab = (a+b)^2 = 28^2 = 784
sqrab > n => a=a
Iteration 3:
b = 2
sqrab = (a+b)^2 = 26^2 = 676
sqrab > n => a=a
Iteration 4:
b = 1
sqrab = (a+b)^2 = 25^2 = 625
sqrab > n => a=a
Iteration 5:
b = 0.5
sqrab = (a+b)^2 = 24.5^2 = 600.25
sqrab < n => a=a+b = 24.5
Iteration 6:
b = 0.25
sqrab = (a+b)^2 = 24.75^2 = 612.5625
sqrab > n => a=a
Iteration 7:
b = 0.125
sqrab = (a+b)^2 = 24.625^2 = 606.390625
sqrab < n => a=a+b = 24.625
and so on.
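A minimal Python sketch of the procedure, using floats for the fractional bits just like the worked example (a fixed-point implementation would use shifts throughout):

def shifting_sqrt(n, frac_bits=10):
    # One bit of the root is decided per iteration; sqra tracks a*a so
    # no squaring is ever redone from scratch.
    a = 1.0
    q = 0
    while (2 * a) ** 2 <= n:            # initialize: a^2 <= n < (2a)^2
        a *= 2
        q += 1
    sqra = a * a
    b = a
    for _ in range(q + frac_bits):      # bits 2^(q-1) down to 2^-frac_bits
        b /= 2
        sqrab = sqra + 2 * a * b + b * b   # (a+b)^2 via the identity above
        if sqrab <= n:                  # keep the bit only if no overshoot
            sqra = sqrab
            a += b
    return a

print(shifting_sqrt(612))   # 24.73828125: sqrt(612) truncated to 10 fractional bits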

A good approximation to square root on the range [1,4) is
def sqrt(x):
    y = x*(-0.000267)
    y = x*(0.004686+y)
    y = x*(-0.034810+y)
    y = x*(0.144780+y)
    y = x*(-0.387893+y)
    y = x*(0.958108+y)
    return y+0.315413
Normalise your floating point number so the mantissa is in the range [1,4), use the above algorithm on it, and then divide the exponent by 2. No floating point divisions anywhere.
With the same CPU time budget you can probably do much better, but that seems like a good starting point.
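A sketch of that normalization in Python, reusing the polynomial above (math.frexp/ldexp stand in for direct exponent manipulation):

import math

def sqrt_poly(x):
    m, e = math.frexp(x)           # x = m * 2**e, m in [0.5, 1)
    if e % 2:                      # make the exponent even so it halves cleanly
        f, e = m * 2.0, e - 1      # f in [1, 2)
    else:
        f, e = m * 4.0, e - 2      # f in [2, 4)
    y = f * (-0.000267)            # evaluate the polynomial above on [1, 4)
    y = f * (0.004686 + y)
    y = f * (-0.034810 + y)
    y = f * (0.144780 + y)
    y = f * (-0.387893 + y)
    y = f * (0.958108 + y)
    y += 0.315413
    return math.ldexp(y, e // 2)   # reattach half of the (now even) exponent

print(sqrt_poly(612.0))            # ~24.7386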

Related

Find more independent seed values for a 64-bit LCG (MMIX, by Knuth)

I'm using a 64-bit LCG (MMIX, by Knuth). It generates a certain block of random numbers inside my code, which uses them to perform some operations. My code runs on a single core and I would like to parallelize the work to reduce the execution time.
Before thinking about more advanced methods, I'd like to simply run several identical copies of the code in parallel (the code repeats the same task over a certain number of independent simulations, so I can simply split the simulations between identical copies and run them in parallel).
My only problem now is finding a seed for each copy; in particular, to avoid the possibility of unwanted non-trivial correlations between data generated in different copies, I have to be sure that the random numbers generated in the various copies don't overlap. To do so, starting from a certain seed in the first copy, I have to find a way to obtain a value (the next seed) very distant from it, not in absolute value but in the pseudo-random sequence (so that going from the first to the second seed requires a huge number of LCG steps).
My first attempt was this:
starting from the LCG relation between 2 consecutive numbers generated in the sequence,
I_(j+1) = (a*I_j + c) mod m,
which iterated n times gives
I_(j+n) = (a^n * I_j + c*(a^n - 1)/(a - 1)) mod m.
So, in principle, I could calculate the above relation with, say, n = 2^40 and I_0 equal to the value of the first seed, and obtain a new seed distant 2^40 steps in the random LCG sequence from the first one.
The problem is that, doing so, I necessarily overflow when calculating a^n. In fact for MMIX (by Knuth) a ~ 2^62 and I use unsigned long long int (64 bit). Note that the only problem here is the fraction in the above relation. If there were only sums and multiplications I could ignore the overflow, thanks to the modular properties
(x + y) mod m = ((x mod m) + (y mod m)) mod m
(x * y) mod m = ((x mod m) * (y mod m)) mod m
(in fact I'm using m = 2^64, a 64-bit generator, so products and sums simply wrap around).
So, starting from a certain value (the first seed), how can I find a second one distant a huge number of steps in the LCG pseudo-random sequence?
[EDIT]
r3mainer's solution is perfectly suited to Python. I'm trying now to implement it in C using unsigned __int128 variables. I have only one problem: in principle I should compute
(a^n - 1) mod (m*(a - 1)).
Say, for simplicity, I want to compute
a^n mod (m*(a - 1))
with n = 2^40 and m*(a-1) ~ 2^126. I proceed with a loop. Starting with temp = a, in each iteration I compute temp = temp*temp, then temp mod (m*(a-1)). The problem is in the second step (temp = temp*temp). temp could in principle be any number < m*(a-1) ~ 2^126. If temp is a big number, say > 2^64, I'll overflow, wrapping past 2^128 - 1, before the next modulo operation. So is there a way to avoid it? For now the only solution I see is to perform each multiplication with a loop over bits, as suggested here: c code: prevent overflow in modular operation with huge modules (modules near the overflow treshold)
Is there another way to perform the modulo operation during the multiplication?
(Note that with m = 2^64, the mod m operation itself is not a problem, because the overflow point of unsigned long long coincides with the modulus.)
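(For illustration, here is that bit-loop multiplication in Python form; Python integers don't overflow, so this only shows the structure you would port to C:)

def mulmod(x, y, m):
    # (x * y) % m using only additions and doublings of reduced values,
    # so no intermediate ever exceeds 2*m.
    result = 0
    x %= m
    while y:
        if y & 1:
            result = (result + x) % m
        x = (x + x) % m        # double x, keeping it reduced mod m
        y >>= 1
    return result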
Any LCG of the form x[n+1] = (x[n] * a + c) % m can be skipped to an arbitrary position very quickly.
Starting with a seed value of zero, the first few iterations of the LCG will give you this sequence:
x₀ = 0
x₁ = c % m
x₂ = (c(a + 1)) % m
x₃ = (c(a² + a + 1)) % m
x₄ = (c(a³ + a² + a + 1)) % m
It's pretty easy to see that each term is actually the sum of a geometric series, which can be calculated with a simple formula:
x_n = (c(a^{n-1} + a^{n-2} + ... + a + 1)) % m
= (c * (a^n - 1) / (a - 1)) % m
The (a^n - 1) term can be calculated quickly by modular exponentiation, but dividing by (a-1) is a bit tricky because (a-1) and m are both even (i.e., not coprime), so we can't calculate the modular multiplicative inverse of (a-1) mod m directly.
Instead, calculate (a^n-1) mod m*(a-1), then perform a straightforward (non-modular) division of the result by a-1. In Python, the calculation would go something like this:
def lcg_skip(m, a, c, n):
    # Calculate nth term of LCG sequence with parameters m (modulus),
    # a (multiplier) and c (increment), assuming an initial seed of zero
    a1 = a - 1
    t = pow(a, n, m * a1) - 1
    t = (t * c // a1) % m
    return t

def test(nsteps):
    m = 2**64
    a = 6364136223846793005
    c = 1442695040888963407
    #
    print("Calculating by brute force:")
    seed = 0
    for i in range(nsteps):
        seed = (seed * a + c) % m
    print(seed)
    #
    print("Calculating by fast method:")
    # Calculate nth term by modular exponentiation
    print(lcg_skip(m, a, c, nsteps))

test(1000000)
So to create LCGs with non-overlapping output sequences, all you would need to do is use initial seed values generated by lcg_skip() with values of n that are far enough apart.
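For example, seeds for several non-overlapping streams could be generated like this (the stream count and stride here are arbitrary illustrative choices):

# Carve the period into non-overlapping streams of 2**40 values each,
# seeding stream i at position i * 2**40 of the sequence (lcg_skip as above).
m, a, c = 2**64, 6364136223846793005, 1442695040888963407
stride = 2**40
stream_seeds = [lcg_skip(m, a, c, i * stride) for i in range(4)]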
Well, for an LCG it is a known property that you can jump forward and backward in O(log2(N)) time, where N is the distance between jump points; see the paper by F. Brown, "Random Number Generation with Arbitrary Stride," Trans. Am. Nucl. Soc. (Nov. 1994).
It means that if you have LCG parameters (a, c) satisfying the Hull-Dobell theorem, the whole period is 2^64 numbers before repeating itself. Then, for N_t threads, you make the jump distance 2^64 / N_t: all threads start with the same seed and just jump by (2^64 / N_t)*threadId after initializing the LCG, and you are completely safe from RNG correlations due to overlapping sequences.
For the simplest case of common 64-bit unsigned modular math, as implemented in NumPy, the code below should work fine.
import numpy as np

class LCG(object):
    UZERO: np.uint64 = np.uint64(0)
    UONE : np.uint64 = np.uint64(1)

    def __init__(self, seed: np.uint64, a: np.uint64, c: np.uint64) -> None:
        self._seed: np.uint64 = np.uint64(seed)
        self._a   : np.uint64 = np.uint64(a)
        self._c   : np.uint64 = np.uint64(c)

    def next(self) -> np.uint64:
        self._seed = self._a * self._seed + self._c
        return self._seed

    def seed(self) -> np.uint64:
        return self._seed

    def set_seed(self, seed: np.uint64) -> np.uint64:
        self._seed = seed

    def skip(self, ns: np.int64) -> None:
        """
        Signed argument - skip forward as well as backward

        The algorithm here to determine the parameters used to skip ahead is
        described in the paper F. Brown, "Random Number Generation with Arbitrary Stride,"
        Trans. Am. Nucl. Soc. (Nov. 1994). This algorithm is able to skip ahead in
        O(log2(N)) operations instead of O(N). It computes parameters
        A and C which can then be used to find x_N = A*x_0 + C mod 2^M.
        """
        nskip: np.uint64 = np.uint64(ns)

        a: np.uint64 = self._a
        c: np.uint64 = self._c

        a_next: np.uint64 = LCG.UONE
        c_next: np.uint64 = LCG.UZERO

        while nskip > LCG.UZERO:
            if (nskip & LCG.UONE) != LCG.UZERO:
                a_next = a_next * a
                c_next = c_next * a + c

            c = (a + LCG.UONE) * c
            a = a * a

            nskip = nskip >> LCG.UONE

        self._seed = a_next * self._seed + c_next

#%%
np.seterr(over='ignore')

seed = np.uint64(1)
rng64 = LCG(seed, np.uint64(6364136223846793005), np.uint64(1))

print(rng64.next())
print(rng64.next())
print(rng64.next())

#%%
rng64.skip(-3) # back by 3
print(rng64.next())
print(rng64.next())
print(rng64.next())

rng64.skip(-3) # back by 3
rng64.skip(2)  # forward by 2
print(rng64.next())
Tested in Python 3.9.1, x64 Win 10

How can I descale x by n/d, when x*n overflows?

My problem is limited to unsigned integers of 256 bits.
I have a value x, and I need to descale it by the ratio n / d, where n < d.
The simple solution is of course x * n / d, but the problem is that x * n may overflow.
I am looking for any arithmetic trick which may help in reaching a result as accurate as possible.
Dividing each of n and d by gcd(n, d) before calculating x * n / d does not guarantee success.
Is there any process (iterative or otherwise) which I can use in order to solve this problem?
Note that I am willing to settle on an inaccurate solution, but I'd need to be able to estimate the error.
NOTE: Using integer division instead of normal division
Let us suppose
x = ad + b
n = cd + e
Then find a,b,c,e as follows:
a = x/d
b = x%d
c = n/d
e = n%d
Then,
nx/d = acd + ae + bc + be/d
CALCULATING be/d
1. Represent e in binary form
2. Find b/d, 2b/d, 4b/d, 8b/d, ... 256b/d and their remainders
3. Find be/d = b*binary terms + their remainders
Example:
e = 101 in binary = 4+1
be/d = (b/d + 4b/d) + (b%d + 4b%d)/d
FINDING b/d, 2b/d, ... 256b/d
quotient(2ib/d) = 2*quotient(ib/d) + (2*remainder(ib/d))/d
remainder(2ib/d) = (2*remainder(ib/d))%d
Executes in O(number of bits)
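Putting it together, a Python sketch of the whole procedure (Python's integers are unbounded, so this only illustrates the bookkeeping; no step ever forms the full product x*n):

def mul_div(x, n, d):
    # Compute x*n // d without forming x*n, per the decomposition above.
    a, b = divmod(x, d)          # x = a*d + b, 0 <= b < d
    c, e = divmod(n, d)          # n = c*d + e, 0 <= e < d
    result = a*c*d + a*e + b*c   # n*x/d = acd + ae + bc + be/d
    # Accumulate be/d from the binary expansion of e, tracking the
    # quotient and remainder of (2^i * b) / d as in the recurrence above.
    q_i, r_i = 0, b              # 2^0 * b = 0*d + b
    quot, rem = 0, 0
    while e:
        if e & 1:
            quot, rem = quot + q_i, rem + r_i
            if rem >= d:         # carry so that rem stays below d
                quot, rem = quot + 1, rem - d
        q_i = 2*q_i + (2*r_i) // d
        r_i = (2*r_i) % d
        e >>= 1
    return result + quot         # rem < d contributes nothing to the floor

print(mul_div(2**200, 3**40, 5**30) == (2**200 * 3**40) // 5**30)  # True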

What's an algorithm to get a number closest to a constant that can evenly (within a margin) divide into two other constants?

So let's say I have numbers A = 1483 and B = 635. My X = 100.0.
Let's say my allowed MARGIN is 10.0
What's the best way to get the closest number to X (it can be floating point) that divides into A and B with a remainder less than MARGIN?
For an answer K. A % K <= MARGIN, B % K <= MARGIN, with K being as close to X as possible, for example |K - X| < 100
Let's try and write the problem with mathematical notations.
What you have is Euclidean divisions:
A = Q1*X + R1
B = Q2*X + R2
You want to find the minimal |x| such that
A = Q1'*(X+x) + R1' , |R1'| <= M
B = Q2'*(X+x) + R2' , |R2'| <= M
To help you find such an x, you have relations like:
A = Q1*(X+x) + R1-Q1*x
B = Q2*(X+x) + R2-Q2*x
From here, you should first concentrate on how to solve the example you gave, then try and generalize.
1483 = 14*100 + 83 = 15*100 - 17
635 = 6*100 + 35 = 7*100 - 65
Should you take x > 0 in order to reduce R2 (35) down to 10, or x < 0 to increase R1 (-17) up to -10?
In the first case, x should be in interval [25/6 , 45/6] to bring |R2'| <= M, but at the same time it must be in interval [73/14 , 93/14] to bring |R1'| <= M.
Do these intervals overlap?
if yes you have a solution.
if no, then you have to try further (decrement quotients Q1' and/or Q2')
Just check with any decent interpreter (Squeak/Pharo Smalltalk here)
{25/6 . 45/6. 73/14 . 93/14} sorted
= {(25/6) . (73/14) . (93/14) . (15/2)}
So they overlap, starting at x=73/14.
But maybe you would get a closer x in the other direction?
I have not given an algorithm, just a clue, up to you to continue. But you see that increment does not have to be random (like 0.001).
For now the best way I have found is a brute-force method: starting from the GCD of A and B and decreasing by a small interval (0.001), find the smallest c(K) with K >= X, where c(x) = A % x + B % x.
If I had found a way to differentiate c(x) correctly, I would have liked to compute its gradient and use gradient descent to find the optimal value without brute force.
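For illustration, a brute-force Python sketch of that scan (the span and step sizes are arbitrary choices):

def closest_divisor(A, B, X, margin, span=100.0, step=0.001):
    # Scan candidates K around X; keep the one closest to X whose
    # remainders against both A and B are within the margin.
    best = None
    for i in range(int(2 * span / step) + 1):
        k = X - span + i * step
        if k > 0 and A % k <= margin and B % k <= margin:
            if best is None or abs(k - X) < abs(best - X):
                best = k
    return best

print(closest_divisor(1483, 635, 100.0, 10.0))  # ~105.215, consistent with x = 73/14 above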

Algorithm to evaluate best weights for weighted average

I have a data set of the form:
[9.1 5.6 7.4] => 8.5, [4.1 4.4 5.2] => 4.9, ... , x => y(x)
So x is a real vector of three elements and y is a scalar function.
I'm assuming a weighted average model of this data:
y(x) = (a * x[0] + b * x[1] + c * x[2]) / (a+b+c) + E(x)
where E is an unknown random error term.
I need an algorithm to find a,b,c, that minimizes total sum square error:
error = sum over all x of { E(x)^2 }
for a given data set.
Assume that the weights are normalized to sum to 1 (which happily is without loss of generality), then we can re-cast the problem with c = 1 - a - b, so we are actually solving for a and b.
With this we can write
error(a,b) = sum over all x { a x[0] + b x[1] + (1 - a - b) x[2] - y(x) }^2
Now it's just a question of taking the partial derivatives d_error/da and d_error/db and setting them to zero to find the minimum.
With some fiddling, you get a system of two equations in a and b.
C(X[0],X[0],X[2]) a + C(X[0],X[1],X[2]) b = C(X[0],Y,X[2])
C(X[1],X[0],X[2]) a + C(X[1],X[1],X[2]) b = C(X[1],Y,X[2])
The meaning of X[i] is the vector of all i'th components from the dataset x values.
The meaning of Y is the vector of all y(x) values.
The coefficient function C has the following meaning:
C(p, q, r) = sum over i { p[i] ( q[i] - r[i] ) }
I'll omit how to solve the 2x2 system unless this is a problem.
If we plug in the two-element data set you gave, we should get precise coefficients because you can always approximate two points perfectly with a line. So for example the first equation coefficients are:
C(X[0],X[0],X[2]) = 9.1(9.1 - 7.4) + 4.1(4.1 - 5.2) = 10.96
C(X[0],X[1],X[2]) = -19.66
C(X[0],Y,X[2]) = 8.78
Similarly, the second equation's coefficients are 4.68, -13.6, and 4.84.
Solving the 2x2 system produces: a = 0.42515, b = -0.20958. Therefore c = 0.78443.
Note that in this problem a negative coefficient results. There is nothing to guarantee the weights will be positive, though "real" data sets may well produce all-positive results.
Indeed if you compute weighted averages with these coefficients, they are 8.5 and 4.9.
For fun I also tried this data set:
X[0] X[1] X[2] Y
0.018056028 9.70442075 9.368093544 6.360312244
8.138752835 5.181373099 3.824747424 5.423581239
6.296398214 4.74405298 9.837741509 7.714662742
5.177385358 1.241610571 5.028388255 4.491743107
4.251033792 8.261317658 7.415111851 6.430957844
4.720645386 1.0721718 2.187147908 2.815078796
1.941872069 1.108191586 6.24591771 3.994268819
4.220448549 9.931055481 4.435085917 5.233711923
9.398867623 2.799376317 7.982096264 7.612485261
4.971020963 1.578519218 0.462459906 2.248086465
I generated the Y values with 1/3 x[0] + 1/6 x[1] + 1/2 x[2] + E where E is a random number in [-0.1..+0.1]. If the algorithm is working correctly we'd expect to get roughly a = 1/3 and b = 1/6 from this result. Indeed we get a = .3472 and b = .1845.
OP has now said that his actual data are larger than 3-vectors. This method generalizes without much trouble. If the vectors are of length n, then you get an n-1 x n-1 system to solve.
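If a linear-algebra library is available, the same minimization can be solved directly. A numpy sketch, using the reduction c = 1 - a - b from above (it solves the reduced least-squares problem numerically rather than hand-assembling the 2x2 system):

import numpy as np

def fit_weights(X, y):
    # Residual per row: a*(x0 - x2) + b*(x1 - x2) + (x2 - y), so solve
    # the least-squares system D @ [a, b] = y - x2.
    D = np.column_stack((X[:, 0] - X[:, 2], X[:, 1] - X[:, 2]))
    a, b = np.linalg.lstsq(D, y - X[:, 2], rcond=None)[0]
    return a, b, 1.0 - a - b

X = np.array([[9.1, 5.6, 7.4], [4.1, 4.4, 5.2]])
y = np.array([8.5, 4.9])
print(fit_weights(X, y))   # ~(0.42515, -0.20958, 0.78443)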

The "guess the number" game for arbitrary rational numbers?

I once got the following as an interview question:
I'm thinking of a positive integer n. Come up with an algorithm that can guess it in O(lg n) queries. Each query is a number of your choosing, and I will answer either "lower," "higher," or "correct."
This problem can be solved by a modified binary search, in which you list powers of two until you find one that exceeds n, then run a standard binary search over that range. What I think is so cool about this is that you can search an infinite space for a particular number faster than just brute force.
The question I have, though, is a slight modification of this problem. Instead of picking a positive integer, suppose that I pick an arbitrary rational number between zero and one. My question is: what algorithm can you use to most efficiently determine which rational number I've picked?
Right now, the best solution I have can find p/q in at most O(q) time by implicitly walking the Stern-Brocot tree, a binary search tree over all the rationals. However, I was hoping to get a runtime closer to the runtime that we got for the integer case, maybe something like O(lg (p + q)) or O(lg pq). Does anyone know of a way to get this sort of runtime?
I initially considered using a standard binary search of the interval [0, 1], but this will only find rational numbers with a non-repeating binary representation, which misses almost all of the rationals. I also thought about using some other way of enumerating the rationals, but I can't seem to find a way to search this space given just greater/equal/less comparisons.
Okay, here's my answer using continued fractions alone.
First let's get some terminology here.
Let X = p/q be the unknown fraction.
Let Q(X,p/q) = sign(X - p/q) be the query function: if it is 0, we've guessed the number, and if it's +/- 1 that tells us the sign of our error.
The conventional notation for continued fractions is A = [a0; a1, a2, a3, ... ak]
= a0 + 1/(a1 + 1/(a2 + 1/(a3 + 1/( ... + 1/ak) ... )))
We'll follow the following algorithm for 0 < p/q < 1.
Initialize Y = 0 = [ 0 ], Z = 1 = [ 1 ], k = 0.
Outer loop: The preconditions are that:
Y and Z are continued fractions of k+1 terms which are identical except in the last element, where they differ by 1, so that Y = [y0; y1, y2, y3, ... yk] and Z = [y0; y1, y2, y3, ... yk + 1]
(-1)^k (Y-X) < 0 < (-1)^k (Z-X), or in simpler terms, for k even, Y < X < Z and for k odd, Z < X < Y.
Extend the degree of the continued fraction by 1 step without changing the values of the numbers. In general, if the last terms are yk and yk + 1, we change that to [... yk, y_(k+1) = ∞] and [... yk, z_(k+1) = 1]. Now increase k by 1.
Inner loops: This is essentially the same as #templatetypedef's interview question about the integers. We do a two-phase binary search to get closer:
Inner loop 1: yk = ∞, zk = a, and X is between Y and Z.
Double Z's last term: Compute M = Z but with mk = 2*a = 2*zk.
Query the unknown number: q = Q(X,M).
If q = 0, we have our answer and go to step 17.
If q and Q(X,Y) have opposite signs, it means X is between Y and M, so set Z = M and go to step 5.
Otherwise set Y = M and go to the next step:
Inner loop 2. yk = b, zk = a, and X is between Y and Z.
If a and b differ by 1, swap Y and Z, go to step 2.
Perform a binary search: compute M where mk = floor((a+b)/2), and query q = Q(X,M).
If q = 0, we're done and go to step 17.
If q and Q(X,Y) have opposite signs, it means X is between Y and M, so set Z = M and go to step 11.
Otherwise, q and Q(X,Z) have opposite signs, it means X is between Z and M, so set Y = M and go to step 11.
Done: X = M.
A concrete example for X = 16/113 = 0.14159292
Y = 0 = [0], Z = 1 = [1], k = 0
k = 1:
Y = 0 = [0; ∞] < X, Z = 1 = [0; 1] > X, M = [0; 2] = 1/2 > X.
Y = 0 = [0; ∞], Z = 1/2 = [0; 2], M = [0; 4] = 1/4 > X.
Y = 0 = [0; ∞], Z = 1/4 = [0; 4], M = [0; 8] = 1/8 < X.
Y = 1/8 = [0; 8], Z = 1/4 = [0; 4], M = [0; 6] = 1/6 > X.
Y = 1/8 = [0; 8], Z = 1/6 = [0; 6], M = [0; 7] = 1/7 > X.
Y = 1/8 = [0; 8], Z = 1/7 = [0; 7]
--> the two last terms differ by one, so swap and repeat outer loop.
k = 2:
Y = 1/7 = [0; 7, ∞] > X, Z = 1/8 = [0; 7, 1] < X,
M = [0; 7, 2] = 2/15 < X
Y = 1/7 = [0; 7, ∞], Z = 2/15 = [0; 7, 2],
M = [0; 7, 4] = 4/29 < X
Y = 1/7 = [0; 7, ∞], Z = 4/29 = [0; 7, 4],
M = [0; 7, 8] = 8/57 < X
Y = 1/7 = [0; 7, ∞], Z = 8/57 = [0; 7, 8],
M = [0; 7, 16] = 16/113 = X
--> done!
At each step of computing M, the range of the interval reduces. It is probably fairly easy to prove (though I won't do this) that the interval reduces by a factor of at least 1/sqrt(5) at each step, which would show that this algorithm is O(log q) steps.
Note that this can be combined with templatetypedef's original interview question and apply towards any rational number p/q, not just between 0 and 1, by first computing Q(X,0), then for either positive/negative integers, bounding between two consecutive integers, and then using the above algorithm for the fractional part.
When I have a chance next, I will post a python program that implements this algorithm.
edit: also, note that you don't have to compute the continued fraction at each step (which would be O(k)); there are partial approximants to continued fractions that can compute the next step from the previous step in O(1).
edit 2: Recursive definition of partial approximants:
If Ak = [a0; a1, a2, a3, ... ak] = p_k/q_k, then p_k = a_k*p_(k-1) + p_(k-2), and q_k = a_k*q_(k-1) + q_(k-2). (Source: Niven & Zuckerman, 4th ed, Theorems 7.3-7.5. See also Wikipedia.)
Example: [0] = 0/1 = p_0/q_0, [0; 7] = 1/7 = p_1/q_1; so [0; 7, 16] = (16*1+0)/(16*7+1) = 16/113 = p_2/q_2.
This means that if two continued fractions Y and Z have the same terms except the last one, and the continued fraction excluding the last term is p_(k-1)/q_(k-1), then we can write Y = (y_k*p_(k-1) + p_(k-2)) / (y_k*q_(k-1) + q_(k-2)) and Z = (z_k*p_(k-1) + p_(k-2)) / (z_k*q_(k-1) + q_(k-2)). It should be possible to show from this that |Y-Z| decreases by at least a factor of 1/sqrt(5) at each smaller interval produced by this algorithm, but the algebra seems to be beyond me at the moment. :-(
Here's my Python program:
import math

# Return a function that returns Q(p0/q0, p/q)
#  = sign(p/q - p0/q0) = sign(q0*p - p0*q)*sign(q0*q)
# If p/q > p0/q0, then Q() = 1; if p/q < p0/q0, then Q() = -1; otherwise Q() = 0.
def makeQ(p0,q0):
    def Q(p,q):
        return cmp(q0*p,p0*q)*cmp(q0*q,0)
    return Q

def strsign(s):
    return '<' if s<0 else '>' if s>0 else '=='

def cfnext(p1,q1,p2,q2,a):
    return [a*p1+p2,a*q1+q2]

def ratguess(Q, doprint, kmax):
    # p2/q2 = p[k-2]/q[k-2]
    p2 = 1
    q2 = 0
    # p1/q1 = p[k-1]/q[k-1]
    p1 = 0
    q1 = 1
    k = 0
    cf = [0]
    done = False
    while not done and (not kmax or k < kmax):
        if doprint:
            print 'p/q='+str(cf)+'='+str(p1)+'/'+str(q1)
        # extend continued fraction
        k = k + 1
        [py,qy] = [p1,q1]
        [pz,qz] = cfnext(p1,q1,p2,q2,1)
        ay = None
        az = 1
        sy = Q(py,qy)
        sz = Q(pz,qz)
        while not done:
            if doprint:
                out = str(py)+'/'+str(qy)+' '+strsign(sy)+' X '
                out += strsign(-sz)+' '+str(pz)+'/'+str(qz)
                out += ', interval='+str(abs(1.0*py/qy-1.0*pz/qz))
            if ay:
                if (ay - az == 1):
                    [p0,q0,a0] = [pz,qz,az]
                    break
                am = (ay+az)/2
            else:
                am = az * 2
            [pm,qm] = cfnext(p1,q1,p2,q2,am)
            sm = Q(pm,qm)
            if doprint:
                out = str(ay)+':'+str(am)+':'+str(az)+' '+out+'; M='+str(pm)+'/'+str(qm)+' '+strsign(sm)+' X '
                print out
            if (sm == 0):
                [p0,q0,a0] = [pm,qm,am]
                done = True
                break
            elif (sm == sy):
                [py,qy,ay,sy] = [pm,qm,am,sm]
            else:
                [pz,qz,az,sz] = [pm,qm,am,sm]
        [p2,q2] = [p1,q1]
        [p1,q1] = [p0,q0]
        cf += [a0]
    print 'p/q='+str(cf)+'='+str(p1)+'/'+str(q1)
    return [p1,q1]
and a sample output for ratguess(makeQ(33102,113017), True, 20):
p/q=[0]=0/1
None:2:1 0/1 < X < 1/1, interval=1.0; M=1/2 > X
None:4:2 0/1 < X < 1/2, interval=0.5; M=1/4 < X
4:3:2 1/4 < X < 1/2, interval=0.25; M=1/3 > X
p/q=[0, 3]=1/3
None:2:1 1/3 > X > 1/4, interval=0.0833333333333; M=2/7 < X
None:4:2 1/3 > X > 2/7, interval=0.047619047619; M=4/13 > X
4:3:2 4/13 > X > 2/7, interval=0.021978021978; M=3/10 > X
p/q=[0, 3, 2]=2/7
None:2:1 2/7 < X < 3/10, interval=0.0142857142857; M=5/17 > X
None:4:2 2/7 < X < 5/17, interval=0.00840336134454; M=9/31 < X
4:3:2 9/31 < X < 5/17, interval=0.00379506641366; M=7/24 < X
p/q=[0, 3, 2, 2]=5/17
None:2:1 5/17 > X > 7/24, interval=0.00245098039216; M=12/41 < X
None:4:2 5/17 > X > 12/41, interval=0.00143472022956; M=22/75 > X
4:3:2 22/75 > X > 12/41, interval=0.000650406504065; M=17/58 > X
p/q=[0, 3, 2, 2, 2]=12/41
None:2:1 12/41 < X < 17/58, interval=0.000420521446594; M=29/99 > X
None:4:2 12/41 < X < 29/99, interval=0.000246366100025; M=53/181 < X
4:3:2 53/181 < X < 29/99, interval=0.000111613371282; M=41/140 < X
p/q=[0, 3, 2, 2, 2, 2]=29/99
None:2:1 29/99 > X > 41/140, interval=7.21500721501e-05; M=70/239 < X
None:4:2 29/99 > X > 70/239, interval=4.226364059e-05; M=128/437 > X
4:3:2 128/437 > X > 70/239, interval=1.91492009996e-05; M=99/338 > X
p/q=[0, 3, 2, 2, 2, 2, 2]=70/239
None:2:1 70/239 < X < 99/338, interval=1.23789953207e-05; M=169/577 > X
None:4:2 70/239 < X < 169/577, interval=7.2514738621e-06; M=309/1055 < X
4:3:2 309/1055 < X < 169/577, interval=3.28550190148e-06; M=239/816 < X
p/q=[0, 3, 2, 2, 2, 2, 2, 2]=169/577
None:2:1 169/577 > X > 239/816, interval=2.12389981991e-06; M=408/1393 < X
None:4:2 169/577 > X > 408/1393, interval=1.24415093544e-06; M=746/2547 < X
None:8:4 169/577 > X > 746/2547, interval=6.80448470014e-07; M=1422/4855 < X
None:16:8 169/577 > X > 1422/4855, interval=3.56972657711e-07; M=2774/9471 > X
16:12:8 2774/9471 > X > 1422/4855, interval=1.73982239227e-07; M=2098/7163 > X
12:10:8 2098/7163 > X > 1422/4855, interval=1.15020646951e-07; M=1760/6009 > X
10:9:8 1760/6009 > X > 1422/4855, interval=6.85549088053e-08; M=1591/5432 < X
p/q=[0, 3, 2, 2, 2, 2, 2, 2, 9]=1591/5432
None:2:1 1591/5432 < X < 1760/6009, interval=3.06364213998e-08; M=3351/11441 < X
p/q=[0, 3, 2, 2, 2, 2, 2, 2, 9, 1]=1760/6009
None:2:1 1760/6009 > X > 3351/11441, interval=1.45456726663e-08; M=5111/17450 < X
None:4:2 1760/6009 > X > 5111/17450, interval=9.53679318849e-09; M=8631/29468 < X
None:8:4 1760/6009 > X > 8631/29468, interval=5.6473816179e-09; M=15671/53504 < X
None:16:8 1760/6009 > X > 15671/53504, interval=3.11036635336e-09; M=29751/101576 > X
16:12:8 29751/101576 > X > 15671/53504, interval=1.47201634215e-09; M=22711/77540 > X
12:10:8 22711/77540 > X > 15671/53504, interval=9.64157420569e-10; M=19191/65522 > X
10:9:8 19191/65522 > X > 15671/53504, interval=5.70501257346e-10; M=17431/59513 > X
p/q=[0, 3, 2, 2, 2, 2, 2, 2, 9, 1, 8]=15671/53504
None:2:1 15671/53504 < X < 17431/59513, interval=3.14052228667e-10; M=33102/113017 == X
Since Python handles biginteger math from the start, and this program uses only integer math (except for the interval calculations), it should work for arbitrary rationals.
edit 3: Outline of proof that this is O(log q), not O(log^2 q):
First note that until the rational number is found, the # of steps n_k for each new continued fraction term is exactly 2*b(a_k) - 1, where b(a_k) = ceil(log2(a_k)) is the # of bits needed to represent a_k: it's b(a_k) steps to widen the "net" of the binary search, and b(a_k) - 1 steps to narrow it. See the example above: the # of steps is always 1, 3, 7, 15, etc.
Now we can use the recurrence relation q_k = a_k*q_(k-1) + q_(k-2) and induction to prove the desired result.
Let's state it this way: the value of q after the N_k = sum(n_k) steps required for reaching the kth term has a minimum: q >= A*2^(c*N) for some fixed constants A, c. (Inverting, the # of steps N is <= (1/c) * log2(q/A) = O(log q).)
Base cases:
k=0: q = 1, N = 0, so q >= 2^N
k=1: for N = 2b-1 steps, q = a_1 >= 2^(b-1) = 2^((N-1)/2) = 2^(N/2)/sqrt(2).
This implies A = 1, c = 1/2 could provide desired bounds. In reality, q may not double each term (counterexample: [0; 1, 1, 1, 1, 1] has a growth factor of phi = (1+sqrt(5))/2) so let's use c = 1/4.
Induction:
for term k, q_k = a_k*q_(k-1) + q_(k-2). Again, for the n_k = 2b-1 steps needed for this term, a_k >= 2^(b-1) = 2^((n_k-1)/2).
So a_k*q_(k-1) >= 2^((n_k-1)/2) * q_(k-1) >= 2^((n_k-1)/2) * A*2^(N_(k-1)/4) = A*2^(N_k/4)/sqrt(2)*2^(n_k/4).
Argh -- the tough part here is that if ak = 1, q may not increase much for that one term, and we need to use qk-2 but that may be much smaller than qk-1.
Let's take the rational numbers, in reduced form, and write them out in order first of denominator, then numerator.
1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, 1/6, 5/6, ...
Our first guess is going to be 1/2. Then we'll go along the list until we have 3 in our range. Then we will take 2 guesses to search that list. Then we'll go along the list until we have 7 in our remaining range. Then we will take 3 guesses to search that list. And so on.
In n steps we'll cover the first 2^O(n) possibilities, which is in the order of magnitude of efficiency that you were looking for.
Update: People didn't get the reasoning behind this. The reasoning is simple. We know how to walk a binary tree efficiently. There are O(n^2) fractions with maximum denominator n. We could therefore search up to any particular denominator size in O(2*log(n)) = O(log(n)) steps. The problem is that we have an infinite number of possible rationals to search. So we can't just line them all up, order them, and start searching.
Therefore my idea was to line up a few, search, line up more, search, and so on. Each time we line up more we line up about double what we did last time. So we need one more guess than we did last time. Therefore our first pass uses 1 guess to traverse 1 possible rational. Our second uses 2 guesses to traverse 3 possible rationals. Our third uses 3 guesses to traverse 7 possible rationals. And our k'th uses k guesses to traverse 2^k - 1 possible rationals. For any particular rational m/n, eventually it will wind up putting that rational on a fairly big list that it knows how to do a binary search on efficiently.
If we did binary searches, then ignored everything we'd learned when we grab more rationals, then we'd put all of the rationals up to and including m/n in O(log(n)) passes. (That's because by that point we'll get to a pass with enough rationals to include every rational up to and including m/n.) But each pass takes more guesses, so that would be O(log(n)^2) guesses.
However we actually do a lot better than that. With our first guess, we eliminate half the rationals on our list as being too big or small. Our next two guesses don't quite cut the space into quarters, but they don't come too far from it. Our next 3 guesses again don't quite cut the space into eighths, but they don't come too far from it. And so on. When you put it together, I'm convinced that the result is that you find m/n in O(log(n)) steps. Though I don't actually have a proof.
Try it out: Here is code to generate the guesses so that you can play and see how efficient it is.
#! /usr/bin/python

from fractions import Fraction
import heapq
import readline
import sys

def generate_next_guesses (low, high, limit):
    upcoming = [(low.denominator + high.denominator,
                 low.numerator + high.numerator,
                 low.denominator, low.numerator,
                 high.denominator, high.numerator)]
    guesses = []
    while len(guesses) < limit:
        (mid_d, mid_n, low_d, low_n, high_d, high_n) = upcoming[0]
        guesses.append(Fraction(mid_n, mid_d))
        heapq.heappushpop(upcoming, (low_d + mid_d, low_n + mid_n,
                                     low_d, low_n, mid_d, mid_n))
        heapq.heappush(upcoming, (mid_d + high_d, mid_n + high_n,
                                  mid_d, mid_n, high_d, high_n))
    guesses.sort()
    return guesses

def ask (num):
    while True:
        print "Next guess: {0} ({1})".format(num, float(num))
        if 1 < len(sys.argv):
            wanted = Fraction(sys.argv[1])
            if wanted < num:
                print "too high"
                return 1
            elif num < wanted:
                print "too low"
                return -1
            else:
                print "correct"
                return 0
        answer = raw_input("Is this (h)igh, (l)ow, or (c)orrect? ")
        if answer == "h":
            return 1
        elif answer == "l":
            return -1
        elif answer == "c":
            return 0
        else:
            print "Not understood. Please say one of (l, c, h)"

guess_size_bound = 2
low = Fraction(0)
high = Fraction(1)
guesses = [Fraction(1,2)]
required_guesses = 0
answer = -1
while 0 != answer:
    if 0 == len(guesses):
        guess_size_bound *= 2
        guesses = generate_next_guesses(low, high, guess_size_bound - 1)
    #print (low, high, guesses)
    guess = guesses[len(guesses)/2]
    answer = ask(guess)
    required_guesses += 1
    if 0 == answer:
        print "Thanks for playing!"
        print "I needed %d guesses" % required_guesses
    elif 1 == answer:
        high = guess
        guesses[len(guesses)/2:] = []
    else:
        low = guess
        guesses[0:len(guesses)/2 + 1] = []
As an example to try it out I tried 101/1024 (0.0986328125) and found that it took 20 guesses to find the answer. I tried 0.98765 and it took 45 guesses. I tried 0.0123456789 and it needed 66 guesses and about a second to generate them. (Note, if you call the program with a rational number as an argument, it will fill in all of the guesses for you. This is a very helpful convenience.)
I've got it! What you need to do is to use a parallel search with bisection and continued fractions.
Bisection will give you a limit toward a specific real number, as represented as a power of two, and continued fractions will take the real number and find the nearest rational number.
How you run them in parallel is as follows.
At each step, you have l and u being the lower and upper bounds of bisection. The idea is, you have a choice between halving the range of bisection, and adding an additional term as a continued fraction representation. When both l and u have the same next term as a continued fraction, then you take the next step in the continued fraction search, and make a query using the continued fraction. Otherwise, you halve the range using bisection.
Since both methods increase the denominator by at least a constant factor (bisection goes by factors of 2, continued fractions go by at least a factor of phi = (1+sqrt(5))/2), this means your search should be O(log(q)). (There may be repeated continued fraction calculations, so it may end up as O(log(q)^2).)
Our continued fraction search needs to round to the nearest integer, not use floor (this is clearer below).
The above is kind of handwavy. Let's use a concrete example of r = 1/31:
l = 0, u = 1, query = 1/2. 0 is not expressible as a continued fraction, so we use binary search until l != 0.
l = 0, u = 1/2, query = 1/4.
l = 0, u = 1/4, query = 1/8.
l = 0, u = 1/8, query = 1/16.
l = 0, u = 1/16, query = 1/32.
l = 1/32, u = 1/16. Now 1/l = 32, 1/u = 16; these have different cfrac reps, so keep bisecting: query = 3/64.
l = 1/32, u = 3/64, query = 5/128 = 1/25.6
l = 1/32, u = 5/128, query = 9/256 = 1/28.4444....
l = 1/32, u = 9/256, query = 17/512 = 1/30.1176... (round to 1/30)
l = 1/32, u = 17/512, query = 33/1024 = 1/31.0303... (round to 1/31)
l = 33/1024, u = 17/512, query = 67/2048 = 1/30.5672... (round to 1/31)
l = 33/1024, u = 67/2048. At this point both l and u have the same continued fraction term 31, so now we use a continued fraction guess.
query = 1/31.
SUCCESS!
For another example let's use 16/113 (= 355/113 - 3 where 355/113 is pretty close to pi).
[to be continued, I have to go somewhere]
On further reflection, continued fractions are the way to go, never mind bisection except to determine the next term. More when I get back.
I think I found an O(log^2(p + q)) algorithm.
To avoid confusion in the next paragraph, a "query" refers to when the guesser gives the challenger a guess, and the challenger responds "bigger" or "smaller". This allows me to reserve the word "guess" for something else, a guess for p + q that is not asked directly to the challenger.
The idea is to first find p + q, using the algorithm you describe in your question: guess a value k, if k is too small, double it and try again. Then once you have an upper and lower bound, do a standard binary search. This takes O(log(p+q)T) queries, where T is an upper bound for the number of queries it takes to check a guess. Let's find T.
We want to check all fractions r/s with r + s <= k, and double k until k is sufficiently large. Note that there are O(k^2) fractions you need to check for a given value of k. Build a balanced binary search tree containing all these values, then search it to determine if p/q is in the tree. It takes O(log k^2) = O(log k) queries to confirm that p/q is not in the tree.
We will never guess a value of k greater than 2(p + q). Hence we can take T = O(log(p+q)).
When we guess the correct value for k (i.e., k = p + q), we will submit the query p/q to the challenger in the course of checking our guess for k, and win the game.
Total number of queries is then O(log^2(p + q)).
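For illustration, a Python sketch of the checking step (a sorted list plus binary search stands in for the balanced BST; query is assumed to return +1 for "higher", -1 for "lower", and 0 for "correct"):

from fractions import Fraction

def check_sum_guess(query, k):
    # Probe all fractions r/s in (0, 1) with r + s <= k via binary search
    # over their sorted order: O(log k) queries per guessed value of k.
    candidates = sorted({Fraction(r, s)
                         for s in range(2, k + 1)
                         for r in range(1, s)
                         if r + s <= k})
    lo, hi = 0, len(candidates) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        ans = query(candidates[mid])
        if ans == 0:
            return candidates[mid]     # found p/q
        elif ans > 0:
            lo = mid + 1               # secret is higher
        else:
            hi = mid - 1               # secret is lower
    return None                        # no fraction with p + q <= k matches

secret = Fraction(3, 7)
print(check_sum_guess(lambda f: (secret > f) - (secret < f), 12))  # 3/7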
Okay, I think I figured out an O(lg^2 q) algorithm for this problem that is based on Jason S's most excellent insight about using continued fractions. I thought I'd flesh the algorithm out all the way right here so that we have a complete solution, along with a runtime analysis.
The intuition behind the algorithm is that any rational number p/q within the range can be written as
a0 + 1 / (a1 + 1 / (a2 + 1 / (a3 + 1 / ...))
For appropriate choices of ai. This is called a continued fraction. More importantly, these ai can be derived by running the Euclidean algorithm on the numerator and denominator. For example, suppose we want to represent 11/14 this way. We begin by noting that 14 goes into 11 zero times, so a crude approximation of 11/14 would be
0 = 0
Now, suppose that we take the reciprocal of this fraction to get 14/11 = 1 3/11. So if we write
0 + (1 / 1) = 1
We get a slightly better approximation to 11/14. Now that we're left with 3 / 11, we can take the reciprocal again to get 11/3 = 3 2/3, so we can consider
0 + (1 / (1 + 1/3)) = 3/4
Which is another good approximation to 11/14. Now, we have 2/3, so consider the reciprocal, which is 3/2 = 1 1/2. If we then write
0 + (1 / (1 + 1/(3 + 1/1))) = 4/5
We get another good approximation to 11/14. Finally, we're left with 1/2, whose reciprocal is 2/1. If we finally write out
0 + (1 / (1 + 1/(3 + 1/(1 + 1/2)))) = (1 / (1 + 1/(3 + 1/(3/2)))) = (1 / (1 + 1/(3 + 2/3))) = (1 / (1 + 1/(11/3))) = (1 / (1 + 3/11)) = 1 / (14/11) = 11/14
which is exactly the fraction we wanted. Moreover, look at the sequence of coefficients we ended up using. If you run the extended Euclidean algorithm on 11 and 14, you get that
11 = 0 x 14 + 11 --> a0 = 0
14 = 1 x 11 + 3 --> a1 = 1
11 = 3 x 3 + 2 --> a2 = 3
3 = 1 x 2 + 1 --> a3 = 1
2 = 2 x 1 + 0 --> a4 = 2
It turns out that (using more math than I currently know how to do!) that this isn't a coincidence and that the coefficients in the continued fraction of p/q are always formed by using the extended Euclidean algorithm. This is great, because it tells us two things:
There can be at most O(lg (p + q)) coefficients, because the Euclidean algorithm always terminates in this many steps, and
Each coefficient is at most max{p, q}.
Given these two facts, we can come up with an algorithm to recover any rational number p/q, not just those between 0 and 1, by applying the general algorithm for guessing arbitrary integers n one at a time to recover all of the coefficients in the continued fraction for p/q. For now, though, we'll just worry about numbers in the range (0, 1], since the logic for handling arbitrary rational numbers can be done easily given this as a subroutine.
As a first step, let's suppose that we want to find the best value of a1 so that 1 / a1 is as close as possible to p/q and a1 is an integer. To do this, we can just run our algorithm for guessing arbitrary integers, taking the reciprocal each time. After doing this, one of two things will have happened. First, we might by sheer coincidence discover that p/q = 1/k for some integer k, in which case we're done. If not, we'll find that p/q is sandwiched between 1/(a1 - 1) and 1/a1 for some a1. When we do this, then we start working on the continued fraction one level deeper by finding the a2 such that p/q is between 1/(a1 + 1/a2) and 1/(a1 + 1/(a2 + 1)). If we magically find p/q, that's great! Otherwise, we then go one level down further in the continued fraction. Eventually, we'll find the number this way, and it can't take too long. Each binary search to find a coefficient takes at most O(lg(p + q)) time, and there are at most O(lg(p + q)) levels to the search, so we need only O(lg^2(p + q)) arithmetic operations and probes to recover p/q.
One detail I want to point out is that we need to keep track of whether we're on an odd level or an even level when doing the search because when we sandwich p/q between two continued fractions, we need to know whether the coefficient we were looking for was the upper or the lower fraction. I'll state without proof that for ai with i odd you want to use the upper of the two numbers, and with ai even you use the lower of the two numbers.
I am almost 100% confident that this algorithm works. I'm going to try to write up a more formal proof of this in which I fill in all of the gaps in this reasoning, and when I do I'll post a link here.
Thanks to everyone for contributing the insights necessary to get this solution working, especially Jason S for suggesting a binary search over continued fractions.
Remember that any rational number in (0, 1) can be represented as a finite sum of distinct (positive or negative) unit fractions. For example, 2/3 = 1/2 + 1/6 and 2/5 = 1/2 - 1/10. You can use this to perform a straight-forward binary search.
Here is yet another way to do it. If there is sufficient interest, I will try to fill out the details tonight, but I can't right now because I have family responsibilities. Here is a stub of an implementation that should explain the algorithm:
low = 0
high = 1
bound = 2
answer = -1
while 0 != answer:
    mid = best_continued_fraction((low + high)/2, bound)
    while mid == low or mid == high:
        bound += bound
        mid = best_continued_fraction((low + high)/2, bound)
    answer = ask(mid)
    if -1 == answer:
        low = mid
    elif 1 == answer:
        high = mid
    else:
        print_success_message(mid)
And here is the explanation. What best_continued_fraction(x, bound) should do is find the last continued fraction approximation to x with the denominator at most bound. This algorithm will take polylog steps to complete and finds very good (though not always the best) approximations. So for each bound we'll get something close to a binary search through all possible fractions of that size. Occasionally we won't find a particular fraction until we increase the bound farther than we should, but we won't be far off.
So there you have it. A logarithmic number of questions found with polylog work.
Update: And full working code.
#! /usr/bin/python

from fractions import Fraction
import readline
import sys

operations = [0]

def calculate_continued_fraction(terms):
    i = len(terms) - 1
    result = Fraction(terms[i])
    while 0 < i:
        i -= 1
        operations[0] += 1
        result = terms[i] + 1/result
    return result

def best_continued_fraction (x, bound):
    error = x - int(x)
    terms = [int(x)]
    last_estimate = estimate = Fraction(0)
    while 0 != error and estimate.numerator < bound:
        operations[0] += 1
        error = 1/error
        term = int(error)
        terms.append(term)
        error -= term
        last_estimate = estimate
        estimate = calculate_continued_fraction(terms)
    if estimate.numerator < bound:
        return estimate
    else:
        return last_estimate

def ask (num):
    while True:
        print "Next guess: {0} ({1})".format(num, float(num))
        if 1 < len(sys.argv):
            wanted = Fraction(sys.argv[1])
            if wanted < num:
                print "too high"
                return 1
            elif num < wanted:
                print "too low"
                return -1
            else:
                print "correct"
                return 0
        answer = raw_input("Is this (h)igh, (l)ow, or (c)orrect? ")
        if answer == "h":
            return 1
        elif answer == "l":
            return -1
        elif answer == "c":
            return 0
        else:
            print "Not understood. Please say one of (l, c, h)"

low = Fraction(0)
high = Fraction(1)
bound = 2
answer = -1
guesses = 0
while 0 != answer:
    mid = best_continued_fraction((low + high)/2, bound)
    guesses += 1
    while mid == low or mid == high:
        bound += bound
        mid = best_continued_fraction((low + high)/2, bound)
    answer = ask(mid)
    if -1 == answer:
        low = mid
    elif 1 == answer:
        high = mid
    else:
        print "Thanks for playing!"
        print "I needed %d guesses and %d operations" % (guesses, operations[0])
It appears slightly more efficient in guesses than the previous solution, and does a lot fewer operations. For 101/1024 it required 19 guesses and 251 operations. For .98765 it needed 27 guesses and 623 operations. For 0.0123456789 it required 66 guesses and 889 operations. And for giggles and grins, for 0.0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789 (that's 10 copies of the previous one) it required 665 guesses and 23289 operations.
You can sort rational numbers in a given interval by for example the pair (denominator, numerator). Then to play the game you can
Find the interval [0, N] using the doubling-step approach
Given an interval [a, b] shoot for the rational with smallest denominator in the interval that is the closest to the center of the interval
this is however probably still O(log(num/den) + den) (not sure and it's too early in the morning here to make me think clearly ;-) )
