I need to write a function that takes the sixth root of something (equivalently, raises something to the 1/6 power), and checks if the answer is an integer. I want this function to be as fast and as optimized as possible, and since this function needs to run a lot, I'm thinking it might be best to not have to calculate the whole root.
How would I write a function (language agnostic, although Python/C/C++ preferred) that returns False (or 0 or something equivalent) before having to compute the entirety of the sixth root? For instance, if I were taking the 6th root of 65, then my function should, upon realizing that the result is not an int, stop calculating and return False, instead of first computing that the 6th root of 65 is 2.00517474515, then checking if 2.00517474515 is an int, and finally returning False.
Of course, I'm asking this question under the impression that early termination is faster than doing the complete computation, which would look something like
print(isinstance(num**(1/6), int))
Any help or ideas would be greatly appreciated. I would also be interested in answers that are generalizable to lots of fractional powers, not just x^(1/6).
Here are some ideas of things you can try that might help eliminate non-sixth-powers quickly. For actual sixth powers, you'll still end up eventually needing to compute the sixth root.
Check small cases
If the numbers you're given have a reasonable probability of being small (less than 12 digits, say), you could build a table of small cases and check against that. There are only 100 sixth powers smaller than 10**12. If your inputs will always be larger, then there's little value in this test, but it's still a very cheap test to make.
Eliminate small primes
Any small prime factor must appear with an exponent that's a multiple of 6. To avoid too many trial divisions, you can bundle up some of the small factors.
For example, 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 = 223092870, which is small enough to fit in a single 30-bit limb in Python, so a single modulo operation with that modulus should be fast.
So given a test number n, compute g = gcd(n, 223092870), and if the result is not 1, check that n is exactly divisible by g ** 6. If not, n is not a sixth power, and you're done. If n is exactly divisible by g**6, repeat with n // g**6.
Check the value modulo 124488 (for example)
If you carried out the previous step, then at this point you have a value that's not divisible by any prime smaller than 25. Now you can do a modulus test with a carefully chosen modulus: for example, any sixth power that's relatively prime to 124488 = 8 * 9 * 7 * 13 * 19 is congruent to one of the six values [1, 15625, 19657, 28729, 48385, 111385] modulo 124488. There are larger moduli that could be used, at the expense of having to check more possible residues.
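As a sanity check (not part of the runtime algorithm), the residue set above can be reproduced by brute force:

from math import gcd

M = 124488   # 8 * 9 * 7 * 13 * 19
residues = sorted({pow(x, 6, M) for x in range(M) if gcd(x, M) == 1})
print(residues)   # [1, 15625, 19657, 28729, 48385, 111385]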
Check whether it's a square
Any sixth power must be a square. Since Python (at least, Python >= 3.8) has a built-in integer square root function that's reasonably fast, it's efficient to check whether the value is a square before computing a full sixth root. (And if it is a square and you've already computed the square root, now you only need to extract a cube root rather than a sixth root.)
Use floating-point arithmetic
If the input is not too large, say 90 digits or smaller, and it's a sixth power then floating-point arithmetic has a reasonable chance of finding the sixth root exactly. However, Python makes no guarantees about the accuracy of a power operation, so it's worth making some additional checks to make sure that the result is within the expected range. For larger inputs, there's less chance of floating-point arithmetic getting the right result. The sixth root of (2**53 + 1)**6 is not exactly representable as a Python float (making the reasonable assumption that Python's float type matches the IEEE 754 binary64 format), and once n gets past 308 digits or so it's too large to fit into a float anyway.
Use integer arithmetic
Once you've exhausted all the cheap tricks, you're left with little choice but to compute the floor of the sixth root, then compare the sixth power of that with the original number.
Here's some Python code that puts together all of the tricks listed above. You should do your own timings targeting your particular use-case, and choose which tricks are worth keeping and which should be adjusted or thrown out. The order of the tricks will also be significant.
from math import gcd, isqrt

# Sixth powers smaller than 10**12.
SMALL_SIXTH_POWERS = {n**6 for n in range(100)}

def is_sixth_power(n):
    """
    Determine whether a positive integer n is a sixth power.

    Returns True if n is a sixth power, and False otherwise.
    """
    # Sanity check (redundant with the small cases check)
    if n <= 0:
        return n == 0
    # Check small cases
    if n < 10**12:
        return n in SMALL_SIXTH_POWERS
    # Try a floating-point check if there's a realistic chance of it working
    if n < 10**90:
        s = round(n ** (1/6.))
        if n == s**6:
            return True
        elif (s - 1)**6 < n < (s + 1)**6:
            return False
        # No conclusive result; fall through to the next test.
    # Eliminate small primes
    while True:
        g = gcd(n, 223092870)
        if g == 1:
            break
        n, r = divmod(n, g**6)
        if r:
            return False
    # Check modulo small primes (requires that
    # n is relatively prime to 124488)
    if n % 124488 not in {1, 15625, 19657, 28729, 48385, 111385}:
        return False
    # Find the square root using math.isqrt, throw out non-squares
    s = isqrt(n)
    if s**2 != n:
        return False
    # Compute the floor of the cube root of s
    # (which is the same as the floor of the sixth root of n).
    # Code stolen from https://stackoverflow.com/a/35276426/270986
    a = 1 << (s.bit_length() - 1) // 3 + 1
    while True:
        d = s // a**2
        if a <= d:
            return a**3 == s
        a = (2*a + d) // 3
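A few quick checks of the function above (my own examples, chosen to exercise the different paths):

print(is_sixth_power(64))             # True: 2**6, via the small-case table
print(is_sixth_power(12345**6))       # True, via the floating-point path
print(is_sixth_power(12345**6 + 1))   # False
print(is_sixth_power(10**95 + 3))     # False, via the big-integer path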
This question was asked in TopCoder - SRM 577. Given 1 <= a < b <= 1000000, what is the minimum count of numbers to be inserted between a and b such that no two consecutive numbers share a positive divisor greater than 1?
Example:
a = 2184; b = 2200. We need to insert 2 numbers 2195 & 2199 such that the condition holds true. (2184,2195,2199,2200)
a = 7; b= 42. One number is sufficient to insert between them. The number can be 11.
a = 17;b = 42. The GCD is already 1, so no need to insert any number.
Now, the interesting part is that for the given range [1,1000000] we never require more than 2 elements to be inserted between a and b. Even more, the 2 numbers are speculated to be a+1 and b-1, though this is yet to be proven.
Can anyone prove this?
Can it be extended to a larger range of numbers as well? Say, [1, 10^18] etc.
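A brute-force checker for small ranges (a rough sketch; min_inserts is a made-up helper, and a return value of 3 would flag a counterexample to the conjecture):

from math import gcd

def min_inserts(a, b):
    """Fewest numbers to insert strictly between a and b (brute force)."""
    if gcd(a, b) == 1:
        return 0
    if any(gcd(a, n) == 1 and gcd(n, b) == 1 for n in range(a + 1, b)):
        return 1
    for n in range(a + 1, b):
        if gcd(a, n) > 1:
            continue
        for m in range(n + 1, b):
            if gcd(n, m) == 1 and gcd(m, b) == 1:
                return 2
    return 3   # would contradict the conjecture for [1, 1000000]

print(min_inserts(2184, 2200))   # 2
print(min_inserts(7, 42))        # 1
print(min_inserts(17, 42))       # 0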
Doh, sorry. The counterexample I have is
a=3199611856032532876288673657174760
b=3199611856032532876288673657174860
(Would be nice if this stupid site allowed everyone to edit its posts)
Each number has some factorization. If a and b each have a small number of distinct prime factors (DPFs), and the distance between them is large, it is certain there will be at least one number between them whose set of DPFs has no elements in common with the two. That will be our one-number pick n, such that gcd(a,n) == 1 and gcd(n,b) == 1. The higher we go, the more prime factors there are to draw from, so the probability that even gcd(a,b) == 1 gets higher and higher, and likewise for the one-number-in-between solution.
When will a one-number solution not be possible? When a and b are highly composite - each has a lot of DPFs - and are situated not too far from each other, so that each intermediate number has some prime factors in common with one or both of them. But gcd(n,n+1) == 1 for any n, always; so picking one of a+1 or b-1 - specifically the one with the smaller number of DPFs - shrinks the combined DPF set, and then picking one number in between becomes possible. (... this is far from being rigorous, though.)
This is not a full answer, more like an illustration. Let's try this.
-- find a number between the two, that fulfills the condition
import Data.List (union, intersect)

gg a b = let fs = union (fc a) (fc b)
         in filter (\n -> null $ intersect fs $ fc n) [a..b]
fc = factorize   -- factorize: prime factors with multiplicity (assumed helper)
Try it:
Main> gg 5 43
[6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24,26,27,28,29,31,32,33,34,36,37,38,39
,41,42]
Main> gg 2184 2300
[2189,2201,2203,2207,2209,2213,2221,2227,2237,2239,2243,2251,2257,2263,2267,2269
,2273,2279,2281,2287,2291,2293,2297,2299]
Plenty of possibilities for just one number to pick between 5 and 43, or between 2184 and 2300. But what about the given pair, 2184 and 2200?
Main> gg 2184 2200
[]
No one number exists to put in between them. But obviously, gcd (n,n+1) === 1:
Main> gg 2185 2200
[2187,2191,2193,2197,2199]
Main> gg 2184 2199
[2185,2189,2195]
So having picked one adjacent number, we indeed have plenty of possibilities for the 2nd number. Your question asks to prove that this is always the case.
Let's look at their factorizations:
Main> mapM_ (print.(id&&&factorize)) [2184..2200]
(2184,[2,2,2,3,7,13])
(2185,[5,19,23])
(2186,[2,1093])
(2187,[3,3,3,3,3,3,3])
(2188,[2,2,547])
(2189,[11,199])
(2190,[2,3,5,73])
(2191,[7,313])
(2192,[2,2,2,2,137])
(2193,[3,17,43])
(2194,[2,1097])
(2195,[5,439])
(2196,[2,2,3,3,61])
(2197,[13,13,13])
(2198,[2,7,157])
(2199,[3,733])
(2200,[2,2,2,5,5,11])
It is obvious that the higher the range, the easier it is to satisfy the condition, because the variety of contributing prime factors is greater.
(a+1) won't always work by itself - consider the 2185, 2200 case (similarly, for 2184, 2199 the (b-1) alone won't work).
So if we happen to get two highly composite numbers as our a and b, picking a number adjacent to either one will help, because usually it will have only a few factors.
This answer addresses the part of the question which asks for a proof that a subset of {a, a+1, b-1, b} will always work. The question says: “Even more, the 2 numbers are speculated to be a+1 and b-1, though this is yet to be proven. Can anyone prove this?”. This answer shows that no such proof can exist.
An example that disproves that a subset of {a,a+1,b-1,b} always works is {105, 106, 370, 371} = {3·5·7, 2·53, 2·5·37, 7·53}. Let (x,y) denote gcd(x,y). For this example, (a,b)=7, (a,b-1)=5, (a+1,b-1)=2, (a+1,b)=53, so all of the sets {a,b}; {a, a+1, b}; {a,b-1,b}; and {a, a+1, b-1,b} fail.
This example is the result of the following reasoning: We want to find a, b such that every subset of {a, a+1, b-1, b} fails. Specifically, we need the following four gcd's to be greater than 1: (a,b), (a,b-1), (a+1,b-1), (a+1,b). We can do so by finding some e, f that divide the even number a+1, and then constructing b such that the odd b is divisible by f and by some factor of a, while the even b-1 is divisible by e. In this case, e=2 and f=53 (as a consequence of arbitrarily taking a=3·5·7, so that a has several small odd prime factors).
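The failure is easy to verify directly:

from math import gcd

a, b = 105, 371
print(gcd(a, b), gcd(a, b - 1), gcd(a + 1, b - 1), gcd(a + 1, b))
# 7 5 2 53 -- every pairwise gcd exceeds 1, so no subset of {a, a+1, b-1, b} works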
a=3199611856032532876288673657174860
b=3199611856032532876288673657174960
appears to be a counterexample.
Suppose I have a real number. I want to approximate it with something of the form a+sqrt(b) for integers a and b. But I don't know the values of a and b. Of course I would prefer to get a good approximation with small values of a and b. Let's leave it undefined for now what is meant by "good" and "small". Any sensible definitions of those terms will do.
Is there a sane way to find them? Something like the continued fraction algorithm for finding fractional approximations of decimals. For more on the fractions problem, see here.
EDIT: To clarify, it is an arbitrary real number. All I have are a bunch of its digits. So depending on how good an approximation we want, a and b might or might not exist. The best I can think of would be to start adding integers to my real, squaring the result, and seeing if I come close to an integer - pretty much brute force, and not a particularly good algorithm. But if nothing better exists, that would itself be interesting to know.
EDIT: Obviously b has to be zero or positive. But a could be any integer.
No need for continued fractions; just calculate the square roots of all "small" values of b (up to whatever value you feel is still "small" enough), remove everything before the decimal point, and sort/store them all (along with the b that generated each).
Then when you need to approximate a real number, find the radical whose decimal portion is closest to the real number's decimal portion. This gives you b - choosing the correct a is then a simple matter of subtraction.
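A minimal sketch of that table approach (assuming b < 100 counts as "small"):

import bisect, math

LIMIT = 100   # assumption: what counts as a "small" b
table = sorted((math.sqrt(b) % 1.0, b) for b in range(1, LIMIT))
fracs = [f for f, _ in table]

def approx(x):
    # find the radical whose fractional part is closest to x's
    y = x % 1.0
    i = bisect.bisect_left(fracs, y)
    j = min((k for k in (i - 1, i) if 0 <= k < len(table)),
            key=lambda k: abs(fracs[k] - y))
    b = table[j][1]
    return round(x - math.sqrt(b)), b

print(approx(math.pi))   # (-4, 51): -4 + sqrt(51) = 3.14143...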
This is actually more of a math problem than a computer problem, but to answer the question I think you are right that you can use continued fractions. What you do is first represent the target number as a continued fraction. For example, if you want to approximate pi (3.14159265) then the CF is:
3: 7, 15, 1, 288, 1, 2, 1, 3, 1, 7, 4 ...
The next step is create a table of CFs for square roots, then you compare the values in the table to the fractional part of the target value (here: 7, 15, 1, 288, 1, 2, 1, 3, 1, 7, 4...). For example, let's say your table had square roots for 1-99 only. Then you would find the closest match would be sqrt(51) which has a CF of 7: 7,14 repeating. The 7,14 is the closest to pi's 7,15. Thus your answer would be:
sqrt(51)-4
As the closest approximation with b < 100; it is off by about 0.00016. If you allow larger b's then you could get a better approximation.
The advantage of using CFs is that it is faster than working in, say, doubles or other floating-point representations. For example, in the above case you only have to compare two integers (7 and 15), and you can also use indexing to make finding the closest entry in the table very fast.
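For reference, here is a rough way to generate continued-fraction terms (floating-point noise corrupts the later terms, which is why a truncated decimal like 3.14159265 drifts from pi's true expansion [3; 7, 15, 1, 292, ...]):

import math

def cf_terms(x, count=10):
    terms = []
    for _ in range(count):
        a = math.floor(x)
        terms.append(a)
        x -= a
        if x < 1e-9:   # stop before inverting pure rounding error
            break
        x = 1.0 / x
    return terms

print(cf_terms(math.pi))   # [3, 7, 15, 1, 292, ...]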
This can be done using mixed integer quadratic programming very efficiently (though there are no run-time guarantees, as MIQP is NP-hard).
Define:
d := the real number you wish to approximate
b, a := two integers such that a + sqrt(b) is as "close" to d as possible
r := (d - a)^2 - b, the residual of the approximation
The goal is to minimize r. Set up your quadratic program as:

x := [ s  b  t ]^T

D := | 1  0  0 |
     | 0  0  0 |
     | 0  0  0 |

c := [ 0  -1  0 ]^T

with the constraint that s - t = f (where f is the fractional part of d) and b, t integers (s is not). The objective to minimize is then x^T D x + c^T x = s^2 - b, which is exactly the residual r with s playing the role of d - a.
This is a convex (therefore optimally solvable) mixed integer quadratic program since D is positive semi-definite.
Once s, b, t are computed, simply read off the answer: b is used directly, a = d - s, and t can be ignored.
Your problem may be NP-hard; it would be interesting to prove whether it is.
Some of the previous answers use methods that are of time or space complexity O(n), where n is the largest “small number” that will be accepted. By contrast, the following method is O(sqrt(n)) in time, and O(1) in space.
Suppose that the positive real number r = x + y, where x = floor(r) and 0 ≤ y < 1. We want to approximate r by a number of the form a + √b. If x + y ≈ a + √b then x + y - a ≈ √b, so √b ≈ h + y for some integer offset h, and b ≈ (h+y)^2. To make b an integer, we want to minimize the fractional part of (h+y)^2 over all eligible h. There are at most √n eligible values of h. See the following Python code and sample output.
import math, random

def findb(y, rhi):
    bestb = loerror = 1
    for r in range(2, rhi):
        v = (r + y)**2
        u = round(v)
        err = abs(v - u)
        if round(math.sqrt(u))**2 == u: continue
        if err < loerror:
            bestb, loerror = u, err
    return bestb

#random.seed(123456)   # set a seed if testing repetitively
f = [math.pi - 3] + sorted([random.random() for i in range(24)])
print ('     frac   sqrt(b)       error     b')
for frac in f:
    b = findb(frac, 12)
    r = math.sqrt(b)
    t = math.modf(r)[0]   # Get fractional part of sqrt(b)
    print ('{:9.5f} {:9.5f} {:11.7f} {:5.0f}'.format(frac, r, t-frac, b))
(Note 1: This code is in demo form; the parameters to findb() are y, the fractional part of r, and rhi, the square root of the largest small number. You may wish to change usage of parameters. Note 2: The
if round(math.sqrt(u))**2 == u: continue
line of code prevents findb() from returning perfect-square values of b, except for the value b=1, because no perfect square can improve upon the accuracy offered by b=1.)
Sample output follows. About a dozen lines have been elided in the middle. The first output line shows that this procedure yields b=51 to represent the fractional part of pi, which is the same value reported in some other answers.
frac sqrt(b) error b
0.14159 7.14143 -0.0001642 51
0.11975 4.12311 0.0033593 17
0.12230 4.12311 0.0008085 17
0.22150 9.21954 -0.0019586 85
0.22681 11.22497 -0.0018377 126
0.25946 2.23607 -0.0233893 5
0.30024 5.29150 -0.0087362 28
0.36772 8.36660 -0.0011170 70
0.42452 8.42615 0.0016309 71
...
0.93086 6.92820 -0.0026609 48
0.94677 8.94427 -0.0024960 80
0.96549 11.95826 -0.0072333 143
0.97693 11.95826 -0.0186723 143
With the following code added at the end of the program, the output shown below also appears. This shows closer approximations for the fractional part of pi.
frac, rhi = math.pi - 3, 16
print ('     frac       sqrt(b)         error       b    bMax')
while rhi < 1000:
    b = findb(frac, rhi)
    r = math.sqrt(b)
    t = math.modf(r)[0]   # Get fractional part of sqrt(b)
    print ('{:11.7f} {:11.7f} {:13.9f} {:7.0f} {:7.0f}'.format(frac, r, t-frac, b, rhi**2))
    rhi = 3*rhi//2   # integer division, so rhi stays an int for range()
frac sqrt(b) error b bMax
0.1415927 7.1414284 -0.000164225 51 256
0.1415927 7.1414284 -0.000164225 51 576
0.1415927 7.1414284 -0.000164225 51 1296
0.1415927 7.1414284 -0.000164225 51 2916
0.1415927 7.1414284 -0.000164225 51 6561
0.1415927 120.1415831 -0.000009511 14434 14641
0.1415927 120.1415831 -0.000009511 14434 32761
0.1415927 233.1415879 -0.000004772 54355 73441
0.1415927 346.1415895 -0.000003127 119814 164836
0.1415927 572.1415909 -0.000001786 327346 370881
0.1415927 911.1415916 -0.000001023 830179 833569
I do not know if there is any kind of standard algorithm for this kind of problem, but it does intrigue me, so here is my attempt at developing an algorithm that finds the needed approximation.
Call the real number in question r. Then, first I assume that a can be negative; in that case we can reduce the problem so that we only need to find a b such that the decimal part of sqrt(b) is a good approximation of the decimal part of r. Let us now write r as r = x.y, with x being the integer part and y the decimal part.
Now:
b = r^2
  = (x.y)^2
  = (x + .y)^2
  = x^2 + 2 * x * .y + .y^2
  = 2 * x * .y + .y^2   (mod 1)
We now only have to find an x such that 0 = .y^2 + 2 * x * .y (mod 1) (approximately).
Filling that x into the formulas above we get b, and can then calculate a as a ≈ r - sqrt(b). (All of these calculations have to be carefully rounded, of course.)
Now, for the time being I am not sure if there is a way to find this x without brute forcing it. But even then, one can simply use a loop to find an x good enough.
I am thinking of something like this (semi-pseudocode):

max_diff_low = 0.01            // arbitrary accuracy
max_diff_high = 1 - max_diff_low
y = r % 1
v = y^2
addend = 2 * y
x = 0
while (v < max_diff_high && v > max_diff_low)
    x++
    v = (v + addend) % 1
s = x + y                      // candidate square root of b
b = round(s^2)
a = round(r - s)
Now, I think this algorithm is fairly efficient, while even allowing you to specify the desired accuracy of the approximation. One thing that could be done to turn it into an O(1) algorithm is precalculating all the x values and putting them into a lookup table. If one only cares about the first three decimal digits of r (for example), the lookup table would only have 1000 values, which is only 4 KB of memory (assuming that 32-bit integers are used).
Hope this is helpful at all. If anyone finds anything wrong with the algorithm, please let me know in a comment and I will fix it.
EDIT:
Upon reflection I retract my claim of efficiency. There is in fact, as far as I can tell, no guarantee that the algorithm as outlined above will ever terminate, and even if it does, it might take a long time to find a very large x that solves the equation adequately.
One could maybe keep track of the best x found so far and relax the accuracy bounds over time to make sure the algorithm terminates quickly, at the possible cost of accuracy.
These problems are of course non-existent if one simply pre-calculates a lookup table.
Possible Duplicate:
how to get uniformed random between a, b by a known uniformed random function RANDOM(0,1)
In the book Introduction to Algorithms, there is an exercise:
Describe an implementation of the procedure Random(a, b) that only makes calls to Random(0,1). What is the expected running time of your procedure, as a function of a and b? The result of Random(a,b) should be purely uniformly distributed, like that of Random(0,1).
For the Random function, the results are integers between a and b, inclusive. For example, Random(0,1) generates either 0 or 1; Random(a, b) generates a, a+1, a+2, ..., b.
My solution is like this:
for i = 1 to b-a
    r = a + Random(0,1)
return r
the running time is T=b-a
Is this correct? Are the results of my solutions uniformly distributed?
Thanks
What if my new solution is like this:
r = a
for i = 1 to b - a   // including b-a
    r += Random(0,1)
return r
If it is not correct, why does r += Random(0,1) make r not uniformly distributed?
Others have explained why your solution doesn't work. Here's the correct solution:
1) Find the smallest number, p, such that 2^p > b-a.
2) Perform the following algorithm:
r = 0
for i = 1 to p
    r = 2*r + Random(0,1)
3) If r is greater than b-a, go to step 2.
4) Your result is r+a
So let's try Random(1,3).
So b-a is 2.
2^1 = 2, so p will have to be 2 so that 2^p is greater than 2.
So we'll loop two times. Let's try all possible outputs:
00 -> r=0, 0 is not > 2, so we output 0+1 or 1.
01 -> r=1, 1 is not > 2, so we output 1+1 or 2.
10 -> r=2, 2 is not > 2, so we output 2+1 or 3.
11 -> r=3, 3 is > 2, so we repeat.
So 1/4 of the time, we output 1. 1/4 of the time we output 2. 1/4 of the time we output 3. And 1/4 of the time we have to repeat the algorithm a second time. Looks good.
Note that if you have to do this a lot, two optimizations are handy:
1) If you use the same range a lot, have a class that computes p once so you don't have to compute it each time.
2) Many CPUs have fast ways to perform step 1 that aren't exposed in high-level languages. For example, x86 CPUs have the BSR instruction.
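Putting the whole procedure together in Python (a sketch; random.randint(0, 1) stands in for Random(0,1)):

import random

def coin():
    return random.randint(0, 1)   # stand-in for Random(0,1)

def random_range(a, b):
    """Uniform integer in [a, b], built from fair coin flips by rejection."""
    n = b - a
    p = n.bit_length()   # smallest p with 2**p > n
    while True:
        r = 0
        for _ in range(p):
            r = 2 * r + coin()
        if r <= n:         # reject out-of-range values and retry
            return a + r

print([random_range(1, 3) for _ in range(10)])   # e.g. [2, 1, 3, 3, 1, ...]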
No, it's not correct: that method will concentrate around (a+b)/2. It's a binomial distribution.
Are you sure that Random(0,1) produces integers? It would make more sense if it produced floating point values between 0 and 1. Then the solution would be an affine transformation, with running time independent of a and b.
An idea I just had, in case it's about integer values: use bisection. At each step, you have a range low-high. If Random(0,1) returns 0, the next range is low-(low+high)/2, else (low+high)/2-high.
Details and complexity left to you, since it's homework.
That should create (approximately) a uniform distribution.
Edit: approximately is the important word there. Uniform if b-a+1 is a power of 2, not too far off if it's close, but not good enough generally. Ah, well it was a spontaneous idea, can't get them all right.
No, your solution isn't correct; this sum will have a binomial distribution.
However, you can generate a truly random sequence of 0s and 1s and treat it as a binary number:
repeat
    result = a
    steps = ceiling(log2(b - a + 1))
    for i = 0 to steps - 1
        result += (2 ^ i) * Random(0, 1)
until result <= b
KennyTM: my bad.
I read the other answers. For fun, here is another way to find the random number:
Allocate an array with b-a+1 elements.
Set all the values to 1.
Iterate through the array. For each nonzero element, flip the coin, as it were. If it came up 0, set the element to 0.
Whenever, after a complete iteration, you have exactly 1 nonzero element remaining, you have your random number: a+i, where i is the index of the nonzero element (assuming we start indexing at 0). All numbers are then equally likely. (You would have to deal with the case where every remaining element gets zeroed in the same round, but I leave that as an exercise for you.)
This would have O(infinity) ... :)
On average, though, half the remaining numbers are eliminated each pass, so the expected running time is about log_2(b-a) passes.
First of all I assume you are actually accumulating the result, not adding 0 or 1 to a on each step.
Using some probabilities you can prove that your solution is not uniformly distributed. The chance that the resulting value r is (a+b)/2 is greatest. For instance, if a is 0 and b is 7, the chance that you get the value 4 is (7 choose 4) divided by 2 raised to the power 7. The reason is that no matter which 4 out of the 7 values are 1, the result will still be 4.
The running time you estimate is correct.
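To see the skew concretely (for a = 0, b = 7):

from math import comb   # Python 3.8+

n = 7   # b - a coin flips
dist = {k: comb(n, k) / 2**n for k in range(n + 1)}
print(dist)   # P(r = 4) = 35/128, about 27.3%, versus the 12.5% a uniform pick needs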
Your solution's pseudocode should look like:

r = a
for i = 1 to b-a
    r += Random(0,1)
return r

As for uniform distribution: assuming that the underlying random number generator is perfectly uniform, the odds of getting 0 or 1 on each call are 50%, and the number you end up with is the result of that choice made over and over again.

So for a = 1, b = 5, there are 4 choices made.

Getting 1 requires all 4 decisions to be 0; the odds of that are 0.5^4 = 6.25%.

Getting 5 requires all 4 decisions to be 1; the odds of that are 0.5^4 = 6.25%.

As you can see from this, the distribution is not uniform -- the odds of any number should be 20%.
In the algorithm you created, the result is really not equally distributed.

The result "r" will always be either "a" or "a+1". It will never go beyond that.

It should look something like this:

r = a
for i = 1 to b-a
    r = r + Random(0,1)
return r

By carrying "r" through the loop, you accumulate the "randomness" of all the previous iterations.
Given a decimal x, I want to test if x is within 10^-12 of a rational number with denominator 9999 or less. Obviously I could do it by looking at x, 2x, 3x, and so on, and seeing if any of these is sufficiently close to an integer. But is there a more efficient algorithm?
There is an algorithm called the continued fraction algorithm that will give you "best" rational approximations in a certain defined sense. You can stop when your denominator exceeds 9999 and then go back to the previous convergent and compare to see if it is close enough. Of course if the decimal is a small enough rational number the algorithm will terminate early.
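In Python, fractions.Fraction.limit_denominator implements exactly this convergent-based search, so the whole test can be a few lines:

from fractions import Fraction

def near_rational(x, max_den=9999, tol=1e-12):
    """Best fraction with denominator <= max_den, or None if not within tol."""
    f = Fraction(x).limit_denominator(max_den)
    return (f.numerator, f.denominator) if abs(x - f) <= tol else None

print(near_rational(0.5))              # (1, 2)
print(near_rational(3.14159265358979)) # None: pi is nowhere near p/q with q <= 9999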
So, a couple of things:
I assume that by 'decimal x' you're referring to some floating point representation x. That is, you intend to implement this in some format that can't actually perfectly represent .1 or 1/3, etc. If you're doing this by hand or something else that has its own way to represent decimals, this won't apply.
Second, are you tied to those specific denominators and tolerances? I ask because if you're OK with powers of 2 (e.g. denominator up to 8192 with a tolerance of 2^-35), you could easily take advantage of the fact that IEEE-754 style floating-point numbers are all rational. Use the exponent to determine which bit in the mantissa corresponds to 2^-13, then ensure that the next 22 bits of the mantissa are 0 (or up to 22, if the precision isn't high enough to include 22 beyond that point). If so, you've got it.
Now, if you're not willing to alter your algorithm to use base 2, you could at least use this to narrow it down and do some elimination.
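Rather than inspecting mantissa bits directly, the same test can be phrased arithmetically (a sketch under the powers-of-2 assumption above: denominators up to 2^13, tolerance 2^-35):

def near_dyadic(x, den_bits=13, tol_bits=35):
    """Is x within 2**-tol_bits of some k / 2**den_bits?"""
    scaled = x * (1 << den_bits)   # exact: multiplying a float by a power of 2
    return abs(scaled - round(scaled)) <= 2.0 ** (den_bits - tol_bits)

print(near_dyadic(5 / 8192))          # True
print(near_dyadic(5 / 8192 + 1e-9))   # False: off by more than 2**-35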
I see that you've already accepted an answer, but I'm going to chime in anyway.
The brute force method doesn't need to check every denominator. If you work your way backwards, you can eliminate not only the number you just checked but every factor of it. For example, once you've checked 9999 you don't need to check 3333, 1111, 909, 303, 101, 99, 33, 11, 9, 3, or 1; if the number can be expressed as a fraction of one of those, it can also be expressed as a fraction of 9999. It turns out that every number under 5000 is a factor of at least one number from 5000 to 9999, so you've cut your search space in half.
Edit: I found this problem interesting enough to code a solution in Python.
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

def simplify(fraction_tuple):
    divisor = gcd(fraction_tuple[0], fraction_tuple[1])
    # integer division keeps the reduced fraction as a pair of ints (Python 3)
    return fraction_tuple[0] // divisor, fraction_tuple[1] // divisor

def closest_fraction(value, max_denominator=9999, tolerance=1e-12, enforce_tolerance=False):
    best_error, best_result = abs(value), (0, 1)
    # only the top half of the denominators needs checking (see above)
    for denominator in range(max_denominator // 2 + 1, max_denominator + 1):
        numerator = round(value * denominator)
        error = abs(value - numerator / denominator)
        if error < best_error:
            best_error = error
            best_result = int(numerator), denominator
        if error <= tolerance:
            break
    if enforce_tolerance and best_error > tolerance:
        return None
    return simplify(best_result)
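For example (Python 3):

print(closest_fraction(1 / 3))   # (1, 3), found exactly as 1667/5001
frac_pi = 0.14159265358979       # fractional part of pi
print(closest_fraction(frac_pi, enforce_tolerance=True))   # None: nothing within 1e-12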