test if a decimal is sufficiently close to a rational number - algorithm

Given a decimal x, I want to test if x is within 10^-12 of a rational number with denominator 9999 or less. Obviously I could do it by looking at x, 2x, 3x, and so on, and seeing if any of these is sufficiently close to an integer. But is there a more efficient algorithm?

There is an algorithm called the continued fraction algorithm that will give you "best" rational approximations in a certain defined sense. You can stop when your denominator exceeds 9999 and then go back to the previous convergent and compare to see if it is close enough. Of course if the decimal is a small enough rational number the algorithm will terminate early.
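For illustration, here is a minimal Python sketch of that approach (the function name and tolerance handling are my own choices); it walks the convergents p/q of x via the standard recurrence and backs up one convergent when the denominator limit is crossed:

import math

def best_rational(x, max_den=9999, tol=1e-12):
    # Convergents p/q of the continued fraction of x, via the recurrence
    # p_k = a_k * p_(k-1) + p_(k-2), q_k = a_k * q_(k-1) + q_(k-2).
    p0, q0, p1, q1 = 0, 1, 1, 0
    r = x
    while True:
        a = math.floor(r)            # next continued fraction term
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        if q1 > max_den:
            p1, q1 = p0, q0          # too big: back up to previous convergent
            break
        if r == a or abs(x - p1 / q1) <= tol:
            break                    # exact term, or already close enough
        r = 1 / (r - a)
    return (p1, q1) if abs(x - p1 / q1) <= tol else None

For example, best_rational(0.333333333333) returns (1, 3), while best_rational(math.pi) returns None, since no denominator below 10000 gets within 10^-12 of pi.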

So, a couple of things:
I assume that by 'decimal x' you're referring to some floating point representation x. That is, you intend to implement this in some format that can't actually perfectly represent .1 or 1/3, etc. If you're doing this by hand or something else that has its own way to represent decimals, this won't apply.
Second, are you tied to those specific denominators and tolerances? I ask because if you're OK with powers of 2 (e.g. denominator up to 8192 with a tolerance of 2^-35), you could easily take advantage of the fact that IEEE-754 style floating-point values are all rational numbers. Use the exponent to determine which bit of the mantissa corresponds to 2^-13, then ensure that the next 22 bits of the mantissa are 0 (or up to 22, if the precision isn't high enough to include 22 bits beyond that point). If so, you've got it.
Now, if you're not willing to alter your algorithm to use base 2, you could at least use this to narrow it down and do some elimination.
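As a sketch of what that bit test amounts to arithmetically (near_dyadic is a hypothetical helper of mine, not anything standard): x is within 2^-35 of a multiple of 2^-13 exactly when x * 2^13 is within 2^-22 of an integer, and multiplying a double by a power of two is exact.

def near_dyadic(x, den_bits=13, tol_bits=35):
    # x within 2**-tol_bits of k / 2**den_bits  <=>
    # x * 2**den_bits within 2**(den_bits - tol_bits) of the integer k.
    scaled = x * 2.0 ** den_bits     # exact: power-of-two scaling
    return abs(scaled - round(scaled)) <= 2.0 ** (den_bits - tol_bits)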

I see that you've already accepted an answer, but I'm going to chime in anyway.
The brute force method doesn't need to check every denominator. If you work your way backwards, you can eliminate not only the number you just checked but every factor of it. For example, once you've checked 9999 you don't need to check 3333, 1111, 909, 303, 101, 99, 33, 11, 9, 3, or 1; if the number can be expressed as a fraction of one of those, it can also be expressed as a fraction of 9999. It turns out that every number under 5000 is a factor of at least one number 5000 to 9999, so you've cut your search space in half.
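That coverage claim is quick to verify in Python: consecutive multiples of d are only d < 5000 apart, while the range 5000..9999 is 5000 wide, so the smallest multiple of d at or above 5000 always lands at or below 9999.

# ceil(5000/d) * d, written with floor division, stays within 5000..9999:
assert all(-(-5000 // d) * d <= 9999 for d in range(1, 5000))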
Edit: I found this problem interesting enough to code a solution in Python.
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

def simplify(fraction_tuple):
    divisor = gcd(fraction_tuple[0], fraction_tuple[1])
    return fraction_tuple[0] // divisor, fraction_tuple[1] // divisor

def closest_fraction(value, max_denominator=9999, tolerance=1e-12, enforce_tolerance=False):
    best_error, best_result = abs(value), (0, 1)
    # Every denominator below max_denominator/2 divides some denominator in
    # the top half of the range, so only the top half needs checking.
    for denominator in range(max_denominator // 2 + 1, max_denominator + 1):
        numerator = round(value * denominator)
        error = abs(value - numerator / denominator)
        if error < best_error:
            best_error = error
            best_result = int(numerator), denominator
        if error <= tolerance:
            break
    if enforce_tolerance and best_error > tolerance:
        return None
    return simplify(best_result)

Related

Early termination of fractional exponent calculation?

I need to write a function that takes the sixth root of something (equivalently, raises something to the 1/6 power), and checks if the answer is an integer. I want this function to be as fast and as optimized as possible, and since this function needs to run a lot, I'm thinking it might be best to not have to calculate the whole root.
How would I write a function (language agnostic, although Python/C/C++ preferred) that returns False (or 0 or something equivalent) before having to compute the entirety of the sixth root? For instance, if I was taking the 6th root of 65, then my function should, upon realizing that the result is not an int, stop calculating and return False, instead of first computing that the 6th root of 65 is 2.00517474515, then checking if 2.00517474515 is an int, and finally returning False.
Of course, I'm asking this question under the impression that it is faster to do the early termination thing than the complete computation, using something like
print(isinstance(num**(1/6), int))
Any help or ideas would be greatly appreciated. I would also be interested in answers that are generalizable to lots of fractional powers, not just x^(1/6).
Here are some ideas of things you can try that might help eliminate non-sixth-powers quickly. For actual sixth powers, you'll still end up eventually needing to compute the sixth root.
Check small cases
If the numbers you're given have a reasonable probability of being small (less than 12 digits, say), you could build a table of small cases and check against that. There are only 100 sixth powers smaller than 10**12. If your inputs will always be larger, then there's little value in this test, but it's still a very cheap test to make.
Eliminate small primes
Any small prime factor must appear with an exponent that's a multiple of 6. To avoid too many trial divisions, you can bundle up some of the small factors.
For example, 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 = 223092870, which is small enough to fit in a single 30-bit limb in Python, so a single modulo operation with that modulus should be fast.
So given a test number n, compute g = gcd(n, 223092870), and if the result is not 1, check that n is exactly divisible by g ** 6. If not, n is not a sixth power, and you're done. If n is exactly divisible by g**6, repeat with n // g**6.
Check the value modulo 124488 (for example)
If you carried out the previous step, then at this point you have a value that's not divisible by any prime smaller than 25. Now you can do a modulus test with a carefully chosen modulus: for example, any sixth power that's relatively prime to 124488 = 8 * 9 * 7 * 13 * 19 is congruent to one of the six values [1, 15625, 19657, 28729, 48385, 111385] modulo 124488. There are larger moduli that could be used, at the expense of having to check more possible residues.
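If you'd rather reproduce that residue list than take it on trust, a one-off brute-force computation in Python is cheap (my own sketch):

from math import gcd

# Sixth powers of all units modulo 124488 take only six distinct values.
residues = sorted({pow(i, 6, 124488) for i in range(1, 124488) if gcd(i, 124488) == 1})
print(residues)   # expect [1, 15625, 19657, 28729, 48385, 111385]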
Check whether it's a square
Any sixth power must be a square. Since Python (at least, Python >= 3.8) has a built-in integer square root function that's reasonably fast, it's efficient to check whether the value is a square before going for computing a full sixth root. (And if it is a square and you've already computed the square root, now you only need to extract a cube root rather than a sixth root.)
Use floating-point arithmetic
If the input is not too large, say 90 digits or smaller, and it's a sixth power then floating-point arithmetic has a reasonable chance of finding the sixth root exactly. However, Python makes no guarantees about the accuracy of a power operation, so it's worth making some additional checks to make sure that the result is within the expected range. For larger inputs, there's less chance of floating-point arithmetic getting the right result. The sixth root of (2**53 + 1)**6 is not exactly representable as a Python float (making the reasonable assumption that Python's float type matches the IEEE 754 binary64 format), and once n gets past 308 digits or so it's too large to fit into a float anyway.
Use integer arithmetic
Once you've exhausted all the cheap tricks, you're left with little choice but to compute the floor of the sixth root, then compare the sixth power of that with the original number.
Here's some Python code that puts together all of the tricks listed above. You should do your own timings targeting your particular use-case, and choose which tricks are worth keeping and which should be adjusted or thrown out. The order of the tricks will also be significant.
from math import gcd, isqrt

# Sixth powers smaller than 10**12.
SMALL_SIXTH_POWERS = {n**6 for n in range(100)}

def is_sixth_power(n):
    """
    Determine whether a positive integer n is a sixth power.
    Returns True if n is a sixth power, and False otherwise.
    """
    # Sanity check (redundant with the small cases check)
    if n <= 0:
        return n == 0
    # Check small cases
    if n < 10**12:
        return n in SMALL_SIXTH_POWERS
    # Try a floating-point check if there's a realistic chance of it working
    if n < 10**90:
        s = round(n ** (1/6.))
        if n == s**6:
            return True
        elif (s - 1) ** 6 < n < (s + 1)**6:
            return False
        # No conclusive result; fall through to the next test.
    # Eliminate small primes
    while True:
        g = gcd(n, 223092870)
        if g == 1:
            break
        n, r = divmod(n, g**6)
        if r:
            return False
    # Check modulo small primes (requires that
    # n is relatively prime to 124488)
    if n % 124488 not in {1, 15625, 19657, 28729, 48385, 111385}:
        return False
    # Find the square root using math.isqrt, throw out non-squares
    s = isqrt(n)
    if s**2 != n:
        return False
    # Compute the floor of the cube root of s
    # (which is the same as the floor of the sixth root of n).
    # Code stolen from https://stackoverflow.com/a/35276426/270986
    a = 1 << (s.bit_length() - 1) // 3 + 1
    while True:
        d = s // a**2
        if a <= d:
            return a**3 == s
        a = (2*a + d) // 3
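A quick sanity check of the routine (illustrative inputs of my own choosing):

print(is_sixth_power(15**6))             # True, via the small-case table
print(is_sixth_power(10**100))           # False, caught by the small-prime loop
print(is_sixth_power((10**20 + 39)**6))  # True, via the full integer path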

Generating strongly biased random numbers for tests

I want to run tests with randomized inputs and need to generate 'sensible' random
numbers, that is, numbers that match well enough to pass the tested function's
preconditions, but hopefully wreak havoc deeper inside its code.
math.random() (I'm using Lua) produces uniformly distributed random
numbers. Scaling these up will give far more big numbers than small numbers,
and there will be very few integers.
I would like to skew the random numbers (or generate new ones using the old
function as a randomness source) in a way that strongly favors 'simple' numbers,
but will still cover the whole range, i.e., extending up to positive/negative infinity
(or ±1e309 for double). This means:
numbers up to, say, ten should be most common,
integers should be more common than fractions,
numbers ending in 0.5 should be the most common fractions,
followed by 0.25 and 0.75; then 0.125,
and so on.
A different description: Fix a base probability x such that probabilities
will sum to one and define the probability of a number n as x^k
where k is the generation in which n is constructed as a surreal
number^1. That assigns x to 0, x^2 to -1 and +1,
x^3 to -2, -1/2, +1/2 and +2, and so on. This
gives a nice description of something close to what I want (it skews a bit too
much), but is near-unusable for computing random numbers. The resulting
distribution is nowhere continuous (it's fractal!), I'm not sure how to
determine the base probability x (I think for infinite precision it would be
zero), and computing numbers based on this by iteration is awfully
slow (spending near-infinite time to construct large numbers).
Does anyone know of a simple approximation that, given a uniformly distributed
randomness source, produces random numbers very roughly distributed as
described above?
I would like to run thousands of randomized tests, quantity/speed is more
important than quality. Still, better numbers mean fewer inputs get rejected.
Lua has a JIT, so performance is usually not much of an issue. However, jumps based
on randomness will break every prediction, and many calls to math.random()
will be slow, too. This means a closed formula will be better than an
iterative or recursive one.
^1 Wikipedia has an article on surreal numbers, with
a nice picture. A surreal number is a pair of two surreal
numbers, i.e. x := {n|m}, and its value is the number in the middle of the
pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as rational). If one side
of the pair is empty, that's interpreted as increment (or decrement, if right
is empty) by one. If both sides are empty, that's zero. Initially, there are
no numbers, so the only number one can build is 0 := { | }. In generation
two one can build numbers {0| } =: 1 and { |0} =: -1, in three we get
{1| } =: 2, {|1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some
more complex representations of known numbers, e.g. {-1|1} = 0). Note that
e.g. 1/3 is never generated by finite numbers because it is an infinite
fraction – the same goes for floats, 1/3 is never represented exactly.
How's this for an algorithm?
Generate a random float in (0, 1) with a library function
Generate a random integral roundoff point according to a desired probability density function (e.g. 0 with probability 0.5, 1 with probability 0.25, 2 with probability 0.125, ...).
'Round' the float at that roundoff point (e.g. floor(float_val * 2^roundoff + 0.5) / 2^roundoff)
Generate a random integral exponent according to another PDF (e.g. 0, 1, 2, 3 with probability 0.1 each, and decreasing thereafter)
Multiply the rounded float by 2^exponent.
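A minimal Python sketch of those steps (the geometric distributions for the roundoff point and the exponent are illustrative choices, not the only option; the question's Lua version would look analogous with math.random):

import random

def simple_biased():
    x = random.random()
    # Roundoff point: 0 with prob 1/2, 1 with prob 1/4, 2 with prob 1/8, ...
    roundoff = 0
    while random.random() < 0.5:
        roundoff += 1
    scale = 2 ** roundoff
    x = round(x * scale) / scale      # keep only `roundoff` binary places
    # Exponent: also geometric here, so small magnitudes dominate.
    exponent = 0
    while random.random() < 0.5:
        exponent += 1
    return random.choice((-1, 1)) * x * 2 ** exponent

With roundoff 0 the value rounds to 0 or 1, so integers come out most often, then halves, then quarters, matching the wish list in the question.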
For a surreal-like decimal expansion, you need a random binary number.
Even bits tell you whether to stop or continue, odd bits tell you whether to go right or left on the tree:
> 0... => 0.0 [50%] Stop
> 100... => -0.5 [<12.5%] Go, Left, Stop
> 110... => 0.5 [<12.5%] Go, Right, Stop
> 11100... => 0.25 [<3.125%] Go, Right, Go, Left, Stop
> 11110... => 0.75 [<3.125%] Go, Right, Go, Right, Stop
> 1110100... => 0.125
> 1110110... => 0.375
> 1111100... => 0.625
> 1111110... => 0.875
One way to quickly generate a random binary number is by looking at the decimal digits in math.random() and replacing 0-4 with '0' and 5-9 with '1':
0.8430419054348022
becomes
1000001010001000
which becomes -0.5
0.5513009827118367
becomes
1100001101001011
which becomes 0.5
etc.
Haven't done much lua programming, but in Javascript you can do:
Math.random().toString().substring(2).split("").map(
function(digit) { return digit >= "5" ? 1 : 0 }
);
or true binary expansion:
Math.random().toString(2).substring(2)
Not sure which is more genuinely "random" -- you'll need to test it.
You could generate surreal numbers in this way, but most of the results will be decimals in the form a/2^b, with relatively few integers. On Day 3, only 2 integers are produced (-3 and 3) vs. 6 decimals, on Day 4 it is 2 vs. 14, and on Day n it is 2 vs (2^n-2).
If you add two uniform random numbers from math.random(), you get a new distribution which has a "triangle" like distribution (linearly decreasing from the center). Adding 3 or more will get a more 'bell curve' like distribution centered around 0:
math.random() + math.random() + math.random() - 1.5
Dividing by a random number will get a truly wild number:
A/(math.random()+1e-300)
This will return results between A and (theoretically) A*1e+300,
though my tests show that 50% of the time the results are between A and 2*A
and about 75% of the time between A and 4*A.
Putting them together, we get:
round(6*(math.random()+math.random()+math.random() - 1.5)/(math.random()+1e-300))
This has over 70% of the number returned between -9 and 9 with a few big numbers popping up rarely.
Note that the average and sum of this distribution will tend to diverge towards a large negative or positive number, because the more times you run it, the more likely it is for a small number in the denominator to cause the number to "blow up" to a large number such as 147,967 or -194,137.
See gist for sample code.
Josh
You can immediately calculate the nth born surreal number.
Example, the 1000th Surreal number is:
convert to binary:
1000 dec = 1111101000 bin
1's become pluses and 0's minuses:
1111101000
+++++-+---
The leading '1' bit contributes 0; the run of identical bits that follows contributes +1 each (for 1's) or -1 each (for 0's); after the run, each subsequent bit contributes 1/2, 1/4, 1/8, etc. in turn, with sign + for 1's and - for 0's.
1  1  1  1  1   0    1    0    0     0
+  +  +  +  +   -    +    -    -     -
0  1  1  1  1  1/2  1/4  1/8  1/16  1/32
+0+1+1+1+1-1/2+1/4-1/8-1/16-1/32
= 3+17/32
= 113/32
= 3.53125
The binary length in bits of this representation is equal to the day on which that number was born.
Left and right numbers of a surreal number are the binary representation with its tail stripped back to the last 0 or 1 respectively.
Surreal numbers have an even distribution between -1 and 1, where half of the numbers created up to a particular day will exist. A quarter of the numbers exist, evenly distributed, between -2 to -1 and 1 to 2, and so on. The max range will be negative to positive integers matching the number of days you provide. The numbers approach infinity slowly, because each day only extends the negative and positive ranges by one, while each day contains twice as many numbers as the last.
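Here is a short Python rendering of that rule (my own sketch, using Fraction for exactness); nth_surreal(1000) returns Fraction(113, 32) = 3.53125 as in the worked example, and nth_surreal(1) returns 0, the first-born number:

from fractions import Fraction

def nth_surreal(n):
    bits = bin(n)[3:]              # drop '0b' and the leading bit (worth 0)
    value = Fraction(0)
    i = 0
    # The run of bits matching the first post-leading bit: +1 or -1 each.
    while i < len(bits) and bits[i] == bits[0]:
        value += 1 if bits[i] == '1' else -1
        i += 1
    # Every later bit contributes +/- 1/2, 1/4, 1/8, ...
    step = Fraction(1, 2)
    while i < len(bits):
        value += step if bits[i] == '1' else -step
        step /= 2
        i += 1
    return value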
Edit:
A good name for this bit representation is "sinary"
Negative numbers are transpositions. For example:
100010101001101s -> negative number (always starts 10...)
111101010110010s -> positive number (always starts 11...)
and we notice that all bits flip except the first one, i.e. negation is a bit transposition.
NaN is => 0s (since all other numbers start with 1), which makes it ideal for representation in bit registers in a computer, since leading zeros are required (we don't make ternary computers anymore... too bad).
All Conway surreal algebra can be done on these numbers without needing to convert to binary or decimal.
The sinary format can be seen as a one plus a simple one's counter with a 2's complement decimal representation attached.
Here is an incomplete report on finary (similar to sinary): https://github.com/peawormsworth/tools/blob/master/finary/Fine%20binary.ipynb

Random function returning number from interval

How would you implement a function that returns a random number from the interval 1..1000,
given a number N that determines the chance of reaching higher or lower numbers?
It should behave as follows:
e.g.
if N = 0 and we generate the random number many times, we get a certain equilibrium (every number from the interval 1..1000 has an equal chance).
if N = 2321 (I call it a positive factor) it will be very hard to get a small number (numbers > 900 will come up often, numbers near 500 sometimes, and numbers < 100 rarely). The higher the positive factor, the higher the probability of high numbers.
if N = -2321 (a negative factor) this will be the opposite of the positive factor.
It's clear that for a given N the generated numbers will follow a certain characteristic curve. Could you advise me how to achieve this goal, and what curves I can create? What possibilities do I have here? How would you limit the positive and negative factors, etc.?
Thank you for your help.
If you generate a uniform random number, and then raise it to a power > 1, it will get smaller, but stay in the range [0, 1]. If you raise it to a power greater than 0 but less than 1, it will get larger, but stay in the range [0, 1].
So you can use the exponent to pick a power when generating your random numbers.
import random

def biased_random(scale, bias):
    return random.random() ** bias * scale

sum(biased_random(1000, 2.5) for x in range(100)) / 100
291.59652962214676  # average less than 500
max(biased_random(1000, 2.5) for x in range(100))
963.81166161355998  # but still occasionally generates large numbers
sum(biased_random(1000, .3) for x in range(100)) / 100
813.90199860117821  # average > 500
min(biased_random(1000, .3) for x in range(100))
265.25040459294883  # but still occasionally generates small numbers
This problem is severely underspecified; there are a million ways to solve it as stated.
Instead of arbitrary positive and negative values, think about the meaning behind them. IMHO, the beta distribution is the one you should consider. By selecting the parameters α and β you can appropriately modulate the behavior of your distribution.
See what shapes you can get with certain α and β: http://en.wikipedia.org/wiki/Beta_distribution#Shapes
http://en.wikipedia.org/wiki/File:Beta_distribution_pdf.svg
Let's decide at the outset that we will pick numbers from [0,1], because it makes things simpler.
n is the number that represents the distribution (0, 2321 or -2321, as in the example).
We only need a solution for n > 0, because for n < 0 you can take the positive version of n and subtract the result from 1.
One simple idea for a PDF on the interval [0,1] is x^n (or at least something of this shape).
The CDF is then the integral of x^n, which is x^(n+1)/(n+1).
Because the CDF must equal 1 at the right end of the interval (in our case, at 1), the properly weighted CDF is x^(n+1).
To generate this kind of distribution, we must calculate the quantile function.
The quantile function is just the inverse of the CDF, which in our case is x^(1/(n+1)).
And that is it. Your QF is x^(1/(n+1)).
To generate numbers from [0,1], pick a uniformly distributed random number from [0,1] (the most common random function in programming languages) and raise it to the power 1/(n+1).
The only problem I see is that it may be hard to compute 1-x^(1/(-n+1)) accurately when n < 0, but I think you can use log1p, so it becomes exp(log1p(-x^(1/(-n+1)))) for n < 0.
Conclusion, with normalization:
if n >= 0: (x^(1/(n/1000+1)))*1000
if n < 0: exp(log1p(-(x^(1/(-(n/1000)+1)))))*1000
where x is a uniformly distributed random value in the interval [0,1].
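Putting that conclusion into runnable Python (a sketch; the final mapping onto the integers 1..1000 is my own choice):

import random

def skewed_random(n):
    x = random.random()
    if n >= 0:
        v = x ** (1 / (n / 1000 + 1))       # skews toward 1 as n grows
    else:
        v = 1 - x ** (1 / (-n / 1000 + 1))  # mirror image for negative n
    return 1 + round(v * 999)               # map [0, 1] onto 1..1000

With n = 0 this degenerates to a plain uniform draw; with n = 2321, values above 900 come up often while values below 100 are rare, as the question asks.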

Complementary Multiply With Carry period

I'm trying to figure out what the period of a particular CMWC pseudo-random number generator would be.
The Wikipedia page has some examples of the periods for different parameters for both a standard MWC and a CMWC, but doesn't really explain how these are calculated.
Is there an easy way to calculate this for a given multiplier, r number of seeds, and base b?
For example, say I have the following parameters (for a CMWC):
b=2^32-1
a=4294966362
r=32
I have verified that p=a*b^r+1 is prime.
edit: oops, copied the wrong a value. Fixed it so p should be prime now.
b is a primitive root when its order is p-1; then b^k can assume every value from 1 to p-1, depending on the value of k.
The order of an element is the minimum s with b^s = 1 (mod p).
b is a primitive root if, and only if, b^(phi(p)/k) != 1 (!= means different) for every divisor k of phi(p), where phi is Euler's totient function (http://en.wikipedia.org/wiki/Euler%27s_totient_function); since p is prime, phi(p) = p-1.
In your example:
- phi(p) = a*b^r = p - 1.
- The prime factorization of a is 2 * 3 * 31 * 23091217.
- The prime factorization of b is 3 * 5 * 17 * 257 * 65537.
So (p-1) = 2*(3^33)*(5^32)*(17^32)*31*(257^32)*(65537^32)*23091217.
p-1 has 322,570,512 divisors (http://en.wikipedia.org/wiki/Divisor_function).
With modular exponentiation, it is possible to see that
b^((p-1)/3) = 1 (mod p)
so the order of b is different from p-1; b is not a primitive root.
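That claim is easy to reproduce with Python's three-argument pow (a quick sketch; the numbers are taken from the question):

b = 2**32 - 1
a = 4294966362
r = 32
p = a * b**r + 1
# b^((p-1)/3) mod p == 1, so ord(b) divides (p-1)/3: b is not a primitive root.
print(pow(b, (p - 1) // 3, p) == 1)   # expect True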
It is better to choose numbers a and b with few divisors; then p-1 will also have few divisors, and it will be easy to calculate phi(p)/k for every divisor k. The order of b is the smallest phi(p)/k = (p-1)/k for which b^((p-1)/k) = 1 (mod p).
In Marsaglia's article "On the randomness of Pi and other decimal expansions" (http://interstat.statjournals.net/YEAR/2005/articles/0510005.pdf), there are some values of a, b and r. Periods that are not full are useful too (see the article).
Base b=2^32 doesn't give a full period, but it returns integers from 0 to 2^32-1. Base b=2^32-1 can't return unbiased 32-bit integers (it will never return the number 2^32-1 = 4294967295).
I've misunderstood what is required to get a full period:
b must also be a primitive root of p, which I don't think is the case here (to be honest, I don't have the math background to even begin to understand what a primitive root is). If there is a full period, the period would be a*b^r. As far as I can tell, it's impossible (or at least very difficult) to tell what the period would be otherwise (and quite frankly, it's not useful because in practice a full period is desired).
Source: Journal Of Modern Applied Statistical Methods

How to compute the "15% of the time" randomness?

I'm looking for a decent, elegant method of calculating this simple logic.
Right now I can't think of one, it's spinning my head.
I am required to do some action only 15% of the time.
I'm used to "50% of the time" where I just mod the milliseconds of the current time and see if it's odd or even, but I don't think that's elegant.
How would I elegantly calculate "15% of the time"? Random number generator maybe?
Pseudo-code or any language are welcome.
Hope this is not subjective, since I'm looking for the "smartest" short-hand method of doing that.
Thanks.
Solution 1 (double)
get a random double between 0 and 1 (whatever language you use, there must be such a function)
do the action only if it is smaller than 0.15
Solution 2 (int)
You can also achieve this by creating a random int and seeing if it is divisible by 6 or 7. UPDATE --> This is not optimal: the chance that a random int is divisible by 6 or 7 is 2/7 ≈ 28.6%, not 15%.
You can produce a random number between 0 and 99, and check if it's less than 15:
if (rnd.Next(100) < 15) ...
You can also reduce the numbers, as 15/100 is the same as 3/20:
if (rnd.Next(20) < 3) ...
Random number generator would give you the best randomness. Generate a random between 0 and 1, test for < 0.15.
Using the time like that isn't truly random, as it's influenced by processing time. If a task takes less than 1 millisecond to run, then the next random choice will be the same one.
That said, if you do want to use the millisecond-based method, do milliseconds % 20 < 3.
Just use a PRNG. As always, it's a performance vs. accuracy trade-off. I think rolling your own based directly on the time is a waste of time (pun intended). You'll probably get biasing effects even worse than a run-of-the-mill linear congruential generator.
In Java, I would use nextInt:
myRNG.nextInt(100) < 15
Or (mostly) equivalently:
myRNG.nextInt(20) < 3
There are ways to get a random integer in other languages (multiple ways actually, depending on how accurate it has to be).
Using modulo arithmetic you can easily do something every Xth run, like so (modulo 6 will give you roughly 15%, actually 1/6 ≈ 16.7%):
if (microtime() % 6 === 0) do it
other thing:
if (rand(0,1) < 0.15) do it
boolean array[100] = {true: first 15, false: rest};
shuffle(array);
while (array.size > 0)
{
    // pop first element of the array.
    if (element == true)
        do_action();
    else
        do_something_else();
}
// redo the whole thing again when no elements are left.
Here's one approach that combines randomness with a guarantee that you eventually get a positive outcome in a predictable range:
Have a target (15 in your case), a counter (initialized to 0), and a flag (initialized to false).
Accept a request.
If the counter is 15, reset the counter and the flag.
If the flag is true, increment the counter and return a negative outcome.
Get a random true or false based on one of the methods described in other answers, but use a probability of 1/(15-counter).
Increment the counter.
If the result is true, set the flag to true and return a positive outcome. Else return a negative outcome.
Accept the next request.
This means that the first request has a probability of 1/15 of returning a positive outcome, but by the 15th request, if no positive result has been returned yet, the probability is 1/1.
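In Python, that scheme might look like this (a sketch with slightly different bookkeeping: the counter is incremented before the draw, so the probability is 1/(target - counter + 1); every window of 15 requests then contains exactly one positive outcome, at a uniformly random position):

import random

class GuaranteedRate:
    def __init__(self, target=15):
        self.target = target
        self.counter = 0
        self.flag = False

    def request(self):
        if self.counter == self.target:      # start a fresh window of 15
            self.counter = 0
            self.flag = False
        self.counter += 1
        if self.flag:                        # already hit in this window
            return False
        if random.random() < 1 / (self.target - self.counter + 1):
            self.flag = True
            return True
        return False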
This quote is from a great article about how to use a random number generator:
Note: Do NOT use
y = rand() % M;
as this focuses on the lower bits of rand(). For linear congruential random number generators, which rand() often is, the lower bytes are much less random than the higher bytes. In fact the lowest bit cycles between 0 and 1. Thus rand() may cycle between even and odd (try it out). Note rand() does not have to be a linear congruential random number generator. It's perfectly permissible for it to be something better which does not have this problem.
and it contains formulas and pseudo-code for
r = [0,1) = {r: 0 <= r < 1} real
x = [0,M) = {x: 0 <= x < M} real
y = [0,M) = {y: 0 <= y < M} integer
z = [1,M] = {z: 1 <= z <= M} integer

Resources