How to find a specific sequence within the decimals of PI? - algorithm

I want to find a specific sequence of digits in the decimals of PI and that involves first computing PI to (quite possibly) infinity. The problem is that I don't know how to make a variable store that many digits or how to just use the newly computed digit so I can compare it to my sequence.
So how can I calculate PI and keep only the last decimal as an integer?
Thanks in advance.

This kind of problem can be solved very elegantly using lazy evaluation, as found in Haskell, or using generators in Python: produce one digit of Pi at a time and check it against the corresponding position of the target sequence being searched.
The advantage of either approach is that you don't have to generate a (potentially) infinite sequence of digits up front; you only generate as many as needed until you find what you're looking for. Of course, if the specific sequence really doesn't appear in the number Pi the algorithm will iterate forever, but at least the computer executing the program won't run out of memory.
Alternatively: you could use the BBP Formula, or a similar algorithm which allows the extraction of a specific digit in Pi.
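For reference, the BBP series itself is easy to write down, and summing it directly gives Pi to double precision after only a dozen or so terms. The actual digit-extraction trick additionally evaluates this sum modulo 1 with modular exponentiation, which is what lets you jump straight to the n-th hexadecimal digit; that part is not shown here. A minimal sketch (my own illustration, not part of the original answer):

from math import isclose, pi as math_pi

def bbp_pi(terms=12):
    # Straightforward summation of the BBP series:
    # pi = sum_k 1/16^k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
    total = 0.0
    for k in range(terms):
        total += (1 / 16**k) * (4 / (8*k + 1) - 2 / (8*k + 4)
                                - 1 / (8*k + 5) - 1 / (8*k + 6))
    return total

print(bbp_pi(), isclose(bbp_pi(), math_pi))  # agrees with math.pi after ~12 terms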

You can use an iterative algorithm for calculating Pi, for example, the Gauss–Legendre algorithm.
To implement it, you will need a library that does arbitrary-precision arithmetic; one such library is GMP.
Apparently, someone has done most of the work for you: http://gmplib.org/pi-with-gmp.html

Here is an implementation in Python of a streaming algorithm described by Jeremy Gibbons in Unbounded Spigot Algorithms for the Digits of Pi (2004), Chapter 6:
def generate_digits_of_pi():
    q = 1
    r = 180
    t = 60
    i = 2
    while True:
        digit = ((i * 27 - 12) * q + r * 5) // (t * 5)
        yield digit
        u = i * 3
        u = (u + 1) * 3 * (u + 2)
        r = u * 10 * (q * (i * 5 - 2) + r - t * digit)
        q *= 10 * i * (i * 2 - 1)
        t *= u
        i += 1
# Demo
import sys
from time import sleep

digits = generate_digits_of_pi()
for i in range(1000):
    print(next(digits), end="")
    sys.stdout.flush()
    sleep(0.05)
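To connect this back to the original question, here is a minimal sketch of the search itself (my own illustration): stream digits from the generator and compare a rolling window of the last len(target) digits against the target.

from collections import deque

def find_in_pi(target):
    # target is a string of decimal digits, e.g. "26535"
    window = deque(maxlen=len(target))
    for pos, digit in enumerate(generate_digits_of_pi()):
        window.append(str(digit))
        if "".join(window) == target:
            return pos - len(target) + 1  # position of the first matching digit

print(find_in_pi("26535"))  # 6, counting the leading "3" as position 0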

Related

Minimize integer round-off error in PLL frequency calculation

On a particular STM32 microcontroller, the system clock is driven by a PLL whose frequency F is given by the following formula:
F := (S/M * (N + K/8192)) / P
S is the PLL input source frequency (1 - 64 000 000 Hz, i.e. up to 64 MHz).
The other factors M, N, K, and P are the parameters the user can modify to calibrate the frequency. Judging by the bitmasks in the SDK I'm using, the value of each can be limited to a maximum of M < 64, N < 512, K < 8192, and P < 128.
Unfortunately, my target firmware does not have FPU support, so floating-point arithmetic is out. Instead, I need to compute F using integer-only arithmetic.
I have tried to rearrange the given formula with 3 goals in mind:
Expand and distribute all multiplication factors
Minimize the number of factors in each denominator
Minimize the total number of divisions performed
If two expressions have the same number of divisions, choose the one whose denominators have the least maximum (identified in earlier paragraph)
However, my attempts to expand and rearrange the expression all produce errors greater than the original formula as it was first expressed verbatim.
To test out different arrangements of the formula and compare error, I've written a small Go program you can run online here.
Is it possible to improve this formula so that error is minimized when using integer arithmetic? Also are any of my goals listed above incorrect or useless?
I took your formula (the first pair of parentheses is redundant, so I removed it):
 S             K
--- * ( N + ------ )
 M            8192
--------------------
          P
and ran through QuickMath [1], and I got this:
S * (8192 * N + K)
------------------
8192 * M * P
or in Go code:
S * (8192 * N + K) / (8192 * M * P)
So it does reduce the number of divisions. You could improve it further by pulling out the lower constant:
S * (8192 * N + K) / (M * P) >> 13
[1] https://quickmath.com
Looking at the answer by @StevenPerry, I realized the majority of the error is introduced by the limited precision we have to represent K/8192. This error then gets propagated into the other factors and dividends.
Postponing that division, however, often results in integer overflow before it is ever reached. Thus, the solution I've found unfortunately depends on widening these operands to 64-bit.
The result is of the same form as the other answer, but it must be emphasized that widening the operands to 64-bit is essential. In Go source code, this looks like:
var S, N, M, P, K uint32
...
F := uint32(uint64(S) * uint64(8192*N+K) / uint64(8192*M*P))
To see the accuracy of all three of these expressions, run the code yourself on the Go Playground.
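As a quick sanity check of the rearrangement (a sketch of my own, not the original Go program), the two expressions can be compared for one made-up parameter set within the stated limits. Note that Python integers never overflow, so on the microcontroller the 64-bit widening shown above is still required:

# Example values, chosen arbitrarily within the stated limits
S, M, N, K, P = 16_000_000, 4, 336, 3072, 2

f_float = (S / M * (N + K / 8192)) / P            # floating-point reference
f_int   = S * (8192 * N + K) // (8192 * M * P)    # integer-only rearrangement
print(f_float, f_int)                             # 672750000.0 672750000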

How can I solve the limit of $n \log_2(n)$ vs. $n^{\log_3(4)}$

I want to determine by hand which of these two functions ($n \log_2(n)$ vs. $n^{\log_3(4)}$) grows asymptotically faster, without using a calculator or any software.
My approach so far was:
$$\lim_{n \to \infty} \frac{n \log_2(n)}{n^{\log_3(4)}}$$
Now use L'Hôpital's rule and differentiate numerator and denominator:
$$\frac{\log_2(n) + \frac{1}{\ln(2)\,n}}{\log_3(4)\, n^{\log_3(4)-1}}$$
Now use L'Hôpital again:
$$\frac{\frac{1}{\ln(2)\,n} + \frac{1}{\ln(2)\,n}}{\frac{1}{\ln(3)} \cdot 4 \cdot n^{\log_3(4)-1} + (\log_3(4)-1)\, n^{\log_3(4)-2} \cdot \log_3(4)}$$
My problem: If I calculate like that it results to a wrong solution. Does anyone have an idea how to solve that correctly?
Edit: I also noticed that your first derivative was incorrect.
Your first derivative and your second application of L'Hôpital's rule are incorrect.
You start with:
f(n)=n*log2(n)
g(n)=n^(log3(4))
This gives:
f'(n)=log2(n) + n * (1/ln(2)) * n^(-1)
=log2(n) + 1/ln(2)
g'(n)=log3(4) * n^(log3(4)-1)
This gives:
f''(n)=(1/ln(2)) * n^(-1)
g''(n)=log3(4) * (log3(4)-1) * n^(log3(4)-2)
With your error in the first derivative you would have gotten f''(n)=(1/ln(2)) * n^(-1) - (1/ln(2)) * n^(-2), which still allows you to factor out n and leads to the same final result.
Now that you have n in all of it, you can factor that out:
f''(n)/g''(n) = 1/[ln(2) * log3(4) * (log3(4)-1) * n^(log3(4)-2+1)]
= 1/[ln(2) * log3(4) * (log3(4)-1)] * n^(1-log3(4))
Which now can be represented as:
k * n^(1-log3(4)) where k>0.
And the limit of this ratio as n approaches infinity is 0. That means n^(log3(4)) grows asymptotically faster than n * log2(n).
Alternatively, you can simplify first.
Note that both have a factor of n which can be removed, so instead you can have:
f(n)=log2(n)
g(n)=n^(log3(4)-1)
f'(n)=(1/ln(2)) * n^(-1)
g'(n)=(log3(4)-1) * n^(log3(4)-2)
f'(n)/g'(n) = (1/ln(2)) * n^(-1-log3(4)+2)/(log3(4)-1)
=(1/ln(2)) * n^(1-log3(4))/(log3(4)-1)
Again, the limit is 0, meaning that n^(log3(4)) grows asymptotically faster.
The only thing extra that is needed to know is that log3(4) is greater than 1 as 4 is greater than 3.
That means (log3(4)-1)>0 and (1-log3(4))<0.
Also remember that the correct result may not be what you expect for small n: these two functions cross at roughly n ≈ 30,000.
Also, I'm not sure if this belongs here or on math.
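For illustration, here is a quick numerical check of both the limit and the crossover (this is not part of the formal argument):

from math import log, log2

f = lambda n: n * log2(n)
g = lambda n: n ** log(4, 3)            # log3(4) = ln(4)/ln(3), about 1.262

for n in (10, 1_000, 30_000, 1_000_000):
    print(n, f(n) / g(n))               # the ratio drops below 1 near n ~ 30,000 and tends to 0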

Calculating powers (e.g. 2^11) quickly [duplicate]

Possible Duplicate:
The most efficient way to implement an integer based power function pow(int, int)
How can I calculate powers with better runtime?
E.g. 2^13.
I remember seeing somewhere that it has something to do with the following calculation:
2^13 = 2^8 * 2^4 * 2^1
But I can't see how calculating each component of the right side of the equation and then multiplying them would help me.
Any ideas?
Edit: I did mean with any base. How do the algorithms you've mentioned below, in particular "exponentiation by squaring", improve the runtime/complexity?
There is a generalized algorithm for this, but in languages that have bit-shifting, there's a much faster way to compute powers of 2. You just put in 1 << exp (assuming your bit shift operator is << as it is in most languages that support the operation).
I assume you're looking for the generalized algorithm and just chose an unfortunate base as an example. I will give this algorithm in Python.
def intpow(base, exp):
    if exp == 0:
        return 1
    elif exp == 1:
        return base
    elif (exp & 1) != 0:
        return base * intpow(base * base, exp // 2)
    else:
        return intpow(base * base, exp // 2)
This lets the power be computed in O(log2(exp)) multiplications. It's a divide and conquer algorithm. :-) As someone else said, exponentiation by squaring.
If you plug your example into this, you can see how it works and is related to the equation you give:
intpow(2, 13)
2 * intpow(4, 6)
2 * intpow(16, 3)
2 * 16 * intpow(256, 1)
2 * 16 * 256 == 2^1 * 2^4 * 2^8
Use bitwise shifting. Ex. 1 << 11 returns 2^11.
Powers of two are the easy ones. In binary 2^13 is a one followed by 13 zeros.
You'd use bit shifting, which is a built in operator in many languages.
You can use exponentiation by squaring. This is also known as "square-and-multiply" and works for bases != 2, too.
If you're not limiting yourself to powers of two, then:
k^(2n) = (k^n)^2
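For concreteness, here is a minimal iterative square-and-multiply sketch in Python (my own illustration, equivalent to the recursive version above):

def ipow(base, exp):
    # Process exp bit by bit: square for every bit, multiply in when the bit is set.
    result = 1
    while exp:
        if exp & 1:
            result *= base
        base *= base
        exp >>= 1
    return result

print(ipow(2, 13))  # 8192, i.e. 2^1 * 2^4 * 2^8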
The fastest free algorithm I know of is by Phillip S. Pang, Ph.D., and the source code can be found here.
It uses table-driven decomposition, by which it is possible to make an exp() function that is 2-10 times faster than the native exp() of a Pentium(R) processor.

How can I efficiently calculate the binomial cumulative distribution function?

Let's say that I know the probability of a "success" is P. I run the test N times, and I see S successes. The test is akin to tossing an unevenly weighted coin (perhaps heads is a success, tails is a failure).
I want to know the approximate probability of seeing either S successes, or a number of successes less likely than S successes.
So for example, if P is 0.3, N is 100, and I get 20 successes, I'm looking for the probability of getting 20 or fewer successes.
If, on the other hand, P is 0.3, N is 100, and I get 40 successes, I'm looking for the probability of getting 40 or more successes.
I'm aware that this problem relates to finding the area under a binomial curve, however:
My math-fu is not up to the task of translating this knowledge into efficient code
While I understand a binomial curve would give an exact result, I get the impression that it would be inherently inefficient. A fast method to calculate an approximate result would suffice.
I should stress that this computation has to be fast, and should ideally be determinable with standard 64 or 128 bit floating point computation.
I'm looking for a function that takes P, S, and N - and returns a probability. As I'm more familiar with code than mathematical notation, I'd prefer that any answers employ pseudo-code or code.
Exact Binomial Distribution
from functools import reduce

def factorial(n):
    if n < 2: return 1
    return reduce(lambda x, y: x*y, range(2, int(n)+1))

def prob(s, p, n):
    x = 1.0 - p
    a = n - s
    b = s + 1
    c = a + b - 1
    prob = 0.0
    for j in range(a, c + 1):
        prob += factorial(c) / (factorial(j)*factorial(c-j)) \
                * x**j * (1 - x)**(c-j)
    return prob
>>> prob(20, 0.3, 100)
0.016462853241869437
>>> 1-prob(40-1, 0.3, 100)
0.020988576003924564
Normal Estimate, good for large n
import math
def erf(z):
    t = 1.0 / (1.0 + 0.5 * abs(z))
    # use Horner's method
    ans = 1 - t * math.exp( -z*z - 1.26551223 +
            t * ( 1.00002368 +
            t * ( 0.37409196 +
            t * ( 0.09678418 +
            t * (-0.18628806 +
            t * ( 0.27886807 +
            t * (-1.13520398 +
            t * ( 1.48851587 +
            t * (-0.82215223 +
            t * ( 0.17087277))))))))))
    if z >= 0.0:
        return ans
    else:
        return -ans

def normal_estimate(s, p, n):
    u = n * p
    o = (u * (1-p)) ** 0.5
    return 0.5 * (1 + erf((s-u)/(o*2**0.5)))
>>> normal_estimate(20, 0.3, 100)
0.014548164531920815
>>> 1-normal_estimate(40-1, 0.3, 100)
0.024767304545069813
Poisson Estimate: Good for large n and small p
import math
def poisson(s, p, n):
    L = n * p
    sum = 0
    for i in range(0, s+1):
        sum += L**i / factorial(i)
    return sum * math.e**(-L)
>>> poisson(20, 0.3, 100)
0.013411150012837811
>>> 1-poisson(40-1, 0.3, 100)
0.046253037645840323
I was on a project where we needed to be able to calculate the binomial CDF in an environment that didn't have a factorial or gamma function defined. It took me a few weeks, but I ended up coming up with the following algorithm which calculates the CDF exactly (i.e. no approximation necessary). Python is basically as good as pseudocode, right?
import numpy as np
def binomial_cdf(x, n, p):
    cdf = 0
    b = 0
    for k in range(x+1):
        if k > 0:
            b += np.log(n-k+1) - np.log(k)
        log_pmf_k = b + k * np.log(p) + (n-k) * np.log(1-p)
        cdf += np.exp(log_pmf_k)
    return cdf
Performance scales with x. For small values of x, this solution is about an order of magnitude faster than scipy.stats.binom.cdf, with similar performance at around x=10,000.
I won't go into a full derivation of this algorithm because stackoverflow doesn't support MathJax, but the thrust of it is first identifying the following equivalence:
For all k > 0, sp.misc.comb(n,k) == np.prod([(n-i+1)/i for i in range(1,k+1)])
Which we can rewrite as:
sp.misc.comb(n,k) == sp.misc.comb(n,k-1) * (n-k+1)/k
or in log space:
np.log( sp.misc.comb(n,k) ) == np.log(sp.misc.comb(n,k-1)) + np.log(n-k+1) - np.log(k)
Because the CDF is a summation of PMFs, we can use this formulation to calculate the binomial coefficient (the log of which is b in the function above) for PMF_{x=i} from the coefficient we calculated for PMF_{x=i-1}. This means we can do everything inside a single loop using accumulators, and we don't need to calculate any factorials!
The reason most of the calculations are done in log space is to improve the numerical stability of the polynomial terms, i.e. p^x and (1-p)^(n-x) have the potential to be extremely large or extremely small, which can cause computational errors.
EDIT: Is this a novel algorithm? I've been poking around on and off since before I posted this, and I'm increasingly wondering if I should write this up more formally and submit it to a journal.
I think you want to evaluate the incomplete beta function.
There's a nice implementation using a continued fraction representation in "Numerical Recipes In C", chapter 6: 'Special Functions'.
I can't totally vouch for the efficiency, but Scipy has a module for this
from scipy.stats.distributions import binom
binom.cdf(successes, attempts, chance_of_success_per_attempt)
An efficient and, more importantly, numerically stable algorithm exists in the domain of Bezier curves used in computer-aided design. It is called de Casteljau's algorithm, and it is used to evaluate the Bernstein polynomials that define Bezier curves.
I believe that I am only allowed one link per answer so start with Wikipedia - Bernstein Polynomials
Notice the very close relationship between the Binomial Distribution and the Bernstein Polynomials. Then click through to the link on de Casteljau's algorithm.
Let's say I know the probability of throwing a heads with a particular coin is P. What is the probability of me throwing the coin T times and getting at least S heads?
Set n = T
Set beta[i] = 0 for i = 0, ... S - 1
Set beta[i] = 1 for i = S, ... T
Set t = p
Evaluate B(t) using de Casteljau
or at most S heads?
Set n = T
Set beta[i] = 1 for i = 0, ... S
Set beta[i] = 0 for i = S + 1, ... T
Set t = p
Evaluate B(t) using de Casteljau
Open source code probably exists already. NURBS Curves (Non-Uniform Rational B-spline Curves) are a generalization of Bezier Curves and are widely used in CAD. Try openNurbs (the license is very liberal) or failing that Open CASCADE (a somewhat less liberal and opaque license). Both toolkits are in C++, though, IIRC, .NET bindings exist.
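For illustration, here is a minimal Python sketch of the recipe above (my own code, not taken from the CAD toolkits mentioned): build the Bernstein coefficients, then repeatedly blend adjacent coefficients at t = p.

def binomial_cdf_decasteljau(s, p, n):
    # P(X <= s) for X ~ Binomial(n, p): coefficients are 1 for k <= s, else 0
    beta = [1.0 if k <= s else 0.0 for k in range(n + 1)]
    for r in range(n):                     # n de Casteljau reduction steps
        beta = [(1.0 - p) * beta[k] + p * beta[k + 1] for k in range(n - r)]
    return beta[0]

print(binomial_cdf_decasteljau(20, 0.3, 100))  # ~0.01646, matching the exact value above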
If you are using Python, no need to code it yourself. Scipy got you covered:
from scipy.stats import binom
# probability that you get 20 or less successes out of 100, when p=0.3
binom.cdf(20, 100, 0.3)
>>> 0.016462853241869434
# probability that you get exactly 20 successes out of 100, when p=0.3
binom.pmf(20, 100, 0.3)
>>> 0.0075756449257260777
From the portion of your question "getting at least S heads" you want the cumulative binomial distribution function. See http://en.wikipedia.org/wiki/Binomial_distribution for the equation, which is described as being in terms of the "regularized incomplete beta function" (as already answered). If you just want to calculate the answer without having to implement the entire solution yourself, the GNU Scientific Library provides the functions gsl_cdf_binomial_P and gsl_cdf_binomial_Q.
The DCDFLIB Project has C# functions (wrappers around C code) to evaluate many CDF functions, including the binomial distribution. You can find the original C and FORTRAN code here. This code is well tested and accurate.
If you want to write your own code to avoid being dependent on an external library, you could use the normal approximation to the binomial mentioned in other answers. Here are some notes on how good the approximation is under various circumstances. If you go that route and need code to compute the normal CDF, here's Python code for doing that. It's only about a dozen lines of code and could easily be ported to any other language. But if you want high accuracy and efficient code, you're better off using third party code like DCDFLIB. Several man-years went into producing that library.
Try this one, used in GMP. Another reference is this.
import numpy as np
np.random.seed(1)
# 20 flips of a coin with probability of heads 0.6, repeated 10000 times
x = np.random.binomial(20, 0.6, 10000)
sum(x > 12) / len(x)
The output is about 0.41, i.e. in roughly 41% of the runs we got more than 12 heads.

"Approximate" greatest common divisor

Suppose you have a list of floating point numbers that are approximately multiples of a common quantity, for example
2.468, 3.700, 6.1699
which are approximately all multiples of 1.234. How would you characterize this "approximate gcd", and how would you proceed to compute or estimate it?
Strictly related to my answer to this question.
You can run Euclid's gcd algorithm with anything smaller than 0.01 (or a small number of your choice) being a pseudo 0. With your numbers:
3.700 = 1 * 2.468 + 1.232,
2.468 = 2 * 1.232 + 0.004.
So the pseudo gcd of the first two numbers is 1.232. Now you take the gcd of this with your last number:
6.1699 = 5 * 1.232 + 0.0099.
So 1.232 is the pseudo gcd, and the multiples are 2, 3, 5. To improve this result, you may take the linear regression on the data points:
(2,2.468), (3,3.7), (5,6.1699).
The slope is the improved pseudo gcd.
Caveat: the first part of this algorithm is numerically unstable - if you start with very dirty data, you are in trouble.
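For illustration, a small Python sketch of this pseudo-Euclid idea (the threshold and names are my own):

from functools import reduce

def approx_gcd(a, b, eps=0.01):
    # Euclid's algorithm, treating any remainder smaller than eps as zero
    while b > eps:
        a, b = b, a % b
    return a

print(reduce(approx_gcd, [2.468, 3.700, 6.1699]))  # ~1.232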
Express your measurements as multiples of the lowest one. Thus your list becomes 1.00000, 1.49919, 2.49996. The fractional parts of these values will be very close to 1/Nths, for some value of N dictated by how close your lowest value is to the fundamental frequency. I would suggest looping through increasing N until you find a sufficiently refined match. In this case, for N=1 (that is, assuming X=2.468 is your fundamental frequency) you would find a standard deviation of 0.3333 (two of the three values are .5 off of X * 1), which is unacceptably high. For N=2 (that is, assuming 2.468/2 is your fundamental frequency) you would find a standard deviation of virtually zero (all three values are within .001 of a multiple of X/2), thus 2.468/2 is your approximate GCD.
The major flaw in my plan is that it works best when the lowest measurement is the most accurate, which is likely not the case. This could be mitigated by performing the entire operation multiple times, discarding the lowest value on the list of measurements each time, then use the list of results of each pass to determine a more precise result. Another way to refine the results would be adjust the GCD to minimize the standard deviation between integer multiples of the GCD and the measured values.
This reminds me of the problem of finding good rational-number approximations of real numbers. The standard technique is a continued-fraction expansion:
def rationalizations(x):
    assert 0 <= x
    ix = int(x)
    yield ix, 1
    if x == ix: return
    for numer, denom in rationalizations(1.0/(x-ix)):
        yield denom + ix * numer, numer
We could apply this directly to Jonathan Leffler's and Sparr's approach:
>>> import itertools
>>> a, b, c = 2.468, 3.700, 6.1699
>>> b/a, c/a
(1.4991896272285252, 2.4999594813614263)
>>> list(itertools.islice(rationalizations(b/a), 3))
[(1, 1), (3, 2), (925, 617)]
>>> list(itertools.islice(rationalizations(c/a), 3))
[(2, 1), (5, 2), (30847, 12339)]
picking off the first good-enough approximation from each sequence. (3/2 and 5/2 here.) Or instead of directly comparing 3.0/2.0 to 1.499189..., you could notice that 925/617 uses much larger integers than 3/2, making 3/2 an excellent place to stop.
It shouldn't much matter which of the numbers you divide by. (Using a/b and c/b you get 2/3 and 5/3, for instance.) Once you have integer ratios, you could refine the implied estimate of the fundamental using shsmurfy's linear regression. Everybody wins!
I'm assuming all of your numbers are integer multiples of some common value. For the rest of my explanation, A will denote the "root" frequency you are trying to find and B will be an array of the numbers you have to start with.
What you are trying to do is superficially similar to linear regression. You are trying to find a linear model y=mx+b that minimizes the average distance between a linear model and a set of data. In your case, b=0, m is the root frequency, and y represents the given values. The biggest problem is that the independent variables X are not explicitly given. The only thing we know about X is that all of its members must be integers.
Your first task is trying to determine these independent variables. The best method I can think of at the moment assumes that the given frequencies have nearly consecutive indexes (x_1=x_0+n). So B_0/B_1=(x_0)/(x_0+n) given a (hopefully) small integer n. You can then take advantage of the fact that x_0 = n/(B_1-B_0), start with n=1, and keep ratcheting it up until k-rnd(k) is within a certain threshold. After you have x_0 (the initial index), you can approximate the root frequency (A = B_0/x_0). Then you can approximate the other indexes by finding x_n = rnd(B_n/A). This method is not very robust and will probably fail if the error in the data is large.
If you want a better approximation of the root frequency A, you can use linear regression to minimize the error of the linear model now that you have the corresponding dependent variables. The easiest method to do so uses least squares fitting. Wolfram's MathWorld has an in-depth mathematical treatment of the issue, but a fairly simple explanation can be found with some googling.
Interesting question...not easy.
I suppose I would look at the ratios of the sample values:
3.700 / 2.468 = 1.499...
6.1699 / 2.468 = 2.4999...
6.1699 / 3.700 = 1.6675...
And I'd then be looking for a simple ratio of integers in those results.
1.499 ~= 3/2
2.4999 ~= 5/2
1.6675 ~= 5/3
I haven't chased it through, but somewhere along the line, you decide that an error of 1:1000 or something is good enough, and you back-track to find the base approximate GCD.
The solution which I've seen and used myself is to choose some constant, say 1000, multiply all numbers by this constant, round them to integers, find the GCD of these integers using the standard algorithm and then divide the result by the said constant (1000). The larger the constant, the higher the precision.
This is a reformulation of shsmurfy's solution in which you choose 3 positive tolerances (e1, e2, e3) a priori.
The problem is then to search for the smallest positive integers (n1, n2, n3), and thus the largest root frequency f, such that:
f1 = n1*f +/- e1
f2 = n2*f +/- e2
f3 = n3*f +/- e3
We assume 0 <= f1 <= f2 <= f3
If we fix n1, then we get these relations:
f is in interval I1=[(f1-e1)/n1 , (f1+e1)/n1]
n2 is in interval I2=[n1*(f2-e2)/(f1+e1) , n1*(f2+e2)/(f1-e1)]
n3 is in interval I3=[n1*(f3-e3)/(f1+e1) , n1*(f3+e3)/(f1-e1)]
We start with n1 = 1, then increment n1 until the intervals I2 and I3 each contain an integer - that is, until floor(I2min) differs from floor(I2max), and similarly for I3.
We then choose smallest integer n2 in interval I2, and smallest integer n3 in interval I3.
Assuming normal distribution of floating point errors, the most probable estimate of root frequency f is the one minimizing
J = (f1/n1 - f)^2 + (f2/n2 - f)^2 + (f3/n3 - f)^2
That is
f = (f1/n1 + f2/n2 + f3/n3)/3
If there are several integers n2, n3 in the intervals I2, I3, we could also choose the pair that minimizes the residue
min(J)*3/2=(f1/n1)^2+(f2/n2)^2+(f3/n3)^2-(f1/n1)*(f2/n2)-(f1/n1)*(f3/n3)-(f2/n2)*(f3/n3)
Another variant could be to continue the iteration and try to minimize another criterion like min(J(n1))*n1, until f falls below a certain frequency (n1 reaches an upper limit)...
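For illustration, a rough Python sketch of the search loop described above (the variable names are mine, and only the first n1 that works is returned):

import math

def approx_gcd_tol(f1, f2, f3, e1, e2, e3, n1_max=1000):
    # Assumes 0 <= f1 <= f2 <= f3; increment n1 until I2 and I3 each contain an integer
    for n1 in range(1, n1_max + 1):
        lo2, hi2 = n1 * (f2 - e2) / (f1 + e1), n1 * (f2 + e2) / (f1 - e1)
        lo3, hi3 = n1 * (f3 - e3) / (f1 + e1), n1 * (f3 + e3) / (f1 - e1)
        if math.floor(lo2) != math.floor(hi2) and math.floor(lo3) != math.floor(hi3):
            n2, n3 = math.ceil(lo2), math.ceil(lo3)     # smallest integers in I2, I3
            f = (f1 / n1 + f2 / n2 + f3 / n3) / 3       # most probable root frequency
            return n1, n2, n3, f

print(approx_gcd_tol(2.468, 3.700, 6.1699, 0.005, 0.005, 0.005))
# (2, 3, 5, ~1.2338) for the example data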
I found this question looking for answers for mine in MathStackExchange (here and here).
So far I've only managed to measure the appeal of a fundamental frequency given a list of harmonic frequencies (following the sound/music nomenclature), which can be useful if you have a small number of candidates and it is feasible to compute the appeal of each one and then choose the best fit.
C&P from my question in MSE (there the formatting is prettier):
with v being the list {v_1, v_2, ..., v_n}, ordered from lowest to highest:
mean_sin(v, x) = sum(sin(2*pi*v_i/x), for i in {1, ...,n})/n
mean_cos(v, x) = sum(cos(2*pi*v_i/x), for i in {1, ...,n})/n
gcd_appeal(v, x) = 1 - sqrt(mean_sin(v, x)^2 + (mean_cos(v, x) - 1)^2)/2, which yields a number in the interval [0,1].
The goal is to find the x that maximizes the appeal. Here is the (gcd_appeal) graph for your example [2.468, 3.700, 6.1699], where you find that the optimum GCD is at x = 1.2337899957639993
Edit:
You may find handy this Java code to calculate the (fuzzy) divisibility (aka gcd_appeal) of a divisor relative to a list of dividends; you can use it to test which of your candidates makes the best divisor. The code looks ugly because I tried to optimize it for performance.
//returns the mean divisibility of dividend/divisor as a value in the range [0 and 1]
// 0 means no divisibility at all
// 1 means full divisibility
public double divisibility(double divisor, double... dividends) {
    double n = dividends.length;
    double factor = 2.0 / divisor;
    double sum_x = -n;
    double sum_y = 0.0;
    double[] coord = new double[2];
    for (double v : dividends) {
        coordinates(v * factor, coord);
        sum_x += coord[0];
        sum_y += coord[1];
    }
    double err = 1.0 - Math.sqrt(sum_x * sum_x + sum_y * sum_y) / (2.0 * n);
    //Might happen due to approximation error
    return err >= 0.0 ? err : 0.0;
}

private void coordinates(double x, double[] out) {
    //Bhaskara performant approximation to
    //out[0] = Math.cos(Math.PI*x);
    //out[1] = Math.sin(Math.PI*x);
    long cos_int_part = (long) (x + 0.5);
    long sin_int_part = (long) x;
    double rem = x - cos_int_part;
    if (cos_int_part != sin_int_part) {
        double common_s = 4.0 * rem;
        double cos_rem_s = common_s * rem - 1.0;
        double sin_rem_s = cos_rem_s + common_s + 1.0;
        out[0] = (((cos_int_part & 1L) * 8L - 4L) * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (((sin_int_part & 1L) * 8L - 4L) * sin_rem_s) / (sin_rem_s + 5.0);
    } else {
        double common_s = 4.0 * rem - 4.0;
        double sin_rem_s = common_s * rem;
        double cos_rem_s = sin_rem_s + common_s + 3.0;
        double common_2 = ((cos_int_part & 1L) * 8L - 4L);
        out[0] = (common_2 * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (common_2 * sin_rem_s) / (sin_rem_s + 5.0);
    }
}
