For a homework assignment, I've been asked to calculate the runtime of various algorithms. The part that is stumping me is the 2^N, because the size of N is so large.
Assuming a 2^N algorithm with a data size N=1000 takes 5 seconds to execute, calculate the runtime for data sizes {2000, 3000, 10000}.
Now, 2^2000/2^1000 = 2^1000 by the property of exponent division, so the runtime is 5 * 2^1000 ≈ 5.36e+301 seconds on a data set of 2000 items.
How can I give a number for the next two sizes? 2^2000 and 2^9000 both return infinity in any calculator that I use. The professor's hint is that 2^10 is approximately 10^3, i.e. 1024 ≈ 1000.
The time complexity function is f(N) = c * 2^N.
Since f(1000) = 5, we can solve for c = 5 / 2^1000.
So f(N) = 5 * 2^(N - 1000), and f(k * 1000) = 5 * 2^((k - 1) * 1000).
The stumbling block seems to be calculating 2^n, so I will walk you through that.
I will take one of your examples to demonstrate the method:
2^1000 = 2^(10 * 100) = (2^10)^100 ≈ (10^3)^100 = 10^(3 * 100) = 10^300 = 1e300
That uses your teacher's hint. Another, more accurate way to calculate the number is:
2^1000 = 10^(1000 * log10(2)) = 10^301.03 = 10^0.03 * 10^301 ≈ 1.0715e301
(As you can see, the more accurate method gives a result about 10.7 times larger than the approximate one.)
Obviously, you have not tried WolframAlpha:
2^9000 = 1861919823602446870051646947443265806218680881620164150161309755333181392347475969737916885903925979688282376725245050928552019850699830936678289110574564545897808148390669479768088103164174862614314896361751492419355200744919106773071079582599147802839706215610874713577461171484389014531465527630202236883843662826326484204605969450292309212479576754101484210438029392847117393209646612418243468945316443055403112189769994194382253211069075682210324665380367163398351260390065146550963353638226230571087153090476005831034463871223599543367440127371321307935600294235235555686651194174520755709233252147269712646152398661272615394330256766919854213808655136454478279520538925315159120689449046824765276993871084725126289174642261134186532952389529497451225294280309592333118547431857834022107299493307642886550035108349812538868548204110949267007844894921304724439232280030484032946555862021618822463975371563390466476093942333021906014621095132521668771544547398222309222385454252810109269995041305909748903485123565191403462125774254093965366838199092942669763234497755795922782368415391408328615941087687168142703569478330639528167193433105543630476588701348505543567751790789117717660152741734303989082897740369036292716627460893145378475903236426283709902853825833522685221030263417298462852391838147572583018386475574242223720125609852369550108535609838296845885540331405153336504029657425852682616998765068978970510668576704757343969900471616618487531217423608126248306049345412199702325217046285213249285846153975313978242675003402962824121160875928608618335006217051712260195130737083017631349816669929159176667868417353866905513835554999462637481342721400975639504482716709384155856026263304270260886740810287158032302083388891446551907198669473320114371718138443660594911466234734942910722449045244457975320043694169261701681656148362205170532994281152576853709669767416127252738437485793910219846836498268031157261792859230067295449184914230926461991700842371010236588596679342492743911615365711320743215987995901101163306076544568626531198809370907612331483471833845958618077480217892890533116603350040147116218227928732817945323636422047507156427135297222275812678187736036773320854938458989641651199078792658709405149957007882875374868180442468709128120647047303125901843060904208622152998500617510802742432155144605848971282208131749602408191808406410632175265171306237793956570406435402345476628467209068729893134433124291187516075267594298323098487990166243403468912628117388443154921474663149036640983804471509564893426039130480070632369458141251485052936042164865777219873983089572387511794739572152294019631222145910901210840561445507713655658889336373566800391206925212386456974315749376
As for the hint:
2^9000 = (2^10)^900 ≈ (10^3)^900 = 10^2700
Considering that you know the runtime for 2^1000, it's easy to determine the runtime for 2^10000 without WolframAlpha:
2^1000 => 5 s
2^10000 = 2^1000 * 2^9000 => 5 * 2^9000 s
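If you want to check these numbers by machine rather than by hand, here is a rough Python sketch (my own illustration, not from either answer) that evaluates f(N) = 5 * 2^(N - 1000) by working with base-10 logarithms, since the values for the larger sizes overflow a double:

import math

def runtime_seconds(n, base_n=1000, base_time=5.0):
    # f(n) = base_time * 2**(n - base_n), reported as a mantissa/exponent string
    # because the value itself can be far too large for a double.
    log10_t = math.log10(base_time) + (n - base_n) * math.log10(2)
    exponent = math.floor(log10_t)
    mantissa = 10 ** (log10_t - exponent)
    return "%.4fe+%d" % (mantissa, exponent)

for n in (2000, 3000, 10000):
    print(n, runtime_seconds(n))
# 2000  -> ~5.36e+301 s
# 3000  -> ~5.74e+602 s
# 10000 -> ~9.31e+2709 s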
I’ve got a question about trying to decide the big O notation for a Rabin-Miller algorithm I have implemented.
When generating a 512-bit prime I get a runtime of 22 seconds, 1024 bits takes 237 seconds, and 2048 bits takes 2942 seconds.
How can I determine the big O notation for these values? It seems to me that the runtime increases by roughly 10 times every time the bit size doubles. Does that mean it's O(10n)?
You have too few points to estimate the function (and O(function) as well) from the experiment.
x | f(x)
-----------
512 | 22
1024 | 237
2048 | 2942
If we test O(n) (O(10n) is in fact O(n)) with the guess f(x) = Ax + B and fit it using the least squares method, we get a good fit:
A = 2.0
B = -1330.5
R = 0.964 (Correlation)
However, several alternative functions fit even better:
f(x) = Ax**4 + B with correlation R = 0.99990 <- actual best fit
f(x) = Ax**3 + B -/- R = 0.99937 <- expected
f(x) = Ax**2 + B -/- R = 0.99232
You want more points to find the right function: with only three values, Ax**4 + B (which corresponds to O(x**4)) is the leader so far, but we can't reject the expected complexity, Ax**3 + B.
Finally, we can guess (we can't conclude from three points only) that the implementation is suboptimal: O(x**4) instead of the expected O(x**3).
If our guess of O(x**4) is correct, then we would expect that doubling x (x -> 2 * x) increases the time 16-fold (2**4 == 16).
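If you want to reproduce this kind of comparison yourself, here is a minimal sketch (my own illustration, using NumPy; not necessarily the tool used above) that fits y = A*x**k + B for several k and prints the correlation of each model:

import numpy as np

x = np.array([512.0, 1024.0, 2048.0])
y = np.array([22.0, 237.0, 2942.0])

for k in (1, 2, 3, 4):
    # y = A*x**k + B is linear in A and B, so an ordinary least-squares line fit works.
    A, B = np.polyfit(x**k, y, 1)
    r = np.corrcoef(x**k, y)[0, 1]
    print("k=%d  A=%.3e  B=%.1f  R=%.5f" % (k, A, B, r))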
I have a function in my model that counts a user's score:
def score
  (MULTIPLER * Math::log10(bets.count * summary_value ** accuracy + 1)).floor
end
How can I test that it grows logarithmically?
The point of a test isn't to prove it always works (that is the realm of static typing/proofs), but to check that it is probably working. This is normally good enough. I'm guessing you are doing this for a game, and want to ensure the function doesn't "grow" too quickly.
A way we could do that is to try a number of values, and check whether they are following a general logarithmic pattern.
For example, consider a pure logarithmic function f(x) = log(x) (any base):
If f(x) = y, then f(x^n) = f(x) * n.
So, if f(x^n) == (f(x) * n), then the function is logarithmic.
Compare that to a linear function, eg f(x) == x * 2. f(x^n) = x^n * 2, ie x^(n - 1) times bigger (a lot bigger).
You may have a more complex logarithmic function, eg f(x) = log(x + 7) + 3456. The pattern still holds though, just less accurately. So what I did was:
1. Attempt to calculate the constant value, by using x = 1.
2. Find the difference f(x^n) - f(x) * n.
3. Find the absolute difference of f((x*100)^n) - f(100x) * n.
If (3)/(2) is less than 10, it is almost certainly not linear, and probably logarithmic. The 10 is just an arbitrary number. Most linear functions will be different by a factor of more than a billion. Even functions like sqrt(x) will have a bigger difference than 10.
My example code will just have the score method take a parameter, and test against that (to keep it simple + I don't have your supporting code).
require 'rspec'
require 'rspec/mocks/standalone'

def score(input)
  Math.log2(input * 3 + 1000 * 3) * 3 + 100 + Math.sin(input)
end

describe "score" do
  it "grows logarithmically based on input" do
    x = 50
    n = 8
    c = score(1)
    result1 = (score(x ** n) - c) / ((score(x) - c) * n)
    x *= 100
    result2 = (score(x ** n) - c) / ((score(x) - c) * n)
    (result2 / result1).abs.should be < 10
  end
end
Though I have mostly forgotten my advanced math, that won't stop me from answering the question.
My suggestion is as follows:
describe "#score" do
it "grows logarithmically" do
round_1 = FactoryGirl.create_list(:bet, 10, value: 5).score
round_2 = FactoryGirl.create_list(:bet, 11, value: 5).score
# Then expect some math relation between round_1 and round_2,
# calculated by you manually.
end
end
If you need to treat this function like a black box, the only true solution is to get a bunch of values and see if their curve is well-approximated by a logarithmic curve, focusing on large n. If you could put reasonable bounds on it, like a log(n) < score(n) < b log(n) for some values a and b then you could just check that.
Generally speaking, the best way to see if a function grows is to plot some data on a graph. Just use some graph plotting gem and evaluate the result.
A logarithmic function always has the same general shape: it rises quickly at first and then flattens out (image source: sosmath.com).
You can then adjust how fast it grows through your parameters, and replot the graph until you are happy with the result.
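If you prefer Python for the plotting, a comparable sketch (the stand-in score function and the log fit are my own, purely for illustration) is to plot the scores against a fitted a*log10(x) + b curve and eyeball the agreement:

import math
import numpy as np
import matplotlib.pyplot as plt

def score(x):
    # Stand-in for the real score method, just for the demo.
    return math.floor(3 * math.log10(5 * x + 1)) + 10

xs = np.arange(1, 10001)
ys = [score(x) for x in xs]
a, b = np.polyfit(np.log10(xs), ys, 1)   # fit y = a*log10(x) + b

plt.plot(xs, ys, label="score")
plt.plot(xs, a * np.log10(xs) + b, "--", label="fitted a*log10(x) + b")
plt.xscale("log")
plt.legend()
plt.show()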
I need to find the default probability of a derivative via Monte-Carlo simulation with C++ in VS2010 with Intel TBB and MKL installed and only 1GB of memory.
Let S(t) denote the price of the derivative at time t. The current price is S(0) = 100.
For simplicity, the derivative is defined by (the real derivative is more complex):
With a probability of 99 percent S(t+1) = S(t) * exp(X), with X ~ N(mu=0, sigma=0.01)
With a probability of 1 percent the derivative jumps down to S(t+1) = S(t) * 0.4
If the derivative falls below 10 somewhere in 0 < t <=250 the following happens:
With a probability of 80 percent the price of the derivative is set to 1.5 * S(t), or with a probability of 10 percent we will have a default.
Let's say I need to run at least 10mn simulations of this derivative and count all losses.
Then my simulated default probability is #defaults / 10mn. The serial code looks roughly like:
for j = 1 to #simulations {
    for t = 1 to 250 {
        generate S(t+1)
        check if S(t+1) defaults
    }
}
How do I parallelize the code?
What is the best strategy for the random number generation? I cannot generate the random numbers of type double a priori, because 10mn * 250 * 2 (at least) = 5bn random numbers * 8 bytes = 40GB.
Is it a good idea to divide the number of simulations into chunks of 10 * number of processors?
VSLStreamStatePtr stream[10*#processors];

for i = 1 to 10*#processors {
    vslNewStream( &stream[i], VSL_BRNG_MT2203+i, seed );
}

tbb_parallel_for i = 1 to 10*#processors {
    use stream[i] for random number generation
    generate 10mn / (10*#processors) * 250 random numbers ~ N(0, 0.01) and store them in a vector
    generate 10mn / (10*#processors) * 250 random numbers ~ Bernoulli(0.01) and store them in a vector
    generate 10mn / (10*#processors) * 250 random numbers ~ Bernoulli(0.01) and store them in a vector
    for j = 1 to #simulations / (10*#processors) {
        for t = 1 to 250 {
            generate S(t+1) using the vectors filled with random numbers
            check if S(t+1) defaults
        }
    }
}
Any help would be appreciated...
Matt
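This isn't an MKL/TBB answer, but the memory question itself is easy to prototype: generate the random numbers per chunk of paths instead of all up front, so memory stays proportional to the chunk size rather than to 10mn * 250. A rough NumPy sketch of that idea (the constants mirror the question; the handling of the unspecified remaining 10 percent at the barrier, and everything else, is my own guess):

import numpy as np

N_PATHS = 10_000_000   # "10mn" simulations
STEPS = 250
CHUNK = 100_000        # paths per batch; memory ~ a few arrays of this size

def default_probability(n_paths=N_PATHS, chunk=CHUNK, seed=0):
    rng = np.random.default_rng(seed)
    defaults = 0
    for start in range(0, n_paths, chunk):
        m = min(chunk, n_paths - start)
        s = np.full(m, 100.0)                 # S(0) = 100
        dead = np.zeros(m, dtype=bool)
        for t in range(STEPS):
            jump = rng.random(m) < 0.01       # 1%: crash to 0.4 * S(t)
            x = rng.normal(0.0, 0.01, m)      # 99%: diffusion step exp(X)
            s = np.where(jump, s * 0.4, s * np.exp(x))
            low = (s < 10.0) & ~dead
            u = rng.random(m)
            dead |= low & (u < 0.10)                                   # 10%: default
            s = np.where(low & (u >= 0.10) & (u < 0.90), s * 1.5, s)   # 80%: rebound to 1.5 * S(t)
            # remaining 10% at the barrier: left unchanged (the question doesn't specify)
        defaults += int(dead.sum())
    return defaults / n_paths

# quick sanity run on fewer paths:
# print(default_probability(n_paths=1_000_000))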
Let's say that I know the probability of a "success" is P. I run the test N times, and I see S successes. The test is akin to tossing an unevenly weighted coin (perhaps heads is a success, tails is a failure).
I want to know the approximate probability of seeing either S successes, or a number of successes less likely than S successes.
So for example, if P is 0.3, N is 100, and I get 20 successes, I'm looking for the probability of getting 20 or fewer successes.
If, on the other hand, P is 0.3, N is 100, and I get 40 successes, I'm looking for the probability of getting 40 or more successes.
I'm aware that this problem relates to finding the area under a binomial curve, however:
My math-fu is not up to the task of translating this knowledge into efficient code
While I understand a binomial curve would give an exact result, I get the impression that it would be inherently inefficient. A fast method to calculate an approximate result would suffice.
I should stress that this computation has to be fast, and should ideally be determinable with standard 64 or 128 bit floating point computation.
I'm looking for a function that takes P, S, and N - and returns a probability. As I'm more familiar with code than mathematical notation, I'd prefer that any answers employ pseudo-code or code.
Exact Binomial Distribution
def factorial(n):
    if n < 2: return 1
    return reduce(lambda x, y: x*y, xrange(2, int(n)+1))

def prob(s, p, n):
    x = 1.0 - p
    a = n - s
    b = s + 1
    c = a + b - 1
    prob = 0.0
    for j in xrange(a, c + 1):
        prob += factorial(c) / (factorial(j) * factorial(c-j)) \
                * x**j * (1 - x)**(c-j)
    return prob
>>> prob(20, 0.3, 100)
0.016462853241869437
>>> 1-prob(40-1, 0.3, 100)
0.020988576003924564
Normal Estimate, good for large n
import math

def erf(z):
    t = 1.0 / (1.0 + 0.5 * abs(z))
    # use Horner's method
    ans = 1 - t * math.exp( -z*z - 1.26551223 +
            t * ( 1.00002368 +
            t * ( 0.37409196 +
            t * ( 0.09678418 +
            t * (-0.18628806 +
            t * ( 0.27886807 +
            t * (-1.13520398 +
            t * ( 1.48851587 +
            t * (-0.82215223 +
            t * ( 0.17087277))))))))))
    if z >= 0.0:
        return ans
    else:
        return -ans

def normal_estimate(s, p, n):
    u = n * p
    o = (u * (1 - p)) ** 0.5
    return 0.5 * (1 + erf((s - u) / (o * 2 ** 0.5)))
>>> normal_estimate(20, 0.3, 100)
0.014548164531920815
>>> 1-normal_estimate(40-1, 0.3, 100)
0.024767304545069813
Poisson Estimate: Good for large n and small p
import math

def poisson(s, p, n):
    L = n * p
    sum = 0
    for i in xrange(0, s + 1):
        sum += L**i / factorial(i)
    return sum * math.e**(-L)
>>> poisson(20, 0.3, 100)
0.013411150012837811
>>> 1-poisson(40-1, 0.3, 100)
0.046253037645840323
I was on a project where we needed to be able to calculate the binomial CDF in an environment that didn't have a factorial or gamma function defined. It took me a few weeks, but I ended up coming up with the following algorithm which calculates the CDF exactly (i.e. no approximation necessary). Python is basically as good as pseudocode, right?
import numpy as np

def binomial_cdf(x, n, p):
    cdf = 0
    b = 0
    for k in range(x + 1):
        if k > 0:
            b += np.log(n - k + 1) - np.log(k)
        log_pmf_k = b + k * np.log(p) + (n - k) * np.log(1 - p)
        cdf += np.exp(log_pmf_k)
    return cdf
Performance scales with x. For small values of x, this solution is about an order of magnitude faster than scipy.stats.binom.cdf, with similar performance at around x=10,000.
I won't go into a full derivation of this algorithm because stackoverflow doesn't support MathJax, but the thrust of it is first identifying the following equivalence:
For all k > 0, sp.misc.comb(n,k) == np.prod([(n-k+1)/k for k in range(1,k+1)])
Which we can rewrite as:
sp.misc.comb(n,k) == sp.misc.comb(n,k-1) * (n-k+1)/k
or in log space:
np.log( sp.misc.comb(n,k) ) == np.log(sp.misc.comb(n,k-1)) + np.log(n-k+1) - np.log(k)
Because the CDF is a summation of PMFs, we can use this formulation to calculate the binomial coefficient (the log of which is b in the function above) for PMF_{x=i} from the coefficient we calculated for PMF_{x=i-1}. This means we can do everything inside a single loop using accumulators, and we don't need to calculate any factorials!
The reason most of the calculations are done in log space is to improve numerical stability: the binomial coefficient can be extremely large while the terms p^x and (1-p)^(n-x) can be extremely small, which can cause computational errors.
EDIT: Is this a novel algorithm? I've been poking around on and off since before I posted this, and I'm increasingly wondering if I should write this up more formally and submit it to a journal.
I think you want to evaluate the incomplete beta function.
There's a nice implementation using a continued fraction representation in "Numerical Recipes In C", chapter 6: 'Special Functions'.
I can't totally vouch for the efficiency, but Scipy has a module for this
from scipy.stats.distributions import binom
binom.cdf(successes, attempts, chance_of_success_per_attempt)
An efficient and, more importantly, numerically stable algorithm exists in the domain of Bezier curves used in Computer Aided Design. It is called de Casteljau's algorithm, and it is used to evaluate the Bernstein polynomials that define Bezier curves.
I believe that I am only allowed one link per answer so start with Wikipedia - Bernstein Polynomials
Notice the very close relationship between the Binomial Distribution and the Bernstein Polynomials. Then click through to the link on de Casteljau's algorithm.
Let's say I know the probability of throwing a heads with a particular coin is P. What is the probability of me throwing the coin T times and getting at least S heads?
Set n = T
Set beta[i] = 0 for i = 0, ... S - 1
Set beta[i] = 1 for i = S, ... T
Set t = p
Evaluate B(t) using de Casteljau
or at most S heads?
Set n = T
Set beta[i] = 1 for i = 0, ... S
Set beta[i] = 0 for i = S + 1, ... T
Set t = p
Evaluate B(t) using de Casteljau
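The recipe above is pseudocode; here is a small Python sketch of it (my own translation, using the plain O(T^2) de Casteljau recurrence rather than one of the CAD libraries mentioned below):

def de_casteljau(t, beta):
    # Evaluates the Bernstein polynomial sum_i beta[i] * C(n,i) * t**i * (1-t)**(n-i),
    # where n = len(beta) - 1, without ever forming a binomial coefficient.
    b = list(beta)
    n = len(b) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            b[i] = (1 - t) * b[i] + t * b[i + 1]
    return b[0]

def prob_at_least(p, T, S):
    # P(at least S heads in T tosses): beta[i] = 0 for i < S, 1 for i >= S
    return de_casteljau(p, [0.0] * S + [1.0] * (T - S + 1))

def prob_at_most(p, T, S):
    # P(at most S heads in T tosses): beta[i] = 1 for i <= S, 0 for i > S
    return de_casteljau(p, [1.0] * (S + 1) + [0.0] * (T - S))

# prob_at_most(0.3, 100, 20) -> ~0.01646, prob_at_least(0.3, 100, 40) -> ~0.02099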
Open source code probably exists already. NURBS Curves (Non-Uniform Rational B-spline Curves) are a generalization of Bezier Curves and are widely used in CAD. Try openNurbs (the license is very liberal) or failing that Open CASCADE (a somewhat less liberal and opaque license). Both toolkits are in C++, though, IIRC, .NET bindings exist.
If you are using Python, no need to code it yourself. Scipy got you covered:
from scipy.stats import binom
# probability that you get 20 or less successes out of 100, when p=0.3
binom.cdf(20, 100, 0.3)
>>> 0.016462853241869434
# probability that you get exactly 20 successes out of 100, when p=0.3
binom.pmf(20, 100, 0.3)
>>> 0.0075756449257260777
From the portion of your question "getting at least S heads" you want the cumulative binomial distribution function. See http://en.wikipedia.org/wiki/Binomial_distribution for the equation, which is described as being in terms of the "regularized incomplete beta function" (as already answered). If you just want to calculate the answer without having to implement the entire solution yourself, the GNU Scientific Library provides the functions gsl_cdf_binomial_P and gsl_cdf_binomial_Q.
The DCDFLIB Project has C# functions (wrappers around C code) to evaluate many CDF functions, including the binomial distribution. You can find the original C and FORTRAN code here. This code is well tested and accurate.
If you want to write your own code to avoid being dependent on an external library, you could use the normal approximation to the binomial mentioned in other answers. Here are some notes on how good the approximation is under various circumstances. If you go that route and need code to compute the normal CDF, here's Python code for doing that. It's only about a dozen lines of code and could easily be ported to any other language. But if you want high accuracy and efficient code, you're better off using third party code like DCDFLIB. Several man-years went into producing that library.
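As an aside, on Python 3.2 and later the standard library already includes math.erf, so a normal CDF needs even less than a dozen lines. A minimal sketch (not the code behind the link above):

from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Phi((x - mu) / sigma) via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))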
Try this one, used in GMP. Another reference is this.
import numpy as np

np.random.seed(1)
# 20 flips of a coin with heads probability 0.6, repeated 10000 times
x = np.random.binomial(20, 0.6, 10000)
sum(x > 12) / len(x)
The output shows that in about 41% of the runs we got more than 12 heads.
Suppose you have a list of floating point numbers that are approximately multiples of a common quantity, for example
2.468, 3.700, 6.1699
which are approximately all multiples of 1.234. How would you characterize this "approximate gcd", and how would you proceed to compute or estimate it?
Strictly related to my answer to this question.
You can run Euclid's gcd algorithm with anything smaller than 0.01 (or a small number of your choice) being a pseudo 0. With your numbers:
3.700 = 1 * 2.468 + 1.232,
2.468 = 2 * 1.232 + 0.004.
So the pseudo gcd of the first two numbers is 1.232. Now you take the gcd of this with your last number:
6.1699 = 5 * 1.232 + 0.0099.
So 1.232 is the pseudo gcd, and the multiples are 2, 3, 5. To improve this result, you may take the linear regression on the data points:
(2,2.468), (3,3.7), (5,6.1699).
The slope is the improved pseudo gcd.
Caveat: the first part of this algorithm is numerically unstable - if you start with very dirty data, you are in trouble.
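A short Python sketch of this fuzzy Euclid idea (the 0.01 cutoff and the helper names are mine; the linear regression refinement is left out):

def approx_gcd(values, tol=0.01):
    # Euclid's algorithm where any remainder smaller than tol is treated as zero.
    def gcd2(a, b):
        a, b = max(a, b), min(a, b)
        while b > tol:
            a, b = b, a % b
        return a
    g = values[0]
    for v in values[1:]:
        g = gcd2(g, v)
    return g

print(approx_gcd([2.468, 3.700, 6.1699]))   # -> ~1.232 (up to float noise), as in the worked example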
Express your measurements as multiples of the lowest one. Thus your list becomes 1.00000, 1.49919, 2.49996. The fractional parts of these values will be very close to 1/Nths, for some value of N dictated by how close your lowest value is to the fundamental frequency. I would suggest looping through increasing N until you find a sufficiently refined match. In this case, for N=1 (that is, assuming X=2.468 is your fundamental frequency) you would find a standard deviation of 0.3333 (two of the three values are .5 off of X * 1), which is unacceptably high. For N=2 (that is, assuming 2.468/2 is your fundamental frequency) you would find a standard deviation of virtually zero (all three values are within .001 of a multiple of X/2), thus 2.468/2 is your approximate GCD.
The major flaw in my plan is that it works best when the lowest measurement is the most accurate, which is likely not the case. This could be mitigated by performing the entire operation multiple times, discarding the lowest value on the list of measurements each time, then using the list of results of each pass to determine a more precise result. Another way to refine the results would be to adjust the GCD to minimize the standard deviation between integer multiples of the GCD and the measured values.
This reminds me of the problem of finding good rational-number approximations of real numbers. The standard technique is a continued-fraction expansion:
def rationalizations(x):
    assert 0 <= x
    ix = int(x)
    yield ix, 1
    if x == ix: return
    for numer, denom in rationalizations(1.0/(x-ix)):
        yield denom + ix * numer, numer
We could apply this directly to Jonathan Leffler's and Sparr's approach:
>>> import itertools
>>> a, b, c = 2.468, 3.700, 6.1699
>>> b/a, c/a
(1.4991896272285252, 2.4999594813614263)
>>> list(itertools.islice(rationalizations(b/a), 3))
[(1, 1), (3, 2), (925, 617)]
>>> list(itertools.islice(rationalizations(c/a), 3))
[(2, 1), (5, 2), (30847, 12339)]
picking off the first good-enough approximation from each sequence. (3/2 and 5/2 here.) Or instead of directly comparing 3.0/2.0 to 1.499189..., you could notice that 925/617 uses much larger integers than 3/2, making 3/2 an excellent place to stop.
It shouldn't much matter which of the numbers you divide by. (Using a/b and c/b you get 2/3 and 5/3, for instance.) Once you have integer ratios, you could refine the implied estimate of the fundamental using shsmurfy's linear regression. Everybody wins!
I'm assuming all of your numbers are integer multiples of a common value. For the rest of my explanation, A will denote the "root" frequency you are trying to find and B will be an array of the numbers you have to start with.
What you are trying to do is superficially similar to linear regression. You are trying to find a linear model y=mx+b that minimizes the average distance between a linear model and a set of data. In your case, b=0, m is the root frequency, and y represents the given values. The biggest problem is that the independent variables X are not explicitly given. The only thing we know about X is that all of its members must be integers.
Your first task is to determine these independent variables. The best method I can think of at the moment assumes that the given frequencies have nearly consecutive indexes (x_1 = x_0 + n). So B_0/B_1 = x_0/(x_0 + n) for a (hopefully) small integer n. You can then take advantage of the fact that x_0 = n * B_0/(B_1 - B_0): start with n = 1 and keep ratcheting it up until the computed value k = n * B_0/(B_1 - B_0) has k - rnd(k) within a certain threshold. After you have x_0 (the initial index), you can approximate the root frequency (A = B_0/x_0). Then you can approximate the other indexes by finding x_n = rnd(B_n/A). This method is not very robust and will probably fail if the error in the data is large.
If you want a better approximation of the root frequency A, you can use linear regression to minimize the error of the linear model now that you have the corresponding dependent variables. The easiest method to do so uses least squares fitting. Wolfram's MathWorld has an in-depth mathematical treatment of the issue, but a fairly simple explanation can be found with some googling.
Interesting question...not easy.
I suppose I would look at the ratios of the sample values:
3.700 / 2.468 = 1.499...
6.1699 / 2.468 = 2.4999...
6.1699 / 3.700 = 1.6675...
And I'd then be looking for a simple ratio of integers in those results.
1.499 ~= 3/2
2.4999 ~= 5/2
1.6675 ~= 5/3
I haven't chased it through, but somewhere along the line, you decide that an error of 1:1000 or something is good enough, and you back-track to find the base approximate GCD.
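If you want to automate that "simple ratio of integers" step in Python, fractions.Fraction.limit_denominator does the search for you. A small sketch on the thread's numbers (the denominator bound of 10 is my choice):

from fractions import Fraction

a, b, c = 2.468, 3.700, 6.1699
for r in (b / a, c / a, c / b):
    print(r, Fraction(r).limit_denominator(10))
# 1.4991... -> 3/2, 2.4999... -> 5/2, 1.6675... -> 5/3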
The solution which I've seen and used myself is to choose some constant, say 1000, multiply all numbers by this constant, round them to integers, find the GCD of these integers using the standard algorithm and then divide the result by the said constant (1000). The larger the constant, the higher the precision.
This is a reformulation of shsmurfy's solution where you choose a priori three positive tolerances (e1, e2, e3).
The problem is then to search for the smallest positive integers (n1, n2, n3), and thus the largest root frequency f, such that:
f1 = n1*f +/- e1
f2 = n2*f +/- e2
f3 = n3*f +/- e3
We assume 0 <= f1 <= f2 <= f3
If we fix n1, then we get these relations:
f is in interval I1=[(f1-e1)/n1 , (f1+e1)/n1]
n2 is in interval I2=[n1*(f2-e2)/(f1+e1) , n1*(f2+e2)/(f1-e1)]
n3 is in interval I3=[n1*(f3-e3)/(f1+e1) , n1*(f3+e3)/(f1-e1)]
We start with n1 = 1, then increment n1 until the intervals I2 and I3 each contain an integer - that is, until floor(I2min) differs from floor(I2max), and the same for I3.
We then choose the smallest integer n2 in interval I2 and the smallest integer n3 in interval I3.
Assuming normal distribution of floating point errors, the most probable estimate of root frequency f is the one minimizing
J = (f1/n1 - f)^2 + (f2/n2 - f)^2 + (f3/n3 - f)^2
That is
f = (f1/n1 + f2/n2 + f3/n3)/3
If there are several integers n2, n3 in intervals I2, I3, we could also choose the pair that minimizes the residue
min(J)*3/2=(f1/n1)^2+(f2/n2)^2+(f3/n3)^2-(f1/n1)*(f2/n2)-(f1/n1)*(f3/n3)-(f2/n2)*(f3/n3)
Another variant could be to continue the iteration and try to minimize another criterion like min(J(n1))*n1, until f falls below a certain frequency (n1 reaches an upper limit)...
I found this question looking for answers for mine in MathStackExchange (here and here).
So far I've only managed to measure the appeal of a fundamental frequency given a list of harmonic frequencies (following the sound/music nomenclature), which can be useful if you have a reduced number of options and it is feasible to compute the appeal of each one and then choose the best fit.
C&P from my question in MSE (there the formatting is prettier):
Let v be the list {v_1, v_2, ..., v_n}, ordered from lowest to highest:
mean_sin(v, x) = sum(sin(2*pi*v_i/x), for i in {1, ...,n})/n
mean_cos(v, x) = sum(cos(2*pi*v_i/x), for i in {1, ...,n})/n
gcd_appeal(v, x) = 1 - sqrt(mean_sin(v, x)^2 + (mean_cos(v, x) - 1)^2)/2, which yields a number in the interval [0,1].
The goal is to find the x that maximizes the appeal. For your example [2.468, 3.700, 6.1699], plotting gcd_appeal shows that the optimum GCD is at x = 1.2337899957639993.
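For concreteness, here is a direct Python transcription of gcd_appeal (my own sketch; the candidate divisors below are just illustrative). The stated optimum scores very close to 1, while the other candidates score much lower:

import math

v = [2.468, 3.700, 6.1699]

def gcd_appeal(x, values):
    n = len(values)
    mean_sin = sum(math.sin(2 * math.pi * vi / x) for vi in values) / n
    mean_cos = sum(math.cos(2 * math.pi * vi / x) for vi in values) / n
    return 1 - math.sqrt(mean_sin ** 2 + (mean_cos - 1) ** 2) / 2

for x in (1.2337899957639993, 1.234, 0.9, 2.468):
    print(x, gcd_appeal(x, v))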
Edit:
You may find this Java code handy for calculating the (fuzzy) divisibility (aka gcd_appeal) of a divisor relative to a list of dividends; you can use it to test which of your candidates makes the best divisor. The code looks ugly because I tried to optimize it for performance.
//returns the mean divisibility of dividend/divisor as a value in the range [0, 1]
// 0 means no divisibility at all
// 1 means full divisibility
public double divisibility(double divisor, double... dividends) {
    double n = dividends.length;
    double factor = 2.0 / divisor;
    double sum_x = -n;
    double sum_y = 0.0;
    double[] coord = new double[2];
    for (double v : dividends) {
        coordinates(v * factor, coord);
        sum_x += coord[0];
        sum_y += coord[1];
    }
    double err = 1.0 - Math.sqrt(sum_x * sum_x + sum_y * sum_y) / (2.0 * n);
    //Might happen due to approximation error
    return err >= 0.0 ? err : 0.0;
}

private void coordinates(double x, double[] out) {
    //Bhaskara performant approximation to
    //out[0] = Math.cos(Math.PI*x);
    //out[1] = Math.sin(Math.PI*x);
    long cos_int_part = (long) (x + 0.5);
    long sin_int_part = (long) x;
    double rem = x - cos_int_part;
    if (cos_int_part != sin_int_part) {
        double common_s = 4.0 * rem;
        double cos_rem_s = common_s * rem - 1.0;
        double sin_rem_s = cos_rem_s + common_s + 1.0;
        out[0] = (((cos_int_part & 1L) * 8L - 4L) * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (((sin_int_part & 1L) * 8L - 4L) * sin_rem_s) / (sin_rem_s + 5.0);
    } else {
        double common_s = 4.0 * rem - 4.0;
        double sin_rem_s = common_s * rem;
        double cos_rem_s = sin_rem_s + common_s + 3.0;
        double common_2 = ((cos_int_part & 1L) * 8L - 4L);
        out[0] = (common_2 * cos_rem_s) / (cos_rem_s + 5.0);
        out[1] = (common_2 * sin_rem_s) / (sin_rem_s + 5.0);
    }
}
}