Can someone explain to me why the following code
LogLikelihood[
 MultinomialDistribution[countstot,
  {dt1/ttot, dt2/ttot, dt3/ttot, dt4/ttot, dt5/ttot}],
 {CR1, CR2, CR3, CR4, CR5}]
does not produce a number as output, but instead this:
LogLikelihood[
 MultinomialDistribution[156,
  {318/1049, 159/1049, 208/1049, 222/1049, 142/1049}],
 {0.00186, 0.00185, 0.00136, 0.00108, 0.00115}]
It is the first time I have used LogLikelihood and MultinomialDistribution, and I have probably done something wrong, but I can't really understand what.
Thanks
Taking a few clues from the documentation.
d = MultinomialDistribution[
156, {318/1049, 159/1049, 208/1049, 222/1049, 142/1049}] // N;
These are the mean results expected from this distribution
m = Mean[d]
{47.2908, 23.6454, 30.9323, 33.0143, 21.1173}
Total[m]
156.
Taking some random values
r = RandomVariate[d]
{51, 17, 23, 41, 24}
The log-likelihood of these values (note that LogLikelihood expects a list of samples, and a multinomial sample must be a vector of non-negative integers):
LogLikelihood[d, {r}]
-12.9418
Total[r]
156
Scaling up your figures and rounding so that they total 156
values = {0.00186, 0.00185, 0.00136, 0.00108, 0.00115};
factor = 156/Total[values];
scaled = 0.999 factor values;
rounded = Round[scaled]
{40, 39, 29, 23, 25}
Total[rounded]
156
LogLikelihood[d, {rounded}]
-16.555
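As a cross-check outside Mathematica, the same quantity can be computed with scipy.stats.multinomial; this is just a sketch under the assumption that SciPy is available, not part of the original answer:

from scipy.stats import multinomial

# the same distribution: 156 trials with the five estimated probabilities
d = multinomial(156, [318/1049, 159/1049, 208/1049, 222/1049, 142/1049])

# log-likelihood of one multinomial observation: a vector of non-negative
# integers summing to 156, matching LogLikelihood[d, {rounded}] above
print(d.logpmf([40, 39, 29, 23, 25]))  # should agree with the -16.555 above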
I would like to create a random number generator, that generates a random decimal number:
Greater than 0.0
Less than 15.0
Where the probability of that number being close to 2.0 is relatively high
The probability of it being near 15.0 or very close to zero is very low
I'm terrifically poor at mathematics but my research seems to tell me I want to pull a random number from a Cumulative Distribution Function resembling a Fisher–Snedecor (F) pattern, a bit like this one:
http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/742d7708-efd3-492c-abff-6044d78e3bbd/Image/6303a2314437d8fcf2f72d9a56b1293a/f_distribution_probability.png
I am using a Ruby gem called Distribution (https://github.com/sciruby/distribution) to try and achieve this. It looks like the right tool, but I'm having a terrible time trying to understand how to use it to achieve the desired outcome :( Any help please.
I take that back: there is no rng call for F. So, if you want to use the Distribution gem, what I would propose is to use Chi2 with 4 degrees of freedom.
The mode for Chi2 with k degrees of freedom is equal to k-2, so for 4 d.f. you'll get the mode at 2; see here. My Ruby is rusty, so bear with me:
require 'distribution'

normal = Distribution::Normal.rng(0)  # standard normal generator
# chi^2 with 4 d.f.: the sum of four squared standard normals
g1 = normal.call
g2 = normal.call
g3 = normal.call
g4 = normal.call
chi2 = g1*g1 + g2*g2 + g3*g3 + g4*g4
UPDATE
You have to truncate it at 15, so if the generated chi2 is greater than 15, just reject it and generate another one. Though I would say you won't see a lot of values above 15; check the graphs for the PDF/CDF.
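For what it's worth, here is a minimal sketch of the same reject-above-15 idea in Python (Python just to keep the sketch dependency-free; the Ruby version with the generator above is analogous):

import random

def truncated_chi2_4(cutoff=15.0):
    # chi^2 with 4 d.f. as the sum of four squared standard normals,
    # resampling until the value falls below the cutoff
    while True:
        x = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(4))
        if x < cutoff:
            return x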
UPDATE II
And if you want to get samples from F, make a generic Chi2 generator for d degrees of freedom from the code above, and just sample the ratio of two chi2 values; check here:
def DChi2(d)  # generic chi^2 generator built from the code above
  normal = Distribution::Normal.rng(0)
  -> { (1..d).map { normal.call**2 }.reduce(:+) }
end
f = (DChi2(d1).call / d1) / (DChi2(d2).call / d2)
UPDATE III
And, frankly, I don't see how you could get the F distribution working for you. It is OK at 0, but the mode is equal to (d1-2)/d1 * d2/(d2 + 2), and it is hard to see that equal to 2. The graph you provided has its mode at about 1/3.
Here's a very crude, unscientific, non-mathy attempt at using the F-distribution with the parameters you gave in the F-function image (3 and 36).
First I calculate what F-value is needed for the CDF to be 0.975 (100% - 2.5% for the upper end of the range for your number 15):
To calculate that we can use the p_value method like so:
> F_15 = Distribution::F.p_value(0.975, 3, 36)
=> 3.5046846420861977
Next we simply use a multiplier so that when we calculate the inverse CDF it will return the value 15 when the F-value is F_15.
> M = 15 / F_15
=> 4.27998565687528
And now we can generate random numbers with rand, which has a range of 0..1 like so:
[M * Distribution::F.p_value(rand, 3, 36), 15].min
The question is: will this function be close to the number 2 with a 45% probability? Well... sort of. You need to pick the right parameters for the F-distribution to tweak the curve (or just adjust the multiplier M). But here's a sample with the parameters from your image:
0.step(0.99, 0.02).map { |n|
sprintf("%0.2f", M * Distribution::F.p_value(n, 3, 36))
}
Gives you:
["0.00", "0.26", "0.42", "0.57", "0.70", "0.83", "0.95", "1.07",
"1.20", "1.31", "1.43", "1.55", "1.67", "1.80", "1.92", "2.04",
"2.17", "2.30", "2.43", "2.56", "2.70", "2.84", "2.98", "3.13",
"3.28", "3.44", "3.60", "3.77", "3.95", "4.13", "4.32", "4.52",
"4.73", "4.95", "5.18", "5.43", "5.69", "5.97", "6.28", "6.61",
"6.97", "7.37", "7.81", "8.32", "8.90", "9.60", "10.45", "11.56",
"13.14", "15.90"]
Sometimes you know which distribution applies because of the nature of the data. If, for example, the random variable is the sum of independent, identical Bernoulli (two-state) random variables, you know the former has a binomial distribution, which can be approximated by a Normal distribution. When, as here, that does not apply, you can use a continuous distribution, shaped by its parameters, or simply use a discrete distribution. Others have made suggestions for using various continuous distributions, so I'll pass on some remarks about using a discrete distribution.
Suppose the discrete probability density function were the following:
pdf = [[0.5, 0.03], [1.0, 0.06], [1.5, 0.10], [ 2.0, 0.15], [2.5 , 0.15], [ 3.0, 0.10],
[4.0, 0.11], [6.0, 0.14], [9.0, 0.10], [12.0, 0.03], [14.0, 0.02], [15.0, 0.01]]
pdf.map(&:last).reduce(:+)
#=> 1.0
This could be interpreted as there being a probability of 0.03 that the random variable will be less than 0.5, a 0.06 probability of the random variable being greater than or equal to 0.5 and less than 1.0, and so on.
A discrete pdf might be constructed from historical data or by sampling, an advantage it has over using a continuous distribution. It can be made arbitrarily fine by increasing the number of intervals.
Next convert the pdf to a cumulative distribution function:
cum = 0.0
cdf = pdf.map { |k,v| [k, cum += v] }
#=> [[0.5, 0.03], [1.0, 0.09], [1.5, 0.19], [2.0, 0.34], [2.5, 0.49], [3.0, 0.59],
# [4.0, 0.7], [6.0, 0.84], [9.0, 0.94], [12.0, 0.97], [14.0, 0.99], [15.0, 1.0]]
Now use Kernel#rand to generate pseudo random variates between 0.0 and 1.0 and use Enumerable#find to associate the random variate with a cdf key:
def rnd(cdf)
r = rand
cdf.find { |k,v| r < v }.first
end
Note that cdf.find { |k,v| rand < v }.first would produce erroneous results, since rand is executed for each key-value pair of cdf.
Let's try it 100,000 times, recording the relative frequencies
n = 100_000
inc = 1.0/n
n.times.with_object(Hash.new(0.0)) { |_, h| h[rnd(cdf)] += inc }.
sort.
map { |k,v| [k, v.round(5)] }.to_h
#=> { 0.5=>0.03053, 1.0=>0.05992, 1.5=>0.10084, 2.0=>0.14959, 2.5=>0.15024,
# 3.0=>0.10085, 4.0=>0.10946, 6.0=>0.13923, 9.0=>0.09919, 12.0=>0.03073,
# 14.0=>0.01931, 15.0=>0.01011}
I'm struggling with my thesis on wave energy devices. Since I am a newbie to Fortran 90, I would like to improve my programming skills. Therefore, I just picked up an example from
http://rosettacode.org/wiki/Cholesky_decomposition
and tried to implement what is explained on the page. Basically, the task is to program the Cholesky factorization of a 3x3 matrix A. I know there are already packages that do the decomposition for Fortran, but I would like to experience for myself the effort of learning how to program it.
It compiles without error, but the results do not match: I get all the elements right except for L(3,3). Below is the code I've written from scratch in Fortran 90:
Program Cholesky_decomp
  implicit none
  ! size of the matrix
  INTEGER, PARAMETER :: m=3 ! rows
  INTEGER, PARAMETER :: n=3 ! cols
  REAL, DIMENSION(m,n) :: A, L
  REAL :: sum1, sum2
  INTEGER i, j, k

  ! Assign values to the matrix
  A(1,:) = (/ 25, 15, -5 /)
  A(2,:) = (/ 15, 18,  0 /)
  A(3,:) = (/ -5,  0, 11 /)

  ! Initialize values
  L(1,1) = sqrt(A(1,1))
  L(2,1) = A(2,1)/L(1,1)
  L(2,2) = sqrt(A(2,2)-L(2,1)*L(2,1))
  L(3,1) = A(3,1)/L(1,1)
  sum1 = 0
  sum2 = 0

  do i = 1, n
    do k = 1, i
      do j = 1, k-1
        if (i == k) then
          sum1 = sum1 + (L(k,j)*L(k,j))
          L(k,k) = sqrt(A(k,k)-sum1)
        elseif (i > k) then
          sum2 = sum2 + (L(i,j)*L(k,j))
          L(i,k) = (1/L(k,k))*(A(i,k)-sum2)
        else
          L(i,k) = 0
        end if
      end do
    end do
  end do

  ! write output
  do i = 1, m
    print "(3(1X,F6.1))", L(i,:)
  end do
End program Cholesky_decomp
Can you tell me what the mistake in the code is? I get L(3,3)=0 when it should be L(3,3)=3. I'm totally lost, and just for the record: on the Rosetta Code page there is no solution for Fortran, so any hint is appreciated.
Thank you very much in advance.
You want to set sum1 and sum2 to zero for each iteration of the i and k loops.
I've finally found out how to solve the problem for greater orders (4x4 matrices, etc.) as presented in the link I attached above. Here is the final code:
Program Cholesky_decomp
  !*************************************************!
  ! LBH # ULPGC 06/03/2014
  ! Compute the Cholesky decomposition of a matrix A,
  ! following
  ! http://rosettacode.org/wiki/Cholesky_decomposition
  ! Note that the matrix A is complex, since there might
  ! be values where the sqrt has complex solutions.
  ! Here, only the real parts are taken into account.
  !*************************************************!
  implicit none
  INTEGER, PARAMETER :: m=3 ! rows
  INTEGER, PARAMETER :: n=3 ! cols
  COMPLEX, DIMENSION(m,n) :: A
  REAL, DIMENSION(m,n) :: L
  REAL :: sum1, sum2
  INTEGER i, j, k

  ! Assign values to the matrix
  A(1,:) = (/ 25, 15, -5 /)
  A(2,:) = (/ 15, 18,  0 /)
  A(3,:) = (/ -5,  0, 11 /)
  !!!!!!!!!!!! another example !!!!!!!
  !A(1,:) = (/ 18,  22,  54,  42 /)
  !A(2,:) = (/ 22,  70,  86,  62 /)
  !A(3,:) = (/ 54,  86, 174, 134 /)
  !A(4,:) = (/ 42,  62, 134, 106 /)

  ! Initialize values
  L(1,1) = real(sqrt(A(1,1)))
  L(2,1) = A(2,1)/L(1,1)
  L(2,2) = real(sqrt(A(2,2)-L(2,1)*L(2,1)))
  L(3,1) = A(3,1)/L(1,1)
  ! for greater order than m,n=3 add the initial row values;
  ! for instance, if m,n=4 then add the following line
  !L(4,1) = A(4,1)/L(1,1)

  do i = 1, n
    do k = 1, i
      sum1 = 0
      sum2 = 0
      do j = 1, k-1
        if (i == k) then
          sum1 = sum1 + (L(k,j)*L(k,j))
          L(k,k) = real(sqrt(A(k,k)-sum1))
        elseif (i > k) then
          sum2 = sum2 + (L(i,j)*L(k,j))
          L(i,k) = (1/L(k,k))*(A(i,k)-sum2)
        else
          L(i,k) = 0
        end if
      end do
    end do
  end do

  ! write output
  do i = 1, m
    print "(3(1X,F6.1))", L(i,:)
  end do
End program Cholesky_decomp
I look forward to hearing comments, better ways to program it, corrections, and any other kind of feedback. Thanks to francescalus for answering so quickly!
Regards, lbh
I'm looking for a method to generate a pseudorandom stream with a somewhat odd property - I want clumps of nearby numbers.
The tricky part is, I can only keep a limited amount of state no matter how large the range is. There are algorithms that give a sequence of results with minimal state (linear congruence?)
Clumping means that there's a higher probability that the next number will be close rather than far.
Example of a desirable sequence (mod 10): 1 3 9 8 2 7 5 6 4
I suspect this would be more obvious with a larger stream, but difficult to enter by hand.
Update:
I don't understand why it's impossible, but yes, I am looking for, as Welbog summarized:
Non-repeating
Non-Tracking
"Clumped"
Cascade a few LFSRs with periods smaller than you need, combining them to get a result such that the fastest-changing register controls the least significant values. So if you have L1 with period 3, L2 with period 15 and L3 with some larger period, N = L1(n) + 3 * L2(n/3) + 45 * L3(n/45). This will obviously generate 3 clumped values, then jump and generate another 3 clumped values. Use something other than multiplication (such as mixing some of the bits of the higher-period registers) or different periods to make the clump spread wider than the period of the first register. It won't be particularly smoothly random, but it will be clumpy and non-repeating.
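As a rough illustration of the cascade, here is a Python sketch (the register sizes and taps below are my own illustrative choices, not prescribed above):

def lfsr(state, taps, nbits):
    # advance a Fibonacci LFSR one step and return the new state
    bit = 0
    for t in taps:
        bit ^= (state >> t) & 1
    return ((state << 1) | bit) & ((1 << nbits) - 1)

def cascade(n_samples):
    s1, s2 = 1, 1  # 2-bit LFSR (period 3) and 4-bit LFSR (period 15)
    out = []
    for n in range(n_samples):
        s1 = lfsr(s1, (1, 0), 2)      # fast register: advances every step
        if n % 3 == 0:
            s2 = lfsr(s2, (3, 2), 4)  # slow register: advances every 3rd step
        out.append(s1 + 3 * s2)       # N = L1(n) + 3 * L2(n/3)
    return out

print(cascade(12))  # triples of clumped values, then a jump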
For the record, I'm in the "non-repeating, non-random, non-tracking is a lethal combination" camp, and I hope some simple thought experiments will shed some light. This is not a formal proof by any means. Perhaps someone will shore it up.
So, I can generate a sequence that has some randomness easily:
Given x_i, x_(i+1) ~ U(x_i, r), where r > x_i.
For example:
if x_i = 6, x_(i+1) is a random choice from (6+epsilon, some_other_real > 6). This guarantees non-repetition, but at the cost that the sequence is monotonically increasing.
Without some condition (like monotonicity) inherent to the sequence of generated numbers themselves, how else can you guarantee uniqueness without carrying state?
Edit: After researching RBarryYoung's mention of "linear congruential generators" (not differentiators - is this what RBY meant?), clearly I was wrong! These sequences exist, and by necessity, any PRNG whose next number depends only on the current number and some global, non-changing state can't have repeats within a cycle (after some initial burn-in period).
By defining the "clumping features" in terms of the probability distribution of clump size and the probability distribution of clump range, you can then use simple random generators with the underlying distributions to produce the sequences.
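One way to read that suggestion, as a Python sketch (the particular distributions below are arbitrary placeholders):

import random

def clumpy_stream():
    # draw a clump size, a clump location and a spread from simple
    # distributions, then emit the whole clump before moving on
    while True:
        size = random.randint(2, 6)          # clump-size distribution
        centre = random.uniform(0.0, 100.0)  # clump-location distribution
        for _ in range(size):
            yield centre + random.uniform(-1.0, 1.0)  # clump-spread distribution

gen = clumpy_stream()
print([round(next(gen), 2) for _ in range(10)])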
One way to get "clumpy" numbers would be to use a normal distribution.
You start the random list with your "initial" random value, then you generate a random number with the mean of the previous random value and a constant variance, and repeat as necessary. The overall variance of your entire list of random numbers should be approximately constant, but the "running average" of your numbers will drift randomly with no particular bias.
>>> import random
>>> r = [1]
>>> for x in range(20):
...     r.append(random.normalvariate(r[-1], 1))
...
>>> r
[1, 0.84583267252801408, 0.18585962715584259, 0.063850022580489857, 1.2892164299497422,
 0.019381814281494991, 0.16043424295472472, 0.78446377124854461, 0.064401889591144235,
 0.91845494342245126, 0.20196939102054179, -1.6521524237203531, -1.5373703928440983,
 -2.1442902977248215, 0.27655425357702956, 0.44417440706703393, 1.3128647361934616,
 2.7402744740729705, 5.1420432435119352, 5.9326297626477125, 5.1547981880261782]
I know it's hard to tell by looking at the numbers, but you can sort of see that the numbers clump together a little bit - the 5.X's at the end, and the 0.X's on the second row.
If you need only integers, you can just use a very large mean and variance, and truncate/divide to obtain integer output. Normal distributions are by definition continuous, meaning all real numbers are potential output; the values are not restricted to integers.
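For example, a small sketch along those lines: scale the walk up and round each value:

import random

r = [100]
for _ in range(20):
    # drift with a larger variance, then round to get clumpy integers
    r.append(round(random.normalvariate(r[-1], 10)))
print(r)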
Here's a quick scatter plot in Excel of 200 numbers generated this way (starting at 0, constant variance of 1):
scatter data http://img178.imageshack.us/img178/8677/48855312.png
Ah, I just read that you want non-repeating numbers. No guarantee of that in a normal distribution, so you might have to take into account some of the other approaches others have mentioned.
I don't know of an existing algorithm that would do this, but it doesn't seem difficult to roll your own (depending on how stringent the "limited amount of state" requirement is). For example:
RANGE = 1..1000
CLUMP_ODDS = 0.5
CLUMP_DIST = 10

last = rand(RANGE)
while still_want_numbers
  if rand < CLUMP_ODDS  # clump!
    current = last + rand(CLUMP_DIST) - (CLUMP_DIST / 2)  # do some boundary checking here
  else                  # don't clump!
    current = rand(RANGE)
  end
  print current
  last = current
end
It's a little rudimentary, but would something like that suit your needs?
In the range [0, 10] the following should give a uniform distribution. random() yields a (pseudo) random number r with 0 <= r < 1.
x(n + 1) = (x(n) + 5 * (2 * random() - 1)) mod 10
You can get your desired behavior by delinearizing random() - for example, random()^k will be skewed towards small numbers for k > 1. A possible function could be the following, but you will have to try some exponents to find your desired distribution. And keep the exponent odd, if you use the following function... ;)
x(n + 1) = (x(n) + 5 * (2 * random() - 1)^3) mod 10
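In Python, the two update rules might look like this (a direct transcription; the exponent 3 is just the example from above):

import random

def next_uniform(x):
    # x(n + 1) = (x(n) + 5 * (2 * random() - 1)) mod 10
    return (x + 5 * (2 * random.random() - 1)) % 10

def next_clumpy(x, k=3):
    # raising to an odd power k skews the step toward small moves
    return (x + 5 * (2 * random.random() - 1) ** k) % 10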
How about this (pseudocode)?
// clumpiness is static, in that its value is retained between calls
static float clumpiness = 0.0f; // from 0 to 1.0

method getNextvalue(int lastValue)
    float r = rand(); // float from 0 to 1
    int change = MAXCHANGE * (r - 0.5) * (1 - clumpiness);
    clumpiness += 0.1 * rand();
    if (clumpiness >= 1.0) clumpiness -= 1.0;
    return Round(lastValue + change);
Perhaps you could generate a random sequence, and then do some strategic element swapping to get the desired property.
For example, if you find 3 values a,b,c in the sequence such that a>b and a>c, then with some probability you could swap elements a and b or elements a and c.
EDIT in response to comment:
Yes, you could have a buffer on the stream that is whatever size you are comfortable with. Your swapping rules could be deterministic, or based on another known, reproducible pseudo-random sequence.
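A sketch of how that buffered swapping might look in Python (the buffer size and swap rule are my own illustrative choices):

import random

def clumpify(stream, buf_size=8, swap_prob=0.5):
    # buffer the stream; when an element exceeds both of the next two,
    # sometimes swap it toward them to increase local closeness
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == buf_size:
            for i in range(len(buf) - 2):
                a, b, c = buf[i], buf[i+1], buf[i+2]
                if a > b and a > c and random.random() < swap_prob:
                    buf[i], buf[i+1] = buf[i+1], buf[i]
            yield from buf
            buf = []
    yield from buf

print(list(clumpify(iter(random.sample(range(100), 30)))))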
Does a sequence like 0, 94, 5, 1, 3, 4, 14, 8, 10, 9, 11, 6, 12, 7, 16, 15, 17, 19, 22, 21, 20, 13, 18, 25, 24, 26, 29, 28, 31, 23, 36, 27, 42, 41, 30, 33, 34, 37, 35, 32, 39, 47, 44, 46, 40, 38, 50, 43, 45, 48, 52, 49, 55, 54, 57, 56, 64, 51, 60, 53, 59, 62, 61, 69, 68, 63, 58, 65, 71, 70, 66, 73, 67, 72, 79, 74, 81, 77, 76, 75, 78, 83, 82, 85, 80, 87, 84, 90, 89, 86, 96, 93, 98, 88, 92, 99, 95, 97, 2, 91 (mod 100) look good to you?
This is the output of a small Ruby program (explanations below):
#!/usr/bin/env ruby
require 'digest/md5'
$seed = 'Kind of a password'
$n = 100 # size of sequence
$k = 10 # mixing factor (higher means less clumping)
def pseudo_random_bit(k, n)
  # low bit of the last hex character (.ord for Ruby >= 1.9)
  Digest::MD5.hexdigest($seed + "#{k}|#{n}")[-1].ord & 1
end

def sequence(x)
  h = $n/2
  $k.times do |k|
    # maybe exchange 1st with 2nd, 3rd with 4th, etc
    x ^= pseudo_random_bit(k, x >> 1) if x < 2*h
    # maybe exchange 1st with last
    if [0, $n-1].include? x
      x ^= ($n-1)*pseudo_random_bit(k, 2*h)
    end
    # move 1st to end
    x = (x - 1) % $n
    # maybe exchange 1st with 2nd, 3rd with 4th, etc
    # (corresponds to 2nd with 3rd, 4th with 5th, etc)
    x ^= pseudo_random_bit(k, h+(x >> 1)) if x < 2*(($n-1)/2)
    # move 1st to front
    x = (x + 1) % $n
  end
  x
end
puts (0..99).map {|x| sequence(x)}.join(', ')
The idea is basically to start with the sequence 0..n-1 and disturb the order by passing k times over the sequence (more passes means less clumping). In each pass one first looks at the pairs of numbers at positions 0 and 1, 2 and 3, 4 and 5 etc (general: 2i and 2i+1) and flips a coin for each pair. Heads (=1) means exchange the numbers in the pair, tails (=0) means don't exchange them. Then one does the same for the pairs at positions 1 and 2, 3 and 4, etc (general: 2i+1 and 2i+2). As you mentioned that your sequence is mod 10, I additionally exchanged positions 0 and n-1 if the coin for this pair dictates it.
A single number x can be mapped modulo n after k passes to any number of the interval [x-k, x+k] and is approximately binomially distributed around x. Pairs (x, x+1) of numbers are not modified independently.
As the pseudo-random generator I used only the last of the 128 output bits of the hash function MD5; choose whatever function you want instead. Because of the clumping, one won't get a "secure" (= unpredictable) random sequence.
Maybe you can chain together 2 or more LCGs in a manner similar to the one described for the LFSRs here. Increment the least-significant LCG with its seed; on a full cycle, increment the next LCG. You only need to store a seed for each LCG. You could then weight each part and sum the parts together. To avoid repetitions in the 'clumped' least-significant part you can randomly reseed the LCG on each full cycle.
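A minimal Python sketch of two chained LCGs (the constants are my own full-period choices via the Hull-Dobell theorem, not from the answer):

class LCG:
    # x -> (a*x + c) mod m; with these constants the period is the full m
    def __init__(self, seed, a=5, c=3, m=16):
        self.x, self.a, self.c, self.m = seed, a, c, m
    def step(self):
        self.x = (self.a * self.x + self.c) % self.m
        return self.x

lo, hi = LCG(0), LCG(0)  # low digits change fast, high digits slowly

def next_value(n):
    if n % lo.m == 0:    # on each full cycle of lo, advance hi
        hi.step()
    return lo.step() + lo.m * hi.x

print([next_value(n) for n in range(40)])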
I've become very addicted to Project Euler recently and am trying to do this one next! I've started some analysis on it and have reduced the problem down substantially already. Here's my working:
A = pqr and 1/A = 1/p + 1/q + 1/r, so pqr/A = pq + pr + qr.
And because of the first equation: pq + pr + qr = 1.
Since exactly two of p, q and r have to be negative, we can simplify the equation down to finding abc for which ab = ac + bc + 1.
Solving for c we get: ab - 1 = (a + b)c, so c = (ab - 1)/(a + b).
This means we need to find a and b for which ab = 1 (mod a + b).
And then our A value with those a and b is A = abc = ab(ab - 1)/(a + b).
Sorry if that's a lot of math! But now all we have to deal with is one condition and two equations. Since I need to find the 150,000th smallest integer that can be written as ab(ab - 1)/(a + b) with ab = 1 (mod a + b), ideally I want to search (a, b) pairs for which A is as small as possible.
For ease I assumed a < b and I have also noticed that gcd(a, b) = 1.
My first implementation is straightforward and even finds 150,000 solutions fast enough. However, it takes far too long to find the 150,000 smallest solutions. Here's the code anyway:
n = 150000
seen = set()
a = 3
while len(seen) < n:
    for b in range(2, a):
        if (a*b) % (a+b) != 1: continue
        seen.add(a*b*(a*b-1)//(a+b))
        print(len(seen), (a, b), a*b*(a*b-1)//(a+b))
    a += 1
My next thought was to use Stern-Brocot trees, but that is just too slow to find solutions. My final algorithm was to use the Chinese remainder theorem to check whether different values of a+b yield solutions. That code is complicated, and although faster, it isn't fast enough...
So I'm absolutely out of ideas! Anyone got any ideas?
As with many of the Project Euler problems, the trick is to find a technique that reduces the brute-force solution to something more straightforward:
A = pqr and 1/A = 1/p + 1/q + 1/r
So, pq + qr + rp = 1, or -r = (pq - 1)/(p + q).
Without loss of generality, 0 < p < -q < -r.
There exists k, 1 <= k <= p, with -q = p + k.
Then -r = (-p(p + k) - 1)/(p - (p + k)) = (p(p + k) + 1)/k = (p^2 + 1)/k + p.
But r is an integer, so k divides p^2 + 1.
Finally, pqr = p(p + k)((p^2 + 1)/k + p).
So to compute A we need to iterate over p, where k can only take values that are divisors of p^2 + 1.
Adding each solution to a set, we can stop when we find the required 150000th Alexandrian integer.
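A minimal Python sketch of that search (the fixed bound on p is an assumption for illustration; a provably sufficient bound for the 150,000th value needs more care):

def alexandrian(p_max):
    # collect A = pqr using -q = p + k and -r = (p*p + 1)/k + p,
    # where k runs over the divisors of p*p + 1 with 1 <= k <= p
    results = set()
    for p in range(1, p_max + 1):
        t = p * p + 1
        for k in range(1, p + 1):
            if t % k == 0:
                results.add(p * (p + k) * (t // k + p))
    return sorted(results)

print(alexandrian(50)[:6])  # starts 6, 42, 120, 156, ...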
This article about a fast implementation of the Chinese remainder theorem may help you: www.codeproject.com/KB/recipes/CRP.aspx
Here are some more links to tools and libraries:
Tools:
Maxima
http://maxima.sourceforge.net/
Maxima is a system for the manipulation of symbolic and numerical expressions, including differentiation, integration, Taylor series, Laplace transforms, ordinary differential equations, systems of linear equations, polynomials, and sets, lists, vectors, matrices, and tensors. Maxima yields high precision numeric results by using exact fractions, arbitrary precision integers, and variable precision floating point numbers. Maxima can plot functions and data in two and three dimensions.
Mathomatic
http://mathomatic.org/math/
Mathomatic is a free, portable, general-purpose CAS (Computer Algebra System) and calculator software that can symbolically solve, simplify, combine, and compare equations, perform complex number and polynomial arithmetic, etc. It does some calculus and is very easy to use.
Scilab
www.scilab.org/download/index_download.php
Scilab is a numerical computation system similar to Matlab or Simulink. Scilab includes hundreds of mathematical functions, and programs from various languages (such as C or Fortran) can be added interactively.
mathstudio
mathstudio.sourceforge.net
An interactive equation editor and step-by-step solver.
Library:
Armadillo C++ Library
http://arma.sourceforge.net/
The Armadillo C++ Library aims to provide an efficient base for linear algebra operations (matrix and vector maths) while having a straightforward and easy to use interface.
Blitz++
http://www.oonumerics.org/blitz/
Blitz++ is a C++ class library for scientific computing
BigInteger C#
http://msdn.microsoft.com/pt-br/magazine/cc163441.aspx
libapmath
http://freshmeat.net/projects/libapmath
The aim of this project is the implementation of an arbitrary-precision C++ library that is as convenient as possible to use: all operations are implemented as operator overloadings, and naming is mostly the same as that of <math.h>.
libmat
http://freshmeat.net/projects/libmat
MAT is a C++ mathematical template class library. Use this library for various matrix operations, finding roots of polynomials, solving equations, etc. The library contains only C++ header files, so no compilation is necessary.
animath
http://www.yonsen.bz/animath/animath.html
Animath is a Finite Element Method library entirely implemented in C++. It is suited for fluid-structure interaction simulation, and it is mathematically based on higher-order tetrahedral elements.
OK. Here's some more playing with my Chinese remainder theorem solution. It turns out that a+b cannot have any prime factor p unless p = 2 or p = 1 (mod 4). This allows faster computation, as we only have to check values of a+b built from primes such as 2, 5, 13, 17, 29, 37...
So here is a sample of possible a+b values:
[5, 8, 10, 13, 16, 17, 20, 25, 26, 29, 32, 34, 37, 40, 41, 50, 52, 53, 58, 61, 64, 65, 68, 73, 74, 80, 82, 85, 89, 97, 100]
And here is the full program using the Chinese Remainder Theorem:
cachef = {}
def factors(n):
    if n in cachef: return cachef[n]  # the original was missing this return
    i = 2
    while i*i <= n:
        if n % i == 0:
            r = set([i]) | factors(n//i)
            cachef[n] = r
            return r
        i += 1
    r = set([n])
    cachef[n] = r
    return r
cachet = {}
def table(n):
    if n == 2: return 1
    if n % 4 != 1: return
    if n in cachet: return cachet[n]
    a1 = n-1
    for a in range(1, n//2+1):
        if (a*a) % n == a1:
            cachet[n] = a
            return a

cacheg = {}
def extended(a, b):
    if a % b == 0:
        return (0, 1)
    else:
        if (a, b) in cacheg: return cacheg[(a, b)]
        x, y = extended(b, a % b)
        x, y = y, x - y*(a//b)
        cacheg[(a, b)] = (x, y)
        return (x, y)
def go(n):
    f = [a for a in factors(n)]
    m = [table(a) for a in f]
    N = 1
    for a in f: N *= a
    x = 0
    for i in range(len(f)):
        if not m[i]: return 0
        s, t = extended(f[i], N//f[i])
        x += t*m[i]*N//f[i]
    x %= N
    a = x
    while a < n:
        b = n-a
        if (a*b-1) % (a+b) == 0: return a*b*(a*b-1)//(a+b)
        a += N

li = [5, 8, 10, 13, 16, 17, 20, 25, 26, 29, 32, 34, 37, 40, 41, 50,
      52, 53, 58, 61, 64, 65, 68, 73, 74, 80, 82, 85, 89, 97, 100]
r = set([6])
find = 6
for a in li:
    g = go(a)
    if g:
        r.add(g)
        #print(g)
    else:
        pass  #print(a)
r = list(r)
r.sort()
print(r)
print(len(r), 'with', len(li), 'iterations')
This is better, but I hope to improve it further (for example, values a+b = 2^n never seem to be solutions).
I've also started considering basic substitutions, such as:
a = u+1 and b = v+1
ab = 1 (mod a+b)
uv+u+v = 0 (mod u+v+2)
However, I can't see much improvement with that...