Gamma function approximation - ruby

I'm trying to write a three-parameter method that approximates the gamma function over a certain interval. The approximation should be a right-endpoint Riemann sum.
The gamma function is given by:
GAMMA(s) = INT_0^inf x^(s-1) * exp(-x) dx
The right-endpoint Riemann sum approximation over the interval (0, m) should therefore be:
GAMMA(s) ~ SUM_{i=1}^{n} ((m/n)*i)^(s-1) * exp(-(m/n)*i) * delta_x, where delta_x = (m/n)
My code is as follows:
def gamma(x = 4.0, n = 100000, m = 2500)
  array = *(1..n)
  result = array.inject(0) {|sum, i| sum + ((((m/n)*i)**(x-1))*((2.7183)**(-(m/n)*i))*(m/n))}
end
puts gamma
The code should return an approximation for 3! = 6, but instead it returns 0.0. Any ideas where I may be going wrong?

The problem is that when you do m/n you are doing integer division (e.g. 3/4 = 0) when you expect float division (3.0/4 = 0.75); in your case 2500/100000 evaluates to 0, so every term of the sum is 0. You need to define your n and m as floats.
You can rewrite it as
def gamma(x = 4.0, n = 100000, m = 2500)
  n = n.to_f
  m = m.to_f
  (1..n).to_a.inject(0) do |sum, i|
    sum + ((((m/n)*i)**(x-1))*((Math::E)**(-(m/n)*i))*(m/n))
  end
end
PS: Also, you do not need the array and result variables.
PS2: Consider using Math::E instead of 2.7183.

Your problem was identified by @xlembouras. You might consider writing the method as follows.
Code
def gamma(x = 4.0, n = 100000, m = 2500)
  ratio = m.to_f/n
  xm1 = x-1.0
  ratio * (1..m).inject(0) do |sum,i|
    ixratio = i*ratio
    sum + ixratio**xm1 * Math.exp(-ixratio)
  end
end
Examples
gamma(x = 4.0, n = 40, m = 10).round(6) #=> 1.616233
gamma.round(6) #=> 6.0
Please confirm these calculations are correct.
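For what it's worth, here is a quick Python transcription of the right-endpoint sum as written in the question (my own sketch, not part of either answer); with the default arguments it also lands at roughly 6.0:

import math

def gamma_riemann(s=4.0, n=100_000, m=2500):
    # Right-endpoint Riemann sum of x^(s-1) * exp(-x) over (0, m) with n steps
    # (illustrative transcription of the question's formula, not the answers' code).
    dx = m / n
    return sum((dx * i) ** (s - 1) * math.exp(-dx * i) * dx for i in range(1, n + 1))

print(gamma_riemann())  # ~6.0, i.e. 3!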

Related

Is it possible to do a poisson distribution with the probabilities based on integers?

I'm working within Solidity and the Ethereum EVM, where decimals don't exist. Is there a way I could still mathematically create a Poisson distribution using integers? It doesn't have to be perfect, i.e. rounding or losing some digits may be acceptable.
Let me preface by stating that what follows is not going to be (directly) helpful to you with Ethereum/Solidity. However, it produces probability tables that you might be able to use for your work.
I ended up intrigued by the question of how accurate you could be in expressing the Poisson probabilities as rationals, so I put together the following script in Ruby to try things out:
def rational_poisson(lmbda)
  Hash.new.tap do |h| # create a hash and pass it to this block as 'h'.
    # Make all components of the calculations rational to allow
    # cancellations to occur wherever possible when dividing
    e_to_minus_lambda = Math.exp(-lmbda).to_r
    factorial = 1r
    lmbda = lmbda.to_r
    power = 1r
    (0...).each do |x|
      unless x == 0
        power *= lmbda
        factorial *= x
      end
      value = (e_to_minus_lambda / factorial) * power
      # the following double inversion/conversion bounds the result
      # by the significant bits in the mantissa of a float
      approx = Rational(1, (1 / value).to_f)
      h[x] = approx
      break if x > lmbda && approx.numerator <= 1
    end
  end
end

if __FILE__ == $PROGRAM_NAME
  lmbda = (ARGV.shift || 2.0).to_f # read in a lambda (defaults to 2.0)
  pmf = rational_poisson(lmbda)    # create the pmf for a Poisson with that lambda
  pmf.each { |key, value| puts "p(#{key}) = #{value} = #{value.to_f}" }
  puts "cumulative error = #{1.0 - pmf.values.inject(&:+)}" # does it sum to 1?
end
Things to know as you glance through the code: appending .to_r to a value or expression converts it to a Rational, i.e., a ratio of two integers; values with an r suffix are Rational literals; and (0...).each is an open-ended iterator which will loop until the break condition is met.
That little script produces results such as:
localhost:pjs$ ruby poisson_rational.rb 1.0
p(0) = 2251799813685248/6121026514868073 = 0.36787944117144233
p(1) = 2251799813685248/6121026514868073 = 0.36787944117144233
p(2) = 1125899906842624/6121026514868073 = 0.18393972058572117
p(3) = 281474976710656/4590769886151055 = 0.061313240195240384
p(4) = 70368744177664/4590769886151055 = 0.015328310048810096
p(5) = 17592186044416/5738462357688819 = 0.003065662009762019
p(6) = 1099511627776/2151923384133307 = 0.0005109436682936699
p(7) = 274877906944/3765865922233287 = 7.299195261338141e-05
p(8) = 34359738368/3765865922233287 = 9.123994076672677e-06
p(9) = 67108864/66196861914257 = 1.0137771196302974e-06
p(10) = 33554432/330984309571285 = 1.0137771196302975e-07
p(11) = 33554432/3640827405284135 = 9.216155633002704e-09
p(12) = 4194304/5461241107926203 = 7.68012969416892e-10
p(13) = 524288/8874516800380079 = 5.907792072437631e-11
p(14) = 32768/7765202200332569 = 4.2198514803125934e-12
p(15) = 256/909984632851473 = 2.8132343202083955e-13
p(16) = 16/909984632851473 = 1.7582714501302472e-14
p(17) = 1/966858672404690 = 1.0342773236060278e-15
cumulative error = 0.0
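For comparison, the same idea ports fairly directly to Python's fractions.Fraction. The sketch below is my own rough translation of the Ruby script, not part of the original answer; it trims precision slightly differently, so its table may run a term or two longer than the output above.

from fractions import Fraction
import math

def rational_poisson(lmbda):
    # Rough Python analogue of the Ruby script above (illustrative sketch):
    # Poisson pmf values as exact fractions, trimmed to roughly double precision.
    pmf = {}
    e_minus_lambda = Fraction(math.exp(-lmbda))  # exact Fraction of that float value
    lam = Fraction(lmbda)
    power = Fraction(1)
    factorial = Fraction(1)
    x = 0
    while True:
        if x > 0:
            power *= lam
            factorial *= x
        value = e_minus_lambda * power / factorial
        approx = 1 / Fraction(float(1 / value))  # round-trip through a float caps the precision kept
        pmf[x] = approx
        if x > lmbda and approx.numerator <= 1:
            break
        x += 1
    return pmf

pmf = rational_poisson(1.0)
for k, v in pmf.items():
    print(f"p({k}) = {v} = {float(v)}")
print("cumulative error =", 1.0 - float(sum(pmf.values())))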

Finding the continued fraction of 2^(1/3) to very high precision

Here I'll use the notation [a0; a1, a2, ...] for the continued fraction a0 + 1/(a1 + 1/(a2 + ...)).
It is possible to find the continued fraction of a number by computing it and then applying the definition, but that requires at least O(n) bits of memory to find a0, a1, ..., an; in practice it is much worse. Using double floating-point precision it is only possible to find a0, a1, ..., a19.
An alternative is to use the fact that if a, b, c are rational numbers then there exist unique rationals x, y, z such that 1/(a + b*2^(1/3) + c*2^(2/3)) = x + y*2^(1/3) + z*2^(2/3), namely
x = (a^2 - 2bc)/d, y = (2c^2 - ab)/d, z = (b^2 - ac)/d, where d = a^3 + 2b^3 + 4c^3 - 6abc.
So if I represent x, y, and z to absolute precision using the Boost rational lib, I can obtain floor(x + y*2^(1/3) + z*2^(2/3)) accurately using only double precision for 2^(1/3) and 2^(2/3), because I only need it to be within 1/2 of the true value. Unfortunately the numerators and denominators of x, y, and z grow considerably fast, and if you use regular floats instead the errors pile up quickly.
This way I was able to compute a0, a1, ..., a10000 in under an hour, but somehow Mathematica can do that in 2 seconds. Here's my code for reference:
#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>

namespace mp = boost::multiprecision;

int main()
{
    const double t_1 = 1.259921049894873164767210607278228350570251;
    const double t_2 = 1.587401051968199474751705639272308260391493;
    mp::cpp_rational p = 0;
    mp::cpp_rational q = 1;
    mp::cpp_rational r = 0;
    for(unsigned int i = 1; i != 10001; ++i) {
        double p_f = static_cast<double>(p);
        double q_f = static_cast<double>(q);
        double r_f = static_cast<double>(r);
        uint64_t floor = p_f + t_1 * q_f + t_2 * r_f;
        std::cout << floor << ", ";
        p -= floor;
        //std::cout << floor << " " << p << " " << q << " " << r << std::endl;
        mp::cpp_rational den = (p * p * p + 2 * q * q * q +
                                4 * r * r * r - 6 * p * q * r);
        mp::cpp_rational a = (p * p - 2 * q * r) / den;
        mp::cpp_rational b = (2 * r * r - p * q) / den;
        mp::cpp_rational c = (q * q - p * r) / den;
        p = a;
        q = b;
        r = c;
    }
    return 0;
}
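If you want to experiment with the same rational-coordinate iteration without Boost, here is a rough Python port using fractions.Fraction (my own sketch, only checked against the first few quotients, and not intended to match the C++ timing):

from fractions import Fraction

T1 = 2 ** (1 / 3)  # double-precision 2^(1/3)
T2 = 2 ** (2 / 3)  # double-precision 2^(2/3)

def cbrt2_continued_fraction(terms):
    # Illustrative port of the C++ above; exact rationals (p, q, r), doubles for the floor.
    p, q, r = Fraction(0), Fraction(1), Fraction(0)
    quotients = []
    for _ in range(terms):
        a = int(float(p) + T1 * float(q) + T2 * float(r))  # floor; doubles suffice here
        quotients.append(a)
        p -= a
        den = p**3 + 2*q**3 + 4*r**3 - 6*p*q*r
        p, q, r = (p*p - 2*q*r) / den, (2*r*r - p*q) / den, (q*q - p*r) / den
    return quotients

print(cbrt2_continued_fraction(20))  # starts 1, 3, 1, 5, 1, 1, 4, ...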
The Lagrange algorithm
The algorithm is described for example in Knuth's book The Art of Computer Programming, vol 2 (Ex 13 in section 4.5.3 Analysis of Euclid's Algorithm, p. 375 in 3rd edition).
Let f be a polynomial with integer coefficients whose only real root is an irrational number x0 > 1. Then the Lagrange algorithm calculates the consecutive quotients of the continued fraction of x0.
I implemented it in Python:
def cf(a, N=10):
    """
    a : list - coefficients of the polynomial,
        i.e. f(x) = a[0] + a[1]*x + ... + a[n]*x^n
    N : number of quotients to output
    """
    # Degree of the polynomial
    n = len(a) - 1
    # List of consecutive quotients
    ans = []

    def shift_poly():
        """
        Replaces polynomial f(x) with f(x+1) (shifts its graph to the left).
        """
        for k in range(n):
            for j in range(n - 1, k - 1, -1):
                a[j] += a[j+1]

    for _ in range(N):
        quotient = 1
        shift_poly()
        # While the root is >1 shift it left
        while sum(a) < 0:
            quotient += 1
            shift_poly()
        # Otherwise, we have the next quotient
        ans.append(quotient)
        # Replace polynomial f(x) with -x^n * f(1/x)
        a.reverse()
        a = [-x for x in a]
    return ans
It takes about 1s on my computer to run cf([-2, 0, 0, 1], 10000). (The coefficients correspond to the polynomial x^3 - 2 whose only real root is 2^(1/3).) The output agrees with the one from Wolfram Alpha.
Caveat
The coefficients of the polynomials evaluated inside the function quickly become quite large integers, so this approach needs a bigint implementation in other languages (pure Python 3 handles it natively, but NumPy, for example, does not).
You might have more luck computing 2^(1/3) to high accuracy and then trying to derive the continued fraction from that, using interval arithmetic to determine if the accuracy is sufficient.
Here's my stab at this in Python, using Halley iteration to compute 2^(1/3) in fixed point. The dead code is an attempt to compute fixed-point reciprocals more efficiently than Python via Newton iteration -- no dice.
Timing from my machine is about thirty seconds, spent mostly trying to extract the continued fraction from the fixed point representation.
prec = 40000
a = 1 << (3 * prec + 1)
two_a = a << 1
x = 5 << (prec - 2)
while True:
    x_cubed = x * x * x
    two_x_cubed = x_cubed << 1
    x_prime = x * (x_cubed + two_a) // (two_x_cubed + a)
    if -1 <= x_prime - x <= 1: break
    x = x_prime
cf = []
four_to_the_prec = 1 << (2 * prec)
for i in range(10000):
    q = x >> prec
    r = x - (q << prec)
    cf.append(q)
    if True:
        x = four_to_the_prec // r
    else:
        x = 1 << (2 * prec - r.bit_length())
        while True:
            delta_x = (x * ((four_to_the_prec - r * x) >> prec)) >> prec
            if not delta_x: break
            x += delta_x
print(cf)

Sum of outer products multiplied by a scalar in MATLAB

I would like to vectorize the following sum of products in order to speed up my Matlab code. Would it be possible?
for i=1:N
    A = A + hazard(i)*Z(i,:)'*Z(i,:);
end
where hazard is a vector (N x 1) and Z is a matrix (N x p).
Thanks!
With matrix multiplication only:
A = A + Z'*diag(hazard)*Z;
This works because Z'*diag(hazard)*Z expands to exactly that sum of scaled outer products. Note, however, that it requires more operations than Divakar's bsxfun approach, because diag(hazard) is an NxN matrix consisting mostly of zeros.
To save some time, you could define the inner matrix as sparse using spdiags, so that multiplication can be optimized:
A = A + full(Z'*spdiags(hazard, 0, zeros(N))*Z);
Benchmarking
Timing code:
Z = rand(N,p);
hazard = rand(N,1);
timeit(@() Z'*diag(hazard)*Z)
timeit(@() full(Z'*spdiags(hazard, 0, zeros(N))*Z))
timeit(@() bsxfun(@times,Z,hazard)'*Z)
Results in seconds (columns in the same order as the timeit calls above):

                        diag       spdiags    bsxfun
N = 1000, p = 300       0.1423     0.0441     0.0325
N = 2000, p = 1000      1.8889     0.7110     0.6600
N = 1000, p = 2000      1.8159     1.2471     1.2264
The bsxfun-based approach is consistently the fastest.
You can use bsxfun and matrix multiplication:
A = bsxfun(@times,Z,hazard).'*Z + A
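For anyone cross-checking the row-scaling idea outside MATLAB, here is a small NumPy sketch of the same identity (my own addition, not part of either answer):

import numpy as np

N, p = 1000, 300
rng = np.random.default_rng(0)
Z = rng.random((N, p))
hazard = rng.random(N)

A_loop = sum(hazard[i] * np.outer(Z[i], Z[i]) for i in range(N))  # the original loop
A_vec = Z.T @ (hazard[:, None] * Z)                               # scale rows, one matrix product
print(np.allclose(A_loop, A_vec))  # True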

Gradient descent returns incorrect prediction for linear function

I've implemented the following batch gradient descent algorithm, based on various sources I was able to find around the web and in lecture notes.
This implementation isn't ideal in terms of stopping criteria, but for my sample it should work.
Inputs:
x = [1,1;1,2;1,3;1,4;1,5];
y = [1;2;3;4;5];
theta = [0;0];
Code:
tempTheta = [0;0];
for c = 1:10000,
    for j = 1:2,
        sum = 0;
        for i = 1:5,
            sum = sum + ((dot(theta', x(i, :)) - y(j)) * x(i,j));
        end
        sum = (sum / 5) * 0.01;
        tempTheta(j) = theta(j) - sum;
    end
    theta = tempTheta;
end
The expected result is theta = [0;1], but my implementation always returns theta = [-3.5, 1.5].
I've tried various combinations of alpha and starting points, but without luck. Where am I making a mistake?
In this line
sum = sum + ((dot(theta', x(i, :)) - y(j)) * x(i,j));
you are using the wrong index into y; it should be y(i), since j is the dimension iterator, not the sample iterator.
After the change
theta =
-1.5168e-07
1.0000e+00
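As a quick cross-check of that fix, here is the same update written in plain Python (my own sketch, not the original Octave/MATLAB); with y[i] instead of y[j] it converges to roughly [0, 1]:

x = [[1, 1], [1, 2], [1, 3], [1, 4], [1, 5]]
y = [1, 2, 3, 4, 5]
theta = [0.0, 0.0]
alpha = 0.01

for _ in range(10000):
    new_theta = theta[:]
    for j in range(2):
        # mean of (prediction - target) * feature_j, using the sample index i
        grad = sum((theta[0]*x[i][0] + theta[1]*x[i][1] - y[i]) * x[i][j]
                   for i in range(5)) / 5
        new_theta[j] = theta[j] - alpha * grad
    theta = new_theta

print(theta)  # roughly [0.0, 1.0], matching the corrected run above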

Making a more efficient monte carlo simulation

So, I've written this code that should effectively estimate the area under the curve of the function defined as h(x). My problem is that I need to be able to estimate the area to within 6 decimal places, but the algorithm I've defined in estimateN seems to be too heavy for my machine. Essentially the question is: how can I make the following code more efficient? Is there a way I can get rid of that loop?
h = function(x) {
  return(1+(x^9)+(x^3))
}

estimateN = function(n) {
  count = 0
  k = 1
  xpoints = runif(n, 0, 1)
  ypoints = runif(n, 0, 3)
  while(k <= n){
    if(ypoints[k] <= h(xpoints[k]))
      count = count + 1
    k = k + 1
  }
  # because of the range that I'm using for y
  return(3*(count/n))
}

# uses the fact that err <= 1/sqrt(n) to determine size of dataset
estimate_to = function(i) {
  n = (10^i)^2
  print(paste(n, " repetitions: ", estimateN(n)))
}
estimate_to(6)
Replace this code:
count = 0
k = 1
while(k <= n){
  if(ypoints[k] <= h(xpoints[k]))
    count = count + 1
  k = k + 1
}
With this line:
count <- sum(ypoints <= h(xpoints))
If it's truly efficiency you're striving for, integrate is several orders of magnitude faster (not to mention more memory efficient) for this problem.
integrate(h, 0, 1)
# 1.35 with absolute error < 1.5e-14
microbenchmark(integrate(h, 0, 1), estimate_to(3), times=10)
# Unit: microseconds
#                expr        min         lq     median         uq        max neval
#  integrate(h, 0, 1)     14.456     17.769     42.918     54.514     83.125    10
#      estimate_to(3) 151980.781 159830.956 162290.668 167197.742 174881.066    10
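For reference, the same vectorized hit-count idea carries over directly to NumPy (my own sketch, not from the answers above):

import numpy as np

def h(x):
    return 1 + x**9 + x**3

rng = np.random.default_rng(0)
n = 10**6
xs = rng.uniform(0.0, 1.0, n)   # sample x uniformly on (0, 1)
ys = rng.uniform(0.0, 3.0, n)   # sample y uniformly on (0, 3), the bounding box height
print(3 * np.mean(ys <= h(xs)))  # close to the exact area, 1.35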
