SciPy: way to speed up a complicated integral - performance

I have a very complex integral to calculate:
from __future__ import division
from scipy.integrate import quad, nquad
import numpy as np

alpha = np.array([0.298073, 1.242567, 5.782948, 38.474970])
trial = np.array([0.08704173, 0.52509737, 0.51920929, 0.31233737])

class EigenvalueProblem:
    def __init__(self, a, t):
        self.alpha = a
        self.trial = t

    # Hamiltonian, interaction part
    def hartree_integrand(self, coeff):
        def hartree_potential(rr2):
            return np.array([coeff[ii] * coeff[jj] *
                             np.exp(-(self.alpha[ii] +
                                      self.alpha[jj]) * rr2 ** 2)
                             for ii in range(0, 4)
                             for jj in range(0, 4)]).sum()

        def length(theta, rr1, rr2):
            return 1 / np.sqrt(rr1 ** 2 + rr2 ** 2 -
                               2 * rr1 * rr2 * np.cos(theta))

        def tmp(theta, rr1, rr2):
            return 8 * np.pi ** 2 * rr1 ** 2 * rr2 ** 2 * \
                np.sin(theta) * hartree_potential(rr2) * \
                length(theta, rr1, rr2)

        def integrand(ii, jj, theta, rr1, rr2):
            return np.exp(-(self.alpha[ii] + self.alpha[jj]) * rr1 ** 2) * \
                tmp(theta, rr1, rr2)

        return [
            nquad(lambda theta, rr1, rr2: integrand(i, j, theta, rr1, rr2),
                  [[0, np.pi], [0, np.inf], [0, np.inf]])
            for i in range(0, 4) for j in range(0, 4)]

hat = EigenvalueProblem(alpha, trial)
print(hat.hartree_integrand(trial))
Mathematically, what I want to calculate looks like this (which is the integrand function), with parameters here. However, computing this integral takes more than several hours. Is there any way to speed it up? Thank you very much!

You ought first to extend the limits of integration over r1 and r2 to run from -Infinity to +Infinity (extend the limits, multiply by 1/2 * 1/2, etc.).
Second, switch to Gauss-Hermite quadrature, which is exactly suited to integrating functions with exp(-x^2) kernels.
Appropriate code is in NumPy (numpy.polynomial.hermite.hermgauss); see the references therein.
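For a flavor of how that works, here is a minimal sketch (assuming NumPy's standard numpy.polynomial.hermite.hermgauss interface; the substitution u = sqrt(a)*r handles the exp(-a*r^2) kernels that appear in your integrand, one per radial variable):

import numpy as np

# Gauss-Hermite nodes and weights approximate
#   integral f(x) * exp(-x**2) dx over (-inf, inf)  ~=  sum(w * f(x))
x, w = np.polynomial.hermite.hermgauss(64)

# Sanity check: integral of x**2 * exp(-x**2) dx equals sqrt(pi)/2
print(np.sum(w * x**2), np.sqrt(np.pi) / 2)

# For a kernel exp(-a * r**2), substitute u = sqrt(a) * r:
#   integral f(r) * exp(-a*r**2) dr = (1/sqrt(a)) * sum(w * f(x/sqrt(a)))
a = 0.5                 # stands in for alpha[i] + alpha[j]
f = lambda r: r ** 2
print(np.sum(w * f(x / np.sqrt(a))) / np.sqrt(a),  # quadrature
      np.sqrt(np.pi) / (2 * a ** 1.5))             # exact value

The angular theta integral has no Gaussian factor, so it would stay with an ordinary rule (e.g. Gauss-Legendre via np.polynomial.legendre.leggauss).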

Related

Monte Carlo program throws a method error in Julia

I am running this code but it shows a method error. Can someone please help me?
Code:
function lsmc_am_put(S, K, r, σ, t, N, P)
    Δt = t / N
    R = exp(r * Δt)
    T = typeof(S * exp(-σ^2 * Δt / 2 + σ * √Δt * 0.1) / R)
    X = Array{T}(N+1, P)
    for p = 1:P
        X[1, p] = x = S
        for n = 1:N
            x *= R * exp(-σ^2 * Δt / 2 + σ * √Δt * randn())
            X[n+1, p] = x
        end
    end
    V = [max(K - x, 0) / R for x in X[N+1, :]]
    for n = N-1:-1:1
        I = V .!= 0
        A = [x^d for d = 0:3, x in X[n+1, :]]
        β = A[:, I]' \ V[I]
        cV = A' * β
        for p = 1:P
            ev = max(K - X[n+1, p], 0)
            if I[p] && cV[p] < ev
                V[p] = ev / R
            else
                V[p] /= R
            end
        end
    end
    return max(mean(V), K - S)
end
lsmc_am_put(100, 90, 0.05, 0.3, 180/365, 1000, 10000)
error:
MethodError: no method matching (Array{Float64})(::Int64, ::Int64)
Closest candidates are:
(Array{T})(::LinearAlgebra.UniformScaling, ::Integer, ::Integer) where T at /Volumes/Julia-1.8.3/Julia-1.8.app/Contents/Resources/julia/share/julia/stdlib/v1.8/LinearAlgebra/src/uniformscaling.jl:508
(Array{T})(::Nothing, ::Any...) where T at baseext.jl:45
(Array{T})(::UndefInitializer, ::Int64) where T at boot.jl:473
...
Stacktrace:
 [1] lsmc_am_put(S::Int64, K::Int64, r::Float64, σ::Float64, t::Float64, N::Int64, P::Int64)
   @ Main ./REPL[39]:5
 [2] top-level scope
   @ REPL[40]:1
I tried this code expecting a numeric answer, but this error came up. I tried to look it up on Google but found nothing that matches my situation.
The error occurs where you wrote X = Array{T}(N+1, P). Instead, use one of the following approaches if you need a Vector:
julia> Array{Float64, 1}([1,2,3])
3-element Vector{Float64}:
1.0
2.0
3.0
julia> Vector{Float64}([1, 2, 3])
3-element Vector{Float64}:
1.0
2.0
3.0
And in your case, that would be X = Array{T,1}([N+1, P]) or X = Vector{T}([N+1, P]). But since your code contains the expression X[1, p] = x = S, I guess you meant to initialize a 2D array and update its elements as the algorithm runs. For this, you can define X like the following:
X = zeros(Float64, N+1, P)
# Or
X = Array{Float64, 2}(undef, N+1, P)
So, I tried the following in your code:
# I just changed the definition of `X` in your code like the following
X = Array{T, 2}(undef, N+1, P)
# And the result of the code was:
julia> lsmc_am_put(100, 90, 0.05, 0.3, 180/365, 1000, 10000)
3.329213731484463
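As background: Array{T}(N+1, P) was the old (pre-1.0) constructor syntax; since Julia 1.0, an uninitialized array must be spelled Array{T}(undef, N+1, P), which is exactly what the fix above does. Also note that in Julia 1.x, mean lives in the Statistics standard library, so running the function in a fresh session requires using Statistics at the top of the script.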

Why does the code terminate with a "Solution Not Found" error and "EXIT: Converged to a point of local infeasibility. Problem may be infeasible"?

I cannot seem to figure out why IPOPT cannot find a solution to this. Initially, I thought the problem was totally infeasible, but when I reduce the value of col_total to any number below 161000 or comment out the last constraint equation that contains col_total, it solves and EXITs with an Optimal Solution Found and a final objective function value of -161775.256826753. I have solved the same maximization problem using Artificial Bee Colony and Particle Swarm Optimization techniques, and they return optimal objective function values of at least 225000 and 226000, respectively. Could it be that another solver is required? I have also tried APOPT, BPOPT, and IPOPT and have tinkered with the tolerance values, but no combination seems to work just yet. The code is posted below. Any guidance will be hugely appreciated.
from gekko import GEKKO
import numpy as np

distances = np.array([[[0, 0],[0,0],[0,0],[0,0]],
                      [[155,0],[0,0],[0,0],[0,0]],
                      [[310,0],[155,0],[0,0],[0,0]],
                      [[465,0],[310,0],[155,0],[0,0]],
                      [[620,0],[465,0],[310,0],[155,0]]])

alpha = 0.5 / np.log(30/0.075)
diam = 31
free = 7
rho = 1.2253
area = np.pi * (diam / 2)**2
min_v = 5.5
axi_max = 0.32485226746
col_total = 176542.96546512868
rat = 14
nn = 5
u_hub_lowerbound = 5.777777777777778
c_pow = 0.59230249
p_max = 0.5 * rho * area * c_pow * free**3

# Initialize Model
m = GEKKO(remote=True)

# Initialize variables, set lower and upper bounds
x = [m.Var(value=0.03902278, lb=0, ub=axi_max)
     for i in range(nn)]

# i = 0
b = 1
c = 0
v_s = list()
for i in range(nn-1):  # Loop runs for nn-1 times
    # print(i)
    # print(i, b, c)
    squared_defs = list()
    while i < b:
        d = distances[b][c][0]
        r = distances[b][c][1]
        ss = (2 * (alpha * d) / diam)
        tt = r / ((diam/2) + (alpha * d))
        squared_defs.append((2 * x[i] / (1 + ss**2)) * np.exp(-(tt**2)) ** 2)
        i += 1
        c += 1
    # Equations
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - rat <= 0)
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - u_hub_lowerbound >= 0)
    v_s.append(free * (1 - (sum(squared_defs))**0.5))
    squared_defs.clear()
    b += 1
    c = 0

# Inserts free as the first item on the v_s list to
# increase len(v_s) to nn, so that 'v_s' and 'x'
# are of same length
v_s.insert(0, free)

gamma = list()
for i in range(len(x)):
    bet = (4*x[i]*((1-x[i])**2) * rho * area) / 2
    gam = bet * v_s[i]**3
    gamma.append(gam)
    # Equations
    m.Equation(x[i] - axi_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2)
                * v_s[i]**3) - p_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2)
                * v_s[i]**3) > 0)

# Equation
m.Equation(col_total - sum(gamma) <= 0)

# Objective
y = sum(gamma)
m.Maximize(y)  # Maximize

# Set global options
m.options.IMODE = 3  # steady state optimization

# Solve simulation
m.options.SOLVER = 3
m.solver_options = ['linear_solver ma27', 'mu_strategy adaptive',
                    'max_iter 2500', 'tol 1.0e-5']
m.solve()
Build the equations without .value in the expressions. The x[i].value is only needed at the end to view the solution after the solve is complete, or to initialize the value of x[i]. The expression m.Maximize(y) is more readable than m.Obj(-y), although they are equivalent. (Note that the script below also uses m.exp() instead of np.exp() inside the constraint expressions, and smaller axi_max, col_total, and p_max values so that the col_total constraint is attainable.)
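As a minimal illustration of the .value point, consider a hypothetical toy model (separate from the full script below):

from gekko import GEKKO
m = GEKKO(remote=False)
v = m.Var(value=0.5, lb=0, ub=1)

# Wrong: v.value is just the initial guess (a plain number), so this
# would hand the solver a constant instead of a function of v:
# m.Maximize(v.value * (1 - v.value))

# Right: build the expression from the variable itself, and read
# v.value only after m.solve() to inspect the result.
m.Maximize(v * (1 - v))
m.solve(disp=False)
print(v.value[0])  # ~0.5, the maximizer of v*(1-v)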
from gekko import GEKKO
import numpy as np

distances = np.array([[[0, 0],[0,0],[0,0],[0,0]],
                      [[155,0],[0,0],[0,0],[0,0]],
                      [[310,0],[155,0],[0,0],[0,0]],
                      [[465,0],[310,0],[155,0],[0,0]],
                      [[620,0],[465,0],[310,0],[155,0]]])

alpha = 0.5 / np.log(30/0.075)
diam = 31
free = 7
rho = 1.2253
area = np.pi * (diam / 2)**2
min_v = 5.5
axi_max = 0.069262150781
col_total = 20000
p_max = 4000
rat = 14
nn = 5

# Initialize Model
m = GEKKO(remote=True)

# Initialize variables, set lower and upper bounds
x = [m.Var(value=0.03902278, lb=0, ub=axi_max)
     for i in range(nn)]

i = 0
b = 1
c = 0
v_s = list()
for turbs in range(nn-1):  # Loop runs for nn-1 times
    squared_defs = list()
    while i < b:
        d = distances[b][c][0]
        r = distances[b][c][1]
        ss = (2 * (alpha * d) / diam)
        tt = r / ((diam/2) + (alpha * d))
        squared_defs.append((2 * x[i] / (1 + ss**2))
                            * m.exp(-(tt**2)) ** 2)
        i += 1
        c += 1
    # Equations
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - rat <= 0)
    m.Equation(min_v - (free * (1 - (sum(squared_defs))**0.5)) <= 0)
    v_s.append(free * (1 - (sum(squared_defs))**0.5))
    squared_defs.clear()
    b += 1
    a = 0
    c = 0

# Inserts free as the first item on the v_s list to
# increase len(v_s) to nn, so that 'v_s' and 'x'
# are of same length
v_s.insert(0, free)

beta = list()
gamma = list()
for i in range(len(x)):
    bet = (4*x[i]*((1-x[i])**2) * rho * area) / 2
    gam = bet * v_s[i]**3
    # Equations
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2)
                * v_s[i]**3) - p_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2)
                * v_s[i]**3) > 0)
    gamma.append(gam)

# Equation
m.Equation(col_total - sum(gamma) <= 0)

# Objective
y = sum(gamma)
m.Maximize(y)  # Maximize

# Set global options
m.options.IMODE = 3  # steady state optimization

# Solve simulation
m.options.SOLVER = 3
m.solve()
This gives a successful solution with maximized objective 20,000:
Number of Iterations....: 12
(scaled) (unscaled)
Objective...............: -4.7394814741924645e+00 -1.9999999999929641e+04
Dual infeasibility......: 4.4698510326511536e-07 1.8862194343304290e-03
Constraint violation....: 3.8275766582203308e-11 1.2941979026166479e-07
Complementarity.........: 2.1543608536533588e-09 9.0911246952931704e-06
Overall NLP error.......: 4.6245685940749926e-10 1.8862194343304290e-03
Number of objective function evaluations = 80
Number of objective gradient evaluations = 13
Number of equality constraint evaluations = 80
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 13
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 12
Total CPU secs in IPOPT (w/o function evaluations) = 0.010
Total CPU secs in NLP function evaluations = 0.011
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is -19999.9999999296
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 3.210000000399305E-002 sec
Objective : -19999.9999999296
Successful solution
---------------------------------------------------

Finding the continued fraction of 2^(1/3) to very high precision

Here I'll use the notation [a0; a1, a2, ...] for the continued fraction whose quotients are a0, a1, a2, ...
It is possible to find the continued fraction of a number by computing it and then applying the definition, but that requires at least O(n) bits of memory to find a0, a1, ..., an; in practice it is much worse. Using double floating-point precision it is only possible to find a0, a1, ..., a19.
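(For concreteness, a minimal sketch of that naive approach, with a hypothetical helper cf_naive; as noted, the quotients degrade after roughly a19:)

import math

def cf_naive(x, N):
    # Textbook definition: peel off the integer part, then take the
    # reciprocal of the fractional part, repeatedly.
    quotients = []
    for _ in range(N):
        q = math.floor(x)
        quotients.append(q)
        frac = x - q
        if frac == 0:
            break
        x = 1.0 / frac
    return quotients

print(cf_naive(2 ** (1 / 3), 19))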
An alternative is to use the fact that if a, b, c are rational numbers, then there exist unique rationals x, y, z such that 1/(a + b*2^(1/3) + c*2^(2/3)) = x + y*2^(1/3) + z*2^(2/3), namely x = (a^2 - 2bc)/N, y = (2c^2 - ab)/N, z = (b^2 - ac)/N, where N = a^3 + 2b^3 + 4c^3 - 6abc.
So if I represent x, y, and z to absolute precision using the Boost rational library, I can obtain floor(x + y*2^(1/3) + z*2^(2/3)) accurately using only double precision for 2^(1/3) and 2^(2/3), because I only need the result to be within 1/2 of the true value. Unfortunately the numerators and denominators of x, y, and z grow considerably fast, and if you use regular floats instead, the errors pile up quickly.
This way I was able to compute a0, a1, ..., a10000 in under an hour, but somehow Mathematica can do it in 2 seconds. Here's my code for reference:
#include <iostream>
#include <cstdint>
#include <boost/multiprecision/cpp_int.hpp>

namespace mp = boost::multiprecision;

int main()
{
    const double t_1 = 1.259921049894873164767210607278228350570251;
    const double t_2 = 1.587401051968199474751705639272308260391493;
    mp::cpp_rational p = 0;
    mp::cpp_rational q = 1;
    mp::cpp_rational r = 0;
    for (unsigned int i = 1; i != 10001; ++i) {
        double p_f = static_cast<double>(p);
        double q_f = static_cast<double>(q);
        double r_f = static_cast<double>(r);
        uint64_t floor = p_f + t_1 * q_f + t_2 * r_f;
        std::cout << floor << ", ";
        p -= floor;
        //std::cout << floor << " " << p << " " << q << " " << r << std::endl;
        mp::cpp_rational den = (p * p * p + 2 * q * q * q +
                                4 * r * r * r - 6 * p * q * r);
        mp::cpp_rational a = (p * p - 2 * q * r) / den;
        mp::cpp_rational b = (2 * r * r - p * q) / den;
        mp::cpp_rational c = (q * q - p * r) / den;
        p = a;
        q = b;
        r = c;
    }
    return 0;
}
The Lagrange algorithm
The algorithm is described, for example, in Knuth's book The Art of Computer Programming, Vol. 2 (Ex. 13 in section 4.5.3, Analysis of Euclid's Algorithm, p. 375 in the 3rd edition).
Let f be a polynomial with integer coefficients whose only real root is an irrational number x0 > 1. Then the Lagrange algorithm calculates the consecutive quotients of the continued fraction of x0.
I implemented it in Python:
def cf(a, N=10):
    """
    a : list - coefficients of the polynomial,
        i.e. f(x) = a[0] + a[1]*x + ... + a[n]*x^n
    N : number of quotients to output
    """
    # Degree of the polynomial
    n = len(a) - 1
    # List of consecutive quotients
    ans = []

    def shift_poly():
        """
        Replaces polynomial f(x) with f(x+1) (shifts its graph to the left).
        """
        for k in range(n):
            for j in range(n - 1, k - 1, -1):
                a[j] += a[j+1]

    for _ in range(N):
        quotient = 1
        shift_poly()
        # While the root is > 1, shift it left
        while sum(a) < 0:
            quotient += 1
            shift_poly()
        # Otherwise, we have the next quotient
        ans.append(quotient)
        # Replace polynomial f(x) with -x^n * f(1/x)
        a.reverse()
        a = [-x for x in a]
    return ans
It takes about 1s on my computer to run cf([-2, 0, 0, 1], 10000). (The coefficients correspond to the polynomial x^3 - 2 whose only real root is 2^(1/3).) The output agrees with the one from Wolfram Alpha.
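For a quick sanity check, the continued fraction of 2^(1/3) begins [1; 3, 1, 5, 1, 1, 4, 1, 1, 8, ...], so a short run should reproduce those quotients:

cf([-2, 0, 0, 1], 10)  # -> [1, 3, 1, 5, 1, 1, 4, 1, 1, 8]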
Caveat
The coefficients of the polynomials evaluated inside the function quickly become quite large integers, so this approach needs a bigint implementation in other languages. (Pure Python 3 handles it natively, but NumPy, for example, does not.)
You might have more luck computing 2^(1/3) to high accuracy and then trying to derive the continued fraction from that, using interval arithmetic to determine whether the accuracy is sufficient.
Here's my stab at this in Python, using Halley iteration to compute 2^(1/3) in fixed point. The dead code is an attempt to compute fixed-point reciprocals more efficiently than Python's built-in division via Newton iteration -- no dice.
Timing on my machine is about thirty seconds, spent mostly extracting the continued fraction from the fixed-point representation.
prec = 40000
a = 1 << (3 * prec + 1)
two_a = a << 1
x = 5 << (prec - 2)
while True:
    x_cubed = x * x * x
    two_x_cubed = x_cubed << 1
    x_prime = x * (x_cubed + two_a) // (two_x_cubed + a)
    if -1 <= x_prime - x <= 1:
        break
    x = x_prime

cf = []
four_to_the_prec = 1 << (2 * prec)
for i in range(10000):
    q = x >> prec
    r = x - (q << prec)
    cf.append(q)
    if True:
        x = four_to_the_prec // r
    else:
        x = 1 << (2 * prec - r.bit_length())
        while True:
            delta_x = (x * ((four_to_the_prec - r * x) >> prec)) >> prec
            if not delta_x:
                break
            x += delta_x
print(cf)

What is the easiest to implement linear regression algorithm?

I want to implement single-variable regression using ordinary least squares. I have no access to linear algebra or calculus libraries, so any matrix operations or differentiation methods need to be implemented by me. What is the least complex method?
John D. Cook has an excellent post on the subject with a simple C++ implementation. His implementation uses constant memory and can be parallelized with little effort.
I wrote a simple Python version of it. Use with caution, there may be bugs:
class Regression:
    def __init__(self):
        self.n = 0.0
        self.sXY = 0.0
        self.xM1 = 0.0
        self.xM2 = 0.0
        self.yM1 = 0.0
        self.yM2 = 0.0

    def add(self, x, y):
        self.sXY += (self.xM1 - x) * (self.yM1 - y) * self.n / (self.n + 1.0)
        n1 = self.n
        self.n += 1
        xdelta = x - self.xM1
        xdelta_n = xdelta / self.n
        self.xM1 += xdelta_n
        self.xM2 += xdelta * xdelta_n * n1
        ydelta = y - self.yM1
        ydelta_n = ydelta / self.n
        self.yM1 += ydelta_n
        self.yM2 += ydelta * ydelta_n * n1

    def count(self):
        return self.n

    def slope(self):
        return self.sXY / self.xM2

    def intercept(self):
        return self.yM1 - (self.sXY / self.xM2) * self.xM1

    def correlation(self):
        return self.sXY / (self.xM2**0.5 * self.yM2**0.5)

    def covariance(self):
        return self.sXY / self.n

r = Regression()
r.add(1, 2)
r.add(4, 9)
r.add(16, 17)
r.add(17, 13)
r.add(21, 11)
print('Count:', r.count())
print('Slope:', r.slope())
print('Intercept:', r.intercept())
print('Correlation:', r.correlation())
print('Covariance:', r.covariance())
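As a cross-check, the closed-form two-pass least-squares sums on the same five points should give matching numbers (a minimal sketch, independent of the class above):

xs = [1, 4, 16, 17, 21]
ys = [2, 9, 17, 13, 11]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Centered sums: Sxy = sum (x-mx)(y-my), Sxx = sum (x-mx)^2, Syy likewise
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # 148.4
sxx = sum((x - mx) ** 2 for x in xs)                    # 306.8
syy = sum((y - my) ** 2 for y in ys)                    # 123.2

print('Slope:', sxy / sxx)                       # ~0.4837
print('Intercept:', my - (sxy / sxx) * mx)       # ~4.6923
print('Correlation:', sxy / (sxx * syy) ** 0.5)  # ~0.7633
print('Covariance:', sxy / n)                    # 29.68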

Scipy - A better way to avoid a manual loop when the matrix is sparse

Logistic regression's objective function is

f(w) = (1/2) * w^T w + C * sum_i log(1 + exp(-y_i * w^T x_i))

and the gradient is

grad f(w) = w - C * sum_i (y_i / (1 + exp(y_i * w^T x_i))) * x_i

where w is a scipy csr sparse matrix with dim n-by-1.
My question is: I have one scipy csr sparse matrix and one numpy array, X_train and y_train respectively (each row of X_train is an x_i, each element of y_train is a y_i). Is there a better way to calculate the gradient without using a manual for loop?
For further information, I'm implementing large-scale logistic regression, so performance is important.
Thanks.
Update 5/19 (adding my current code)
Thanks to @Jaime's reminder; here is my code. I basically want to see whether there is a better way to implement gradient(X, y, w).
import numpy as np
import scipy as sp
from sklearn import datasets
from numpy.linalg import norm
from scipy import sparse

eta = 0.01
xi = 0.1
C = 1

X_train, y_train = datasets.load_svmlight_file('lr/datasets/a9a')
X_test, y_test = datasets.load_svmlight_file('lr/datasets/a9a.t',
                                             n_features=X_train.shape[1])

def gradient(X, y, w):
    # w should be a col vector
    summation = w
    for i in range(X.shape[0]):
        exp_i = np.exp(y[i] * X.getrow(i).dot(w)[0, 0])
        summation = summation - (y[i] / (1 + exp_i)) * X.getrow(i).T
    return summation

def hes_mul(X, D, s):
    # w and s should be col vectors
    # should return a col vector
    return s + C * X.T.dot(D.dot(X.dot(s)))

def cg(X, y, w):
    # gradF is a col vector, so all of these are col vectors
    gradF = gradient(X, y, w)
    s = sparse.csr_matrix(np.zeros(X_train.shape[1])).T
    r = -1 * gradF
    d = r
    D = []
    for i in range(X.shape[0]):
        exp_i = np.exp((-1) * y[i] * w.T.dot(X.getrow(i).T)[0, 0])
        D.append(exp_i / ((1 + exp_i) ** 2))
    D = sparse.diags(D, 0)
    while True:
        r_norm = np.sqrt((r.data ** 2).sum())
        print(r_norm)
        print(np.sqrt((gradF.data ** 2).sum()))
        if r_norm <= xi * np.sqrt((gradF.data ** 2).sum()):
            return s
        hes_mul_d = hes_mul(X, D, d)
        alpha = (r_norm ** 2) / d.T.dot(hes_mul_d)[0, 0]
        s = s + alpha * d
        r = r - alpha * hes_mul_d
        beta = (r.data ** 2).sum() / (r_norm ** 2)
        d = r + beta * d

w = sparse.csr_matrix(np.zeros(X_train.shape[1])).T
s = cg(X_train, y_train, w)
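For what it's worth, the loop in gradient(X, y, w) can be collapsed into sparse matrix-vector products. A minimal sketch (assuming the formulation above, with w handled as a dense 1-D NumPy array rather than a sparse column; hypothetical helper gradient_vec, untested against the original):

def gradient_vec(X, y, w, C=1.0):
    # X: csr_matrix (m x n), y: 1-D array (m,), w: dense 1-D array (n,)
    margins = y * X.dot(w)               # y_i * (x_i . w) for all rows at once
    coef = y / (1.0 + np.exp(margins))   # per-sample scalar weight
    return w - C * X.T.dot(coef)         # w - C * sum_i coef_i * x_i

The same trick applies to building D: the per-row exponentials are just np.exp(-y * X.dot(w)) computed in one shot.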
