Random walk algorithm for pricing barrier options

How can I implement this random walk algorithm in MATLAB? I don't understand the algorithm, because condition (3) never seems to hold. My code is posted at the end.
Choose a time step h > 0 so that M = 9/h is an integer.
Set log(L_0) = L_0 = 0.13 and Z_0 = 0.
The rho_k are independent random variables with P(rho_k = +1) = P(rho_k = -1) = 0.5.
1) log(L_{k+1}) = log(L_k)-0.5*sigma^2*h+sigma*sqrt(h)*rho_{k+1}
2) Z_{k+1} = Z_k - sigma* [(erfc((2*2^(1/2)*(log((25*exp(log L_k))/7) + 9/32))/3)/2 - erfc((2*2^(1/2)*(log(100*exp(log L_k)) + 9/32))/3)/2 + erfc((2*2^(1/2)*(log(7/(25*exp(log L_k))) - 9/32))/3)/56 - erfc((2*2^(1/2)*(log(196/(25*exp(log L_k))) - 9/32))/3)/56 + (exp(log L_k)*((2*2^(1/2)*exp(-(8*(log(7/(25*exp(log L_k))) - 9/32)^2)/9))/(3*exp(log L_k)*pi^(1/2)) - (2*2^(1/2)*exp(-(8*(log(196/(25*exp(log L_k))) - 9/32)^2)/9))/(3*exp(log L_k)*pi^(1/2))))/28 - exp(log L_k)*((2*2^(1/2)*exp(-(8*(log((25*exp(log L_k))/7) + 9/32)^2)/9))/(3*exp(log L_k)*pi^(1/2)) - (2*2^(1/2)*exp(-(8*(log(100*exp(log L_k)) + 9/32)^2)/9))/(3*exp(log L_k)*pi^(1/2))) - (14*2^(1/2)*exp(-(8*(log(7/(25*exp(log L_k))) + 9/32)^2)/9))/(75*exp(log L_k)*pi^(1/2)) + (14*2^(1/2)*exp(-(8*(log(196/(25*exp(log L_k))) + 9/32)^2)/9))/(75*exp(log L_k)*pi^(1/2)) + (2^(1/2)*exp(-(8*(log((25*exp(log L_k))/7) - 9/32)^2)/9))/(150*exp(log L_k)*pi^(1/2)) - (2^(1/2)*exp(-(8*(log(100*exp(log L_k)) - 9/32)^2)/9))/(150*exp(log L_k)*pi^(1/2))))]*sqrt(h)*rho_{k+1};
3) log L_k < log(H)+1/2*sigma^2*h-sigma*sqrt(h)
H=0.28
sigma=0.25
K=0.01
To realize the algorithm we follow the random walk generated by (1), and at each time t_k we check whether condition (3) holds. If it does not, L_k has reached the boundary zone and we stop the chain at log(H). If it does, we perform (1)-(2) to find log(L_{k+1}) and Z_{k+1}. If k+1 = M, we stop; otherwise we continue with the algorithm.
The outcome of simulating each trajectory is a point (t_kappa, log(L_kappa), Z_kappa).
Evaluate the expectation E[(exp(log(L_kappa)) - K)^+ * Chi(kappa = M) + Z_kappa] = price with the Monte Carlo technique, using 10^6 Monte Carlo runs.
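For reference, here is a minimal structural sketch (in MATLAB) of the whole Monte Carlo procedure described above. It is not a verified implementation: the long bracketed expression from step (2) is abbreviated by the placeholder function zinc, which must be replaced by that expression, and the helper names (nMC, payoff, hitBarrier, zinc) are mine, not from the original algorithm.
sigma = 0.25; H = 0.28; K = 0.01;
h = 0.1; M = 90;                 % M = 9/h must be an integer
nMC = 1e6;                       % number of Monte Carlo runs
zinc = @(y) 0;                   % placeholder: substitute the bracketed expression from (2)
payoff = zeros(nMC,1);
for n = 1:nMC
    y = 0.13;                    % log(L_0)
    Z = 0;
    hitBarrier = false;
    for k = 1:M
        if y >= log(H) + 0.5*sigma^2*h - sigma*sqrt(h)   % condition (3) fails
            y = log(H);          % stop the chain at the barrier
            hitBarrier = true;
            break
        end
        rho = 2*(rand > 0.5) - 1;                        % P(rho = +-1) = 0.5
        Z = Z - sigma*zinc(y)*sqrt(h)*rho;               % step (2), evaluated at the old y
        y = y - 0.5*sigma^2*h + sigma*sqrt(h)*rho;       % step (1)
    end
    if hitBarrier
        payoff(n) = Z;                                   % Chi(kappa = M) = 0
    else
        payoff(n) = max(exp(y) - K, 0) + Z;              % Chi(kappa = M) = 1
    end
end
price = mean(payoff)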
Here is my code. It doesn't work: I don't get anything near 0.0657 (the expected result).
sigma = 0.25;
H = 0.28;
K = 0.01;
h = 0.1;
M = 90;            % M = 9/h steps
Y = zeros(1,M+1);  % Y(k) = log(L_{k-1})
Z = zeros(1,M+1);
Y(1) = 0.13;
Z(1) = 0;
for k = 1:M+1
vec = [-1 1];
index = random('unid', length(vec),1);
x(1,k) = vec(index);
end %'Rho'
for k = 1:M
if Y(k) < log(H) + 1/2*sigma^2*h - sigma*sqrt(h) %condition (3)
Y(k+1) = Y(k) - (1/2)*(sigma)^2*h + sigma*sqrt(h)*x(k+1);
Z(k+1) = Z(k) + (-sigma*(erfc((2*2^(1/2)*(log((25*exp(Y(k)))/7) + 9/32))/3)/2 - erfc((2*2^(1/2)*(log(100*exp(Y(k))) + 9/32))/3)/2 + erfc((2*2^(1/2)*(log(7/(25*exp(Y(k)))) - 9/32))/3)/56 - erfc((2*2^(1/2)*(log(196/(25*exp(Y(k)))) - 9/32))/3)/56 + (exp(Y(k))*((2*2^(1/2)*exp(-(8*(log(7/(25*exp(Y(k)))) - 9/32)^2)/9))/(3*exp(Y(k))*pi^(1/2)) - (2*2^(1/2)*exp(-(8*(log(196/(25*exp(Y(k)))) - 9/32)^2)/9))/(3*exp(Y(k))*pi^(1/2))))/28 - exp(Y(k))*((2*2^(1/2)*exp(-(8*(log((25*exp(Y(k)))/7) + 9/32)^2)/9))/(3*exp(Y(k))*pi^(1/2)) - (2*2^(1/2)*exp(-(8*(log(100*exp(Y(k))) + 9/32)^2)/9))/(3*exp(Y(k))*pi^(1/2))) - (14*2^(1/2)*exp(-(8*(log(7/(25*exp(Y(k)))) + 9/32)^2)/9))/(75*exp(Y(k))*pi^(1/2)) + (14*2^(1/2)*exp(-(8*(log(196/(25*exp(Y(k)))) + 9/32)^2)/9))/(75*exp(Y(k))*pi^(1/2)) + (2^(1/2)*exp(-(8*(log((25*exp(Y(k)))/7) - 9/32)^2)/9))/(150*exp(Y(k))*pi^(1/2)) - (2^(1/2)*exp(-(8*(log(100*exp(Y(k))) - 9/32)^2)/9))/(150*exp(Y(k))*pi^(1/2))))*sqrt(h)*x(k+1);
else
Y(k)=log(H);
break
end
end
% payoff of this single trajectory (cf. the expectation above)
if k == M && Y(k) ~= log(H)   % condition (3) held up to the final step
    payoff = max(exp(Y(M+1)) - K, 0) + Z(M+1)
else                          % the chain was stopped at the barrier
    payoff = Z(k)
end

Related

Ruby algorithms loops codewars

I got stuck on the task below and spent about 3 hours trying to figure it out.
Task description: A man has a rather old car being worth $2000. He saw a secondhand car being worth $8000. He wants to keep his old car until he can buy the secondhand one.
He thinks he can save $1000 each month but the prices of his old car and of the new one decrease of 1.5 percent per month. Furthermore this percent of loss increases by 0.5 percent at the end of every two months. Our man finds it difficult to make all these calculations.
How many months will it take him to save up enough money to buy the car he wants, and how much money will he have left over?
My code so far:
def nbMonths(startPriceOld, startPriceNew, savingperMonth, percentLossByMonth)
  dep_value_old = startPriceOld
  mth_count = 0
  total_savings = 0
  dep_value_new = startPriceNew
  mth_count_new = 0
  while startPriceOld != startPriceNew do
    if startPriceOld >= startPriceNew
      return mth_count = 0, startPriceOld - startPriceNew
    end
    dep_value_new = dep_value_new - (dep_value_new * percentLossByMonth / 100)
    mth_count_new += 1
    if mth_count_new % 2 == 0
      dep_value_new = dep_value_new - (dep_value_new * 0.5) / 100
    end
    dep_value_old = dep_value_old - (dep_value_old * percentLossByMonth / 100)
    mth_count += 1
    total_savings += savingperMonth
    if mth_count % 2 == 0
      dep_value_old = dep_value_old - (dep_value_old * 0.5) / 100
    end
    affordability = total_savings + dep_value_old
    if affordability >= dep_value_new
      return mth_count, affordability - dep_value_new
    end
  end
end
print nbMonths(2000, 8000, 1000, 1.5) # Expected result: [6, 766]
The data are as follows.
op = 2000.0 # current old car value
np = 8000.0 # current new car price
sv = 1000.0 # monthly savings
dr = 0.015  # monthly depreciation, both cars (1.5%)
cr = 0.005  # additional depreciation every two months, both cars (0.5%)
After n >= 0 months the man's (let's call him "Rufus") savings plus the value of his car equal
sv*n + op*(1 - n*dr - (cr + 2*cr + 3*cr +...+ (n/2)*cr))
where n/2 is integer division. As
cr + 2*cr + 3*cr +...+ (n/2)*cr = cr*(1 + 2 +...+ (n/2)) = cr*(1+n/2)*(n/2)
the expression becomes
sv*n + op*(1 - n*dr - cr*(1+(n/2))*(n/2))
Similarly, after n months the cost of the car he wants to purchase will fall to
np * (1 - n*dr - cr*(1+(n/2))*(n/2))
If we set these two expressions equal we obtain the following.
sv*n + op - op*dr*n - op*cr*(n/2) - op*cr*(n/2)**2 =
np - np*dr*n - np*cr*(n/2) - np*cr*(n/2)**2
which reduces to
cr*(np-op)*(n/2)**2 + (sv + dr*(np-op))*n + cr*(np-op)*(n/2) - (np-op) = 0
or
cr*(n/2)**2 + (sv/(np-op) + dr)*n + cr*(n/2) - 1 = 0
If we momentarily treat (n/2) as a float division, this expression reduces to a quadratic.
(cr/4)*n**2 + (sv/(np-op) + dr + cr/2)*n - 1 = 0
i.e., a*n**2 + b*n + c = 0
where
a = cr/4 = 0.005/4 = 0.00125
b = sv/(np-op) + dr + cr/2 = 1000.0/(8000-2000) + 0.015 + 0.005/2 = 0.18417
c = -1
Incidentally, Rufus doesn't have a computer, but he does have an HP 12c calculator his grandfather gave him when he was a kid, which is perfectly adequate for these simple calculations.
The roots are computed as follows.
(-b + Math.sqrt(b**2 - 4*a*c))/(2*a) #=> 5.24
(-b - Math.sqrt(b**2 - 4*a*c))/(2*a) #=> -152.58
It appears that Rufus can purchase the new vehicle (if it's still for sale) in six months. Had we been able to solve the above equation for n/2 using integer division, it might have turned out that Rufus would have had to wait longer. That's because for a given n both cars would have depreciated less (or at least not more), and because the car to be purchased is more expensive than the current car, the difference in values would be greater than that obtained with the float approximation for n/2. We need to check that, however. After n months, Rufus' savings and the value of his beater will equal
sv*n + op*(1 - dr*n - cr*(1+(n/2))*(n/2))
= 1000*n + 2000*(1 - 0.015*n - 0.005*(1+(n/2))*(n/2))
For n = 6 this equals
1000*6 + 2000*(1 - 0.015*6 - 0.005*(1+(6/2))*(6/2))
= 1000*6 + 2000*(1 - 0.015*6 - 0.005*(1+3)*3)
= 1000*6 + 2000*0.85
= 7700
The cost of Rufus' dream car after n months will be
np * (1 - dr*n - cr*(1+(n/2))*(n/2))
= 8000 * (1 - 0.015*n - 0.005*(1+(n/2))*(n/2))
For n=6 this becomes
8000 * (1 - 0.015*6 - 0.005*(1+(6/2))*(6/2))
= 8000*0.85
= 6800
(Notice that the factor 0.85 is the same in both calculations.)
Yes, Rufus will be able to buy the car in 6 months.
def nbMonths(old, new, savings, percent)
  percent = percent.fdiv(100)
  current_savings = 0
  months = 0
  loop do
    break if current_savings + old >= new
    current_savings += savings
    old -= old * percent
    new -= new * percent
    months += 1
    percent += 0.005 if months.odd?
  end
  [months, (current_savings + old - new).round]
end
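For the example in the question, nbMonths(2000, 8000, 1000, 1.5) with this version returns [6, 766], the expected result.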

Filling a matrix using parallel processing in Julia

I'm trying to speed up the solution time for a dynamic programming problem in Julia (v. 0.5.0), via parallel processing. The problem involves choosing the optimal values for every element of a 1073 x 19 matrix at every iteration, until successive matrix differences fall within a tolerance. I thought that, within each iteration, filling in the values for each element of the matrix could be parallelized. However, I'm seeing a huge performance degradation using SharedArray, and I'm wondering if there's a better way to approach parallel processing for this problem.
I construct the arguments for the function below:
est_params = [.788,.288,.0034,.1519,.1615,.0041,.0077,.2,0.005,.7196]
r = 0.015
tau = 0.35
rho =est_params[1]
sigma =est_params[2]
delta = 0.15
gamma =est_params[3]
a_capital =est_params[4]
lambda1 =est_params[5]
lambda2 =est_params[6]
s =est_params[7]
theta =est_params[8]
mu =est_params[9]
p_bar_k_ss =est_params[10]
beta = (1+r)^(-1)
sigma_range = 4
gz = 19
gp = 29
gk = 37
lnz=collect(linspace(-sigma_range*sigma,sigma_range*sigma,gz))
z=exp(lnz)
gk_m = fld(gk,2)
# Need to add mu somewhere to k_ss
k_ss = (theta*(1-tau)/(r+delta))^(1/(1-theta))
k=cat(1,map(i->k_ss*((1-delta)^i),collect(1:gk_m)),map(i->k_ss/((1-delta)^i),collect(1:gk_m)))
insert!(k,gk_m+1,k_ss)
sort!(k)
p_bar=p_bar_k_ss*k_ss
p = collect(linspace(-p_bar/2,p_bar,gp))
#Tauchen
N = length(z)
Z = zeros(N,1)
Zprob = zeros(Float32,N,N)
Z[N] = lnz[length(z)]
Z[1] = lnz[1]
zstep = (Z[N] - Z[1]) / (N - 1)
for i=2:(N-1)
Z[i] = Z[1] + zstep * (i - 1)
end
for a = 1 : N
for b = 1 : N
if b == 1
Zprob[a,b] = 0.5*erfc(-((Z[1] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2))
elseif b == N
Zprob[a,b] = 1 - 0.5*erfc(-((Z[N] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
else
Zprob[a,b] = 0.5*erfc(-((Z[b] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2)) -
0.5*erfc(-((Z[b] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
end
end
end
# Collecting tauchen results in a 2 element array of linspace and array; [2] gets array
# Zprob=collect(tauchen(gz, rho, sigma, mu, sigma_range))[2]
Zcumprob=zeros(Float32,gz,gz)
# 2 in cumsum! denotes the 2nd dimension, i.e. columns
cumsum!(Zcumprob, Zprob,2)
gm = gk * gp
control=zeros(gm,2)
for i=1:gk
control[(1+gp*(i-1)):(gp*i),1]=fill(k[i],(gp,1))
control[(1+gp*(i-1)):(gp*i),2]=p
end
endog=copy(control)
E=Array(Float32,gm,gm,gz)
for h=1:gm
for m=1:gm
for j=1:gz
# set the nonzero net debt indicator
if endog[h,2]<0
p_ind=1
else
p_ind=0
end
# set the investment indicator
if (control[m,1]-(1-delta)*endog[h,1])!=0
i_ind=1
else
i_ind=0
end
E[m,h,j] = (1-tau)*z[j]*(endog[h,1]^theta) + control[m,2]-endog[h,2]*(1+r*(1-tau)) +
delta*endog[h,1]*tau-(control[m,1]-(1-delta)*endog[h,1]) -
(i_ind*gamma*endog[h,1]+endog[h,1]*(a_capital/2)*(((control[m,1]-(1-delta)*endog[h,1])/endog[h,1])^2)) +
s*endog[h,2]*p_ind
elem = E[m,h,j]
if E[m,h,j]<0
E[m,h,j]=elem+lambda1*elem-.5*lambda2*elem^2
else
E[m,h,j]=elem
end
end
end
end
I then constructed the function with serial processing. The two for loops iterate through each element to find the largest value in a 1073-element array (gm, the scalar argument passed to the function):
function dynam_serial(E,gm,gz,beta,Zprob)
v = Array(Float32,gm,gz )
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
Tv = Array(Float32,gm,gz)
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1 # arbitrary initial value greater than convcrit
while diff>convcrit
exp_v=v*Zprob'
for h=1:gm
for j=1:gz
Tv[h,j]=findmax(E[:,h,j] + beta*exp_v[:,j])[1]
end
end
diff = maxabs(Tv - v)
v=copy(Tv)
end
end
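(For reference, as I read this code, each pass of the while loop performs the value-function update Tv[h,j] = max over m of ( E[m,h,j] + beta * sum over j' of Zprob[j,j'] * v[m,j'] ); exp_v = v*Zprob' precomputes the inner sum, and the iteration stops once maxabs(Tv - v) falls below convcrit.)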
Timing this, I get:
@time dynam_serial(E,gm,gz,beta,Zprob)
> 106.880008 seconds (91.70 M allocations: 203.233 GB, 15.22% gc time)
Now, I try using Shared Arrays to benefit from parallel processing. Note that I reconfigured the iteration so that I only have one for loop, rather than two. I also use v=deepcopy(Tv); otherwise, v is copied as an Array object, rather than a SharedArray:
function dynam_parallel(E,gm,gz,beta,Zprob)
v = SharedArray(Float32,(gm,gz),init = S -> S[Base.localindexes(S)] = myid() )
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1 # arbitrary initial value greater than convcrit
while diff>convcrit
exp_v=v*Zprob'
Tv = SharedArray(Float32,gm,gz,init = S -> S[Base.localindexes(S)] = myid() )
@sync @parallel for hj=1:(gm*gz)
    j=cld(hj,gm)
    h=mod(hj,gm)
    if h==0;h=gm;end;
    @async Tv[h,j]=findmax(E[:,h,j] + beta*exp_v[:,j])[1]
end
diff = maxabs(Tv - v)
v=deepcopy(Tv)
end
end
Timing the parallel version, using a 4-core 2.5 GHz i7 processor with 16 GB of memory, I get:
addprocs(3)
@time dynam_parallel(E,gm,gz,beta,Zprob)
> 164.237208 seconds (2.64 M allocations: 201.812 MB, 0.04% gc time)
Am I doing something incorrect here? Or is there a better way to approach parallel processing in Julia for this particular problem? I've considered using Distributed Arrays, but it's difficult for me to see how to apply them to the present problem.
UPDATE:
Per @DanGetz and his helpful comments, I turned instead to trying to speed up the serial processing version. I was able to get performance down to 53.469780 seconds (67.36 M allocations: 103.419 GiB, 19.12% gc time) through:
1) Upgrading to 0.6.0 (saved about 25 seconds), which includes the helpful @views macro.
2) Preallocating the main array I'm trying to fill in (Tv), per the section on Preallocating Outputs in the Julia Performance Tips: https://docs.julialang.org/en/latest/manual/performance-tips/. (saved another 25 or so seconds)
The biggest remaining slow-down seems to be coming from the add_vecs function, which sums together subarrays of two larger matrices. I've tried devectorizing and using BLAS functions, but haven't been able to produce better performance.
In any event, the improved code for dynam_serial is below:
function add_vecs(r::Array{Float32},h::Int,j::Int,E::Array{Float32},exp_v::Array{Float32},beta::Float32)
    @views r=E[:,h,j] + beta*exp_v[:,j]
    return r
end
function dynam_serial(E::Array{Float32},gm::Int,gz::Int,beta::Float32,Zprob::Array{Float32})
v = Array{Float32}(gm,gz)
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
Tv = Array{Float32}(gm,gz)
r = Array{Float32}(gm)
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1 # arbitrary initial value greater than convcrit
while diff>convcrit
exp_v=v*Zprob'
for h=1:gm
for j=1:gz
@views Tv[h,j]=findmax(add_vecs(r,h,j,E,exp_v,beta))[1]
end
end
diff = maximum(abs,Tv - v)
v=copy(Tv)
end
return Tv
end
If add_vecs seems to be the critical function, writing an explicit for loop could offer more optimization. How does the following version benchmark?
function add_vecs!(r::Array{Float32},h::Int,j::Int,E::Array{Float32},
exp_v::Array{Float32},beta::Float32)
@inbounds for i=1:size(E,1)
r[i]=E[i,h,j] + beta*exp_v[i,j]
end
return r
end
UPDATE
To continue optimizing dynam_serial I have tried to remove more allocations. The result is:
function add_vecs_and_max!(gm::Int,r::Array{Float64},h::Int,j::Int,E::Array{Float64},
exp_v::Array{Float64},beta::Float64)
@inbounds for i=1:gm
r[i] = E[i,h,j]+beta*exp_v[i,j]
end
return findmax(r)[1]
end
function dynam_serial(E::Array{Float64},gm::Int,gz::Int,
beta::Float64,Zprob::Array{Float64})
v = Array{Float64}(gm,gz)
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
r = Array{Float64}(gm)
exp_v = Array{Float64}(gm,gz)
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1.0 # arbitrary initial value greater than convcrit
while diff>convcrit
A_mul_Bt!(exp_v,v,Zprob)
diff = -Inf
for h=1:gm
for j=1:gz
oldv = v[h,j]
newv = add_vecs_and_max!(gm,r,h,j,E,exp_v,beta)
v[h,j]= newv
diff = max(diff, oldv-newv, newv-oldv)
end
end
end
return v
end
Switching the functions to use Float64 should increase speed (as CPUs are inherently optimized for 64-bit word lengths). Also, using the mutating A_mul_Bt! directly saves another allocation, and the copy(...) is avoided by swapping the arrays v and Tv.
How do these optimizations improve your running time?
2nd UPDATE
Updated the code in the UPDATE section to use findmax. Also, changed dynam_serial to use v without Tv, as there was no need to save the old version except for the diff calculation, which is now done inside the loop.
Here's the code I copied and pasted, provided by Dan Getz above. I include the array and scalar definitions exactly as I ran them. Performance was: 39.507005 seconds (11 allocations: 486.891 KiB) when running @time dynam_serial(E,gm,gz,beta,Zprob).
using SpecialFunctions
est_params = [.788,.288,.0034,.1519,.1615,.0041,.0077,.2,0.005,.7196]
r = 0.015
tau = 0.35
rho =est_params[1]
sigma =est_params[2]
delta = 0.15
gamma =est_params[3]
a_capital =est_params[4]
lambda1 =est_params[5]
lambda2 =est_params[6]
s =est_params[7]
theta =est_params[8]
mu =est_params[9]
p_bar_k_ss =est_params[10]
beta = (1+r)^(-1)
sigma_range = 4
gz = 19 #15 #19
gp = 29 #19 #29
gk = 37 #25 #37
lnz=collect(linspace(-sigma_range*sigma,sigma_range*sigma,gz))
z=exp.(lnz)
gk_m = fld(gk,2)
# Need to add mu somewhere to k_ss
k_ss = (theta*(1-tau)/(r+delta))^(1/(1-theta))
k=cat(1,map(i->k_ss*((1-delta)^i),collect(1:gk_m)),map(i->k_ss/((1-delta)^i),collect(1:gk_m)))
insert!(k,gk_m+1,k_ss)
sort!(k)
p_bar=p_bar_k_ss*k_ss
p = collect(linspace(-p_bar/2,p_bar,gp))
#Tauchen
N = length(z)
Z = zeros(N,1)
Zprob = zeros(Float64,N,N)
Z[N] = lnz[length(z)]
Z[1] = lnz[1]
zstep = (Z[N] - Z[1]) / (N - 1)
for i=2:(N-1)
Z[i] = Z[1] + zstep * (i - 1)
end
for a = 1 : N
for b = 1 : N
if b == 1
Zprob[a,b] = 0.5*erfc(-((Z[1] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2))
elseif b == N
Zprob[a,b] = 1 - 0.5*erfc(-((Z[N] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
else
Zprob[a,b] = 0.5*erfc(-((Z[b] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2)) -
0.5*erfc(-((Z[b] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
end
end
end
# Collecting tauchen results in a 2 element array of linspace and array; [2] gets array
# Zprob=collect(tauchen(gz, rho, sigma, mu, sigma_range))[2]
Zcumprob=zeros(Float64,gz,gz)
# 2 in cumsum! denotes the 2nd dimension, i.e. columns
cumsum!(Zcumprob, Zprob,2)
gm = gk * gp
control=zeros(gm,2)
for i=1:gk
control[(1+gp*(i-1)):(gp*i),1]=fill(k[i],(gp,1))
control[(1+gp*(i-1)):(gp*i),2]=p
end
endog=copy(control)
E=Array(Float64,gm,gm,gz)
for h=1:gm
for m=1:gm
for j=1:gz
# set the nonzero net debt indicator
if endog[h,2]<0
p_ind=1
else
p_ind=0
end
# set the investment indicator
if (control[m,1]-(1-delta)*endog[h,1])!=0
i_ind=1
else
i_ind=0
end
E[m,h,j] = (1-tau)*z[j]*(endog[h,1]^theta) + control[m,2]-endog[h,2]*(1+r*(1-tau)) +
delta*endog[h,1]*tau-(control[m,1]-(1-delta)*endog[h,1]) -
(i_ind*gamma*endog[h,1]+endog[h,1]*(a_capital/2)*(((control[m,1]-(1-delta)*endog[h,1])/endog[h,1])^2)) +
s*endog[h,2]*p_ind
elem = E[m,h,j]
if E[m,h,j]<0
E[m,h,j]=elem+lambda1*elem-.5*lambda2*elem^2
else
E[m,h,j]=elem
end
end
end
end
function add_vecs_and_max!(gm::Int,r::Array{Float64},h::Int,j::Int,E::Array{Float64},
                           exp_v::Array{Float64},beta::Float64)
    maxr = -Inf
    @inbounds for i=1:gm
        r[i] = E[i,h,j]+beta*exp_v[i,j]
        maxr = max(r[i],maxr)
    end
    return maxr
end
function dynam_serial(E::Array{Float64},gm::Int,gz::Int,
beta::Float64,Zprob::Array{Float64})
v = Array{Float64}(gm,gz)
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
Tv = Array{Float64}(gm,gz)
r = Array{Float64}(gm)
exp_v = Array{Float64}(gm,gz)
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1.0 # arbitrary initial value greater than convcrit
while diff>convcrit
A_mul_Bt!(exp_v,v,Zprob)
diff = -Inf
for h=1:gm
for j=1:gz
Tv[h,j]=add_vecs_and_max!(gm,r,h,j,E,exp_v,beta)
diff = max(abs(Tv[h,j]-v[h,j]),diff)
end
end
(v,Tv)=(Tv,v)
end
return v
end
Now, here's another version of the algorithm and inputs. The functions are similar to what Dan Getz suggested, except that I use findmax rather than an iterated max function to find the array maximum. In the input construction, I am using Float32 and mixing different bit types together. However, I've consistently achieved better performance this way: 24.905569 seconds (1.81 k allocations: 46.829 MiB, 0.01% gc time). But it's not clear at all why.
using SpecialFunctions
est_params = [.788,.288,.0034,.1519,.1615,.0041,.0077,.2,0.005,.7196]
r = 0.015
tau = 0.35
rho =est_params[1]
sigma =est_params[2]
delta = 0.15
gamma =est_params[3]
a_capital =est_params[4]
lambda1 =est_params[5]
lambda2 =est_params[6]
s =est_params[7]
theta =est_params[8]
mu =est_params[9]
p_bar_k_ss =est_params[10]
beta = Float32((1+r)^(-1))
sigma_range = 4
gz = 19
gp = 29
gk = 37
lnz=collect(linspace(-sigma_range*sigma,sigma_range*sigma,gz))
z=exp(lnz)
gk_m = fld(gk,2)
# Need to add mu somewhere to k_ss
k_ss = (theta*(1-tau)/(r+delta))^(1/(1-theta))
k=cat(1,map(i->k_ss*((1-delta)^i),collect(1:gk_m)),map(i->k_ss/((1-delta)^i),collect(1:gk_m)))
insert!(k,gk_m+1,k_ss)
sort!(k)
p_bar=p_bar_k_ss*k_ss
p = collect(linspace(-p_bar/2,p_bar,gp))
#Tauchen
N = length(z)
Z = zeros(N,1)
Zprob = zeros(Float32,N,N)
Z[N] = lnz[length(z)]
Z[1] = lnz[1]
zstep = (Z[N] - Z[1]) / (N - 1)
for i=2:(N-1)
Z[i] = Z[1] + zstep * (i - 1)
end
for a = 1 : N
for b = 1 : N
if b == 1
Zprob[a,b] = 0.5*erfc(-((Z[1] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2))
elseif b == N
Zprob[a,b] = 1 - 0.5*erfc(-((Z[N] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
else
Zprob[a,b] = 0.5*erfc(-((Z[b] - mu - rho * Z[a] + zstep / 2) / sigma)/sqrt(2)) -
0.5*erfc(-((Z[b] - mu - rho * Z[a] - zstep / 2) / sigma)/sqrt(2))
end
end
end
# Collecting tauchen results in a 2 element array of linspace and array; [2] gets array
# Zprob=collect(tauchen(gz, rho, sigma, mu, sigma_range))[2]
Zcumprob=zeros(Float32,gz,gz)
# 2 in cumsum! denotes the 2nd dimension, i.e. columns
cumsum!(Zcumprob, Zprob,2)
gm = gk * gp
control=zeros(gm,2)
for i=1:gk
control[(1+gp*(i-1)):(gp*i),1]=fill(k[i],(gp,1))
control[(1+gp*(i-1)):(gp*i),2]=p
end
endog=copy(control)
E=Array(Float32,gm,gm,gz)
for h=1:gm
for m=1:gm
for j=1:gz
# set the nonzero net debt indicator
if endog[h,2]<0
p_ind=1
else
p_ind=0
end
# set the investment indicator
if (control[m,1]-(1-delta)*endog[h,1])!=0
i_ind=1
else
i_ind=0
end
E[m,h,j] = (1-tau)*z[j]*(endog[h,1]^theta) + control[m,2]-endog[h,2]*(1+r*(1-tau)) +
delta*endog[h,1]*tau-(control[m,1]-(1-delta)*endog[h,1]) -
(i_ind*gamma*endog[h,1]+endog[h,1]*(a_capital/2)*(((control[m,1]-(1-delta)*endog[h,1])/endog[h,1])^2)) +
s*endog[h,2]*p_ind
elem = E[m,h,j]
if E[m,h,j]<0
E[m,h,j]=elem+lambda1*elem-.5*lambda2*elem^2
else
E[m,h,j]=elem
end
end
end
end
function add_vecs!(gm::Int,r::Array{Float32},h::Int,j::Int,E::Array{Float32},
exp_v::Array{Float32},beta::Float32)
@inbounds @views for i=1:gm
r[i]=E[i,h,j] + beta*exp_v[i,j]
end
return r
end
function dynam_serial(E::Array{Float32},gm::Int,gz::Int,beta::Float32,Zprob::Array{Float32})
v = Array{Float32}(gm,gz)
fill!(v,E[cld(gm,2),cld(gm,2),cld(gz,2)])
Tv = Array{Float32}(gm,gz)
# Set parameters for the loop
convcrit = 0.0001 # chosen convergence criterion
diff = 1.00000 # arbitrary initial value greater than convcrit
iter=0
exp_v=Array{Float32}(gm,gz)
r=Array{Float32}(gm)
while diff>convcrit
A_mul_Bt!(exp_v,v,Zprob)
for h=1:gm
for j=1:gz
Tv[h,j]=findmax(add_vecs!(gm,r,h,j,E,exp_v,beta))[1]
end
end
diff = maximum(abs,Tv - v)
(v,Tv)=(Tv,v)
end
return v
end

Find area of two overlapping circles using monte carlo method

I have two intersecting circles, as shown in the figure.
I want to find the area of each part separately using the Monte Carlo method in MATLAB.
The code doesn't draw the rectangle or the circles correctly, so I guess my calculation of x and y is wrong. I am not very familiar with the geometry equations needed to solve this, so I need help with the equations.
This is my code so far:
n=1000;
%supposing that a rectangle will contain both circles so :
% the mid point of the distance between 2 circles will be (0,6)
% then by adding the radius of the left and right circles the total distance
% will be 27 , 11 from the left and 16 from the right
% width of rectangle = 24
x=27.*rand(n-1)-11;
y=24.*rand(n-1)+2;
count=0;
for i=1:n
if((x(i))^2+(y(i))^2<=25 && (x(i))^2+(y(i)-12)^2<=100)
count=count+1;
figure(2);
plot(x(i),y(i),'b+')
hold on
elseif(~(x(i))^2+(y(i))^2<=25 &&(x(i))^2+(y(i)-12)^2<=100)
figure(2);
plot(x(i),y(i),'y+')
hold on
else
figure(2);
plot(x(i),y(i),'r+')
end
end
Here are the errors I found:
x = 27*rand(n,1)-5
y = 24*rand(n,1)-12
The rectangle extents were incorrect, and using rand(n-1) will give you an (n-1)-by-(n-1) matrix rather than a vector.
and the two if conditions should be:
First if:
(x(i))^2+(y(i))^2<=25 && (x(i)-12)^2+(y(i))^2<=100
(the center of the large circle is at x=12, not y=12)
Second if:
~((x(i))^2+(y(i))^2<=25) && (x(i)-12)^2+(y(i))^2<=100
This code can be improved by using logical indexing.
For example, using R, you could do (Matlab code is left as an exercise):
n = 10000
x = 27*runif(n)-5
y = 24*runif(n)-12
plot(x,y)
r = (x^2 + y^2)<=25 & ((x-12)^2 + y^2)<=100
g = (x^2 + y^2)<=25
b = ((x-12)^2 + y^2)<=100
points(x[g],y[g],col="green")
points(x[b],y[b],col="blue")
points(x[r],y[r],col="red")
which gives a scatter plot with the overlap region in red, the small-circle-only points in green, the large-circle-only points in blue, and the remaining points in the default colour.
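For completeness, here is a rough MATLAB translation of the same logical-indexing idea (a sketch only, using the corrected geometry above: a small circle of radius 5 at the origin and a large circle of radius 10 centred at (12,0); the variable names are mine):
n = 10000;
x = 27*rand(n,1) - 5;                      % rectangle covering both circles
y = 24*rand(n,1) - 12;
inSmall = x.^2 + y.^2 <= 25;               % inside the small circle
inLarge = (x-12).^2 + y.^2 <= 100;         % inside the large circle
inBoth  = inSmall & inLarge;
figure; hold on
plot(x(inSmall & ~inLarge), y(inSmall & ~inLarge), 'g+')
plot(x(inLarge & ~inSmall), y(inLarge & ~inSmall), 'b+')
plot(x(inBoth), y(inBoth), 'r+')
plot(x(~inSmall & ~inLarge), y(~inSmall & ~inLarge), 'k+')
axis equal
% Monte Carlo area estimate: fraction of points times rectangle area
rectArea    = 27*24;
areaOverlap = rectArea * sum(inBoth) / n;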
Here is my generic solution for any two circles (without any hardcoded value):
function [ P ] = circles_intersection_area( k1, k2, N )
%CIRCLES_INTERSECTION_AREA Summary...
% Adnan A.
x1 = k1(1);
y1 = k1(2);
r1 = k1(3);
x2 = k2(1);
y2 = k2(2);
r2 = k2(3);
if sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)) >= (r1 + r2)
% no intersection
P = 0;
return
end
% Wrapper rectangle config
a_min = x1 - r1 - 2*r2;
a_max = x1 + r1 + 2*r2;
b_min = y1 - r1 - 2*r2;
b_max = y1 + r1 + 2*r2;
% Monte Carlo algorithm
n = 0;
for i = 1:N
rand_x = unifrnd(a_min, a_max);
rand_y = unifrnd(b_min, b_max);
if sqrt((rand_x - x1)^2 + (rand_y - y1)^2) < r1 && sqrt((rand_x - x2)^2 + (rand_y - y2)^2) < r2
% is a point in the both of circles
n = n + 1;
plot(rand_x,rand_y, 'go-');
hold on;
else
plot(rand_x,rand_y, 'ko-');
hold on;
end
end
P = (a_max - a_min) * (b_max - b_min) * n / N;
end
Call it like: circles_intersection_area([-0.4,0,1], [0.4,0,1], 10000) where the first param is the first circle (x,y,r) and the second param is the second circle.
Without using a for loop:
n = 100000;
data = rand(2,n);
data = data*2*30 - 30;
x = data(1,:);
y = data(2,:);
plot(x,y,'ro');
inside5 = find(x.^2 + y.^2 <=25);
hold on
plot (x(inside5),y(inside5),'bo');
hold on
inside12 = find(x.^2 + (y-12).^2<=144);
plot (x(inside12),y(inside12),'g');
hold on
insidefinal1 = find(x.^2 + y.^2 <=25 & x.^2 + (y-12).^2>=144);
insidefinal2 = find(x.^2 + y.^2 >=25 & x.^2 + (y-12).^2<=144);
% plot(x(insidefinal1),y(insidefinal1),'bo');
hold on
% plot(x(insidefinal2),y(insidefinal2),'ro');
insidefinal3 = find(x.^2 + y.^2 <=25 & x.^2 + (y-12).^2<=144);
% plot(x(insidefinal3),y(insidefinal3),'ro');
area1=(60^2)*(length(insidefinal1)/n);
area3=(60^2)*(length(insidefinal2)/n);
area2= (60^2)*(length(insidefinal3)/n);

Solving a System of four equations (with logs) in Mathematica

I am trying to solve a system of four equations in four variables. I have read a number of threads on similar issues and tried to follow the suggestions. But I think it is a bit messy because of the logs and cross products here. This is the exact system:
7*w = (7*w+5*x+2*y+z) * ( 0.76 + 0.12*Log[w] -0.08*Log[x] -0.03*Log[y] -0.07*Log[7*w+5*x + 2*y + z]),
5*x = (7*w+5*x+2*y+z) * ( 0.84 - 0.08*Log[w] +0.11*Log[x] -0.02*Log[y] -0.08*Log[7*w+5*x + 2*y + z]),
2*y = (7*w+5*x+2*y+z) * (-0.45 - 0.03*Log[w] -0.02*Log[x] +0.05*Log[y] +0.12*Log[7*w+5*x + 2*y + z]),
1*z = (7*w+5*x+2*y+z) * (-0.16 + 0*Log[w] - 0*Log[x] - 0*Log[y] + 0.03*Log[7*w+5*x + 2*y + z])
(FYI-I am a young economist, and this is an extension of a consumer demand system.) Theoretically, we know that there exists a unique solution to this system that is positive.
Tries
Solve & NSolve: As there should be a solution, I tried these, but neither works. I guess the system has too many logs to handle.
FindRoot : I started with an initial value of (14,15,10,100) which I get from my data. FindRoot returns the last value (which does not satisfy my system) and the following message.
FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable.....
I tried different initial values, including the value returned by FindRoot. I tried to analyze the pattern of the solution value at each step. I didn’t see any pattern but noticed that the z values become negative early in the process. So I put bounds on the values. This just stops the code at the minimum value of 0.1. I also tried an exponential system instead of log, same issues.
Reap[FindRoot[{
7*w==(7*w+5*x + 2*y + z)*(0.76 + 0.12*Log[w] -0.08*Log[x] -0.03*Log[y] -0.07*Log[7*w+5*x + 2*y + z]),
5*x==(7*w+5*x + 2*y + z)*(0.84 -0.08*Log[w] +0.11*Log[x] -0.02*Log[y] -0.08*Log[7*w+5*x + 2*y + z]),
2*y==(7*w+5*x + 2*y + z)*(-0.45 - 0.03*Log[w] -0.02*Log[x] +0.05*Log[y] +0.12*Log[7*w+5*x + 2*y + z]),
z==(7*w+5*x + 2*y + z)*(-0.16 + 0*Log[w] -0*Log[x] -0*Log[y] +0.03*Log[7*w+5*x + 2*y + z])},
{{w,14,0.1,500},{x,15,0.1,500},{y,10,0.1,500},
{z,100,0.1,500}},EvaluationMonitor:>Sow[{w,x,y,z}] ]]
FindMinimum : As we can write this problem as a minimization problem, I tried this (following the suggestion here). The value returned did not converge the system or equations to zero. I tried with only the first two equations, and that sort of converged to zero.
Hope this is engaging enough for the experts here! Any ideas how I should find the solution, or why I can't? It's the first time I am using Mathematica, and unfortunately the first time I am empirically solving a system / doing optimization! Thanks a lot.
{g1, g2, g3, g4} = {
  7*w - (7*w+5*x+2*y+z)*(0.76 + 0.12*Log[w] - 0.08*Log[x] - 0.03*Log[y] - 0.07*Log[7*w+5*x+2*y+z]),
  5*x - (7*w+5*x+2*y+z)*(0.84 - 0.08*Log[w] + 0.11*Log[x] - 0.02*Log[y] - 0.08*Log[7*w+5*x+2*y+z]),
  2*y - (7*w+5*x+2*y+z)*(-0.45 - 0.03*Log[w] - 0.02*Log[x] + 0.05*Log[y] + 0.12*Log[7*w+5*x+2*y+z]),
  1*z - (7*w+5*x+2*y+z)*(-0.16 + 0*Log[w] - 0*Log[x] - 0*Log[y] + 0.03*Log[7*w+5*x+2*y+z])};
subdomain = 0 < w < 100 && 0 < x < 100 && 0 < y < 100 && 0 < z < 100;
res = FindMinimum[{Total[{g1, g2, g3, g4}^2], subdomain}, {w, x, y, z}, AccuracyGoal -> 5]
{g1, g2, g3, g4} /. res[[2]]
I don't have access to Mathematica, so I put your equations into AMPL, which is free for students. Here is what I did:
var w := 14 >= 0;
var x := 15 >= 0;
var y := 10 >= 0;
var z := 100 >= 0;
eq1: 7*w = (7*w+5*x+2*y+z) * ( 0.76 + 0.12*log(w) -0.08*log(x) -0.03*log(y) -0.07*log(7*w+5*x + 2*y + z));
eq2: 5*x = (7*w+5*x+2*y+z) * ( 0.84 - 0.08*log(w) +0.11*log(x) -0.02*log(y) -0.08*log(7*w+5*x + 2*y + z));
eq3: 2*y = (7*w+5*x+2*y+z) * (-0.45 - 0.03*log(w) -0.02*log(x) +0.05*log(y) +0.12*log(7*w+5*x + 2*y + z));
eq4: 1*z = (7*w+5*x+2*y+z) * (-0.16 + 0*log(w) - 0*log(x) - 0*log(y) +0.03*log(7*w+5*x + 2*y + z));
option show_stats 1;
option presolve 10;
option solver "/home/ali/ampl/ipopt"; # put your path here
option seed 1731;
# Initial solve
solve;
display w, x, y, z;
# Multistart
for {1..10} {
    for {j in 1.._snvars}
        let _svar[j] := Uniform(1, 50);
    solve;
    if (solve_result_num < 200) then {
        display w, x, y, z;
    }
}
If I only require that the variables are nonnegative, I get rubbish, for example
w = 2.39266e-11
x = 6.62678e-11
y = 1.57043e-24
z = 7.0842e-10
or
w = 1.09972e-12
x = 9.77807e-11
y = 3.36229e-21
z = 1.85441e-09
Numerically, these are indeed solutions; they satisfy the equations to fairly high precision, although I am pretty sure they are not what you are looking for. This indicates issues with your model.
If I increase the lower bounds of the variables a bit:
var w := 14 >= 0.1;
var x := 15 >= 0.1;
var y := 10 >= 0.1;
var z := 100 >= 0.01;
I get, even with multistart, Ipopt 3.11.6: Converged to a locally infeasible point. Problem may be infeasible. This again indicates issues with your model equations.
I am afraid you will have to revise your model.
This won't fix the issues with your model equations but I would introduce new variables: a=log(w), b=log(x), c=log(y), d=log(z). Then w=exp(a) and so on. It has the advantage that the function evaluations won't fail due to negative arguments for the logarithm function.
I would probably also introduce a new variable for (7*w+5*x+2*y+z) just to make the equations more compact.
Neither of these new variables will solve the above issues with your model equations.
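To illustrate the substitution (a sketch, not tested): writing S = 7*exp(a) + 5*exp(b) + 2*exp(c) + exp(d) for the expenditure term, the first equation becomes 7*exp(a) = S*(0.76 + 0.12*a - 0.08*b - 0.03*c - 0.07*log(S)), and the other three equations transform the same way.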
If it is really your first time using Mathematica, you might be better off with AMPL and IPOPT; these tools are custom-tailored for solving equations and optimization problems. I suggest you use the AMPL mailing list if you have questions, rather than Stack Overflow; you will get better answers on the mailing list.
This method will often rapidly find approximate solutions: NMinimize the sum of squares subject to positivity constraints.
In[2]:= NMinimize[{
(7*w - (7*w + 5*x + 2*y + z)*(0.76 + 0.12*Log[w] - 0.08*Log[x] -
0.03*Log[y] - 0.07*Log[7*w + 5*x + 2*y + z]))^2 +
(5*x - (7*w + 5*x + 2*y + z)*(0.84 - 0.08*Log[w] + 0.11*Log[x] -
0.02*Log[y] - 0.08*Log[7*w + 5*x + 2*y + z]))^2 +
(2*y - (7*w + 5*x + 2*y + z)*(-0.45 - 0.03*Log[w] - 0.02*Log[x] +
0.05*Log[y] + 0.12*Log[7*w + 5*x + 2*y + z]))^2 +
(1*z - (7*w + 5*x + 2*y + z)*(-0.16 + 0*Log[w] +
0.03*Log[7*w + 5*x + 2*y + z]))^2,
w > 0 && x > 0 && y > 0 && z > 0}, {w, x, y, z},
Method -> "RandomSearch"]
Out[2]= {9.34024*10^-12, {w->1.86998*10^-8, x->3.83383*10^-8, y->4.59973*10^-8, z->5.29581*10^-7}}

Can someone help me vectorize / speed up this Matlab Loop?

correlation = zeros(length(s1), 1);
sizeNum = 0;
for i = 1 : length(s1) - windowSize - delta
    s1Dat = s1(i : i + windowSize);
    s2Dat = s2(i + delta : i + delta + windowSize);
    if length(find(isnan(s1Dat))) == 0 && length(find(isnan(s2Dat))) == 0
        if(var(s1Dat) ~= 0 || var(s2Dat) ~= 0)
            sizeNum = sizeNum + 1;
            correlation(i) = abs(corr(s1Dat, s2Dat)) ^ 2;
        end
    end
end
What's happening here:
Run through every value in s1. For every value, take a slice of s1 from i to i + windowSize.
Do the same for s2, only take the slice offset by an intermediate delta.
If there are no NaNs in either of the two slices and they aren't flat, then compute the correlation between them and store it in the correlation vector.
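As a side note, here is a partial-vectorization sketch of the loop above (not a drop-in replacement; it assumes s1 and s2 are column vectors and that windowSize and delta are defined as in the question; the helper names are mine). The NaN test for every window start is precomputed with a moving sum, so the loop only calls corr on windows that pass it, while the flatness test is kept as in the original:
nWin = length(s1) - windowSize - delta;                 % number of window starts
nan1 = conv(double(isnan(s1)), ones(windowSize + 1, 1), 'valid');
nan2 = conv(double(isnan(s2)), ones(windowSize + 1, 1), 'valid');
valid = nan1(1:nWin) == 0 & nan2((1:nWin) + delta) == 0;
correlation = zeros(length(s1), 1);
sizeNum = 0;
for i = find(valid).'                                   % only NaN-free windows
    s1Dat = s1(i : i + windowSize);
    s2Dat = s2(i + delta : i + delta + windowSize);
    if var(s1Dat) ~= 0 || var(s2Dat) ~= 0               % same flatness test as above
        sizeNum = sizeNum + 1;
        correlation(i) = abs(corr(s1Dat, s2Dat)) ^ 2;
    end
end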
This is not an answer; I am trying to understand what is being asked.
Take some data:
N = 1e4;
s1 = cumsum(randn(N, 1)); s2 = cumsum(randn(N, 1));
s1(randi(N, 50, 1)) = NaN; s2(randi(N, 50, 1)) = NaN;
windowSize = 200; delta = 100;
Compute correlations:
tic
corr_s = zeros(N - windowSize - delta, 1);
for i = 1:(N - windowSize - delta)
    s1Dat = s1(i:(i + windowSize));
    s2Dat = s2((i + delta):(i + delta + windowSize));
    corr_s(i) = corr(s1Dat, s2Dat);
end
inds = isnan(corr_s);
corr_s(inds) = 0;
corr_s = corr_s .^ 2; % square of correlation coefficient??? Why?
sizeNum = sum(~inds);
toc
This is what you want to do, right? A moving window correlation function? This is a very interesting question indeed …
