I'm working on a Jacobi solver for the Poisson equation using Julia. The solver is called iteratively until err is sufficiently small (~1e-8), which takes around 25,000 passes through the function for my nx = ny = 80 test case. Profiling shows that most of the time is spent in the inner loop (as expected), but memory allocation seems to be running away: the @time macro reports 38 gigabytes allocated in order to reach convergence, which seems far too much, since I don't think I'm creating new arrays on each pass.
function jacobi(P::Array{Float64,2}, maxiter::Int64)
    P_old = copy(P)
    for j = 2:ny-1
        # Main body loop
        for i = 2:nx-1
            @inbounds P[i,j] = ((P_old[i+1,j] + P_old[i-1,j])*dx2
                + (P_old[i,j+1] + P_old[i,j-1])*dy2)/denom - Rmod[i,j]
        end
    end
    err = vecnorm(P::Array{Float64,2} - P_old::Array{Float64,2})/sqrt(nx+ny)
    return (P, err)
end
I've timed the function for 1000 loops, calling from a function wrapper (methodwrap) that sets initial conditions:
function methodwrap(solver, maxiter::Int64) # (solver fn name, max # of iterations)
    P = copy(P0)
    iter = 1
    err = 1.0
    maxerr = 1e-8
    prog = Progress(maxiter, .2, "Solving using $solver method", 10) # Show progress bar
    while (err > maxerr) && (iter < maxiter)
        P, err = solver(P, maxiter)
        next!(prog) # Iterates progress bar counter
        iter += 1
    end
    println()
    return (P, iter, err)
end
Contrary to my wishes, it looks like memory allocation scales with the number of loops, so I'm doing something wrong. It looks as if approximately 1.4 MB is allocated with each Jacobi pass:
julia> @time methodwrap(jacobi,1000)
Solving using jacobi method 98%|##########| ETA: 0:00:00
elapsed time: 4.001988593 seconds (1386549012 bytes allocated, 26.45% gc time)
I've tried reducing the inner-loop arrays to vector subarrays and using @simd:
function jacobi2(P::Array{Float64,2}, maxiter::Int64)
    P_old = copy(P)::Array{Float64,2}
    for j = 2:ny-1
        # Main body loop
        Pojm = sub(P_old,:,j-1)
        Poj = sub(P_old,:,j)
        Pojp = sub(P_old,:,j+1)
        Pj = sub(P,:,j)
        Rmodj = sub(Rmod,:,j)
        @simd for i = 2:nx-1
            @inbounds Pj[i] = ((Poj[i+1] + Poj[i-1])*dx2
                + (Pojp[i] + Pojm[i])*dy2)/denom - Rmodj[i]
        end
    end
    err = vecnorm(P::Array{Float64,2} - P_old::Array{Float64,2})/sqrt(nx+ny)
    return (P, err)
end
However, this only seems to increase memory allocation and decrease speed, and I get a @simd warning:
julia> @time methodwrap(jacobi2,1000);
Warning: could not attach metadata for @simd loop.
Solving using jacobi2 method: 100%|##########| ETA: 0:00:00
elapsed time: 4.947097666 seconds (1455818184 bytes allocated, 29.85% gc time)
This is my first project in Julia, so I'm probably making a really obvious mistake, but I haven't found a solution yet. I've defined my global variables as constants, I've gone through the performance tips several times, I've linted the file, and I've used TypeCheck to make sure my types are consistent; everything looks fairly kosher to my eyes. What am I doing wrong? I've posted my full code on Gist if you'd like to check that as well.
It turns out the problem was subtle. I made two changes (see below). As @IainDunning suggested, I ran with --track-allocation=user, which pointed to the questionable lines. Both of these problems come from using global variables.
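For anyone else trying this, allocation tracking is enabled with a command-line flag (the script name below is just a placeholder):

# Hypothetical invocation; "poisson.jl" stands in for your own script:
#   julia --track-allocation=user poisson.jl
# On exit, Julia writes *.mem files next to each source file, with the
# number of bytes allocated by each line shown in the left margin.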
After these changes
julia> @time methodwrap(jacobi,1000)
elapsed time: 0.481986712 seconds (116650236 bytes allocated)
Change 1: add const to nx and ny
You had const everywhere except for these two variables, but leaving them non-const and global caused the loop iterator i to allocate unnecessarily.
nx=80 # Number of mesh points in the x-direction
ny=80 # Number of mesh points in the y-direction
was changed to
const nx=80 # Number of mesh points in the x-direction
const ny=80 # Number of mesh points in the y-direction
Change 2: avoid Rmod having type Array{Any,2}
const Rmod = dx2*dy2*R/(2*(dx2+dy2))
was changed to
const Rmod = convert(Array{Float64,2},dx2*dy2*R/(2*(dx2+dy2)))
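To illustrate both fixes at once, here is a hedged sketch of the same kernel with the globals passed in as typed arguments instead (jacobi3! and its parameter list are my additions, not part of the original code):

function jacobi3!(P::Array{Float64,2}, P_old::Array{Float64,2},
                  Rmod::Array{Float64,2}, dx2::Float64, dy2::Float64,
                  denom::Float64, nx::Int, ny::Int)
    copy!(P_old, P)  # reuse the scratch buffer instead of allocating copy(P)
    for j = 2:ny-1
        for i = 2:nx-1
            @inbounds P[i,j] = ((P_old[i+1,j] + P_old[i-1,j])*dx2
                + (P_old[i,j+1] + P_old[i,j-1])*dy2)/denom - Rmod[i,j]
        end
    end
    # Note: vecnorm(P - P_old) still allocates one temporary per call;
    # a hand-written reduction loop would remove that too.
    return vecnorm(P - P_old)/sqrt(nx+ny)
end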
I am working on a scientific code that is experiencing issues with parallelization.
The parallel version is slower than the serial one, and I am not sure whether the right approaches are being used for this application.
How can I improve the performance of the parallel calculation?
Is the right approach being used or should other packages / functions be considered for parallelization?
I have already tried a larger workload; however, this makes no difference.
I suspect the problem is somehow due to data movement between workers, but I don't know how to check or improve this.
Parallel programming with Julia is still relatively new for me, so I am very grateful for any help!
The simulation code is something of a benchmark for the Julia programming language, as our team is considering using Julia for all future projects if strong performance advantages over the current workflow can be demonstrated.
Because of this, I would like to maximize performance, especially since calculations with very large models, as well as possible use on a cluster, are planned.
Minimum Working Example
The critical parts of the code can be broken down to the following example.
I start the process as follows:
using Distributed
addprocs();
@everywhere using SharedArrays, LinearAlgebra, Test
First I define the simulation model, containing all data used for the calculations.
Is it actually okay to store SharedArrays with other data in a struct or should a different approach be used?
@everywhere struct Model
    idx::Vector{Tuple{Int,Int}} # indices
    A::SharedMatrix{Float64}    # results, will be constantly updated
    B::Vector{Float64}          # part of pre-processing, will only be read
end
See the non-parallel version of the function used for the update of the model below.
function update(m::Model, factor::Float64)
    L::Float64 = 0.
    k::Float64 = 0.
    cnt::Int = 0
    for (i,j) in m.idx
        cnt += 1
        L = norm(m.A[:,i] - m.A[:,j])
        k = factor * m.B[cnt]
        m.A[:,i] .+= k*L
        m.A[:,j] .-= k*L
    end
end
For parallelization, I simply tried the following. Is perhaps an approach with pmap better in this case?
@everywhere function parallel_update(m::Model, factor::Float64)
    L::Float64 = 0.
    k::Float64 = 0.
    cnt::Int = 0
    @sync @distributed for (i,j) in m.idx
        cnt += 1
        L = norm(m.A[:,i] - m.A[:,j])
        k = factor * m.B[cnt]
        m.A[:,i] .+= k*L
        m.A[:,j] .-= k*L
    end
end
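One caveat worth flagging in the version above: @distributed partitions the loop across workers, so the worker-local cnt restarts from zero on each worker and no longer lines up with positions in m.idx. A sketch of a variant that indexes by loop position instead (parallel_update2 is my name; note that pairs sharing a column of m.A can still race when updated concurrently):

@everywhere function parallel_update2(m::Model, factor::Float64)
    @sync @distributed for cnt in 1:length(m.idx)
        i, j = m.idx[cnt]
        L = norm(m.A[:,i] - m.A[:,j])
        k = factor * m.B[cnt]   # cnt now matches the pair's position in m.idx
        m.A[:,i] .+= k*L
        m.A[:,j] .-= k*L
    end
end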
To test the results I use the following function:
@everywhere function test_my_code()
    # provide some data
    n = 10000000
    idx = [(rand(1:n),rand(1:n)) for k=1:n]
    A = SharedArray(hcat(([rand(0.:1000.);rand(0.:1000.);rand(0.:1000.)] for k=1:n)...))
    B = [rand(0.:1000.) for k=1:n]
    # define models
    model1 = Model(idx,A,B)
    model2 = Model(idx,A,B)
    # test and compare results
    @time update(model1,2.)
    @time parallel_update(model2,2.)
    @test model1 == model2
end
julia> test_my_code() # first run
6.350694 seconds (50.00 M allocations: 5.215 GiB, 13.66% gc time)
11.422999 seconds (6.69 k allocations: 446.156 KiB)
Test Passed
julia> test_my_code() # second run
6.286828 seconds (50.00 M allocations: 5.215 GiB, 18.35% gc time)
6.297144 seconds (2.92 k allocations: 143.516 KiB)
Test Passed
Note: significant performance improvements for the serial code
I was already able to significantly improve the performance of the serial function and reduce the number of allocations to zero.
Since this seems to make no difference to the parallelization problem, I used the shorter, easier-to-read version for the previous example.
See the serial code below.
using LinearAlgebra, Test
struct Model
    idx::Vector{Tuple{Int,Int}}
    A::Matrix{Float64}
    B::Vector{Float64}
end

function update(m::Model, factor::Float64)
    L::Float64 = 0.
    k::Float64 = 0.
    cnt::Int = 0
    for (i,j) in m.idx
        cnt += 1
        L = norm(m.A[:,i] - m.A[:,j])
        k = factor * m.B[cnt]
        m.A[:,i] .+= k*L
        m.A[:,j] .-= k*L
    end
end

function update_fast(m::Model, factor::Float64)
    L::Float64 = 0.
    k::Float64 = 0.
    cnt::Int = 0
    for (i,j) in m.idx
        cnt += 1
        L = sqrt((m.A[1,i]-m.A[1,j])^2 +
                 (m.A[2,i]-m.A[2,j])^2 +
                 (m.A[3,i]-m.A[3,j])^2)
        k = factor * m.B[cnt]
        m.A[1,i] += k*L
        m.A[2,i] += k*L
        m.A[3,i] += k*L
        m.A[1,j] -= k*L
        m.A[2,j] -= k*L
        m.A[3,j] -= k*L
    end
end
function test_serial_speedup()
    n = 10000000
    idx = [(rand(1:n),rand(1:n)) for k=1:n]
    A = hcat(([rand(0.:1000.);rand(0.:1000.);rand(0.:1000.)] for k=1:n)...)
    B = [rand(0.:1000.) for k=1:n]
    model1 = Model(idx,A,B)
    model2 = Model(idx,A,B)
    @time update(model1,2.)
    @time update_fast(model2,2.)
    @test model1 == model2
end
julia> test_serial_speedup()
5.008049 seconds (50.00 M allocations: 5.215 GiB, 18.14% gc time)
0.464986 seconds
Test Passed
I am trying to write a fast coordinate descent algorithm for solving ordinary least squares regression. The following Julia code works, but I don't understand why it's allocating so much memory
function OLS_cd{T<:Float64}(A::Array{T,2}, b::Array{T,1}, tolerance::T=1e-12)
    N,P = size(A)
    x = zeros(P)
    r = copy(b)
    d = ones(P)
    while sum(d.*d) > tolerance
        @inbounds for j = 1:P
            d[j] = sum(A[:,j].*r)
            x[j] += d[j]
            r -= d[j]*A[:,j]
        end
    end
    return(x)
end
On the data I generate with
n = 100
p = 75
σ = 0.1
β_nz = float([i*(-1)^i for i in 1:10])
β = append!(β_nz,zeros(p-length(β_nz)))
X = randn(n,p); X .-= mean(X,1); X ./= sqrt(sum(abs2(X),1))
y = X*β + σ*randn(n); y .-= mean(y);
Using @benchmark OLS_cd(X, y) I get
BenchmarkTools.Trial:
memory estimate: 65.94 mb
allocs estimate: 151359
--------------
minimum time: 19.316 ms (16.49% GC)
median time: 20.545 ms (16.60% GC)
mean time: 22.164 ms (16.24% GC)
maximum time: 42.114 ms (10.82% GC)
--------------
samples: 226
evals/sample: 1
time tolerance: 5.00%
memory tolerance: 1.00%
The OLS problem gets harder as p gets bigger, and I've noticed that as I make p bigger and need to run longer, Julia allocates more memory.
Why would each pass through the while loop allocate more memory? To my eye, it seems like all of my operations are in place, and the types are clearly specified.
Nothing popped out at me while profiling, but I could post that output as well if it's useful.
Update:
As pointed out below, temporary arrays caused by using vectorized operations were the culprit. The following eliminated extraneous allocations and runs pretty quickly:
function OLS_cd_unrolled{T<:Float64}(A::Array{T,2}, b::Array{T,1}, tolerance::T=1e-12)
    N,P = size(A)
    x = zeros(P)
    r = copy(b)
    d = ones(P)
    while norm(d,Inf) > tolerance
        @inbounds for j = 1:P
            d[j] = 0.0; @inbounds for i = 1:N d[j] += A[i,j]*r[i] end
            @inbounds for i = 1:N r[i] -= d[j]*A[i,j] end
            x[j] += d[j]
        end
    end
    return(x)
end
A[:,j] creates a copy, not a view. You want to use @view A[:,j] or view(A,:,j).
You can devectorize r -= d[j]*A[:,j] with r .= -.(r,d[j]*A[:,j]) to get rid of some more temporaries. As @LutfullahTomak said, sum(A[:,j].*r) should devectorize as dot(view(A,:,j),r) to get rid of all of the temporaries in there. To use an infix operator, you can use \cdot, as in view(A,:,j)⋅r.
You should read up on copies vs. views and how vectorization causes temporary arrays. The gist of it is that when vectorized operations occur, they have to create a new vector as output. Instead, you want to write to an existing vector. r = ... for an array changes the reference, so r = ex for some expression which makes an array will make a new array and then point r to that array. r .= ex will replace the values of the array r with the values from the expression. The former allocates a temporary; the latter does not. Repeated application of this idea is where all of the temporaries come from.
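A minimal sketch of the difference (the names here are just for illustration):

r = zeros(3)
s = r           # s and r refer to the same array
r .= r .+ 1     # in-place: both r and s now hold [1.0, 1.0, 1.0]
r = r + 1       # allocates a new array and rebinds r; s is left unchanged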
Actually, sum(d.*d), sum(A[:,j].*r), and so on are not in-place and make temporary arrays. First, sum(d.*d) == dot(d,d), I think, and sum(A[:,j].*r) makes two temporary arrays. I'd do dot(view(A,:,j),r) for the latter. The current stable version of Julia (0.5) doesn't have a short version for r -= d[j]*A[:,j], so you need to devectorize it and write a loop.
I'm trying to oversimplify this as much as possible.
Functions f1 and f2 implement a very simplified version of roulette wheel selection over a Vector R. The only difference between them is that f1 uses a for loop and f2 a while loop. Both functions return the index of the array where the condition was met.
R = rand(100)

function f1(X::Vector)
    l = length(X)
    r = rand()*X[l]
    for i = 1:l
        if r <= X[i]
            return i
        end
    end
end

function f2(X::Vector)
    l = length(X)
    r = rand()*X[l]
    i = 1
    while true
        if r <= X[i]
            return i
        end
        i += 1
    end
end
Now I created a couple of test functions...
M is the number of times we repeat the function execution.
Now, this is critical: I want to store the values I get from the functions because I need them later... To oversimplify the code, I just created a new variable r where I sum up the returns from the functions.
function test01(M,R)
    cumR = cumsum(R)
    r = 0
    for i = 1:M
        a = f1(cumR)
        r += a
    end
    return r
end

function test02(M,R)
    cumR = cumsum(R)
    r = 0
    for i = 1:M
        a = f2(cumR)
        r += a
    end
    return r
end
So, next I get:
@time test01(1e7,R)
elapsed time: 1.263974802 seconds (320000832 bytes allocated, 15.06% gc time)
@time test02(1e7,R)
elapsed time: 0.57086421 seconds (1088 bytes allocated)
So, for some reason I can't figure out, f1 allocates a lot of memory, and the allocation grows even larger as M gets bigger.
I said the line r += a was critical because if I remove it from both test functions, I get the same result with both tests, so no problems! So I thought there was a problem with the type of a being returned by the functions (because f1 returns the iterator of the for loop, while f2 uses its own variable i "manually declared" inside the function).
But...
aa = f1(cumsum(R))
bb = f2(cumsum(R))
typeof(aa) == typeof(bb)
true
So... what the hell is going on???
I apologize if this is some sort of basic question, but I've been going over this for over 3 hours now and couldn't find an answer... Even though the functions are fixed by using a while loop, I hate not knowing what's going on.
Thanks.
When you see lots of surprising allocations like that, a good first thing to check is type-stability. The @code_warntype macro is very helpful here:
julia> @code_warntype f1(R)
# … lots of annotated code, but the important part is this last line:
end::Union{Int64,Void}
Compare that to f2:
julia> @code_warntype f2(R)
# ...
end::Int64
So, why are the two different? Julia thinks that f1 might sometimes return nothing (which is of type Void)! Look again at your f1 function: what would happen if the last element of X is NaN? It'll just fall off the end of the function with no explicit return statement. In f2, however, you'll end up indexing beyond the bounds of X and get an error instead. Fix this type-instability by deciding what to do if the loop completes without finding the answer, and you'll see more similar timings.
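For instance, a hedged fix along those lines (returning 0 as a "not found" sentinel is an arbitrary choice of mine; throwing an error would work too):

function f1_stable(X::Vector)
    l = length(X)
    r = rand()*X[l]
    for i = 1:l
        if r <= X[i]
            return i
        end
    end
    return 0   # every path now returns an Int, so inference gives Int64, not Union{Int64,Void}
end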
As I stated in the comment, your functions f1 and f2 both generate random numbers inside them, and you are using those random numbers as the stopping criterion. Thus, there is no deterministic way to measure which of the functions is faster (the result doesn't depend only on the implementation).
You can replace f1 and f2 functions to accept r as a parameter:
function f1(X::Vector, r)
    for i = 1:length(X)
        if r <= X[i]
            return i
        end
    end
end

function f2(X::Vector, r)
    i = 1
    while i <= length(X)
        if r <= X[i]
            return i
        end
        i += 1
    end
end
And then measure the time properly with the same R and r for both functions:
>>> R = cumsum(rand(100))
>>> r = rand(1_000_000) * R[end] # generate 1_000_000 random thresholds
>>> @time for i=1:length(r); f1(R, r[i]); end;
0.177048 seconds (4.00 M allocations: 76.278 MB, 2.70% gc time)
>>> @time for i=1:length(r); f2(R, r[i]); end;
0.173244 seconds (4.00 M allocations: 76.278 MB, 2.76% gc time)
As you can see, the timings are now nearly identical. Any difference will be caused by external factors (warm-up, or the processor being busy with other tasks).
I wrote a function to calculate the joint entropy of each column pair in a matrix, but I would like to improve its performance in both time and memory.
The function looks like this:
function jointentropy(aln)
    mat = Array(Float64,size(aln,2),size(aln,2))
    for i in combinations(1:size(aln,2),2)
        a = i[1]
        b = i[2]
        mina, maxa = extrema(aln[:,a])
        minb, maxb = extrema(aln[:,b])
        h = Array(Float64,(maxa-mina+1,maxb-minb+1))
        h = hist2d([aln[:,a] aln[:,b]],mina-1:1:maxa,minb-1:1:maxb)[3]
        h = h/size(aln[:,1],1)
        I,J,V = findnz(h)
        l = sparse(I,J,log2(V),maxa-mina+1,maxb-minb+1)
        mat[b,a] = - sum(l.*h)
    end
    return mat
end
Matrices that go into this function look like this:
rand(45:122,rand(1:2000),rand(1:2000))
An example with a 500x500 matrix resulted in the following @time output:
elapsed time: 33.692081413 seconds (33938843192 bytes allocated, 36.42% gc time)
...which seems to be a whole lot of memory...
Any suggestions on how to speed up this function and reduce memory allocation?
Thanks in advance for any help!
Here are a few ideas to speed up your function.
If the range of all the columns is roughly the same, you can move the extrema computations outside the loop and reuse the same h array.
hist2d creates a new array: you can use hist2d! to reuse the previous one.
The assignment h = h/size(aln[:,1],1) creates a new array.
The division in h = h/size(aln[:,1],1) is done for all the elements of the array, including the zeroes.
You can use a loop instead of findnz and a sparse matrix (findnz already contains a loop).
function jointentropy2(aln)
    n1 = size(aln,1)
    n2 = size(aln,2)
    mat = Array(Float64,n2,n2)
    lower, upper = extrema(aln)
    m = upper-lower+1
    h = Array(Float64,(m,m))
    for a in 1:n2
        for b in (a+1):n2
            Base.hist2d!(h,[aln[:,a] aln[:,b]],lower-1:1:upper,lower-1:1:upper)[3]
            s = 0
            for i in 1:m
                for j in 1:m
                    if h[i,j] != 0
                        p = h[i,j] / n1
                        s += p * log2(p)
                    end
                end
            end
            mat[b,a] = - s
        end
    end
    return mat
end
This is twice as fast as the initial function,
and the memory allocations were divided by 4.
aln = rand(45:122,500,400)
@time x = jointentropy(aln)
# elapsed time: 26.946314168 seconds (21697858752 bytes allocated, 29.97% gc time)
@time y = jointentropy2(aln)
# elapsed time: 13.626282821 seconds (5087119968 bytes allocated, 16.21% gc time)
x - y # approximately zero (at least below the diagonal --
# the matrix was not initialized above it)
The next candidate for optimization is hist2d (here, you could use a loop and a sparse matrix).
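As a rough illustration of that idea, here is a sketch of a hand-rolled replacement for the hist2d! call (hist2d_loop! is my name; it assumes aln contains integers, as in the rand(45:122,...) example, so each value maps directly to a unit-width bin):

function hist2d_loop!(h, aln, a, b, lower)
    fill!(h, 0.)
    for r in 1:size(aln,1)
        h[aln[r,a]-lower+1, aln[r,b]-lower+1] += 1
    end
    return h
end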
@profile jointentropy2(aln)
Profile.print()
I'm trying to parallelize a little scientific code I wrote. But when I add @parallel, similar code on just one processor suddenly takes 10 times as long to execute. It should take roughly the same amount of time. The first version of the code makes one memory allocation, while the second makes 20. But zeros(Float64, num_bins) should not be a bottleneck: num_bins is 1800, so each call to zeros() should allocate 8*1800 = 14,400 bytes, and 20 such calls should not be taking this long.
I can't figure out what I'm doing wrong, and the Julia documentation is vague and non-specific about how variables are accessed within @parallel. Both versions of the code below compute the correct value for the rdf vector. Can anyone tell by looking at them what is making the second one allocate so much memory and take so long?
atoms = readAtoms(file)
rdf = zeros(Float64, num_bins)
@time for k = 1:20
    for i = 1:num_atoms
        for j = 1:num_atoms
            r = distance(k, atoms, i, atoms, j)
            bin_number = floor(r / dr) + 1
            rdf[bin_number] += 1
        end
    end
end
elapsed time: 8.1 seconds (0 bytes allocated)
atoms = readAtoms(file)
@time rdf = @parallel (+) for k = 1:20
    rdf_part = zeros(Float64, num_bins)
    for i = 1:num_atoms
        for j = 1:num_atoms
            r = distance(k, atoms, i, atoms, j)
            bin_number = floor(r / dr) + 1
            rdf_part[bin_number] += 1
        end
    end
    rdf_part
end
elapsed time: 81.2 seconds (33472513332 bytes allocated, 17.40% gc time)
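Without the rest of the code it's hard to be certain, but the symptoms match the non-constant-global problem from the Jacobi question above: atoms, num_atoms, num_bins, and dr are globals inside the @parallel body, so every access to them is type-unstable. A hedged sketch of one way to restructure it, pulling the kernel into a function that takes everything as typed arguments (rdf_for_frame is my name; distance and the data layout are assumed from the question):

function rdf_for_frame(k, atoms, num_atoms, num_bins, dr)
    rdf_part = zeros(Float64, num_bins)
    for i = 1:num_atoms
        for j = 1:num_atoms
            r = distance(k, atoms, i, atoms, j)
            bin_number = floor(Int, r / dr) + 1  # integer bin index
            rdf_part[bin_number] += 1
        end
    end
    return rdf_part
end

rdf = @parallel (+) for k = 1:20
    rdf_for_frame(k, atoms, num_atoms, num_bins, dr)
end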