Julia: Parallel code slower than Sequential code, are there alternatives to remotecall()? - parallel-processing

When I run a simple function in parallel (with different inputs per worker), the parallelized version of the code consistently takes longer than the sequential one.
The simple function is
addprocs(2)
@everywhere function simple(i,Q,vt_0,ct_0)
a=sum((rand(1:i),vt_0,ct_0))*2
return a
end
The parallelized version of my code is:
#In Parallel
tic()
N=10000;
v=zeros(N);
c=zeros(N);
vt_0=0;
ct_0=0;
for i=1:N
v[i]=fetch(remotecall(simple,2,i,1,vt_0,ct_0))
c[i]=fetch(remotecall(simple,3,i,2,vt_0,ct_0))
vt_0=v[i]
ct_0=c[i]
end
toc()
The sequential code just calls the function in each iteration of the loop (no remotecall() nor fetch()). I am thinking the way in which I call the workers in parallel in Julia with remotecall() is the root of the problem, but I am not sure.
Can anybody help me figure out why this parallel code is so slow compared to simply calling the function? Or is this simple calculation not worth parallelizing?
EDIT:
Here is the Sequential code:
#Sequential
tic()
N=10000;
v=zeros(N,2);
c=zeros(N,2);
vt_0=0;
ct_0=0;
for i=1:N
v[i]=simple(i,1,vt_0,ct_0)
c[i]=simple(i,2,vt_0,ct_0)
vt_0=v[i]
ct_0=c[i]
end
toc()

Even for the case of the very "shallow" computation inside simple(), the Julia documentation recommends using remotecall_fetch() rather than fetch( remotecall( ... ) ):
The function remotecall_fetch() exists for this purpose. It is equivalent to fetch(remotecall(...)) but is more efficient.
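As a minimal sketch of that recommendation (keeping the same argument order as in the question's remotecall() calls, i.e. function first, then worker id), the timed loop could be rewritten like this:
# hedged rewrite: one message round-trip saved per call, everything else unchanged
for i = 1:N
    v[i] = remotecall_fetch( simple, 2, i, 1, vt_0, ct_0 )
    c[i] = remotecall_fetch( simple, 3, i, 2, vt_0, ct_0 )
    vt_0 = v[i]
    ct_0 = c[i]
end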
The [PARALLEL]-process @spawn / fetch() overheads
This is a very often overlooked topic, so it is great to actively recognise it and to try to reason about it. It is actually a common root-cause of adverse impact on the final [PARALLEL]-process scheduling performance. If indeed interested in details, getting both the mathematical model + an interactive UI-tool to simulate / evaluate the net-effects of the overhead-strict speedup formula on [PARALLEL] code-executions for as much as 2 .. 8000+ CPU-cores, feel free to read more on this here
The main "Suspects" here are the value inter-process dependencies:
One might be hidden - inside the system function rand(). In case a cryptographically strong implementation is used, each, indeed EACH, call to rand() must update the central source-of-randomness' state. That means that, for this particular reason, all, indeed ALL, the spawned processes have to establish and maintain a queue to this central shared service, which is not present at all in a simple [SEQ]-code-execution ( as it can make zero source-of-randomness'-state update troubles ), but requires some additional hidden overheads ( MUTEX / LOCKS / value-update atomics et al ) in case of [PAR]-code-execution units that actually "share" this central resource ( being hidden to be strictly transactional, with a concurrency-capacity of 1, operating a 1:N-blocking-logic signalling, with no rollback ... ) and must execute and enforce the atomically safe update of the source-of-randomness' state before any next call to the source-of-randomness' service may get served.
The other is the {prior|next}-loop-step dependency, which is here even mutually crossed between the pair of calls to the simple()-process instances. As illustrated below, this mutual inter-dependence actually forces all potentially spawned calls to be arranged as a pure [SEQ]-process-schedule, and the "remaining" sum() is indeed not much meat to chew in a [PARALLEL]-process schedule.
ACTUAL MUTUAL INTER-PROCESS DEPENDENCY
ILLUSTRATION DEPICTS IMPOSSIBILITY TO AT LEAST INJECT A LOOP
SO AS TO MAKE [PARALLEL]-PROCESSING MORE EFFICIENT:
// +-------------<---- a way to "inject" a for-loop
// | +----------<---- NOT CONSUMED AT ALL
// | |
// | | +--******* BUT ********* DEPENDENCY
// | | |
// | | v
// | | +-------<-->- v[]-{prior|next}-step DEPENDENCY
// | | | +-<-->- c[]-{prior|next}-step DEPENDENCY
// | | | |
// function simple( i, Q, vt_0, ct_0 )
@everywhere function simple( N, Q, vt_0, ct_0, aVEC )
for i = 1:N
aVEC[i] = sum( ( rand(1:i), vt_0, ct_0 ) )*2
// a = sum( ( rand(1:i), vt_0, ct_0 ) )*2
// ret a
end
end
While adding CSP-channels with explicit inter-process communication via the { put!() | take!() } Channel methods may solve the problem of communicating this dependency, guess what, the coroutines' scheduling will only add additional overheads, so expect to pay even more to get less.
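Just to make those mechanics concrete, here is a minimal single-process sketch of such a put!() / take!() hand-off (assuming a Julia version with Channels, 0.5+; only the v-chain is shown and ct_0 is held at zero for brevity; a cross-process variant would use a RemoteChannel and carry still more overhead):
N  = 10000
ch = Channel{Float64}(32)              # bounded buffer between the two tasks
@async begin                           # producer task carries the step-to-step dependency itself
    vt_0 = 0.0
    for i = 1:N
        vt_0 = sum( ( rand(1:i), vt_0, 0.0 ) )*2
        put!( ch, vt_0 )               # publish the i-th value
    end
    close( ch )
end
v = zeros(N)
for i = 1:N
    v[i] = take!( ch )                 # blocks until the producer has put!() it
end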
A minor note on raw-profiling:
In all cases, it is advisable to place the tic() .. toc() bracket tightly around the code-section under test, and to exclude any and all memory allocations and similar extremely long and noisy parts from the actual measured execution:
// ------------------------------------------------------------<SuT>-START
tic()
for i=1:N
v[i]=fetch(remotecall(simple,2,i,1,vt_0,ct_0))
c[i]=fetch(remotecall(simple,3,i,2,vt_0,ct_0))
vt_0=v[i]
ct_0=c[i]
end
toc()
// ------------------------------------------------------------<SuT>-FINISH
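On newer Julia versions (1.0+), where tic() / toc() no longer exist, the same tight bracket can be expressed with @elapsed. A sketch, reusing simple() and the two workers from the question (timed_section is just an illustrative name, not anything from the original post):
function timed_section( N, vt_0, ct_0 )
    v = zeros(N)                       # allocations deliberately kept outside the bracket
    c = zeros(N)
    t = @elapsed for i = 1:N
        v[i] = remotecall_fetch( simple, 2, i, 1, vt_0, ct_0 )
        c[i] = remotecall_fetch( simple, 3, i, 2, vt_0, ct_0 )
        vt_0 = v[i]
        ct_0 = c[i]
    end
    return t                           # seconds spent inside the loop only
end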

Related

Parallel Quicksort in Python

I would like to implement the Parallel Quicksort in Python.
I know Quicksort: you have to choose a pivot and partition, but how do I spawn the recursive calls as independent tasks in Python?
Here is the pseudocode for it:
QS(A[1:n])
    if n = 1 then return A[1]
    pivot <- any value from A (random)
    L <- A[A[:] < pivot]
    R <- A[A[:] > pivot]
    A(L) <- spawn QS(L)
    A(R) <- QS(R)
    sync
    return A(L) ++ A(R)
You can do it, but it's unlikely to speed up your code. You can use ThreadPoolExecutor to create a thread and get result back from it. Here is a simple illustration with a function that sums an array:
from concurrent.futures import ThreadPoolExecutor
pool = ThreadPoolExecutor(max_workers=1)
def add(arr):
    if len(arr) < 2:
        return sum(arr)  # cheating a little
    mid = len(arr) // 2
    f = pool.submit(add, arr[:mid])
    y = add(arr[mid:])
    return y + f.result()
submit() takes a function as its first argument, followed by that function's arguments. So for your code it will be something like f = pool.submit(QS, L).
Please remember, though, that with threads Python supports concurrency but not parallelism. Take a look here for the difference. So the above code will actually run in a single thread. You can use ProcessPoolExecutor instead for process-based parallelism, which Python supports well. But the overhead of the data IO will probably eat up any speed-up you gain from process parallelism.

In Haskell, is it correct measuring performance using a timestamp obtained at the beginning and in the end of a function execution?

I want to measure the performance of a Haskell function. This function is executed concurrently.
Is it correct to measure its performance using the timestamps that the getCurrentTime function returns? Does laziness affect the measurement?
I want to save these times in a log. I have looked at some logging libraries, but the time they return is not as precise as the timestamp that getCurrentTime returns. I use the XES format for my log.
The code I use is something like this (I did not compile it):
import Data.Time.Clock

measuredFunction :: Int -> IO (Int, UTCTime, UTCTime)
measuredFunction x = do
  time' <- getCurrentTime
  -- performs some IO action that returns x'
  time'' <- getCurrentTime
  return (x', time', time'')

runTest :: Int -> Int -> IO ()
runTest init end = do
  when (init <= end) (do
    forkIO (do
      (x', time', time'') <- measuredFunction 1
      -- saves time' and time'' in a log
      )
    runTest (init+1) end)
It depends on the function. Some values have all their information immediately, whereas others can have expensive stuff going on "beyond the top layer". Here's a contrived example:
example :: (Int, Int)
example = (1+1, head [ x | x <- [1..], x == 10^6 ])
If you load this up in ghci, you will see "(2," printed, and then, after some delay, the remainder of the value, "1000000)", printed. If you get a value like this, then the function will "return" "before" the expensive sub-value has been computed. But you can use deepseq to ensure that a value is computed all the way down and doesn't have any sub-computations left.
Benchmarking is subtle, and there are a lot of ways to do it wrong (especially in Haskell). Fortunately we have a very good benchmarking library called criterion (see its tutorial), which I definitely recommend you use if you are trying to get reliable results.

Scala: branching statement optimisation using callback function degrading performance

It is common in C/C++ programming to use function pointers to optimize branching in the main data path. So I wrote a test program to find out whether similar performance savings can be obtained in Scala using functional programming techniques. The use case is that of a function which is invoked millions of times and has a branching statement based on a global flag. The code using an if() statement:
val b = true
def test() = {
if (b) { /* do something */ }
else { /* do something else */ }
}
for(i <- 0 to 100000) test()
And trying to get rid of the if() I did this -
def onTrue() = { /* do something */ }
def onFalse() = { /* do something else */ }
lazy val callback: () => Unit = if(b) onTrue else onFalse
def test() = callback()
for(i <- 0 to 100000) test()
I did a comparison of both these programs by running them for large counters (in the for loop) and running them many times and using the System.nanoTime() differential to measure the time taken.
And my tests seem to suggest that the callback method is actually SLOWER than using if() in the loop. The reason for this could be that a function call requires the params and return values to be pushed onto the stack, a new stack frame to be created, etc. Given this finding, I wanted to know:
Is there a functional way one could code this in Scala that would beat the performance of using if() in the loop?
@inline works with the compiler. Is there a runtime equivalent that avoids the stack activity (similar to tail-call optimization)?
Could my test or results be inaccurate/erroneous in some way?
3) It's very easy to get your methodology wrong when testing this way. Use something like JMH if you want quasi-trustable microbenchmarks!
2) The JVM does inlining at runtime.
1) You aren't measuring a difference in whether something is "functional". You're measuring the difference between using a lazy val and not. If you don't have the lazy val in there, the JVM will probably be able to optimize your code (depending on what "do something" is).
If you remove the lazy val, the second one optimizes to the same speed in my hands. (It has an extra mandatory check for every access that it isn't being initialized in a multi-threaded context.)

Julia: Parallel for loop over partitions iterator

So I'm trying to iterate over the list of partitions of something, say 1:n for some n between 13 and 21. The code that I ideally want to run looks something like this:
valid_num = @parallel (+) for p in partitions(1:n)
int(is_valid(p))
end
println(valid_num)
This would use @parallel for to map-reduce my problem. For example, compare this to the example in the Julia documentation:
nheads = @parallel (+) for i=1:200000000
Int(rand(Bool))
end
However, if I try my adaptation of the loop, I get the following error:
ERROR: `getindex` has no method matching getindex(::SetPartitions{UnitRange{Int64}}, ::Int64)
in anonymous at no file:1433
in anonymous at multi.jl:1279
in run_work_thunk at multi.jl:621
in run_work_thunk at multi.jl:630
in anonymous at task.jl:6
which I think is because I am trying to iterate over something that is not of the form 1:n (EDIT: I think it's because you cannot call p[3] if p=partitions(1:n)).
I've tried using pmap to solve this, but because the number of partitions can get really big, really quickly (there are more than 2.5 million partitions of 1:13, and when I get to 1:21 things will be huge), constructing such a large array becomes an issue. I left it running overnight and it still didn't finish.
Does anyone have any advice for how I can efficiently do this in Julia? I have access to a ~30 core computer and my task seems easily parallelizable, so I would be really grateful if anyone knows a good way to do this in Julia.
Thank you so much!
The below code gives 511, the number of partitions of size 2 of a set of 10.
using Iterators
s = [1,2,3,4,5,6,7,8,9,10]
is_valid(p) = length(p)==2
valid_num = @parallel (+) for i = 1:30
sum(map(is_valid, takenth(chain(1:29,drop(partitions(s), i-1)), 30)))
end
This solution combines the takenth, drop, and chain iterators to get the same effect as the take_every iterator below under PREVIOUS ANSWER. Note that in this solution, every process must compute every partition. However, because each process uses a different argument to drop, no two processes will ever call is_valid on the same partition.
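A toy check of that round-robin split, with plain integers 1:10 standing in for the partitions and 3 "workers" (assuming the same Iterators.jl package loaded above):
using Iterators
items = 1:10
nw    = 3
# worker i sees items i, i+nw, i+2nw, ... : disjoint sets that together cover everything
seen = [ collect(takenth(chain(1:nw-1, drop(items, i-1)), nw)) for i = 1:nw ]
# seen[1] == [1,4,7,10], seen[2] == [2,5,8], seen[3] == [3,6,9]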
Unless you want to do a lot of math to figure out how to actually skip partitions, there is no way to avoid computing partitions sequentially on at least one process. I think Simon's answer does this on one process and distributes the partitions. Mine asks each worker process to compute the partitions itself, which means the computation is being duplicated. However, it is being duplicated in parallel, which (if you actually have 30 processors) will not cost you time.
Here is a resource on how iterators over partitions are actually computed: http://www.informatik.uni-ulm.de/ni/Lehre/WS03/DMM/Software/partitions.pdf.
PREVIOUS ANSWER (More complicated than necessary)
I noticed Simon's answer while writing mine. Our solutions seem similar to me, except mine uses iterators to avoid storing partitions in memory. I'm not sure which would actually be faster for what size sets, but I figure it's good to have both options. Assuming it takes you significantly longer to compute is_valid than to compute the partitions themselves, you can do something like this:
s = [1,2,3,4]
is_valid(p) = length(p)==2
valid_num = @parallel (+) for i = 1:30
foldl((x,y)->(x + int(is_valid(y))), 0, take_every(partitions(s), i-1, 30))
end
which gives me 7, the number of partitions of size 2 for a set of 4. The take_every function returns an iterator that returns every 30th partition starting with the ith. Here is the code for that:
import Base: start, done, next
immutable TakeEvery{Itr}
itr::Itr
start::Any
value::Any
flag::Bool
skip::Int64
end
function take_every(itr, offset, skip)
value, state = Nothing, start(itr)
for i = 1:(offset+1)
if done(itr, state)
return TakeEvery(itr, state, value, false, skip)
end
value, state = next(itr, state)
end
if done(itr, state)
TakeEvery(itr, state, value, true, skip)
else
TakeEvery(itr, state, value, false, skip)
end
end
function start{Itr}(itr::TakeEvery{Itr})
itr.value, itr.start, itr.flag
end
function next{Itr}(itr::TakeEvery{Itr}, state)
value, state_, flag = state
for i=1:itr.skip
if done(itr.itr, state_)
return state[1], (value, state_, false)
end
value, state_ = next(itr.itr, state_)
end
if done(itr.itr, state_)
state[1], (value, state_, !flag)
else
state[1], (value, state_, false)
end
end
function done{Itr}(itr::TakeEvery{Itr}, state)
done(itr.itr, state[2]) && !state[3]
end
One approach would be to divide the problem up into pieces that are not too big to realize and then process the items within each piece in parallel, e.g. as follows:
function my_take(iter,state,n)
i = n
arr = Array[]
while !done(iter,state) && (i>0)
a,state = next(iter,state)
push!(arr,a)
i = i-1
end
return arr, state
end
function get_part(npart,npar)
valid_num = 0
p = partitions(1:npart)
s = start(p)
while !done(p,s)
arr,s = my_take(p,s,npar)
valid_num += @parallel (+) for a in arr
length(a)
end
end
return valid_num
end
valid_num = @time get_part(10,30)
I was going to use the take() method to realize up to npar items from the iterator, but take() appears to be deprecated, so I've included my own implementation, which I've called my_take(). The get_part() function therefore uses my_take() to obtain up to npar partitions at a time and carries out a calculation on them. In this case, the calculation just adds up their lengths, because I don't have the code for the OP's is_valid() function. get_part() then returns the result.
Because the length() calculation isn't very time-consuming, this code is actually slower when run on parallel processors than it is on a single processor:
$ julia -p 1 parpart.jl
elapsed time: 10.708567515 seconds (373025568 bytes allocated, 6.79% gc time)
$ julia -p 2 parpart.jl
elapsed time: 15.70633439 seconds (548394872 bytes allocated, 9.14% gc time)
Alternatively, pmap() could be used on each piece of the problem instead of the parallel for loop.
With respect to the memory issue, realizing 30 items from partitions(1:10) took nearly 1 gigabyte of memory on my PC when I ran Julia with 4 worker processes so I expect realizing even a small subset of partitions(1:21) will require a great deal of memory. It may be desirable to estimate how much memory would be needed to see if it would be at all possible before trying such a computation.
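One rough way to get such an estimate before committing to a big run might be the following (reusing the my_take() helper above; @allocated reports the bytes allocated while evaluating the expression):
p = partitions(1:10)
bytes = @allocated my_take(p, start(p), 30)   # realize 30 partitions once
println("realizing 30 partitions allocated ~", bytes, " bytes")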
With respect to the computation time, note that:
julia> length(partitions(1:10))
115975
julia> length(partitions(1:21))
474869816156751
... so even efficient parallel processing on 30 cores might not be enough to make the larger problem solvable in a reasonable time.
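A back-of-the-envelope calculation makes the point; the 1e6 partitions per second per core figure below is purely an assumed throughput:
total   = 474869816156751        # length(partitions(1:21)) from above
rate    = 30 * 1e6               # 30 cores at an assumed 1e6 partitions/s each
seconds = total / rate           # ~1.6e7 seconds
days    = seconds / 86400        # ~183 days, i.e. roughly half a year of wall-clock time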

Operating in parallel on a large constant datastructure in Julia

I have a large vector of vectors of strings:
There are around 50,000 vectors of strings,
each of which contains 2-15 strings of length 1-20 characters.
MyScoringOperation is a function which operates on a vector of strings (the datum) and returns an array of 10100 scores (as Float64s). It takes about 0.01 seconds to run MyScoringOperation (depending on the length of the datum)
function MyScoringOperation(state::State, datum::Vector{String})
...
score::Vector{Float64} #Size of score = 10000
I have what amounts to a nested loop.
The outer loop typically runs for 500 iterations:
data::Vector{Vector{String}} = loaddata()
for ii in 1:500
score_total = zeros(10100)
for datum in data
score_total+=MyScoringOperation(datum)
end
end
On one computer, on a small test case of 3000 (rather than 50,000) this takes 100-300 seconds per outer loop.
I have 3 powerful servers with Julia 0.3.9 installed (and can get 3 more easily, and then can get hundreds more at the next scale).
I have basic experience with @parallel; however, it seems like it is spending a lot of time copying the constant (it more or less hangs on the smaller testing case).
That looks like:
data::Vector{Vector{String}} = loaddata()
state = init_state()
for ii in 1:500
score_total = @parallel(+) for datum in data
MyScoringOperation(state, datum)
end
state = update(state, score_total)
end
My understanding of the way this implementation works with @parallel is that it:
For each ii:
partitions data into a chunk for each worker
sends that chunk to each worker
each worker processes its chunk
the main procedure sums the results as they arrive.
I would like to remove step 2,
so that instead of sending a chunk of data to each worker,
I just send a range of indexes to each worker, and they look it up from their own copy of data; or even better, give each worker only its own chunk, and have them reuse it each time (saving a lot of RAM).
Profiling backs up my belief about the functioning of @parallel.
For a similarly scoped problem (with even smaller data), the non-parallel version runs in 0.09 seconds, and the parallel version runs in 185 seconds. The profiler shows almost 100% of this time is spent interacting with network IO.
This should get you started:
function get_chunks(data::Vector, nchunks::Int)
base_len, remainder = divrem(length(data),nchunks)
chunk_len = fill(base_len,nchunks)
chunk_len[1:remainder]+=1 #remainder will always be less than nchunks
function _it()
for ii in 1:nchunks
chunk_start = sum(chunk_len[1:ii-1])+1
chunk_end = chunk_start + chunk_len[ii] -1
chunk = data[chunk_start: chunk_end]
produce(chunk)
end
end
Task(_it)
end
function r_chunk_data(data::Vector)
all_chuncks = get_chunks(data, nworkers()) |> collect;
remote_chunks = [put!(RemoteRef(pid)::RemoteRef, all_chuncks[ii]) for (ii,pid) in enumerate(workers())]
#Have to add the type annotation as otherwise it thinks that RemoteRef(pid) might return a RemoteValue
end
function fetch_reduce(red_acc::Function, rem_results::Vector{RemoteRef})
total = nothing
#TODO: consider strongly wrapping total in a lock, when in 0.4, so that it is guaranteed safe
@sync for rr in rem_results
function gather(rr)
res=fetch(rr)
if total===nothing
total=res
else
total=red_acc(total,res)
end
end
@async gather(rr)
end
total
end
function prechunked_mapreduce(r_chunks::Vector{RemoteRef}, map_fun::Function, red_acc::Function)
rem_results = map(r_chunks) do r_chunk
function do_mapred()
@assert r_chunk.where==myid()
@pipe r_chunk |> fetch |> map(map_fun,_) |> reduce(red_acc, _)
end
remotecall(r_chunk.where,do_mapred)
end
@pipe rem_results|> convert(Vector{RemoteRef},_) |> fetch_reduce(red_acc, _)
end
r_chunk_data breaks the data into chunks (as produced by the get_chunks method) and sends each chunk to a different worker, where it is stored in a RemoteRef.
The RemoteRefs are references to memory on your other processes (and potentially computers).
prechunked_mapreduce does a variation on a kind of map-reduce, having each worker first run map_fun on each of its chunk's elements, then reduce over all the elements in its chunk using red_acc (a reduction accumulator function). Finally each worker returns its result, which is then combined by reducing them all together using red_acc, this time via fetch_reduce so that we can add the first ones completed first.
fetch_reduce is a non-blocking fetch and reduce operation. I believe it has no race conditions, though this may be because of an implementation detail in @async and @sync. When Julia 0.4 comes out, it will be easy enough to put a lock in to make it obviously have no race conditions.
This code isn't really battle hardened. I don't believe the
You also might want to look at making the chunk size tunable, so that you can send more data to faster workers (if some have better network or faster CPUs).
You need to reexpress your code as a map-reduce problem, which doesn't look too hard.
Testing that with:
data = [float([eye(100),eye(100)])[:] for _ in 1:3000] #480Mb
chunk_data(:data, data)
@time prechunked_mapreduce(:data, mean, (+))
Took ~0.03 seconds, when distributed across 8 workers (none of them on the same machine as the launcher)
vs running just locally:
@time reduce(+,map(mean,data))
took ~0.06 seconds.
