Julia equivalent of Python multiprocessing.Pool.map - multiprocessing

My multiprocessing needs are very simple: I work in machine learning, and I sometimes need to evaluate an algorithm on multiple datasets, or multiple algorithms on one dataset, or some such. I just need to run a function with some arguments and get a number.
I need no RPC, shared data, nothing.
In Julia, I am getting an error with the following code:
type Model
param
end
# 1. I have several algorithms/models
models = [Model(i) for i in 1:50]
# 2. I have one dataset
X = rand(50, 5)
# 3. I want to parallelize this function
@everywhere function transform(m)
sum(X .* m.param)
end
addprocs(3)
println(pmap(transform, models))
I keep getting errors such as,
ERROR: LoadError: On worker 2:
UndefVarError: #transform not defined
Also, is there a way to avoid having to place @everywhere everywhere? Can I just tell it that all variables should be copied over to the workers when they are created (as is done in Python multiprocessing)?
My typical code obviously looks much more complicated than this, with models spanning several files.
For reference, this is what I would do in Python:
import numpy as np
import time
# 1. I have several algorithms/models
class Model:
def __init__(self, param):
self.param = param
models = [Model(i) for i in range(1,51)]
# 2. I have one dataset
X = np.random.random((50, 5))
# 3. I want to parallelize this function
def transform(m):
return np.sum(X * m.param)
import multiprocessing
pool = multiprocessing.Pool(4)
print(pool.map(transform, models))

The core issue is that you need to add the processes before you attempt to define things on them.
addprocs should always be the first thing you do, even before your using statements (see below).
This is why it is often done with the -p flag when you start julia,
or with --machinefile <file>, or with -L <file>.
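For example (my_script.jl and machines.txt are hypothetical file names):
julia -p 3 my_script.jl # start julia with 3 extra local worker processes
julia --machinefile machines.txt my_script.jl # start workers on the hosts listed in machines.txt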
@everywhere executes the code on all processes that currently exist,
i.e. processes added after the @everywhere do not have the code executed on them.
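For instance, a minimal sketch of the failure mode (f is just a placeholder function):
@everywhere f(x) = 2x # defined only on the processes that exist right now
addprocs(1) # the new worker never gets this definition
pmap(f, 1:4) # UndefVarError: f not defined, on the new worker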
Also, you missed a few @everywheres.
addprocs(3)
@everywhere type Model
param
end
# 1. I have several algorithms/models
models = [Model(i) for i in 1:50]
# 2. I have one dataset
@everywhere X = rand(50, 5)
# 3. I want to parallelize this function
@everywhere function transform(m)
sum(X .* m.param)
end
println(pmap(transform, models))
Alternatives with fewer @everywheres.
Use a begin block to send a whole block of code @everywhere.
addprocs(3)
@everywhere begin
type Model
param
end
X = rand(50, 5)
function transform(m)
sum(X .* m.param)
end
end
models = [Model(i) for i in 1:50]
println(pmap(transform, models))
Use local variables
Local variables (including functions) are sent as required,
though this doesn't help for types.
addprocs(3)
@everywhere type Model
param
end
function main()
X = rand(50, 5)
models = [Model(i) for i in 1:50]
function transform(m)
sum(X .* m.param)
end
println(pmap(transform, models))
end
main()
Use modules
When you run using Foo, the module Foo is loaded on all processes,
but its names are not brought into scope on the workers.
It is a bit weird and counterintuitive,
so much so that I can't conjure a working example of it,
but someone else might.
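Roughly, and untested, it would look something like the sketch below (MyModels is a hypothetical module file somewhere on the LOAD_PATH; the @everywhere using is what brings its names into scope on every worker):
# MyModels.jl
module MyModels
export Model, transform
type Model
param
end
const X = rand(50, 5) # note: each process gets its own random X, just like the @everywhere version above
transform(m) = sum(X .* m.param)
end
# main script
addprocs(3)
@everywhere using MyModels
models = [Model(i) for i in 1:50]
println(pmap(transform, models))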

Related

TypeError using pmap in Julia 1.6.1

I have been working to parallelize a function which reads in an input of polynomials and outputs their roots. The polynomials are in the form of a matrix with columns being each polynomial. It works perfectly fine calling directly, but I get strange behaviour once I use pmap.
using Distributed
@everywhere begin
using PolynomialRoots
end
addprocs(2)
@everywhere global test = ones(Int64, 10, 10)
@everywhere begin
function polyRootS(n)
q=test[:,n]
r = roots(q) # provides poly roots from a vector input
return r
end
end
function parPolyRoots()
pmap(x->polyRootS(x) ? error("failed") : x, 1:10; on_error=identity)
end
for i in 1:10
r = polyRootS(i)
println(r)
end
parPolyRoots()
As far as I have read, pmap should in effect just be parallelizing the loop that calls polyRootS, yet I instead get the outputs:
ComplexF64[0.30901699437494734 + 0.9510565162951534im, 0.8090169943749473 - 0.587785252292473im, -0.8090169943749473 - 0.587785252292473im, -0.8090169943749475 + 0.587785252292473im, 0.8090169943749475 + 0.5877852522924731im, 0.30901699437494745 - 0.9510565162951535im, -1.0 - 7.671452424696258e-18im, -0.30901699437494734 - 0.9510565162951536im, -0.3090169943749474 + 0.9510565162951536im]
for my test matrix and
TypeError(:if, "", Bool, ComplexF64[0.30901699437494734 + 0.9510565162951534im, 0.8090169943749473 - 0.587785252292473im, -0.8090169943749473 - 0.587785252292473im, -0.8090169943749475 + 0.587785252292473im, 0.8090169943749475 + 0.5877852522924731im, 0.30901699437494745 - 0.9510565162951535im, -1.0 - 7.671452424696258e-18im, -0.30901699437494734 - 0.9510565162951536im, -0.3090169943749474 + 0.9510565162951536im])
for my pmap function call. So I am quite puzzled. If someone could point me to some different documentation for pmap and/or a few examples, that would be especially helpful.
A different question: can I start a single @everywhere begin ... end block that covers my defined functions, data, and packages at once, or is it best practice to isolate them individually?
Using Julia 1.6.1 in VSCode with Julia extension. Running on Win10 with Intel i5.
The problem is not with pmap; it's because of the anonymous function you've defined as its first argument:
x->polyRootS(x) ? error("failed") : x
If you look at the TypeError (and refer to the docs about TypeError's syntax), you can see that it's complaining that it was expecting a Bool and instead got a ComplexF64 array. The left side of the ? in the ternary operator is equivalent to an if condition, and Julia expects a Boolean value there. Instead, the call to polyRootS is returning a ComplexF64[] value.
It's not clear what you're trying to check there either. Your polyRootS definition doesn't return any error-indicator value, and any exceptions will be handled by the on_error argument already.
So, replacing that anonymous function with just polyRootS as the first argument to pmap should work.
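For example, a minimal sketch of that fix (same definitions as in the question):
function parPolyRoots()
pmap(polyRootS, 1:10; on_error=identity)
end
parPolyRoots()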

How can I pass multiple parameters to a parallel operation in Octave?

I wrote a function that acts on each combination of columns in an input matrix. It uses multiple for loops and is very slow, so I am trying to parallelize it to use the maximum number of threads on my computer.
I am having difficulty finding the correct syntax to set this up. I'm using the Parallel package in octave, and have tried several ways to set up the calls. Here are two of them, in a simplified form, as well as a non-parallel version that I believe works:
function A = parallelExample(M)
pkg load parallel;
# Get total count of columns
ct = columns(M);
# Generate column pairs
I = nchoosek([1:ct],2);
ops = rows(I);
slice = ones(1, ops);
Ic = mat2cell(I, slice, 2);
## # Non-parallel
## A = zeros(1, ops);
## for i = 1:ops
## A(i) = cmbtest(Ic{i}, M);
## endfor
# Parallelized call v1
A = parcellfun(nproc, @cmbtest, Ic, {M});
## # Parallelized call v2
## afun = @(x) cmbtest(x, M);
## A = parcellfun(nproc, afun, Ic);
endfunction
# function to apply
function P = cmbtest(indices, matrix)
colset = matrix(:,indices);
product = colset(:,1) .* colset(:,2);
P = sum(product);
endfunction
For both of these examples I generate every combination of two columns and convert those pairs into a cell array that the parcellfun function should split up. In the first, I attempt to convert the input matrix M into a 1x1 cell array so it goes to each parallel instance in the same form. I get the error 'C must be a cell array' but this must be internal to the parcellfun function. In the second, I attempt to define an anonymous function that includes the matrix. The error I get here specifies that 'cmbtest' is undefined.
(Naturally, the actual function I'm trying to apply is far more complex than cmbtest here)
Other things I have tried:
Put M into a global variable so it doesn't need to be passed. Seemed to be impossible to put a global variable in a function file, though I may just be having syntax issues.
Make cmbtest a nested function so it can access M (parcellfun doesn't support that)
I'm out of ideas at this point and could use help figuring out how to get this to work.
Converting my comments above to an answer.
When performing parallel operations, it is useful to think of each parallel worker that will result as separate and independent octave instances, which need to have appropriate access to all functions and variables they will require in order to do their independent work.
Therefore, do not rely on subfunctions when calling parcellfun from a main function, since this might lead to errors if the worker is unable to access the subfunction directly under the hood.
In this case, separating the subfunction into its own file fixed the problem.

How to measure time of julia program?

If I want to calculate for things in Julia
invQa = ChebyExp(g->1/Q(g),0,1,5)
a1Inf = ChebyExp(g->Q(g),1,10,5)
invQb = ChebyExp(g->1/Qd(g),0,1,5)
Qb1Inf = ChebyExp(g->Qd(g),1,10,5)
How can I measure the time? How many seconds do I have to wait for the four things above to be done? Do I put tic() at the beginning and toc() at the end?
I tried @elapsed, but got no results.
The basic way is to use
@time begin
#code
end
But note that you should never benchmark in global scope.
A package that can help you benchmark your code is BenchmarkTools.jl, which you should check out as well.
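For instance, a minimal sketch with BenchmarkTools (f and x are placeholders for your own function and input):
using BenchmarkTools
f(x) = sum(sin, x) # placeholder for the code you want to time
x = rand(1000)
@btime f($x) # quick timing; $ interpolates the global so its lookup isn't measured
@benchmark f($x) # full statistics: samples, time distribution, allocations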
You could do something like this (I guess that g is the input parameter):
function cheby_test(g::Your_Type)
invQa = ChebyExp(g->1/Q(g),0,1,5)
a1Inf = ChebyExp(g->Q(g),1,10,5)
invQb = ChebyExp(g->1/Qd(g),0,1,5)
Qb1Inf = ChebyExp(g->Qd(g),1,10,5)
end
function test()
g::Your_Type = small_quick #
cheby_test(g) #= the function is compiled here and
we want to exclude compile time from the test =#
g = real_data()
@time cheby_test(g) # here you measure time for real data
end
test()
I propose calling @time outside global scope if you want to get proper allocation info from the time macro.

Julia - Sending GLPK.Prob to worker

I am using GLPK with Julia, and using the methods written by spencerlyon
sendto(2, lp = lp) # lp is of type GLPK.Prob
However, I can't seem to send a GLPK.Prob between workers. Whenever I do try to send a GLPK.Prob, it gets 'sent', and calling
remotecall_fetch(2, whos)
confirms that the GLPK.Prob got sent
The problem appears when I try to solve it by calling
simplex(lp)
the error
GLPK.GLPKError("invalid GLPK.Prob")
appears. I know that the GLPK.Prob isn't originally an invalid GLPK.Prob, and if I decide to construct the GLPK.Prob explicitly on another worker, e.g. worker 2, calling simplex runs just fine.
This is a problem, as the GLPK.Prob is generated from a custom type of mine that is a bit on the heavy side.
tl;dr Are there possibly some types that cannot be sent between workers properly?
Update
I see now that calling
remotecall_fetch(2, simplex, lp)
will return the above GLPK error
Furthermore I've just noticed that the GLPK module has a method called
GLPK.copy_prob(GLPK.Prob, GLPK.Prob, Int)
but deepcopy (and certainly not copy) won't work when copying a GLPK.Prob.
Example
function create_lp()
lp = GLPK.Prob()
GLPK.set_prob_name(lp, "sample")
GLPK.term_out(GLPK.OFF)
GLPK.set_obj_dir(lp, GLPK.MAX)
GLPK.add_rows(lp, 3)
GLPK.set_row_bnds(lp,1,GLPK.UP,0,100)
GLPK.set_row_bnds(lp,2,GLPK.UP,0,600)
GLPK.set_row_bnds(lp,3,GLPK.UP,0,300)
GLPK.add_cols(lp, 3)
GLPK.set_col_bnds(lp,1,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,1,10)
GLPK.set_col_bnds(lp,2,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,2,6)
GLPK.set_col_bnds(lp,3,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,3,4)
s = spzeros(3,3)
s[1,1] = 1
s[1,2] = 1
s[1,3] = 1
s[2,1] = 10
s[3,1] = 2
s[2,2] = 4
s[3,2] = 2
s[2,3] = 5
s[3,3] = 6
GLPK.load_matrix(lp, s)
return lp
end
This will return an lp::GLPK.Prob which will give 733.33 when running
simplex(lp)
result = get_obj_val(lp) # returns 733.33
However, doing
addprocs(1)
remotecall_fetch(2, simplex, lp)
will result in the error above
It looks like the problem is that your lp object contains a pointer.
julia> lp = create_lp()
GLPK.Prob(Ptr{Void} @0x00007fa73b1eb330)
Unfortunately, working with pointers and parallel processing is difficult - if different processes have different memory spaces then it won't be clear which memory address the process should look at in order to access the memory that the pointer points to. These issues can be overcome, but apparently they require individual work for each data type that involves said pointers, see this GitHub discussion for more.
Thus, my thought would be that if you want to access the pointer on the worker, you could just create it on that worker. E.g.
using GLPK
addprocs(2)
#everywhere begin
using GLPK
function create_lp()
lp = GLPK.Prob()
GLPK.set_prob_name(lp, "sample")
GLPK.term_out(GLPK.OFF)
GLPK.set_obj_dir(lp, GLPK.MAX)
GLPK.add_rows(lp, 3)
GLPK.set_row_bnds(lp,1,GLPK.UP,0,100)
GLPK.set_row_bnds(lp,2,GLPK.UP,0,600)
GLPK.set_row_bnds(lp,3,GLPK.UP,0,300)
GLPK.add_cols(lp, 3)
GLPK.set_col_bnds(lp,1,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,1,10)
GLPK.set_col_bnds(lp,2,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,2,6)
GLPK.set_col_bnds(lp,3,GLPK.LO,0,0)
GLPK.set_obj_coef(lp,3,4)
s = spzeros(3,3)
s[1,1] = 1
s[1,2] = 1
s[1,3] = 1
s[2,1] = 10
s[3,1] = 2
s[2,2] = 4
s[3,2] = 2
s[2,3] = 5
s[3,3] = 6
GLPK.load_matrix(lp, s)
return lp
end
end
a = @spawnat 2 eval(:(lp = create_lp()))
b = @spawnat 2 eval(:(result = simplex(lp)))
fetch(b)
See the documentation below on @spawn for more info on using it, as it can take a bit of getting used to.
The macros @spawn and @spawnat are two of the tools that Julia makes available to assign tasks to workers. Here is an example:
julia> @spawnat 2 println("hello world")
RemoteRef{Channel{Any}}(2,1,3)
julia> From worker 2: hello world
Both of these macros will evaluate an expression on a worker process. The only difference between the two is that @spawnat allows you to choose which worker will evaluate the expression (in the example above worker 2 is specified), whereas with @spawn a worker will be automatically chosen, based on availability.
In the above example, we simply had worker 2 execute the println function. There was nothing of interest to return or retrieve from this. Often, however, the expression we send to the worker will yield something we wish to retrieve. Notice in the example above, when we called @spawnat, before we got the printout from worker 2, we saw the following:
RemoteRef{Channel{Any}}(2,1,3)
This indicates that the @spawnat macro will return a RemoteRef type object. This object in turn will contain the return values from our expression that is sent to the worker. If we want to retrieve those values, we can first assign the RemoteRef that @spawnat returns to an object, and then use the fetch() function, which operates on a RemoteRef type object, to retrieve the results stored from an evaluation performed on a worker.
julia> result = @spawnat 2 2 + 5
RemoteRef{Channel{Any}}(2,1,26)
julia> fetch(result)
7
The key to being able to use @spawn effectively is understanding the nature of the expressions that it operates on. Using @spawn to send commands to workers is slightly more complicated than just typing directly what you would type if you were running an "interpreter" on one of the workers or executing code natively on them. For instance, suppose we wished to use @spawnat to assign a value to a variable on a worker. We might try:
@spawnat 2 a = 5
RemoteRef{Channel{Any}}(2,1,2)
Did it work? Well, let's see by having worker 2 try to print a.
julia> @spawnat 2 println(a)
RemoteRef{Channel{Any}}(2,1,4)
julia>
Nothing happened. Why? We can investigate this more by using fetch() as above. fetch() can be very handy because it will retrieve not just successful results but also error messages as well. Without it, we might not even know that something has gone wrong.
julia> result = @spawnat 2 println(a)
RemoteRef{Channel{Any}}(2,1,5)
julia> fetch(result)
ERROR: On worker 2:
UndefVarError: a not defined
The error message says that a is not defined on worker 2. But why is this? The reason is that we need to wrap our assignment operation into an expression that we then use @spawn to tell the worker to evaluate. Below is an example, with explanation following:
julia> @spawnat 2 eval(:(a = 2))
RemoteRef{Channel{Any}}(2,1,7)
julia> @spawnat 2 println(a)
RemoteRef{Channel{Any}}(2,1,8)
julia> From worker 2: 2
The :() syntax is what Julia uses to designate expressions. We then use the eval() function in Julia, which evaluates an expression, and we use the @spawnat macro to instruct that the expression be evaluated on worker 2.
We could also achieve the same result as:
julia> @spawnat(2, eval(parse("c = 5")))
RemoteRef{Channel{Any}}(2,1,9)
julia> @spawnat 2 println(c)
RemoteRef{Channel{Any}}(2,1,10)
julia> From worker 2: 5
This example demonstrates two additional notions. First, we see that we can also create an expression using the parse() function called on a string. Second, we see that we can use parentheses when calling @spawnat, in situations where this might make our syntax clearer and more manageable.

passing shared array in julia using @everywhere

I found this post - Shared array usage in Julia, which is clearly close but I still don't really understand what to do in my case.
I am trying to pass a shared array to a function I define, and call that function using @everywhere. The following, which has no shared array, works:
@everywhere mat = rand(3,3)
@everywhere foo1(x::Array) = det(x)
Then this
@everywhere println(foo1(mat))
properly produces different results from each worker. Now let me include a shared array:
test = SharedArray(Float64,10)
@everywhere foo2(x::Array,y::SharedArray) = det(x) + sum(y)
Then this
@everywhere println(foo2(mat,test))
fails on the workers.
ERROR: On worker 2:
UndefVarError: test not defined
etc. I can get what I want like this:
for w in procs()
@spawnat w println(foo2(eval(:mat),test))
end
This works - but is it optimal? Is there a way to make it work with @everywhere?
While it's tempting to use "named variables" on workers, it generally seems to work better if you access them via references. Schematically, you might do something like this:
mat = [@spawnat p rand(3,3) for p in workers()] # process 1 holds references to objects on workers
@sync for (i, p) in enumerate(workers())
@spawnat p foo(mat[i], sharedarray)
end
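A slightly more concrete, untested sketch of the same idea (foo here is a hypothetical function; fetch on the worker that owns the reference is cheap):
@everywhere foo(matref, y::SharedArray) = det(fetch(matref)) + sum(y)
test = SharedArray(Float64, 10)
mat = [@spawnat p rand(3,3) for p in workers()] # references to matrices living on the workers
@sync for (i, p) in enumerate(workers())
@spawnat p println(foo(mat[i], test))
end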
