Can matlabpool run this code in parallel? - parallel-processing

I'm using the Matpower toolbox for MATLAB with parallel computing, and I am building a computer cluster to run the simulation program shown below:
matlabpool open job1 5  % matlabpool means computer cluster
spmd  % the statement from the Parallel Computing Toolbox
    % Run all the statements in parallel
    % first part of code
    if labindex == 1
        runopf('casea');
    end
    % second part of code
    if labindex == 2
        runopf('caseb');
    end
end
matlabpool close;
When labindex is 1, the first part of the code runs on "computer1" in the cluster, and likewise when labindex is 2, the second part runs on "computer2". My question is: does the main code shown above run in sequence or in parallel?
In other words, does the second part of the code have to wait until the first part has finished executing, or can the two parts be executed in parallel on two different computers in the cluster?

The code between spmd and the corresponding end is sent to all workers (5 in your case), and they execute these instructions in parallel. In your code you instructed worker #1 to execute runopf('casea'); and worker #2 to execute runopf('caseb');. Workers #3 to #5 will effectively do nothing.
Technically, worker #2 will start runopf('caseb'); a little later. The delay appears because worker #2 also evaluates the first if statement (but does not execute the code inside it).
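For intuition only, here is a rough Python analogue of the same dispatch idea (this is not MATLAB or Matpower; solve_case and the case names are placeholders): two independent pieces of work are handed to two workers and solved at the same time.
from concurrent.futures import ProcessPoolExecutor

def solve_case(name):
    # placeholder standing in for runopf(name); imagine an expensive computation here
    return "solved " + name

if __name__ == "__main__":
    # two independent cases go to two worker processes and run concurrently,
    # just as worker #1 and worker #2 run their runopf calls concurrently inside spmd
    with ProcessPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(solve_case, name) for name in ("casea", "caseb")]
        print([f.result() for f in futures])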

Related

Asyncio: All tasks are executed at once despite small grouped-tasks

I have created all the tasks in a for loop first.
import asyncio

loop = asyncio.get_event_loop()
tasks = []
for i in range(100):
    task = loop.create_task(...)
    tasks.append(task)
Instead of executing all 100 of them, I am trying to fire just a few, say only one, at a time, so I did something like the following:
await asyncio.wait([tasks[0]]) # just the very first one in the list
# also tried with `asyncio.gather` instead of `asyncio.wait`.
Expected behavior (at least to me)
Only the very first task runs, since I am providing just that one task.
Actual behavior
All 100 tasks are fired. How can I only fire just a few?
How can I only fire just a few?
All 100 tasks fire because loop.create_task schedules each coroutine on the event loop as soon as it is created; awaiting only one of them later does not stop the others from running. I think the pattern you are looking for is a queue with a controlled number of consumers (workers). See for example
https://docs.python.org/3/library/asyncio-queue.html
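A minimal sketch of that queue-plus-workers pattern, assuming your real work lives in a placeholder coroutine called do_work (the names and the worker count here are illustrative, not from the question):
import asyncio

async def do_work(i):
    # placeholder for whatever coroutine you were wrapping in loop.create_task
    await asyncio.sleep(0.1)
    return i

async def worker(queue, results):
    # each worker pulls one item at a time, so at most num_workers tasks run concurrently
    while True:
        i = await queue.get()
        try:
            results.append(await do_work(i))
        finally:
            queue.task_done()

async def main(num_workers=3):
    queue = asyncio.Queue()
    results = []
    for i in range(100):
        queue.put_nowait(i)
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(num_workers)]
    await queue.join()   # wait until every queued item has been processed
    for w in workers:
        w.cancel()       # the workers loop forever, so cancel them once the queue is drained
    await asyncio.gather(*workers, return_exceptions=True)
    return results

results = asyncio.run(main())
With num_workers set to 1 you get the one-at-a-time behavior you described; raising it lets a fixed number of tasks run concurrently.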

subprocess32.Popen crashes (cpu 100%)

I have been trying to use subprocess32.Popen, but it causes my system to crash (CPU at 100%). This is the code I have:
import subprocess32 as subprocess

for i in some_iterable:
    output = subprocess.Popen(['/path/to/sh/file/script.sh', i[0], i[1], i[2], i[3], i[4], i[5]],
                              shell=False, stdin=None, stdout=None, stderr=None, close_fds=True)
Before this, I had the following:
import subprocess32 as subprocess

for i in some_iterable:
    output = subprocess.check_output(['/path/to/sh/file/script.sh', i[0], i[1], i[2], i[3], i[4], i[5]])
... and I had no problems with this, except that it was dead slow.
With Popen this is fast, but my CPU goes to 100% within a couple of seconds and the system crashes, forcing a hard reboot.
What am I doing that is making Popen crash the system?
This is on Linux with Python 2.7, if that helps.
Thanks.
The problem is that you are trying to start 2 million processes at once, which is bringing your system down.
A solution is to use a pool to limit the maximum number of processes that can run at a time and to wait for each process to finish. For a case like this, where you're starting subprocesses and waiting for them (IO bound), a thread pool from the multiprocessing.dummy module will do:
import multiprocessing.dummy as mp
import subprocess32 as subprocess

def run_script(args):
    args = ['/path/to/sh/file/script.sh'] + args
    process = subprocess.Popen(args, close_fds=True)
    # wait for exit and return the exit code
    # (or use check_output() instead of Popen() if you need to process the output)
    return process.wait()

# use a pool of 10 to allow at most 10 processes to be alive at a time
threadpool = mp.Pool(10)

# pool.imap or pool.imap_unordered should be used to avoid creating a list
# of all 2M return values in memory
results = threadpool.imap_unordered(run_script, some_iterable)
for result in results:
    ...  # process result if needed
I've left out most of the arguments to Popen because you are using the default values anyway. The size of the pool should probably be in the range of your available CPU cores if your script is doing computational work; if it's doing mostly IO (network access, writing files, ...) then a larger pool is probably fine.
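As a quick sketch of that sizing advice (the factor of 4 for IO-heavy work is an assumption of mine, not something from the answer):
import multiprocessing
import multiprocessing.dummy as mp

cpu_count = multiprocessing.cpu_count()
threadpool = mp.Pool(cpu_count)        # mostly computational work: roughly one worker per core
# threadpool = mp.Pool(cpu_count * 4)  # mostly IO: a larger pool is usually fine; tune as needed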

Julia Parallel macro does not seem to work

I'm playing around for the first time with parallel computing in Julia, and I'm having a bit of a headache. Let's say I start Julia as follows: julia -p 4. Then I declare a function on all processors and use it with pmap and also with @parallel for.
@everywhere function count_heads(n)
    c::Int = 0
    for i = 1:n
        c += rand(Bool)
    end
    n, c  # tuple (input, output)
end
###### first part ######
v = pmap(count_heads, 50000:1000:70000)
println("Result first part")
println(v)
###### second part ######
println("Result second part")
@parallel for i in 50000:1000:70000
    println(count_heads(i))
end
The result is the following.
Result first part
Counting heads function
Any[(50000,24894),(51000,25559),(52000,26141),(53000,26546),(54000,27056),(55000,27426),(56000,28024),(57000,28380),(58000,29001),(59000,29398),(60000,30100),(61000,30608),(62000,31001),(63000,31520),(64000,32200),(65000,32357),(66000,33063),(67000,33674),(68000,34085),(69000,34627),(70000,34902)]
Result second part
From worker 4: (61000, From worker 5: (66000, From worker 2: (50000, From worker 3: (56000
Thus, pmap is apparently working fine, but @parallel for either stops or doesn't give me the results. Am I doing something wrong?
Thanks!
Update
If I put sleep(10) at the end of the code, it does the work correctly.
From worker 5: (66000,33182)
From worker 3: (56000,27955)
............
From worker 3: (56000,27955)
Both of your examples work properly on my laptop, so I'm not sure, but I think this might solve your problem!
It should work as expected if you add @sync before the @parallel for.
From the Julia Parallel Computing docs (http://docs.julialang.org/en/release-0.4/manual/parallel-computing/):
... the reduction operator can be omitted if it is not needed. In that case, the loop executes asynchronously, i.e. it spawns independent tasks on all available workers and returns an array of RemoteRef immediately without waiting for completion. The caller can wait for the RemoteRef completions at a later point by calling fetch() on them, or wait for completion at the end of the loop by prefixing it with @sync, like @sync @parallel for.
So you may be calling println on the RemoteRef before it has completed.

How to add delay after one full round of jmeter scripts run

My requirement is as follows:
1. Run one full round of tests using JMeter
2. Wait for 90 seconds
3. Restart step 1
4. Keep repeating step 1; the entire process continues for 4 days.
Can someone help me add the delay after all the threads have finished executing?
Create a thread group with 1 thread, set the loop count to forever, and tick "Scheduler". Set a start time and an end time.
Put your tests within the thread group.
Add a Constant Timer after the last test (still within the thread group) and set the Thread Delay to 90000 milliseconds, which is 90 seconds.

Getting Thread not to run until join in ruby

I am getting into Ruby and have been using threads for a little while now without fully understanding them. I notice that when I add a thread to an array and put a sleep() call as its first statement, the thread does not run until I call join, which is mostly what I want. So I have two questions.
1. Is that supposed to happen?
2. Is there a better way to do this than the way I'm doing it? Here is some sample code to show what I'm talking about.
job = Array.new
10.times do |n|
  job << Thread.new do
    sleep 0.001
    puts "done #{n}"
  end
end
#job.each do |t|
#  t.join
#end
puts "End of script"
Output is
End of script
If I remove the comments output is
done 1
done 0
done 7
done 6
done 5
done 4
done 3
done 2
done 9
done 8
End of script
So I use this now, but I don't understand why it behaves that way. Sometimes I notice that even doing something like `echo hi` instead of sleep does the trick.
Thanks in advance.
Thread timing is not defined behavior. Once you put threads to sleep, they are placed in a queue to be run later; you can never rely on them running one way or the other.
Your main program doesn't take very long to run, so it is likely to finish before your other threads get picked back up to run again. Really, when you think about it, 0.001 seconds is quite a long time to a computer, so spinning off 10 threads in that window is entirely plausible -- and even if it takes longer, there is no guarantee a thread will resume immediately after 0.001 seconds. There is usually no guarantee it won't wake before 0.001 seconds either, although sleep calls don't normally end early.
When you add the join calls, you introduce additional time into your main thread, which gives the other threads time to run, so this behavior is expected.
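For what it's worth, the same effect can be reproduced in Python with daemon threads, which (like unjoined Ruby threads) are abandoned when the main program exits; this is only an analogy, not a translation of the Ruby code above.
import threading
import time

def work(n):
    time.sleep(0.001)
    print("done", n)

# daemon threads are killed when the main program exits, much like Ruby threads
# that never get joined before the script ends
jobs = [threading.Thread(target=work, args=(n,), daemon=True) for n in range(10)]
for t in jobs:
    t.start()

# without this join loop, "End of script" is usually the only output;
# with it, every thread gets time to wake up and print
# for t in jobs:
#     t.join()
print("End of script")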
