I have a complex model written in Matlab. The model was not written by us and is best thought of as a "black box": fixing the relevant problems from the inside would require rewriting the entire model, which would take years.
If I have an "embarrassingly parallel" problem I can use an array to submit X variations of the same simulation with the option #SBATCH --array=1-X. However, clusters normally have a (frustratingly small) limit on the maximum array size.
Whilst using a PBS/TORQUE cluster I have got around this problem by forcing Matlab to run on a single thread, requesting multiple CPUs and then running multiple instances of Matlab in the background. An example submission script is:
#!/bin/bash
<OTHER PBS COMMANDS>
#PBS -l nodes=1:ppn=5,walltime=30:00:00
#PBS -t 1-600
<GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON ARRAY NUMBER>
# define Matlab options
options="-nodesktop -noFigureWindows -nosplash -singleCompThread"
for sub_job in {1..5}
do
<GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON LOOP NUMBER (i.e. sub_job)>
matlab ${options} -r "run_model(${arg1}, ${arg2}, ..., ${argN}); exit" &
done
wait
<TIDY UP AND FINISH COMMANDS>
Can anyone help me do the equivalent on a SLURM cluster?
Matlab's own parallel constructs (e.g. parfor) will not run my model in a parallel loop inside Matlab.
The PBS/TORQUE language was very intuitive but SLURM's is confusing me. Assuming a submission script structured similarly to my PBS example, here is what I think certain options will do.
--cpus-per-task=5 seems like the most obvious one to me. Would I put srun in front of the matlab command in the loop, or leave it as it is in the PBS script loop?
--ntasks=5, I imagine, would request 5 CPUs, but things will run in serial unless a program specifically requests them (e.g. MPI or multi-process Python). Would I need to put srun in front of the Matlab command in this case?
I am not a big expert on array jobs but I can help you with the inner loop.
I would always use GNU parallel to run several serial processes in parallel within a single job that has more than one CPU available. It is a simple Perl script, so it is not difficult to 'install', and its syntax is extremely easy. What it basically does is run some (nested) loop in parallel. Each iteration of this loop contains a (long) process, like your Matlab command. In contrast to your solution, it does not start all these processes at once; it runs only N processes at a time (where N is the number of CPUs you have available). As soon as one finishes, the next one is started, and so on until your entire loop is finished. It is perfectly fine that not all processes take the same amount of time: as soon as a CPU is freed, another process is started.
What you would like to do, then, is launch 600 jobs (for which I substitute 3 below, to show the complete behaviour), each with 5 CPUs. To do that you could use the following (I have not included the actual Matlab run, but that can trivially be added):
#!/bin/bash
#SBATCH --job-name example
#SBATCH --out job.slurm.out
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 5
#SBATCH --mem 512
#SBATCH --time 30:00:00
#SBATCH --array 1-3
cmd="echo matlab array=${SLURM_ARRAY_TASK_ID}"
parallel --max-procs=${SLURM_CPUS_PER_TASK} "$cmd,subjob={1}; sleep 30" ::: {1..5}
Submitting this job using:
$ sbatch job.slurm
submits 3 jobs to the queue. For example:
$ squeue | grep tdegeus
3395882_1 debug example tdegeus R 0:01 1 c07
3395882_2 debug example tdegeus R 0:01 1 c07
3395882_3 debug example tdegeus R 0:01 1 c07
Each job gets 5 CPUs. These are exploited by the parallel command to run your inner loop in parallel. Once again, the range of this inner loop may be (much) larger than 5; parallel takes care of balancing the work over the 5 available CPUs within this job.
Let's inspect the output:
$ cat job.slurm.out
matlab array=2,subjob=1
matlab array=2,subjob=2
matlab array=2,subjob=3
matlab array=2,subjob=4
matlab array=2,subjob=5
matlab array=1,subjob=1
matlab array=3,subjob=1
matlab array=1,subjob=2
matlab array=1,subjob=3
matlab array=1,subjob=4
matlab array=3,subjob=2
matlab array=3,subjob=3
matlab array=1,subjob=5
matlab array=3,subjob=4
matlab array=3,subjob=5
You can clearly see that the 3 × 5 processes run at the same time (their output is interleaved).
No need in this case to use srun. SLURM will create 3 jobs; within each job everything happens on a single compute node (i.e. as if you were running on your own system).
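To run your actual model rather than the echo placeholder, the parallel line in the script above could look something like this (a sketch; for illustration it assumes that run_model takes the array index and the sub-job index as its two arguments; adapt the argument gathering to your case):

options="-nodesktop -noFigureWindows -nosplash -singleCompThread"
parallel --max-procs=${SLURM_CPUS_PER_TASK} \
    "matlab ${options} -r 'run_model(${SLURM_ARRAY_TASK_ID}, {1}); exit'" ::: {1..5}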
Installing GNU Parallel - option 1
To 'install' GNU parallel into your home folder, for example in ~/opt.
Download the latest GNU Parallel.
Make the directory ~/opt if it does not yet exist
mkdir $HOME/opt
'Install' GNU Parallel:
tar jxvf parallel-latest.tar.bz2
cd parallel-XXXXXXXX
./configure --prefix=$HOME/opt
make
make install
Add ~/opt to your path:
export PATH=$HOME/opt/bin:$PATH
(To make it permanent, add that line to your ~/.bashrc.)
Installing GNU Parallel - option 2
Use conda.
(Optional) Create a new environment
conda create --name myenv
Load an existing environment:
conda activate myenv
Install GNU parallel:
conda install -c conda-forge parallel
Note that the command is available only when the environment is loaded.
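So if you go the conda route, activate the environment inside the submission script before calling parallel. A minimal sketch (the miniconda3 path is an assumption; adjust it to wherever conda is installed on your cluster):

# make `conda activate` usable in a non-interactive batch script
# (assumes conda lives in $HOME/miniconda3 -- adjust to your installation)
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate myenv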
While Tom's suggestion to use GNU Parallel is a good one, I will attempt to answer the question asked.
If you want to run 5 instances of the matlab command with the same arguments (for example if they were communicating via MPI), then you would want to ask for --cpus-per-task=1 and --ntasks=5, and you should preface your matlab line with srun and get rid of the loop.
In your case, as each of your 5 calls to matlab is independent, you want to ask for --cpus-per-task=5 and --ntasks=1. This will ensure that you allocate 5 CPU cores per job to do with as you wish. You can preface your matlab line with srun if you wish, but it will make little difference since you are only running one task.
Of course, this is only efficient if each of your 5 matlab runs takes about the same amount of time; if one takes much longer, the other 4 CPU cores will sit idle waiting for the fifth to finish.
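Putting this together, here is a sketch of what the SLURM version of your PBS submission script might look like. The angle-bracket placeholders and the run_model call are copied from your example; everything else is standard sbatch syntax:

#!/bin/bash
<OTHER SBATCH COMMANDS>
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=30:00:00
#SBATCH --array=1-600

<GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON ${SLURM_ARRAY_TASK_ID}>

# define Matlab options
options="-nodesktop -noFigureWindows -nosplash -singleCompThread"

for sub_job in {1..5}
do
    <GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON LOOP NUMBER (i.e. sub_job)>
    matlab ${options} -r "run_model(${arg1}, ${arg2}, ..., ${argN}); exit" &
done
wait

<TIDY UP AND FINISH COMMANDS>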
You can do it with Python and subprocess. In the approach I describe below you just set the number of nodes and tasks and that is it: no need for an array, no need to match the size of the array to the number of simulations, etc. It will simply execute Python code until it is done, and more nodes means faster execution.
It is also easier to decide on variables, as everything is prepared in Python (which is easier than bash).
It does assume that the Matlab scripts save their output to file; nothing is returned by this function (that can be changed).
In the sbatch script you need to add something like this:
#!/bin/bash
#SBATCH --output=out_cluster.log
#SBATCH --error=err_cluster.log
#SBATCH --time=8:00:00
#SBATCH --nodes=36
#SBATCH --exclusive
#SBATCH --cpus-per-task=2
export IPYTHONDIR="`pwd`/.ipython"
export IPYTHON_PROFILE=ipyparallel.${SLURM_JOBID}
whereis ipcontroller
sleep 3
echo "===== Beginning ipcontroller execution ======"
ipcontroller --init --ip='*' --nodb --profile=${IPYTHON_PROFILE} --ping=30000 & # --sqlitedb
echo "===== Finish ipcontroller execution ======"
sleep 15
srun ipengine --profile=${IPYTHON_PROFILE} --timeout=300 &
sleep 75
echo "===== Beginning python execution ======"
python run_simulations.py
Depending on your system, read more here: https://ipyparallel.readthedocs.io/en/latest/process.html
and run_simulations.py should contain something like this:
import os
import subprocess
import sys

from ipyparallel import Client
from tqdm import tqdm


def run_sim(x):
    # imports repeated here because this function runs on the remote engines
    import os
    import subprocess

    # send job!
    params = [str(i) for i in x]
    # you may also want the usual Matlab batch flags here
    # (-nodesktop -nosplash -singleCompThread); 'exit' makes Matlab quit when done
    p1 = subprocess.Popen(
        ['matlab', '-r', f'run_model({params[0]}, {params[1]}); exit'],
        env=dict(**os.environ),
    )
    p1.wait()
    return


# load ipython parallel
rc = Client(profile=os.getenv('IPYTHON_PROFILE'))
print('Using ipyparallel with %d engines' % len(rc))
sys.stdout.flush()

lview = rc.load_balanced_view()

# prepare variables <-- here you should prepare the arguments for matlab
to_send = []
for param_1 in [1, 2, 3, 4]:
    for param_2 in [10, 20, 40]:
        to_send.append([param_1, param_2])

# submit all simulations to the load-balanced view
ind_raw_features = lview.map_async(run_sim, to_send)

all_results = []
print('Sending jobs')
sys.stdout.flush()
for i in tqdm(ind_raw_features, file=sys.stdout):
    all_results.append(i)
You also get a progress bar on stdout, which is nice. You can also easily add a check to see whether the output files already exist and skip those runs.
So I have a situation where I'm running numerous commands with parallel and piping the output to another script that consumes the output. The problem I'm having is that my script that does the processing of output needs to know when a particular command has finished executing.
I'm using the --tag option so that I know which command generated the output, but currently I have to wait until parallel has finished running all commands before I can know that I'm not going to get any more output from a particular command. From my understanding of parallel I see the following possible solutions, but none really suits me.
I could group the output lines with the --line-buffer option so it looks like they were run sequentially. Then, whenever I see output from the next command, I know the previous one has finished. However, doing it that way slows me down: one command may take 30 seconds to complete while the 20 commands after it only took a second each, and I want to process them in as close to real time as possible.
I could wrap my command in a tiny bash script that outputs 'Process with some ID DONE' to get a notification that the command completed. I don't really like this because I'm running several hundred commands at a time and don't really want to add all those extra bash processes.
I am really hoping that I'm just missing something in the docs and there is a flag in there to do what I'm looking for.
My understanding is that parallel is implemented in Perl, which I'm comfortable with, but I would rather not have to add the functionality myself unless it's completely necessary.
Any help or suggestions are greatly appreciated.
The default behaviour with --tag should work perfectly. It will not output anything until the job is done. And then your postprocessor can simply grab the argument from the start of the line.
Example:
parallel -j3 --tag 'echo Job {} start; sleep {}; echo Job {} ended' ::: 7 1 3 5 2 4 6
If you want to keep the order:
parallel -j3 --keep-order --tag 'echo Job {} start; sleep {}; echo Job {} ended' ::: 7 1 3 5 2 4 6
Notice how the jobs would mix if the output was done immediately. Compare with --ungroup (which you do not want):
parallel -j3 --ungroup 'echo Job {} start; sleep {}; echo Job {} ended' ::: 7 1 3 5 2 4 6
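As a concrete example of the post-processing side: because --tag prefixes every output line with the argument followed by a TAB, and because all of a job's lines are printed together once that job has finished, a downstream consumer can split the tag off each line. A minimal sketch (process_result is a hypothetical stand-in for your own script):

parallel -j3 --tag 'echo Job {} start; sleep {}; echo Job {} ended' ::: 7 1 3 5 2 4 6 |
while IFS=$'\t' read -r tag rest; do
    process_result "$tag" "$rest"   # hypothetical consumer of one tagged line
done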
I have a Julia script that converts csvs to a binary format. Trust me, it's great. I also have many (seemingly innumerable) csvs that I want to process. It's a shared network, so I can only process five files at a clip without savagely burdening the CPU and making my coworkers irate and potentially unstable. Accordingly, I want to run the script in groups of five, wait for them to finish, and then run the next batch as background processes until it's Miller time, all using Julia's wonderful run() function, à la:
julia csvparse3.jl /home/file1.csv > /dev/null 2>&1 &
I'm fairly certain that I could sidestep all of this by using addprocs() and pmap() if I made my parsing script into a Julia module/function. However, the reason I'm asking this is because I don't know what I would then do if my original script was written in Fortran or even worse Python? Is there a way for me to achieve my aforementioned goals for an arbitrary number of external programs, ascertain when the processes are finished, and start anew in the context of a simple loop? Many thanks.
With GNU Parallel you can run:
parallel -j5 julia csvparse3.jl ::: /home/*.csv > /dev/null 2>&1
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines to which you have ssh access.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU, but if the jobs take different amounts of time, some CPUs will finish their 8 jobs early and sit idle while others are still working. GNU Parallel instead starts a new job whenever one finishes, keeping all CPUs active and thus saving time.
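A toy illustration of that load balancing (the sleep times stand in for jobs of very different lengths; with -j4, a new job is started the moment one of the 4 slots frees up):

parallel -j4 'sleep {}; echo "job that slept {}s is done"' ::: 8 1 1 2 1 7 1 3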
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
I need to run many (hundreds) commands in shell, but I only want to have a maximum of 4 processes running (from the queue) at once. Each process will last several hours.
When a process finishes I want the next command to be "popped" from the queue and executed.
I also want to be able to add more process after the beginning, and it will be great if I could remove some jobs from the queue, or at least empty the queue.
I have seen solutions using a makefile, but those only work if I have my full list of commands before starting. I have also tried mkfifo sjobq and other approaches, but I could never quite meet my needs...
Does anyone have code to solve this problem?
Edit: In response to Mark Setchell
The solution with tail -f and parallel is almost perfect, but when I do it, it always holds back the last 4 commands and does not launch them until I add more, and so on; I don't know why, and it is quite troublesome...
As for Redis, that is also a good solution, but it takes more time to master.
Thanks !
Use GNU Parallel to make a job queue like this:
# Clear out file containing job queue
> jobqueue
# Start GNU Parallel processing jobs from queue
# -k means "keep" output in order
# -j 4 means run 4 jobs at a time
tail -f jobqueue | parallel -k -j 4
# From another terminal, submit 40 jobs to the queue
for i in {1..40}; do echo "sleep 5;date +'%H:%M:%S Job $i'"; done >> jobqueue
Another option is to use REDIS - see my answer here Run several jobs parallelly and Efficiently
I need a shell script that will create a loop to start parallel tasks read in from a file...
Something along the lines of:
#!/bin/bash
mylist=/home/mylist.txt
for i in $(cat "$mylist")
do
    # do something like this for each line, in the background
    cp -rp "$i" /destination &
done
wait
So what I am trying to do is send a bunch of tasks into the background with the "&" for each line in $mylist, and wait for them to finish before exiting.
However, there may be a lot of lines in there, so I want to control how many parallel background processes get started; I want to be able to cap it at, say, 5 or 10.
Any ideas?
Thank you
The operating system's scheduler will make it seem like you can run many parallel jobs; how many you can actually run with maximum efficiency depends on your processor. Overall you don't have to worry about starting too many processes, because the system will schedule them for you. If you want to limit them anyway, because the number could get absurdly high, you could use something like this (provided you execute a cp command every time):
...
while ...; do
    jobs=$(pgrep 'cp' | wc -l)   # number of cp processes currently running
    if [[ $jobs -gt 50 ]]; then
        sleep 100
        continue                 # start the next iteration without launching a new copy
    fi
    ...
done
The number of running cp commands is stored in the jobs variable, and before launching a new one the loop checks whether there are already too many. Note that we jump to a new iteration, so you'd have to keep track of how many commands you have already executed. Alternatively, you could use wait, as sketched below.
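If you would rather block than poll, here is a small sketch of the wait-based alternative (it assumes bash 4.3 or newer for wait -n; the file path and the cp command are taken from the question):

#!/bin/bash
mylist=/home/mylist.txt
max_jobs=5

while IFS= read -r f; do
    cp -rp "$f" /destination &
    # once max_jobs copies are running, block until one of them finishes
    while (( $(jobs -rp | wc -l) >= max_jobs )); do
        wait -n
    done
done < "$mylist"
wait   # wait for the remaining background copies before exiting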
Edit:
On a side note, you can pin a process to a specific CPU core using taskset; it may come in handy when you have fewer, more complex commands.
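For example (the core number and paths here are purely illustrative):

# pin one heavy copy to CPU core 2
taskset -c 2 cp -rp /home/bigdir /destination &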
You are probably looking for something like this using GNU Parallel:
parallel -j10 cp -rp {} /destination :::: /home/mylist.txt
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines to which you have ssh access; see the installation and learning resources listed above.