How to run jobs in parallel using one Slurm batch script?

I am trying to run multiple python scripts in parallel with one Slurm batch script. Take a look at the example below:
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=/dev/null
#SBATCH --error=/dev/null
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#SBATCH --partition=All
#SBATCH --time=5:00
srun sleep 60
srun sleep 60
wait
How do I tweak the script such that the execution will take only 60 seconds (instead of 120)? Splitting the script into two scripts is not an option.

As written, that script is running two sleep commands in parallel, two times in a row.
Each srun command initiates a step, and since you set --ntasks=2 each step instantiates two tasks (here the sleep command).
If you want to run two 1-task steps in parallel, you should write it this way:
srun --exclusive -n 1 -c 1 sleep 60 &
srun --exclusive -n 1 -c 1 sleep 60 &
wait
Then each step only instantiates one task, and is backgrounded by the & delimiter, meaning the next srun can start immediately. The wait command makes sure the script terminates only when both steps are finished.
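Putting the pieces together with the directives from the question, the full batch script would look something like this (a sketch; the resource values are simply copied from the question):
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=/dev/null
#SBATCH --error=/dev/null
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#SBATCH --partition=All
#SBATCH --time=5:00
# Each srun starts a one-task step; '&' backgrounds it so both steps run at once
srun --exclusive -n 1 -c 1 sleep 60 &
srun --exclusive -n 1 -c 1 sleep 60 &
# Return only when both background steps have finished
wait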
In that context, the xargs command and GNU parallel can be useful to avoid writing multiple identical srun lines or resorting to a for-loop.
For instance, if you have multiple files you need to run your script over:
find /path/to/data/*.csv -print0 | xargs -0 -n1 -P $SLURM_NTASKS srun -n1 --exclusive python my_python_script.py
This is equivalent to writing as many of these lines as there are files:
srun -n 1 -c 1 --exclusive python my_python_script.py /path/to/data/file1.csv &
srun -n 1 -c 1 --exclusive python my_python_script.py /path/to/data/file2.csv &
srun -n 1 -c 1 --exclusive python my_python_script.py /path/to/data/file3.csv &
[...]
GNU parallel is useful to iterate over parameter values:
parallel -P $SLURM_NTASKS srun -n1 --exclusive python my_python_script.py ::: {1..1000}
will run
python my_python_script.py 1
python my_python_script.py 2
python my_python_script.py 3
...
python my_python_script.py 1000
Another approach is to just run
srun python my_python_script.py
and, inside the Python script, to look for the SLURM_PROCID environment variable and split the work according to its value. The srun command will start multiple instances of the script and each will 'see' a different value for SLURM_PROCID.
import os
# Each task started by srun sees a different value: 0, 1, ..., ntasks-1
print(os.environ['SLURM_PROCID'])
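A minimal sketch of that pattern in the batch script itself, assuming four tasks and one CSV file per task under the hypothetical /path/to/data directory:
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=1
# srun starts 4 instances of the inner command; each instance sees its own
# SLURM_PROCID (0..3) and uses it to pick a different input file.
srun bash -c 'files=(/path/to/data/*.csv); python my_python_script.py "${files[$SLURM_PROCID]}"'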

Related

How to run programs from a bash script on different GPUs?

I usually run two separate jobs (program1 and program2) on two different GPUs.
I would like to be able to run these two jobs from a single bash script, but still on two different GPUs and with a Slurm .out file for each program. Is this possible?
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program1
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program2
The script below seems to run both programs on the same GPU with a single .out file as output.
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program1 &
program2 &
wait
Thanks for your help.
First way
You could write a submit script that gets the name of the executable as a command line argument and another script that calls the submit script. The submit script "submit.sh" could look like this:
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
$1
The second script "run_all.sh" could look like this:
#!/bin/bash
sbatch submit.sh program1
sbatch submit.sh program2
Now you can start your jobs with:
$ ./run_all.sh
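Since each sbatch call creates a separate job, each program also gets its own slurm-<jobid>.out file by default. If you want more descriptive names, a possible tweak (a sketch, not required) is to add an output pattern to submit.sh:
#SBATCH --output=%x-%j.out
where %x expands to the job name and %j to the job ID.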
Second way
You don't have to use scripts to provide all the information to Slurm. It is possible to pass the job information as arguments to the sbatch call:
sbatch [OPTIONS(0)...] [ : [OPTIONS(N)...]] script(0) [args(0)...]
A script like this then could be useful:
#!/bin/bash -l
slurm_opt="--time=1:00:00 --gres=gpu:v100:1 --mem=90g --cpus-per-task=6 -N 1 --wrap"
sbatch $slurm_opt program1
sbatch $slurm_opt program2
Note the --wrap option: it allows you to pass any executable, not just a batch script, after it.
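For example, the helper script above is equivalent to submitting directly from the command line (a sketch using the same options as before):
sbatch --time=1:00:00 --gres=gpu:v100:1 --mem=90g --cpus-per-task=6 -N 1 --wrap="program1"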

What does the keyword --exclusive mean in Slurm?

This is a follow-up question to [How to run jobs in parallel using one Slurm batch script?]. The goal was to create a single sbatch script that can start multiple processes and run them in parallel. The answer given by damienfrancois was very detailed and looked something like this:
#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=/dev/null
#SBATCH --error=/dev/null
#SBATCH --partition=All
srun -n 1 -c 1 --exclusive sleep 60 &
srun -n 1 -c 1 --exclusive sleep 60 &
....
wait
However, I am not able to understand the --exclusive keyword. If I use it, one node of the cluster is chosen and all processes are launched there, whereas I would like Slurm to distribute the steps (the "sleeps") over the entire cluster.
So how does the --exclusive keyword work? According to the Slurm documentation, the restriction to one node should not happen, since the keyword is used within a step allocation.
[I am new to Slurm]

How to submit a script partly as array

I'm using Slurm on my lab's server, and I would like to submit a job that looks like this:
#SBATCH ...
mkdir my/file/architecture
echo "#HEADER" > my/file/architecture/output_summary.txt
for f in my/dir/*.csv; do
python3 myscript.py $f
done
Is there any way to run this so that it completes the first instructions and then runs the for loop in parallel? Each step is independent, so they can run at the same time.
The initial steps are not very complex, so if needed I could separate them into a different sbatch script. my/dir/, however, contains about 7000 CSV files to process, so typing them all out manually would be a pain.
GNU Parallel might be a good fit here, or xargs, though I prefer parallel in Slurm jobs.
Here's an example of an sbatch script running an 8-way parallel loop:
#!/bin/sh
#SBATCH ...
#SBATCH --nodes=1
#SBATCH --ntasks=8
srun="srun --exclusive -N1 -n1"
# -j is the number of jobs parallel runs at once, so we set it to $SLURM_NTASKS
# The --exclusive -N1 -n1 options have each srun start a single copy of the program
# on one of the allocated CPUs. We use "find" to generate the list of files to operate on.
find /my/dir/*.csv -type f | parallel -j $SLURM_NTASKS "$srun python3 myscript.py {}"
The easiest way is to run on a single node, though parallel can use SSH (I believe) to run on multiple computers.
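Combined with the serial preamble from your script, the whole job could look something like this (a sketch; paths and the task count are taken from the question and the example above):
#!/bin/sh
#SBATCH ...
#SBATCH --nodes=1
#SBATCH --ntasks=8
# Serial part: runs once, before anything is parallelized
mkdir my/file/architecture
echo "#HEADER" > my/file/architecture/output_summary.txt
# Parallel part: at most $SLURM_NTASKS srun steps run at a time
srun="srun --exclusive -N1 -n1"
find my/dir/*.csv -type f | parallel -j $SLURM_NTASKS "$srun python3 myscript.py {}"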

Do I need a single bash file for each task in SLURM?

I am trying to launch several task in a SLURM-managed cluster, and would like to avoid dealing with dozens of files.
Right now, I have 50 tasks (indexed by i; for simplicity, i is also the input parameter of my program), and for each one a single bash file slurm_run_i.sh that specifies the compute configuration and the srun command:
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH -J pltCV
#SBATCH --mem=30G
srun python plotConvergence.py i
I am then using another bash file to submit all these tasks, slurm_run_all.sh
#!/bin/bash
for i in {1..50}; do
sbatch slurm_run_$i.sh
done
This works (50 jobs run on the cluster), but I find it troublesome to have more than 50 input files. Searching for a solution, I came up with the & operator, obtaining something like this:
#!/bin/bash
#SBATCH --ntasks=50
#SBATCH --cpus-per-task=1
#SBATCH -J pltall
#SBATCH --mem=30G
# Running jobs
srun python plotConvergence.py 1 &
srun python plotConvergence.py 2 &
...
srun python plotConvergence.py 49 &
srun python plotConvergence.py 50 &
wait
echo "All done"
This seems to run as well. However, I cannot manage each of these jobs independently: the output of squeue shows a single job (pltall) running on a single node. As there are only 12 cores on each node in the partition I am working in, I assume most of my tasks are waiting on the single node I've been allocated. Setting the -N option doesn't change anything either. Moreover, I can no longer cancel individual jobs if I realize there's a mistake, which sounds problematic to me.
Is my interpretation right, and is there a better way than my attempt to run several jobs in Slurm without getting lost among many files?
What you are looking for is the job array feature of Slurm.
In your case, you would have a single submission file (slurm_run.sh) like this:
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH -J pltCV
#SBATCH --mem=30G
#SBATCH --array=1-50
srun python plotConvergence.py ${SLURM_ARRAY_TASK_ID}
and then submit the array of jobs with
sbatch slurm_run.sh
You will see that you have 50 jobs submitted. You can cancel all of them at once or one by one. See the man page of sbatch for details.
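For example, if the array was submitted as job 123456 (a hypothetical job ID):
scancel 123456      # cancel the whole array
scancel 123456_7    # cancel only array task 7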

SLURM sbatch script not running all srun commands in while loop

I'm trying to submit multiple jobs in parallel as a preprocessing step in sbatch using srun. The loop reads a file containing 40 file names and runs an srun command on each file. However, not all files are being sent off with srun, and the rest of the sbatch script continues after the ones that did get submitted finish. The real sbatch script is more complicated and I can't use arrays with this, so that won't work. This part should be pretty straightforward though.
I made this simple test case as a sanity check, and it does the same thing. For every file name in the list (40 of them) it creates a new file containing 'foo'. Every time I submit the script with sbatch, a different number of files gets sent off with srun.
#!/bin/sh
#SBATCH --job-name=loop
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=1G
#SBATCH -A zheng_lab
#SBATCH -p exacloud
#SBATCH --error=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.err
#SBATCH --output=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.out
DIR=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel
SAMPLES=$DIR/samples.txt
OUT_DIR=$DIR/test_out
FOO_FILE=$DIR/foo.txt
# Create output directory
srun -N 1 -n 1 -c 1 mkdir $OUT_DIR
# How many files to run
num_files=$(srun -N 1 -n 1 -c 1 wc -l $SAMPLES)
echo "Number of input files: " $num_files
# Create a new file for every file in listing (run 5 at a time, 1 for each node)
while read F ;
do
fn="$(rev <<< "$F" | cut -d'/' -f 1 | rev)" # Remove path for writing output to new directory
echo $fn
srun -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done <$SAMPLES
wait
# How many files actually got created
finished=$(srun -N 1 -n 1 -c 1 ls -lh $OUT_DIR/*out | wc -l)
echo "Number of files submitted: " $finished
Here is my output log file the last time I tried to run it:
Number of input files: 40 /home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/samples.txt
sample1
sample2
sample3
sample4
sample5
sample6
sample7
sample8
Number of files submitted: 8
The issue is that srun redirects its stdin to the tasks it starts, and therefore the contents of $SAMPLES are consumed, in an unpredictable way, by all the cat commands that are started.
Try with
srun --input none -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
The --input none parameter will tell srun to not mess with stdin.
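For reference, the loop would then look something like this (same variables as in your script):
while read F ;
do
    fn="$(rev <<< "$F" | cut -d'/' -f 1 | rev)"
    echo $fn
    # --input none keeps srun from consuming the rest of $SAMPLES on stdin
    srun --input none -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done <$SAMPLES
wait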
