Add exception to slurm array - bash

I have the following slurm script:
#!/bin/bash
#SBATCH -A XXX-CPU
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH -p cclake
#SBATCH -D analyses/
#SBATCH -c 12
#SBATCH -t 01:00:00
#SBATCH --mem=10G
#SBATCH -J splitBAM
#SBATCH -a 1-12
#SBATCH -o analyses/splitBAM_%a.log
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands1.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands2.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands3.csv | bash
Normally I would run a second Slurm script containing a single bash command to remove some files, submitted with the option #SBATCH --dependency=afterok:job_id (the job ID of the first job); a sketch is shown below.
What I want to do is include this in the above script, but if I simply add the line rm file1 file2 file3 it will obviously run for each job in the array, and I only want to run the command once, after all the jobs in the array have finished.
Is there a way to mark this command so that it is not part of the array? That would let me do everything with one script instead of two.
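For reference, a minimal sketch of the separate cleanup script described above; the time limit and job name are assumptions, and JOBID stands for the job ID of the array job:
#!/bin/bash
#SBATCH -A XXX-CPU
#SBATCH -p cclake
#SBATCH -t 00:05:00
#SBATCH -J cleanup
#SBATCH --dependency=afterok:JOBID
rm file1 file2 file3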

There is no specific Slurm syntax for that, but you can add an if statement at the end of the script that checks whether any other job from the array is still in the queue.
if [[ $(squeue -h -j $SLURM_ARRAY_JOB_ID | wc -l) == 1 ]] ; then
    # SLURM_ARRAY_JOB_ID lists every task of this array still in the queue; a count of 1 means only this task is left
    rm file1 file2 file3
fi
If there is no other job running from that job array, then delete the files.
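If you would rather keep the dependency mechanism but still use a single file, an alternative sketch (an assumption, not part of the answer above) is to let the first array task submit the cleanup job itself, with a dependency on the whole array:
if [[ ${SLURM_ARRAY_TASK_ID} -eq 1 ]]; then
    # afterok on the array job ID waits until every task in the array has finished successfully
    sbatch --dependency=afterok:${SLURM_ARRAY_JOB_ID} --wrap "rm file1 file2 file3"
fi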

Related

How to run programs from a bash script on different GPUs?

I usually run two separate jobs (program1 and program2) on two different GPUs.
I would like to be able to run these two jobs from a single bash script but still on two different GPUs, with a Slurm .out file for each program. Is this possible?
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program1
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program2
The script below seems to run both programs on the same GPU with a single .out file as output.
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
program1 &
program2 &
wait
Thanks for your help.
First way
You could write a submit script that gets the name of the executable as a command line argument and another script that calls the submit script. The submit script "submit.sh" could look like this:
#!/bin/bash -l
#SBATCH --time=1:00:00
#SBATCH --gres=gpu:v100:1
#SBATCH --mem=90g
#SBATCH --cpus-per-task=6 -N 1
$1
The second script "run_all.sh" could look like this:
#!/bin/bash
sbatch submit.sh program1
sbatch submit.sh program2
Now you can start your jobs with:
$ ./run_all.sh
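A possible refinement, not part of the answer itself: each sbatch submission already writes its own slurm-<jobid>.out by default, but you can name the logs after the programs with the standard -o/--output option (the file name pattern below is just an assumption):
#!/bin/bash
sbatch -o program1.%j.out submit.sh program1
sbatch -o program2.%j.out submit.sh program2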
Second way
You don't have to use scripts to provide all the information to Slurm. It is possible to pass the job options as arguments on the sbatch call: sbatch [OPTIONS(0)...] [ : [OPTIONS(N)...]] script(0) [args(0)...]
A script like this then could be useful:
#!/bin/bash -l
slurm_opt="--time=1:00:00 --gres=gpu:v100:1 --mem=90g --cpus-per-task=6 -N 1 --wrap"
sbatch $slurm_opt program1
sbatch $slurm_opt program2
Note the --wrap option: it allows you to pass any executable after it, not just a script.
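As a usage note, if the wrapped command takes arguments, quote it so that --wrap receives it as a single string (the flags and file name here are hypothetical):
sbatch $slurm_opt "program1 --input data1.txt"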

SLURM sbatch script not running all srun commands in while loop

I'm trying to submit multiple jobs in parallel as a preprocessing step in sbatch using srun. The loop reads a file containing 40 file names and runs "srun command" on each file. However, not all files are being sent off with srun, and the rest of the sbatch script continues once the ones that did get submitted finish. The real sbatch script is more complicated and I can't use job arrays here, so that won't work. This part should be pretty straightforward, though.
I made this simple test case as a sanity check and it does the same thing. For every file name in the file list (40) it creates a new file containing 'foo' in it. Every time I submit the script with sbatch it results in a different number of files being sent off with srun.
#!/bin/sh
#SBATCH --job-name=loop
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=1G
#SBATCH -A zheng_lab
#SBATCH -p exacloud
#SBATCH --error=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.err
#SBATCH --output=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.out
DIR=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel
SAMPLES=$DIR/samples.txt
OUT_DIR=$DIR/test_out
FOO_FILE=$DIR/foo.txt
# Create output directory
srun -N 1 -n 1 -c 1 mkdir $OUT_DIR
# How many files to run
num_files=$(srun -N 1 -n 1 -c 1 wc -l $SAMPLES)
echo "Number of input files: " $num_files
# Create a new file for every file in listing (run 5 at a time, 1 for each node)
while read F ;
do
fn="$(rev <<< "$F" | cut -d'/' -f 1 | rev)" # Remove path for writing output to new directory
echo $fn
srun -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done <$SAMPLES
wait
# How many files actually got created
finished=$(srun -N 1 -n 1 -c 1 ls -lh $OUT_DIR/*out | wc -l)
echo "Number of files submitted: " $finished
Here is my output log file the last time I tried to run it:
Number of input files: 40 /home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/samples.txt
sample1
sample2
sample3
sample4
sample5
sample6
sample7
sample8
Number of files submitted: 8
The issue is that srun redirects its stdin to the tasks it starts, and therefore the contents of $SAMPLES are consumed, in an unpredictable way, by all the cat commands that are started.
Try with
srun --input none -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
The --input none parameter will tell srun to not mess with stdin.
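An alternative sketch, not from the original answer: read the sample list on a separate file descriptor so that the srun processes cannot consume it through their stdin:
while read -r F <&3 ; do
    fn=$(basename "$F")   # same effect as the rev | cut | rev above
    srun -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done 3< "$SAMPLES"
wait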

How to loop through a script with SLURM? (sbatch and srun)

I'm new to Slurm. I have a script that was written to run the same command many times with multiple inputs and outputs. If I have another shell script, is there a way I can loop through it with multiple srun commands? My thought would be something along the lines of:
shell script:
#!/bin/bash
ExCommand -f input1a -b input2a -c input3a -o outputa
ExCommand -f input1b -b input2b -c input3b -o outputb
ExCommand -f input1c -b input2c -c input3c -o outputc
ExCommand -f input1d -b input2d -c input3d -o outputd
ExCommand -f input1e -b input2e -c input3e -o outpute
sbatch script
#!/bin/bash
## Job Name
#SBATCH --job-name=collectAlignmentMetrics
## Allocation Definition
## Resources
## Nodes
#SBATCH --nodes=1
## Time limit
#SBATCH --time=4:00:00
## Memory per node
#SBATCH --mem=64G
## Specify the working directory for this job
for line in shellscript
do
srun command
done
Any ideas?
Try replacing your for loop with this:
while read -r line ;
do
    if [[ $line == \#* ]]; then continue ; fi
    srun $line
done < shellscript
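A possible extension, assuming the job also requests enough tasks (e.g. #SBATCH --ntasks=5): launch the commands as concurrent job steps and wait for all of them, reusing the --input none trick from the previous answer so the sruns do not swallow the loop's stdin:
while read -r line ;
do
    if [[ $line == \#* ]]; then continue ; fi
    srun --input none -N 1 -n 1 $line &
done < shellscript
wait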

Use a bash file to loop over other bash files with different parameter values

I have a shell file script.sh with the following commands:
#!/bin/sh
#SBATCH --partition=univ2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=13
mpirun -n 25 benchmark.out $param
where param is an integer from the set {1,2,...,10}. Here param is a command line argument that is passed to the executable benchmark.out. I want to create another shell file master.sh (in the same directory as script.sh) that contains a loop over param (from 1 to 10), such that on each iteration script.sh is executed with the given value of param. What should this file look like? Thank you.
Master
#!/bin/bash
for param in `seq 1 1 10`; do
sbatch script.sh $param   # submit with sbatch so the #SBATCH directives in script.sh take effect
done
Script
#!/bin/sh
#SBATCH --partition=univ2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=13
mpirun -n 25 benchmark.out $1
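A hypothetical refinement of master.sh: tag each submission's job name and log file with the parameter value so the ten runs are easy to tell apart (the name patterns are assumptions):
#!/bin/bash
for param in `seq 1 1 10`; do
    sbatch -J bench_$param -o bench_$param.%j.out script.sh $param
done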

Change the value of external SLURM variable

I am running a bash script to run jobs on Linux clusters, using SLURM. The relevant part of the script is given below (slurm.sh):
#!/bin/bash
#SBATCH -p parallel
#SBATCH --qos=short
#SBATCH --exclusive
#SBATCH -o out.log
#SBATCH -e err.log
#SBATCH --open-mode=append
#SBATCH --cpus-per-task=1
#SBATCH -J hadoopslurm
#SBATCH --time=01:30:00
#SBATCH --mem-per-cpu=1000
#SBATCH --mail-user=amukherjee708@gmail.com
#SBATCH --mail-type=ALL
#SBATCH -N 5
I am calling this script from another script (ext.sh), a part of which is given below:
#!/bin/bash
for i in {1..3}
do
source slurm.sh
done
..
I want to manipulate the value of the N variable in slurm.sh (#SBATCH -N 5) by setting it to values like 3, 6, 8, etc., inside the for loop of ext.sh. How do I access the variable programmatically from ext.sh? Please help.
First note that if you simply source the shell script, you will not submit a job to Slurm; you will simply run it on the submission node. So you need to write:
#!/bin/bash
for i in {1..3}
do
sbatch slurm.sh
done
Now if you want to change -N programmatically, one option is to remove it from the file slurm.sh and add it as an argument to the sbatch command:
#!/bin/bash
for i in {1..3}
do
sbatch -N $i slurm.sh
done
The above script will submit three jobs requesting 1, 2, and 3 nodes respectively.
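For the specific node counts mentioned in the question (3, 6, and 8), the same pattern applies; options given on the sbatch command line take precedence over the #SBATCH directives inside the script:
#!/bin/bash
for n in 3 6 8; do
    sbatch -N $n slurm.sh
done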
