Let's suppose I have the following list of variables in a txt file (var.txt):
AAA
ABC
BBB
CCC
the following R script (script.R), where x is one variable in var.txt:
print(x)
and the following HPC slurm job script (job.sh):
#!/bin/bash
#SBATCH --job-name test
#SBATCH --ntasks 8
#SBATCH --time 04:00
#SBATCH --output out
#SBATCH --error err
Rscript script.R
How can I run the job.sh script 4 times in sequence, each time with a different variable from var.txt used inside script.R?
Expected output:
4 slurm jobs with script.R printing AAA, ABC, BBB, and CCC.
This is the typical workload suited for a job array. With a submission script like this:
#!/bin/bash
#SBATCH --job-name test
#SBATCH --ntasks 8
#SBATCH --time 04:00
#SBATCH --output out
#SBATCH --error err
#SBATCH --array=0-3
readarray -t VARS < var.txt        # read the lines of var.txt into the VARS array
VAR=${VARS[$SLURM_ARRAY_TASK_ID]}  # pick the line matching this task's index (0-3)
export VAR                         # make VAR visible to the R script
Rscript script.R
and script.R being
print(Sys.getenv("VAR"))
you will get a four-task job array, each task running the R script with a different value of the environment variable VAR, taken from the var.txt file.
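Note that with --output out and --error err every array task writes to the same two files. Slurm's filename patterns %A (master job ID of the array) and %a (array task index) give each task its own files:

#SBATCH --output out_%A_%a
#SBATCH --error err_%A_%a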
I'm trying to submit an array of jobs using slurm, but it's not working as I expected. My bash script is test.sh:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=10G
#SBATCH --account=myaccount
#SBATCH --partition=partition
#SBATCH --time=10:00:00
###Array setup here
#SBATCH --array=1-6
#SBATCH --output=test_%a.out
echo TEST MESSAGE 1
echo $SLURM_ARRAY_TASK_ID
python test.py
The test.py code:
print('TEST MESSAGE 2')
I then submitted this job by doing:
sbatch --wrap="bash test.sh"
I'm not even sure if this is how I should run it. Because there are already #SBATCH directives in the bash script, should I just be running bash test.sh?
I was expecting that 6 jobs would be submitted (one per array index) and that $SLURM_ARRAY_TASK_ID would increase incrementally, but that's not happening. Only one job is submitted and the output is:
TEST MESSAGE 1
TEST MESSAGE 2
So $SLURM_ARRAY_TASK_ID never gets printed, which seems to be the problem. Can anyone tell me what I'm doing wrong?
You just need to submit the script with sbatch test.sh. Using --wrap the way you've done it just runs test.sh as a plain Bash script, so none of the #SBATCH directives are processed.
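For example:

sbatch test.sh

With --array=1-6 this creates six array tasks, each with its own SLURM_ARRAY_TASK_ID, and --output=test_%a.out gives one output file per task (test_1.out through test_6.out), each containing the task's ID between the two test messages.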
I have the following bash script that runs on a HPC using slurm:
#!/bin/bash
#SBATCH --job-name test
#SBATCH --ntasks 10
#SBATCH --time 00-01:00
#SBATCH --output out
#SBATCH --error err
#SBATCH --array=0-9
readarray -t VARS < list_VAR.txt
VAR=${VARS[$SLURM_ARRAY_TASK_ID]}
export VAR
bash data_0_"$VAR".sh
The above bash script sends 10 jobs (#SBATCH --array=0-9) to the HPC, each running the data_0_"$VAR".sh script, where "$VAR" is a string taken from the list_VAR.txt file.
Let's suppose now I have a second file, list_VAR_2.txt, that contains a list of numbers from 0 to 3, and I want to apply it to the job array above, along with list_VAR.txt. The data_0_"$VAR".sh script to be run will then become data_"$VAR_2"_"$VAR".sh.
Is there a way to add this further list of variables list_VAR_2.txt to the bash script?
Thanks
#####################
Update, list_VAR.txt
aa
bh
wwe
ftq
juu
d
8i
yz5
qq1p
m75
list_VAR_2.txt
0
1
2
3
You could load the content of list_VAR_2.txt into an array, as you did for the first file, then loop over both arrays to build your bash commands.
Ex:
#!/bin/bash
readarray -t VARS < list_VAR.txt
readarray -t VARS_2 < list_VAR_2.txt
for VAR in "${VARS[@]}"
do
    for VAR_2 in "${VARS_2[@]}"
    do
        bash data_"$VAR_2"_"$VAR".sh
    done
done
Or build the bash commands by specifying the indexes of the array elements you want.
Ex.
bash data_"${VARS_2[INDEX2]}"_"${VARS[INDEX1]}".sh
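Since the original script is a job array, another option (a sketch, assuming you want all 40 VAR/VAR_2 combinations as separate array tasks) is to derive both indexes from SLURM_ARRAY_TASK_ID:

#!/bin/bash
#SBATCH --array=0-39
readarray -t VARS < list_VAR.txt      # 10 entries
readarray -t VARS_2 < list_VAR_2.txt  # 4 entries
# task IDs 0-39 map to pairs: the VAR index cycles through 0-9,
# and the VAR_2 index advances every 10 tasks
VAR=${VARS[$((SLURM_ARRAY_TASK_ID % 10))]}
VAR_2=${VARS_2[$((SLURM_ARRAY_TASK_ID / 10))]}
bash data_"$VAR_2"_"$VAR".sh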
I am new to cluster computation and I want to repeat one empirical experiment 100 times in Python. For each experiment, I need to generate a set of data and solve an optimization problem, then average the results. To save time, I hope to do this in parallel: for example, if I can use 20 cores, each core only needs to run 5 repetitions.
Here's an example of a test.slurm script that I use for running the test.py script on a single core:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=72:00:00
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --mail-user=address@email
module purge
module load anaconda3/2018.12
source activate py36
python test.py
If I want to run it in multiple cores, how should I modify the slurm file accordingly?
To run the test on multiple cores, you can use the srun -n option; the number after -n is the number of processes to launch:
srun -n 20 python test.py
srun is Slurm's task launcher.
Alternatively, you can set ntasks and cpus-per-task in the Slurm file.
The slurm file will look like this:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --nodes=1
#SBATCH --ntasks=20
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=72:00:00
#SBATCH --mail-type=begin
#SBATCH --mail-type=end
#SBATCH --mail-user=address@email
module purge
module load anaconda3/2018.12
source activate py36
srun python test.py  # srun launches one copy per task (20 here)
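If you prefer 20 independent jobs over one 20-task job, a job array is another option. A minimal sketch, assuming test.py reads the SLURM_ARRAY_TASK_ID environment variable to pick which 5 of the 100 repetitions to run (that logic inside test.py is hypothetical):

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --array=0-19
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=72:00:00
module purge
module load anaconda3/2018.12
source activate py36
# each of the 20 array tasks runs 5 of the 100 repetitions
python test.py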
I am using slurm on a cluster to run jobs and submit a script that looks like below with sbatch:
#!/usr/bin/env bash
#SBATCH -o slurm.sh.out
#SBATCH -p defq
#SBATCH --mail-type=ALL
#SBATCH --mail-user=my.email@something.com
echo "hello"
Can I somehow comment out a #SBATCH line, e.g. the #SBATCH --mail-user=my.email@something.com line in this script? Since the Slurm instructions are Bash comments themselves, I would not know how to achieve this.
Just add another # at the beginning:
##SBATCH --mail-user...
This line will not be processed by Slurm.
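For example, the original script with the mail-user directive disabled:

#!/usr/bin/env bash
#SBATCH -o slurm.sh.out
#SBATCH -p defq
#SBATCH --mail-type=ALL
##SBATCH --mail-user=my.email@something.com
echo "hello"

Slurm only interprets lines that start with exactly #SBATCH, so the doubled-# line is treated as a plain Bash comment.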
I am running a bash script to run jobs on Linux clusters, using SLURM. The relevant part of the script is given below (slurm.sh):
#!/bin/bash
#SBATCH -p parallel
#SBATCH --qos=short
#SBATCH --exclusive
#SBATCH -o out.log
#SBATCH -e err.log
#SBATCH --open-mode=append
#SBATCH --cpus-per-task=1
#SBATCH -J hadoopslurm
#SBATCH --time=01:30:00
#SBATCH --mem-per-cpu=1000
#SBATCH --mail-user=amukherjee708@gmail.com
#SBATCH --mail-type=ALL
#SBATCH -N 5
I am calling this script from another script (ext.sh), a part of which is given below:
#!/bin/bash
for i in {1..3}
do
source slurm.sh
done
..
I want to manipulate the value of N in slurm.sh (#SBATCH -N 5) by setting it to values like 3, 6, 8, etc., inside the for loop of ext.sh. How do I change this value programmatically from ext.sh? Please help.
First note that if you simply source the shell script, you will not submit a job to Slurm; you will simply run the script on the submission node. So you need to write:
#!/bin/bash
for i in {1..3}
do
sbatch slurm.sh
done
Now if you want to change the -N value programmatically, one option is to remove it from the file slurm.sh and pass it as an argument to the sbatch command:
#!/bin/bash
for i in {1..3}
do
sbatch -N $i slurm.sh
done
The above script will submit three jobs, requesting 1, 2, and 3 nodes respectively.
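To use the specific values from the question instead, loop over them directly:

#!/bin/bash
for n in 3 6 8
do
    sbatch -N $n slurm.sh
done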