I'm trying to submit multiple jobs in parallel as a preprocessing step in an sbatch script, using srun. The loop reads a file containing 40 file names and runs "srun command" on each file. However, not all files are sent off with srun, and the rest of the sbatch script continues after the ones that did get submitted finish. The real sbatch script is more complicated and I can't use job arrays with it, so that won't work. This part should be pretty straightforward, though.
I made this simple test case as a sanity check and it does the same thing. For every file name in the file list (40 of them) it creates a new file containing 'foo'. Every time I submit the script with sbatch, a different number of files gets sent off with srun.
#!/bin/sh
#SBATCH --job-name=loop
#SBATCH --nodes=5
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=1G
#SBATCH -A zheng_lab
#SBATCH -p exacloud
#SBATCH --error=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.err
#SBATCH --output=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/log_files/test.%J.out
DIR=/home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel
SAMPLES=$DIR/samples.txt
OUT_DIR=$DIR/test_out
FOO_FILE=$DIR/foo.txt
# Create output directory
srun -N 1 -n 1 -c 1 mkdir $OUT_DIR
# How many files to run
num_files=$(srun -N 1 -n 1 -c 1 wc -l $SAMPLES)
echo "Number of input files: " $num_files
# Create a new file for every file in listing (run 5 at a time, 1 for each node)
while read F ;
do
fn="$(rev <<< "$F" | cut -d'/' -f 1 | rev)" # Remove path for writing output to new directory
echo $fn
srun -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done <$SAMPLES
wait
# How many files actually got created
finished=$(srun -N 1 -n 1 -c 1 ls -lh $OUT_DIR/*out | wc -l)
echo "Number of files submitted: " $finished
Here is my output log file the last time I tried to run it:
Number of input files: 40 /home/exacloud/lustre1/zheng_lab/users/eggerj/Dissertation/splice_net_prototype/beatAML_data/splicing_quantification/test_build_parallel/samples.txt
sample1
sample2
sample3
sample4
sample5
sample6
sample7
sample8
Number of files submitted: 8
The issue is that srun redirects its stdin to the tasks it starts, and therefore the contents of $SAMPLES are consumed, in an unpredictable way, by all the cat commands that are started.
Try with
srun --input none -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
The --input none parameter will tell srun to not mess with stdin.
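Applied to the test script above, the loop would then look like this (everything else unchanged; it is the same loop with the extra flag added):
while read F ; do
    fn="$(rev <<< "$F" | cut -d'/' -f 1 | rev)" # Remove path for writing output to new directory
    echo $fn
    srun --input none -N 1 -n 1 -c 1 cat $FOO_FILE > $OUT_DIR/$fn.out &
done <$SAMPLES
wait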
I have the following slurm script:
#!/bin/bash
#SBATCH -A XXX-CPU
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH -p cclake
#SBATCH -D analyses/
#SBATCH -c 12
#SBATCH -t 01:00:00
#SBATCH --mem=10G
#SBATCH -J splitBAM
#SBATCH -a 1-12
#SBATCH -o analyses/splitBAM_%a.log
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands1.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands2.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands3.csv | bash
Normally I would run another slurm script with a single bash command to remove some files with the option #SBATCH --dependency=afterok:job_id(first job).
What I want to do is include this in the above script, but when I add the line rm file1 file2 file3 it will obviously do this for each job in the array, but I only want to run the command once after all the jobs in the array have finished.
Is there a way to mark this command so that it is not part of the array? That would allow me to do everything with one script instead of two.
There is no specific Slurm syntax for that, but you can add an if statement at the end of the script that checks whether any other jobs from the array are still in the queue.
if [[ $(squeue -h -j $SLURM_ARRAY_JOB_ID | wc -l) == 1 ]] ; then
    rm file1 file2 file3
fi
If there is no other job running from that job array, then delete the files.
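Putting it together with the array script from the question, the guard goes at the very end, after the per-task work (a sketch; file1 file2 file3 are just the placeholders from the question):
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands1.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands2.csv | bash
sed -n ${SLURM_ARRAY_TASK_ID}p analyses/slurm/commands3.csv | bash
# Only the last array task still in the queue sees a single squeue line (itself)
if [[ $(squeue -h -j $SLURM_ARRAY_JOB_ID | wc -l) == 1 ]] ; then
    rm file1 file2 file3
fi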
New to Slurm. I have a script that runs the same command many times with multiple inputs and outputs. If I have another shell script, is there a way that I can loop through it with multiple srun commands? My thought would be something along the lines of:
shell script:
#!/bin/bash
ExCommand -f input1a -b input2a -c input3a -o outputa
ExCommand -f input1b -b input2b -c input3b -o outputb
ExCommand -f input1c -b input2c -c input3c -o outputc
ExCommand -f input1d -b input2d -c input3d -o outputd
ExCommand -f input1e -b input2e -c input3e -o outpute
sbatch script
#!/bin/bash
## Job Name
#SBATCH --job-name=collectAlignmentMetrics
## Allocation Definition
## Resources
## Nodes
#SBATCH --nodes=1
## Time limit
#SBATCH --time=4:00:00
## Memory per node
#SBATCH --mem=64G
## Specify the working directory for this job
for line in shellscript
do
srun command
done
Any ideas?
Try replacing your for loop with this:
while read -r line; do
    if [[ $line == \#* ]]; then continue; fi
    srun $line
done < shellscript
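If the commands are independent and you would rather run them concurrently, a variant along these lines could work; this is a sketch and assumes you also request enough tasks in the header (for example #SBATCH --ntasks=5) so the job steps can run side by side:
while read -r line; do
    if [[ $line == \#* || -z $line ]]; then continue; fi   # skip comments and blank lines
    srun -N 1 -n 1 --exclusive $line &                     # each command becomes its own job step
done < shellscript
wait                                                       # block until all steps have finished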
I want to start many independent tasks (job steps) as part of one job and want to keep track of the highest exit code of all these tasks.
Inspired by this question I am currently doing something like
#SBATCH stuf....
for i in {1..3}; do
srun -n 1 ./myprog ${i} >& task${i}.log &
done
wait
in my jobs.sh, which I sbatch, to start my tasks.
How can I define a variable exitcode which, after the wait command, contains the highest exit code of all the tasks?
Thanks so much in advance!
You can store the jobs' PIDs in an array and wait for each one, like this:
#SBATCH stuf....
for i in {1..3}; do
    srun -n 1 ./myprog ${i} >& task${i}.log &
    pids+=($!)
done
for pid in "${pids[@]}"; do
    wait $pid
    exitcode=$(( $? > exitcode ? $? : exitcode ))
done
echo $exitcode
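If you also want Slurm to mark the job itself as FAILED whenever any task failed, you can end the batch script with that code, since a non-zero exit status of the batch script is enough:
# Propagate the worst task status as the job's exit status
exit $exitcode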
You can use GNU parallel to your advantage in such a case:
#SBATCH stuf....
parallel --joblog ./jobs.log -P 3 "srun -n1 --exclusive ./myprog {} >& task{}.log " ::: {1..3}
This will run srun ./myprog three times with arguments 1, 2 and 3 respectively, and redirect the output to three files named task1.log, task2.log and task3.log, just like your for loop does.
With the --joblog option, it will furthermore create a file jobs.log that will contain some information about each run, among which is the exit code, in column 7. You can then extract the maximum with
awk 'NR>1 {print $7}' jobs.log | sort -n | tail -1
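If you would rather capture that maximum in an exitcode variable directly inside the batch script, something along these lines should work (a sketch based on the joblog layout described above, where column 7 holds the exit value and the first line is a header):
exitcode=$(awk 'NR > 1 && $7 > max { max = $7 } END { print max + 0 }' ./jobs.log)
echo $exitcode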
I have a shell file script.sh with the following commands:
#!/bin/sh
#SBATCH --partition=univ2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=13
mpirun -n 25 benchmark.out $param
where param is an integer from the set {1,2,...,10}. Here param is a command-line argument that is passed to the executable benchmark.out. I want to create another shell file master.sh (in the same directory as script.sh) containing a loop over param (from 1 to 10), such that on each iteration, script.sh is executed with the given value of param. What should this file look like? Thank you.
Master
#!/bin/bash
for param in $(seq 1 10); do
    sbatch script.sh $param
done
Script
#!/bin/sh
#SBATCH --partition=univ2
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=13
mpirun -n 25 benchmark.out $1
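For reference, sbatch forwards anything you put after the script name to the batch script as positional parameters, which is why $1 is available inside script.sh. Submitting a single iteration by hand would look like:
sbatch script.sh 7   # inside script.sh, $1 is now 7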
I am running a bash script to run jobs on Linux clusters, using SLURM. The relevant part of the script is given below (slurm.sh):
#!/bin/bash
#SBATCH -p parallel
#SBATCH --qos=short
#SBATCH --exclusive
#SBATCH -o out.log
#SBATCH -e err.log
#SBATCH --open-mode=append
#SBATCH --cpus-per-task=1
#SBATCH -J hadoopslurm
#SBATCH --time=01:30:00
#SBATCH --mem-per-cpu=1000
#SBATCH --mail-user=amukherjee708@gmail.com
#SBATCH --mail-type=ALL
#SBATCH -N 5
I am calling this script from another script (ext.sh), a part of which is given below:
#!/bin/bash
for i in {1..3}
do
source slurm.sh
done
..
I want to manipulate the value of N in slurm.sh (#SBATCH -N 5) by setting it to values like 3, 6, 8, etc., inside the for loop of ext.sh. How do I access the variable programmatically from ext.sh? Please help.
First note that if you simply source the shell script, you will not submit a job to Slurm; you will simply run the script on the submission node. So you need to write
#!/bin/bash
for i in {1..3}
do
sbatch slurm.sh
done
Now if you want to change the -N programmatically, one option is to remove it from the file slurm.sh and pass it as an argument to the sbatch command:
#!/bin/bash
for i in {1..3}
do
sbatch -N $i slurm.sh
done
The above script will submit three jobs, requesting 1, 2, and 3 nodes respectively.
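If you want the specific values you mentioned (3, 6, 8) rather than the loop index, you can iterate over them directly; a minimal sketch:
#!/bin/bash
for n in 3 6 8; do
    sbatch -N $n slurm.sh
done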