I am working on a Python code with MPI (mpi4py) and I want to run it across many nodes (each node has 16 processors) through a queue on an HPC cluster.
My code is structured as below:
from mpi4py import MPI
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
count = 0
for i in range(1, size):
    if rank == i:
        for j in range(5):
            res = some_function(some_argument)  # placeholder for the real work
            comm.send(res, dest=0, tag=count)
# (the matching comm.recv calls on rank 0 are not shown in this sketch)
I am able to run this code perfectly fine on the head node of the cluster using the command
$ mpirun -np 48 python codename.py
Here "code" is the name of the python script and in the given example, I am choosing 48 processors. On the head node, for my specific task, the job takes about 1 second to finish (and it successfully gives the desired output).
However, when I run try to submit this same exact code as a job on one of the queues of the HPC cluster, it keeps running for a very long time (many hours) (doesn't finish) and I have to manually kill the job after a day or so. Also, it doesn't give the expected output.
Here is the PBS file that I am using:
#!/bin/sh
#PBS -l nodes=3:ppn=16
#PBS -N phy
#PBS -m abe
#PBS -l walltime=23:00:00
#PBS -j eo
#PBS -q queue_name
cd $PBS_O_WORKDIR
echo 'This job started on: ' `date`
module load python27-extras
mpirun -np 48 python codename.py
I use the command qsub jobname.pbs to submit the job.
I am confused as to why the code runs perfectly fine on the head node but runs into this problem when submitted as a job to run across many processors in a queue. I presume that I may need to change the PBS script. I would be really thankful if someone can suggest what I should do to run such an MPI script as a job on a queue in an HPC cluster.
I didn't need to change my code. This is the PBS script that worked. =)
Apparently, I needed to call the appropriate mpirun in the job script, so that when the code runs on the cluster nodes, it uses the same mpirun that was being used on the head node.
This is the line which made the difference: /opt/intel/impi/4.1.1.036/intel64/bin/mpirun
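If you are unsure which launcher matches your Python installation, you can compare the mpirun on your PATH with the MPI that mpi4py was built against (mpi4py.get_config() reports its build-time settings):
which mpirun
python -c "import mpi4py; print(mpi4py.get_config())"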
This is the job script which worked.
#!/bin/sh
#PBS -l nodes=3:ppn=16
#PBS -N phy
#PBS -m abe
#PBS -l walltime=23:00:00
#PBS -j eo
#PBS -q queue_name
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=16
export I_MPI_PIN=off
echo 'This job started on: ' `date`
/opt/intel/impi/4.1.1.036/intel64/bin/mpirun -np 48 python codename.py
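If the cluster provides an environment module for the same Intel MPI stack, loading it is a less brittle alternative to hard-coding the path (the module name below is hypothetical; check module avail on your system):
module load intel-mpi/4.1.1
mpirun -np 48 python codename.py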
Related
I'm invoking a job with qsub myjob.pbs. In there, I have some logic to run my experiments, which includes running torchrun, a distributed utility for PyTorch. In that command you can set the number of nodes and the number of processes (+ GPUs) per node. Depending on availability, I want to be able to invoke qsub with an arbitrary number of GPUs, so that both -l gpus= and torchrun --nproc_per_node= are set from the same command-line argument.
I tried, the following:
#!/bin/sh
#PBS -l "nodes=1:ppn=12:gpus=$1"
torchrun --standalone --nnodes=1 --nproc_per_node=$1 myscript.py
and invoked it like so:
qsub --pass "4" myjob.pbs
but I got the following error: ERROR: -l: gpus: expected valid integer, found '"$1"'. Is there a way to pass the number of GPUs to the script so that the PBS directives can read them?
The problem is that your shell sees PBS directives as comments, so it will not expand arguments in this way. This means that the expansion of $1 will not occur in:
#PBS -l "nodes=1:ppn=12:gpus=$1"
Instead, you can apply the -l gpus= argument on the command line and remove the directive from your PBS script. For example:
#!/bin/sh
#PBS -l ncpus=12
set -eu
torchrun \
--standalone \
--nnodes=1 \
--nproc_per_node="${nproc_per_node}" \
myscript.py
Then just use a simple wrapper, e.g. run_myjob.sh:
#!/bin/sh
set -eu
qsub \
-l gpus="$1" \
-v nproc_per_node="$1" \
myjob.pbs
This should let you specify the number of GPUs as a command-line argument:
sh run_myjob.sh 4
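This works because qsub consumes -l gpus="$1" at submission time, while -v nproc_per_node="$1" exports the value into the job's environment, which is why the unexpanded ${nproc_per_node} inside myjob.pbs works. You can confirm this from inside the job with:
echo "nproc_per_node=${nproc_per_node}"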
I have submitted a job to a multicore cluster running the LSF platform; the script is shown at the end. The two executables, exec1 and exec2, start at the same time. My intention was that, since they are separated by a semicolon, the second would start only after the first had finished. Of course, this caused several problems and the job couldn't terminate correctly. Now that I have figured out this behavior, I am writing separate job-submission files for each executable. Can anybody explain why these executables run at the same time?
#!/bin/bash -l
#
# Batch script for bash users
#
#BSUB -L /bin/bash
#BSUB -n 10
#BSUB -J jobname
#BSUB -oo output.log
#BSUB -eo error.log
#BSUB -q queue
#BSUB -P project
#BSUB -R "span[hosts=1]"
#BSUB -W 4:0
source /etc/profile.d/modules.sh
module purge
module load intel_comp/c4/2013.0.028
module load hdf5/1.8.9
module load platform_mpi/8.2.1
export OMP_NUM_THREADS=1
export MP_TASK_AFFINITY=core:$OMP_NUM_THREADS
OPT="-aff=automatic:latency"
mpirun $OPT exec1; mpirun $OPT exec2
I assume that both exec1 and exec2 are MPI applications?
Theoretically it should work, but LSF is probably doing something odd and the mpirun for exec1 is exiting before exec1 actually exits. You could instead try:
mpirun $OPT exec1 && mpirun $OPT exec2
so that mpirun $OPT exec1 has to exit with return code 0 before exec2 is launched.
However, it probably isn't a great idea to run two MPI jobs from the same script like this, since, for instance, the MPI environment variable setup may introduce conflicts. What you should really do is use job chaining, so that exec2 is submitted as a separate job that starts only after exec1's job has completed.
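With LSF this can be done through a dependency expression on the second submission, along these lines (a sketch; the job names are made up and the remaining #BSUB options are omitted):
bsub -J step1 mpirun $OPT exec1
bsub -J step2 -w 'done(step1)' mpirun $OPT exec2
Here -w 'done(step1)' holds the second job until the job named step1 has finished successfully.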
I can submit a job to PBS using either the non-interactive or the interactive batch approach. However, I need to use the PBS directives inside a function. In other words, I need a structure like this:
#!/bin/sh
pbs_setup () {
#PBS -l $1
#PBS -N $2
#PBS -q normal
#PBS -A $USER
#PBS -m ae
#PBS -M $USER@gmail.com
#PBS -q normal
#PBS -l nodes=1:ppn=8
#PBS
}
pbs_setup "walltime=6:00:00" "step3";
echo " "
echo "Job started
echo " "
echo "Job Ended
When I submit this job, it does not work.
In fact, my final goal is to separate the job directives from the main body of the code, so that when the HPC system changes I only have to edit a shell file containing this function instead of editing all the scripts. I would appreciate any suggestions.
You could create a custom submission command that collects the job options and passes them as command-line parameters to the actual qsub call.
Here is a rather basic example of this. In real usage I would add more sophisticated parameter handling tailored to the type of jobs and more consistent with the qsub interface. Handling interactive jobs would also need additional work.
submit.sh
#!/bin/bash
walltime="${2:-06:00:00}"
name="${3:-step3}"
queue="normal"
acct="$USER"
mailevents="ae"
mailaddress="$USER@gmail.com"
resources="nodes=1:ppn=8"
if [ $# -lt 1 ] ; then
echo "Usage: submit.sh script [walltime [name]]" >
exit 1
fi
script="$1"
qsub -l "$walltime" -N "$name" -q "$queue" -A "$acct" \
-m "$mailevents" -M "$mailaddress" -l "$resources" "$script"
script.sh
#!/bin/bash
echo " "
echo "Job started"
echo " "
echo "Job Ended"
This is supposed to be used as
submit.sh script.sh 06:00:00 step3
The issue with that job script is that the #PBS lines need to be the first non-comment lines in the script file; qsub stops reading directives once it reaches executable commands, so directives inside a function body are ignored.
When I attempted this same concept, I used the same type of function you have, but cat'ed the resulting directives and the actual commands into another file; i.e., an overarching script creates the 'job' script. You can put the HPC requirements in a separate file, then source it from the creation script, as sketched below.
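For example, a creation script along those lines might look like this (a sketch; the file names and the variables in hpc_settings.sh are hypothetical):
#!/bin/bash
# hpc_settings.sh defines WALLTIME, RESOURCES and QUEUE for the current cluster,
# so only that one file needs editing when the HPC changes
. ./hpc_settings.sh
cat > job.pbs <<EOF
#!/bin/sh
#PBS -l walltime=$WALLTIME
#PBS -l $RESOURCES
#PBS -q $QUEUE
cd \$PBS_O_WORKDIR
echo "Job started"
echo "Job Ended"
EOF
qsub job.pbs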
Edit in response to comment:
e.g.
To specify a path to start the job from:
#PBS -d init_path
"working directory path to be used for the job, PBS_O_INITDIR"
Or
#PBS -D root_path
"root directory to be used for the job, PBS_O_ROOTDIR."
Or
#PBS -w working_path
"If the -w option is not specified, the default working directory is the current directory. This option sets the environment variable PBS_O_WORKDIR."
So the default PBS_O_WORKDIR is the directory you are in when you call qsub to submit the script.
Thus, if you set the specific options (-d, -D, -w) to paths suited to the actual script running environment, you'll be able to use the paths you intend.
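For example, submitting with the Torque -d option (the path here is hypothetical):
qsub -d /home/user/project myjob.pbs
makes the job start in /home/user/project instead of wherever the cluster would otherwise drop it.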
For specifics including default values of these and other options, you can check out the man page for your app. If using the Torque version of the PBS system, it's available at linux.die.net - qsub
I aim to run some Julia-coded simulations on a cluster (no complicated parallel processing involved) using a .pbs file and qsub.
I know two ways to run a .jl file from Bash. The first one is
/path/to/julia myscript.jl
The second one is
exec '/Applications/bla/bla/julia/bin/julia'
include("myscript.jl")
Here is my .pbs file. I cannot test if it works because I don't know yet where the Julia application is stored on the cluster.
#!/bin/bash
#PBS -l procs=1
#PBS -l walltime=240:00:00
#PBS -N Name
#PBS -m ea
#PBS -M name@something.com
#PBS -l pmem=1000mb
#PBS -t 1-3
echo "Starting run at: `date`"
exec '/Applications/bla/bla/julia/bin/julia'
include("myscript.jl")
echo "Job finished with exit code $? at: `date`"
Does it seem correct to you? Or should I, somehow, make an .exec out of my .jl?
You want to directly execute Julia, with your .jl program file as an argument.
Something like:
echo "Starting run at: `date`"
/Applications/bla/bla/julia/bin/julia myscript.jl
echo "Job finished with exit code $? at: `date`"
PBS will catch the standard out and put it in a file such as .pbs.o#### (similarly the standard error in .pbs.e####).
You might find an issue with where your 'present working directory' is when the script runs. Some clusters are set up to 'cd' you to a /tmp/ filesystem, or just drop you in your home directory, rather than starting where the script was submitted from.
In that case, the simple solution is to use a full path for the Julia script, but this makes it difficult to reuse your PBS submission script.
/Applications/bla/bla/julia/bin/julia ~/mydirectory/myscript.jl
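Alternatively, changing into the submission directory at the top of the job script keeps a relative path reusable, since PBS sets PBS_O_WORKDIR to the directory you ran qsub from (the same trick the PBS scripts earlier on this page use):
cd "$PBS_O_WORKDIR"
/Applications/bla/bla/julia/bin/julia myscript.jl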
Is it possible to parallelize across a for loop in a PBS file?
Below is my attempt.pbs file. I would like to allocate 4 nodes with 16 processes per node. I have successfully allocated the nodes, but now I have 4 jobs and I would like to send one job to each node. (I need to do this because the queuing algorithm would make me wait a few days if I submitted 4 separate jobs on the cluster I'm using.)
#!/bin/bash
#PBS -q normal
#PBS -l nodes=4:ppn=16:native
#PBS -l walltime=10:00:00
#PBS -N HuMiBi000
#PBS -o HuMiBi.000.out
#PBS -e HuMiBi.000.err
#PBS -A csd399
#PBS -m abe
#PBS -V
./job1.sh
./job2.sh
./job3.sh
./job4.sh
The jobs run independently and don't use the same data. Can I run 1 job per node from the same PBS script?
Thank you.
The standard way to achieve this is through a Message Passing Interface (MPI) library. Open MPI is a fine implementation you can work with, and its documentation provides basic examples and tutorials if you want to learn more.
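For instance, with Open MPI you could launch one process per node and let a small wrapper choose a different script per rank. This is only a sketch: wrapper.sh is a made-up helper, and it relies on OMPI_COMM_WORLD_RANK, the rank variable Open MPI exports to the processes it launches.
#!/bin/sh
# wrapper.sh: rank 0 runs job1.sh, rank 1 runs job2.sh, and so on
exec ./job$((OMPI_COMM_WORLD_RANK + 1)).sh
Then, in the PBS file, the four serial lines become a single launch of one process per node:
mpirun -np 4 --map-by ppr:1:node ./wrapper.sh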