How do you go about running the same program multiple times, but with different arguments for each instance, on a cluster submitted through PBS? Also, is it possible to assign each of these program instances to a separate node? Currently, if I have a PBS job with the following script:
#PBS -l nodes=1:ppn=1
/myscript
it will run the single program once, on a single node. If I use the following script:
#PBS -l nodes=1:ppn=1
/myscript -arg arg1 &
/myscript -arg arg2
I believe this will run both instances of the program, but only on a single node. Can I request multiple nodes and then assign a specific node to each instance of the program I wish to run?
Any help or suggestions will be much appreciated. I apologize if I am not clear on anything or am using incorrect terminology...I am very new to cluster computing.
You want to do this using a form of MPI. MPI stands for Message Passing Interface, and there are a number of libraries that implement the interface. I would recommend OpenMPI, as it integrates very well with PBS. Since you say you are new, you might appreciate this tutorial.
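For illustration, here is a minimal sketch (not tested on your cluster) of how OpenMPI's MPMD syntax could launch one instance per argument from a PBS script; the script path and arguments are the ones from your question:
#PBS -l nodes=2:ppn=1
cd $PBS_O_WORKDIR
# MPMD launch: each colon-separated block starts its own executable and arguments;
# an OpenMPI built with PBS support picks up the allocated nodes automatically
mpirun -np 1 /myscript -arg arg1 : -np 1 /myscript -arg arg2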
GNU Parallel would be ideal for this purpose. An example PBS script for your case:
#PBS -l nodes=2:ppn=4 # set ppn for however many cores per node on your cluster
#Other PBS directives
module load gnu-parallel # this will depend on your cluster setup
parallel -j4 --sshloginfile $PBS_NODEFILE /myscript -arg {} \
::: arg1 arg2 arg3 arg4 arg5 arg6 arg7 arg8
GNU Parallel will handle ssh connections to the various nodes. I've written out the example with arguments on the command line, but you'd probably want to read the arguments from a text file. Here are links to the man page and tutorial. Option -j4 should match the ppn (number of cores per node).
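For example, if the arguments are listed one per line in a file (args.txt here is just an illustrative name), GNU Parallel can read them directly with the :::: operator:
parallel -j4 --sshloginfile $PBS_NODEFILE /myscript -arg {} :::: args.txt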
I have a complex model written in Matlab. The model was not written by us and is best thought of as a "black box", i.e. fixing the relevant problems from the inside would require rewriting the entire model, which would take years.
If I have an "embarrassingly parallel" problem I can use an array to submit X variations of the same simulation with the option #SBATCH --array=1-X. However, clusters normally have a (frustratingly small) limit on the maximum array size.
Whilst using a PBS/TORQUE cluster I have got around this problem by forcing Matlab to run on a single thread, requesting multiple CPUs and then running multiple instances of Matlab in the background. An example submission script is:
#!/bin/bash
<OTHER PBS COMMANDS>
#PBS -l nodes=1:ppn=5,walltime=30:00:00
#PBS -t 1-600
<GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON ARRAY NUMBER>
# define Matlab options
options="-nodesktop -noFigureWindows -nosplash -singleCompThread"
for sub_job in {1..5}
do
<GATHER DYNAMIC ARGUMENTS FOR MATLAB FUNCTION CALLS BASED ON LOOP NUMBER (i.e. sub_job)>
matlab ${options} -r "run_model(${arg1}, ${arg2}, ..., ${argN}); exit" &
done
wait
<TIDY UP AND FINISH COMMANDS>
Can anyone help me do the equivalent on a SLURM cluster?
The parfor function will not run my model in a parallel loop in Matlab.
The PBS/TORQUE directives were very intuitive, but SLURM's are confusing me. Assuming a submission script structured similarly to my PBS example, here is what I think certain commands will result in.
--cpus-per-task=5 seems like the most obvious one to me. Would I put srun in front of the matlab command in the loop, or leave it as it is in the PBS script loop?
--ntasks=5 I would imagine would request 5 CPUs, but will run in serial unless a program specifically requests them (e.g. MPI or multi-threaded Python). Would I need to put srun in front of the Matlab command in this case?
I am not a big expert on array jobs but I can help you with the inner loop.
I would always use GNU parallel to run several serial processes in parallel within a single job that has more than one CPU available. It is a simple Perl script, so not difficult to 'install', and its syntax is extremely easy. What it basically does is run some (nested) loop in parallel. Each iteration of this loop contains a (long) process, like your Matlab command. In contrast to your solution, it does not launch all of these processes at once; it runs only N processes at a time (where N is the number of CPUs you have available). As soon as one finishes, the next one is started, and so on until your entire loop is finished. It is perfectly fine that not all processes take the same amount of time: as soon as one CPU is freed, another process is started.
Then, what you would like to do is launch 600 jobs (for which I substitute 3 below, to show the complete behavior), each with 5 CPUs. To do that you could do the following (where I have not included the actual Matlab run, but that can trivially be added):
#!/bin/bash
#SBATCH --job-name example
#SBATCH --out job.slurm.out
#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 5
#SBATCH --mem 512
#SBATCH --time 30:00:00
#SBATCH --array 1-3
cmd="echo matlab array=${SLURM_ARRAY_TASK_ID}"
parallel --max-procs=${SLURM_CPUS_PER_TASK} "$cmd,subjob={1}; sleep 30" ::: {1..5}
Submitting this job using:
$ sbatch job.slurm
submits 3 jobs to the queue. For example:
$ squeue | grep tdegeus
3395882_1 debug example tdegeus R 0:01 1 c07
3395882_2 debug example tdegeus R 0:01 1 c07
3395882_3 debug example tdegeus R 0:01 1 c07
Each job gets 5 CPUs. These are exploited by the parallel command, to run your inner loop in parallel. Once again, the range of this inner loop may be (much) larger than 5, parallel takes care of the balancing between the 5 available CPUs within this job.
Let's inspect the output:
$ cat job.slurm.out
matlab array=2,subjob=1
matlab array=2,subjob=2
matlab array=2,subjob=3
matlab array=2,subjob=4
matlab array=2,subjob=5
matlab array=1,subjob=1
matlab array=3,subjob=1
matlab array=1,subjob=2
matlab array=1,subjob=3
matlab array=1,subjob=4
matlab array=3,subjob=2
matlab array=3,subjob=3
matlab array=1,subjob=5
matlab array=3,subjob=4
matlab array=3,subjob=5
You can clearly see the 3 times 5 processes run at the same time now (as their output is mixed).
No need in this case to use srun. SLURM will create 3 jobs. Within each job, everything happens on an individual compute node (i.e. as if you were running on your own system).
Installing GNU Parallel - option 1
To 'install' GNU parallel into your home folder, for example in ~/opt.
Download the latest GNU Parallel.
Make the directory ~/opt if it does not yet exist
mkdir $HOME/opt
'Install' GNU Parallel:
tar jxvf parallel-latest.tar.bz2
cd parallel-XXXXXXXX
./configure --prefix=$HOME/opt
make
make install
Add ~/opt to your path:
export PATH=$HOME/opt/bin:$PATH
(To make it permanent, add that line to your ~/.bashrc.)
Installing GNU Parallel - option 2
Use conda.
(Optional) Create a new environment
conda create --name myenv
Load an existing environment:
conda activate myenv
Install GNU parallel:
conda install -c conda-forge parallel
Note that the command is available only when the environment is loaded.
While Tom's suggestion to use GNU Parallel is a good one, I will attempt to answer the question asked.
If you want to run 5 instances of the matlab command with the same arguments (for example, if they were communicating via MPI), then you would want to ask for --cpus-per-task=1 and --ntasks=5, and you should preface your matlab line with srun and get rid of the loop.
In your case, as each of your 5 calls to matlab is independent, you want to ask for --cpus-per-task=5 and --ntasks=1. This will ensure that you allocate 5 CPU cores per job to do with as you wish. You can preface your matlab line with srun if you wish, but it will make little difference as you are only running one task.
Of course, this is only efficient if each of your 5 matlab runs takes roughly the same amount of time, since if one takes much longer the other 4 CPU cores will sit idle, waiting for the fifth to finish.
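For reference, a minimal sketch of how the question's PBS script might look under this advice; the SLURM directives mirror the PBS ones, and the argument gathering is left as a placeholder, as in the original:
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=30:00:00
#SBATCH --array=1-600
options="-nodesktop -noFigureWindows -nosplash -singleCompThread"
for sub_job in {1..5}
do
    # <GATHER DYNAMIC ARGUMENTS BASED ON $SLURM_ARRAY_TASK_ID AND $sub_job>
    matlab ${options} -r "run_model(${arg1}, ${arg2}); exit" &
done
wait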
You can do it with Python and subprocess. In the approach described below, you just set the number of nodes and tasks and that is it: no need for an array, no need to match the array size to the number of simulations, etc. It will simply execute the Python code until it is done; more nodes means faster execution.
Also, it is easier to decide on variables, as everything is prepared in Python (which is easier than bash).
It does assume that the Matlab scripts save their output to file - nothing is returned by this function (though that can be changed).
In the sbatch script you need to add something like this:
#!/bin/bash
#SBATCH --output=out_cluster.log
#SBATCH --error=err_cluster.log
#SBATCH --time=8:00:00
#SBATCH --nodes=36
#SBATCH --exclusive
#SBATCH --cpus-per-task=2
export IPYTHONDIR="`pwd`/.ipython"
export IPYTHON_PROFILE=ipyparallel.${SLURM_JOBID}
whereis ipcontroller
sleep 3
echo "===== Beginning ipcontroller execution ======"
ipcontroller --init --ip='*' --nodb --profile=${IPYTHON_PROFILE} --ping=30000 & # --sqlitedb
echo "===== Finish ipcontroller execution ======"
sleep 15
srun ipengine --profile=${IPYTHON_PROFILE} --timeout=300 &
sleep 75
echo "===== Beginning python execution ======"
python run_simulations.py
Depending on your system, read more here: https://ipyparallel.readthedocs.io/en/latest/process.html
and run_simulations.py should contain something like this:
import os
from ipyparallel import Client
import sys
from tqdm import tqdm
import subprocess
from subprocess import PIPE
def run_sim(x):
    import os
    import subprocess
    from subprocess import PIPE
    # send job! Build the Matlab call from the parameter list; subprocess passes
    # the -r string as a single argument, so no extra literal quotes are needed
    params = ','.join(str(i) for i in x)
    p1 = subprocess.Popen(['matlab', '-r', f'run_model({params}); exit'],
                          env=dict(**os.environ))
    p1.wait()
    return
##load ipython parallel
rc = Client(profile=os.getenv('IPYTHON_PROFILE'))
print('Using ipyparallel with %d engines' % len(rc))
lview = rc.load_balanced_view()
view = rc[:]
print('Using ipyparallel with %d engines' % len(rc))
sys.stdout.flush()
map_function = lview.map_sync
to_send = []
#prepare variables <-- here you should prepare the arguments for matlab
####################
for param_1 in [1,2,3,4]:
    for param_2 in [10,20,40]:
        to_send.append([param_1, param_2])
ind_raw_features = lview.map_async(run_sim,to_send)
all_results = []
print('Sending jobs');sys.stdout.flush()
for i in tqdm(ind_raw_features, file=sys.stdout):
    all_results.append(i)
You also get a progress bar on stdout, which is nice... You can also easily add a check to see whether the output files already exist and skip those runs.
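If you want the existence check mentioned above, one possible sketch (the result-file naming is purely hypothetical) is to filter to_send before dispatching:
to_send = [p for p in to_send
           if not os.path.exists(f'results_{p[0]}_{p[1]}.mat')]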
I am trying to use GNU Parallel (version 20160922) to launch a large number of protein docking jobs (using UCSF Dock 6.7). I am running on a high-performance cluster with several dozen nodes, each with 28-40 cores. The system is running CentOS 7.1.1503 and uses TORQUE for job management.
I am trying to submit each config file in dock.n.d to the dock executable, one per core on the cluster. Here is my PBS file:
#PBS -l walltime=01:00:00
#PBS -N pardock
#PBS -l nodes=1:ppn=28
#PBS -j oe
#PBS -o /home/path/to/pardock.log
cd $PBS_O_WORKDIR
cat $PBS_NODEFILE > temp.txt
#f=$(pwd)
ls dock.in.d/*.in | parallel -j 300 --sshloginfile $PBS_NODEFILE "/path/to/local/bin/dock6 -i {} -o {}.out"
This works fine on a single node as written above. But when I scale up to, say, 300 processors (with -l procs=300) across several nodes, I begin to get these errors:
parallel: Warning: ssh to node026 only allows for 99 simultaneous logins.
parallel: Warning: You may raise this by changing /etc/ssh/sshd_config:MaxStartups and MaxSessions on node026.
What I do not understand is why there are so many logins. Each node only has 28-40 cores so, as specified in $PBS_NODEFILE, I would expect there to only be 28-40 SSH logins at any point in time on these nodes.
Am I misunderstanding or misexecuting something here? Please advise what other information I can provide or what direction I should go to get this to work.
UPDATE
So my problem above was the combination of -j 300 and the use of $PBS_NODEFILE, which has a separate entry for each core on each node. In that case it seems I should use -j 1. But then all the jobs seem to run on a single node.
So my question remains: how do I get GNU Parallel to balance the jobs between nodes, utilizing all cores, without creating an excessive number of SSH logins due to multiple jobs per core?
Thank you!
You are asking GNU Parallel to ignore the number of cores and run 300 jobs on each server.
Try instead:
ls dock.in.d/*.in | parallel --sshloginfile $PBS_NODEFILE /path/to/local/bin/dock6 -i {} -o {}.out
This will default to --jobs 100% which is one job per core on all machines.
If you are not allowed to use all cores on the machines, you can prepend X/ to the hosts in --sshloginfile to force X as the number of cores:
28/server1.example.com
20/server2.example.com
16/server3.example.net
This will force GNU Parallel to skip the detection of cores and instead use 28, 20, and 16, respectively. Combined with -j 100%, this controls how many jobs are started on the different servers.
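For example, assuming the host list above is saved as hosts.txt (an illustrative name), the pipeline from the question becomes:
ls dock.in.d/*.in | parallel -j 100% --sshloginfile hosts.txt /path/to/local/bin/dock6 -i {} -o {}.out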
I have a master and two nodes. They are installed with SGE, and I have a shell script ready on all the nodes as well. Now I want to use qsub to submit the job to all my nodes.
I used:
qsub -V -b n -cwd /root/remotescript.sh
but it seems that only one node is doing the job. I am wondering how I can submit jobs to all the nodes. What would the command be?
SGE is meant to dispatch jobs to worker nodes. In your example, you create one job, so one node will run it. If you want to run a job on each of your nodes, you need to submit more than one job. If you want to target specific nodes, you should probably use something closer to:
qsub -V -b n -cwd -l hostname=node001 /root/remotescript.sh
qsub -V -b n -cwd -l hostname=node002 /root/remotescript.sh
The "-l hostname=*" parameter will require a specific host to run the job.
What are you trying to do? The general use case of using a grid engine is to let the scheduler dispatch the jobs so you don't have to use the "-l hostname=*" parameter. So technically you should just submit a bunch of jobs to SGE and let it dispatch them according to node availability.
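For example, a minimal sketch of simply submitting several copies and letting SGE place them (the count of 4 is arbitrary):
for i in $(seq 1 4); do
    qsub -V -b n -cwd /root/remotescript.sh
done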
Finch_Powers' answer describes well how SGE allocates resources, so I'll elaborate below on the specifics of your question, which may be why you are not getting the desired outcome.
You mention launching remote script via:
qsub -V -b n -cwd /root/remotescript.sh
Also, you mention again that these scripts are located on the nodes:
"And I have a shell script ready on all the nodes as well"
This is not how SGE is designed to work, although it can do this. Typical usage is to have the same script (or scripts) accessible to all nodes via network-mounted storage on the execution nodes, and to let SGE decide which nodes run the script.
To run remote code, you may be better served using plain SSH.
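A minimal sketch of that plain-SSH alternative, reusing the node names from the qsub examples above:
ssh node001 /root/remotescript.sh
ssh node002 /root/remotescript.sh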
I am interested to know the best way to start a script in the background on multiple machines as fast as possible. Currently, I run the following for each IP address:
ssh user@ip -t "perl ~/setup.pl >& ~/log &" &
But this is slow, as it SSHes into each machine one by one to start setup.pl in the background there, and I have a large number of machines to start this script on.
I tried using GNU parallel, but couldn't get it to work properly:
seq COUNT | parallel -j 1 -u -S ip1,ip2,... perl ~/setup.pl >& ~/log
But it doesn't seem to work: I see the script started by GNU Parallel on the target machine, but it's stagnant. I don't see anything in the log.
What am I doing wrong in using the GNU parallel?
GNU Parallel assumes by default that it does not matter which machine a job runs on - which is normally true for computations. In your case it matters greatly: you want one job on each machine. Also, GNU Parallel will pass a number as an argument to setup.pl, which you clearly do not want.
Luckily GNU Parallel does support what you want using --nonall:
http://www.gnu.org/software/parallel/man.html#example__running_the_same_command_on_remote_computers
I encourage you to read and understand the rest of the examples, too.
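For example, a sketch based on the manual's --nonall example, with ip1,ip2,... standing in for your actual machines; --nonall runs the command once on each server given to -S, without reading arguments from stdin:
parallel --nonall -S ip1,ip2,... "perl ~/setup.pl >& ~/log"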
I recommend that you use pdsh
It allows you to run the same command on multiple machines
Usage:
pdsh -w machine1,machine2,...,machineN <command>
It might not be included in your Linux distribution, so install it through yum or apt.
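For example, applied to the question's setup (host names are placeholders; nohup is one way to keep the script alive after pdsh disconnects):
pdsh -w ip1,ip2,ip3 'nohup perl ~/setup.pl > ~/log 2>&1 &'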
Try wrapping ssh user@ip -t "perl ~/setup.pl >& ~/log &" & in a shell script, and then run ./myscript.sh & for each IP address.
Currently, I have a driver program that runs several thousand instances of a "payload" program and does some post-processing of the output. The driver currently calls the payload program directly, using a shell() function, from multiple threads. The shell() function executes a command in the current working directory, blocks until the command is finished running, and returns the data that was sent to stdout by the command. This works well on a single multicore machine. I want to modify the driver to submit qsub jobs to a large compute cluster instead, for more parallelism.
Is there a way to make the qsub command output its results to stdout instead of a file and block until the job is finished? Basically, I want it to act as much like "normal" execution of a command as possible, so that I can parallelize to the cluster with as little modification of my driver program as possible.
Edit: I thought all the grid engines were pretty much standardized. If they're not and it matters, I'm using Torque.
You don't mention what queuing system you're using, but SGE supports the '-sync y' option to qsub which will cause it to block until the job completes or exits.
In TORQUE this is done using the -x and -I options. qsub -I specifies that it should be interactive and -x says run only the command specified. For example:
qsub -I -x myscript.sh
will not return until myscript.sh finishes execution.
In PBS you can use qsub -Wblock=true <command>