Multiple nodes per task SLURM - parallel-processing

I would like to run one job that requires more CPUs than are available on a single node. The maximum per node is 96 CPUs, so when I write srun -c 200 python my_script.py I get: srun: error: Unable to allocate resources: Requested node configuration is not available. Is there a way to tell SLURM that I want to use different nodes for the same task? Or is that not possible, and should I split the task instead?
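For context, -c/--cpus-per-task is bound to a single node because one task cannot span nodes; using more CPUs than one node offers means requesting several tasks (-n/--ntasks) spread across nodes, which in turn requires the program to run as multiple cooperating processes (for example via MPI/mpi4py). A minimal sketch of that layout, assuming my_script.py has been adapted to MPI:
#!/bin/bash
#SBATCH --ntasks=200       # 200 single-CPU tasks, placed on as many nodes as needed
#SBATCH --cpus-per-task=1  # each task fits comfortably on one node
srun python my_script.py   # one process per task; the script must coordinate them itself (e.g. mpi4py)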

Related

Jobs allocate twice the cores that I request on SLURM

I am trying to understand why my sbatch jobs are being allocated twice the number of cores I request.
From what I can tell, my partition has 106 threads:
[.... snake_make]$ sinfo -p mypartition -o %z
S:C:T
2:26:2
Yet with sbatch set up like this for my snakemake workflow:
module load snakemake/5.6.0
snakemake -s snake_make_tetragonula --cluster-config cluster.yaml --jobs 70 \
    --cluster "sbatch -n 4 -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
    --latency-wait 10
Each job is being allocated 8 cores instead of 4. When I run squeue, I see that it can only run up to 12 jobs at a time, which suggests it is using 8 cores per job despite my requesting 4. Also, when I look at my job usage on XDMoD, I see that only half of the CPUs on each job are actually used. How can I get exactly as many CPUs as I ask for, rather than double that amount? I have also tried
--ntasks=1 --cpus-per-task=4
which still doubled it to 8. Thanks.
Slurm can only allocate cores, not threads. So, with such a configuration:
S:C:T
2:26:2
two hardware threads are allocated to the job for each core requested, because the two hardware threads of a core cannot be handed out to distinct jobs.
You can try with
--ntasks=1 --cpus-per-task=2 --threads-per-core=2
But, if your computation is CPU-intensive, this can make your jobs slower.
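Applied to the snakemake invocation from the question, that suggestion would look something like this (only the sbatch flags change; 2 cores x 2 hardware threads gives the 4 CPUs originally requested):
module load snakemake/5.6.0
snakemake -s snake_make_tetragonula --cluster-config cluster.yaml --jobs 70 \
    --cluster "sbatch --ntasks=1 --cpus-per-task=2 --threads-per-core=2 -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
    --latency-wait 10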

Running a queue of MPI calls in parallel with SLURM and limited resources

I'm trying to run a Particle Swarm Optimization problem on a cluster using SLURM, with the optimization algorithm managed by a single-core matlab process. Each particle evaluation requires multiple MPI calls that alternate between two Python programs until the result converges. Each MPI call takes up to 20 minutes.
I initially naively submitted each MPI call as a separate SLURM job, but the resulting queue time made it slower than running each job locally in serial. I am now trying to figure out a way to submit an N-node job that will continuously run MPI tasks to utilize the available resources. The matlab process would manage this job with text file flags.
Here is a pseudo-code bash file that might help to illustrate what I am trying to do on a smaller scale:
#!/bin/bash
#SBATCH -t 4:00:00 # walltime
#SBATCH -N 2 # number of nodes in this job
#SBATCH -n 32 # total number of processor cores in this job
# Set required modules
module purge
module load intel/16.0
module load gcc/6.3.0
# Job working directory
echo Working directory is $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
# Run Command
while <"KeepRunning.txt” == 1>
do
for i in {0..40}
do
if <“RunJob_i.txt” == 1>
then
mpirun -np 8 -rr -f ${PBS_NODEFILE} <job_i> &
fi
done
done
wait
This approach doesn't work (just crashes), but I don't know why (probably overutilization of resources?). Some of my peers have suggested using parallel with srun, but as far as I can tell this requires that I call the MPI functions in batches. This will be a huge waste of resources, as a significant portion of the runs finish or fail quickly (this is expected behavior). A concrete example of the problem would be starting a batch of 5 8-core jobs and having 4 of them crash immediately; now 32 cores would be doing nothing while they wait up to 20 minutes for the 5th job to finish.
Since the optimization will likely require upwards of 5000 mpi calls, any increase in efficiency will make a huge difference in absolute walltime. Does anyone have any advice as to how I could run a constant stream of MPI calls on a large SLURM job? I would really appreciate any help.
A couple of things: under SLURM you should be using srun, not mpirun.
The second thing is that the pseudo-code you provided launches an infinite number of jobs without waiting for any completion signal. You should move the wait inside the while loop, right after the for loop, so that you launch one set of jobs, wait for them to finish, re-evaluate the condition and, maybe, launch the next set:
#!/bin/bash
#SBATCH -t 4:00:00 # walltime
#SBATCH -N 2 # number of nodes in this job
#SBATCH -n 4 # total number of tasks in this job
#SBATCH -c 8 # number of processor cores for each task
# Set required modules
module purge
module load intel/16.0
module load gcc/6.3.0
# Job working directory
echo Working directory is $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
# Run Command
while <"KeepRunning.txt” == 1>
do
for i in {0..40}
do
if <“RunJob_i.txt” == 1>
then
srun -np 8 --exclusive <job_i> &
fi
done
wait
<Update "KeepRunning.txt”>
done
Take care also to distinguish tasks from cores: -n says how many tasks will be used, and -c says how many CPUs will be allocated per task.
The code I wrote launches 41 job steps in the background (0 through 40 inclusive), but thanks to --exclusive they will only start once the resources are available, waiting while those resources are occupied. Each job step will use 8 CPUs. Then you wait for them all to finish, and I assume you update KeepRunning.txt after that round.
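For reference, a runnable version of the flag-check loop might look like this, assuming each flag file holds a single 0 or 1 written by the MATLAB process, and with ./job_${i} standing in as a placeholder for the real MPI executable:
while [ "$(cat KeepRunning.txt)" = "1" ]
do
    for i in {0..40}
    do
        if [ "$(cat RunJob_${i}.txt)" = "1" ]
        then
            # placeholder executable name; substitute the real Python/MPI call here
            srun -n 8 -c 1 --exclusive ./job_${i} &
        fi
    done
    wait   # block until every step launched this round has finished
    # KeepRunning.txt is expected to be updated (by MATLAB) before the next iteration
done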

PBS: job on two nodes uses memory of only one

I am trying to run a job (Python code) on a cluster using MPI. There is 63GB of memory available on each node.
When I run it on one node, I specify PBS parameters with (only relevant parameters are listed here):
#PBS -l mem=60GB
#PBS -l nodes=node01.cluster:ppn=32
time mpiexec -n 32 python code.py
That works just fine.
Since the PBS man page says mem is the memory for the entire job, my parameters when trying to run it on two nodes are:
#PBS -l mem=120GB
#PBS -l nodes=node01.cluster:ppn=32+node02.cluster:ppn=32
time mpiexec -n 64 python code.py
This doesn't work (qsub: Job exceeds queue resource limits MSG=cannot satisfy queue max mem requirement). It fails even if I set mem=70GB, for example (in case the system needs some more memory).
If I set mem=60GB when trying to use both nodes, I get
=>> PBS: job killed: mem job total xx kb exceeded limit yy kb.
I tried it with pmem as well (that's pmem=1875MB), but no success.
My question is: how can I use the entire 120GB of memory?
Torque/PBS ignores the mem resource unless the job uses a single node; from the Torque documentation on mem:
Maximum amount of physical memory used by the job. (Ignored on Darwin, Digital Unix, Free BSD, HPUX 11, IRIX, NetBSD, and SunOS. Also ignored on Linux if number of nodes is not 1. Not implemented on AIX and HPUX 10.)
You should instead use the pmem resource, which limits the memory per job process. With ppn=32 you should set pmem to 1920MB in order to get 60 GB per node. In that case, bear in mind that pmem does not allow flexible distribution of memory between the processes running on a node the way mem does (the latter is accounted as an aggregated value, while pmem applies to each process individually).
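Put together, the two-node request from the question would then look roughly like this (pmem replaces mem; 1920MB per process x 32 processes = 60GB per node):
#PBS -l nodes=node01.cluster:ppn=32+node02.cluster:ppn=32
#PBS -l pmem=1920mb
time mpiexec -n 64 python code.py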

Slurm: What is the difference for code executing under salloc vs srun

I'm using a cluster managed by slurm to run some yarn/hadoop benchmarks. To do this I am starting the hadoop servers on nodes allocated by slurm and then running the benchmarks on them. I realize that this is not the intended way to run a production hadoop cluster, but needs must.
To do this I started by writing a script that runs with srun, e.g. srun -N 4 setup.sh. This script writes the configuration files and starts the servers on the allocated nodes, with the lowest-numbered machine acting as the master. This all works, and I am able to run applications.
However, as I would like to start the servers once and then launch multiple applications on them without restarting/re-encoding everything at the beginning, I would like to use salloc instead. I had thought that this would be a simple case of running salloc -N 4 and then running srun setup.sh. Unfortunately this does not work, as the different servers are unable to communicate with each other. Could anyone explain to me what the difference in the operating environment is between using srun and using salloc followed by srun?
Many thanks
Daniel
From the slurm-users mailing list:
sbatch and salloc allocate resources to the job, while srun launches parallel tasks across those resources. When invoked within a job allocation, srun will launch parallel tasks across some or all of the allocated resources. In that case, srun inherits by default the pertinent options of the sbatch or salloc which it runs under. You can then (usually) provide srun different options which will override what it receives by default. Each invocation of srun within a job is known as a job step.
srun can also be invoked outside of a job allocation. In that case, srun requests resources, and when those resources are granted, launches tasks across those resources as a single job and job step.
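Applied to the setup described above, the distinction would look roughly like this (script names taken from the question; run_benchmark.sh is a placeholder for whatever application you launch afterwards):
# One-shot: srun outside an allocation requests resources and launches the tasks as a single job
srun -N 4 setup.sh

# Reusable: salloc holds the allocation; the shell it opens typically runs on the submit node,
# and only commands wrapped in srun become job steps on the allocated nodes
salloc -N 4
srun -N 4 setup.sh          # job step 1: configure and start the servers
srun -N 4 run_benchmark.sh  # later steps reuse the same nodes without a new allocation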

MPI not using all CPUs allocated

I am trying to run some code across multiple CPUs using MPI.
I run using:
$ mpirun -np 24 python mycode.py
I'm running on a cluster with 8 nodes, each with 12 CPUs. My 24 processes get scattered across all nodes.
Let's call the nodes node1, node2, ..., node8 and assume that the master process is on node1 and my job is the only one running. So node1 has the master process and a few slave processes, the rest of the nodes have only slave processes.
Only the node with the master process (i.e. node1) is being used. I can tell because nodes 2-8 have load ~0 and node1 has load ~24 (whereas I would expect the load on each node to be approximately equal to the number of CPUs allocated to my job on that node). Also, each time a function is evaluated, I get it to print out the name of the host on which it's running, and it prints out "node1" every time. I don't know whether the master process is the only one doing anything or if the slave processes on the same node as the master are also being used.
The cluster I'm running on was recently upgraded. Before the upgrade, I was using the same code and it behaved entirely as expected (i.e. when I asked for 24 CPUs, it gave me 24 CPUs and then used all 24 CPUs). This problem has only arisen since the upgrade, so I assume a setting somewhere got changed or reset. Has anyone seen this problem before and know how I might fix it?
Edit: This is submitted as a job to a scheduler using:
#!/bin/bash
#
#$ -cwd
#$ -pe * 24
#$ -o $JOB_ID.out
#$ -e $JOB_ID.err
#$ -r no
#$ -m n
#$ -l h_rt=24:00:00
echo job_id $JOB_ID
echo hostname $HOSTNAME
mpirun -np $NSLOTS python mycode.py
The cluster is running SGE and I submit this job using:
qsub myjob
It's also possible to specify where you want your jobs to run by using a hostfile. How the hostfile is formatted and used varies by MPI implementation so you'll need to consult the documentation for the one you have installed (man mpiexec) to find out how to use it.
The basic idea is that inside that file, you can define the nodes that you want to use and how many ranks you want on those nodes. This may require using other flags to specify how the processes are mapped to your nodes, but in the end, you can usually control how everything is laid out yourself.
All of this is different if you're using a scheduler like PBS, TORQUE, LoadLeveler, etc. as those can sometimes do some of this for you or have different ways of mapping jobs themselves. You'll have to consult the documentation for those separately or ask another question about them with the appropriate tags here.
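As a rough illustration, an Open MPI-style hostfile for the 24-rank run above might look like the following; MPICH and other implementations use a different format and flags, so check man mpiexec for your installation:
# hosts.txt - request 12 ranks on each of two nodes (Open MPI "slots" syntax)
node1 slots=12
node2 slots=12

mpirun -np 24 --hostfile hosts.txt python mycode.py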
Clusters usually have a batch scheduler like PBS, TORQUE, LoadLeveler, etc. These are generally given a shell script that contains your mpirun command along with environment variables that the scheduler needs. You should ask the administrator of your cluster what the process is for submitting batch MPI jobs.
