Jobs allocate twice the cores that I request on SLURM

I am trying to understand why twice the number of cores I request is being allocated to my sbatch jobs.
From what I can tell, each node in my partition has 104 hardware threads (2 sockets × 26 cores × 2 threads):
[.... snake_make]$ sinfo -p mypartition -o %z
S:C:T
2:26:2
Yet with the sbatch set like so for my snakemake:
module load snakemake/5.6.0
snakemake -s snake_make_tetragonula --cluster-config cluster.yaml --jobs 70 \
    --cluster "sbatch -n 4 -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
    --latency-wait 10
Each job is being allocated 8 CPUs instead of 4. When I run squeue, I see that only 12 jobs can run at a time, which suggests each job is taking 8 CPUs even though I specified 4. Also, when I look at my job usage in XDMoD, only half of the CPUs allocated to each job are actually being used. How can I get exactly the number of CPUs I request, rather than double that amount as is happening now? I have also tried
--ntasks=1 --cpus-per-task=4
which still doubled it to 8. Thanks.

Slurm can only allocate cores, not threads. So, with such a configuration:
S:C:T
2:26:2
two hardware threads are allocated to a job for each core it requests, because the two threads of one core cannot be split between distinct jobs.
You can try with
--ntasks=1 --cpus-per-task=2 --threads-per-core=2
But, if your computation is CPU-intensive, this can make your jobs slower.
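For instance, a sketch of passing those flags through the --cluster string already used in the question (whether --cpus-per-task=2 is the right value depends on how many physical cores each snakemake job really needs):
module load snakemake/5.6.0
snakemake -s snake_make_tetragonula --cluster-config cluster.yaml --jobs 70 \
    --cluster "sbatch --ntasks=1 --cpus-per-task=2 --threads-per-core=2 -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
    --latency-wait 10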

Related

Request maximum number of threads & cores on node via Slurm job scheduler

I have a heterogeneous cluster whose nodes contain either 14-core or 16-core CPUs (28 or 32 threads). I manage job submissions using Slurm. Some requirements:
It doesn't matter which CPU is used for a calculation.
I don't want to specify which CPU a job should go to.
A job should consume all available cores on the CPU (14 or 16).
I want mpirun to handle threading.
To illustrate the peculiarities of the problem, I show a job script that works on the 16-core CPUs:
#!/bin/bash
#SBATCH -J test
#SBATCH -o job.%j.out
#SBATCH -N 1
#SBATCH -n 32
mpirun -np 16 vasp
An example job script that works on the 14-core CPUs is:
#!/bin/bash
#SBATCH -J test
#SBATCH -o job.%j.out
#SBATCH -N 1
#SBATCH -n 28
mpirun -np 14 vasp
The second job script runs on the 16-core CPUs but, unfortunately, the job is about 35% slower than when I request 32 threads as is done in the first script. That's an unacceptable performance loss for my application.
I haven't figured out if there is a good way around this challenge. To me, a solution would be to request a variable number of resources, such as
#SBATCH -n [28-32]
and to tailor the mpirun -np x vasp line accordingly. I haven't found a way to do this, however. Are there any suggestions on how to achieve this directly in Slurm or is there a good workaround?
I tried to use the environment variable $SLURM_CPUS_ON_NODE, but this variable is only set after the node is selected, so it cannot be used in a #SBATCH line.
I also looked at the --constraint flag but this does not seem to give sufficiently granular control over threading requests.
Actually it should work as you want it to by simply specifying that you want a full node:
#!/bin/bash
#SBATCH -J test
#SBATCH -o job.%j.out
#SBATCH -N 1
#SBATCH --exclusive
mpirun vasp
mpirun will start as many processes as defined in SLURM_TASKS_PER_NODE, which Slurm sets to the number of tasks that can be created on the node, that is, the number of CPUs if you do not request more than one CPU per task.
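If you prefer to keep an explicit rank count on the mpirun line (one rank per physical core, as in the question's -n 32 / -np 16 pairing), a hedged sketch along the same lines; the division by 2 assumes this cluster counts one CPU per hardware thread and two threads per core:
#!/bin/bash
#SBATCH -J test
#SBATCH -o job.%j.out
#SBATCH -N 1
#SBATCH --exclusive
# SLURM_TASKS_PER_NODE is only known once the node has been chosen
# (32 or 28 here), so the script adapts to either node type at run time.
mpirun -np $(( SLURM_TASKS_PER_NODE / 2 )) vasp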

PBS Torque: how to avoid wasting cores when parallel tasks take very different amounts of time?

I'm running parallel MATLAB or Python tasks on a cluster managed by PBS Torque. The embarrassing situation is that PBS thinks I'm using 56 cores, but that was only true at the beginning; by now only the 7 hardest tasks are still running and 49 cores are sitting idle.
My parallel tasks take very different amounts of time because they search different model parameters, and I cannot know in advance how long each one will run. At the start all cores were used, but soon only the hardest tasks remained. Since the job as a whole has not finished, PBS Torque still thinks I am using all 56 cores and prevents new tasks from starting, even though most of the cores are actually idle. I want PBS to detect this and give the idle cores to new tasks.
So my question is: is there a setting in PBS Torque that can automatically detect the cores a job is really using and allocate the truly idle cores to new tasks?
#PBS -S /bin/sh
#PBS -N alps_task
#PBS -o stdout
#PBS -e stderr
#PBS -l nodes=1:ppn=56
#PBS -q batch
#PBS -l walltime=1000:00:00
#HPC -x local
cd /tmp/$PBS_O_WORKDIR
alpspython spin_half_correlation.py 2>&1 > tasklog.log
A short answer to your question is No: PBS has no way to reclaim unused resources allocated to a job.
Since your computation is essentially a bunch of independent tasks, what you could and probably should do is split your job into 56 independent jobs, each running an individual combination of model parameters, and when all the jobs are finished run an additional job to collect and summarize the results. This is a well-supported way of doing things. PBS provides some useful features for this type of workflow, such as array jobs and job dependencies.
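A hedged sketch of what such an array job could look like under Torque: the -t 0-55 range matches the 56 parameter combinations, PBS_ARRAYID is Torque's array index variable (the exact name can differ between versions), and passing the index as an argument to spin_half_correlation.py is an assumption about how your script selects its parameters:
#PBS -S /bin/sh
#PBS -N alps_task
#PBS -o stdout
#PBS -e stderr
#PBS -l nodes=1:ppn=1
#PBS -q batch
#PBS -l walltime=1000:00:00
#PBS -t 0-55
cd $PBS_O_WORKDIR
# one core and one parameter combination per array element, so a finished
# element frees its core for other queued work immediately
alpspython spin_half_correlation.py $PBS_ARRAYID > tasklog.$PBS_ARRAYID.log 2>&1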

Running a queue of MPI calls in parallel with SLURM and limited resources

I'm trying to run a Particle Swarm Optimization problem on a cluster using SLURM, with the optimization algorithm managed by a single-core MATLAB process. Each particle evaluation requires multiple MPI calls that alternate between two Python programs until the result converges. Each MPI call takes up to 20 minutes.
I initially, naively, submitted each MPI call as a separate SLURM job, but the resulting queue time made it slower than running each job locally in serial. I am now trying to figure out a way to submit an N-node job that will continuously run MPI tasks so that the available resources stay busy. The MATLAB process would manage this job with text-file flags.
Here is a pseudo-code bash file that might help to illustrate what I am trying to do on a smaller scale:
#!/bin/bash
#SBATCH -t 4:00:00 # walltime
#SBATCH -N 2 # number of nodes in this job
#SBATCH -n 32 # total number of processor cores in this job
# Set required modules
module purge
module load intel/16.0
module load gcc/6.3.0
# Job working directory
echo Working directory is $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
# Run Command
while <"KeepRunning.txt” == 1>
do
for i in {0..40}
do
if <"RunJob_i.txt" == 1>
then
mpirun -np 8 -rr -f ${PBS_NODEFILE} <job_i> &
fi
done
done
wait
This approach doesn't work (just crashes), but I don't know why (probably overutilization of resources?). Some of my peers have suggested using parallel with srun, but as far as I can tell this requires that I call the MPI functions in batches. This will be a huge waste of resources, as a significant portion of the runs finish or fail quickly (this is expected behavior). A concrete example of the problem would be starting a batch of 5 8-core jobs and having 4 of them crash immediately; now 32 cores would be doing nothing while they wait up to 20 minutes for the 5th job to finish.
Since the optimization will likely require upwards of 5000 mpi calls, any increase in efficiency will make a huge difference in absolute walltime. Does anyone have any advice as to how I could run a constant stream of MPI calls on a large SLURM job? I would really appreciate any help.
A couple of things: under SLURM you should be using srun, not mpirun.
The second thing is that the pseudo-code you provided launches an infinite number of jobs without waiting for any completion signal. You should move the wait inside the while loop, so that you launch one set of jobs, wait for them to finish, evaluate the condition and then, maybe, launch the next set of jobs:
#!/bin/bash
#SBATCH -t 4:00:00 # walltime
#SBATCH -N 2 # number of nodes in this job
#SBATCH -n 4 # total number of tasks in this job
#SBATCH -c 8 # number of processor cores (CPUs) per task
# Set required modules
module purge
module load intel/16.0
module load gcc/6.3.0
# Job working directory
echo Working directory is $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
# Run Command
while <"KeepRunning.txt” == 1>
do
for i in {0..40}
do
if <“RunJob_i.txt” == 1>
then
srun -np 8 --exclusive <job_i> &
fi
done
wait
<Update "KeepRunning.txt”>
done
Take care also to distinguish between tasks and cores: -n says how many tasks will be used, -c says how many CPUs per task will be allocated.
The code above will launch 41 jobs in the background (from 0 to 40, inclusive), but they will only start once the resources are available (--exclusive), waiting while the resources are occupied. Each job will use 8 CPUs. Then you wait for them to finish, and I assume you will update KeepRunning.txt after that round.
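As a hedged sketch of how the placeholder tests could be written in real bash (this assumes KeepRunning.txt and each RunJob_i.txt contain a single 0 or 1, and ./job_i stands in for the actual MPI program of particle i):
# Run Command
while [ "$(cat KeepRunning.txt)" = "1" ]
do
    for i in {0..40}
    do
        if [ -f "RunJob_${i}.txt" ] && [ "$(cat RunJob_${i}.txt)" = "1" ]
        then
            srun -n 8 --exclusive ./job_${i} &
        fi
    done
    wait
    # the MATLAB process is expected to rewrite KeepRunning.txt here
done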

PBS: job on two nodes uses memory of only one

I am trying to run a job (Python code) on a cluster using MPI. There is 63 GB of memory available on each node.
When I run it on one node, I specify PBS parameters with (only relevant parameters are listed here):
#PBS -l mem=60GB
#PBS -l nodes=node01.cluster:ppn=32
time mpiexec -n 32 python code.py
That works just fine.
Since the PBS man page says mem is the memory for the entire job, my parameters when trying to run it on two nodes are:
#PBS -l mem=120GB
#PBS -l nodes=node01.cluster:ppn=32+node02.cluster:ppn=32
time mpiexec -n 64 python code.py
This doesn't work (qsub: Job exceeds queue resource limits MSG=cannot satisfy queue max mem requirement). It fails even if I set mem=70GB, for example (in case the system needs some extra memory).
If I set mem=60GB when trying to use both nodes, I get
=>> PBS: job killed: mem job total xx kb exceeded limit yy kb.
I tried it with pmem as well (that's pmem=1875MB), but no success.
My question is: How can I use entire 120GB of memory?
Torque/PBS ignores the mem resource unless the job uses a single node; quoting the resource documentation:
Maximum amount of physical memory used by the job. (Ignored on Darwin, Digital Unix, Free BSD, HPUX 11, IRIX, NetBSD, and SunOS. Also ignored on Linux if number of nodes is not 1. Not implemented on AIX and HPUX 10.)
You should instead use the pmem resource, which limits the memory per job process. With ppn=32 you should set pmem to 1920MB in order to get 60 GB per node. In that case, keep in mind that pmem does not allow the same flexible distribution of memory between the processes on a node that mem does (the latter is accounted as an aggregated value, while pmem applies to each process individually).
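For instance, a sketch of the two-node submission with pmem replacing mem (the other directives are kept from the question; 1920 MB × 32 processes = 60 GB per node):
#PBS -l pmem=1920MB
#PBS -l nodes=node01.cluster:ppn=32+node02.cluster:ppn=32
time mpiexec -n 64 python code.py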

Using maximum remote servers

I'm trying to distribute commands to 100 remote computers, but I noticed that the commands are only being sent to 16 of them. My local machine has 16 cores. Why is parallel only using 16 remote computers instead of 100?
parallel --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt
I do believe you will need to specify the number of parallel jobs to be executed.
According to the parallel man page:
--jobs N
-j N
--max-procs N
-P N
Number of jobslots. Run up to N jobs in parallel. 0 means as many as possible. Default is 100% which will run one job per CPU core.
And keep this in mind:
When you start more than one job with the -j option, it is reasonable
to assume that each job might not take exactly the same amount of time
to complete. If you care about seeing the output in the order that
file names were presented to Parallel (instead of when they
completed), use the --keeporder option.
Parallel Multicore at the Command Line with GNU Parallel, Admin Magazine
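For example, a sketch that explicitly requests one job slot per remote host, so all 100 machines receive work (one slot per host is just an illustration; pick a value that matches each machine's core count):
parallel -j 1 --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt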
If the remote machines have 32 cores each, then you are running 16*32 jobs. By default GNU Parallel uses one file handle for STDOUT and one for STDERR per job, so in total 16*32*2 = 1024 file handles.
If you have a default GNU/Linux system you will be hitting the 1024 file handle limit.
If --ungroup runs more jobs, then that is a clear indication that you have hit the file handle limit. Use ulimit -n to increase the limit.
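If the open-file limit is indeed the cause, a hedged sketch of raising it in the shell before starting parallel (4096 is an arbitrary value above 16*32*2; the hard limit on your system must allow it):
ulimit -n 4096   # raise the per-process open-file limit for this shell
parallel --eta --sshloginfile list_of_100_remote_computers.txt < list_of_commands.txt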
