I have written a code that uses only 1-4 CPUs. But when I submit a job on the cluster, I have to request at least one node with 16 cores per job. So I want to run several simulations on each node within each job I submit.
I was wondering if there is a way to submit the simulations in parallel in one job.
Here's an example:
My code takes 4 CPUs. I submit a job for one node, and I want the node to run 4 instances of my code (each instance with different parameters) so that all 16 cores are used.
Yes, of course; generally such systems will have instructions for how to do this, like these.
If you have (say) four 4-CPU jobs that you know will each take the same amount of time, and (say) you want them to run in four different directories (so the output files are easier to keep track of), use the shell ampersand to run each in the background and then wait for all background tasks to finish:
(cd jobdir1; myexecutable argument1 argument2) &
(cd jobdir2; myexecutable argument1 argument2) &
(cd jobdir3; myexecutable argument1 argument2) &
(cd jobdir4; myexecutable argument1 argument2) &
wait
(Here myexecutable argument1 argument2 is just a placeholder for however you usually run your program; if you use mpiexec or something similar, that goes in there just as you'd normally use it. If you're using OpenMP, you can export the environment variable OMP_NUM_THREADS before the first line above.)
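For example, a minimal sketch of the OpenMP case, assuming each of the four instances should use 4 threads (the executable, arguments, and directories are the same placeholders as above):
export OMP_NUM_THREADS=4   # each backgrounded instance will use 4 OpenMP threads
(cd jobdir1; myexecutable argument1 argument2) &
...
wait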
If you have a number of tasks that won't all take the same length of time, it's easiest to queue up well more tasks than the (say) 4 slots above and let a tool like GNU parallel launch jobs as slots free up, as described in this answer.
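As a concrete sketch of that approach, assuming the same placeholder directories and executable as above, GNU parallel can keep 4 job slots busy at a time:
parallel -j 4 'cd {} && myexecutable argument1 argument2' ::: jobdir1 jobdir2 jobdir3 jobdir4 jobdir5 jobdir6
Each argument after ::: becomes one task, and parallel starts the next task as soon as one of the 4 slots frees up.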
Related
Let’s say I have two Bash scripts:
prog1.sh and prog2.sh
I know I can run these two scripts in parallel via:
prog1.sh & prog2.sh
However, let's say these two scripts are operating in two different directories, so I'd like them to run from two different terminals; otherwise, I'll run into an issue with concurrency.
My question is, how can I run these (or more generally, an arbitrary collection of scripts) simultaneously?
I tried answers at:
Run different bash scripts, started by one bash startscript, in different terminal tabs
https://unix.stackexchange.com/questions/582092/how-can-i-run-multiple-bash-scripts-simultaneously-in-a-terminal-window?newreg=2529ef31224a4e44ae7d374f8809eef9
and others.
In your "master" script, you can have something like
workdir=( "pathToWorkDir1" "pathToWorkDir2" "pathToWorkDir3" ... )
progs=( "prog1.sh" "prog2.sh" "prog3.sh" ... )
# iterate over the array indices (bash arrays are 0-indexed)
for i in "${!progs[@]}"
do
  # each script runs in its own subshell and directory, in the background;
  # use "./${progs[$i]}" instead if the scripts live inside the work directories
  ( cd "${workdir[$i]}" ; "${progs[$i]}" ) &
done
wait
If you don't want to "wait" before exiting the master script, you will need to add nohup in front of the "${progs[$i]}" call, to ensure the scripts survive independently of the master script.
If the paths to work directories are relative to the start directory, you need the round brackets. If the paths are absolute, you don't need the round brackets.
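A hedged sketch of the nohup variant of the loop body mentioned above (the log file names are illustrative); the final wait can then be dropped:
( cd "${workdir[$i]}" ; nohup "${progs[$i]}" > "prog_${i}.log" 2>&1 & )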
I have an application that can run commands in parallel. This application can work on a cluster, using SLURM to get the resources; internally, I assign each of the tasks I need performed to a different CPU/worker. Now I want to run this application on my laptop (macOS) through the command line. The same code (except for the SLURM part) works fine, the only difference being that it performs just one task at a time.
I have run code in parallel in MATLAB using the commands parcluster, parfor, etc. In this code, I can get up to 16 workers to work in parallel on my laptop.
I was hoping there is a similar solution for applications other than MATLAB to run code in parallel, especially for assigning the resources; my application itself is built to manage them.
If it is of any help, I run my application from the command line as follows:
chmod +x ./bin/OpenSees
./bin/OpenSees Run.tcl
I have read about GNU parallel or even using SLURM on my laptop, but I am not sure if these are the best (or feasible) solutions.
I tried using GNU parallel:
chmod +x ./bin/OpenSees
parallel -j 4 ./bin/OpenSees ::: Run.tcl
but it continues to run one task at a time. Do you have any suggestions?
I have a list of (bash) commands I want to run:
<Command 1>
<Command 2>
...
<Command n>
Each command takes a long time to run, and sometimes after seeing the output of (e.g.) <Command 1>, I'd like to update a parameter of <Command 5>, or add a new <Command k> at an arbitrary position in the list. But I want to be able to walk away from my machine at any time, and have it keep working through my last update to the list.
This is similar to the question here: Edit shell script while it's running. Some of those answers could be made to serve, but that question had the additional constraint of wanting to edit the script file itself, and I suspect there is a simpler answer because I don't have that exact constraint.
My current solution is to end my script with a call to a second script. I can edit the second file while the first one runs, which lets me append new commands to the end of my list, but I can't make any changes to the list of commands in the first file. And once execution has started in the second file, I can't make any more changes. I often stop my script to insert updates, and this sometimes means killing a long command that is almost complete, just so I can update later items on the list before I leave my machine for a while. I could of course chain together many files this way, but that seems a mess for what (hopefully) has a simple solution.
This is more of a conceptual answer than one where I provide the full code. My idea would be to run Redis (Redis description here) - it is pretty simple to install - and use it as a data-structure server. In your case, the data structure would be a list of jobs.
So, you basically add each job to a Redis list which you can do using LPUSH at the command-line:
echo "lpush jobs job1" | redis-cli
You can then start one or more workers, in parallel if you wish; they sit in a loop doing repeated blocking pops (BRPOP, which waits until a job is available) of jobs off the list and processing them:
#!/bin/bash
# Worker: repeatedly pull the oldest job off the "jobs" list and run it
while :; do
    # BRPOP blocks until a job is available and returns the list name and the job;
    # keep only the last line (the job itself)
    job=$(redis-cli brpop jobs 0 | tail -n 1)
    # run the job string as a shell command
    eval "$job"
done
And then you are at liberty to modify the list while the worker(s) is/are running using deletions and insertions.
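For instance, a couple of hedged examples of editing the queue from the command line while a worker is running (the job strings are placeholders):
echo 'lrem jobs 1 "job3"' | redis-cli                  # delete a queued job
echo 'linsert jobs before "job5" "job4b"' | redis-cli  # insert a new job next to an existing one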
Example here.
I would suggest putting each command that you want to run in its own file, and listing all of the command files in the main file.
ex: main.sh
#!/bin/bash
# Here you define the absolute path of your scripts
scriptPath="/home/script"
# Names of your scripts
scriptCommand1="command_1.sh"
scriptCommand2="command_2.sh"
...
scriptCommandN="command_N.sh"
# Here you execute your scripts, one after the other
"$scriptPath/$scriptCommand1"
"$scriptPath/$scriptCommand2"
...
"$scriptPath/$scriptCommandN"
I suppose that while command_1.sh is running you can still modify the later ones, since they are external files.
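A hedged usage sketch, assuming the layout above (nohup keeps main.sh running after you log out, so you can edit the later command_*.sh files in the meantime):
chmod +x main.sh /home/script/*.sh
nohup ./main.sh &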
Here I read:
If no value is provided for the number of copies to execute (i.e., neither the "-np" nor its synonyms are provided on the command line), Open MPI will automatically execute a copy of the program on each process slot (see below for description of a "process slot").
So I would expect
mpirun program
to run eight copies of the program (actually a simple hello world), since I have an Intel® Core™ i7-2630QM CPU @ 2.00GHz × 8, but it doesn't: it simply runs a single process.
If you do not specify the number of processes to be used, mpirun tries to obtain them from the (specified or) default host file. From the corresponding section of the man page you linked:
If the hostfile does not provide slots information, a default of 1 is assumed.
Since you did not modify this file (I assume), mpirun will use one slot only.
On my machine, the default host file is located in
/etc/openmpi-x86_64/openmpi-default-hostfile
The i7-2630QM is a 4-core CPU with two hardware threads per core. With computationally intensive programs, you are usually better off starting four MPI processes rather than eight.
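For example (the program name is a placeholder):
mpirun -np 4 ./program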
Simply use mpiexec -n 4 ... as you do not need a hostfile for starting processes on the same node where mpiexec is executed.
Hostfiles are used when launching MPI processes on remote nodes. If you really need to create one, the following should do it:
hostname slots=4 max_slots=8
(replace hostname with the host name of the machine)
Run the program as
mpiexec -hostfile name_of_hostfile ...
max_slots=8 allows you to oversubscribe the node with up to eight MPI processes if your MPI program can make use of the hyperthreading. You can also set the environment variable OMPI_MCA_orte_default_hostfile to the full path of the hostfile instead of explicitly passing it each and every time as a parameter to mpiexec.
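A hedged usage sketch of that variable (the hostfile path and program name are placeholders):
export OMPI_MCA_orte_default_hostfile=/path/to/my_hostfile
mpiexec -n 4 ./program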
If you happen to be using a distributed resource manager like Torque, LSF, SGE, etc., then, if properly compiled, Open MPI integrates with the environment and builds a host and slot list from the reservation automatically.
A frequent problem I encounter is having to run some script with 50 or so different parameterizations. In the old days, I'd write something like (e.g.)
for i in `seq 1 50`
do
./myscript $i
done
In the modern era, though, all my machines can handle 4 or 8 threads at once. The scripts aren't multithreaded, so what I want to be able to do is run 4 or 8 parameterizations at a time and automatically start new jobs as the old ones finish. I can rig up a haphazard system myself (and have in the past), but I suspect there must be a Linux utility that does this already. Any suggestions?
GNU parallel does this. With it, your example becomes:
parallel ./myscript ::: `seq 1 50`
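GNU parallel defaults to running one job per CPU core; a hedged example capping it at, say, 4 simultaneous jobs:
parallel -j 4 ./myscript ::: `seq 1 50`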