qsub for one machine? - multiprocessing

A frequent problem I encounter is having to run some script with 50 or so different parameterizations. In the old days, I'd write something like:
for i in `seq 1 50`
do
./myscript $i
done
In the modern era though, all my machines can handle 4 or 8 threads at once. The scripts aren't multithreaded, so what I want to be able to do is run 4 or 8 parameterizations at a time, and to automatically start new jobs as the old ones finish. I can rig up a haphazard system myself (and have in the past), but I suspect that there must be a linux utility that does this already. Any suggestions?

GNU parallel does this. With it, your example becomes:
parallel ./myscript ::: `seq 1 50`
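If you want an explicit cap on the number of simultaneous jobs, both GNU parallel and xargs take a job-count flag. A quick sketch, assuming ./myscript takes the parameter index as its only argument, as in the loop above:
seq 1 50 | parallel -j 8 ./myscript      # at most 8 at a time; new jobs start as old ones finish
seq 1 50 | xargs -n 1 -P 8 ./myscript    # the same idea with xargs
By default GNU parallel runs one job per CPU core, so the -j 8 is only needed to override that.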

Related

How to run scripts in parallel on Windows

On Linux or Mac, I regularly run several Python scripts (or other programs, for that matter) in parallel. A use case could be that the script runs a simulation based on random numbers, and I just want to run it many times to get good statistics.
An easy way of doing this on linux or Mac would be to use a for loop, and use an ampersand & to make the jobs run in parallel:
for i in {1..10}; do python script.py & done
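If the shell should also block until all ten background jobs have finished (for example before collecting their results), add a wait after the loop; a minimal sketch:
for i in {1..10}; do python script.py & done
wait    # returns once every backgrounded script.py has exited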
Another use case would be that I want to run a script on some stored data in a file. Say I have a bunch of .npy files with stored data and I want to process them all with the same script, running 4 jobs in parallel (since I have a 4-core CPU). For that I could use xargs:
ls *.npy | xargs -P4 -n1 python script.py
Are there equivalent ways of doing this on the Windows command line?

Running an executable over multiple cores using a bash script

Sorry if this is a repeat question and I know there are a lot of similar questions out there, but I am really struggling to find a simple answer that works.
I want to run an executable many times, e.g.
seq 100 | xargs -Iz ./program
However, I would like to run this over multiple cores on my machine (currently a MacBook Pro, so 4 cores) to speed things up.
I have tried using GNU parallel by looking at other answers on here, as that seems to be what I want, and I have it installed, but I can't work out how parallel works and what arguments I need in what order. None of the readme is helping, as it is trying to do much more complicated things than I want to.
Could anyone help me?
Thanks
So, in order to run ./program 100 times, with GNU Parallel all you need is:
parallel -N0 ./program ::: {1..100}
If your CPU has 8 cores, it will keep 8 running in parallel till all jobs are done. If you want to run, say 12, in parallel:
parallel -j 12 -N0 ./program ::: {1..100}
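Note that -N0 deliberately discards the input values, so ./program runs with no arguments each time. If the program should instead receive the sequence number, drop -N0 or use the {} placeholder explicitly; a sketch with the same placeholder program name:
parallel ./program {} ::: {1..100}    # runs ./program 1, ./program 2, ... one job per core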

Make bash stop moving to the next line in the script till a shared process ends

I am executing a command a which takes as input a script input.sh and runs a binary program, which produces some output file output. input.sh has a line NUMPROCS={some number} which specifies the number of processors program uses. a gives me back control immediately while program runs. The output file has a line at the end telling me how long program took.
I want to run this in a loop to see the time program takes as a function of number of processors. How do I make the control wait for output to be generated in each loop (and therefore have grep get the correct value) before it moves to line 4?
My script so far is below. Line 3 is how I would run it if NUMPROCS had a fixed value.
Some issues: Other users run program too, so I can't use pidof program and wait on it directly.
1 #! /bin/bash
2 for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16; do
3     a input.sh
4     E=`grep "Elapsed Time" output`
5     sed -i "/NUMPROCS/c\NUMPROCS=${i}" input.sh
6     echo $i $E >> result.dat
7 done
8 cat result.dat
It sounds like your input.sh script backgrounds its jobs internally and exits immediately, leaving all the subjobs running. (Side note: if input.sh is a command-line command, it shouldn't have .sh on the end; if, on the other hand, it's read into the "a" program as a shell include with "." or "source", the .sh is fine.) Ideally you'd edit input.sh to add a "wait" command after it spawns the jobs (perhaps as a behavior triggerable by a command-line option). Of course, if you could edit it, you'd also want to add an option to specify the job count on the command line, and then suddenly everything would be really easy.
Editing the script with sed, on the other hand, really only makes sense if input.sh is indeed a shell-script include, and that approach makes it hard to pass in command-line parameters (passing them to "a" is easy, but input.sh would have to pick them up from "a"'s internal variables, which is pretty abnormal). Obviously that's a difficult model to do anything tidy with. In that case, your options are either to parse the output(s) for lines indicating that all the executions have completed, or to use "ps" and "grep" to count how many are still running and move on to the next loop iteration once they're done (there's also a really sick approach where you loop until the system load bottoms out again, but surely nobody would do that).
Editing the input.sh script each time also isn't great, since it means that if you run several tests at once they may interfere with each other - in rare cases, dramatically.
So, although this is far uglier than fixing input.sh or "a" itself, you could just add a loop like this after "a input.sh":
while true ; do
    if [ $i -eq `grep -c 'Elapsed Time' output` ] ; then
        break
    fi
    sleep 10 # avoid zillions of greps in a given time interval
done
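The "ps"/"grep" counting approach mentioned above might look something like the sketch below, using pgrep rather than ps piped to grep, and restricted to your own processes, since other users run program too (program is the binary name from the question):
sleep 5    # give input.sh a moment to actually launch the subjobs
while pgrep -u "$USER" -x program > /dev/null; do
    sleep 10    # poll until none of my instances of program are left running
done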
Without more information about input.sh it's hard to give better advice. Fixing input.sh itself, or the "a" program, to have an option to wait until subjobs complete, would normally be the preferred approach.
Other notes:
The "#!/bin/bash" generally shouldn't have a space after the magic number.
Use "for i in {1..16} ; do" instead - much easier to change (assumes halfway modern bash)
The E=`grep "Elapsed Time" output` is a command substitution: it captures grep's entire output, i.e. the whole matching line including the text "Elapsed Time", not just the time value. This probably isn't what you wanted.
You can drop the ">> result.dat" and the cat, and instead change the "done" to "done | tee result.dat" for the same effect.
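Pulling those notes together, a cleaned-up version of the loop might look like the sketch below. It keeps the same assumption as the wait-loop above (that output accumulates one "Elapsed Time" line per run) and moves the sed ahead of the run so that the i-th run actually uses NUMPROCS=$i:
#!/bin/bash
for i in {1..16}; do
    sed -i "/NUMPROCS/c\NUMPROCS=${i}" input.sh    # set the core count before launching this run
    a input.sh
    # wait until this run's "Elapsed Time" line has appeared in output
    while [ `grep -c 'Elapsed Time' output` -lt $i ]; do
        sleep 10
    done
    E=`grep 'Elapsed Time' output | tail -n 1`     # the most recent run's timing line
    echo $i $E
done | tee result.dat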

Controlling output of jobs run with gnu parallel

On my old machine with GNU parallel version 20121122, after I run
parallel -j 15 < jobs.txt
I see output of the jobs being run which is very handy for debug purposes.
On my new machine with parallel version 20140122, when I execute the above-mentioned command, I see nothing in the terminal.
From another SO thread I found out about the --tollef flag, which fixes the problem for me, but it is soon to be retired. How do I keep things working after the --tollef flag is retired?
--ungroup (if half-lines are allowed to mix - as they are with --tollef).
--line-buffer (if you only want complete lines printed - i.e. no half-line mixing).
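For example, to get live output back on the newer version, either of these should work:
parallel --line-buffer -j 15 < jobs.txt    # print only complete lines, as soon as each is available
parallel --ungroup -j 15 < jobs.txt        # print output immediately; half-lines may interleave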

How to run several commands in one PBS job submission

I have written a code that uses only 1-4 CPUs. But when I submit a job on the cluster, I have to take at least one node with 16 cores per job. So I want to run several simulations on each node with each job I submit.
I was wondering if there is a way to submit the simulations in parallel in one job.
Here's an example:
My code takes 4 CPUs. I submit a job for one node, and I want the node to run 4 instances of my code (each instance with different parameters) to use all 16 cores.
Yes, of course; generally such systems will have documentation describing how to do this.
If you have (say) 4x 4-cpu jobs that you know will each take the same amount of time, and (say) you want them to run in 4 different directories (so the output files are easier to keep track of), use the shell ampersand to run them each in the background and then wait for all background tasks to finish:
(cd jobdir1; myexecutable argument1 argument2) &
(cd jobdir2; myexecutable argument1 argument2) &
(cd jobdir3; myexecutable argument1 argument2) &
(cd jobdir4; myexecutable argument1 argument2) &
wait
(where myexecutable argument1 argument2 is just a placeholder for however you usually run your program; if you use mpiexec or something similar, that goes in there just as you'd normally use it. If you're using OpenMP, you can export the environment variable OMP_NUM_THREADS before the first line above.)
If you have a number of tasks that won't all take the same length of time, it's easiest to queue up well more than the 4 jobs above and let a tool like GNU parallel launch jobs as cores free up, as described in this answer.
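For instance, the submission script could hand a longer parameter list to GNU parallel and let it keep 4 simultaneous instances (4 cpus each on a 16-core node) busy until the list is exhausted. A rough sketch; the #PBS resource line and the parameter values are placeholders for whatever your site and program actually expect:
#!/bin/bash
#PBS -l nodes=1:ppn=16
cd $PBS_O_WORKDIR
# keep at most 4 copies of the (4-cpu) program running at once; new ones start as old ones finish
parallel -j 4 myexecutable ::: param1 param2 param3 param4 param5 param6 param7 param8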

Resources