Does running scripts via for-loop in Bash file force them to be single-threaded? - bash

I have a Bash script that I submit to a cluster that calls a pipeline of Python scripts which are built to be multithreaded for parallel processing. I need to call this pipeline on all files in a directory, which I can accomplish with a for loop. However, I am worried that this will run the operations (i.e. the pipeline) on just a single thread rather than the full range that was intended.
The batch file for submission looks like this:
#!/bin/bash
##SBATCH <parameters>
for filename in /path/to/*.txt; do
PythonScript1.py "$filename"
PythonScript2.py "$filename"
done
Will this work as intended, or will the for loop hamper the efficiency/parallel processing of the Python scripts?

If you are running on a single server:
parallel ::: PythonScript1.py PythonScript2.py ::: /path/to/*.txt
This will generate all combinations of {PythonScript1.py,PythonScript2.py} and *.txt. These combinations will be run in parallel but GNU parallel will only run as many at a time as there are CPU threads in the server.
If you are running on multiple servers in a cluster, it really depends on what system is used for controlling the cluster. On some systems you ask for a list of servers and then you can ssh to those:
get list of servers > serverlist
parallel --slf serverlist ::: PythonScript1.py PythonScript2.py ::: /path/to/*.txt
On others you have to give each of the commands you want to run to the queuing system:
parallel queue_this ::: PythonScript1.py PythonScript2.py ::: /path/to/*.txt
Without knowing more about which cluster control system is used, it is hard to help you more.
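If the cluster is SLURM (the ##SBATCH line in the question suggests it), the idiomatic alternative to a shell loop is a job array, where each array task handles one file and gets its own CPU allocation. This is a sketch, not a drop-in: the array size and --cpus-per-task value are placeholders you would set to your actual file count and thread count:

```shell
#!/bin/bash
#SBATCH --array=0-99          # one array task per input file (placeholder count)
#SBATCH --cpus-per-task=8     # give each pipeline run the threads it expects

# Each array task picks out one file by its index and runs the pipeline on it.
files=(/path/to/*.txt)
filename="${files[$SLURM_ARRAY_TASK_ID]}"
PythonScript1.py "$filename"
PythonScript2.py "$filename"
```

Submitted with `sbatch`, SLURM then schedules the array tasks concurrently, subject to your allocation.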

As originally written, PythonScript2.py won't run until PythonScript1.py returns, and the for loop won't iterate until PythonScript2.py returns.
Note that I said "returns", not "finishes"; if PythonScript1.py and/or PythonScript2.py forks or otherwise goes into the background on its own, then it will return before it is finished, and will continue processing while the calling bash script continues on to its next step.
You could have the calling script put them into the background with PythonScript1.py & and PythonScript2.py &, but this might or might not be what you want, since PythonScript1.py and PythonScript2.py will thus (likely) be running at the same time.
If you want multiple files processed at the same time, but want PythonScript1.py and PythonScript2.py to run in strict order, follow the comment from William Pursell:
for filename in /path/to/*.txt; do
{ PythonScript1.py "$filename"; PythonScript2.py "$filename"; } &
done
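If you want to cap how many files are in flight at once while still keeping the two scripts in strict order per file, one simple pattern is to wait after every batch. A minimal sketch; the echo stages stand in for the PythonScript calls, and max is a placeholder you would tune to your allocation:

```shell
#!/bin/bash
max=4                            # hypothetical cap on concurrent files
count=0
for filename in file1 file2 file3 file4 file5 file6; do  # stand-in for /path/to/*.txt
  { echo "stage1 $filename"; echo "stage2 $filename"; } &  # strict order per file
  count=$((count + 1))
  if [ "$count" -ge "$max" ]; then
    wait        # block until the current batch of $max files is done
    count=0
  fi
done
wait            # wait for the final partial batch
```

The drawback of batch-waiting is that the whole batch must finish before the next one starts; GNU parallel (above) refills slots as soon as any job exits.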

Related

is there a way to trigger 10 scripts at any given time in Linux shell scripting?

I have a requirement where I need to trigger 10 shell scripts at a time. I may have 200+ shell scripts to be executed.
e.g. if I trigger 10 jobs and two jobs complete, I need to trigger another 2 jobs to bring the number of jobs currently executing back up to 10.
I need your help and suggestions to cater to this requirement.
Yes with GNU Parallel like this:
parallel -j 10 < ListOfJobs.txt
Or, if your jobs are called job_1.sh to job_200.sh:
parallel -j 10 job_{}.sh ::: {1..200}
Or, if your jobs have discontiguous, random names but are all shell scripts with a .sh suffix in one directory:
parallel -j 10 ::: *.sh
There is a very good overview in the GNU Parallel documentation, and there are lots of related questions and answers on Stack Overflow.
Simply run them as background jobs:
for i in {1..10}; { ./script.sh & }
Adding more jobs if fewer than 10 are running:
while true; do
    pids=($(jobs -pr))
    ((${#pids[@]}<10)) && ./script.sh &
done &> /dev/null
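The while true loop above polls continuously, which burns CPU. On bash 4.3+, wait -n blocks until any one background job exits, so you can throttle without polling. A sketch, with sleep 0 standing in for ./script.sh:

```shell
#!/bin/bash
max=10
for i in $(seq 1 50); do         # stand-in for the 200+ scripts
  sleep 0 &                      # stand-in for ./script.sh
  while [ "$(jobs -pr | wc -l)" -ge "$max" ]; do
    wait -n                      # block until any single job finishes
  done
done
wait                             # wait for the stragglers
```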
There are different ways to handle this:
Launch them together as background tasks (1)
Launch them in parallel (1)
Use the crontab (2)
Use at (3)
Explanations:
(1) You can launch the processes exactly when you like (by launching a command, click a button or whatever event you choose)
(2) The processes will be launched at the same time, every (working) day, periodically.
(3) You choose a time when the processes will be launched together once.
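For (2) and (3), the scheduling itself is a one-liner each; the paths and times below are placeholders:

```shell
# (2) crontab entry (added with `crontab -e`): launch the batch at 02:00
#     every working day, Monday to Friday:
#
#         0 2 * * 1-5  /path/to/launch_jobs.sh
#
# (3) at: launch the batch together once, at a chosen time:
#
#         echo /path/to/launch_jobs.sh | at 23:00
```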
I have used the snippet below to trigger 10 jobs at a time.
max_jobs_trigger=10
while mapfile -t -n ${max_jobs_trigger} ary && ((${#ary[@]})); do
    jobs_to_trigger=$(printf '%s\n' "${ary[@]}")
    #Trigger script in background
done

shell script to loop and start processes in parallel?

I need a shell script that will create a loop to start parallel tasks read in from a file...
Something along the lines of:
#!/bin/bash
mylist=/home/mylist.txt
for i in ('ls $mylist')
do
do something like cp -rp $i /destination &
end
wait
So what I am trying to do is send a bunch of tasks into the background with the "&" for each line in $mylist and wait for them to finish before exiting.
However, there may be a lot of lines in there so I want to control how many parallel background processes get started; want to be able to max it at say.. 5? 10?
Any ideas?
Thank you
Your task manager will make it seem like you can run many parallel jobs. How many you can actually run to obtain maximum efficiency depends on your processor. Overall you don't have to worry about starting too many processes because your system will do that for you. If you want to limit them anyway because the number could get absurdly high you could use something like this (provided you execute a cp command every time):
...
while ...; do
jobs=$(pgrep 'cp' | wc -l)
[[ $jobs -gt 50 ]] && { sleep 100 ; continue; }
...
done
The number of running cp commands will be stored in the jobs variable and before starting a new iteration it will check if there are too many already. Note that we jump to a new iteration so you'd have to keep track of how many commands you already executed. Alternatively you could use wait.
Edit:
On a side note, you can assign a specific CPU core to a process using taskset; it may come in handy when you have a few more complex commands.
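A sketch of that taskset side note (taskset ships with util-linux, so this is Linux-specific; the command -v guard just lets the example degrade gracefully elsewhere):

```shell
#!/bin/bash
# Pin a command to CPU core 0; useful for a few heavy, long-lived processes.
if command -v taskset >/dev/null 2>&1; then
  taskset -c 0 sleep 0          # stand-in for one of the complex commands
  echo "pinned run finished"
else
  echo "taskset not available on this system"
fi
```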
You are probably looking for something like this using GNU Parallel:
parallel -j10 cp -rp {} /destination :::: /home/mylist.txt
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU, but then each CPU sits idle once its own batch of jobs happens to finish early. GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

Why didn't the shell command execute in order as I expect?

I have written 3 shell scripts named s1.sh s2.sh s3.sh. They have the same content:
#!/bin/ksh
echo $0 $$
and s.sh invoke them in order:
#!/bin/sh
echo $0 $$
exec ./s1.sh &
exec ./s2.sh &
exec ./s3.sh &
but the result is disorder:
victor@ThinkPad-Edge:~$ ./s.sh
./s.sh 3524
victor@ThinkPad-Edge:~$ ./s1.sh 3525
./s3.sh 3527
./s2.sh 3526
why not s1 s2 then s3 in sequence?
If I remove & in s.sh:
#!/bin/sh
echo $0 $$
exec ./s1.sh
exec ./s2.sh
exec ./s3.sh
the output:
$ ./s.sh
./s.sh 4022
./s1.sh 4022
Missing s2 and s3, why?
They do start in order (notice the PIDs are incrementing), but you are launching 3 separate processes for 3 separate programs, and whichever one the scheduler favors prints first. If you want them to run in sequence, take the execs and &s out of exec ./s1.sh &.
The process scheduler achieves apparent multitasking by running a snippet of each task at a time, then rapidly switching to another. Depending on system load, I/O wait, priority, scheduling algorithm etc, two processes started at almost the same time may get radically different allotments of the available CPU. Thus there can be no guarantee as to which of your three processes reaches its echo statement first.
This is very basic Unix knowledge; perhaps you should read a book or online tutorial if you mean to use Unix seriously.
If you require parallel processes to execute in a particular order, use a locking mechanism (semaphore, shared memory, etc) to prevent one from executing a particular part of the code, called a "critical section", before another. (This isn't easy to do in shell script, though. Switch to Python or Perl if you don't want to go all the way to C. Or use a lock file if you can live with the I/O latency.)
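A lock is in fact workable in shell: mkdir is atomic, so it can serve as a portable lock protecting a critical section shared by parallel jobs (flock(1) is the more robust choice where util-linux is available). A minimal sketch; the lock path and job bodies are placeholders:

```shell
#!/bin/bash
lockdir=/tmp/critical.lock.$$       # placeholder lock path
for i in 1 2 3; do
  (
    # mkdir is atomic: exactly one job can create the directory at a time
    until mkdir "$lockdir" 2>/dev/null; do sleep 0.1; done
    echo "job $i inside critical section"   # at most one job runs this at once
    rmdir "$lockdir"                        # release the lock
  ) &
done
wait
```

Note this serializes access but does not control ordering; ordering would still need an explicit handoff between the jobs.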
In your second example, the exec command replaces the current process with another. Thus s1 takes over completely, and the commands to start s2 and s3 are never seen by the shell.
(This was not apparent in your first example because the & caused the shell to fork a background process first, basically rendering the exec useless anyway.)
The & operator places each exec in the background. Effectively, you are running all 3 of your scripts in parallel. They don't stay in order because the operating system executes a bit of each script whenever it gets a chance, but it is also executing a bunch of other stuff too. One process can be given more time to run than the others, causing it to finish sooner.
Missing s2 and s3, why?
You are not missing s2 or s3 -- s2 and s3 are executing in a replacement or subshell (when s.sh exits (or is replaced), they lose communication with the console causing their output to overwrite prior output on the TTY).
Other answers have discussed that s1, s2, s3 are all executed within replacement shells (with exec) or subshells (without exec), and how removing exec and & will force sequential execution of s1, s2, s3. There are two cases to discuss: one where exec is present and one where it is not. Where exec is present, the current shell is replaced by the executed process (as pointed out in the comments, the parent shell is killed).
Where exec is not used, s1, s2, s3 are executed in subshells. You are not seeing the output of s2 and s3 because s.sh has finished and/or exited before they execute, removing their communication with the console (if you look, you will see an additional prompt followed by the output of the remaining s2.sh and s3.sh commands). But there is a way to require their completion before s.sh exits: use wait. wait tells s.sh not to exit until all of its child processes s1, s2, and s3 complete, which provides an output path to the console. Example:
#!/bin/bash
echo $0 $$
exec ./s1.sh &
exec ./s2.sh &
exec ./s3.sh &
wait
output:
$ ./s.sh
./s.sh 11151
/home/david/scr/tmp/stack/s1.sh 11153
/home/david/scr/tmp/stack/s3.sh 11155
/home/david/scr/tmp/stack/s2.sh 11154

How to run shell script in few jobs

I have a build script, which works very slowly, especially on Solaris. I want to improve its performance by running it in multiple jobs. How can I do that?
Try GNU Parallel, it is quite easy to use:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU parallel very easy to use as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel.
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
For each line of input GNU parallel will execute command with the line as arguments. If no command is given, the line of input is executed. Several lines will be run in parallel. GNU parallel can often be used as a substitute for xargs or cat | bash.
You mentioned that it is a build script. If you are using command line utility make you can parallelize builds using make's -j<N> option:
GNU make knows how to execute several recipes at once. Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next. However, the ‘-j’ or ‘--jobs’ option tells make to execute many recipes simultaneously.
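A self-contained illustration of make -j, assuming GNU make 3.82+ is installed (.RECIPEPREFIX is used here only to avoid literal tab characters in the heredoc; real Makefiles normally use tabs):

```shell
#!/bin/bash
# Write a throwaway Makefile with two independent targets, then build them
# concurrently with -j2.
dir=$(mktemp -d)
cat > "$dir/Makefile" <<'EOF'
.RECIPEPREFIX := >
all: a b
a:
>@echo building a
b:
>@echo building b
EOF
make -C "$dir" -j2
rm -r "$dir"
```

In a real build, `make -j"$(nproc)"` is a common choice: one recipe per available CPU core.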
Also, there is distcc which can be used with make to distribute compiling to multiple hosts:
export DISTCC_POTENTIAL_HOSTS='localhost red green blue'
cd ~/work/myproject;
make -j8 CC=distcc
GNU parallel is quite good. #Maxim - good suggestion +1.
For a one-off, if you cannot install new software, try this pattern for a slow command that has to run multiple times (here, running slow_command 17 times). Change things to fit your needs:
#!/bin/bash
cnt=0
while [ $cnt -lt 17 ] # loop 17 times
do
    slow_command &
    cnt=$(( cnt + 1 ))
    [ $(( cnt % 5 )) -eq 0 ] && wait # 5 jobs at a time in parallel
done
wait # there are 2 jobs you did not wait for in the loop: 17 % 5 == 2

Small Scale load levelling

I have a series of jobs which need to be done; no dependencies between jobs. I'm looking for a tool which will help me distribute these jobs to machines. The only restriction is that each machine should run one job at a time only. I'm trying to maximize throughput, because the jobs are not very balanced. My current hacked together shell scripts are less than efficient as I pre-build the per-machine job-queue, and can't move jobs from the queue of a heavily loaded machine to one which is waiting, having already finished everything.
Previous suggestions have included SLURM which seems like overkill, and even more overkill LoadLeveller.
GNU Parallel looks like almost exactly what I want, but the remote machines don't speak SSH; there's a custom job launcher used (which has no queueing capabilities). What I'd like is GNU Parallel where the machine name can just be substituted into a shell script on the fly just before the job is dispatched.
So, in summary:
List of Jobs + List of Machines which can accept: Maximize throughput. As close to shell as possible is preferred.
Worst case scenario something can be hacked together with bash's lockfile, but I feel as if a better solution must exist somewhere.
Assuming your jobs are in a text file jobs.tab looking like
/path/to/job1
/path/to/job2
...
Create dispatcher.sh as something like
mkfifo /tmp/jobs.fifo
while true; do
read JOB
if test -z "$JOB"; then
break
fi
echo -n "Dispatching job $JOB .."
echo $JOB >> /tmp/jobs.fifo
echo ".. taken!"
done
rm /tmp/jobs.fifo
and run one instance of
dispatcher.sh < jobs.tab
Now create launcher.sh as
while true; do
read JOB < /tmp/jobs.fifo
if test -z "$JOB"; then
break
fi
    # launch job $JOB on machine $1 via your custom launcher
done
and run one instance of launcher.sh per target machine (giving the machine as the first and only argument).
GNU Parallel supports your own ssh command. So this should work:
function my_submit { echo On host $1 run command $3; }
export -f my_submit
parallel -j1 -S "my_submit server1,my_submit server2" my_command ::: arg1 arg2