How to create Multiple Threads in Bash Shell Script [duplicate] - bash

I have an array of arguments that will be used in the command for my shell script. I want to be able to do this:
./runtests.sh -b firefox,chrome,ie
where each entry will start a separate thread (currently we multithread by opening multiple terminals and starting the commands there).
I have pushed the entered commands into an array:
if [[ $browser == *","* ]]; then
    IFS=',' read -ra browserArray <<< "$browser"
fi
Now I will have to start a separate thread (or process) while looping through the array. Can someone point me in the right direction? My guess, in pseudocode, is something like:
for (( c=0; c<${#browserArray[@]}; c++ )); do
    startTests &
done
Am I on the right track?

That's not a thread, but a background process. They are similar, but not identical. Quoting What are Linux Processes, Threads, Light Weight Processes, and Process State (reordered for clarity):
The main difference between a light weight process (LWP) and a normal process is that LWPs share the same address space and other resources, like open files. Because some resources are shared, these processes are considered lighter weight than other normal processes, hence the name. So, effectively, we can say that threads and light weight processes are the same.
You can see the running background process using the jobs command. E.g.:
nick@nick-lt:~/test/npm-test$ sleep 10000 &
[1] 23648
nick@nick-lt:~/test/npm-test$ jobs
[1]+  Running                 sleep 10000 &
You can bring them to the foreground using fg:
nick@nick-lt:~/test/npm-test$ fg 1
sleep 10000
where the cursor will wait until the sleep time has elapsed. You can pause the job when it's in the foreground (as in the scenario after fg 1) by pressing CTRL-Z (SIGTSTP), which gives something like this:
[1]+  Stopped                 sleep 10000
and resume it by typing:
bg 1 # Resumes in the background
fg 1 # Resumes in the foreground
and you can kill it by pressing CTRL-C (SIGINT) when it's in the foreground, which just ends the process, or by using the kill command with the % prefix on the job ID:
kill %1 # Or kill <PID>
Onto your implementation:
BROWSERS=

for i in "$@"; do
    case $i in
        -b)
            shift
            BROWSERS="$1"
            ;;
        *)
            ;;
    esac
done

IFS=',' read -r -a SPLITBROWSERS <<< "$BROWSERS"

for browser in "${SPLITBROWSERS[@]}"
do
    echo "Running ${browser}..."
    $browser &
done
Can be called as:
./runtests.sh -b firefox,chrome,ie
Tadaaa.
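If the actual goal is to run a test suite per browser rather than launch the browser binary itself, a minimal sketch of the same idea follows; startTests is a hypothetical stand-in for your real test command, and the final wait keeps the script alive until every background run finishes:

#!/bin/bash
# Minimal sketch. Assumes the same calling convention as above:
#   ./runtests.sh -b firefox,chrome,ie
startTests() {   # hypothetical stand-in for the real test command
    echo "Testing on $1..."
    sleep 2      # placeholder for the real test run
}

IFS=',' read -r -a browsers <<< "$2"

for browser in "${browsers[@]}"; do
    startTests "$browser" &   # one background process per browser
done

wait   # block until all background test runs have finished
echo "All test runs finished."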

Related

Pause script by keyboard input

(Sorry for my bad English.) I would like to pause a running script by pressing the [SPACE] bar. The script should run until the user presses [SPACE], then pause for 20 seconds, and then carry on. How can I continuously watch the keyboard input while the script is running?
One way to do it:
#!/bin/bash -eu
script(){ # a mock for your script
    while :; do
        echo working
        sleep 1
    done
}
set -m         # use job control
script &       # run it in the background in a separate process group
read -sd ' '   # silently read until a space is read
kill -STOP -$! # stop the background process group
sleep 2        # wait 2 seconds (change it to 20 for your case)
kill -CONT -$! # resume the background process group
fg             # put it in the foreground so it's killable with Ctrl+C
I think the simplest way is to implement a script with checkpoints that test whether a pause has been requested (a minimal sketch follows below). Of course, it means your code can never call a 'long'-running command...
A more complex solution is to use signals: the main process executes the script, and a side process catches [SPACE] and sends a stop signal (e.g. SIGSTOP) to the main process. Here I see at least two issues:
- how to share the terminal/keyboard between the two processes (simple if your main script doesn't expect keyboard input),
- if the main script starts several processes, you will have to deal with process groups...
So it really depends on the complexity of your script. You might also consider relying only on the regular job control provided by Bash.
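Here is the checkpoint idea as a minimal, hedged sketch; do_step is a hypothetical stand-in for one short unit of your script's real work, and read -t polls the keyboard between steps:

#!/bin/bash
do_step() {   # hypothetical stand-in for one short unit of real work
    echo working
}

while :; do
    do_step
    # Checkpoint: wait up to 1 second for a single keypress; IFS= keeps
    # read from stripping a literal space out of the result.
    if IFS= read -rs -t 1 -n 1 key && [[ $key == ' ' ]]; then
        echo "paused for 20 seconds"
        sleep 20
    fi
done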
I suggest using a controlling script that freezes your busy script:
kill -SIGSTOP ${PID}
and then
kill -SIGCONT ${PID}
to allow the process to continue.
see https://superuser.com/questions/485884/can-a-process-be-frozen-temporarily-in-linux for more detailed explanation.
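Put together, a minimal controller sketch; it assumes the busy script has saved its PID to a hypothetical busy.pid file (e.g. with echo $$ > busy.pid):

#!/bin/bash
PID=$(cat busy.pid)    # hypothetical file holding the busy script's PID

kill -SIGSTOP "$PID"   # freeze the busy script
sleep 20               # do something else, or just wait
kill -SIGCONT "$PID"   # let it continue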

Run / Close Programs over and over again

Is there a way I can write a simple script to run a program, close that program about 5 seconds later, and then repeat?
I just want to be able to run a program that I wrote over and over again, but to do so I'd have to close it about 5 seconds after running it.
Thanks!
If your command is non-interactive (requires no user interaction):
Launch your program in the background with control operator &, which gives you access to its PID (process ID) via $!, by which you can kill the running program instance after sleeping for 5 seconds:
#!/bin/bash
# Start an infinite loop.
# Use ^C to abort.
while :; do
    # Launch the program in the background.
    /path/to/your/program &
    # Wait 5 seconds, then kill the program (if still alive).
    sleep 5 && { kill $! && wait $!; } 2>/dev/null
done
If your command is interactive:
More work is needed if your command must run in the foreground to allow user interaction: then it is the command to kill the program after 5 seconds that must run in the background:
#!/bin/bash
# Turn on job control, so we can bring a background job back to the
# foreground with `fg`.
set -m
# Start an infinite loop.
# CAVEAT: The only way to exit this loop is to kill the current shell.
# Setting up an INT (^C) trap doesn't help.
while :; do
    # Launch program in background *initially*, so we can reliably
    # determine its PID.
    # Note: The command line being sent to the background is invariably
    # printed to stderr. I don't know how to suppress it (the usual tricks
    # involving subshells and group commands do not work).
    /path/to/your/program &
    pid=$! # Save the PID of the background job.
    # Launch the kill-after-5-seconds command in the background.
    # Note: A status message is invariably printed to stderr when the
    # command is killed. I don't know how to suppress it (the usual tricks
    # involving subshells and group commands do not work).
    { (sleep 5 && kill $pid &) } 2>/dev/null
    # Bring the program back to the foreground, where you can interact
    # with it. Execution blocks until the program terminates - whether by
    # itself or by the background kill command.
    fg
done
Check out the watch command. It will let you run a program repeatedly monitoring the output. Might have to get a little fancy if you need to kill that program manually after 5 seconds.
https://linux.die.net/man/1/watch
A simple example:
watch -n 5 foo.sh
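If each run must also be killed after about 5 seconds, one option (assuming GNU coreutils' timeout is available) is to let timeout do the killing while watch does the repeating:

# Re-launch every 5 seconds; timeout kills each run after 5 seconds.
watch -n 5 timeout 5 /path/to/your/program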
To literally answer your question:
Run 10 times with sleep 5:
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do
    # your script
    sleep 5
    let COUNTER=COUNTER+1
done
Run continuously:
#!/bin/bash
while [ 1 ]; do
    # your script
    sleep 5
done
If the program takes no input, you can simply do:
#!/bin/bash
while [ 1 ]
do
    ./exec_name
    if [ $? -eq 0 ]
    then
        sleep 5
    fi
done

Bash: Start and kill child process

I have a program I want to start. Let's say this program will run a while(true) loop (so it does not terminate). I want to write a bash script which:
Starts the program (./endlessloop &)
Waits 1 second (sleep 1)
Kills the program --> How?
I cannot use $! to get pid from child because server is running a lot of instances concurrently.
Store the PID:
./endlessloop & endlessloop_pid=$!
sleep 1
kill "$endlessloop_pid"
You can also check whether the process is still running with kill -0:
if kill -0 "$endlessloop_pid"; then
    echo "Endlessloop is still running"
fi
...and storing the content in a variable means it scales to multiple processes:
endlessloop_pids=( ) # initialize an empty array to store PIDs
./endlessloop & endlessloop_pids+=( "$!" ) # start one in background and store its PID
./endlessloop & endlessloop_pids+=( "$!" ) # start another and store its PID also
kill "${endlessloop_pids[#]}" # kill both endlessloop instances started above
See also BashFAQ #68, "How do I run a command, and have it abort (timeout) after N seconds?"
The ProcessManagement page on the Wooledge wiki also discusses relevant best practices.
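As an aside, when GNU coreutils is available, the start/sleep/kill sequence above can be collapsed into a single timeout invocation:

timeout 1 ./endlessloop   # runs endlessloop, sends SIGTERM after 1 second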
You can use the pgrep command for the same purpose:
kill $(pgrep endlessloop)
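pkill, from the same procps package as pgrep, merges the lookup and the kill into one step; note that, like the pgrep version, it signals every matching instance:

pkill endlessloop   # equivalent to kill $(pgrep endlessloop)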

How to restart a BASH script from itself with a signal?

For example, I have a script with an infinite loop printing something to stdout. I need to trap a signal (for example SIGHUP) so that it restarts the script with a different PID and the loop starts itself again from 0. Killing and starting doesn't work as expected:
function traphup(){
    kill $0
    exec $0
}
trap traphup HUP
Maybe I should place something in background or use nohup, but I am not familiar with this command.
In your function:
traphup(){
    $0 "$@" &
    exit 0
}
This starts a new process in the background with the original command name and arguments (vary arguments to suit your requirements) with a new process ID. The original shell then exits. Don't forget to sort out the PID file if your daemon uses one to identify itself - but the restart may do that anyway.
Note that using nohup would be the wrong direction; the first time you launched the daemon, it would respond to the HUP signal, but the one launched with nohup would ignore the signal, not restarting again - unless you explicitly overrode the 'ignore' status, which is a bad idea for various reasons.
Answering the comment:
I'm not quite sure what the trouble is.
When I run the following script, I only see one copy of the script in ps output, regardless of whether I start it as ./xx.sh or as ./xx.sh &.
#!/bin/bash
traphup()
{
    $0 "$$" &
    exit 0
}
trap traphup HUP
echo
sleep 1
i=1
while [ $i -lt 1000 ]
do
    echo "${1:-<none>}: $$: $i"
    sleep 1
    : $(( i++ ))
done
The output contains lines such as:
<none>: 1155: 21
<none>: 1155: 22
<none>: 1155: 23
1155: 1649: 1
1155: 1649: 2
1155: 1649: 3
1155: 1649: 4
The ones with '<none>' are the original process; the second set are the child process (1649) reporting its parent (1155). This output made it easy to track which process to send HUP signals to. (The initial echo and sleep gets the command line prompt out of the way of the output.)
My suspicion is that what you are seeing depends on the content of your script - in my case, the body of the loop is simple. But if I had a pipeline or something in there, then I might see a second process with the same name. But I don't think that would change depending on whether the original script is run in foreground or background.

Why can't I use job control in a bash script?

In this answer to another question, I was told that
in scripts you don't have job control
(and trying to turn it on is stupid)
This is the first time I've heard this, and I've pored over the bash.info section on Job Control (chapter 7), finding no mention of either of these assertions. [Update: The man page is a little better, mentioning 'typical' use, default settings, and terminal I/O, but no real reason why job control is particularly ill-advised for scripts.]
So why doesn't script-based job-control work, and what makes it a bad practice (aka 'stupid')?
Edit: The script in question starts a background process, starts a second background process, then attempts to put the first process back into the foreground so that it has normal terminal I/O (as if run directly), which can then be redirected from outside the script. Can't do that to a background process.
As noted by the accepted answer to the other question, there exist other scripts that solve that particular problem without attempting job control. Fine. And the lambasted script uses a hard-coded job number — Obviously bad. But I'm trying to understand whether job control is a fundamentally doomed approach. It still seems like maybe it could work...
What he meant is that job control is by default turned off in non-interactive mode (i.e. in a script.)
From the bash man page:
JOB CONTROL
Job control refers to the ability to selectively stop (suspend)
the execution of processes and continue (resume) their execution at a
later point.
A user typically employs this facility via an interactive interface
supplied jointly by the system’s terminal driver and bash.
and
set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
...
-m Monitor mode. Job control is enabled. This option is on by
default for interactive shells on systems that support it (see
JOB CONTROL above). Background processes run in a separate
process group and a line containing their exit status is
printed upon their completion.
When he said "is stupid" he meant not only that:
- job control is meant mostly for facilitating interactive control (whereas a script can work directly with the PIDs), but also that,
- quoting his original answer, it "... relies on the fact that you didn't start any other jobs previously in the script which is a bad assumption to make". Which is quite correct.
UPDATE
In answer to your comment: yes, nobody will stop you from using job control in your bash script -- there is no hard case for forcefully disabling set -m (i.e. yes, job control from the script will work if you want it to). Remember that in the end, especially in scripting, there is always more than one way to skin a cat, but some ways are more portable, more reliable, make it simpler to handle error cases, parse the output, etc.
Your particular circumstances may or may not warrant a way different from what lhunath (and other users) deem "best practices".
Job control with bg and fg is useful only in interactive shells. But & in conjunction with wait is useful in scripts too.
On multiprocessor systems, spawning background jobs can greatly improve the script's performance, e.g. in build scripts where you want to start at least one compiler per CPU, or when processing images using ImageMagick tools in parallel, etc.
The following example runs up to 8 parallel gcc's to compile all source files in an array:
#!/bin/bash
...
for ((i = 0, end=${#sourcefiles[@]}; i < end;)); do
    for ((cpu_num = 0; cpu_num < 8; cpu_num++, i++)); do
        if ((i < end)); then gcc "${sourcefiles[i]}" & fi
    done
    wait
done
There is nothing "stupid" about this. But you'll require the wait command, which waits for all background jobs before the script continues. The PID of the last background job is stored in the $! variable, so you may also wait ${!} to wait for just that job. Note also the nice command; a short sketch of both follows.
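For instance, a minimal sketch combining the two (big_file.c is just a hypothetical example input):

nice -n 10 gcc -c big_file.c &   # start one compile at lower CPU priority
pid=$!                           # remember its PID
# ... other foreground work ...
wait "$pid"                      # wait for that specific background job only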
Sometimes such code is useful in makefiles:
buildall:
	for cpp_file in *.cpp; do gcc -c $$cpp_file & done; wait
(the recipe line must be indented with a literal tab, as always in makefiles)
This gives much finer control than make -j.
Note that & is a line terminator like ; (write command& not command&;).
Hope this helps.
Job control is useful only when you are running an interactive shell, i.e., when you know that stdin and stdout are connected to a terminal device (/dev/pts/* on Linux). Then it makes sense to have something in the foreground, something else in the background, etc.
Scripts, on the other hand, don't have such a guarantee. Scripts can be made executable and run without any terminal attached. It doesn't make sense to have foreground or background processes in this case.
You can, however, run other commands non-interactively in the background (appending "&" to the command line) and capture their PIDs with $!. Then you use kill to kill or suspend them (simulating Ctrl-C or Ctrl-Z on the terminal, as if the shell were interactive). You can also use wait (instead of fg) to wait for the background process to finish.
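A short sketch of that pattern, where long_task is a hypothetical stand-in for any long-running command:

./long_task &       # start in the background, no job control needed
pid=$!

kill -TSTP "$pid"   # suspend it, like pressing Ctrl-Z
sleep 5
kill -CONT "$pid"   # resume it in the background, like bg
wait "$pid"         # block until it finishes, like fg (minus the terminal)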
It could be useful to turn on job control in a script to set traps on
SIGCHLD. The JOB CONTROL section in the manual says:
The shell learns immediately whenever a job changes state. Normally,
bash waits until it is about to print a prompt before reporting
changes in a job's status so as to not interrupt any other output. If
the -b option to the set builtin command is enabled, bash reports
such changes immediately. Any trap on SIGCHLD is executed for each
child that exits.
(emphasis is mine)
Take the following script, as an example:
dualbus@debian:~$ cat children.bash
#!/bin/bash
set -m
count=0 limit=3
trap 'counter && { job & }' CHLD
job() {
    local amount=$((RANDOM % 8))
    echo "sleeping $amount seconds"
    sleep "$amount"
}
counter() {
    ((count++ < limit))
}
counter && { job & }
wait
dualbus@debian:~$ chmod +x children.bash
dualbus@debian:~$ ./children.bash
sleeping 6 seconds
sleeping 0 seconds
sleeping 7 seconds
Note: CHLD trapping seems to be broken as of bash 4.3
In bash 4.3, you could use 'wait -n' to achieve the same thing,
though:
dualbus@debian:~$ cat waitn.bash
#!/home/dualbus/local/bin/bash
count=0 limit=3
trap 'kill "$pid"; exit' INT
job() {
    local amount=$((RANDOM % 8))
    echo "sleeping $amount seconds"
    sleep "$amount"
}
for ((i=0; i<limit; i++)); do
    ((i>0)) && wait -n; job & pid=$!
done
dualbus@debian:~$ chmod +x waitn.bash
dualbus@debian:~$ ./waitn.bash
sleeping 3 seconds
sleeping 0 seconds
sleeping 5 seconds
You could argue that there are other ways to do this in a more
portable way, that is, without CHLD or wait -n:
dualbus@debian:~$ cat portable.sh
#!/bin/sh
count=0 limit=3
trap 'counter && { brand; job & }; wait' USR1
unset RANDOM; rseed=123459876$$
brand() {
    [ "$rseed" -eq 0 ] && rseed=123459876
    h=$((rseed / 127773))
    l=$((rseed % 127773))
    rseed=$((16807 * l - 2836 * h))
    RANDOM=$((rseed & 32767))
}
job() {
    amount=$((RANDOM % 8))
    echo "sleeping $amount seconds"
    sleep "$amount"
    kill -USR1 "$$"
}
counter() {
    [ "$count" -lt "$limit" ]; ret=$?
    count=$((count+1))
    return "$ret"
}
counter && { brand; job & }
wait
dualbus@debian:~$ chmod +x portable.sh
dualbus@debian:~$ ./portable.sh
sleeping 2 seconds
sleeping 5 seconds
sleeping 6 seconds
So, in conclusion, set -m is not that useful in scripts, since
the only interesting feature it brings to scripts is being able to
work with SIGCHLD. And there are other ways to achieve the same thing
either shorter (wait -n) or more portable (sending signals yourself).
Bash does support job control, as you say. In shell script writing, there is often an assumption that you can't rely on the fact that you have bash, but that you have the vanilla Bourne shell (sh), which historically did not have job control.
I'm hard-pressed these days to imagine a system in which you are honestly restricted to the real Bourne shell. Most systems' /bin/sh will be linked to bash. Still, it's possible. One thing you can do is instead of specifying
#!/bin/sh
You can do:
#!/bin/bash
That, and your documentation, would make it clear your script needs bash.
Possibly o/t, but I quite often use nohup when I ssh into a server to run a long-running job, so that if I get logged out the job still completes.
I wonder if people are confusing stopping and starting from a master interactive shell and spawning background processes? The wait command allows you to spawn a lot of things and then wait for them all to complete, and like I said I use nohup all the time. It's more complex than this and very underused - sh supports this mode too. Have a look at the manual.
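For example, a typical (hypothetical) invocation:

# Keep the job alive after logout; output goes to job.log instead of the tty.
nohup ./long_job > job.log 2>&1 &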
You've also got
kill -STOP pid
I quite often do that if I want to suspend the currently running sudo, as in:
kill -STOP $$
But woe betide you if you've jumped out to the shell from an editor - it will all just sit there.
I tend to use mnemonic -KILL etc. because there's a danger of typing
kill - 9 pid # note the space
and in the old days you could sometimes bring the machine down because it would kill init!
Jobs DO work in bash scripts.
BUT, you NEED to watch out for the spawned subshells, like:
ls -1 /usr/share/doc/ | while read -r doc ; do ... done
jobs will have a different context on each side of the |.
One way to bypass this is to use for instead of while:
for doc in $(ls -1 /usr/share/doc) ; do ... done
This should demonstrate how to use jobs in a script, with the mention that my commented note below is REAL (I don't know why it behaves that way):
#!/bin/bash
for i in $(seq 7) ; do ( sleep 100 ) & done
jobs
while [ $(jobs | wc -l) -ne 0 ] ; do
    for jobnr in $(jobs | awk '{print $1}' | cut -d\[ -f2- | cut -d\] -f1) ; do
        kill %$jobnr
    done
    # this is REALLY ODD ... but while won't exit without this ... dunno why
    jobs >/dev/null 2>/dev/null
done
sleep 1
jobs
