executing bash loop while command is running - bash

I want to build a bash script that executes a command and meanwhile performs other stuff, with the possibility of killing the command if the script is killed. Say, it executes a cp of a large file and meanwhile prints the elapsed time since the copy started; but if the script is killed, it also kills the copy.
I don't want to use rsync, for two reasons: 1) it is slow, and 2) I want to learn how to do it; it could be useful.
I tried this:
until cp SOURCE DEST
do
    # evaluates time, stuff, commands, file dimensions, not important now
    # and echoes something
done
but it doesn't execute the do ... done block, because it waits for the copy to finish. Could you please suggest something?

until is the opposite of while; it has nothing to do with doing stuff while another command runs. For that, you need to run your task in the background with &.
cp SOURCE DEST &
pid=$!

# If this script is killed, kill the `cp'.
trap "kill $pid 2> /dev/null" EXIT

# While copy is running...
while kill -0 $pid 2> /dev/null; do
    # Do stuff
    ...
    sleep 1
done

# Disable the trap on a normal exit.
trap - EXIT
kill -0 checks if a process is running. Note that it doesn't actually signal the process and kill it, as the name might suggest. Not with signal 0, at least.
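For illustration, a minimal probe (with a hypothetical $pid); signal 0 performs only the existence/permission check:
pid=12345   # hypothetical PID
if kill -0 "$pid" 2> /dev/null; then
    echo "process $pid is alive"
else
    echo "process $pid has exited (or belongs to another user)"
fi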

There are three steps involved in solving your problem:
Execute a command in the background, so it will keep running while your script does something else. You can do this by following the command with &. See the section on Job Control in the Bash Reference Manual for more details.
Keep track of that command's status, so you'll know if it is still running. You can do this with the special variable $!, which is set to the PID (process identifier) of the last command you ran in the background, or empty if no background command was started. Linux creates a directory /proc/$PID for every process that is running and deletes it when the process exits, so you can check for the existence of that directory to find out if the background command is still running. You can learn more than you ever wanted to know about /proc from the Linux Documentation Project's File System Hierarchy page or Advanced Bash-Scripting Guide.
Kill the background command if your script is killed. You can do this with the trap command, which is a bash builtin command.
Putting the pieces together:
# Look for the 4 common signals that indicate this script was killed.
# If the background command was started, kill it, too.
trap '[ -z "$!" ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
cp "$SOURCE" "$DEST" & # Copy the file in the background.
# The /proc directory exists while the command runs.
while [ -e /proc/$! ]; do
    echo -n "." # Do something while the background command runs.
    sleep 1     # Optional: slow the loop so we don't use up all the dots.
done
Note that we check the /proc directory to find out if the background command is still running, because kill -0 will generate an error if it's called when the process no longer exists.
Update to explain the use of trap:
The syntax is trap [arg] [sigspec …], where sigspec … is a list of signals to catch, and arg is a command to execute when any of those signals is raised. In this case, the command is a list:
'[ -z "$!" ] || kill $!'
This is a common bash idiom that takes advantage of the way || is processed. An expression of the form cmd1 || cmd2 will evaluate as successful if either cmd1 OR cmd2 succeeds. But bash is clever: if cmd1 succeeds, bash knows that the complete expression must also succeed, so it doesn't bother to evaluate cmd2. On the other hand, if cmd1 fails, the result of cmd2 determines the overall result of the expression. So an important feature of || is that it will execute cmd2 only if cmd1 fails. That means it's a shortcut for the (invalid) sequence:
if cmd1; then
    # do nothing
else
    cmd2
fi
With that in mind, we can see that
trap '[ -z "$!" ] || kill $!' SIGHUP SIGINT SIGQUIT SIGTERM
will test whether $! is empty (which means the background task was never started). If that test fails, meaning the task was started, it kills the task.

Here is the simplest way to do that, using ps -p:
[command_1_to_execute] &
pid=$!
while ps -p $pid &> /dev/null; do
    [command_2_to_execute_while_command_1_runs]
    sleep 10
done
This will run command_2 every 10 seconds for as long as command_1 is still running in the background.
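For instance, applied to the original copy question (the file names are made up), the pattern could look like this:
cp /data/big.iso /backup/ &   # command_1: a long-running copy
pid=$!
start=$SECONDS
while ps -p $pid &> /dev/null; do
    echo "still copying... $((SECONDS - start))s elapsed"   # command_2
    sleep 10
done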
Hope this helps :)

What you want is to do two things at once in shell. The usual way to do that is with a job. You can start a background job by ending the command with an ampersand.
cp "$SOURCE" "$DEST" &
You can then use the jobs command to check its status.
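For example, a minimal sketch of polling with jobs (assuming the copy is the script's only background job; jobs -r lists only running jobs):
cp "$SOURCE" "$DEST" &
while [ -n "$(jobs -r)" ]; do
    echo -n "."
    sleep 1
done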
Read more:
Gnu Bash Job Control

Related

What's the effect of combining exec and & in shell script?

I maintain some legacy Linux shell scripts, and I came across something like this:
#!/bin/sh
foo()
{
    exec some_shell_command &
    return 0
}
foo
I'm very curious about the effect of such a script. Is some_shell_command executed in another subprocess? And after the exec command runs, does the shell script's process become the some_shell_command process?
Thanks in advance.
update:
The script is:
exec /mnt/usr/bin/pppd $DIAL_DEV unit $count call $PROVIDER ipparam $PROVIDER &
and at sometime:
# Shutdown ppp connection.
pppOff() {
    # Get device index.
    local index=$1
    # Check connection.
    pppCheck $index
    if [ $? -ne 0 ]; then
        echo "invalid pppd: "$index
        return 1
    fi
    # Get pid.
    local PID=$PPPD_PID
    echo "pppd pid for "$index": "$PID
    # Kill
    kill -TERM ${PID}
    return 0
}
After executing pppOff, the script itself is killed. So perhaps pppd is executed as the same process as the script?
Is some_shell_command executed in another subprocess?
Yes.
After the execution of the exec command, does the shell script's process become the some_shell_command process?
There are two processes: the one spawned for the background job becomes some_shell_command, and the parent continues execution.
does it mean 'exec' is meaningless?
It has very little meaning in this specific context. Generally, you should expect that Bash optimizes: if Bash finds out there is only one command, it will optimize it into an exec.
$ strace -ff bash -c '/bin/echo 1' 2>&1 | grep clone
# nothing, because `fork()` is optimized out
There are cases (see https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/builtins/evalstring.c#L441 and https://github.com/bminor/bash/blob/f3a35a2d601a55f337f8ca02a541f8c033682247/builtins/evalstring.c#L124) where the command is not optimized, mostly when there is a trap or some signal handling that Bash needs to execute after the command is done.
Another difference is that exec specifically requires an executable, whereas without exec, some_shell_command could be a builtin, a function, or an alias.
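A small demonstration of that last point (greet is a hypothetical function):
greet() { echo "from a function"; }
( greet )          # works: the subshell inherits the function
( exec greet )     # fails: exec needs a real executable on PATH
( exec echo hi )   # works: exec finds /bin/echo rather than the builtin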

How can I run multiple bash scripts in unison?

I'm learning Bash for a Unix class, and I'm trying to figure out how to run a script, then run a second script while the first is running and have the two interact. To clarify, the scripts look like this:
#!/bin/bash
num = 1
trap exit 0 SIGINT SIGTERM
trap "{ echo $num ; num++; }" SIGUSR1
while :
do
    sleep 2
done
and the second one:
#!/bin/bash
if ps | grep "$1" > /dev/null
then
    kill -SIGUSR1 $1
else
    echo "Process doesn't exist"
fi
exit 0
In case the code isn't correct, the general idea is for the first script to loop until it receives a SIGINT or SIGTERM, and to echo and increment a number whenever it receives a SIGUSR1. The second script takes a pid as an argument, checks whether it exists, and sends a SIGUSR1 to the given process. The problem is that when I run the first script, I can't do anything unless I move it to the background with ctrl-z, but once it's there it doesn't seem to respond to any signal except a kill signal. Any ideas on how to make this work?
You can use mycommand & to run a script in the background. Ctrl-Z stops the script, but you can then use bg to let it run in the background. In either case, you can use fg to bring it to the foreground again.
Also note that you can't have spaces around the = in assignments, and you can use let num++ to increment num. You should also single-quote the command in trap, to prevent "$num" from expanding immediately.
All in all:
#!/bin/bash
num=1
trap exit 0 SIGINT SIGTERM
trap '{ echo $num ; let num++; }' SIGUSR1
while :
do
    sleep 2
done
Finally, you can check whether a pid exists more easily by just using kill -0 pid, or by simply attempting to send it SIGUSR1 and checking the result; this avoids grep "123" matching the pid "1234" and the like.
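For example, the second script could be reduced to a sketch like this:
#!/bin/bash
if kill -0 "$1" 2> /dev/null; then
    kill -SIGUSR1 "$1"
else
    echo "Process doesn't exist"
fi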
You need to make the first script run in the background. When you press Ctrl+Z it is suspended. Then you can type "bg" to make it run in the background (it will stop again if it tries to read from standard input, to allow you to switch back to it with the "fg" command).
Another way is to start script1 already in the background like this:
$ ./script1 &
The ampersand starts a job in the background and returns you to the prompt immediately.
Look in the bash man page under "JOB CONTROL" (here's a copy) for more information on how this works. The key commands for dealing with jobs from an interactive shell are "jobs", "fg", and "bg".
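Putting it together, a sample session might look like this (the PID is made up):
$ ./script1 &           # start the counter in the background
[1] 12345
$ ./script2 12345       # each call makes script1 echo and increment
$ ./script2 12345
$ kill 12345            # the SIGTERM trap makes script1 exit cleanly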

How do I terminate all the subshell processes?

I have a bash script to test how a server performs under load.
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for i in {1 .. $num}; do
    (while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done) &
done
wait
When I hit Ctrl-C, the main process exits, but the background loops keep running. How do I make them all exit? Or is there a better way of spawning a configurable number of logic loops executing in parallel?
Here's a simpler solution -- just add the following line at the top of your script:
trap "kill 0" SIGINT
Killing 0 sends the signal to all processes in the current process group.
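A minimal sketch of the whole pattern:
#!/bin/bash
trap "kill 0" SIGINT   # Ctrl-C now kills the script and all of its children
for i in 1 2 3; do
    while true; do sleep 1; done &
done
wait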
One way to kill subshells, but not self:
kill $(jobs -p)
Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes).
If you just want to make sure one specific child-process (and its own children) are tidied up then a better solution is to kill by process group (PGID) using the sub-process' PID, like so:
set -m
./some_child_script.sh &
some_pid=$!
kill -- -${some_pid}
Firstly, the set -m command will enable job management (if it isn't already enabled). This is important, as otherwise all commands, sub-shells etc. will be assigned to the same process group as your parent script (unlike when you run the commands manually in a terminal), and kill will just give a "no such process" error. This needs to be called before you run the background command you wish to manage as a group (or just call it at script start if you have several).
Secondly, note that the argument to kill is negative; this indicates that you want to kill an entire process group. By default the process group ID is the same as the PID of the first command in the group, so we can get it by simply adding a minus sign in front of the PID we fetched with $!. If you need to get the process group ID in a more complex case, you will need to use ps -o pgid= ${some_pid}, then add the minus sign to that.
Lastly, note the use of the explicit end-of-options marker --. This is important, as otherwise the process group argument would be treated as an option (a signal number), and kill would complain it doesn't have enough arguments. You only need this if the process group argument is the first one you wish to terminate.
Here is a simplified example of a background timeout process, and how to cleanup as much as possible:
#!/bin/bash
# Use the overkill method in case we're terminated ourselves
trap 'kill $(jobs -p | xargs)' SIGINT SIGHUP SIGTERM EXIT
# Setup a simple timeout command (an echo)
set -m
{ sleep 3600; echo "Operation took longer than an hour"; } &
timeout_pid=$!
# Run our actual operation here
do_something
# Cancel our timeout
kill -- -${timeout_pid} >/dev/null 2>&1
wait -- -${timeout_pid} >/dev/null 2>&1
printf '' 2>&1
This should cleanly handle cancelling this simplistic timeout in all reasonable cases; the only case that can't be handled is the script being terminated immediately (kill -9), as it won't get a chance to clean up.
I've also added a wait followed by a no-op (printf ''); this is to suppress "Terminated" messages that can be caused by the kill command. It's a bit of a hack, but it is reliable enough in my experience.
You need to use job control, which, unfortunately, is a bit complicated. If these are the only background jobs that you expect will be running, you can run a command like this one:
jobs \
| perl -ne 'print "$1\n" if m/^\[(\d+)\][+-]? +Running/;' \
| while read -r ; do kill %"$REPLY" ; done
jobs prints a list of all active jobs (running jobs, plus recently finished or terminated jobs), in a format like this:
[1] Running sleep 10 &
[2] Running sleep 10 &
[3] Running sleep 10 &
[4] Running sleep 10 &
[5] Running sleep 10 &
[6] Running sleep 10 &
[7] Running sleep 10 &
[8] Running sleep 10 &
[9]- Running sleep 10 &
[10]+ Running sleep 10 &
(Those are jobs that I launched by running for i in {1..10} ; do sleep 10 & done.)
perl -ne ... is me using Perl to extract the job numbers of the running jobs; you can obviously use a different tool if you prefer. You may need to modify this script if your jobs has a different output format; the above output is from Cygwin, but it's very likely identical to yours.
read -r reads a "raw" line from standard input and saves it in the variable $REPLY. kill %"$REPLY" will be something like kill %1, which "kills" (sends SIGTERM, by default, to) job number 1. (Not to be confused with kill 1, which would signal process number 1.) Together, while read -r ; do kill %"$REPLY" ; done goes through each job number printed by the Perl script and kills it.
By the way, your for i in {1 .. $num} won't do what you expect, since brace expansion is handled before parameter expansion, so what you have is equivalent to for i in "{1" .. "$num}". (And you can't have white-space inside a brace expansion anyway.) Unfortunately, I don't know of a clean alternative; I think you have to do something like for i in $(bash -c "echo {1..$num}"), or else switch to an arithmetic for-loop or whatnot.
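For what it's worth, the arithmetic for-loop alternative (plus seq, where available) sketched out:
num=5
# Arithmetic for-loop: no brace expansion involved.
for ((i = 1; i <= num; i++)); do
    echo "$i"
done
# Or seq, where available:
for i in $(seq 1 "$num"); do
    echo "$i"
done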
Also by the way, you don't need to wrap your while-loop in parentheses; & already causes the job to be run in a subshell.
Here's my eventual solution. I'm keeping track of the subshell process IDs using an array variable, and trapping the Ctrl-C signal to kill them.
declare -a subs # array of subshell pids
function kill_subs() {
    for pid in "${subs[@]}"; do
        kill $pid
    done
    exit 0
}
num=1
if [ $# -gt 0 ]; then
    num=$1
fi
for ((i=0; i < $num; i++)); do
    while true; do
        { time curl --silent 'http://localhost'; } 2>&1 | grep real
    done &
    subs[$i]=$! # grab the pid of the subshell
done
trap kill_subs 1 2 15
wait
While this is not an answer, I just would like to point out something which invalidates the selected one: using jobs or kill 0 might have unexpected results. In my case it killed unintended processes, which is not an option for me.
This has been hinted at in some of the answers, but I'm afraid not with enough stress, or it has not been considered:
"Bit of a late answer, but for me solutions like kill 0 or kill $(jobs -p) go too far (kill all child processes)."
"If these are the only background jobs that you expect will be running, you can run a command like this one:"

Set trap in bash for different process with PID known

I need to set a trap for a bash process I'm starting in the background. The background process may run very long and has its PID saved in a specific file.
Now I need to set a trap for that process, so if it terminates, the PID file will be deleted.
Is there a way I can do that?
EDIT #1
It looks like I was not precise enough with my description of the problem. I have full control over all the code, but the long running background process I have is this:
cat /dev/random >> myfile &
When I now add the trap at the beginning of the script this statement is in, $$ will be the PID of that bigger script, not of the small background process I am starting here.
So how can I set traps for that background process specifically?
(./jobsworthy& echo $! > $pidfile; wait; rm -f $pidfile)&
disown
Add this to the beginning of your Bash script.
#!/bin/bash
trap 'rm "$pidfile"; exit' EXIT SIGQUIT SIGINT SIGSTOP SIGTERM ERR
pidfile=$(tempfile -p foo -s $$)
echo $$ > "$pidfile"
# from here, do your long running process
You can run your long running background process in an explicit subshell, as already shown by Petesh's answer, and set a trap inside this specific subshell to handle the exiting of your long running background process. The parent shell remains unaffected by this subshell trap.
(
    trap '
        trap - EXIT ERR
        kill -0 ${!} 1>/dev/null 2>&1 && kill ${!}
        rm -f pidfile.pid
        exit
    ' EXIT QUIT INT TERM ERR

    # simulate background process
    sleep 15 &
    echo ${!} > pidfile.pid
    wait
) &
disown

# remove background process by hand
# kill -TERM ${!}
You do not need trap just to run some command after a background process terminates. Instead, you can run the background process and the follow-up command in a single shell command line, separated by a semicolon, and run that shell in the background in place of the original background process.
If you would still like to have some notification in your shell script, send and trap SIGUSR2, for instance:
#!/bin/sh
BACKGROUND_PROCESS=xterm # for my testing, replace with what you have
sh -c "$BACKGROUND_PROCESS; rm -f the_pid_file; kill -USR2 $$" &
trap "echo $BACKGROUND_PROCESS ended" USR2
while sleep 1
do
    echo -n .
done

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by doing it in a subshell which runs the script. This leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a subshell invocation, while being more complicated and possibly even altering the behavior of the script, if the script alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # give the trap time to be installed
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdout/stderr
sleep 1 # give the kill time to complete
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here, in parentheses) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
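One way to get that pid back is command substitution; a sketch (the redirect matters, since the command substitution would otherwise wait on the child's open stdout):
pid=$( set +m; sleep 300 > /dev/null 2>&1 & echo $! )
kill "$pid" 2> /dev/null   # no job-control message in this shell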
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only be killed silently by sending SIGINT (as Ctrl-C does) instead of the default SIGTERM.
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding jobs 2>&1 >/dev/null to the script. I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal. For example:
{ kill -9 $PID; } 2>/dev/null
