In a bash script, I'm waiting on a child process's pid using wait. That child process is writing to a log file. Is there a way, in the bash script, to tail that log file to stdout while simultaneously waiting on the process to complete?
Use the tail command to follow the file while you wait for the command to finish.
command &
cmdpid=$!
tail -f -n +0 logfile &
wait $cmdpid
kill $!
This is similar in spirit to William's solution, but with one important difference: it keeps printing the log even when command runs longer than it takes cat to dump the file's current contents (quite likely, since cat is fast and exits as soon as it has printed what's there). The -n +0 option tells tail to print the whole file before it starts following updates.
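Putting it together, a minimal runnable sketch (command and logfile are placeholders, as above; the trap makes sure the background tail is cleaned up even if the script is interrupted):
#!/bin/bash
command > logfile 2>&1 &                  # placeholder for the real command writing the log
cmdpid=$!
tail -f -n +0 logfile &                   # dump the whole file, then follow it
tailpid=$!
trap 'kill "$tailpid" 2>/dev/null' EXIT   # stop tail on any exit path
wait "$cmdpid"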
Run cat in the background:
cmd-that-logs-to-file &
pid=$!
cat file &
wait $pid
kill $! # Kill the cat
This makes it simpler:
command &
pid=$!
tail --pid=$pid -f /path/to/log
How do I kill the child process?
inotifywait -mqr --format '%w %f %e' $feedDir | while read dir file event
do
#something
done &
echo $! #5431
Running ps shows, for example:
$ ps
PID TTY TIME CMD
2867 pts/3 00:00:02 bash
5430 pts/3 00:00:00 inotifywait
5431 pts/3 00:00:00 bash
5454 pts/3 00:00:00 ps
It seems if I kill 5431 then 5430 (inotifywait) will be left running, but if I kill 5430 then both processes die. I don't suppose I can reliably assume that the pid of inotifywait will always be 1 less than $!?
When we run a pipeline, each command is executed in a separate process. The shell waits for the last one, unless we put the pipeline in the background with an ampersand (&):
cmd1 | cmd2 &
The pids of these processes will probably be close together, but we cannot rely on that. When the last command of the pipeline is a bash compound command such as while, bash runs it in a dedicated subshell (that's why your dir, file and event variables no longer exist after the done keyword). Example:
ps # shows one bash process
echo "azerty" | while read line; do ps; done # shows one more bash
When the first command exits, the second one terminates because its read on the pipe returns EOF.
When the second command exits, the first one is terminated by SIGPIPE (a write on a pipe with no reader) the next time it tries to write to the pipe. But if the first command just waits indefinitely and never writes, it is never terminated.
echo "$!" prints the pid of the last command executed in background. In your case, the bash process that is executing the while loop.
You can find the pid of inotifywait with the following syntax, but it's ugly:
(inotifywait ... & echo "$!">inotifywait.pid) | \
while read dir file event
do
#something
done &
cat inotifywait.pid # prints pid of inotifywait
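Note that the whole pipeline runs in the background, so the pid file may not have been written yet when you read it; a small polling loop (a sketch, using the same file name) avoids the race:
while [ ! -s inotifywait.pid ]; do sleep 0.1; done   # sleep 0.1 is a GNU extension; use sleep 1 for strict POSIX
kill "$(cat inotifywait.pid)"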
If you don't want the pid, but just want to be sure the process will be terminated, you can use the -t option of inotifywait:
(while true; do inotifywait -t 10 ...; done)| \
while read dir file event
do
#something
done &
kill "$!" # kill the while loop
None of these solutions is nice. What are you really trying to achieve? Maybe we can find a more elegant solution.
If your goal is to make sure all of the children can be killed or interrupted elegantly, and you're using BusyBox's ash (which has no process substitution) and don't want to use an extra file descriptor either, check out this solution:
#!/bin/sh
pid=$$
terminate() {
    pkill -9 -P "$pid"    # kill every child of this script
}
trap terminate HUP INT QUIT TERM
# do your stuff here, note: should be run in the background {{{
inotifywait -mqr --format '%w %f %e' $feedDir | while read dir file event
do
#something
done &
# }}}
# Either pkill -9 -P "$pid" here
wait
# or pkill -9 -P "$pid" here
Or in another shell:
kill <pid ($$)>
I am attempting to run a couple of commands in a bash script; however, it hangs on one command, waiting for it to complete (which it won't). The script simply makes sure the process is running:
#!/bin/bash
ps cax | grep python > /dev/null
if [ $? -eq 0 ]; then
echo "Process is running."
else
echo "Process is not running... Starting..."
python likebot.py
echo $(ps aux | grep python | grep -v color | awk '{print $2}')
fi
Once it gets to the python command, it hangs while that command executes. It's not until I press Ctrl-C that it prints the pid. Is there any way to run this bash script so that it exits once the commands have been launched (without waiting for them to complete)?
In general, if you want to execute a command and not wait for it, you can simply use & as the delimiter rather than ; or a newline. When you do, the pid of that process is available to the shell in the special variable $!. If you want to wait for that process to complete, use wait. If you do not wish to wait for it, simply omit the wait. In your case:
python likebot.py & # Start command asynchronously
echo $! # echo the pid of the most recent asynchronous process
Since it looks like likebot should always be running, you might want to consider nohup as well: with a bare &, the job is still a child of your login shell and will die if that dies.
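A sketch of that (likebot.log is an assumed log file name):
nohup python likebot.py > likebot.log 2>&1 &   # detached from the terminal, survives logout
echo $!                                        # pid of the backgrounded python process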
I have a script that starts background processes.
#!/bin/sh
./process1.sh &
./process2.sh &
I need to kill these processes using a separate script.
Here is what I did:
#!/bin/sh
# the kill.sh
pid=$(ps | grep './process1.sh' |grep -v grep| awk '{print $1}')
kill -9 $pid
Question time:
When kill.sh is called, the processes are stopped, but I get the message
"sh: you need to specify whom to kill".
Why is that?
After I kill the process using the described script, it doesn't stop immediately. For a while I still see output on the screen, as if the process were still running. Why?
What could be an alternative solution to kill the processes?
It's worth mentioning that I am working with BusyBox, so I have a limited choice of utilities.
You could store the process ids in a temporary file like this:
#!/bin/sh
./process1.sh &
echo $! > /tmp/process1.pid
./process2.sh &
echo $! > /tmp/process2.pid
and then delete it with your script. $! returns the PID of the process last executed.
kill -9 `cat /tmp/process*.pid`
rm /tmp/process*.pid
Make sure the process*.pid files get deleted after the corresponding script is finished.
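One way to guarantee that cleanup (a sketch; unlike the original it assumes the starter script waits for its child, so that an EXIT trap can remove the pid file however the script ends):
#!/bin/sh
./process1.sh &
echo $! > /tmp/process1.pid
trap 'rm -f /tmp/process1.pid' EXIT   # remove the pid file on any exit
wait                                  # block until process1.sh finishes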
You get that message because your kill command failed: pid was empty.
pid=$(ps | grep './process1.sh' |grep -v grep| awk '{print $1}')
This doesn't give you the pid you want. When you start the script in the background, it is executed in a new shell, and you won't see process1.sh in your ps output.
What you can do is save the PIDs when you start the background processes and kill them:
./process1.sh &
pid1=$! # Save the previously started background's PID
./process2.sh &
pid2=$! # Save the previously started background's PID
echo $pid1 " " $pid2 > /tmp/killfile
Then get the PIDs from this file and pass it to kill.
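For example, a matching kill script built on that file (a sketch):
#!/bin/sh
# kill.sh: terminate the processes whose pids were saved at startup
kill $(cat /tmp/killfile) 2>/dev/null   # word splitting passes both pids to kill
rm -f /tmp/killfile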
I have a couple of scripts to control some applications (start/stop/list/etc). Currently my "stop" script just sends an interrupt signal to an application, but I'd like to have more feedback about what application does when it is shutting down. Ideally, I'd like to start tailing its log, then send an interrupt signal and then keep tailing that log until the application stops.
How to do this with a shell script?
For just tailing a log file until a certain process stops (using tail from GNU coreutils):
do_something > logfile &
tail --pid $! -f logfile
UPDATE: The above contains a race condition: if do_something spews many lines into logfile, tail will skip all of them but the last few. To avoid that and always have tail print the complete logfile, add a -n +1 parameter to the tail call (that option is even in POSIX tail(1)):
do_something > logfile &
tail --pid $! -n +1 -f logfile
Here's a Bash script that works without --pid. Change $log_file and $p_name to suit your needs:
#!/bin/bash
log_file="/var/log/messages"
p_name="firefox"
tail -n10 "$log_file"
last_line="$(tail -n1 "$log_file")"
# Poll: print the newest line whenever it changes, for as long as the process lives.
# Note this can miss lines if the process logs faster than the loop iterates.
while [ "$(ps aux | grep "$p_name" | grep -v grep | wc -l)" -gt 0 ]
do
    curr_line="$(tail -n1 "$log_file")"
    if [ "$curr_line" != "$last_line" ]
    then
        echo "$curr_line"
        last_line="$curr_line"
    fi
done
echo "[*] $p_name exited !!"
If you need to tail the log until the process exits, but also want to watch its stdout/stderr at the same time, try this:
# Run some process in bg (background):
some_process &
# Get process id:
pid=$!
# Tail the log once it is created, but watch process stdout/stderr at the same time:
tail --pid=$pid -f --retry log_file_path &
# Since tail is running in bg also - wait until the process has completed:
tail --pid=$pid -f /dev/null
How can you suppress the Terminated message that comes up after you kill a process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
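Putting it together, a minimal script you can run to verify that nothing is printed (the message would normally appear when the shell reaps the job during wait):
#!/bin/bash
sleep 30 &             # some background job
kill $!                # send SIGTERM; on its own this would produce "Terminated"
wait $! 2>/dev/null    # reap it with stderr silenced, so the message is dropped
echo "done, no Terminated message above"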
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make the redirection temporary by running the script in a subshell, which leaves the original environment alone:
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside from the first one is that it saves a sub-shell invocation, while being more complicated and, possibly even altering the behavior of the script, if the script alters file descriptors.
EDIT: For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
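A sketch of the disown approach:
sleep 100 &
disown $!    # remove the job from the shell's job table
kill $!      # no "Terminated" message: the shell no longer reports on this job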
The Terminated message is logged by the default signal handling of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here with the parentheses), you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or to evaluate its return code.
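If you do need the pid back, you can echo it out of the subshell (a sketch; redirecting the background job's stdout matters, otherwise the command substitution would block until the job exits):
pid=$( set +m; sleep 30 >/dev/null 2>&1 & echo "$!" )
kill "$pid"    # the job was never in this shell's job table, so no message is printed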
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only be killed silently by sending SIGINT (what Ctrl-C sends) instead of the default SIGTERM.
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script; I'm not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
    kill "$1"
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal. For example:
{ kill -9 $PID; } 2>/dev/null