I have the following bash script:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 0
    fi
done
For some reason, it is echoing launchd message, waiting for a full 5 seconds, and then exiting.
Why is this happening, and how do I make it exit immediately after it echoes 'launchd message'?
Since you're using a pipe, the while loop is being run in a subshell. Run it in the main shell instead.
#!/bin/bash
while ...
do
    ...
done < <(tail ...)
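For example, a minimal sketch of the script rewritten this way (with grep -q standing in for the grep -c count test):

#!/bin/bash
# The loop now runs in the main shell, so exit ends the whole script at once.
while read line
do
    if echo "$line" | grep -q 'launchd'; then
        echo 'launchd message'
        exit 0
    fi
done < <(tail -F -n0 /private/var/log/system.log)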
As indicated by Ignacio, your tail | while creates a subshell. The delay is because it's waiting for the next line to be written to the log file before everything closes.
You can add this line immediately before your exit command if you'd prefer not to use process substitution:
kill -SIGPIPE $$
Unfortunately, I don't know of any way to control the exit code using this method. It will be 141 which is 128 + 13 (the signal number of SIGPIPE).
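In context, a sketch of the original loop with that line added (again using grep -q for brevity):

tail -F -n0 /private/var/log/system.log | while read line
do
    if echo "$line" | grep -q 'launchd'; then
        echo 'launchd message'
        kill -SIGPIPE $$   # $$ is the main shell's PID even inside the subshell
        exit 0             # never reached; the signal above ends the script
    fi
done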
If you're trying to make the startup of a daemon dependent on another one having started, there's probably a better way to do that.
By the way, if you're really writing a Bash script (which you'd have to be to use <() process substitution), you can write your if like this: if [[ $line == *launchd* ]].
You can also exit the subshell with a tell-tale exit code and then test the value of "$?" to get the same effect you're looking for:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 10
    fi
done
if [ $? -eq 10 ]; then exit 0; fi
I currently have a script that does something like
./a | ./b | ./c
I want to modify it so that if any of a, b, or c exit with an error code I print an error message and stop instead of piping bad output forward.
What would be the simplest/cleanest way to do so?
In bash you can use set -e and set -o pipefail at the beginning of your file. A subsequent command ./a | ./b | ./c will then fail when any of the three scripts fails, and the pipeline's return code will be that of the rightmost script that failed.
Note that pipefail isn't available in standard sh.
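A minimal sketch (./a, ./b, ./c stand in for the question's scripts):

#!/bin/bash
set -e -o pipefail

# pipefail: the pipeline's status is the rightmost non-zero exit status.
# set -e: the script aborts as soon as the pipeline reports failure.
./a | ./b | ./c
echo "all three succeeded"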
You can also check the ${PIPESTATUS[@]} array after the full execution, e.g. if you run:
./a | ./b | ./c
then ${PIPESTATUS[@]} will hold the exit codes of each command in the pipe, so if the middle command failed, echo "${PIPESTATUS[@]}" would print something like:
0 1 0
and something like this run after the command:
test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0
will allow you to check that all commands in the pipe succeeded.
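If you'd rather not spell out each index, here's a sketch of a loop over the array (note that PIPESTATUS must be copied immediately, because the very next command overwrites it):

./a | ./b | ./c
codes=("${PIPESTATUS[@]}")   # copy right away; any later command resets PIPESTATUS

for i in "${!codes[@]}"; do
    if [ "${codes[$i]}" -ne 0 ]; then
        echo "command $((i + 1)) in the pipe failed with status ${codes[$i]}" >&2
        exit 1
    fi
done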
If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:
tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b < $tmp.1 > $tmp.2
    then
        if ./c < $tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]
The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1' so I've used that unambiguous notation for reliability for ages.
This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15
The first trap line says: run the commands rm -f $tmp.[12]; exit 1 when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE, or 15 SIGTERM occurs, or on 0 (when the shell exits for any reason).
If you're writing a shell script, the final trap only needs to remove the trap on 0, which is the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
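Putting the pieces together, a condensed sketch (the nested ifs collapsed into an && chain; ./a, ./b, ./c are the question's hypothetical commands):

#!/bin/sh
tmp=${TMPDIR:-/tmp}/mine.$$

# On a signal (or a premature exit), remove the temporaries and exit non-zero.
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15

if ./a > $tmp.1 && ./b < $tmp.1 > $tmp.2 && ./c < $tmp.2
then status=0
else status=1
fi

rm -f $tmp.[12]
trap 0          # clear the exit trap so this normal exit keeps $status
exit $status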
In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.
If you want to detect which command(s) fail, you can use:
(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)
This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.
Simple experimentation with 'set -e' didn't help.
Unfortunately, the answer by Jonathan requires temporary files, and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before the later processes are started: all processes are started at once, and will thus all run before any errors can be communicated. But the title of the question also asked about error codes. These can be retrieved and investigated after the pipe has finished, to figure out whether any of the involved processes failed.
Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.
# Note the braces and the order "2>&1 > /dev/null": the group's stderr (the
# failure markers) is captured by $(...), while normal output is discarded.
res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
         (./b 2>&1 || echo "2nd failed with $?" >&2) |
         (./c 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null)
if [ -n "$res" ]; then
    echo pipe failed
fi
To detect whether anything failed, an echo command prints on standard error in case any command fails. Then the combined standard error output is saved in $res and investigated later. This is also why the standard error of every process is redirected to standard output: that way only the failure markers travel on the error channel. You can also send that output to /dev/null, or leave it as yet another indicator that something went wrong. You can replace the last redirect to /dev/null with a file if you need to store the output of the last command somewhere.
To play more with this construct and to convince yourself that this really does what it should, I replaced ./a, ./b and ./c by subshells which execute echo, cat and exit. You can use this to check that this construct really forwards all the output from one process to another and that the error codes get recorded correctly.
res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
         (sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
         (sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2); } 2>&1 > /dev/null)
if [ -n "$res" ]; then
    echo pipe failed
fi
This answer is in the spirit of the accepted answer, but using shell variables instead of temporary files.
if TMP_A="$(./a)"
then
    if TMP_B="$(echo "$TMP_A" | ./b)"
    then
        if TMP_C="$(echo "$TMP_B" | ./c)"
        then
            echo "$TMP_C"
        else
            echo "./c failed"
        fi
    else
        echo "./b failed"
    fi
else
    echo "./a failed"
fi
I'm new at bash scripting, and I am trying to write a script that checks whether an ethernet device is up and, if so, exits.
That doesn't work as intended; maybe someone can give me a hint.
I start the script, then plug in my device, and the script just seems to hang in the terminal. It never gets back to the command line.
When the device is already plugged in and the ethernet device is up, the script runs perfectly: it echoes 'Connected' and throws me back to the command line.
#!/bin/sh
t1=$(ifconfig | grep -o enxca1aea4347b1)
t2='enxca1aea4347b1'
while [ "$t1" != "$t2" ]
do
    sleep 1
done
echo "Connected"
exit 0
You don't even need to make the comparison; just check the exit status of grep.
t2='enxca1aea4347b1'
until ifconfig | grep -q "$t2"; do
    sleep 1
done
echo "Connected"
exit 0
In fact, you don't even need grep (note that [[ ... =~ ... ]] is a bashism, so the #!/bin/sh shebang would need to become #!/bin/bash):
until [[ "$(ifconfig)" =~ $t2 ]]; do
    sleep 1
done
You've made an infinite loop, since you're not updating the value of $t1 inside the while statement.
Instead, try:
#!/bin/sh
t1=$(ifconfig | grep -o enxca1aea4347b1)
t2='enxca1aea4347b1'
while [ "$t1" != "$t2" ]
do
    sleep 1
    t1=$(ifconfig | grep -o enxca1aea4347b1)
done
echo "Connected"
exit 0
The following script works as expected when executed from an AppleScript do shell script command.
#!/bin/sh
sleep 10 &
#echo "hello world" > /tmp/apipe &
cpid=$!
sleep 1
if ps -ef | grep $cpid | grep sleep | grep -qv grep ; then
    echo "killing blocking cmd..."
    kill -KILL $cpid
    # non zero status to inform launch script of problem...
    exit 1
fi
But if the sleep command (line 2) is swapped for the echo command (line 3), together with a matching change to the if statement, the script blocks when run from AppleScript but runs fine from the terminal command line.
Any ideas?
EDIT: I should have mentioned that the script works properly when a consumer/reader is connected to the pipe. It only blocks when nothing is reading from the pipe...
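For reference, here is the blocking behaviour in isolation (this assumes the pipe was created with mkfifo):

mkfifo /tmp/apipe
echo "hello world" > /tmp/apipe &   # blocks: a fifo write waits for a reader
sleep 1
jobs                                # the background write is still running
cat /tmp/apipe                      # attaching a reader lets the echo complete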
OK, the following will do the trick. It basically kills the job using its jobid. Since there is only one, it's the current job %%.
I was lucky that I came across this answer, or it would have driven me crazy :)
#!/bin/sh
echo $1 > $2 &
sleep 1
# Following is necessary. Seems to need it or
# job will not complete! Also seen at
# https://stackoverflow.com/a/10736613/348694
echo "Checking for running jobs..."
jobs
kill %% >/dev/null 2>&1
if [ $? -eq 0 ] ; then
    echo "Taking too long. Killed..."
    exit 1
fi
exit 0
So I have this Bash script:
#!/bin/bash
PID=`ps -u ...`
if [ "$PID" = "" ]; then
echo $(date) Server off: not backing up
exit
else
echo "say Server backup in 10 seconds..." >> fifo
sleep 10
STARTTIME="$(date +%s)"
echo nosave >> fifo
echo savenow >> fifo
tail -n 3 -f server.log | while read line
do
if echo $line | grep -q 'save complete'; then
echo $(date) Backing up...
OF="./backups/backup $(date +%Y-%m-%d\ %H:%M:%S).tar.gz"
tar -czhf "$OF" data
echo autosave >> fifo
echo "$(date) Backup complete, resuming..."
echo "done"
exit 0
echo "done2"
fi
TIMEDIFF="$(($(date +%s)-STARTTIME))"
if ((TIMEDIFF > 70)); then
echo "Save took too long, canceling backup."
exit 1
fi
done
fi
Basically, the server takes input from a fifo and outputs to server.log. The fifo is used to send stop/start commands to the server for autosaves. At the end, once it receives the message from the server that the server has completed a save, it tar's the data directory and starts saves again.
It's at the exit 0 line that I'm having trouble. Everything executes fine, but I get this output:
srv:scripts $ ./backup.sh
Sun Nov 24 22:42:09 EST 2013 Backing up...
Sun Nov 24 22:42:10 EST 2013 Backup complete, resuming...
done
But it hangs there. Notice how "done" echoes but "done2" fails. Something is causing it to hang on exit 0.
ADDENDUM: Just to avoid confusion for people looking at this in the future, it hangs at the exit line and never returns to the command prompt. Not sure if I was clear enough in my original description.
Any thoughts? This is the entire script, there's nothing else going on and I'm calling it direct from bash.
Here's a smaller, self-contained example that exhibits the same behavior:
echo foo > file
tail -f file | while read; do exit; done
The problem is that since each part of the pipeline runs in a subshell, exit only exits the while read loop, not the entire script.
It will then hang until tail finds a new line, tries to write it, and discovers that the pipe is broken.
To fix it, you can replace
tail -n 3 -f server.log | while read line
do
    ...
done
with
while read line
do
    ...
done < <(tail -n 3 -f server.log)
By redirecting from a process substitution instead, the loop doesn't run in a subshell, so exit actually exits the entire script, and the shell doesn't have to wait for tail to finish the way it would at the end of a pipeline.
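Applied to the small self-contained example above, this version exits immediately:

echo foo > file
while read; do exit; done < <(tail -f file)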
But it hangs there. Notice how "done" echoes but "done2" fails.
done2 won't be printed at all, since the exit 0 has already terminated the shell running the loop (here, the pipeline's subshell) with return code 0.
I don't know the details of bash subshells inside loops, but normally the appropriate way to exit a loop is to use the "break" command. In some cases that's not enough (you really need to exit the program), but refactoring that program may be the easiest (safest, most portable) way to solve that. It may also improve readability, because people don't expect programs to exit in the middle of a loop.
I wrote a tiny Bash script to find all the Mercurial changesets (starting from the tip) that contain the string passed as argument:
#!/bin/bash
CNT=$(hg tip | awk '{ print $2 }' | head -c 3)
while [ $CNT -gt 0 ]
do
    echo rev $CNT
    hg log -v -r$CNT | grep $1
    let CNT=CNT-1
done
If I interrupt it by hitting ctrl-c, more often than not the command currently executed is "hg log" and it's that command that gets interrupted, but then my script continues.
I was then thinking of checking the return status of "hg log", but because I'm piping it into grep I'm not too sure as to how to go about it...
How should I go about exiting this script when it is interrupted? (btw I don't know if that script is good at all for what I want to do but it does the job and anyway I'm interested in the "interrupted" issue)
Place at the beginning of your script: trap 'echo interrupted; exit' INT
Edit: As noted in the comments below, this probably doesn't work for the OP's program due to the pipe. The $PIPESTATUS solution works, but it might be simpler to set the script to exit if any program in the pipe exits with an error status: set -e -o pipefail. (Beware that grep itself exits non-zero when it finds no match, so with pipefail the script would also stop at the first revision that doesn't match.)
Rewrite your script like this, using the $PIPESTATUS array to check for a failure:
#!/bin/bash
CNT=$(hg tip | awk '{ print $2 }' | head -c 3)
while [ $CNT -gt 0 ]
do
    echo rev $CNT
    hg log -v -r$CNT | grep $1
    if [ 0 -ne ${PIPESTATUS[0]} ] ; then
        echo hg failed
        exit
    fi
    let CNT=CNT-1
done
The $PIPESTATUS variable will allow you to check the results of every member of the pipe.