bash script beep depending on stdout message

Never coded in bash before, but I need something urgent. Sorry if this is not the norm, but I would really like some help.
I have some messages that are written to stdout. Depending on the message type (the message is a string containing the word "found"), I need the bash script to beep.
So far I've come up with this.
output=$(command 1) # getting stdout stream?
while [ true ]; do
    if [ "$output" = "found" ]; then # if the stdout has the word "found"
        echo $(echo -e '\a') # this makes the beep sound
    fi
done
I'm not sure where/how to add a grep or awk command to check for the string containing the word "found" and return only "found", so that the if condition can check against that word.
Thanks!

You can do something as simple as:
command | grep -q 'found' && echo -e '\a'
If the output of command contains the text "found", then grep will return with a zero exit status, so the echo command will be executed, causing the beep.
If the output does not contain "found", grep will exit with status 1, and will not result in the echo.
Depending on what you need to make the beep work, just replace anything after the &&. The general syntax would be something like:
command | grep -q "$SEARCH" && command_if_found || command_if_not_found
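If the command keeps producing output and you want to beep on every matching line rather than check once, a minimal sketch (assuming command runs continuously and emits one message per line):
command | while IFS= read -r line; do
    case $line in
        *found*) echo -e '\a' ;; # beep for each line containing "found"
    esac
done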

Related

Generate other exit behavior if output from pipeline is empty

I have a bash shell script doing something like
#!/bin/bash
# construct regex from input
# set FILE according to input file
egrep "${regex}" "${FILE}" | doing stuff | sort
I want this script to write the output (a list of new line separated matches) of the command to stdout if matches are found (which it is doing). If no matches are found it needs to write out an error message to stderr and exit with exit status 3.
I tried this
#!/bin/bash
# construct regex from input
# set FILE according to input file
function check () {
    if ! read > /dev/null
    then
        echo "error message" 1>&2
        exit 3
    fi
}
egrep "${regex}" "${FILE}" | doing stuff |
    sort | tee >(check)
Now the correct error message is written out, but the exit status "cannot escape the subshell"; the outer script still exits with exit status 0.
I also tried something like
#!/bin/bash
# construct regex from input
# set FILE according to input file
if ! egrep "${regex}" "${FILE}" | doing stuff | sort
then
    echo "error message" 1>&2
    exit 3
fi
But here I have the problem that the exit status of the pipe is that of the last command (sort), which exits with status 0.
How can I get my desired exit status 3 and error message while keeping the output for normal execution and without doing all the stuff twice?
EDIT:
I can solve the problem by using
#!/bin/bash
# construct regex from input
# set FILE according to input file
if ! egrep "${regex}" "${FILE}" | doing stuff | sort | grep .
then
    echo "error message" 1>&2
    exit 3
fi
However, I am not sure this is the best way, since the commands in a pipe run in parallel...
I would use the PIPESTATUS to check the exit code of egrep:
#!/bin/bash
# construct regex from input
# set FILE according to input file
egrep "${regex}" "${FILE}" | doing stuff | sort
if [[ ${PIPESTATUS[0] != 0 ]]; then
echo "error message" 1>&2
exit 3
fi
Some context:
${PIPESTATUS[@]} is just an array which contains the exit code of every program you chained up. $? will only give you the exit code of the last command in the pipe.
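A quick way to see what ${PIPESTATUS[@]} holds (note that it is overwritten by every subsequent command, so read it immediately after the pipeline):
false | true | true     # $? would be 0 here: the status of the last command
echo "${PIPESTATUS[@]}" # prints: 1 0 0 -- one exit code per command in the pipe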

Bash script for searching a specific word in terminal output

I'm trying to implement a bash script that is supposed to search for a word in a Python script's terminal output.
The Python script doesn't stop, so "&" at the end of the command is needed, but the "if [ $? == 0 ] ; then" condition doesn't work.
How can this be solved?
Thanks, Gal.
#!/bin/bash
#Check if Pixhawk is connected
PORT=/dev/ttyPixhawk
end=$((SECONDS+3))
not_exists=f
/usr/local/bin/mavproxy.py --daemon --non-interactive --master=$PORT | grep 'Failed' &> /dev/null &
while [ $SECONDS -lt $end ] ; do
    if [ $? == 0 ] ; then
        not_exists=t
    fi
    sleep 1
done
if [ $not_exists=t ] ; then
    echo "Not Exists"
else
    echo "Exists"
fi
kill $(pgrep -f '/usr/local/bin/mavproxy.py')
Bash doesn't know anything about the output of background commands. Check for yourself with [ 5444 -lt 3 ] & echo $?.
Your if statement wouldn't work in any case, because $? gives the return value of the most recent command, which in this case is your while loop's test.
You have a few different options. If you're waiting for some output and you know how far into the output your target occurs, you can have the Python script write to a file and keep checking the file size, with a timeout for failure.
You can also continue with a simple timed approach as you have now, where you just check the output after a few seconds and decide success or failure based on that.
You can also make your Python script actually end, provide more error messages, or write only the relevant parts to a file.
Furthermore, you really should run your script through shellcheck.net to notice more problems.
You'll need to define your goal and use case more clearly to get real help; all we can really say is "your approach will not work, but there are definitely approaches which will work"
You are checking the status of the grep command inside the while loop using $?. That only works if $? is read immediately after grep, and grep is not a background process. But in your script, $? will return the status of while [ $SECONDS -lt $end ]. You can instead redirect the output to a temp file and check its status:
/usr/local/bin/mavproxy.py --daemon --non-interactive --master=$PORT | grep 'Failed' &> tmp.txt &
sleep 3
# If the file exists and its size is greater than 0, [ -s file ] returns true
if [ -s tmp.txt ]; then
    echo 'pattern exists'
else
    echo 'pattern not exists'
fi

trying to test zero length output from command in shell script

I'm sort of a newbie when it comes to shell scripting. What am I doing wrong?
I'm trying to grep a running log file and take action if the grep returns data.
# grep for "success" in the log which will tell us if we were successful
tail -f file.log | grep success > named_pipe &
# send signal to my server to do something
/bin/kill -10 $PID
timeout=0;
while : ; do
    OUTPUT=$(cat < named_pipe)
    if test [-n] $OUTPUT
    then
        echo "output is '" $OUTPUT "'"
        echo_success
        break;
    else
        timeout=$((timeout+1))
        sleep 1
        if [ $timeout -ge $SHUTDOWN_TIMEOUT ]; then
            echo_failure
            break
        fi
    fi
done
I'm finding that even when "success" is not in the log, test [-n] $OUTPUT returns true. This is because apparently OUTPUT is equal to " ". Why is OUTPUT a single space rather than empty?
How can I fix this?
Here's a smaller test case for your problem:
output=""
if test [-n] $output
then
echo "Why does this happen?"
fi
This happens because when $output is empty or whitespace, it expands to nothing, and you just run test [-n].
test foo is true when foo is non-empty. It doesn't matter that your foo is a flag wrapped in square brackets.
The correct way to do this is without the brackets, and with quotes:
if test -n "$output"
then
...
fi
As for why $OUTPUT appears to be a single space: it isn't. echo just writes out its arguments separated by spaces, and you passed it multiple arguments. The correct code is echo "output is '$OUTPUT'"
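A minimal demonstration of where that space comes from:
OUTPUT=""
echo "output is '" $OUTPUT "'" # prints: output is ' ' -- echo separates the two arguments with a space
echo "output is '$OUTPUT'"     # prints: output is ''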

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed anyway at the very beginning), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how I can get do-things' exit status from inside the pipeline. $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in this case I also don't know how I can get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax to achieve the same task without escaping the command to be executed like in my example.
I hope the community can help.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command; the command will execute in parallel with the function, and the function will receive its stdout+stderr, wait for its completion, and get its exit status.
Example implementation using eval:
progress_meter() {
    local output
    local errcode
    echo -n -e "$1"
    output=$( { eval "$2"; } 2>&1 )
    errcode=$?
    if (( errcode )); then
        echo '[FAIL]'
        echo "Output was: ${output}"
    else
        echo '[ OK ]'
    fi
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$#" # this just executes a command made up of
# arguments 2, 3, ... of the script
# the real script should actually read its input,
# display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
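To complete that picture, here is a minimal sketch of a progress-meter that consumes the trailing status line (the function body is an assumption; negative array indices need bash 4.3+):
progress-meter() {
    echo -n "$1 "
    local lines=() line
    while IFS= read -r line; do lines+=("$line"); done
    local status=${lines[-1]} # last line is the exit code echoed by the producer
    unset 'lines[-1]'
    if [ "$status" -eq 0 ]; then
        echo '[ OK ]'
    else
        echo '[FAIL]'
        printf '%s\n' "${lines[@]}" # dump the recorded output
    fi
}
Used as: (do-things arg1 arg2; echo $?) 2>&1 | progress-meter "Doing stuff"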
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the
value of the last (rightmost) command to exit
with a non-zero status, or zero if all commands
in the pipeline exit successfully. This option
is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
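Applied to the question's pipeline, a minimal sketch (assuming progress-meter itself exits 0 on success):
set -o pipefail
if ! do-things arg1 arg2 | progress-meter "Doing things..."; then
    echo "pipeline failed" >&2 # with pipefail, a failure of do-things is no longer masked
fi
Note that pipefail only reports the rightmost non-zero status; it cannot tell you which command in the pipeline produced it.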
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: some guy writes on the blackboard, another guy reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command line switches if you like so as not to hardcode the name of the blackboard file, for reporting the exit status of the program that it is reporting the progress for.
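A minimal sketch of that blackboard approach (the status-file handling and the shape of progress-meter are assumptions):
status_file=$(mktemp) # the "blackboard"
{ do-things arg1 arg2; echo $? > "$status_file"; } 2>&1 |
    progress-meter "Doing things..."
if [ "$(cat "$status_file")" -ne 0 ]; then
    echo "do-things failed with status $(cat "$status_file")" >&2
fi
rm -f "$status_file"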

bash script: how to save return value of first command in a pipeline?

Bash: I want to run a command and pipe the results through some filter, but if the command fails, I want to return the command's error value, not the boring return value of the filter:
E.g.:
if !(cool_command | output_filter); then handle_the_error; fi
Or:
set -e
cool_command | output_filter
In either case it's the return value of cool_command that I care about -- for the 'if' condition in the first case, or to exit the script in the second case.
Is there some clean idiom for doing this?
Use the PIPESTATUS builtin variable.
From man bash:
PIPESTATUS
    An array variable (see Arrays below) containing a list of exit
    status values from the processes in the most-recently-executed
    foreground pipeline (which may contain only a single command).
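Applied to the question's example, a minimal sketch:
cool_command | output_filter
if [ "${PIPESTATUS[0]}" -ne 0 ]; then # exit status of cool_command, not of the filter
    handle_the_error
fi
Remember that PIPESTATUS is overwritten by the next command, so check it immediately after the pipeline.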
If you didn't need to display the error output of the command, you could do something like
if ! echo | mysql $dbcreds mysql; then
    error "Could not connect to MySQL. Did you forget to add '--db-user=' or '--db-password='?"
    die "Check your credentials or ensure server is running with /etc/init.d/mysqld status"
fi
In the example, error and die are functions defined elsewhere in the script. $dbcreds is also defined, though it is built from command line options. If the command generates no error, nothing is returned. If an error occurs, text will be returned by this particular command.
Correct me if I'm wrong, but I get the impression you're really looking to do something a little more convoluted than
[ `id -u` -eq '0' ] || die "Must be run as root!"
where you actually grab the user ID prior to the if statement, and then perform the test. Doing it this way, you could then display the result if you choose. This would be
user_id=$(id -u) # note: UID itself is a readonly variable in bash, so use another name
if [ "$user_id" -eq 0 ]; then
    echo "User is root"
else
    echo "User is not root"
    exit 1 ## set an exit code higher than 0 if you're exiting because of an error
fi
The following script uses a fifo to filter the output in a separate process. This has the following advantages over the other answers. First, it is not bash specific. In particular it does not rely on PIPESTATUS. Second, output is not stalled until the command has completed.
$ cat >test_filter.sh <<'EOF'   # quoting EOF keeps $1, $?, $(...) from expanding while writing the file
#!/bin/sh
cmd()
{
    echo "$1"
    echo "$2" >&2
    return "$3"
}
filter()
{
    while read line
    do
        echo "... $line"
    done
}
tmpdir=$(mktemp -d)
fifo="$tmpdir"/out
mkfifo "$fifo"
filter <"$fifo" &
pid=$!
cmd a b 10 >"$fifo" 2>&1
ret=$?
wait $pid
echo exit code: $ret
rm -f "$fifo"
rmdir "$tmpdir"
EOF
$ sh ./test_filter.sh
... a
... b
exit code: 10
