How can I use read timeouts with stat? - bash

I have the following code:
#!/bin/bash
read -t1 < <(stat -t "/my/mountpoint")
if [ $? -eq 1 ]; then
echo NFS mount stale. Removing...
umount -f -l /my/mountpoint
fi
How do I mute the output of stat while still being able to detect its error level in the subsequent test?
Adding >/dev/null 2>&1 inside the subshell, or at the end of the read line, does not work. But there must be a way...
Thanks for any insights on this!

Use Command Substitution, Not Process Substitution
Instead of reading in from process substitution, consider using command substitution instead. For example:
mountpoint=$(stat -t "/my/mountpoint" 2>&1)
This will silence the output by storing standard output in a variable, but leave the results retrievable by dereferencing $mountpoint. This approach also leaves the exit status accessible through $?.
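For example, a minimal sketch along those lines, keeping the OP's original actions (note that the $? check must come immediately after the substitution):
#!/bin/bash
# The assignment's exit status is that of the command inside $(...).
output=$(stat -t "/my/mountpoint" 2>&1)
if [ $? -ne 0 ]; then
echo "NFS mount stale. Removing..."
umount -f -l /my/mountpoint
fi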
A Clearer Alternative
Alternatively, you might just rewrite this more simply as:
mountpoint="/my/mountpoint"
if stat -t "$mountpoint" 2>&-
then
echo "NFS mount stale. Removing..."
umount -f -l "$mountpoint"
fi
To me, this seems more intention-revealing and less error-prone, but your mileage may certainly vary.
(Ab)using Read Timeouts
In the comments, the OP asked whether read timeouts could be abused to handle hung input from stat. The answer is yes, if you close standard error and check for an empty $REPLY string. For example:
mountpoint="/my/mountpoint"
read -t1 < <(stat -t "$mountpoint" 2>&-)
if [[ -n "$REPLY" ]]; then
echo "NFS mount stale. Removing..."
umount -f -l "$mountpoint"
fi
This works for several reasons:
When using the read builtin in Bash:
If no NAMEs are supplied, the line read is stored in the REPLY variable.
With standard error closed, $REPLY will be empty unless stat returns something on standard output, which it won't if it encounters an error. In other words, you're checking the contents of the $REPLY string instead of the exit status from read.
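Note, too, that read itself reports a timeout through its exit status (a value greater than 128), so you can combine both signals if you like; a small sketch building on the code above:
mountpoint="/my/mountpoint"
read -t1 < <(stat -t "$mountpoint" 2>&-)
status=$?
# Timed out (stat hung), or stat failed and printed nothing: treat as stale.
if (( status > 128 )) || [[ -z "$REPLY" ]]; then
echo "NFS mount stale. Removing..."
umount -f -l "$mountpoint"
fi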

I think I got it! The redirection mentioned in your response seems to work within the subshell without wiping out the return code like 2>&1 did. So this works as expected:
read -t1 < <(rpcinfo -t 10.0.128.1 nfs 2>&-)
if [ $? -eq 0 ]; then
echo "NFS server/service available!"
else
echo "NFS server/service unavailable!"
fi
Where 10.0.128.1 is a 'bad' IP (no server/service responding), the script times out within a second and prints "NFS server/service unavailable!", with no output from rpcinfo. Likewise, when the IP is good, the desired response is printed.
I upvoted your response!

Related

Display output of command in terminal while using command substitution

So I'm trying to check the output of a command, but I also want to be able to display the output directly in the terminal.
#!/bin/bash
while :
do
OUT=$(streamlink -o "$NAME" "$STREAM" best)
echo "$OUT"
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
This is what I tried to do.
The code checks whether the output of the command contains that error substring and, if so, adds a delay. That part works well.
But it doesn't work well when the command is actually downloading a file, because the echo won't run until the download is finished (which can take hours). Until then I have no way of checking the command's output myself.
Plus, the output of this particular command displays and updates the speed and file size in real time, something echo wouldn't be able to replicate.
So is there a way to be able to display the output of a command in real-time, while also command substituting them in order to check the output for substrings after the command is finished?
Use a temporary file:
TEMP=$(mktemp) || exit 1
while true
do
streamlink -o "$NAME" "$STREAM" best |& tee "$TEMP"
OUT=$( cat "$TEMP" )
#echo "$OUT" # not longer needed
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
# not really needed here because of endless loop
rm -f "$TEMP"

My $? command succeeded code fails, why?

In my bash script, I am trying to have rsync retry 10 times if it loses connection to its destination before giving up.
I know this code traps all errors but I can live with that for now.
My code is:
lops="0"
while true; do
let "lops++"
rsync $OPT $SRCLINUX $TRG 2>&1 | tee --append ${LOGFILE}
if [ "$?" -eq "0" ]; then
echolog "rsync finished WITHOUT error after $lops times"
break
else
echolog "Re-starting rsync for the ${lops}th time due to ERRORS"
fi
if [[ "$lops" == "10" ]]; then
echolog "GAVE UP DUE TO TOO MANY rsync ERRORS - BACKUP NOT FINISHED"
break
fi
done
It does not work as expected; here is what happens on the first error:
TDBG6/
rsync: connection unexpectedly closed (131505 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.1]
rsync finished WITHOUT error after 1 times
Is this because the $? contains the return value of tee, NOT rsync?
How can I fix this? (I am personally Linux syntax limited :)
I see at least 2 possibilities to fix your problem:
use PIPESTATUS:
An array variable containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
Use it as:
rsync $OPT $SRCLINUX $TRG 2>&1 | tee --append ${LOGFILE}
if (( PIPESTATUS[0] == 0 )); then
use rsync's --log-file option.
Notes:
you have lots of quotes missing in your code!
don't use uppercase variable names.
don't use let; use ((...)) for arithmetic instead.
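Putting the PIPESTATUS fix and those notes together, the loop might look like this. A sketch, assuming echolog is your existing logging function and that each variable holds a single word (use arrays if $opt carries several flags):
lops=0
while true; do
(( lops++ ))
rsync "$opt" "$srclinux" "$trg" 2>&1 | tee --append "$logfile"
if (( PIPESTATUS[0] == 0 )); then # rsync's status, not tee's
echolog "rsync finished WITHOUT error after $lops tries"
break
fi
if (( lops == 10 )); then
echolog "GAVE UP DUE TO TOO MANY rsync ERRORS - BACKUP NOT FINISHED"
break
fi
echolog "Re-starting rsync for the ${lops}th time due to ERRORS"
done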
In addition to the other suggestions for working around the pipe confounding your exit code, you can avoid the pipe by using process substitution like so:
rsync "$OPT" "$SRCLINUX" "$TRG" &> >( tee --append "${LOGFILE}" )
which will redirect both stdout and stderr (that's the &> part) into a "file" that is connected to stdin of the process within the >(...), in this case the tee command you want. So it's very much like the pipe, but without the pipe (the pipe connects stdout of the left to stdin of the right behind the scenes, we've just pushed it out in the open here).
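One practical consequence: since tee is no longer a pipeline stage, $? afterwards is rsync's own exit status, so a plain test works again. For example:
rsync "$OPT" "$SRCLINUX" "$TRG" &> >( tee --append "${LOGFILE}" )
if [ $? -eq 0 ]; then
echo "rsync finished WITHOUT error"
fi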
I believe that the status is the status of the last command in the pipeline.
I don't know how to deal with this in general, but for this specific case you can just redirect the output of the whole loop:
while true; do
...
done | tee -a "$LOGFILE"
Depending on what it's for, this may also mean you don't need the "echolog" function.

Checking if output of a command contains a certain string in a shell script

I'm writing a shell script, and I'm trying to check if the output of a command contains a certain string. I'm thinking I probably have to use grep, but I'm not sure how. Does anyone know?
Testing $? is an anti-pattern.
if ./somecommand | grep -q 'string'; then
echo "matched"
fi
Test the return value of grep:
./somecommand | grep 'string' &> /dev/null
if [ $? == 0 ]; then
echo "matched"
fi
which is done idiomatically like so:
if ./somecommand | grep -q 'string'; then
echo "matched"
fi
and also:
./somecommand | grep -q 'string' && echo 'matched'
Another option is to check for regular expression match on the command output.
For example:
[[ "$(./somecommand)" =~ "sub string" ]] && echo "Output includes 'sub string'"
A clean if/else conditional shell script:
if ls | grep -q "$1"
then
echo "exists"
else
echo "doesn't exist"
fi
SHORT ANSWER
All the above (very excellent) answers assume that grep can "see" the output of the command, which isn't always true:
SUCCESS can be sent to STDOUT while FAILURE to STDERR.
So depending on which stream you test, your grep can fail. That is to say, if you are testing for the FAILURE case, you must redirect the command's STDERR to STDOUT using 2>&1 so that grep can see it.
LONGER ANSWER w/ PROOFS
I had what I thought was a very simple test in a bash script using grep and it kept failing. Much head scratching followed. Use of set -x in my script revealed that the variable was empty! So I created the following test to understand how things were breaking.
NOTE: iscsiadm is a Linux tool from the "open-iscsi" package used to connect/disconnect a host to SAN storage. The command iscsiadm -m session shows whether any LUN connections are established:
#!/bin/bash
set -x
TEST1=$(iscsiadm -m session)
TEST2=$(iscsiadm -m session 2>&1)
echo
echo 'Print TEST1'
echo $TEST1
echo
echo 'Print TEST2'
echo $TEST2
echo
If a LUN WAS connected, BOTH variables were successfully populated with values:
Print TEST1
tcp: [25] 192.168.X.XX:3260,1 iqn.2000-01.com.synology:ipdisk.Target-LUN1 (non-flash) tcp: [26] 192.168.X.XX:3260,1 iqn.2000-01.com.synology:storagehost.Target-LUN1 (non-flash)
Print TEST2
tcp: [25] 192.168.X.XX:3260,1 iqn.2000-01.com.synology:ipdisk.Target-LUN1 (non-flash) tcp: [26] 192.168.X.XX:3260,1 iqn.2000-01.com.synology:storagehost.Target-LUN1 (non-flash)
However, if a LUN WASN'T connected, iscsiadm sent its output to STDERR, and only the "TEST2" variable was populated, because we had redirected STDERR to STDOUT using 2>&1; the "TEST1" variable, which had no such redirection, was empty:
iscsiadm: No active sessions.
Print TEST1
Print TEST2
iscsiadm: No active sessions.
CONCLUSION
If you have a funky, half-broken situation like this (works in one direction but not the other), try the above test with iscsiadm replaced by your own command, and you should get the visibility you need to rewrite your test so it works correctly.
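In practice, that means a test for a FAILURE message should fold STDERR into STDOUT before grepping. A minimal sketch reusing the iscsiadm example:
if iscsiadm -m session 2>&1 | grep -q 'No active sessions'; then
echo "No LUNs connected"
fi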

conditional redirection in bash

I have a bash script that I want to be quiet when run without attached tty (like from cron).
I now was looking for a way to conditionally redirect output to /dev/null in a single line.
This is an example of what I had in mind, but the real script will have many more commands that produce output:
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
echo "is this visible?" $REDIRECT
Unfortunately, this does not work:
$ ./conditional-redirect.sh
is this visible?
$ echo "" | ./conditional-redirect.sh
is this visible? >& /dev/null
what I don't want to do is duplicate all commands in a with-redirection or with-no-redirection variant:
if tty -s; then
echo "is this visible?"
else
echo "is this visible?" >& /dev/null
fi
EDIT:
It would be great if the solution also provided a way to output something in "quiet" mode; e.g., when something is really wrong, I might want to get a notice from cron.
For bash, you can use the line:
exec &>/dev/null
This will direct all stdout and stderr to /dev/null from that point on. It uses the non-argument version of exec.
Normally, something like exec xyzzy would replace the program in the current process with a new program but you can use this non-argument version to simply modify redirections while keeping the current program.
So, in your specific case, you could use something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec &>/dev/null
fi
If you want the majority of output to be discarded but still want to output some stuff, you can create a new file handle to do that. Something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec 3>&1 &>/dev/null
else
exec 3>&1
fi
echo Normal # won't see this.
echo Failure >&3 # will see this.
I found another solution, but I feel it is clumsy, compared to paxdiablo's answer:
if tty -s; then
REDIRECT=/dev/tty
else
REDIRECT=/dev/null
fi
echo "Normal output" &> $REDIRECT
You can use a function:
function the_code {
echo "is this visible?"
# as many code lines as you want
}
if tty -s; then # or other condition
the_code
else
the_code >& /dev/null
fi
This works well for me. If DUMP_FILE is empty, things go to stdout; otherwise they go to the file. It does the job without explicit redirection, using just pipes and existing applications.
function stdout_or_file
{
local DUMP_FILE=${1:-}
if [ -z "${DUMP_FILE}" ]; then
cat
else
sed -n "w ${DUMP_FILE}"
fi
}
function foo()
{
local MSG=$1
echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can squeeze this also in one line
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}" another command that does the same is dd status=none of=${DUMP_FILE}
The simplest solution is to use eval (a shell builtin), as it will act on the redirection in the expanded variable. It will also act on anything else in the command line, so add extra quoting as required. (Note the extra single quotes around the echo string below: the '?' would otherwise trigger shell filename expansion.)
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
eval echo '"is this visible?"' $REDIRECT

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
The case in point: I'd like to add escape codes to colorize output, but only when run interactively, not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
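Applied to the colorizing use case from the question, a minimal POSIX sketch enables the escape codes only when file descriptor 1 is a terminal:
#!/bin/sh
if [ -t 1 ]; then
red='\033[31m'
reset='\033[0m'
else
red=''
reset=''
fi
# %b tells printf to interpret the backslash escapes in its arguments
printf '%berror:%b something failed\n' "$red" "$reset"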
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the STD streams always look like they are being piped. To demonstrate this, using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
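For reference, the socket test mentioned above uses Bash's -S conditional; under the older ssh behavior you could write:
[[ -S /dev/stdin ]] && echo 'STDIN is a socket (pre-pipe ssh behavior)'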
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
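For example, on Linux (and only on Linux, as just noted) you can read where the stdout descriptor points. A sketch:
# /proc/$$/fd/1 is a symlink describing what stdout is attached to
target=$(readlink "/proc/$$/fd/1")
case "$target" in
pipe:*) echo "stdout is a pipe" ;;
/dev/pts/*|/dev/tty*) echo "stdout is a terminal" ;;
*) echo "stdout points at: $target" ;;
esac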
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The command test (builtin in Bash), has an option to check if a file descriptor is a tty.
if [ -t 1 ]; then
# Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
# stdout is a terminal
else
# stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works: the -p test does not respond as desired.
File bash_redir_test.sh looks like:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable or recommended, but for the sake of completeness, here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
