check isatty in bash

I want my shell script to detect whether it is being run interactively by a human and, only in that case, show a prompt. Assume the file name is test.bash:
#!/bin/bash
if [ "x" != "${PS1:-x}" ]; then
    read -p "remove test.log Yes/No" x
    [ "$x" = "n" ] && exit 1
fi
rm -f test.log
But I found it does not work if PS1 is not set. Is there a better method?
My test methods:
./test.bash # human interactive
./test.bash > /tmp/test.log # stdout in batch mode
ls | ./test.bash # stdin in batch mode

To elaborate, I would try:
if [ -t 0 ] ; then
    # this shell has a std-input, so we're not in batch mode
    .....
else
    # we're in batch mode
    ....
fi
I hope this helps.

From help test:
-t FD True if FD is opened on a terminal.
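Applied to the original test.bash, a minimal sketch using this test might be (checking standard input with -t 0):
#!/bin/bash
if [ -t 0 ]; then
    # stdin is a terminal, so ask the human first
    read -p "remove test.log Yes/No " x
    [ "$x" = "n" ] && exit 1
fi
rm -f test.log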

You could make use of the /usr/bin/tty program:
if tty -s
then
    # ...
fi
I admit that I'm not sure how portable it is, but it's at least part of GNU coreutils.

Note that in bash scripts (see the test expr entry in man bash), it is not necessary to use the beefy && and || shell operators to combine two separate runs of the [ command, because the [ command has its own built-in -a (and) and -o (or) operators that let you compose several simpler tests into a single outcome.
So, here is how you can implement the test that you asked for — where you flip into batch mode if either the input or the output has been redirected away from the TTY — using a single invocation of [:
if [ -t 0 -a -t 1 ]
then
    echo Interactive mode
else
    echo Batch mode
fi

Related

Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
    echo "Silent mode."
    # Non-working line.
    out_var = "to screen"
else
    echo $1
    # Non-working line.
    out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two variables, the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
    exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
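A minimal sketch of that, assuming you want a few messages to still reach the terminal even after stdout has been redirected:
#!/bin/bash
# Sketch: hide normal output with -s, but send selected
# messages straight to the terminal device regardless.
if [[ "$1" == "-s" ]] ; then
    exec >/dev/null 2>&1
fi
echo "hidden when -s is given"
echo "always visible on the terminal" > /dev/tty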
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout

# Do the same thing, but with standard error instead.
cmd > /dev/stderr

# Redirect standard error to standard output, and then send standard output
# to /dev/null. Order matters here.
cmd 2>&1 > /dev/null
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
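To see why order matters, compare the two orderings in this small sketch, where a message is written to stderr:
# stderr is duplicated onto the *current* stdout (the terminal),
# then stdout is pointed at /dev/null: the error message still appears.
{ echo "error" >&2; } 2>&1 > /dev/null

# stdout is pointed at /dev/null first, then stderr is duplicated
# onto it: the error message is discarded as well.
{ echo "error" >&2; } > /dev/null 2>&1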
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (note that a bash assignment must not have spaces around the =), and then use it like this: command1 > $out_var (see the '$' you are lacking).
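Putting that together, a corrected version of the MWE could look roughly like this (a sketch, with the silent branch sending output to /dev/null and the normal branch to the terminal; command1 and command2 are placeholders from the question):
#!/bin/bash
# Check if the silent flag (-s) is the second argument.
if [ "$2" = "-s" ]; then
    echo "Silent mode."
    out_var="/dev/null"
else
    echo "$1"
    out_var="/dev/tty"
fi
command1 > "$out_var"
command2 > "$out_var"
echo "End."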
I implemented it like this
# Set debug flag as desired
DEBUG=1
# DEBUG=0

if [ "$DEBUG" -eq "1" ]; then
    OUT='/dev/tty'
else
    OUT='/dev/null'
fi

# actual script: use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
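For example, a minimal sketch of setting the flag from an option (assuming -d enables debug output; some_command is a placeholder):
#!/bin/bash
# Minimal sketch: enable debug output when -d is passed as the first argument.
DEBUG=0
if [ "$1" = "-d" ]; then
    DEBUG=1
    shift
fi

if [ "$DEBUG" -eq 1 ]; then
    OUT='/dev/tty'
else
    OUT='/dev/null'
fi

some_command > "$OUT" 2>&1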
And you can have multiple debug or verbose levels like this:
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2

VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -ge 1 ]; then
    VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -ge 2 ]; then
    VERBOSE2='/dev/tty'
fi

# actual script: use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my debian env.
Easy as
silent "npm search 1234556"
This works, but not entirely.
As you can see, I commented out the section where I have some trouble.
This line:
$($cmdLine) &
doesn't hide application output but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument

errorsRedirect=""

if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>/dev/null"
fi

# not working
$($cmdLine) &

# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>&1"
fi
eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
    exit
fi
exec >/dev/null
if [ -n "$2" ]; then
    exec 2>&1
fi
exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
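A hypothetical usage sketch, assuming the script above is saved as silent.sh:
# Hide only stdout of the command:
./silent.sh 'npm search 1234556'
# Pass any second argument to hide stderr as well:
./silent.sh 'npm search 1234556' quiet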
& means that you're running the command in the background (a sort of multitasking), whereas
1 >/dev/null 2>/dev/null
means that you redirect the output to a sort of garbage bin (/dev/null), and that's why you don't see anything.
Furthermore, cmdLine="$1 >/dev/null" is incorrect; you should use ' instead of ":
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run bash with it in the background:
bash -c "$cmdLine" &
Note that it might be useful to store the output (stdout/stderr) of the program somewhere, instead of throwing it away to /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe... if you want...
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
[ ! $1 ] && echo "Please, don't joke me..." && exit 1
cmdLine="$1>/dev/null"
# if passed a second parameter, errors will be hidden
[ $2 ] && cmdLine+=" 2>/dev/null"
# not working
echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

Timeout command on Mac OS X?

Is there an alternative to the timeout command on Mac OS X? The basic requirement is that I am able to run a command for a specified amount of time.
e.g:
timeout 10 ping google.com
This program runs ping for 10s on Linux.
You can use
brew install coreutils
And then whenever you need timeout, use
gtimeout
instead. To explain why, here's a snippet from the Homebrew Caveats section:
Caveats
All commands have been installed with the prefix 'g'.
If you really need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
Additionally, you can access their man pages with normal names if you add
the "gnuman" directory to your MANPATH from your bashrc as well:
MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH"
Another simple approach that works pretty much cross platform (because it uses perl which is nearly everywhere) is this:
function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
Snagged from here:
https://gist.github.com/jaytaylor/6527607
Instead of putting it in a function, you can just put the following line in a script, and it'll work too:
timeout.sh
perl -e 'alarm shift; exec @ARGV' "$@";
or a version that has built in help/examples:
timeout.sh
#!/usr/bin/env bash
function show_help()
{
IT=$(cat <<EOF
Runs a command, and times out if it doesn't complete in time
Example usage:
# Will fail after 1 second, and shows non zero exit code result
$ timeout 1 "sleep 2" 2> /dev/null ; echo \$?
142
# Will succeed, and return exit code of 0.
$ timeout 1 sleep 0.5; echo \$?
0
$ timeout 1 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
142
$ timeout 3 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
bye
0
EOF
)
echo "$IT"
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
#
# Mac OS-X does not come with the delightfully useful `timeout` program. Thankfully a rough BASH equivalent can be achieved with only 2 perl statements.
#
# Originally found on SO: http://stackoverflow.com/questions/601543/command-line-command-to-auto-kill-a-command-after-a-certain-amount-of-time
#
perl -e 'alarm shift; exec @ARGV' "$@";
As kvz stated, simply use Homebrew:
brew install coreutils
Now the timeout command is ready to use: no aliases are required (and no gtimeout is needed, although it is also available).
You can limit execution time of any program using this command:
ping -t 10 google.com & sleep 5; kill $!
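The same background-and-kill pattern can be written out generically, for example (a rough sketch; it always sleeps for the full limit, and long_running_command is a placeholder):
#!/bin/bash
# Rough sketch of a poor man's timeout: run the command in the background,
# sleep for the limit, then kill it if it is still running.
long_running_command &
pid=$!
sleep 10
kill "$pid" 2>/dev/null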
The Timeout Package from Ubuntu / Debian can be made to compile on Mac and it works.
The package is available at http://packages.ubuntu.com/lucid/timeout
You can do ping -t 10 google.com >/dev/null
The >/dev/null gets rid of the output, so instead of showing 64 bytes from 123.45.67.8 ... it will just show nothing until it times out. The -t flag can be changed to any number.

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
A case in point: I'd like to add escape codes to colorize output when run interactively, but not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
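Applied to the colorizing use case from the question, a minimal sketch might be (the escape sequences are ordinary ANSI color codes):
#!/bin/bash
# Only emit color escape codes when stdout is a terminal.
if [ -t 1 ]; then
    red=$'\e[0;31m'
    reset=$'\e[0m'
else
    red=''
    reset=''
fi
echo "${red}error:${reset} something went wrong"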
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the STD streams always look like they are being piped. To demonstrate this, using STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
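On Linux, for instance, a rough sketch of that symlink-following approach might look like this (not portable, per the caveat above):
#!/bin/bash
# Rough sketch (Linux only): see what /proc says stdin is connected to.
target=$(readlink /proc/$$/fd/0)
case "$target" in
    pipe:*)     echo "stdin is a pipe" ;;
    /dev/pts/*) echo "stdin is a terminal" ;;
    *)          echo "stdin is redirected from: $target" ;;
esac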
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The test command (a builtin in Bash) has an option to check whether a file descriptor is a tty.
if [ -t 1 ]; then
# Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
# stdout is a terminal
else
# stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works. The -p test does not respond as desired.
File bash_redir_test.sh looks like:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable nor recommended, but for the sake of completeness here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.

Shell scripting: die on any error

Suppose a shell script (/bin/sh or /bin/bash) contained several commands. How can I cleanly make the script terminate if any of the commands has a failing exit status? Obviously, one can use if blocks and/or callbacks, but is there a cleaner, more concise way? Using && is not really an option either, because the commands can be long, or the script could have non-trivial things like loops and conditionals.
With standard sh and bash, you can
set -e
It will
$ help set
...
-e Exit immediately if a command exits with a non-zero status.
It also works (from what I could gather) with zsh, and it should work for any Bourne shell descendant.
With csh/tcsh, you have to launch your script with #!/bin/csh -e
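A minimal sketch showing the effect:
#!/bin/bash
set -e

echo "about to run a failing command"
false               # exits with a non-zero status, so the script stops here
echo "never reached"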
Maybe you could use:
$ <any_command> || exit 1
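A slightly more verbose variant of the same idea (a sketch; some_command is a placeholder) prints a message before exiting:
some_command || { echo "some_command failed" >&2; exit 1; }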
You can check $? to see what the most recent exit code is, e.g.:
#!/bin/sh
# A Tidier approach

check_errs()
{
    # Function. Parameter 1 is the return code
    # Para. 2 is text to display on failure.
    if [ "${1}" -ne "0" ]; then
        echo "ERROR # ${1} : ${2}"
        # as a bonus, make our script exit with the right error code.
        exit ${1}
    fi
}

### main script starts here ###

grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"
