What is the difference between using process substitution vs. a pipe? - bash

I came across an example for using the tee utility in the tee info page:
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) > dvd.iso
I looked up the >(...) syntax and found something called "process substitution". From what I understand, it makes a process look like a file that another process could write/append its output to. (Please correct me if I'm wrong on that point.)
How is this different from a pipe (|)? I see a pipe is also being used in the above example. Is it just a precedence issue, or is there some other difference?
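A quick way to see roughly what the shell does with >(...) is to let echo print its expansion (the exact /dev/fd paths vary by system and shell):
$ echo >(true) <(true)
/dev/fd/63 /dev/fd/62
So the command that receives >(...) just sees a filename it can open and write to, with the substituted process reading on the other end.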

There's no benefit here, as the line could equally well have been written like this:
wget -O - http://example.com/dvd.iso | tee dvd.iso | sha1sum > dvd.sha1
The differences start to appear when you need to pipe to/from multiple programs, because these can't be expressed purely with |. Feel free to try:
# Calculate 2+ checksums while also writing the file
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) >(md5sum > dvd.md5) > dvd.iso
# Accept input from two 'sort' processes at the same time
comm -12 <(sort file1) <(sort file2)
They're also useful in certain cases where, for whatever reason, you can't or don't want to use pipelines:
# Start logging all error messages to a file as well as the terminal
# Pipes don't work because bash doesn't support it in this context
exec 2> >(tee log.txt)
ls doesntexist
# Sum a column of numbers
# Pipes don't work because they create a subshell
sum=0
while IFS= read -r num; do (( sum+=num )); done < <(curl http://example.com/list.txt)
echo "$sum"
# apt-get something with a generated config file
# Pipes don't work because we want stdin available for user input
apt-get install -c <(sed -e "s/%USER%/$USER/g" template.conf) mysql-server

Another major difference is the propagation of return values / exit codes (I'll use simpler commands to illustrate):
Pipe:
$ ls -l /notthere | tee listing.txt
ls: cannot access '/notthere': No such file or directory
$ echo $?
0
-> exit code of tee is propagated
Process substitution:
$ ls -l /notthere > >(tee listing.txt)
ls: cannot access '/notthere': No such file or directory
$ echo $?
2
-> exit code of ls is propagated
There are of course several methods to work around this (e.g. set -o pipefail, variable PIPESTATUS), but I think it's worth mentioning since this is the default behavior.
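For reference, a minimal sketch of those two workarounds on the same example (the exit code 2 assumes GNU ls):
# Workaround 1: pipefail makes the pipeline return the last non-zero status
set -o pipefail
ls -l /notthere | tee listing.txt
echo $?                                   # 2, the exit code of ls
set +o pipefail
# Workaround 2: PIPESTATUS records the status of every member of the last pipeline
ls -l /notthere | tee listing.txt
echo "${PIPESTATUS[0]} ${PIPESTATUS[1]}"  # 2 (ls) and 0 (tee)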
Yet another rather subtle, but potentially annoying, difference lies in subprocess termination (best illustrated using commands that produce lots of output):
Pipe:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . 2>&1 | tee /tmp/etc-backup.log
retval=${PIPESTATUS[0]}
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the pipe construct, all commands of the pipe have already terminated (otherwise PIPESTATUS could not contain their respective exit codes)
Process substitution:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the process substitution, the command within >(...), i.e. tee in this example, may still be running, potentially causing desynchronized console output (SUCCESS / FAILURE message gets mixed in with still flowing tar output) [*]
[*] Can be reproduced on the framebuffer console, but does not seem to affect GUI terminals like KDE's Konsole (likely due to different buffering strategies).
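One way to close that gap (a sketch assuming bash 4.4 or newer, where $! is set to the PID of the last process substitution and wait accepts it):
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
wait $!    # block until the tee inside >(...) has flushed and exited
(( retval == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"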

How can I conditionally copy output to a file without repeating echo/printf statements?

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a process substitution ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will see the above construct as writing to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
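A sketch of that two-file idea (assumes GNU awk for strftime; the file names and timestamp format are examples, and in this form the timestamps also show up on the terminal):
#!/bin/bash
exec  > >(awk '{ print strftime("%FT%T"), $0; fflush() }' | tee -ia stdout.log)
exec 2> >(awk '{ print strftime("%FT%T"), $0; fflush() }' | tee -ia stderr.log >&2)
echo "foo"
echo "bar" >&2
# sort -m stdout.log stderr.log can later merge the two logs chronologically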
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this would be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note that the use of $* below is not necessarily safe; "$@" is generally preferable when arguments may contain whitespace.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named pipe for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
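For the syslog variant above, on systemd-based systems the tagged messages can usually be read back with journalctl (assuming journald is collecting syslog):
journalctl -t myscript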
Using the accepted answer, my script kept returning exceptionally early (right after 'exec > >(tee ...)'), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another workaround for the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the script's output go from the process, through the named pipe, into the backgrounded 'tee' process, which logs everything to disk and to the script's original stdout.
Note that 'exec &>' redirects both stdout and stderr; we could redirect them separately if we liked, or change it to 'exec >' if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by its file name after the rm line.
Bash 4 has a coproc keyword which runs a command as a coprocess, connected to the shell by a pair of pipes you can communicate through.
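For illustration, a minimal coproc sketch (bash 4+; the LOGGER name and the log file are arbitrary):
#!/usr/bin/env bash
coproc LOGGER { tee -a coproc-demo.log; }     # LOGGER[1] writes to tee's stdin, LOGGER[0] reads its stdout
logger_pid=$LOGGER_PID                        # save the PID; bash clears LOGGER_PID once the coproc exits
echo "hello via coprocess" >&"${LOGGER[1]}"   # goes to the log file and back through the pipe
IFS= read -r line <&"${LOGGER[0]}"            # read tee's copy back
printf 'tee echoed: %s\n' "$line"
fd=${LOGGER[1]}
exec {fd}>&-                                  # close the write end so tee sees EOF and exits
wait "$logger_pid" 2>/dev/null                # reap tee (it may already have been reaped)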
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
# copy (append) stdout and stderr to log file if TEE is unset or true
if [[ -z $TEE || "$TEE" == true ]]; then
echo '-------------------------------------------' >> log.txt
echo '***' $(date) $0 "$@" >> log.txt
TEE=false $0 "$@" 2>&1 | tee --append log.txt
exit $?
fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file name instead, etc. I guess this solution is similar to jbarlow's, but simpler; mine may have limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple of things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
exec >$PIPE
tee foo.log <$PIPE &
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).

bash: How to prepend a string to stderr lines, combine both stdout and stderr in exact order, and store the result in one variable?

What I have done so far is:
#!/bin/bash
exec 2> >(sed 's/^/ERROR= /')
var=$(
sleep 1 ;
hostname ;
ifconfig | wc -l ;
ls /sfsd;
ls hasdh;
mkdir /tmp/asdasasd/asdasd/asdasd;
ls /tmp ;
)
echo "$var"
This does prepend ERROR= at the start of each error line, but it displays all the errors first and then the stdout lines (not in the order in which they were produced).
If we skip storing the output in a variable and execute the commands directly, the output comes in the desired order.
Any expert opinion would be appreciated.
The primary problem with your script is that the command substitution $(...) only captures the subshell's standard output; the subshell's standard error still just flows through to the parent shell's standard error. As it happens, you've redirected the parent shell's standard error in a way that ends up populating the parent shell's standard output; but that completely circumvents the $(...), which is only capturing the subshell's standard output.
Do you see what I mean?
So, you can fix that by redirecting the subshell's standard error in a way that ends up populating its standard output, which is what gets captured:
var=$(
exec 2> >(sed 's/^/ERROR= /')
sleep 1
hostname
ifconfig | wc -l
ls /sfsd
ls hasdh
mkdir /tmp/asdasasd/asdasd/asdasd
ls /tmp
)
echo "$var"
Even so, this does not guarantee proper ordering of lines. The problem is that sed is running in parallel with everything else in the subshell, so while it's just received an error-line and is busy planning to write to standard output, one of the later commands in the subshell can be plowing ahead and already writing more things to standard output!
You can improve that by launching sed separately for each command, so that the shell will wait for sed to complete before proceeding to the next command:
var=$(
sleep 1 2> >(sed 's/^/ERROR= /')
hostname 2> >(sed 's/^/ERROR= /')
{ ifconfig | wc -l ; } 2> >(sed 's/^/ERROR= /')
ls /sfsd 2> >(sed 's/^/ERROR= /')
ls hasdh 2> >(sed 's/^/ERROR= /')
mkdir /tmp/asdasasd/asdasd/asdasd 2> >(sed 's/^/ERROR= /')
ls /tmp 2> >(sed 's/^/ERROR= /')
)
echo "$var"
Even so, sed will be running concurrently with each command, so if any of those commands is a complicated command that writes both to standard output and to standard error, then the order that that command's output is captured in may not match the order in which the command actually wrote it. But this should probably be good enough for your purposes.
You can improve the readability a bit by creating a wrapper function for the simple-command (non-pipeline) case:
var=$(
function fix-stderr () {
"$#" 2> >(sed 's/^/ERROR= /')
}
fix-stderr sleep 1
fix-stderr hostname
fix-stderr eval 'ifconfig | wc -l' # using eval to get a simple command
fix-stderr ls /sfsd
fix-stderr ls hasdh
fix-stderr mkdir /tmp/asdasasd/asdasd/asdasd
fix-stderr ls /tmp
)
echo "$var"
The sed command runs asynchronously from the rest of the shell; its output goes to standard error as soon as it processes its input from the commands in the command substitution. The standard output of those commands, however, are captured in $var and not displayed until the echo command runs.
Even if you weren't capturing the output, there is a chance the standard error and standard output of those commands wouldn't appear as you expect, because the sed command that ultimately produces the error messages might not be scheduled by the OS when you expect it to be, delaying the appearance of the error messages.
When you run a command in the usual way from the terminal, that command's standard error and standard output point to the same file: the terminal itself. As such, writes to the file maintain the order in which they occur in the program. As soon as you pipe one or the other to another process, you lose all control over how the two are spliced back together, if ever. In your case, you are redirecting standard error to sed, which writes modified lines back to standard output. But you have no control over when the OS schedules sed to run and when your shell runs, so you can't control the order in which lines are written.
It helps to redirect standard error separately for each command:
tag_error () { sed 's/^/ERROR= /'; }
hostname 2> >(tag_error)
{ ifconfig | wc -l ; } 2> >(tag_error)
# etc
but this still doesn't guarantee that writes from within the same program are ordered as if they were all writing to the same file.
(ruakh has covered how to combine this with capturing standard output, so I won't bother adding it now. See his answer.)
One possible solution would be putting the commands in an array and then executing them within a loop:
declare -a cmds=('sleep 1' 'hostname' 'eval ifconfig | wc -l' 'ls /sfsd' 'ls /tmp' 'ls hasdh')
for i in "${cmds[@]}"; do
$i 2> >(sed -E 's/^/ERROR=/')
done
When an error occurs it should print in the same order that it occurred in the execution. Using a command such as sh script.sh within the array should also reveal any stdout or stderr from the resulting external script. For piped commands, an eval will likely be needed as well.

Incorrect results with bash process substitution and tail?

Using bash process substitution, I want to run two different commands on a file simultaneously. In this example it is not necessary but imagine that "cat /usr/share/dict/words" was a very expensive operation such as uncompressing a 50gb file.
cat /usr/share/dict/words | tee >(head -1 > h.txt) >(tail -1 > t.txt) > /dev/null
After this command I would expect h.txt to contain the first line of the words file "A", and t.txt to contain the last line of the file "Zyzzogeton".
However what actually happens is that h.txt contains "A" but t.txt contains "argillaceo" which is about 5% into the file.
Why does this happen? It seems like either the "tail" process is terminating early or the streams are getting mixed up.
Running another similar command like this behaves as expected:
cat /usr/share/dict/words | tee >(grep ^a > a.txt) >(grep ^z > z.txt) > /dev/null
After this command I'd expect a.txt to contain all the words that begin with "a", while z.txt contains all of the words that begin with "z", which is exactly what happened.
So why doesn't this work with "tail", and with what other commands will this not work?
OK, what seems to happen is this: once the head -1 command finishes, it exits, and that causes tee to get a SIGPIPE when it tries to write to the named pipe that the process substitution set up. That write generates an EPIPE and, according to man 2 write, also raises SIGPIPE in the writing process, which causes tee to exit. That in turn forces the tail -1 to exit immediately, and the cat on the left gets a SIGPIPE as well.
We can see this a little better if we add a bit more to the process with head and make the output both more predictable and also written to stderr without relying on the tee:
for i in {1..30}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done") >(tail -1 > t.txt) >/dev/null
which when I run it gave me the output:
1
Head done
2
so it got just 1 more iteration of the loop before everything exited (though t.txt still only has 1 in it). If we then did
echo "${PIPESTATUS[#]}"
we see
141 141
which this question ties to SIGPIPE in a very similar fashion to what we're seeing here.
The coreutils maintainers have added this as an example to their tee "gotchas" for posterity.
For a discussion with the devs about how this fits into POSIX compliance you can see the (closed notabug) report at http://debbugs.gnu.org/cgi/bugreport.cgi?bug=22195
If you have access to GNU coreutils 8.24 or later, tee has some options (not in POSIX) that can help, like -p or --output-error=warn. Without that you can take a bit of a risk but get the desired functionality in the question by trapping and ignoring SIGPIPE:
trap '' PIPE
for i in {1..30}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done") >(tail -1 > t.txt) >/dev/null
trap - PIPE
will have the expected results in both h.txt and t.txt, but if something else happened that wanted SIGPIPE to be handled correctly you'd be out of luck with this approach.
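With a new enough coreutils you can lean on those tee options directly instead; a sketch using -p (which, on GNU tee 8.24+, selects the warn-nopipe behavior, so tee survives head's early exit):
cat /usr/share/dict/words | tee -p >(head -1 > h.txt) >(tail -1 > t.txt) > /dev/null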
Another hacky option would be to truncate t.txt before starting, and then not let the head command list finish until t.txt has non-zero length:
> t.txt; for i in {1..10}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done"; while [ ! -s t.txt ]; do sleep 1; done) >(tail -1 > t.txt; date) >/dev/null

Pipe command output, but keep the error code

How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation :
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}   # when only the tar command fails, $? is 0
$ echo $?
0
And, what I'd like to see is:
$ tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
$ echo $?
1
Does anyone know how to accomplish this?
Use ${PIPESTATUS[0]} to get the exit status of the first command in the pipe.
For details, see http://tldp.org/LDP/abs/html/internalvariables.html#PIPESTATUSREF
See also http://cfajohnson.com/shell/cus-faq-2.html for other approaches if your shell does not support $PIPESTATUS.
Look at $PIPESTATUS which is an array variable holding exit statuses. So ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} the exit status of the second command, and so on.
For example:
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo ${PIPESTATUS[0]}
To print out all statuses use:
$ echo ${PIPESTATUS[@]}
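Applied to the pipeline from the question (a sketch; the ${sh_tar_*} variables are the asker's placeholders):
tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
retval=${PIPESTATUS[0]}    # exit status of tar, regardless of how gzip fared
if [ "$retval" -ne 0 ]; then echo "tar failed with exit code $retval" >&2; fi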
Here is a general solution using only POSIX shell and no temporary files:
Starting from the pipeline:
foo | bar | baz
exec 4>&1
error_statuses=`((foo || echo "0:$?" >&3) |
(bar || echo "1:$?" >&3) |
(baz || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
$error_statuses contains the status codes of any failed processes, in random order, with indexes to tell which command emitted each status.
# if "bar" failed, output its status:
echo $error_statuses | grep '1:' | cut -d: -f2
# test if all commands succeeded:
test -z "$error_statuses"
# test if the last command succeeded:
echo $error_statuses | grep '2:' >/dev/null
As others have pointed out, some modern shells provide PIPESTATUS to get this info. In classic sh, it's a bit more difficult, and you need to use a fifo:
#!/bin/sh
trap 'rm -rf $TMPDIR' 0
TMPDIR=$( mktemp -d )
mkfifo ${FIFO=$TMPDIR/fifo}
cmd1 > $FIFO &
cmd2 < $FIFO
wait $!
echo The return value of cmd1 is $?
(Well, you don't need to use a fifo. You can have the commands early in the pipe echo a status variable and eval that in the main shell, redirecting file descriptors all over the place and basically bending over backwards to check things, but using a fifo is much, much easier.)

How can I detect if my shell script is running through a pipe?

How do I detect from within a shell script if its standard output is being sent to a terminal or if it's piped to another process?
The case in point: I'd like to add escape codes to colorize output, but only when run interactively, but not when piped, similar to what ls --color does.
In a pure POSIX shell,
if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi
returns "terminal", because the output is sent to your terminal, whereas
(if [ -t 1 ] ; then echo terminal; else echo "not a terminal"; fi) | cat
returns "not a terminal", because the output of the parenthetic element is piped to cat.
The -t flag is described in man pages as
-t fd True if file descriptor fd is open and refers to a terminal.
... where fd can be one of the usual file descriptor assignments:
0: standard input
1: standard output
2: standard error
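Applied to the colorizing use case from the question, a minimal sketch (the escape sequences are plain ANSI color codes):
# Emit color codes only when stdout is a terminal
if [ -t 1 ]; then
    red=$(printf '\033[31m')
    reset=$(printf '\033[0m')
else
    red=''
    reset=''
fi
printf '%sWARNING:%s something happened\n' "$red" "$reset"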
There is no foolproof way to determine if STDIN, STDOUT, or STDERR are being piped to/from your script, primarily because of programs like ssh.
Things that "normally" work
For example, the following bash solution works correctly in an interactive shell:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
But they don't always work
However, when executing this command as a non-TTY ssh command, the standard streams always look like they are being piped. To demonstrate this, we'll use STDIN because it's easier:
# CORRECT: Forced-tty mode correctly reports '1', which represents
# no pipe.
ssh -t localhost '[[ -p /dev/stdin ]]; echo ${?}'
# CORRECT: Issuing a piped command in forced-tty mode correctly
# reports '0', which represents a pipe.
ssh -t localhost 'echo hi | [[ -p /dev/stdin ]]; echo ${?}'
# INCORRECT: Non-tty mode reports '0', which represents a pipe,
# even though one isn't specified here.
ssh -T localhost '[[ -p /dev/stdin ]]; echo ${?}'
Why it matters
This is a pretty big deal, because it implies that there is no way for a bash script to tell whether a non-tty ssh command is being piped or not. Note that this unfortunate behavior was introduced when recent versions of ssh started using pipes for non-TTY STDIO. Prior versions used sockets, which COULD be differentiated from within bash by using [[ -S ]].
When it matters
This limitation normally causes problems when you want to write a bash script that has behavior similar to a compiled utility, such as cat. For example, cat allows the following flexible behavior in handling various input sources simultaneously, and is smart enough to determine whether it is receiving piped input regardless of whether non-TTY or forced-TTY ssh is being used:
ssh -t localhost 'echo piped | cat - <( echo substituted )'
ssh -T localhost 'echo piped | cat - <( echo substituted )'
You can only do something like that if you can reliably determine if pipes are involved or not. Otherwise, executing a command that reads STDIN when no input is available from either pipes or redirection will result in the script hanging and waiting for STDIN input.
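A common partial workaround for that hang (with the non-TTY ssh caveat above still applying) is to read standard input only when it is not a terminal:
# Read piped/redirected input if present; otherwise skip it instead of hanging
if [ ! -t 0 ]; then
    input=$(cat -)
fi
printf 'got %d characters on stdin\n' "${#input}"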
Other things that don't work
In trying to solve this problem, I've looked at several techniques that fail to solve the problem, including ones that involve:
examining SSH environment variables
using stat on /dev/stdin file descriptors
examining interactive mode via [[ "${-}" =~ 'i' ]]
examining tty status via tty and tty -s
examining ssh status via [[ "$(ps -o comm= -p $PPID)" =~ 'sshd' ]]
Note that if you are using an OS that supports the /proc virtual filesystem, you might have luck following the symbolic links for STDIO to determine whether a pipe is being used or not. However, /proc is not a cross-platform, POSIX-compatible solution.
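For what it's worth, a Linux-only sketch of that /proc idea (not POSIX; the /dev/pts pattern is an assumption about typical terminal device names):
# Inspect what this shell's stdout actually points at
target=$(readlink /proc/$$/fd/1)
case $target in
    pipe:*)                echo "stdout is a pipe ($target)" ;;
    /dev/pts/*|/dev/tty*)  echo "stdout is a terminal ($target)" ;;
    *)                     echo "stdout is redirected to $target" ;;
esac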
I'm extremely interested in solving this problem, so please let me know if you think of any other technique that might work, preferably POSIX-based solutions that work on both Linux and BSD.
The command test (builtin in Bash), has an option to check if a file descriptor is a tty.
if [ -t 1 ]; then
# Standard output is a tty
fi
See "man test" or "man bash" and search for "-t".
You don't mention which shell you are using, but in Bash, you can do this:
#!/bin/bash
if [[ -t 1 ]]; then
# stdout is a terminal
else
# stdout is not a terminal
fi
On Solaris, the suggestion from Dejay Clayton mostly works. The -p test does not respond as desired.
File bash_redir_test.sh looks like:
[[ -t 1 ]] && \
echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
echo 'STDOUT is attached to a redirection'
On Linux, it works great:
:$ ./bash_redir_test.sh
STDOUT is attached to TTY
:$ ./bash_redir_test.sh | xargs echo
STDOUT is attached to a pipe
:$ rm bash_redir_test.log
:$ ./bash_redir_test.sh >> bash_redir_test.log
:$ tail bash_redir_test.log
STDOUT is attached to a redirection
On Solaris:
:# ./bash_redir_test.sh
STDOUT is attached to TTY
:# ./bash_redir_test.sh | xargs echo
STDOUT is attached to a redirection
:# rm bash_redir_test.log
bash_redir_test.log: No such file or directory
:# ./bash_redir_test.sh >> bash_redir_test.log
:# tail bash_redir_test.log
STDOUT is attached to a redirection
:#
The following code (tested only in Linux Bash 4.4) should not be considered portable or recommended, but for the sake of completeness, here it is:
ls /proc/$$/fdinfo/* >/dev/null 2>&1 || grep -q 'flags: 00$' /proc/$$/fdinfo/0 && echo "pipe detected"
I don't know why, but it seems that file descriptor "3" is somehow created when a Bash function has standard input piped.
