Does 2>&2 make sense? If so, what does it do? - bash

I am using 2>&2 in my script and expect stderr not to be displayed on the terminal, being redirected to /dev/fd/ instead. Errors are still displayed on the terminal. Is there something wrong with my understanding?
Thanks.
Edit:
I wanted to redirect standard error to logfile as well as on terminal.
This works fine (discovered via trial and error):
exec 2> >(while read -r line; do printf '%s %s\n' "$(date --rfc-3339=seconds)" "$line" | tee -a "$LOG_FILE"; done >&2)
But this doesn't:
exec 2> >(while read -r line; do printf '%s %s\n' "$(date --rfc-3339=seconds)" "$line" | tee -a "$LOG_FILE"; done)
I want to understand what >&2 does such that the first command works but the second doesn't.

No, it doesn't make sense.
With 2>&2 you redirect the file descriptor 2 (stderr) to the resource that is associated with file descriptor 2. So, you don't change anything, you just redirect stderr to where it is already directed to.
The default resource for stderr is the terminal, so that's why the stderr output is still displayed on the terminal.
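To see the difference, compare it with 2>&1, which does change where stderr points. A minimal sketch (missing_file and errors.log are placeholder names, not from the question):
ls missing_file 2>&2                    # no-op: the error still goes to the terminal
ls missing_file 2>&1 | tee errors.log   # stderr joins stdout and reaches the pipe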

Related

Redirect stdout and stderr separately over a socket connection

I am trying to run a script on the other side of a unix-socket connection. For this I am trying to use socat. The script is:
#!/bin/bash
read MESSAGE1
echo "PID: $$"
echo "$MESSAGE1"
sleep 2
read MESSAGE2
echo "$MESSAGE2" 1>&2
As the listener for socat I have
socat unix-listen:my_socket,fork exec:./getmsg.sh,stderr
As the client I use:
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket 2> stderr.txt
and I get the output
PID: 57248
message 1
message 2
whereas the file stderr.txt is empty.
My expectation however was that
stdout from the script on the listener side would be piped to stdout on the client and
stderr on the listener side to stderr on the client side.
That is, the file stderr.txt should have had the content message 2 instead of being empty.
Any idea how I can arrange for stdout and stderr to be transferred separately rather than combined?
Thanks
If the input and output are just text with reasonably finite line lengths, then you can easily write muxing and demuxing commands in pure Bash.
The only issue is how socat (mis)handles stderr; it basically either forces it to be the same file as stdout or ignores it completely. At which point it is better to use one’s own file descriptor convention in the handler script, with unusual file descriptors that don’t conflict with 0, 1 or 2.
Let’s pick 11 for stdout and 12 for stderr, for example. For stdin we can just keep using 0 as usual.
getmsg.sh
#!/bin/bash
set -e -o pipefail
read message
echo "PID: $$" 1>&11 # to stdout
echo "$message" 1>&11 # to stdout
sleep 2
read message
echo "$message" 1>&12 # to stderr
mux.sh
#!/bin/bash
"$#" \
11> >(while read line; do printf '%s\n' "stdout: ${line}"; done) \
12> >(while read line; do printf '%s\n' "stderr: ${line}"; done)
demux.sh
#!/bin/bash
set -e -o pipefail
declare -ri stdout="${1:-1}"
declare -ri stderr="${2:-2}"
while IFS= read -r line; do
    if [[ "$line" = 'stderr: '* ]]; then
        printf '%s\n' "${line#stderr: }" 1>&"$((stderr))"
    elif [[ "$line" = 'stdout: '* ]]; then
        printf '%s\n' "${line#stdout: }" 1>&"$((stdout))"
    else
        exit 3 # report malformed stream
    fi
done
A few examples
#!/bin/bash
set -e -o pipefail
socat unix-listen:my_socket,fork exec:'./mux.sh ./getmsg.sh' &
declare -ir server_pid="$!"
trap 'kill "$((server_pid))"
wait -n "$((server_pid))" || :' EXIT
until [[ -S my_socket ]]; do :; done # ugly
echo '================= raw data from the socket ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket
echo '================= normal mode of operation ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
    | ./demux.sh
echo '================= demux / mux test for fun ================='
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket \
    | ./mux.sh ./demux.sh 11 12
My expectation however
There is only one socket, and over one socket only one stream of data can be sent, not two. You can't send stdout and stderr (two streams) via one handle without intermixing them, i.e. without losing the information about which stream each piece of data came from. Also see the explanation of the stderr flag for exec in man socat, and see man dup. Both stderr and stdout of the script are redirected to the same output.
The expectation would be that stderr.txt is empty, because socat does not write anything to stderr.
how I can achieve it that stdout and stderr are transferred separately and not combined?
Use two separate sockets, one for each stream.
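An untested sketch of the two-socket idea (my_socket_err is a name invented here; the error listener must be set up before the handler writes to it):
# client: receive the error stream on a second socket, in the background
socat unix-listen:my_socket_err - > stderr.txt &
echo $'message 1\nmessage 2\n' | socat -,ignoreeof unix:my_socket
# handler (getmsg.sh): route its own stderr over the second socket
exec 2> >(socat - unix:my_socket_err)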
Transfer messages using a protocol that would differentiate two streams. For example a simple line-based protocol that prefixes the messages:
# script.sh
echo "stdout: this is stdout"
echo "stderr: this is stderr"
# client
... | socat ... | tee >(sed -n 's/stderr: //p' >&2) | sed -n 's/stdout: //p'

Fails to read lines from running process in bash

Using process substitution, we can read every line of output of a command.
# Echoes every second using process substitution
while read line; do
    echo $line
done < <(for i in $(seq 1 10); do echo $i && sleep 1; done)
In the same way, I want to read the stdout of the 'wpa_supplicant' command, while discarding stderr.
But nothing can be seen on screen!
while read line; do
    echo $line
done < <(wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)
I confirmed that typing the same command at the prompt shows its output normally.
$ wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null
What is the mistake? Any help would be appreciated.
Finally I found the answer here!
The problem was simple: buffering. Using stdbuf (and piping), the original code can be modified as below.
stdbuf -oL wpa_supplicant -iwlan1 -Dwext -c${MY_CONFIG_FILE} | while read line; do
    echo "! $line"
done
'stdbuf -oL' makes the stream line-buffered, so I can get each line from the running process as it is produced.
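The same fix presumably applies to the original process-substitution form as well (a sketch, assuming the same variables as above):
while read line; do
    echo "! $line"
done < <(stdbuf -oL wpa_supplicant -Dwext -iwlan1 -c${MY_CONFIG_FILE} 2> /dev/null)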

Read full stdin until EOF when stdin comes from `cat` bash

I'm trying to read the full stdin into a variable:
script.sh
#!/bin/bash
input=""
while read line
do
    echo "$line"
    input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or mostly any other commands, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas ?
I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D should be used to signal end-of-file.
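A minimal sketch of the script rewritten this way (note that command substitution strips trailing newlines):
#!/bin/bash
input=$(cat)                       # read all of stdin until EOF (Ctrl-D)
printf '%s\n' "$input" > /tmp/test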

How do I get both STDOUT and STDERR to go to the terminal and a log file?

I have a script which will be run interactively by non-technical users. The script writes status updates to STDOUT so that the user can be sure that the script is running OK.
I want both STDOUT and STDERR redirected to the terminal (so that the user can see that the script is working as well as see if there was a problem). I also want both streams redirected to a log file.
I've seen a bunch of solutions on the net. Some don't work and others are horribly complicated. I've developed a workable solution (which I'll enter as an answer), but it's kludgy.
The perfect solution would be a single line of code that could be incorporated into the beginning of any script that sends both streams to both the terminal and a log file.
EDIT: Redirecting STDERR to STDOUT and piping the result to tee works, but it depends on the users remembering to redirect and pipe the output. I want the logging to be fool-proof and automatic (which is why I'd like to be able to embed the solution into the script itself.)
Use "tee" to redirect to a file and the screen. Depending on the shell you use, you first have to redirect stderr to stdout using
./a.out 2>&1 | tee output
or
./a.out |& tee output
There is also a standalone command called "script" that will capture everything that goes to the screen to a file. You start it by typing "script", then doing whatever it is you want to capture, then hit Ctrl-D to close the script file.
Also, since you have indicated that these are your own sh scripts that you can modify, you can do the redirection internally by surrounding the whole script with braces or brackets, like
#!/bin/sh
{
... whatever you had in your script before
} 2>&1 | tee output.file
Approaching half a decade later...
I believe this is the "perfect solution" sought by the OP.
Here's a one liner you can add to the top of your Bash script:
exec > >(tee -a "$HOME/logfile") 2>&1
Here's a small script demonstrating its use:
#!/usr/bin/env bash
exec > >(tee -a "$HOME/logfile") 2>&1
# Test redirection of STDOUT
echo test_stdout
# Test redirection of STDERR
ls test_stderr___this_file_does_not_exist
(Note: This only works with Bash. It will not work with /bin/sh.)
Adapted from here; the original did not, from what I can tell, catch STDERR in the logfile. Fixed with a note from here.
The Pattern
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
This redirects both stdout and stderr separately, and it sends separate copies of stdout and stderr to the caller (which might be your terminal).
In zsh, it will not proceed to the next statement until the tees have finished.
In bash, you may find that the final few lines of output appear after whatever statement comes next.
In either case, the right bits go to the right places.
Explanation
Here's a script (stored in ./example):
#! /usr/bin/env bash
the_cmd()
{
echo out;
1>&2 echo err;
}
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
Here's a session:
$ foo=$(./example)
err
$ echo $foo
out
$ cat stdout.txt
out
$ cat stderr.txt
err
Here's how it works:
Both tee processes are started, their stdins are assigned to file descriptors. Because they're enclosed in process substitutions, the paths to those file descriptors are substituted in the calling command, so now it looks something like this:
the_cmd 1> /proc/self/fd/13 2> /proc/self/fd/14
the_cmd runs, writing stdout to the first file descriptor, and stderr to the second one.
In the bash case, once the_cmd finishes, the following statement happens immediately (if your terminal is the caller, then you will see your prompt appear).
In the zsh case, once the_cmd finishes, the shell waits for both of the tee processes to finish before moving on. More on this here.
The first tee process, which is reading from the_cmd's stdout, writes a copy of that stdout back to the caller because that's what tee does. Its outputs are not redirected, so they make it back to the caller unchanged.
The second tee process has its stdout redirected to the caller's stderr (which is good, because its stdin is reading from the_cmd's stderr). So when it writes to its stdout, those bits go to the caller's stderr.
This keeps stderr separate from stdout both in the files and in the command's output.
If the first tee writes any errors, they'll show up in both the stderr file and in the command's stderr; if the second tee writes any errors, they'll only show up in the terminal's stderr.
To redirect stderr to stdout, append this to your command: 2>&1
For outputting to the terminal and logging into a file, you should use tee.
Both together would look like this:
mycommand 2>&1 | tee mylogfile.log
EDIT: For embedding into your script you would do the same. So your script
#!/bin/sh
whatever1
whatever2
...
whatever3
would end up as
#!/bin/sh
( whatever1
whatever2
...
whatever3 ) 2>&1 | tee mylogfile.log
EDIT:
I see I got derailed and ended up answering a different question from the one asked. The answer to the real question is at the bottom of Paul Tomblin's answer. (If you want to enhance that solution to redirect stdout and stderr separately for some reason, you could use the technique I describe here.)
I've been wanting an answer that preserves the distinction between stdout and stderr.
Unfortunately all of the answers given so far that preserve that distinction
are race-prone: they risk programs seeing incomplete input, as I pointed out in comments.
I think I finally found an answer that preserves the distinction,
is not race prone, and isn't terribly fiddly either.
First building block: to swap stdout and stderr:
my_command 3>&1 1>&2 2>&3-
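A quick way to convince yourself the swap works (sed stands in for any stdout filter here; the relative order of the two output lines may vary):
$ { echo out; echo err >&2; } 3>&1 1>&2 2>&3- | sed 's/^/piped: /'
out
piped: err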
Second building block: if we wanted to filter (e.g. tee) only stderr,
we could accomplish that by swapping stdout&stderr, filtering, and then swapping back:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
Now the rest is easy: we can add a stdout filter, either at the beginning:
{ { my_command | stdout_filter;} 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
or at the end:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3- | stdout_filter
To convince myself that both of the above commands work, I used the following:
alias my_command='{ echo "to stdout"; echo "to stderr" >&2;}'
alias stdout_filter='{ sleep 1; sed -u "s/^/teed stdout: /" | tee stdout.txt;}'
alias stderr_filter='{ sleep 2; sed -u "s/^/teed stderr: /" | tee stderr.txt;}'
Output is:
...(1 second pause)...
teed stdout: to stdout
...(another 1 second pause)...
teed stderr: to stderr
and my prompt comes back immediately after the "teed stderr: to stderr", as expected.
Footnote about zsh:
The above solution works in bash (and maybe some other shells, I'm not sure), but it doesn't work in zsh. There are two reasons it fails in zsh:
the syntax 2>&3- isn't understood by zsh; that has to be rewritten
as 2>&3 3>&-
in zsh (unlike other shells), if you redirect a file descriptor
that's already open, in some cases (I don't completely understand how it decides) it does a built-in tee-like behavior instead. To avoid this, you have to close each fd prior to
redirecting it.
So, for example, my second solution has to be rewritten for zsh as {my_command 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stderr_filter;} 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stdout_filter (which works in bash too, but is awfully verbose).
On the other hand, you can take advantage of zsh's mysterious built-in implicit teeing to get a much shorter solution for zsh, which doesn't run tee at all:
my_command >&1 >stdout.txt 2>&2 2>stderr.txt
(I wouldn't have guessed from the docs I found that the >&1 and 2>&2 are the thing that trigger zsh's implicit teeing; I found that out by trial-and-error.)
Use the tee program and dup stderr to stdout.
program 2>&1 | tee logfile
Use the script command in your script (man 1 script)
Create a wrapper shell script (2 lines) that sets up script(1) and then calls exit.
Part 1: wrap.sh
#!/bin/sh
script -c './realscript.sh'
exit
Part 2: realscript.sh
#!/bin/sh
echo 'Output'
Result:
~: sh wrap.sh
Script started, file is typescript
Output
Script done, file is typescript
~: cat typescript
Script started on fr. 12. des. 2008 kl. 18.07 +0100
Output
Script done on fr. 12. des. 2008 kl. 18.07 +0100
~:
I created a script called "RunScript.sh". The contents of this script is:
${APP_HOME}/${1}.sh ${2} ${3} ${4} ${5} ${6} 2>&1 | tee -a ${APP_HOME}/${1}.log
I call it like this:
./RunScript.sh ScriptToRun Param1 Param2 Param3 ...
This works, but it requires the application's scripts to be run via an external script. It's a bit kludgy.
A year later, here's an old bash script for logging anything. For example,
teelog make ... logs to a generated log name (and see the trick for logging nested makes too.)
#!/bin/bash
me=teelog
Version="2008-10-9 oct denis-bz"
Help() {
    cat <<!
$me anycommand args ...
logs the output of "anycommand ..." as well as displaying it on the screen,
by running
anycommand args ... 2>&1 | tee `day`-command-args.log
That is, stdout and stderr go to both the screen, and to a log file.
(The Unix "tee" command is named after "T" pipe fittings, 1 in -> 2 out;
see http://en.wikipedia.org/wiki/Tee_(command) ).
The default log file name is made up from "command" and all the "args":
$me cmd -opt dir/file logs to `day`-cmd--opt-file.log .
To log to xx.log instead, either export log=xx.log or
$me log=xx.log cmd ...
If "logdir" is set, logs are put in that directory, which must exist.
An old xx.log is moved to /tmp/\$USER-xx.log .
The log file has a header like
# from: command args ...
# run: date pwd etc.
to show what was run; see "From" in this file.
Called as "Log" (ln -s $me Log), Log anycommand ... logs to a file:
command args ... > `day`-command-args.log
and tees stderr to both the log file and the terminal -- bash only.
Some commands that prompt for input from the console, such as a password,
don't prompt if they "| tee"; you can only type ahead, carefully.
To log all "make" s, including nested ones like
cd dir1; \$(MAKE)
cd dir2; \$(MAKE)
...
export MAKE="$me make"
!
    # See also: output logging in screen(1).
    exit 1
}
#-------------------------------------------------------------------------------
# bzutil.sh denisbz may2008 --
day() { # 30mar, 3mar
    /bin/date +%e%h | tr '[A-Z]' '[a-z]' | tr -d ' '
}
edate() { # 19 May 2008 15:56
    echo `/bin/date "+%e %h %Y %H:%M"`
}
From() { # header  # from: $*  # run: date pwd ...
    case `uname` in Darwin )
        mac=" mac `sw_vers -productVersion`"
    esac
    cut -c -200 <<!
${comment-#} from: $*
${comment-#} run: `edate` in $PWD `uname -n` $mac `arch`
!
    # mac $PWD is pwd -L not -P real
}
# log name: day-args*.log, change this if you like --
logfilename() {
    log=`day`
    [[ $1 == "sudo" ]] && shift
    for arg
    do
        log="$log-${arg##*/}" # basename
        (( ${#log} >= 100 )) && break # max len 100
    done
    # no blanks etc in logfilename please, tr them to "-"
    echo $logdir/` echo "$log".log | tr -C '.:+=[:alnum:]_\n' - `
}
#-------------------------------------------------------------------------------
case "$1" in
-v* | --v* )
echo "$0 version: $Version"
exit 1 ;;
"" | -* )
Help
esac
# scan log= etc --
while [[ $1 == [a-zA-Z_]*=* ]]; do
    export "$1"
    shift
done
: ${logdir=.}
[[ -w $logdir ]] || {
    echo >&2 "error: $me: can't write in logdir $logdir"
    exit 1
}
: ${log=` logfilename "$@" `}
[[ -f $log ]] &&
    /bin/mv "$log" "/tmp/$USER-${log##*/}"
case ${0##*/} in # basename
log | Log ) # both to log, stderr to caller's stderr too --
    {
        From "$@"
        "$@"
    } > $log 2> >(tee /dev/stderr) # bash only
    # see http://wooledge.org:8000/BashFAQ 47, stderr to a pipe
    ;;
* )
    #-------------------------------------------------------------------------------
    {
        From "$@" # header: from ... date pwd etc.
        "$@" 2>&1 # run the cmd with stderr and stdout both to the log
    } | tee $log
    # mac tee buffers stdout ?
esac
This question does not seem to have been gracefully solved yet.
Every time I search for "how to output to stdout and stderr at the same time", Google directs me to this post.
Today, I finally found a simple and effective way to solve almost all needs of this kind.
The essential idea is the tee command, which can print to multiple outputs at the same time, and the Linux-specific /proc/self/fd/{1,2,...}, which represent stdout, stderr...
print stdin to both stdout and stderr
tee /proc/self/fd/2
print stdin to both stdout and stderr, and file
tee /proc/self/fd/2 file
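For example, to emit a message on both streams and also log it (msg.log is a placeholder name):
echo "something went wrong" | tee /proc/self/fd/2 msg.log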
Hope this is helpful.
Here is a solution that works for bash by redirection, combining the solutions of kvantour, MatrixManAtYrService, and Jason Sydes:
#!/bin/bash
exec 1> >(tee x.log) 2> >(tee x.err >&2)
echo "test for log"
echo "test for err" 1>&2
Save the script above as x.sh. After running ./x.sh, x.log includes only the stdout, while x.err includes only the stderr.

Append text to stderr redirects in bash

Right now I'm using exec to redirect stderr to an error log with
exec 2>> ${errorLog}
The only downside is that I have to start each run with a timestamp, since exec just pushes the text straight into the log file. Is there a way to redirect stderr while still letting me prepend text, such as a timestamp, to each line?
This is very interesting. I've asked a guy who knows bash quite well, and he told me this way:
foo() { while IFS='' read -r line; do echo "$(date) $line" >> file.txt; done; };
First, that creates a function reading one line of raw input from stdin; the assignment to IFS keeps it from ignoring blanks. Having read one line, it outputs it with the appropriate date prepended. Then you have to tell bash to redirect stderr into that function:
exec 2> >(foo)
Everything you write into stderr will now go through the foo function. Note that when you do this in an interactive shell, you won't see the prompt anymore, because it's printed to stderr and the read in foo is line-buffered :)
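For example, after setting this up (ls of a nonexistent path is just an arbitrary way to produce stderr output):
exec 2> >(foo)
ls /nonexistent   # the error message lands in file.txt, timestamped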
You could simply use:
exec 1> >(sed "s/^/$(date '+[%F %T]'): /" | tee -a "${LOGFILE}") 2>&1
This will not completely solve your problem regarding the prompt not being shown (it will show after a short time, but not in real time, since the pipe buffers some data), but it will display the output 1:1 on stdout as well as in the file.
The only problem I could not solve is doing this from a function, since that opens a subshell, where the exec is useless for the main program.
This example redirects stdout and stderr without losing the original stdout and stderr. Errors in the stdout handler are also logged to the stderr handler. The file descriptors are saved in variables and closed in the child processes. Bash takes care that no collisions occur.
#! /bin/bash
stamp ()
{
    local LINE
    while IFS='' read -r LINE; do
        echo "$(date '+%Y-%m-%d %H:%M:%S,%N %z') $$ $LINE"
    done
}
exec {STDOUT}>&1
exec {STDERR}>&2
exec 2> >(exec {STDOUT}>&-; exec {STDERR}>&-; exec &>> stderr.log; stamp)
exec > >(exec {STDOUT}>&-; exec {STDERR}>&-; exec >> stdout.log; stamp)
for n in $(seq 3); do
    echo loop $n >&$STDOUT
    echo o$n
    echo e$n >&2
done
This requires a current Bash version but thanks to Shellshock one can rely on this nowadays.
cat q23123 2> tmp_file; sed -e "s/^/$(date '+[%F %T]'): /g" tmp_file >> output.log; rm -f tmp_file
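The same temp-file idea could be wrapped in a small reusable function (run_logged, the mktemp usage, and output.log are choices made here, not part of the original):
run_logged() {
    local tmp
    tmp=$(mktemp)    # temp file for the captured stderr
    "$@" 2> "$tmp"   # run the command, capturing its stderr
    sed -e "s/^/$(date '+[%F %T]'): /" "$tmp" >> output.log
    rm -f "$tmp"
}
run_logged cat q23123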
