Writing to stderr using /dev/tty? - bash

I want to write to only stderr using /dev/tty . If I write directly to /dev/tty (with tee), it seems like that will print out on stdout. Is that correct? How can I specify that I want to print to stderr?
Currently the line in bash looks like
echo "foo" >&2 | tee /dev/tty | logger -it "my_script"

If we split your command apart, with the effect of each part after the #:
echo "foo" >&2 # echo "foo" and redirect to fd 2 (/dev/sdterr)
| #pipe stdout to
tee /dev/tty # send stdin both to /dev/tty (the controlling terminal, where stdout and stderr are normally displayed; you probably want /dev/stdout or /dev/stderr directly instead) and to stdout, passing it along to the next pipe
| #pipe stdout to
logger -it "my_script"
So it depends on what you want to do. (In the above, "foo" gets redirected to stderr and nothing gets piped to tee.)
If you want to print foo to stderr and pass stdout to your script you can just do
echo "foo" | tee /dev/stderr | yourscirpt
Then tee will print to stderr and foo will get piped as stdout to yourscript.
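Applying that back to the original line, a minimal sketch (assuming the goal is: the message on the terminal's stderr, and the same message handed to logger on stdin):
echo "foo" | tee /dev/stderr | logger -it "my_script"
# tee copies the line to /dev/stderr for the terminal; logger still reads it from the pipe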

Related

Bash: How to direct output to both stderr and to stdout, to pipe into another command?

I know variations of this question have been asked and answered several times before, but I'm either misunderstanding the solutions, or am trying to do something eccentric. My instinct is that it shouldn't require tee but maybe I'm completely wrong...
Given a command like this:
echo "hello"
I want to send it to STDERR so that it can be logged/seen on the console, and so that it can be sent to another command. For example, if I run:
echo "hello" SOLUTION>&2 > myfile.txt
(SOLUTION> being whatever the answer to my problem is)
I want:
hello to be shown in the console like any other STDERR message
The file myfile.txt to contain hello
There's no need to redirect it to stderr. Just use tee to send it to the file while also sending to stdout, which will go to the terminal.
echo "hello" | tee myfile.txt
If you want to pipe the output to another command without writing it to a file, then you could use
echo "hello" | tee /dev/stderr | other_command
You could also write a shell function that does the equivalent of tee /dev/stderr:
$ tee_to_stderr() {
    while read -r line; do
        printf "%s\n" "$line"
        printf "%s\n" "$line" >&2
    done
}
$ echo "hello" | tee_to_stderr | wc
hello
1 1 6
This doesn't work well with binary output, but since you intend to use this to display text on the terminal that shouldn't be a concern.
tee copies stdin to the files on its command line, and also to stdout.
echo hello | tee myfile.txt >&2
This will save hello in myfile.txt and also print it to stderr.

forward stdin to stdout

I'm looking for a way to "forward" stdin to stdout in a pipe, while in that step something is written to stderr. The example should clarify this:
echo "before.." | >&2 echo "some logging..."; [[forward stdin>stdout]] | cat
This should put "before.." to stdout, meanwhile "some logging..." to stderr.
How to do that? Or is there maybe another quite different approach to this?
Here's a solution based on your comments:
cat ~/.bashrc | tee >( cat -n >&2 ) | sort
cat ~/.bashrc represents the start of your pipeline, producing some data.
tee duplicates its input, writing to both stdout and any files listed as arguments.
>( ... ) is a bash construct that runs ... as a pipe subcommand but replaces itself by a filename (something that tee can open and write to).
cat -n represents modifying the input (adding line numbers).
>&2 redirects stdout to stderr.
sort represents the end of your pipeline (normal processing of the unchanged input).
Putting it all together, bash will
run cat ~/.bashrc, putting the contents of ~/.bashrc on stdout
... which is piped to the stdin of tee
run cat -n with stdout redirected to stderr and stdin redirected to a new pipe
run tee /dev/fd/63 (where /dev/fd/63 represents the other end of the cat -n pipe)
this is where it all comes together: tee reads its input and writes it to both its stdout and to the other pipe that goes to cat -n (and from there to stderr)
finally tee's stdout goes into sort
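If the in-between step only needs to log rather than transform the data, a simpler sketch is a pass-through function; log_passthrough is a hypothetical name, and cat does the actual forwarding of stdin to stdout:
log_passthrough() {
    echo "some logging..." >&2   # the log line, sent to stderr
    cat                          # forward stdin to stdout unchanged
}
echo "before.." | log_passthrough | cat   # "before.." on stdout, logging on stderr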
Redirections follow the simple command they refer to, thus
echo "before" >&1
echo "some logging..." >&2
should do the trick, if I understand what you're trying to do.

Piping both stdout and stderr in bash?

It seems that newer versions of bash have the &> operator, which (if I understand correctly) redirects both stdout and stderr to a file (&>> appends to the file instead, as Adrian clarified).
What's the simplest way to achieve the same thing, but instead piping to another command?
For example, in this line:
cmd-doesnt-respect-difference-between-stdout-and-stderr | grep -i SomeError
I'd like the grep to match on content both in stdout and stderr (effectively, have them combined into one stream).
Note: this question is asking about piping, not redirecting - so it is not a duplicate of the question it's currently marked as a duplicate of.
(Note that &>>file appends to a file while &> would redirect and overwrite a previously existing file.)
To combine stdout and stderr you redirect the latter to the former using 2>&1, i.e. you point stderr (file descriptor 2) at stdout (file descriptor 1). Compare the following, where the inner 1>&2 merely sends the second echo to stderr:
$ { echo "stdout"; echo "stderr" 1>&2; } | grep -v std
stderr
$
The string "stdout" goes to stdout and "stderr" goes to stderr. grep only sees the stdout stream, hence "stderr" prints straight to the terminal.
On the other hand:
$ { echo "stdout"; echo "stderr" 1>&2; } 2>&1 | grep -v std
$
After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.
You can read more about redirection here.
Regarding your example (POSIX):
cmd-doesnt-respect-difference-between-stdout-and-stderr 2>&1 | grep -i SomeError
or, using >=bash-4:
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
Bash has a shorthand for 2>&1 |, namely |&, which pipes both stdout and stderr (see the manual):
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
This was introduced in Bash 4.0, see the release notes.

Why does 2>&1 need to come before a | (pipe) but after a "> myfile" (redirect to file)?

When combining stderr with stdout, why does 2>&1 need to come before a | (pipe) but after a > myfile (redirect to file)?
To redirect stderr to stdout for file output:
echo > myfile 2>&1
To redirect stderr to stdout for a pipe:
echo 2>&1 | less
My assumption was that I could just do:
echo | less 2>&1
and it would work, but it doesn't. Why not?
A pipeline is a |-delimited list of commands. Any redirections you specify apply to the constituent commands (simple or compound), but not to the pipeline as a whole. Each pipe chains one command's stdout to the stdin of the next by implicitly applying a redirect to each subshell before any redirects associated with a command are evaluated.
cmd 2>&1 | less
First, stdout of the first command is redirected to the pipe from which less is reading. Next, the 2>&1 redirect is applied to the first command. Redirecting stderr to stdout works because stdout is already pointing at the pipe.
cmd | less 2>&1
Here, the redirect applies to less. Less's stdout and stderr both presumably started out pointed at the terminal, so 2>&1 in this case has no effect.
If you want a redirect to apply to an entire pipeline, to group multiple commands as part of a pipeline, or to nest pipelines, then use a command group (or any other compound command):
{ { cmd1 >&3; cmd2; } 2>&1 | cmd3; } 3>&2
Might be a typical example. The end result: cmd1's and cmd2's stderr -> cmd3; cmd2's stdout -> cmd3; cmd1's stdout (via fd 3) and cmd3's stderr -> the terminal's stderr; and cmd3's stdout -> the terminal.
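A concrete, runnable instance of that grouping pattern, with stand-in commands chosen here for illustration:
{ { echo one >&3; echo two; } 2>&1 | tr '[:lower:]' '[:upper:]'; } 3>&2
# "two" travels through the pipe and prints as TWO;
# "one" bypasses the pipe via fd 3 and lands on the terminal's stderr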
If you use the Bash-specific |& pipe, things get stranger, because each of the pipeline's stdout redirects still occur first, but the stderr redirect actually comes last. So for example:
f() { echo out; echo err >&2; }; f >/dev/null |& cat
Now, counterintuitively, all output is hidden. First, stdout of f goes to the pipe; next, stdout of f is redirected to /dev/null; and finally, stderr is redirected to stdout, which by then is /dev/null.
I recommend never using |& in Bash -- it's used here for demonstration.
To add to ormaaj's answer:
The reason you need to specify redirection operators in the proper order is that they're evaluated from left to right. Consider these command lists:
# print "hello" on stdout and "world" on stderr
{ echo hello; echo world >&2; }
# Redirect stdout to the file "out"
# Then redirect stderr to the file "err"
{ echo hello; echo world >&2; } > out 2> err
# Redirect stdout to the file "out"
# Then redirect stderr to the (already redirected) stdout
# Result: all output is stored in "out"
{ echo hello; echo world >&2; } > out 2>&1
# Redirect stderr to the current stdout
# Then redirect stdout to the file "out"
# Result: "world" is displayed, and "hello" is stored in "out"
{ echo hello; echo world >&2; } 2>&1 > out
My answer comes from understanding file descriptors. Each process has a set of file descriptors: entries referring to files it has open. By default, number 0 is stdin, number 1 is stdout, and number 2 is stderr.
The i/o redirectors > and < connect by default to their most natural file descriptors, stdout and stdin. If you re-route stdout to a file (as with foo > bar), then on starting the process 'foo', the file 'bar' is opened for writing and hooked onto file descriptor number 1. If you want only stderr in 'bar', you use foo 2> bar, which opens the file bar and hooks it onto stderr.
Now the i/o redirector 2>&1. I normally read that as 'point file descriptor 2 at the same place as file descriptor 1'. Reading the command line from left to right, you can do the following: foo 1>bar 2>&1 1>/dev/tty. With this, file descriptor 1 is set to the file 'bar', file descriptor 2 is set to the same as 1 (hence 'bar'), and after that, file descriptor 1 is set to /dev/tty. The running foo now sends its stdout to /dev/tty and its stderr to the file 'bar'.
Now the pipeline comes in: it does not alter the file descriptors themselves, but it connects them between the processes: stdout of the left process to stdin of the next. Stderr is passed along untouched. Note there is no foo 2| bar syntax for piping stderr directly; if you want the pipeline to carry only stderr, you first move it onto stdout (and move stdout elsewhere), e.g. foo 2>&1 1>/dev/null | bar.
With the above, if you use foo 2>&1 | bar, since stderr of foo is re-routed to stdout of foo, both stdout and stderr of foo arrive at the stdin of bar.
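A small demo of those traces; f is a hypothetical test function, and only the order of the redirects differs:
f() { echo out; echo err >&2; }
f 2>&1 1>/dev/null | tr '[:lower:]' '[:upper:]'   # prints only ERR: stderr reached the
                                                  # pipe before stdout was re-pointed
                                                  # at /dev/null
f 2>&1 | tr '[:lower:]' '[:upper:]'               # prints OUT and ERR: both streams
                                                  # arrive at the pipe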

How to connect stderr to stdin using pipes?

The "|" pipe operator connects the stdout of one process to the stdin of another. Is there any way to create a pipe that connects the stderr of one process to the stdin of another keeping the stdout alive in my terminal? Searching on the internet gave me no information at all...
Thank you in advance,
Michalis.
If you're happy to mix stdout and stderr, then you can first redirect stderr to stdout and then pipe that:
theprogram 2>&1 | otherprogram
If you don't want stdout, you can kill that one:
theprogram 2>&1 1> /dev/null | otherprogram
If you do want to store the original stdout as well, then you have to redirect it either to a file (in place of /dev/null), or to another file descriptor that you opened previously with exec. Here are some details.
(Unfortunately there is no direct "pipe this file descriptor" syntax like 2|. That would have been handy.)
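A sketch of the exec variant mentioned above; fd 3 is an arbitrary free descriptor chosen for the example:
exec 3>&1                            # save the current stdout on fd 3
theprogram 2>&1 1>&3 | otherprogram  # stderr -> pipe, stdout -> the saved fd 3
exec 3>&-                            # close the saved descriptor again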
You can get this effect with bash's process substitution feature:
somecommand 2> >(errorprocessor)
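For example, to keep a listing on the terminal's stdout while only the errors reach the processor (cat -n as a stand-in error processor here):
ls -ld / xxx 2> >(cat -n >&2)   # the listing stays on stdout; the error
                                # line comes back numbered on stderr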
You could use named pipes:
mkfifo /my/pipe
error-handler </my/pipe &
do-something 2>/my/pipe
This should keep stdin & stdout of "do-something" in your terminal and redirect stderr to /my/pipe, which is read by "error-handler".
(I hope this works; I have no bash here to test.)
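A slightly fuller sketch of the same idea, with cleanup; the path is hypothetical and cat -n stands in for error-handler:
pipe=/tmp/my_pipe_$$         # one fifo per shell instance
mkfifo "$pipe"
cat -n <"$pipe" >&2 &        # error-handler: copy the fifo to stderr, numbered
do-something 2>"$pipe"       # stderr goes into the fifo
wait                         # reap the handler once do-something closes the fifo
rm -f "$pipe"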
You may also swap the stdout & stderr streams, i.e. stdout becomes the new stderr and stderr becomes the new stdout:
# only the stderr stream gets upcased: after the swap, stderr is the stream flowing through the pipe
ls -ld / xxx ~/.bashrc yyy 3>&1 1>&2 2>&3 3>&- | tr '[:lower:]' '[:upper:]'
# block original stdout by closing fd 1
ls -ld / xxx ~/.bashrc yyy 2>&1 1>&- | tr '[:lower:]' '[:upper:]'
