Redirect stderr to console and file - bash

How can I redirect stderr of a bash script to both the console and a file?
I am using:
exec 2>> myfile
to log it to myfile. How can I extend it to log to the console as well?

For example:
exec 2> >(tee -a myfile)

Or you can use tail -f:
$ touch myfile
$ tail -f myfile &
$ command 2>myfile
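Put together as a script, that approach might look like this (a minimal sketch; the file name and the sleep before cleanup are just illustrative):
#!/usr/bin/env bash
# Sketch: mirror stderr to the console via a background tail -f.
touch myfile
tail -f myfile &
TAIL_PID=$!
exec 2>>myfile                # from here on, stderr goes to myfile and tail echoes it
echo "an error message" >&2   # lands in myfile and shows up on the console
sleep 1                       # give tail a moment to print the last lines
kill "$TAIL_PID"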

You can create a FIFO:
$ mknod mypipe p
Let tee read from the FIFO; it writes to stdout and to the file you specified:
$ tee myfile <mypipe &
[1] 17121
Now run the command and redirect its stderr to the FIFO:
$ ls kkk 2>mypipe
ls: cannot access kkk: No such file or directory
[1]+ Done tee myfile < mypipe
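The same idea as a self-contained script might look like this (a sketch; it uses mkfifo instead of mknod and includes cleanup):
#!/usr/bin/env bash
mkfifo mypipe
tee myfile <mypipe &          # copies whatever arrives on the FIFO to the console and to myfile
TEE_PID=$!
exec 2>mypipe                 # the script's stderr now feeds the FIFO
ls kkk                        # the error shows on the console and lands in myfile
exec 2>&-                     # close stderr so tee sees EOF
wait "$TEE_PID"
rm -f mypipe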

Try opening that file with another command (tail -f works well for following it) in the background:
exec 2>> myfile
tail -f myfile &
TAIL_PID=$!
... # your script
kill $TAIL_PID

Pure Bash solution which builds upon @mpapis's answer:
exec 2> >( while read -r line; do printf '%s\n' "${line}" >&2; printf '%s\n' "${line}" >> err.log; done )
and expanded:
exec 2> >(
  while read -r line; do
    printf '%s\n' "${line}" >&2
    printf '%s\n' "${line}" >> err.log
  done
)

You can redirect output to a process and use tee in that process:
#!/usr/bin/env bash
exec 2> >( tee -a err.log )
echo bla >&2
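Running that script looks roughly like this (a hypothetical session, with the script saved as test.sh; because the tee runs asynchronously, its line can appear slightly after the prompt returns):
$ ./test.sh
bla
$ cat err.log
bla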

Related

How to print shell script stdout/stderr to file/s and console

In my bash script I use the following syntax in order to print everything from the script to two files, $file and $sec_file.
We are running the script on our Linux RHEL 7.8 server.
exec > >(tee -a "$file" >>"$sec_file") 2>&1
After the bash script completes, both files contain the stdout/stderr of every line in the script.
Now we additionally want to print stdout/stderr to the console, not only to the files.
I would appreciate any suggestion.
Example of the script:
# more /tmp/script.bash
#!/bin/bash
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
exec > >(tee -a "$file" >>"$sec_file") 2>&1
echo "hello world , we are very happy to stay here "
Example of how to run the script:
/tmp/script.bash
<-- no output from the script -->
# more /tmp/file.txt
hello world , we are very happy to stay here
# more /tmp/sec_file.txt
hello world , we are very happy to stay here
Example of the expected output:
/tmp/script.bash
hello world , we are very happy to stay here
and
# more /tmp/file.txt
hello world , we are very happy to stay here
# more /tmp/sec_file.txt
hello world , we are very happy to stay here
I think the easiest is to just pass multiple files as arguments to tee, like this:
% python3 -c 'import sys; print("to stdout"); print("to stderr", file=sys.stderr)' 2>&1 | tee -a /tmp/file.txt /tmp/file_sec.txt
to stdout
to stderr
% cat /tmp/file.txt
to stdout
to stderr
% cat /tmp/file_sec.txt
to stdout
to stderr
Your script would look like this then:
#!/bin/bash
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
exec > >(tee -a "$file" "$sec_file") 2>&1
echo "hello world , we are very happy to stay here "
I would suggest just writing console output to a new file descriptor:
#!/bin/bash
file=file.txt
sec_file=sec_file.txt
exec 4>&1 > >(tee -a "$file" >>"$sec_file") 2>&1
echo "stdout"
echo "stderr" >&2
echo "to the console" >&4
Output:
me@pc:~/⟫ ./script.sh
to the console
me@pc:~/⟫ cat file.txt
stdout
stderr
me@pc:~/⟫ cat sec_file.txt
stdout
stderr
If you want you can do this and even write to stderr again with >&5:
exec 4>&1 5>&1 > >(tee -a "$file" >>"$sec_file") 2>&1
echo "stderr to console" >&5
Edit: Changed &3 to &4 as &3 is sometimes used for stdin.
But maybe this is the moment to rethink what you are doing and keep &1 stdout and &2 stderr and use &4 and &5 to write to file?
exec 4> >(tee -a "$file" >>"$sec_file") 5>&1
This does, however, require you to add >&4 2>&5 to every line whose output should end up in your files.
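As a minimal sketch of that layout (file names taken from the question; which lines get sent to the files is up to you):
#!/bin/bash
file=/tmp/file.txt
sec_file=/tmp/sec_file.txt
# fd 1/2 stay on the console; fd 4 feeds both files; fd 5 mirrors the console
exec 4> >(tee -a "$file" >>"$sec_file") 5>&1
echo "console only"
echo "console only, on stderr" >&2
echo "logged to both files" >&4 2>&5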

Capture output of piped command while still knowing if first command wrote to stderr

Is it possible to capture the output of cmd2 from cmd1 | cmd2 while still knowing if cmd1 wrote to stderr?
I am using exiftool to strip exif data from files:
exiftool "/path/to/file.ext" -all= -o -
This writes the output to stdout. This works for most files. If the file is corrupt or not a video/image file it will not write anything to stdout and, instead, write an error to stderr. For example:
Error: Writing of this type of file is not supported - /path/to/file.ext
I ultimately need to capture the md5 of files that don't result in an error. Right now I am doing this:
md5=$(exiftool "/path/to/file.ext" -all= -o - | md5sum | awk '{print $1}')
Regardless if the file is a image/video, it'll calculate an md5.
If the file is an image/video, it'll capture the file's md5 as expected.
If the file is not an image/video, exiftool doesn't write anything to stdout and so md5sum calculates the md5 of the null input. But that line will also write an error to stderr.
I need to be able to check if something was written to stderr so I know to scrap the calculated md5.
I know one alternative is to run the exiftool twice: one time without the md5sum and without capturing to see if anything was written to stderr and then a second time with the md5sum and capturing. But this means I have to run exiftool twice. I want to avoid that because it can take a long time for big files. I'd rather only run it once.
Update
Also, I can't capture the output of just exiftool because it yields this error:
bash: warning: command substitution: ignored null byte in input
And I cannot ignore this error because the md5 result is not the same. That is to say:
file=$(exiftool "/path/to/file.ext" -all= -o -)
echo "$file" | md5sum
Will print the above null byte error and will not have the same md5 result as:
exiftool "/path/to/file.ext" -all= -o - | md5sum
There is a special array variable for this, PIPESTATUS. A simple example, where file and file2 exist:
$ ls file &> /dev/null | ls file2 &> /dev/null; echo ${PIPESTATUS[@]}
0 0
And here, where file3 does not exist:
$ ls file3 &> /dev/null | ls file2 &> /dev/null; echo ${PIPESTATUS[@]}
2 0
$ ls file3; echo $?
ls: cannot access 'file3': No such file or directory
2
A triple pipe:
$ ls file 2> /dev/null | ls file3 &> /dev/null | ls file2 &> /dev/null; echo ${PIPESTATUS[@]}
0 2 0
A pipeline captured in a variable, tested with grep:
$ test=$(ls file | grep .; ((${PIPESTATUS[1]} > 0)) && echo error)
$ echo $test
file
$ test=$(ls file3 | grep .; ((${PIPESTATUS[1]} > 0)) && echo error)
ls: cannot access 'file3': No such file or directory
$ echo $test
error
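A minimal sketch adapting this to the exiftool pipeline from the question; it relies on the assumption that exiftool exits non-zero for the unsupported-file error, rather than inspecting stderr directly:
if md5=$(exiftool "/path/to/file.ext" -all= -o - | md5sum | awk '{print $1}'
         exit "${PIPESTATUS[0]}"); then   # make the substitution report exiftool's status
  echo "md5: $md5"
else
  echo "exiftool failed; discarding md5" >&2
fi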
Another approach is to first check whether the file type is an image or video:
type=$(file "/path/to/file.ext")
case $type in
  *image*|*Media*) echo "is an image or video";;
esac
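Combined with the md5 computation, that might look like this (a sketch; the *image*/*Media* patterns come from the answer above and may need extending for your files):
type=$(file "/path/to/file.ext")
case $type in
  *image*|*Media*)
    md5=$(exiftool "/path/to/file.ext" -all= -o - | md5sum | awk '{print $1}')
    ;;
  *)
    echo "not an image or video; skipping" >&2
    ;;
esac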
A coprocess can be used for this:
#!/usr/bin/env bash
case $BASH_VERSION in [0-3].*) echo "ERROR: Bash 4+ required" >&2; exit 1;; esac
coproc STDERR_CHECK { seen=0; while IFS= read -r; do seen=1; done; echo "$seen"; }
{
  md5=$(exiftool "/path/to/file.ext" -all= -o - | md5sum | awk '{print $1}')
} 2>&${STDERR_CHECK[1]}
exec {STDERR_CHECK[1]}>&-
read stderr_seen <&"${STDERR_CHECK[0]}"
if (( stderr_seen )); then
  echo "exiftool emitted stdout with md5 $md5, and had content on stderr"
else
  echo "exiftool emitted stdout with md5 $md5, and did not emit any content on stderr"
fi
md5=$(exec 3>&1; (exiftool "/path/to/file.ext" -all= -o - 2>&1 1>&3) 3> >(md5sum | awk '{print $1}' >&3) | grep -q .)
This opens file descriptor 3 and redirects it to file descriptor 1 (a.k.a. stdout).
The trick is to redirect exiftool outputs:
exiftool ... 2>&1 means that file descriptor 2 (a.k.a. stderr) is redirected to stdout
exiftool ... 1>&3 means that stdout is redirected to file descriptor 3, which, at this moment, is redirected to stdout
Then fd 3 is redirected to another chain of commands using process substitution, i.e. 3> >(md5sum | awk '{print $1}' >&3) where 3> tells to redirect fd3 and >(...) is the process substitution itself.
At the same time, the standard error of exiftool is written to the standard output which is piped into grep -q . which will return 0 if there is at least one character.
Because grep -q . is the last command executed in the main chain of commands, you can simply check the results of $?:
md5=$(exec 3>&1; (exiftool "/path/to/file.ext" -all= -o - 2>&1 1>&3) 3> >(md5sum | awk '{print $1}' >&3) | grep -q .)
if [ $? -eq 0 ]
then
  : # something was written to exiftool's stderr
fi
The error itself will not be printed. If you want to see the error but not capture it in md5, replace grep -q . with grep . >&2:
md5=$(exec 3>&1; (exiftool "/path/to/file.ext" -all= -o - 2>&1 1>&3) 3> >(md5sum | awk '{print $1}' >&3) | grep . >&2)
It is very important that you redirect exiftool outputs in this very order. If you redirected like this:
exiftool "/path/to/file.ext" -all= -o - 1>&3 2>&1
Then stdout is redirected to fd 3 first, and stderr is redirected to stdout afterwards. Because 1>&3 occurs before 2>&1, stderr ends up following stdout to fd 3, which you definitely don't want.
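A quick way to convince yourself of the ordering, using echo as a stand-in for exiftool (the file names here are made up for the illustration):
exec 3>stdout.txt
{ echo out; echo err >&2; } 2>&1 1>&3 | cat >pipe.txt    # "err" goes down the pipe, "out" to stdout.txt
{ echo out; echo err >&2; } 1>&3 2>&1 | cat >pipe2.txt   # both land in stdout.txt, nothing goes down the pipe
exec 3>&-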
The end of the process substitution chain writes to fd3 with >&3 because you want to keep the result to fd3. Without >&3, the result of awk would end up in fd1 which would be piped to grep -q . or grep . >&2 and, again, you definitely don’t want that.
P.S. You don't need to close fd 3, because it was opened in a subshell while assigning md5. Should you need to close the file descriptor, call exec 3>&-.
Just capture the output, and then conditionally use it, e.g.:
if out="$(exiftool "/path/to/file.ext" -all= -o - )"; then
md5=$(echo "$out" | md5sum | awk '{print $1}'))
fi
This makes the assignment to out return the exit status of exiftool, which is checked by the if. Note that this construction assumes that exiftool returns a reasonable exit status.
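If the null-byte warning from the question is a concern (command substitution drops NUL bytes, which changes the md5), a hedged variant is to stage the output in a temporary file instead of a variable; like the snippet above, it still relies on exiftool's exit status:
tmpfile=$(mktemp) || exit 1
if exiftool "/path/to/file.ext" -all= -o - >"$tmpfile"; then
  md5=$(md5sum <"$tmpfile" | awk '{print $1}')
fi
rm -f -- "$tmpfile"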

How can I send multiple commands' output to a single shell pipeline?

I have multiple pipelines, which looks like:
tee -a $logfilename.txt | jq string2object.jq >> $logfilename.json
or
tee -a $logfilename.txt | jq array2object.jq >> $logfilename.json
For each pipeline, I want to apply to multiple commands.
Each set of commands looks something like:
echo "start filelist:"
printf '%s\n' "$PWD"/*
or
echo "start wget:"
wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
and the output from those commands should all go through the pipe.
What I've tried in the past is putting the pipeline on each command separately:
echo "start filelist:" | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
printf '%s\n' "$PWD"/* | tee -a $logfilename | jq -sRf array2object.jq >>$logfilename.json
but in that case the JSON script can only see one line at a time, so it doesn't work correctly.
The Portable Approach
The following is portable to POSIX sh:
#!/bin/sh
die() { rm -rf -- "$tempdir"; [ "$#" -gt 0 ] && echo "$*" >&2; exit 1; }
logfilename="whatever"
tempdir=$(mktemp -d "${TMPDIR:-/tmp}"/fifodir.XXXXXX) || exit
mkfifo "$tempdir/fifo" || die "mkfifo failed"
tee -a "$logfilename" <"$tempdir/fifo" \
| jq -sRf json_log_s2o.jq \
>>"$logfilename.json" & fifo_pid=$!
exec 3>"$tempdir/fifo" || die "could not open fifo for write"
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close the write end of the FIFO
wait "$fifo_pid" # and wait for the process to exit
rm -rf "$tempdir" # delete the temporary directory with the FIFO
Avoiding FIFO Management (Using Bash)
With bash, one can avoid needing to manage the FIFO by using a process substitution:
#!/bin/bash
logfilename="whatever"
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json")
echo "start filelist:" >&3
printf '%s\n' "$PWD/*" >&3
echo "start wget:" >&3
wget -nv http://web.site.com/downloads/2017/file_1.zip >&3 2>&1
wget -nv http://web.site.com/downloads/2017/file_2.zip >&3 2>&1
exec 3>&- # close fd 3 so the process substitution sees EOF
Waiting For Exit (Using Linux-y Tools)
However, the thing this doesn't let you do (without bash 4.4) is detect when jq failed, or wait for jq to finish writing before your script exits. If you want to ensure that jq finishes before your script exits, then you might consider using flock, like so:
writelogs() {
  exec 4>"${1}.json"
  flock -x 4
  tee -a "$1" | jq -sRf json_log_s2o.jq >&4
}
exec 3> >(writelogs "$logfilename")
and later:
exec 3>&-
flock -s "$logfilename.json" -c :
Because the jq process inside the writelogs function holds a lock on the output file, the final flock -s command isn't able to also grab a lock on the output file until jq exits.
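Putting those pieces together, a complete sketch might look like this (the jq filter file name is carried over from above; adapt it to your setup):
#!/usr/bin/env bash
logfilename="whatever"
writelogs() {
  exec 4>"${1}.json"
  flock -x 4                          # hold an exclusive lock while jq writes
  tee -a "$1" | jq -sRf json_log_s2o.jq >&4
}
exec 3> >(writelogs "$logfilename")
echo "start filelist:" >&3
printf '%s\n' "$PWD"/* >&3
exec 3>&-                             # close the write side
flock -s "$logfilename.json" -c :     # blocks until jq releases its lock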
An Aside: Avoiding All The >&3 Redirections
In either shell, the below is just as valid:
{
  echo "start filelist:"
  printf '%s\n' "$PWD"/*
  echo "start wget:"
  wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
  wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} >&3
It's also possible, but not advisable, to pipe a code block into a pipeline, thus replacing the FIFO use or process substitution altogether:
{
  echo "start filelist:"
  printf '%s\n' "$PWD"/*
  echo "start wget:"
  wget -nv http://web.site.com/downloads/2017/file_1.zip 2>&1
  wget -nv http://web.site.com/downloads/2017/file_2.zip 2>&1
} | tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"${logfilename}.json"
...why not advisable? Because there's no guarantee in POSIX sh as to which components of a pipeline if any run in the same shell interpreter as the rest of your script; and if the above isn't run in the same piece of the script, then variables will be thrown away (and without extensions such as pipefail, exit status as well). See BashFAQ #24 for more information.
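A tiny illustration of that variable-scope pitfall (hypothetical; the exact behavior depends on the shell):
status=unset
{
  status=started
  echo "doing some work"
} | cat
echo "$status"   # commonly prints "unset": the block ran in a subshell, so the assignment was lost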
Waiting For Exit On Bash 4.4
With bash 4.4, process substitutions export their PIDs in $!, and these can be waited for. Thus, you get an alternate way to wait for the FIFO to exit:
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json"); log_pid=$!
...and then, later on:
wait "$log_pid"
as an alternative to the flock approach given earlier. Obviously, do this only if you have bash 4.4 available.
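End to end, the bash 4.4 variant might look like this (a sketch reusing the names from above):
#!/usr/bin/env bash
logfilename="whatever"
exec 3> >(tee -a "$logfilename" | jq -sRf json_log_s2o.jq >>"$logfilename.json"); log_pid=$!
{
  echo "start filelist:"
  printf '%s\n' "$PWD"/*
} >&3
exec 3>&-          # close the write side so the substitution sees EOF
wait "$log_pid"    # bash 4.4+: $! holds the process substitution's PID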

how to redirect stdout and stderr to a file while showing stderr to screen?

The script should redirect all output (stdout and stderr) to a log file and display only stderr on the screen (notifying the user if an error happens). The tee command may help, but I don't know how to write the command.
Thanks.
P.S. Thanks lihao and konsolebox for the answers, but is there a way to keep the output in order? For example:
$ cat test.sh
echo "to stdout..1"
echo "to stderr..1" >&2
echo "to stdout..2"
echo "to stderr..2" >&2
$ sh test.sh 2>&1 >test.log | tee -a test.log
to stderr..1
to stderr..2
$ cat test.log
to stdout..1
to stdout..2
to stderr..1
to stderr..2
The command { sh test.sh 2> >(tee /dev/fd/4); } 4>&1 >test.log produces the same output.
How about the following:
cmd args 2>&1 >logfile | tee -a logfile
You should map normal stdout to another file descriptor (4), make the file the default output, then use tee to redirect output to the new file descriptor through /dev/fd. Of course you'd need process substitution to pass stderr output to tee:
{ cmd args 2> >(exec tee /dev/fd/4); } 4>&1 >file
If you want to make a general redirection for the script, place this at the beginning of it:
exec 4>&1 >file 2> >(exec tee /dev/fd/4)
You can restore normal output with:
exec >&4 4>&-
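A small worked example of that general redirection (the file name and messages are made up; stderr lines may interleave slightly since the tee runs asynchronously):
#!/usr/bin/env bash
exec 4>&1 >out.log 2> >(exec tee /dev/fd/4)
echo "this goes to out.log only"
echo "this goes to out.log and to the screen" >&2
exec >&4 4>&-      # restore normal stdout
echo "back on the terminal"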

How to save STDERR and STDOUT of a pipeline on a file?

I'm running a pipeline of commands that have STDERR and STDOUT outputs. I want to save both outputs in a single log file.
These are my attempts to do it:
bash my_script.sh > log.txt # Only saves STDOUT
bash my_script.sh > >(tee log.txt) 2> >(tee log.txt >&2) # The STDERR output overwrites the STDOUT output
I hope you can provide a simple solution to do this.
Thanks for your time!
How about just
bash my_script.sh > >(tee log.txt) 2>&1
Also, if you want to append to log.txt if it already exists, add the -a option to tee:
bash my_script.sh > >(tee -a log.txt) 2>&1
It's actually equivalent to bash my_script.sh 2>&1 | tee log.txt or bash my_script.sh 2>&1 | tee -a log.txt
bash my_script.sh > log.txt 2>&1
where 2>&1 redirects stderr to stdout
