Shell script: redirect output - bash

Our shell script contains the header
#!/bin/bash -x
that causes the commands to also be listed. Instead of having to type
$ ./script.sh &> log.txt
I would like to add a command to this script that will log all following output (also) to a log file. How is this possible?

You can place this line at the start of your script:
# redirect stdout/stderr to a file
exec &> log.txt
EDIT: As per comments below:
#!/bin/bash -x
# redirect stdout/stderr to a file and still show them on terminal
exec &> >(tee log.txt)
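Note that &> is a bash extension; if the script might ever run under a plain POSIX sh, a portable sketch of the same redirection is:
# portable form: send stdout to the file, then point stderr at stdout
exec > log.txt 2>&1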

How to redirect output of an entire shell script within the script itself?

Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
# redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
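To tie this back to the question, a minimal sketch of the conditional form (the log path is illustrative):
#!/bin/sh
if [ ! -t 0 ]; then
    # non-interactive run: save the original streams, then log everything
    exec 3>&1 4>&2 >/tmp/daily.log 2>&1
fi
# rest of script...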
Addressing the question as updated.
#...part of script without redirection...
{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
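For example, a small self-contained sketch of the brace-scoped form (the file names are placeholders):
#!/bin/sh
echo "this goes to the terminal"
{
    echo "this goes to out.txt"
    echo "this goes to err.txt" >&2
} > out.txt 2> err.txt
echo "back on the terminal"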
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.
Send stdout to a file
exec > file
with stderr
exec > file
exec 2>&1
append both stdout and stderr to file
exec >> file
exec 2>&1
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
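A quick sketch showing the two behaviours side by side:
# no command argument: only the redirection applies, the script continues
exec > out.log
echo "this lands in out.log"
# with a command argument: the shell process is replaced and never returns
exec cat out.log
echo "never reached"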
You can make the whole script a function like this:
main_function() {
do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
    # if not run via terminal, log everything into a log file
    main_function >> /var/log/my_uber_script.log 2>&1
else
    # run via terminal, only output to screen
    main_function
fi
Alternatively, you may log everything into the logfile on each run and still output it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
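One caveat with the pipe form: afterwards, $? is tee's exit status, not main_function's. In bash you can recover the function's status from PIPESTATUS:
main_function 2>&1 | tee -a /var/log/my_uber_script.log
# exit with main_function's status rather than tee's
exit "${PIPESTATUS[0]}"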
For saving the original stdout and stderr you can use:
exec [fd number]>&1
exec [fd number]>&2
For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, and "walla4" to stderr.
#!/bin/bash
exec 5>&1
exec 6>&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
# if stdin is not a terminal, append all further stdout to test.log
[ -t 0 ] || exec >> test.log
I finally figured out how to do it. I wanted to not just save the output to a file but also find out whether the bash script ran successfully!
I've wrapped the bash commands inside a function, called that function main_function with its output teed to a file, and afterwards checked the exit status with if [ $? -eq 0 ].
#!/bin/bash
# (process substitution below is a bashism, so /bin/sh won't do)
main_function() {
python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
echo 'Success!'
else
echo 'Failure!'
fi
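Equivalently, you can let if test the function directly instead of inspecting $? afterwards:
if main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
then
    echo 'Success!'
else
    echo 'Failure!'
fi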

How to redirect both stdout and stderr to a file from within a bash script?

I want to add a command to my bash script that directs all stderr and stdout to specific files. From this and many other sources, I know that from the command line I would use:
/path/to/script.sh >> log_file 2>> err_file
However, I want something inside my script, something akin to these SLURM flags:
#!/bin/bash
#SBATCH -o slurm.stdout.txt # Standard output log
#SBATCH -e slurm.stderr.txt # Standard error log
<code>
Is there a way to direct output from within a script, or do I need to use >> log_file 2>> err_file every time I call the script? Thanks
You can use this:
exec >> file
exec 2>&1
at the start of your bash script. This will append both stdout and stderr to your file.
You can use this at the start of your bash script:
# Redirected Output
exec > log_file 2> err_file
If the file already exists, it is truncated to zero size. If you prefer to append, use this:
# Appending Redirected Output
exec >> log_file 2>> err_file
If you want to redirect both stdout and stderr to the same file, then you can use:
# Redirected Output
exec &> log_file
# This is semantically equivalent to
exec > log_file 2>&1
If you prefer to append, use this:
# Appending Redirected Output
exec >> log_file 2>&1
#SBATCH --output=serial_test_%j.log # Standard output and error log
This will send all output, i.e. stdout and stderr, to a single log file called serial_test_<JOBID>.log.
Ref: https://help.rc.ufl.edu/doc/Sample_SLURM_Scripts

Where does `set -x` cause bash to print to?

If I set -x in my bash session (v4.1.2(2), CentOS 6.10), I get:
$ ls /root
+ ls --color=auto /root
ls: cannot open directory /root: Permission denied
Great, it echoes the command I ran and prints it to the terminal. This is expected. Now I redirect both stdout and stderr to another file:
$ ls /root &> stuff.txt
+ ls --color=auto /root
It still prints the command to the terminal.
QUESTION
Where is set -x having bash print to if it isn't stderr or stdout?
The set -x command prints tracing information to stderr.
When you run this command...
ls /root &> stuff.txt
You're only redirecting stdout and stderr for the ls command. You're not changing either for your current shell, which is where you have run set -x.
As Mad Physicist points out, the technical answer is "it logs to BASH_XTRACEFD", which defaults to stderr. You can redirect trace logging for the current shell to another file by doing something like:
# open a new file descriptor for logging
exec 4> trace.log
# redirect trace logs to fd 4
BASH_XTRACEFD=4
# enable tracing
set -x
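With that in place, normal stdout and stderr still reach the terminal while the trace lands in trace.log; for instance:
echo "hello"   # "hello" appears on the terminal; "+ echo hello" goes to trace.log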
When you execute a command, you can redirect its standard output (known as /dev/stdout) directly to a file. If the command generates error output (generally sent to /dev/stderr), you can redirect that to a file as well:
$ command > /path/to/output.txt 2> /path/to/error.txt
When you execute the command set -x, you ask bash to generate a trace of the commands being executed. It does this by sending messages to /dev/stderr. In contrast to a normal command, you cannot easily redirect this trace on a per-command basis, because bash executes the script and generates the trace on its own stderr at the same time. If you would like to capture the trace, you have to redirect the error output of bash directly, which can be done with the command
exec 2> /path/to/trace.txt
Note: this file will then also contain the error output of every command executed in the script.
Examples:
#!/usr/bin/env bash
set -x
command
This sends all output and error output to the terminal.
#!/usr/bin/env bash
set -x
command 2> /path/to/command.err
This sends the output of command and the trace of bash to the terminal, but catches the error output of command in a file.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace.err
command 2> /path/to/command.err
This sends the output of command to the terminal, the error output of command to a file, and the trace of the script to /path/to/trace.err.
#!/usr/bin/env bash
set -x
exec 2> /path/to/trace_and_command.err
command
This sends the output of command to the terminal, and both the trace and the error output of command to the same file.
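If you want the trace separated from the commands' own error output, the BASH_XTRACEFD variable mentioned earlier combines nicely with these patterns; a sketch (the file names are placeholders):
#!/usr/bin/env bash
exec 4> /path/to/trace.err       # dedicated descriptor for the trace
BASH_XTRACEFD=4
set -x
command 2> /path/to/command.err  # the command's own stderr stays separate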

Automatically capture all stderr and stdout to a file and still show on console

I'm looking for a way to capture all standard output and standard error to a file, while also outputting it to console. So:
(set it up here)
set -x # I want to capture every line that's executed too
cat 'foo'
echo 'bar'
Now the output from foo and bar, as well as the debugging output from set -x, will be logged to some log file and shown on the console.
I can't control how the file is invoked, so it needs to be set up at the start of the file.
You can use exec and process substitution to send stdout and stderr inside of the script to tee. The process substitution is a bashism, so it is not portable and will not work if bash is called as /bin/sh or with --posix.
exec > >(tee foo.log) 2>&1
set -x # I want to capture every line that's executed too
cat 'foo'
echo 'bar'
sleep 2
The sleep is added to the end because the output to the console will be buffered by the tee. The sleep will help prevent the prompt from returning before the output has finished.
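On bash 4.4 or newer you can avoid the arbitrary sleep: process substitution sets $!, so the script can restore its streams and wait for tee to drain. A sketch, assuming bash >= 4.4:
exec {orig}>&1                   # save the real stdout (bash 4.1+ syntax)
exec > >(tee foo.log) 2>&1
tee_pid=$!                       # bash 4.4+: PID of the process substitution
# ... script body ...
exec >&"$orig" 2>&1              # release the pipe so tee sees EOF
wait "$tee_pid"                  # block until tee has flushed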
Maybe create a proxy-script that calls the real script, redirecting stderr to stdout and piping it to tee?
Something like this:
#!/bin/bash
# forward all arguments with "$@" ("$#" would pass the argument count)
/path/to/real/script "$@" 2>&1 | tee file
If you want only stderr on your console, you can do:
#!/bin/bash
set -e
outfile=logfile
# stdout goes only to the log file
exec > >(cat >> "$outfile")
# stderr goes to the log file and is echoed back to the real stderr
exec 2> >(tee -a "$outfile" >&2)
# write your code here
