Suppose I have this script:
logfile=$1
echo "This is just a debug message indicating the script is starting to run..."
# Do some work...
echo "Results: x, y and z." >> $logfile
Is it possible to invoke the script from the command-line such that $logfile is actually stdout?
Why? I would like to have a script that prints part of its output to stdout or, optionally, to a file.
"But why not remove the >> $logfile part and just invoke it with ./script >> filename when you want to write to a file?", you may ask.
Well, because I just want to do this "optional redirect" thing for some output messages. In the example above, just the second message should be affected.
Use /dev/stdout, if your operating system is Linux or something similarly compliant with convention. Or:
#!/bin/bash
# works on bash even if OS doesn't provide a /dev/stdout
# for non-bash shells, consider using exec 3>&1 explicitly if $1 is empty
exec 3>"${1:-/dev/stdout}"
echo "This is just a debug message indicating the script is starting to run..." >&2
echo "Results: x, y and z." >&3
This is also vastly more efficient than putting >>"$filename" on every line that should log to the file, which reopens the file for output on each command.
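For example (assuming the script above is saved as script.sh; the name is just for illustration):
# fd 3 falls back to /dev/stdout, so "Results: x, y and z." appears on the terminal
./script.sh
# fd 3 is opened on the given file instead, so only the debug message reaches the terminal
./script.sh /tmp/results.log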
Related
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command 2>&1 >> $LOG
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
    some_command
    echo "blah"
} | tee 1>"$LOG" 2>&1
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
    some_command
    echo "blah"
} | if (( $ENABLE_LOG ))
then
    tee 1>"$LOG" 2>&1
fi
A more elegant solution, based on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just need to uncomment one of the three, depending on what is desired, and don't worry about anything else, ever again. This either logs all subsequent output to a file or to console, or not at all.
No output duplicated, works universally the same for every command (without explicit redirects), no weird stuff, and as easy as it gets.
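If you prefer not to edit the script each time, here is a minimal sketch of the same idea driven by a single variable (LOG_MODE is a name I made up for this example, not anything standard):
#!/bin/bash
# LOG_MODE is hypothetical: set it to "file", "quiet" or anything else (console) before running
case "${LOG_MODE:-console}" in
    file)  exec 1>> /tmp/logfile 2>&1 ;;  # log to file
    quiet) exec 1> /dev/null 2>&1 ;;      # be quiet
    *)     exec 2>&1 ;;                   # dump to console
esac
some_command
echo "blah"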
If I have understood your requirement clearly, the following should do what you want
exec >> "$LOG"
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG. (The order matters: stdout is redirected to the file first, then stderr is made a copy of it.)
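While debugging, the same two lines let you keep everything on the console simply by pointing $LOG at the terminal, for example (a sketch, assuming a Linux-style /dev/stdout):
LOG=/dev/stdout        # while debugging: output stays on the console
# LOG=/tmp/some_file   # normal operation: output is appended to the file
exec >> "$LOG"
exec 2>&1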
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
I have a script that does many actions and echoes information.
I want to have a "duplicate" of everything that is printed to the terminal in a file, and I want to add some extra information to that file as well.
Problem: I can't simply pipe every echo into tee, because some of the printing is done by functions that are sourced from another file, and I can't modify them.
So I searched and I found this
redirect COPY of stdout to log file from within bash script itself
So, based on that, I wrote the following script:
#!/bin/bash
strReportingFilepath="./reporting.log"
echo "[1] stdout : This text will not appear in the reporting file"
echo "[2] stderr : Neither this one" >&2
# We dup stdout and stderr into the reporting file (the reporting file is emptied first)
exec > >(tee -i "$strReportingFilepath") 2>&1
echo "[3] stdout : This text will be on both the terminal and the reporting file"
echo "[4] stderr : This one too :)" >&2
echo "[5] stdout : Last case : this text should only appear into the reporting file" >> "$strReportingFilepath"
exit 0
When I run this script, I have this on my terminal
(terminal info) # ./my_script.sh
[1] stdout : This text will not appear in the reporting file
[2] stderr : Neither this one
(terminal info) # [3] stdout : This text will be on both the terminal and the reporting file
[4] stderr : This one too :)
and this into my file
(terminal info) # cat reporting.log
[3] stdout : This text will be on both the terminal and the reporting file
[4] stderr : This one too :)
I have two problems:
First, for an unknown reason (I suspect that I use exec/named pipes/tee in a sloppy way), my script actually waits at the end. I have to press the Enter key in order to terminate the script.
Second, the [5] echo doesn't appear in the file.
I'm pretty sure that I messed something up with the exec line, but I can't understand what, and I don't know what to do.
I added a "sync" call after the [5] echo, but it didn't work.
Well, I think I finally managed to do what I wanted.
What I want:
Having everything printed to the terminal (stdout, stderr, stdin) "logged" into a file
Having the possibility to add text to this log file without displaying it on the terminal
I stopped using tee, and I found a link talking about "script(8)".
After tinkering around, I ended up with this script:
#!/bin/bash
strReportingFilepath="./reporting.log"
# Part that will do the script reporting
if [ -z "$SCRIPT" ]; then
    export SCRIPT="$0"
    # Build the command string, quoting each argument
    strCommand="$0"
    strSearch="'"
    strReplace="'\''"
    for strArgument in "$@"; do
        strCommand+=" '${strArgument//$strSearch/$strReplace}'"
    done
    # Execution of the script under script(8)
    /usr/bin/script --return --quiet --flush --append "$strReportingFilepath" --command "$strCommand"
    # We keep the return value
    intScriptReturnValue="$?"
    # The line automatically added by /usr/bin/script is something that I don't want
    strTemporaryFilepath="$(mktemp)"
    grep -v '^Script started on ' "$strReportingFilepath" > "$strTemporaryFilepath"
    mv "$strTemporaryFilepath" "$strReportingFilepath"
    exit "$intScriptReturnValue"
fi
# "Start" of the script that I want "supervised"
sync
echo "[0 ] directly" >> "$strReportingFilepath"
echo "[1. ] out"
sync
echo "[1.5] directly" >> "$strReportingFilepath"
echo "[2 ] out"
echo "[3. ] err" >&2
sync
echo "[3.5] directly" >> "$strReportingFilepath"
echo "[4 ] err" >&2
echo -n "[5 ] input : "
read
echo "[6 ] out"
echo "[7 ] err" >&2
echo " End :)"
exit 5
Now, there are still some problems:
I really don't know what I have done, so if you're thinking about using this code as is, think about it twice.
I just know that, in my case, it "seems to work" (yippee!).
Each time I want to write something directly into the log file, I have to call "sync" first. Without it, the lines aren't in order. Please note that "sync" doesn't guarantee that it's instantaneous. In the docs, they say that commands like "halt" do a "sleep" after a sync in order to be sure, so it's still possible to have a race condition resulting in the wrong line order.
It seems that "script" doesn't work in some cases, like when stdin is not a terminal. See "script(8)".
Well, sorry for this strange question; I would have bet that what I wanted to do was simple and common, but it turns out it isn't.
You have a race condition; everything sent via tee is just getting buffered, and only gets written (to the terminal and disk) after the script has actually finished. Note: sync doesn't affect this, since it flushes the operating system's buffers, not tee's.
That's why you're getting a terminal prompt before line [3]:
(terminal info) # [3] stdout : This text will be on both the terminal and the reporting file
...your shell sends the prompt for the next command (after running the script), then tee flushes everything in its buffers, so you see lines [3] and [4] after the prompt. Pressing enter after this doesn't terminate the script (it ended a while ago), it just gets you a clean shell prompt that isn't mixed up with the output.
Something similar happens with the reporting.log file. Line [5] gets written into it, and then a moment later tee writes lines [3] and [4] into it, but since tee's output pointer is set to write to it starting at the beginning of the file, it overwrites line [5].
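A sketch of one way to work around both issues (my own variation, not part of the answer above), assuming bash 4.4 or newer so the process substitution's PID can be waited on: run tee in append mode so direct writes are not overwritten, and explicitly close the pipe and wait for tee before exiting. Note that the relative ordering of tee's output and the direct writes is still not guaranteed.
#!/bin/bash
strReportingFilepath="./reporting.log"
: > "$strReportingFilepath"                       # start with an empty file
exec > >(tee -a "$strReportingFilepath") 2>&1     # -a: append, so direct writes are not clobbered
tee_pid=$!                                        # PID of the process substitution (bash >= 4.4)
echo "goes to the terminal and the file"
echo "goes to the file only" >> "$strReportingFilepath"
exec >&- 2>&-                                     # close our end so tee sees EOF
wait "$tee_pid"                                   # let tee flush before the prompt comes back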
I'm developing a Bash script which invokes another Bash script which prints a line to stdout. That output is captured by the first Bash script and used later. It works, but it has the downside that any other output printed by the second script will cause this part to behave unexpectedly, because there will be extra content.
main.sh
#!/bin/bash
# Invoke worker.sh and capture its standard output to stats
stats=$(worker.sh --generate-stats)
echo "stats=$stats"
worker.sh
#!/bin/bash
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
In this over-simplified example, it's not a problem to use this construct, but as worker.sh grows in size and complexity, it's hard to remember that no other command can print to stdout without confounding the behavior, and if someone else works on worker.sh without realizing they can't print to stdout, it can easily get fouled. So what is considered good practice to generate output in one script and use it in the other?
I'm wondering if a fifo would be appropriate, or another file descriptor, or just a plain file. Or if exec should be used in this case, something like what is shown here https://www.tldp.org/LDP/abs/html/x17974.html:
#!/bin/bash
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec >&2 # stdout now goes to stderr
echo "Didn't know I shouldn't print to stdout"
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
But I wouldn't want to use that if it's not considered good practice.
Many command-line utilities have quiet and verbose modes; it's generally considered good practice to have the most verbose output (debugging, tracing, etc.) be separated to standard error anyway, but it's common to have normal output be formatted for human legibility (e.g. include table headings and column separators) and quiet mode output be just the bare data for programmatic use. (For one example, see docker images vs docker images -q). So that would be my recommendation - have worker.sh take a flag indicating whether its output is being consumed programmatically, and write it such that its output is all sent via a function that checks that flag and filters appropriately.
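A minimal sketch of that recommendation (the flag name and the say helper are my own invention, not something from the question):
#!/bin/bash
# worker.sh (sketch): -q / --quiet selects bare, machine-readable output
quiet=0
[[ $1 == "-q" || $1 == "--quiet" ]] && { quiet=1; shift; }
# All human-oriented output goes through this helper, which quiet mode silences
say() { (( quiet )) || echo "$@"; }
say "Collecting statistics..."    # chatter, dropped in quiet mode
stats="cpu=90 mem=50 disk=15"
say "Statistics:"
echo "$stats"                     # the bare data, printed in both modes
main.sh would then call it as stats=$(./worker.sh --quiet) and receive only the data line.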
Maybe a different approach would be for the second script to test whether its stdout is being used programmatically:
gash.sh:
#!/bin/bash
data=$(./another.sh)
echo "Received $data"
another.sh:
#!/bin/bash
# for -t see man isatty(3). 1 is file descriptor 1 - stdout
if [ -t 1 ]; then
    echo "stdout is a terminal"
else
    echo "stdout is not a terminal"
fi
Gives (where $ is a generic keyboard prompt):
$ bash gash.sh
Received stdout is not a terminal
$ bash another.sh
stdout is a terminal
You could then set a flag to change script behaviour (ls(1) does a similar thing). However, you should be prepared for this:
$ bash another.sh|more
stdout is not a terminal
$ bash another.sh > out.txt
$ cat out.txt
stdout is not a terminal
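For example, another.sh could set a flag based on that test and format its output accordingly (a sketch; the variable name and the data are just for illustration):
#!/bin/bash
if [ -t 1 ]; then
    interactive=1
else
    interactive=0
fi
if [ "$interactive" -eq 1 ]; then
    echo "CPU  MEM  DISK"            # human-friendly output with headings
    echo "90   50   15"
else
    echo "cpu=90 mem=50 disk=15"     # bare data for the calling script
fi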
Is it possible to redefine file descriptors (e.g. stderr) in bash?
I would like to send all output to a file by default while still being able to use the original stderr and stdout.
#!/bin/bash
echo "Error: foo bar" 1>2
REAL_STDERR=2
REAL_STDOUT=1
2=open("/tmp/stderr.log")
1=open("/tmp/stdout.log")
echo "This goes to stdout.log"
if ! curl doesntexist.yet; then
    echo "Error: Unable to reach host. See stderr.log for details" 1>REAL_STDERR
fi
The exec builtin does this when not given a name of a command to run.
exec 3>&1 4>&2 2>/tmp/stderr.log >/tmp/stdout.log
echo "This goes to stdout.log"
echo "This goes to stderr.log" >&2
echo "This goes directly to real stderr" >&4
Note that redirections are processed in the order they're given on the command line left-to-right. Thus, &1 and &2 are interpreted as-modified by any previous redirections on the same command.
See the relevant POSIX spec.
If you want to use variable names for your file descriptors (with the numbers automatically allocated), you'll need bash 4.1 or newer. There, you can do:
exec {real_stderr}>&2 {real_stdout}>&1 >stdout.log 2>stderr.log
echo "This goes stdout.log"
echo "This goes to stderr.log" >&2
echo "This goes to real stderr" >&$real_stderr
Is there a standard Bash tool that acts like echo but outputs to stderr rather than stdout?
I know I can do echo foo 1>&2 but it's kinda ugly and, I suspect, error prone (e.g. more likely to get edited wrong when things change).
You could do this, which facilitates reading:
>&2 echo "error"
>&2 makes file descriptor #1 a copy of file descriptor #2. Therefore, after this redirection is performed, both file descriptors will refer to the same file: the one file descriptor #2 was originally referring to. For more information see the Bash Hackers Illustrated Redirection Tutorial.
You could define a function:
echoerr() { echo "$@" 1>&2; }
echoerr hello world
This would be faster than a script and have no dependencies.
Camilo Martin's bash specific suggestion uses a "here string" and will print anything you pass to it, including arguments (-n) that echo would normally swallow:
echoerr() { cat <<< "$@" 1>&2; }
Glenn Jackman's solution also avoids the argument swallowing problem:
echoerr() { printf "%s\n" "$*" >&2; }
Since 1 is the standard output, you do not have to explicitly name it in front of an output redirection like >. Instead, you can simply type:
echo This message goes to stderr >&2
Since you seem to be worried that 1>&2 will be difficult for you to reliably type, the elimination of the redundant 1 might be a slight encouragement to you!
Another option
echo foo >>/dev/stderr
No, that's the standard way to do it. It shouldn't cause errors.
If you don't mind logging the message also to syslog, the not_so_ugly way is:
logger -s "$msg"
The -s option means: "Output the message to standard error as well as to the system log."
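If you want the syslog entries to be attributable to your script, logger also accepts a tag (the message here is just an example):
logger -s -t "$(basename "$0")" "Unable to reach host"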
Another option that I recently stumbled on is this:
{
    echo "First error line"
    echo "Second error line"
    echo "Third error line"
} >&2
This uses only Bash built-ins while making multi-line error output less error prone (since you don't have to remember to add >&2 to every line).
Note: I'm answering the post, not the misleading/vague "echo that outputs to stderr" question (already answered by the OP).
Use a function to show the intention and source the implementation you want. E.g.
#!/bin/bash
[ -x error_handling ] && . error_handling
filename="foobar.txt"
config_error $filename "invalid value!"
output_xml_error "No such account"
debug_output "Skipping cache"
log_error "Timeout downloading archive"
notify_admin "Out of disk space!"
fatal "failed to open logger!"
And error_handling being:
ADMIN_EMAIL=root@localhost
config_error() { filename="$1"; shift; echo "Config error in $filename: $*" >&2; }
output_xml_error() { echo "<error>$*</error>" >&2; }
debug_output() { [ "$DEBUG" == "1" ] && echo "DEBUG: $*"; }
log_error() { logger -s "$*"; }
fatal() { which logger >/dev/null && logger -s "FATAL: $*" || echo "FATAL: $*"; exit 100; }
notify_admin() { echo "$*" | mail -s "Error from script" "$ADMIN_EMAIL"; }
Reasons that handle concerns in OP:
nicest syntax possible (meaningful words instead of ugly symbols)
harder to make an error (especially if you reuse the script)
it's not a standard Bash tool, but it can be a standard shell library for you or your company/organization
Other reasons:
clarity - shows intention to other maintainers
speed - functions are faster than shell scripts
reusability - a function can call another function
configurability - no need to edit original script
debugging - easier to find the line responsible for an error (especially if you're dealing with a ton of redirecting/filtering of output)
robustness - if a function is missing and you can't edit the script, you can fall back to using an external tool with the same name (e.g. log_error can be aliased to logger on Linux)
switching implementations - you can switch to external tools by removing the "x" attribute of the library
output agnostic - you no longer have to care if it goes to STDERR or elsewhere
personalizing - you can configure behavior with environment variables
My suggestion:
echo "my errz" >> /proc/self/fd/2
or
echo "my errz" >> /dev/stderr
echo "my errz" > /proc/self/fd/2 will effectively output to stderr because /proc/self is a link to the current process, and /proc/self/fd holds the process opened file descriptors, and then, 0, 1, and 2 stand for stdin, stdout and stderr respectively.
The /proc/self link doesn't work on MacOS, however, /proc/self/fd/* is available on Termux on Android, but not /dev/stderr. How to detect the OS from a Bash script? can help if you need to make your script more portable by determining which variant to use.
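A small sketch of that idea (my own illustration): pick whichever path is available at runtime and fall back to a plain >&2 otherwise.
err_path=""
if [ -w /dev/stderr ]; then
    err_path=/dev/stderr
elif [ -w /proc/self/fd/2 ]; then
    err_path=/proc/self/fd/2
fi
if [ -n "$err_path" ]; then
    echo "my errz" >> "$err_path"
else
    echo "my errz" >&2    # fallback that works everywhere
fi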
Don't use cat as some have mentioned here. cat is a program, while echo and printf are bash (shell) builtins. Launching a program or another script (also mentioned above) means creating a new process with all its costs. Using builtins and writing functions is quite cheap, because there is no need to create (execute) a process (environment).
The opener asks "is there any standard tool to output (pipe) to stderr"; the short answer is: no. Why? Redirecting pipes is an elementary concept in systems like Unix (Linux...), and bash (sh) builds on these concepts.
I agree with the opener that redirecting with notations like 1>&2 is not very pleasant for modern programmers, but that's bash. Bash was not intended for writing huge and robust programs; it is intended to help admins get their work done with fewer keypresses ;-)
And at least, you can place the redirection anywhere in the line:
$ echo This message >&2 goes to stderr
This message goes to stderr
This is a simple STDERR function, which redirects its piped input to STDERR.
#!/bin/bash
# *************************************************************
# This function redirect the pipe input to STDERR.
#
# @param stream
# @return string
#
function STDERR () {
    cat - 1>&2
}
# remove the directory /bubu
if rm /bubu 2>/dev/null; then
    echo "Bubu is gone."
else
    echo "Has anyone seen Bubu?" | STDERR
fi
# run bubu.sh and redirect your output
tux@earth:~$ ./bubu.sh >/tmp/bubu.log 2>/tmp/bubu.err
read is a shell builtin command that prints to stderr, and can be used like echo without performing redirection tricks:
read -t 0.1 -p "This will be sent to stderr"
The -t 0.1 is a timeout that disables read's main functionality, storing one line of stdin into a variable.
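Note that, as far as I know, read only displays the -p prompt when its input comes from a terminal, so this trick stops working when stdin is redirected. Forcing input from the controlling terminal is one possible workaround (a sketch):
read -t 0.1 -p "This will be sent to stderr" < /dev/tty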
Combining the solutions suggested by James Roth and Glenn Jackman, and adding an ANSI color code to display the error message in red:
echoerr() { printf "\e[31;1m%s\e[0m\n" "$*" >&2; }
# if somehow \e is not working on your terminal, use \u001b instead
# echoerr() { printf "\u001b[31;1m%s\u001b[0m\n" "$*" >&2; }
echoerr "This error message should be RED"
Make a script
#!/bin/sh
echo $* 1>&2
that would be your tool.
Or make a function if you don't want to have a script in separate file.
Here is a function for checking the exit status of the last command, showing the error, and terminating the script.
or_exit() {
    local exit_status=$?
    local message=$*
    if [ "$exit_status" -gt 0 ]
    then
        echo "$(date '+%F %T') [$(basename "$0" .sh)] [ERROR] $message" >&2
        exit "$exit_status"
    fi
}
Usage:
gzip "$data_dir"
or_exit "Cannot gzip $data_dir"
rm -rf "$junk"
or_exit Cannot remove $junk folder
The function prints out the script name and the date in order to be useful when the script is called from crontab and its errors are logged.
59 23 * * * /my/backup.sh 2>> /my/error.log