I'm aware of the bash "capture output" capability, along the lines of (two separate files):
sub.sh: echo hello
main.sh: greeting="$(./sub.sh)"
This will set the greeting variable to be hello.
However, I need to write a script where I want to capture just some of the output, allowing the rest to go to "normal" standard output:
sub.sh: xyzzy hello ; plugh goodbye
main.sh: greeting="$(./sub.sh)"
What I would like is for hello to be placed in the greeting variable but goodbye to be sent to the standard output of main.sh.
What do the magic commands xyzzy and plugh need to be replaced with above (or what can I do in main.sh) in order to achieve this behaviour? I suspect it could be done with some sneaky fiddling with descriptor-based redirections, but I'm not sure. If it's not possible, I'll have to resort to writing one of the items to a temporary file to be picked up later, but I'd prefer not to do that.
To make things clearer, here's the test case I'm using (currently using the non-working file handle 3 method). First sub.sh:
echo xx_greeting >&3 # This should be captured to variable.
echo xx_stdout # This should show up on stdout.
echo xx_stderr >&2 # This should show up on stderr.
Then main.sh:
greeting="$(./sub.sh)" 3>&1
echo "Greeting was ${greeting}"
And I run it thus:
./main.sh >/tmp/out 2>/tmp/err
expecting to see the following files:
/tmp/out:
xx_stdout
Greeting was xx_greeting
/tmp/err:
xx_stderr
This can be done by introducing an extra file descriptor as follows. First the sub.sh script for testing, which simply writes a different thing to three different descriptors (implicit >&1 for the first):
echo for-var
echo for-out >&3
echo for-err >&2
Second, the main.sh which calls it:
exec 3>&1
greeting="$(./sub.sh)"
echo "Variable is ${greeting}"
Then you simply run it ensuring you know what output is going to the different locations:
pax> ./main.sh > xxout 2> xxerr
pax> cat xxout
for-out
Variable is for-var
pax> cat xxerr
for-err
Hence you can see that, when calling sub.sh from main.sh, stuff written to file handle 1 goes to the capture variable, stuff written to file handle 2 goes to standard error, and stuff written to file handle 3 goes to standard output.
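If you'd rather not leave fd 3 open for the remainder of main.sh, a one-shot variant is possible that scopes the extra descriptor to a single command group. Here is a sketch of the same technique (the choice of fd 4 is arbitrary):
{ greeting="$(./sub.sh 3>&4)"; } 4>&1   # fd 4 names the real stdout only inside the braces
echo "Variable is ${greeting}"
Inside the braces the command substitution still captures sub.sh's fd 1, while fd 3 is duplicated from fd 4, which points at the script's original stdout.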
Terminals support ANSI color escape sequences, e.g. \033[31m switches to red and \033[0m switches back to uncolored.
I would like to make a small bash-wrapper that reliably puts out stderr in red, i.e. it should put \033[31m before and \033[0m after everything that comes from stderr.
I'm not sure that this is even possible, because when two parallel processes (or even a single process) write to both stdout and stderr, there would have to be a way to distinguish the two on a character-by-character basis.
Colorizing text is simple enough: read each line and echo it with appropriate escape sequences at beginning and end. But colorizing standard error gets tricky because standard error doesn’t get passed to pipes.
Here’s one approach that works by swapping standard error and standard output, then filtering standard output.
Here is our test command:
#!/bin/bash
echo hi
echo 'Error!' 1>&2
And the wrapper script:
#!/bin/bash
( # swap stderr and stdout
    exec 3>&1  # save a copy of stdout on fd 3
    exec 1>&2  # point stdout at stderr
    exec 2>&3- # move the saved stdout from fd 3 onto fd 2, closing fd 3
    "$@"
) | while read -r line; do
    echo -e "\033[31m${line}\033[0m"
done
Then:
$ ./wrapper ./test-command
hi
Error! # <- shows up red
Unfortunately, because of the swap, the wrapped command's normal output comes out of the wrapper's stderr, and the colorized errors come out of stdout, so you can't pipe the output into any further scripts. You can probably get around this by creating a temporary fifo… but hopefully this little wrapper script is enough to meet your needs.
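For what it's worth, here is a rough sketch of that fifo workaround (the temporary-name handling here is my own assumption, not a tested recipe): the command's stderr is fed through a fifo to a background colorizer, so its stdout stays on the wrapper's stdout.
#!/bin/bash
fifo=$(mktemp -u) || exit 1
mkfifo "$fifo" || exit 1
while IFS= read -r line; do
    printf '\033[31m%s\033[0m\n' "$line" >&2   # colorized errors go back to stderr
done < "$fifo" &
"$@" 2> "$fifo"   # run the command with its stderr flowing into the fifo
wait              # let the colorizer drain the fifo before cleanup
rm -f "$fifo"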
Based on andrewdotn's wrapper
Changes:
Puts the stderr output back to stderr
Avoids echo -e processing escape sequences in the content of the lines
wrapper
#!/bin/bash
"${#}" 2> >(
while read line; do
echo -ne "\033[31m" 1>&2
echo -n "${line}" 1>&2
echo -e "\033[0m" 1>&2
done
)
Issues:
The output lines end up grouped together, rather than interleaved with stdout
Test script:
#!/bin/bash
echo Hi
echo "\033[32mStuff"
echo message
echo error 1>&2
echo message
echo error 1>&2
echo message
echo error 1>&2
Output:
Hi
\033[32mStuff
message
message
message
error # <- shows up red
error # <- shows up red
error # <- shows up red
I have a lot of bash commands. Some of them fail for different reasons.
I want to check if some of my errors contain a substring.
Here's an example:
#!/bin/bash
if [[ $(cp nosuchfile /foobar) =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
else
    echo "No match"
fi
When I run it, the error is printed to screen and I get "No match":
$ ./myscript
cp: cannot stat 'nosuchfile': No such file or directory
No match
Instead, I wanted the error to be captured and match my condition:
$ ./myscript
File does not exist. Please check your files and try again.
How do I correctly match against the error message?
P.S. I've found a solution; what do you think of it?
out=`cp file1 file2 2>&1`
if [[ $out =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
elif [[ $out =~ "omitting directory" ]]; then
    echo "You have specified a directory instead of a file"
fi
I'd do it like this
# Make sure we always get error messages in the same language
# regardless of what the user has specified.
export LC_ALL=C
case $(cp file1 file2 2>&1) in
    # or use backticks; double quoting the case argument is not necessary,
    # but you can do it if you wish
    # (it won't get split or glob-expanded in either case)
    *"No such file"*)
        echo >&2 "File does not exist. Please check your files and try again."
        ;;
    *"omitting directory"*)
        echo >&2 "You have specified a directory instead of a file"
        ;;
esac
This'll work with any POSIX shell too, which might come in handy if you ever decide to
convert your bash scripts to POSIX shell (dash is quite a bit faster than bash).
You need the first 2>&1 redirection because executables normally write diagnostic information, which is not primarily meant for further machine processing, to stderr.
You should use the >&2 redirections with the echos because what you're outputting there falls into that same category.
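To see why the 2>&1 matters, compare what the command substitution captures with and without it (reusing the cp example from the question):
out=$(cp nosuchfile /foobar)        # $out is empty; the message goes straight to the terminal
out=$(cp nosuchfile /foobar 2>&1)   # $out now contains cp's error message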
PSkocik's answer is the correct one when you need to check for a specific string in an error message. However, if you came here looking for ways to detect errors:
I want to check whether or not a command failed
Check the exit code instead of the error messages:
if cp nosuchfile /foobar
then
    echo "The copy was successful."
else
    ret="$?"
    echo "The copy failed with exit code $ret"
fi
I want to differentiate different kinds of failures
Before looking for substrings, check the exit code documentation for your command. For example, man wget lists:
EXIT STATUS
Wget may return one of several error codes if it encounters problems.
0 No problems occurred.
1 Generic error code.
2 Parse error---for instance, when parsing command-line options
3 File I/O error.
(...)
in which case you can check it directly:
wget "$url"
case "$?" in
0) echo "No problem!";;
6) echo "Incorrect password, try again";;
*) echo "Some other error occurred :(" ;;
esac
Not all commands are this disciplined in their exit status, so you may need to check for substrings instead.
Both examples:
out=`cp file1 file2 2>&1`
and
case $(cp file1 file2 2>&1) in
have the same issue: they mix stderr and stdout into one output that can then be examined. The problem arises when you try a complex command with interactive output, e.g. top or ddrescue, and you need to preserve stdout untouched and examine only stderr.
To avoid this issue you can try this (works only in bash 4.2 and later!):
shopt -s lastpipe
declare errmsg_variable="errmsg_variable UNSET"
command 3>&1 1>&2 2>&3 | read errmsg_variable
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
Explanation
This line
command 3>&1 1>&2 2>&3 | read errmsg_variable
redirects stderr into errmsg_variable (using a file-descriptor trick and a pipe) without mixing it with stdout. Normally each part of a pipeline runs in its own subprocess, so after executing a command with pipes, any assignments are not visible in the main process, and examining them in the rest of the code can't work. To prevent this you have to change the standard shell behavior by using:
shopt -s lastpipe
which executes the last part of a pipeline in the current shell process, so:
| read errmsg_variable
assigns the content "pumped" into the pipe (in our case the error message) to a variable that resides in the main process. Now you can examine this variable in the rest of the code to find a specific substring:
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
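Put together with a concrete command, the whole recipe looks like this (a sketch reusing the cp example from the question; note that a single read captures only the first line of stderr):
#!/bin/bash
shopt -s lastpipe   # requires bash 4.2+ and a non-interactive shell
errmsg_variable="errmsg_variable UNSET"
# 3>&1 saves the pipe on fd 3, 1>&2 diverts stdout to stderr, 2>&3 sends stderr into the pipe
cp nosuchfile /foobar 3>&1 1>&2 2>&3 | read -r errmsg_variable
if [[ "$errmsg_variable" == *"No such file"* ]]; then
    echo "File does not exist. Please check your files and try again."
fi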
Suppose I have this script:
logfile=$1
echo "This is just a debug message indicating the script is starting to run..."
# Do some work...
echo "Results: x, y and z." >> $logfile
Is it possible to invoke the script from the command-line such that $logfile is actually stdout?
Why? I would like to have a script that prints part of its output to stdout or, optionally, to a file.
"But why not remove the >> $logfile part and just invoke it with ./script >> filename when you want to write to a file?", you may ask.
Well, because I just want to do this "optional redirect" thing for some output messages. In the example above, just the second message should be affected.
Use /dev/stdout if your operating system is Linux or something else similarly compliant with convention. Or:
#!/bin/bash
# works on bash even if OS doesn't provide a /dev/stdout
# for non-bash shells, consider using exec 3>&1 explicitly if $1 is empty
exec 3>"${1:-/dev/stdout}"
echo "This is just a debug message indicating the script is starting to run..." >&2
echo "Results: x, y and z." >&3
This is also vastly more efficient than putting >>"$filename" on every line that should log to the file, which reopens the file for output on each command.
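For non-bash shells on systems that lack /dev/stdout, a portable sketch of the same idea (following the comment in the script above):
#!/bin/sh
if [ -n "$1" ]; then
    exec 3>>"$1"   # append the results to the named logfile
else
    exec 3>&1      # fd 3 becomes just another name for stdout
fi
echo "This is just a debug message indicating the script is starting to run..."
echo "Results: x, y and z." >&3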
Is there a standard Bash tool that acts like echo but outputs to stderr rather than stdout?
I know I can do echo foo 1>&2 but it's kinda ugly and, I suspect, error prone (e.g. more likely to get edited wrong when things change).
You could do this, which facilitates reading:
>&2 echo "error"
>&2 (short for 1>&2) makes file descriptor #1 a copy of file descriptor #2. Therefore, after this redirection is performed, both file descriptors will refer to the same file: the one file descriptor #2 was originally referring to. For more information see the Bash Hackers Illustrated Redirection Tutorial.
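Since redirections may appear anywhere in a simple command, these three are equivalent:
>&2 echo "error"
echo >&2 "error"
echo "error" >&2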
You could define a function:
echoerr() { echo "$@" 1>&2; }
echoerr hello world
This would be faster than a script and have no dependencies.
Camilo Martin's bash-specific suggestion uses a "here string" and will print anything you pass to it, including arguments (-n) that echo would normally swallow:
echoerr() { cat <<< "$@" 1>&2; }
Glenn Jackman's solution also avoids the argument swallowing problem:
echoerr() { printf "%s\n" "$*" >&2; }
Since 1 is the standard output, you do not have to explicitly name it in front of an output redirection like >. Instead, you can simply type:
echo This message goes to stderr >&2
Since you seem to be worried that 1>&2 will be difficult for you to reliably type, the elimination of the redundant 1 might be a slight encouragement to you!
Another option
echo foo >>/dev/stderr
No, that's the standard way to do it. It shouldn't cause errors.
If you don't mind logging the message also to syslog, the not_so_ugly way is:
logger -s "$msg"
The -s option means: "Output the message to standard error as well as to the system log."
Another option that I recently stumbled on is this:
{
    echo "First error line"
    echo "Second error line"
    echo "Third error line"
} >&2
This uses only Bash built-ins while making multi-line error output less error prone (since you don't have to remember to add >&2 to every line).
Note: I'm answering the post itself, not the misleading/vague "echo that outputs to stderr" question (already answered by the OP).
Use a function to show the intention and source the implementation you want. E.g.
#!/bin/bash
[ -x error_handling ] && . error_handling
filename="foobar.txt"
config_error $filename "invalid value!"
output_xml_error "No such account"
debug_output "Skipping cache"
log_error "Timeout downloading archive"
notify_admin "Out of disk space!"
fatal "failed to open logger!"
And error_handling being:
ADMIN_EMAIL=root@localhost
config_error() { filename="$1"; shift; echo "Config error in $filename: $*" >&2; }
output_xml_error() { echo "<error>$*</error>" >&2; }
debug_output() { [ "$DEBUG" = "1" ] && echo "DEBUG: $*"; }
log_error() { logger -s "$*"; }
fatal() { which logger >/dev/null && logger -s "FATAL: $*" || echo "FATAL: $*" >&2; exit 100; }
notify_admin() { echo "$*" | mail -s "Error from script" "$ADMIN_EMAIL"; }
Reasons that handle concerns in OP:
nicest syntax possible (meaningful words instead of ugly symbols)
harder to make an error (especially if you reuse the script)
it's not a standard Bash tool, but it can be a standard shell library for you or your company/organization
Other reasons:
clarity - shows intention to other maintainers
speed - functions are faster than shell scripts
reusability - a function can call another function
configurability - no need to edit original script
debugging - easier to find the line responsible for an error (especially if you're dealing with a ton of redirecting/filtering output)
robustness - if a function is missing and you can't edit the script, you can fall back to using external tool with the same name (e.g. log_error can be aliased to logger on Linux)
switching implementations - you can switch to external tools by removing the "x" attribute of the library
output agnostic - you no longer have to care if it goes to STDERR or elsewhere
personalizing - you can configure behavior with environment variables
My suggestion:
echo "my errz" >> /proc/self/fd/2
or
echo "my errz" >> /dev/stderr
echo "my errz" > /proc/self/fd/2 will effectively output to stderr because /proc/self is a link to the current process, and /proc/self/fd holds the process opened file descriptors, and then, 0, 1, and 2 stand for stdin, stdout and stderr respectively.
The /proc/self link doesn't work on MacOS, however, /proc/self/fd/* is available on Termux on Android, but not /dev/stderr. How to detect the OS from a Bash script? can help if you need to make your script more portable by determining which variant to use.
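Rather than detecting the OS, one option (my own suggestion, not from the linked question) is to feature-test for whichever path exists:
if [ -e /dev/stderr ]; then
    echo "my errz" >> /dev/stderr
elif [ -e /proc/self/fd/2 ]; then
    echo "my errz" >> /proc/self/fd/2
else
    echo "my errz" >&2   # fall back to a plain redirection
fi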
Don't use cat as some have mentioned here. cat is a program,
while echo and printf are bash (shell) builtins. Launching a program or another script (also mentioned above) means creating a new process with all its costs. Using builtins and writing functions is quite cheap, because there is no need to create (execute) a process (environment).
The opener asks "is there any standard tool to output (pipe) to stderr"; the short answer is: NO... why?... redirecting pipes is an elementary concept in systems like Unix (Linux...), and bash (sh) builds on these concepts.
I agree with the opener that redirecting with notations like 1>&2 is not very pleasant for modern programmers, but that's bash. Bash was not intended to write huge and robust programs; it is intended to help admins get their work done with fewer keypresses ;-)
And at least, you can place the redirection anywhere in the line:
$ echo This message >&2 goes to stderr
This message goes to stderr
This is a simple STDERR function, which redirects its pipe input to STDERR.
#!/bin/bash
# *************************************************************
# This function redirects the pipe input to STDERR.
#
# @param stream
# @return string
#
function STDERR () {
    cat - 1>&2
}
# remove the directory /bubu
if rm /bubu 2>/dev/null; then
    echo "Bubu is gone."
else
    echo "Has anyone seen Bubu?" | STDERR
fi
# run bubu.sh and redirect your output
tux@earth:~$ ./bubu.sh >/tmp/bubu.log 2>/tmp/bubu.err
read is a shell builtin command that prints its prompt to stderr, and it can be used like echo without performing redirection tricks:
read -t 0.1 -p "This will be sent to stderr"
The -t 0.1 is a timeout that disables read's main functionality: storing one line of stdin into a variable.
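Two caveats, both from read's documentation: the prompt is written without a trailing newline, and it is displayed only when input comes from a terminal. A sketch that adds the newline back with $'...' quoting:
read -t 0.1 -p $'This will be sent to stderr\n'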
Combining the solutions suggested by James Roth and Glenn Jackman,
and adding an ANSI color code to display the error message in red:
echoerr() { printf "\e[31;1m%s\e[0m\n" "$*" >&2; }
# if somehow \e is not working on your terminal, use \u001b instead
# echoerr() { printf "\u001b[31;1m%s\u001b[0m\n" "$*" >&2; }
echoerr "This error message should be RED"
Make a script
#!/bin/sh
echo "$@" 1>&2
that would be your tool.
Or make a function if you don't want to have a script in separate file.
Here is a function that checks the exit status of the last command, shows the error, and terminates the script.
or_exit() {
    local exit_status=$?
    local message=$*
    if [ "$exit_status" -gt 0 ]
    then
        echo "$(date '+%F %T') [$(basename "$0" .sh)] [ERROR] $message" >&2
        exit "$exit_status"
    fi
}
Usage:
gzip "$data_dir"
or_exit "Cannot gzip $data_dir"
rm -rf "$junk"
or_exit Cannot remove $junk folder
The function prints the script name and the date so that it is useful when the script is called from crontab and the errors are logged:
59 23 * * * /my/backup.sh 2>> /my/error.log