Pass bash syntax (pipe operator) correctly to function - bash

How is it possible that the append operator >> and the stream redirection operator are passed on to the function try(), which catches errors and exits...
When I do this :
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT" }
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { "$@" || die "cannot $*"; }
try commandWhichFails >> "logFile.log" 2>&1
When I run the above, the exitFunc echo also ends up in the logFile...
How do I need to change the above so that the try command does basically this:
try ( whatever comes here >> "logFile.log" 2>&1 )
Can this be achieved with subshells?

If you want to use stderr in yell and not have it lost by your redirection in the body of the script, then you need to preserve it at the start of the script. For example in file descriptor 5:
#!/bin/bash
exec 5>&2
yell() { echo "$0: $*" >&5; }
...
If your bash supports it you can ask it to allocate the new file descriptor for you using a new syntax:
#!/bin/bash
exec {newfd}>&2
yell() { echo "$0: $*" >&$newfd; }
...
If you need to, you can close the new fd with exec {newfd}>&-.
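Putting the pieces together with the functions from the question, a minimal sketch (assuming the fd-5 trick above; exitFunc is routed to fd 5 as well so its message stays out of the log):
#!/bin/bash
exec 5>&2                                    # preserve the real stderr in fd 5
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT" >&5; }
yell() { echo "$0: $*" >&5; }
die() { yell "$*"; exitFunc 111; }
try() { "$@" || die "cannot $*"; }
try commandWhichFails >> "logFile.log" 2>&1  # diagnostics now bypass the log redirection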

If I understand you correctly, you can't achieve it with subshells.
If you want the output of commandWhichFails to be sent to logFile.log, but not the errors from try() etc., the problem with your code is that redirections are resolved before command execution, in order of appearance.
Where you've put
try false >> "logFile.log" 2>&1
(using false as a command which fails), the redirections apply to the output of try, not to its arguments (at this point, there is no way to know that try executes its arguments as a command).
There may be a better way to do this, but my instinct is to add a catch function, thus:
last_command=
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT"; } #added ; here
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { last_command="$*"; "$@"; }
catch() { [ $? -eq 0 ] || die "cannot $last_command"; }
try false >> "logFile.log" 2>&1
catch
Depending on portability requirements, you can always replace last_command with a function like last_command() { history | tail -2 | sed -n '1s/^ *[0-9]* *//p' ;} (bash), which requires set -o history and removes the need for the try() function. You can replace the -2 with -"$1" to get the Nth previous command.
For a more complete discussion, see BASH: echoing the last command run. I'd also recommend looking at trap for general error handling.
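A minimal trap sketch, assuming bash; on_error and its message format are illustrative, and the log file name comes from the question:
#!/bin/bash
set -o errtrace                # let functions and subshells inherit the ERR trap
on_error() {
    local rc=$?
    echo "$0: command failed with status $rc near line ${BASH_LINENO[0]}" >&2
    exit "$rc"
}
trap on_error ERR
false >> "logFile.log" 2>&1    # fires on_error; its message still reaches the script's stderr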

Related

How to keep return value while processing a command's output?

In Bash environment, I have a command, and I want to detect if it fails.
However it is not failing gracefully:
# ./program
do stuff1
do stuff2
error!
do stuff3
# echo $?
0
When it runs without errors (successful run), it returns with 0. When it runs into an error, it can either
return with 1, easily detectable
return with 0, but during run it prints some error messages
I want to use this program in a script with these goals:
I need the output to be printed to stdout normally (not all at once after the program has finished!)
I need to catch the program's return value via $? or similar
I need to grep for the "error" string in the output and set a variable if it is present
Then I can evaluate the result by checking the return value and the "error" output.
However, if I add tee, it will ruin the return value.
I have tried ${PIPESTATUS[0]} and ${PIPESTATUS[1]}, but it doesn't seem to work:
program | tee >(grep -i error)
Even if there is no error, ${PIPESTATUS[1]} is always 0 (true), because the tee command was successful.
So what is the way to do this in bash?
#!/usr/bin/env bash
case $BASH_VERSION in
''|[0-3].*|4.[012].*) echo "ERROR: bash 4.3+ required" >&2; exit 1;;
esac
exec {stdout_fd}>&1
if "$@" | tee "/dev/fd/$stdout_fd" | grep -i error >/dev/null; then
echo "Errors occurred (detected on stdout)" >&2
elif (( ${PIPESTATUS[0]} )); then
echo "Errors detected (via exit status)" >&2
else
echo "No errors occurred" >&2
fi
Tested as follows:
$ myfunc() { echo "This is an ERROR"; return 0; }; export -f myfunc
$ ./test-err myfunc
This is an ERROR
Errors occurred (detected on stdout)
$ myfunc() { echo "Everything is not so fine"; return 1; }; export -f myfunc
$ ./test-err myfunc
Everything is not so fine
Errors detected (via exit status)
$ myfunc() { echo "Everything is fine"; }; export -f myfunc
$ ./test-err myfunc
Everything is fine
No errors occurred
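A hedged alternative without the {stdout_fd} syntax is to buffer the output in a temporary file; tee still streams it live, and the tests mirror the script above (the command under test is again passed as arguments):
#!/usr/bin/env bash
out=$(mktemp)
"$@" | tee "$out"            # output streams to stdout as it is produced
status=${PIPESTATUS[0]}      # the program's own status, not tee's
if grep -qi error "$out"; then
    echo "Errors occurred (detected on stdout)" >&2
elif (( status )); then
    echo "Errors detected (via exit status)" >&2
else
    echo "No errors occurred" >&2
fi
rm -f "$out"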

Bash script: Redirect error of command to function that receives an argument

How can I redirect the error to a function that receives a string as an argument?
This is the code:
function error {
echo "[ERROR]: $1"
}
# This does works:
terraform apply myplan || { echo -e '\n[ERROR]: Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]: Terraform apply failed. Fix errors and run the script again!
# This does NOT work:
terraform apply myplan || { error 'Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]
I do not understand why.
Example:
#!/bin/bash
# simulate terraform commands
function terraform_ok {
echo "this is on stdout from terraform_ok"
exit 0
}
function terraform_warning {
echo "this is on stdout from terraform_warning"
echo "this is on stderr from terraform_warning" >&2
exit 0
}
function terraform_error {
echo "this is on stdout from terraform_error"
echo "this is on stderr from terraform_error" >&2
echo "this is line two on stderr" >&2
exit 1
}
function catch_error {
rv=$?
if [[ $rv != 0 ]]; then
echo -e "[ERROR] >>>\n$@\n[ERROR] <<<"
elif [[ "$@" != "" ]]; then
echo -e "[WARNING] >>>\n$@\n[WARNING] <<<"
fi
# exit subshell with the same exit code the terraform command had
exit $rv
}
function swap_stdout_and_stderr {
"$@" 3>&2 2>&1 1>&3
}
function perform {
(catch_error "$(swap_stdout_and_stderr "$@")") 2>&1
}
function die {
rv=$?
echo "\"$@\" failed with exit code $rv."
exit $rv
}
function perform_or_die {
perform "$@" || die "$@"
}
perform_or_die terraform_ok apply myplan
perform_or_die terraform_warning apply myplan
perform_or_die terraform_error apply myplan
echo "this will never be reached"
Output (all on stdout):
this is on stdout from terraform_ok
this is on stdout from terraform_warning
[WARNING] >>>
this is on stderr from terraform_warning
[WARNING] <<<
this is on stdout from terraform_error
[ERROR] >>>
this is on stderr from terraform_error
this is line two on stderr
[ERROR] <<<
"terraform_error apply myplan" failed with exit code 1.
Explanation:
The swapping of stdout and stderr (3>&2 2>&1 1>&3) is done because when you do variable=$(command) the variable is assigned whatever comes on stdout from command. The same applies in catch_error "$(command)": whatever comes on stdout from command is assigned to $@ in the function catch_error. In your case I assume you want to catch what comes on stderr instead, hence the swapping.
The final 2>&1 on the line redirects stderr (which is the old stdout) back to stdout so that the expected behavior of grepping in the output of this script works as usual.
Since the catch_error ... command is running in a subshell I've used || to execute another command in case the subshell returns an error. That command is die "$#" to exit the whole script with the same error code that the command exited with and to be able to show the command that failed.
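The swap is easier to see in isolation; a one-liner sketch (fd 3 is closed at the end so it does not leak into the command):
{ echo "to stdout"; echo "to stderr" >&2; } 3>&2 2>&1 1>&3 3>&-
# "to stdout" now arrives on stderr, "to stderr" on stdout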
The simplest way I can think of; this will save all output to a file:
terraform apply --auto-approve -no-color -input=false \
2>&1 | tee /tmp/tf-apply.out
Note that &> would redirect both stdout and stderr to the file; to save only the errors, redirect stderr alone with 2>.
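To also keep terraform's exit status despite the pipe, ${PIPESTATUS[0]} can be read right after the pipeline; a sketch reusing the error function from the question:
terraform apply --auto-approve -no-color -input=false \
    2>&1 | tee /tmp/tf-apply.out
rc=${PIPESTATUS[0]}          # terraform's status, not tee's
if (( rc )); then
    error 'Terraform apply failed.'
    exit "$rc"
fi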

Passing subshell to bash function

I have a set of bash log functions which enable me to comfortably redirect all output to a log file and bail out in case something happens:
#! /usr/bin/env bash
# This script is meant to be sourced
export SCRIPT=$0
if [ -z "${LOG_FILE}" ]; then
export LOG_FILE="./log.txt"
fi
# https://stackoverflow.com/questions/11904907/redirect-stdout-and-stderr-to-function
# If the message is piped, log receives only the type; if the message is
# passed as a parameter, it receives the type and the message
log() {
local TYPE
local IN
local PREFIX
local LINE
TYPE="$1"
if [ -n "$2" ]; then
IN="$2"
else
if read -r LINE; then
IN="${LINE}"
fi
while read -r LINE; do
IN="${IN}\n${LINE}"
done
IN=$(echo -e "${IN}")
fi
if [ -n "${IN}" ]; then
PREFIX=$(date +"[%X %d-%m-%y - $(basename "${SCRIPT}")] ${TYPE}: ")
IN="$(echo "${IN}" | awk -v PREFIX="${PREFIX}" '{print PREFIX $0}')"
touch "${LOG_FILE}"
echo "${IN}" >> "${LOG_FILE}"
fi
}
# receives message as parameter or piped, logs as info
info() {
log "( INFO )" "$@"
}
# receives message as parameter or piped, logs as an error
error() {
log "(ERROR )" "$@"
}
# logs error and exits
fail() {
error "$1"
exit 1
}
# Reroutes stdout to info and stderr to error
log_wrap()
{
"$@" > >(info) 2> >(error)
return $?
}
Then I use the functions as follows:
LOG_FILE="logging.log"
source "log_functions.sh"
info "Program started"
log_wrap some_command arg0 arg1 --kwarg=value || fail "Program failed"
Which works. Since log_wrap redirects stdout and stderr, I don't want it interfering with commands composed using pipes or redirections, such as:
log_wrap echo "File content" > ~/user_file || fail "user_file could not be created."
log_wrap echo "File content" | sudo tee ~/root_file > /dev/null || fail "root_file could not be created."
So I want a way to group those commands so their redirection is solved and then pass that to log_wrap. I am aware of two ways of grouping:
Subshells: They are not meant to be passed around, naturally this:
log_wrap ( echo "File content" > ~/user_file ) || fail "user_file could not be created."
throws a syntax error.
Braces (grouping?, context?): When called inside a command, the brace is interpreted as an argument.
log_wrap { echo "File content" > ~/user_file } || fail "user_file could not be created."
Is roughly equivalent (in my understanding) to:
log_wrap '{' echo "File content" > ~/user_file '}' || fail "user_file could not be created."
To recapitulate, my question is: Is there a way to pass a composition of commands, in my case composed by redirection/piping, to a bash function?
The way it's set up, you can only pass what POSIX calls simple commands -- command names and arguments. No compound commands like subshells or brace groups will work.
However, you can use functions to run arbitrary code in a simple command:
foo() { { echo "File content" > ~/user_file; } || fail "user_file could not be created."; }
log_wrap foo
You could also consider just automatically applying your wrapper to all commands in the rest of the script using exec:
exec > >(info) 2> >(error)
{ echo "File content" > ~/user_file; } || fail "user_file could not be created.";
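Another hedged option is to hand log_wrap a single string and let an explicit shell evaluate the redirection (with the usual quoting caveats):
log_wrap bash -c 'echo "File content" > ~/user_file' || fail "user_file could not be created."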

setting variables inside a compound command in bash fails (command group)

The group command { list; } should execute list in the current shell environment.
This allows things like variable assignments to be visible outside of the command group (http://mywiki.wooledge.org/BashGuide/CompoundCommands).
I use it to send output to a logfile as well as terminal:
{ { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; } 3>&1 1>&2 2>&3 | tee -a stderr.txt;
On the topic "pipe stdout and stderr to two different processes in shell script?" read here: pipe stdout and stderr to two different processes in shell script?.
{ echo "Result is 13"; echo "ERROR: division by 0" 1>&2; }
simulates a command with output to stdout and stderr.
I want to evaluate the exit status also. /bin/true and /bin/false simulate a command that may succeed or fail. So I try to save $? to a variable r:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
~$ r=init; { /bin/true; r=$?; } 2>/dev/null; echo $r;
0
As you can see the above pipeline construct does not set variable r while the second command line leads to the expected result. Is it a bug or is it my fault? Thanks.
I tested Ubuntu 12.04.2 LTS (~$) and Debian GNU/Linux 7.0 (wheezy) (~#) with the following versions of bash:
~$ echo $BASH_VERSION
4.2.25(1)-release
~# echo $BASH_VERSION
4.2.37(1)-release
I think you missed that /bin/true returns 0 and /bin/false returns 1:
$ r='res:'; { /bin/true; r+=$?; } 2>/dev/null; echo $r;
res:0
And
$ r='res:'; { /bin/false; r+=$?; } 2>/dev/null; echo $r;
res:1
I tried a test program:
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; }
echo $x
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; } | cat
echo $x
And indeed - it looks like the pipe forces the prior code into another process, but without reinitialising bash - so $BASHPID changes but $$ does not.
See Difference between bash pid and $$ for more details on the difference between $$ and $BASHPID.
Also outputting $BASH_SUBSHELL shows that the second bit is running in a subshell (level 1), and the first is at level 0.
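If your bash is 4.2 or newer, shopt -s lastpipe is a possible workaround sketch: it runs the last pipeline element in the current shell, so assignments made there survive. It only takes effect while job control is off, as in non-interactive scripts:
#!/bin/bash
shopt -s lastpipe            # bash 4.2+; ignored while job control is on
x=0
echo "42" | { read -r x; }   # the group now runs in the current shell
echo "$x"                    # prints 42, not 0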
bash executes all elements of a pipeline as subprocesses; if they're shell builtins or command groups, that means they execute in subshells and so any variables they set don't propagate to the parent shell. This can be tricky to work around in general, but if all you need is the exit status of the command group, you can use the $PIPESTATUS array to get it:
$ { false; } | cat; echo "${PIPESTATUS[@]}"
1 0
$ { false; } | cat; r=${PIPESTATUS[0]}; echo $r
1
$ { true; } | cat; r=${PIPESTATUS[0]}; echo $r
0
Note that this only works for getting the exit status of the last command in the group:
$ { false; true; false; uselessvar=$?; } | cat; r=${PIPESTATUS[0]}; echo $r
0
... because uselessvar=$? succeeded.
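If all you need to know is whether anything in the pipeline failed, set -o pipefail is a simpler sketch: the pipeline's own exit status becomes the last non-zero status of any element (and the option stays on for the rest of the script):
set -o pipefail
{ false; } | cat
echo $?                      # 1 (from the group) instead of cat's 0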
Using a variable to hold the exit status is not an appropriate method with pipelines:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
The pipeline creates a subshell. In the pipe the exit status is assigned to a (local) copy of variable r whose value is dropped.
So I want to add my solution to the originating challenge: send output to a logfile as well as the terminal while keeping track of the exit status. I decided to use another file descriptor. Formatting in a single line may be a bit confusing ...
{ { r=$( { { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; /bin/false; echo $? 1>&4; } | tee stdout.txt; } 3>&1 1>&2 2>&3 | tee stderr.txt; } 4>&1 1>&2 2>&3 ); } 3>&1; } 1>stdout.term 2>stderr.term; echo r=$r
... so I apply some indentation:
{
{
: # no operation
r=$( {
{
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
} 3>&1 1>&2 2>&3 | tee stderr.txt;
} 4>&1 1>&2 2>&3 );
} 3>&1;
} 1>stdout.term 2>stderr.term; echo r=$r
Do not mind the "no operation" line. It turned out that the forum's formatting checker relies on it and would otherwise insist: "Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon."
If executed it yields the following output:
r=1
For demonstration purposes I redirected terminal output to the files stdout.term and stderr.term.
root@voipterm1:~# cat stdout.txt
Result is 13
root@voipterm1:~# cat stderr.txt
ERROR: division by 0
root@voipterm1:~# cat stdout.term
Result is 13
root@voipterm1:~# cat stderr.term
ERROR: division by 0
Let me explain:
The following group command simulates some command that yields an error code of 1 along with some error message. File descriptor 4 is declared in step 3:
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
By the following code stdout and stderr streams are swapped using file descriptor 3 as a dummy. This way error messages are sent to the file stderr.txt:
{
...
} 3>&1 1>&2 2>&3 | tee stderr.txt;
Exit status has been sent to file descriptor 4 in step 1. It is now redirected to file descriptor 1 which defines the value of variable r. Error messages are redirected to file descriptor 2 while normal output ("Result is 13") is attached to file descriptor 3:
r=$( {
...
} 4>&1 1>&2 2>&3 );
Finally file descriptor 3 is redirected to file descriptor 1. This controls the output "Result is 13":
{
...
} 3>&1;
The outermost curly brace group merely redirects the terminal output to stdout.term and stderr.term to show how the command behaves.
Gordon Davisson suggested exploiting the array variable PIPESTATUS, which contains the exit status values of the processes in the most recently executed foreground pipeline. This may be a promising approach, but it leads to the question of how to hand the value over to the enclosing pipeline.
~# r=init; { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; r=${PIPESTATUS[0]}; } 3>&1 1>&2 2>&3 | tee -a stderr.txt; echo "Can you tell me the exit status? $r"
ERROR: division by 0
Result is 13
Can you tell me the exit status? init

In bash, is there an equivalent of die "error msg"

In perl, you can exit with an error msg with die "some msg". Is there an equivalent single command in bash? Right now, I'm achieving this using commands: echo "some msg" && exit 1
You can roll your own easily enough:
die() { echo "$*" 1>&2 ; exit 1; }
...
die "Kaboom"
Here's what I'm using. It's too small to put in a library so I must have typed it hundreds of times ...
warn () {
echo "$0:" "$@" >&2
}
die () {
rc=$1
shift
warn "$@"
exit $rc
}
Usage: die 127 "Syntax error"
This is a function very close to Perl's die (but with the function name added):
function die
{
local message=$1
[ -z "$message" ] && message="Died"
echo "$message at ${BASH_SOURCE[1]}:${FUNCNAME[1]} line ${BASH_LINENO[0]}." >&2
exit 1
}
And the bash way of dying when a built-in function fails (with the function name):
function die
{
local message=$1
[ -z "$message" ] && message="Died"
echo "${BASH_SOURCE[1]}: line ${BASH_LINENO[0]}: ${FUNCNAME[1]}: $message." >&2
exit 1
}
So, Bash keeps all the needed info in several shell variables (a sketch after this list shows them in action):
LINENO - current executed line number
FUNCNAME - call stack of functions, first element (index 0) is current function, second (index 1) is function that called current function
BASH_LINENO - call stack of line numbers, where corresponding FUNCNAME was called
BASH_SOURCE - call stack of source files, where the corresponding FUNCNAME is defined
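A short sketch showing these variables in action (the output's file name and line number depend on where you save it):
#!/bin/bash
f() { echo "in ${FUNCNAME[0]}, called from ${FUNCNAME[1]} at ${BASH_SOURCE[1]}:${BASH_LINENO[0]}"; }
g() { f; }
g    # prints something like: in f, called from g at ./demo.sh:3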
Yep, that's pretty much how you do it.
You might use a semicolon or newline instead of &&, since you want to exit whether or not echo succeeds (though I'm not sure what would make it fail).
Programming in a shell means using lots of little commands (some built-in commands, some tiny programs) that do one thing well and connecting them with file redirection, exit code logic and other glue.
It may seem weird if you're used to languages where everything is done using functions or methods, but you get used to it.
# echo passed params and print them to a log file
wlog(){
# check if a terminal exists, then echo
test -t 1 && echo "`date +%Y.%m.%d-%H:%M:%S` [$$] $*"
# check LogFile and append
test -z "$LogFile" || {
echo "`date +%Y.%m.%d-%H:%M:%S` [$$] $*" >> "$LogFile"
} #eof test
}
}
# eof function wlog
# exit with passed status and message
Exit(){
ExitStatus=0
case $1 in
[0-9]) ExitStatus="$1"; shift 1;;
esac
Msg="$*"
test "$ExitStatus" = "0" || Msg=" ERROR: $Msg : $*"
wlog " $Msg"
exit $ExitStatus
}
#eof function Exit
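A usage sketch (LogFile is the variable the functions above consult; some_command is a placeholder):
LogFile=/tmp/run.log                            # hypothetical log location
wlog "starting the run"
some_command || Exit 2 "some_command failed"    # some_command stands in for a real command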
