I trimmed my script down, but my log function stops working and I don't understand why. I adapted a script that returns values through stdout, so we can't put anything on stdout or it corrupts the set of bash scripts. I am on macOS Catalina.
#!/bin/bash
set -e
function log {
MESSAGE=$1
>&2 echo "$MESSAGE"
}
log "message works"
command -v tac >&2
log "test and not work too"
TAC_EXISTS=$?
command -v tail >&2
TAIL_EXISTS=$?
log "message not work"
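The `log` function itself is fine. The likely culprit is `set -e`: macOS Catalina has no `tac`, so `command -v tac` returns nonzero and the script exits right there, before the later `log` calls ever run. (Note also that `TAC_EXISTS=$?` comes after a `log` call, so it captures the status of `log`, not of `command -v`.) A minimal sketch of checking for a command without tripping `set -e`:

```shell
#!/usr/bin/env bash
set -e

log() { >&2 echo "$1"; }

# Using the command as an 'if' condition keeps a nonzero
# status from aborting the script under 'set -e'.
if command -v tac >/dev/null 2>&1; then
  TAC_EXISTS=0
else
  TAC_EXISTS=1
fi
log "still running, TAC_EXISTS=$TAC_EXISTS"
```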
Why don't you write it as
function log {
echo "$1" >&2
}
In Bash environment, I have a command, and I want to detect if it fails.
However it is not failing gracefully:
# ./program
do stuff1
do stuff2
error!
do stuff3
# echo $?
0
When it runs without errors (successful run), it returns with 0. When it runs into an error, it can either
return with 1, easily detectable
return with 0, but during run it prints some error messages
I want to use this program in a script with these goals:
I need the output to be printed to stdout normally (not all at once after it has finished!)
I need to catch the output's return value by $? or similar
I need to grep for "error" string in the output and set a variable in case of presence
Then I can evaluate by checking the return value and the "error" output.
However, if I add tee, it will ruin the return value.
I have tried ${PIPESTATUS[0]} and ${PIPESTATUS[1]}, but it doesn't seem to work:
program | tee >(grep -i error)
Even if there is no error, ${PIPESTATUS[1]} always returns 0 (true), because the tee command was successful.
So what is the way to do this in bash?
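For reference, PIPESTATUS really does record every stage of a pipeline; the catch in the setup above is that index 1 belongs to tee, which almost always exits 0. (Also note the braces matter: `$PIPESTATUS[1]` expands `${PIPESTATUS[0]}` followed by the literal text `[1]`.) A minimal demonstration:

```shell
#!/usr/bin/env bash
# PIPESTATUS is overwritten by the next pipeline,
# so copy it immediately after the pipeline runs.
false | tee /dev/null | true
statuses=("${PIPESTATUS[@]}")
echo "stage statuses: ${statuses[*]}"
```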
#!/usr/bin/env bash
case $BASH_VERSION in
''|[0-3].*|4.[012].*) echo "ERROR: bash 4.3+ required" >&2; exit 1;;
esac
exec {stdout_fd}>&1
if "$#" | tee "/dev/fd/$stdout_fd" | grep -i error >/dev/null; then
echo "Errors occurred (detected on stdout)" >&2
elif (( ${PIPESTATUS[0]} )); then
echo "Errors detected (via exit status)" >&2
else
echo "No errors occurred" >&2
fi
Tested as follows:
$ myfunc() { echo "This is an ERROR"; return 0; }; export -f myfunc
$ ./test-err myfunc
This is an ERROR
Errors occurred (detected on stdout)
$ myfunc() { echo "Everything is not so fine"; return 1; }; export -f myfunc
$ ./test-err myfunc
Everything is not so fine
Errors detected (via exit status)
$ myfunc() { echo "Everything is fine"; }; export -f myfunc
$ ./test-err myfunc
Everything is fine
No errors occurred
How can I redirect the error output to a function that receives a string as an argument?
This is the code:
function error {
echo "[ERROR]: $1"
}
# This works:
terraform apply myplan || { echo -e '\n[ERROR]: Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]: Terraform apply failed. Fix errors and run the script again!
# This does NOT work:
terraform apply myplan || { error 'Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]
I do not understand why.
Example:
#!/bin/bash
# simulate terraform commands
function terraform_ok {
echo "this is on stdout from terraform_ok"
exit 0
}
function terraform_warning {
echo "this is on stdout from terraform_warning"
echo "this is on stderr from terraform_warning" >&2
exit 0
}
function terraform_error {
echo "this is on stdout from terraform_error"
echo "this is on stderr from terraform_error" >&2
echo "this is line two on stderr" >&2
exit 1
}
function catch_error {
rv=$?
if [[ $rv != 0 ]]; then
echo -e "[ERROR] >>>\n$#\n[ERROR] <<<"
elif [[ "$#" != "" ]]; then
echo -e "[WARNING] >>>\n$#\n[WARNING] <<<"
fi
# exit subshell with the same exit code the terraform command had
exit $rv
}
function swap_stdout_and_stderr {
"$#" 3>&2 2>&1 1>&3
}
function perform {
(catch_error "$(swap_stdout_and_stderr "$@")") 2>&1
}
function die {
rv=$?
echo "\"$#\" failed with exit code $rv."
exit $rv
}
function perform_or_die {
perform "$#" || die "$#"
}
perform_or_die terraform_ok apply myplan
perform_or_die terraform_warning apply myplan
perform_or_die terraform_error apply myplan
echo "this will never be reached"
Output (all on stdout):
this is on stdout from terraform_ok
this is on stdout from terraform_warning
[WARNING] >>>
this is on stderr from terraform_warning
[WARNING] <<<
this is on stdout from terraform_error
[ERROR] >>>
this is on stderr from terraform_error
this is line two on stderr
[ERROR] <<<
"terraform_error apply myplan" failed with exit code 1.
Explanation:
The swapping of stdout and stderr (3>&2 2>&1 1>&3) is done because when you do variable=$(command), the variable is assigned whatever comes on stdout from command. The same applies in catch_error "$(command)": whatever comes on stdout from command is passed as the arguments to the function catch_error. In your case I assume you want to catch what comes on stderr instead, hence the swapping.
The final 2>&1 on the line redirects stderr (which is the old stdout) back to stdout, so that grepping in the output from this script can be done as usual.
Since the catch_error ... command runs in a subshell, I've used || to execute another command in case the subshell returns an error. That command is die "$@", which exits the whole script with the same error code the failing command had and shows which command failed.
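The stream swap can be exercised in isolation; a minimal sketch (the noisy function is made up for illustration):

```shell
#!/usr/bin/env bash
noisy() {
  echo "to stdout"
  echo "to stderr" >&2
}

# 3>&2 saves the current stderr on fd 3, 2>&1 points stderr at the
# capture pipe, 1>&3 points stdout at the saved stderr -- so the
# command substitution captures stderr and passes stdout through.
captured=$(noisy 3>&2 2>&1 1>&3)
echo "captured: $captured"
```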
The simplest way I can think of; this will save all output to a file:
terraform apply --auto-approve -no-color -input=false \
2>&1 | tee /tmp/tf-apply.out
To save only the errors to a file, redirect just stderr with 2>; note that &> redirects both stdout and stderr, not only errors.
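A quick sketch of the difference between splitting the streams with 2> and merging them with &> (the temp paths are arbitrary):

```shell
#!/usr/bin/env bash
# A command that writes one line to each stream.
both() { echo "normal output"; echo "an error" >&2; }

tmp=$(mktemp -d)
both >"$tmp/out.log" 2>"$tmp/err.log"  # split: each stream to its own file
both &>"$tmp/all.log"                  # merge: both streams into one file

err_contents=$(cat "$tmp/err.log")     # only the stderr line
all_lines=$(wc -l <"$tmp/all.log")     # both lines
```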
I have a set of bash log functions which enable me to comfortably redirect all output to a log file and bail out in case something happens:
#! /usr/bin/env bash
# This script is meant to be sourced
export SCRIPT=$0
if [ -z "${LOG_FILE}" ]; then
export LOG_FILE="./log.txt"
fi
# https://stackoverflow.com/questions/11904907/redirect-stdout-and-stderr-to-function
# If the message is piped receives info, if the message is a parameter
# receives info, message
log() {
local TYPE
local IN
local PREFIX
local LINE
TYPE="$1"
if [ -n "$2" ]; then
IN="$2"
else
if read -r LINE; then
IN="${LINE}"
fi
while read -r LINE; do
IN="${IN}\n${LINE}"
done
IN=$(echo -e "${IN}")
fi
if [ -n "${IN}" ]; then
PREFIX=$(date +"[%X %d-%m-%y - $(basename "${SCRIPT}")] ${TYPE}: ")
IN="$(echo "${IN}" | awk -v PREFIX="${PREFIX}" '{printf PREFIX}$0')"
touch "${LOG_FILE}"
echo "${IN}" >> "${LOG_FILE}"
fi
}
# receives message as parameter or piped, logs as info
info() {
log "( INFO )" "$#"
}
# receives message as parameter or piped, logs as an error
error() {
log "(ERROR )" "$#"
}
# logs error and exits
fail() {
error "$1"
exit 1
}
# Reroutes stdout to info and stderr to error
log_wrap()
{
"$#" > >(info) 2> >(error)
return $?
}
Then I use the functions as follows:
LOG_FILE="logging.log"
source "log_functions.sh"
info "Program started"
log_wrap some_command arg0 arg1 --kwarg=value || fail "Program failed"
Which works. Since log_wrap redirects stdout and stderr, I don't want it interfering with commands composed using piping or redirection, such as:
log_wrap echo "File content" > ~/user_file || fail "user_file could not be created."
log_wrap echo "File content" | sudo tee ~/root_file > /dev/null || fail "root_file could not be created."
So I want a way to group those commands so their redirection is solved and then pass that to log_wrap. I am aware of two ways of grouping:
Subshells: They are not meant to be passed around, naturally this:
log_wrap ( echo "File content" > ~/user_file ) || fail "user_file could not be created."
throws a syntax error.
Braces (grouping?, context?): When called inside a command, the brace is interpreted as an argument.
log_wrap { echo "File content" > ~/user_file } || fail "user_file could not be created."
Is roughly equivalent (in my understanding) to:
log_wrap '{' echo "File content" > ~/user_file '}' || fail "user_file could not be created."
To recapitulate, my question is: Is there a way to pass a composition of commands, in my case composed by redirection/piping, to a bash function?
The way it's set up, you can only pass what POSIX calls simple commands -- command names and arguments. No compound commands like subshells or brace groups will work.
However, you can use functions to run arbitrary code in a simple command:
foo() { { echo "File content" > ~/user_file; } || fail "user_file could not be created."; }
log_wrap foo
You could also consider just automatically applying your wrapper to all commands in the rest of the script using exec:
exec > >(info) 2> >(error)
{ echo "File content" > ~/user_file; } || fail "user_file could not be created.";
I try to execute from bash a command and retrieve stdout, stderr and exit code.
So far so good, there is plenty way.
The problem begins when the program has interactive input.
More precisely, I execute "git commit" (without -m), and "GNU nano" is launched so a commit message can be entered.
If I use simply :
git commit
or
exec git commit
I can see the prompt, but I can't get stdout/stderr.
If I use
output=`git commit 2>&1`
or
output=$(git commit 2>&1)
I can retrieve stdout/stderr, but I can't see the prompt.
I can still do ctrl+X to abort the git commit.
My first attempt was via a function call, and my script ends up hanging on a blank screen where ctrl+x / ctrl+c don't work.
function Execute()
{
if [[ $# -eq 0 ]]; then
echo "Error : function 'Execute' called without argument."
exit 3
fi
local msg=$("$# 2>&1")
local error=$?
if [[ $error -ne 0 ]]; then
echo "Error : '"$(printf '%q ' "$#")"' return '$error' error code."
echo "$1 message :"
echo "$msg"
echo
exit 1
fi
}
Execute git commit
I'm beginning to run out of ideas/knowledge. Is what I want to do impossible? Or is there a way that I don't know?
Try this which processes every line output to stdout or stderr and redirects based on content:
#!/usr/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(/prompt/ ? 2 : 1)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
or this which just processes stderr:
#!/usr/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2> >(awk '{print | "cat>&"(/prompt/ ? 2 : 1)}') )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
The awk command splits its input between stderr and stdout based on content, and only stdout is saved in the variable var. I don't know whether your prompt arrives on stderr or stdout, or where you really want it to go, so adjust the pattern to suit: decide what should go to stdout vs stderr, and what you want captured in the variable vs printed to the screen. You just need something recognizable in the prompt so you can separate it from the rest of stdout and stderr, print the prompt to stderr, and redirect everything else to stdout.
Alternatively here's a version that prints the first line (regardless of content) to stderr for display and everything else to stdout for capture:
$ cat tst.sh
#!/usr/bin/env bash
foo() {
printf 'prompt: whos on first?\n' >&2
printf 'error: uh-oh\n' >&2
}
var=$(foo 2>&1 | awk '{print | "cat>&"(NR>1 ? 1 : 2)}' )
echo "var=$var"
$ ./tst.sh
prompt: whos on first?
var=error: uh-oh
I'd like to be able to put log messages in the middle of bash functions, without affecting the output of those very functions. For example, consider the following functions log() and get_animals():
# print a log a message
log ()
{
echo "Log message: $1"
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals`
echo $values
After which $values contains the string "Log message: Fetching animals cat dog mouse".
How should I modify this script so that "Log message: Fetching animals" is outputted to the terminal, and $values contains "cat dog mouse"?
choroba's solution to another question shows how to use exec to open a new file descriptor.
Translating that solution to this question gives something like:
# Open a new file descriptor that redirects to stdout:
exec 3>&1
log ()
{
echo "Log message: $1" 1>&3
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
animals=`get_animals`
echo Animals: $animals
Executing the above produces:
Log message: Fetching animals
Animals: cat dog mouse
More information about using I/O redirection and file descriptors in Bash can be found at:
Bash Guide for Beginners, section 8.2.3, Redirection and file descriptors
Advanced Bash-Scripting Guide, Chapter 20, I/O Redirection
You can redirect the output to the stderr stream on file descriptor 2 using >&2
example :
# print a log a message
log ()
{
echo "Log message: $1" >&2
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals`
echo $values
The backticks only capture stdout, not stderr. The console, on the other hand, displays both.
If you really want the log message on stdout, you can redirect stderr back to stdout after the assignment:
# print a log a message
log ()
{
echo "Log message: $1" >&2
}
get_animals()
{
log "Fetching animals"
echo "cat dog mouse"
}
values=`get_animals` 2>&1
echo $values
#
#------------------------------------------------------------------------------
# echo pass params and print them to a log file and terminal
# with timestamp and $host_name and $0 PID
# usage:
# doLog "INFO some info message"
# doLog "DEBUG some debug message"
# doLog "WARN some warning message"
# doLog "ERROR some really ERROR message"
# doLog "FATAL some really fatal message"
#------------------------------------------------------------------------------
doLog(){
type_of_msg=$(echo $*|cut -d" " -f1)
msg=$(echo "$*"|cut -d" " -f2-)
[[ $type_of_msg == DEBUG ]] && [[ $do_print_debug_msgs -ne 1 ]] && return
[[ $type_of_msg == INFO ]] && type_of_msg="INFO " # one space for aligning
[[ $type_of_msg == WARN ]] && type_of_msg="WARN " # as well
# print to the terminal if we have one
test -t 1 && echo " [$type_of_msg] `date "+%Y.%m.%d-%H:%M:%S %Z"` [$run_unit][#$host_name] [$$] ""$msg"
# define default log file if none specified in cnf file
test -z "$log_file" && \
mkdir -p "$product_instance_dir/dat/log/bash" && \
log_file="$product_instance_dir/dat/log/bash/$run_unit.`date "+%Y%m"`.log"
echo " [$type_of_msg] `date "+%Y.%m.%d-%H:%M:%S %Z"` [$run_unit][#$host_name] [$$] ""$msg" >> "$log_file"
}
#eof func doLog
You could redirect log output to the standard error stream:
log()
{
echo 1>&2 "Log message: $1"
}
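Putting that together: with log writing to stderr, a command substitution captures only the real return value, while the message still reaches the terminal. A minimal sketch:

```shell
#!/usr/bin/env bash
log() { echo 1>&2 "Log message: $1"; }

get_animals() {
  log "Fetching animals"   # goes to the terminal via stderr
  echo "cat dog mouse"     # the only thing the substitution captures
}

values=$(get_animals)      # "Log message: ..." still appears on stderr
echo "$values"
```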