Passing subshell to bash function - bash

I have a set of bash log functions which enable me to comfortably redirect all output to a log file and bail out in case something happens:
#! /usr/bin/env bash
# This script is meant to be sourced
export SCRIPT=$0
if [ -z "${LOG_FILE}" ]; then
    export LOG_FILE="./log.txt"
fi
# https://stackoverflow.com/questions/11904907/redirect-stdout-and-stderr-to-function
# If the message is piped, the function receives only TYPE; if the message is
# passed as a parameter, it receives TYPE and MESSAGE
log() {
    local TYPE
    local IN
    local PREFIX
    local LINE
    TYPE="$1"
    if [ -n "$2" ]; then
        IN="$2"
    else
        if read -r LINE; then
            IN="${LINE}"
        fi
        while read -r LINE; do
            IN="${IN}\n${LINE}"
        done
        IN=$(echo -e "${IN}")
    fi
    if [ -n "${IN}" ]; then
        PREFIX=$(date +"[%X %d-%m-%y - $(basename "${SCRIPT}")] ${TYPE}: ")
        IN="$(echo "${IN}" | awk -v PREFIX="${PREFIX}" '{printf PREFIX}$0')"
        touch "${LOG_FILE}"
        echo "${IN}" >> "${LOG_FILE}"
    fi
}
# receives message as parameter or piped, logs as info
info() {
    log "( INFO )" "$@"
}
# receives message as parameter or piped, logs as an error
error() {
    log "(ERROR )" "$@"
}
# logs error and exits
fail() {
    error "$1"
    exit 1
}
# Reroutes stdout to info and stderr to error
log_wrap()
{
    "$@" > >(info) 2> >(error)
    return $?
}
Then I use the functions as follows:
LOG_FILE="logging.log"
source "log_functions.sh"
info "Program started"
log_wrap some_command arg0 arg1 --kwarg=value || fail "Program failed"
Which works. Since log_wrap redirects stdout and stderr, I don't want it interfering with commands composed using pipes or redirections, such as:
log_wrap echo "File content" > ~/user_file || fail "user_file could not be created."
log_wrap echo "File content" | sudo tee ~/root_file > /dev/null || fail "root_file could not be created."
So I want a way to group those commands so that their redirections are resolved first, and then pass that grouping to log_wrap. I am aware of two ways of grouping:
Subshells: They are not meant to be passed around, naturally this:
log_wrap ( echo "File content" > ~/user_file ) || fail "user_file could not be created."
throws a syntax error.
Braces (grouping?, context?): When called inside a command, the brace is interpreted as an argument.
log_wrap { echo "File content" > ~/user_file } || fail "user_file could not be created."
Is roughly equivalent (in my understanding) to:
log_wrap '{' echo "File content" > ~/user_file '}' || fail "user_file could not be created."
To recapitulate, my question is: Is there a way to pass a composition of commands, in my case composed using redirection/piping, to a bash function?

The way it's set up, you can only pass what POSIX calls simple commands -- command names and arguments. No compound commands like subshells or brace groups will work.
However, you can use functions to run arbitrary code in a simple command:
foo() { { echo "File content" > ~/user_file; } || fail "user_file could not be created."; }
log_wrap foo
You could also consider just automatically applying your wrapper to all commands in the rest of the script using exec:
exec > >(info) 2> >(error)
{ echo "File content" > ~/user_file; } || fail "user_file could not be created.";
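For instance, the piped example from the question could be handled the same way (a hedged sketch, not part of the original answer; the function name write_root_file is made up here):
# Wrapping the whole pipeline in a function means log_wrap sees a simple
# command; the pipe and the tee redirection are resolved inside the function.
write_root_file() {
    echo "File content" | sudo tee ~/root_file > /dev/null
}
log_wrap write_root_file || fail "root_file could not be created."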

Related

Unable to execute awk command in a function, but working directly in the shell

I want to create a utility function for bash to remove duplicate lines. I am using this function:
function remove_empty_lines() {
    if ! command -v awk &> /dev/null
    then
        echo '[x] ERR: "awk" command not found'
        return
    fi
    if [[ -z "$1" ]]
    then
        echo "usage: remove_empty_lines <file-name> [--replace]"
        echo
        echo "Arguments:"
        echo -e "\t--replace\t (Optional) If not passed, the result will be redirected to stdout"
        return
    fi
    if [[ ! -f "$1" ]]
    then
        echo "[x] ERR: \"$1\" file not found"
        return
    fi
    echo $0
    local CMD="awk '!seen[$0]++' $1"
    if [[ "$2" = '--reload' ]]
    then
        CMD+=" > $1"
    fi
    echo $CMD
}
If I run the main awk command directly, it works. But when I execute the same $CMD in the function, I get this error:
$ remove_empty_lines app.js
/bin/bash
awk '!x[/bin/bash]++' app.js
The original code is broken in several ways:
When used with --reload, it would truncate the output file's contents before awk could ever read those contents (see How can I use a file in a command and redirect output to the same file without truncating it?)
It didn't ever actually run the command, and for the reasons described in BashFAQ #50, storing a shell command in a string is inherently buggy (one can work around some of those issues with eval; BashFAQ #48 describes why doing so introduces security bugs).
It wrote error messages (and other "diagnostic content") to stdout instead of stderr; this means that if your function's output was redirected to a file, you could never see its errors -- they'd end up jumbled into the output.
Error cases were handled with a return even in cases where $? would be zero; this means that return itself would return a zero/successful/truthy status, not revealing to the caller that any error had taken place.
Presumably the reason you were storing your output in CMD was to be able to perform a redirection conditionally, but that can be done other ways: Below, we always create a file descriptor out_fd, but point it to either stdout (when called without --reload), or to a temporary file (if called with --reload); if-and-only-if awk succeeds, we then move the temporary file over the output file, thus replacing it as an atomic operation.
remove_empty_lines() {
    local out_fd rc=0 tempfile=
    command -v awk &>/dev/null || { echo '[x] ERR: "awk" command not found' >&2; return 1; }
    if [[ -z "$1" ]]; then
        printf '%b\n' >&2 \
            'usage: remove_empty_lines <file-name> [--replace]' \
            '' \
            'Arguments:' \
            '\t--replace\t(Optional) If not passed, the result will be redirected to stdout'
        return 1
    fi
    [[ -f "$1" ]] || { echo "[x] ERR: \"$1\" file not found" >&2; return 1; }
    if [ "$2" = --reload ]; then
        tempfile=$(mktemp -t "$1.XXXXXX") || return
        exec {out_fd}>"$tempfile" || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    else
        exec {out_fd}>&1
    fi
    awk '!seen[$0]++' <"$1" >&$out_fd || { rc=$?; rm -f "$tempfile"; return "$rc"; }
    exec {out_fd}>&-  # close our file descriptor
    if [[ $tempfile ]]; then
        mv -- "$tempfile" "$1" || return
    fi
}
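Hypothetical usage of the rewritten function (the file name is just an example):
remove_empty_lines notes.txt            # de-duplicated lines go to stdout
remove_empty_lines notes.txt --reload   # rewrite notes.txt in place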
First off, the output from your function call is not an error but rather the output of two echo commands (echo $0 and echo $CMD).
And as Charles Duffy has pointed out ... at no point is the function actually running the $CMD.
As for the inclusion of /bin/bash in your function's echo output ... the main problem is the reference to $0; by definition $0 is the name of the running process, which in the case of a function is the shell under which the function is being called. Consider the following when run from a bash command prompt:
$ echo $0
-bash
As you can see from your output this generates /bin/bash in your environment. See this and this for more details.
On a related note, the reference to $0 within double quotes causes the $0 to be evaluated, so this:
local CMD="awk '!seen[$0]++' $1"
becomes
local CMD="awk '!seen[/bin/bash]++' app.js"
I'm thinking what you want is something like:
echo $1 # the name of the file to be processed
local CMD="awk '!seen[\$0]++' $1" # escape the '$' in '$0'
becomes
local CMD="awk '!seen[$0]++' app.js"
That should fix the issues shown in your function's output; as for the other issues ... you're getting a good bit of feedback in the various comments ...
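One way to avoid storing the command in a string at all (my own sketch, in the spirit of the BashFAQ #50 advice mentioned earlier; the function name is made up) is to build it as an array and run it directly:
remove_duplicate_lines() {
    local cmd=(awk '!seen[$0]++' "$1")   # $0 stays literal inside single quotes
    "${cmd[@]}"                          # expands safely and actually runs awk
}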

Bash sub script redirects input to /dev/null mistakenly

I'm working on a script to automate the creation of a .gitconfig file.
This is my main script that calls a function which in turn executes another file.
dotfile.sh
COMMAND_NAME=$1
shift
ARG_NAME=$@
set +a
fail() {
    echo "";
    printf "\r[${RED}FAIL${RESET}] $1\n";
    echo "";
    exit 1;
}
set -a
sub_setup() {
    info "This may overwrite existing files in your computer. Are you sure? (y/n)";
    read -p "" -n 1;
    echo "";
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        for ARG in $ARG_NAME; do
            local SCRIPT="~/dotfiles/setup/${ARG}.sh";
            [ -f "$SCRIPT" ] && echo "Applying '$ARG'" && . "$SCRIPT" || fail "Unable to find script '$ARG'";
        done;
    fi;
}
case $COMMAND_NAME in
    "" | "-h" | "--help")
        sub_help;
        ;;
    *)
        CMD=${COMMAND_NAME/*-/}
        sub_${CMD} $ARG_NAME 2> /dev/null;
        if [ $? = 127 ]; then
            fail "'$CMD' is not a known command or has errors.";
        fi;
        ;;
esac;
git.sh
git_config() {
    if [ ! -f "~/dotfiles/git/gitconfig_template" ]; then
        fail "No gitconfig_template file found in ~/dotfiles/git/";
    elif [ -f "~/dotfiles/.gitconfig" ]; then
        fail ".gitconfig already exists. Delete the file and retry.";
    else
        echo "Setting up .gitconfig";
        GIT_CREDENTIAL="cache"
        [ "$(uname -s)" == "Darwin" ] && GIT_CREDENTIAL="osxkeychain";
        user " - What is your GitHub author name?";
        read -e GIT_AUTHORNAME;
        user " - What is your GitHub author email?";
        read -e GIT_AUTHOREMAIL;
        user " - What is your GitHub username?";
        read -e GIT_USERNAME;
        if sed -e "s/AUTHORNAME/$GIT_AUTHORNAME/g" \
               -e "s/AUTHOREMAIL/$GIT_AUTHOREMAIL/g" \
               -e "s/USERNAME/$GIT_USERNAME/g" \
               -e "s/GIT_CREDENTIAL_HELPER/$GIT_CREDENTIAL/g" \
               "~/dotfiles/git/gitconfig_template" > "~/dotfiles/.gitconfig"; then
            success ".gitconfig has been setup";
        else
            fail ".gitconfig has not been setup";
        fi;
    fi;
}
git_config
In the console
$ ./dotfile.sh --setup git
[ ?? ] This may overwrite existing files in your computer. Are you sure? (y/n)
y
Applying 'git'
Setting up .gitconfig
[ .. ] - What is your GitHub author name?
Then I cannot see what I'm typing...
At the bottom of dotfile.sh, I redirect any error that occurs during my function call to /dev/null. But I should normally see what I'm typing. If I remove 2> /dev/null from this line sub_${CMD} $ARG_NAME 2> /dev/null;, it works!! But I don't understand why.
I need this line to prevent my script from echoing an error in case the command doesn't exist. I only want my own message.
e.g.
$ ./dotfile --blahblah
./dotfiles: line 153: sub_blahblah: command not found
[FAIL] 'blahblah' is not a known command or has errors
I really don't understand why the input in my sub script is redirected to /dev/null, since I only asked for stderr to be redirected to /dev/null.
Thanks
Do you need the -e option in your read statements?
I did a quick test in an interactive shell. The following command does not echo characters:
read -e TEST 2>/dev/null
The following does echo the characters
read TEST 2>/dev/null
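One way to keep the error suppression without breaking interactive reads (a hedged sketch based on the asker's script, not part of this answer) is to check for the sub-command up front and drop the blanket stderr redirection:
# Check that sub_${CMD} is a defined function first; then call it without
# redirecting its stderr, so read -e keeps echoing what is typed.
if ! declare -F "sub_${CMD}" > /dev/null; then
    fail "'$CMD' is not a known command or has errors.";
fi
sub_${CMD} $ARG_NAME;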

Pass bash syntax (pipe operator) correctly to function

How is it possible that the >> and stream redirection operators are passed to the function try(), which catches errors and exits...
When I do this :
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT" }
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { "$#" || die "cannot $*"; }
try commandWhichFails >> "logFile.log" 2>&1
When I run the above, the exitFunc echo is also output into the logFile...
How do I need to change the above so that the try command basically does this:
try ( what ever comes here >> "logFile.log" 2>&1 )
Can this be achieved with subshells?
If you want to use stderr in yell and not have it lost by your redirection in the body of the script, then you need to preserve it at the start of the script. For example in file descriptor 5:
#!/bin/bash
exec 5>&2
yell() { echo "$0: $*" >&5; }
...
If your bash supports it you can ask it to allocate the new file descriptor for you using a new syntax:
#!/bin/bash
exec {newfd}>&2
yell() { echo "$0: $*" >&$newfd; }
...
If you need to you can close the new fd with exec {newfd}>&-.
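Putting that together with the question's helpers (a hedged sketch; exitFunc is replaced with a plain exit here for brevity):
exec {newfd}>&2                   # save the original stderr
yell() { echo "$0: $*" >&$newfd; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
try false >> "logFile.log" 2>&1   # "cannot false" reaches the terminal, not logFile.log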
If I understand you correctly, you can't achieve it with subshells.
If you want the output of commandWhichFails to be sent to logFile.log, but not the errors from try() etc., the problem with your code is that redirections are resolved before command execution, in order of appearance.
Where you've put
try false >> "logFile.log" 2>&1
(using false as a command which fails), the redirections apply to the output of try, not to its arguments (at this point, there is no way to know that try executes its arguments as a command).
There may be a better way to do this, but my instinct is to add a catch function, thus:
last_command=
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT"; } #added ; here
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { last_command="$@"; "$@"; }
catch() { [ $? -eq 0 ] || die "cannot $last_command"; }
try false >> "logFile.log" 2>&1
catch
Depending on portability requirements, you can always replace last_command with a function like last_command() { history | tail -2 | sed -n '1s/^ *[0-9] *//p' ;} (bash), which requires set -o history and removes the necessity of the try() function. You can replace the -2 with -"$1" to get the N th previous command.
For a more complete discussion, see BASH: echoing the last command run . I'd also recommend looking at trap for general error handling.
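As a rough illustration of the trap suggestion (my own sketch, not part of the answer):
set -o errtrace                                               # make the ERR trap fire inside functions too
trap 'yell "command failed (exit $?) near line $LINENO"' ERR  # report any failing command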

SHELL general function for action state

How can I make the code below into a general function that can be used throughout a bash script:
if [[ $? = 0 ]]; then
    echo "success " >> $log
else
    echo "failed" >> $log
fi
You might write a wrapper for command execution:
function exec_cmd {
    "$@"
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
And then execute commands in your script using the function:
exec_cmd command1 arg1 arg2 ...
exec_cmd command2 arg1 arg2 ...
...
If you don't want to wrap the original calls you could use an explicit call, like the following
function check_success {
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
ls && check_success
ls non-existant
check_success
There's no really clean way to do that, but this is fairly clean and might be good enough:
PS4='($?)[$LINENO]'
exec 2>>"$log"
That will show every command run in the log, and each entry will start with the exit code of the previous command...
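Presumably this relies on command tracing being enabled; a minimal sketch of how the pieces would be combined (the set -x line is my assumption and is not shown above):
PS4='($?)[$LINENO]'   # trace prefix: previous exit status and current line number
exec 2>>"$log"        # xtrace output goes to stderr, which now appends to $log
set -x                # enable command tracing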
You could put this in .bashrc and call it whenever
function log_status { [ $? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status; }
If you want it after every command you could make the prompt write to the log (note the original PS1 value is appended).
export PS1="\$([ \$? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status)$PS1"
(I'm not experienced with this, perhaps PROMPT_COMMAND is a more appropriate place to put it)
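A hedged sketch of the PROMPT_COMMAND variant alluded to above (my assumption, not shown in the answer):
# Runs just before each prompt is printed; $? is still the last command's status here.
PROMPT_COMMAND='[ $? -eq 0 ] && echo success >>/tmp/status || echo fail >>/tmp/status'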
Or even get more fancy and see the result with colours.
I guess you could also play with getting the last executed command:
How do I get "previous executed command" in a bash script?
Get name of last run program in Bash
BASH: echoing the last command run

shell script ssh command exit status

In a loop in shell script, I am connecting to various servers and running some commands. For example
#!/bin/bash
FILENAME=$1
cat $FILENAME | while read HOST
do
    0</dev/null ssh $HOST 'echo password| sudo -S
    echo $HOST
    echo $?
    pwd
    echo $?'
done
Here I am running "echo $HOST" and "pwd" commands and I am getting exit status via "echo $?".
My question is that I want to be able to store the exit status of the commands I run remotely in some variable and then ( based on if the command was success or not) , write a log to a local file.
Any help and code is appreciated.
ssh will exit with the exit code of the remote command. For example:
$ ssh localhost exit 10
$ echo $?
10
So after your ssh command exits, you can simply check $?. You need to make sure that you don't mask your return value. For example, your ssh command finishes up with:
echo $?
This will always return 0. What you probably want is something more like this:
while read HOST; do
    echo $HOST
    if ssh $HOST 'somecommand' < /dev/null; then
        echo SUCCESS
    else
        echo FAIL
    fi
done
You could also write it like this:
while read HOST; do
    echo $HOST
    ssh $HOST 'somecommand' < /dev/null
    if [ $? -eq 0 ]; then
        echo SUCCESS
    else
        echo FAIL
    fi
done
You can assign the exit status to a variable as simple as doing:
variable=$?
Right after the command you are trying to inspect. Do not echo $? before or the new value of $? will be the exit code of echo (usually 0).
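For example, applied to the loop from the question (a hedged sketch; some_remote_command and local.log are placeholders):
while read HOST; do
    ssh "$HOST" 'some_remote_command' < /dev/null
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "$HOST: OK" >> local.log
    else
        echo "$HOST: FAILED (exit $status)" >> local.log
    fi
done < "$FILENAME"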
An interesting approach would be to retrieve the whole output of each ssh command set in a local variable using backticks, or even separate the fields with a special character (for simplicity say ":"), something like:
export MYVAR=`ssh $HOST 'echo -n ${HOSTNAME}\:;pwd'`
after this you can use awk to split MYVAR into your results and continue bash testing.
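For instance (a small sketch with assumed variable names):
REMOTE_HOST=$(echo "$MYVAR" | awk -F: '{print $1}')   # the part before the ":"
REMOTE_PWD=$(echo "$MYVAR" | awk -F: '{print $2}')    # the part after the ":"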
Perhaps prepare the log file on the other side and pipe it to stdout, like this:
ssh -n user@example.com 'x() { local ret; "$@" >&2; ret=$?; echo "[`date +%Y%m%d-%H%M%S` $ret] $*"; return $ret; };
x true
x false
x sh -c "exit 77";' > local-logfile
Basically just prefix everything on the remote you want to invoke with this x wrapper. It works for conditionals, too, as it does not alter the exit code of a command.
You can easily loop this command.
This example writes into the log something like:
[20141218-174611 0] true
[20141218-174611 1] false
[20141218-174611 77] sh -c exit 77
Of course you can make it easier to parse, or adapt it to how you want the logfile to look. Note that the uncaught normal stdout of the remote programs is written to stderr (see the redirection in x()).
If you need a recipe to catch and prepare the output of a command for the logfile, here is a copy of such a catcher from https://gist.github.com/hilbix/c53d525f113df77e323d - but yes, this is a bit more boilerplate to "run something in the current context of the shell, postprocessing stdout+stderr without disturbing the return code":
# Redirect lines of stdin/stdout to some other function
# outfn and errfn get following arguments
# "cmd args.." "one line full of output"
: catch outfn errfn cmd args..
catch()
{
    local ret o1 o2 tmp
    tmp=$(mktemp "catch_XXXXXXX.tmp")
    mkfifo "$tmp.out"
    mkfifo "$tmp.err"
    pipestdinto "$1" "${*:3}" <"$tmp.out" &
    o1=$!
    pipestdinto "$2" "${*:3}" <"$tmp.err" &
    o2=$!
    "${@:3}" >"$tmp.out" 2>"$tmp.err"
    ret=$?
    rm -f "$tmp.out" "$tmp.err" "$tmp"
    wait $o1
    wait $o2
    return $ret
}

: pipestdinto cmd args..
pipestdinto()
{
    local x
    while read -r x; do "$@" "$x" </dev/null; done
}

STAMP()
{
    date +%Y%m%d-%H%M%S
}

# example output function
NOTE()
{
    echo "NOTE `STAMP`: $*"
}

ERR()
{
    echo "ERR `STAMP`: $*" >&2
}

catch_example()
{
    # Example use
    catch NOTE ERR find /proc -ls
}
See the catch_example function at the bottom for an example of how it is called.
