Passing Command to Function in Bash

I am creating a setup bash script so I can quickly set up my servers.
I want to pass a command to the run function, which will then verify whether it succeeded.
function run () {
if output=$( $1 );
then printf 'OK. («%s»)\n' "$output";
else printf 'Failed! («%s»)\n' "$output";
fi
}
printf 'Setting up «uni» as system group...'
run " if [ ! $( getent group uni ) ];
then sudo addgroup --system uni;
else echo 'Group exists.';
fi "
However, this results in an error: setup.sh: line 5: if: command not found
When I do this it works fine, but I want to eliminate repetitive code as I have many commands:
if output=$(
if [ ! $( getent group uni ) ]; then sudo addgroup --system uni; else echo 'Group exists.'; fi
);
then printf 'OK. («%s»)\n' "$output";
else printf 'Failed! («%s»)\n' "$output";
fi
What am I doing wrong?

The safe way to pass code to functions is... to encapsulate that code in another function.
run() {
local output
if output=$("$@"); then
printf 'OK. («%s»)\n' "$output";
else
printf 'Failed! («%s»)\n' "$output";
fi
}
printf 'Setting up «uni» as system group...'
step1() {
if [ ! "$(getent group uni)" ]; then
sudo addgroup --system uni;
else
echo 'Group exists.';
fi
}
run step1
If your code didn't involve flow control operators (and otherwise fit into the definition of a "simple command"), you wouldn't even need to do that; with the above run definition (using "$@" instead of $1),
run sudo addgroup --system uni
...would work correctly as-is.
Using either eval or sh -c exposes you to serious security problems; see BashFAQ #48 for a high-level overview, and see BashFAQ #50 for a discussion of why code shouldn't be passed around as text (and preferred ways to avoid the need to do so in the first place!)
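To make the risk concrete, here is a minimal illustration (my own, not from the FAQ; the string is a hypothetical attacker-controlled input): any command substitution embedded in text that reaches eval gets executed.
# Hypothetical attacker-controlled string containing a command substitution.
# When eval'd, the substitution runs as code before ls ever sees an argument.
filename='$(echo "arbitrary code ran here" >&2)'
eval "ls $filename"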

You have chosen to pass a shell command string to your function (aka system(3) semantics). This means that you have to use eval to evaluate it:
function run () {
if output=$( eval "$1" );
then printf 'OK. («%s»)\n' "$output";
else printf 'Failed! («%s»)\n' "$output";
fi
}
Note that the parameter with your command will be expanded as normal, so if you want the $(getent) to be evaluated in the run function instead of before it, you will need to escape it:
run " if [ ! \$( getent group uni ) ];
then sudo addgroup --system uni;
else echo 'Group exists.';
fi "

If you want to pass a piece of bash code as a function argument, to be evaluated in a subshell with its success tested, you can do it like this:
#!/bin/bash
run () {
# Here I'm not sure what you want to test, so I assume that you want to test whether the piece of bash passed as an arg fails or not
# Do not forget to quote the value in order to take all the string
# You can also try with "eval" instead of "bash -c"
if output=$( bash -c "${1}" ); then
echo "OK. («${output}»)"
else
echo "Failed! («${output}»)"
fi
}
# Do not forget to escape the $ here to be evaluated after in the `run` function
# Do not forget to exit with a code different of 0 to indicate there is a failure for the test in the "run" function
run "if [[ ! \$( getent group uni ) ]]; then sudo addgroup --system uni else echo 'Group exists.'; exit 1; fi"
Warning: I posted this piece of code to make your script work the way you seem to want (a function which evals your code from a string and interprets the result), but it's not safe at all, for the reasons Charles Duffy pointed out in the comment section.
It's probably safer to make something like:
#!/bin/bash
run () {
if output=$("${@}"); then
echo "OK. («${output}»)"
else
echo "Failed! («${output}»)"
fi
}
one_of_your_functions_you_want_to_eval() {
if [[ ! $( getent group uni ) ]]; then
sudo addgroup --system uni
else
echo 'Group exists.'
# do not forget to exit with something different than 0 here
exit 1
fi
}
run one_of_your_functions_you_want_to_eval
Note: in order to declare a function, you can either use the POSIX-compliant syntax:
your_function() {
  : # body goes here (bash rejects an empty function body)
}
or the bash-only syntax (I think it's better to avoid "bashisms" when they don't bring real value to your scripts):
function your_function {
  : # body goes here
}
But there's no need to mix both syntaxes ;)

Related

In bash, either exit script without exiting the shell or export/set variables from within subshell

I have a function that runs a set of scripts that set variables, functions, and aliases in the current shell.
reloadVariablesFromScript() {
for script in "${scripts[@]}"; do
. "$script"
done
}
If one of the scripts has an error, I want to exit the script and then exit the function, but not to kill the shell.
reloadVariablesFromScript() {
for script in "${scripts[@]}"; do
{(
set -e
. "$script"
)}
if [[ $? -ne 0 ]]; then
>&2 echo $script failed. Skipping remaining scripts.
return 1
fi
done
}
This would do what I want except it doesn't set the variables in the script whether the script succeeds or fails.
Without the subshell, set -e causes the whole shell to exit, which is undesirable.
Is there a way I can either prevent the called script from continuing on an error without killing the shell or else set/export variables, aliases, and functions from within a subshell?
The following script simulates my problem:
test() {
{(
set -e
export foo=bar
false
echo Should not have gotten here!
export bar=baz
)}
local errorCode=$?
echo foo="'$foo'". It should equal 'bar'.
echo bar="'$bar'". It should not be set.
if [[ $errorCode -ne 0 ]]; then
echo Script failed correctly. Exiting function.
return 1
fi
echo Should not have gotten here!
}
test
If worst comes to worse, since these scripts don't actually edit the filesystem, I can run each script in a subshell, check the exit code, and if it succeeds, run it outside of a subshell.
Note that set -e has a number of surprising behaviors -- relying on it is not universally considered a good idea.
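As one illustration of those surprises (my own example, not part of the original answer): a failure on the left-hand side of && does not trigger set -e at all.
#!/bin/bash
set -e
false && true          # fails, but set -e ignores commands on the left of &&
echo 'still running'   # this line is still reached
That caveat having been given, though: we can shuffle environment variables, aliases, and shell functions out as text: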
envTest() {
local errorCode newVars
newVars=$(
set -e
{
export foo=bar
false
echo Should not have gotten here!
export bar=baz
} >&2
# print generated code which, when eval'd, recreates our functions and variables
declare -p | egrep -v '^declare -[^[:space:]]*r'
declare -f
alias -p
); errorCode=$?
if (( errorCode == 0 )); then
eval "$newVars"
fi
printf 'foo=%q. It should equal %q\n' "$foo" "bar"
printf 'bar=%q. It should not be set.\n' "$bar"
if [[ $errorCode -ne 0 ]]; then
echo 'Script failed correctly. Exiting function.'
return 1
fi
echo 'Should not have gotten here!'
}
envTest
Note that this code only evaluates either export should the entire script segment succeed; the question text and comments appear to indicate that this is acceptable, if not outright desired.

"alias method chain" in Bash or Zsh

This is (or was, at least) a common pattern in Ruby, but I can't figure out how to do it in Zsh or Bash.
Let's suppose I have a shell function called "whoosiwhatsit", and I want to override it in a specific project, while still keeping the original available under a different name.
If I didn't know better, I might try creating an alias to point to whoosiwhatsit, and then create a new "whoosiwhatsit" function that uses the alias. Of course that won't work, since the alias will refer to the new function instead.
Is there any way to accomplish what I'm talking about?
Aliases are pretty weak. You can do this with functions though. Consider the following tools:
#!/usr/bin/env bash
PS4=':${#FUNCNAME[@]}:${BASH_SOURCE}:$LINENO+'
rename_function() {
local orig_definition new_definition new_name retval
retval=$1; shift
orig_definition=$(declare -f "$1") || return 1
new_name="${1}_"
while declare -f "$new_name" >/dev/null 2>&1; do
new_name+="_"
done
new_definition=${orig_definition/"$1"/"$new_name"}
eval "$new_definition" || return
unset -f "$1"
printf -v "$retval" %s "$new_name"
}
# usage: shadow_function target_name shadowing_func [...]
# ...replaces target_name with a function which will call:
# shadowing_func target_renamed_to_this number_of_args_in_[...] [...] "$@"
shadow_function() {
local shadowed_func eval_code shadowed_name shadowing_func shadowed_func_renamed
shadowed_name=$1; shift
shadowing_func=$1; shift
rename_function shadowed_func_renamed "$shadowed_name" || return
if (( $# )); then printf -v const_args '%q ' "$@"; else const_args=''; fi
printf -v eval_code '%q() { %q %q %s "$@"; }' \
"$shadowed_name" "$shadowing_func" "$shadowed_func_renamed" "$# $const_args"
eval "$eval_code"
}
...and the following example application of those tools:
whoosiwhatsit() { echo "This is the original implementation"; }
override_in_directory() {
local shadowed_func=$1; shift
local override_cmd_len=$1; shift
local override_dir=$1; shift
local -a override_cmd=( )
local i
for (( i=1; i<override_cmd_len; i++)); do : "$1"
override_cmd+=( "$1" ); shift
done
: PWD="$PWD" override_dir="$override_dir" shadowed_func="$shadowed_func"
: override_cmd "${override_cmd[@]}"
if [[ $PWD = $override_dir || $PWD = $override_dir/* ]]; then
[[ $- = *x* ]] && declare -f "$shadowed_func" >&2 # if in debugging mode
"${override_cmd[#]}"
else
"$shadowed_func" "$#"
fi
}
ask_the_user_first() {
local shadowed_func=$1; shift;
shift # ignore static-argument-count parameter
if [[ -t 0 ]]; then
read -r -p "Press ctrl+c if you are unsure, or enter if you are"
fi
"$shadowed_func" "$#"
}
shadow_function whoosiwhatsit ask_the_user_first
shadow_function whoosiwhatsit \
override_in_directory /tmp echo "Not in the /tmp!!!"
shadow_function whoosiwhatsit \
override_in_directory /home echo "Don't try this at home"
The end result is a whoosiwhatsit function that asks the user before it does anything when its stdin is a TTY, and aborts (with different messages) when run under either /tmp or /home.
That said, I don't condone this practice. Consider the above provided as an intellectual exercise. :)
In bash, there is a built-in variable called BASH_ALIASES that is an associative array containing the current aliases. The semantics are a bit inconsistent when you update it (RTM!) but if you restrict yourself to reading BASH_ALIASES, you should be able to write yourself a shell function that implements alias chaining.
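For instance, a minimal sketch of such a helper (my own illustration, not a standard tool; chain_alias and the {prev} placeholder are invented conventions):
# Redefine an alias while splicing its previous expansion into the new one.
chain_alias() {
    local name=$1 new_body=$2
    local old_body=${BASH_ALIASES[$name]:-$name}     # fall back to the bare command
    alias "$name"="${new_body//\{prev\}/$old_body}"  # {prev} marks the old expansion
}
alias greet='echo hello'
chain_alias greet '{prev} world'   # greet now expands to: echo hello world
(In a non-interactive script you would also need shopt -s expand_aliases for the alias to take effect.)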
It's common and well supported to create a single level of overrides through functions that optionally invoke their overridden builtin or command:
# Make all cd commands auto-exit on failure
cd() { builtin cd "$@" || exit; }
# Make all ssh commands verbose
ssh() { command ssh -vv "$@"; }
It doesn't chain beyond the one link, but it's completely POSIX and often works better in practice than trying to write Ruby in Bash.

Is there a Bash wrapper (program/script) that enables a more succinct input when I want multiple outputs in one Bash call

I'm currently creating monstrosities like the following:
ll /home && echo -e "==============\n" && getent passwd && echo -e "==============\n" && ll /opt/tomcat/ && echo -e "==============\n" && ll /etc/sudoers.d/
Is there perhaps some program that handles this in a nicer way?
Something like this (the hypothetical name of the program would be multiprint in my example):
multiprint --delim-escapechars true --delim "============\n" '{ll /home},{getent passwd},...'
alternatively:
multiprint -de "============\n" '{ll /home},{getent passwd},...'
A function like the following would give you that ability:
function intersect() {
delim=$1
shift
for f; do cat "$f"; echo "$delim"; done
}
You could call it as follows to implement your specific use case:
intersect '==============' <(ll /home) <(getent passwd) <(ll /opt/tomcat/) <(ll /etc/sudoers.d/)
printf will repeat its format until its arguments are exhausted. You could write something like
printf '%s\n================\n' "$(ll /home)" "$(getent passwd)" "$(ll /opt/tomcat)" "$(ll /etc/sudoers.d)"
although this is a little memory-intensive, since it buffers all the output in memory until all the commands have completed.
Based on @Aaron's answer I ended up creating this multiprint.sh Bash shell script, and for what it's worth posting it here:
#!/bin/bash
# Print output of multiple commands, delimited by a specified string
function multiprint() {
if [[ -z "$*" ]]; then
__multiprint_usage
return 0
elif [[ "$1" == "--help" ]]; then
__multiprint_usage
return 0
else
delim=$1
shift
for f; do cat "$f"; echo -e "$delim"; done
fi
}
function __multiprint_usage() {
echo "Usage:"
echo " multiprint '<delimiter>' <(cmd1) <(cmd2) ..."
# example: multiprint '\n\n\n' <(ll /home/) <(ll /var/) ..."
}

writing from a function in a Bash script leaking file descriptors

We have a shell script that is called by cron and runs as root.
This script outputs logging and debug info, and has been failing at one certain point. This point varies based on how much output the script creates (it fails sooner if we enable more debugging output, for example).
However, if the script is called directly, as a user, then it works without a problem.
We have since created a simplified test case which demonstrates the problem.
The script is:
#!/bin/bash
function log_so () {
local msg="$1"
if [ -z "${LOG_FILE}" ] ; then warn_so "It's pointless use log_so() if LOG_FILE variable is undefined!" ; return 1 ; fi
echo -e "${msg}"
echo -e "${msg}" >> ${LOG_FILE}
(
/bin/true
)
}
LOG_FILE="/usr/local/bin/log_bla"
linenum=1
while [[ $linenum -lt 2000 ]] ; do
log_so "short text: $linenum"
let linenum++
done
The highest this has reached is 244 before dying (when called via cron).
Some other searches recommended using a no-op subshell from the function and also calling /bin/true, but not only did this not work, the subshell option is also not feasible in the main script.
We have also tried changing the file descriptor limit for root, but that did not help, and have tried using both #!/bin/sh and #!/bin/bash for the script.
We are using bash 4.1.5(1)-release on Ubuntu 10.04 LTS.
Any ideas or recommendations for a workaround would be appreciated.
What about opening a fd by hand and cleaning it up afterwards? I don't have a bash 4.1 to test with, but it might help.
LOG_FILE="/usr/local/bin/log_bla"
exec 9<> "$LOG_FILE"
function log_so () {
local msg="$1"
if [ -z "${LOG_FILE}" ] ; then warn_so "It's pointless use log_so() if LOG_FILE variable is undefined!" ; return 1 ; fi
echo -e "${msg}"
echo -e "${msg}" >&9
return 0
}
linenum=1
while [[ $linenum -lt 2000 ]] ; do
log_so "short text: $linenum"
let linenum++
done
exec 9>&-

Get the exit code for a command in Bash and KornShell (ksh)

I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
typeset cmnd="$*"
typeset ret_code
echo cmnd=$cmnd
eval $cmnd
ret_code=$?
if [ $ret_code != 0 ]; then
printf "Error: [%d] when executing command: '$cmnd'" $ret_code
exit $ret_code
fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$command'" $ret_code
pass the command with quotes surrounding the command like safeRunCommand "$command". If you don’t then cmnd will get only the value ls and not ls -l. And it is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
'eval' is used to evaluate the string, so that a command containing pipes works fine
Note: Do remember that some commands, like grep, return 1 even though there isn't any error. If grep found something it will return 0, else 1.
I have tested this with KornShell and Bash, and it worked fine. Let me know if you face issues running it.
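A quick demonstration of that grep behavior (illustrative only):
echo hello | grep -q hello; echo $?   # prints 0: a match was found
echo hello | grep -q bye;   echo $?   # prints 1: no match, but not an error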
Try
safeRunCommand() {
"$#"
if [ $? != 0 ]; then
printf "Error when executing command: '$1'"
exit $ERROR_CODE
fi
}
It should be $cmnd instead of $($cmnd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd="$1"; $cmnd with "$@". And, do not run your script as command="some cmd"; safeRunCommand $command. Run it as safeRunCommand some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
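Putting that advice together, a minimal corrected sketch (my own assembly of the suggestions above; note that, unlike the eval variant earlier, it cannot run pipelines):
safeRunCommand() {
    "$@"      # run the arguments as one command, unmangled
    rc=$?     # capture the exit code immediately
    if [ "$rc" -ne 0 ]; then
        printf "Error [%d] when executing command: '%s'\n" "$rc" "$*"
        return "$rc"
    fi
}
safeRunCommand ls -l /tmp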
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
cmnd="$#" #...insure whitespace passed and preserved
$cmnd
ERROR_CODE=$? #...so we have it for the command we want
if [ ${ERROR_CODE} != 0 ]; then
printf "Error when executing command: '${command}'\n"
exit ${ERROR_CODE} #...consider 'return()' here
fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec;
fi
You should also note that the exit code you will be seeing will be for the grep command being run, as it is the last command in the pipeline, not the ls.
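If you do need the exit status of an earlier pipeline stage, bash offers PIPESTATUS and pipefail (my addition, not part of the original answer):
ls -l | grep p
echo "ls: ${PIPESTATUS[0]}, grep: ${PIPESTATUS[1]}"   # status of each stage
set -o pipefail                    # a pipeline now fails if any stage fails
ls /nonexistent 2>/dev/null | cat
echo "pipeline exit: $?"           # 2 (ls's failure) instead of cat's 0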
