Bash function calling command given as argument

How do you write a function in bash that executes the command it is given as an argument, where
The given command may be an alias
Arguments must be passed on exactly as given; no evaluation may be done
In other words, how do you write an as-transparent-as-possible wrapper function?
The goal of the wrapper function could, for example, be to set the current directory before and after the given command, set environment variables, or time how long the given command takes. As a simple example, here I take a function that just prints a line and then executes the given command.
A first attempt:
function wrap1 {
echo Starting: "$@"
"$@"
}
You could use it like wrap1 echo hello. But the problem is you cannot do alias myalias=echo and then call wrap1 myalias hello: the alias is not resolved.
Another attempt using eval:
function wrap2 {
echo Starting: "$@"
eval "$@"
}
Now calling an alias works. But the problem is it evaluates the arguments too. For example wrap2 echo "\\a" prints just a instead of \a because the arguments are evaluated twice.
shopt -s expand_aliases doesn't seem to help here either.
Is there a way to both evaluate aliases like wrap2, but still pass on the arguments directly like wrap1?

You (uh, I) can use printf %q to escape the arguments.
At first sight, escaping with printf and then doing eval always gives the same result as passing the arguments directly.
wrap() {
echo Starting: "$@"
eval $(printf "%q " "$@")
}
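For instance (a hypothetical session; expand_aliases is on by default in interactive shells):
$ shopt -s expand_aliases
$ alias myalias=echo
$ wrap myalias 'hello  world'
Starting: myalias hello  world
hello  world
The double space inside the argument survives, and the alias is still expanded, because eval re-reads the %q-escaped text as a fresh command line.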

It seems to be possible with a double eval:
eval "eval x=($(alias y | cut -s -d '=' -f 2))"
# now the array x contains the split expansion of alias y
"${x[#]}" "${other_args[#]}"
So maybe your function could be written as follows:
wrap() {
eval "eval prefix=($(alias $1 | cut -s -d '=' -f 2))"
shift
"${prefix[#]}" "$#"
}
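For instance (a minimal sketch; greet is a made-up alias):
alias greet='echo Hello,'
wrap greet 'two  words'
# prefix becomes (echo Hello,), so this runs: echo Hello, 'two  words'
# prints: Hello, two  words
# This works even in scripts without expand_aliases, since the alias is
# only read back with the alias builtin, never expanded as a command.
Note, though, that cut -f 2 would truncate an alias whose definition itself contains an = sign.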
However, eval is evil, and double eval is double evil, and aliases are not expanded in scripts for a reason.

Are there lisp-like macros in the shell?

I have a set of shell commands that look like this
if check-some-condition $a; then
do stuff
run-exit-code $a
fi
where check-some-condition and run-exit-code could be replaced by functions taking a single argument $a, while do stuff is a placeholder for possibly several shell commands. Is it possible to emulate the Lisp functionality of a macro where I could just write
(my-macro $a stuff)
and have it replaced by the code above? I am using Bash but I can use any other shell if they have features that make this easier. I thought at first of using functions but I don't think I can pass in a block of commands.
There isn't a macro definition system in the shell, and consequently shell syntax is not walked in order to expand macros. However, the shell has a textual eval command. You can write a function which synthesizes shell syntax, such as by inserting arguments it has been given into a template. The function can print that syntax, which the caller can capture using $(...) command substitution syntax and pass to eval:
eval $(macro-like foo bar)
The expansion will happen every time that line of code is executed.
I've done something like this on a very small number of occasions. I don't remember all the details, but I remember that the code was also taking advantage of Bash local variables, which have dynamic scope, like ancient Lisp dialects and defvar variables in Common Lisp.
In Bash, eval takes place in a dynamic environment which sees the surrounding local variables, and that is something you can exploit; it can help bring about some macro-like semantics. In a Lisp with lexically scoped local variables, eval-ed code has no access to those variables, but macro-substituted code does. Under dynamic scope, eval-ed code can access and assign the surrounding locals.
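As a quick illustration of that last point (my own minimal sketch, not part of the original answer):
counter() {
    local n=1
    eval 'n=$((n + 1))'   # the eval-ed text sees and assigns the local n
    echo "n is now $n"    # prints: n is now 2
}
counter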
Here is an example. Note that because eval is a command, it has no control over the expansions taking place in its argument space before it is called (analogously to eval in Lisp being a function which doesn't control argument evaluation), so the client code is encumbered with quoting responsibilities.
# $1 = variable
# $2 = low
# $3 = high
# $4 = body
dofor()
{
cat <<!
$1=$2 ;
while [ \$$1 -lt $3 ] ; do
$4
$1=\$(( $1 + 1 ))
done
!
}
eval "$(dofor i 0 100 'printf "[%d]\n" $i')"
We could make it so that
eval $(dofor i 0 100 'printf "[%d]\n" $i')
works without the quotes, at the cost of more heaps of arcane escapery inside dofor.
Imagine we extended the shell with a built-in command evalcmd, which let us write this instead of the above:
evalcmd dofor i 0 100 'printf "[%d]\n" $i'
Can we write that as a shell function? It turns out, yes:
# run the command specified in the arguments
# capturing its output, which is evaled in quotes
evcmd()
{
eval "$("$#")"
}
evcmd dofor i 0 100 'printf "[%d]\n" $i'
Now, though still monstrously inefficient, it's substantially more ergonomic.
Finally, let's ask: could we split dofor into a dofor_impl which generates the code, and a dofor command which calls dofor_impl and invokes the evcmd semantics? Also, yes:
dofor_impl()
{
cat <<!
$1=$2 ;
while [ \$$1 -lt $3 ] ; do
$4
$1=\$(( $1 + 1 ))
done
!
}
dofor()
{
# like evcmd, but inserting an operator into the left position
eval "$(dofor_impl "$#")"
}
dofor i 0 100 'printf "[%d]\n" $i'
This is not bad for some simple uses, but what we can't get rid of is having to quote the $i so that the substitution doesn't take place before dofor is invoked.
In Bash you can define functions as follows:
function run_exit_code () {
echo "EXIT $1"
}
function check_some_condition () {
echo "CHECKING $1";
true
}
And your code can execute commands associated with variables:
function my_code () {
var=$1
stuff=$2
if check_some_condition $var; then
echo "OK";
$stuff;
run_exit_code $var
fi
}
So you can write, for example:
$ my_code /tmp 'ls /'
CHECKING /tmp
OK
bin boot cdrom dev etc home lib lib32 lib64 libx32 lost+found media mnt opt proc root run sbin srv swapfile sys tmp usr var
EXIT /tmp
If you want stuff to refer to $var, then you need to add eval:
function my_code () {
var=$1
stuff=$2
if check_some_condition $var; then
echo "OK";
eval $stuff; # <<< eval
run_exit_code $var
fi
}
This allows you to write a quoted bash expression and have it evaluated in the context of your function:
$ my_code / 'ls $var'
CHECKING /
OK
bin boot cdrom dev etc home lib lib32 ...
EXIT /

Evaluate variable at time of function declaration in shell

I'm setting up my shell environments and I want to be able to use some of the same functions/aliases in zsh as in bash. One of these functions opens either .bashrc or .zshrc in an editor (whichever file is relevant), waits for the editor to close, then reloads the rc file.
# a very simplified version of this function
editrc() {
local rcfile=".$(basename $SHELL)rc"
code -w ~/$rcfile
. ~/$rcfile
}
I use the value of rcfile in a few other functions, so I've pulled it out of the function declaration.
_rc=".$(basename $SHELL)rc"
editrc() {
code -w ~/$_rc
. ~/$_rc
}
# ... other functions that use it ...
unset _rc
However, because I'm a neat freak, I want to unset _rc at the end of my script, but I still want my functions to run correctly. Is there a clever way to evaluate $_rc at the time the function is declared?
I know I could use eval and place everything except $_rc instances within single quotes, but that seems like a pain, since the full version of my function uses both single-quotes and double-quotes.
_rc=".$(basename $SHELL)rc"
eval 'editrc() {
echo Here'"'"'s a thing that uses single quotes. As you can see it'"'"'s a pain.
code -w ~/'$_rc'
. ~/'$_rc'
}'
# ... other functions using `_rc`
unset _rc
I'm guessing I could declare my functions, then do some magic with eval "$(declare -f editrc | awk)". It may very well be more pain than it's worth, but I'm always interested in learning new things.
Note: I'd love to generalize this into a utility function that does this.
_myvar=foo
anothervar=bar
myfunc() {
echo $_myvar $anothervar
}
# redeclares myfunc with `$_myvar` expanded, but leaves `$anothervar` as-is
expandfunctionvars myfunc '$_myvar'
Is there a clever way to evaluate $_rc at the time the function is declared?
_rc=".$(basename "$SHELL")rc"
# while you could eval here, source lets you work with a stream
source <(
cat <<EOF
editrc() {
local _rc
# first safely transfer context
$(declare -p _rc)
EOF
# use a quoted here document to do anything inside without caring.
cat <<'EOF'
# do anything else
echo "Here's a thing that uses single quotes. As you can see it's not a pain, just choose proper quoting."
code -w "~/$_rc"
. "~/$_rc"
}
EOF
)
unset _rc
Generally first use declare -p to transfer variables as strings to be evaluated. Then after you "import" variables, use a quoted here document to do anything as in a normal script.
References to read:
<<EOF is a here document. Note the difference in parsing when the here delimiter is quoted vs unquoted.
<(..) is a process substitution
The source command reads a pipe created by process substitution. Inside the process substitution I output the function to be sourced. With the first here document I output the function name definition, with a local declaration of the variable so that it doesn't pollute the global namespace. Then with declare -p I output the variable definition as a properly quoted string later to be evaluated by source. Then with a quoted here document I output the rest of the function, so that I do not need to care about quoting.
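To see what declare -p contributes, here is what it prints for a plain variable (a hypothetical session):
$ _rc=".bashrc"
$ declare -p _rc
declare -- _rc=".bashrc"
That output is itself valid shell code, which is why sourcing it safely reinstates the variable with whatever quoting its value needs.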
The code is bash specific, I know nothing about zsh and don't use it.
You could do it with eval too:
eval '
editrc() {
local _rc
# first safely transfer context
'"$(declare -p _rc)"'
# use quoted here string to do anything inside without caring.
# do anything else
echo "Here'\''s a thing that uses single quotes. As you can see it'\''s not a pain, just choose proper quoting."
code -w "~/$_rc"
. "~/$_rc"
}'
But for me using a quoted here document delimiter allows for easier writing.
While KamilCuck was working on their answer, I devised a function that will take in any function name and a set of variable names, expand just those variables, and redeclare the function.
expandFnVars() {
if [[ $# -lt 2 ]]; then
>&2 echo 'expandFnVars requires at least two arguments: the function name and the variable(s) to be expanded'
return 1
fi
local fn="$1"
shift
local vars=("$@")
if [[ -z "$(declare -F $fn 2> /dev/null)" ]]; then
>&2 echo $fn is not a function.
return 1
fi
foundAllVars=true
for v in "${vars[@]}"; do
if [[ -z "$(declare -p $v 2> /dev/null)" ]]; then
>&2 echo $v is not a declared value.
foundAllVars=false
fi
done
[[ $foundAllVars != true ]] && return 1
fn="$(declare -f $fn)"
for v in "${vars[@]}"; do
local val="$(eval 'echo $'$v)" # get the value of the variable named by $v
val="${val//\"/\\\"}" # escape any double-quotes
val="${val//\\/\\\\\\}" # escape any backslashes
fn="$(echo "$fn" | sed -r 's/"?\$'$v'"?/"'"$val"'"/g')" # replace instances of "$$v" and $$v with $val
done
eval "$fn"
}
Usage:
foo="foo bar"
bar='$foo'
baz=baz
fn() {
echo $bar $baz
}
expandFnVars fn bar
declare -f fn
# prints:
# fn ()
# {
# echo "$foo" $baz
# }
expandFnVars fn foo
declare -f fn
# prints:
# fn ()
# {
# echo "foo bar" $baz
# }
Looking at it now, I see one flaw. Suppose $bar in the original function was in single-quotes. We probably would not want its value to be replaced. This could be fixed by some clever regex lookbehinds to count the number of unescaped quotes, but I'm happy with it as-is.

Using an environment variable to pass arguments to a command

I'm trying to write a bash script that takes an environment variable and passes it along to a command.
So if I had something like:
export OUT="-a=arg1 -b=\"arg2.0 arg2.1\""
I want in my bash script to do something like:
<command> -a=arg1 '-b=arg2.0 arg2.1'
I have one approach that seems to do this, but it involves using eval:
eval <command> ${OUT}
If I include set -x right above the command, I will see:
+ eval <command> -a=arg1 '-b="arg2.0' 'arg2.1"'
++ <command> -a=arg1 '-b=arg2.0 arg2.1'
However, I've poked around the dangers of using eval and since this will be taking the arguments from user input, it's less than ideal.
Since this is bash, I've also considered using arrays to store my arguments and simply putting <command> "${ARRAY[@]}" to do what I want. I've been trying to use IFS, but I'm not sure what I should be splitting on.
If you're not completely inflexible about the format of $OUT, one possibility would be to repeat the option= string to allow for concatenation. Then you'd write:
export OUT="a=arg1 b=arg2.0 b=arg2.1"
If that is acceptable, the following script will work
#!/bin/bash
# Parse $OUT into an associative array.
# Instead of using $OUT, it would be cleaner to use "$#".
declare -A args
for arg in $OUT; do
if [[ "$arg" =~ ^([[:alnum:]]+)=(.*)$ ]]; then
key=${BASH_REMATCH[1]}
val=${BASH_REMATCH[2]}
if [[ -z ${args[$key]} ]]; then
args[$key]=-$key="$val"
else
args[$key]+=" $val"
fi
fi
done
# Test, approximately as specified
command() { :; }
set -x
command "${args[#]}"
set +x
I can't say I like it much, but it's the closest I've been able to come.
Here's a sample run:
$ export OUT="a=foo b=bar b=glitch s9= s9=* "
$ ./command-runner
+ command -a=foo '-b=bar glitch' '-s9= *'
+ :
+ set +x
If you import a bash function (for example, in your bash startup file), you can make much better use of arrays. Here's one approach:
# This goes into your bash startup file:
declare -a SAVED_ARGS
save_args() {
SAVED_ARGS=("$@")
}
do_script() {
/path/to/script.sh "${SAVED_ARGS[@]}" "$@"
}
For expository purposes, script.sh:
#!/bin/bash
command() { :; }
set -x
command "${#/#/-}"
set +x
Example:
$ save_args x=3 y="a few words from our sponsor"
$ do_script a=3 b="arg2.0 arg2.1"
+ command -x=3 '-y=a few words from our sponsor' -a=3 '-b=arg2.0 arg2.1'
+ :
+ set +x
$ do_script a=42
+ command -x=3 '-y=a few words from our sponsor' -a=42
+ :
+ set +x
In case it's not obvious:
command() { :; }
defines a bash function called command which does almost nothing (except invoke the builtin : which does nothing), and
"${#/#/-}"
expands to the positional parameters, inserting a dash at the beginning of each one use a find-and-replace substitution. The pattern # is actually an empty pattern which only matches at the beginning of the string.
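A quick demonstration of that substitution (a hypothetical session):
$ set -- x=3 'y=a few words'
$ printf '%s\n' "${@/#/-}"
-x=3
-y=a few words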
For the simplified problem described in the answer above; i.e., turning the following environment variable into three arguments inside a bash script:
export OPTS="a=arg1 b=arg2.0 b=arg2.1"
Just do the following:
#!/bin/bash
opts=( $OPTS )
my-command "${opts[#]}"
# Use this for debugging:
echo "number of opts = ${#opts[#]}; opts are: ${opts[#]}"
Set your environment variable as:
export abc=123
Then, when executing any script where abc needs to be passed as an argument, pass it as below:
./testing.sh "$abc"

How to access command line arguments of the caller inside a function?

I'm attempting to write a function in bash that will access the scripts command line arguments, but they are replaced with the positional arguments to the function. Is there any way for the function to access the command line arguments if they aren't passed in explicitly?
# Demo function
function stuff {
echo $0 $*
}
# Echoes the name of the script, but no command line arguments
stuff
# Echoes everything I want, but this is what I'm trying to avoid
stuff $*
If you want to have your arguments C style (array of arguments + number of arguments) you can use $# and $@.
$# gives you the number of arguments.
$@ gives you all arguments. You can turn this into an array by args=("$@").
So for example:
args=("$@")
echo $# arguments passed
echo ${args[0]} ${args[1]} ${args[2]}
Note that here ${args[0]} actually is the 1st argument and not the name of your script.
My reading of the Bash Reference Manual says this stuff is captured in BASH_ARGV, although it talks about "the stack" a lot.
#!/bin/bash
shopt -s extdebug
function argv {
for a in ${BASH_ARGV[*]} ; do
echo -n "$a "
done
echo
}
function f {
echo f $1 $2 $3
echo -n f ; argv
}
function g {
echo g $1 $2 $3
echo -n g; argv
f
}
f boo bar baz
g goo gar gaz
Save in f.sh
$ ./f.sh arg0 arg1 arg2
f boo bar baz
fbaz bar boo arg2 arg1 arg0
g goo gar gaz
ggaz gar goo arg2 arg1 arg0
f
fgaz gar goo arg2 arg1 arg0
#!/usr/bin/env bash
echo name of script is $0
echo first argument is $1
echo second argument is $2
echo seventeenth argument is ${17} # braces are required past $9
echo number of arguments is $#
Edit: please see my comment on question
Ravi's comment is essentially the answer. Functions take their own arguments. If you want them to be the same as the command-line arguments, you must pass them in. Otherwise, you're clearly calling a function without arguments.
That said, you could if you like store the command-line arguments in a global array to use within other functions:
my_function() {
echo "stored arguments:"
for arg in "${commandline_args[@]}"; do
echo " $arg"
done
}
commandline_args=("$@")
my_function
You have to access the command-line arguments through the commandline_args variable, not $#, $1, $2, etc., but they're available. I'm unaware of any way to assign directly to the argument array, but if someone knows one, please enlighten me!
Also, note the way I've used and quoted $@ - this is how you ensure special characters (whitespace) don't get mucked up.
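To see why the quoting matters (my own minimal sketch):
show() { for arg in "$@"; do printf '<%s>\n' "$arg"; done; }
show "one  word" two
# <one  word>
# <two>
With an unquoted $@ or $*, the first argument would be split into two words.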
# Save the script arguments
SCRIPT_NAME=$0
ARG_1=$1
ARGS_ALL=$*
function stuff {
# use script args via the variables you saved
# or the function args via $1, $2, ...
echo $0 $*
}
# Call the function with arguments
stuff 1 2 3 4
One can do it like this as well
#!/bin/bash
# script_name function_test.sh
function argument(){
for i in "$@"; do
echo $i
done;
}
argument "$@"
Now call your script like
./function_test.sh argument1 argument2
This is @mcarifio's response with several comments incorporated:
#!/bin/bash
shopt -s extdebug
function stuff() {
local argIndex="${#BASH_ARGV[@]}"
while [[ argIndex -gt 0 ]] ; do
argIndex=$((argIndex - 1))
echo -n "${BASH_ARGV[$argIndex]} "
done
echo
}
stuff
I want to highlight:
The shopt -s extdebug is important. Without this the BASH_ARGV array will be empty unless you use it in top level part of the script (it means outside of the stuff function). Details here: Why does the variable BASH_ARGV have a different value in a function, depending on whether it is used before calling the function
BASH_ARGV is a stack so arguments are stored there in backward order. That's the reason why I decrement the index inside loop so we get arguments in the right order.
Double quotes around the ${BASH_ARGV[@]} and the @ as an index instead of * are needed so arguments with spaces are handled properly. Details here: bash arrays - what is difference between ${#array_name[*]} and ${#array_name[@]}
You can use the shift builtin to iterate through them.
Example:
#!/bin/bash
function print()
{
while [ $# -gt 0 ]
do
echo "$1"
shift 1
done
}
print "$#"
I do it like this:
#! /bin/bash
ORIGARGS="$#"
function init(){
ORIGOPT= "- $ORIGARGS -" # tacs are for sed -E
echo "$ORIGOPT"
}
The simplest and likely the best way to get arguments passed from the command line to a particular function is to include the arguments directly in the function call.
# first you define your function
function func_ImportantPrints() {
printf '%s\n' "$1"
printf '%s\n' "$2"
printf '%s\n' "$3"
}
# then when you make your function call you do this:
func_ImportantPrints "$@"
This is useful no matter if you are sending the arguments to main or some function like func_parseArguments (a function containing a case statement as seen in previous examples) or any function in the script.

lambda functions in bash

Is there a way to implement/use lambda functions in bash? I'm thinking of something like:
$ someCommand | xargs -L1 (lambda function)
I don't know of a way to do this, however you may be able to accomplish what you're trying to do using:
somecommand | while read -r; do echo "Something with $REPLY"; done
This will also be faster, as you won't be creating a new process for each line of text.
[EDIT 2009-07-09]
I've made two changes:
Incorporated litb's suggestion of using -r to disable backslash processing -- this means that backslashes in the input will be passed through unchanged.
Instead of supplying a variable name (such as X) as a parameter to read, we let read assign to its default variable, REPLY. This has the pleasant side-effect of preserving leading and trailing spaces, which are stripped otherwise (even though internal spaces are preserved).
From my observations, together these changes preserve everything except literal NUL (ASCII 0) characters on each input line.
[EDIT 26/7/2016]
According to commenter Evi1M4chine, setting $IFS to the empty string before running read X (e.g., with the command IFS='' read X) should also preserve spaces at the beginning and end when storing the result into $X, meaning you aren't forced to use $REPLY.
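A quick check of that claim (a hypothetical session):
$ printf '  padded  \n' | { IFS='' read -r X; printf '<%s>\n' "$X"; }
<  padded  >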
if you want true functions, and not just pipes or while loops (e.g. if you want to pass them around, as if they were data) I’d just not do lambdas, and define dummy functions with a recurring dummy name, to use right away, and throw away afterwards. Like so:
# An example map function, to use in the example below.
map() { local f="$1"; shift; for i in "$@"; do "$f" "$i"; done; }
# Lambda function [λ], passed to the map function.
λ(){ echo "Lambda sees $1"; }; map λ *
Like in proper functional languages, there’s no need to pass parameters, as you can wrap them in a closure:
# Let’s say you have a function with three parameters
# that you want to use as a lambda:
# (As in: Partial function application.)
trio(){ echo "$1 Lambda sees $3 $2"; }
# And there are two values that you want to use to parametrize a
# function that shall be your lambda.
pre="<<<"
post=">>>"
# Then you’d just wrap them in a closure, and be done with it:
λ(){ trio "$pre" "$post" "$@"; }; map λ *
I’d argue that it’s even shorter than all other solutions presented here.
What about this?
somecommand | xargs -d"\n" -I{} echo "the argument is: {}"
(assumes each argument is a line, otherwise change delimiter)
#!/bin/bash
function customFunction() {
eval $1
}
command='echo Hello World; echo Welcome;'
customFunction "$command"
GL
If you want only xargs (due to its parallel -P N option, for example), and only bash as the function code, then bash -c can be used as a parameter for xargs.
seq 1 10 | tr '\n' '\0' | xargs -0 -n 1 bash -c 'echo any bash code $0'
tr and the -0 option are used here to disable any of xargs's own parameter substitutions.
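And since -P is the draw here, a parallel variant might look like this (a sketch; the echoed text is arbitrary):
seq 1 10 | tr '\n' '\0' | xargs -0 -n 1 -P 4 bash -c 'echo "item $0 handled by PID $$"'
Up to four bash processes run at once, each receiving one input item as its $0.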
Yes. One can pass around a string variable representing a command call, and then execute the command with eval.
Example:
command='echo howdy'
eval "$command"
The eval trick has been already mentioned but here's my extended example of bash closures:
#!/usr/bin/env bash
set -e
function multiplyBy() {
X="$1"
cat <<-EOF
Y="\$1"
echo "$X * \$Y = \$(( $X * \$Y ))"
EOF
}
function callFunc() {
CODE="$1"
shift
eval "$CODE"
}
MULT_BY_2=`multiplyBy 2`
MULT_BY_4=`multiplyBy 4`
callFunc "$MULT_BY_2" 10
callFunc "$MULT_BY_4" 10
P.S. I just came up with this for a completely different purpose and was searching Google to see if somebody else is using it. I actually needed to evaluate a reusable function in the context (shell) of the main script.
