How to call a function dynamically in ksh - shell

How can I call a function by passing its name to the script at runtime in ksh?
For example, given the function test:
function test { echo "shankar"; }
Script: emptycheck.ksh
Run time:
./emptycheck.ksh test # <---- I want to pass the function name (test) here

It's not clear what you are after, but perhaps this is useful:
$ cat a.sh
#!/bin/bash
foo() { echo foo; }
bar() { echo bar; }
${1-foo}
$ ./a.sh foo
foo
$ ./a.sh
foo
$ ./a.sh bar
bar
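Since the question is specifically about ksh, here is a minimal sketch of the same dispatch idea for emptycheck.ksh, with a guard that checks the argument actually names a defined function before calling it (the function names here are illustrative, not from the original post):
#!/bin/ksh
# emptycheck.ksh -- run the function named on the command line
function check_empty { echo "checking for empty files..."; }
function cleanup     { echo "cleaning up..."; }
fn=${1:-check_empty}                    # default function when no argument is given
if typeset -f "$fn" >/dev/null 2>&1; then
    "$fn"                               # call the function whose name is in $fn
else
    echo "unknown function: $fn" >&2
    exit 1
fi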

If instead you do not want getopts-style dispatch but want all the functions available in your current environment, read them in using a dot. Here is William Pursell's example, adapted to that syntax:
$ cat a.sh
#!/bin/bash
foo() { echo Echoing foo; }
bar() { echo Echoing bar; }
$ . a.sh
$ foo
Echoing foo
$ bar
Echoing bar
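To see which function names the dot actually brought into the current shell, declare -F lists them in bash (in ksh, typeset +f does the same job) — a quick check in a fresh shell:
$ . a.sh
$ declare -F
declare -f bar
declare -f foo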

you can name a function "test", but as mentioned, it's dangerous.
and don't bother with the "#! ..." line when sourcing a file: if you aren't running bash when you source it, the shebang won't change the picture.
if you insist on naming a function after an existing command or builtin (there are good reasons to), then to reach the original you use the "command" builtin, which says "skip any alias or function of that name, and go for the default definition".
e.g. the former "test" is now accessed as "command test ..."
therefore, the proper use of the "command" command is inside an overriding function, e.g.
function cd {
    # clean up garbage in the current directory, so then:
    command cd "$@"   # executes the "real" cd (quoted "$@" keeps arguments intact), so you can:
    # do whatever useful things you may want to do on arriving in the new one:
    # e.g. i do a "dotlib", which sources any ".*rc" files in the current directory.
}
function comment { echo $* 2>/dev/null; }
comment good luck!
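As a concrete instance of that pattern (illustrative, not from the original post), a wrapper around the test builtin could look like:
function test {
    echo "test called with: $*" >&2   # the wrapper's extra behaviour
    command test "$@"                 # run the real builtin, bypassing this function
}
test -f /etc/passwd && echo "exists"  # goes through the wrapper, which defers to the builtin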

Related

Make shell functions only be found in the scope of the importing file

I declare functions in one shell file,
# a.sh
foo() { ... }
function bar() { ... }
and import them in another shell file via source:
# b.sh
source ./a.sh
# invoke foo and bar
foo
bar
Now in the shell, I can use foo/bar after sourcing b.sh:
$ source b.sh
...
# I can call foo or bar now in the shell (undesirable)
$ foo
...
How can I make the functions local to the scope of the importing file, and keep them from contaminating the global environment?
There's no such thing as "file scope" in shell -- just global scope and function scope. The closest you can come is running b.sh in another shell:
$ b.sh # run b.sh rather than reading it into the current shell
then everything in b.sh will just be in that other shell and will "go away" when it exits. But that applies to everything defined in b.sh -- all functions, aliases, environment and other variables.
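If you only need the definitions for one stretch of the current script, a subshell gives a similar throwaway scope — a minimal sketch, using the a.sh from the question:
(
    source ./a.sh   # foo and bar exist only inside this subshell
    foo
    bar
)
foo   # command not found -- the definitions did not leak out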
It is possible to isolate private shell functions this way:
# a.sh (to be sourced)
# only my_public_a is exposed publicly
my_public_a() (
    private_a() {
        echo "I am private_a, only visible to my_public_a"
    }
    private_b() {
        echo "I am private_b, only visible to my_public_a"
    }
    case "$1" in
        a) private_a;;
        b) private_b;;
        *) exit;;
    esac
)
# b.sh
source a.sh
my_public_a a
my_public_a b
private_a # command not found
private_b # command not found
(The trick above is that my_public_a's body is written with ( ... ) instead of { ... }, so it runs in a subshell and private_a/private_b never escape into the caller's shell.) Even though bash does not provide direct support for this, what you need is still achievable:
#!/usr/bin/env bash
# b.sh
if [[ "${BASH_SOURCE[0]}" = "$0" ]]; then
    source ./a.sh
    # invoke foo and bar
    foo
    bar
else
    echo "b.sh is being sourced. foo/bar will not be available."
fi
The above is not 100% reliable, but it should cover most cases.
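Another pragmatic option, when the function names are known, is to source the file, use the functions, and then scrub them from the current shell — a sketch with the foo/bar names from the question:
# b.sh
source ./a.sh
foo
bar
unset -f foo bar   # remove the function definitions again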

Call from script A a function defined in script B - without executing the body of script B [duplicate]

I have a shell script that I would like to test with shUnit. The script (and all the functions) are in a single file since it makes installation much easier.
Example for script.sh
#!/bin/sh
foo () { ... }
bar () { ... }
code
I wanted to write a second file (that does not need to be distributed and installed) to test the functions defined in script.sh
Something like run_tests.sh
#!/bin/sh
. script.sh
# Unit tests
Now the problem lies in the . (or source in Bash). It not only parses function definitions but also executes the code in the script.
Since the script does nothing bad when run with no arguments, I could do
. script.sh > /dev/null 2>&1
but I was wondering whether there is a better way to achieve my goal.
Edit
My proposed workaround does not work when the sourced script calls exit, so I have to trap the exit:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
    ...
}
. script.sh
The run_tests function is called, but as soon as I redirect the output of the source command, the functions in the script are not parsed and are not available in the trap handler.
This works but I get the output of script.sh:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
    function_defined_in_script_sh
}
. script.sh
This does not print the output but I get an error that the function is not defined:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
    function_defined_in_script_sh
}
. script.sh | grep OUTPUT_THAT_DOES_NOT_EXISTS
This does not print the output and the run_tests trap handler is not called at all:
#!/bin/sh
trap run_tests ERR EXIT
run_tests() {
    function_defined_in_script_sh
}
. script.sh > /dev/null
According to the “Shell Builtin Commands” section of the bash manpage, . (aka source) takes an optional list of arguments which are passed to the script being sourced. You could use that to introduce a do-nothing option. For example, script.sh could be:
#!/bin/sh
foo() {
    echo foo $1
}
main() {
    foo 1
    foo 2
}
if [ "${1}" != "--source-only" ]; then
    main "${@}"
fi
and unit.sh could be:
#!/bin/bash
. ./script.sh --source-only
foo 3
Then script.sh will behave normally, and unit.sh will have access to all the functions from script.sh but will not invoke the main() code.
Note that the extra arguments to source are not in POSIX, so /bin/sh might not handle it—hence the #!/bin/bash at the start of unit.sh.
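If you need to stay strictly POSIX, a guard variable in the environment avoids the non-POSIX source arguments entirely — a sketch, with SOURCE_ONLY as an illustrative name:
#!/bin/sh
# script.sh
foo() { echo foo $1; }
main() { foo 1; foo 2; }
if [ -z "${SOURCE_ONLY:-}" ]; then   # run main only when the guard is unset
    main "$@"
fi
and the test harness sets the guard before sourcing:
#!/bin/sh
SOURCE_ONLY=1
. ./script.sh
foo 3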
Picked up this technique from Python, but the concept works just fine in bash or any other shell...
The idea is that we turn the main code section of our script into a function. Then at the very end of the script, we put an 'if' statement that will only call that function if we executed the script but not if we sourced it. Then we explicitly call the script() function from our 'runtests' script which has sourced the 'script' script and thus contains all its functions.
This relies on the fact that if we source the script, the bash-maintained environment variable $0, which is the name of the script being executed, will be the name of the calling (parent) script (runtests in this case), not the sourced script.
(I've renamed script.sh to just script cause the .sh is redundant and confuses me. :-)
Below are the two scripts. Some notes...
$@ evaluates to all of the arguments passed to the function or
script as individual strings. If instead we used $*, all the
arguments would be concatenated together into one string.
The RUNNING="$(basename $0)" is required since $0 always includes at
least the current directory prefix as in ./script.
The test if [[ "$RUNNING" == "script" ]].... is the magic that causes
script to call the script() function only if script was run directly
from the commandline.
script
#!/bin/bash
foo () { echo "foo()"; }
bar () { echo "bar()"; }
script () {
    ARG1=$1
    ARG2=$2
    #
    echo "Running '$RUNNING'..."
    echo "script() - all args: $@"
    echo "script() - ARG1: $ARG1"
    echo "script() - ARG2: $ARG2"
    #
    foo
    bar
}
RUNNING="$(basename $0)"
if [[ "$RUNNING" == "script" ]]
then
    script "$@"
fi
runtests
#!/bin/bash
source script
# execute 'script' function in sourced file 'script'
script arg1 arg2 arg3
If you are using Bash, a similar solution to @andrewdotn's approach (but without needing an extra flag or depending on the script name) can be accomplished by using the BASH_SOURCE array.
script.sh:
#!/bin/bash
foo () { ... }
bar () { ... }
main() {
    code
}
if [[ "${#BASH_SOURCE[@]}" -eq 1 ]]; then
    main "$@"
fi
run_tests.sh:
#!/bin/bash
. script.sh
# Unit tests
If you are using Bash, another solution may be:
#!/bin/bash
foo () { ... }
bar () { ... }
[[ "${FUNCNAME[0]}" == "source" ]] && return
code
I devised this. Let's say our shell library file is the following file, named aLib.sh:
funcs=("a" "b" "c") # this file's function names
for ((i=0; i<${#funcs[@]}; i++)); do # warn if a name collides with an existing function
    declare -f "${funcs[$i]}" >/dev/null
    [ $? -eq 0 ] && echo "!!ATTENTION!! ${funcs[$i]} is already sourced"
done
function a(){
    echo function a
}
function b(){
    echo function b
}
function c(){
    echo function c
}
if [ "$1" == "--source-specific" ]; then # keep only the functions given as args
    for ((i=0; i<${#funcs[@]}; i++)); do
        for ((j=2; j<=$#; j++)); do
            anArg=$(eval 'echo ${'$j'}')
            test "${funcs[$i]}" == "$anArg" && continue 2
        done
        unset ${funcs[$i]}
    done
fi
unset i j funcs
At the beginning it checks for and warns about any detected function name collision.
At the end, bash has already sourced all the functions, so the script unsets them again, keeping only the ones selected.
Can be used like this:
user@pc:~$ source aLib.sh --source-specific a c
user@pc:~$ a; b; c
function a
bash: b: command not found
function c
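A side note on the eval line above: bash has indirect expansion built in, so the j-th positional parameter can be read without eval — an equivalent inner loop:
for ((j=2; j<=$#; j++)); do
    anArg=${!j}   # indirect expansion: value of the j-th positional parameter
    test "${funcs[$i]}" == "$anArg" && continue 2
done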

Strange Bash function export for the Shellshock bug

Why does the code
date
bash -c "date"
declare -x date='() { echo today; }' #aka export date='() { echo today; }'
date
bash -c "date"
print
Wed Sep 24 22:01:50 CEST 2014
Wed Sep 24 22:01:50 CEST 2014
Wed Sep 24 22:01:50 CEST 2014
today
?
Where (and why) does the evaluation of date$date (the variable's name followed by its value) happen, yielding the function definition date () { echo today; }?
@Etan Reisner: I am exporting a variable, not a function; Bash makes a function out of it. The
export date='something'
is still a variable regardless of its content. So why is
export date='() { echo something; }' # Note: it is a variable, not a function.
converted to a function?
The mentioned security advisory talks about the execution of the command following the variable, for example,
x='() { echo I do nothing; }; echo vulnerable' bash -c ':'
^^^^^^^^^^^^^^^
This is executed - this vulnerability is CLOSED in version 4.3.25(1).
The command after the env-definition isn't executed in the latest Bash.
But the question remains - Why does Bash convert the exported variable to a function?
It is a bug ;) Full demo, based on @chepner's answer:
# Define three variables
foo='() { echo variable foo; }' # crafted "()" value
qux='() { echo variable qux; }' # crafted "()" value
bar='variable bar' # normal
export foo qux bar # export all three
# Define same-named functions (but not qux!)
foo() { echo "function foo"; }
bar() { echo "function bar"; }
declare -fx foo bar # export the functions too
# printouts
echo "current shell foo variable:=$foo="
echo "current shell foo function:=$(foo)="
echo "current shell bar variable:=$bar="
echo "current shell bar function:=$(bar)="
echo "current shell qux variable:=$qux="
echo "current shell qux function:=$(qux)="
# subshell
bash -c 'echo subshell foo variable:=$foo='
bash -c 'echo subshell foo command :=$(foo)='
bash -c 'echo subshell bar variable:=$bar='
bash -c 'echo subshell bar command :=$(bar)='
bash -c 'echo subshell qux variable:=$qux='
bash -c 'echo subshell qux command :=$(qux)='
prints
current shell foo variable:=() { echo variable foo; }=
current shell foo function:=function foo=
current shell bar variable:=variable bar=
current shell bar function:=function bar=
current shell qux variable:=() { echo variable qux; }=
tt: line 20: qux: command not found
current shell qux function:==
subshell foo variable:== #<-- LOST the exported foo variable
subshell foo command :=function foo=
subshell bar variable:=variable bar=
subshell bar command :=function bar=
subshell qux variable:== #<-- And the variable qux got converted to
subshell qux command :=variable qux= #<-- function qux in the subshell (!!!).
Avoiding the long comments, here is code from the Bash sources:
if (privmode == 0 && read_but_dont_execute == 0 && STREQN ("() {", string, 4))
^^^^^^^^ THE PROBLEM
{
string_length = strlen (string);
temp_string = (char *)xmalloc (3 + string_length + char_index);
strcpy (temp_string, name);
temp_string[char_index] = ' ';
strcpy (temp_string + char_index + 1, string);
if (posixly_correct == 0 || legal_identifier (name))
parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
/* Ancient backwards compatibility. Old versions of bash exported
functions like name()=() {...} */
The "ancient" (seems) was better... :)
if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
name[char_index - 2] = '\0';
The key point to remember is that
foo='() { echo 5; }'
only defines a string parameter with a string that looks a lot like a function. It's still a regular string:
$ echo $foo
() { echo 5; }
And not a function:
$ foo
bash: foo: command not found
Once foo is marked for export,
$ export foo
any child Bash will see the following string in its environment:
foo=() { echo 5; }
Normally, such strings become shell variables, using the part preceding the = as the name and the part following it as the value. However, Bash treats such strings specially by defining a function instead:
$ echo $foo
$ foo
5
You can see that the environment itself is not changed by examining it with something other than Bash:
$ perl -e 'print "$ENV{foo}\n"'
() { echo 5
}
(The parent Bash replaces the semicolon with a newline when creating the child's environment, apparently). It's only the child Bash that creates a function instead of a shell variable from such a string.
The fact that foo could be both a parameter and a function within the same shell:
$ foo=5
$ foo () { echo 9; }
$ echo $foo
5
$ foo
9
explains why -f is needed with export. export foo would cause the string foo=5 to be added to the environment of a child; export -f foo is used to add the string foo=() { echo 9; }.
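(For readers on a patched, post-Shellshock bash: exported functions no longer travel under the bare name; the child's environment instead carries an encoded entry, which you can inspect with env — a quick check, assuming a patched bash:)
$ foo() { echo 9; }
$ export -f foo
$ env | grep -A1 BASH_FUNC
BASH_FUNC_foo%%=() {  echo 9
}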
You are essentially manually exporting a function with the name date, since that is the format bash uses internally to export functions (as Barmar suggests in his answer; the mechanism is mentioned here at the very least).
Then when you run bash, it sees that exported function and uses it when you tell it to run date.
Is the question then where that mechanism is specified? My guess is that it isn't, since it is an internal detail.
This should show the merging of the behaviours if that helps anything.
$ bar() { echo automatic; }; export -f bar
$ declare -x foo='() { echo manual; }'
$ declare -p foo bar
declare -x foo="() { echo manual; }"
-bash: declare: bar: not found
$ type foo bar
-bash: type: foo: not found
bar is a function
bar ()
{
echo automatic
}
$ bash -c 'type foo bar'
foo is a function
foo ()
{
echo manual
}
bar is a function
bar ()
{
echo automatic
}
The answer to your question comes directly from man bash:
The export and declare -x commands allow parameters and functions
to be added to and deleted from the environment. If the value of a
parameter in the environment is modified, the new value becomes part
of the environment, replacing the old.
Thus
declare -x date='() { echo today; }'
replaces date in the environment. The next immediate call to date gives date as it exists in the script (which is unchanged). The call to bash -c "date" creates a new shell and executes date as defined by declare -x.


Bash - How to call a function declared in a parent shell?

I am writing a bash script that calls functions declared in the parent shell, but it doesn't work.
For example:
$ function myfunc() { echo "Here in myfunc" ; }
$ myfunc
Here in myfunc
$ cat test.sh
#! /bin/bash
echo "Here in the script"
myfunc
$ ./test.sh
Here in the script
./test.sh: line 4: myfunc: command not found
$ myfunc
Here in myfunc
As you can see the script ./test.sh is unable to call the function myfunc, is there some way to make that function visible to the script?
Try
$ export -f myfunc
in the parent shell, to export the function.
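Putting it together, a minimal transcript (with the test.sh from the question):
$ function myfunc() { echo "Here in myfunc" ; }
$ export -f myfunc
$ ./test.sh
Here in the script
Here in myfunc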
@OP, normally you would put the functions that every script uses in a file, then source that file in your scripts. For example, save
function myfunc() { echo "Here in myfunc" ; }
in a file called /path/library. Then in your script, source it like this:
#!/bin/bash
. /path/library
myfunc
This also works, but I noticed that ${0} keeps the parent's value. (The trick is that $(. ./script2) sources script2 in a subshell that inherits func, captures its output — the text func 2 — and then executes that text as a command in script1.)
It may be more useful if you don't want a bunch of export calls in your scripts.
script1:
#!/bin/bash
func()
{
    echo func "${1}"
}
func "1"
$(. ./script2)
script2:
#!/bin/bash
func "2"
Output:
[mymachine]# ./script1
func 1
func 2
