In the Bash manual, section 3.7.4 states:
The environment for any simple command or function may be augmented
temporarily by prefixing it with parameter assignments, as described
in Shell Parameters [section 3.4].
And a trivial example of this is
MYVAR=MYVALUE mycommand
I have read section 3.4 and I still can't figure out how to specify multiple parameter assignments. Note that the statement in 3.7.4 is definitely plural, implying that it is possible.
The following does not seem to work:
MYVAR1=abc MYVAR2=xyz mycommand
I am using bash version 4.1, 23 December 2009.
It should work. That is acceptable syntax. Here's an example:
$ cat a
#!/bin/sh
echo $MYVAR1
echo $MYVAR2
$ ./a
$ MYVAR1=abc MYVAR2=xyz ./a
abc
xyz
$
UPDATE: The example given in your updated answer will work if you put the assignments directly before the simple command, as required:
mycommand () { echo MYVAR1=[$MYVAR1]; echo MYVAR2=[$MYVAR2]; }
for f in ~/*.txt ; do MYVAR1=abc MYVAR2=xyz mycommand; done
Oops, my example was over-simplified. This works fine:
mycommand () { echo MYVAR1=[$MYVAR1]; echo MYVAR2=[$MYVAR2]; }
MYVAR1=abc MYVAR2=xyz mycommand
but this does not:
mycommand () { echo MYVAR1=[$MYVAR1]; echo MYVAR2=[$MYVAR2]; }
MYVAR1=abc MYVAR2=xyz for f in ~/*.txt; do mycommand; done
and the key difference is this phrase:
for any simple command or function
A for command is a compound command, not "a simple command or function", according to section 3.2.
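One hedged workaround: wrap the compound command in a function, since a function call is eligible for the temporary-assignment prefix (runloop is a placeholder name for this sketch):
runloop () { for f in ~/*.txt; do mycommand; done; }
MYVAR1=abc MYVAR2=xyz runloop
The assignments are visible inside runloop for the duration of the call, are exported to the commands it runs, and are restored afterwards.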
I've never seen that method used.
You could always just pass in regular parameters to the script.
Updating for new example:
This works in both situations:
> mycommand () { echo MYVAR1=[$1]; echo MYVAR2=[$2]; }
> mycommand a b
MYVAR1=[a]
MYVAR2=[b]
> for f in ~/file*.txt; do mycommand a b; done
MYVAR1=[a]
MYVAR2=[b]
MYVAR1=[a]
MYVAR2=[b]
Related
A recent vulnerability, CVE-2014-6271, in how Bash interprets environment variables was disclosed. The exploit relies on Bash parsing some environment variable declarations as function definitions, but then continuing to execute code following the definition:
$ x='() { echo i do nothing; }; echo vulnerable' bash -c ':'
vulnerable
But I don't get it. There's nothing I've been able to find in the Bash manual about interpreting environment variables as functions at all (except for inheriting functions, which is different). Indeed, a proper named function definition is just treated as a value:
$ x='y() { :; }' bash -c 'echo $x'
y() { :; }
But a corrupt one prints nothing:
$ x='() { :; }' bash -c 'echo $x'
$ # Nothing but newline
The corrupt function is unnamed, and so I can't just call it. Is this vulnerability a pure implementation bug, or is there an intended feature here, that I just can't see?
Update
Per Barmar's comment, I hypothesized the name of the function was the parameter name:
$ n='() { echo wat; }' bash -c 'n'
wat
Which I could swear I tried before, but I guess I didn't try hard enough. It's repeatable now. Here's a little more testing:
$ env n='() { echo wat; }; echo vuln' bash -c 'n'
vuln
wat
$ env n='() { echo wat; }; echo $1' bash -c 'n 2' 3 -- 4
wat
…so apparently the args are not set at the time the exploit executes.
Anyway, the basic answer to my question is, yes, this is how Bash implements inherited functions.
This seems like an implementation bug.
Apparently, the way exported functions work in bash is that they use specially-formatted environment variables. If you export a function:
f() { ... }
it defines an environment variable like:
f='() { ... }'
What's probably happening is that when the new shell sees an environment variable whose value begins with (), it prepends the variable name and executes the resulting string. The bug is that this includes executing anything after the function definition as well.
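The benign side of this mechanism is easy to see with export -f, which is the supported interface on both unpatched and patched shells (a minimal sketch):
$ f() { echo 'hello from f'; }
$ export -f f
$ bash -c 'f'    # the child shell imports f from its environment
hello from f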
The fix described is apparently to parse the result to see if it's a valid function definition. If not, it prints the warning about the invalid function definition attempt.
This article confirms my explanation of the cause of the bug. It also goes into a little more detail about how the fix resolves it: not only do they parse the values more carefully, but variables that are used to pass exported functions follow a special naming convention. This naming convention is different from that used for the environment variables created for CGI scripts, so an HTTP client should never be able to get its foot into this door.
The following:
x='() { echo I do nothing; }; echo vulnerable' bash -c 'typeset -f'
prints
vulnerable
x ()
{
echo I do nothing
}
declare -fx x
It seems that Bash, after having parsed the x=..., recognized it as a function definition, exported it (hence the declare -fx x), and allowed execution of the command after the declaration:
echo vulnerable
x='() { x; }; echo vulnerable' bash -c 'typeset -f'
prints:
vulnerable
x ()
{
x
}
and running x:
x='() { x; }; echo Vulnerable' bash -c 'x'
prints
Vulnerable
Segmentation fault: 11
It segfaults because of infinitely recursive calls.
It doesn't override an already-defined function:
$ x() { echo Something; }
$ declare -fx x
$ x='() { x; }; echo Vulnerable' bash -c 'typeset -f'
prints:
x ()
{
echo Something
}
declare -fx x
i.e., x remains the previously (correctly) defined function.
As of the Bash 4.3.25(1)-release the vulnerability is closed, so
x='() { echo I do nothing; }; echo Vulnerable' bash -c ':'
prints
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
but, strangely (at least to me),
x='() { x; };' bash -c 'typeset -f'
STILL PRINTS
x ()
{
x
}
declare -fx x
and the
x='() { x; };' bash -c 'x'
segfaults too, so it still accepts the strange function definition...
I think it's worth looking at the Bash code itself. The patch gives a bit of insight as to the problem. In particular,
*** ../bash-4.3-patched/variables.c 2014-05-15 08:26:50.000000000 -0400
--- variables.c 2014-09-14 14:23:35.000000000 -0400
***************
*** 359,369 ****
strcpy (temp_string + char_index + 1, string);
! if (posixly_correct == 0 || legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
!
! /* Ancient backwards compatibility. Old versions of bash exported
! functions like name()=() {...} */
! if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
! name[char_index - 2] = '\0';
if (temp_var = find_function (name))
--- 364,372 ----
strcpy (temp_string + char_index + 1, string);
! /* Don't import function names that are invalid identifiers from the
! environment, though we still allow them to be defined as shell
! variables. */
! if (legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
if (temp_var = find_function (name))
When Bash exports a function, it shows up as an environment variable, for example:
$ foo() { echo 'hello world'; }
$ export -f foo
$ cat /proc/self/environ | tr '\0' '\n' | grep -A1 foo
foo=() { echo 'hello world'
}
When a new Bash process finds a function defined this way in its environment, it evaluates the code in the variable using parse_and_execute(). For normal, non-malicious code, executing it simply defines the function in Bash and moves on. However, because it's passed to a generic execution function, Bash will happily parse and execute additional code defined in that variable after the function definition.
You can see that in the new code, a flag called SEVAL_ONECMD has been added that tells Bash to only evaluate the first command (that is, the function definition), and SEVAL_FUNCDEF to only allow function definitions.
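The new naming convention mentioned earlier can be seen by exporting a function on a patched shell (a sketch; the exact mangling has varied across patch levels, with BASH_FUNC_foo() used by early patches and BASH_FUNC_foo%% later):
$ foo() { echo 'hello world'; }
$ export -f foo
$ env | grep -A1 BASH_FUNC
BASH_FUNC_foo%%=() {  echo 'hello world'
}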
In regard to your question about documentation: the command-line documentation for the env command shows that env is working as documented.
The syntax allows the following optional parts:
An optional hyphen as a synonym for -i (for backward compatibility, I assume).
Zero or more NAME=VALUE pairs. These are the variable assignments, which may include function definitions. Note that no semicolon (;) is required between or after the assignments.
A command followed by its arguments. It runs with whatever permissions have been granted to the login being used; security is enforced by restricting permissions on the login user and by setting permissions on user-accessible executables so that users other than the owner can only read and execute them, not alter them.
[ spot@LX03:~ ] env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
-i, --ignore-environment start with an empty environment
-u, --unset=NAME remove variable from the environment
--help display this help and exit
--version output version information and exit
A mere - implies -i. If no COMMAND, print the resulting environment.
Report env bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
Report env translation bugs to <http://translationproject.org/team/>
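For instance, a minimal sketch of the documented NAME=VALUE syntax in action:
$ env MYVAR1=abc MYVAR2=xyz sh -c 'echo "$MYVAR1 $MYVAR2"'
abc xyz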
I have a script a.sh which has:
a() {
echo "123"
}
echo "dont"
Then I have another script b.sh which has:
b() {
echo "345"
}
All I want to do is use a in b, but when I source a.sh I don't want it to print whatever is in a() or the echo "dont" line.
I just want to source it for now.
So I put source a.sh in b.sh, but it doesn't work.
The reason for sourcing is so that I can call any of the functions whenever I want to.
If I do . ./a.sh in b.sh, it prints everything in a.sh.
One approach which will work on any POSIX-compliant shell is this:
# put function definitions at the top
a() {
echo "123"
}
# divide them with a conditional return
[ -n "$FUNCTIONS_ONLY" ] && return
# put direct code to execute below
echo "dont"
...and in your other script:
FUNCTIONS_ONLY=1 . other.sh
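A short usage sketch, assuming the guarded file above is saved as a.sh (per the question) and b.sh sources it:
#!/bin/sh
FUNCTIONS_ONLY=1 . ./a.sh    # defines a() but skips the echo "dont" line
b() {
    a                        # a is now available inside b
    echo "345"
}
b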
Make a library of common functions in a file called functionLib.sh like this:
#!/bin/sh
a(){
echo Inside a, with $1
}
b(){
echo Inside b, with $1
}
Then in script1, do this:
#!/bin/sh
. functionLib.sh # Source in the functions
a 42 # Use one
b 37 # Use another
and in another script, script2 re-use the functions:
#!/bin/sh
. functionLib.sh # Source in the functions
a 23 # Re-use one
b 24 # Re-use another
I have adopted a style in my shell scripts that allows me to design every script as a potential library, making it behave differently when it is sourced (with . .../path/script) and when it is executed directly. You can compare this to the python if __name__ == '__main__': trick.
I have not found a method that is portable across all Bourne shell descendants without explicitly referring to the script's name, but this is what I use:
a() {
echo a
}
b() {
echo b
}
(program=xyzzy
set -u -e
case $0 in
*${program}) : ;;
*) exit;;
esac
# main
a
b
)
The rules for this method are strictly:
Start the file with nothing but function definitions; no variable assignments or any other activity.
Then, at the very end, create a subshell: ( ... ).
The first action inside the subshell tests whether the script is being sourced. If so, exit from the subshell; if not, run the main program.
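A brief usage sketch, assuming the script above is saved as xyzzy and made executable:
$ ./xyzzy      # executed directly: $0 ends in xyzzy, so the main section runs
a
b
$ . ./xyzzy    # sourced: $0 is the parent shell's name, so the subshell exits early
$ a            # ...but the functions are now defined
a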
I'm trying to implement a small bash script on AIX, but I'm having some problems. Below you can find an example. I also have another question: if I add the script to crontab, I think I'll have problems calling IBM's serverStatus.sh. How can I avoid that problem?
#!/usr/bin/sh
WAS_HOME="/usr/IBM/WebSphere/AppServer/profiles/bpmnprd01/"
function StatusCheck()
{
$WAS_HOME/bin/serverStatus.sh BPM.AppTarget.bpmnprd01.0 -username admin -password admin
status=$(cat /usr/IBM/WebSphere/AppServer/profiles/bpmnprd01/logs/BPM.AppTarget.xxxxx/serverStatus.log | awk '{ if (NF > 0) { last = $NF } } END { print last }' "$@")
text="STOPPED"
if [[ $text == $status ]]
then
echo "OK"
else
echo "NOK"
fi
}
function start()
{
StatusCheck
}
start
-----------------------
when I try to execute the script above, I get the following error:
[root@bpmnprd01]/root/health_check# ./servers_check.sh
./servers_check.sh[7]: 0403-057 Syntax error at line 7 : `(' is not expected.
After this I searched on Google and found some examples without "()" in the function definition, but then I got this:
[root@bpmnprd01]/root/health_check# ./servers_check.sh
./servers_check.sh[30]: 0403-057 Syntax error at line 33 : `StatusCheck' is not expected.
Thanks in Advance
Tiago
AIX has a true Bourne shell living in /bin/sh. I'm not sure about /usr/bin/sh, but I would expect that to be a Bourne shell as well.
Change your script heading line (the #shebang!) to
#!/usr/bin/bash
or to the result of which bash.
IHTH
You are using bash specific syntax but calling the script with sh, which has more limited capabilities. Since you want to use sh, you can use a tool like checkbashisms or shellcheck to help uncover non-portable syntax.
The immediate problem is that function foo() { ..; } is not a POSIX compliant function definition, and you should drop the keyword function and use just foo() { ..; }.
Your shell may also lack [[ ]], in which case you should use [ ] instead, with = instead of ==.
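For illustration, a hedged sketch of the question's script with those bashisms removed (paths and names are copied from the question, including the elided xxxxx, so adjust them for your installation):
#!/usr/bin/sh
WAS_HOME="/usr/IBM/WebSphere/AppServer/profiles/bpmnprd01/"

StatusCheck()    # POSIX form: no "function" keyword
{
    "$WAS_HOME"/bin/serverStatus.sh BPM.AppTarget.bpmnprd01.0 -username admin -password admin
    status=$(awk '{ if (NF > 0) { last = $NF } } END { print last }' \
        "$WAS_HOME"/logs/BPM.AppTarget.xxxxx/serverStatus.log)
    if [ "STOPPED" = "$status" ]    # [ ] and = instead of [[ ]] and ==
    then
        echo "OK"
    else
        echo "NOK"
    fi
}

StatusCheck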
I have the following code in my ~/.bashrc:
date=$(which date)
date() {
if [[ $1 == -R || $1 == --rfc-822 ]]; then
# Output RFC-822 compliant date string.
# e.g. Wed, 16 Dec 2009 15:18:11 +0100
$date | sed "s/[^ ][^ ]*$/$($date +%z)/"
else
$date "$#"
fi
}
This works fine, as far as I can tell. Is there a reason to avoid having a variable and a function with the same name?
It's alright apart from being confusing. Besides, they are not the same:
$ date=/bin/ls
$ type date
date is hashed (/bin/date)
$ type $date
/bin/ls is /bin/ls
$ moo=foo
$ type $moo
-bash: type: foo: not found
$ function date() { true; }
$ type date
date is a function
date ()
{
true
}
$ which true
/bin/true
$ type true
true is a shell builtin
Whenever you type a command, bash looks in several places to find it, in this order of priority:
shell aliases (help alias)
shell functions (help function)
shell builtins (help)
hashed binary files from $PATH ('leftmost' folders scanned first)
Variables are prefixed with a dollar sign, which makes them different from all of the above. To compare with your example: $date and date are not the same thing. So a variable can't really collide with a function of the same name, because they live in different "namespaces".
You may find this somewhat confusing, but many scripts define "method variables" at the top of the file. e.g.
SED=/bin/sed
AWK=/usr/bin/awk
GREP=/usr/local/gnu/bin/grep
The convention is to write these variable names in capitals. This is useful for two purposes (apart from being less confusing):
There is no $PATH lookup involved.
You can check up front that all "dependencies" are runnable.
You can't really check like this:
if [ "`which binary`" ]; then echo it\'s ok to continue.. ;fi
Because which will give you an error if binary has not yet been hashed (found in a path folder).
Since you always have to use $ to dereference a variable in Bash, you're free to use any name you like.
Beware of overriding a global, though.
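A small sketch of that hazard: assignments inside a function are global by default, so reusing the name clobbers the variable (clobber is a name made up for this example):
date=$(which date)
clobber() { date="oops"; }    # no "local date", so this writes the global
clobber
echo "$date"                  # prints oops, not /bin/date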
See also:
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_02.html
An alternative to using a variable: use bash's command keyword (see the manual or run help command from a prompt):
date() {
case $1 in
-R|--rfc-2822) command date ... ;;
*) command date "$@" ;;
esac
}
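Filling in the answer's elision with the sed rewrite from the question (a hedged sketch, not necessarily what the answerer intended):
date() {
    case $1 in
        -R|--rfc-2822) command date | sed "s/[^ ][^ ]*$/$(command date +%z)/" ;;
        *) command date "$@" ;;
    esac
}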
Here's a shell script:
globvar=0
function myfunc {
let globvar=globvar+1
echo "myfunc: $globvar"
}
myfunc
echo "something" | myfunc
echo "Global: $globvar"
When called, it prints out the following:
$ sh zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ bash zzz.sh
myfunc: 1
myfunc: 2
Global: 1
$ zsh zzz.sh
myfunc: 1
myfunc: 2
Global: 2
The question is: why this happens and what behavior is correct?
P.S. I have a strange feeling that the function behind the pipe is called in a forked shell... So, is there a simple workaround?
P.P.S. This function is a simple test wrapper. It runs a test application and analyzes its output, then increments the $PASSED or $FAILED variable. Finally, you get the number of passed/failed tests in global variables. The usage is like:
test-util << EOF | myfunc
input for test #1
EOF
test-util << EOF | myfunc
input for test #2
EOF
echo "Passed: $PASSED, failed: $FAILED"
Korn shell gives the same results as zsh, by the way.
Please see BashFAQ/024. Pipes create subshells in Bash and variables are lost when subshells exit.
Based on your example, I would restructure it something like this:
globvar=0
function myfunc {
echo $(($1 + 1))
}
myfunc "$globvar"
globvar=$(echo "something" | myfunc "$globvar")
Piping something into myfunc in sh or bash causes a new shell to spawn. You can confirm this by adding a long sleep in myfunc. While it's sleeping call ps and you'll see a subprocess. When the function returns, that sub shell exits without changing the value in the parent process.
If you really need that value to be changed, you'll need to return a value from the function and check $PIPESTATUS after, I guess, like this:
globvar=0
function myfunc {
let globvar=globvar+1
echo "myfunc: $globvar"
return $globvar
}
myfunc
echo "something" | myfunc
globvar=${PIPESTATUS[1]}
echo "Global: $globvar"
The problem is 'which end of a pipeline using built-ins is executed by the original process?'
In zsh, it looks like the last command in the pipeline is executed by the main shell script when the command is a function or built-in.
In Bash (and sh is likely to be a link to Bash if you're on Linux), then either both commands are run in a sub-shell or the first command is run by the main process and the others are run by sub-shells.
Clearly, when the function is run in a sub-shell, it does not affect the variable in the parent shell (only the global in the sub-shell).
Consider adding an extra test:
echo Something | { myfunc; echo $globvar; }
echo $globvar
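One hedged workaround for newer shells: bash 4.2 added the lastpipe option, which runs the last element of a pipeline in the current shell environment (it only takes effect when job control is off, as in scripts):
shopt -s lastpipe
globvar=0
myfunc() { let globvar=globvar+1; echo "myfunc: $globvar"; }
echo "something" | myfunc
echo "Global: $globvar"    # prints Global: 1, since myfunc ran in this shell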