What's the equivalent of bash's ${!ENV} in sh? - bash

In bash, the expansion ${!ENV} takes the value of the variable ENV and interprets that value as the name of another variable, whose value it then returns.
ENV=PATH
echo ${!ENV}
Output:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
In an attempt to make an existing GitLab pipeline more concise, I found a use for this behaviour. However, our GitLab runner runs things with sh, as opposed to bash.
What's the equivalent of bash's ${!ENV} in sh?

No. POSIX does not mandate such an "indirect expansion" in shell. The best thing you can do is eval if you want to be strictly POSIX-compliant:
ENV=PATH
eval echo "\"$ENV is \$$ENV\""
Output (example):
PATH is /usr/bin
In my experience, Bash is available almost everywhere and causes almost no difficulty, so you should also consider simply sticking to Bash unless you need strict POSIX compliance.
Be aware that eval is dangerous. Unless the string is created by you (your script) rather than coming from user input, you are advised to verify that it is safe to eval before actually evaluating it. Something as simple as this will work:
echo "$ENV" | grep -qi '^[A-Z_][A-Z0-9_]*$'
Check the exit status $?. If it's zero, you can safely evaluate $ENV. If it's non-zero, the string is not a plain variable name and you'd better not pass it to eval without further scrutiny.
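For example, here is a minimal POSIX sh sketch that combines the check and the eval into one helper (the function name indirect_value is made up for illustration):
# Sketch of a POSIX-compatible indirect lookup; indirect_value is an illustrative name.
indirect_value() {
    # refuse anything that is not a plain variable name
    printf '%s\n' "$1" | grep -q '^[A-Za-z_][A-Za-z0-9_]*$' || return 1
    eval "printf '%s\n' \"\$$1\""
}
ENV=PATH
indirect_value "$ENV"   # prints the value of $PATH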

Related

Converting a BASH script to run on SH (via BusyBox)

I have an Asus router running a recent version of FreshTomato - that comes with BusyBox.
I need to run a script that was made with BASH in mind - it is an adaptation of this script - but it fails to run with this error: line 41: syntax error: bad substitution
Checking the script with shellcheck.net yields these errors:
Line 41:
for optionvarname in ${!foreign_option_*} ; do
^-- SC3053: In POSIX sh, indirect expansion is undefined.
^-- SC3056: In POSIX sh, name matching prefixes are undefined.
Line 42:
option="${!optionvarname}"
^-- SC3053: In POSIX sh, indirect expansion is undefined.
These are the lines that are causing problems:
for optionvarname in ${!foreign_option_*} ; do # line 41
option="${!optionvarname}" # line 42
# do some stuff with $option...
done
If my understanding is correct, the original script simply does something with all variables that have a name starting with foreign_option_
However, as far as I could determine, both ${!foreign_option_*} and ${!optionvarname} constructs are BASH-specific and not POSIX compliant, so there is no direct "bash to sh" code conversion possible.
I have tried to create a /bin/bash symlink that points to busybox, but I got the Read-only file system error.
So, how can I get this script to run on my router? I see only two options, but I can't figure out how to implement either:
Make BusyBox interpret the script as BASH instead of SH - can I use a specific shebang for this?
Seems like the fastest option, but only if BusyBox has a "complete" implementation of BASH
Alter the script code to not use BASH specifics.
This is safer, but since there is no "collect all variables starting with X" for SH, how can I do it?
how can I get this script to run on my router?
That's easy. Either:
install bash on your router, or
port the script to a busybox/POSIX-compatible shell.
Make BusyBox interpret the script as BASH instead of SH - can I use a specific shebang for this?
That doesn't make sense. BusyBox comes with the ash shell interpreter, and bash is bash. Bash can interpret bash extensions; ash can't. You can't "make busybox interpret bash" - cars don't fly, planes are for that. If you want to make a car fly, you add wings to it and make it faster. The answer to "Make BusyBox interpret the script as BASH instead of SH" would be: patch busybox and implement all bash extensions in it.
A shebang is used to run a file under a different interpreter. Using #!/bin/bash would invoke bash, which is unrelated to anything BusyBox provides - busybox wouldn't be involved in it at all.
how can I do it?
Decide on an unrealistically high maximum, iterate over variables named foreign_option_{1...some_max}, and for each one check whether it is set; if it is, continue with the script.
for i in $(seq 100); do
    optionvarname="foreign_option_${i}"
    # https://stackoverflow.com/questions/3601515/how-to-check-if-a-variable-is-set-in-bash
    if eval "[ -z \"\${${optionvarname}+x}\" ]"; then continue; fi
With enough luck you may be able to use the output of set. The following will fail if any variable's value contains a newline followed by a string that matches the regex:
for optionvarname in $(set | grep -o '^foreign_option_[0-9]\+=' | sed 's/=//'); do
Indirect expansion can be easily replaced by eval:
eval "option=\"\$${optionvarname}\""
If you really cannot install Bash on that router, here is one possible workaround, which seems to work for me in BusyBox on a QNAP NAS:
foreign_option_one=1
foreign_option_two=2
for x in one two; do
    opt_var=foreign_option_${x}
    eval "opt_value=\$$opt_var"
    echo "$opt_var = $opt_value"
done
(But you will probably encounter more problems when moving a Bash script to BusyBox, so you might want to first consider alternatives like replacing the router.)

Shell independent way of setting environment variable

I need to make a script which can modify an environment variable of the calling shell. To allow the script to modify the environment variable I'm using source <script> and I want both bash and tcsh to be able to use the same script.
I'm hitting the fact that tcsh and bash have different if syntax so I can't even switch between the two inside the script. What is the best way to handle setting the environment variable?
Ok, you got me. I did some experimentation, and you might actually be able to do this with one script. (Update: I way overcomplicated the original, here's a much better solution that also works in zsh.)
What you're trying to create is a bash/tcsh polyglot (we'll assume for now that you don't want to support any other shells). I'll put the actual polyglot here, then some explanation and caveats afterwards:
if ( : != : ) then
    echo "In a POSIX shell or zsh or ksh"
else
    echo "In tcsh"
    alias fi :
endif
fi
The first line is really the interesting bit in this polyglot.
In POSIX sh, it creates a subshell to run the command : with two arguments, != and :. : always returns true, so the first branch of the if-statement is executed. (Usually a semicolon or newline is used after the condition in an if-statement, but in fact a close-paren works too, since both are control operators, which can be used to end a simple command – the condition in an if-statement is really a list, but that degenerates to a simple command, going by the Bash manual.)
In tcsh, it compares the string : with the string : – since they are equal, and we were testing for inequality, it executes the second branch.
The last line of the second (tcsh) branch just ensures that tcsh won't complain that the final fi isn't a command. There's no need for a similar alias in the first branch, because the endif is still inside the second branch of the if-statement as far as a POSIX shell is concerned.
With regard to caveats, you're somewhat limited in what you can actually do in the POSIX shell section: for example, you can't define any functions with the POSIX syntax (foo() {...}), since tcsh will complain about the parentheses, although the Bash syntax (function foo {...}) works. I assume there are similar limitations in the tcsh section.
This polyglot also doesn't work in fish, though it does work in zsh. (That's why the condition is : != : rather than something like : == '' – in zsh, == expands to the path to the command =, which doesn't exist.) It also appears to work in ksh (though at this point it's turning into less of a polyglot, more of a "is this shell csh" program...)
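Applied to the original goal of setting an environment variable, the same polyglot could look roughly like this (MY_VAR and /some/path are placeholders):
if ( : != : ) then
    MY_VAR=/some/path
    export MY_VAR
else
    setenv MY_VAR /some/path
    alias fi :
endif
fi
The first branch runs in POSIX-style shells, the second in tcsh, exactly as in the echo version above.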
I hate to write an answer that does little more than expand on the comment made by @Ash on the original question. But I felt it important to note that you need to consider not just POSIX 1003 shells like bash and classic shells like csh/tcsh; you also need to consider modern alternatives like fish, which is not compatible with either of those shells.
As @Ash noted, the solution is to use "bridge" code for each of the invoking shells, mapping the information into the syntax appropriate for the invoking shell.
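As a sketch of what that bridge code can look like, one common layout is a tiny file per shell family, and users source the one that matches their shell (file names, variable, and value here are purely illustrative):
# env.sh - sourced from bash, zsh and other POSIX-style shells
MY_VAR=/some/path
export MY_VAR

# env.csh - sourced from csh/tcsh
setenv MY_VAR /some/path

# env.fish - sourced from fish
set -gx MY_VAR /some/path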

Build a shell command in a string for later execution portably

I am trying to do the following command in bash and dash:
x="env PATH=\"$PATH:/dir with space\""
cmd="ls"
"$x" $cmd
This fails with
-bash: env PATH="/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/git/bin:/usr/local/go/bin:/dir
with space": No such file or directory
Note the following works:
env PATH="$PATH:/dir with space" $cmd
The reason I am assigning env to the variable x is that it is part of a larger command wrapper around $cmd, which is itself a complicated variable.
It's more complex than the initial example. I have logic for setting these variables once instead of repeating them each time. Eventually the invocation looks like this:
path_value="$PATH"
invocation="env PATH=\"$path_value\" $other_val1 $other_val2"
base="python python_script.py --opt1=a,b,c script_args"
add_on="$base more_arg1 more_arg2"
"$invocation" $base
You can use a shell array to store and reuse it:
x=(env PATH="$PATH:/dir with space")
cmd="ls"
"${x[@]}" "$cmd"
anubhava's helpful array-based bash answer is the best choice if you can assume only bash (which appeared to be the case initially).
Given that you must also support dash, which supports almost exclusively POSIX sh features, arrays are not an option.
Assuming that you fully control or trust the values that you use to build your command line in a string, you can use eval, which should generally be avoided:
path_value="$PATH"
invocation="env PATH=\"$path_value\" $other_val1 $other_val2"
base="python python_script.py --opt1=a,b,c script_args"
add_on="$base more_arg1 more_arg2"
eval "$invocation $add_on"

Why bash cannot accept option of command as string?

I tried following code.
command1="echo"
"${command1}" 'case1'
command2="echo -e"
"${command2}" 'case2'
echo -e 'case3'
The output is the following:
case1
echo -e: command not found
case3
The case2 line results in an error, but the similar cases, case1 and case3, run fine. It seems that a command with an option cannot be recognized as a valid command.
I would like to know why it does not work. Please teach me. Thank you very much.
Case 1 (Unmodified)
command1="echo"
"${command1}" 'case1'
This is bad practice as an idiom, but there's nothing actively incorrect about it.
Case 2 (Unmodified)
command2="echo -e"
"${command2}" 'case2'
This is looking for a program named something like /usr/bin/echo -e, with the space as part of its name.
Case 2 (Reduced Quotes)
# works in this very specific case, but bad practice
command2="echo -e"
$command2 'case2' # WITHOUT THE QUOTES
...this one works, but only because your command isn't interesting enough (doesn't have quotes, doesn't have backslashes, doesn't have other shell syntax). See BashFAQ #50 for a description of why it isn't an acceptable practice in general.
Case X (eval -- Bad Practice, Oft Advised)
You'll often see this:
eval "$command1 'case1'"
...in this very specific case, where command1 and all arguments are hardcoded, this isn't exceptionally harmful. However, it's extremely harmful with only a small change:
# SECURITY BUGS HERE
eval "$command1 ${thing_to_echo}"
...if thing_to_echo='$(rm -rf $HOME)', you'll have a very bad day.
Best Practices
In general, commands shouldn't be stored in strings. Use a function:
e() { echo -e "$@"; }
e "this works"
...or, if you need to build up your argument list incrementally, an array:
e=( echo -e )
"${e[@]}" "this works"
Aside: On echo -e
Any implementation of echo where -e does anything other than emit the characters -e on output is failing to comply with the relevant POSIX standard, which recommends using printf instead (see the APPLICATION USAGE section).
Consider instead:
# a POSIX-compliant alternative to bash's default echo -e
e() { printf '%b\n' "$*"; }
...this not only gives you compatibility with non-bash shells, but also fixes support for bash in POSIX mode if compiled with --enable-xpg-echo-default or --enable-usg-echo-default, or if shopt -s xpg_echo was set, or if BASHOPTS=xpg_echo was present in the shell's environment at startup time.
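For instance, with that printf-based definition:
e 'first line\nsecond line'
# first line
# second line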
Suppose the variable command contains the value echo -e, and the command line given to the shell is:
"$command" 'case2'
The shell will search for a command called echo -e, with the space included in the name.
That command doesn't exist, and the shell reports the error.
The reason why this happens is shown in a flowchart of bash's command-line processing steps from O'Reilly's Learning the bash Shell, 3rd Edition:
Learning the bash Shell, 3rd Edition, by Cameron Newham. O'Reilly, March 2005. ISBN 0-596-00965-8, 352 pages.
If the variable is quoted (follow the right arrows in the chart), it goes almost directly (skipping steps 6, 7, and 8) to execution in step 12.
Therefore, the command searched for has not been split on spaces.
If the command line given to the shell is un-quoted:
$command 'case2'
The variable $command gets expanded in step 6 (parameter expansion), and then its value, echo -e, gets divided in step 9, "word splitting".
The shell then searches for the command echo with the argument -e.
The command echo "sees" an argument of -e and processes it as an option.
Trying to store commands inside a string is a very bad idea.
Try this, think very carefully about what you would expect the output to be, and then be surprised on execution:
$ command='echo -e case2; echo "next line"'; $command
To take a look at what happens, execute the command as this:
$ set -vx; $command; set +vx
It works on my machine if I give the command this way:
cmd2="echo -e"
If you are still facing a problem, I would suggest storing the options in another variable. That way, when writing shell scripts in which multiple commands use similar option values, you can reuse the variable. So also try something like this:
cmd1="echo"
opt1="-e"
$cmd1 $opt1 Hello

why does bash set command list functions

Why does the builtin BASH command set print shell functions? My searches on Google, and even Stack Overflow, haven't produced a satisfactory answer. From the manual page, "Without options, the name and value of each shell variable are displayed ... ." Adding options doesn't change this. Functions aren't variables ... are they?
For reference, this became a problem when I was using set to test for the existence of a variable:
set_version() {
    if [[ ! $(set | grep -q PKG_VERSION_STAMP) ]]; then
        echo Version not set. Using default
        PKG_VERSION_STAMP=0.1
    fi
    echo Package Version: $PKG_VERSION_STAMP
}
I tested the conditional construction from the interactive shell, and it worked, but that's because I hadn't defined it in a function. My script failed because the function was defined and the grep passed. Consequently, the variable, PKG_VERSION_STAMP, was never defined.
Incidentally, changing from set to env fixes the problem. I'd just like to know why BASH considers functions to be variables and prints their contents.
Should you want the set builtin not to list shell functions, you can switch to bash's POSIX mode:
$ set -o posix
$ set
...
While not documented in its manual, it has been a well-known fact since the Shellshock vulnerability was disclosed that bash stores its functions as special variables.
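Also, if the goal behind the grep was just "fall back to a default when the variable is unset", a parameter-expansion check avoids scanning the output of set or env entirely. A small sketch of that alternative (not what the question used):
set_version() {
    # ${VAR+x} expands to x only if VAR is set, so -z detects "unset"
    if [ -z "${PKG_VERSION_STAMP+x}" ]; then
        echo "Version not set. Using default"
        PKG_VERSION_STAMP=0.1
    fi
    echo "Package Version: $PKG_VERSION_STAMP"
}
# or, even shorter, default in one line (unset or empty):
# PKG_VERSION_STAMP=${PKG_VERSION_STAMP:-0.1}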
