Say I have a command I want to run (cmd) and a variable containing the arguments I want to pass to the function (something like --foo 'bar baz' qux). Like so:
#!/bin/sh
command=cmd
args="--foo 'bar baz' qux"
The arguments contain quotes, like the ones shown above, that group together an argument containing a space. I'd then like to run the command:
$command $args
This, of course, results in running the command with four arguments: --foo, 'bar, baz', and qux. The alternative I'm used to (i.e., when using "$@") presents a different problem:
$command "$args"
This executes the command with one argument: --foo 'bar baz' qux.
How can I run the command with three arguments (--foo, bar baz, and qux) as intended?
Use an array to specify your argument list exactly, without string-splitting (which is what's doing the wrong thing here) getting in your way:
args=( --foo "bar baz" qux )
command "${args[@]}"
If you need to build your argument list dynamically, you can append to arrays with +=:
args=( )
while ...; do
args+=( "$another_argument" )
done
call_your_subprocess "${args[@]}"
Note that the use of quotation marks, and [@] instead of [*], is essential.
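To see the difference concretely, here is a small sketch (count_args is a hypothetical helper, not part of the answer above):

```shell
#!/usr/bin/env bash
# Hypothetical helper: reports how many arguments it received.
count_args() { echo "$#"; }

args=( --foo "bar baz" qux )

count_args "${args[@]}"    # 3 -- each element is exactly one argument
count_args "${args[*]}"    # 1 -- all elements joined into a single word
count_args ${args[@]}      # 4 -- unquoted, so "bar baz" is split again
```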
If you can throw away the current positional variables ($1...) you can use the following:
set -- '--foo' 'bar baz' 'qux'
echo "$#" # Prints "3" (without quotes)
echo "$2" # Prints "bar baz" (without quotes)
command "$@"
Just tested it in a #!/usr/bin/env sh script, so it works at least in Dash, and should work in any Bourne Shell variant. No eval, Python or Bash necessary.
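If you need to build the list incrementally, the same technique extends naturally: the positional parameters serve as the POSIX-sh substitute for an array. A sketch with placeholder values:

```shell
#!/bin/sh
# Sketch: appending to the positional parameters one element at a time.
set --                    # clear the list
set -- "$@" --foo         # append each element, quoted
set -- "$@" "bar baz"
set -- "$@" qux
echo "$#"                 # 3
echo "$2"                 # bar baz
```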
One possibility is to use eval:
#!/bin/sh
args="--foo 'bar baz' qux"
cmd="python -c 'import sys; print sys.argv'"
eval $cmd $args
That way you cause the command line to be interpreted rather than just split according to IFS. This gives the output:
$ ./args.sh
['-c', '--foo', 'bar baz', 'qux']
So that you can see the args are passed as you wanted.
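The same check can be done without the Python dependency, using printf to show each resulting argument (a sketch):

```shell
#!/bin/sh
# Sketch: printf '%s\n' prints each of its arguments on its own line,
# showing how eval regroups the quoted words.
args="--foo 'bar baz' qux"
eval "printf '%s\n' $args"
# --foo
# bar baz
# qux
```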
If you have the command in the form:
args="--foo 'bar baz' qux"
and getting the command as an array in the first place isn't an option, then you'll need to use eval to turn it back into an array:
$ args="--foo 'bar baz' qux"
$ eval "arr=($args)"
But it's important to note that this is unsafe if $args is being provided by an untrusted source, since it can be used to execute arbitrary commands, e.g. args='$(rm -rf /)'; eval "arr=($args)" will cause the above code to run the rm -rf / before you've even used arr.
Then you can use "${arr[@]}" to expand it as arguments to a command:
$ bash -c 'echo $0' "${arr[@]}"
--foo
$ bash -c 'echo $1' "${arr[@]}"
bar baz
or to run your command:
"$command" "${arr[@]}"
Note that there are differences between ${arr[*]}, ${arr[@]}, "${arr[*]}" and "${arr[@]}", and only the last of these does what you want in most cases.
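Putting the two steps together (again assuming $args comes from a trusted source):

```shell
#!/usr/bin/env bash
# Sketch: round trip from a quoted string to an array and back to
# separate arguments. Only safe when $args is trusted input.
args="--foo 'bar baz' qux"
eval "arr=($args)"
echo "${#arr[@]}"    # 3 -- the quoted 'bar baz' stayed one element
echo "${arr[1]}"     # bar baz
```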
This works for me by changing the shell's IFS (internal field separator) to contain only a newline. Here is how I override commands that use quoted arguments:
$ cat ~/.local/bin/make
#!/bin/sh
# THIS IS IMPORTANT! (don't split up quoted strings in arguments)
IFS="
"
exec /usr/bin/make ${@} -j6
(/bin/sh is dash)
If you replace the command with echo for testing, the quotes will appear to be eaten, so the result will look wrong; but the real command does receive the arguments as intended.
It can be tested by replacing the exec line with
for arg in $@; do echo $arg; done
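To see the IFS effect in isolation, a sketch with a hypothetical show helper:

```shell
#!/bin/sh
# Sketch: with IFS reduced to a lone newline, unquoted ${@} no longer
# splits arguments on spaces, so space-containing arguments survive.
show() {
  IFS='
'
  for arg in ${@}; do printf '[%s]\n' "$arg"; done
}

show 'CFLAGS=-O2 -g' target
# [CFLAGS=-O2 -g]
# [target]
```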
I've looked at the similar posts about this problem, but cannot figure out how to get the executed code to be in the correct format, which needs to be foo --bar "a='b'". My best attempt at this was
#!/bin/bash -x
bar='--bar ''"''a='"'"'b'"'"'"'
cmd=(foo $bar)
echo ${cmd[@]}
eval ${cmd[@]}
The output from this is correct for the echo, but incorrect for eval
+ bar='--bar "a='\''b'\''"'
+ cmd=(foo $bar)
+ echo foo --bar '"a='\''b'\''"'
foo --bar "a='b'"
+ eval foo --bar '"a='\''b'\''"'
++ foo --bar 'a='\''b'\'''
What is the correct way to execute the command with the option?
If you must store command fragments, use functions or arrays, not strings.
An example of best-practice code, in accordance with BashFAQ #50:
#!/usr/bin/env bash
bar=( --bar a="b" )
cmd=(foo "${bar[@]}" )
printf '%q ' "${cmd[@]}" && echo # print code equivalent to the command we're about to run
"${cmd[@]}" # actually run this code
Bonus: Your debug output doesn't prove what you think it does.
"a='b'" and 'a='\''b'\''' are two different ways to quote the exact same string.
To prove this:
printf '%s\n' "a='b'" | md5sum -
printf '%s\n' 'a='\''b'\''' | md5sum -
...emits as output:
7f183df5823cf51ec42a3d4d913595d7 -
7f183df5823cf51ec42a3d4d913595d7 -
...so there's nothing at all different between how the arguments to echo $foo and eval $foo are being parsed in your code.
Why is this true? Because syntactic quotes aren't part of the command that's actually run; they're removed by the shell after it uses them to determine how to interpret a command line character-by-character.
So, let's break down what set -x is showing you:
'a='\''b'\'''
...consists of the following literal strings concatenated together:
a= (in a single-quoted context that is entered and ended by the single quotes surrounding)
' (in an unquoted context, escaped by the backslash that precedes it)
b (in a single-quoted context that is entered and ended by the single quotes surrounding)
' (in an unquoted context, escaped by the backslash that precedes it)
...everything else is syntactic, meaningful to the shell but not ever passed to the program foo.
If you want exactly the same expansion to happen as in echo ${cmd[@]}, just run the command then:
${cmd[@]}
It will execute:
+ foo --bar '"a='\''b'\''"'
Note that because the expansion is unquoted, words such as * will also undergo filename expansion.
The man page for Bash says, regarding the -c option:
-c string
If the -c option is present, then commands are read from
string. If there are arguments after
the string, they are assigned to the
positional parameters, starting with
$0.
So given that description, I would think something like this ought to work:
bash -c "echo arg 0: $0, arg 1: $1" arg1
but the output just shows the following, so it looks like the arguments after the -c string are not being assigned to the positional parameters.
arg 0: -bash, arg 1:
I am running a fairly ancient Bash (on Fedora 4):
[root@dd42 trunk]# bash --version
GNU bash, version 3.00.16(1)-release (i386-redhat-linux-gnu)
Copyright (C) 2004 Free Software Foundation, Inc.
I am really trying to execute a bit of a shell script with arguments. I thought -c looked very promising, hence the issue above. I wondered about using eval, but I don't think I can pass arguments to the stuff that follows eval. I'm open to other suggestions as well.
You need to use single quotes to prevent interpolation happening in your calling shell.
$ bash -c 'echo arg 0: $0, arg 1: $1' arg1 arg2
arg 0: arg1, arg 1: arg2
Or escape the variables in your double-quoted string. Which to use might depend on exactly what you want to put in your snippet of code.
Because $0 and $1 in your double-quoted string are expanded by your current shell before the new bash instance ever sees them.
Try :
bash -c "echo arg 0: \$0, arg 1: \$1" arg0 arg1
Here the $ in both is escaped, so the calling shell passes each through as a literal $ instead of expanding it; the expansion then happens inside the new bash instance.
The result of this command is:
arg 0: arg0, arg 1: arg1
Hope this helps.
martin is right about the interpolation: you need to use single quotes. But note that if you're trying to pass arguments to a command that is being executed within the string, you need to forward them on explicitly. For example, if you have a script foo.sh like:
#!/bin/bash
echo 0:$0
echo 1:$1
echo 2:$2
Then you should call it like this:
$ bash -c './foo.sh ${1+"$@"}' foo "bar baz"
0:./foo.sh
1:bar baz
2:
Or more generally bash -c '${0} ${1+"$@"}' <command> [argument]... (the ${1+...} guard exists for ancient shells in which an empty "$@" expanded to a single empty word; in modern shells, plain "$@" suffices).
Not like this:
$ bash -c ./foo.sh foo "bar baz"
0:./foo.sh
1:
2:
Nor like this:
$ bash -c './foo.sh $@' foo "bar baz"
0:./foo.sh
1:bar
2:baz
This means you can pass in arguments to sub-processes without embedding them in the command string, and without worrying about escaping them.
Add a backslash to the $0 (i.e., \$0), otherwise your current shell escapes $0 to the name of the shell before it even gets to the subshell.
Suppose you have the following command stored in a variable:
COMMAND='echo hello'
What's the difference between
$ eval "$COMMAND"
hello
$ bash -c "$COMMAND"
hello
$ $COMMAND
hello
? Why is the last version almost never used if it is shorter and (as far as I can see) does exactly the same thing?
The third form is not at all like the other two -- but to understand why, we need to go into the order of operations bash follows when interpreting a command, and look at which of those operations each method performs.
Bash Parsing Stages
Quote Processing
Splitting Into Commands
Special Operator Parsing
Expansions
Word Splitting
Globbing
Execution
Using eval "$string"
eval "$string" follows all the above steps starting from #1. Thus:
Literal quotes within the string become syntactic quotes
Special operators such as >() are processed
Expansions such as $foo are honored
Results of those expansions are split on whitespace (IFS characters) into separate words
Those words are expanded as globs if they contain glob characters with available matches, and finally the command is executed.
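A minimal sketch of this path: eval re-parses the string from step 1, so the embedded quotes become syntactic and the two words stay a single argument:

```shell
#!/usr/bin/env bash
# Sketch: eval runs the full parse, honoring the quotes inside the string.
string='printf "%s\n" "two words"'
eval "$string"
# two words
```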
Using sh -c "$string"
...performs the same steps as eval does, but in a new shell launched as a separate process; thus, changes to variable state, the current directory, etc., will expire when this new process exits. (Note, too, that the new shell may be a different interpreter supporting a different language; i.e., sh -c "foo" will not necessarily support the same syntax that bash, ksh, zsh, etc. do.)
Using $string
...starts at step 5, "Word Splitting".
What does this mean?
Quotes are not honored.
A string containing printf '%s\n' "two words" will thus split into the literal words printf, '%s\n', "two and words" -- the quote characters are kept as data instead of grouping the two words into one argument, as they would in the usual/expected behavior.
Splitting into multiple commands (on ;s, &s, or similar) does not take place.
Thus:
s='echo foo && echo bar'
$s
...will emit the following output:
foo && echo bar
...instead of the following, which would otherwise be expected:
foo
bar
Special operators and expansions are not honored.
No $(foo), no $foo, no <(foo), etc.
Redirections are not honored.
>foo or 2>&1 is just another word created by string-splitting, rather than a shell directive.
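A small sketch showing these effects at once -- the quote characters and >out.txt reach printf as ordinary arguments, and no file is created:

```shell
#!/usr/bin/env bash
# Sketch: bare $s undergoes word splitting and globbing only -- no quote
# removal, no redirection, no operators.
s='printf %s\n "two words" >out.txt'
$s
# "two
# words"
# >out.txt
```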
$ bash -c "$COMMAND"
This version starts up a new bash interpreter, runs the command, and then exits, returning control to the original shell. You don't need to be running bash at all in the first place to do this, you can start a bash interpreter from tcsh, for example. You might also do this from a bash script to start with a fresh environment or to keep from polluting your current environment.
EDIT:
As @CharlesDuffy points out, starting a new bash shell in this way will clear shell variables, but environment variables will be inherited by the spawned shell process.
Using eval causes the shell to parse your command twice. In the example you gave, executing $COMMAND directly or doing an eval are equivalent, but have a look at the answer here to get a more thorough idea of what eval is good (or bad) for.
There are at least times when they are different. Consider the following:
$ cmd="echo \$var"
$ var=hello
$ $cmd
$var
$ eval $cmd
hello
$ bash -c "$cmd"
$ var=world bash -c "$cmd"
world
which shows the different points at which variable expansion is performed. It's even more clear if we do set -x first
$ set -x
$ $cmd
+ echo '$var'
$var
$ eval $cmd
+ eval echo '$var'
++ echo hello
hello
$ bash -c "$cmd"
+ bash -c 'echo $var'
$ var=world bash -c "$cmd"
+ var=world
+ bash -c 'echo $var'
world
We can see here much of what Charles Duffy talks about in his excellent answer. For example, attempting to execute the variable directly prints $var literally: parameter expansion has already happened by the time the resulting words are formed, and those words are not scanned again, so we never get the value of var, as we do with eval.
The bash -c option only inherits exported variables from the parent shell, and since I didn't export var it's not available to the new shell.
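To see the export distinction directly (a sketch):

```shell
#!/usr/bin/env bash
# Sketch: bash -c starts a separate process, so it sees only exported
# (environment) variables, not plain shell variables.
cmd='echo $var'
var=hello
bash -c "$cmd"    # prints an empty line: var is not in the environment
export var
bash -c "$cmd"    # prints: hello
```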
I have the following script (example):
#!/bin/bash
while getopts a: opt; do
case "$opt" in
a) val="$OPTARG";;
?) echo "use the flag \"-a\""
exit 2;;
esac
done
echo "a specified with: ${val}"
When I now call this script with test.sh -a "here is a string", the output is a specified with: here, but I would like it to be a specified with: here is a string.
I know that I can call the script with test.sh -a here\ is\ a\ string or test.sh -a "here\ is\ a\ string" and it will work. But in my case I can not manipulate the string I want to pass.
So how can I change my getopts function to make it work?
I also tried getopt, but it behaved even worse:
commandsShort="a:"
commandsLong="aval:"
TEMP=`getopt \
-o $commandsShort \
-l $commandsLong \
-q \
-n "$0" -- "$@"`
What am I doing wrong?
This got solved in comments on your question. :-)
You're calling the script with:
eval "test.sh $@"
The effect of this "eval" line, if "here is a string" is your option, is to create the command line that is in the quotes:
test.sh here is a string
and evaluate it.
Per the additional comments, if you can avoid eval, you should.
That said, if you need it, you could always quote the string within the eval:
eval "test.sh \"$@\""
Or if you don't like escaping quotes, use singles, since your $@ will be expanded due to the outer quotes being double:
eval "test.sh '$@'"
And finally, as you mentioned in comments, just running directly may be the best option:
test.sh "$@"
Note that if your $@ includes the -a option, you may have a new problem. Consider the command line:
test.sh "-a here is a string"
In this case, your entire string, starting with -a, is found in $1. getopts will still recognize the -a, but it treats the rest of that single word as the option-argument, so OPTARG becomes " here is a string" (with a leading space) instead of the separate words you intended.
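For reference, a self-contained sketch of the working invocation, with the question's getopts loop wrapped in a hypothetical parse function:

```shell
#!/usr/bin/env bash
# Sketch: quoting the string as one word delivers it intact to OPTARG.
parse() {
  local OPTIND opt val
  while getopts a: opt; do
    case "$opt" in
      a) val=$OPTARG ;;
    esac
  done
  echo "a specified with: ${val}"
}

parse -a "here is a string"    # a specified with: here is a string
```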