I need to switch to the oracle user to change permissions on the tnsnames.ora file. I am passing the file path as an argument, but it looks like the syntax is wrong somewhere. I'd appreciate help fixing this issue.
Below is the relevant piece of my script:
#!/bin/bash
sudo su - oracle <<-"EOF"
chmod 777 "$1"
EOF
It fails with the following error:
/home/itsh->./dothis.sh /home/oracle/orasys/11.2.0.2/network/admin/tnsnames.ora
chmod: cannot access `': No such file or directory
If you're in any way concerned about security, the right thing to do is not to change your quoting, but to keep it as it is and use bash -s to pass your arguments to the shell running as the oracle user directly:
#!/bin/bash
sudo -u oracle bash -s "$@" <<-'EOF'
chmod 777 "$1"
EOF
...or, if you must use sudo su - oracle (which I'd argue is bad practice, and best avoided):
#!/bin/bash
printf -v sudo_cmd '%q ' bash -s "$@"
sudo su - oracle -c "$sudo_cmd" <<-'EOF'
chmod 777 "$1"
EOF
With either of these approaches, the inner shell performs the $1 expansion itself -- the data from the command line is never substituted into, and parsed as, code.
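For example, a hypothetical path containing spaces still arrives in the inner shell as a single, intact parameter:
./dothis.sh '/home/oracle/some dir/tnsnames.ora'
# Inside the heredoc, "$1" expands (in the oracle shell) to the whole path;
# nothing from the command line is pasted into the script text itself.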
The operation of a here document is specified in the POSIX spec where it says:
If any character in word is quoted, the delimiter shall be formed by performing quote removal on word, and the here-document lines shall not be expanded. Otherwise, the delimiter shall be the word itself.
If no characters in word are quoted, all lines of the here-document shall be expanded for parameter expansion, command substitution, and arithmetic expansion. In this case, the <backslash> in the input behaves as the <backslash> inside double-quotes (see Double-Quotes). However, the double-quote character ( '"' ) shall not be treated specially within a here-document, except when the double-quote appears within "$()", "``", or "${}".
So by using <<-"EOF" (instead of <<-EOF) as your here-document delimiter, you are explicitly telling the shell not to expand any variables (from the outer shell's context) in the here-document contents.
This is often what you want when you are using a heredoc for a shell snippet, but in your case it is exactly the opposite of what you are looking for.
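A minimal sketch of the two behaviours side by side (a generic variable, nothing Oracle-specific):
name=world
cat <<EOF        # unquoted delimiter: expansions happen
hello $name
EOF
cat <<'EOF'      # quoted delimiter: contents taken literally
hello $name
EOF
# the first prints "hello world", the second prints "hello $name"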
Related
I'm trying to interpolate variables inside of a bash heredoc:
var=$1
sudo tee "/path/to/outfile" > /dev/null << "EOF"
Some text that contains my $var
EOF
This isn't working as I'd expect ($var is treated literally, not expanded).
I need to use sudo tee because creating the file requires sudo. Doing something like:
sudo cat > /path/to/outfile <<EOT
my text...
EOT
Doesn't work, because >outfile opens the file in the current shell, which is not using sudo.
In answer to your first question, there's no parameter substitution because you've put the delimiter in quotes - the bash manual says:
The format of here-documents is:
<<[-]word
here-document
delimiter
No parameter expansion, command substitution, arithmetic expansion, or
pathname expansion is performed on word. If any characters in word are
quoted, the delimiter is the result of quote removal on word, and the
lines in the here-document are not expanded. If word is unquoted, all
lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion. [...]
If you change your first example to use <<EOF instead of << "EOF" you'll find that it works.
In your second example, the shell invokes sudo only with the parameter cat, and the redirection applies to the output of sudo cat as the original user. It'll work if you try:
sudo sh -c "cat > /path/to/outfile" <<EOT
my text...
EOT
Don't use quotes with <<EOF:
var=$1
sudo tee "/path/to/outfile" > /dev/null <<EOF
Some text that contains my $var
EOF
Variable expansion is the default behavior inside of here-docs. You disable that behavior by quoting the label (with single or double quotes).
As a late corollary to the earlier answers here, you may end up in situations where you want some but not all variables to be interpolated. You can solve that by using backslashes to escape dollar signs and backticks, or you can put the static text in a variable.
Name='Rich Ba$tard'
dough='$$$dollars$$$'
cat <<____HERE
$Name, you can win a lot of $dough this week!
Notice that \`backticks' need escaping if you want
literal text, not `pwd`, just like in variables like
\$HOME (current value: $HOME)
____HERE
Demo: https://ideone.com/rMF2XA
Note that any of the quoting mechanisms -- \____HERE or "____HERE" or '____HERE' -- will disable all variable interpolation, and turn the here-document into a piece of literal text.
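For instance, quoting the label in the same demo leaves everything untouched:
cat <<'____HERE'
$Name, you can win a lot of $dough this week!
____HERE
# prints the line verbatim; $Name and $dough are not expanded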
A common task is to combine local variables with a script that should be evaluated by a different shell, programming language, or remote host.
local=$(uname)
ssh -t remote <<:
echo "$local is the value from the host which ran the ssh command"
# Prevent here doc from expanding locally; remote won't see backslash
remote=\$(uname)
# Same here
echo "\$remote is the value from the host we ssh:ed to"
:
I am attempting to parse the output of a VNC server startup event and have run into a problem in parsing using sed in a command substitution. Specifically, the remote VNC server is started in a manner such as the following:
address1="user1@lxplus.cern.ch"
VNCServerResponse="$(ssh "${address1}" 'vncserver' 2>&1)"
The standard error output produced in this startup event is then to be parsed in order to extract the server and display information. At this point the content of the variable VNCServerResponse is something such as the following:
New 'lxplus0186.cern.ch:1 (user1)' desktop is lxplus0186.cern.ch:1
Starting applications specified in /afs/cern.ch/user/u/user1/.vnc/xstartup
Log file is /afs/cern.ch/user/u/user1/.vnc/lxplus0186.cern.ch:1.log
This output can be parsed in the following way in order to extract the server and display information:
echo "${VNCServerResponse}" | sed '/New.*desktop.*is/!d' \
| awk -F" desktop is " '{print $2}'
The result is something such as the following:
lxplus0186.cern.ch:1
What I want to do is use this parsing in a command substitution something like the following:
VNCServerAndDisplayNumber="$(echo "${VNCServerResponse}" \
| sed '/New.*desktop.*is/!d' | awk -F" desktop is " '{print $2}')"
On attempting to do this, I am presented with the following error:
bash: !d': event not found
I am not sure how to address this. It appears to be a problem in the way sed is being used in the command substitution. I would appreciate guidance.
Bash history expansion is a very odd corner in the bash command line parser, and you are clearly running into an unexpected history expansion, which is explained below. However, any sort of history expansion in a script is unexpected, because normally history expansion is not enabled in scripts; not even scripts run with the source (or .) builtin.
How history expansion is enabled (or disabled)
There are two shell options which control history expansion:
set -o history: Required for the history to be recorded.
set -H (or set -o histexpand): Additionally required for history expansion to be enabled.
Both of these options must be set for history expansion to be recognized. (I found the manual unclear on this interaction, but it's logical enough.)
According to the bash manual, these options are unset for non-interactive shells, so if you want to enable history expansion in a script (and I cannot imagine a reason you would want this), you would need to set both of them:
set -o history -o histexpand
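As a sketch only (this assumes the behaviour described above holds; enabling history expansion in a script is almost never useful):
#!/bin/bash
set -o history -o histexpand   # both options, as described above
echo first command
!!                             # should now expand to, and re-run, "echo first command"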
The situation for scripts run with source is more complicated (and what I'm about to say only applies to bash v4; since it's undocumented, it might change in the future). [Note 3]
History recording (and consequently expansion) is turned off in source'd scripts, but through an internal flag which, as far as I know, is not made visible. It certainly does not appear in $SHELLOPTS. Since a sourced script runs in the current bash context, it shares the current execution environment, including shell options. So in the execution of a sourced script initiated from an interactive session, you'll see both history and histexpand in $SHELLOPTS, but no history expansion will take place. In order to enable it, you need to:
set -o history
which is not a no-op because it has the side-effect of resetting the internal flag which suppresses history recording. Setting the histexpand shell option does not have this side-effect.
In short, I'm not sure how you managed to enable history expansion in a script (if, indeed, the misbehaving command was in a script and not in an interactive shell), but you might want to consider not doing so, unless you have a really good reason.
How history expansion is parsed
The bash implementation of history expansion is designed to work with readline, so that it can be performed during command input. (By default this function is bound to Meta-^; generally Meta is ESC, but you can customize that as well.) However, it is also performed immediately after each line is input, before any bash parsing is performed.
By default, the history expansion character is !, and -- as mostly documented -- that will trigger history expansion except:
when it is followed by whitespace or =
if the shell option extglob is set, and it is followed by ( [Note 1]
if it appears in a single-quoted string
if it is preceded by a \ [Note 2 and see below]
if it is preceded by $ or ${ [Note 1]
if it is preceded by [ [Note 1]
(As of bash v4.3) if it is the last character in a double-quoted string.
The immediate issue here is the precise interpretation of the third case, an ! appearing inside of a single-quoted string. Normally, bash starts a new quoting context for a command substitution ($(...) or the deprecated backtick notation). For example:
$ s=SUBSTITUTED
$ # The interior single quotes are just characters
$ echo "'Echoing $s'"
'Echoing SUBSTITUTED'
$ # The interior single quotes are single quotes
$ echo "$(echo 'Echoing $s')"
Echoing $s
However, the history expansion scanner isn't that intelligent. It keeps track of quotes, but not of command substitution. So as far as it is concerned, both of the single quotes in the above example are double-quoted single quotes, which is to say ordinary characters. So history expansion occurs in both of them:
# A no-op to indicate history expansion
$ HIST() { :; }
# Single-quoted strings inhibit history expansion
$ HIST
$ echo '!!'
!!
# Double-quoted strings allow history expansion
$ HIST
$ echo "'!!'"
echo "'HIST'"
'HIST'
# ... and it applies also to interior command substitution.
$ HIST
$ echo "$(echo '!!')"
echo "$(echo 'HIST')"
HIST
So if you have a perfectly normal command like sed '/foo/!d' file, where you would expect the single-quotes to protect you from history-expansion, and you put it inside a double-quoted command substitution:
result="$(sed '/foo/!d' file)"
you suddenly find that the ! is a history expansion character. Worse, you can't fix this by backslash escaping the exclamation point, because although "\!" inhibits history expansion, it doesn't remove the backslash:
$ echo "\!"
\!
In this particular example -- and the one in the OP -- the double quotes are completely unnecessary, because the right-hand side of a variable assignment undergoes neither filename expansion nor word splitting. However, there are other contexts in which removing the double quotes would change the semantics:
# Undesired history expansion
printf "The answer is '%s'\n" "$(sed '/foo/!d' file)"
# Undesired word splitting
printf "The answer is '%s'\n" $(sed '/foo/!d' file)
In this case, the best solution is probably to put the sed argument in a variable
# Works
sed_prog='/foo/!d'
printf "The answer is '%s'\n" "$(sed "$sed_prog" file)"
(The quotes around $sed_prog were not necessary in this case but usually they would be, and they do no harm.)
Notes:
The inhibition of history expansion when the following character is some form of open parenthesis only works if there is a corresponding close parenthesis in the rest of the string. However, it doesn't have to really match the open parenthesis. For example:
# No matching close parenthesis
$ echo "!("
bash: !: event not found
# The matching close parenthesis has nothing to do with the open
$ echo "!(" ")"
!( )
# An actual extended glob: files whose names don't start with a
$ echo "!(a*)"
b
As indicated in the bash manual, a history-expansion character is treated as an ordinary character if immediately preceded by a backslash. This is literally true; it doesn't matter whether the backslash will later be considered an escape character or not:
$ echo \!
!
$ echo \\!
\!
$ echo \\\!
\!
\ also inhibits history expansion inside double quotes, but \! is not a valid escape sequence inside the double quoted string, so the backslash is not removed:
$ echo "\!"
\!
$ echo "\\!"
\!
$ echo "\\\!"
\\!
I'm referring to the source code for bash v4.2 as I write this, so any undocumented behaviour may be completely different as of v4.3.
The problem is that within double quotes, bash is trying to expand !d before passing it to the subshell. You can get around this problem by removing the double quotes but I would also propose a simplification to your script:
VNCServerAndDisplayNumber=$(echo "$VNCServerResponse" | awk '/desktop/ {print $NF}')
This simply prints the last field on the line containing the word "desktop".
On a newer bash, you can use a herestring rather than piping an echo:
VNCServerAndDisplayNumber=$(awk '/desktop/ {print $NF}' <<<"$VNCServerResponse")
Don't wrap the $(...) command substitution in double quotes. You are asking the shell to perform evaluation on the contents of the quotes and are hitting the history expansion feature. Drop the quotes and you stop telling the shell to do that, and you won't hit that problem.
And yes, dropping those quotes is safe on that assignment line even if the output may contain spaces or newlines or whatever. Assignments of that sort are not going to split on those the way command substitution or variable evaluation will on a normal shell execution line.
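A small illustration of that point (hypothetical text, nothing VNC-specific):
out=$(printf 'two  words\nand a newline')   # quotes around $(...) not needed in an assignment
echo "$out"    # spacing and the newline are preserved
echo $out      # an unquoted *use*, by contrast, does word-split (and glob)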
Alternatively, disable history expansion in your shell/script before you run that. (It should be off when running a script by default I believe anyway.)
This only happens when history expansion is enabled, which it normally isn't and definitely shouldn't be for scripts.
Rather than trying to work around it, figure out why history expansion is enabled and what to do so it isn't.
If you're executing your script with . foo or source foo, use ./foo instead.
If you're writing this as a function in .bashrc or similar, consider making it a separate script.
If your script (or BASH_ENV) explicitly does set -H, don't.
Quote it with '' or \ or disable history expansion with set +H or shopt -u -o histexpand. See History Expansion.
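For example, any of these avoid the error at an interactive prompt:
$ echo '!d'      # single quotes: no history expansion
!d
$ echo \!d       # backslash: ! treated as an ordinary character
!d
$ set +H         # or switch history expansion off entirely
$ echo "!d"
!d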
Everybody says eval is evil, and you should use $() as a replacement. But I've run into a situation where the unquoting isn't handled the same inside $().
Background: I've been burned too often by file paths with spaces in them, so I like to quote all such paths. I'm also paranoid about knowing where all my executables come from. And, not trusting myself, I like to be able to display the constructed commands I'm about to run.
Below I try variations on eval vs. $(), with and without quoting the command name (because it could contain spaces):
BIN_LS="/bin/ls"
thefile="arf"
thecmd="\"${BIN_LS}\" -ld -- \"${thefile}\""
echo -e "\n Running command '${thecmd}'"
$($thecmd)
Running command '"/bin/ls" -ld -- "arf"'
./foo.sh: line 8: "/bin/ls": No such file or directory
echo -e "\n Eval'ing command '${thecmd}'"
eval $thecmd
Eval'ing command '"/bin/ls" -ld -- "arf"'
/bin/ls: cannot access arf: No such file or directory
thecmd="${BIN_LS} -ld -- \"${thefile}\""
echo -e "\n Running command '${thecmd}'"
$($thecmd)
Running command '/bin/ls -ld -- "arf"'
/bin/ls: cannot access "arf": No such file or directory
echo -e "\n Eval'ing command '${thecmd}'"
eval $thecmd
Eval'ing command '/bin/ls -ld -- "arf"'
/bin/ls: cannot access arf: No such file or directory
$("/bin/ls" -ld -- "${thefile}")
/bin/ls: cannot access arf: No such file or directory
So... this is confusing. A quoted command path is valid everywhere except inside a $() construct? A shorter, more direct example:
$ c="\"/bin/ls\" arf"
$ $($c)
-bash: "/bin/ls": No such file or directory
$ eval $c
/bin/ls: cannot access arf: No such file or directory
$ $("/bin/ls" arf)
/bin/ls: cannot access arf: No such file or directory
$ "/bin/ls" arf
/bin/ls: cannot access arf: No such file or directory
How does one explain the simple $($c) case?
The use of " to quote words is part of your interaction with Bash. When you type
$ "/bin/ls" arf
at the prompt, or in a script, you're telling Bash that the command consists of the words /bin/ls and arf, and the double-quotes are really emphasizing that /bin/ls is a single word.
When you type
$ eval '"/bin/ls" arf'
you're telling Bash that the command consists of the words eval and "/bin/ls" arf. Since the purpose of eval is to pretend that its argument is an actual human-input command, this is equivalent to running
$ "/bin/ls" arf
and the " gets processed just like at the prompt.
Note that this pretense is specific to eval; Bash doesn't usually go out of its way to pretend that something was an actual human-typed command.
When you type
$ c='"/bin/ls" arf'
$ $c
the $c gets substituted, and then undergoes word splitting (see §3.5.7 "Word Splitting" in the Bash Reference Manual), so the words of the command are "/bin/ls" (note the double-quotes!) and arf. Needless to say, this doesn't work. (It's also not very safe, since in addition to word-splitting, $c also undergoes filename-expansion and whatnot. Generally your parameter-expansions should always be in double-quotes, and if they can't be, then you should rewrite your code so they can be. Unquoted parameter-expansions are asking for trouble.)
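You can inspect the words the shell actually produces from the unquoted expansion (assuming the default IFS and no filenames matching as globs); note the double-quote characters surviving as ordinary text:
$ c='"/bin/ls" arf'
$ printf '<%s>\n' $c
<"/bin/ls">
<arf>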
When you type
$ c='"/bin/ls" arf'
$ $($c)
this is the same as before, except that now you're also trying to use the output of the nonworking command as a new command. Needless to say, that doesn't cause the nonworking command to suddenly work.
As Ignacio Vazquez-Abrams says in his answer, the right solution is to use an array, and handle the quoting properly:
$ c=("/bin/ls" arf)
$ "${c[#]}"
which sets c to an array with two elements, /bin/ls and arf, and uses those two elements as the words of a command.
With the fact that it doesn't make sense in the first place. Use an array instead.
$ c=("/bin/ls" arf)
$ "${c[#]}"
/bin/ls: cannot access arf: No such file or directory
From the man page for bash, regarding eval:
eval [arg ...]:
The args are read and concatenated together into a single command.
This command is then read and executed by the shell, and its exit
status is returned as the value of eval.
When c is defined as "\"/bin/ls\" arf", the outer quotes will cause the entire thing to be processed as the first argument to eval, which is expected to be a command or program. You need to pass your eval arguments in such a way that the target command and its arguments are listed separately.
The $(...) construct behaves differently than eval because it is not a command that takes arguments. It can process the entire command at once instead of processing arguments one at a time.
A note on your original premise: the main reason people say that eval is evil is that it is commonly used by scripts to execute a user-provided string as a shell command. While handy at times, this is a major security problem (there's typically no practical way to safety-check the string before executing it). The security problem doesn't apply if you are using eval on hard-coded strings inside your script, as you are doing. However, it's typically easier and cleaner to use $(...) or `...` inside of scripts for command substitution, leaving no real use case for eval.
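A contrived sketch of the kind of injection that makes eval on externally supplied strings risky (the variable name and payload here are made up for illustration):
userinput='arf; echo INJECTED by $(id -un)'   # hypothetical attacker-controlled string
eval "ls -l $userinput"                        # runs ls, then the injected echo (and the id command)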
Using set -vx helps us understand how bash processes the command string.
In the trace output you can see that a directly typed "command" works because the quotes are stripped during processing. However, when $c (quoted twice) is used, only the outside single quotes are removed. eval can process the string as its argument, and the outer quotes are removed step by step.
It probably just comes down to how bash semantically processes the string and its quotes.
Bash has many odd behaviours around quote processing:
Bash inserting quotes into string before execution
How do you stop bash from stripping quotes when running a variable as a command?
Bash stripping quotes - how to preserve quotes
I have a script I wrote for switching to root or running a command as root without a password. I edited my /etc/sudoers file so that my user [matt] has permission to run /bin/su with no password. This is my script "s" contents:
matt: ~ $ cat ~/bin/s
#!/bin/bash
[ "$1" != "" ] && c='-c'
sudo su $c "$*"
If there are no parameters [simply s], it basically calls sudo su, which goes to root with no password. But if I pass parameters, the $c variable equals "-c", which makes su execute a single command.
It works well, except when I need to use spaces. For example:
matt: ~ $ touch file\ with\ spaces
matt: ~ $ s chown matt file\ with\ spaces
chown: cannot access 'file': No such file or directory
chown: cannot access 'with': No such file or directory
chown: cannot access 'spaces': No such file or directory
matt: ~ $ s chown matt 'file with spaces'
chown: cannot access 'file': No such file or directory
chown: cannot access 'with': No such file or directory
chown: cannot access 'spaces': No such file or directory
matt: ~ $ s chown matt 'file\ with\ spaces'
matt: ~ $
How can I fix this?
Also, what's the difference between $* and $@?
Ah, fun with quoting. Usually, the approach @John suggests will work for this: use "$@", and it won't try to interpret (and get confused by) the spaces and other funny characters in your parameters. In this case, however, that won't work because su's -c option expects the entire command to be passed as a single parameter, and then it'll start a new shell which parses the command (including getting confused by spaces and such). In order to avoid this, you actually need to re-quote the parameters within the string you're going to pass to su -c. bash's printf builtin can do this:
#!/bin/bash
if [ $# -gt 0 ]; then
sudo su -c "$(printf "%q " "$@")"
else
sudo su
fi
Let me go over what's happening here:
You run a command like s chown matt file\ with\ spaces
bash parses this into a list of words: "s" "chown" "matt" "file with spaces". Note that at this point the escapes you typed have been removed (although they had their intended effect: making bash treat those spaces as part of a parameter, rather than separators between parameters).
When bash parses the printf "%q " "$@" command in the script, it replaces "$@" with the arguments to the script, with parameter breaks intact. It's equivalent to printf "%q " "chown" "matt" "file with spaces".
printf interprets the format string "%q " to mean "print each remaining parameter in quoted form, with a space after it". It prints: "chown matt file\ with\ spaces ", essentially reconstructing the original command line (it has an extra space on the end, but this turns out not to be a problem).
This is then passed to sudo as a parameter (since there are double-quotes around the $() construct, it'll be treated as a single parameter to sudo). This is equivalent to running sudo su -c "chown matt file\ with\ spaces ".
sudo runs su, and passes along the rest of the parameter list it got including the fully escaped command.
su runs a shell, and it also passes along the rest of its parameter list.
The shell executes the command it got as an argument to -c: chown matt file\ with\ spaces. In the normal course of parsing it, it'll interpret the unescaped spaces as separators between parameters, and the escaped spaces as part of a parameter, and it'll ignore the extra space at the end.
The shell runs chown, with the parameters "matt" and "file with spaces". This does what you expected.
Isn't bash parsing a hoot?
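You can watch the printf %q re-quoting step in isolation, using the same hypothetical arguments:
$ printf '%q ' chown matt 'file with spaces'; echo
chown matt file\ with\ spaces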
"$*" collects all the positional parameters ($1, $2, …) into a single word, separated by one space (more generally, the first character of $IFS). Note that in shell terminology, a word can include any character including spaces: "foo bar" or foo\ bar parses to a single word. For example, if there are three arguments, then "$*" is equivalent to "$1 $2 $3". If there is no argument, then "$*" is equivalent to "" (an empty word).
"$#" is a special syntax that expands to the list of positional parameters, each in its own word. For example, if there are three arguments, then "$#" is equivalent to "$1" "$2" "$3". If there is no argument, then "$#" is equivalent to nothing (an empty list, not a list with one word that is empty).
"$#" is almost always what you want to use, as opposed to "$*", or unquoted $* or $# (the last two are exactly equivalent and perform filename generation (a.k.a. globbing) and word splitting on all the positional parameters).
There's an additional problem, which is that su expects a single shell command as the argument of -c, and you're passing it multiple words. You've had a detailed explanation of getting the quoting right, but let me add advice on how to do it in a way that sidesteps the double-quoting issues. You may also want to refer to https://unix.stackexchange.com/questions/3063/how-do-i-run-a-command-as-the-system-administrator-root for more background on sudo and su.
sudo already runs a command as root, so there's no need to invoke su. If your script has no arguments, you can just run a shell directly; unless your version of sudo is very old, there's an option for that: sudo -s. So your script can be:
#!/bin/sh
if [ $# -eq 0 ]; then set -- -s; else set -- -- "$@"; fi
exec sudo "$@"
(The else part is to handle the rare case of a command that begins with -.)
I wouldn't bother with such a short script though. Running a command as root is unusual and risky enough that typing the three extra characters shouldn't be a problem. Running a shell as root is even more unusual and risky and surely deserves six more characters.