Execute command with multiple layers of quoting via ssh - bash

I want to execute a docker command on a remote server. The problem is I don't know how to escape multiple levels of quotes.
ret=$(ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep 'ServerName' | cut -d '|' -f1'")
I get
bash: -f1: command not found

There's little need to execute so much on the remote host. The file you want to search isn't likely that big: just pipe the entire thing down via ssh to a local awk process:
ret=$(ssh root@server "docker exec nginx cat /etc/nginx/nginx.conf" |
awk -F'|' '/ServerName/ {print $1}')
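If you would rather keep the original grep/cut pipeline, the same idea works with those tools run locally instead of awk (a sketch built from the commands above):
ret=$(ssh root@server "docker exec nginx cat /etc/nginx/nginx.conf" | grep ServerName | cut -d '|' -f1)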

Wrap your parameter string in N nested calls to "$(printf "%q" ...)", one for each of the N levels that interpret it.
ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1'"
How many levels of interpretation does the above line go through? I don't wish to set up docker just for the test, so I may have one of the following wrong:
ssh - certainly counts
docker - ??
nginx - ??
bash - certainly counts
If there are four, then you need four nested calls to "$(printf "%q" "str")"; don't forget to add all those " marks:
ssh root@server docker exec nginx bash -c "$(printf "%q" "$(printf "%q" "$(printf "%q" "$(printf "%q" "cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1")")")")"
Explanation: ssh parses the string like bash -c does, stripping one level of quotes. docker and nginx may also each parse the string (or not). Finally, bash -c parses whatever the previous levels have parsed and removes the final level of quotes. exec does not parse the strings; it simply passes them verbatim to the next level.
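To get a feel for what each %q layer does, you can run one level locally (a sketch; the exact escaping bash prints may vary slightly by version):
printf '%q\n' "cat /etc/nginx/nginx.conf | grep ServerName | cut -d '|' -f1"
which prints something like
cat\ /etc/nginx/nginx.conf\ \|\ grep\ ServerName\ \|\ cut\ -d\ \'\|\'\ -f1
Each additional nested call escapes the previous result once more, so the string survives one more round of word splitting and quote removal.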
Another solution is to put the line that you want bash to execute into a script. Then you can simply invoke the script without all this quoting insanity.
#!/bin/bash
< /etc/nginx/nginx.conf grep ServerName | cut -d '|' -f1
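One way to run it without first copying anything into the container is to feed the script to a bash started inside the container (a sketch; get_servername.sh is a hypothetical name for the script above):
ret=$(ssh root@server "docker exec -i nginx bash" < get_servername.sh)
Here ssh forwards the local file on stdin, and docker exec -i passes that stdin through to bash running in the container.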

Consider using a here-document:
ret="$(ssh root#server << 'EOF'
docker exec nginx bash -c "grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1"
EOF
)"
echo "$ret"
Or, simpler, as suggested by @MichaelVeksler:
ret="$(ssh root#server docker exec -i nginx bash << 'EOF'
grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1
EOF
)"
echo "$ret"

Related

OS version capture script - unexpected results when using awk

I have a small shell script as follows that I am using to log in to multiple servers to capture whether the target server is using Redhat or Ubuntu as the OS version.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works; however, if I attempt to add in some awk as shown below and the server is a Redhat server, my results in osver.report.txt will only show the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print $1,$2,$6,$7}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk '{print ,,,}' | grep -i DISTRIB_DESCRIPTION /etc/lsb-release
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print $1,$2,$6,$7}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits variable expansion in the here-document; without that, expressions like $1 would be expanded locally.
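Alternatively (a sketch, not part of the original suggestion), you can keep the command as a quoted argument and escape the dollar signs so they reach the remote awk intact:
OSVER=$(ssh $USER@${x} "awk '{print \$1,\$2,\$6,\$7}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")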
A better solution would be to look into something like ansible, which has built-in facilities for ssh-ing to groups of hosts and collecting facts about them, including distribution version information.
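For example, something along these lines reports distribution facts for every host in a plain inventory file (a sketch; inventory layout and connection options depend on your environment):
ansible all -i hostlist -m setup -a 'filter=ansible_distribution*'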

eval printf works from command line but not in script

When I run the following command in a terminal it works, but not from a script:
eval $(printf "ssh foo -f -N "; \
for port in $(cat ~/bar.json | grep '_port' | grep -o '[0-9]\+'); do \
printf "-L $port:127.0.0.1:$port ";\
done)
The error I get tells me that the printf usage is wrong, as if the -L argument within quotes had been an argument to printf itself.
I was wondering why that is the case. Am I missing something obvious?
Context (in case my issue is an XY problem): I want to start and connect to a jupyter kernel running on a remote computer. To do so I wrote a small script that
sends a command via ssh telling the remote to start the kernel
copies via scp a configuration file that I can use to connect to the kernel from my local computer
reads the configuration file and opens appropriate ssh tunnels between local and remote
For those not familiar with jupyter, a configuration file (bar.json) looks more or less like the following:
{
"shell_port": 35932,
"iopub_port": 37145,
"stdin_port": 42704,
"control_port": 39329,
"hb_port": 39253,
"ip": "127.0.0.1",
"key": "4cd3e12f-321bcb113c204eca3a0723d9",
"transport": "tcp",
"signature_scheme": "hmac-sha256",
"kernel_name": ""
}
And so, in my command above, the printf statement creates an ssh command with all five -L port forwardings so that my local computer can connect to the remote, and eval should run that command. Here's the full script:
#!/usr/bin/env bash
# Tell remote to start a jupyter kernel.
ssh foo -t 'python -m ipykernel_launcher -f ~/bar.json' &
# Wait a bit for the remote kernel to launch and write conf. file
sleep 5
# Copy the conf. file from remote to local.
scp foo:~/bar.json ~/bar.json
# Parse the conf. file and open ssh tunnels.
eval $(printf "ssh foo -f -N "; \
for port in $(cat ~/bar.json | grep '_port' | grep -o '[0-9]\+'); do \
printf "-L $port:127.0.0.1:$port ";\
done)
Finally, jupyter console --existing ~/foo.json connects to the remote.
As @that other guy says, bash's printf builtin barfs on printf "-L ...". It thinks you're passing it a -L option. You can fix it by adding --:
printf -- "-L $port:127.0.0.1:$port "
Let's make that:
printf -- '-L %s:127.0.0.1:%s ' "$port" "$port"
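To see why the -- matters, compare (a sketch):
printf '-L %s:127.0.0.1:%s ' 8080 8080     # bash: printf: -L: invalid option
printf -- '-L %s:127.0.0.1:%s ' 8080 8080  # prints: -L 8080:127.0.0.1:8080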
But since we're here, we can do a lot better. First, let's not process JSON with basic shell tools. We don't want to rely on it being formatted a certain way. We can use jq, a lightweight and flexible command-line JSON processor.
$ jq -r 'to_entries | map(select(.key | test(".*_port"))) | .[].value' bar.json
35932
37145
42704
39329
39253
Here we use to_entries to convert each field to a key-value pair. Then we select entries where the .key matches the regex .*_port. Finally we extract the corresponding .values.
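A slightly shorter, equivalent filter (a sketch; endswith needs jq 1.5 or newer):
jq -r 'to_entries[] | select(.key | endswith("_port")) | .value' bar.json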
We can get rid of eval by constructing the ssh command in an array. It's always good to avoid eval when possible.
#!/bin/bash
readarray -t ports < <(jq -r 'to_entries | map(select(.key | test(".*_port"))) | .[].value' bar.json)
ssh=(ssh foo -f -N)
for port in "${ports[#]}"; do ssh+=(-L "$port:127.0.0.1:$port"); done
"${ssh[#]}"

running a pipe command with variable substitution on remote host

I'd like to run a piped command with variable substitution on a remote host and redirect the output. Given that the login shell is csh, I have to use "bash -c". With help from users nlrc and jerdiggity, a command with no variable substitution can be formulated as:
localhost$ ssh -f -q remotehost 'bash -c "ls /var/tmp/ora_flist.sh|xargs -L1 cat >/var/tmp/1"'
but the single quotes above preclude using variable substitution, say, substituting a variable $filename for ora_flist.sh. How can I accomplish that?
Thanks.
So your problem is that you want the shell variable to be expanded locally. Something like this should work: just leave the variable outside the single quotes, e.g.
ssh -f -q remotehost 'bash -c "ls '$filename' | xargs ..."'
Another very useful trick to avoid the quoting hell is to use a heredoc, e.g.
ssh -f -q remotehost <<EOF
bash -c "ls $filename | xargs ... "
EOF
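Putting that together with the original command (a sketch; assumes filename is set in the local shell):
filename=ora_flist.sh
ssh -f -q remotehost <<EOF
bash -c "ls /var/tmp/$filename | xargs -L1 cat >/var/tmp/1"
EOF
Because the EOF delimiter is unquoted, $filename is expanded locally before the text is sent to the remote shell.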

Passing Bash Command Through SSH - Executing Variable Capture

I am passing the following command straight through SSH:
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /key/path server@111.111.111.111 'bash -s' << EOF
FPM_EXISTS=`ps aux | grep php-fpm`
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF
I get the following error:
[2015-02-25 22:45:23] local.INFO: bash: line 1: syntax error near unexpected token `('
bash: line 1: ` FPM_EXISTS=root 2378 0.0 0.9 342792 18692 ? Ss 17:41 0:04 php-fpm: master process (/etc/php5/fpm/php-fpm.conf)
It's like it is trying to execute the output of ps aux | grep php-fpm instead of just capturing it in the variable. So, if I change the command to try to capture ls, it acts like it tries to execute that as well, of course returning "command not found" for each directory.
If I just paste the contents of the Bash script into a file and run it it works fine; however, I can't seem to figure out how to pass it through SSH.
Any ideas?
You need to wrap the starting EOF in single quotes. Otherwise ps aux | grep php-fpm would be interpreted by the local shell.
The command should look like this:
ssh ... server@111.111.111.111 'bash -s' << 'EOF'
FPM_EXISTS=$(ps aux | grep php-fpm)
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF
Check this: http://tldp.org/LDP/abs/html/here-docs.html (Section 19.7)
Btw, I would encourage you to consistently use $() instead of backticks for command substitution, because of the ability to nest them. You will have more fun, believe me. Check this for example: What is the benefit of using $() instead of backticks in shell scripts?
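A quick local illustration of the quoted-vs-unquoted delimiter difference (a sketch):
cat << EOF
$(hostname)
EOF
expands the command substitution and prints your hostname, whereas
cat << 'EOF'
$(hostname)
EOF
prints the literal text $(hostname).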
You should wrap the EOF in single quotes.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i /key/path server@111.111.111.111 'bash -s' << 'EOF'
FPM_EXISTS=`ps aux | grep php-fpm`
if [ ! -z "$FPM_EXISTS" ]
then
echo "" | sudo -S service php5-fpm reload
fi
EOF

Syntax errors when executing a complex command via ssh

I would like to know if a server is running, and if not I would like to start it. For this scenario I have written the following bash script:
ssh -t machine "cd /to/the/bin/path
&& if [ -n $(sudo -u dev ps U dev | grep server_name | awk 'FNR == 1 {print $1}') ]
then
echo 'podsvr is running with
id $(sudo -u dev ps U dev | grep POD | awk 'FNR == 1 {print $1}')'
else
sudo -u dev sfservice server_name start
fi
"
When I run the above program I am getting the following error
bash: -c: line 1: syntax error: unexpected end of file
Could someone please help me in this regard?
~Sunil
You can't nest single quotes like that in bash. Change the second occurrence of:
'FNR == 1 {print $1}'
To:
'\''FNR == 1 {print $1}'\''
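The idiom works by closing the single-quoted string, adding an escaped literal quote, and then reopening a new single-quoted string. A minimal illustration (a sketch):
echo 'awk '\''FNR == 1 {print $1}'\'''
prints awk 'FNR == 1 {print $1}', with $1 left unexpanded.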
There are a few things you can do to improve the command and simplify the quoting:
There's no need to run ps using sudo to see processes running as another user.
Use the -q option to suppress the output of grep, and simply check the exit status
to see if a match was found.
Use double-quotes with echo to allow the svc_id parameter to expand.
Use single quotes around the entire command for the argument to ssh.
Presumably, /to/the/bin/path is where sfservice lives? You can probably just specify the full path to run the command, rather than changing the working directory.
ssh -t machine 'if ps U dev -o command | grep -q -m1 server_name; then
svc_pid=$( ps U dev -o pid,command | grep -m1 POD | cut -d" " -f 1 )
echo "podsvr is running with id $svc_pid"
else
sudo -u dev /to/the/bin/path/sfservice server_name start
fi
'
Your quoting is messed up. Probably the main problem is that you put the entire ssh script in double quotes. Since it's included in double quotes, the $(...) parts are already evaluated on the local machine before the result is passed to the remote one, and the results are fairly nonsensical. I would use the following recipe:
1. Write the script that should be executed on the remote machine. Preferably log in to the remote machine and test it there.
2. Now enclose the entire script in single quotes and replace every enclosed ' by '\'' or alternatively by '"'"'.
3. In case that the command contains a variable that should be evaluated on the local machine, put '\'" in front of it and "\'' after it. For instance, if the command to be executed on the remote machine is
foo "$A" "$B" 'some string' bar
but $B should be evaluated on the local machine, you get
'foo "$A" '\'"$B"\'' '\''some string'\'' bar'
Note: This is not completely foolproof – it will fail if the string in $B contains '. To be safe in cases where you cannot guarantee that there is no ' inside of $B, you can first execute QQ=\'\\\'\' ; ProtectedB=${B//\'/$QQ} and then use '\'"$ProtectedB"\'' instead of '\'"$B"\'' in the command above.
4. Use the result as the argument of ssh.
I assume that the following works (but I can't test it here).
ssh -t machine '
cd /to/the/bin/path
&& if [ -n "$(sudo -u dev ps U dev | grep server_name | awk '\''FNR == 1 {print $1}'\'')" ]
then
echo "podsvr is running with id $(sudo -u dev ps U dev | grep POD | awk '\''FNR == 1 {print $1}'\'')"
else
sudo -u dev sfservice server_name start
fi
'
