Syntax errors when executing a complex command via ssh - bash

I would like to know if a server is running, and if not, I would like to start it. For this scenario I have written the following bash script:
ssh -t machine "cd /to/the/bin/path
&& if [ -n $(sudo -u dev ps U dev | grep server_name | awk 'FNR == 1 {print $1}') ]
then
echo 'podsvr is running with
id $(sudo -u dev ps U dev | grep POD | awk 'FNR == 1 {print $1}')'
else
sudo -u dev sfservice server_name start
fi
"
When I run the above script I get the following error:
bash: -c: line 1: syntax error: unexpected end of file
Could someone please help me with this?
~Sunil

You can't nest single quotes like that in bash. Change the second occurrence of:
'FNR == 1 {print $1}'
To:
'\''FNR == 1 {print $1}'\''
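The effect of the `'\''` dance can be checked locally by substituting `bash -c` for `ssh` (a minimal sketch, not the original command):

```shell
# Each '\'' sequence ends the current single-quoted string, adds one
# escaped literal quote, then starts a new single-quoted string.
# The inner shell therefore receives: echo 'FNR == 1 {print $1}'
bash -c 'echo '\''FNR == 1 {print $1}'\'
```

This prints the awk program verbatim, showing that the inner shell saw it as a single-quoted word.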

There are a few things you can do to improve the command and simplify the quoting:
There's no need to run ps using sudo to see processes running as another user.
Use the -q option to suppress the output of grep, and simply check the exit status
to see if a match was found.
Use double-quotes with echo to allow the svc_id parameter to expand.
Use single quotes around the entire command for the argument to ssh.
Presumably, /to/the/bin/path is where sfservice lives? You can probably just specify the full path to run the command, rather than changing the working directory.
ssh -t machine 'if ps U dev -o command | grep -q -m1 server_name; then
svc_pid=$( ps U dev -o pid,command | grep -m1 POD | cut -d" " -f 1 )
echo "podsvr is running with id $svc_pid"
else
sudo -u dev /to/the/bin/path/sfservice server_name start
fi
'
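The exit-status test that grep -q enables can be sketched locally with stand-in ps output (the process line below is made up for illustration):

```shell
# grep -q prints nothing; its exit status alone reports whether a
# match was found, so it can drive the if directly.
ps_line='dev 1234 server_name'          # stand-in for real ps output
if printf '%s\n' "$ps_line" | grep -q server_name; then
    echo "running"
else
    echo "not running"
fi
```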

Your quoting is messed up. Probably the main problem is that you put the entire ssh script in double quotes. Since it's included in double quotes, the $(...) parts are already evaluated on the local machine before the result is passed to the remote one, and the results are fairly nonsensical. I would use the following recipe:
Write the script that should be executed on the remote machine. Preferably log in to the remote machine and test it there.
Now enclose the entire script in single quotes and replace every enclosed ' by '\'' or alternatively by '"'"'.
In case that the command contains a variable that should be evaluated on the local machine, put '\'" in front of it and "\'' after it. For instance, if the command to be executed on the remote machine is
foo "$A" "$B" 'some string' bar
but $B should be evaluated on the local machine, you get
'foo "$A" '\'"$B"\'' '\''some string'\'' bar'
Note: This is not completely foolproof; it will fail if the string in $B contains '. To be safe in cases where you cannot guarantee that there is no ' inside of $B, you can first execute QQ=\'\\\'\' ; ProtectedB=${B//\'/$QQ} and then use '\'"$ProtectedB"\'' instead of '\'"$B"\'' in the command above.
Use the result as the argument of ssh.
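A quick local sketch of the escaping helper from the note above, using a hypothetical string that contains a single quote:

```shell
# B contains a single quote, which would break a naive '\'"$B"\'' splice.
B="it's here"
QQ=\'\\\'\'               # QQ holds the four characters '\''
ProtectedB=${B//\'/$QQ}   # every ' in B becomes '\''
echo "$ProtectedB"
```

The printed value is safe to splice between single quotes on the remote side.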
I assume that the following works (but I can't test it here).
ssh -t machine '
cd /to/the/bin/path &&
if [ -n "$(sudo -u dev ps U dev | grep server_name | awk '\''FNR == 1 {print $1}'\'')" ]
then
echo "podsvr is running with id $(sudo -u dev ps U dev | grep POD | awk '\''FNR == 1 {print $1}'\'')"
else
sudo -u dev sfservice server_name start
fi
'

Related

Execute command with multiple layers of quoting via ssh

I want to execute a docker command on a remote server. The problem is that I don't know how to escape multiple quotes.
ret=$(ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep 'ServerName' | cut -d '|' -f1'")
I get
bash: -f1: command not found
There's little need to execute so much on the remote host. The file you want to search isn't likely that big: just pipe the entire thing down via ssh to a local awk process:
ret=$(ssh root@server "docker exec nginx cat /etc/nginx/nginx.conf" |
awk -F'|' '/ServerName/ {print $1}')
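The same extraction can be tried locally against made-up nginx.conf content (the ServerName line below is illustrative only):

```shell
# Split fields on '|' and print the first field of every line
# matching ServerName.
printf 'ServerName example.com|extra\nlisten 80|other\n' |
    awk -F'|' '/ServerName/ {print $1}'
```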
Wrap your parameter string in N calls to "$(printf "%q" ...)", one for each of the N recursive shell invocations.
ssh root@server "docker exec nginx bash -c 'cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1'"
How many recursive calls does the above line have? I don't wish to set up docker just for the test, so I may have one of the following wrong:
ssh - certainly counts
docker - ??
nginx - ??
bash - certainly counts
If there are four, then you need four calls to "$(printf "%q" "str")"; don't forget to add all those " marks.
ssh root@server docker exec nginx bash -c "$(printf "%q" "$(printf "%q" "$(printf "%q" "$(printf "%q" "cat /etc/nginx/nginx.conf | grep ServerName | cut -d | -f1")")")")"
Explanation: ssh parses the string like bash -c does, stripping one level of quotes. docker and nginx may also each parse the string (or not). Finally, bash -c parses whatever the previous levels have parsed, and removes the final level of quotes. exec does not parse the strings, it simply passes them verbatim to the next level.
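A local sketch of how one printf %q layer pairs with one level of parsing (plain bash, no ssh or docker involved):

```shell
# One printf %q pass adds one layer of shell escaping; one level of
# shell parsing (here simulated with eval) strips it off again.
s='a b'
once=$(printf '%q' "$s")
printf '%s\n' "$once"        # the escaped form, e.g. a\ b
eval "printf '%s\n' $once"   # one parsing level restores: a b
```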
Another solution is to put the line, that you want bash to execute, into a script. Then you can simply invoke the script without all this quoting insanity.
#!/bin/bash
< /etc/nginx/nginx.conf grep ServerName | cut -d '|' -f1
Consider using a here-document:
ret="$(ssh root@server << 'EOF'
docker exec nginx bash -c "grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1"
EOF
)"
echo "$ret"
Or, simpler, as suggested by @MichaelVeksler:
ret="$(ssh root@server docker exec -i nginx bash << 'EOF'
grep 'ServerName' /etc/nginx/nginx.conf | cut -d '|' -f1
EOF
)"
echo "$ret"
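The same stdin-based pattern can be exercised locally by feeding the here-document to a plain bash (sample data stands in for the nginx.conf contents):

```shell
# The script arrives on bash's stdin, so no intermediate layer eats
# its quotes on the way in.
ret="$(bash <<'EOF'
printf '%s\n' "ServerName example|rest" | cut -d '|' -f1
EOF
)"
echo "$ret"
```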

OS version capture script - unexpected results when using awk

I have a small shell script, as follows, that I am using to log in to multiple servers and capture whether the target server is running Red Hat or Ubuntu.
#!/bin/ksh
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
The above script works. However, if I add some awk as shown below and the server is a Red Hat server, the results in osver.report.txt show only the hostname and no OS version. I have played around with the quoting, but nothing seems to work.
OSVER=$(ssh $USER@${x} "cat /etc/redhat-release | awk {'print $1,$2,$6,$7'} 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null")
If I change the script as suggested to the following:
#!/bin/bash
if [ -f $HOME/osver.report.txt ];then
rm -rf $HOME/osver.report.txt
fi
for x in `cat hostlist`
do
OSVER=$(
ssh $USER@${x} bash << 'EOF'
awk '{print $1, $2, $6, $7}' /etc/redhat-release 2>/dev/null || grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
echo -e "$x \t\t $OSVER" >> osver.report.txt
done
Then I get the following errors:
./test.bash: line 9: unexpected EOF while looking for matching `)'
./test.bash: line 16: syntax error: unexpected end of file
You're suffering from a quoting problem. When you pass a quoted command to ssh, you effectively lose one level of quoting (as if you passed the same arguments to sh -c "..."). So the command that you're running on the remote host is actually:
cat /etc/redhat-release | awk '{print ,,,}' || grep -i DISTRIB_DESCRIPTION /etc/lsb-release
One way of resolving this is to pipe your script into a shell, rather than passing it as arguments:
OSVER=$(
ssh $USER@${x} bash <<'EOF'
awk '{print $1, $2, $6, $7}' /etc/redhat-release 2>/dev/null ||
grep -i DISTRIB_DESCRIPTION /etc/lsb-release 2>/dev/null
EOF
)
The use of <<'EOF' here inhibits any variable expansion in the here document...without that, expressions like $1 would be expanded locally.
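The difference the quoted delimiter makes can be demonstrated locally (a minimal sketch, with bash standing in for the remote shell):

```shell
set -- local-value        # give the *local* shell a $1

# Unquoted delimiter: $1 is expanded locally before bash ever runs.
bash <<EOF
echo "unquoted: $1"
EOF

# Quoted delimiter: the text is passed verbatim; the inner bash sees
# a literal $1, which is unset there, so it expands to nothing.
bash <<'EOF'
echo "quoted: $1"
EOF
```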
A better solution would be to look into something like ansible which has built-in facilities for sshing to groups of hosts and collecting facts about them, including distribution version information.

eval printf works from command line but not in script

When I run the following command in a terminal it works, but not from a script:
eval $(printf "ssh foo -f -N "; \
for port in $(cat ~/bar.json | grep '_port' | grep -o '[0-9]\+'); do \
printf "-L $port:127.0.0.1:$port ";\
done)
The error I get tells me that printf usage is wrong, as if the -L argument within quotes would've been an argument to printf itself.
I was wondering why that is the case. Am I missing something obvious?
__
Context (in case my issue is an XY problem): I want to start and connect to a jupyter kernel running on a remote computer. To do so I wrote a small script that
sends a command per ssh for the remote to start the kernel
copies via scp a configuration file that I can use to connect to the kernel from my local computer
reads the configuration file and opens appropriate ssh tunnels between local and remote
For those not familiar with jupyter, a configuration file (bar.json) looks more or less like the following:
{
"shell_port": 35932,
"iopub_port": 37145,
"stdin_port": 42704,
"control_port": 39329,
"hb_port": 39253,
"ip": "127.0.0.1",
"key": "4cd3e12f-321bcb113c204eca3a0723d9",
"transport": "tcp",
"signature_scheme": "hmac-sha256",
"kernel_name": ""
}
And so, in my command above, the printf statement creates an ssh command with all the 5 -L port forwarding for my local computer to connect to the remote, and eval should run that command. Here's the full script:
#!/usr/bin/env bash
# Tell remote to start a jupyter kernel.
ssh foo -t 'python -m ipykernel_launcher -f ~/bar.json' &
# Wait a bit for the remote kernel to launch and write conf. file
sleep 5
# Copy the conf. file from remote to local.
scp foo:~/bar.json ~/bar.json
# Parse the conf. file and open ssh tunnels.
eval $(printf "ssh foo -f -N "; \
for port in $(cat ~/bar.json | grep '_port' | grep -o '[0-9]\+'); do \
printf "-L $port:127.0.0.1:$port ";\
done)
Finally, jupyter console --existing ~/foo.json connects to remote.
As @that other guy says, bash's printf builtin barfs on printf "-L ...". It thinks you're passing it a -L option. You can fix it by adding --:
printf -- "-L $port:127.0.0.1:$port "
Let's make that:
printf -- '-L %s:127.0.0.1:%s ' "$port" "$port"
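What that printf call produces can be checked locally (the port number is just a sample):

```shell
# -- ends printf's own option parsing, so the -L in the format string
# is treated as data; %s keeps the port values out of the format.
port=35932
printf -- '-L %s:127.0.0.1:%s ' "$port" "$port"
echo
```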
But since we're here, we can do a lot better. First, let's not process JSON with basic shell tools. We don't want to rely on it being formatted a certain way. We can use jq, a lightweight and flexible command-line JSON processor.
$ jq -r 'to_entries | map(select(.key | test(".*_port"))) | .[].value' bar.json
35932
37145
42704
39329
39253
Here we use to_entries to convert each field to a key-value pair. Then we select entries where the .key matches the regex .*_port. Finally we extract the corresponding .values.
We can get rid of eval by constructing the ssh command in an array. It's always good to avoid eval when possible.
#!/bin/bash
readarray -t ports < <(jq -r 'to_entries | map(select(.key | test(".*_port"))) | .[].value' bar.json)
ssh=(ssh foo -f -N)
for port in "${ports[@]}"; do ssh+=(-L "$port:127.0.0.1:$port"); done
"${ssh[@]}"
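The array technique can be sketched without jq or a remote host; echo stands in for ssh so the assembled words are visible:

```shell
# Build the argument list in an array; each element survives as one
# word, with no eval and no quoting layers to fight.
ports=(35932 37145)
cmd=(echo ssh foo -f -N)          # echo stands in for the real ssh
for port in "${ports[@]}"; do
    cmd+=(-L "$port:127.0.0.1:$port")
done
"${cmd[@]}"
```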

remote SSH and variable substitution

The uncommented line complains that the 'mus' file doesn't exist, whereas the commented line behaves as expected and gives me the number of lines in the 'mus' file:
vr=$(ssh $1 "cd $2; count=`cat mus | wc -l`; echo $count")
#vr=$(ssh $1 "cd $2; cat mus | wc -l")
echo $vr
The uncommented line is looking for file mus on your local system, whereas the commented one looks on the remote system. You need to escape the backticks and the $ in the count variable for this to work:
vr=$(ssh $1 "cd $2; count=\`cat mus | wc -l\`; echo \$count")
echo $vr
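The escaping rule can be sketched locally, with bash -c standing in for ssh (variable names are illustrative):

```shell
# $local_var is expanded by the outer shell before bash -c runs;
# the escaped \$inner_var survives and is expanded by the inner shell.
local_var=outer
bash -c "inner_var=inner; echo $local_var \$inner_var"
```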
You'll be getting this error:
cat: mus: No such file or directory
Reason is this command
count=`cat mus | wc -l`
is getting executed locally not on remote host.
To execute multiple commands on remote host use here-doc:
ssh -t -t "$1" <<EOF
cd "$2"
c=\$(wc -l < mus)
echo \$c
exit
EOF

Testing if a Daemon is alive or not with Shell

I have a log_sender.pl perl script that when executed runs a daemon. I want to make a test, using Shell:
#!/bin/bash
function log_sender()
{
perl -I $HOME/script/log_sender.pl
}
(
[[ "${BASH_SOURCE[0]}" == "${0}" ]] || exit 0
function check_log_sender()
{
if [ "ps -aef | grep -v grep log_sender.pl" ]; then
echo "PASSED"
else
echo FAILED
fi
}
log_sender
check_log_sender
)
Unfortunately when I run this my terminal becomes:
-bash-4.1$ sh log_sender.sh
...
...
What am I doing wrong?
> if [ "ps -aef | grep -v grep log_sender.pl" ]; then
This is certainly not what you want. Try this:
if ps -aef | grep -q 'log_sender\.pl'; then
...
In a shell script, the if construct takes as its argument a command whose exit status it examines. In your code, the command is [ (also known as test) and you run it on the literal string "ps -aef | grep -v grep log_sender.pl" which is simply always true.
You probably intended to check whether ps -aef outputs a line which contains log_sender.pl but does not contain grep; that would be something like ps -aef | grep -v grep | grep 'log_sender\.pl' but you can avoid the extra grep -v by specifying a regular expression which does not match itself.
The -q option to grep suppresses any output; the exit code indicates whether or not the input matched the regular expression.
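Both points can be verified locally (the process line is faked with printf rather than taken from ps):

```shell
# 1. [ ... ] with one non-empty string is always true; the pipeline
#    inside the quotes is never executed.
if [ "ps -aef | grep -v grep log_sender.pl" ]; then
    echo "non-empty string: true"
fi

# 2. grep -q reports a match through its exit status alone.
printf 'perl log_sender.pl\n' | grep -q 'log_sender\.pl' && echo "matched"
```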
The perl invocation is also not correct; the -I option requires an argument, so you are saying effectively just perl and your Perl interpreter is now waiting for you to type in a Perl script for it to execute. Apparently the script is log_sender.pl so you should simply drop the -I (or add an argument to it, if you really do need to add some Perl library paths in order for the script to work).
Finally, if you write a Bash script, you should execute it with Bash.
chmod +x log_sender.sh
./log_sender.sh
or alternatively
bash ./log_sender.sh
The BASH_SOURCE construct you use is a Bashism, so your script will simply not work correctly under sh.
Lastly, the parentheses around the main logic are completely redundant. They will cause the script to run these commands in a separate subshell for no apparent benefit.
