Obscure bash syntax error - bash

I am unable to find my syntax problem here:
» ssh bootstrap01 bash -c 'for master in master01 master02 master03 ; do ssh root@$master -i .ssh/master hostname ; done'
bash: -c: line 0: syntax error near unexpected token `do'
bash: -c: line 0: `bash -c for master in master01 master02 master03 ; do ssh root@$master -i .ssh/master hostname ; done'
EDIT
To verify that my in-line script works:
$ for master in localhost localhost localhost ; do ssh $master hostname ; done
myhost.mydomain.net
myhost.mydomain.net
myhost.mydomain.net

It's actually a problem with the way that SSH passes the command to the remote side. Compare these examples:
$ ssh localhost bash -x -c 'echo 1; echo 2; echo 3'
+ echo
2
3
$ ssh localhost bash -x -c "'echo 1; echo 2; echo 3'"
1
2
3
+ echo 1
+ echo 2
+ echo 3
The key to understanding the problem is that SSH reconstructs a command line from its arguments and it does it badly. It just pastes the arguments back together using spaces, as can be seen if we run the two commands above with the -v option:
debug1: Sending command: bash -x -c echo 1; echo 2; echo 3
debug1: Sending command: bash -x -c 'echo 1; echo 2; echo 3'
respectively.
Obviously, the first of those is run (in the remote shell) as
bash -x -c "echo" 1
echo 2
echo 3
and that's what we see above.
In short, you need to provide quotes for the remote shell.
In your case, you'll probably be able to just omit the bash -c, as there's nothing in your command that a standard shell won't like:
ssh bootstrap01 'for master in master01 master02 master03 ; do ssh root@$master -i .ssh/master hostname ; done'
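If you do want to keep an explicit bash -c, one form that should work (a sketch, not taken from the answer above; it assumes the same hosts and key, and places -i before the hostname as in ssh's documented synopsis) is to add an outer layer of quoting so the single quotes survive the trip to the remote shell:
ssh bootstrap01 "bash -c 'for master in master01 master02 master03 ; do ssh -i .ssh/master root@\$master hostname ; done'"
The local shell strips the double quotes and turns \$master into $master, the login shell on bootstrap01 strips the single quotes, and bash -c finally expands $master on each loop iteration.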

Related

variables in remote nested server via ssh

Trying to set and access some variables on a remote server
The script executes on a local server; it logs in to remote-server1 and from there logs in to a second remote server, remote-server2.
I can successfully set and access variables on both the local server and remote-server1 without any issues, but I am having problems doing the same on remote-server2.
local1=$(echo "local-srv"); echo "${local1}"
output=$(sshpass -p "${PSSWD}" ssh -t -q -oStrictHostKeyChecking=no admin@$mgmtIP "bash -s" <<EOF
remote1=\$(echo "remote-srv1"); echo "\${remote1}"
ssh -t -q -oStrictHostKeyChecking=no "${targetCompute}" "bash -s" <<EOF2
remote2=\$(echo "remote-srv2"); echo "\${remote2}"
EOF2
EOF
)
Here is my output
local-srv
remote-srv1
As you can see, remote-srv2 is missing.
----- UPDATE -----
Please note that $(echo "text") is just for simplicity; in reality a complex command will be executed here and its output assigned to a variable.
You have two nested ssh commands with nested here-documents, and to delay interpretation of the $ expressions in the inner one, you need more escapes. To see the problem, you can replace the ssh command with cat to see what would be sent to the remote computer. Here's an example, using your original code (and some modified variable definitions); note that the $ and > are prompts from my shell.
$ targetCompute=remote-server2
$ local1="local-srv"; echo "${local1}"
local-srv
$ cat <<EOF
> remote1=\$(echo "remote-srv1"); echo "\${remote1}"
> ssh -t -q -oStrictHostKeyChecking=no "${targetCompute}" "bash -s" <<EOF2
> remote2=\$(echo "remote-srv2"); echo "\${remote2}"
> EOF2
> EOF
remote1=$(echo "remote-srv1"); echo "${remote1}"
ssh -t -q -oStrictHostKeyChecking=no "remote-server2" "bash -s" <<EOF2
remote2=$(echo "remote-srv2"); echo "${remote2}"
EOF2
Notice that the lines relating to remote1 and remote2 have both had their escapes removed, so they're both going to have their $ expressions expanded on remote-srv1. That's what you want for the remote1 line, but to delay interpretation of the remote2 line you have to add another escape... and that escape itself needs to be escaped, so there'll actually be three escapes before each $:
$ cat <<EOF
> remote1=\$(echo "remote-srv1"); echo "\${remote1}"
> ssh -t -q -oStrictHostKeyChecking=no "${targetCompute}" "bash -s" <<EOF2
> remote2=\\\$(echo "remote-srv2"); echo "\\\${remote2}"
> EOF2
> EOF
remote1=$(echo "remote-srv1"); echo "${remote1}"
ssh -t -q -oStrictHostKeyChecking=no "remote-server2" "bash -s" <<EOF2
remote2=\$(echo "remote-srv2"); echo "\${remote2}"
EOF2
So \\\$(echo "remote-srv2") and "\\\${remote2}" in the local here-document become \$(echo "remote-srv2") and "\${remote2}" in the here-document on remote-srv1, and then the command actually gets executed and the variable expanded on remote-srv2.
I needed to escape with \\\ (three backslashes).
Thanks to the answer given by @Gordon Davisson.
local1=$(echo "local-srv"); echo "${local1}"
output=$(sshpass -p "${PSSWD}" ssh -t -q -oStrictHostKeyChecking=no admin@$mgmtIP "bash -s" <<EOF
remote1=\$(echo "remote-srv1"); echo "\${remote1}"
ssh -t -q -oStrictHostKeyChecking=no "${targetCompute}" "bash -s" <<EOF2
remote2=\\\$(echo "remote-srv2"); echo "\\\${remote2}"
EOF2
EOF
)
Output
local-srv
remote-srv1
remote-srv2
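An alternative that avoids counting backslashes (a sketch that was not part of the thread; hostnames and variable names are taken from the question) is to quote the inner here-document delimiter. Because 'EOF2' is quoted, remote-server1's shell passes the inner here-document through verbatim, so only one backslash per $ is needed to get past the local shell:
output=$(sshpass -p "${PSSWD}" ssh -t -q -oStrictHostKeyChecking=no admin@$mgmtIP "bash -s" <<EOF
remote1=\$(echo "remote-srv1"); echo "\${remote1}"
ssh -t -q -oStrictHostKeyChecking=no "${targetCompute}" "bash -s" <<'EOF2'
remote2=\$(echo "remote-srv2"); echo "\${remote2}"
EOF2
EOF
)
The local shell reduces \$ to $, and the quoted delimiter stops remote-server1 from expanding the remote2 line, so it reaches remote-server2 intact.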

bash while read loop stops after the first line (after corrections based on a StackExchange post) [duplicate]

The eventual goal is to have my bash script execute a command on multiple servers. I almost have it set up. My SSH authentication is working, but this simple while loop is killing me. When I execute the while loop, reading my file for host names, it works fine when I run a
ssh $HOST "uname -a"
but when I attempt to run another ssh command,
ssh $HOST "oslevel -s"
the while loop ends early! I can't figure it out. Why would the while read do loop run perfectly fine with the first command, but not when the second is added?
I have a simple text file called hosts.list that has 4 hostnames, one per line.
$ cat hosts.list
pcced1bip04
pcced1bit04
pcced1bo02
pcced1bo04
$ cat getinfo.bash
#!/bin/bash
set -x
while read HOST
do
    echo $HOST
    ssh $HOST "uname -a"
    #ssh $HOST "oslevel -s"
    echo ""
done < hosts.list
When it runs, it works fine. It goes through the file, line by line and gets the results of "uname -a". So everything is fine, right? (Sorry, but I turned on set -x).
$ ./getinfo.bash
+ read HOST
+ echo pcced1bip04
pcced1bip04
+ ssh pcced1bip04 'uname -a'
AIX pcced1bip04 1 6 0001431BD400
+ echo ''
+ read HOST
+ echo pcced1bit04
pcced1bit04
+ ssh pcced1bit04 'uname -a'
AIX pcced1bit04 1 6 0001431BD400
+ echo ''
+ read HOST
+ echo pcced1bo02
pcced1bo02
+ ssh pcced1bo02 'uname -a'
AIX pcced1bo02 1 6 0009FE2AD400
+ echo ''
+ read HOST
+ echo pcced1bo04
pcced1bo04
+ ssh pcced1bo04 'uname -a'
AIX pcced1bo04 1 6 0009FE2AD400
+ echo ''
+ read HOST
$
The problem occurs when I enable the line [ssh $HOST "oslevel -s"]. When I do, the script only reads the first line of the file, and then stops. Why won't it go onto the other lines?
$ ./getinfo.bash
+ read HOST
+ echo pcced1bip04
pcced1bip04
+ ssh pcced1bip04 'uname -a'
AIX pcced1bip04 1 6 0001431BD400
+ ssh pcced1bip04 'oslevel -s'
6100-06-02-1044
+ echo ''
+ read HOST
$
If I had a problem with my script, why would it be working perfectly fine with just the [ssh $HOST "uname -a"] in the while loop?
If you run commands which read from stdin (such as ssh) inside a loop, you need to ensure that either:
your loop isn't iterating over stdin, or
your command has had its stdin redirected
...otherwise, the command can consume input intended for the loop, causing it to end.
The former:
while read -u 5 -r hostname; do
    ssh "$hostname" ...
done 5<file
...which, using bash 4.1 or newer, can be rewritten with automatic file descriptor assignment like so:
while read -u "$file_fd" -r hostname; do
    ssh "$hostname" ...
done {file_fd}<file
The latter:
while read -r hostname; do
    ssh "$hostname" ... </dev/null
done <file
...can also, for ssh alone, be approximated with the -n parameter (which also redirects stdin from /dev/null):
while read -r hostname; do
    ssh -n "$hostname" ...
done <file
Assign to an array before the loop, so that you are not using stdin for your loop variables. The ssh inside the loop can then use stdin without interfering with your loop.
readarray a < hosts.list
for HOST in "${a[@]}"; do
    ssh $HOST "uname -a"
    #...other stuff in loop
done
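Applied to the script in the question (a sketch only; it assumes the same hosts.list, keeps the commands from getinfo.bash, and assumes a bash new enough for readarray, as the answer above does), that looks like:
#!/bin/bash
readarray -t hosts < hosts.list    # -t strips the trailing newline from each line
for HOST in "${hosts[@]}"; do
    echo "$HOST"
    ssh "$HOST" "uname -a"
    ssh "$HOST" "oslevel -s"
    echo ""
done
Since the loop no longer reads from stdin, both ssh commands can consume stdin freely without ending the loop early.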
As specified in the solution here, use the -n option for ssh, or open the file on a different file descriptor:
while read -u 4 HOST
do
    echo $HOST
    ssh $HOST "uname -a"
    ssh $HOST "oslevel -s"
    echo ""
done 4< hosts.list
maybe with python XD
#!/usr/bin/python
import sys
import Queue
from subprocess import call

logfile = sys.argv[1]
q = Queue.Queue()

with open(logfile) as data:
    datalines = (line.rstrip('\r\n') for line in data)
    for line in datalines:
        q.put(line)

while not q.empty():
    host = q.get()
    print "++++++ " + host + " ++++++"
    call(["ssh", host, "uname -a"])
    call(["ssh", host, "oslevel -s"])
    print "++++++++++++++++++++++++++"

ssh bash -c exit status does not propagate [duplicate]

This question already has an answer here:
How to have simple and double quotes in a scripted ssh command
(1 answer)
Closed 4 years ago.
According to man ssh and this previous answer, ssh should propagate the exit status of whatever process it ran on the remote server. I seem to have found a mystifying exception!
$ ssh myserver exit 34 ; echo $?
34
Good...
$ ssh myserver 'exit 34' ; echo $?
34
Good...
$ ssh myserver bash -c 'exit 34' ; echo $?
0
What?!?
$ ssh myserver
ubuntu#myserver $ bash -c 'exit 34' ; echo $?
34
So the problem does not appear to be either ssh or bash -c in isolation, but their combination does not behave as I would expect.
I'm designing a script to be run on a remote machine that needs to take an argument list that's computed on the client side. For the sake of argument, let's say it fails if any of the arguments is not a file on the remote server:
ssh myserver bash -c '
for arg ; do
if [[ ! -f "$arg" ]] ; then
exit 1
fi
done
' arg1 arg2 ...
How can I run something like this and effectively inspect its return status? The test above seems to suggest I cannot.
The problem is that the quoting is being lost. ssh simply concatenates the arguments; it doesn't requote them, so the command you're actually executing on the server is:
bash -c exit 34
The -c option only takes one argument, not all the remaining arguments, so it's just executing exit; the 34 becomes that shell's $0 and is effectively ignored.
You can see a similar effect if you do:
ssh myserver bash -c 'echo foo'
It will just echo a blank line, not foo.
You can fix it by giving a single argument to ssh:
ssh myserver "bash -c 'exit 34'"
or by doubling the quotes:
ssh myserver bash -c "'exit 34'"
Insofar as your question is how to run a command remotely, passing it on ssh's command line, without it being mangled in the way described above, printf '%q ' can be used to ask the shell to perform quoting on your behalf, building a string which can then be passed to ssh:
printf -v cmd_str '%q ' bash -c '
for arg ; do
if [[ ! -f "$arg" ]] ; then
exit 1
fi
done
' arg1 arg2 ...
ssh "$host" "$cmd_str"
However, this is only guaranteed to work correctly if the default shell for the remote user is also bash (or, if you used ksh's printf %q locally, if the remote shell is ksh). It's much safer to pass your script text out-of-band, such as on stdin:
printf -v arg_str '%q ' arg1 arg2 ...
ssh "$host" "bash -s $arg_str" <<'EOF'
for arg; do
if [[ ! -f "$arg" ]]; then
exit 1
fi
done
EOF
...wherein we still depend on printf %q to generate correct output, but only for the arguments, not for the script itself.
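To see what printf %q actually produces, here is a small local illustration (the arguments are made up); each argument comes out quoted so that it survives being pasted back together by ssh:
$ printf -v arg_str '%q ' "file one" 'file$2'
$ echo "$arg_str"
file\ one file\$2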
Try wrapping in quotes:
╰─➤ ssh server "bash -c 'exit 34' "; echo $?
34

Running a shell script on remote servers and passing a command line argument to it

I want to run the top command through a script on remote servers.
I also want to add a filter that allows only integers to be passed as a command line argument to the script that will run on the remote servers.
This is the command I'm using:
ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'bash -s' < /some/path/top_command.sh
Now, when I'm not passing any argument to the script, it works fine and displays the top 20 lines of the top command.
It also filters out garbage values such as non-integer characters.
But the issue is with negative integers.
ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'bash -s' < /some/path/top_command.sh -7
Now I'm getting an error:
Usage: bash [GNU long option] [option] ...
bash [GNU long option] [option] script-file ...
GNU long options:
--debug
--debugger
--dump-po-strings
--dump-strings
--help
--init-file
--login
--noediting
--noprofile
--norc
--posix
--protected
--rcfile
--restricted
--verbose
--version
--wordexp
Shell options:
-irsD or -c command or -O shopt_option (invocation only)
-abefhkmnptuvxBCHP or -o option
But when I try running the command directly, without using the top_command.sh script:
ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername 'top -b -n 1 | head -n -2'
I'm getting the top command's output for negative head values.
Now I'm confused, what am I doing wrong?
By the way, the content of top_command.sh:
#!/bin/bash
if [[ $1 == "" ]]; then
    echo -e "No Argument passed:- Showing default top 20 lines\n"
    command=$(top -b -n 1 | head -n 20 2>&1)
    echo "$command"
else
    re='^[-0-9]+$'
    if [[ $1 =~ $re ]]; then
        command=$(top -b -n 1 | head -n $1 2>&1)
        echo "$command"
    else
        echo "Argument passed is not an integer"
    fi
fi
You can do it like this:
ssh -oConnectTimeout=5 -oBatchMode=yes -l group servername bash -s -- -7 < /some/path/top_command.sh
-- is a common option-argument separator that is helpful when passing arguments starting with - to a command. Commands like mv and rm also recognize it. Everything that follows -- is no longer tested for being an option and is simply treated as a normal argument. For rm and mv this is helpful when a filename starts with -.
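For instance (a throwaway file name, purely for illustration), a file literally called -7 can only be removed by telling rm that the options have ended:
$ touch ./-7    # create a file named -7 (using ./ to sidestep the option problem)
$ rm -- -7      # -- tells rm that everything after it is a filename, not an option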

in my bash loop over a list of some servers, if the ssh connects the bash script exits

I have a quick script to run a command on each server using ssh (I am sure there are lots of better ways to do this, but it was intended to just work quickly!). For test1 etc. there is no such server, so the script continues; the script also continues if the pubkey auth fails. However, if the script connects, the date is printed but the ssh loop terminates...
#!/bin/bash -x
cat <<EOF |
##file servers
test1
test2
server1
server2
EOF
while read line
do
    if [ "${line:0:1}" != "#" ]; then
        ssh -q -oPasswordAuthentication=no -i id_dsa user1@${line} date
    fi
done
echo read line must have exited
output is like so;
+ cat
+ read line
+ '[' t '!=' '#' ']'
+ ssh -q -oPasswordAuthentication=no -i id_dsa user1@test1 date
+ read line
+ '[' t '!=' '#' ']'
+ ssh -q -oPasswordAuthentication=no -i id_dsa user1@test2 date
+ read line
+ '[' s '!=' '#' ']'
+ ssh -q -oPasswordAuthentication=no -i id_dsa user1@server1 date
Fri Jul 9 09:04:16 PDT 2010
+ read line
+ echo read line must have exited
read line must have exited
Something to do with the successful return of the ssh command is messing with the loop condition or the variable... any suggestions as to why?
You should pass the -n flag to ssh, to prevent it messing with stdin:
ssh -n -q -oPasswordAuthentication=no -i id_dsa user1@${line} date
I tested this with my own server and reproduced the problem; adding -n solves it. As the ssh man page says:
Redirects stdin from /dev/null (actually, prevents reading from stdin)
In your example, ssh must have read from stdin, which messes up your read in the loop.
I think the reason is that, as ssh is being forked and exec'd in your bash script, it consumes the script's standard input, so your read terminates at the same time. Try re-crafting as follows:
for line in test1 test2 server1 server2
do
    if [ "${line:0:1}" != "#" ]; then
        ssh -q -oPasswordAuthentication=no -i id_dsa user1@${line} date
    fi
done
or maybe run the ssh in a sub-shell like this:
( ssh -q -oPasswordAuthentication=no -i id_dsa user1@${line} date )
