ssh bash -c exit status does not propagate [duplicate] - bash

According to man ssh and this previous answer, ssh should propagate the exit status of whatever process it ran on the remote server. I seem to have found a mystifying exception!
$ ssh myserver exit 34 ; echo $?
34
Good...
$ ssh myserver 'exit 34' ; echo $?
34
Good...
$ ssh myserver bash -c 'exit 34' ; echo $?
0
What?!?
$ ssh myserver
ubuntu@myserver $ bash -c 'exit 34' ; echo $?
34
So the problem does not appear to be either ssh or bash -c in isolation, but their combination does not behave as I would expect.
I'm designing a script to be run on a remote machine that needs to take an argument list that's computed on the client side. For the sake of argument, let's say it fails if any of the arguments is not a file on the remote server:
ssh myserver bash -c '
    for arg ; do
        if [[ ! -f "$arg" ]] ; then
            exit 1
        fi
    done
' arg1 arg2 ...
How can I run something like this and effectively inspect its return status? The test above seems to suggest I cannot.

The problem is that the quoting is being lost. ssh simply concatenates the arguments, it doesn't requote them, so the command you're actually executing on the server is:
bash -c exit 34
The -c option takes only one argument (the script text), not all the remaining words, so the shell just executes exit; the 34 becomes that shell's $0 and is effectively ignored.
You can see a similar effect if you do:
ssh myserver bash -c 'echo foo'
It will just echo a blank line, not foo.
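You can demonstrate where the extra word actually goes without involving ssh at all: anything after the -c script text becomes $0 (and then $1, $2, ...) of the inner shell rather than an argument to the command. A quick local illustration:
$ bash -c 'echo "$0"' foo
foo
$ bash -c 'echo "$0" "$1"' foo bar
foo bar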
You can fix it by giving a single argument to ssh:
ssh myserver "bash -c 'exit 34'"
or by doubling the quotes:
ssh myserver bash -c "'exit 34'"

Insofar as your question is how to run a command remotely, passing it on ssh's command line without the quoting being mangled in the way described above, printf '%q ' can be used to ask the shell to perform the quoting on your behalf, building a string which can then be passed to ssh:
printf -v cmd_str '%q ' bash -c '
    for arg ; do
        if [[ ! -f "$arg" ]] ; then
            exit 1
        fi
    done
' arg1 arg2 ...
ssh "$host" "$cmd_str"
However, this is only guaranteed to work correctly if the default shell for the remote user is also bash (or, if you used ksh's printf %q locally, if the remote shell is ksh). It's much safer to pass your script text out-of-band, as on stdin:
printf -v arg_str '%q ' arg1 arg2 ...
ssh "$host" "bash -s $arg_str" <<'EOF'
for arg; do
    if [[ ! -f "$arg" ]]; then
        exit 1
    fi
done
EOF
...wherein we still depend on printf %q to generate correct output, but only for the arguments, not for the script itself.
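As a minimal sketch of how the stdin approach lets you check the status for the original file-checking example (myserver, arg1 and arg2 are placeholders; per man ssh, ssh itself exits with 255 on connection errors, so any other non-zero status comes from the remote script):
printf -v arg_str '%q ' arg1 arg2
if ssh myserver "bash -s $arg_str" <<'EOF'
for arg; do
    if [[ ! -f "$arg" ]]; then
        exit 1
    fi
done
EOF
then
    echo "all arguments exist as files on myserver"
else
    status=$?
    echo "check failed with exit status $status" >&2
fi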

Try wrapping in quotes:
╰─➤ ssh server "bash -c 'exit 34' "; echo $?
34

Related

Passing variables to SSH [duplicate]

The following code loops through states in an array and passes each state to a server via ssh:
STATES="NY CO"
arr_states=(${STATES//' /'/ })
for i in "${arr_states[@]}"; do
    state=$i
    ssh -o SendEnv=state jenkins@server sh -s << 'EOF'
sudo su
cd /home/jenkins/report
psql -d db -c "$(sed 's/state_name/'"$state"'/' county.sql)" -U user
echo $state
EOF
done
The output of echo $state in the above is an empty string even if I pass it NY.
When I change the 'EOF' to EOF, the output of echo $state is the string I passed (NY). But then it says, the file county.sql does not exist.
How do I get it to recognize both the variable I pass and the file on the remote server that I am trying to use?
As an approach that doesn't require you to do any manual escaping of your code (which frequently becomes a maintenance nightmare, since it means the code needs to be changed whenever you modify where it's expected to run), consider defining a function and using declare -f to have the shell emit the text of that function's definition for you.
The same can be done with variables, using declare -p. Thus, we pass both a function containing the remote code and the variables that code needs in order to operate:
#!/usr/bin/env bash

# This is run on the remote server _as root_ (behind sudo su)
remotePostEscalationFunc() {
    cd /home/jenkins/report || return
    if psql -d db -U user -c "$(sed -e "s/state_name/${state}/" county.sql)"; then
        echo "Success processing $state" >&2
    else
        rc=$?
        echo "Failure processing $state" >&2
        return "$rc"
    fi
}

# This is run on the remote server as the jenkins user (before sudo).
remoteFunc() {
    sudo su -c "$(declare -p state); $(declare -f remotePostEscalationFunc); remotePostEscalationFunc"
}

# Everything below here is run locally.
arr_states=( NY CO )
for state in "${arr_states[@]}"; do
    ssh jenkins@server 'bash -s' <<EOF
$(declare -f remoteFunc remotePostEscalationFunc); $(declare -p state); remoteFunc
EOF
done
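If it helps to see what is actually shipped to the remote side, declare -p and declare -f just print shell source that recreates the variable or function; the remote bash then evaluates that text. For example (output from bash; the exact formatting can vary slightly between versions):
$ state=NY
$ declare -p state
declare -- state="NY"
$ greet() { echo "processing $state"; }
$ declare -f greet
greet ()
{
    echo "processing $state"
}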
You were almost right with the change from 'EOF' to EOF. You are just missing a backslash (\) before $(sed ...). So the following should work:
arr_states=(${STATES//' /'/ })
for i in "${arr_states[@]}"; do
    state=$i
    ssh -o SendEnv=state jenkins@server sh -s << EOF
sudo su
cd /home/jenkins/report
psql -d db -c "\$(sed 's/state_name/'"$state"'/' county.sql)" -U user
echo $state
EOF
done

pass variables to shell script over ssh

How do I pass $1 and $2 as variables to the remote shell through ssh? Below is a sample:
#!/bin/bash
user_name="${1}"
shift
user_password="${1}"
shift
tenant_name="${1}"
realscript="/IDM_ARTIFACTS/reset.sh"
ssh -qT oracle@slc05pzz.us.oracle.com bash -c "'echo $user_name'" < "$realscript"
I am able to echo $user_name but not able to access it in $realscript.
I can't call it using here-docs or single quotes ('') as the script doesn't have straightforward commands.
What other options do I have? Please help
I do not have your script, so I put a test one on my remote host:
$ realscript=/home/jack/show_params.sh
$ second="second one"
$ ssh TEST cat ${realscript}
#!/bin/bash
nParams=$#
echo There are ${nParams} parameters.
for (( ii=1; ii<=${nParams}; ii++ )); do
    echo "$1"
    shift
done
$ ssh TEST 'bash '${realscript}' "first one" '\'${second}\'
There are 2 parameters.
first one
second one
The quoting gets a bit weird, but you can pass parameters that contain spaces.
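If the hand-built nested quoting becomes hard to maintain, the same printf %q trick shown in other answers here can build the remote command for you; a sketch using the same test script and host as above (it assumes the remote login shell is a Bourne-style shell, which handles the backslash-escaped words %q produces for simple strings):
second="second one"
printf -v cmd '%q ' bash /home/jack/show_params.sh "first one" "$second"
ssh TEST "$cmd"
This should reproduce the two-parameter output shown above without any manually placed nested quotes.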

Command line Parameters in bash shell script in nested ssh

I am trying to use the $1 and $2 variables which I have passed on the command line to a bash shell script. I am using these variables within an ssh call, but it seems the variables within ssh are not getting replaced, while the outer ones are. Any workaround? Here's the code:
#!/bin/bash
ssh -t -o "StrictHostKeyChecking=no" -i $1 user@ip <<'EOF1'
ssh -t -i $1 user2@ip2 <<'EOF2'
exit
EOF2
exit
EOF1
Here the first $1 gets replaced but the second one doesn't. It's basically the key name for passwordless authentication.
Use printf %q to generate an eval-safe string version of your argument list:
# generate a string which evals to list of command-line parameters
printf -v cmd_str '%q ' "$@"
# pass that list of parameters on the remote shell's command line
ssh "$host" "bash -s $cmd_str" <<'EOF'
echo "This is running on the remote host."
echo "Got arguments:"
printf '- %q\n' "$@"
EOF
For what you're really doing, the best practice is probably to use a ProxyCommand -- see the relevant documentation -- and to have your private key exposed via agent forwarding, rather than having it sitting on your bounce host on-disk. That said, it's straightforward enough to adapt the answer given above to fit the code in the question:
#!/bin/bash
printf -v args '%q ' "$@"
echo "Arguments on original host are:"
printf '- %q\n' "$@"
ssh -t -o "StrictHostKeyChecking=no" -i "$1" user@ip "bash -s $args" <<'EOF1'
printf -v args '%q ' "$@"
echo "Arguments on ip1 are:"
printf '- %q\n' "$@"
ssh -t -i "$1" user2@ip2 "bash -s $args" <<'EOF2'
echo "Arguments on ip2 are:"
printf '- %q\n' "$@"
EOF2
EOF1
Much simpler is to let ssh handle the tunneling for you.
ssh -o ProxyCommand="ssh user1@ip1 nc -w 10 %h %p" user2@ip2
(This example found at http://undeadly.org/cgi?action=article&sid=20070925181947).
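On OpenSSH 7.3 and later, the same hop can also be written with the ProxyJump option (or its -J shorthand), which saves you from spelling out the nc relay yourself:
ssh -J user1@ip1 user2@ip2
The equivalent ~/.ssh/config form is a Host block for ip2 containing ProxyJump user1@ip1.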

script doesn't see arg in '$ ssh bash script arg'

I'd like to see both commands print hello
$ bash -l -c "/bin/echo hello"
hello
$ ssh example_host bash -l -c /bin/echo hello
$
How can hello be passed as a parameter in the ssh command?
The bash -l -c is needed, so login shell startup scripts are executed.
Getting ssh to start a login shell would solve the problem too.
When you pass extra args after -c, they're put into the argv of the shell while that command is executing. You can see that like so:
bash -l -c '/bin/echo "$0" "$@"' hello world
...so, those arguments aren't put on the command line of echo (unless you go out of your way to make it so), but instead are put on the command line of the shell which you're telling to run echo with no arguments.
That is to say: When you run
bash -l -c /bin/echo hello
...that's the equivalent of this:
(exec -a hello bash -c /bin/echo)
...which puts hello into $0 of a bash which runs only /bin/echo. Since running /bin/echo doesn't look at $0, of course it's not going to print hello.
Now, because executing things via ssh means you're going through two steps of shell expansion, it adds some extra complexity. Fortunately, you can have the shell handle that for you automatically, like so:
printf -v cmd_str '%q ' bash -l -c '/bin/echo "$0" "$@"' hello world
ssh remote_host "$cmd_str"
This tells bash (printf %q is a bash extension, not available in POSIX printf) to quote your command such that it expands to itself when processed by a shell, then feeds the result into ssh.
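If you are curious what the generated string looks like, print it before handing it to ssh; the escaping below is what bash's %q typically produces, though the exact form can vary between versions:
$ printf -v cmd_str '%q ' bash -l -c '/bin/echo "$0" "$@"' hello world
$ printf '%s\n' "$cmd_str"
bash -l -c /bin/echo\ \"\$0\"\ \"\$@\" hello world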
All that said -- treating $0 as a regular parameter is bad practice, and generally shouldn't be done absent a specific and compelling reason. The Right Thing is more like the following:
printf -v cmd '%q ' /bin/echo hello world # your command
printf -v cmd '%q ' bash -l -c "$cmd" # your command, in a login shell
ssh remotehost "$cmd" # your command, in a login shell, in ssh

Shell script calling ssh: how to interpret wildcard on remote server

I work on a certain customer environment on a daily basis, comprised of 5 AIX servers, and sometimes I need to issue a same command on all 5 of them.
So I set up SSH key-based authentication between the servers, and whipped up a little ksh script that broadcasts the command to all of them:
#!/usr/bin/ksh
if [[ $# -eq 0 ]]; then
    print "broadcast.ksh - broadcasts a command to the 5 XXXXXXXX environments and returns output for each"
    print "usage: ./broadcast.ksh command_to_issue"
    exit
fi
set -A CUST_HOSTS aaa bbb ccc ddd eee
for host in ${CUST_HOSTS[@]}; do
    echo "============ $host ================"
    if [[ `uname -n` = $host ]]; then
        $*
        continue
    fi
    ssh $host $*
done
echo "========================================="
echo "Finished"
Now, this works just fine, until I want to use a wildcard on the remote end, something like:
./broadcast.ksh ls -l java*
since the '*' is expanded on the local system as opposed to the remote.
Now, if using ssh remote commands, I can get around this by using single quotes:
ssh user@host ls -l java* <-- will _not_ work as expected, since asterisk will be interpreted locally
ssh user@host 'ls -l java*' <-- _will_ work as expected, since asterisk will be interpreted on the remote end
Now, I have tried to incorporate that into my script, and have tried to create a $command variable made up of the $* contents surrounded by single quotes, but have drowned in a sea of escaping backslashes and concatenation attempts in ksh, to no avail.
I'm sure there's a simple solution to this, but I'm not finding it so thought I would come out and ask.
Thanks,
James
As you found, passing an asterisk as an argument to your script doesn't work because the shell expands it before the arguments are processed. Try double-quoting $* and either escaping asterisks/semi-colons etc with backslashes in your script call, or single quoting the command.
for host in ${CUST_HOSTS[@]}; do
    echo "============ $host ================"
    if [[ `uname -n` = $host ]]; then
        "$*"
        continue
    fi
    ssh $host "$*"
done
$ ./broadcast.ksh ls -l java\*
$ ./broadcast.ksh 'ls -l java*; ls -l *log'
I wanted to comment but I'm still too low on the totem pole; Josh's single-quote suggestion should work.
I spun up a couple of VMs, each with 2 files in /tmp: /tmp/foo1 and /tmp/foo2, then used a variation of your script:
root@jdsdrop1:~# cat foo.sh
#!/usr/bin/ksh
if [[ $# -eq 0 ]]; then
    print "broadcast.ksh - broadcasts a command to the 5 XXXXXXXX environments and returns output for each"
    print "usage: ./broadcast.ksh command_to_issue"
    exit
fi
set -A CUST_HOSTS jdsdropfed1 jdsdropfed2-2
for host in ${CUST_HOSTS[@]}; do
    echo "============ $host ================"
    if [[ `uname -n` = $host ]]; then
        $*
        continue
    fi
    ssh $host $*
done
echo "========================================="
echo "Finished"
root@jdsdrop1:~# ./foo.sh 'ls /tmp/foo*'
============ jdsdropfed1 ================
/tmp/foo1
/tmp/foo2
============ jdsdropfed2-2 ================
/tmp/foo1
/tmp/foo2
=========================================
Finished
root@jdsdrop1:~# ssh jdsdropfed1 "ls /tmp/foo*"
/tmp/foo1
/tmp/foo2
root@jdsdrop1:~# ssh jdsdropfed2-2. "ls /tmp/foo*"
/tmp/foo1
/tmp/foo2
