Capture output of double-ssh (ssh twice) session as Bash variable

I'd like to capture the output of an ssh session. However, I first need to ssh twice (from my local computer to the remote portal to the remote server), then run a command and capture the output.
Doing this line-by-line, I would do:
ssh name@remote.portal.com
ssh remote.server.com
remote.command.sh
I have tried the following:
server=remote.server.com ##define in the script, since it varies
sshoutput=$(ssh -tt name@remote.portal.com exec "ssh -tt ${server} echo \"test\"")
echo $sshoutput
I would expect the above script to echo "test" after the final command. However, the outer ssh prompt just hangs after I enter my command, and once I Ctrl+C or fail to enter my password, the inner ssh session fails (I believe because stdout is no longer printed to the screen, so I never see my password prompt).
If I run just the inner command (i.e., without "sshoutput=$(" to save it as a variable), then it works but (obviously) does not capture output. I have also tried without the "exec".
I have also tried saving the inner ssh as a variable like
sshoutput=$(ssh -tt name@portal myvar=$(ssh -tt ${server} echo \"test\") && echo $myvar)
but that fails because Bash tries to execute the inner ssh before sending it to the outer ssh session (I believe), and the server name is not recognized.
(I have looked at https://unix.stackexchange.com/questions/89428/ssh-twice-in-bash-alias-function but they simply say "more flags required if using interactive passwords" and do not address capturing output)
Thanks in advance for any assistance!

The best-practice approach here is to have ssh itself do the work of jumping through your bouncehost.
result=$(ssh \
-o 'ProxyCommand=ssh name@remote.portal.com nc -w 120 %h %p' \
name@remote.server.com \
"remote.command.sh")
You can automate that in your ~/.ssh/config, like so:
Host remote.server.com
ProxyCommand ssh name@remote.portal.com nc -w 120 %h %p
...after which any ssh remote.server.com command will automatically jump through remote.portal.com. (Change nc to netcat or similar, as appropriate for tools that are installed on the bouncehost).
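On OpenSSH 7.3 or newer you can skip the nc dependency entirely: the -J (ProxyJump) option makes ssh do the same hop natively. A minimal sketch, assuming the same host names as above:
result=$(ssh -J name@remote.portal.com name@remote.server.com "remote.command.sh")
or, equivalently, in ~/.ssh/config:
Host remote.server.com
ProxyJump name@remote.portal.com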
That said, if you really want to do it yourself, you can:
printf -v inner_cmd '%q ' "remote.command.sh"
printf -v outer_cmd '%q ' ssh name@remote.server.com "$inner_cmd"
ssh name@remote.portal.com bash -s <<EOF
$outer_cmd
EOF
...the last piece of which can be run in a command substitution like so:
result=$(ssh name@remote.portal.com bash -s <<EOF
$outer_cmd
EOF
)
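The double printf '%q ' is what makes this robust: each hop's shell strips one layer of quoting, so the command must be escaped once per hop. A quick illustration, using a hypothetical argument containing a space:
printf -v inner_cmd '%q ' echo "hello world"
echo "$inner_cmd"   # prints: echo hello\ world
printf -v outer_cmd '%q ' ssh name@remote.server.com "$inner_cmd"
# outer_cmd now carries a doubly-escaped command that survives both shells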

Related

Bash: get output of sudo command on remote using SSH

I'm getting incredibly frustrated here. I simply want to run a sudo command on a remote SSH connection and perform operations on the results I get locally in my script. I've looked around for close to an hour now and haven't seen anything related to this issue.
When I do:
#!/usr/bin/env bash
OUT=$(ssh username@host "command" 2>&1 )
echo $OUT
Then, I get the expected output in OUT.
Now, when I try to do a sudo command:
#!/usr/bin/env bash
OUT=$(ssh username@host "sudo command" 2>&1 )
echo $OUT
I get "sudo: no tty present and no askpass program specified". Fair enough, I'll use ssh -t.
#!/usr/bin/env bash
OUT=$(ssh -t username@host "sudo command" 2>&1 )
echo $OUT
Then, nothing happens. It hangs, never asking for the sudo password in my terminal. Note that this happens whether or not I send a sudo command: the ssh -t hangs, period.
Alright, let's forget the variable for now and just issue the ssh -t command.
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1
Then, well, it works no problem.
So the issue is that ssh -t inside a variable just doesn't do anything, but I can't figure out why or how to make it work for the life of me. Anyone with a suggestion?
The prompt is most likely being swallowed by the command substitution: with -t, the remote sudo writes its password prompt to the allocated tty, ssh forwards it to your local stdout, and $( ) captures it, so you never see it. If your script is rather concise, you could consider this:
#!/usr/bin/env bash
ssh -t username@host "sudo command" 2>&1 \
| ( \
read output
# do something with $output, e.g.
echo "$output"
)
For more information, consider this: https://stackoverflow.com/a/15170225/10470287
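Alternatively, a hedged sketch assuming you can supply the password from the script: sudo -S reads the password from standard input instead of a tty, which sidesteps the captured prompt entirely (here "command" is a stand-in for your actual command):
#!/usr/bin/env bash
read -rs -p "sudo password: " pw; echo
# no -t needed: sudo -S takes the password from the pipe, not a tty
OUT=$(printf '%s\n' "$pw" | ssh username@host "sudo -S command" 2>&1)
echo "$OUT"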

Redirecting an output of command executed using Plink on double-hop SSH session

From my Windows system, I'm connecting via SSH to a remote system [remote1], and then connecting to another remote system [remote2] which remote1 has connectivity to, but my Windows system doesn't.
Here's an example that is working;
plink -ssh -pw password -batch root@remote1 ssh remote2 "sed -i 's/param=.*/param=newValue/' /root/test.txt"
This routine connects to remote1 via Plink, then connects to remote2 via ssh, then checks for a string param= and if it exists, replaces it with param=newValue. Again, this is working.
Here's what isn't working;
plink -ssh -pw password -batch root@remote1 ssh remote2 "grep -q -F 'param=newValue' /root/test.txt || echo 'export param=newValue' >> /root/test.txt"
This routine connects in the same way to remote1 and remote2, and then searches for param=newValue and if it doesn't exist, appends param=newValue to the end of the file. When I run this on Windows command line, it takes a couple seconds then exits with no errors, but the test.txt file is unchanged.
If I connect to remote1 using PuTTY and then run the same command starting from ssh remote2 "grep ..., then it does append to the test.txt file.
I've tried escaping both | and >, but neither worked.
I've determined that the second half of the command is the part that is failing.
echo 'export param=newValue' >> /root/test.txt
More specifically, it appears to be the redirect portion, as I'm able to echo to the console when I remove the redirect.
I'd say that the double quotes are lost at some early stage (when executing Plink already), making the redirection happen on the first server already.
Consider providing the command to Plink using a different method.
either using the -m switch (recommended):
plink -ssh -pw password -batch root@remote1 -m command.txt
with command.txt containing:
ssh remote2 "grep -q -F 'param=newValue' /root/test.txt || echo 'export param=newValue' >> /root/test.txt"
or using an input redirection:
plink -ssh -pw password -batch -T root@remote1 < command.txt
Note the -T switch, which ensures that the shell is started in non-interactive mode - the same mode used when the command is specified on the command line (as you originally wanted) or with the -m switch (as above).
Normally, when you provide a command using an input redirection, an interactive shell session is started.
Even with the -T switch, the command is still executed using a "shell" SSH channel, as opposed to the "exec" SSH channel used when the command is provided on the command line or via the -m switch, so you may still see some differences.
Alternatively, you can store the command in a shell script on either of the servers.
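For instance, a hedged sketch of that last option, assuming a hypothetical /root/update_param.sh on remote1 containing the ssh remote2 "grep ..." line shown above:
plink -ssh -pw password -batch root@remote1 sh /root/update_param.sh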

How to copy echo 'x' to file during an ssh connection

I have a script which starts an SSH connection. The variable $SSH starts the connection, so $SSH hostname gives the hostname of the host I SSH into. Now I try to echo something and copy the output of the echo to a file.
SSH="ssh -tt -i key.pem user#ec2-instance"
When I perform a manual ssh to the host and perform:
sudo sh -c "echo 'DEVS=/dev/xvdbb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Using tee also works fine after performing an ssh manually, but does not seem to work after the ssh in script.sh.
The echo command after the ssh in the script is happening on my real host (the one I run the script from), not the host I SSH into. So the file on my real host is being changed, not the file on the host I've SSHed to.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need on the server is the writing though, so if you don't have password prompts and such you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
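Note that tee overwrites the target file; since your first command used >>, use tee -a if you meant to append:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee -a /etc/sysconfig/docker-storage-setup'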

open gnome terminal tabs programmatically and execute commands in sequence

When working remotely, I have a series of tabs that I open in gnome-terminal, and commands that I execute in them. I would like to automate all this setup as a single command.
If these commands could run independently and in parallel, I'd just adapt the answer to this question. In fact, I tried, using the following shell script:
gnome-terminal --working-directory="/home/superelectric" --tab -t "gate" -e 'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' --tab -t "mydesktop" -e 'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
Spread out over multiple lines, for readability:
gnome-terminal \
--working-directory="/home/superelectric" \
--tab \
-t "gate" \
-e \
'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' \
--tab \
-t "mydesktop" \
-e \
'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
The first part opens a tab, names it 'gate', and executes 'ssh gate_tunnel' within it. This is an ssh alias that opens a tunnel to 'mydesktop' at school, through the school's outward-facing server, 'gate'.
The second part opens another tab, names it 'mydesktop', and executes 'ssh tunneled_mydesktop' within it. This is another ssh alias, which connects to mydesktop through the tunnel.
~/.ssh/config:
Host gate_tunnel
LocalForward 8023 <my_desktop_at_school>:22
HostName <my_school_server>
That's the theory. In practice, the two commands execute in parallel, whereas I need to ensure that the first tab's command (open tunnel) completes before executing the second tab's command (connect through tunnel).
Is there maybe some command I can execute in the second tab, that 'waits' until the ssh tunnel is opened?
Ok, I think I get it. As I mentioned in the comments, the first thing that comes to mind for reaching your school desktop from the outside is to ssh into the school gate and from there ssh into your desktop with something like:
$ ssh -t gate.school.edu ssh desktop_name
There's only one tab then, so your problem doesn't exist.
However there's something very cool with your current setup:
From home it's almost as if you had a direct connection to your desktop machine, so you can scp into it directly and forget about gate. With the solution above that's not possible anymore because we end up with an indirect connection: If you want to scp you have to do it from gate and that sucks.
Check out this article on using ssh's ProxyCommand feature:
Transparent Multi-hop SSH
You get the best of both worlds then :)
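For reference, a minimal sketch of what that article sets up, reusing the placeholders from your ~/.ssh/config (the alias name mydesktop is hypothetical, and nc must exist on the gate):
Host mydesktop
HostName <my_desktop_at_school>
ProxyCommand ssh <my_school_server> nc %h %p
...after which both ssh mydesktop and scp file mydesktop: connect through gate transparently.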
Hmm... this may not be a perfect solution. Ideally you should use something that monitors the ssh connection, but you can check for the ssh process with ps and wait for the ssh command to come alive.
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do # try 10 times
  if ps aux | grep -v grep | grep -q "<my_desktop_at_school>"; then
    # the tunnel is connected; now execute the second command
    bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"
    break
  fi
  sleep 10 # sleep for 10 seconds and try again
  let COUNTER=COUNTER+1
done
You will have to run this script in the second tab.
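A lighter-weight check than grepping ps is to probe the forwarded port itself; a hedged sketch, assuming the tunnel forwards local port 8023 as in the ~/.ssh/config above and a netcat that supports -z:
until nc -z localhost 8023 2>/dev/null; do
  sleep 1 # wait for the tunnel's listening port to open
done
ssh tunneled_mydesktop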
Hope it helps.

Connect to multiple ssh connections through scripts

I have been trying to automatically enter a ssh connection using a script. This previous SOF post has helped me so far. Using one connection works (the first ssh statement). However, I want to create another ssh connection once connected, which I thought could look like this:
#! /bin/bash
# My ssh script
sshpass -p "MY_PASSWORD1" ssh -o StrictHostKeyChecking=no *my_hostname_1*
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
When running the script, I get only connected to the my_hostname_1 and the second ssh command is not run until I exit the first ssh connection.
I've tried using an if statement like this:
if [ "$HOSTNAME" = my_host_name_1 ]; then
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
fi
but I can't get any commands to be read until I exit the first connection.
Here is a ProxyCommand example as suggested by @lihao:
#!/bin/bash
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no \
-o ProxyCommand='sshpass -p "MY_PASSWORD1" ssh my_hostname_1 netcat -w 1 %h %p' \
my_hostname_2
You are proxying through the first host to get to the second. This assumes you have netcat installed on my_hostname_1, where the proxy command runs. If not, you'll need to install it.
You can also set this up in your ~/.ssh/config file so you don't need the proxy stuff on the command line:
Host my_hostname_1
HostName my_hostname_1
Host my_hostname_2
HostName my_hostname_2
ProxyCommand ssh my_hostname_1 netcat -w 1 %h %p
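If netcat is not available on my_hostname_1, recent OpenSSH can forward the connection itself with -W, so no extra tool is needed on the proxy host; a sketch of the same config:
Host my_hostname_2
HostName my_hostname_2
ProxyCommand ssh -W %h:%p my_hostname_1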
However, this is a little trickier with the password handling. While you could put the sshpass here, it's not a great idea to have passwords in plain text. Using key-based authentication might be better.
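A hedged sketch of that switch, using OpenSSH's standard tooling:
ssh-keygen -t ed25519       # generate a key pair; accept the defaults
ssh-copy-id my_hostname_1   # install the public key on the first host
ssh-copy-id my_hostname_2   # then on the second, through the proxy
After this, both ssh commands authenticate without sshpass.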
A Bash script is a sequence of commands.
echo moo
echo bar
will run echo moo and wait for it to complete, then run the next command.
You can run a remote command like this:
ssh remote echo moo
which will connect to remote, run the command, and exit. If there are additional commands in the script file after this, the shell which is executing these commands will continue with the next one, obviously on the host where you started the script.
To connect to one host from another, you could in principle do
ssh host1 ssh host2
but the proxy command suggested by @zerodiff improves on several aspects of the experience.