SSH Remote Forwarding - Send to Background & Save Output as Variable - ssh-tunnel

I'm working on a bash script that connects via SSH to a server running sish (https://github.com/antoniomika/sish). This essentially creates a port forward on the internet, like ngrok, using only SSH. Here is what happens during normal usage.
The command:
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10
The response:
Starting SSH Forwarding service for tcp:5900. Forwarded connections can be accessed via the following methods:
TCP: tun.domain.tld:43345
Now I need to send the ssh command to the background and capture the server's response in a variable, so that I can grab the port sish has assigned and send it somewhere (probably a webhook). I've tried a few things, like using -f and piping to a file or a named pipe and trying to cat it, but the piping never works: the file gets created, yet it's always empty. Any assistance would be greatly appreciated.

If you're running a single instance of sish (and a single instance of the tunnel you're attempting to define), you can actually have sish bind the specific port you want (in this case 5900).
You just set the --bind-random-ports=false flag on your server command in order to tell sish that it's okay to not use random ports.
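For example, reusing the command from the question (a minimal sketch; how you actually launch the sish binary will differ on your setup):
# On the sish server: let clients bind the port they request
sish --bind-random-ports=false
# On the client: the forward is now reachable at tun.domain.tld:5900
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10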
If you don't want to do this (or you have multiple clients that will expose this same port), you can use a simple script like the following:
#!/bin/bash
ADDR=""
# Start the tunnel. Use a phony command to tell ssh to clean the output.
exec 3< <(ssh -R 5900:localhost:5900 tun.domain.tld foobar 2>&1 | grep --line-buffered TCP | awk '{print $2; system("")}')
# Get our buffered output that is now just the address sish has given to us.
for i in 1; do
    read <&3 line;
    ADDR="$line"
done
# Here is where you'd call the webhook
echo "Do something with $ADDR"
# If you want the ssh command to continue running in the background,
# you can omit the following. This waits for the ssh command to exit,
# or kills it when this script dies.
PIDS=($(pgrep -P $(pgrep -P $$)))
function killssh() {
    kill ${PIDS[0]}
}
trap killssh EXIT
while kill -0 ${PIDS[0]} 2> /dev/null; do sleep 1; done;
sish also has an admin API that you can scrape; the details are in the sish documentation.
References: I build and maintain sish and use it myself (as well as a similar type of script).

Related

Remote server array args in for loop

I need to pass values from an array to a script on a remote host. The remote script creates a file for each array value.
Yes, I can do it like this:
for i in ${LIST[@]}
do ssh root@${servers} bash "/home/test.sh" "$i"
done
but this is rather slow, since it opens a new ssh session for every array value. I tried
ssh root@${servers} bash "/home/test.sh" "${LIST[@]}"
but with this I get an error:
bash: line 1338: command not found
How can I do it?
Use the connection-sharing feature of ssh so that you only have a single, preauthenticated connection that is used by each ssh process in your loop.
# This is the socket all of the following ssh processes will use
# to establish a connection to the remote host.
socket=~/.ssh/ssh_mux
# This starts a background process that does nothing except keep the
# authenticated connection open on the specified socket file.
ssh -N -M -o ControlPath="$socket" root@${servers} &
# Each ssh process should start much more quickly, as it doesn't have to
# go through the authentication protocol each time.
for i in "${LIST[#]}"; do
# This uses the existing connection to avoid having to authenticate again
ssh -o ControlPath="$socket" root@${servers} "bash /home/test.sh '$i'"
# The above command is still a little fragile, as it assumes that $i
# doesn't have a single quote in its name.
done
# This closes the connection master
ssh -o ControlPath="$socket" -O exit root#{servers}
The alternative is to try to move your loop into the remote command, but this is fragile: the array isn't defined on the remote host, and there is no simple way to transfer each element in a way that protects it from the remote shell. If you weren't concerned about word-splitting, you could use
ssh root#${servers} "for i in ${LIST[*]}; do bash /home/test.sh \$i; done"
but then you probably wouldn't be using an array in the first place.
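If word-splitting does matter, one hedged workaround is to shell-quote every element locally with printf %q before building the remote loop (a sketch, not guaranteed against every corner case):
# Each element is quoted so the remote bash sees it as a single word
quoted=$(printf '%q ' "${LIST[@]}")
ssh root@${servers} "for i in $quoted; do bash /home/test.sh \"\$i\"; done"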

Capturing ssh output in bash script while backgrounding connection

I have a loop that will connect to a server via ssh to execute a command. I want to save the output of that command.
o=$(ssh $s "$#")
This works fine, and I can then do what I need with the output. However, I have a lot of servers to run this against, and I'm trying to speed up the process by backgrounding the ssh connection, basically doing all of the requests at once. If I wasn't saving the output I could do something like
ssh $s "$#" &
and this works fine
I haven't been able to get the correct combination to do both.
o=$(ssh $s "$#")&
This doesn't give me any output. Other combinations I've tried appear to try to execute the output. Suggestions?
Thanks!
A process going to the background gets its own copies of the file descriptors. The stdout (o=..) will not be available in the calling process. However, you can bind the stdout to a file and access the file.
ssh $s "$#" >outfile &
wait
o=$(cat outfile)
If you don't like files, you could also use named pipes. This way the 'wait' is done by the 'cat' command. The pipe can be reused and consumes no space on the disk.
mkfifo testpipe
ssh $s "$#" >testpipe &
o=$(cat testpipe)
I would just use a temporary file. You can't set a variable in a background process and access it from the shell that started it.
ssh "$s" "$#" > output.txt & ssh_pid=$!
...
wait "$ssh_pid"
o=$(<output.txt)
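To run against many servers at once, one approach (a sketch assuming the servers are held in an array called servers, which isn't in the original question) is to give every backgrounded ssh its own output file and collect the results after wait:
declare -A pids
for s in "${servers[@]}"; do
    # -n keeps the backgrounded ssh from reading this script's stdin
    ssh -n "$s" "$@" > "output.$s" 2>&1 &
    pids[$s]=$!
done

for s in "${servers[@]}"; do
    wait "${pids[$s]}"
    o=$(<"output.$s")
    printf '=== %s ===\n%s\n' "$s" "$o"
done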

IFS read not getting executed completely when using commands over remote in linux

I am reading a file in a script using the method below and storing each line in myArray:
while IFS=$'\t' read -r -a myArray
do
    "do something"
done < file.txt
echo "ALL DONE"
Now in the "do something" area I am using some commands over ssh
ssh user@$SERVER "some command"
But the issue is that after executing this for the first line of file.txt, the script stops reading the file and skips to the next step, i.e. I get the output
ALL DONE
But if I use local commands instead of commands over ssh, the script runs fine. I am not sure why this is happening. Can someone please suggest what I need to do?
You'll have to give the -n flag to ssh. From the manpage:
-n Redirects stdin from /dev/null (actually, prevents reading from
stdin). This must be used when ssh is run in the background. A
common trick is to use this to run X11 programs on a remote
machine. For example, ssh -n shadows.cs.hut.fi emacs & will
start an emacs on shadows.cs.hut.fi, and the X11 connection will
be automatically forwarded over an encrypted channel. The ssh
program will be put in the background. (This does not work if
ssh needs to ask for a password or passphrase; see also the -f
option.)
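Applied to the loop from the question, the fix looks roughly like this:
while IFS=$'\t' read -r -a myArray
do
    # -n stops ssh from consuming the rest of file.txt on stdin
    ssh -n user@$SERVER "some command"
done < file.txt
echo "ALL DONE"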

BASH: keep connection alive

I have the following scenario:
I use netcat to connect to a host running a telnet server on port 23, log in with the provided username and password, and issue some commands, after which I need to do a fairly complex analysis of the output. Naturally, expect comes to mind, with a script like this:
spawn nc host 23
send "user\r"
send "pass\r"
send "command\r"
expect eof
Then it is executed with expect example.scr >output.log, so the output file can be parsed. The parser is 150+ lines of bash code that executes in under 2 seconds and decides what command should be executed next. It then replaces "command" with "command2" and executes the expect script again, like this:
sed -i 's/send "command\r"/send "command2\r"/' example.scr
expect example.scr >output.log
Obviously, there is no need to re-establish the telnet connection and go through the login process all over again just to issue a single telnet command after 2 seconds of processing. The conclusion is that the telnet session should be kept alive as a background process, so one can freely talk to it at any given time. Naturally, named pipes come to mind:
mkfifo in
mkfifo output.log
cat in | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" >in
cat output.log
After the file is written to, EOF causes the named pipe to close, terminating the telnet session. I was thinking about what kind of eternal process could be piped to netcat so it could be used as a telnet relay to the host. I came up with a very silly idea, but it works:
nc -k -l 666 | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" | nc localhost 666
cat output.log
The netcat server is started with -k (keep listening), bound to port 666, and any data stream is redirected to the netcat telnet client connected to the host, while the entire conversation is dumped to output.log. One can now echo telnet commands to nc localhost 666 and read the result from output.log.
Keep in mind that the expect script can easily be modified to accommodate SSH and even a serial console connection, just by spawning ssh or socat instead of netcat. I never liked expect because it forces the use of another scripting language within bash, requires the tcl libraries, and needs to be compiled for embedded platforms, whereas netcat is part of busybox and readily available everywhere.
So, the question is - could this be done in a simpler way? I'd put my bet on having some sort of link between console and TCP socket. Any suggestions are appreciated.
How about using a file descriptor?
exec 3<>/dev/tcp/host/port
while true; do
    echo -e "user\npass\ncommand" >&3
    read_response_generate_next_command <&3 >&3
    # if no more commands, break
done
exec 3>&-
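To make that a bit more concrete, here is a rough sketch of the same idea applied to the telnet session from the question. The host, credentials, and the send_and_capture helper are made up for illustration, and read -t is used so reading stops once the server has gone quiet:
exec 3<>/dev/tcp/host/23

# Log in once; the connection stays open between commands
printf 'user\r\npass\r\n' >&3

send_and_capture() {
    printf '%s\r\n' "$1" >&3
    local line out=""
    # Collect the reply until the server has been idle for 2 seconds
    while IFS= read -r -t 2 line <&3; do
        out+="$line"$'\n'
    done
    printf '%s' "$out"
}

output=$(send_and_capture "command")
# ...parse $output and decide what to send next...
output=$(send_and_capture "command2")

exec 3>&-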

Loop broken by ssh running script aside

I have a couple of machines on which I need to update a script. I can do this with a small bash script on my side, which consists of one while loop that reads IPs from a list and calls scp for each of them. That works fine, but when I try to also run the updated script inside the loop, it breaks the loop, even though the script itself runs fine.
#!/bin/bash
cat ip_list.txt | while read i; do
    echo ${i}
    scp the_script root@${i}:/usr/sbin/ # works ok
    ssh root@${i} /usr/sbin/the_script # works for a first IP, then breaks
done
Is this how it's supposed to work? If so, how can I run a script remotely via ssh without breaking the loop?
Use this:
ssh -n root@${i} /usr/sbin/the_script
The -n option tells ssh not to read from stdin. Otherwise, it reads stdin and passes it through to the network connection, and this consumes the rest of the input pipe.
You need to change the ssh line like this
ssh root@${i} /usr/sbin/the_script < /dev/null
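Putting either fix into the original script gives something like this (shown with -n; the /dev/null redirection works the same way):
#!/bin/bash
cat ip_list.txt | while read -r i; do
    echo "${i}"
    scp the_script root@${i}:/usr/sbin/
    # -n keeps ssh from swallowing the rest of ip_list.txt from the pipe
    ssh -n root@${i} /usr/sbin/the_script
done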
