I've got an OpenWRT router from which I'm trying to keep a persistent reverse ssh tunnel to an Amazon AWS server. The issue is that my ISP changes my public IP, so in order to ssh to the server I have to use port knocking to keep the ssh port hidden from every other IP. I've written a bash script to make the connection. And no, I don't want to use autossh; it doesn't support specifying a key path.
What works:
Anyway, this script is located at /root/scripts/autosshtoaws.sh. I can run it fine from the terminal with the command /root/scripts/autosshtoaws.sh, and it goes into the background just fine. If I manually disconnect my internet line, sshd on the AWS server is configured to kill the socket AND my router is configured to kill the socket. It then attempts to reconnect like it's supposed to.
The problem:
I've got this running as an init.d service, and it starts well after the network does. When I reboot the router and look at netstat, it shows multiple connection attempts, as if the script is retrying over and over.
Here is the script:
#!/bin/bash
PATH=/usr/sbin:/sbin:/usr/bin:/bin

# knock the ports, pause briefly, then open the reverse tunnel
ssh_command="/usr/bin/netcat -z MYMACHINE.amazonaws.com 777 611 501; sleep 2; /usr/bin/ssh -i /root/.ssh/aws_key.pem -R 8080:192.168.0.99:7070 ubuntu@MYMACHINE.amazonaws.com"

while true; do
    # only start a new connection if no matching process shows up in ps
    # (sed '$ d' drops the last line of the ps|grep output)
    if [[ -z $(ps | grep "$ssh_command" | sed '$ d') ]]
    then eval $ssh_command
    else sleep 60
    fi
done
This is what the netstat command shows:
tcp 0 0 1.1.1.1:54926 2.2.2.2:22 ESTABLISHED
tcp 0 0 1.1.1.1:54312 2.2.2.2:22 TIME_WAIT
tcp 0 0 1.1.1.1:54760 2.2.2.2:22 TIME_WAIT
tcp 0 0 1.1.1.1:54700 2.2.2.2:22 TIME_WAIT
tcp 0 0 1.1.1.1:54636 2.2.2.2:22 TIME_WAIT
Here is the service configuration:
#!/bin/sh /etc/rc.common
USE_PROCD=1
START=95
STOP=01

start_service() {
    procd_open_instance
    procd_set_param command /bin/sh "/root/scripts/autostartsshtoaws.sh"
    procd_close_instance
}
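Not part of the original setup, but for comparison: procd can respawn a dying command by itself, so a rough sketch like the one below would hold the tunnel without the outer while loop. It reuses the knock ports, key path and forward from the script above; the -N flag, the ExitOnForwardFailure and ServerAliveInterval options, and the respawn values are additions, not taken from the post.
#!/bin/sh /etc/rc.common
# Sketch only: same knock-then-tunnel sequence, but procd does the restarting.
USE_PROCD=1
START=95
STOP=01

start_service() {
    procd_open_instance
    # knock, pause, then hold the tunnel open; procd restarts the whole
    # command whenever it exits
    procd_set_param command /bin/sh -c "/usr/bin/netcat -z MYMACHINE.amazonaws.com 777 611 501; sleep 2; exec /usr/bin/ssh -i /root/.ssh/aws_key.pem -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=60 -R 8080:192.168.0.99:7070 ubuntu@MYMACHINE.amazonaws.com"
    # respawn: threshold (s), retry delay (s), retries (0 = keep trying forever)
    procd_set_param respawn 3600 5 0
    procd_close_instance
}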
I have a problem with connections to a Docker container from outside my network. iptables hasn't worked for me yet (see this question).
The container is published on host port 9010, which maps to its port 443:
docker run -d [some other configs] --restart=always -p 9010:443 -p 9010:443/udp xxx/myImage
and I cannot see the client connections to it on my host:
root@ubuntu:~# netstat -anlp | grep ESTABLISHED
tcp 0 64 88.99.126.173:22 191.96.180.79:2543 ESTABLISHED 20855/sshd: root@pt
However, I can see it inside the container:
root@ubuntu:~# docker exec -it b7772c4d43dc /bin/bash -c 'netstat -anlp | grep ESTABLISHED'
tcp 0 0 ::ffff:172.17.0.2:443 ::ffff:191.96.180.79:3298 ESTABLISHED 7/python
Now I want to close this connection in that container, but since the container is built from the python:3.6-alpine image, it has no useful commands such as tcpkill. How can I close this connection from bash on my host?
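Not from the original post, but since the container shares the host's kernel, one possible approach is to enter the container's network namespace from the host and destroy the socket there with ss -K. This assumes iproute2's ss with the -K option and a kernel built with CONFIG_INET_DIAG_DESTROY; the container ID and client address below are the ones shown above.
# PID of the container's main process, as seen from the host
CPID=$(docker inspect -f '{{.State.Pid}}' b7772c4d43dc)

# run ss in that container's network namespace and kill the matching connection
nsenter -t "$CPID" -n ss -K dst 191.96.180.79 dport = :3298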
I use the following script to open ssh tunnels to a bunch of servers, always forwarding the MySQL, Redis and ssh ports of each.
I am doing this while on the company VPN, but I had the same problem back in the days when we worked in the office.
Usually I start the script and use the opened connections with other tools like Sequel Pro or PhpStorm to connect to the web servers or databases. Ideally it would just run until I don't need it any more; then I would exit the jump server and the connections would close. That is fine as long as I don't lose the connection and get kicked out of the jump server.
#!/bin/bash
username="my-user"
jumpServer="my.bastionserver.net"
hosts=("my.awsserver1.com" "my.awsserver2.com" "my.awsserver3.com")

destMysqlPort=3306
destSshPort=22
destRedisPort=6379

x=10001
y=10002
z=10003

for i in "${hosts[@]}"; do
    server=$i
    sshTunnel="$sshTunnel -L $x:$server:$destMysqlPort -L $y:$server:$destSshPort -L $z:$server:$destRedisPort"
    echo "Server: $server -- MYSQL: $x -- SSH: $y-- Redis: $z"
    x=$((x + 3))
    y=$((y + 3))
    z=$((z + 3))
done

if [ -z "$sshTunnel" ]
then
    echo "ssh tunnels are empty"
else
    ssh -i ~/.ssh/aws $sshTunnel $username@$jumpServer
fi
The output is as follows:
$ ./awstunnel.sh
Server: my.awsserver1.com -- MYSQL: 10001 -- SSH: 10002-- Redis: 10003
Server: my.awsserver1.com -- MYSQL: 10004 -- SSH: 10005-- Redis: 10006
Server: my.awsserver1.com -- MYSQL: 10007 -- SSH: 10008-- Redis: 10009
[...]
When I try to connect again via this script, I get messages saying that the address is already in use:
bind [127.0.0.1]:10002: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 10002
bind [127.0.0.1]:10005: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 10005
[...]
How can I change the script so that I can start it again right away, without having to wait quite some time until the connections through this tunnel really close?
I work from a Mac, and the jump server is a Linux server on which I should not change settings.
Just a little hint:
To get the PID of the last backgrounded command, use:
echo "$!"
So what you can do is store the PID right after each ssh command, for example:
# Store the PID of the last command in a variable named sshPid:
sshPid=$!
and when you are done just kill the corresponding PID with:
kill ${sshPid}
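Put together with your script, a rough sketch could look like this (here the -N flag and the trap are additions for illustration, not part of your original script):
# run the tunnel in the background and remember its PID
ssh -N -i ~/.ssh/aws $sshTunnel "$username@$jumpServer" &
sshPid=$!

# kill the tunnel when the script exits (Ctrl-C included), freeing the ports
trap 'kill "$sshPid" 2>/dev/null' EXIT

# block here until the tunnel ends or the script is interrupted
wait "$sshPid"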
Tell me if that worked for you :p
Bguess
I have an HTTP server listening on port 8000, started like this:
orange@orange:~$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
As is well known, there are several versions of netcat, but for various reasons I can only use the following version:
root@orange:~# busybox nc
BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.2) multi-call binary.
Usage: nc [-iN] [-wN] [-l] [-p PORT] [-f FILE|IPADDR PORT] [-e PROG]
Open a pipe to IP:PORT or FILE
-l Listen mode, for inbound connects
(use -ll with -e for persistent server)
-p PORT Local port
-w SEC Connect timeout
-i SEC Delay interval for lines sent
-f FILE Use file (ala /dev/ttyS0) instead of network
-e PROG Run PROG after connect
This means only the parameters above can be used.
I did the following:
root@orange:~# rm -f /tmp/backpipe && mkfifo /tmp/backpipe && cat /tmp/backpipe | busybox nc 127.0.0.1 8000 | busybox nc -l -p 80 > /tmp/backpipe
The aim is that when a user visits http://127.0.0.1:80, the request is automatically forwarded to http://127.0.0.1:8000, so the content served by the Python SimpleHTTPServer is returned to the user.
Finally, I launch a test client:
orange@orange:~$ wget http://127.0.0.1
--2019-06-26 22:47:25-- http://127.0.0.1/
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1378 (1.3K) [text/html]
Saving to: ‘index.html’
index.html 100%[==============================================>] 1.35K --.-KB/s in 0s
2019-06-26 22:47:25 (505 MB/s) - ‘index.html’ saved [1378/1378]
All of the above works, but going back to the port-forwarding command, I found that it had exited, so it no longer accepts a second connection.
So my question is: with the above busybox netcat, how can I make this port-forwarding command keep running after the first connection?
NOTE: I don't want a solution that wraps this in a loop; I just want a way to do the port forwarding with the above netcat so that it keeps serving after the first connection.
You may be using an older version of busybox. Current versions come with the -lk flag, which lets the server persist beyond a single connection.
To accomplish what you want, you can do something like this:
busybox nc -lk -p 80 -e busybox nc localhost 8000
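If your busybox build doesn't have -lk, the help text you pasted suggests -ll together with -e for a persistent server, so something along these lines might also work (untested sketch based on that usage text):
# -ll keeps listening after each connection; -e runs a fresh forwarder per client
busybox nc -ll -p 80 -e busybox nc 127.0.0.1 8000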
I have a TCP server that prints out some logging information. Normally I dump this logging to the terminal with:
nc -v 192.168.0.42 7777
Or I dump the logging to a file using
nc -v 192.168.0.42 7777 >> log.log
Sometimes the server reboots. The connection to the client is then dropped and the logging to log.log stops.
So I ask: how can I reconnect to the TCP server automatically?
I tried this:
while true; do nc -v 192.168.0.42 7777; done
But this does not work. If the server reboots, nc does not notice that the connection has gone dead.
If you would like to reproduce this scenario, open a server terminal with nc -l 7777, run the commands above in a second terminal, and then terminate the server with Ctrl-C.
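For what it's worth (this is not part of the question): plain nc has no way of noticing a silently dead peer, so one possible workaround, assuming socat is available, is to enable TCP keepalives so the client drops the stale connection and the loop can reconnect:
while true; do
    # keepalive probes (after 10 s idle, every 5 s, 3 tries) make the client
    # notice a dead server, so socat exits and the loop reconnects
    socat -u TCP:192.168.0.42:7777,keepalive,keepidle=10,keepintvl=5,keepcnt=3 STDOUT >> log.log
    sleep 1
done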
I have a bash script that uses ssh to set up a remote forward, so that I can log in to the machine running the script. I use a script because the network periodically needs some sort of authentication. If ssh fails, I retry. The script is basically this:
while [ 1 ]; do
    authentication
    ssh -N -v -R 9999:localhost:22 user@$remote_ip
done
The problem is that ssh won't exit when the remote forward fails, as shown below:
debug1: Remote: Forwarding listen address "localhost" overridden by server GatewayPorts
debug1: remote forward failure for: listen 9999, connect localhost:22
Warning: remote port forwarding failed for listen port 9999
debug1: All remote forwarding requests processed
The failure is due to this:
debug1: server_input_global_request: tcpip-forward listen localhost port 9999
debug1: Local forwarding listening on 0.0.0.0 port 9999.
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 9999
The previous session hasn't ended yet, and the port is still in use.
Is there a way to check whether ssh succeeded?
And as a programmer, I really can't understand why it's designed this way. What's the rationale behind this design?
You can pass the configuration parameter ExitOnForwardFailure to tell the ssh client to terminate the connection if it cannot set up port forwarding.
ExitOnForwardFailure
Specifies whether ssh(1) should terminate the connection if it cannot set up all requested dynamic, tunnel, local, and remote port forwardings. The argument must be ''yes'' or ''no''. The default is ''no''.
Something like:
ssh -N -v -R 9999:localhost:22 user@$remote_ip -o ExitOnForwardFailure=yes
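Applied to the loop from the question, that could look roughly like this (the ServerAlive* options and the sleep are additions, so the client also notices a dead link and the stale forward on the server side disappears sooner):
while true; do
    authentication
    ssh -N -R 9999:localhost:22 \
        -o ExitOnForwardFailure=yes \
        -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
        user@$remote_ip
    sleep 5   # brief pause before the next attempt
done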
Your ssh session times out on the server side only after a while, so when you try to reconnect, the previous connection is still listening on port 9999.
Use OpenVPN instead of this hacky solution.