bash script to log/print all connections in the last N seconds

I'd like to run a script in the background that prints, every X seconds, something like:
tcp 0 0 localhost:5555 localhost:47824 ESTABLISHED
tcp 0 0 localhost:47824 localhost:5555 ESTABLISHED
even if the connection has already been closed, so I will see which connections have been opened (no matter whether closed or still open) in that timespan.
I did this to tail the output to a temp file:
#!/bin/sh
#dump netstat result of established connections to a temp file
(netstat -c | grep --line-buffered ESTABLISHED >> 1.tmp &)
while true
do
#copy the file with the dump and flush it, something like log rotation
#THE PROBLEM IS HERE######
cp 1.tmp 2.tmp && rm 1.tmp
##########################
#add a prefix on each line to recognize the logs
sed -i -e 's/^/Connections: /' 2.tmp
#print the logs to standard output, only one entry for each connection
cat 2.tmp | uniq
sleep 10
done
Obviously this is not the right way: deleting the file doesn't make the shell create a fresh one, as I thought it would; the redirection keeps writing to the now-unlinked file, so the output is lost.
I could use the head command on netstat's output to stop it every X lines and switch files, but I'm not sure whether that would lose some info on a very fast connection/disconnection.
Any other reliable way to log connections is welcome. Keep in mind I don't have root privileges, but I only need to know the connections made by the user that also runs this script, so that's fine.
EDIT: I am not happy with the solution I just posted, as netstat polls by default only every second, and some connections can be far quicker, and I need a way to get ALL outbound connections made by the user running this script (remember, no root privileges). At the moment I'm trying netstat -c 0.1 but I still cannot trust it.

This is the working script with the suggestion from @vlp plus a fix to get unique results: uniq alone was not enough, I need to sort the lines to get unique ones.
#!/bin/sh
#dump netstat result of established connections to a temp file
(netstat -c | grep --line-buffered ESTABLISHED >> 1.tmp &)
while true
do
#copy the content of 1.tmp to 2.tmp, then truncate 1.tmp
cat 1.tmp > 2.tmp && > 1.tmp
#add a prefix on each line to recognize the logs
sed -i -e 's/^/Connections: /' 2.tmp
#print the logs to standard output, only one entry for each connection
cat 2.tmp | sort -u
#delete the file
rm 2.tmp
sleep 10
done
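If netstat's one-second refresh is still too coarse, a similar loop built on iproute2's ss can poll more often. This is only a sketch with the same caveats (a connection that opens and closes entirely between two polls can still be missed); it assumes ss with the -t/-n/-H flags and the state filter, and a sleep that accepts fractional seconds:
#!/bin/sh
#poll established TCP connections roughly every 100 ms
(while true; do ss -tnH state established >> 1.tmp; sleep 0.1; done &)
while true
do
cat 1.tmp > 2.tmp && > 1.tmp
sort -u 2.tmp | sed 's/^/Connections: /'
rm 2.tmp
sleep 10
done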

As an alternative approach, you might consider inserting iptables rules to log connection establishment, and having your script parse the resulting logs. This may require more work, but it should catch short-lived connections, and could (if you want) also log refused connections and more.
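For reference, such a rule could look like the sketch below. This is an illustration only: it requires root (which the asker lacks), and the log prefix and kernel-log location are just examples:
#log every outbound TCP connection attempt (SYN packet) via the kernel log
iptables -I OUTPUT -p tcp --syn -j LOG --log-prefix "OUT-CONN: "
#entries can then be pulled out of the kernel log, e.g.
grep 'OUT-CONN: ' /var/log/kern.log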

Related

How do you close netcat connection after receipt of data?

A very simple use of netcat is below:
#!/bin/sh
#sender: serve the base64-encoded file on port 1234
echo "hello world" > original.txt
base64 original.txt > encoded.txt
cat encoded.txt | nc -l -p 1234 -q 0
#!/bin/sh
#receiver: fetch the data, decode it and verify the checksums
nc localhost 1234 -q 0 > received.txt
base64 -d received.txt > decoded.txt
rm received.txt encoded.txt
echo "Sent:"
md5sum original.txt
echo "Received:"
md5sum decoded.txt
rm decoded.txt original.txt
This creates a file, encodes it into base64, sends it over netcat, and then decodes it, finally comparing checksums to verify that what was sent and what was received are identical.
On a Kali Linux machine I was using earlier, the netcat connection closes upon execution of the second script, but trying on Ubuntu at home, this is not the case. I need to manually close the connection with Ctrl+D.
How can I make it such that the connection closes after receiving this data?
I think including the -c flag for nc should do it. Also, are you sure you want to use Bourne shell, and not Bash? I'd suggest changing your shebang to #!/bin/bash
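Note that -c means different things in different netcat implementations (nmap's ncat uses it to run a command, for instance). If your nc is the OpenBSD variant that Ubuntu ships, -N (shut down the socket after EOF on stdin) is likely the flag you want on the sending side; a hedged example, since flag support varies between versions:
#OpenBSD netcat: -N closes the connection once stdin hits EOF
nc -N -l 1234 < encoded.txt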

Read and write to the same netcat tcp connection

Say I write to a netcat connection:
tail -f ${file} | nc localhost 7050 | do_whatever | nc localhost 7050
what happens here is that we have two socket connections, to do some request/response. But that's not ideal for a few reasons.
What I am looking to do is reuse the same connection, to read from and write to.
Does anyone know how I can reuse just one netcat connection?
The correct way to do this in UNIX is to make use of a back pipe. You can do so as follows:
First, create a pipe: mknod bkpipe p
This creates a file named bkpipe of type pipe.
Next, figure out what you need to do. Here are two useful scenarios. In these, replace the hosts/addresses and port numbers with the appropriate ports for your relay.
To forward data sent to a local port to a remote port on another machine:
nc -l -p 9999 0<bkpipe | nc remotehost 7000 | tee bkpipe
To connect to another machine and then relay data in that connection to another:
nc leftHost 6000 0<bkpipe | nc rightHost 6000 | tee bkpipe
If you simply need to handle basic IPC within a single host, however, you can do away with netcat completely and just use the FIFO pipe that mknod creates. If you stuff things into the FIFO with one process, they will hang out there until something else reads them out.
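A minimal demonstration of that FIFO behavior:
mkfifo demofifo
#the writer blocks until some process opens the read end
echo "hello" > demofifo &
#reading drains the FIFO and lets the backgrounded writer finish
cat demofifo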
Yeah, I think the simplest thing to do is use this method:
tail -f ${file} | nc localhost 7050 | do_whatever > ${file}
just write back into the same file (it's a 'named pipe').
As long as your messages are smaller than PIPE_BUF (POSIX guarantees at least 512 bytes; it's 4096 on Linux), pipe writes are atomic and they won't interleave.
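You can check the actual limit on your system with getconf:
#atomic pipe write size (POSIX guarantees at least 512 bytes; Linux uses 4096)
getconf PIPE_BUF /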
Using ncat as a one-liner is much easier and more understandable for beginners ;)
prompt$> ncat -lk 5087 -c ' while true; do read i && echo [You entered:] $i; done'
Connect with telnet (or nc) to localhost port 5087, and everything you type echoes back to you ;)
Use -lk option for listening and keeping/maintaining (multiple) connections.
You can make a single bash script out of it like this, using backslashes, but it invokes multiple bash processes, so it's not cheap on resource usage:
#!/bin/bash
# title : ncat-listener-bidirectional.sh
# description : This script will listen for text entered by a client
# like for instance telnet
# and echo back the key strokes
#
ncat -lk 5087 -c ' \
#!/bin/bash \
while true; do \
read i && echo [You entered:] $i; \
done'

How to redirect command I/O to nc two-way?

I am trying to do something like this in a POSIX-compliant way using nc, following the second example given here; it must run on sh (it can be tested on dash or busybox ash, but not on bash).
I have done the following:
# on operator machine
nc -lvp 9999
# on remote machine
out="/tmp/.$$.out"
in="/tmp/.$$.in"
mkfifo "$out" "$in"
trap 'rm "$out" "$in"' EXIT
# nc 192.168.0.10 9999 > "$out" 0< "$in" &
cat "$in" | nc 192.168.0.10 9999 | cat > "$out" &
bash -i > "$in" 2>&1 0< "$out"
I have two questions, please, answer both:
As you can see I have tried nc 192.168.0.10 9999 > "$out" 0< "$in", but I do not know why it didn't work with file redirection: the command freezes and the process isn't even launched. I have tried with a pipe on the input only and also on the output only, but neither way worked. The downside of the cat solution is that when you exit the command after connecting on the operator machine, the process stays alive on the remote machine and has to be killed. I want to know why the redirection solution didn't work, and how to solve the exit issue. Even though the cat solution is not the most elegant one, if it can be made to work it would be acceptable.
Is there a way to do this two-way I/O redirection using file descriptors instead of mkfifo? For instance, using exec 3> dummy? Remember it must be POSIX compliant, so >(:) is unacceptable.
If the fd solution is possible, it is not required to make the mkfifo solution work, but I would still be glad to know why the file redirection didn't work.
You're trying to do a reverse-shell? I can't quite tell from the logic.
A few things about fifos - opening a fifo blocks until both sides are opened - i.e. if you open for reading you will get blocked until someone opens it for writing and vice versa.
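You can see that blocking for yourself with two shells:
mkfifo testfifo
#shell A: this blocks, because nothing has opened the write end yet
cat testfifo
#shell B: opening the write end unblocks both sides
echo hi > testfifo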
The reason it won't work with redirection is that the open of the redirection happens in the shell before launching the nc, so as a result until a process opens the write-end of the same fifo, it won't be able to open the read-end of the same fifo. In this case you've got a deadly embrace of blocking:
nc is blocked opening "$out" for writing because nobody's opened it for reading yet.
bash is blocked opening "$in" for writing because nobody's opened it for reading yet.
You can try rearranging the file descriptor opening order:
nc 127.0.0.1 9999 2>&1 > "$out" 0< "$in" &
bash -i 0< "$out" >"$in" 2>&1
i.e. we make sure that the opening of in and out happen in the same order for the commands, which should fix the fifo opening issue.
You don't need to do this with two fifos, you can accomplish it with one using:
ncfifo=/tmp/.$$.nc
mkfifo $ncfifo
cat $ncfifo | bash -i 2>&1 | nc 127.0.0.1 9999 >$ncfifo
As for (2); I don't know - it's not something I've tried to accomplish myself. The reason for using the fifo is that it allows you to wire up two sides of the pipeline together. I can't see any way of accomplishing this without a fifo, although I'm open to being corrected on this.
The nc man page on debian contains the following example, which is for the 'opposite' interpretation:
rm -f /tmp/f; mkfifo /tmp/f
cat /tmp/f | /bin/sh -i 2>&1 | nc -l 127.0.0.1 1234 > /tmp/f
which is what I've used for the simpler cat answer above; although it seems to work just as well with the leading cat omitted:
rm -f /tmp/f; mkfifo /tmp/f
/bin/sh -i </tmp/f 2>&1 | nc -l 127.0.0.1 1234 > /tmp/f

Get ssh remote command to terminate so that xargs with parallel option can continue

I'm running a command similar to the following
getHosts | xargs -I{} -P3 -n1 ssh {} 'startServer; sleep 5; grep -m 1 "server up" <(tail -f log)'
The problem is that it seems like ssh hangs for a while sometimes even well after the server has come up. Is there any problem with this command that might cause it not to terminate so that parallel execution can continue? When I run the command in a remote shell, the check for the server coming up seems reliable and closes punctually when "server up" is written to the logs.
Instead of the remote command being
startServer; sleep 5; grep -m 1 "server up" <(tail -f log)
I'd use
grep -m 1 "server up" <(tail -F log -n 0) & startServer ; wait
Differences:
Start tailing the log before attempting to restart the server, so that we don't miss any messages. We start at the end of the log so we don't see any previous "server up" messages.
Use tail's -F option instead of -f, so that if the log file is rotated we will follow the new file, instead of continuing to uselessly follow the old file.
Two ways I could see it failing to terminate:
Remote end hangs on startServer
The server generates so many messages after "server up" that the line has already scrolled out of the last 10 lines tail prints by default, so grep never sees it and waits forever.
ssh could also fail to connect for a variety of reasons: host down, keys lost, etc. I would add some error checking conditions in the form of writing to a log and/or having
|| echo "Failed to do stuff" | mail -s SUBJECT TO#WHO.com

How do I kill a backgrounded/detached ssh session?

I am using the program synergy together with an ssh tunnel.
It works, I just have to open a console and type these two commands:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
Because I'm lazy I made a bash script which is run with one mouse click on an icon:
#!/bin/bash
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
synergyc localhost
The bash script above works as well, but now I also want to kill synergy and the ssh tunnel with one mouse click, so I have to save the PIDs of synergy and ssh into files to kill them later:
#!/bin/bash
mkdir -p /tmp/synergyPIDs || exit 1
rm -f /tmp/synergyPIDs/ssh || exit 1
rm -f /tmp/synergyPIDs/synergy || exit 1
[ ! -e /tmp/synergyPIDs/ssh ] || exit 1
[ ! -e /tmp/synergyPIDs/synergy ] || exit 1
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@OtherHost
echo $! > /tmp/synergyPIDs/ssh
synergyc localhost
echo $! > /tmp/synergyPIDs/synergy
But the files created by this script are empty.
How do I get the PIDs of ssh and synergy?
(I try to avoid ps aux | grep ... | awk ... | sed ... combinations; there has to be an easier way.)
With all due respect to the users of pgrep, pkill, ps | awk, etc, there is a much better way.
Consider that if you rely on ps -aux | grep ... to find a process you run the risk of a collision. You may have a use case where that is unlikely, but as a general rule, it's not the way to go.
SSH provides a mechanism for managing and controlling background processes. But like so many SSH things, it's an "advanced" feature, and many people (it seems, from the other answers here) are unaware of its existence.
In my own use case, I have a workstation at home on which I want to leave a tunnel that connects to an HTTP proxy on the internal network at my office, and another one that gives me quick access to management interfaces on co-located servers. This is how you might create the basic tunnels, initiated from home:
$ ssh -fNT -L8888:proxyhost:8888 -R22222:localhost:22 officefirewall
$ ssh -fNT -L4431:www1:443 -L4432:www2:443 colocatedserver
These cause ssh to background itself, leaving the tunnels open. But if a tunnel goes away, I'm stuck, and if I want to find it, I have to parse my process list and hope I've got the "right" ssh (in case I've accidentally launched multiple ones that look similar).
Instead, if I want to manage multiple connections, I use SSH's ControlMaster config option, along with the -O command-line option for control. For example, with the following in my ~/.ssh/config file,
host officefirewall colocatedserver
ControlMaster auto
ControlPath ~/.ssh/cm_sockets/%r@%h:%p
the ssh commands above, when run, will leave spoor in ~/.ssh/cm_sockets/ which can then provide access for control, for example:
$ ssh -O check officefirewall
Master running (pid=23980)
$ ssh -O exit officefirewall
Exit request sent.
$ ssh -O check officefirewall
Control socket connect(/home/ghoti/.ssh/cm_socket/ghoti@192.0.2.5:22): No such file or directory
And at this point, the tunnel (and controlling SSH session) is gone, without the need to use a hammer (kill, killall, pkill, etc).
Bringing this back to your use-case...
You're establishing the tunnel through which you want synergyc to talk to synergys on TCP port 12345. For that, I'd do something like the following.
Add an entry to your ~/.ssh/config file:
Host otherHosttunnel
HostName otherHost
User otherUser
LocalForward 12345 otherHost:12345
RequestTTY no
ExitOnForwardFailure yes
ControlMaster auto
ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Note that the command line -L option is handled with the LocalForward keyword, and the Control{Master,Path} lines are included to make sure you have control after the tunnel is established.
Then, you might modify your bash script to something like this:
#!/bin/bash
if ! ssh -f -N otherHosttunnel; then
echo "ERROR: couldn't start tunnel." >&2
exit 1
else
synergyc localhost
ssh -O exit otherHosttunnel
fi
The -f option backgrounds the tunnel, leaving a socket on your ControlPath to close the tunnel later. If the ssh fails (which it might due to a network error or ExitOnForwardFailure), there's no need to exit the tunnel, but if it did not fail (else), synergyc is launched and then the tunnel is closed after it exits.
You might also want to look in to whether the SSH option LocalCommand could be used to launch synergyc from right within your ssh config file.
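That could look roughly like this in ~/.ssh/config (a sketch: PermitLocalCommand must be enabled for LocalCommand to run, and the command executes once the connection is established):
Host otherHosttunnel
HostName otherHost
User otherUser
LocalForward 12345 otherHost:12345
PermitLocalCommand yes
LocalCommand synergyc localhost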
Quick summary: Will not work.
My first idea is that you need to start the processes in the background to get their PIDs with $!.
A pattern like
some_program &
some_pid=$!
wait $some_pid
might do what you need... except that then ssh won't be in the foreground to ask for passphrases any more.
Well then, you might need something different after all. ssh -f forks a new process that your shell never learns about from invoking it. Ideally, ssh itself would offer a way to write its PID to some file.
Just came across this thread and wanted to mention the "pidof" Linux utility:
$ pidof init
1
You can use lsof to show the pid of the process listening to port 12345 on localhost:
lsof -t -i @localhost:12345 -sTCP:listen
Examples:
PID=$(lsof -t -i @localhost:12345 -sTCP:listen)
lsof -t -i @localhost:12345 -sTCP:listen >/dev/null && echo "Port in use"
Well, I don't want to add an & at the end of the commands, as the connection will die if the console window is closed... so I ended up with a ps-grep-awk-sed combo:
ssh -f -N -L localhost:12345:otherHost:12345 otherUser@otherHost
echo `ps aux | grep -F 'ssh -f -N -L localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/ssh
synergyc localhost
echo `ps aux | grep -F 'synergyc localhost' | grep -v -F 'grep' | awk '{ print $2 }'` > /tmp/synergyPIDs/synergy
(you could integrate the grep into awk, but I'm too lazy now)
You can drop the -f, which makes it run in the background, then run it with eval and force it to the background yourself.
You can then grab the pid. Make sure to put the & within the eval statement.
eval "ssh -N -L localhost:12345:otherHost:12345 otherUser#OtherHost & "
tunnelpid=$!
Another option is to use pgrep to find the PID of the newest ssh process
ssh -fNTL 8073:localhost:873 otherUser@OtherHost
tunnelPID=$(pgrep -n -x ssh)
synergyc localhost
kill -HUP $tunnelPID
This is more of a special case for synergyc (and most other programs that try to daemonize themselves). Using $! would work, except that synergyc does a clone() syscall during execution that gives it a new PID, different from the one bash thinks it has. If you want to get around this so that you can use $!, then you can tell synergyc to stay in the foreground and then background it yourself:
synergyc -f -n mydesktop remoteip &
synergypid=$!
synergyc also does a few other things like autorestart that you may want to turn off if you are trying to manage it.
Based on the very good answer of @ghoti, here is a simpler script (for testing) utilising SSH control sockets without the need for extra configuration:
#!/bin/bash
if ssh -fN -MS /tmp/mysocket -L localhost:12345:otherHost:12345 otherUser@otherHost; then
synergyc localhost
ssh -S /tmp/mysocket -O exit otherHost
fi
synergyc will only be started if the tunnel has been established successfully; the tunnel itself will be closed as soon as synergyc returns.
Note that this solution lacks proper error reporting.
You could look out for the ssh proceess that is bound to your local port, using this line:
netstat -tpln | grep 127\.0\.0\.1:12345 | awk '{print $7}' | sed 's#/.*##'
It returns the PID of the process using port 12345/TCP on localhost. So you don't have to filter all ssh results from ps.
If you just need to check, if that port is bound, use:
netstat -tln | grep 127\.0\.0\.1:12345 >/dev/null 2>&1
Returns 1 if nothing is bound, or 0 if something is listening on this port.
There are many interesting answers here, but nobody mentioned that the SSH manpage describes this exact case (see the TCP FORWARDING section)! And the solution it offers is much simpler:
ssh -fL 12345:localhost:12345 user#remoteserver sleep 10
synergyc localhost
Now in details:
First we start SSH with a tunnel; thanks to -f it will initiate the connection and only then fork to the background (unlike solutions with ssh ... &; pid=$!, where ssh is sent to the background and the next command is executed before the tunnel is created). On the remote machine it will run sleep 10, which will wait 10 seconds and then end.
Within 10 seconds, we should start our desired command, in this case synergyc localhost. It will connect through the tunnel and SSH will then know that the tunnel is in use.
After the 10 seconds pass, the sleep 10 command will finish. But the tunnel is still in use by synergyc, so SSH will not close the underlying connection until the tunnel is released (i.e. until synergyc closes its socket).
When synergyc is closed, it will release the tunnel, and SSH will in turn terminate itself, closing the connection.
The only downside of this approach is that if the program we use closes and re-opens its connection for some reason, SSH will close the tunnel right after the first connection is closed, and the program won't be able to reconnect. If this is an issue, you should use the approach described in @doak's answer, which uses a control socket to properly terminate the SSH connection and -f to make sure the tunnel is created when SSH forks to the background.
