Bash broken pipe with tcpdump

I use the following command to send pinging IPs to a script:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
| cut -d " " -f 10 | xargs -L2 ./pong.sh
Unfortunately this gives me:
tcpdump: Unable to write output: Broken pipe
To dissect my commands:
Output the pings from the traffic:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo
Output:
11:55:58.812177 IP xxxxxxx > 127.0.0.1: ICMP echo request, id 50776, seq 761, length 64
This will get the IPs from the tcpdump output:
cut -d " " -f 10 # output: 127.0.0.1
Get the output to the script:
xargs -L2 ./pong.sh
This will mimic the following example command:
./pong.sh 127.0.0.1
The strange thing is that the commands work separately (on their own)...
I tried debugging it, but I have no experience with debugging pipes. I checked the commands and they seem fine.

It would seem that cut's stdio buffering is interfering here, i.e. replace | xargs ... with | cat in your command line to find out.
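If cut's buffering is indeed the culprit, a possible fix (a sketch, assuming GNU coreutils' stdbuf is available) is to force cut to line-buffer its output so xargs sees each address as it arrives:
sudo tcpdump -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo \
| stdbuf -oL cut -d " " -f 10 | xargs -L2 ./pong.sh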
FWIW, the command line below works for me (pipe straight to xargs, then use the shell itself to get the nth argument). Note the extra tcpdump arguments: -c 10 (just to limit the capture to 10 packets, hence the 10/2 = 5 output lines) and -Q in (only inbound packets):
$ sudo tcpdump -c 10 -Q in -ne -l -i eth0 icmp and icmp[icmptype]=icmp-echo 2>/dev/null | \
xargs -L2 sh -c 'echo -n "$9: "; ping -nqc1 $9 | grep rtt'
192.168.100.132: rtt min/avg/max/mdev = 3.743/3.743/3.743/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 5.863/5.863/5.863/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 6.167/6.167/6.167/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 4.256/4.256/4.256/0.000 ms
192.168.100.132: rtt min/avg/max/mdev = 1.545/1.545/1.545/0.000 ms
$ _

For those coming across this (like me), tcpdump buffering is the issue.
From the man page:
-l Make stdout line buffered. Useful if you want to see the data
while capturing it. For example:
# tcpdump -l | tee dat
or
# tcpdump -l > dat & tail -f dat

Related

netcat to return the result of a command (run when connection happens)

I want to use netcat to return the result of a command on a server. BUT here is the trick: I want the command to run when the connection is made, not when I start netcat.
Here is a simplified single-shell example. I want the ls to run when I connect to port 1234 (which I would normally do from a remote server; obviously this example is pointless since I could just run ls locally).
max $ ls | nc -l 1234 &
[1] 36685
max $ touch file
max $ nc 0 1234
[1]+ Done ls | nc -l 1234
max $ ls | nc -l 1234 &
[1] 36699
max $ rm file
max $ nc 0 1234
file
[1]+ Done ls | nc -l 1234
You can see the ls runs when I start the listener, not when I connect to it. So in the first instance there was no file when I started the listener; I then created the file, made the connection, and it reported the state of the filesystem from when the listen command started (empty), not the current state. The second time around, when the file was already gone, it still showed it as present.
Something similar to the way it works when you redirect from a file. eg:
max $ touch file
max $ nc -l 1234 < file &
[1] 36768
max $ echo content > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < file
The remote connection gets the latest content of the file, not the content when the listen command started.
I tried using the "file redirect" style with a subshell and that doesn't work either.
max $ nc -l 1234 < <(cat file) &
[1] 36791
max $ echo bar > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < <(cat file)
The only thing I can think of is adding my command |netcat to xinetd.conf/systemd... I was probably going to have to add it to systemd as a service anyway.
(The actual thing I want to do: provide the list of VPN users on a network port so a remote service can fetch a current user list. My command to generate the list looks like:
awk '/Bytes/,/ROUTING/' /etc/openvpn/openvpn-status.log | cut -f1 -d. | tail -n +2 | tac | tail -n +2 | sort -b | join -o 2.2 -j1 - <(sort -k1b,1 /etc/openvpn/generate-new-certs.sh)
)
I think you want something like this, which I actually did with socat:
# Listen on port 65500 and exec '/bin/date' when a connection is received
socat -d -d -d TCP-LISTEN:65500 EXEC:/bin/date
Then in another Terminal, a few seconds later:
echo Hi | nc localhost 65500
Note: For macOS users, you can install socat with homebrew using:
brew install socat
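Applied to the question's actual use case, something along these lines should work; the port number and script path here are hypothetical, and the assumption is that the list-generating pipeline from the question has been saved as an executable script:
# Keep listening (fork) and run the script fresh for every incoming connection
socat TCP-LISTEN:1234,reuseaddr,fork EXEC:/usr/local/bin/vpn-users.sh
A remote client can then fetch the current list with nc server 1234.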

Grep with inotify [duplicate]

(maybe it is the "tcpflow" problem)
I am writing a script to monitor HTTP traffic, so I installed tcpflow and then grep.
This works (you should make an HTTP request, for example curl www.163.com):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: '
It outputs something like this (continuously):
Host: config.getsync.com
Host: i.stack.imgur.com
Host: www.gravatar.com
Host: www.gravatar.com
But I can't add another pipe to it.
This does not work (no output):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: ' | cut -b 7-
Neither does this (no output):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep '^Host: ' | grep H
When I replace sudo tcpflow with cat foo.txt, it works:
cat foo.txt | grep '^Host: ' | grep H
So what's wrong with the pipe, grep, or tcpflow?
Update:
This is my final script: https://github.com/zhengkai/config/blob/master/script/monitor_outgoing_http.sh
To grep a continuous stream, use the --line-buffered option:
sudo tcpflow -p -c -i eth0 port 80 2> /dev/null | grep --line-buffered '^Host'
--line-buffered
Use line buffering on output. This can cause a performance penalty.
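Applied to the question's full pipeline, line-buffering grep lets the next stage receive each line as it is produced (a sketch reusing the command from the question):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | grep --line-buffered '^Host: ' | cut -b 7-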
Some reflections about output buffering (the stdbuf tool is also mentioned):
Pipes, how do data flow in a pipeline?
I think the problem is stdio buffering; you need to use GNU stdbuf before calling grep:
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | stdbuf -o0 grep '^Host: '
With -o0, the output (stdout) stream of the grep stage is unbuffered. The default behavior is to buffer data into 4096-byte chunks [1] before sending them to the next command in the pipeline, which is what stdbuf overrides here.
1. Refer to this nice, detailed post on the subject.
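The same idea applied to the question's longer pipeline; only the middle stage needs the override, since the final command writes to the terminal and is line-buffered by default (again just a sketch):
sudo tcpflow -p -c -i eth0 port 80 2>/dev/null | stdbuf -o0 grep '^Host: ' | cut -b 7-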

Append output of grep filter to a file

I am trying to save the output of a grep filter to a file.
I want to run tcpdump for a long time, and filter a certain IP to a file.
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C."
This works fine. It shows me IPs from my network.
But when I add >> file.dump at the end, the file is always empty.
My script:
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C." >> file.dump
And yes, it must be grep. I don't want to use tcpdump filters because they give me millions of lines, while with grep I get only one line per IP.
How can I redirect (append) the full output of the grep command to a file?
The output of tcpdump is probably going through stderr, not stdout. This means that grep won't catch it unless you convert it into stdout.
To do this you can use |&:
tcpdump -i eth0 -n -s 0 port 5060 -vvv |& grep "A.B.C."
Then, it may happen that the output is a continuous stream, so you have to tell grep to use line buffering. For this you have the --line-buffered option.
All together, say:
tcpdump ... |& grep --line-buffered "A.B.C" >> file.dump
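If your shell doesn't support bash's |& shorthand, the portable equivalent redirects stderr explicitly:
tcpdump -i eth0 -n -s 0 port 5060 -vvv 2>&1 | grep --line-buffered "A.B.C" >> file.dump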

how to ping each ip in a file?

I have a file named "ips" containing all the IPs I need to ping. In order to ping those IPs, I use the following code:
cat ips|xargs ping -c 2
But the console shows me the usage of ping; I don't know how to do it correctly. I'm using macOS.
You need to use the -n1 option with xargs to pass one IP at a time, as ping doesn't support multiple IPs:
$ cat ips | xargs -n1 ping -c 2
Demo:
$ cat ips
127.0.0.1
google.com
bbc.co.uk
$ cat ips | xargs echo ping -c 2
ping -c 2 127.0.0.1 google.com bbc.co.uk
$ cat ips | xargs -n1 echo ping -c 2
ping -c 2 127.0.0.1
ping -c 2 google.com
ping -c 2 bbc.co.uk
# Drop the UUOC and redirect the input
$ xargs -n1 echo ping -c 2 < ips
ping -c 2 127.0.0.1
ping -c 2 google.com
ping -c 2 bbc.co.uk
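If you also want the pings to run concurrently, xargs can do that with -P (supported by both GNU and macOS/BSD xargs); note that output from simultaneous pings may interleave. A sketch:
xargs -n1 -P 4 ping -c 2 < ips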
With an IP or hostname on each line of the ips file:
( while read ip; do ping -c 2 $ip; done ) < ips
You can also change the timeout with the -W flag, so if some host isn't up it won't block your script for too long. The -q flag for quiet output is also useful in this case.
( while read ip; do ping -c1 -W1 -q $ip; done ) < ips
If the file has one IP per line (and it's not overly large), you can do it with a for loop:
for ip in $(cat ips); do
ping -c 2 $ip;
done
You could use fping. It also pings in parallel and has script-friendly output.
$ cat ips | xargs fping -q -C 3
10.xx.xx.xx : 201.39 203.62 200.77
10.xx.xx.xx : 288.10 287.25 288.02
10.xx.xx.xx : 187.62 187.86 188.69
...
With GNU Parallel you would do:
parallel -j0 ping -c 2 {} :::: ips
This will run as many jobs in parallel as you have IPs, or as many processes as the system allows.
It also makes sure the output from different jobs is not mixed together, so if you use the output you are guaranteed you will not get half a line from two different jobs.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
Try doing this:
cat ips | xargs -i% ping -c 2 %
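Note that -i is deprecated in GNU xargs and absent from the BSD xargs shipped with macOS; the portable spelling uses -I, e.g.:
xargs -I% ping -c 2 % < ips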
As suggested by @Lupus, you can use fping, but the output is not human friendly: it will scroll off your screen in a few seconds, leaving you with no trace of what is going on. To address this I've just released ping-xray. I tried to make it as visual as possible in an ASCII terminal, and it creates CSV logs with exact millisecond resolution for all targets.
https://dimon.ca/ping-xray/
Hope you'll find it helpful.

tcpdump: Output only source and destination addresses

Problem description:
I want to print only the source and destination address from a tcpdump[1].
I have one working solution, but I believe it could be improved a lot. Here is an example that captures 5 packets, just to show what I'm looking for:
tcpdump -i eth1 -n -c 5 ip | \
cut -d" " -f3,5 | \
sed -e 's/^\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)\..* \([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*$/\1 > \2/'
Question:
Can this be done in an easier way? Performance is also an issue here.
[1] Part of a test of whether the Snort home_net is correctly defined, or whether we see traffic not covered by the home_net.
Solution:
OK, thanks to everyone who replied to this one. There were two concerns related to the answers: one is compatibility across different Linux versions, and the second is speed.
Here are the results of the speed test I did. First the grep version:
time tcpdump -l -r test.dmp -n ip 2>/dev/null | grep -P -o '([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*? > ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)' | grep -P -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | xargs -n 2 echo >/dev/null
real 0m5.625s
user 0m0.513s
sys 0m4.305s
Then the sed version:
time tcpdump -n -r test.dmp ip | sed -une 's/^.* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..*$/\1 > \3/p' >/dev/null
reading from file test.dmp, link-type EN10MB (Ethernet)
real 0m0.491s
user 0m0.496s
sys 0m0.020s
And the fastest one, the awk version:
time tcpdump -l -r test.dmp -n ip | awk '{ print gensub(/(.*)\..*/,"\\1","g",$3), $4, gensub(/(.*)\..*/,"\\1","g",$5) }' >/dev/null
reading from file test.dmp, link-type EN10MB (Ethernet)
real 0m0.093s
user 0m0.111s
sys 0m0.013s
Unfortunately I have not been able to test how compatible they are, but the awk version needs GNU awk due to the gensub function. Anyway, all three solutions work on the two platforms I have tested them on. :)
Here's one way using GNU awk:
tcpdump -i eth1 -n -c 5 ip | awk '{ print gensub(/(.*)\..*/,"\\1","g",$3), $4, gensub(/(.*)\..*/,"\\1","g",$5) }'
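gensub is a GNU awk extension; with a plain POSIX awk you can get a similar effect using sub to strip the last dot-separated component (the port) from fields 3 and 5. A sketch:
tcpdump -i eth1 -n -c 5 ip | awk '{ sub(/\.[^.]*$/, "", $3); sub(/\.[^.]*$/, "", $5); print $3, $4, $5 }'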
Try this:
tcpdump -i eth1 -n -c 5 ip 2>/dev/null | sed -r 's/.* ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).* > ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*/\1 > \2/'
If running from a .sh script, remember to escape \1 & \2 as required.
Warning: you have to use unbuffered or line-buffered output to monitor the output of another command like tcpdump.
But your command seems correct.
To simplify, you could:
tcpdump -i eth1 -n -c 5 ip |
sed -une 's/^.* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..*$/\1 > \3/p'
Notice the -u switch to sed; it is useful when tcpdump runs without -c 5:
tcpdump -ni eth1 ip |
sed -une 's/^.* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..* \(\([0-9]\{1,3\}\.\?\)\{4\}\)\..*$/\1 > \3/p'
And here is a grep-only solution:
tcpdump -l -i eth1 -n -c 5 ip 2>/dev/null | grep -P -o '([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*? > ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)' | grep -P -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | xargs -n 2 echo
Note -l, in case you don't want to limit the number of packets using -c.
