Tcpdump with -w writing gibberish to file - bash

When trying to capture tcpdump output to a file, I get the following:
▒ò▒▒▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒Xu<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒D<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒D<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X5▒<<▒▒▒▒▒▒▒4▒4▒b
7
7▒▒3▒X▒<<▒▒▒▒▒▒▒4▒4▒b
If I run tcpdump without the -w option, the output displays fine in the shell.
Here is the input:
tcpdump -i eth0 -Z root -w `date '+%m-%d-%y.%T.pcap'`

tcpdump -w writes a raw capture file, which is not meant to be read directly. You can read the file back with tcpdump's -r option, as described in the man page:
-r Read packets from file (which was created with the -w option). Standard input is used if file is "-".
-w Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is "-". See pcap-savefile(5) for a description of the file format.
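For example, a minimal sketch of reading such a capture back (the filename is just an illustration of what the date format in your command produces):
tcpdump -r 01-31-15.14:30:00.pcap
You can also pass a filter expression while reading, e.g. tcpdump -r 01-31-15.14:30:00.pcap 'port 80', to narrow down what gets printed.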
Another option would be to redirect the output without using the -w option:
tcpdump -i eth0 -Z root > `date '+%m-%d-%y.%T.pcap'`
But if I remember correctly, you don't get exactly what the -w option would have written: the redirect captures tcpdump's decoded text output rather than the raw packets.

Related

tcpdump suppress console output in script & write to file

In a bash script I need to run a tcpdump command and save the output to a file. However, when I do that via > /tmp/test.txt, I still get the following output in the console:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 1500 bytes
1 packet captured
1 packet received by filter
0 packets dropped by kernel
However, I do want the script to wait for the command to complete before continuing.
Is it possible to suppress this output?
The output you're seeing is written to stderr, not stdout, so you can redirect it to /dev/null if you don't want to see it. For example:
tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether proto 0x88cc' > /tmp/test.txt 2> /dev/null
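As for the waiting part: as long as the command is not put in the background with &, the script simply blocks until tcpdump exits (here after one packet, because of -c 1). A minimal sketch, reusing the paths and filter from the question:
#!/bin/bash
# stderr ("listening on ...", packet counts) is discarded; stdout goes to the file.
tcpdump -nn -v -i eth0 -s 1500 -c 1 'ether proto 0x88cc' > /tmp/test.txt 2> /dev/null
# This line only runs after tcpdump has exited.
echo "capture finished"
If you do background it with &, the wait builtin restores the blocking behaviour.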

Append output of grep filter to a file

I am trying to save the output of a grep filter to a file.
I want to run tcpdump for a long time, and filter a certain IP to a file.
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C."
This works fine. It shows me IP's from my network.
But when I add >> file.dump at the end, the file is always empty.
My script:
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C." >> file.dump
And yes, it must be grep. I don't want to use tcpdump filters because they give me millions of lines, and with grep I get only one line per IP.
How can I redirect (append) the full output of the grep command to a file?
The output of tcpdump is probably going through stderr, not stdout. This means that grep won't catch it unless you redirect stderr into stdout.
To do this you can use |&:
tcpdump -i eth0 -n -s 0 port 5060 -vvv |& grep "A.B.C."
Then, since the output may be a continuous stream, you also have to tell grep to use line buffering. For this there is the --line-buffered option.
Putting it all together:
tcpdump ... |& grep --line-buffered "A.B.C" >> file.dump
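If that is still not enough, tcpdump itself has a -l flag that line-buffers its own standard output; a variant of the command above combining the two (untested on your setup):
tcpdump -l -i eth0 -n -s 0 -vvv port 5060 |& grep --line-buffered "A.B.C." >> file.dump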

wget to parse a webpage in shell

I am trying to extract URLS from a webpage using wget. I tried this
wget -r -l2 --reject=gif -O out.html www.google.com | sed -n 's/.*href="\([^"]*\).*/\1/p'
It is displaying
FINISHED
Downloaded: 18,472 bytes in 1 files
but not displaying the web links. If I try to do it separately:
wget -r -l2 --reject=gif -O out.html www.google.com
sed -n 's/.*href="\([^"]*\).*/\1/p' < out.html
Output
http://www.google.com/intl/en/options/
/intl/en/policies/terms/
It is not displaying all the links:
http://www.google.com
http://maps.google.com
https://play.google.com
http://www.youtube.com
http://news.google.com
https://mail.google.com
https://drive.google.com
http://www.google.com
http://www.google.com
http://www.google.com
https://www.google.com
https://plus.google.com
Moreover, I want to get the links from the 2nd level and deeper. Can anyone give me a solution for this?
Thanks in advance.
The -O file option captures the output of wget and writes it to the specified file, so there is no output going through the pipe to sed.
You can say -O - to direct wget output to standard output.
If you don't want to use grep, you can try
sed -n "/href/ s/.*href=['\"]\([^'\"]*\)['\"].*/\1/gp"

Unable to filter the output of bind -p by regex

I am trying to filter the output of the bash command bind -p. I tried this command:
bind -p | grep '.*forward.*'
The output was:
Binary file (standard input) matches
What's the problem? Maybe it depends on the terminal (I'm using the latest Cygwin).
I don't have Cygwin to test with, but you can force grep to treat its input as if it were text. The flag is -a:
Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
So try:
bind -p | grep -a forward
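Per the quoted documentation, the long form should behave the same:
bind -p | grep --binary-files=text forward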

Is it possible to run two programs simultaneously or one after another using a bash or expect script?

I have basically two lines of code which are:
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures
tshark -i /tmp/Captures -T pdml >results.xml
If I run them both in separate terminals, it works fine.
However, I've been trying to create a simple bash script that will execute them at the same time, but have had no luck. The bash script is as follows:
#! /bin/bash
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures &
tshark -i /tmp/Captures -T pdml >results.xml &
If anyone could possibly help me get this to work, or get it to "run tcpdump until a key is pressed, then run tshark, then when a key is pressed again close", I would appreciate it. I have only a little bash scripting experience.
Do you need to run tcpdump and tshark separately? Using a pipe will feed the output of tcpdump (written as raw pcap with -w -) straight into tshark:
tcpdump -i eth0 -s 65535 -w - | tshark -i - -T pdml > results.xml
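If you actually want the "run tcpdump until a key is pressed, then run tshark" behaviour, here is a minimal sketch, assuming it is acceptable to capture to a temporary file and decode it afterwards (/tmp/capture.pcap is just an example path):
#!/bin/bash
# Capture in the background until a key is pressed, then decode the file with tshark.
tcpdump -i eth0 -s 65535 -w /tmp/capture.pcap &
TCPDUMP_PID=$!
read -n 1 -s -p "Capturing... press any key to stop"
kill "$TCPDUMP_PID"
wait "$TCPDUMP_PID"   # let tcpdump flush and close the file
tshark -r /tmp/capture.pcap -T pdml > results.xml
read -n 1 -s -p "Done. Press any key to close."
echo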
