Capture FTP hostname and URI using tshark (Wireshark)

I want to watch FTP traffic and find which FTP URLs are being accessed with tshark. For HTTP traffic I can use:
tshark -i eth0 -f 'port 80' -l -t ad -n -R 'http.request' -T fields -e http.host -e http.request.uri
Wireshark's display filter reference includes the fields http.request.uri and http.host.
See: http://www.wireshark.org/docs/dfref/h/http.html
But no equivalent fields are available for FTP traffic:
http://www.wireshark.org/docs/dfref/f/ftp.html
What can I do?

The problem is that FTP is not a stateless, transactional protocol like HTTP: with HTTP the client sends a single request that details all the parameters required to deliver the file, and the server responds with a single message that contains all the metadata and the file contents.
FTP, in comparison, is a chat-style protocol: to get something done you open a connection to the server and start chatting with it: log in, change to some directory, list files, get me this file, and so on.
You can listen in on this conversation with tshark like this:
tshark -i lo -f 'port 21' -l -t ad -n -R ftp.request.command -T fields -e ftp.request.command -e ftp.request.arg
The output received when a user tries to retrieve a file from the FTP server (in this example using the client software curl) might look like this:
USER username
PASS password
PWD
CWD Documents
EPSV
TYPE I
SIZE somefile.ext
RETR somefile.ext
QUIT
A bit of processing over that can give you a URL-like log of file retrievals. For example, I came up with this Perl pipeline:
tshark -i lo -f 'port 21' -l -t ad -n -R ftp.request.command \
  -T fields -e ftp.request.command -e ftp.request.arg | \
perl -nle '
  m|CWD\s*(\S+)| and do {
    $dir = $1;
    if ($dir =~ m,^/,) { $cwd = $dir } else { $cwd .= "/$dir"; }
  };
  m|RETR\s*(\S+)| and print "$cwd/$1";'
For the same FTP session above, this script will yield a single line of output:
/Documents/somefile.ext
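If you also want the server address in the log (the FTP analogue of http.host), one option is to capture the destination IP as an extra field and fold it into the printed URL. This is only a sketch, assuming IPv4 and that the destination address of the client's commands (ip.dst) is an acceptable stand-in for the hostname:
tshark -i lo -f 'port 21' -l -t ad -n -R ftp.request.command \
  -T fields -e ip.dst -e ftp.request.command -e ftp.request.arg | \
perl -nle '
  # tshark separates fields with tabs: host, command, argument
  ($host, $cmd, $arg) = split /\t/;
  $cmd eq "CWD" and do {
    if ($arg =~ m,^/,) { $cwd = $arg } else { $cwd .= "/$arg" }
  };
  $cmd eq "RETR" and print "ftp://$host$cwd/$arg";'
For the session above this would print something like ftp://SERVER_IP/Documents/somefile.ext.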
I hope that helps.

Script that will print HTTP headers for multiple servers

I've created the following bash script:
#!/bin/bash
for ip in $(cat targets.txt); do
"curl -I -k https://"${ip};
"curl -I http://"${ip}
done
However, I am not receiving the expected output, which is the HTTP header responses from the IP addresses listed in targets.txt.
I'm not sure how curl can attempt both HTTP and HTTPS (80/443) within one command, so I've set up two separate curl commands.
nmap might be more appropriate for the task: nmap -iL targets.txt -p T:80,443 -sV --script=banner --open
Perform a network map (nmap) of hosts from the input list (-iL targets.txt) on TCP ports 80 and 443 (-p T:80,443) with service/version detection (-sV) and use the banner grabber script (--script=banner, ref. https://nmap.org/nsedoc/scripts/banner.html). Return results for open ports (--open).
... or masscan (ref. https://github.com/robertdavidgraham/masscan): masscan $(cat targets.txt) -p 80,443 --banners
Mass scan (masscan) all targets on ports 80 and 443 (-p 80,443) and grab banners (--banners).
Remove the quotes around your curl commands: with the quotes, bash looks for a single program literally named curl -I -k https://... instead of running curl with arguments. You also don't need the ; after the first curl.
#!/bin/bash
for ip in $(cat targets.txt); do
    curl -I -k https://${ip}
    curl -I http://${ip}
done
I added some echo statements to @John's answer to give better visibility into the results of the curl runs, and also added port 8080 in case a proxy is listening there.
#!/bin/bash
for ip in $(cat $1); do
    echo "> Webserver Port Scan on IP ${ip}."
    echo "Attempting IP ${ip} on port 443..."
    curl -I -k https://${ip}
    echo "Attempting IP ${ip} on port 80..."
    curl -I http://${ip}
    echo "Attempting IP ${ip} on port 8080..."
    curl -I http://${ip}:8080
done
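One caveat with any of these loops: curl can hang for a long time on hosts that do not respond, which stalls every host after it. A minimal variant with timeouts, as a sketch (the 5 and 10 second values are arbitrary choices, not anything the answers above specified):
#!/bin/bash
# Same loop, but give up quickly on unresponsive hosts:
# --connect-timeout caps the TCP handshake, -m caps the whole request.
for ip in $(cat targets.txt); do
    curl -sS -I -k --connect-timeout 5 -m 10 "https://${ip}"
    curl -sS -I --connect-timeout 5 -m 10 "http://${ip}"
done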

How to search for a string in a text file and perform a specific action based on the result

I have very little experience with Bash but here is what I am trying to accomplish.
I have two different text files with a bunch of server names in them. Before installing any Windows updates and rebooting the servers, I need to disable all the Nagios host/service alerts.
host=/Users/bob/WSUS/wsus_test.txt
password="my_password"
while read -r host
do
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
This is a reduced form of my current code, which works as intended. However, we have servers in a bunch of regions, and each server name is prefixed with a three-letter region code (e.g. LAX, NYC). We also have a Nagios server in each region, so I need the code above to connect to the correct regional Nagios server based on the server name being passed in.
I tried adding 4 test servers into a text file and just adding a line like this:
if grep lax1 /Users/bob/WSUS/wsus_text.txt; then
<same command as above but with the regional nagios server name>
fi
This doesn't work as intended, and nothing is actually disabled/enabled via the API calls. Again, I've done very little with Bash, so any pointers would be appreciated.
Your grep tests the whole file rather than the host just read, so the condition is the same on every iteration. Instead, extract the region from the host name and use it in the Nagios URL, like this:
while read -r host; do
    region=$(cut -f1 -d- <<< "$host")
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
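The cut -d- assumes host names like lax-web01, with a hyphen after the region code. If the names instead look like the lax1 example above, with no delimiter, a substring is a possible alternative; a sketch, assuming the region code is always the first three characters:
while read -r host; do
    region=${host:0:3}    # first three characters, e.g. "lax" from "lax1web01"
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1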

Append output of grep filter to a file

I am trying to save the output of a grep filter to a file.
I want to run tcpdump for a long time, and filter a certain IP to a file.
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C."
This works fine; it shows me IPs from my network.
But when I add >> file.dump at the end, the file is always empty.
My script:
tcpdump -i eth0 -n -s 0 port 5060 -vvv | grep "A.B.C." >> file.dump
And yes, it must be grep. I don't want to use tcpdump filters, because they give me millions of lines, while with grep I get only one line per IP.
How can I redirect (append) the full output of the grep command to a file?
Part of tcpdump's output may be going to stderr rather than stdout, which means grep won't see it unless stderr is redirected into the pipe as well.
To do this you can use |&, which pipes both stdout and stderr:
tcpdump -i eth0 -n -s 0 port 5060 -vvv |& grep "A.B.C."
Then, because grep buffers its output when writing to a file instead of a terminal, the file can stay empty for a long time; tell grep to use line buffering with the --line-buffered option.
All together, say:
tcpdump ... |& grep --line-buffered "A.B.C." >> file.dump
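tcpdump also has its own -l flag to make its stdout line-buffered, which helps for the same reason. A combined sketch, assuming you want to discard tcpdump's status messages on stderr rather than grep through them:
tcpdump -l -i eth0 -n -s 0 -vvv port 5060 2>/dev/null | \
    grep --line-buffered "A.B.C." >> file.dump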

Waiting for input from script that is running remotely via ssh

There is a script I'm running that I cannot install on the remote machine:
clear && printf '\e[3J'
read -p "Please enter device: " pattern
read -p "Enter date: (YYYY-MM-DD): " date
pfix=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 5 | head -n 1)
mkdir /home/user/logCollectRes/"${pfix}"
ssh xxx.xxx.xxx.xxx 'bash -s' < /usr/local/bin/SearchAdvanced.sh ${pattern} ${date} ${pfix}
In that script, I would like to be able to use read.
ls -g *"${pattern}"*
read -p "Select one of these? [y/n] " "found";
I've tried adding -n on the read, as well as the -t -t option on ssh. As you can see, the script presents information that is only seen once the script starts, so I can't do the read on the local machine.
EDIT: So let's say server B stores syslogs for 5K computers. The file names are formed from the internal IP of the device with the date appended at the end.
/var/log/remotes/192.168.1.500201505050736.gz
/var/log/remotes/192.168.1.500201505050936.gz
/var/log/remotes/192.168.1.500201505051136.gz
/var/log/remotes/192.168.1.600201505050836.gz
/var/log/remotes/192.168.1.600201505051036.gz
/var/log/remotes/192.168.1.600201505051236.gz
I'd like to be able to select the IP address from the main script, list all the files matching that IP address, and then select which I want to scp to my local machine.
After speaking with some coworkers, I found the answer to be running two scripts. (With ssh ... 'bash -s' < script, the remote shell's stdin is the script itself, so a read inside it never sees the keyboard.) The first script captures the ls -g result into a variable on the local machine; I then print that output and use read to select one of the files. The second script takes that answer and scps the file from the remote machine.
In the main script:
result=$(ssh xxx.xxx.xxx.xxx 'bash -s' < /usr/local/bin/SearchAdvanced.sh "${pattern}" "${date}")
then, as a follow-up:
printf '%s\n' "${result}"
read -p "Select file: "
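A minimal end-to-end sketch of that idea using bash's select builtin, assuming the remote script simply prints the matching file names one per line (the paths and the xxx.xxx.xxx.xxx address are placeholders from the question):
#!/bin/bash
read -p "Please enter device: " pattern

# First pass: the remote side only lists matching logs; capture them locally.
files=$(ssh xxx.xxx.xxx.xxx "ls /var/log/remotes/*${pattern}*")

# Present the captured list as a numbered menu and fetch the chosen file.
select f in $files; do
    [ -n "$f" ] && scp "xxx.xxx.xxx.xxx:$f" . && break
done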

Bash script UDP error

I execute my bash script PLCCheck as a process:
./PLCCheck &
PLCCheck
while read -r line
do
    ...
    def_host=192.168.100.110
    def_port=6002
    HOST=${2:-$def_host}
    PORT=${3:-$def_port}
    echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP Port 6001.
When I want to execute my second bash script SQLCheck as a process that listens on UDP port 4001:
./SQLCheck &
SQLCheck
while read -r line
do
    ...
    def_host=192.168.100.110
    def_port=6002
    HOST=${2:-$def_host}
    PORT=${3:-$def_port}
    echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables, and each script works when run on its own. Why do I get this error?
I have checked the man page of nc. I think it is being used in the wrong way:
-l    Used to specify that nc should listen for an incoming connection
      rather than initiate a connection to a remote host. It is an error
      to use this option in conjunction with the -p, -s, or -z options.
      Additionally, any timeouts specified with the -w option are ignored.
...
-p source_port
      Specifies the source port nc should use, subject to privilege
      restrictions and availability. It is an error to use this option
      in conjunction with the -l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p: just nc -l 4001. Maybe this is the error...
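Note that this restriction applies to the OpenBSD variant of netcat; the GNU/traditional variant expects -l together with -p, so which form is right depends on which nc is installed. Assuming the OpenBSD variant, the listener lines of the two scripts would become something like this sketch. Note too that the SQLCheck listener above omits -u, so as written it listens on TCP rather than UDP:
# PLCCheck: UDP listener on port 6001 (OpenBSD nc: the port follows -l directly)
done < <(nc -u -l 6001)

# SQLCheck: UDP listener on port 4001; -u is needed here as well,
# otherwise nc listens on TCP
done < <(nc -u -l 4001)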
