How to quickly add rules to iptables from blocklists? - bash

I am using Ubuntu Server 14.04 32bit for the following.
I am trying to use blocklists to add regional blocks (China, Russia...) to my firewall rules, and I am struggling with how long my script takes to complete and with understanding why a different script fails to work.
I had originally used http://whatnotlinux.blogspot.com/2012/12/add-block-lists-to-iptables-from.html as an example and tidied up / changed parts of the script to pretty close to what's below:
#!/bin/bash
# Blacklist's names & URLs array
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
iptables -D INPUT -j $key #Delete current iptables chain link
iptables -F $key #Flush current iptables chain
iptables -X $key #Delete current iptables chain
iptables -N $key #Create current iptables chain
iptables -A INPUT -j $key #Link current iptables chain to INPUT chain
#Read blacklist
while read line; do
#Drop description, keep only IP range
ip_range=`echo -n $line | sed -e 's/.*:\(.*\)-\(.*\)/\1-\2/'`
#Test if it's an IP range
if [[ $ip_range =~ ^[0-9].*$ ]]; then
# Add to the blacklist
iptables -A $key -m iprange --src-range $ip_range -j LOGNDROP
fi
done < <(zcat /tmp/blacklist_$key.gz | iconv -f latin1 -t utf-8 - | dos2unix)
done
# Delete files
rm /tmp/blacklist*
exit 0
This appears to work fine for short test lists, but manually adding many (200,000+) entries to iptables takes an EXORBITANT amount of time, and I'm not sure why. Depending on the list, I have calculated this taking upwards of 10 hours to complete, which just seems silly.
After viewing the format of the iptables-save output I created a new script that uses iptables-save to save working iptables rules and then appends the expected format for blocks to this file, such as: -A bogon -m iprange --src-range 0.0.0.1-0.255.255.255 -j LOGNDROP, and eventually uses iptables-restore to load the file as seen below:
#!/bin/bash
# Blacklist's names & URLs arrays
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
iptables -F # Flush iptables chains
iptables -X # Delete all user created chains
iptables -P FORWARD DROP # Drop all forwarded traffic
iptables -N LOGNDROP # Create LOGNDROP chain
iptables -A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
iptables -A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
iptables -A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
iptables -A LOGNDROP -j DROP # Drop after logging
# Build first part of iptables-rules
for key in ${!blacklists[@]}; do
iptables -N $key # Create chain for current list
iptables -A INPUT -j $key # Link input to current list chain
done
iptables-save | sed '$d' | sed '$d' > /tmp/iptables-rules.rules # Save WORKING iptables rules and remove the last 2 lines (COMMIT & comment)
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*:/-A\ $key\ -m\ iprange\ --src-range\ / | sed s/$/\ -j\ LOGNDROP/ >> /tmp/iptables-rules.rules
done
echo 'COMMIT' >> /tmp/iptables-rules.rules
iptables-restore < /tmp/iptables-rules.rules
# Delete files
rm /tmp/blacklist*
rm /tmp/iptables-rules.rules
exit 0
This works great for most lists on the testbed; however, there are specific lists that, if included, produce the iptables-restore: line 389971 failed error, which always refers to the last line (COMMIT). I've read that, due to the way iptables works, whenever there is an issue reloading rules the error will always say the last line failed.
The truly odd thing is that on Ubuntu Desktop 14.04 64bit the second script works just fine with these same lists. I have tried running the script on the Desktop machine, using iptables-save to save a "properly" formatted version of the ruleset, and then loading this file into iptables on the server with iptables-restore, and I still receive the error.
I am at a loss as to how to troubleshoot this, why the initial script takes so long to add rules to iptables, and what could potentially be causing problems with the lists in the second script.

If you need to block a multitude of IP Addresses, use ipset instead.
Step 1: Create the IPset:
# Hashsize of 1024 is usually enough. Higher numbers might speed up the search,
# but at the cost of higher memory usage.
ipset create BlockAddress hash:ip hashsize 1024
Step 2: Add the addresses to block into that IPset:
# Put this in a loop, the loop reading a file containing list of addresses to block
ipset add BlockAddress $IP_TO_BLOCK
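A minimal sketch of that loop, assuming the addresses sit one per line in a hypothetical /etc/blocklist.txt (the -exist option keeps re-runs from failing on duplicates):
while read -r ip; do
ipset -exist add BlockAddress "$ip"
done < /etc/blocklist.txt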
Finally, replace all those lines to block with just one line in netfilter:
iptables -t raw -A PREROUTING -m set --match-set BlockAddress src -j DROP
Done. iptables-restore will be mucho fasta.
IMPORTANT NOTE: I strongly suggest NOT using a domain name to be added into netfilter; netfilter needs to first do a DNS Resolve, and if DNS is not properly configured and/or too slow, it will fail. Rather, do a pre-resolve (or periodic resolve) of domain names to block, and feed the found IP addresses to the "file containing list of addresses to block". It should be an easy script, invoked from crontab every 5 minutes or so.
EDIT 1:
This is an example of a cronjob I use to get facebook.com's address, invoked every 5 minutes:
#!/bin/bash
fbookfile=/etc/iptables.d/facebook.ip
for d in www.facebook.com m.facebook.com facebook.com; do
dig +short "$d" >> "$fbookfile"
done
sort -n -u "$fbookfile" -o "$fbookfile"
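One caveat, offered as a hedged aside: dig +short can print CNAME targets as well as A records (www.facebook.com resolves through a CNAME), and any hostname that lands in the file will later make ipset add fail. Filtering for dotted quads keeps them out:
dig +short "$d" | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$' >> "$fbookfile"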
Every half hour, another cronjob feeds those addresses to ipset:
#!/bin/bash
ipset flush IP_Fbook
while read ip; do
ipset add IP_Fbook "$ip"
done < /etc/iptables.d/facebook.ip
Note: I have to do this because doing dig +short facebook.com, for instance, returns exactly ONE IP address. After some observation, the IP address returned changed every ~5 minutes. Since I'm too lazy (er, occupied) to make an optimized version, I took the easy way out and do a flush/rebuild only every 30 minutes to minimize CPU spikes.
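If the brief window where the set is empty between the flush and the rebuild ever matters, ipset swap can make the rebuild atomic. A sketch, assuming IP_Fbook was created as hash:ip (swap requires both sets to have the same type):
#!/bin/bash
# Build the new contents in a scratch set, then swap it in atomically.
ipset -exist create IP_Fbook_tmp hash:ip
ipset flush IP_Fbook_tmp
while read -r ip; do
ipset -exist add IP_Fbook_tmp "$ip"
done < /etc/iptables.d/facebook.ip
ipset swap IP_Fbook_tmp IP_Fbook # rules referencing IP_Fbook never see an empty set
ipset destroy IP_Fbook_tmp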

The following is how I ended up solving this using ipsets as well.
#!/bin/bash
# Blacklist names & URLs array
declare -A blacklists
blacklists[China]="url"
# blacklists[key]="url"
# etc...
for key in ${!blacklists[@]}; do
# Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
# Create ipset for current blacklist
ipset create $key hash:net maxelem 400000
# TODO method for determining appropriate maxelem
while read line; do
# Add addresses from list to ipset
ipset add $key $line -quiet
done < <(zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*://)
# Add rules to iptables
iptables -D INPUT -m set --match-set $key src -j $key # Delete link to list chain from INPUT
iptables -F $key # Flush list chain if existed
iptables -X $key # Delete list chain if existed
iptables -N $key # Create list chain
iptables -A $key -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied $key TCP: " --log-level 7
iptables -A $key -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied $key UDP: " --log-level 7
iptables -A $key -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied $key ICMP: " --log-level 7
iptables -A $key -j DROP # Drop after logging
iptables -A INPUT -m set --match-set $key src -j $key
done
I'm not wildly familiar with ipsets but this makes for a much faster method of downloading, parsing and adding blocks.
I've added individual chains for each list for more verbose logging, so the logs will show which blocklist a dropped IP came from should you use multiple lists. On my actual box I'm using around 10 lists and have added several hundred thousand addresses with no problem!
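If the per-entry ipset add loop itself ever becomes the bottleneck, ipset restore can bulk-load the same data in a single process. A sketch for one list, with the set name china and the download path carried over from the script above as placeholders; counting the entries first also gives a concrete starting point for maxelem:
# Rough sizing: count the entries, padded because ranges may expand to several CIDR entries each
entries=$(zcat /tmp/blacklist_china.gz | sed '1,2d' | wc -l)
ipset create china hash:net maxelem $((entries * 2))
# Turn every range into an "add" command and load them all at once
zcat /tmp/blacklist_china.gz | sed '1,2d' | sed 's/.*://' | sed 's/^/add china /' | ipset -exist restore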

Download Zones
#!/bin/bash
# http://www.ipdeny.com/ipblocks/
zone=/path_to_folder/zones
if [ ! -d $zone ]; then mkdir -p $zone; fi
wget -c -N http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar -C $zone -zxvf all-zones.tar.gz >/dev/null 2>&1
rm -f all-zones.tar.gz >/dev/null 2>&1
Edit your iptables bash script and add the following lines:
#!/bin/bash
ipset=/sbin/ipset
iptables=/sbin/iptables
zone=/path_to_folder/zones
route=/path_to_blackip/
$ipset -F
$ipset -N -! blockzone hash:net maxelem 1000000
for ip in $(cat $zone/{cn,ru}.zone $route/blackip.txt); do
$ipset -A blockzone $ip
done
$iptables -t mangle -A PREROUTING -m set --match-set blockzone src -j DROP
$iptables -A FORWARD -m set --match-set blockzone dst -j DROP
Example: here "blackip.txt" is your own IP blacklist, and "cn" and "ru" are the China and Russia zone files.
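To sanity-check the result, ipset can print the set header and test a single address (a quick sketch; 203.0.113.7 is just a documentation address):
ipset -t list blockzone # header only: type, maxelem, number of entries
ipset test blockzone 203.0.113.7 # exit status reports whether the address matches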
Source: blackip

Related

Expanding to a flag only if value is set

I have some logic like:
if [[ -n "$SSH_CLIENT" ]]
then
sflag="-s $(echo "$SSH_CLIENT" | awk '{ print $1}')"
else
sflag=''
fi
iptables -A MY_RULE "$sflag" -p tcp -m tcp --dport 9999 -m conntrack -j ACCEPT
In other words, I want to mimic only passing the -s flag to iptables if SSH_CLIENT is set. What actually happens is that the empty string is inadvertently passed.
I'm interested in whether it is possible, in the interest of not repeating two quite long iptables calls, to expand the flag name and value. E.g. the command above should expand to
iptables -A MY_RULE -s 10.10.10.10 -p tcp -m tcp ..., or
iptables -A MY_RULE -p tcp -m tcp ...
The problem is that in the second case, the expansion actually becomes:
iptables -A MY_RULE '' -p tcp -m tcp
and there is an extra empty string that is treated as a positional argument. How can I achieve this correctly?
All POSIX shells: Using ${var+ ...expansion...}
Using ${var+ ...words...} lets you have an arbitrary number of words only if a variable is set:
iptables -A MY_RULE \
${SSH_CLIENT+ -s "${SSH_CLIENT%% *}"} \
-p tcp -m tcp --dport 9999 -m conntrack -j ACCEPT
Here, if-and-only-if SSH_CLIENT is set, we add -s followed by everything in SSH_CLIENT up to the first space.
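To see the word splitting for yourself, printf can print each resulting word separately; a small demo, with a made-up SSH_CLIENT value:
SSH_CLIENT='10.10.10.10 53748 22'
printf '<%s>\n' ${SSH_CLIENT+ -s "${SSH_CLIENT%% *}"}
# prints <-s> and <10.10.10.10>: two separate words, as iptables expects
unset SSH_CLIENT
printf '<%s>\n' ${SSH_CLIENT+ -s "${SSH_CLIENT%% *}"}
# prints only <>, because printf received no arguments at all; nothing was inserted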
Bash (and other extended shells): Using Arrays
The more general approach is to use an array whenever you want to represent multiple strings as a single value:
ssh_client_args=( )
[[ $SSH_CLIENT ]] && ssh_client_args+=( -s "${SSH_CLIENT%% *}" )
iptables -A MY_RULE "${ssh_client_args[@]}" -p tcp -m tcp --dport 9999 -m conntrack -j ACCEPT
The syntax "${var%% *}" is a parameter expansion which expands to var with the longest suffix starting with a space trimmed off, thus leaving only the first word. This is much faster than running an external program like awk. See also BashFAQ #100, which describes general best practices for native-bash string manipulation.
This is one of those occasions where you probably don't want to quote the variable expansion:
iptables -A MY_RULE $sflag -p tcp -m tcp ...
Alternatively, and more robustly, we can use ${var+...} expansion:
iptables -A MY_RULE ${SSH_CLIENT:+-s "${SSH_CLIENT%% *}"} -p tcp -m tcp ...
Read this as "if $SSH_CLIENT is set (and not null) then expand and substitute -s "${SSH_CLIENT%% *}", else nothing".
Note that we don't quote the ${var:+...} expansion, but we do quote the individual arguments within it where needed (obviously -s doesn't need quotes, although you're free to add them if you want).
I removed the external awk command, as ${var%%...} expansion is the simpler and more efficient way to truncate a string.

Passing multiple arguments from input file, to a command multiple times (Bash)

I have found myself in several situations, where it would be handy to be able to feed a command arguments from an input file, on a per line basis. In the handful of times I've wanted to be able to do this, I've ended up finding a workaround or running the command multiple times manually.
I have a file input.txt which contains multiple lines, with an arbitrary number of arguments on each line. I am going to use my most recent need for this functionality as an example. I am trying to simply copy my iptables rules to ip6tables. I have run the command iptables -S > input.txt to generate the following file:
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT DROP
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -m state --state ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -m state --state ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
For each line, I would like to run something like sudo ip6tables $line, so that ip6tables runs with all the arguments from a given line, and N commands are run, where N is the number of lines in input.txt.
The answer to this question should be applicable to this scenario in general. I am not looking for a solution that only works for copying iptables rules to the IPv6 counterpart (through some functionality in iptables for example). I would also prefer a one liner that I can execute in the terminal if possible, instead of writing a bash script. Though, if a bash script turns out to be much less cumbersome than a one liner, then I would accept that as an answer as well.
This sounds like the perfect job for xargs. For your iptables example it works like this:
xargs -L 1 sudo ip6tables < input.txt
xargs reads command arguments from stdin and executes the provided command with those arguments appended to its command line. Here the arguments are piped into stdin from the input.txt file. -L 1 limits each invocation to one line of input: one line is appended to the command line, the resulting command is executed, and xargs continues with the next line. One caveat: xargs does its own quote and backslash processing, so rules containing quoted strings (a --log-prefix, for example) may need extra care.
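To watch what actually gets executed, xargs -t echoes each constructed command line to stderr before running it, which is handy on a first pass:
xargs -L 1 -t sudo ip6tables < input.txt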
If it is just for iptables, your system probably already has the matching ip6tables-restore command; note that it expects iptables-save-style input (with a *filter header and a COMMIT line) rather than bare iptables -S output:
sudo ip6tables-restore < input.txt
Now if input.txt is to be used to pass arguments to other commands, there is an easy solution with Bash:
arguments is an array that is read (filled) from each line of input.txt
Iterate over each line of the input.txt file.
Pass the arguments array to your command like this:
#!/usr/bin/env bash
arguments=()
while read -r -a arguments; do
your_command "${arguments[@]}"
done <input.txt
for a one-liner of the above:
while read -ra a;do your_command "${a[@]}";done<input.txt
Or for a POSIX compatible version (no array):
while read -r a;do set -- $a; your_command "$@";done <input.txt
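One caveat with the POSIX version, as a hedged aside: the unquoted $a in set -- $a undergoes pathname expansion as well as word splitting, so a line containing * could match files in the current directory. Disabling globbing around the loop avoids that:
set -f # suspend pathname expansion so a '*' in a rule stays literal
while read -r a;do set -- $a; your_command "$@";done <input.txt
set +f # restore globbing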

Getting unterminated 's' command with sed (bash)

My final goal is to get a bunch of text above the COMMIT line in /etc/ufw/before.rules. I am trying to replace the COMMIT with something like this:
TEXT I WANT\nCOMMIT
I see this as an easy sed command: echo COMMIT | sed "s#COMMIT#${morerules}#g"
I have set the variable morerules to a string like this:
morerules="""
### Start HTTP ###
# Enter rule
-A ufw-before-input -p tcp --dport 80 -j ufw-http
-A ufw-before-input -p tcp --dport 443 -j ufw-http
# Limit connections per Class C
-A ufw-http -p tcp --syn -m connlimit --connlimit-above 50 --connlimit-mask 24 -j ufw-http-logdrop
# Limit connections per IP
-A ufw-http -m state --state NEW -m recent --name conn_per_ip --set
-A ufw-http -m state --state NEW -m recent --name conn_per_ip --update --seconds 10 --hitcount 20 -j ufw-http-logdrop
# Limit packets per IP
-A ufw-http -m recent --name pack_per_ip --set
-A ufw-http -m recent --name pack_per_ip --update --seconds 1 --hitcount 20 -j ufw-http-logdrop
# Finally accept
-A ufw-http -j ACCEPT
# Log
-A ufw-http-logdrop -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix \"[UFW HTTP DROP]\"
-A ufw-http-logdrop -j DROP
### End HTTP ##
"""
As you can see, I added a backslash to the quotation marks. Setting the variable is successful. When I run the command from above, echo COMMIT | sed "s#COMMIT#${morerules}#g", this is the output:
sed: -e expression #1, char 9: unterminated `s' command
Is the error with my string or with my sed command? Also, I know other people have gotten this error, but none of their fixes seem to work.
To place a literal backslash before each newline in your replacement text, thus telling sed that it should be considered part of the same expression:
orig=$'\n'; replace=$'\\\n'
sed "s#COMMIT#${morerules//$orig/$replace}#g"
(Placing the values in variables makes it easier to implement this in a way that works across multiple versions of bash).
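Applied to the stated goal of keeping the COMMIT line in /etc/ufw/before.rules and putting the new text above it, the same escaping works if COMMIT is re-added at the end of the replacement. A sketch, assuming morerules is set as in the question; note that the replacement text also contains #, which is the s### delimiter here and must be escaped too (as would & or backslashes, in general):
orig=$'\n'; replace=$'\\\n'
esc=${morerules//'#'/'\#'} # escape the '#' delimiter that appears in the comment lines
esc=${esc//$orig/$replace} # backslash-escape every newline for sed
sed -i.bak "s#^COMMIT\$#${esc}${replace}COMMIT#" /etc/ufw/before.rules # -i.bak keeps a backup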

Using If-Else statement to check for output on Bash scripting

I would want the bash scripting to run the following command
iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
if there is no output of it found using
iptables -t nat --list
How can I use an if-else statement to check for the output? Can I use cat?
Use $() to capture the output of a command and -z to determine if it is empty:
output=$(iptables -t nat --list)
if [ -z "$output" ] # returns true if the length of $output is 0
then
output=$(iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000)
fi
Note that iptables -t nat --list always prints chain headers, so its output is never completely empty; grepping the listing for the specific rule is more reliable, depending on how you're trying to match it. The flag-style output of iptables -S is easier to match than the default --list format:
if ! iptables -t nat -S PREROUTING | grep -- '--dport 80 ' | grep -q -- '--to-ports 10000'
then
iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port 10000
fi
This appends the rule only when no PREROUTING entry matches both destination port 80 and redirect port 10000. If the output string is more predictable you could use a single grep for it.
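Recent iptables versions (1.4.11 and later) also offer -C/--check, which tests for an exact rule match via the exit status and avoids parsing listings altogether. A sketch, spelling the rule once so the check and the append stay in sync:
rule='-p tcp --destination-port 80 -j REDIRECT --to-port 10000'
if ! iptables -t nat -C PREROUTING $rule 2>/dev/null # unquoted on purpose: the rule must word-split
then
iptables -t nat -A PREROUTING $rule
fi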

Check if user chain is available in bash script

I have two bash scripts. In the first one a user chain will be created like
#!/bin/bash
iptables -X STATS
iptables -N STATS
iptables -I INPUT -j STATS
In another bash script I will insert the rules like
#!/bin/bash
# HERE: If the STATS chain is not available, end the script
iptables -A STATS -p tcp --dport 80
How can I check at the position HERE: if the STATS-Chain is available?
Is it possible with iptables itself, or only with iptables -L and some sed/awk/grep... magic?
iptables-save output might be easier to grep for this task.
Check whether the STATS chain exists, and end the script if it does not:
/sbin/iptables-save | grep -q '^:STATS ' || exit 0
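It is also possible with iptables itself: listing a chain that does not exist fails with a nonzero exit status, so no grep is needed. A sketch for the second script (-n avoids slow DNS lookups while listing):
iptables -n -L STATS >/dev/null 2>&1 || exit 0 # end the script if the chain is missing
iptables -A STATS -p tcp --dport 80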
