I have come across a script, Wondershaper.
The script is terrific; however, is there any way to make it smarter?
For example, could it run only after a certain amount of traffic has gone through?
Say a 1 TB limit is set per day; once 1 TB is hit, the script turns on automatically.
I have thought about setting up a cron job:
at 12 am it clears the Wondershaper rules, and at 15-minute intervals it checks whether the server has crossed the 1 TB limit for the day; if it has, it runs the limiter.
But I am not sure how to set up the second part: how can I make the limiter run only after 1 TB has been crossed?
Remove code:
wondershaper -ca eth0
Limit code:
wondershaper -a eth0 -u 154000
I have made a custom script for this. Since it is not possible to do this within the system itself, I had to get creative: the script makes an API call to the datacenter and is run from a cron job.
I also used bashjson to parse the response. I have attached the script below.
date=$(date +%F)
url='API URL /metrics/datatraffic?from='
url1='T00:00:00Z&to='
url2='T23:59:59Z&aggregation=SUM'
final="$url$date$url1$date$url2"
wget --no-check-certificate -O output.txt \
  --method GET \
  --timeout=0 \
  --header 'X-Lsw-Auth: API AUTH' \
  "$final"
sed 's/[][]//g' output.txt >> test1.json # strip the '[]' from the output to make it easier for bashjson to parse
down=$(/root/bashjson/bashjson.sh test1.json metrics DOWN_PUBLIC values value) # download total into a variable
up=$(/root/bashjson/bashjson.sh test1.json metrics UP_PUBLIC values value) # upload total into a variable
newdown=$(printf "%.14f" "$down")
newup=$(printf "%.14f" "$up")
upp=$(printf "%.0f\n" "$newup") # expand scientific notation, which bash arithmetic cannot handle
downn=$(printf "%.0f\n" "$newdown")
if (( upp > 800000000000 ))
then
wondershaper -a eth0 -u 100000 # main command to limit upload
else
echo uppworks
fi
if (( downn > 500000000000 ))
then
wondershaper -a eth0 -d 100000 # limit download
else
echo downworks
fi
rm -rf output.txt test1.json
echo $upp
echo $downn
You can always update it as per your preference.
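To wire up the scheduling described in the question, a minimal crontab sketch could look like the following. This is only a sketch: it assumes the script above is saved as /root/traffic_check.sh (a hypothetical path), and the wondershaper path may need adjusting for your system:
# clear the limiter at midnight (eth0 as in the question)
0 0 * * * /sbin/wondershaper -ca eth0
# every 15 minutes, re-check the day's traffic and apply the limit if needed
*/15 * * * * /bin/bash /root/traffic_check.sh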
Related
I am writing a shell script where I want to ssh to a server and display its CPU and memory usage as the result. I'm using the top command for this.
Script line:
ssh -q user@host -n "cd; top -n 1 | egrep 'Cpu|Mem|Swap'"
But the result is
TERM environment variable is not set.
I checked the same on the server by running set | grep TERM and got TERM=xterm.
Could someone please help me with this? Many thanks.
Try using the top -b flag:
ssh -q user@host -n "cd; top -bn 1 | egrep 'Cpu|Mem|Swap'"
This tells top to run non-interactively, and is intended for this sort of use.
top needs a terminal environment. You have to add the -t parameter to ssh to get the result:
ssh -t user@host -n "top -n 1 | egrep 'Cpu|Mem|Swap'"
Got it! A small modification is needed to the script line below.
ssh -t user@host -n "top -n 1 | egrep 'Cpu|Mem|Swap'"
Instead of -t we need to give -tt. It worked for me.
top requires a tty to run after ssh'ing; using -tt forces pseudo-tty allocation.
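For reference, a combined form using both suggestions (batch mode plus forced pseudo-tty allocation) would look something like this, assuming the same user and host:
ssh -tt user@host "top -bn 1 | egrep 'Cpu|Mem|Swap'"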
Thanks stony for providing me a close enough answer!! :)
I have very little experience with Bash, but here is what I am trying to accomplish.
I have two different text files with a bunch of server names in them. Before installing any Windows updates and rebooting the servers, I need to disable all of their Nagios host/service alerts.
host=/Users/bob/WSUS/wsus_test.txt
password="my_password"
while read -r host
do
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
This is a reduced form of my current code, which works as intended. However, we have servers in a bunch of regions, and each server name is prefixed with a three-letter region code (e.g., LAX, NYC, etc.). We also have a Nagios server in each region, so the code above needs to connect to the correct regional Nagios server based on the server name being passed in.
I tried adding 4 test servers into a text file and just adding a line like this:
if grep lax1 /Users/bob/WSUS/wsus_text.txt; then
<same command as above but with the regional nagios server name>
fi
This doesn't work as intended and nothing is actually disabled/enabled via API calls. Again, I've done very little with Bash so any pointers would be appreciated.
Extract the region from the host name and use it in the Nagios URL, like this:
while read -r host; do
region=$(cut -f1 -d- <<< "$host")
curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1
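If the region code is not dash-separated (for example a hypothetical name like lax1web01), the first three characters of the host name can be used instead; a sketch along the same lines:
while read -r host; do
    region=${host:0:3}   # e.g. "lax" from "lax1web01" (hypothetical naming)
    curl -vs -o /dev/null -d "cmd_mod=2&cmd_typ=25&host=$host&btnSubmit=Commit" \
        "https://nagios-$region.fqdn.here/nagios/cgi-bin/cmd.cgi" -u "bob:$password" -k
done < wsus_test.txt >> /Users/bob/WSUS/diable_test.log 2>&1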
I'm not quite sure what the issue is. I'm on Kali Linux 2.0 right now, fresh install. The following worked on Ubuntu 14.04 but it's not working anymore (maybe I accidentally changed it?). It looks correct to me, but every time it runs it blocks.
backup_folder=$(ssh -i /home/dexter/.ssh/id_rsa $server 'ls -t '$dir' | head -1')
This is part of a larger script. $server and $dir are set. When I run the command alone, I get the correct output, but it doesn't end the connection.
I don't know if this will solve the question, but your command doesn't handle directories with spaces in their names. Add double quotes inside the single-quoted section, like this:
SERVER='remoteServer' && REMOTE_DIR='remoteDir' && backup_folder=$(ssh -i /home/dexter/.ssh/id_rsa "${SERVER}" 'ls -t "'${REMOTE_DIR}'" | head -n1'); echo "${backup_folder}"
If that doesn't help, try adding an increasing number of -v switches to ssh for debugging, eventually reaching:
SERVER='remoteServer' && REMOTE_DIR='remoteDir' && backup_folder=$(ssh -vvv -i /home/dexter/.ssh/id_rsa "${SERVER}" 'ls -t "'${REMOTE_DIR}'" | head -n1'); echo "${backup_folder}"
If the verbose output does not help, it may be an MTU problem (this kind of problem doesn't fail cleanly; it just behaves strangely).
You can try lowering the MTU (usually 1500) on your side to solve it:
sudo ifconfig eth0 mtu 1048 up
eth0 is obviously an example interface, use your own.
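On systems where ifconfig is no longer available, the iproute2 equivalent would be roughly the following (again, substitute your own interface for eth0):
sudo ip link set dev eth0 mtu 1048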
I am using Ubuntu Server 14.04 32bit for the following.
I am trying to use blocklists to add regional blocks (China, Russia, ...) to my firewall rules, and I am struggling with how long my script takes to complete and with understanding why a different script fails to work.
I originally used http://whatnotlinux.blogspot.com/2012/12/add-block-lists-to-iptables-from.html as an example and tidied up / changed parts of the script into pretty much what's below:
#!/bin/bash
# Blacklist's names & URLs array
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
iptables -D INPUT -j $key #Delete current iptables chain link
iptables -F $key #Flush current iptables chain
iptables -X $key #Delete current iptables chain
iptables -N $key #Create current iptables chain
iptables -A INPUT -j $key #Link current iptables chain to INPUT chain
#Read blacklist
while read line; do
#Drop description, keep only IP range
ip_range=`echo -n $line | sed -e 's/.*:\(.*\)-\(.*\)/\1-\2/'`
#Test if it's an IP range
if [[ $ip_range =~ ^[0-9].*$ ]]; then
# Add to the blacklist
iptables -A $key -m iprange --src-range $ip_range -j LOGNDROP
fi
done < <(zcat /tmp/blacklist_$key.gz | iconv -f latin1 -t utf-8 - | dos2unix)
done
# Delete files
rm /tmp/blacklist*
exit 0
This appears to work fine for short test lists, but manually adding many (200,000+) entries to iptables takes an EXORBITANT amount of time, and I'm not sure why. Depending on the list, I have calculated this taking upwards of 10 hours to complete, which just seems silly.
After viewing the format of the iptables-save output, I created a new script that uses iptables-save to save the working iptables rules and then appends blocks in the expected format to that file, such as -A bogon -m iprange --src-range 0.0.0.1-0.255.255.255 -j LOGNDROP, and eventually uses iptables-restore to load the file, as seen below:
#!/bin/bash
# Blacklist's names & URLs arrays
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
iptables -F # Flush iptables chains
iptables -X # Delete all user created chains
iptables -P FORWARD DROP # Drop all forwarded traffic
iptables -N LOGNDROP # Create LOGNDROP chain
iptables -A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
iptables -A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
iptables -A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
iptables -A LOGNDROP -j DROP # Drop after logging
# Build first part of iptables-rules
for key in ${!blacklists[@]}; do
iptables -N $key # Create chain for current list
iptables -A INPUT -j $key # Link input to current list chain
done
iptables-save | sed '$d' | sed '$d' > /tmp/iptables-rules.rules # Save WORKING iptables rules and remove the last 2 lines (COMMIT & comment)
for key in ${!blacklists[@]}; do
#Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*:/-A\ $key\ -m\ iprange\ --src-range\ / | sed s/$/\ -j\ LOGNDROP/ >> /tmp/iptables-rules.rules
done
echo 'COMMIT' >> /tmp/iptables-rules.rules
iptables-restore < /tmp/iptables-rules.rules
# Delete files
rm /tmp/blacklist*
rm /tmp/iptables-rules.rules
exit 0
This works great for most lists on the testbed; however, there are specific lists that, if included, produce the error iptables-restore: line 389971 failed, which always points at the last line (COMMIT). I've read that, due to the way iptables works, whenever there is an issue reloading rules the error will always blame the last line.
The truly odd thing is that when testing these same lists on Ubuntu Desktop 14.04 64bit, the second script works just fine. I have tried running the script on the Desktop machine, using iptables-save to save a "properly" formatted version of the ruleset, and then loading that file into iptables on the server using iptables-restore, and I still receive the error.
I am at a loss as to how to troubleshoot this, why the initial script takes so long to add rules to iptables, and what could potentially be causing problems with the lists in the second script.
If you need to block a multitude of IP Addresses, use ipset instead.
Step 1: Create the IPset:
# Hashsize of 1024 is usually enough. Higher numbers might speed up the search,
# but at the cost of higher memory usage.
ipset create BlockAddress hash:ip hashsize 1024
Step 2: Add the addresses to block into that IPset:
# Put this in a loop that reads a file containing the list of addresses to block
ipset add BlockAddress $IP_TO_BLOCK
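A minimal sketch of that loop, assuming the addresses live in /etc/iptables.d/blocklist.txt (a hypothetical path):
while read -r IP_TO_BLOCK; do
    # -exist keeps ipset quiet if an address is already in the set
    ipset add BlockAddress "$IP_TO_BLOCK" -exist
done < /etc/iptables.d/blocklist.txt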
Finally, replace all those lines to block with just one line in netfilter:
iptables -t raw -A PREROUTING -m set --match-set BlockAddress src -j DROP
Done. iptables-restore will be mucho fasta.
IMPORTANT NOTE: I strongly suggest NOT adding domain names into netfilter; netfilter would first have to do a DNS resolve, and if DNS is not properly configured and/or is too slow, it will fail. Instead, pre-resolve (or periodically re-resolve) the domain names you want to block and feed the resulting IP addresses into the "file containing the list of addresses to block". It should be an easy script, invoked from crontab every 5 minutes or so.
EDIT 1:
This is an example of a cronjob I use to get facebook.com's address, invoked every 5 minutes:
#!/bin/bash
fbookfile=/etc/iptables.d/facebook.ip
for d in www.facebook.com m.facebook.com facebook.com; do
dig +short "$d" >> "$fbookfile"
done
sort -n -u "$fbookfile" -o "$fbookfile"
Every half hour, another cronjob feeds those addresses to ipset:
#!/bin/bash
ipset flush IP_Fbook
while read ip; do
ipset add IP_Fbook "$ip"
done < /etc/iptables.d/facebook.ip
Note: I have to do this because dig +short facebook.com, for instance, returns exactly ONE IP address, and after some observation the returned address changes every ~5 minutes. Since I'm too occupied to make an optimized version, I took the easy way out and do a flush/rebuild only every 30 minutes to minimize CPU spikes.
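The two crontab entries might look roughly like this (the script paths are hypothetical):
# resolve the Facebook names every 5 minutes
*/5 * * * * /usr/local/sbin/fbook_resolve.sh
# rebuild the IP_Fbook set every 30 minutes
*/30 * * * * /usr/local/sbin/fbook_ipset.sh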
The following is how I ended up solving this using ipsets as well.
#!/bin/bash
# Blacklist names & URLs array
declare -A blacklists
blacklists[China]="url"
# blacklists[key]="url"
# etc...
for key in ${!blacklists[@]}; do
# Download blacklist
wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
# Create ipset for current blacklist
ipset create $key hash:net maxelem 400000
# TODO method for determining appropriate maxelem
while read line; do
# Add addresses from list to ipset
ipset add $key $line -quiet
done < <(zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*://)
# Add rules to iptables
iptables -D INPUT -m set --match-set $key src -j $key # Delete link to list chain from INPUT
iptables -F $key # Flush list chain if existed
iptables -X $key # Delete list chain if existed
iptables -N $key # Create list chain
iptables -A $key -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied $key TCP: " --log-level 7
iptables -A $key -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied $key UDP: " --log-level 7
iptables -A $key -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied $key ICMP: " --log-level 7
iptables -A $key -j DROP # Drop after logging
iptables -A INPUT -m set --match-set $key src -j $key
done
I'm not wildly familiar with ipsets, but this makes for a much faster method of downloading, parsing and adding blocks.
I've added an individual chain for each list for more verbose logging, so the logs show which blocklist a dropped IP came from if you use multiple lists. On my actual box I'm using around 10 lists and have added several hundred thousand addresses with no problem!
Download Zones
#!/bin/bash
# http://www.ipdeny.com/ipblocks/
zone=/path_to_folder/zones
if [ ! -d $zone ]; then mkdir -p $zone; fi
wget -c -N http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar -C $zone -zxvf all-zones.tar.gz >/dev/null 2>&1
rm -f all-zones.tar.gz >/dev/null 2>&1
Edit your Iptables bash script and add the following lines:
#!/bin/bash
ipset=/sbin/ipset
iptables=/sbin/iptables
route=/path_to_blackip/
$ipset -F
$ipset -N -! blockzone hash:net maxelem 1000000
for ip in $(cat $zone/{cn,ru}.zone $route/blackip.txt); do
$ipset -A blockzone $ip
done
$iptables -t mangle -A PREROUTING -m set --match-set blockzone src -j DROP
$iptables -A FORWARD -m set --match-set blockzone dst -j DROP
Example: "blackip.txt" is your own IP blacklist, and "cn,ru" are the China and Russia zone files.
Source: blackip
I have a 200 MB file to download. I don't want to download it directly by passing the URL to cURL (because my college blocks requests larger than 150 MB).
So I can download the data in 10 MB chunks by passing range parameters to cURL, but I don't know how many 10 MB chunks to download. Is there a way in cURL to keep downloading until the end of the file? Something more like
while(next byte present)
download byte;
Thanks :)
Command-line curl lets you specify a range to download, so for your 150 MB max you'd do something like
curl http://example.com/200_meg_file -r 0-104857600 > the_file
curl http://example.com/200_meg_file -r 104857601-209715200 >> the_file
and so on until the entire thing is downloaded, grabbing 100 MB chunks at a time and appending each chunk to the local copy.
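If you'd rather not hard-code each range, a rough wrapper can keep requesting fixed-size chunks until the server returns a short one. This is only a sketch: the URL, output file name, and 100 MB chunk size are assumptions, and it relies on the server honouring Range requests:
#!/bin/bash
url="http://example.com/200_meg_file"   # assumed URL from the example above
out="the_file"
chunk=$((100 * 1024 * 1024))            # 100 MB per request
offset=0
: > "$out"                              # start with an empty local copy
while :; do
    end=$((offset + chunk - 1))
    # -f makes curl fail silently once the range runs past the end of the file
    # (a transfer error also stops the loop)
    curl -sf -r "$offset-$end" "$url" >> "$out" || break
    size=$(stat -c %s "$out" 2>/dev/null || stat -f %z "$out")
    # a short chunk means we reached the end of the file
    [ "$size" -lt $((end + 1)) ] && break
    offset=$((end + 1))
done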
curl already has the ability to resume a download. Just run it like this:
$> curl -C - $url -o $output_file
Of course this won't figure out when to stop, per se. However it would be pretty easy to write a wrapper. Something like this:
#!/bin/bash
url="http://someurl/somefile"
out="outfile"
touch "$out"
last_size=-1
while [ "`du -b $out | sed 's/\W.*//'`" -ne "$last_size" ]; do
curl -C - "$url" -o "$out"
last_size=`du -b $out | sed 's/\W.*//'`
done
I should note that curl outputs a fun looking error:
curl: (18) transfer closed with outstanding read data remaining
However I tested this on a rather large ISO file, and the md5 still matched up even though the above error was shown.