Whois search for public IPs in a CSV file using a bash script - bash

I have a daily report in CSV format containing a list of public IP addresses, and I need to fill in the Hostname for each public IP. The Hostname can be an OrgId or netname.
I need a bash script to automate the whois search, instead of searching manually one by one and filling in the CSV file.
Example: this is an excerpt of a long list of public IP addresses:
Port,Type,S_Host,S_IP,Port,D_Host,D_IP,Port
2,tcp,N/A,8.8.8.8,2,N/A,47.246.57.232,8
3,tcp,N/A,47.246.57.232,2,N/A,217.17.81.9,3
I need to do a whois search on the IPs in columns 4 and 7, then fill in the Hostname in fields 3 and 6.
Desired output:
Port,Type,S_Host,S_IP,Port,D_Host,D_IP,Port
2,tcp,Google,8.8.8.8,2,Alibaba,47.246.57.232,8
3,tcp,Alibaba,47.246.57.232,2,MVTV,217.17.81.9,3

A very simple approach could be to read the list of IP addresses (i.e. pubIP.lst) and write it out into a new file but with resolved hostnames (i.e. hosts.lst).
#!/bin/bash
resolveHostname() {
# You may change or extend this function to your needs
dig -x "$1" +short
}
# Make sure there is no file with resolved hostnames
rm -f hosts.lst
while read -r LINE # read the input line by line
do
# Each Comma Separated Value (CSV) into a variable
PORT=$(echo "${LINE}" | cut -d "," -f 1)
TYPE=$(echo "${LINE}" | cut -d "," -f 2)
# SRC_HOST=$(echo "${LINE}" | cut -d "," -f 3)
SRC_IP=$(echo "${LINE}" | cut -d "," -f 4)
SRC_PORT=$(echo "${LINE}" | cut -d "," -f 5)
# DEST_HOST=$(echo "${LINE}" | cut -d "," -f 6)
DEST_IP=$(echo "${LINE}" | cut -d "," -f 7)
DEST_PORT=$(echo "${LINE}" | cut -d "," -f 8)
# And write it out the columns into a new file
# but for Col 3,6 with hostnames instead of IP
echo "${PORT},${TYPE},$(resolveHostname ${SRC_IP}),${SRC_IP},${SRC_PORT},$(resolveHostname ${DEST_IP}),${DEST_IP},${DEST_PORT}" >> hosts.lst
done < pubIP.lst
Thanks to
Passing parameters to a Bash function
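Since the question actually wants the whois OrgId/netname rather than the reverse-DNS name that dig -x returns, resolveHostname could call whois instead. This is only a sketch; the field labels differ between registries (ARIN uses OrgName, RIPE/APNIC use netname), so adjust the pattern as needed:
resolveHostname() {
# Take the first OrgName/org-name/netname field from the whois output
whois "$1" | grep -iE '^(OrgName|org-name|netname):' | head -n 1 | sed 's/^[^:]*:[[:space:]]*//'
}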

Related

insert hosts in checkMK via for loop

The following problem:
We have a file called "file.conf":
192.168.30.1|192.168.30.1|os
_gateway|192.168.30.2|Linux 2.6.18 - 2.6.22
...
The first field is the hostname, the second is the IPv4 address.
Now we have a script that should automatically insert the hosts and IPs via an automated user in checkMK.
#!/bin/bash
FILE=filename
source $FILE
for i in ${FILE}
do
HOSTNAME=$(cat $i | cut -d '|' -f1)
IP=$(cat $i | cut -d '|' -f2)
curl "http://checkmkadress/check_mk/host&user" -d 'request={"hostname":"'"$HOSTNAME"'","folder":"ansible","attributes":{"ipaddress":"'"$IP"'","site":"sitename","tag_agent":"cmk-agent"}}'
done
But if we do it like that, we get the following error, because it tries to put every hostname into "hostname" and every IP into "ipaddress" at once instead of going through the lines one by one:
{"result": "Check_MK exception: Failed to parse JSON request: '{\"hostname\":\"allhostnames":{\"ipaddress\":all_ips\",\"site\":\"sitename\",\"tag_agent\":\"cmk-agent\"}}': Invalid control character at: line 1 column 26 (char 25)", "result_code": 1}
How can we make the curl script go through each line and pick up the host and IP individually?
Using what you have already done, try it this way:
FILE="filename"
source $FILE
while read line
do
HOSTNAME=$(echo $line | cut -d '|' -f1)
IP=$(echo $line | cut -d '|' -f2)
#your curl command here
done <$FILE
OR, which I prefer
while IFS='|' read -r host ip description
do
#your curl command here
echo "$host : $ip : $description"
done < "$FILE"
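Putting it together, a minimal sketch of the whole loop with the curl call from the question inside it (the endpoint, folder and site names are copied verbatim from the question and may need adjusting):
#!/bin/bash
FILE="file.conf"
while IFS='|' read -r host ip description
do
# one request per line, using only that line's host and IP
curl "http://checkmkadress/check_mk/host&user" -d 'request={"hostname":"'"$host"'","folder":"ansible","attributes":{"ipaddress":"'"$ip"'","site":"sitename","tag_agent":"cmk-agent"}}'
done < "$FILE"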

While Read Line - Limit Number of Lines

I am trying to limit the number of lines found during a while read line loop. For example:
File: order.csv
123456,ORDER1,NEW
123456,ORDER-2,NEW
123456,ORDER-3,SHIPPED
I am doing the following.
cat order.csv | while read line;
do
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done
Which outputs:
123456:NEW
123456:NEW
123456:SHIPPED
How can I limit the number of lines? In this case there are three; how can I limit it to only 2 so that only the first two are displayed?
Desired output:
123456:NEW
123456:NEW
There are some ways to meet your requirements:
Method 1
Use head to display the first few lines of a file.
head -n 2 order.csv | while read line;
do
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done
Method 2
Use a for loop.
for i in {1..2}
do
read line
order=$(echo $line | cut -d "," -f 1)
status=$(echo $line | cut -d "," -f 3)
echo "$order:$status"
done < order.csv
Method 3
Use awk.
awk -F, 'NR <= 2 { print $1":"$3 }' order.csv
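A further pure-bash variant (not from the original answer) keeps its own counter and breaks out of the loop once the limit is reached; the field handling assumes the same three-column CSV as above:
count=0
while IFS=, read -r order _ status
do
echo "$order:$status"
(( ++count == 2 )) && break
done < order.csv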

bash scripting to add users

I created a bash script to read information such as username, group, etc. from a text file and create users based on it in Linux. The code seems to function properly and creates the users as desired, but the user information in the last line of the text file always gets misinterpreted. Even if I delete that line, the new last line gets misinterpreted, i.e. the text is read wrongly.
#!/bin/bash
userfile="users.txt"
IFS=$'\n'
if [ ! -f "$userfile" ]
then
echo "File does not exist. Specify a valid file and try again. "
exit
fi
groups=(`cut -f 4 "$userfile" | sed 's/ //'`)
fullnames=(`cut -f 1 "$userfile" | sed 's/,//' | sed 's/"//g'`)
username1=(`cut -f 1 "$userfile" |sed 's/,//' | sed 's/"//' | tr [A-Z] [a-z] | awk '{print substr($2,1,1) substr($3,1,1) substr($1,1,1)}'`)
username2=(`cut -f 4 "$userfile" | tr [A-Z] [a-z] | awk '{print substr($1,1,1)}'`)
i=0
n=${#username1[@]}
for (( q=0; q<n; q++ ))
do
usernames[$q]=${username1[$q]}"${username2[$q]}"
done
declare -a usernames
x=0
created=0
for user in ${usernames[*]}
do
adduser -c ${fullnames[$x]} -p 123456789 -f 15 -m -d /home/${groups[$x]}/$user -K LOGIN_RETRIES=3 -K PASS_MAX_DAYS=30 -K PASS_WARN_AGE=3 -N -s /bin/bash $user 2> /dev/null
usermod -g ${groups[$x]} $user
chage -d 0 $user
let created=$created+1
x=$x+1
echo -e "User $user created "
done
echo "$created Users created"
(screenshot attached)
#!/bin/bash
userfile="./users.txt"; # <-- Config
while read -r line; do
# FULL NAME
# Capture all between quotes as full name
fullname=$(printf '%s' "${line}" | sed 's/^"\(.*\)".*/\1/')
# Remove spaces and punctuation:
fullname=$(printf '%s' "${fullname}" | tr -d '[:punct:][:blank:]')
# Right-side names:
partb=$(printf '%s' "${line}" | sed "s/^\".*\"//g")
# CODE 1, capture the second field
code1=$(printf '%s' "${partb}" | cut -f 2 )
# CODE 2, capture the third field
code2=$(printf '%s' "${partb}" | cut -f 3 )
# GROUP, capture the fourth field
group=$(printf '%s' "${partb}" | cut -f 4 )
# Print only for report
echo "fullname: ${fullname}\n code 1: ${code1}\n code 2: ${code2}\n group: ${group}\n"
done <${userfile}
Maybe these are the fields that you want; now you have them in variables to manipulate: $fullname, $code1, $code2 and $group.
The failure you observed may also be due to a misplaced quotation mark in the text file or to the line breaks; in the attached screenshot I can see one missing quote.
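Another thing worth checking, as an assumption on my part rather than something visible in the screenshot: if users.txt has no newline after its last line, read fills the variables but returns non-zero, so a plain while read loop drops or mangles that final record. A small guard handles files without a trailing newline:
while read -r line || [ -n "$line" ]
do
printf '%s\n' "$line" # process the line as above, even when the file lacks a final newline
done < "${userfile}"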

Bash Script to batch-convert IP Addresses to CIDR?

Ok, here's the problem.
I have a plaintext list of IP addresses that I'm blocking on my servers, growing more and more unwieldy every day (added 3000+ entries today alone).
It's already been sorted for duplicates so that's not a problem. What I'd like to do is write a script to go through it and consolidate the entries a bit better for mass blocking.
For example, take this:
2.132.35.104
2.132.79.240
2.132.99.87
2.132.236.34
2.132.245.30
And turn it into this:
2.132.0.0/16
Any suggestions on how to code that in a bash script?
UPDATE: I've partly worked out how to do what I need. Converting it to /24 is easy, as follows:
cat /usr/local/blocks/blocks.txt | while read line; do
oc1=`echo "$line" | cut -d '.' -f 1`
oc2=`echo "$line" | cut -d '.' -f 2`
oc3=`echo "$line" | cut -d '.' -f 3`
oc4=`echo "$line" | cut -d '.' -f 4`
echo "$oc1.$oc2.$oc3.0/24" >> twentyfour.srt
done
sort -u twentyfour.srt > twentyfour.txt
rm -f twentyfour.srt
ori=`cat /usr/local/blocks/blocks.txt | wc -l`
new=`cat twentyfour.txt | wc -l`
echo "$ori"
echo "$new"
That reduced it down from 4,452 entries to 4,148 entries.
Instead of having:
109.86.9.93
109.86.26.77
109.86.55.225
109.86.70.224
109.86.87.199
109.86.89.202
109.86.95.248
109.86.100.19
109.86.110.43
109.86.145.216
109.86.152.86
109.86.155.238
109.86.156.54
109.86.187.91
109.86.228.86
109.86.234.51
109.86.239.61
I now have:
109.86.100.0/24
109.86.110.0/24
109.86.145.0/24
109.86.152.0/24
109.86.155.0/24
109.86.156.0/24
109.86.187.0/24
109.86.228.0/24
109.86.234.0/24
109.86.239.0/24
109.86.26.0/24
109.86.55.0/24
109.86.70.0/24
109.86.87.0/24
109.86.89.0/24
109.86.9.0/24
109.86.95.0/24
All well and good. BUT, there are 17 entries from the 109.86.x.x area. In a case where the first two octets match more than, say, 5 entries on /24, I'd like to reduce that to /16.
That's where I'm stuck.
UPDATE 2:
For Steve: Here's the block list for today. And here's the result so far. Apparently it's not removing the near-duplicate entries from twentyfour that are in sixteen.
I wish I could tell you this is a simple filter. However, the entire 2.0.0.0/8 network is registered to RIPE NCC. There are just way too many different ranges of blocked IP addresses; it's easier to narrow down the scope of visitors you do want versus what you don't want.
You could also use various tools to block attacks automatically.
Here is a map to identify which is which: https://www.iana.org/numbers
Here's a script I just made for you, so you can create the major block lists for each of the primary registries: AFRINIC, LACNIC, APNIC, RIPE, and ARIN.
create_tables_by_registry.sh
Just run this script, then run the generated per-registry .sh files (e.g. ripe.sh).
#!/bin/bash
# Author: Steve Kline
# Date: 03-04-2014
# Designed and tested to run properly on CentOS 6.5
#Grab Updated IANA Address Space Assignments only if Newer Version
wget -N https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.txt
assigned=ipv4-address-space.txt
arrayregistry=( afrinic apnic arin lacnic ripe )
for registry in "${arrayregistry[@]}"
do
#Clean up the ipv4-address-space.txt file and keep useable IPs
grep "$registry" $assigned | sed 's/\/8/\.0\.0\.0\/8/g'| colrm 15 > $registry-tmp1.txt
ip=($(cat $registry-tmp1.txt))
echo "#!/bin/bash" > $registry.sh
for ip in "${ip[@]}"
do
echo $ip | sed -e 's/" "//g' > $registry-tmp2.txt
#INSERT OR MODIFY YOUR COMPATIBLE FIREWALL RULES HERE
#This section creates the country to block.
echo "iptables -A INPUT -s $ip -j DROP" >> $registry.sh
chmod +x $registry.sh
done
rm $registry-tmp1.txt -f
rm $registry-tmp2.txt -f
done
Ok! Well I'm back, a little insane here and a little nutty there... I think I helped figure this out for you. I'm sure you can piece together a modification to better fit your needs.
#MODIFY FOR YOUR LIST OF IP ADDRESSES
BADIPS=block.ip
twentyfour=./twentyfour.ips #temp file for all IPs converted to twentyfour net ids
sixteen=./sixteen.ips #temp file for sixteen bit
twentyfourlst1=./twentyfour1.txt #temp file for 24 bit IDs
twentyfourlst2=./twentyfour2.txt #temp file for 24 bit IDs filtered by 16 bit IDs that match
sixteenlst=./sixteen.txt #temp file for parsed sixteenbit
#MODIFY FOR YOUR OUTPUT OF CIDR ADDRESSES
finalfile=./blockips.list #Final file post-merge
cat $BADIPS | while read line; do
oc1=`echo "$line" | cut -d '.' -f 1`
oc2=`echo "$line" | cut -d '.' -f 2`
oc3=`echo "$line" | cut -d '.' -f 3`
oc4=`echo "$line" | cut -d '.' -f 4`
echo "$oc1.$oc2.$oc3.0/24" >> $twentyfour
echo "$oc1.$oc2.0.0/16" >> $sixteen
done
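# Count how often each /16 appears; keep those seen more than 4 times (the sed strips the trailing count)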
awk '{i=1;while(i <= NF){a[$(i++)]++}}END{for(i in a){if(a[i]>4){print i,a[i]}}}' $sixteen | sed 's/ [0-9]\| [0-9][0-9]\| [0-9][0-9][0-9]//g' > $sixteenlst
sort -u $twentyfour > twentyfour.txt
# THIS FINDS NEAR DUPLICATES MATCHING FIRST TWO OCTETS
cat $sixteenlst | while read line; do
oc1=`echo "$line" | cut -d '.' -f 1`
oc2=`echo "$line" | cut -d '.' -f 2`
oc3=`echo "$line" | cut -d '.' -f 3`
oc4=`echo "$line" | cut -d '.' -f 4`
grep "\b$oc1.$oc2\b" twentyfour.txt >> duplicates.txt
done
#THIS REMOVES THE NEAR DUPLICATES FROM THE TWENTYFOUR FILE
fgrep -vw -f duplicates.txt twentyfour.txt > twentyfourfinal.txt
#THIS MERGES BOTH RESULTS
cat twentyfourfinal.txt $sixteenlst > $finalfile
sort -u $finalfile
ori=`cat $BADIPS | wc -l`
new=`cat $finalfile | wc -l`
echo "$ori"
echo "$new"
#LAST MIN CLEANUP
rm -f $twentyfour $twentyfourlst $sixteen $sixteenlst duplicates.txt twentyfourfinal.txt
Going back to fix: I noted a problem... the original line below was unsuccessful:
`grep "$oc1.$oc1" twentyfour.txt > duplicates.txt
For example: the old script had bad results with this test IP range; the updated version above does exactly as intended and matches the octets exactly, not just something similar.
192.168.1.1
192.168.2.50
192.168.5.23
192.168.14.10
192.168.10.5
192.168.24.25
192.165.20.10
10.192.168.30
5.76.10.20
5.76.20.30
5.76.250.10
5.76.34.10
5.76.50.30
95.76.30.1 - Old script matched this to 5.76
20.20.5.5
20.20.10.10
20.20.16.50
20.20.205.20
20.20.60.20
205.20.16.20 - not a problem
20.205.150.150 - Old script matched this to 20.20
220.20.16.0 - Also failed without adding -w parameter to the last grep to only match exact strings.
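For reference, the /24-versus-/16 decision can also be sketched in a single awk pass; the threshold (more than 4 distinct /24 networks sharing the first two octets) and the input file name blocks.txt are assumptions, not part of the scripts above:
#!/bin/bash
# Collapse /24 networks into a /16 when more than 4 distinct /24s share the first two octets.
# Assumes blocks.txt holds one plain IPv4 address per line.
awk -F. '
{ seen24[$1"."$2"."$3] = 1 }
END {
for (n in seen24) { split(n, o, "."); c16[o[1]"."o[2]]++ }
for (n in seen24) {
split(n, o, ".")
if (c16[o[1]"."o[2]] > 4) print o[1]"."o[2]".0.0/16"
else print n".0/24"
}
}' blocks.txt | sort -u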

Bash output to multiple variables

I'm creating a script in Bash to change all MAC addresses of my PC. I can list all network interfaces with this:
ip link | grep "<" | cut -d " " -f 2 | cut -d ":" -f 1 | grep -v lo
And the output of the script is:
eth0
wlan0
Now I need to create a variable for each network interface (to use it in the future), but I don't know how, and Google didn't help me...
Answer:
readarray -t interfaces < <(ip link | grep "<" | cut -d " " -f 2 | cut -d ":" -f 1 | grep -v lo)
echo "${interfaces[0]}" # prints eth0
echo "${interfaces[1]}" # prints wlan0
And to loop over them use for:
for curInterface in "${interfaces[@]}"; do
echo "$curInterface"
done
But there are better ways to parse the data:
First of all, instead of grepping for the < character you can use the -o flag of ip, which outputs all of the data for each interface on a single line. Then you simply need the second word, without the trailing : character. This is very simple in pure bash:
interfaces=()
while read -r _ curInterface _; do
interfaces+=("${curInterface%:}")
done < <(ip -o link)
Store the output in an array:
interfaces=( $(ip link | awk '/</ { print $2 }' | awk -F: '!/lo/ {print $1}') )
You can create an array from this output, and loop through it after.
my_array=( $(ip link | grep "<" | cut -d " " -f 2 | cut -d ":" -f 1 | grep -v lo) )
You can also look at this example, which gives different alternatives for redirecting output into an array.
And you could have it simpler, like this, with one awk command:
readarray -t youravar < <(exec ip link | awk -F': ' '/^[0-9]+:/&&!/ lo: /{print $2}')
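For the original goal of changing each interface's MAC address, here is a rough sketch of how such an array could then be used; the MAC value is only a placeholder and ip link set requires root:
readarray -t interfaces < <(ip -o link | awk -F': ' '!/ lo:/{print $2}')
for iface in "${interfaces[@]}"
do
ip link set dev "$iface" down
ip link set dev "$iface" address 02:12:34:56:78:9a # placeholder MAC (locally administered)
ip link set dev "$iface" up
done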
