wget resolves to a different IP than host - shell

I have a shell script in which I use host to get the IP of the target site to update ufw and allow outbound traffic to that IP. However, when I make the subsequent wget call to the same base URL, it resolves to a different IP, and thus is blocked by ufw. Just to test, I tried pinging the URL, and it returned a different third IP.
We block all outbound traffic by default in ufw and only open what we need, so the script has to record the correct IP before I can wget the content. The IP in each instance (host vs. wget) is consistently the same, but the two tools return different values from each other, so I don't think it's simply a DNS issue. How do I get a consistent IP to put into the firewall so that the subsequent wget request succeeds? As a test I disabled the firewall and was able to download from the URL, so the problem is definitely in getting a consistent IP to point to.
HOSTNAME=<name of site to resolve>
LOGFILE=<logfile path>
Current_IP=$(host "$HOSTNAME" | head -n 1 | cut -d " " -f 4)
# this echoes the correct value
echo "$Current_IP"
if [ ! -f "$LOGFILE" ]; then
    /usr/sbin/ufw allow out from any to "$Current_IP"
    echo "$Current_IP" > "$LOGFILE"
    echo "New IP address found and logged" >> ./download.log
else
    Old_IP=$(cat "$LOGFILE")
    if [ "$Current_IP" = "$Old_IP" ]; then
        echo "IP address has not changed" >> ./download.log
    else
        /usr/sbin/ufw delete allow out from any to "$Old_IP"
        /usr/sbin/ufw allow out from any to "$Current_IP"
        echo "$Current_IP" > "$LOGFILE"
        echo "IP address was updated in ufw" >> ./download.log
    fi
fi
After the script updates the firewall, a subsequent wget to HOSTNAME goes out to a different IP than the one just allowed.

Turns out the difference was "www.". When I was resolving host I was not using www, and when I was using wget I was using www, and thus they resolved to different IPs for this particular site.
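Given that, the robust fix for a script is to resolve exactly the hostname that the later wget call uses, and to allow every A record the name returns, since a single name can map to several addresses. A minimal sketch of that idea, assuming dig is available and using www.example.com as a placeholder; with round-robin DNS this narrows, but does not fully remove, the chance of a mismatch:

HOSTNAME=www.example.com   # must match the hostname passed to wget exactly
# dig +short can also print CNAME targets, so keep only IPv4 addresses
for ip in $(dig +short "$HOSTNAME" A | grep -E '^[0-9.]+$'); do
    /usr/sbin/ufw allow out from any to "$ip"
done
wget "http://$HOSTNAME/"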

Related

Trying to get BASH backup script to find an IP address based off known MAC address?

I have a small BASH backup script that uses rsync to back up a handful of computers on my LAN. It works very well for static devices using an Ethernet cable; trouble comes in for my even smaller number of laptop users that have docks. Every once in a while they do not connect to the dock and its Ethernet cable/statically assigned address, and end up on the WiFi with a DHCP-assigned address. I already have a list of known statically assigned targets in a file that is parsed to drive the actual backup. So I keep thinking I should fairly easily be able to create a second file with an nmap scan before each backup run, using other code I found - something like:
sudo nmap -n -sP 192.168.2.0/24 | awk '/Nmap scan report for/{printf $5;}/MAC Address:/{print " => "$3;}' | sort
which gives me a list like 192.168.2.101 => B4:FB:E4:FE:C6:6F for all found devices on the LAN. I just removed the | sort and sent it to a file (> found.devices) instead.
So now I have a list of found devices' IP and MAC addresses - and I'd like to compare the two files and create a new target list with any changed IP addresses found (for those laptop users that forgot to connect to the dock and are now using DHCP). But I still want to keep my original targets file clean for the times that they do remember, and also continue to get those other devices that are wired all the time, while ignoring everything else on the LAN.
found.devices
192.168.2.190 => D4:XB:E4:FE:C6:6F
192.168.2.102 => B4:QB:Y4:FE:C6:6F
192.168.2.200 => B4:FB:P4:ZE:C6:6F
192.168.2.104 => B4:FB:E4:BE:P6:6F
known.targets
192.168.2.101 D4:XB:E4:FE:C6:6F domain source destination
192.168.2.102 B4:QB:Y4:FE:C6:6F domain source destination
192.168.2.103 B4:FB:P4:ZE:C6:6F domain source destination
192.168.2.104 B4:FB:E4:BE:P6:6F domain source destination
This should produce a list or file for the current backup run to use:
192.168.2.190 domain source destination
192.168.2.102 domain source destination
192.168.2.200 domain source destination
192.168.2.104 domain source destination
Currently my bash script just reads the file of known.targets one line at a time:
cat /known.targets | while read ip hostname source destination
do
    # ... this mounts and backs up the data I want ...
I really like the current system and have found it to be very reliable for my simple needs; I just need some way to catch those users that intermittently forget to dock. I expect it's a series of nested loops, but I cannot get my head to wrap around it - been away from actual coding for too long - any suggestions would be greatly appreciated. I'd also really like to get rid of the => and just use comma- or space-separated data, but every time I mess with that awk statement I end up shifting the data and getting an oddly placed CR somewhere; I cannot figure that out either!
Try this pure Bash code:
declare -A found_mac2ip
while read -r ip _ mac; do
    [[ -n $mac ]] && found_mac2ip[$mac]=$ip
done <'found.devices'

while read -r ip mac domain source destination; do
    ip=${found_mac2ip[$mac]-$ip}
    # ... DO BACKUP FOR $ip ...
done <'known.targets'
It first sets up a Bash associative array mapping the found MAC addresses to IP addresses.
It then loops through the known.targets file; for each MAC address it uses the IP address from found.devices if the MAC is listed there, and otherwise falls back to the IP address read from known.targets.
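The ${found_mac2ip[$mac]-$ip} expansion does the real work: it yields the array entry when the MAC exists as a key, and falls back to the $ip just read otherwise. A quick self-contained illustration with made-up values:

declare -A m=([aa:bb]=10.0.0.9)
ip=192.168.2.101
echo "${m[aa:bb]-$ip}"   # prints 10.0.0.9 (key present)
echo "${m[cc:dd]-$ip}"   # prints 192.168.2.101 (key absent, fallback)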
It's also possible to extract the "found" MAC and IP address information directly from the nmap output instead of from a found.devices file. This alternative version of the code does that:
declare -A found_mac2ip
nmap_output=$(sudo nmap -n -sP 192.168.2.0/24)
while IFS=$' \t\n()' read -r f1 f2 f3 f4 f5 _; do
    [[ "$f1 $f2 $f3 $f4" == 'Nmap scan report for' ]] && ip=$f5
    [[ "$f1 $f2" == 'MAC Address:' ]] && found_mac2ip[$f3]=$ip
done <<<"$nmap_output"

while read -r ip mac domain source destination; do
    ip=${found_mac2ip[$mac]-$ip}
    # ... DO BACKUP FOR $ip ...
done <'known.targets'
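As for the side question about the =>: that string is a literal inside the original awk command, so it can simply be changed. A sketch under the same nmap output assumptions; buffering the IP in a variable also avoids the dangling printf that can leave a half-written line when a host reports no MAC (typically the scanning machine itself):

sudo nmap -n -sP 192.168.2.0/24 |
    awk '/Nmap scan report for/{ip=$5} /MAC Address:/{print ip, $3}' |
    sort > found.devices

This writes space-separated "IP MAC" lines; if you adopt it, change the first read loop above to read -r ip mac, since there is no longer a middle field.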
UPDATE: per a comment from the OP, dropping the assumption (and associated code) about keeping an IP address that shows up in found.devices without a match in known.targets (i.e., this cannot happen).
Assumptions:
start with a list of IP/MAC addresses from known.targets
if a MAC address also shows up in found.devices then the IP address from found.devices takes precedence
Adding a standalone entry to both files:
$ cat known.targets
192.168.2.101 D4:XB:E4:FE:C6:6F domain source destination
192.168.2.102 B4:QB:Y4:FE:C6:6F domain source destination
192.168.2.103 B4:FB:P4:ZE:C6:6F domain source destination
192.168.2.104 B4:FB:E4:BE:P6:6F domain source destination
111.111.111.111 AA:BB:CC:DD:EE:FF domain source destination
$ cat found.devices
192.168.2.190 => D4:XB:E4:FE:C6:6F
192.168.2.102 => B4:QB:Y4:FE:C6:6F
192.168.2.200 => B4:FB:P4:ZE:C6:6F
192.168.2.104 => B4:FB:E4:BE:P6:6F
222.222.222.222 => FF:EE:CC:BB:AA:11
One awk idea:
$ cat ip.awk
# first file (known.targets): map MAC -> IP, and MAC -> "domain source destination"
FNR==NR { ip[$2]=$1; dsd[$2]=$3 FS $4 FS $5; next }
# second file (found.devices): overwrite the IP when the MAC was seen on the LAN
$3 in ip { ip[$3]=$1 }
# print the merged result
END { for (mac in ip) print ip[mac],dsd[mac] }
Running against our files:
$ awk -f ip.awk known.targets found.devices
192.168.2.200 domain source destination
192.168.2.190 domain source destination
192.168.2.104 domain source destination
111.111.111.111 domain source destination
192.168.2.102 domain source destination
Feeding this to a while loop:
while read -r ip hostname source destination
do
    echo "${ip} : ${hostname} : ${source} : ${destination}"
done < <(awk -f ip.awk known.targets found.devices)
This generates:
192.168.2.200 : domain : source : destination
192.168.2.190 : domain : source : destination
192.168.2.104 : domain : source : destination
111.111.111.111 : domain : source : destination
192.168.2.102 : domain : source : destination

Validating a CIDR IP to set for an interface

I'm writing a bash script, which sets a fixed IP for an interface. I'd set the chosen IP with sudo ip addr change dev eth0 192.168.3.14/24.
For this I'll need to validate the user given CIDR IP and came across this perl command: perl -MNet::CIDR=cidrvalidate -e 'printf("%s\n", cidrvalidate($ARGV[0]) ? "valid" : "invalid")' -- 1.2.3.0/24
Now this would be a great one-liner for the bash script, but it only checks whether it is a valid network, not whether it's a valid client IP on the network.
Bash-only solutions become rather extensive quickly, so I'd be fine to use perl or python for this.
I could not identify an appropriate Perl call to check whether the user entered a valid client IP in CIDR notation.
I started implementing a regex check in bash, but that became rather extensive quickly.
The Net::CIDR one-liner above almost does the job perfectly, except that it reports client IPs on the network as "invalid".
I'd expect the function to identify valid CIDR client IPs. For example:
127.0.0.1/32 = True
What perl/python/bash function can I use to check if a user-defined IP (CIDR) is a valid client IP?
edit: I've resorted to using ipcalc:
while true; do
    read -p "Enter IP: " ip
    ipcalc=$(ipcalc "${ip}")
    if [[ ${ipcalc} =~ "INVALID" ]]; then
        echo "Invalid."
    else
        break
    fi
done
See find in Net::CIDR::Lite.
perl -mNet::CIDR::Lite -E'
    my $c = Net::CIDR::Lite->new;
    $c->add("209.152.214.112/30");
    $c->add("209.152.214.116/31");
    $c->add("209.152.214.118/31");
    for (qw(209.152.214.111 209.152.214.112)) {
        say $c->find($_) ? "$_ valid" : "$_ invalid";
    }
'
output
209.152.214.111 invalid
209.152.214.112 valid
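If Python is available, the standard library's ipaddress module accepts host-style CIDR strings out of the box: ip_interface allows host bits to be set, so 127.0.0.1/32 and 192.168.3.14/24 both pass, while malformed input raises an error. A minimal sketch, callable from the bash script:

read -p "Enter IP: " ip
if python3 -c 'import ipaddress, sys; ipaddress.ip_interface(sys.argv[1])' "$ip" 2>/dev/null; then
    echo "valid"
else
    echo "invalid"
fi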

Simulate Network Latency mac Sierra

I am trying to simulate network latency for all traffic to a certain IP/URL. I tried using a proxy through Charles, but that only covers traffic going over HTTP or SOCKS. I found some information online, but it does not seem to work for me. Can anyone see what is wrong with my commands?
# enable pf
pfctl -E

# add a temporary extra ruleset (called an "anchor") named "deeelay"
(cat /etc/pf.conf && echo "dummynet-anchor \"deeelay\"" && echo "anchor \"deeelay\"") | sudo pfctl -f -

# add a rule to the deeelay set to send any traffic to the endpoint through the new rule
echo "dummynet out proto tcp from any to myurl.com pipe 1" | sudo pfctl -a deeelay -f -

# add a rule to dummynet pipe 1 to delay every packet by 500ms
sudo dnctl pipe 1 config delay 500
I see this warning when I run the commands:
No ALTQ support in kernel
ALTQ related functions disabled
Is that the issue?
The problem was the proto parameter. The application is not using TCP; it is using another protocol. You can either supply all the protocols you want as a list, like so:
proto { tcp udp icmp ipv6 tlsp smp }
Or you can remove the proto parameter altogether, and the rule will match all protocols.
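For example, the delay rule from the question rewritten without the proto parameter (a sketch, keeping the placeholder myurl.com):

echo "dummynet out from any to myurl.com pipe 1" | sudo pfctl -a deeelay -f -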

What could prevent frequently switching default source ip of a machine with several interfaces

The goal was to frequently change the default outgoing source IP on a machine with multiple interfaces and live IPs.
I used ip route replace default as per its documentation and let a script run in a loop at some interval. It changes the source IP fine for a while, but then all internet access to the machine is lost, and it has to be remotely rebooted from a web interface before anything works again.
Is there anything that could prevent this from working stably? I have tried this on more than one server.
Following is a minimal example:
# extract all currently active source IPs except loopback
IPs="$(ifconfig | grep 'inet addr:' | grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1 }')"
read -a ip_arr <<<$IPs
# extract all currently active ethernet interface names
Int="$(ifconfig | grep 'eth' | grep -v 'lo' | awk '{print $1}')"
read -a eth_arr <<<$Int
ip_len=${#ip_arr[@]}
eth_len=${#eth_arr[@]}
i=0
e=0
while true; do
    # e.g. ip route replace 0.0.0.0 dev eth0:1 src 192.168.1.18
    route_cmd="ip route replace 0.0.0.0 dev ${eth_arr[e]} src ${ip_arr[i]}"
    echo "$route_cmd"
    eval "$route_cmd"
    sleep 300
    ((i++))
    ((e++))
    if [ $i -eq $ip_len ]; then
        i=0
        e=0
        echo "all ips exhausted - starting from first again"
        # break
    fi
done
I wanted to comment, but as I don't have enough points, it won't let me.
Consider:
Does varying the delay time before the next iteration change the number of iterations before it fails?
Export the ifconfig and route output every time you change it, to see whether something is meaningfully different over time, and run some basic tests (ping, nslookup, etc.). Basically, find out exactly what is going wrong. Also log the commands you send (a text file per change?) to see whether they differ after x iterations; a sketch of such logging follows below.
What connectivity is lost? Incoming? Outgoing? Specific applications?
You say you use this on other servers without problems?
Are the IPs static (/etc/network/interfaces), BOOTP/DHCP, or semi-static (a BOOTP/DHCP server assigning addresses by MAC)? If served by BOOTP/DHCP, what is the lease duration?
On the last remark:
BOOTP/DHCP hands out IPs for a fixed duration, say 60 minutes. After half that time the client checks with the server whether it can keep the IP and extends the lease back to 60 minutes; this can mean a small reconfiguration of the interface (possibly at the same moment your script runs).
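To make the state-export suggestion concrete, here is a minimal sketch (the log path and ping target are assumptions; adjust to taste). Calling it right after each ip route replace in the loop leaves a per-iteration trail you can diff later:

log_state() {
    {
        echo "=== $(date) ==="
        ifconfig -a
        ip route show
        ping -c 1 -W 2 8.8.8.8 >/dev/null 2>&1 && echo "ping: OK" || echo "ping: FAILED"
    } >> /var/log/srcip-debug.log
}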
hth

BASH- trouble pinging from text file lines

I have a text file with around 3 million URLs of sites I want to block.
I am trying to ping them one by one (yes, I know it is going to take some time).
I have a script (yes, I am a bit slow in BASH) which reads the lines one at a time from the text file.
Obviously I cannot post the text file here. It was created with Python some time ago.
The problem is that ping returns "unknown host" for every entry. If I make a smaller file by hand using the same entries, the script works. I thought it might be a whitespace or end-of-line issue, so I tried addressing that in the script. What could the issue possibly be?
#!/bin/bash
while read line
do
    li=$(echo $line | tr -d '\n')
    li2=$(echo $li | tr -d ' ')
    if [ ${#line} -lt 2 ]
    then
        continue
    fi
    ping -c 2 -- $li2 >/dev/null
    if [ $? -gt 0 ]
    then
        echo 'bad'
    else
        echo 'good'
    fi
done <'temp_file.txt'
Does the file contain URLs or hostnames?
If it contains URLs you must extract the hostname from URLs before pinging:
hostname=$(echo "$li2"|cut -d/ -f3);
ping -c 2 -- "$hostname"
Ping is used to ping hosts; if you have URLs of websites, it will not work. Check that your file contains hosts (for example www.google.com, or an IP address) and not full website URLs. If you want to check actual URLs, use a tool like wget together with grep/awk to look for errors like 404. Last but not least, security-conscious people will sometimes block pinging from the outside, so take note.
Check if the file contains Windows-style \r\n line endings: head file | od -c
If so, fix it with: dos2unix filename
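If dos2unix is not installed, stripping the carriage returns with tr does the same job (a sketch that writes to a new file):

tr -d '\r' < temp_file.txt > temp_file.clean.txt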
I wouldn't use ping for this. It can easily be blocked, and it's not the best way to check either IP addresses or whether a server presents web pages.
If you just want to find the corresponding IP, use host:
$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 209.85.149.106
www.l.google.com has address 209.85.149.147
www.l.google.com has address 209.85.149.99
www.l.google.com has address 209.85.149.103
www.l.google.com has address 209.85.149.104
www.l.google.com has address 209.85.149.105
As you see, you get all the IPs registered to a host. (Note that this requires you to parse the hostname from your URLs!)
If you want to see if a URL points at a web server, use wget:
wget --spider $url
The --spider flag makes wget not save the page, just check that it exists. You can look at the return code, or add the -S flag (which prints the returned HTTP headers).
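Putting the pieces together, a minimal sketch of the whole check (the file name is taken from the question):

while read -r url; do
    url=${url%$'\r'}                  # drop a trailing CR in case the file is DOS-formatted
    [ ${#url} -lt 2 ] && continue     # skip empty/near-empty lines, as in the original script
    if wget -q --spider -- "$url"; then
        echo "good: $url"
    else
        echo "bad: $url"
    fi
done < temp_file.txt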
