Pattern involved in target name in makefile - makefile

I want to execute multiple targets for a list of servers. From the makefile output it seems that only the $(SERVERS) target was executed (twice), but I want Launch-% to be executed twice as well. How can I make that work, and how can I access each IP address inside the Launch-% target? I have the following makefile source code and output. Thanks in advance.
Makefile source code:
SERVERS=172.16.0.17 172.16.0.100
test-all: test-port-connectivity
test-port-connectivity: Launch-$(SERVERS)
	echo "Test suit 1: Port Connectivity $<"
Launch-%: $(SERVERS)
	echo "Launch Server $<"
$(SERVERS):
	echo "Server IP - $@"
Output of Makefile:
# make
echo "Server IP - 172.16.0.17"
Server IP - 172.16.0.17
echo "Server IP - 172.16.0.100"
Server IP - 172.16.0.100
echo "Launch Server 172.16.0.17"
Launch Server 172.16.0.17
echo "Test suit 1: Port Connectivity Launch-172.16.0.17"
Test suit 1: Port Connectivity Launch-172.16.0.17

Build the Launch-* target names explicitly from the server list and tie each one to its own IP with a static pattern rule:
SERVERS=172.16.0.17 172.16.0.100
LAUNCHES=$(addprefix Launch-, $(SERVERS)) # this will be Launch-172.16.0.17 Launch-172.16.0.100
test-all: test-port-connectivity
test-port-connectivity: $(LAUNCHES)
	echo "Test suit 1: Port Connectivity $^"
$(LAUNCHES): Launch-%: %
	echo "Launch Server $<"
$(SERVERS):
	echo "Server IP - $@"
Notice that the $(LAUNCHES) rule is a static pattern rule. (A simpler pattern rule would also suffice, but not be as tidy.) Also notice the use of $^ in the test-port-connectivity rule.

wget resolves to a different IP than host

I have a shell script in which I use host to get the IP of the target site to update ufw and allow outbound traffic to that IP. However, when I make the subsequent wget call to the same base URL, it resolves to a different IP, and thus is blocked by ufw. Just to test, I tried pinging the URL, and it returned a different third IP.
We're blocking all outbound traffic by default in ufw and only enabling what we need to go out, so I need the script to record the correct IP so I can wget the content. Each tool returns the same IP consistently on repeated runs, but host and wget disagree with each other, so I don't think it's simply a DNS issue. How do I get a consistent IP to update the firewall with, so that the subsequent wget request succeeds? As a test I disabled the firewall and was able to download from the URL, so the issue is definitely in getting a consistent IP to allow.
HOSTNAME=<name of site to resolve>
LOGFILE=<logfile path>
Current_IP=$(host $HOSTNAME | head -n 1 | cut -d " " -f 4)
# this echoes the correct value
echo $Current_IP

if [ ! -f $LOGFILE ]; then
    /usr/sbin/ufw allow out from any to $Current_IP
    echo $Current_IP > $LOGFILE
    echo New IP address found and logged >> ./download.log
else
    Old_IP=$(cat $LOGFILE)
    if [ "$Current_IP" = "$Old_IP" ] ; then
        echo IP address has not changed >> ./download.log
    else
        /usr/sbin/ufw delete allow out from any to $Old_IP
        /usr/sbin/ufw allow out from any to $Current_IP
        echo $Current_IP > $LOGFILE
        echo IP Address was updated in ufw >> ./download.log
    fi
fi
After the script updates the firewall, a subsequent wget to HOSTNAME tries to go out to a different IP than the one that was just allowed.
It turned out the difference was the "www." prefix: the host lookup used the bare domain while the wget call used the www. name, and for this particular site the two resolve to different IPs.
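The general fix is to resolve exactly the name that wget will contact, for example by deriving the hostname from the URL itself. A minimal sketch, assuming a single known download URL (the URL and the derived HOSTNAME are placeholders):
URL="http://www.example.com/some/file"     # placeholder URL
HOSTNAME=$(echo "$URL" | cut -d/ -f3)      # the exact name wget will resolve, www. included
# take the first A record reported by host
Current_IP=$(host "$HOSTNAME" | awk '/has address/ {print $4; exit}')
echo "$HOSTNAME resolves to $Current_IP"
/usr/sbin/ufw allow out from any to "$Current_IP"
wget "$URL"
If the name publishes several A records, allowing only the first one can still leave wget blocked, so it may be necessary to loop over every "has address" line instead.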

Validating a CIDR IP to set for an interface

I'm writing a bash script, which sets a fixed IP for an interface. I'd set the chosen IP with sudo ip addr change dev eth0 192.168.3.14/24.
For this I need to validate the user-given CIDR IP, and I came across this perl command:
perl -MNet::CIDR=cidrvalidate -e 'printf("%s\n", cidrvalidate($ARGV[0]) ? "valid" : "invalid")' -- 1.2.3.0/24
It would make a great one-liner for the bash script, but it only accepts valid network addresses; host (client) IPs on the network are reported as "invalid". I could not identify an appropriate perl function for this, and the regex check I started in bash quickly became rather extensive, so I'd be fine with using perl or python instead.
I'd expect the function to accept valid CIDR client IPs, for example:
127.0.0.1/32 = True
What perl/python/bash function can I use to check whether a user-defined IP (CIDR) is a valid client IP?
edit: I've resorted to using ipcalc:
while true; do
    read -p "Enter IP: " ip
    ipcalc=$(ipcalc "${ip}")
    if [[ ${ipcalc} =~ "INVALID" ]]; then
        echo "Invalid."
    else
        break
    fi
done
See find in Net::CIDR::Lite.
perl -mNet::CIDR::Lite -E'
    my $c = Net::CIDR::Lite->new;
    $c->add("209.152.214.112/30");
    $c->add("209.152.214.116/31");
    $c->add("209.152.214.118/31");
    for (qw(209.152.214.111 209.152.214.112)) {
        say $c->find($_) ? "$_ valid" : "$_ invalid";
    }
'
output
209.152.214.111 invalid
209.152.214.112 valid
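If you'd rather stay in the shell, a rough pure-bash check along the same lines is possible too. This is only a sketch and IPv4-only; the function name valid_host_cidr and the sample inputs are made up for illustration. It accepts ADDR/PREFIX only when ADDR is a well-formed IPv4 address, PREFIX is 0-32, and ADDR is a usable host address in that network (not the network or broadcast address, except for /31 and /32):
valid_host_cidr() {
    local cidr=$1 o1 o2 o3 o4 prefix addr mask net bcast
    [[ $cidr =~ ^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})/([0-9]{1,2})$ ]] || return 1
    o1=$((10#${BASH_REMATCH[1]})) o2=$((10#${BASH_REMATCH[2]}))
    o3=$((10#${BASH_REMATCH[3]})) o4=$((10#${BASH_REMATCH[4]}))
    prefix=$((10#${BASH_REMATCH[5]}))
    (( o1 <= 255 && o2 <= 255 && o3 <= 255 && o4 <= 255 && prefix <= 32 )) || return 1
    (( prefix >= 31 )) && return 0                   # /31 and /32 have no network/broadcast address
    addr=$(( (o1 << 24) | (o2 << 16) | (o3 << 8) | o4 ))
    mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
    net=$(( addr & mask ))
    bcast=$(( net | (~mask & 0xffffffff) ))
    (( addr != net && addr != bcast ))               # reject the network and broadcast addresses
}

# example inputs, purely illustrative
for c in 192.168.3.14/24 192.168.3.0/24 127.0.0.1/32 300.1.1.1/24; do
    valid_host_cidr "$c" && echo "$c valid" || echo "$c invalid"
done
Doing the math with bash integer arithmetic avoids the extra perl/python dependency, at the cost of handling IPv4 only.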

Shell / Korn script - set variable depending on hostname in list

I need to write a Korn shell script that, depending on the host it is running on, sets a deployment directory (say, five hosts deploy the software to directory one and five other hosts deploy to directory two).
How could I do this? I wanted to avoid an if condition for every host, like:
if [ "$(hostname)" = host1 ]; then INSTALL_DIR=Dir1
elif [ "$(hostname)" = host2 ]; then INSTALL_DIR=Dir1
I would prefer to have, say, a Directory1Hosts list and a Directory2Hosts list containing all the hosts valid for each directory, and then just check whether the host the script is running on is in Directory1Hosts or Directory2Hosts (so only two if conditions instead of ten).
Thanks for your help - I've been struggling to find how to do what is effectively a "contains" check.
Use a case statement:
case $hostname in
    host1) INSTALL_DIR=DIR1 ;;
    host2) INSTALL_DIR=DIR2 ;;
esac
or use an associative array:
install_dirs=([host1]=DIR1 [host2]=DIR2)
...
INSTALL_DIR=${install_dirs[$hostname]}
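Note that in ksh93 (and in bash 4+) the associative array must be declared before it is assigned, e.g.:
typeset -A install_dirs
install_dirs=([host1]=DIR1 [host2]=DIR2)
INSTALL_DIR=${install_dirs[$hostname]}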
When you want to keep configuration and code apart, you can make a config directory: one file listing hosts for each install dir.
# cat installdirs/Dir1
host1
host2
With these files your code can be
INSTALL_DIR=$(grep -Flx "${hostname}" installdirs/* | cut -d"/" -f2)
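If you prefer the two-list layout described in the question, a "contains" check can also be done with ksh/bash pattern matching; the host names and directory values below are just placeholders:
DIR1_HOSTS="hosta hostb hostc hostd hoste"
DIR2_HOSTS="hostf hostg hosth hosti hostj"
hostname=$(hostname)
case " $DIR1_HOSTS " in
    *" $hostname "*) INSTALL_DIR=Dir1 ;;
esac
case " $DIR2_HOSTS " in
    *" $hostname "*) INSTALL_DIR=Dir2 ;;
esac
Only two checks are needed no matter how many hosts are in each list.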

Bash case not properly evaluating value

The Problem
I have a script with a case statement that I expect to execute based on the value of a variable. The case statement appears to either ignore the value or not evaluate it properly, instead dropping through to the default.
The Scenario
I pull a specific character out of our server hostnames which indicates where in our environment the server resides. We have six different locations:
Management(m): servers that are part of the infrastructure such as monitoring, email, ticketing, etc
Development(d): servers that are for developing code and application functionality
Test(t): servers that are used for initial testing of the code and application functionality
Implementation(i): servers that the code is pushed to for pre-production evaluation
Production(p): self-explanatory
Services(s): servers that the customer needs to integrate that provide functionality across their project. These are separate from the Management servers in that these are customer servers while Management servers are owned and operated by us.
After pulling the character from the hostname I pass it to a case block. I expect the case block to evaluate the character and add a couple of lines of text to our rsyslog.conf file. What happens instead is that the case block falls through to the default, which does nothing but tell the person building the server to configure the entry manually because of an unrecognized character.
I've tested this manually against a server I recently built and verified that the character I am pulling from the hostname (an 's') is expected and accounted for in the case block.
The Code
# Determine which environment our server resides in
host=$(hostname -s)
env=${host:(-8):1}
OLDFILE=/etc/rsyslog.conf
NEWFILE=/etc/rsyslog.conf.new
# This is the configuration we need on every server regardless of environment
read -d '' common <<- EOF
...
TEXT WHICH IS ADDED TO ALL CONFIG FILES REGARDLESS OF FURTHER CODE EXECUTION
SNIPPED
....
EOF
# If a server is in the Management, Dev or Test environments send logs to lg01
read -d '' lg01conf <<- EOF
# Relay messages to lg01
*.notice ##xxx.xxx.xxx.100
#### END FORWARDING RULE ####
EOF
# If a server is in the Imp, Prod or is a non-affiliated Services zone server send logs to lg02
read -d '' lg02conf <<- EOF
# Relay messages to lg02
*.notice ##xxx.xxx.xxx.101
#### END FORWARDING RULE ####
EOF
# The general rsyslog configuration remains the same; pull it out and write it to a new file
head -n 63 $OLDFILE > $NEWFILE
# Add the common language to our config file
echo "$common" >> $NEWFILE
# Depending on which environment ($env) our server is in, add the appropriate
# remote log server to the configuration with the $common settings.
case $env in
    m) echo "$lg01conf" >> $NEWFILE;;
    d) echo "$lg01conf" >> $NEWFILE;;
    t) echo "$lg01conf" >> $NEWFILE;;
    i) echo "$lg02conf" >> $NEWFILE;;
    p) echo "$lg02conf" >> $NEWFILE;;
    s) echo "$lg02conf" >> $NEWFILE;;
    *) echo "Unknown environment; Manually configure"
esac
# Keep a dated backup of the original rsyslog.conf file
cp $OLDFILE $OLDFILE.$(date +%Y%m%d)
# Replace the original rsyslog.conf file with the new version
mv $NEWFILE $OLDFILE
An Aside
I've already determined that I can combine the different branches of the case block into just two lines using the | operator. I've listed them individually above since that is how the code reads while I'm having this issue.
I can't see anything wrong with your code. Maybe add another ;; to the default clause. To track the problem down, add set -vx as the first line; it will show you lots of debug information.
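In the same vein, it helps to print exactly what ends up in $env before the case statement runs. Note that if the short hostname is fewer than eight characters, ${host:(-8):1} expands to an empty string, which will also fall through to the *) branch. A small debugging sketch (purely illustrative):
# Show exactly what the case statement will see.
host=$(hostname -s)
env=${host:(-8):1}
printf 'host=%q  env=%q  (length of env: %s)\n' "$host" "$env" "${#env}"
printf '%s' "$env" | od -c    # reveals stray or non-printing characters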

BASH- trouble pinging from text file lines

I have a text file with around 3 million URLs of sites I want to block.
I'm trying to ping them one by one (yes, I know it is going to take some time).
I have a script (yes, I am a bit slow in BASH) which reads the lines one at a time from the text file.
Obviously I cannot post the text file here. The text file was created with Python some time ago.
The problem is that ping returns "unknown host" for every entry. If I make a smaller file by hand using the same entries, the script works. I thought it might be a whitespace or end-of-line issue, so I tried addressing that in the script. What could the issue possibly be?
#!/bin/bash
while read line
do
    li=$(echo $line | tr -d '\n')
    li2=$(echo $li | tr -d ' ')
    if [ ${#line} -lt 2 ]
    then
        continue
    fi
    ping -c 2 -- $li2 >> /dev/null
    if [ $? -gt 0 ]
    then
        echo 'bad'
    else
        echo 'good'
    fi
done < 'temp_file.txt'
Does the file contain URLs or hostnames?
If it contains URLs you must extract the hostname from URLs before pinging:
hostname=$(echo "$li2"|cut -d/ -f3);
ping -c 2 -- "$hostname"
Ping works on hosts, not URLs, so if your file contains website URLs it will not work. Check that the file contains hostnames (for example www.google.com) or IP addresses, and not full website URLs. If you want to check actual URLs, use a tool like wget, plus something like grep/awk to catch errors such as 404. Last but not least, security-conscious people will sometimes block pings from the outside, so take note.
Check if the file contains Windows-style \r\n line endings: head file | od -c
If so, fix it with: dos2unix filename
I wouldn't use ping for this. It can easily be blocked, and it's not the best way to check either an IP address or whether a server presents web pages.
If you just want to find the corresponding IP, use host:
$ host www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 209.85.149.106
www.l.google.com has address 209.85.149.147
www.l.google.com has address 209.85.149.99
www.l.google.com has address 209.85.149.103
www.l.google.com has address 209.85.149.104
www.l.google.com has address 209.85.149.105
As you see, you get all the IPs registered to a host. (Note that this requires you to parse the hostname from your URLs!)
If you want to see if a URL points at a web server, use wget:
wget --spider $url
The --spider flag makes wget not save the page, just check that it exists. You could look at the return code, or add the -S flag (which prints the HTTP headers returned).
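Putting these suggestions together, a rough rework of the original loop might look like this; it assumes temp_file.txt holds one URL per line with a scheme (http://... or https://...):
#!/bin/bash
# Sketch: strip Windows CRs, pull the hostname out of each URL,
# and check it with host instead of ping.
while IFS= read -r line; do
    line=${line%$'\r'}                      # drop a trailing CR from \r\n line endings
    [ ${#line} -lt 2 ] && continue
    hostname=$(echo "$line" | cut -d/ -f3)  # host part of scheme://host/path
    if host "$hostname" >/dev/null 2>&1; then
        echo "good $hostname"
    else
        echo "bad $hostname"
    fi
done < temp_file.txt
Swapping the host call for wget -q --spider "$line" would instead check that the URL actually answers over HTTP.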
