How can I query in a bash script which of the HAProxy load balancer hosts is currently the primary and which is the secondary?

I want to be able to determine, from a bash script running on both load balancers, which LB is the primary and which is the secondary.
Background: for the renewal of a Let's Encrypt certificate on an HAProxy load balancer pair, where the service IP is normally bound to the master, the script needs to determine which server is the master (has the service IP bound) and which is only a secondary backup (no web access via ports 80 and 443).

If you follow this guide by Sebastian Schrader (https://serverfault.com/a/871783), the following procedure will determine which node is the master and which is the backup:
IFS="/"
# /org/keepalived/Vrrp1/Instance/ens192/151/IPv4
vrrpInstance=$(/usr/bin/busctl tree | grep keepalived | grep IPv4)
set $vrrpInstance
#151
vrrpRouterID=$7
# (us) 2 "Master" or "Backup"
vrrpProp=$(/usr/bin/busctl get-property org.keepalived.Vrrp1 /org/keepalived/Vrrp1/Instance/ens192/"${vrrpRouterID}"/IPv4 org.keepalived.Vrrp1.Instance State)
# Master or Backup
vrrpStatus=$(echo ${vrrpProp} | cut -c 9-14)
unset IFS
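Building on that snippet, here is a hedged sketch that wraps the check into a reusable function. Assumptions: keepalived is running with its D-Bus interface enabled, busctl is available, and the VRRP instance uses IPv4 as in the example (the interface name is discovered from the object path rather than hardcoded).

#!/bin/bash
# Print "Master" or "Backup" for the local keepalived VRRP instance.
vrrp_state() {
    local path
    # e.g. /org/keepalived/Vrrp1/Instance/ens192/151/IPv4
    path=$(/usr/bin/busctl tree org.keepalived.Vrrp1 \
        | grep -o '/org/keepalived/Vrrp1/Instance/[^ ]*/IPv4' | head -n1)
    [ -n "$path" ] || { echo "no VRRP instance found" >&2; return 1; }
    # Property value looks like: (us) 2 "Master"
    /usr/bin/busctl get-property org.keepalived.Vrrp1 "$path" \
        org.keepalived.Vrrp1.Instance State | sed 's/.*"\(.*\)"/\1/'
}

if [ "$(vrrp_state)" = "Master" ]; then
    echo "This node is the primary (service IP bound here)"
else
    echo "This node is the backup"
fi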

Related

Retrieve the instance ID after deploying at Vultr with the vultr-cli?

I'm scripting a process with the vultr-cli. I need to deploy a new VPS at Vultr, perform some intermediate steps, then destroy the VPS in a bash script. How do I retrieve instance values in the script after the deployment? Is there a way to capture the information as JSON or set environment variables directly?
So far, my script looks like:
#!/bin/bash
## Create an instance. How do I retrieve the instance ID
## for use later in the script?
vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr
## With the instance ID, retrieve the main IPv4 address.
## Note: I only want the main IP, but there may be multiple.
vultr-cli instance ipv4 list $INSTANCE_ID
## Perform some tasks here with the IPv4. Assuming I created
## the instance with my SSH key, for example:
scp root@$INSTANCE_IPv4:/var/log/logfile.log ./logfile.log
## Destroy the instance.
vultr-cli instance delete $INSTANCE_ID
The vultr-cli will output the response it gets back from the API like this:
☁ ~ vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr
INSTANCE INFO
ID 87e98eb0-a189-4519-8b4e-fc46bb0a5331
Os Ubuntu 20.04 x64
RAM 1024
DISK 0
MAIN IP 0.0.0.0
VCPU COUNT 1
REGION ewr
DATE CREATED 2021-01-23T17:39:45+00:00
STATUS pending
ALLOWED BANDWIDTH 1000
NETMASK V4
GATEWAY V4 0.0.0.0
POWER STATUS running
SERVER STATE none
PLAN vc2-1c-1gb
LABEL
INTERNAL IP
KVM URL
TAG
OsID 387
AppID 0
FIREWALL GROUP ID
V6 MAIN IP
V6 NETWORK
V6 NETWORK SIZE 0
FEATURES []
So you would want to capture the ID and its value from the response. This is a crude example, but it does work:
vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr | grep -m1 -w "ID" | sed 's/ID//g' | tr -d " \t\n\r"
We grep for the first line that contains ID (which will always be the first line), then remove the word ID and strip any whitespace and newlines.
You will want to do something similar with the ipv4 list call you have.
Again, there may be a better way to write the grep/sed/tr portion, but it will work for your needs. Hopefully this helps!
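Putting it together, a hedged sketch of the full flow might look like this. The parsing is based on the output shown above; column layouts can differ between vultr-cli versions, so adjust the grep/awk as needed.

#!/bin/bash
set -euo pipefail

# Create the instance and capture the ID from the first "ID" line.
INSTANCE_ID=$(vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr \
    | grep -m1 -w "ID" | sed 's/ID//g' | tr -d " \t\n\r")
echo "Created instance ${INSTANCE_ID}"

# The MAIN IP starts out as 0.0.0.0, so poll the ipv4 list until a real
# address shows up (assumes the address is in the first column).
INSTANCE_IPV4=""
for _ in $(seq 1 30); do
    INSTANCE_IPV4=$(vultr-cli instance ipv4 list "${INSTANCE_ID}" \
        | awk '$1 ~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ && $1 != "0.0.0.0" {print $1; exit}')
    if [ -n "${INSTANCE_IPV4}" ]; then
        break
    fi
    sleep 10
done
echo "Main IP: ${INSTANCE_IPV4}"

# Intermediate work would go here, e.g.:
# scp root@"${INSTANCE_IPV4}":/var/log/logfile.log ./logfile.log

# Destroy the instance when done.
vultr-cli instance delete "${INSTANCE_ID}"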

How to allow external connection to elastic search on GCE instance

I'm setting up an Elasticsearch cluster on GCE. Eventually it will be used from within the application, which is on the same network, but for now, while developing, I want to have access from my dev environment. In the long term I will also need to access Kibana from an external network, so I need to know how to allow that. For now I'm not taking care of any security considerations.
The cluster (currently one node) runs on a GCE instance with CentOS 7.
It has an external IP enabled (the ephemeral option).
I can access 9200 from within the instance:
es-instance-1 ~]$ curl -XGET 'localhost:9200/?pretty'
But not via the external IP, which shows 9200 as closed when I test it:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
localhost, as expected, is fine:
es-instance-1 ~]$ nmap -v -A localhost | grep 9200
Discovered open port 9200/tcp on 127.0.0.1
I saw a similar question here, and following it I went to create a firewall rule. I first added a tag 'elastic-cluster' to the instance and then created a rule:
$ gcloud compute instances describe es-instance-1
tags:
items:
- elastic-cluster
$ gcloud compute firewall-rules create allow-elasticsearch --allow TCP:9200 --target-tags elastic-cluster
Here it's listed as created:
gcloud compute firewall-rules list
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
allow-elasticsearch default 0.0.0.0/0 tcp:9200 elastic-cluster
So now there is a rule that is supposed to allow 9200, but the port is still not reachable:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
What am I missing?
Thanks.
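No answer is recorded for this question, but a common culprit with exactly this symptom (9200 open on 127.0.0.1 and closed on the external address even with the firewall rule in place) is that Elasticsearch itself is only bound to localhost, which is its default. A hedged sketch of what one would typically check on the instance (the config path assumes a package install of Elasticsearch on CentOS 7):

# Show which address Elasticsearch is actually listening on.
sudo ss -tlnp | grep 9200

# If it only shows 127.0.0.1:9200, bind it to all interfaces (fine for dev;
# rely on the GCE firewall rule to restrict access).
echo 'network.host: 0.0.0.0' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo systemctl restart elasticsearch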

Oracle RAC VIP and SCAN IPs

I've read the Oracle RAC documentation a couple of times, but SCAN and VIP are still confusing me. Can someone help me understand how this needs to be configured technically, so that I can explain it to my network admin?
For the VIPs in Oracle RAC: should each VIP be bound to a node, or does it just require a DNS A record, without allocating it to node1 or node2, plus an entry in the hosts file?
I know that while performing the Grid cluster installation Oracle will bind the VIP automatically, but should this DNS record be assigned to one of the nodes, or should it be free and unassigned?
The Oracle SCAN IPs need to be created as DNS records; is this an A record resolving to 3 IPs with reverse lookup, served round robin, and should it be kept out of the hosts file?
I need to explain this to my network admin to add it on the DNS server.
First, VIPs:
A VIP is a Virtual IP address, and should be defined in DNS and not assigned to any host or interface. When you install GRID/ASM home, you'll specify the VIP names that were assigned in DNS. When Oracle Clusterware starts up, it will assign a VIP to each node in the cluster. The idea is, if a node goes down (crashes), clusterware can immediately re-assign that VIP to a new (surviving) node. This way, you avoid TCP timeout issues.
Next, SCAN:
A SCAN (Single Client Access Name) is a special case of VIP. The SCAN should also be defined in DNS, and not assigned to any host or interface. There should be three IPs associated with the SCAN name in DNS, and the DNS entry should be defined so that one of the three IPs is returned each time DNS is queried, in a round robin fashion.
At clusterware startup time, each of the three VIPs that make up the SCAN will be assigned to a different node in the cluster. (Except in the special case of a two node cluster, where one of the nodes will have 2 SCAN VIPs assigned to it.) The point of the SCAN is that no matter how many nodes are added to or removed from the cluster, the Net Service Name definitions in your tnsnames.ora (or LDAP equivalent) never need to change, because they all refer to the SCAN, which doesn't change, regardless of how many node additions or drops are made to the cluster.
For example, in the three node cluster, you may have:
Physical and virtual hostnames/IPs assigned as follows:
Hostname   Physical IP   Virtual hostname   Virtual IP
rac1       10.1.1.1      rac1-vip           10.1.1.4
rac2       10.1.1.2      rac2-vip           10.1.1.5
rac3       10.1.1.3      rac3-vip           10.1.1.6
Additionally, you may have the SCAN defined as:
rac-scan, with three IPs: 10.1.1.7, 10.1.1.8, 10.1.1.9. Again, the DNS entry would be configured so those IPs are served up in round robin order.
Note that the SCAN VIPs, Host VIPs, and the Physical IPs are all in the same subnet.
Finally, though you didn't ask about it, to complete the picture, you'd also need one private, non-routable IP assigned per host, and that IP would be associated with the private interconnect. So, you may have something like:
rac1-priv 172.16.1.1
rac2-priv 172.16.1.2
rac3-priv 172.16.1.3
Note that the '-priv' addresses should not be in DNS, only in the /etc/hosts file of each host in the RAC cluster. (They are private, non-routable, and only clusterware will ever know about or use those addresses, so adding to DNS doesn't make sense.)
Note also, that '-priv' and physical IP/hostname definitions should go in /etc/hosts, and the physical IPs and VIPs should be in DNS. So, physical IPs in both DNS and /etc/hosts, VIPs only in DNS, '-priv' addresses only in /etc/hosts.
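As a concrete sketch for the three-node example above (names and addresses taken from the tables; the zone-file syntax is only illustrative), each node's /etc/hosts would carry the physical and '-priv' entries:

10.1.1.1     rac1
10.1.1.2     rac2
10.1.1.3     rac3
172.16.1.1   rac1-priv
172.16.1.2   rac2-priv
172.16.1.3   rac3-priv

while DNS would carry the physical, VIP, and SCAN records, with rac-scan resolving round-robin across its three addresses:

rac1       IN A    10.1.1.1
rac2       IN A    10.1.1.2
rac3       IN A    10.1.1.3
rac1-vip   IN A    10.1.1.4
rac2-vip   IN A    10.1.1.5
rac3-vip   IN A    10.1.1.6
rac-scan   IN A    10.1.1.7
rac-scan   IN A    10.1.1.8
rac-scan   IN A    10.1.1.9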
Not entirely sure what you mean by this; I have each VIP address created in DNS as an A record assigned to the host, and I also record them in the hosts file as well.
In answer to 2, you are correct: the SCAN IPs should not be in the hosts file. And yes, 3 "A" records with reverse lookup will be enough (at least that's what has worked for me).
These are my iptables entries:
# Oracle ports
# Allow access from other Oracle RAC hosts
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow UDP from the other RAC hosts
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow multicast
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
-A INPUT -s 224.0.0.0/24 -j ACCEPT
-A INPUT -s 230.0.1.0/24 -j ACCEPT
I also needed to get our systems admin to grant permissions at the firewall level to allow my nodes, their VIPs and the SCAN IPs to connect via port 1521.
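A hedged sketch of what such a rule could look like, in the same style as the entries above (port 1521 is the listener port mentioned; the source range is an assumption and would cover whatever clients need SQL*Net access):

-A INPUT -m state --state NEW -p tcp --dport 1521 -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT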
Hope this helps.

Remove EC2's entry from resolv.conf

I have private DNS servers and I want to write them to resolv.conf with resolvconf on Debian on AWS/EC2.
There is a problem in the order of nameserver entries.
In my resolv.conf, EC2's default nameserver is always written on the first line, like so:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 172.16.0.23
nameserver 10.0.1.185
nameserver 10.100.0.130
search ap-northeast-1.compute.internal
172.16.0.23 is EC2's default nameserver; the others are mine.
How can I remove the EC2 entry? Or, how can I move the EC2 entry to third place?
Here I have an interface file:
% ls -l /etc/resolvconf/run/interface/
-rw-r--r-- 1 root root 62 Jun 7 23:35 eth0
It seems that the eth0 file is automatically generated by DHCP, so I can't remove it permanently.
% cat /etc/resolvconf/run/interface/eth0
search ap-northeast-1.compute.internal
nameserver 172.16.0.23
My private DNS entries are here:
% cat /etc/resolvconf/resolv.conf.d/base
nameserver 10.0.1.185
nameserver 10.100.0.130
Please help.
I think I just solved a very similar problem. I was bothered by Amazon EC2's crappy internal DNS servers so I wanted to run a local caching dnsmasq daemon and use that in /etc/resolv.conf. At first I just did echo nameserver 127.0.0.1 > /etc/resolv.conf but then I realized that my change would eventually be overwritten by the DHCP client after a reboot or DHCP lease refresh.
What I've now done instead is to edit /etc/dhcp3/dhclient.conf and uncomment the line prepend domain-name-servers 127.0.0.1;. You should be able to use the prepend directive in a very similar way.
Update: These instructions are based on Ubuntu Linux, but I imagine the general concept applies on other systems as well; even other DHCP clients must have similar configuration options.
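In the asker's setup that would translate to a line like the following in /etc/dhcp/dhclient.conf (the /etc/dhcp3 path above is the older Debian/Ubuntu location; the nameserver addresses are the ones from the question):

prepend domain-name-servers 10.0.1.185, 10.100.0.130;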
I'm approaching this problem from the other direction (wanting the internal nameservers), but much of what I've learned may be of interest.
There are several options to control name resolution in the VPC management console.
VPC -> DHCP option sets -> Create dhcp option set
You can specify your own name servers there.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html
Be sure to attach this DHCP option set to your VPC to get it to take effect.
Alternatively (I found this out by mistake), local DNS servers are not set if the following settings are disabled in the VPC settings:
DnsHostnames
and
DnsSupport
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html
Settings can also be overridden locally (which you'll notice if you move instances between VPCs) in /etc/dhcp/dhclient.conf.
The following line might be of interest:
prepend domain-name-servers
Changes, of course, take effect on dhclient start.
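For completeness, a hedged AWS CLI sketch of the DHCP-options-set approach described above (the IDs are placeholders and the nameserver addresses are the ones from the question):

# Create a DHCP options set that hands out the private DNS servers.
aws ec2 create-dhcp-options \
    --dhcp-configurations "Key=domain-name-servers,Values=10.0.1.185,10.100.0.130"

# Associate it with the VPC, using the DhcpOptionsId returned above.
aws ec2 associate-dhcp-options --dhcp-options-id dopt-xxxxxxxx --vpc-id vpc-xxxxxxxx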
How do I assign a static DNS server to a private Amazon EC2 instance running Ubuntu, RHEL, or Amazon Linux?
Short Description
Default behavior for an EC2 instance associated with a virtual private cloud (VPC) is to request a DNS server address at startup using the Dynamic Host Configuration Protocol (DHCP). The VPC responds to DHCP requests with the address of an internal DNS server. The DNS server addresses returned in the DHCP response are written to the local /etc/resolv.conf file and are used for DNS name resolution requests. Any manual modifications to the resolv.conf file are overwritten when the instance is restarted.
Resolution
To configure an EC2 instance running Linux to use static DNS server entries, use a text editor such as vim to edit the file /etc/dhcp/dhclient.conf and add the following line to the end of the file:
supersede domain-name-servers xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx;
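With the DNS servers from the original question, that line would look something like this (a sketch; unlike prepend, supersede replaces the DHCP-supplied servers entirely, so the EC2 resolver drops out of resolv.conf altogether):

supersede domain-name-servers 10.0.1.185, 10.100.0.130;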
Ubuntu - dhclient.conf - DHCP client configuration file 
The supersede statement
supersede [ option declaration ] ;
If for some option the client should always use a locally-configured value or values
rather than whatever is supplied by the server, these values can be defined in the
supersede statement.
The prepend statement
prepend [ option declaration ] ;
If for some set of options the client should use a value you supply, and then use the
values supplied by the server, if any, these values can be defined in the prepend
statement. The prepend statement can only be used for options which allow more than one
value to be given. This restriction is not enforced - if you ignore it, the behaviour
will be unpredictable.
The append statement
append [ option declaration ] ;
If for some set of options the client should first use the values supplied by the server,
if any, and then use values you supply, these values can be defined in the append
statement. The append statement can only be used for options which allow more than one
value to be given. This restriction is not enforced - if you ignore it, the behaviour
will be unpredictable.
Here someone came up with a solution that basically replaces the file on boot using rc.local:
https://forums.aws.amazon.com/thread.jspa?threadID=74497
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 to say PEERDNS=no
Create a file called /etc/resolv.backup with what you want
Add the following 2 lines to /etc/rc.local:
rm -f /etc/resolv.conf
cp /etc/resolv.backup /etc/resolv.conf
This is what we are doing for our servers in our environment:
interface "eth0"
{
prepend domain-name-servers 10.x.x.x;
supersede host-name "{Hostname}";
append domain-search "domain";
supersede domain-name "DOMAIN";
}
Hope this helps.
The following worked on Debian stretch on AWS EC2.
Just create /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate:
#!/bin/sh
# Override dhclient-script's make_resolv_conf() with a no-op so the DHCP
# client never rewrites /etc/resolv.conf.
make_resolv_conf(){
    :
}
Then you can modify /etc/resolv.conf and it will persist your changes across restarts.
Set it up in crontab as:
@reboot cp -r /home/.../resolv.conf /etc/resolv.conf

What gems do you recommend to use for this kind of automation?

I have to create a script to manage a maintenance-page server for my hosting company.
I will need to build a CLI interface that would act like this (example scenario):
(Here, let's suppose that mcli is the name of the script and 1.1.1.1 is the original server address that hosts the website, www.exemple.com.)
Here I just create the loopback interface on the maintenance server with the original IP address and create the nginx site-specific config file in sites-enabled:
$ mcli register www.exemple.com 1.1.1.1
[DEBUG] Adding IP 1.1.1.1 to new loopback interface lo:001001001001
[WARNING] No root directory specified, setting default maintenance page.
[DEBUG] Registering www.exemple.com maintenance page and reloading Nginx: OK
Then, when I want to enable the maintenance page and completely shut down the website:
$ mcli maintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Setting new route to 1.1.1.1 to maintenance server: OK
[DEBUG] Writing configuration: Ok
Then removing the maintenance page:
$ mcli nomaintenance www.exemple.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Removing route to 1.1.1.1: Ok
[DEBUG] Writing configuration: Ok
And I would need a function to see the current state of the websites:
$ mcli list
+------------------+-----------------+------------------+
| Site Name | Server I.P | Maintenance mode |
+------------------+-----------------+------------------+
| www.example.com | 1.1.1.1 | Enabled |
| www.example.org | 1.1.1.2 | Disabled |
+------------------+-----------------+------------------+
$ mcli show www.example.org
Site Name: www.example.org
Server I.P: 1.1.1.1
Maintenance Mode: Enabled
Root Directory : /var/www/maintenance/default/
But I have never done this kind of scripting with Ruby. What gems do you recommend for this kind of thing? For command-line parsing? Column/colorized output? SSH connections (needed to connect to Cisco routers)?
Do you recommend using a local database (SQLite) to store metadata (state changes, current states), or computing it on the fly by analyzing the nginx/interfaces configuration files and using syslog to monitor changes made with this script?
This script will be used first for a massive physical datacenter migration, and afterwards for standard scheduled-downtime use.
Thank you
First of all, I'd recommend you get a copy of Build awesome command-line applications in Ruby.
That said, you might want to check:
GLI command line parsing like git
OptionParser command line parsing
Personally, I'd go for the SQLite approach for storing data, but I'm biased (having a strong SQL background).
Thor is a good gem for handling CLI options. It allows this type of organization in your script:
class Maintenance < Thor
  desc "maintenance", "put up maintenance page"
  method_option :switch, :aliases => '-s', :type => :string
  # The method name is the name of the task that would be run => mcli maintenance
  def maintenance
    # do stuff
  end

  no_tasks do
    # methods that you don't want cli tasks for go here
  end
end
Maintenance.start
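Invocation from the shell would then look roughly like this (a sketch, assuming the script above is installed as mcli; the help task is provided by Thor automatically):

$ mcli maintenance
$ mcli maintenance -s somevalue   # using the :switch option defined above
$ mcli help maintenance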
I don't really have any good suggestions for column/colorized output though.
I definitely recommend using some kind of database to store state, though. Maybe not SQLite; I would probably opt for a Redis database that stores key/value pairs with the information you are looking for.
We have a similar task. I use the following architecture:
A small application (written in C) that generates the config file.
Add a new switch, update_clusters, to the nginx init.d script; it will reload nginx only if the config file has changed (see the invocation sketch at the end of this answer):
update_clusters() {
${CONF_GEN} --outfile=/tmp/nginx_clusters.conf
RETVAL=$?
if [[ "$RETVAL" != "0" ]]; then
return 5
fi
if ! diff ${CLUSTER_CONF_FILE} /tmp/nginx_clusters.conf > /dev/null; then
echo "Cluster configuration changed. Reload service"
mv -f /tmp/nginx_clusters.conf ${CLUSTER_CONF_FILE}
reload
fi
}
A set of bash scripts to add records to the database.
A web console to add/modify/delete records in the database (ExtJS + nginx module).
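The update_clusters switch mentioned above would then be invoked like this (a sketch, assuming the init script is installed as /etc/init.d/nginx):

# Regenerate the cluster config and reload nginx only if it changed.
service nginx update_clusters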
