I'm scripting a process with the vultr-cli. I need to deploy a new VPS at Vultr, perform some intermediate steps, then destroy the VPS in a bash script. How do I retrieve instance values in the script after the deployment? Is there a way to capture the information as JSON or set environment variables directly?
So far, my script looks like:
#!/bin/bash
## Create an instance. How do I retrieve the instance ID
## for use later in the script?
vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr
## With the instance ID, retrieve the main IPv4 address.
## Note: I only want the main IP, but there may be multiple.
vultr-cli instance ipv4 list $INSTANCE_ID
## Perform some tasks here with the IPv4. Assuming I created
## the instance with my SSH key, for example:
scp root@$INSTANCE_IPv4:/var/log/logfile.log ./logfile.log
## Destroy the instance.
vultr-cli instance delete $INSTANCE_ID
The vultr-cli will output the response it gets back from the API like this:
☁ ~ vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr
INSTANCE INFO
ID 87e98eb0-a189-4519-8b4e-fc46bb0a5331
Os Ubuntu 20.04 x64
RAM 1024
DISK 0
MAIN IP 0.0.0.0
VCPU COUNT 1
REGION ewr
DATE CREATED 2021-01-23T17:39:45+00:00
STATUS pending
ALLOWED BANDWIDTH 1000
NETMASK V4
GATEWAY V4 0.0.0.0
POWER STATUS running
SERVER STATE none
PLAN vc2-1c-1gb
LABEL
INTERNAL IP
KVM URL
TAG
OsID 387
AppID 0
FIREWALL GROUP ID
V6 MAIN IP
V6 NETWORK
V6 NETWORK SIZE 0
FEATURES []
So you would want to capture the ID value from that response. This is a crude example, but it does work:
vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr | grep -m1 -w "ID" | sed 's/ID//g' | tr -d " \t\n\r"
We grep for the first line containing ID (which will always be the first line), then strip the word ID and remove any whitespace and newlines.
You will want to do something similar with the ipv4 list call you have.
Again, there may be a better way to write the grep/sed/tr portion, but it will work for your needs. Hopefully this helps!
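Putting it together in your script, you could capture the values into variables like this (a rough sketch; the awk step assumes the address is in the first column of the first data row of the ipv4 list output, so adjust it if your CLI version prints a different layout):
#!/bin/bash
# Create the instance and capture its ID from the first "ID" line of the output.
INSTANCE_ID=$(vultr-cli instance create --plan vc2-1c-1gb --os 387 --region ewr \
  | grep -m1 -w "ID" | sed 's/ID//g' | tr -d " \t\n\r")

# Give the instance time to finish provisioning (or poll until it is active).
sleep 60

# Grab the main IPv4 address (assumes first column, first data row).
INSTANCE_IPv4=$(vultr-cli instance ipv4 list "$INSTANCE_ID" | awk 'NR==2 {print $1; exit}')

scp root@"$INSTANCE_IPv4":/var/log/logfile.log ./logfile.log

# Destroy the instance.
vultr-cli instance delete "$INSTANCE_ID"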
I want to be able to determine which LB is primary and which is secondary from a bash script running on both load balancers.
Background: for the renewal of a Let's Encrypt certificate on an HAProxy load balancer pair, where the service IP is usually bound to the master, it is necessary to determine which server is the master (has the service IP bound) and which is only a secondary backup (without web access via ports 80 and 443).
If you follow this guide by Sebastian Schrader (https://serverfault.com/a/871783), the following procedure will help to determine the master and the backup:
IFS="/"
# /org/keepalived/Vrrp1/Instance/ens192/151/IPv4
vrrpInstance=$(/usr/bin/busctl tree | grep keepalived | grep IPv4)
set $vrrpInstance
#151
vrrpRouterID=$7
# (us) 2 "Master" or "Backup"
vrrpProp=$(/usr/bin/busctl get-property org.keepalived.Vrrp1 /org/keepalived/Vrrp1/Instance/ens192/"${vrrpRouterID}"/IPv4 org.keepalived.Vrrp1.Instance State)
# Master or Backup
vrrpStatus=$(echo ${vrrpProp} | cut -c 9-14)
unset IFS
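With that status in hand, you can gate whatever needs to run on the master only. A minimal sketch (the certbot command is just an illustration of the renewal step, not part of the guide above):
if [ "${vrrpStatus}" = "Master" ]; then
    # Only the node holding the service IP can answer the HTTP-01 challenge on ports 80/443.
    certbot renew --quiet
else
    echo "This node is the VRRP backup; skipping certificate renewal."
fi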
We have a Nagios server running on Linux, and one of the monitored hosts is also running Linux.
When I manually run the command to get the swap space information over SNMP I get output, but it is not reflected on the dashboard.
Can anybody help me?
For your reference, please find the output from manually running the command.
check_snmp_swap.pl -H IP Address -C public -m -w 80 -c 90
Swap Space: 0%used(26MB/95998MB) /data: 0%used(188MB/129704MB) Real Memory: 16%used(10263MB/64444MB) /: 62%used(30070MB/48432MB) Memory Buffers: 0%used(239MB/64444MB) (<80%) : OK
On the dashboard, however, I can see the status of CPU and RAM but not the swap space.
Check your service definition for check_snmp_swap. Make sure that the service is registered, meaning you set register 1 in the service definition.
For example:
define service{
    host_name               check_snmp_swap
    service_description     check-swap
    check_command           check_snmp_swap!public!80!90
    max_check_attempts      5
    check_interval          5
    retry_interval          3
    check_period            24x7
    notification_interval   30
    notification_period     24x7
    notification_options    w,c,r
    contact_groups          linux-admins
    register                1
}
Also check the command definition for check_snmp_swap. Make sure that the correct community string gets passed into the command -- in this case, public.
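For reference, a command definition along these lines would pass the community string and thresholds through to the plugin (the plugin path and argument mapping below are assumptions based on the manual command you ran, not your actual configuration):
define command{
    command_name    check_snmp_swap
    command_line    $USER1$/check_snmp_swap.pl -H $HOSTADDRESS$ -C $ARG1$ -m -w $ARG2$ -c $ARG3$
}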
EDIT:
From the configuration information you posted in the comments, I think you have a bit of confusion regarding service definitions and service template definitions.
It looks like you posted a template, which, as a template, really should have its register value set to 0 to indicate that it is a template.
Now a real service definition may inherit some settings from a service template. The purpose of this is to save you from having to re-enter the same information over and over again when you create service definitions.
You can override the settings inherited from the service template by explicitly defining those settings in the service definition.
You should create a service definition that looks something like this:
define service{
    host_name               check_snmp_swap
    use                     generic-service
    service_description     check-swap
    check_command           check_snmp_swap
    max_check_attempts      5
    check_interval          10
    retry_interval          2
    check_period            24x7
    notification_interval   30
    notification_period     24x7
    notification_options    w,u,c,r
    contact_groups          admins
    register                1
}
Then restart your nagios service:
service nagios restart
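If the check still does not show up after the restart, run Nagios' configuration check so any definition errors are flagged up front (the configuration file path is an assumption; adjust it for your install):
nagios -v /etc/nagios/nagios.cfg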
I've read the Oracle RAC documentation a couple of times, but SCAN and VIP still confuse me. Can someone help me understand how this needs to be configured technically, so that I can explain it to my network admin?
For the VIPs in Oracle RAC, should each VIP be bound to a node, or does it just require a DNS A record (without allocating it to node1 or node2) and an entry in the hosts file?
I know while performing Grid cluster installation Oracle will bind the VIP automatically, but should this be part of DNS assigned to one of the nodes or should it be free and unassigned?
The Oracle SCAN IPs need to be created as DNS records; should this be an A record resolving to 3 IPs with reverse lookup, or served round robin, and should it be kept out of the hosts file?
I need to explain this to my network admin to add it on the DNS server.
First, VIPs:
A VIP is a Virtual IP address, and should be defined in DNS and not assigned to any host or interface. When you install GRID/ASM home, you'll specify the VIP names that were assigned in DNS. When Oracle Clusterware starts up, it will assign a VIP to each node in the cluster. The idea is, if a node goes down (crashes), clusterware can immediately re-assign that VIP to a new (surviving) node. This way, you avoid TCP timeout issues.
Next, SCAN:
A SCAN (Single Client Access Name) is a special case of VIP. The SCAN should also be defined in DNS, and not assigned to any host or interface. There should be three IPs associated with the SCAN name in DNS, and the DNS entry should be defined so that one of the three IPs is returned each time DNS is queried, in a round robin fashion.
At clusterware startup time, each of the three VIPs that make up the SCAN will be assigned to a different node in the cluster. (Except in the special case of a two node cluster, where one of the nodes will have two SCAN VIPs assigned to it.) The point of the SCAN is that no matter how many nodes are added to or removed from the cluster, the Net Service Name definitions in your tnsnames.ora (or LDAP equivalent) never need to change, because they all refer to the SCAN, which stays the same regardless of node additions or drops.
For example, in a three node cluster, you may have:
Physical and virtual hostnames/IPs assigned as follows:
Hostname   Physical IP   Virtual hostname   Virtual IP
rac1       10.1.1.1      rac1-vip           10.1.1.4
rac2       10.1.1.2      rac2-vip           10.1.1.5
rac3       10.1.1.3      rac3-vip           10.1.1.6
Additionally, you may have the SCAN defined as:
rac-scan with three IPs: 10.1.1.7, 10.1.1.8, 10.1.1.9. Again, the DNS entry would be configured so those IPs are served up in round robin order.
Note that the SCAN VIPs, Host VIPs, and the Physical IPs are all in the same subnet.
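A quick way to show your network admin that the round robin behaviour is working is to query the SCAN name a few times and watch the order of the returned addresses rotate (using the example name above; use the fully qualified name if needed):
for i in 1 2 3; do
    # Each query should return all three SCAN addresses,
    # with the order rotating between queries when round robin is in effect.
    dig +short rac-scan
    echo "---"
done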
Finally, though you didn't ask about it, to complete the picture, you'd also need one private, non-routable IP assigned per host, and that IP would be associated with the private interconnect. So, you may have something like:
rac1-priv 172.16.1.1
rac2-priv 172.16.1.2
rac3-priv 172.16.1.3
Note that the '-priv' addresses should not be in DNS, only in the /etc/hosts file of each host in the RAC cluster. (They are private, non-routable, and only clusterware will ever know about or use those addresses, so adding to DNS doesn't make sense.)
Note also, that '-priv' and physical IP/hostname definitions should go in /etc/hosts, and the physical IPs and VIPs should be in DNS. So, physical IPs in both DNS and /etc/hosts, VIPs only in DNS, '-priv' addresses only in /etc/hosts.
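So for the example cluster above, each node's /etc/hosts might contain something like this (a sketch using the sample addresses from this answer):
# Physical IPs (also defined in DNS)
10.1.1.1    rac1
10.1.1.2    rac2
10.1.1.3    rac3
# Private interconnect addresses (not in DNS)
172.16.1.1  rac1-priv
172.16.1.2  rac2-priv
172.16.1.3  rac3-priv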
In answer to 1, I'm not entirely sure what you mean here; I have each VIP address created in DNS as an A record assigned to its host, and I also record them in the hosts file.
In answer to 2, you are correct: the SCAN IPs should not be in the hosts file. And yes, 3 A records with reverse lookup will be enough (at least that's what has worked for me).
These are my iptables entries:
# Oracle ports
# Allow access from other Oracle RAC hosts
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow Multicast
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow multicast
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
-A INPUT -s 224.0.0.0/24 -j ACCEPT
-A INPUT -s 230.0.1.0/24 -j ACCEPT
I also needed to get our systems admin to grant permissions at the firewall level to allow my nodes, their VIPs, and the SCAN IPs to connect via port 1521.
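Expressed in the same iptables style as above, that would look something like the rule below (the source range is the first range from my entries; adjust it to cover your own nodes, VIPs, and SCAN IPs):
# Allow Oracle listener traffic on port 1521 from the RAC hosts, VIPs, and SCAN IPs
-A INPUT -m state --state NEW -p tcp --dport 1521 -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT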
Hope this helps.
I'm trying to convert SNMP v1 traps to v3. I've followed this discussion, but it's vague.
I've also looked here, but without success.
To be more clear: I have a CentOS 6 machine with net-snmp 5.5 on it. I need to generate v1 traps, receive them, convert them to v3, then forward them.
Regarding the first guide, this is what I managed so far:
Master:
snmpd -Lo --master=agentx --agentXSocket=tcp:192.168.58.64:42000 udp:1161
Listen:
snmpwalk -v3 -u snmpv3user -A snmpv3pass -a MD5 -l authnoPriv 192.168.58.64:1161
Later edit:
I have made some progress: I was able to run snmpd as master, connect snmptrapd to it as a subagent, and get the v1 trap mechanism working.
In order to get snmptrapd connected as a subagent to snmpd, I did the following:
###1 EDIT /etc/hosts.allow and add:
snmpd: $(your_ip)
snmptrapd: $(your_ip)
This is important because snmptrapd fails silently if it is rejected by TCP wrappers.
###2 EDIT /etc/snmp/snmpd.conf. Add this at the bottom of the other com2sec directives:
com2sec infwnet $(your_ip) YOUR-COMMUNITY
Under "# Second, map the security names into group names:", add these lines:
group MyROGroup v1 infwnet
group MyROGroup v2c infwnet
group MyROGroup usm infwnet
Add this view at the bottom of the other views:
view all included .1 80
Add this group access at the bottom of the other group access directives:
access MyROGroup "" any noauth exact all none none
Add this line as well:
master agentx
###3 TEST it with this:
snmpwalk -v1 -c YOUR_COMMUNITY $(your_ip) .
###4 CREATE THE FOLLOWING TRAP TEST EXAMPLE:
touch /usr/share/snmp/mibs/UCD-TRAP-TEST-MIB.txt
###5 COPY PASTE THE TEXT BELOW INTO IT:
UCD-TRAP-TEST-MIB DEFINITIONS ::= BEGIN
    IMPORTS ucdExperimental FROM UCD-SNMP-MIB;
    demotraps OBJECT IDENTIFIER ::= { ucdExperimental 990 }
    demoTrap TRAP-TYPE
        ENTERPRISE demotraps
        VARIABLES { sysLocation }
        DESCRIPTION "An example of an SMIv1 trap"
        ::= 17
END
###6 EDIT /etc/sysconfig/snmptrapd (not /etc/default/snmptrapd !!)
replace OPTIONS with this:
OPTIONS="-Lsd -m ALL -M /usr/share/snmp/mibs -p /var/run/snmptrapd.pid"
###7 TEST IT WITH
snmptrap -v 1 -c public $(your_ip) UCD-TRAP-TEST-MIB::demotraps "" 6 17 "" SNMPv2-MIB::sysLocation.0 s "Just here"
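With -Lsd in the OPTIONS above, snmptrapd logs received traps to syslog, so on CentOS 6 the test trap should turn up in the system log. A quick check (log path assumed, and the exact formatting depends on MIB translation):
tail -n 20 /var/log/messages | grep -i demotraps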
Now I just need to find a way to convert them to v3 and read/receive them from a remote snmpd.
I have private DNS servers and I want to write them to resolv.conf with resolvconf on Debian on AWS/EC2.
There is a problem in the order of nameserver entries.
In my resolv.conf, EC2's default nameserver is always written on the first line, like so:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 172.16.0.23
nameserver 10.0.1.185
nameserver 10.100.0.130
search ap-northeast-1.compute.internal
172.16.0.23 is EC2's default nameserver; the others are mine.
How can I remove the EC2 entry? Or, failing that, how can I move it to third place?
Here I have an interface file:
% ls -l /etc/resolvconf/run/interface/
-rw-r--r-- 1 root root 62 Jun 7 23:35 eth0
It seems that the eth0 file is automatically generated by DHCP, so I can't remove it permanently.
% cat /etc/resolvconf/run/interface/eth0
search ap-northeast-1.compute.internal
nameserver 172.16.0.23
My private DNS entry is here:
% cat /etc/resolvconf/resolv.conf.d/base
nameserver 10.0.1.185
nameserver 10.100.0.130
Please help.
I think I just solved a very similar problem. I was bothered by Amazon EC2's crappy internal DNS servers so I wanted to run a local caching dnsmasq daemon and use that in /etc/resolv.conf. At first I just did echo nameserver 127.0.0.1 > /etc/resolv.conf but then I realized that my change would eventually be overwritten by the DHCP client after a reboot or DHCP lease refresh.
What I've now done instead is to edit /etc/dhcp3/dhclient.conf and uncomment the line prepend domain-name-servers 127.0.0.1;. You should be able to use the prepend directive in a very similar way.
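In this case that would mean a line like the following in dhclient.conf, using the nameservers from your resolv.conf.d/base (this keeps the EC2 server in the list but pushes it below yours):
prepend domain-name-servers 10.0.1.185, 10.100.0.130;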
Update: These instructions are based on Ubuntu Linux, but I imagine the general concept applies on other systems as well; even other DHCP clients must have similar configuration options.
I'm approaching this problem from the other direction (wanting the internal nameservers), but much of what I've learned may be of interest.
There are several options to control name resolution in the VPC management console.
VPC -> DHCP option sets -> Create dhcp option set
You can specify your own name servers there.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_DHCP_Options.html
Be sure to attach this dhcp option set to your VPC to get it to take effect.
Alternatively (I found this out by mistake), the local DNS servers are not set if the following VPC settings are disabled: DnsHostnames and DnsSupport.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html
Settings can also be overridden locally in /etc/dhcp/dhclient.conf (which you'll notice if you move instances between VPCs).
The following line might be of interest:
prepend domain-name-servers
Changes, of course, take effect on dhclient start.
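One way to make the new settings take effect without a full reboot (interface name assumed; releasing the lease briefly interrupts networking, so be careful if you are connected over SSH):
# Release the current lease (this stops dhclient), then re-acquire one;
# resolv.conf is rewritten with your prepend/supersede options applied.
dhclient -r eth0
dhclient eth0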
How do I assign a static DNS server to a private Amazon EC2 instance running Ubuntu, RHEL, or Amazon Linux?
Short Description
Default behavior for an EC2 instance associated with a virtual private cloud (VPC) is to request a DNS server address at startup using the Dynamic Host Configuration Protocol (DHCP). The VPC responds to DHCP requests with the address of an internal DNS server. The DNS server addresses returned in the DHCP response are written to the local /etc/resolv.conf file and are used for DNS name resolution requests. Any manual modifications to the resolv.conf file are overwritten when the instance is restarted.
Resolution
To configure an EC2 instance running Linux to use static DNS server entries, use a text editor such as vim to edit the file /etc/dhcp/dhclient.conf and add the following line to the end of the file:
supersede domain-name-servers xxx.xxx.xxx.xxx, xxx.xxx.xxx.xxx;
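With the private DNS servers from the question, that line would read (illustrative values only):
supersede domain-name-servers 10.0.1.185, 10.100.0.130;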
Ubuntu - dhclient.conf - DHCP client configuration file
The supersede statement
supersede [ option declaration ] ;
If for some option the client should always use a locally-configured value or values
rather than whatever is supplied by the server, these values can be defined in the
supersede statement.
The prepend statement
prepend [ option declaration ] ;
If for some set of options the client should use a value you supply, and then use the
values supplied by the server, if any, these values can be defined in the prepend
statement. The prepend statement can only be used for options which allow more than one
value to be given. This restriction is not enforced - if you ignore it, the behaviour
will be unpredictable.
The append statement
append [ option declaration ] ;
If for some set of options the client should first use the values supplied by the server,
if any, and then use values you supply, these values can be defined in the append
statement. The append statement can only be used for options which allow more than one
value to be given. This restriction is not enforced - if you ignore it, the behaviour
will be unpredictable.
Here someone came up with a solution that basically replaces the file on boot using rc.local:
https://forums.aws.amazon.com/thread.jspa?threadID=74497
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 to say PEERDNS=no
Create a file called /etc/resolv.backup with what you want
Add the following 2 lines to /etc/rc.local:
rm -f /etc/resolv.conf
cp /etc/resolv.backup /etc/resolv.conf
This is what we are doing for our servers in the environment.
interface "eth0"
{
prepend domain-name-servers 10.x.x.x;
supersede host-name "{Hostname}";
append domain-search "domain";
supersede domain-name "DOMAIN";
}
Hope this helps.
The following worked on a Debian stretch instance on AWS EC2.
Just create /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate:
#!/bin/sh
# Override dhclient-script's make_resolv_conf() with a no-op so dhclient
# never rewrites /etc/resolv.conf.
make_resolv_conf(){
    :
}
Then you can modify /etc/resolv.conf and your changes will persist across restarts.
Set it up in crontab as:
@reboot cp -r /home/.../resolv.conf /etc/resolv.conf