Installing corosync and pacemaker on Amazon EC2 instances

I'm trying to set up an HA cluster across 2 Amazon EC2 instances. The OS on my instances is CentOS 7.
Hostnames:
master1.example.com
master2.example.com
Internal IPs:
10.0.0.x1
10.0.0.x2
Public IPs:
52.19.x.x
52.18.x.x
I'm following this tutorial:
http://jensd.be/156/linux/building-a-high-available-failover-cluster-with-pacemaker-corosync-pcs
[root@master1 centos]# pcs status nodes
Pacemaker Nodes:
Online: master1.example.com
Standby:
Offline: master2.example.com
while master2 shows the following:
[root@master2 centos]# pcs status nodes
Pacemaker Nodes:
Online: master2.example.com
Standby:
Offline: master1.example.com
But both nodes should be online.
What am I doing wrong?
Also, which IP should I choose as the virtual IP? The IPs are not in the same subnet.
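For reference, the setup steps I ran from the tutorial look roughly like this (a sketch; the cluster name and the hacluster password are placeholders):

# Run on one node after installing pcs/pacemaker/corosync and
# starting pcsd on both nodes:
pcs cluster auth master1.example.com master2.example.com -u hacluster -p hapass
pcs cluster setup --name ha_cluster master1.example.com master2.example.com
pcs cluster start --all
pcs status nodes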

Change your security group rules to allow inbound and outbound TCP and HTTPS traffic between all cluster nodes. That should do it. (Pretty old question, but it was unanswered, so someone might need it.)
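As a sketch (the security group ID is a placeholder; on CentOS 7, corosync's totem traffic runs over UDP on 5404-5405 and pcsd listens on TCP 2224, so UDP rules are usually needed as well), the rules could be added like this:

SG=sg-0123456789abcdef0   # placeholder: the group shared by both nodes
# corosync totem traffic (UDP 5404-5405) between cluster members:
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol udp --port 5404-5405 --source-group "$SG"
# pcsd (TCP 2224, the HTTPS port pcs uses) between cluster members:
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 2224 --source-group "$SG"

Using the group itself as the source keeps the rules valid even if the instances' private IPs change.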

Related

Curl into an API cluster secured by OpenVPN from another cluster's pod's container

I have created 2 Kubernetes clusters on AWS within a VPC.
1) A cluster dedicated to microservices (MI)
2) A cluster dedicated to Consul/Vault (Vault)
Both of those clusters can be reached through distinct classic public load balancers which expose the k8s APIs:
MI: https://api.k8s.domain.com
Vault: https://api.vault.domain.com
I also set up OpenVPN on both clusters, so you need to be logged in to the VPN to "curl" or "kubectl" into the clusters.
To do that I just added a new rule to the ELBs' security groups with the VPN's IP on port 443:
HTTPS 443 VPN's IP/32
At this point all works correctly, which means I'm able to successfully "kubectl" in both clusters.
The next thing I need to do is a curl from a pod's container within the Vault cluster into the MI cluster. Basically:
Vault cluster --------> curl https://api.k8s.domain.com --header "Authorization: Bearer $TOKEN" --------> MI cluster
The problem is that at the moment the clusters only allow traffic from the VPN's IP.
To solve that, I've added new rules to the security group of the MI cluster's load balancer.
Those new rules allow traffic from the private IPs of each of the Vault cluster's node and master instances.
But for some reason it does not work!
Please note that before adding restrictions to the ELB's security group, I made sure the communication worked with both clusters allowing all traffic (0.0.0.0/0).
So the question is: when I execute a curl command in a pod's container against another cluster's API within the same VPC, what is the source IP of the container that I need to add to the security group?
The NAT gateway's EIP for the Vault VPC had to be added to the ELB's security group to allow the traffic.
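In AWS CLI terms the fix looks roughly like this (the VPC and security group IDs are placeholders):

# Look up the Vault VPC's NAT gateway EIP:
NAT_EIP=$(aws ec2 describe-nat-gateways --filter Name=vpc-id,Values=vpc-0abc1234 \
  --query 'NatGateways[0].NatGatewayAddresses[0].PublicIp' --output text)
# Allow it through the MI ELB's security group on 443:
aws ec2 authorize-security-group-ingress --group-id sg-0elb45678 \
  --protocol tcp --port 443 --cidr "${NAT_EIP}/32"

Because the pods reach the ELB via its public address, the traffic egresses the Vault VPC through the NAT gateway, so its EIP is the source IP the ELB actually sees.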

Replication servers with the same ID in a Docker/OpenShift cluster

I'm having problems setting up replication in an OpenShift/Docker cluster.
In OpenShift, each OpenDJ server has two IPs: the service IP and the pod IP. So when I set up two OpenDJ services, there are two service IPs and two pod IPs.
I want to set up the replication by service IP, because the pod IPs are not reachable from other pods, but apparently OpenDJ thinks there are four replication servers, with each pair of addresses sharing the same ServerId.
Log snippet:
category=SYNC severity=ERROR msgID=org.opends.messages.replication.55 msg=In Replication server Replication Server 8989 31635: replication servers 172.30.244.127(service ip):8989 and 10.129.0.1:8989(pod ip) have the same ServerId : 11281
My question is: is it possible to build the replication server cluster by service IP only, not pod IP?
Thanks a lot.
PS: this issue seems similar to https://bugster.forgerock.org/jira/browse/OPENDJ-567
Wayne
For anyone having the same issue: configure your OpenDJ service as a headless service; that will solve the problem.
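A minimal sketch of such a headless service (the name, selector, and port are assumptions; adjust them to your deployment, and use oc instead of kubectl on OpenShift if you prefer):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: opendj
spec:
  clusterIP: None          # headless: DNS resolves straight to pod IPs
  selector:
    app: opendj
  ports:
  - name: replication
    port: 8989
EOF

With clusterIP set to None, the service DNS name resolves directly to the individual pod IPs, so each OpenDJ replica is addressed uniquely and the duplicate-ServerId conflict goes away.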

Difference in telnet of amazon ec2 instance using internal and public IP

I have a 4-node Hadoop cluster on EC2. We have configured Hortonworks Hadoop (HDP version 2.4) through Ambari.
I have opened all traffic between all four instances internally and for the office's external IP.
Whenever I telnet within the cluster using an internal IP:
telnet <internal_ip> 2181
It is able to connect to the specific port my service (ZooKeeper) is running on.
When I use the public IP of the same instance (its Elastic IP) instead of the internal IP, I am not able to telnet either from within the cluster or from my office IP:
telnet <elastic_ip> 2181
I have already configured the security group to allow all traffic. I am using Ubuntu 14.04. We are not using any firewall other than the AWS security group.
Please suggest how I can connect to this port using the Elastic IP/public IP of my instance.
[screenshot of the EC2 security group]
Do you use the default VPC?
If not, check whether the VPC has an Internet Gateway, check the route table (you need a route to the Internet Gateway), and check the network ACLs.
The route table and network ACLs are applied per subnet.
The default VPC is configured to allow outside traffic; a new VPC is not.
Also, is the Elastic IP associated with the right network interface? An Elastic IP is attached to a specific network interface of an instance.
EDIT: you can take a look at the AWS docs for a better explanation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
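To check those pieces from the CLI, a quick sketch (the VPC ID is a placeholder):

VPC=vpc-0abc1234
# Is an Internet Gateway attached to the VPC?
aws ec2 describe-internet-gateways --filters "Name=attachment.vpc-id,Values=$VPC"
# Do the route tables contain a 0.0.0.0/0 route pointing at that gateway?
aws ec2 describe-route-tables --filters "Name=vpc-id,Values=$VPC" \
  --query 'RouteTables[].Routes[].[DestinationCidrBlock,GatewayId]' --output text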

Do 2 different Availability Zones in EC2 act as a WAN or a LAN?

I am trying to establish a RabbitMQ cluster on EC2 across 2 availability zones.
In its docs, RabbitMQ recommends avoiding network partitions between cluster nodes.
Do 2 different Availability Zones in EC2 act as a WAN or a LAN?
Can anyone direct me to a link?
Thank you.
A RabbitMQ cluster is not recommended over a WAN (say, 2 regions), but the connection between availability zones can be viewed as a LAN.
We run a RabbitMQ cluster across different AZs with no issues.
AWS doesn't tell you how far apart the AZs are, but you can assume they are close enough to be viewed as a LAN. One of the characteristics of a LAN is that its coverage area is generally a few kilometers.
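If you want to sanity-check the link yourself, a simple latency test from a node in one AZ to a node in the other (the private IP is a placeholder) is usually enough:

# Inter-AZ round trips within a region are typically in the
# low single-digit milliseconds, i.e. LAN-like:
ping -c 10 10.0.1.15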

Elastic Search Clustering in the Cloud

I have 2 Linux VMs (both at the same datacenter of a cloud provider): Elastic1 and Elastic2 (where Elastic2 is a clone of Elastic1). Both have the same CentOS version, the same cluster name, and the same ES version; again, Elastic2 is a clone.
I use the service wrapper to start them both automatically at boot, and added each other's IP to their respective iptables rules, so now I can successfully ping between the nodes.
I thought this would be enough to allow ES to form a cluster, but to no avail.
Both Elastic1 and Elastic2 have 1 index each, named e1 and e2 respectively. Each index has 1 shard with no replicas.
I can use the head and paramedic plugins on each server successfully, and curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true' confirms the cluster name is the same and that each server lists only 1 node.
Is there anything glaring as to why these nodes aren't talking? I've restarted the ES service and rebooted both servers, to no avail. Could the cloning be the problem?
In your elasticsearch.yml:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ['host1:9300', 'host2:9300']
So, just list your node IPs with the transport port (default is 9300) under unicast hosts. Multicast is enabled by default, but is generally impossible in cloud environments without external plugins.
Also, make sure to check your IP rules / security groups! That's easy to forget.
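Once unicast discovery is configured on both nodes and ES is restarted, you can verify the cluster actually formed (both checks should report the two nodes, and number_of_nodes should be 2):

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true'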
