Migrating from Ec2Snitch to Ec2MultiRegionSnitch

I currently have a 5-node Cassandra cluster running with Ec2Snitch in us-west-2.
I want to add a second datacenter in us-west-1, so I need to use Ec2MultiRegionSnitch instead.
How should I go about doing this?
On a test deployment, I tried setting broadcast_address to the public IP on every node, changing the snitch, and using public IPs for the seeds.
After doing this, running nodetool status on any node shows UN for that node and DN for all the other nodes, which report only their private IP addresses.
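For reference, a minimal sketch of the cassandra.yaml settings that typically change on each node for this migration (all IPs below are placeholders, and the storage port 7000 must also be open between the public IPs in the security groups):

endpoint_snitch: Ec2MultiRegionSnitch
listen_address: 172.31.10.5      # this node's private IP (intra-region traffic)
broadcast_address: 54.200.1.10   # this node's public IP (cross-region traffic)
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.200.1.10,54.200.1.11"   # public IPs, reachable from the new DC

The snitch setting is not hot-reloadable, so a rolling restart of every node is needed before the nodes gossip their public addresses.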

Related

How to disjoin a datacenter from a two-datacenter Consul WAN

I have a cluster of Consul servers in two datacenters, with three servers in each. When I execute the consul members -wan command I can see all 6 servers.
I want to separate these into two individual clusters with no connection between them.
I tried the force-leave and leave commands as per the Consul documentation:
https://www.consul.io/commands/force-leave: When I used this command, the result was a 500 - no node is found. I tried using the node name as server.datacenter, the full FQDN of the server, and the IP of the server; none of them worked for me.
https://www.consul.io/commands/leave: When I used this command from the node which I want to remove from the cluster, the response was success, but when I execute consul members -wan I can still see this node.
I tried another approach, wherein I stopped Consul on the node I want to remove from the cluster and then executed consul force-leave node-name; consul members -wan then showed this node as left. But when I started Consul on this node again, it rejoined the cluster.
What steps am I missing here?
I think I solved the problem I had by following the instructions here:
https://support.hashicorp.com/hc/en-us/articles/1500011413401-Remove-WAN-federation-between-Consul-clusters
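For anyone who can't reach that article, a rough sketch of the kind of steps it walks through, assuming a recent Consul where force-leave accepts -wan (node and datacenter names below are placeholders):

# 1. On every server in both datacenters, delete the WAN join settings
#    (retry_join_wan / join_wan) from the agent config, then restart the agent
sudo systemctl restart consul
# 2. From a server in the datacenter you are keeping, evict the other
#    datacenter's servers from the WAN pool (WAN names are <node>.<datacenter>)
consul force-leave -wan server-1.dc2
# 3. Confirm that only the local datacenter's servers remain
consul members -wan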

Multi-Server Nodes Use Private IP Instead Of Public IP

We have a multi-node Couchbase server on EC2 instances (one instance each for data / query / index).
When the data node is set up, the Server Nodes tab in the Couchbase UI console shows the private IP address of the node instead of the public IP address.
Similarly, when the index / query nodes are attached to the data node, they use their private IP addresses to communicate instead of their public IP addresses.
We want the nodes to be connected using only public IP addresses instead of private IP addresses, so that when we execute our Lambda function it is able to connect to the server.
Please let us know how to proceed further.
Thanks
Resolved this issue by connecting the nodes using public DNS instead of the Elastic IP address which Amazon provides, i.e. connect your Couchbase nodes using a public DNS name like "ec2-xxx-xxxx".
Hope this will help others who encounter the same issue.
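As a hedged illustration of that fix with couchbase-cli (hostnames and credentials are placeholders, and exact flags vary a little across Couchbase versions):

# Make the node identify itself by its public DNS name rather than its private IP
couchbase-cli node-init -c localhost:8091 -u Administrator -p password \
  --node-init-hostname ec2-11-22-33-44.us-west-2.compute.amazonaws.com
# Add the query/index nodes by their public DNS names as well
couchbase-cli server-add -c ec2-11-22-33-44.us-west-2.compute.amazonaws.com:8091 \
  -u Administrator -p password \
  --server-add ec2-55-66-77-88.us-west-2.compute.amazonaws.com:8091 \
  --server-add-username Administrator --server-add-password password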

How to restart a single-node Hadoop cluster on EC2

I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I start the instance again later, I get a new IP address, and Ambari is no longer able to start the Hadoop-related services.
Is there a way, other than completely redeploying, to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the Ambari PostgreSQL database, and possibly other places I haven't found yet.
I tried updating the XML files and the PostgreSQL database with the new IP address and the internal and external DNS names wherever I could find them, but to no avail; I have not been able to restart the services.
The reason I am doing this is to save the deployment time, the data on HDFS, and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the FQDN when setting up Ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend it to others who might read this post.
In my case, I figured this out on my own before coming back to the page. The specific step I took was insanely simple after all, thanks to Occam's Razor.
I added the following line in /etc/hosts:
<new internal IP> <old internal dns name>
and then did
ambari-server restart from the command line. Then I was able to restart all services after logging into Ambari.
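Scripted, that fix looks roughly like this (the internal DNS name is a placeholder for whatever FQDN Ambari was originally installed under):

# Fetch the instance's new internal IP from the EC2 metadata service
NEW_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# Point the old internal DNS name at the new IP
echo "$NEW_IP ip-10-0-0-12.us-west-2.compute.internal" | sudo tee -a /etc/hosts
# Restart the Ambari server, then start the services from the Ambari UI
sudo ambari-server restart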

Private networking necessary for Mesos and Marathon?

I am working through this tutorial: http://mesosphere.io/docs/getting-started/cloud-install/
Just learning on an Ubuntu instance on Digital Ocean, I let the master process bind to the public IP, and the Mesos and Marathon web interfaces became publicly accessible. No surprises there.
Do Mesos and Marathon rely on ZooKeeper to create private IPs between instances? Could you skip using ZooKeeper by manually setting up a private network between instances? Would the proper way then be to start the master and slave processes bound to the secondary, private IPs of each instance?
Digital Ocean can set up private IPs automatically, but this is kind of a learning exercise for me. I am aware of the broad rule that administrator access to a server shouldn't come through a public IP. Another way of phrasing this post: does private networking provide the security for Mesos and Marathon?
I'm only starting with one Ubuntu instance, running both master and slave, for now. Binding to the loopback address would fix this issue for just one machine, I realize.
ZooKeeper is used for a few different things for both Marathon and Mesos:
1. Leader election
2. Storing state
3. Resolving the Mesos masters
At the moment, you can't skip ZooKeeper entirely because of 2 and 3 (although later versions of Mesos have their own registry which keeps track of state). AFAIK, Mesos doesn't rely on ZooKeeper for creation of private IPs - it'll bind to whatever is available (but you can force this via the ip parameter). So, you won't be able to forgo ZooKeeper entirely with a private network.
Private networking will provide some security for Mesos and Marathon - assuming you firewall off their access to the external world.
A good (although not necessarily the best) solution for keeping the instances on a private network is to set up an OpenVPN (or similar) network to one of the masters. Then, launch each instance on its private IP and make sure you also set the hostname parameter to that IP. Connect to the Mesos/Marathon web consoles via their private IPs over the VPN and everything should resolve correctly.
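As a sketch of those two parameters in practice (the private IPs, quorum, and ZooKeeper address are placeholders):

# Bind the master to the instance's private IP and advertise that same IP
mesos-master --zk=zk://10.0.0.1:2181/mesos --quorum=1 \
  --ip=10.0.0.1 --hostname=10.0.0.1 --work_dir=/var/lib/mesos
# Likewise for a slave on another instance
mesos-slave --master=zk://10.0.0.1:2181/mesos --ip=10.0.0.2 --hostname=10.0.0.2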
Mesos and Marathon don't create private IPs between instances.
For that, I suggest you use tinc, or directly a tinc Docker image.
Using this, I was able to set up the configuration you want in 5 minutes; it's easier to configure than OpenVPN, and each host can connect to any other, with no need for a VPN server to route all the traffic.
Each node stores a private and public key for connecting to each server of the private network.
You should set up a private network for Mesos.
After that, you can add all the hosts to /etc/hosts with their internal-network IPs.
You will then be able to point ZooKeeper at the private network:
zk://master-1:2181,master-2:2181,master-3:2181
Then the proper way to start the master and slave processes is to bind to the secondary private IPs of each instance.
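Concretely, the /etc/hosts entries backing that connection string would look like this on every machine (addresses are placeholders on your private network):

10.0.0.1 master-1
10.0.0.2 master-2
10.0.0.3 master-3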

How to stop and start Juju instances on Amazon EC2

I'm testing a Hadoop cluster with Juju and Amazon EC2, and I would like to know how I could stop a cluster and then start it again, maintaining the cluster configuration.
The problem is that after starting the instances again the public addresses change, and the juju status command shows the machines as down.
The problem is that after starting the instances again the public addresses change
The above is true for EC2 instances: the public IP address changes when an instance is stopped and then started.
To avoid this, you have two options:
Use an Elastic IP. You can attach an EIP to your instance so that the instance keeps the same IP address across stop/start cycles. Caveat: by default you can get only 5 EIPs per region per account.
To get around that limitation, you can set up your cluster inside a VPC, where all your instances will have a private IP address that remains the same across stop/start cycles. But you have to understand how VPC works in order to use that. Please read this.
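A hedged sketch of the Elastic IP option with the AWS CLI (the instance ID is a placeholder; use the AllocationId returned by the first command in the second):

# Allocate an Elastic IP, then attach it to the instance; the address
# survives stop/start cycles
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc123de456f7890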
