Adding EC2 Cassandra Nodes to my already running on-premises Apache Cassandra Cluster

Here is my use case:
I have a single-DC Cassandra cluster (3 nodes) with RF 2. This cluster is running in my on-premises DC. Here is my question: I am hoping to add 3 EC2 nodes to this cluster, then change the RF to, say, 4, and also add one of the EC2 nodes as a seed after all 3 nodes have fully joined the cluster.
Do I need to change the snitch on the EC2 nodes, or can I just add each node?
If you have implemented this use case, I would appreciate a clear set of steps and any gotchas I should look out for.

You can add 3 EC2 nodes to your on-prem cluster, but... you will have to set them up as a second data center: 3 nodes in each DC, 2 DCs in your cluster. You won't be able to use SimpleSnitch; as markc commented, your best bet is GossipingPropertyFileSnitch. Although you didn't mention it, SimpleStrategy is not recommended for production, and you'll need to change your keyspaces to NetworkTopologyStrategy. markc should post his comments as the answer here :)
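A rough sketch of what that looks like in practice (the keyspace name, data-center names, and file path below are placeholders; adjust them to your environment):

    # On every node, on-prem and EC2, set the snitch in cassandra.yaml:
    #   endpoint_snitch: GossipingPropertyFileSnitch
    # Then describe each node's location in cassandra-rackdc.properties, e.g. on an EC2 node:
    cat > /etc/cassandra/cassandra-rackdc.properties <<'EOF'
    dc=AWS_EC2
    rack=rack1
    EOF

    # Once all 3 EC2 nodes have joined, switch the keyspace to NetworkTopologyStrategy
    # and spell out the per-DC replication (2 on-prem + 2 in EC2 in this example):
    cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'ONPREM_DC1': 2, 'AWS_EC2': 2};"

    # After changing replication, run a full repair on each node so the new replicas get data:
    nodetool repair -full

One common gotcha: if you use authentication, the system_auth keyspace needs the same replication change as your application keyspaces.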

Related

Geo cluster with pacemaker - quorum vs booth

I configured a geo cluster using pacemaker and DRBD.
The cluster has 3 different nodes, each node is in a different geographic location.
The locations are pretty close to one another and the communication between them is fast enough for our requirements (around 80MB/s).
I have one master node, one slave node and the third node is an arbitrator.
I use AWS route 53 failover DNS record to do a failover between the nodes in the different sites.
A failover will happen from the master to the slave only if the slave has a quorum, thus ensuring it has communication to the outside world.
I have read that using booth is advised to perform failover between clusters/nodes in different locations - but having a quorum between different geographic locations seems to work very well.
I want to emphasize that I don't have a cluster of clusters - it is a single cluster, with each node in a different geo-location.
My question is - do I need booth in my case? If so - why? Am I missing something?
Booth helps with an overlay cluster made up of separate clusters running at different sites.
You have one single cluster, so you should be fine with just quorum.
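If it helps, a quick sanity check from any node using the standard corosync/pacemaker tools (nothing specific to your setup is assumed here):

    # Show corosync quorum state: expected votes, total votes, and whether this partition is quorate
    corosync-quorumtool -s

    # One-shot pacemaker status, including which node currently runs the resources
    crm_mon -1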

Are there any downsides to running Elasticsearch on a multi-purpose (i.e. non-dedicated) cluster?

I just set up a 3-node Elasticsearch (ES) cluster using one of GKE's click-to-deploy configurations. Each node is an n1-standard-4 machine type (4 vCPUs/15 GB RAM). I have always run ES on clusters dedicated to that single purpose (performance reasons, separation of concerns, making it easier to debug machine faults), and currently this GKE cluster is the same.
However, I have a group of batch jobs I would like to port to run on a GKE cluster. Since they update several large files, I would like them to also run on a stateful cluster (just like ES) so I can move updated files to the cloud once a day rather than round-tripping on every run. The batch jobs in question run at 5-minute, 15-minute, or daily frequency for about 18 hours every day.
My question now is, what is the best way to deploy this batch process given the existing ES cluster...
Create an entirely new cluster?
Create another node pool?
Create a separate namespace and increase the cluster's autoscaling?
Some other approach I'm missing?
Note: I'm a few days into using GKE and containerization in general
Based on my knowledge, I would go for another node pool or the autoscaler.
Create an entirely new cluster?
For me it would be overkill just for running the jobs.
Create another node pool?
I would say this is the best option, on par with the autoscaler: create a new node pool just for the jobs, which scales down to 0 when there is nothing left to do.
Create a separate namespace and increase the cluster's autoscaling?
Much the same as another node pool, but from my point of view, if you want to do that you would have to label your nodes for Elasticsearch so that the jobs can't take any resources from them. So, answering your question from the comments:
my question is more about if doing this with autoscaler within the same cluster would in any way affect elasticsearch esp with all the ES specific yaml configs?
It shouldn't. As I said above, you can always label the 3 specific nodes (the default node pool) to work only with Elasticsearch so nothing else takes their resources; the cluster will scale up when it needs more resources for the jobs and scale back down to the 3 ES nodes when the jobs end their 18-hour run. A rough sketch of the labeling is below.
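The node and pool names here are made up; note that GKE also applies a cloud.google.com/gke-nodepool=<pool-name> label to every node automatically, which you can use instead of a custom one:

    # Label the three existing ES nodes (default pool) so workloads can target or avoid them
    kubectl label nodes gke-es-default-pool-node-1 gke-es-default-pool-node-2 gke-es-default-pool-node-3 \
      workload=elasticsearch

    # In the batch Job spec, pin the jobs to the other pool with a nodeSelector, e.g.
    #   spec:
    #     template:
    #       spec:
    #         nodeSelector:
    #           cloud.google.com/gke-nodepool: batch-pool
    # so they never land on (or steal resources from) the ES nodes.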
Also with regards to the 6h node pool doing nothing comment, wouldn't I be able to avoid this on a new cluster or node pool with a minimum scaling parameter of zero?
Based on the GCP documentation, it would work for a node pool, but not for a new cluster.
If you specify a minimum of zero nodes, an idle node pool can scale down completely. However, at least one node must always be available in the cluster to run system Pods.
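A sketch of creating such a pool with the gcloud CLI (the pool, cluster, and zone names are placeholders; double-check the flags against the current gcloud reference):

    # Add a separate autoscaled pool for the batch jobs that can shrink to zero nodes when idle
    gcloud container node-pools create batch-pool \
      --cluster=my-es-cluster \
      --zone=us-central1-a \
      --machine-type=n1-standard-4 \
      --enable-autoscaling \
      --min-nodes=0 \
      --max-nodes=3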
tl;dr: Go for the autoscaler or another node pool; if you're worried about resources for your ES, label the 3 existing nodes just for ES.
I hope that answers your question. Let me know if you have any more questions.

3 Node Cluster for Elastic, Kafka and Cassandra - On 3 Machines

We are creating a 3-node Elastic cluster, but we want to use each of our 3 Elastic nodes for other things, like Kafka and Cassandra. We need high availability, so we want to have 3 nodes for everything, but we don't want to have 9 machines; we just want to put them on one bigger computer. Is this a typical scenario?
I would say no.
One sandbox machine running a PoC with all the components local, sure, why not. But for production with HA requirements, you are just asking for trouble putting everything in one place. They're going to compete for resources, one component blowing up the box kills the others, touching the machine to change one risks the others, etc.
IMO, keep your architecture clean and deploy each component on its own nodes.

Datastax hadoop nodes basics

I'm trying to set up some Hadoop nodes along with some Cassandra nodes in my DataStax Enterprise cluster. Two things are not clear to me at this point. One, how many Hadoop nodes do I need? Is it the same as the number of Cassandra nodes? Does the data still live on the Cassandra nodes? Second, the tutorials mention that I should have vnodes disabled on the Hadoop nodes. Can I still use vnodes on the Cassandra nodes in that cluster? Thank you.
In DataStax Enterprise you run Hadoop on nodes that are also running Cassandra. The most common deployment is to make two datacenters (logical groupings of nodes). One datacenter is devoted to analytics and contains the machines that run Hadoop and C* at the same time; the other datacenter is C*-only and serves the OLTP function of your cluster. The C* processes on the Analytics nodes are connected to the rest of your cluster (like any other C* node) and receive updates when mutations are written, so they are eventually consistent with the rest of your database. The data lives both on these nodes and on the other nodes in your cluster. Most folks end up with a replication pattern using NetworkTopologyStrategy that specifies several replicas in their C*-only DC and a single replica in their Analytics DC, but your use case may differ. The number of nodes does not have to be equal in the two datacenters.
For your second question, yes, you can have vnodes enabled in the C*-only datacenter. In addition, if your batch jobs are sufficiently large, you could also run vnodes in your Analytics datacenter with only a slight performance hit. Again, this is completely based on your use case. If you want many faster, shorter analytics jobs, you do NOT want vnodes enabled in your Analytics datacenter.
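As an illustration of that replication pattern (the keyspace name is a placeholder, and the DC names must match whatever your snitch reports for the two datacenters):

    # Three replicas in the OLTP ("Cassandra") DC, one replica in the Analytics DC
    cqlsh -e "CREATE KEYSPACE app_data WITH replication =
      {'class': 'NetworkTopologyStrategy', 'Cassandra': 3, 'Analytics': 1};"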

cassandra: strategy for single datacenter deployment

We are planning to use Apache Shiro & Cassandra for distributed session management, very similar to what is described at https://github.com/lhazlewood/shiro-cassandra-sample
Need advice on deploying Cassandra in Amazon EC2.
In EC2, we have the below setup:
Single region, 2 Availability Zones (AZs), 4 nodes
Accordingly, Cassandra is configured with:
Single DataCenter: DC1
Two Racks: Rack1, Rack2
4 Nodes: Rack1_Node1, Rack1_Node2, Rack2_Node1, Rack2_Node2
Data Replication Strategy used is NetworkTopologyStrategy
Since Cassandra is used as a session datastore, we need high consistency and availability.
My Questions:
How many replicas shall I keep in a cluster?
Thinking of 2 replicas, 1 per rack.
What should the consistency level (CL) be for read and write operations?
Thinking of QUORUM for both read and write, considering 2 replicas in a cluster.
In case 1 rack is down, would Cassandra write & read succeed with the above configuration?
I know it can use hinted handoff for a temporarily down node, but does that work for both read and write operations?
Any other suggestion for my requirements?
Generally, going for an even number of nodes is not the best idea, and the same goes for an even number of availability zones. In this case, if one of the two racks fails, the cluster can no longer satisfy QUORUM. I'd recommend going for 3 racks with 1 or 2 nodes per rack, 3 replicas, and QUORUM for both read and write. Then the cluster would only fail if two nodes/AZs fail.
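For example, with the 3 racks/AZs mapped into one DC, the keyspace would look something like this (the keyspace and DC names are placeholders):

    # RF 3 in the single DC; with 3 racks, NetworkTopologyStrategy puts one replica in each rack,
    # so QUORUM (2 of 3) keeps working if any single rack/AZ is lost
    cqlsh -e "CREATE KEYSPACE session_store WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3};"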
You have probably heard of the CAP theorem in database theory. If not, you can learn the details on Wikipedia (https://en.wikipedia.org/wiki/CAP_theorem) or just google it. It says that a distributed database with multiple nodes can only achieve two of the following three goals: consistency, availability, and partition tolerance.
Cassandra is designed to achieve high availability and partition tolerance (AP), but sacrifices consistency to get there. However, you could set the consistency level to ALL in Cassandra to shift it towards CA, which seems to be your goal. Your setting of QUORUM with 2 replicas is essentially the same as ALL, because a quorum is floor(RF/2) + 1, which for RF 2 means both replicas. In this setting, if a single node containing the data is down, the client will get an error on read/write (not partition-tolerant).
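To make the arithmetic concrete (the keyspace and table names below are placeholders):

    # quorum = floor(RF / 2) + 1
    #   RF 2 -> quorum = 2 (every replica must respond, effectively the same as ALL)
    #   RF 3 -> quorum = 2 (reads/writes still succeed with one replica down)
    # The consistency level is set per request/session, e.g. in cqlsh:
    cqlsh -e "CONSISTENCY QUORUM; SELECT * FROM session_ks.sessions LIMIT 1;"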
You may take a look at a video here to learn more (it requires a DataStax account): https://academy.datastax.com/courses/ds201-cassandra-core-concepts/introduction-big-data
