How to replicate Consul catalog services from one data center to another?

We have two data centers in Consul:
Machines 1 & 2 as the primary data center
Machines 3 & 4 as the secondary data center
Is it possible to replicate catalog service registrations from the primary to the secondary?

Related

Single index or multiple indices

My environment architecture: I have 15 servers. The details are as below:
Group 1 (Standalone)
server 1 (2 apps)
server 2 (3 apps)
server 3 (2 apps)
server 4 (3 apps)
Group 2 (master and slave)
server 5 master (2 apps)
server 6 slave (2 apps)
Group 3 (master and 2 slaves)
server 7 master (3 apps)
server 8 slave (3 apps)
server 9 slave (3 apps)
Group 4 (1 master, 5 slaves)
server 10 master (1 app)
server 11 slave (1 app)
server 12 slave (1 app)
server 13 slave (1 app)
server 14 slave (1 app)
server 15 slave (1 app)
Each application has 15-20 logs.
What is the best way to create the index in Logstash?
Is the pattern below a good index pattern?
app_name-log_name-YYMMDD
Later I want to visualize this in Kibana as a table panel consisting of time and message, based on the respective log name.
The below is data for a single log; other logs have a similar pattern but different data. I want to display only a single log per table.
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#_writing_to_different_indices_best_practices is a good starting point on creating custom index names. In general, I would combine into a single index whatever has both a similar lifecycle (keep the data for 30 days, for example) and a similar structure (mostly the same fields). As long as you have a field for the app, you can filter on that and it will work just as well.
Also, I'd strongly recommend using ILM to get evenly sized indices rather than what happens with a daily index pattern; a minimal sketch follows below.
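As a rough illustration of that advice, here is a minimal Logstash elasticsearch output using one rollover alias per app with ILM (the alias and policy names are invented for the example):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # one alias per app keeps similarly structured logs in one index family;
    # ILM then rolls indices over by size/age instead of by calendar day
    ilm_enabled        => true
    ilm_rollover_alias => "app1-logs"
    ilm_policy         => "30-days-default"
  }
}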

RethinkDB Cross-Cluster Replication

I have 3 different pools of clients in 3 different geographical locations.
I need to configure RethinkDB with 3 different clusters and replicate data between them (inserts, updates and deletes). I do not want to use sharding, only replication.
I couldn't find in the documentation whether this is possible, or how to configure multi-cluster replication.
Any help is appreciated.
I think that a multi-cluster setup is just the same as a single cluster with nodes in different data centers.
First, you need to set up a cluster; follow this document: http://www.rethinkdb.com/docs/start-a-server/#a-rethinkdb-cluster-using-multiple-machines
Basically, use the command below to join a node into the cluster:
rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all
Once you have your cluster set up, the rest is easy. Go to your admin UI, select the table, and under "Sharding and replication" click Reconfigure and enter how many replicas you want; just keep shards at 1. The same can be done with ReQL, as sketched below.
You can also read more about sharding and replication at http://rethinkdb.com/docs/sharding-and-replication/#sharding-and-replication-via-the-web-console
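As a rough ReQL sketch of the same reconfiguration, runnable from the Data Explorer (the table name and server tags are assumptions; the tags must match --server-tag values the servers were started with):

r.table("clients").reconfigure({
  shards: 1,                                        // no sharding, replication only
  replicas: {us_east: 1, eu_west: 1, ap_south: 1},  // one replica per location
  primaryReplicaTag: "us_east"                      // primary stays in us_east
})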

Cassandra - Enabling Virtual Nodes causes new data unavailable from new DC

Cassandra Version : 2.0.3
Environment : Linux OS, 2GB RAM, 180 GB Hard Disk,
DC1(Old) - 3 Nodes,
DC2(New) - 3 Nodes,
All machines have the same configuration.
I followed this link to enable virtual nodes in an existing data center.
Data is continuously written into Cassandra. After rebuilding the nodes in the new data center, all nodes in the new data center receive data. As described in the link above, I started to migrate Cassandra from initial tokens to virtual nodes, but in between I found that data added after the rebuild is unavailable in the new data center. After decommissioning the nodes in the old data center, everything returns to normal.

How to migrate a single datacenter cluster to a multiple datacenter cluster in Cassandra?

Provide a recommended configuration to migrate data from a single data center Cassandra cluster to a multiple data center Cassandra cluster. Currently I have a single data center cluster environment with the following configuration:
i) No of nodes: 3
ii) Replication Factor : 2
iii) Strategy: SimpleStrategy
iv) endpoint_snitch: SimpleSnitch
And now I am planning to add 2 more nodes, which are in a different location. So I thought of moving to a multiple data center cluster with the following configuration:
i) No of nodes: 5
ii) RF: dc1=2, dc2=2
iii) Strategy: NetworkTopologyStrategy
iv) endpoint_snitch: PropertyFileSnitch (I have the cassandra-topology.properties file)
What is the procedure to migrate the data without losing any data?
Please let me know the recommended steps to follow or any guide I can refer to. Please let me know if further info is required.
Complete repairs on all nodes.
Take a snapshot on all nodes to have a fallback point.
Decommission each node that is not a pure Cassandra workload. Repair the ring each time you decommission a node.
Update keyspaces to NetworkTopologyStrategy with a replication factor matching the original RF:
ALTER KEYSPACE keyspace_name
WITH REPLICATION =
{ 'class' : 'NetworkTopologyStrategy', 'datacenter_name' : 2 };
(Once the new data center has joined, alter the keyspace again to give that data center a replication factor as well, e.g. 'dc2' : 2, before rebuilding.)
Change the snitch on each node, restarting each node afterwards.
Add nodes in a different datacenter. Make sure that when you add them you have auto_bootstrap: false in the cassandra.yaml.
Run nodetool rebuild original_dc_name on each new node. A shell sketch of these commands follows below.
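As a rough command-line sketch of the repair, snapshot, and rebuild steps (the data center name dc1 is taken from the question; the snapshot tag is invented):

# on every existing node, before any topology change:
nodetool repair -pr                  # repair the node's primary ranges
nodetool snapshot -t pre-migration   # fallback point

# on each new node, after it joined with auto_bootstrap: false:
nodetool rebuild -- dc1              # stream existing data from the original DC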
I just found this excellent tutorial on migrating Cassandra:
Cassandra Migration To EC2 by highscalability.com
Although the details can be found in the original article, an outline of the main steps is:
1. Cassandra Multi DC Support
Configure the PropertyFileSnitch (a properties sketch follows after this outline)
Update the replication strategy
Update the client connection setting
2. Setup Cassandra On EC2
Start the nodes
Stop the EC2 nodes and cleanup
Start the nodes
Place data replicas in the cluster on EC2
3. Decommission The Old DC And Cleanup
Decommission the seed node(s)
Update your client settings
Decommission the old data center
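For reference, a minimal cassandra-topology.properties sketch for the PropertyFileSnitch (the IPs, rack names, and DC names are invented for the example; the identical file must be deployed on every node):

# cassandra-topology.properties
# format: node_ip=data_center:rack
192.168.1.11=DC1:RAC1
192.168.1.12=DC1:RAC1
192.168.1.13=DC1:RAC1
10.0.0.21=DC2:RAC1
10.0.0.22=DC2:RAC1
# nodes not listed above fall back to this
default=DC1:RAC1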

Couchbase XDCR replication with views

I have a question.
I have Couchbase installed in this situation:
2 clusters:
cluster 1:
192.168.1.91
192.168.1.92
192.168.1.93
and cluster 2:
192.168.1.94
192.168.1.95
192.168.1.96
I want to set up replication, so I have created a bucket (test) with 2 replicas, so
I think that data is replicated within cluster 1 and within cluster 2.
I have set up 2 XDCR replications,
one from cluster 1 to cluster 2 and another one
from cluster 2 to cluster 1,
and it seems to be working, but I don't understand some things:
1) Data is replicated from cluster 1 to cluster 2, but is there a way to replicate the views as well?
2) I have seen another thing: in bucket test I have, for example, 1000 records,
so more or less 300 per node.
If a node goes down, I thought I would still see all 1000 records (for this reason I need replication, and I set 2 replicas for the bucket), but instead I see only 600 records of my bucket test. Why is this?
Thanks a lot to anyone.
1) Views aren't replicated. What you should do is create the same views on both clusters, and they will be updated as data is replicated between your clusters; a sketch of defining the same view on each cluster follows below.
2) My guess is that when your node crashes you are not actually failing it over. This needs to be done in order to activate the replicas on the other nodes.
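A hedged sketch of defining the same view on each cluster over the views REST API (the design document name, view name, and credentials are invented; 8092 is the default view port):

# cluster 1
curl -X PUT -u Administrator:password \
  -H 'Content-Type: application/json' \
  http://192.168.1.91:8092/test/_design/dev_mydesign \
  -d '{"views":{"all_docs":{"map":"function (doc, meta) { emit(meta.id, null); }"}}}'

# cluster 2: same design document, so both sides serve the view over replicated data
curl -X PUT -u Administrator:password \
  -H 'Content-Type: application/json' \
  http://192.168.1.94:8092/test/_design/dev_mydesign \
  -d '{"views":{"all_docs":{"map":"function (doc, meta) { emit(meta.id, null); }"}}}'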
