I have two servers in two different geographic locations (alfa1 and alfa2).
r.tableCreate('dados', {shards:1, replicas:{alfa1:1, alfa2:1}, primaryReplicaTag:'alfa1'})
I need to be able to write to both servers, but when I shut down alfa1 and try to write to alfa2, RethinkDB only allows reads: "Table test.dados is available for outdated reads, but not up-to-date reads or writes."
I need a way to write to all replicas, not only to the primary.
Is this possible? Does RethinkDB allow multi-datacenter replication?
I think multi-datacenter replication needs to permit writes in both datacenters.
I tried to remove "primaryReplicaTag", but the system doesn't accept that!
Any help is welcome!
RethinkDB does support multi-datacenter replication/sharding.
I think the problem here is that you've set up a cluster of two servers, which means that when one fails, only 50% of the nodes remain, which is less than the majority required for writes.
From the failover docs - https://rethinkdb.com/docs/failover/
To perform automatic failover for a table, the following requirements must be met:
- The cluster must have three or more servers
- The table must be configured to have three or more replicas
- A majority (greater than half) of replicas for the table must be available
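To make the majority requirement concrete, here is a quick sketch (plain shell arithmetic, not a RethinkDB command) of how many replicas must stay reachable for various replica counts:

```shell
# Majority quorum: writes need strictly more than half of the replicas.
# With 2 replicas the majority is 2, so losing either server blocks writes;
# with 3 replicas the majority is also 2, so one server can fail safely.
for n in 2 3 5; do
  echo "replicas=$n majority=$(( n / 2 + 1 ))"
done
```

This is why a two-server cluster can never survive the loss of either server, while a three-server cluster can lose one.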
Try adding just one additional server and your problems should be resolved.
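As a sketch, the additional server could be started like this (the server name, tag, and address are placeholders, not taken from the question):

```shell
# Hypothetical third node: it joins the existing cluster and is tagged with
# the datacenter it lives in, so tag-based replica placement keeps working.
rethinkdb --server-name alfa3 --server-tag alfa2 \
          --join ip_of_alfa1:29015 --bind all
```

Once it has joined, reconfigure the table to three replicas so a majority survives the loss of any single server.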
I have to work on a ClickHouse cluster, but I don't fully understand the purpose of the duplicated tables.
For example:
For each table I have a clone with the suffix '_nd'. I know the difference concerns replication of the data across the cluster: the one without '_nd' is local to the server, and the other one is for the cluster.
But why are there these 'duplicates'?
Is it for performance reasons? What will be the impact on my code?
I need to export Cassandra schema and data to a file in order to quickly setup identical cluster when needed.
Identical likely means the same topology, same number of nodes and replication factor.
In the case of NetworkTopologyStrategy, a simple file backup/sstable snapshot is not helpful because peer IPs are recorded along with the other data. After a restore on another node, it tries to reach the source cluster's seeds.
I was surprised there is almost no ready-made solution for such a task.
I suppose I have to use DESC SCHEMA;, then parse the output for all the tables, back them up with COPY keyspace.table TO '/backup/keyspace.table.csv';, and later use sstableloader to restore on another node.
Any better solutions?
You can use the solution you've specified.
Or you can use the snapshots option (it looks easier to me). Here's a doc describing how to copy snapshots between clusters:
http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_snapshot_restore_new_cluster.html
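As a sketch of the DESC SCHEMA + COPY route from the question (the keyspace, table, and paths are placeholders, and COPY is only practical for modest data volumes):

```shell
# On a node of the source cluster: dump the schema and the data to files.
cqlsh -e "DESC SCHEMA" > /backup/schema.cql
cqlsh -e "COPY mykeyspace.mytable TO '/backup/mykeyspace.mytable.csv' WITH HEADER = true"

# On the new cluster: recreate the schema, then load the data back.
cqlsh -f /backup/schema.cql
cqlsh -e "COPY mykeyspace.mytable FROM '/backup/mykeyspace.mytable.csv' WITH HEADER = true"
```

For large tables, the snapshot + sstableloader route from the linked doc is the better fit, since it moves sstables directly instead of round-tripping through CSV.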
I have 5 servers. On my first, "primary", server I have this in the config:
join=ip2:port
join=ip3:port
join=ip4:port
join=ip5:port
I am connecting to RethinkDB via a proxy:
proxy --join ip1:port --join ip2:port
When I stop RethinkDB on ip1, everything stops. I do not know how to solve this. The RethinkDB docs are not complete. Do I have to define these joins in every config?
UPDATE
In fact, when I stop any server in the cluster, my app crashes! In the web UI I get something like "Table db.table is available for outdated reads, but not up-to-date reads or writes."
Apart from table shards, I do not see the point.
Yes, you usually want every node to know the address of every other node so that they can connect to each other if any subset of the nodes is down.
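For example, the config file on every node (including ip1) could list all of the other peers; the addresses below are placeholders:

```
# /etc/rethinkdb/instances.d/instance1.conf on ip1; every peer is listed,
# so this node can rejoin the cluster through any surviving server.
join=ip2:29015
join=ip3:29015
join=ip4:29015
join=ip5:29015
```

The same applies to the proxy: passing all five servers as --join flags lets it keep working when ip1 goes down. Note that the "outdated reads" error is a separate issue: it means a majority of the table's replicas is unavailable, so check the replica count as well.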
I have 3 different pool of clients in 3 different geographical locations.
I need to configure RethinkDB with 3 different clusters and replicate data between them (inserts, updates, and deletes). I do not want to use sharding, only replication.
I couldn't find in the documentation whether this is possible.
I couldn't find in the documentation how to configure multi-cluster replication.
Any help is appreciated.
I think that a multi-cluster setup is just the same as a single cluster with nodes in different datacenters.
First, you need to set up a cluster; follow this document: http://www.rethinkdb.com/docs/start-a-server/#a-rethinkdb-cluster-using-multiple-machines
Basically, use the command below to join a node into the cluster:
rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all
Once you have your cluster set up, the rest is easy. Go to the admin UI, select the table, and under "Sharding and replication" click Reconfigure and enter how many replicas you want; just keep shards at 1.
You can also read more about Sharding and Replication at http://rethinkdb.com/docs/sharding-and-replication/#sharding-and-replication-via-the-web-console
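As a sketch, one server per location could be started and tagged like this (the names and addresses are placeholders):

```shell
# First node (location 1) starts the cluster; the other two join it,
# each tagged with its own location for replica placement.
rethinkdb --server-name us_node --server-tag us --bind all
rethinkdb --server-name eu_node --server-tag eu \
          --join IP_OF_FIRST_MACHINE:29015 --bind all
rethinkdb --server-name asia_node --server-tag asia \
          --join IP_OF_FIRST_MACHINE:29015 --bind all
```

With one shard and three replicas, the table keeps accepting writes as long as two of the three locations remain reachable, since that is a majority.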
I am new to Cassandra. When I tried to set up a Cassandra cluster, I noticed that any node can join the cluster if it has the IP address of the seed nodes, and then your data can be seen by anyone. How can I prevent this? (I thought about requiring a password before joining, but I don't know how to do it.)
Use Kerberos! Check the DataStax documentation: http://www.datastax.com/docs/datastax_enterprise3.0/security/security_setup_kerberos
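Kerberos is a DataStax Enterprise feature. On stock Apache Cassandra, a hedged sketch of the relevant cassandra.yaml settings looks like this: password authentication protects client access to the data, and node-to-node TLS with mutual certificate verification stops unknown machines from joining the ring (the keystore paths and passwords below are placeholders):

```yaml
# cassandra.yaml (fragment)
# Require a username/password from clients:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

# Require TLS with certificate verification between nodes, so a machine
# without a trusted certificate cannot join the cluster:
server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/conf/.keystore
    keystore_password: changeit
    truststore: /etc/cassandra/conf/.truststore
    truststore_password: changeit
    require_client_auth: true
```

Note that PasswordAuthenticator alone does not stop a rogue node from joining; the internode TLS settings are what gate cluster membership.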