Cassandra - Enabling virtual nodes makes new data unavailable in the new DC - cassandra-2.0

Cassandra Version : 2.0.3
Environment : Linux OS, 2GB RAM, 180 GB Hard Disk,
DC1(Old) - 3 Nodes,
DC2(New) - 3 Nodes,
All machines have the same configuration.
I followed this link to enable virtual nodes in an existing data center.
Data is continuously written into Cassandra. After rebuilding the nodes in the new data center, all nodes in the new data center receive data. As described in the link above, I started migrating Cassandra from initial tokens to virtual nodes, but partway through I found that data written after the rebuild is unavailable in the new data center. After decommissioning the nodes in the old data center, everything returns to normal.
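For reference, a minimal sketch of the vnode-related cassandra.yaml settings on the nodes in the new data center (256 is the commonly used token count; the exact value here is illustrative):
num_tokens: 256
# initial_token is left unset when virtual nodes are enabled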

Related

Cassandra node does not pull data after cleanup and start

I accidentally deleted some data files from one of the Cassandra nodes.
After that I stopped that node, removed its data, commitlog and saved_caches directories, and started it again.
The node joined and shows as UN in nodetool status and in OpsCenter, and it owns 15.3% of the tokens.
I expected it to start pulling data from the other nodes, but its data stays at 157.31 KB and it is not doing anything.
The last log entry, half an hour ago, was Handshaking version with DB03/10.2.106.3 (which is its own IP).
How can I balance the data again?
EDIT: The Cassandra version we use is 2.0.12 (corrected from 2.1).
EDIT: in cassandra.yaml there is no auto_bootstrap entry, so it should be using the default of true, according to http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html
Try nodetool rebuild, which DataStax describes as "rebuilds data by streaming from other nodes".
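A minimal usage sketch (the data center name is a placeholder for the DC to stream from; run it on the node that is missing data):
nodetool rebuild -- existing_dc_name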

Backup Hadoop in order to install new cluster, best practice

I am building a new Hadoop cluster (expanding number of nodes and extending capacity of current nodes) and need to back up all of the existing data. Right now I am just tar-ing everything and sending it to another server.
Is there a smarter way of doing this which will allow me to easily deploy once the new cluster is set up?
Edit: I should also point out that I don't store any data on the cluster. I bring data to the cluster, process it, and then send the processed data back to the original server. Any temporary data on the cluster is then deleted.
Use DistCp to transfer the HDFS data to another cluster, or to cloud storage, in order to back it up.
If you want to schedule the backup process, you can use an Oozie DistCp action.
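For illustration, a DistCp run looks roughly like this (namenode hosts, port and paths are placeholders):
hadoop distcp hdfs://old-namenode:8020/user/data hdfs://new-namenode:8020/user/data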

How to migrate a single datacenter cluster to a multiple datacenter cluster in Cassandra?

Please provide a recommended configuration to migrate data from a single data center Cassandra cluster to a multiple data center cluster. Currently I have a single data center cluster environment with the following configuration:
i) No of nodes: 3
ii) Replication Factor : 2
iii) Strategy: SimpleStrategy
iv) endpoint_snitch: SimpleSnitch
Now I am planning to add 2 more nodes in a different location, so I am thinking of moving to a multiple data center cluster with the following configuration:
i) No of nodes: 5
ii) RF: dc1=2, dc2=2
iii) Strategy: NetworkTopologyStrategy
iv) endpoint_snitch: PropertyFileSnitch (I have the cassandra-topology.properties file)
What is the procedure to migrate the data without losing any data?
Please let me know the recommended steps to follow or any guide I can refer to. Please let me know if further info is required.
Complete repairs on all nodes.
Take snapshot on all nodes to have a fall back point.
Decommission each node that is not a pure Cassandra workload. Repair the ring each time you decommission a node.
Update keyspaces to use NetworkTopologyStrategy with a replication factor matching the original RF:
ALTER KEYSPACE keyspace_name
WITH REPLICATION =
{ 'class' : 'NetworkTopologyStrategy', 'datacenter_name' : 2 };
Change the snitch on each node, restarting the node afterwards.
Add the nodes in the new data center. Make sure that when you add them you have auto_bootstrap: false in cassandra.yaml.
Run nodetool rebuild original_dc_name on each new node.
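As a hedged illustration of those steps (IP addresses, DC/rack names and the keyspace name are placeholders), cassandra-topology.properties on every node might contain:
10.0.1.1=DC1:RAC1
10.0.1.2=DC1:RAC1
10.0.1.3=DC1:RAC1
10.0.2.1=DC2:RAC1
10.0.2.2=DC2:RAC1
default=DC1:RAC1
the keyspace update covering both data centers would then be:
ALTER KEYSPACE keyspace_name
WITH REPLICATION =
{ 'class' : 'NetworkTopologyStrategy', 'DC1' : 2, 'DC2' : 2 };
and on each node in the new data center:
nodetool rebuild -- DC1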
I just found this excellent tutorial on migrating Cassandra:
Cassandra Migration To EC2 by highscalability.com
Although the details can be found in the original article, here is an outline of the main steps:
1. Cassandra Multi DC Support
Configure the PropertyFileSnitch
Update the replication strategy
Update the client connection setting
2. Setup Cassandra On EC2
Start the nodes
Stop the EC2 nodes and cleanup
Start the nodes
Place data replicas in the cluster on EC2
3. Decommission The Old DC And Cleanup
Decommission the seed node(s)
Update your client settings
Decommission the old data center
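For the decommission steps, each old node is removed by running the following on that node, one node at a time (a sketch; wait for streaming to finish before moving on):
nodetool decommission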

Restoring a Cassandra cluster from a snapshot failed

Hope someone can help. We are having issues restoring all nodes of a Cassandra 2.0 cluster from a snapshot. I have reviewed the instructions in Restoring from a snapshot.
Specific steps done include:
All data had been flushed from the memtables.
All nodes were compacted down to 1 sstable
Snapshots were taken on all nodes and saved off elsewhere
A new cluster was stood up, installed from scratch as an identical cluster (minus the data)
keyspace and column families were created
All nodes were stopped
commitlogs were cleared on all nodes and verified no sstable files existed
snapshot sstables were copied to each corresponding node under the base table folder
All nodes were restarted
Nodetool repair was run on all nodes
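For reference, a minimal sketch of the commands those steps correspond to (the keyspace name is a placeholder):
nodetool flush
nodetool compact my_keyspace
nodetool snapshot -t restore_point my_keyspace
then, after copying the snapshot sstables into the matching table directories on the new cluster, on each node:
nodetool repair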
The result of these steps, which appear to match the documentation, is:
For a 2-node cluster, nodetool cfstats on each node seems to report approximately the number of keys each node should have, and nodetool status shows the correct division of data by host.
Logging into cqlsh and running a SELECT COUNT(*) on one of the column families, with a LIMIT high enough to return all rows, does not report the correct/original number of rows. It appears to report just the results of one node.
Is there a step missing from the documentation? Why doesn't the SELECT COUNT(*) show all the rows?
Thanks,
dfgriffith

How to balance load of HBase while loading file?

I am new to Apache Hadoop. I have an Apache Hadoop cluster of 3 nodes. I am trying to load a file with 4.5 billion records, but it is not getting distributed to all nodes; the behavior looks like region hotspotting.
I have removed "hbase.hregion.max.filesize" parameter from hbase-site.xml config file.
I observed that with a 4-node cluster it distributes data to 3 nodes, and with a 3-node cluster it distributes to 2 nodes.
I think, I am missing some configuration.
Generally with HBase the main issue is to design row keys that are not monotonically increasing.
If they are, only one region server is used at a time:
http://ikaisays.com/2011/01/25/app-engine-datastore-tip-monotonically-increasing-values-are-bad/
This is HBase Reference Guide about RowKey Design:
http://hbase.apache.org/book.html#rowkey.design
And one more really good article:
http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/
In our case, pre-defining the region splits also improved the loading time:
create 'Some_table', {NAME => 'fam'}, {SPLITS => ['a','d','f','j','m','o','r','t','z']}
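A hedged sketch of how salted/prefixed row keys keep writes spread across those pre-split regions (the table, column family and key values below are made up):
put 'Some_table', 'f|user1001|20140101', 'fam:col', 'v1'
put 'Some_table', 'r|user1002|20140101', 'fam:col', 'v2'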
Regards
Pawel
