Cassandra cluster is not recovering after restarting? - cassandra-2.0

I restarted my Cassandra cluster, and after the restart each node shows the other nodes as unavailable. But when I check by logging in to those servers, Cassandra is running on them. Your help is highly appreciated.
nodetool repair output:
Repair session {session-id} for range (id] failed with error java.io.IOException: Cannot proceed on repair because a neighbor (/{ip}) is dead: session failed
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN {ip1} 2.06 GB 256 22.6% {token 1} 1b
DN {ip2} ? 256 24.5% {token 2} 1c
DN {ip3} ? 256 28.9% {token 3} 1c
DN {ip4} ? 256 24.0% {token 4} 1d

One thing to note: you should always restart one node at a time and wait for it to rejoin the cluster (UN) before restarting the others.
I am assuming all the nodes had previously joined the cluster and went out of sync after the restart. Do a rolling restart of all the nodes (one at a time), waiting for each node to rejoin the cluster before moving on.
Cassandra stores communication and peer information in the system.peers and system.local tables, and these can go out of sync if you restart a node while another node is still in the joining state.
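For illustration, a rolling restart could look like the sketch below. The IPs, the service name, and SSH access are assumptions for the example, not details from the question; nodetool drain before the restart is optional but flushes writes cleanly.

# Restart one node at a time (IPs are placeholders).
for ip in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
  ssh "$ip" 'nodetool drain && sudo service cassandra restart'
  # Block until the restarted node reports UN (Up/Normal) before moving on.
  until nodetool status | grep -q "^UN.*$ip"; do sleep 10; done
done
# Sanity-check the peer list each node has stored:
echo "SELECT peer FROM system.peers;" | cqlsh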

Related

What is the behavior of Redis slave when using it as a cache?

I am quite new to Redis, and I am trying to figure out the behavior of Redis slaves in caching. Two of my Redis slaves have a 0% hit rate: one of them has 100+ keyspace_misses while the other has 900+ keyspace_misses. I have the master-slave pairs configured like this:
Master Slave
1 5
2 6
3 7
4 8
The other slave has 0 keyspace_misses, while the last slave has 0 keyspace_misses and 2 keyspace_hits. Is it normal for Redis slaves to do lookups? Or is it caused by a problem in the master? Are there logs that show this problem?
So, how this works is:
A set command is executed on the master.
The data is sent to the slaves for replication.
When a get request arrives, it can land on any of the nodes (master or slave), where the key is looked up and the value is returned if found.
Regarding what you say:
"Two of my Redis slaves have a 0% hit rate" -
you might be missing slaveof ip_to_contact_master port_to_contact_master in your redis.conf file.
"one of them has 100+ keyspace_misses while the other has 900+ keyspace_misses" - keyspace misses are normal, as the requested key may not be in Redis, may have expired, or may not have been replicated yet.
You can read about scaling reads in Redis here.
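If it helps to verify, the replication state and the hit/miss counters can be read straight from INFO; the host and port below are examples:

redis-cli -h 10.0.0.5 -p 6379 info replication   # expect role:slave and master_link_status:up
redis-cli -h 10.0.0.5 -p 6379 info stats | grep -E 'keyspace_(hits|misses)'
# A node reporting role:master when it should be a slave usually means
# redis.conf is missing: slaveof ip_to_contact_master port_to_contact_master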

How to migrate a single-datacenter cluster to a multiple-datacenter cluster in Cassandra?

Please recommend a configuration for migrating data from a single-datacenter Cassandra cluster to a multiple-datacenter Cassandra cluster. Currently I have a single-datacenter cluster with the following configuration:
i) No of nodes: 3
ii) Replication Factor : 2
iii) Strategy: SimpleStrategy
iv) endpoint_snitch: SimpleSnitch
Now I am planning to add 2 more nodes in a different location, so I thought of moving to a multiple-datacenter cluster with the following configuration:
i) No of nodes: 5
ii) RF: dc1=2, dc2=2
iii) Strategy: NetworkTopologyStrategy
iv) endpoint_snitch: PropertyFileSnitch (I have the cassandra-topology.properties file)
What is the procedure to migrate the data without losing any of it?
Please let me know the recommended steps to follow or any guide I can refer to, and let me know if further info is required.
Complete repairs on all nodes.
Take a snapshot on all nodes to have a fallback point.
Decommission each node that is not a pure Cassandra workload. Repair the ring each time you decommission a node.
Update keyspaces to NetworkTopologyStrategy, keeping the replication factor matching the original RF:
ALTER KEYSPACE keyspace_name
WITH REPLICATION =
{ 'class' : 'NetworkTopologyStrategy', 'datacenter_name' : 2 };
Change snitch on each node with restart.
Add the nodes in the new datacenter. Make sure that when you add them you have auto_bootstrap: false in cassandra.yaml.
Alter the keyspace once more so the replication map also includes the new datacenter (e.g. 'dc1' : 2, 'dc2' : 2), then run nodetool rebuild original_dc_name on each new node.
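As a shell-level sketch of those last three steps, with placeholder datacenter names (dc1, dc2) and keyspace name that are not from the original answer:

# On each new dc2 node, before the first start, set in cassandra.yaml:
#   auto_bootstrap: false
#   endpoint_snitch: PropertyFileSnitch
sudo service cassandra start

# Once the dc2 nodes appear in nodetool status, extend the keyspace to dc2:
echo "ALTER KEYSPACE keyspace_name WITH REPLICATION =
  { 'class' : 'NetworkTopologyStrategy', 'dc1' : 2, 'dc2' : 2 };" | cqlsh

# Then stream the existing data from the original datacenter on each new node:
nodetool rebuild dc1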
I just found this excellent tutorial on migrating Cassandra:
Cassandra Migration To EC2 by highscalability.com
Although the details can be found in the original article, an outline of the main steps is:
1. Cassandra Multi-DC Support
   Configure the PropertyFileSnitch (example file below)
   Update the replication strategy
   Update the client connection settings
2. Set Up Cassandra On EC2
   Start the nodes
   Stop the EC2 nodes and clean up
   Start the nodes
   Place data replicas in the cluster on EC2
3. Decommission The Old DC And Clean Up
   Decommission the seed node(s)
   Update your client settings
   Decommission the old data center
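For the PropertyFileSnitch step, cassandra-topology.properties is a plain IP=datacenter:rack mapping that must be identical on every node; the addresses and names below are invented for the example:

cat > /etc/cassandra/cassandra-topology.properties <<'EOF'
# existing datacenter
10.0.1.1=DC1:RAC1
10.0.1.2=DC1:RAC1
10.0.1.3=DC1:RAC2
# new datacenter
10.0.2.1=DC2:RAC1
10.0.2.2=DC2:RAC1
# fallback for nodes not listed explicitly
default=DC1:RAC1
EOF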

Couchbase XDCR replication with views

I have a question. I have Couchbase installed in this situation:
2 clusters with:
cluster 1:
192.168.1.91
192.168.1.92
192.168.1.93
and cluster 2:
192.168.1.94
192.168.1.95
192.168.1.96
I want to set up replication, so I have created a bucket (test) with 2 replicas. I think the data is replicated within cluster 1 and within cluster 2. I have set up 2 XDCR replications: one from cluster 1 to cluster 2 and another from cluster 2 to cluster 1. It seems to be working, but I don't understand some things:
1) Data is replicated from cluster 1 to cluster 2, but is there a way to replicate the views as well?
2) I have seen another thing: in bucket test I have, for example, 1000 records, so more or less 300 per node. If a node goes down, I thought I would still see all 1000 records (for this reason I need replication and set 2 replicas on the bucket), but instead I see only 600 records in my bucket test. Why is this?
Thanks a lot to anyone.
1) Views aren't replicated. What you should do is create the same views on both clusters; they will be updated as data is replicated between your clusters.
2) My guess is that when your node crashes you are not actually failing it over. This needs to be done in order to activate the replicas on the other nodes.
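A hedged sketch of both points; the design-document name, credentials, and ddoc.json file are invented, and couchbase-cli flag spellings vary somewhat between Couchbase versions:

# 1) Push the same design document (your views) to each cluster separately;
#    8092 is the view REST port and 'test' is the bucket from the question.
curl -X PUT -H 'Content-Type: application/json' \
     -d @ddoc.json http://192.168.1.91:8092/test/_design/mydesign
curl -X PUT -H 'Content-Type: application/json' \
     -d @ddoc.json http://192.168.1.94:8092/test/_design/mydesign

# 2) Fail over the dead node so its replicas are promoted on the other nodes.
couchbase-cli failover -c 192.168.1.91:8091 -u Administrator -p password \
     --server-failover 192.168.1.93:8091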

restore a cassandra cluster from snapshot failed

Hope someone can help. We are having issues restoring all nodes of a Cassandra 2.0 cluster from a snapshot. I have reviewed the instructions in "Restoring from a snapshot".
Specific steps done include:
All data had been flushed from the memtables.
All nodes were compacted down to 1 sstable
Snapshots were taken on all nodes and saved off elsewhere
New cluster stood up: an install from scratch of an identical cluster (minus the data)
keyspace and column families were created
All nodes were stopped
Commit logs were cleared on all nodes, and we verified that no sstable files existed
Snapshot sstables were copied to each corresponding node under the base table folder
All nodes were restarted
Nodetool repair was run on all nodes
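For reference, the per-node commands behind those steps look roughly like the following; the keyspace, table, snapshot tag, and paths are placeholders:

# Source cluster, per node:
nodetool flush
nodetool snapshot -t restorepoint mykeyspace
# snapshot files land under <data_dir>/mykeyspace/<table>/snapshots/restorepoint/

# New cluster, matching node, with Cassandra stopped:
rm -f /var/lib/cassandra/commitlog/*
cp /backup/restorepoint/* /var/lib/cassandra/data/mykeyspace/mytable/
# then start Cassandra and run:
nodetool repair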
The result of these steps, which appear to match the documentation, is:
For a 2-node cluster, nodetool cfstats on each node reports approximately the number of keys each node should have, and nodetool status shows the correct division of data by host.
Logging into cqlsh and doing a select count(*) on one of the column families, with a limit high enough to return all rows, does not report the correct/original number of rows. It appears to report just the results from one node.
Is there a step missing from the documentation? Why doesn't a select count(*) show all the rows?
Thanks,
dfgriffith

Poor write Performance by HBase client

I'm using the HBase client in my application server (cum web server) with an HBase cluster of 6 nodes running CDH3u4 (HBase 0.90). The HBase/Hadoop services running on the cluster are:
NODENAME-- ROLE
Node1 -- NameNode
Node2 -- RegionServer, SecondaryNameNode, DataNode, Master
Node3 -- RegionServer, DataNode, Zookeeper
Node4 -- RegionServer, DataNode, Zookeeper
Node5 -- RegionServer, DataNode, Zookeeper
Node6 -- Cloudera Manager, RegionServer, DataNode
I'm using the following optimizations for my HBase client:
auto-flush = false
clearBufferOnFail = true
HTable bufferSize = 12 MB
Put setWriteToWAL = false (I'm fine with a small amount of data loss).
To keep reads and writes closely consistent, I'm calling flush-commits on all the buffered tables every 2 seconds.
In my application, I place each HBase write call in a queue (asynchronously) and drain the queue using 20 consumer threads. Hitting the web server locally with curl, I see an HBase TPS of 2500 after curl completes, but under load testing, with requests coming in at 1200 hits per second across 3 application servers, the consumer (drain) threads that write to HBase are not keeping up with the input rate. I see no more than 600 TPS when the request rate is 1200 hits per second.
Can anyone suggest what we can do to improve performance? I've tried reducing the threads to 7 on each of the 3 app servers, but still no effect. An expert opinion would be helpful. As this is a production server, I am not considering swapping the roles unless someone points out a severe performance benefit.
[EDIT]:
Just to highlight/clarify our HBase write pattern: our first transaction checks for the row in Table-A (using HTable.exists). It fails to find the row the first time, so it writes to three tables. The subsequent 4 transactions make the existence check on Table-A and, as they find the row, write to only 1 table.
So that's a pretty ancient version of HBase. As of Aug 18, 2013, I would recommend upgrading to something based off of 0.94.x.
Other than that, it's really hard to tell you for sure. There are lots of tuning knobs. You should:
Make sure that HDFS has enough xceivers.
Make sure that HBase has enough heap space.
Make sure there is no swapping
Make sure there are enough handlers.
Make sure that you have compression turned on [1] (see the sketch below).
Check disk io
Make sure that your row keys, column family names, column qualifiers, and values are as small as possible
Make sure that your writes are well distributed across your key space.
Make sure your regions are (pre-)split (also sketched below).
If you're on a recent version, you might want to look at data block encoding. [2]
After all of those things are taken care of, you can start looking at logs and jstacks.
[1] https://hbase.apache.org/book/compression.html
[2] https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.html
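As a sketch of the compression and pre-split items above (the table name, column family, and split keys are invented, and SNAPPY must actually be installed on the cluster):

# Enable compression on an existing family and create a pre-split table,
# driving the HBase shell from bash:
echo "disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
enable 'mytable'
create 'newtable', 'cf', SPLITS => ['2', '4', '6', '8']" | hbase shell

# xceivers are an HDFS-side setting in hdfs-site.xml (note the property
# name's historical misspelling):
#   <property>
#     <name>dfs.datanode.max.xcievers</name><value>4096</value>
#   </property>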
