I am facing two problems: 1. failure in segment mirroring, and 2. "segment postmaster has exited". I have 1 master and 8 segments (4 primaries and 4 mirrors). Two of those segments did not come up, so the database cannot start.
I tried the CT_METADATA change and also added gp_crash_recovery_abort_suppress_fatal=on to the configuration, but the segments still did not come up.
The major issue is that on one primary segment the data size has shrunk from 1.8 TB to 3 GB.
I need the database to start and to recover the data.
I am quite new to Redis, and I am trying to figure out the behavior of Redis slaves in caching. Two of my Redis slaves have a 0% hit rate: one of them has 100+ keyspace_misses while the other has 900+ keyspace_misses. I have the master/slave pairs configured like this:
Master Slave
1 5
2 6
3 7
4 8
Another slave has 0 keyspace_misses, while the last slave has 0 keyspace_misses and 2 keyspace_hits. Is it normal for Redis slaves to do lookups? Or is it caused by a problem in the master? Are there logs that would show this problem?
So, how this works is:
a set command is executed on the master.
that data is sent to the slave for replication.
when there is a get request, it lands on any of the nodes (master or slave), where the key is looked up and the value is returned if found (see the sketch below).
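For example, a minimal Jedis sketch of that flow might look like this (the hostnames and ports are placeholders for your actual master and slave):

import redis.clients.jedis.Jedis;

public class ReplicationDemo {
    public static void main(String[] args) {
        // Write goes to the master; it is replicated to the slave asynchronously.
        Jedis master = new Jedis("master-host", 6379);
        master.set("user:42", "alice");

        // A read served by the slave only hits if the key has already been
        // replicated (and has not expired); otherwise it counts as a keyspace miss.
        Jedis slave = new Jedis("slave-host", 6380);
        String value = slave.get("user:42");   // may be null right after the write
        System.out.println(value);

        master.close();
        slave.close();
    }
}

A read served by a slave right after a write can legitimately miss, because replication is asynchronous.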
Regarding what you say:
Two of my Redis slaves have a 0% hit rate -
You might be missing slaveof ip_to_contact_master port_to_contact_master in your redis.conf file
one of them has 100+ keyspace_misses while the other has 900+ keyspace_misses - Keyspace misses are normal, as the incoming key may not be in Redis, may have expired, or may not have been replicated yet.
You can read about scaling reads in Redis here.
Cassandra Version : 2.0.3
Environment : Linux OS, 2GB RAM, 180 GB Hard Disk,
DC1(Old) - 3 Nodes,
DC2(New) - 3 Nodes,
All machines have the same configuration.
I followed this link to enable virtual nodes in an existing data center.
Data is continuously written into Cassandra. After rebuilding the nodes in the new data center, all nodes in the new data center receive data. As described in the link above, I then start migrating Cassandra from initial tokens to virtual nodes. But in between I find that some data (data which was added after the rebuild) is unavailable in the new data center; it has been missing there since the rebuild. After decommissioning the nodes in the old data center, everything returns to normal.
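If it helps to narrow this down, one way to check whether a given row ever reached DC2 (rather than being transparently served from DC1) is to pin a read to the new data center. This is only a rough sketch with the DataStax Java driver; the contact point, keyspace, table, and key below are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class Dc2ReadCheck {
    public static void main(String[] args) {
        // Route requests to DC2 only, so the read cannot be silently answered by DC1.
        Cluster cluster = Cluster.builder()
                .addContactPoint("new-dc-node")                 // placeholder: a node in DC2
                .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC2"))
                .build();
        Session session = cluster.connect("my_keyspace");        // placeholder keyspace

        SimpleStatement stmt = new SimpleStatement(
                "SELECT * FROM my_table WHERE id = 42");          // placeholder table/key
        stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);     // stay inside DC2

        ResultSet rs = session.execute(stmt);
        System.out.println(rs.one());                             // null => row not present in DC2

        cluster.close();
    }
}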
I have a question.
I have Couchbase installed in this situation:
2 clusters:
cluster 1:
192.168.1.91
192.168.1.92
192.168.1.93
and cluster 2:
192.168.1.94
192.168.1.95
192.168.1.96
I want to set up replication, so I have created a bucket (test) with 2 replicas.
I think that data is then replicated within cluster 1 and within cluster 2.
I have set up 2 XDCR replications:
one from cluster 1 to cluster 2 and another one
from cluster 2 to cluster 1.
It seems to be working, but I don't understand some things:
1) Data is replicated from cluster 1 to cluster 2, but is there a way to also replicate the views?
2) I have noticed something else: in bucket test I have, for example, 1000 records,
so roughly 300 per node.
If a node goes down, I thought I would still see all 1000 records everywhere (this is why I need replication and set 2 replicas on the bucket), but instead I see only 600 records in my bucket test. Why is this?
Thanks a lot to anyone who can help.
1) Views aren't replicated. What you should do is create the same views on both clusters, and they will be updated as data is replicated between your clusters.
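As a rough illustration, assuming the classic Couchbase Java SDK 1.x, you could publish the same design document against a node of each cluster; the design document and view names below are made up:

import java.net.URI;
import java.util.Arrays;

import com.couchbase.client.CouchbaseClient;
import com.couchbase.client.protocol.views.DesignDocument;
import com.couchbase.client.protocol.views.ViewDesign;

public class CreateSameViewOnBothClusters {
    static void createView(String nodeAddress) throws Exception {
        CouchbaseClient client = new CouchbaseClient(
                Arrays.asList(URI.create("http://" + nodeAddress + ":8091/pools")),
                "test", "");                                   // bucket "test", empty password

        // Hypothetical design document/view; use whatever your application expects.
        DesignDocument designDoc = new DesignDocument("dev_by_id");
        designDoc.getViews().add(new ViewDesign("by_id",
                "function (doc, meta) { emit(meta.id, null); }"));
        client.createDesignDoc(designDoc);

        client.shutdown();
    }

    public static void main(String[] args) throws Exception {
        createView("192.168.1.91");   // a node in cluster 1
        createView("192.168.1.94");   // a node in cluster 2
    }
}

XDCR only carries the documents; each cluster indexes them locally with whatever design documents it has, which is why the views have to exist on both sides.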
2) My guess is that when your node crashes you are not actually failing it over. This needs to be done in order to activate the replicas on the other nodes.
Hope someone can help. We are having issues restoring all nodes of a Cassandra 2.0 cluster from a snapshot. I have reviewed the instructions in [Restoring from a snapshot][1].
Specific steps done include:
All data had been flushed from the memtables.
All nodes were compacted down to 1 sstable
Snapshots were taken on all nodes and saved off elsewhere
New cluster stood up, installed from scratch as an identical cluster (minus the data)
keyspace and column families were created
All nodes were stopped
commitlogs were cleared on all nodes and verified no sstable files existed
snapshot sstables were copied to each corresponding node under the base table folder
All nodes were restarted
Nodetool repair was run on all nodes
The result of these steps, which appear to match the documentation, is:
For a 2-node cluster, nodetool cfstats on each node seems to report the approximate number of keys each node should have, and nodetool status shows the correct division of data by host.
Logging into cqlsh and doing a select count(*) on one of the column families, with a limit high enough to return all rows, does not report back the correct/original number of rows. It appears to report just the results of one node.
Is there a step missing from the documentation? Why doesn't a select count(*) show all the rows?
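(For anyone reproducing this, a minimal way to re-run the same count outside cqlsh with the DataStax Java driver could look like the sketch below; the contact point, keyspace, and table names are placeholders.)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CountCheck {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("node1-address")   // placeholder: any node of the new cluster
                .build();
        Session session = cluster.connect();

        // Same query as in cqlsh; the LIMIT must exceed the expected row count,
        // otherwise COUNT(*) stops early.
        Row row = session.execute(
                "SELECT COUNT(*) FROM my_keyspace.my_table LIMIT 10000000").one();
        System.out.println("rows seen: " + row.getLong(0));

        cluster.close();
    }
}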
Thanks,
dfgriffith
I'm using the HBase client in my application server (cum web server) with an HBase
cluster setup of 6 nodes using CDH3u4 (HBase 0.90). The HBase/Hadoop services
running on the cluster are:
NODENAME-- ROLE
Node1 -- NameNode
Node2 -- RegionServer, SecondaryNameNode, DataNode, Master
Node3 -- RegionServer, DataNode, Zookeeper
Node4 -- RegionServer, DataNode, Zookeeper
Node5 -- RegionServer, DataNode, Zookeeper
Node6 -- Cloudera Manager, RegionServer, DataNode
I'm using the following optimizations for my HBase client:
auto-flush = false
clearBufferOnFail = true
HTable write buffer size = 12 MB
Put setWriteToWAL = false (I'm fine with some data loss).
In order to stay closely consistent between reads and writes, I'm calling
flushCommits on all the buffered tables every 2 seconds (see the sketch below).
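Roughly, that client-side setup corresponds to something like the following against the 0.90 HTable API; the table name and the flush scheduling are placeholders:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class BufferedWriterSetup {
    public static HTable openBufferedTable(String tableName) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, tableName);

        // Buffer puts client-side; clear the buffer if a flush fails.
        table.setAutoFlush(false, true);
        table.setWriteBufferSize(12 * 1024 * 1024);   // 12 MB
        return table;
    }

    public static void main(String[] args) throws Exception {
        final HTable table = openBufferedTable("Table-A");   // placeholder table name

        // Flush the buffered puts every 2 seconds, as described above.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    table.flushCommits();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 2, 2, TimeUnit.SECONDS);
    }
}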
In my application, I place the HBase write calls in a queue (asynchronously) and
drain the queue using 20 consumer threads. When hitting the web server locally
using curl, I see a TPS of 2500 for HBase after curl completes, but
under a load test where requests arrive at a high rate of 1200 hits per second
across 3 application servers, the consumer (drain) threads which are responsible for
writing to HBase do not write data at a rate comparable to the input rate. I
see no more than 600 TPS when the request rate is 1200 hits per second.
Can anyone suggest what we can do to improve performance? I've tried
reducing the threads to 7 on each of the 3 app servers, but it had no effect. An expert
opinion would be helpful. As this is a production cluster, I'm not planning
to swap the node roles unless someone points out a severe performance benefit.
[EDIT]:
Just to highlight/clarify our HBase writing pattern: the 1st transaction checks for the row in Table-A (using HTable.exists). It fails to find the row the first time and so writes to three tables. The subsequent 4 transactions make the same existence check on Table-A and, as they find the row, write only to 1 table.
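Roughly, the per-request pattern looks like the sketch below; the column family, qualifier, value, and the second and third table names are placeholders, and the single-table branch is assumed to write to Table-A:

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class WritePattern {
    // tableA, tableB, tableC are pre-opened buffered HTables (see the setup sketch above).
    static void handle(HTable tableA, HTable tableB, HTable tableC, byte[] rowKey)
            throws Exception {
        byte[] cf = Bytes.toBytes("d");          // placeholder column family
        byte[] qual = Bytes.toBytes("v");        // placeholder qualifier
        byte[] value = Bytes.toBytes("payload"); // placeholder value

        if (!tableA.exists(new Get(rowKey))) {
            // First transaction for this row: write to all three tables.
            for (HTable t : new HTable[] { tableA, tableB, tableC }) {
                Put put = new Put(rowKey);
                put.add(cf, qual, value);
                put.setWriteToWAL(false);        // WAL skipped, as noted above
                t.put(put);
            }
        } else {
            // Subsequent transactions: the row already exists, write to one table only
            // (assumed to be Table-A here).
            Put put = new Put(rowKey);
            put.add(cf, qual, value);
            put.setWriteToWAL(false);
            tableA.put(put);
        }
    }
}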
So that's a pretty ancient version of HBase. As of Aug 18, 2013, I would recommend upgrading to something based off of 0.94.x.
Other than that, it's really hard to tell you for sure. There are lots of tuning knobs. You should:
Make sure that HDFS has enough xceivers.
Make sure that HBase has enough heap space.
Make sure there is no swapping
Make sure there are enough handlers.
Make sure that you have compression turned on. [1]
Check disk I/O
Make sure that your row keys, column family names, column qualifiers, and values are as small as possible
Make sure that your writes are well distributed across your key space
Make sure your regions are (pre-)split (a short sketch follows the links at the end)
If you're on a recent version then you might want to look at encoding [2]
After all of those things are taken care of, you can start looking at logs and jstacks.
[1] https://hbase.apache.org/book/compression.html
[2] https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.html
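As a rough illustration of the compression and pre-splitting items above (the table layout, split points, and the choice of SNAPPY are assumptions against the 0.94-era API, not taken from your setup), creating a pre-split, compressed table might look like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePresplitTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Short family name and compression keep the on-disk footprint small.
        HColumnDescriptor family = new HColumnDescriptor("d");
        family.setCompressionType(Compression.Algorithm.SNAPPY);

        HTableDescriptor desc = new HTableDescriptor("Table-A");   // placeholder table name
        desc.addFamily(family);

        // Pre-split so writes spread across region servers from the start;
        // the split points are placeholders and should match your real key space.
        byte[][] splits = new byte[][] {
                Bytes.toBytes("2"), Bytes.toBytes("4"),
                Bytes.toBytes("6"), Bytes.toBytes("8")
        };
        admin.createTable(desc, splits);
        admin.close();
    }
}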