Two-node cluster: Node A and Node B.
Service X is running on Node A; Node B is the DC.
We are using the corosync stack with Pacemaker.
The failure-timeout is 10 seconds.
The target-role is Started.
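For reference, a resource set up this way might look roughly like the following in crmsh (the resource agent, resource name, and monitor interval are placeholders; only failure-timeout and target-role come from the description above):
crm configure primitive ServiceX ocf:heartbeat:Dummy \
    meta failure-timeout=10s target-role=Started \
    op monitor interval=10s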
Events happen like this:
Node A sends an event to Node B: Service X is down.
Node B prints "Ignoring expired failure" for Service X.
After this, Service X is never restarted by the cluster.
The questions are:
Why is Node B (the DC) ignoring the expired failure?
Even if the DC ignored the failure this time, Service X is still down, so Node A should keep monitoring the service and report the failure to Node B again, at which point Node B should restart the service. Why is this not happening?
One reason for this may be a time difference between the two servers (the DC and the other machine).
The DC then thinks that the event is old and ignores it. Sync the time on both nodes and then try to re-create the issue.
You can also add the following property to your crm configuration, which makes the cluster try to restart failed resources whose failures have expired:
start-failure-is-fatal="false"
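As a minimal sketch (assuming the crmsh CLI; pcs has an equivalent "pcs property set" command), the property can be set and verified like this:
crm configure property start-failure-is-fatal=false
crm configure show | grep start-failure-is-fatal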
I'm running a Spring Boot application using Infinispan 10.1.8 in a 2-node cluster. The two nodes communicate via JGroups TCP. I configured several REPL_ASYNC caches.
The problem:
One of these caches, at some point, causes the two nodes to exchange the same message over and over, leading to high CPU and memory usage. The only way to stop this is to stop one of the two nodes.
More details: here is the configuration.
import java.util.concurrent.TimeUnit;

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

Configuration replAsyncNoExpirationConfiguration = new ConfigurationBuilder()
    .clustering()
        .cacheMode(CacheMode.REPL_ASYNC)
    .transaction()
        .lockingMode(LockingMode.OPTIMISTIC)
        .transactionMode(TransactionMode.NON_TRANSACTIONAL)
    .statistics().enabled(cacheInfo.isStatsEnabled())
    .locking()
        .concurrencyLevel(32)
        .lockAcquisitionTimeout(15, TimeUnit.SECONDS)
        .isolationLevel(IsolationLevel.READ_COMMITTED)
    .expiration()
        .lifespan(-1)       // entries do not expire
        .maxIdle(-1)        // even when they are idle for some time
        .wakeUpInterval(-1) // disable the periodic eviction process
    .build();
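For context, a configuration built this way would typically be registered with the embedded cache manager before use; a minimal sketch (the cacheManager variable is an assumption, and "formConfig" is the cache discussed below):
cacheManager.defineConfiguration("formConfig", replAsyncNoExpirationConfiguration);
cacheManager.getCache("formConfig").put("someKey", "someValue"); // placeholder usage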
One of these caches (named formConfig) is causing abnormal communication between the two nodes; this is what happens:
with JMeter I generate traffic load targeting only node 1
for some time node 2 receives cache entries from node 1 via SingleRpcCommand with no anomalies; even the formConfig cache behaves properly
after some time a new cache entry is sent to the formConfig cache
At this point the same message seems to keep bouncing between the two nodes:
node 1 sends the entry: mn-node1.company.acme-develop sending command to all: SingleRpcCommand{cacheName='formConfig', command=PutKeyValueCommand{key=SimpleKey [form_config,MECHANICAL,DESIGN,et,7850]
node 2 receives the entry: mn-node2.company.acme-develop received command from mn-node1.company.acme-develop: SingleRpcCommand{cacheName='formConfig', command=PutKeyValueCommand{key=SimpleKey [form_config,MECHANICAL,DESIGN,et,7850]
node 2 sends the entry back to node 1: mn-node2.company.acme-develop sending command to all: SingleRpcCommand{cacheName='formConfig', command=PutKeyValueCommand{key=SimpleKey [form_config,MECHANICAL,DESIGN,et,7850]
node 1 receives the entry: mn-node1.company.acme-develop received command from mn-node2.company.acme-develop: SingleRpcCommand{cacheName='formConfig', command=PutKeyValueCommand{key=SimpleKey [form_config,MECHANICAL,DESIGN,et,7850],
node 1 sends the entry to node 2 and so on and on...
Some other things:
the system is not under load; JMeter is running only a few users in parallel
even after stopping JMeter, the loop doesn't stop
formConfig is the only cache that behaves this way; all the other REPL_ASYNC caches work properly. After deactivating only the formConfig cache, the system works correctly.
I cannot reproduce the problem with two nodes running on my machine
Here's a more complete log file including logs from both nodes.
Other info:
OpenJDK 11 HotSpot
Spring Boot 2.2.7
Infinispan Spring Boot starter 2.2.4
using JBossUserMarshaller
I'm suspecting
something related to transactional configuration
or something related to serialization/deserialization of the cached object
The only scenario where this can happen is when the SimpleKey ends up with a different hashCode().
Are there any exceptions in the log? Are you able to check if the hashCode() is the same after serialization & deserialization of the key?
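A minimal sketch of such a check, using plain Java serialization as an approximation (the actual setup marshals keys with JBossUserMarshaller, so a round trip through that marshaller would be the more faithful test); the class name and key values here are illustrative only:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import org.springframework.cache.interceptor.SimpleKey;

public class KeyHashCheck {
    public static void main(String[] args) throws Exception {
        // A key shaped like the one in the logs above.
        SimpleKey original = new SimpleKey("form_config", "MECHANICAL", "DESIGN", "et", 7850);

        // Serialize the key...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        // ...and deserialize it again.
        Object copy;
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            copy = in.readObject();
        }

        // If these differ, the hashCode is not stable across serialization, which matches the scenario above.
        System.out.println("original hashCode: " + original.hashCode());
        System.out.println("copy hashCode:     " + copy.hashCode());
        System.out.println("equals:            " + original.equals(copy));
    }
}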
I've got the following configuration:
Redis version: 3.2.0
3 master nodes and 3 slave nodes
Each master node is replicated to a slave. Everything works correctly: when one master node is killed, the corresponding slave node becomes the master as expected, and after a few seconds cluster_state returns to ok.
BUT if two master nodes fail simultaneously, none of the associated slave nodes becomes a master, and cluster_state stays in the "fail" state.
Here is the cluster nodes command output:
b60c284a515b31aa6b11022fc07cf1a399171e04 127.0.0.1:7000 master,fail? - 1464690455030 1464690454930 1 disconnected 0-5460
637d1f074419963653b206c5ed7cbed4c3d0ace0 127.0.0.1:7001 master,fail? - 1464690455030 1464690454930 2 disconnected 5461-10922
d2aae2a3d87c6407e002076740c8febf80f37865 127.0.0.1:7003 myself,slave b60c284a515b31aa6b11022fc07cf1a399171e04 0 0 4 connected
72d4c9ce140fb57436c1b21702bf3c646ef29db3 127.0.0.1:7002 master - 0 1464690718480 3 connected 10923-16383
af34a7b2241943baf23e634e81b552d8bf23cdd0 127.0.0.1:7005 slave 72d4c9ce140fb57436c1b21702bf3c646ef29db3 0 1464690718480 6 connected
d0fec0609c9e786ac9ca4629f36cabd7c5c3130c 127.0.0.1:7004 slave 637d1f074419963653b206c5ed7cbed4c3d0ace0 0 1464690718480 5 connected
The slave auto-failover won't happen when at least half of the masters are disconnected, because the failover election requires more than half of the masters to reach consensus.
To start a manual failover, connect to the slave node with redis-cli and send a CLUSTER FAILOVER TAKEOVER command (the TAKEOVER option is required here, since a majority of the masters is not reachable).
In your case:
redis-cli -h 127.0.0.1 -p 7003 cluster failover takeover
After :7003 becomes a master, the other slave will start an automatic failover as well, since more than half (2/3) of the masters are then alive.
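As a quick check afterwards (a sketch, using the host and port from the output above), you can confirm that the cluster state has recovered:
redis-cli -h 127.0.0.1 -p 7003 cluster info | grep cluster_state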
I am creating an ephemeral node with the CuratorFrameworkFactory.newClient method, which takes the ZooKeeper connection string, sessionTimeoutMs, connectionTimeoutMs, and a retry policy. I passed 5*1000 as sessionTimeoutMs and 15*1000 as connectionTimeoutMs. This method is able to create the EPHEMERAL node in my ZooKeeper, but the EPHEMERAL node is not deleted for as long as the application is running.
Why does this happen, given that the session timeout is 5 seconds?
The most probable cause is that your heartbeat setting for ZooKeeper (aka tickTime) is higher, and the minimum session timeout can't be lower than 2*tickTime.
To debug: when an ephemeral node is created, check its ephemeralOwner from zkCli. The value is the session id.
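For example (the path is a placeholder), the ephemeralOwner field shows up in the node's stat output in zkCli:
stat /your-ephemeral-node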
When the client that owns the node closes its session, you should see this line in the ZooKeeper logs:
INFO [ProcessThread(sid:0 cport:2182)::PrepRequestProcessor#486] -
Processed session termination for sessionid: 0x161988b731d000c
In this case the ephemeralOwner was 0x161988b731d000c. If you don't get that line, you would have got some error; in my case it was an EOF exception, which was caused by a mismatch between the client library and server versions.
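As a rough illustration of the answer, you can also read back the session timeout that ZooKeeper actually negotiated; a sketch assuming Curator, with the connection string and znode path as placeholders:
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class SessionTimeoutCheck {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "127.0.0.1:2181",   // placeholder connection string
                5 * 1000,           // requested session timeout
                15 * 1000,          // connection timeout
                new ExponentialBackoffRetry(1000, 3));
        client.start();
        client.blockUntilConnected();

        client.create().withMode(CreateMode.EPHEMERAL).forPath("/demo-ephemeral");

        // If tickTime is 3000 ms, the requested 5000 ms is silently raised to at least 2 * tickTime.
        System.out.println("Negotiated session timeout (ms): "
                + client.getZookeeperClient().getZooKeeper().getSessionTimeout());

        client.close(); // ending the session is what removes the ephemeral node
    }
}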
I can't see the difference between two parameters for the recovery phase of the gateway module.
In the documentation:
The gateway.recover_after_nodes setting (which accepts a number) controls after how many (...) eligible nodes (...) recovery will start.
The gateway.expected_nodes allows to set how many (...) eligible nodes are expected to be in the cluster, and once met, (...) recovery starts
From what I understand, these two settings trigger the recovery phase once the number of nodes is equal to the value set.
Why using one over the other?
And what is the point of using both of them?
For example :
gateway:
    recover_after_nodes: 3
    expected_nodes: 5
In this case, what is the purpose of expected_nodes? Recovery will be triggered as soon as there are 3 nodes. There must be another reason to use it.
I hope my question is clear enough.
Thanks in advance!
When using recover_after_nodes, recover_after_data_nodes, or recover_after_master_nodes, once all set conditions are met the cluster will wait for recover_after_time before starting recovery:
The gateway.recover_after_time setting (which accepts a time value)
sets the time to wait till recovery happens once all
gateway.recover_after...nodes conditions are met.
When using expected_nodes, expected_data_nodes, or expected_master_nodes, recovery will start once all conditions are met - the cluster will not wait. In addition, setting these also defaults recover_after_time to 5 minutes.
In your test case:
gateway:
    recover_after_nodes: 3
    expected_nodes: 5
Once you hit 3 nodes a countdown clock starts, and the cluster will then recover either after 5 minutes (the default) or as soon as you hit 5 nodes. Basically it allows you to set a minimum threshold (recover_after_nodes) with a timeout (recover_after_time) to wait for a desired state (expected_nodes). Recovery starts either recover_after_time after recover_after_nodes is hit, or when expected_nodes is hit (no additional waiting) - whichever comes first.
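To make the implicit default visible, the same behaviour can be spelled out explicitly (a sketch of the equivalent configuration):
gateway:
    recover_after_nodes: 3
    recover_after_time: 5m
    expected_nodes: 5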
Going by the public documentation, there are some misunderstandings in this thread.
http://www.elastic.co/guide/en/elasticsearch/reference/1.x/modules-gateway.html
gateway:
    recover_after_time: 5m
    expected_nodes: 2
In an expected 2 nodes cluster will cause recovery to start 5 minutes
after the first node is up, but once there are 2 nodes in the cluster,
recovery will begin immediately (without waiting).
So the timer defined by recover_after_time already starts after the first node is up, not only after the number of nodes defined in recover_after_nodes has been reached.
I have a Cassandra cluster of 12 nodes on EC2.
Because of a failure we lost one of the nodes completely; that machine does not exist anymore.
So I created a new EC2 instance with a different IP and the same token as the dead node, and since I also had a backup of the data on that node, it works fine.
But the problem is that the dead node's IP still appears as an unreachable node in describe cluster.
As that node (EC2 instance) does not exist anymore, I cannot use nodetool decommission or nodetool disablegossip.
How can I get rid of this unreachable node?
I had the same problem and I resolved it with removenode, which does not require you to find and change the node token.
First, get the node UUID:
nodetool status
DN 192.168.56.201 ? 256 13.1% 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac r1
DN 192.168.56.202 ? 256 12.6% e11d219a-0b65-461e-babc-6485343568f8 r1
UN 192.168.2.91 156.04 KB 256 12.4% e1a33ed4-d613-47a6-8b3b-325650a2bbd4 RAC1
UN 192.168.2.92 156.22 KB 256 13.6% 3a4a086c-36a6-4d69-8b61-864ff37d03c9 RAC1
UN 192.168.2.93 149.6 KB 256 11.3% 20decc72-8d0a-4c3b-8804-cc8bc98fa9e8 RAC1
As you can see the .201 and .202 are dead and on a different network. These have been changed to .91 and .92 without proper decommissioning and recommissioning. I was working on installing the network and made a few mistakes...
Second, remove the .201 with the following command:
nodetool removenode 4fa4d101-d8d2-4de6-9ad7-a487e165c4ac
(in older versions it was nodetool remove ...)
But just like nodetool removetoken ..., it blocks... (see the comment by samarth on psanford's answer). However, it has a side effect: it puts that UUID in a list of nodes to be removed. So next we can force the removal with:
nodetool removenode force
(in older versions it was nodetool remove ...)
Now the node accepts the command and tells me that it is removing the invalid entry:
RemovalStatus: Removing token (-9136982325337481102). Waiting for replication confirmation from [/192.168.2.91,/192.168.2.92].
We also see that it communicates with the two other nodes that are up and thus it takes a little time, but it is still quite fast.
Next a nodetool status does not show the .201 node. I repeat with .202 and now the status is clean.
After that you may also want to run a cleanup, as mentioned in psanford's answer:
nodetool cleanup
The cleanup should be run on all nodes, one by one, to make sure the change is fully taken into account.
Normally when replacing a node you want to set the new node's token to (failed node's token) - 1 and let it bootstrap. As of 1.0 there is now a flag you can specify on startup to replace a dead node: "cassandra.replace_token=".
Since you have already added the new node with the same token there's an extra step:
Move the new node's token to (failed node's token) - 1 using nodetool move
Run nodetool removetoken <failed node's token> from one of the up nodes
Run nodetool cleanup on each node
These are basically the pre-1.0 instructions for replacing a dead node, with the additional token move.
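As a rough sketch of those steps (host names and token values are placeholders; this assumes a pre-vnodes version where nodetool removetoken is still available):
nodetool -h <new-node-ip> move <failed-token-minus-1>
nodetool -h <any-live-node> removetoken <failed-token>
nodetool -h <each-node> cleanup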