SQL Server Failover Clustering with Two Nodes - windows-clustering

We have a SQL Server failover cluster with two nodes.
When the primary node is active, we get up-to-date data from the database,
but when the secondary node is active, it returns data that is about 2 years old.
My question is: how can I bring the second node's data up to date?
Thanks in advance

Related

Clustered elasticsearch setup (two master nodes)

We are currently setting up an environment with two Elasticsearch instances (clustered servers).
Since it's clustered, we need to make sure that data (indexes) is synced between the two instances.
We do not have the possibility to set up an additional (3rd) server/instance to act as the 'master'.
Therefore we have configured both instances as master and data nodes, so instance 1 is master & data node and instance 2 is also master & data node.
The synchronization works fine when both instances are up and running. But when one instance is down, the other instance keeps trying to connect to the instance that is down, which obviously fails because the instance is down. The node that is up then also stops functioning, because it cannot connect to its 'master' node (which is the node that is down), even though the instance itself is also master-eligible.
The following errors are logged in this case:
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
org.elasticsearch.transport.ConnectTransportException: [xxxxx-xxxxx-2][xx.xx.xx.xx:9300] connect_exception
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: xx.xx.xx.xx/xx.xx.xx.xx:9300
In short: two Elasticsearch master-eligible instances in a clustered setup. When one is down, the other does not function because it cannot connect to the 'master' instance.
Desired result: if one of the master instances is down, the other should continue functioning (without throwing errors).
Any recommendations on how to solve this, without having to set up an additional server as the 'master' with the other 2 as 'slaves'?
Thanks
To elect a master, a quorum of at least 2 master-eligible nodes must be able to vote.
That's why you need a minimum of 3 master-eligible nodes if you want your cluster to survive the loss of one node.
You can just add a small, specialized master node by setting all its other roles to false.
This node can run with very few resources.
As described in this post:
https://discuss.elastic.co/t/master-node-resource-requirement/84609
Dedicated master nodes need persistent storage, but not a lot of it. 1-2 CPU cores and 2-4GB RAM is often sufficient for smaller deployments. As dedicated master nodes do not store data you can also set the heap to a higher percentage (75%-80%) of total RAM than is recommended for data nodes.
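For illustration, a minimal sketch of what such a dedicated master's elasticsearch.yml could look like with the pre-7.x role flags (the comments are assumptions about intent, not from the thread):

    # elasticsearch.yml on the small, master-only node (pre-7.x syntax)
    node.master: true    # eligible to be elected master
    node.data: false     # stores no index data
    node.ingest: false   # runs no ingest pipelines

With this third master-eligible node in place, minimum_master_nodes can be set to 2 and the cluster keeps a quorum when any one node is lost.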
If adding 1 more node is not an option, then you can set
minimum_master_nodes=1. This will let your ES cluster stay up even if only 1 node is up. But it may lead to a split-brain issue, because each node on its own is now allowed to form a cluster.
In that scenario you have to restart the cluster to resolve the split brain.
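If you accept that risk, the setting goes in elasticsearch.yml on both nodes; a sketch (pre-7.x, where this setting still exists):

    # elasticsearch.yml on both nodes - allows a lone node to elect itself master
    discovery.zen.minimum_master_nodes: 1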
I would suggest upgrading to Elasticsearch 7.0 or above. There you can live with two master-eligible nodes, and the split-brain issue will not arise.
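If you do upgrade, the zen discovery settings are replaced by the 7.x bootstrap settings; a sketch with assumed host and node names:

    # elasticsearch.yml on both nodes (7.x)
    discovery.seed_hosts: ["node-1.example", "node-2.example"]
    cluster.initial_master_nodes: ["node-1", "node-2"]   # used only when the cluster first forms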
You should not have 2 master-eligible nodes in the cluster, as it is a very risky thing and can lead to a split-brain issue.
Master nodes don't require many resources, but as you have just two data nodes, you can still live without a dedicated master node (though be aware that this has downsides) just to save cost.
So simply remove the master role from one of the nodes and you should be good to go.
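A sketch of that change on the node being demoted, again with the pre-7.x role flags (assumed, since the thread doesn't pin a version):

    # elasticsearch.yml on the node that should no longer be master-eligible
    node.master: false   # can no longer be elected master
    node.data: true      # still holds and serves data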

How to add a node for failover in Elasticsearch

I currently have a single Elasticsearch node on a Windows server. Can you please explain how to add one extra node on a different machine for failover? I also wonder how the two nodes can be kept identical using NEST.
Usually, you don't run a failover node, but run a cluster of nodes to provide high availability.
A minimum topology of 3 master-eligible nodes, with minimum_master_nodes set to 2 and a sharding strategy that distributes primary and replica shards over the nodes to provide data redundancy, is the minimum viable topology I'd consider running in production.
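A hedged sketch of that topology with the pre-7.x discovery settings (cluster, node, and host names are placeholders):

    # elasticsearch.yml, repeated on all three nodes (only node.name differs)
    cluster.name: my-cluster
    node.name: node-1
    discovery.zen.ping.unicast.hosts: ["node-1.example", "node-2.example", "node-3.example"]
    discovery.zen.minimum_master_nodes: 2   # quorum of the 3 master-eligible nodes

With index.number_of_replicas set to at least 1, every shard then has a copy on a second node, so the data stays fully available when one machine fails.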

Creating keyspace with replication factor 1 on a 4 node cluster

I just started learning Cassandra and I have a dumb question.
Say for example I have a Cassandra cluster of 4 nodes and I create a keyspace myKeySpace using SimpleStrategy and a replication factor of 1. By choosing RF 1, I mean for the data in this keyspace to be replicated to just 1 node in the cluster.
But when I created a table and inserted a row in this keyspace/table, I saw the new row being returned from every node in my cluster (select * on all nodes showed this row).
My question is: since I chose RF 1 for this keyspace, I would have expected one node in the cluster to own this data, not the rest of the nodes.
Please clarify and correct me if my understanding is wrong.
Replication factor 1 does not mean that a single node holds all your data; it means that the cluster holds only a single copy of your data.
With 4 nodes, it basically means that every node holds roughly 25% of your data, and if any node is lost, your data won't be fully available. Note that a select * can be served from any node: the node you query acts as the coordinator and fetches each row from whichever node owns it, which is why the row appears to be on every node.
You can also calculate how your cluster behaves using the cassandra calculator.
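To make this concrete, a small CQL sketch (the table, columns, and key are made up for illustration):

    CREATE KEYSPACE myKeySpace
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

    CREATE TABLE myKeySpace.users (id int PRIMARY KEY, name text);
    INSERT INTO myKeySpace.users (id, name) VALUES (1, 'alice');

You can then ask which node actually owns the row with nodetool getendpoints myKeySpace users 1; with RF 1 it lists exactly one endpoint, even though the select works from any node.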

How is connection pooling/distribution done across a Vertica cluster?

How are connections pooled and distributed across a Vertica cluster?
I am trying to understand how connections are handled in Vertica, in the way Oracle handles its connections through its listener, and how connections are balanced inside the cluster (for better distribution).
Vertica's process of handling a connection is basically as follows:
1. A node receives the connection, making it the initiator node.
2. The initiator node generates the query execution plan and distributes it to the other nodes.
3. The nodes fill in any node-specific details of the execution plan.
4. The nodes execute the query. (Ignoring some stuff here.*)
5. The nodes send the result set back to the initiator node.
6. The initiator node collects the data and does the final aggregations.
7. The initiator node sends the data back to the client.
The recommended way to connect to Vertica is through a load balancer, so that no single node becomes a point of failure. Vertica itself does not distribute connections between nodes; it distributes the query to the other nodes.
* Depending on the query/data/partitioning, a node may need to do some data transfer behind the scenes to complete its part of the query. The query slows down when this happens.
I'm not well versed in Oracle or the details of how systems do their data connection process, so hopefully I'm not too far off the mark of what you're looking for.
From my experience, each node can handle a set number of connections. Once you try to open more than that against a node, it will reject the connection. We ran into this with a map-reduce job that opened a connection in the map function.
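If a separate load balancer isn't available, newer Vertica versions also offer native connection load balancing, which the server enables and the client must opt into; a sketch, assuming a version that supports it:

    -- server side: hand incoming connections off round-robin across the up nodes
    SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN');

The client opts in as well, e.g. with ConnectionLoadBalance=1 in the ODBC/JDBC connection properties; otherwise the node named in the connection string always stays the initiator.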

Unable to add nodes to existing Cassandra Cluster

We have a Cassandra cluster of 6 nodes on EC2, and we have to double its capacity to 12 nodes.
So to add 6 more nodes I followed these steps:
1. Calculated the tokens for 12 nodes and configured the new nodes accordingly.
2. With proper configuration, started the new nodes so that the new nodes would bisect the existing token ranges.
In the beginning all the new nodes were showing streaming in progress.
In the ring status all the nodes were in the "Joining" state.
After 12 hours, 2 nodes completed streaming and came into the normal state.
But the remaining 4 nodes, after streaming some amount of data, are not showing any progress; it looks like they are stuck.
We have installed Cassandra 0.8.2 and have around 500 GB of data on each existing node, stored on EBS volumes.
How can I resolve this issue and get a balanced cluster of 12 nodes?
Can I restart the nodes?
If I clean the data directories of the stuck Cassandra nodes and restart with a fresh installation, will it cause any data loss?
There will not be any data loss if your replication factor is 2 or greater.
Version 0.8.2 of Cassandra has several known issues - please upgrade to 0.8.8 on all the original nodes as well as the new nodes that came up, and then start the procedure over for the nodes that did not complete.
Also, be aware that storing data on EBS volumes is a bad idea:
http://www.mail-archive.com/user#cassandra.apache.org/msg11022.html
While this won't answer your question directly, hopefully it points you in the right direction:
There is a fairly active #cassandra IRC channel on freenode.org.
So here is the answer to why some of our nodes were stuck.
1) We had upgraded from Cassandra 0.7.2 to Cassandra 0.8.2.
2) We load the sstables with the sstable-loader utility.
3) But some data for some of the column families is inserted directly from a Hadoop job, and the data of these column families carried a different version, as we had not upgraded the Cassandra API in Hadoop.
4) Because of this version mismatch, Cassandra throws a 'version mismatch exception' and terminates the streaming.
5) So the solution is to run "nodetool scrub keyspace columnfamily". I used this and my issue was resolved.
So the main thing here is: if you are upgrading the Cassandra cluster and then expanding its capacity, you must run nodetool scrub.
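For reference, a sketch of the command shape (keyspace and column family names are placeholders; run it on each node that holds data for the affected column families):

    nodetool -h <node-address> scrub MyKeyspace MyColumnFamily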
