I have a question regarding clustering (session replication/failover) in Tomcat 6 using BackupManager. The reason I chose BackupManager is that it replicates the session to only one other server.
I am going to run through the example below to try and explain my question.
I have 6 nodes set up in a Tomcat 6 cluster with BackupManager. The front end is one Apache server using mod_jk with sticky sessions enabled.
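For reference, here's roughly what the Cluster element in each node's server.xml looks like (a sketch, not my exact config; the multicast address and ports are illustrative):

```xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">
  <!-- BackupManager replicates each session to exactly one backup node -->
  <Manager className="org.apache.catalina.ha.session.BackupManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4" port="45564"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000"/>
  </Channel>
</Cluster>
```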
Each node has 1 session each.
node1 has a session from client1
node2 has a session from client2
..
..
Now let's say node1 goes down; assuming node2 is the backup, node2 now has two sessions (for client2 and client1).
The next time client1 makes a request, what exactly happens?
Does Apache "know" that node1 is down and send the request directly to node2?
=OR=
does it try each of the 6 instances and find out the hard way which one the backup is?
I'm not too sure about the inner workings of BackupManager, but my reading of this good URL suggests the replication is intelligent enough to identify the backup.
In-memory session replication is session data replicated across all Tomcat instances within the cluster. Tomcat offers two solutions: replication across all instances within the cluster, or replication to only its backup server; this solution offers guaranteed session data replication ...
SimpleTcpCluster uses Apache Tribes to maintain communication within the group. Group membership is established and maintained by Apache Tribes; it handles server crashes and recovery. Apache Tribes also offers several levels of guaranteed message delivery between group members. This is achieved by updating the in-memory session to reflect any session data changes; the replication is done immediately between members ...
You can reduce the amount of data by using the BackupManager (send only to one node, the backup node).
You'll be able to see this from the logs if notifyListenersOnReplication="true" is set.
On the other hand, you could still use DeltaManager and split your cluster into 3 domains of 2 servers each.
Say these will be node 1 <-> node 2, 3 <-> 4 and 5 <-> 6.
In such a case, configuring the domain worker attribute will ensure that session replication only happens within the domain.
And mod_jk then knows exactly which server to fail over to when node1 goes down.
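A sketch of the matching workers.properties for the 3-domain layout (worker and host names are illustrative):

```properties
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2,node3,node4,node5,node6

# node1 and node2 form replication domain dom1; when node1 fails,
# mod_jk prefers another worker in the same domain, i.e. node2
worker.node1.type=ajp13
worker.node1.host=host1
worker.node1.port=8009
worker.node1.domain=dom1

worker.node2.type=ajp13
worker.node2.host=host2
worker.node2.port=8009
worker.node2.domain=dom1

# node3/node4 get domain=dom2, node5/node6 get domain=dom3 analogously
```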
http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html states
Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions with the potential of having a more scaleable cluster solution with the DeltaManager (you'll need to configure the domain interceptor for this).
And there's a better example at this link:
http://people.apache.org/~mturk/docs/article/ftwai.html
See the "Domain Clustering model" section.
Related
I have a consul cluster which normally should have 5 servers and a bunch of clients. Our script to start the servers was originally configured like this:
consul agent -server -bootstrap-expect 5 -join <ips of all 5 servers>
However, we had to reOS all servers and bootstrap again; one of our servers was down with hardware issues and the bootstrap no longer works.
My question is -- in a situation where there are 5 servers, but 3 are sufficient for quorum, should -bootstrap-expect be set to 3?
The documentation here https://www.consul.io/docs/agent/options.html#_bootstrap_expect seems to imply that -bootstrap-expect should be set to the total number of servers, which means that even a single machine being down will prevent the cluster from bootstrapping.
To be clear our startup scripts are static files, so when I say there are 5 servers it means that up to 5 could be started with the server tag.
In your case, if you don't explicitly need all 5 servers to be online during initial cluster setup, you should set -bootstrap-expect to 3. This will avoid situations similar to what happened, i.e. you have 5 servers and you tell them they must all be online for the initial cluster setup. As the documentation says:
When provided, Consul waits until the specified number of servers are available and then bootstraps the cluster. This allows an initial leader to be elected automatically.
With -bootstrap-expect=3, as soon as 3 of your 5 Consul servers have joined the cluster, leader election will start, and even if the last 2 join much later, the cluster will function. And for that matter, you can have any number of servers join at a later time.
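Concretely, the startup line for each of the 5 servers would then look something like this (IPs are placeholders; -retry-join keeps retrying peers that are still down, which also helps in your reOS scenario):

```sh
consul agent -server -bootstrap-expect 3 \
  -retry-join 10.0.0.1 -retry-join 10.0.0.2 -retry-join 10.0.0.3 \
  -retry-join 10.0.0.4 -retry-join 10.0.0.5
```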
I have two servers in HA mode. I'd like to know if it is possible to deploy an application on the slave server only? If yes, how do I configure it in JGroups? I need to run a specific program that accesses the master database, but I would not like to run it on the master server, to avoid overhead there.
JGroups itself does not know much about WildFly and the deployments; it only creates a communication channel between nodes. I don't know where you get the notion of master/slave, but JGroups always has a single* node marked as coordinator. You can check the membership through Channel.getView().
However, you still need to deploy the app on both nodes and just make it inactive if this is not its target node.
*) If there's no split-brain partition, or similar rare/temporal issues
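A minimal sketch of how a deployment could check whether it is running on the coordinator (the cluster name is illustrative; assumes the JGroups jar is on the classpath):

```java
import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.View;

public class CoordinatorCheck {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();     // default protocol stack
        channel.connect("my-cluster");         // illustrative cluster name
        View view = channel.getView();
        // By JGroups convention, the first member of the view is the coordinator.
        Address coordinator = view.getMembers().get(0);
        boolean amCoordinator = coordinator.equals(channel.getAddress());
        System.out.println("Coordinator: " + coordinator
                + ", this node is coordinator: " + amCoordinator);
        channel.close();
    }
}
```

Your app could use the same check to stay inactive when it is not running on the node you want.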
I have an Elasticsearch cluster in a VPN.
How can my Spring Boot application access the cluster securely if it is located on a separate server outside of the VPN, and how can I configure it in the Spring Boot configuration (application.yml/application.properties)?
I also want the application to connect to the cluster in a way so that if I have e.g. 2 master-eligible nodes and one fails, the connection remains intact.
If you have only 2 master-eligible nodes, you are at risk of the "split-brain" problem. There is an easy formula for calculating the required number of master nodes:
M = 2F + 1 (M = master node count, F = number of master nodes that may fail at the same time)
In your application, define all master nodes as targets for the Elasticsearch client. The client will handle the failover. See the Elasticsearch client documentation or https://qbox.io/blog/rest-calls-made-easy-part-2-sniffing-elasticsearch for an example.
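With Spring Boot and the transport client, for example, the node list typically goes into application.yml along these lines (property names vary between Spring Boot / Spring Data versions, and the cluster and host names here are placeholders):

```yaml
spring:
  data:
    elasticsearch:
      cluster-name: my-cluster
      cluster-nodes: es-node1:9300,es-node2:9300,es-node3:9300
```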
The VPN should not be handled by your application; the infrastructure (servers, firewall) is the right place to address it. Try to develop your application environment-agnostic. This will make your app easier to develop and maintain, and more robust to infrastructure changes.
I have an Ignite instance started in 'server mode' on computer A, created a cache in it, and stored 1M key->value pairs in the cache.
Then I started an Ignite instance in 'server mode' on computer B, which joined the Ignite instance on computer A, so I now have a cluster of 2 nodes.
Is it possible to move all 1M K->V pairs from computer A to computer B (without any interruption to querying or ingesting data) so that computer A can be shut down for maintenance and everything continues to work from computer B?
If this is possible, what are the steps and code to do that (move data from A -> B)?
Ignite distributes data across server nodes according to Cache Modes.
In REPLICATED mode each server holds a copy of all data, so you can shut down any node and data won't be lost.
In PARTITIONED mode you can set CacheConfiguration.backups to 1 (or more) so that data is evenly distributed across server nodes, but each server also holds a copy of data from some other server. In this scenario you can shut down any single node and data won't be lost.
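For the PARTITIONED case, setting a backup is a single property in the cache configuration; a sketch using Ignite's Spring XML format (the cache name is illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- keep 1 backup copy of every partition on another node -->
    <property name="backups" value="1"/>
</bean>
```

With this in place, shutting down computer A leaves the backup copies on computer B available to serve queries.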
There are features named "backups" and "CacheRebalanceMode" in Ignite's cache configuration. I think you can try these.
I have a scenario where we want to use redis, but I am not sure how to go about setting it up. Here is what we want to achieve eventually:
A redundant central redis cluster where all the writes will occur with servers in two aws regions.
Local redis caches on servers which will hold a replica of the complete central cluster.
The reason for this is that we have many servers which need read access only, and we want them to be independent even in case of an outage (where the server cannot reach the main cluster).
I know there might be a "stale data" issue within the caches, but we can tolerate that as long as we get eventual consistency.
What is the correct way to achieve something like that using redis?
Thanks!
You need the Redis Replication (Master-Slave) Architecture.
Redis Replication :
Redis replication is a very simple to use and configure master-slave replication that allows slave Redis servers to be exact copies of master servers. The following are some very important facts about Redis replication:
Redis uses asynchronous replication. Starting with Redis 2.8, however, slaves periodically acknowledge the amount of data processed from the replication stream.
A master can have multiple slaves.
Slaves are able to accept connections from other slaves. Aside from connecting a number of slaves to the same master, slaves can also be connected to other slaves in a cascading-like structure.
Redis replication is non-blocking on the master side. This means that the master will continue to handle queries when one or more slaves perform the initial synchronization.
Replication is also non-blocking on the slave side. While the slave is performing the initial synchronization, it can handle queries using the old version of the dataset, assuming you configured Redis to do so in redis.conf. Otherwise, you can configure Redis slaves to return an error to clients if the replication stream is down. However, after the initial sync, the old dataset must be deleted and the new one must be loaded. The slave will block incoming connections during this brief window (that can be as long as many seconds for very large datasets).
Replication can be used both for scalability, in order to have multiple slaves for read-only queries (for example, slow O(N) operations can be offloaded to slaves), or simply for data redundancy.
It is possible to use replication to avoid the cost of having the master write the full dataset to disk: a typical technique involves configuring your master redis.conf to avoid persisting to disk at all, then connect a slave configured to save from time to time, or with AOF enabled. However this setup must be handled with care, since a restarting master will start with an empty dataset: if the slave tries to synchronize with it, the slave will be emptied as well.
Go through the steps: How to Configure Redis Replication.
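The slave side of the master-slave setup described above boils down to one directive in each slave's redis.conf (the master host/port are placeholders):

```
# on each slave: replicate from the master
slaveof 10.0.0.1 6379
```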
So I decided to go with redis-sentinel.
Using Redis Sentinel, I can set the slave-priority on the cache servers to 0, which will prevent them from ever becoming masters.
I will have one master set up, and a few "backup masters" which will actually be slaves with slave-priority set to a non-zero value, which allows them to take over once the master goes down.
The sentinel will monitor the master, and once the master goes down it will promote one of the "backup masters" to be the new master.
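A sketch of the corresponding configuration (IPs, names, quorum, and timeouts are illustrative):

```
# redis.conf on a read-only cache server: never eligible for promotion
slaveof 10.0.0.1 6379
slave-priority 0

# redis.conf on a "backup master": eligible for promotion
slaveof 10.0.0.1 6379
slave-priority 100

# sentinel.conf: monitor the master; 2 sentinels must agree it is down
sentinel monitor mymaster 10.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```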
More info can be found here