My application runs on two physical servers hosting three managed servers in a cluster. We have two databases that are not clustered. I would like to configure the connection pools in such a way that all managed servers on physical machine A go to DB1 and fail over to DB2, while machine B always goes to DB2 and moves to DB1 on failover. How do I configure the connection pool to achieve this behavior?
I haven't tried this, but a MultiDataSource seems to be what you're looking for.
Note that this is not a MultiPool, since a MultiPool will pick up connections from any of its pools based on load balancing or high availability.
Define 2 datasources, one for DB1 and one for DB2.
A MultiDataSource then lets you pick up connections from either datasource based on its failover or load-balancing algorithm.
http://albinoraclesoa.blogspot.in/2012/02/jdbc-multi-data-sources-in-weblogic.html
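Since the preference differs per machine, one way to get it (an assumption on my part, not something I have tested) is to create two failover-algorithm MultiDataSources with the pool order reversed (DB1-first and DB2-first) and target each one at the managed servers of the corresponding machine. Application code then looks the MultiDataSource up like any ordinary datasource; a minimal sketch, where the JNDI name jdbc/AppMultiDS is hypothetical:
```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class Db1FirstLookup {
    // Borrow a connection from the MultiDataSource. The failover
    // (preferred DB first, the other DB on failure) happens inside
    // WebLogic, not in application code.
    public Connection borrow() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/AppMultiDS");
        return ds.getConnection();
    }
}
```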
We have configured ActiveMQ to use JDBC Master Slave. Our data centers follow an active/passive model, so we are thinking of replicating the database used for Master Slave from the active center to the passive center. But we are seeing three tables: activemq_msgs, activemq_lock and activemq_ack. We are not sure whether to replicate one of them or all of them to the passive center, and even if we replicate, whether bringing up Master Slave on the replicated database will work. This is the first time we are configuring this, and we haven't found much documentation on the internet to get started. Please provide your inputs.
If the "active" broker creates and uses those tables in the database then it stands to reason that the "passive" broker would too once it becomes active. In fact, it stands to reason that any table created by the "active" broker would be used by the "passive" broker once it becomes active. Therefore you should replicate all ActiveMQ related database tables.
I have an Elasticsearch cluster in a VPN.
How can my Spring Boot application access the cluster securely if it is located on a separate server outside of the VPN, and how can I configure it in the Spring Boot configuration (application.yml/application.properties)?
I also want the application to connect to the cluster in such a way that if I have, e.g., 2 master-eligible nodes and one fails, the connection remains intact.
If you have only 2 master-eligible nodes, you are at risk of the "split brain" problem. There is an easy formula for calculating the required number of master nodes:
M = 2F + 1 (M = master-eligible node count, F = number of master-eligible nodes that may fail at the same time)
For example, to survive the failure of one master-eligible node (F = 1) you need M = 3.
In your application, define all master-eligible nodes as targets for the Elasticsearch client; the client will handle the failover. See the Elasticsearch client documentation or https://qbox.io/blog/rest-calls-made-easy-part-2-sniffing-elasticsearch for an example, or the sketch below.
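With the low-level Java REST client this amounts to listing every eligible node and, optionally, enabling sniffing so the client tracks cluster membership on its own; a minimal sketch, where the hostnames and ports are assumptions:
```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.sniff.Sniffer;

public class EsClientFactory {
    public static RestClient build() {
        // List several nodes so the client can fail over if one dies.
        RestClient client = RestClient.builder(
                new HttpHost("es-node-1.internal", 9200, "http"),
                new HttpHost("es-node-2.internal", 9200, "http"),
                new HttpHost("es-node-3.internal", 9200, "http"))
            .build();
        // Sniffing rediscovers the current cluster members periodically,
        // so the client keeps working as nodes come and go.
        Sniffer.builder(client).setSniffIntervalMillis(30_000).build();
        return client;
    }
}
```
If you rely on Spring Boot auto-configuration instead, the same idea is a comma-separated host list in application.properties (e.g. the spring.elasticsearch.rest.uris property in recent Spring Boot 2.x versions).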
The VPN should not be handled by your application; the infrastructure (servers, firewall) is the right place to address it. Try to develop your application environment-agnostic: this will make your app easier to develop and maintain, and more robust to infrastructure changes.
IBM DB2 has a feature for HADR databases: read on standby. This allows clients to connect to the standby database for read-only queries (with certain restrictions on data types and isolation levels).
I am trying to configure this as a datasource in an application which runs on the WebSphere Liberty profile.
Previously, this application was using Automatic Client Re-route (which ensures that all connections are directed to the current primary).
However, I would like to configure it in such a way that SELECTs / read-only flows run on the standby database and everything else runs on the primary. This should also keep working after a takeover has been performed on the database (that is, the standby becoming primary and vice versa). The purpose of doing this is to divide the number of connections between all available databases.
What is the correct way to do this?
Things I have attempted (assume my servers are dbserver1 and dbserver2):
1. Create 2 datasources, one with the DB URL of dbserver1 and the other with dbserver2. This works until a takeover is performed and the roles of the servers are switched.
2. Create 2 datasources, one with the DB URL of dbserver1 (with the Automatic Client Re-route parameters) and the other with dbserver2 only. With this configuration the application works fine, but if dbserver2 becomes the primary then all queries are executed on it.
3. Set up HAProxy and use it to identify which server is the primary and which is the standby, and create 2 datasources pointing to HAProxy. When a takeover is carried out on the database, connection exceptions start to occur (not just at the time of takeover, but for some time following it).
The appropriate way is described in the whitepaper "Enabling continuous access to read on standby databases using Virtual IP addresses", linked from the Db2 documentation for read on standby.
Virtual IP addresses are assigned to both roles, primary and standby, and are cataloged as database aliases. WebSphere or other clients connect to either the primary or the standby datasource. When there is a takeover or failover, the virtual IP addresses are reassigned to the appropriate server, so the client continues to be routed to the desired server, e.g. the standby.
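On the application side the split then reduces to choosing between two connection pools; a minimal sketch, assuming the two Liberty datasources are bound at the hypothetical JNDI names jdbc/hadrPrimary and jdbc/hadrStandby, each resolving to the virtual IP of its role:
```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class HadrRouter {
    private final DataSource primary;
    private final DataSource standby;

    public HadrRouter() throws NamingException {
        InitialContext ctx = new InitialContext();
        // Each alias sits behind a virtual IP that always follows
        // the corresponding HADR role, even after a takeover.
        primary = (DataSource) ctx.lookup("jdbc/hadrPrimary");
        standby = (DataSource) ctx.lookup("jdbc/hadrStandby");
    }

    // Read-only flows borrow from the standby pool; writes use the primary.
    public Connection getConnection(boolean readOnly) throws SQLException {
        Connection con = (readOnly ? standby : primary).getConnection();
        con.setReadOnly(readOnly); // the standby accepts read-only work only
        return con;
    }
}
```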
SnappyData documentation and architecture diagrams seem to indicate that a JDBC thin client connection goes from the client to a locator, which then routes it to a direct connection to a server.
If this is true, then I can run JDBC queries without a Lead node, correct?
Yes, that is correct. The locator provides load and connectivity information back to the client, which is then able to connect to one or more servers, either for direct access to a bucket for low-latency queries or, more importantly, for HA: it can fail over and fail back.
So, yes, your connected clients will continue to function even when the locator goes away. Note that the "lead" plays a different role than the locator. Its primary function is to host the Spark driver, orchestrate Spark jobs and provide HA to Spark. With no lead, you won't be able to run such jobs.
In addition to what @jagsr has mentioned, if you do not intend to run lead nodes (and thus no Spark jobs or column store), then you can run the cluster as a pure row store using snappy-start-all.sh rowstore (see the rowstore docs).
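A minimal sketch of such a lead-less JDBC connection, assuming the default client port 1527 and the SnappyData JDBC driver on the classpath (the hostname is a placeholder):
```java
import java.sql.Connection;
import java.sql.DriverManager;

public class SnappyLocatorDemo {
    public static void main(String[] args) throws Exception {
        // The URL targets the locator; the locator hands the client
        // a data server to talk to directly from then on.
        try (Connection con = DriverManager.getConnection(
                "jdbc:snappydata://locator-host:1527/")) {
            System.out.println("Connected via "
                    + con.getMetaData().getDatabaseProductName());
        }
    }
}
```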
I have a question regarding clustering (session replication/failover) in Tomcat 6 using the BackupManager. The reason I chose the BackupManager is that it replicates the session to only one other server.
I am going to run through the example below to try and explain my question.
I have 6 nodes set up in a Tomcat 6 cluster with the BackupManager. The front end is one Apache server using mod_jk with sticky sessions enabled.
Each node has 1 session each.
node1 has a session from client1
node2 has a session from client2
..
..
Now let's say node1 goes down; assuming node2 is the backup, node2 now has two sessions (for client2 and client1).
The next time client1 makes a request, what exactly happens?
Does Apache "know" that node1 is down and send the request directly to node2?
=OR=
Does it try each of the 6 instances and find out the hard way which one is the backup?
I'm not too sure about the workings of the BackupManager, but my reading of this URL suggests the replication is intelligent enough to identify the backup:
In-memory session replication is session data replicated across all Tomcat instances within the cluster. Tomcat offers two solutions: replication across all instances within the cluster, or replication to only its backup server. This solution offers guaranteed session data replication ...
SimpleTcpCluster uses Apache Tribes to communicate with the communications group. Group membership is established and maintained by Apache Tribes, which handles server crashes and recovery. Apache Tribes also offers several levels of guaranteed message delivery between group members. This is achieved by updating the in-memory session to reflect any session data changes; the replication is done immediately between members ...
You can reduce the amount of data by using the BackupManager (send only to one node, the backup node).
You'll be able to see this in the logs if notifyListenersOnReplication="true" is set.
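Replication piggybacks on session mutations: a session attribute write is what gets shipped to the backup node. A minimal servlet sketch (class and attribute names are illustrative):
```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CounterServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpSession session = req.getSession(true);
        Integer hits = (Integer) session.getAttribute("hits");
        hits = (hits == null) ? 1 : hits + 1;
        // setAttribute marks the session dirty; the change is replicated
        // to the backup node. Attribute values must be Serializable so
        // the cluster can ship them over the wire.
        session.setAttribute("hits", hits);
        resp.getWriter().println("hits=" + hits);
    }
}
```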
On the other hand, you could still use the DeltaManager and split your cluster into 3 domains of 2 servers each.
Say these will be node 1 <-> node 2, 3 <-> 4 and 5 <-> 6.
In such a case, configuring the domain worker attribute will ensure that session replication only happens within the domain.
And mod_jk then definitely knows which server to look at when node1 fails.
http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html states:
Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions with the potential of having a more scaleable cluster solution with the DeltaManager (you'll need to configure the domain interceptor for this).
And there is a better example at this link:
http://people.apache.org/~mturk/docs/article/ftwai.html
See the "Domain Clustering model" section.