Solace application HA across regions

Currently, intra-region, we achieve HA (hot/hot) between applications by using exclusive queues to ensure one application is active and the rest are standby.
How do I achieve the same thing across regions when the appliances are linked via CSPF neighbour links? As queues are local to an appliance, the approach above doesn't work.

Not possible using your design of CSPF neighbors - they carry direct messages only, not guaranteed messages.
Are you able to provide more details about your use case?
Solace can easily do active/standby across regions using Data Center Replication.
Solace can also allow consumers to consume messages from the endpoints on both the active and standby regions by Allowing Clients to Connect to Standby Sites. However, this means two consumers will be active - one on the active site and one on the standby site.
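For illustration, here is a minimal sketch of the exclusive-queue consumer pattern discussed above, written against Solace's JMS API. The host list, Message VPN, credentials and queue name are placeholders invented for this sketch, and it assumes ORDERS.IN is provisioned on the broker as an exclusive queue; the comma-separated host list is what would let the client reconnect to the standby-region broker if the active region became unreachable.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import com.solacesystems.jms.SolConnectionFactory;
import com.solacesystems.jms.SolJmsUtility;

public class StandbyAwareConsumer {
    public static void main(String[] args) throws Exception {
        SolConnectionFactory cf = SolJmsUtility.createConnectionFactory();
        // Comma-separated host list: active-region broker first, standby-region broker second.
        cf.setHost("tcps://nyc-broker.example.com:55443,tcps://ldn-broker.example.com:55443");
        cf.setVPN("trading-vpn");        // hypothetical Message VPN
        cf.setUsername("app-user");
        cf.setPassword("app-password");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // ORDERS.IN is assumed to be an *exclusive* queue: the first consumer to bind
        // is delivered messages, additional consumers remain bound as standby.
        Queue queue = session.createQueue("ORDERS.IN");
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(msg -> System.out.println("Received: " + msg));

        connection.start();
        System.out.println("Bound to ORDERS.IN; waiting for messages...");
        Thread.currentThread().join();   // keep the consumer alive
    }
}
```

Run one instance of this per region: whichever binds first becomes active, the others stay bound as standby and take over when the active consumer disconnects.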

Related

Best approach for a JMS-based Spring Boot microservice deployed in multiple data centers

I am using PCF for deploying my event-driven microservice. It receives messages from a queue, processes them, and publishes the output to another topic. The application is going to be deployed in multiple data centers for high availability. What I am currently considering is to deploy the application to two data centers and have it listen to the messages in both, processing them wherever they arrive. But would it be better to have the data center nearest to the queue listen for messages primarily, and only have the second data center listen when that one is down? For this second approach, I may need to implement some circuit breaker to pause the primary and bring up the secondary. Can you please provide your suggestions/experience in handling these applications?
In general there is no simple, straightforward solution, as both DCs will be working in active-active mode and traffic can be routed to either side, but you can do the following:
Deploy the apps in both data centres with different configurations so each always listens to the queue(s) as defined in the code (a sketch follows below this list). Not the ideal...
Talk to your infra guys (e.g. MQ) to provide you a solution - technically it's possible by having a GTM (global traffic manager) that sends you to the nearest site according to 'who' is asking
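As a rough sketch of the first option, assuming Spring's JMS support and hypothetical property and queue names (app.listener.enabled, orders.in): the same artifact is deployed to both data centres, and a per-DC property decides whether its listener container starts.

```java
import javax.jms.ConnectionFactory;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
@EnableJms
class JmsConfig {

    // app.listener.enabled=true in the primary DC, false in the secondary DC.
    @Bean
    public DefaultJmsListenerContainerFactory dcAwareFactory(
            ConnectionFactory connectionFactory,
            @Value("${app.listener.enabled:true}") boolean listenerEnabled) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setAutoStartup(listenerEnabled); // secondary DC stays paused until toggled
        return factory;
    }
}

@Component
class OrderListener {

    @JmsListener(destination = "orders.in", containerFactory = "dcAwareFactory")
    public void onMessage(String payload) {
        // process the message and publish the result to the output topic here
        System.out.println("Processing: " + payload);
    }
}
```

To fail over, you could flip the property and restart, or start the paused container at runtime through Spring's JmsListenerEndpointRegistry.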

How to create a cluster for TIBCO EMS?

I've created an administration domain on a Windows node. Now I need to create a cluster and add that node into that cluster. How do I go about it?
There is no concept of "Cluster" (in a WMQ sense) with EMS, but there are notions of "connected EMS servers" (routes), and bridging of queues (bridges, not unlike WMQ remote queues). There is also the notion of "Multi-instance local HA".
You may want to:
Plan your EMS deployment strategy, to properly balance latency, availability, license cost and performance
Execute the plan by:
Setting up local HA, with a shared FS if shared state is needed (here is my article on that; a client-side sketch follows below)
Making use of the "Routes" feature of EMS to link your multiple EMS HA instances together (see page 569 of the user guide)
Note: If you have a limited number of instances but a MASSIVE number of clients, consider multicast or another product (like FTL or RabbitMQ).
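On the "Multi-instance local HA" point, here is a minimal client-side sketch using TIBCO's JMS client library; the fault-tolerant URL pair, queue name and credentials are placeholders for this sketch. The client connects to whichever instance of the pair currently holds the shared-state lock and, with reconnection configured, fails over to the other one.

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import com.tibco.tibjms.TibjmsConnectionFactory;

public class EmsFtConsumer {
    public static void main(String[] args) throws Exception {
        // Comma-separated fault-tolerant URL: primary instance first, standby second.
        TibjmsConnectionFactory cf =
                new TibjmsConnectionFactory("tcp://ems-a.example.com:7222,tcp://ems-b.example.com:7222");

        Connection connection = cf.createConnection("app-user", "app-password");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue queue = session.createQueue("orders.in");
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(msg -> System.out.println("Received: " + msg));

        connection.start();
        Thread.currentThread().join(); // keep the consumer alive across failovers
    }
}
```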

Does WebSphere MQ create duplicate queues in a clustered environment?

If I want to set up WebSphere MQ in a distributed environment (across 2 machines in a cluster), will the queues and topics (which I understand as physical storage spaces for messages) be created on both machines?
Or will the queues and topics be created on one machine only, while the program (I guess it is called the WebSphere MQ broker) is deployed on 2 machines and both instances access the same queue and topic?
The cluster concept in WebSphere MQ is different from traditional high availability (HA) clusters. In a traditional HA cluster, two systems access the same storage/data to provide the HA feature. Both systems can be configured to be active at any time and processing requests. You can also have an active/passive type of HA configuration.
A WebSphere MQ cluster is different: two queue managers do not share the same storage/data, and each queue manager is unique. A WebSphere MQ cluster is more suitable for workload balancing than for HA. You can have a queue with the same name in multiple queue managers in an MQ cluster, and when messages are put, the cluster will load balance them across all instances of that queue. It should be noted that the messages in each instance of a queue in the cluster are independent and are not shared. If for some reason one of the queue managers in a cluster goes down, then the messages in that queue manager become unavailable until the queue manager comes back.
Are you aiming for workload balancing or HA? If your aim is to achieve HA, then you could look at the multi-instance queue manager feature of MQ or other HA solutions. If you are aiming for workload balancing, then you can go for MQ clustering. You can also mix multi-instance queue managers and MQ clustering to achieve both HA and workload balancing.
No, MQ does not create duplicate queues in the cluster unless you create them manually.
Further, check whether your queue manager is a partial repository or a full repository for the cluster.
A partial repository will only contain information about its own objects, whereas a full repository will have information about the objects of all queue managers in the cluster.
A cluster needs at least one full repository in it, and the partial repositories can use this full repository for accessing objects of other queue managers.
But the object information in the full repository is just a list; the actual physical object will only exist in the queue manager where it was created.
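To make that last point concrete, here is a hedged sketch (queue manager, channel, host and queue names are all hypothetical) of a JMS producer connected to a queue manager that does not host APP.QUEUE locally: because the hosting queue managers advertise the queue to the cluster, the puts still succeed and are workload-balanced across their instances of the queue.

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClusterQueueProducer {
    public static void main(String[] args) throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        cf.setQueueManager("GATEWAY.QM");                      // a partial repository with no local APP.QUEUE
        cf.setConnectionNameList("gateway-host.example.com(1414)");
        cf.setChannel("APP.SVRCONN");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("queue:///APP.QUEUE"); // clustered queue hosted elsewhere
        MessageProducer producer = session.createProducer(queue);

        // Each send is routed by the cluster to one of the hosting queue managers' instances of APP.QUEUE.
        for (int i = 0; i < 10; i++) {
            producer.send(session.createTextMessage("message " + i));
        }
        connection.close();
    }
}
```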

Websphere MQ and High Availability

When I read about HA in WebSphere MQ, I always come to the point where the best practice is to create two Queue Managers handling the same queue and use the out-of-the-box load balancing. Then, when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the Queue Manager that went down? I mean, do these messages reside there (when the messages are persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create a common storage for these doubled Queue Managers? Then no message would wait for the QM to come back up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model, the disk is mounted to only one server and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files. The first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover, this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using a multi-instance CONNAME, where you can supply a comma-separated list of IP or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work, and CCDT continues to be supported in current versions of WMQ.
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.
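A brief sketch of the multi-instance CONNAME idea from the client side, using the IBM MQ classes for JMS; the host names, queue manager name and channel are placeholders. The connection name list carries the active and standby hosts, and the reconnect options let the client ride over the takeover.

```java
import javax.jms.Connection;
import javax.jms.JMSException;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MiqmClient {
    public static void main(String[] args) throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        cf.setQueueManager("QM1");
        // Multi-instance CONNAME: active instance host first, standby instance host second.
        cf.setConnectionNameList("mqhost-a.example.com(1414),mqhost-b.example.com(1414)");
        cf.setChannel("APP.SVRCONN");
        // Ask the client to reconnect automatically when the standby instance takes over.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setClientReconnectTimeout(300); // seconds to keep retrying

        Connection connection = cf.createConnection();
        connection.start();
        System.out.println("Connected; whichever instance holds the lock accepted the connection.");
        connection.close();
    }
}
```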

How to do load distribution in a RabbitMQ cluster?

Hi, I created three RabbitMQ servers running in a cluster on EC2.
I want to scale out the RabbitMQ cluster based on CPU utilization, but when I publish messages only one server's CPU is utilized and the other RabbitMQ servers' CPUs are not.
So how can I distribute the load across the RabbitMQ cluster?
RabbitMQ clusters are designed to improve scalability, but the system is not completely automatic.
When you declare a queue on a node in a cluster, the queue is only created on that one node. So, if you have one queue, regardless of which node you publish to, the message will end up on the node where the queue resides.
To properly use RabbitMQ clusters, you need to make sure you do the following things (a client-side sketch follows below):
have multiple queues distributed across the nodes, such that work is distributed somewhat evenly,
connect your clients to different nodes (otherwise, you might end up funneling all messages through one node), and
if you can, try to have publishers/consumers connect to the node which holds the queue they're using (in order to minimize message transfers within the cluster).
Alternatively, have a look at High Availability Queues. They're like normal queues, but the queue contents are mirrored across several nodes. So, in your case, you would publish to one node, RabbitMQ would mirror the publishes to the other nodes, and consumers would be able to connect to any node without worrying about bogging down the cluster with internal transfers.
That is not really true. Check out the documentation on that subject.
Messages published to the queue are replicated to all mirrors. Consumers are connected to the master regardless of which node they connect to, with mirrors dropping messages that have been acknowledged at the master. Queue mirroring therefore enhances availability, but does not distribute load across nodes (all participating nodes each do all the work).
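A minimal sketch of the first two points in the list above, using the RabbitMQ Java client; the node addresses, credentials, SHARD environment variable and queue names are invented for this example. Each application instance shuffles the node list so connections (and, with the default queue placement, the queues they declare) tend to spread across the cluster, and each instance publishes to its own shard queue instead of funnelling everything into one queue.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import com.rabbitmq.client.Address;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShardedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUsername("guest");
        factory.setPassword("guest");

        // All three cluster nodes; shuffling per instance means different app
        // instances tend to connect to different nodes.
        List<Address> nodes = new ArrayList<>(List.of(
                new Address("rabbit-1.example.com", 5672),
                new Address("rabbit-2.example.com", 5672),
                new Address("rabbit-3.example.com", 5672)));
        Collections.shuffle(nodes);

        // Which shard this instance owns; in practice derive it from an instance id.
        int shard = Integer.parseInt(System.getenv().getOrDefault("SHARD", "0"));
        String queueName = "work." + shard;

        try (Connection connection = factory.newConnection(nodes);
             Channel channel = connection.createChannel()) {

            // With the default queue placement, the queue is created on the node this
            // connection reached, so shard queues end up spread across the cluster.
            channel.queueDeclare(queueName, true, false, false, null);

            // Publish this instance's work to its own shard queue.
            for (int i = 0; i < 100; i++) {
                channel.basicPublish("", queueName, null,
                        ("task-" + i).getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```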
