Does WebSphere MQ create duplicate queues in a clustered environment?

If I want to set up WebSphere MQ in a distributed environment (across 2 machines in a cluster), will the queues and topics (which I understand to be the physical storage locations for messages) be created on both machines?
Or will the queues and topics be created on one machine only, while the program (I guess it is called the WebSphere MQ broker) is deployed on both machines, with both instances accessing the same queue and topic?

The cluster concept in WebSphere MQ is different from that of traditional high availability (HA) clusters. In a traditional HA cluster, two systems access the same storage/data to provide the HA feature. Both systems can be configured to be active at any time and process requests, or you can have an active/passive type of HA configuration.
A WebSphere MQ cluster works differently: the two queue managers do not share the same storage/data, and each queue manager is unique. A WebSphere MQ cluster is more suitable for workload balancing than for HA. You can have a queue with the same name on multiple queue managers in an MQ cluster, and when messages are put, the cluster workload-balances them across all instances of that queue. Note that the messages in each instance of a clustered queue are independent and are not shared. If for some reason one of the queue managers in the cluster goes down, the messages on that queue manager become unavailable until it comes back up.
Are you aiming for workload balancing or HA? If your aim is HA, then you could look at the multi-instance queue manager feature of MQ or other HA solutions. If you are aiming for workload balancing, then you can go for MQ clustering. You can also combine multi-instance queue managers with MQ clustering to achieve both HA and workload balancing.
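As a rough sketch (the cluster name DEMOCLUS and queue name APP.REQUEST are made up, and the cluster channels are assumed to be already defined), defining the same local queue in the cluster on both queue managers is what enables the workload balancing described above:

    * On QM1, inside runmqsc QM1
    DEFINE QLOCAL(APP.REQUEST) CLUSTER('DEMOCLUS') DEFBIND(NOTFIXED)

    * On QM2, inside runmqsc QM2 -- the cluster now has two independent
    * instances of APP.REQUEST, and puts are spread across them
    DEFINE QLOCAL(APP.REQUEST) CLUSTER('DEMOCLUS') DEFBIND(NOTFIXED)

DEFBIND(NOTFIXED) lets each message be routed independently rather than binding an opened queue handle to a single instance.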

No, MQ does not create duplicate queues in the cluster unless you create them manually.
Further, check whether your queue manager is a partial repository or a full repository for the cluster.
A partial repository only contains information about its own objects, whereas a full repository has information about the objects of all queue managers in the cluster.
A cluster needs at least one full repository, and the partial repositories can use this full repository to access information about objects on other queue managers.
But the object information in the full repository is just a list; the actual physical object exists only on the queue manager where it was created.
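As an illustration (DEMOCLUS is a hypothetical cluster name), you can check and change the repository role from runmqsc:

    * A non-blank REPOS or REPOSNL value means this queue manager is a full repository
    DISPLAY QMGR REPOS REPOSNL
    * QMTYPE(REPOS) marks full repositories, QMTYPE(NORMAL) marks partial repositories
    DISPLAY CLUSQMGR(*) QMTYPE
    * To make this queue manager a full repository for the cluster:
    ALTER QMGR REPOS('DEMOCLUS')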

Related

Solace application HA across regions

Currently, intra-region, we achieve HA (hot/hot) between applications by using exclusive queues to ensure one application is active and the rest are standby.
How do I achieve the same thing across regions when the appliances are linked via CSPF neighbour links? As queues are local to an appliance, the approach above doesn't work.
This is not possible using your design of CSPF neighbours - they are meant for direct messages, not guaranteed messages.
Are you able to provide more details about your use case?
Solace can easily do active/standby across regions using Data Center Replication.
Solace can also allow consumers to consume messages from the endpoints on both the active and standby regions by Allowing Clients to Connect to Standby Sites. However, this means two consumers will be active: one on the active site and one on the standby site.

What are the implications of using an NFS3 file system for multi-instance queue managers in WebSphere MQ?

We are stuck in a difficult scenario in our new MQ infrastructure implementation using multi-instance queue managers with WebSphere MQ v7.5 on Linux.
The concern is that our network team is not able to configure NFS4, and hence we are still on NFS3. We understand that multi-instance queue managers will not function properly with NFS3. But are there any issues if we define the queue managers in multi-instance fashion on NFS3 and expect them to work perfectly in single-instance mode?
Thanks
I would not expect you to have issues running single-node queue managers on NFS3; we do so on a regular basis. The requirement for NFS4 was for the file locking mechanism required by multi-instance queue managers to determine when the primary instance has lost control and a secondary queue manager should take over.
If you do define the queue manager as multi-instance and the queue manager attempts to fail over, it may not do so successfully; at worst it may corrupt your queue manager files.
If you control the failover yourself - as in, shut down the queue manager on one node and start it again on another node - that should work for you, as there is no concurrent file sharing taking place and all files would be closed on the primary node before being opened on the secondary node. You would have to make sure the secondary queue manager is NOT running in standby mode - ever.
I hope this helps.
Dave
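As a sketch (the shared directory path /mnt/mqshared/qmdata is made up), the amqmfsck tool can verify the shared file system, and starting the queue manager without the -x flag keeps it strictly single-instance so no standby can ever attach:

    # Basic file-system behaviour check on the shared directory (hypothetical path)
    amqmfsck /mnt/mqshared/qmdata
    # Concurrent-write check; run this from both nodes at the same time
    amqmfsck -c /mnt/mqshared/qmdata

    # Start as a plain single-instance queue manager: without -x no standby is permitted
    strmqm QM1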

MQ Cluster - how to properly disable one node in production environments

I have some messages flowing through the MQ cluster using cluster and alias queues. Some queues are defined multiple times, so the load-balancing mechanism is used.
What is the proper way to take one queue manager out of the cluster without disturbing the whole message flow? Disabling the cluster-receiver channel, the cluster-sender channels, or something else?
Use the SUSPEND QMGR command. This suspends the queue manager's participation in the cluster; see the SUSPEND QMGR entry in the command reference.
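As a brief sketch (the cluster name DEMOCLUS is made up), run the following in runmqsc on the queue manager you want to take out of the workload balancing:

    * Advise the cluster to avoid sending further messages to this queue manager
    SUSPEND QMGR CLUSTER('DEMOCLUS')
    * After maintenance, put it back into the cluster workload balancing
    RESUME QMGR CLUSTER('DEMOCLUS')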

WebSphere MQ and High Availability

When I read about HA in WebSphere MQ, I always come to the point where the best practice is said to be creating two queue managers handling the same queue and using the out-of-the-box load balancing, so that when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the queue manager that went down? I mean, do these messages reside there (when the queue is persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create a common storage for these doubled queue managers? Then no message would wait for the QM to be up; every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model, the disk is mounted on only one server, and the cluster software monitors for failure and moves the disk, IP address, and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files, and the first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover, this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using a multi-instance CONNAME, where you can supply a comma-separated list of IP addresses or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work, and CCDT continues to be supported in current versions of WMQ.
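For illustration (the channel name, host names, and port are hypothetical), a client connection channel can carry both instances' addresses in its CONNAME, so a reconnecting client tries each address in turn:

    * Run in runmqsc; the comma-separated CONNAME lists both instances
    DEFINE CHANNEL(APP.CONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
           CONNAME('mqhost1(1414),mqhost2(1414)') QMNAME(QM1)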
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.

How to do load distribution in a RabbitMQ cluster?

Hi, I have created three RabbitMQ servers running in a cluster on EC2.
I want to scale out the RabbitMQ cluster based on CPU utilization, but when I publish messages only one server's CPU is utilized and the other RabbitMQ servers' CPUs are not.
So how can I distribute the load across the RabbitMQ cluster?
RabbitMQ clusters are designed to improve scalability, but the system is not completely automatic.
When you declare a queue on a node in a cluster, the queue is only created on that one node. So, if you have one queue, regardless of which node you publish to, the message will end up on the node where the queue resides.
To properly use RabbitMQ clusters, you need to make sure you do the following things:
have multiple queues distributed across the nodes, such that work is distributed somewhat evenly,
connect your clients to different nodes (otherwise, you might end up funneling all messages through one node), and
if you can, try to have publishers/consumers connect to the node which holds the queue they're using (in order to minimize message transfers within the cluster).
Alternatively, have a look at High Availability Queues. They're like normal queues, but the queue contents are mirrored across several nodes. So, in your case, you would publish to one node, RabbitMQ would mirror the publishes to the other nodes, and consumers would be able to connect to either node without worrying about bogging down the cluster with internal transfers.
That is not really true. Check out the documentation on that subject.
Messages published to the queue are replicated to all mirrors. Consumers are connected to the master regardless of which node they connect to, with mirrors dropping messages that have been acknowledged at the master. Queue mirroring therefore enhances availability, but does not distribute load across nodes (all participating nodes each do all the work).
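For completeness (the policy name and queue-name pattern here are made up), a classic mirroring policy like the one discussed above is applied with rabbitmqctl; as the quoted documentation notes, this improves availability rather than spreading load:

    # Mirror all queues whose names start with "ha." across every node in the cluster
    rabbitmqctl set_policy ha-demo "^ha\." '{"ha-mode":"all"}'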
