PUT on a specific MQ Cluster

I'm using WebSphere MQ. I have three queue managers: QM1, QM2, and QM3. QM1 and QM2 together form an MQ cluster named CLS12, while QM2 and QM3 form CLS23. In other words, QM2 is in two clusters.
I'd like to put a message on QM2 (actually, it will be IIB putting the message), but the target queues are local on QM1 and QM3 and shared in the clusters. However, I would like to be able to choose which cluster (not which QMgr) the message should be put to.
Is that possible?

Short answer? No.
MQ performs name resolution by queue and queue manager name. At no time during name resolution is the cluster name available to the application putting the message as a way to resolve the destination.
It is possible to create QMgr aliases with names that match a particular cluster and get behavior similar to what you seek, but it isn't reliable. Clusters are a namespace in which queues and topics reside. When clusters overlap, the namespaces overlap. So even though it's possible to fake the routing using aliases, any change in the namespace of the queue, the alias, or the queue manager that causes a name collision, or that cross-contaminates the overlapping cluster namespaces, will break the name resolution.
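As a sketch of that alias trick (not a recommendation), QM2 could carry one QMgr alias named after each cluster, each resolving to a queue manager that lives only in that cluster. The definitions below reuse the QM1/QM3/CLS12/CLS23 names from the question; everything else about them is an assumption.
# On QM2: a QMgr alias (QREMOTE with blank RNAME) named after each cluster.
# An app putting to a queue at QMgr "CLS12" is routed via QM1, i.e. into CLS12:
DEFINE QREMOTE('CLS12') RNAME('') RQMNAME('QM1') REPLACE
# ...and a put addressed to QMgr "CLS23" is routed via QM3, i.e. into CLS23:
DEFINE QREMOTE('CLS23') RNAME('') RQMNAME('QM3') REPLACE
As noted above, this only mimics per-cluster routing and falls apart if any object name later collides across the overlapping namespaces.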

Related

Multiple remote queues to a single local queue

Is it possible to have this scenario using WebSphere MQ? We want multiple MQ servers, each of which has the same queue manager, remote queue, transmission queue and channel defined on it (i.e. the MQ servers are effectively clones). Each server is on its own domain.
Each of these remote queues will point to a local queue on another (centralised) MQ box which is used to aggregate the messages coming in from the various remote queues.
Is this possible? If not, what would you suggest as an alternative option?
Kind regards.
This is not a good design. Your queue managers and channels should have unique names. For example:
Edge queue managers: MQE1, MQE2 & MQE3
Central queue manager: MQC1
There are 2 typical channel naming standards:
{fromQMgr}.TO.{toQMgr}
{fromQMgr}.{toQMgr}
i.e. MQE1.TO.MQC1 or MQE1.MQC1
I like to go with the shorter version since a channel name has a maximum length of 20.
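A minimal sketch of one edge-to-central leg under that naming convention (the AGGREGATE.IN queue name, host name, and port are assumptions):
# On edge QMgr MQE1: remote queue resolving to the central local queue
DEFINE QLOCAL('MQC1') USAGE(XMITQ) REPLACE
DEFINE QREMOTE('AGGREGATE.IN') RNAME('AGGREGATE.IN') RQMNAME('MQC1') XMITQ('MQC1') REPLACE
DEFINE CHANNEL('MQE1.MQC1') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('central.host(1414)') XMITQ('MQC1') REPLACE
# On central QMgr MQC1: the matching receiver and the real local queue
DEFINE CHANNEL('MQE1.MQC1') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
DEFINE QLOCAL('AGGREGATE.IN') REPLACE
Repeating this per edge (MQE2.MQC1, MQE3.MQC1, ...) keeps every channel name unique while all edges feed the same central queue.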

MQ Cluster - how to properly disable one node in production environments

I have messages flowing through an MQ cluster using cluster and alias queues. Some queues are defined multiple times, so the load-balancing mechanism is used.
What is the proper way to take one QMgr out of the cluster without disturbing the overall message flow? Disabling the cluster-receiver channel, the cluster-sender channels, or something else?
Use the SUSPEND QMGR command. This suspends the queue manager from the cluster. See the command reference for details.
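A sketch of the maintenance flow in runmqsc on the queue manager being taken out of service (the cluster name CLS12 is an assumption):
# Stop the cluster workload algorithm sending new messages here
SUSPEND QMGR CLUSTER('CLS12')
# ... drain remaining messages and perform maintenance ...
# Rejoin the cluster workload distribution
RESUME QMGR CLUSTER('CLS12')
While suspended, other instances of the clustered queues receive the traffic, so the overall message flow is not disturbed.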

Does websphere MQ create duplicate queues in a clustered environment

If I want to set up WebSphere MQ in a distributed environment (across 2 machines in a cluster), will the queues and topics (which I understand as physical storage spaces for messages) be created on both machines?
Or will the queues and topics be created on one machine only, while the program (I guess it is called the WebSphere MQ broker) is deployed on 2 machines and both instances access the same queue and topic?
The cluster concept in WebSphere MQ is different from traditional high availability (HA) clusters. In a traditional HA cluster, two systems access the same storage/data to provide the HA feature. Both systems can be configured to be active at any time and processing requests, or you can have an active/passive HA configuration.
Unlike a traditional HA cluster, in a WebSphere MQ cluster two queue managers do not share the same storage/data; each queue manager is unique. A WebSphere MQ cluster is more suitable for workload balancing than for HA. You can have a queue with the same name in multiple queue managers in an MQ cluster, and when messages are put, the cluster load-balances them across all instances of that queue. Note that the messages in each instance of a clustered queue are independent and not shared. If one of the queue managers in the cluster goes down, the messages on that queue manager become unavailable until it comes back up.
Are you aiming for workload balance or HA? If your aim is HA, look at the multi-instance queue manager feature of MQ or other HA solutions. If you are aiming for workload balance, go for MQ clustering. You can also mix multi-instance queue managers with MQ clustering to achieve both HA and workload balance.
No, MQ does not create duplicate queues in the cluster unless you create them manually.
Further, check whether your queue manager is a partial repository or a full repository for the cluster.
A partial repository only contains information about its own objects, whereas a full repository has information about the objects of all queue managers in the cluster.
A cluster needs at least one full repository, and the partial repositories use the full repository to look up objects of other queue managers.
But the object information in a full repository is just a catalogue; the actual physical object exists only on the queue manager where it was created.
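To illustrate, a same-named clustered queue is defined separately on each queue manager that should host an instance of it; the queue and cluster names below are assumptions:
# Run on QM1 and again on QM2: each QMgr hosts its own physical instance
DEFINE QLOCAL('APP.REQUEST') CLUSTER('DEMO.CLUSTER') DEFBIND(NOTFIXED) REPLACE
# DEFBIND(NOTFIXED) lets the cluster workload algorithm spread puts across
# both instances; each instance stores its own messages independently.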

Websphere MQ and High Availability

When I read about HA in WebSphere MQ I always come to the point where the best practice is to create two queue managers handling the same queue and use the out-of-the-box load balancing, so that when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belong to the queue manager that went down? Do these messages stay there (when the messages are persistent, of course) until the QMgr is up and running again?
Furthermore, is it possible to create common storage for these doubled queue managers? Then no message would wait for the QMgr to come up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model the disk is mounted on only one server, and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFS 4 shared disk mount. Both instances compete for locks on the files. The first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using multi-instance CONNAME where you can supply a comma-separated list of IP or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work and CCDT continues to be supported in current versions of WMQ.
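For example, a client channel definition with a multi-instance CONNAME might look like this (the host names, port, channel name and QMgr name are assumptions):
# Client tries hostA first, then hostB, reaching whichever instance is active
DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('hostA.example(1414),hostB.example(1414)') +
       QMNAME('QM1') REPLACE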
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.

How to connect queue managers for WebSphere MQ 7.0 distributed publish/subscribe

As always, the IBM documentation is great for what it tells you but leaves important details out. Apologies if this is already answered here; the search terms are unfortunately mostly generic, or at least ambiguous, and I have looked through a few hundred questions with no luck.
I have two (IBM i) servers, each with a single WMQ 7.0 queue manager. I have two channels running between them - one each direction.
I have a topic defined on "server A" with Publication and Subscription Scopes of "All" and Proxy Subscription Behaviour of "Force".
I have a subscription defined on "server B" with Scope "All".
Everything is up and running but when I drop a message into the topic on server A (using MQ Explorer), nothing appears on server B.
I have read about the "proxy subscriptions" required to make this work but I cannot for the life of me figure out how these get created.
Any assistance appreciated. I've got this far (never used pub/sub before) in a few hours only to trip at this hurdle.
You have to set up a hierarchy between these two queue managers for publications to flow to the queue manager on B.
Assuming the queue manager on A is the parent and the queue manager on B is the child, issue ALTER QMGR PARENT('<parent QMgr name>') in a runmqsc prompt of the queue manager on B. This creates the hierarchy between the two queue managers. Once a subscription is created on the queue manager on B, a proxy subscription will automatically flow to the queue manager on A.
EDIT: More detail on my configuration (with slightly more meaningful - to me - server names)
On server A7:
Queue manager A7.QUEUE.MANAGER
Sender channel A7.TO.A2 with transmission queue A7.TO.A2
Alias queue A2.QUEUE.MANAGER pointing to A7.TO.A2
Receiver channel A2.TO.A7
On server A2:
Queue manager A2.QUEUE.MANAGER
Sender channel A2.TO.A7 with transmission queue A2.TO.A7
Alias queue A7.QUEUE.MANAGER pointing to A2.TO.A7
Receiver channel A7.TO.A2
I then issued ALTER QMGR PARENT('A7.QUEUE.MANAGER')
I have the topic on A7 and after issuing the ALTER (above) I added a subscription on A2 to the topic.
On A2:
display pubsub type(ALL)
AMQ8723: Display pub/sub status details.
QMNAME(A2.QUEUE.MANAGER) TYPE(LOCAL)
On A7:
display pubsub type(ALL)
AMQ8723: Display pub/sub status details.
QMNAME(A7.QUEUE.MANAGER) TYPE(LOCAL)
Cluster the two QMgrs and advertise the topic to the cluster. WMQ will then make publications available across the cluster.
On QMGR01
# Make the QMgr a cluster repository
ALTER QMGR REPOS('CLUSTERNAME')
# Create CLUSRCVR and advertise to cluster
# Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR01) CHLTYPE(CLUSRCVR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1414)') +
CLUSTER('CLUSTERNAME') +
REPLACE
# Create topic object and advertise to cluster
DEF TOPIC('ROOT') TOPICSTR('ROOT') CLUSTER('CLUSTERNAME') REPLACE
On QMGR02
# Always create CLUSRCVR first and advertise to cluster
# Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR02) CHLTYPE(CLUSRCVR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1415)') +
CLUSTER('CLUSTERNAME') +
REPLACE
# Then connect to the repos and advertise the CLUSSDR to the cluster too
# Substitute your CONNAME parms, QMgr name, channel names, etc.
DEF CHL(CLUSTERNAME.QMGR01) CHLTYPE(CLUSSDR) +
TRPTYPE(TCP) +
CONNAME('127.0.0.1(1414)') +
CLUSTER('CLUSTERNAME') +
REPLACE
Now you can publish to the topic that was advertised to the cluster:
On QMgr01
amqspub ROOT/Whatever QMGR01
On QMgr02
amqssub ROOT/Whatever QMGR02
You don't have to name your object ROOT or use that as the top of the topic namespace. This was an arbitrary example and you can use anything you want. In Production you would probably have some topic objects at the 2nd or 3rd level in the topic hierarchy to hang access control lists from. Generally it is these objects that are used to advertise the topics to the cluster.
Some additional notes:
You cannot advertise SYSTEM.BASE.TOPIC or SYSTEM.DEFAULT.TOPIC to the cluster.
A clustered topic need be defined on only one node in the cluster. It can be any node but it's a good practice to advertise it on the primary full repository. That way you know where all your clustered topic objects are defined in one place and the repository is (or should be) highly available.
If there are local and clustered topic objects that overlap, the local one takes precedence.
Please see Creating a new cluster topic for more info. Also, Creating and configuring a queue manager cluster has tasks to create the cluster and add QMgrs to it. However, I tested with the above on my Windows host and in this minimal cluster, the pub/sub worked great.
