Multiple remote queues to a single local queue - ibm-mq

Is it possible to have this scenario using WebSphere MQ? We want to have multiple MQ servers, each of which has the same queue manager, remote queue, transmission queue and channel defined on it (i.e. the MQ servers are effectively clones). Each server is on its own domain.
Each of these remote queues will point to a local queue on another (centralised) MQ box which is used to aggregate the messages coming in from the various remote queues.
Is this possible? If not, what would you suggest as an alternative option?
Kind regards.

This is not a good design. Your queue managers and channels should have unique names.
Edge queue managers: MQE1, MQE2 & MQE3
Central queue manager: MQC1
There are 2 typical channel naming standards:
{fromQMgr}.TO.{toQMgr}
{fromQMgr}.{toQMgr}
e.g. MQE1.TO.MQC1 or MQE1.MQC1
I like to go with the shorter version, since a channel name has a maximum length of 20 characters.
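As a rough MQSC sketch, assuming the central input queue is called CENTRAL.INPUT and the central host is central.host(1414) (both names are illustrative), each edge queue manager gets its own remote queue, transmission queue and sender channel, and MQC1 gets one local queue plus one receiver channel per edge:

On MQE1 (repeat on MQE2 and MQE3 with their own channel names):
* transmission queue named after the central queue manager
DEFINE QLOCAL('MQC1') USAGE(XMITQ) REPLACE
* remote queue definition pointing at the central local queue
DEFINE QREMOTE('CENTRAL.INPUT') RNAME('CENTRAL.INPUT') +
       RQMNAME('MQC1') XMITQ('MQC1') REPLACE
* sender channel named {fromQMgr}.{toQMgr}
DEFINE CHANNEL('MQE1.MQC1') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('central.host(1414)') XMITQ('MQC1') REPLACE

On MQC1:
DEFINE QLOCAL('CENTRAL.INPUT') REPLACE
* one receiver channel per edge queue manager
DEFINE CHANNEL('MQE1.MQC1') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
DEFINE CHANNEL('MQE2.MQC1') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
DEFINE CHANNEL('MQE3.MQC1') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE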

Related

PUT on a specific MQ Cluster

I'm using WebSphere MQ. I have 3 QMs: QM1, QM2, and QM3. QM1 and QM2 together form an MQ cluster named CLS12, while QM2 and QM3 form CLS23. In other words, QM2 is in two clusters.
I'd like to put a message (actually, it will be IIB putting the message) on QM2. The target queues are local on QM1 and QM3 but shared in the clusters. However, I would like to be able to choose which cluster (not which QM) the message should be put to.
Is that possible?
Short answer? No.
MQ performs name resolution by queue and queue manager name. At no time during name resolution is the cluster name available to the application putting the message as a way to resolve the destination.
It is possible to create QMgr aliases with names that match a particular cluster and get behavior similar to what you seek, but it isn't reliable. Clusters are a namespace in which the queues and topics can reside. When clusters overlap, the namespaces overlap. So even though it's possible to fake the routing using aliases, any change in the namespace of the queue, the alias or the queue manager that causes a name collision, or cross-contamination of the overlapping cluster namespaces, will break the name resolution.
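For reference, the alias trick usually looks something like the MQSC below (the object names are illustrative, and the caveats above about fragility apply): each cluster advertises a queue-manager alias named for that cluster, and the application supplies that name as the ObjectQMgrName when it opens the queue.

* On QM1, advertised only in CLS12
DEFINE QREMOTE('ROUTE.CLS12') RNAME(' ') RQMNAME(' ') CLUSTER('CLS12') REPLACE
* On QM3, advertised only in CLS23
DEFINE QREMOTE('ROUTE.CLS23') RNAME(' ') RQMNAME(' ') CLUSTER('CLS23') REPLACE
* The app (or IIB) on QM2 opens the target queue with ObjectQMgrName
* set to ROUTE.CLS12 or ROUTE.CLS23 to steer the message into one cluster.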

MQ: Same queue name under 2 queue managers

I have two MQ queue managers with the same queue name configured. Both are configured to send data to different servers. Currently queue manager QM1 is stopped (status: Ended Immediately) and QM2 is running.
Now my program opens the queue and sends data. It does not specify a queue manager name. When I execute the program, the MQ connection request returns error 2059.
My questions are:
What happens when multiple queue managers have the same queue name?
How can this situation be handled without changing the code?
Please forgive me if the description is vague. It would be helpful if anyone could provide links so that a newbie like me can learn something.
Thanks
It would be helpful if you could provide details on your application: whether it uses server bindings or a client mode connection to the queue manager, and what version of MQ you are using.
The information below applies to MQ v7.x:
If you are using client mode then you can use multiple CONNAMEs to connect. If one queue manager is down, your application will connect to the next queue manager in the CONNAME list. One of the simplest ways to do this with a client mode connection is to define the MQSERVER environment variable and specify multiple CONNAMEs:
SET MQSERVER=<channel name>/TCP/host1(port1),host2(port2)
For example, when both queue managers are on the local host:
SET MQSERVER=MYSVRCONCHN/TCP/localhost(1414),localhost(1415)
In server bindings mode, if a queue manager name is not specified, the application will attempt to connect to the default queue manager. If the default queue manager is down, a 2059 is returned.
Your explanation doesn't make your requirements clear.
You wrote:
My questions are 1. What happens when multiple queue managers have the same queue name?
Nothing. It's a normal scenario. Different queue managers may have queues with the same name and it doesn't create any ambiguity. The scenario is a little different when the queue managers are in the same cluster and the queue is also a cluster queue; then everything depends on the requirements and design.
You wrote:
2. How can this situation be handled without changing the code?
Start the queue manager that is stopped.
You wrote:
Now my program opens the queue and sends data. It does not specify a queue manager name.
What application are you using? For a client application, you access a queue using a queue manager object.
I am assuming that you are using a client application which doesn't take queue manager details from you, only queue details, and that the queue manager is perhaps hard-coded within the code. It sends the message first to the queue on queue manager 1 and then to queue manager 2, but in your case queue manager 1 is down.
If the above is the case, then the application's code needs to be changed. You should add exception handling so that the code that sends the message to the second queue manager still executes even when the first attempt throws an error.

MQ (WebSphere 7) persist message to file system

How would I set up MQ so that every message received is immediately written to file system?
I have the "redbooks", but at least need someone at least point me to a chapter or heading in the book to figure it out.
We are a .NET shop. I have written C# via API to read the queue, and we currently use BizTalk MQ adapter. Our ultimate goal is to write same message to multiple directories in file system to "clone" the feed for our various test environments (DEV, STAGE, TRAINING, etc..). The problem with BizTalk is that when we consume the message, we map it at the same time to a new message, so the message is already changed, and we want the original raw message to be cloned, not the morphed one. Our vendors don't offer multiple copies of the feed, for example, they offer DEV and PROD, but we have 4 systems internally.
I suppose I could do a C# Windows Service to do it, but I would rather use built-in features of MQ if possible.
There is no configuration required. If the message is persistent, WMQ writes it to disk. However, I don't think that's going to help you, because the messages are not written as discrete files: there's no disk file to copy, and replication only works if the replicated QMgr is identical to the primary and is offline during the replication.
There are a number of solutions to this problem but as of WMQ V7, the easiest one is to use the built-in Pub/Sub functionality. This assumes that the messages are arriving over a QMgr-to-QMgr channel and landing on a queue where you then consume them.
In that case, it is possible to delete the queue and create an alias of the same name over a topic. You then create a new queue and define an administrative subscription that delivers messages on the topic into the new queue. Your app consumes from the new queue.
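A minimal MQSC sketch of that arrangement, with illustrative object names (FEED.IN is assumed to be the queue the incoming channel currently delivers to):

* topic to publish on, plus an alias with the old queue's name over it
DEFINE TOPIC('FEED.TOPIC') TOPICSTR('feed/raw') REPLACE
* (drain the old queue before deleting it)
DELETE QLOCAL('FEED.IN')
DEFINE QALIAS('FEED.IN') TARGET('FEED.TOPIC') TARGTYPE(TOPIC) REPLACE
* new queue and an administrative subscription that feeds it; the
* consuming app is repointed to read from FEED.IN.APP
DEFINE QLOCAL('FEED.IN.APP') REPLACE
DEFINE SUB('FEED.SUB.APP') TOPICOBJ('FEED.TOPIC') DEST('FEED.IN.APP') REPLACE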
When you need to send a feed to another QMgr or application, define a new subscription and point it at the new destination queue. Since this is Pub/Sub, MQ will replicate the original message as many times as there are subscriptions and the first application and its messages are not affected. If the destination you need to send to isn't accessible over MQ channels (perhaps DEV and QA are not connected, for example), you can deliver the messages to the new queue, use QLoad from SupportPac MO03 to write them to a file and then use another instance of QLoad to load them onto a different QMgr. If you wanted to move them in real time, you could set up the Q program from SupportPac MA01 to move them direct from the new subscription queue on QMgr1 to the destination queue on QMgr2. And you can replicate across as many systems as you need.
The SupportPacs main page is here.
If all you are using is the Redbooks, you might want to have a look at the Infocenters. Be sure to use the Infocenter matching the version of WMQ you are using.
WMQ V7.0 Infocenter
WMQ V7.1 Infocenter
WMQ V7.5 Infocenter

WebSphere MQ and High Availability

When I read about HA in WebSphere MQ I always come to the point where the best practice is to create two queue managers handling the same queue and use the out-of-the-box load balancing, so that when one is down, the other takes over its job.
Well, this is great, but what about the messages in the queue that belongs to the queue manager that went down? I mean, do these messages stay there (when the messages are persistent, of course) until the QM is up and running again?
Furthermore, is it possible to create common storage for these doubled queue managers? Then no message would have to wait for the QM to come back up, and every message would be delivered in the proper order. Is this correct?
WebSphere MQ provides different capabilities for HA, depending on your requirements. WebSphere MQ clustering uses parallelism to distribute load across multiple instances of a queue. This provides availability of the service but not for in-flight messages.
Hardware clustering and Multi-Instance Queue Manager (MIQM) are both designs using multiple instances of a queue manager that see a single disk image of that queue manager's state. These provide availability of in-flight messages but the service is briefly unavailable while the cluster fails over.
Using these in combination it is possible to provide recovery of in-flight messages as well as availability of the service across multiple queue instances.
In the hardware cluster model, the disk is mounted on only one server and the cluster software monitors for failure and swaps the disk, IP address and possibly other resources over to the secondary node. This requires a hardware cluster monitor such as PowerHA to manage the cluster.
The Multi-Instance QMgr is implemented entirely within WebSphere MQ and needs no other software. It works by having two running instances of the QMgr pointing to the same NFSv4 shared disk mount. Both instances compete for locks on the files, and the first one to acquire a lock becomes the active QMgr. Because there is no hardware cluster monitor to perform IP address takeover, this type of cluster will have multiple IP addresses. Any modern version of WMQ allows for this using a multi-instance CONNAME, where you supply a comma-separated list of IP addresses or DNS names. Client applications that previously used Client Channel Definition Tables (CCDT) to manage failover across multiple QMgrs will continue to work, and CCDT continues to be supported in current versions of WMQ.
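As a sketch, assuming a queue manager QM1, a channel called APP.SVRCONN and hosts nodea and nodeb (all illustrative), the client side of a multi-instance CONNAME can be defined like this; the client tries nodea first and fails over to nodeb:

* on the queue manager
DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
* client channel definition listing both instances' addresses
DEFINE CHANNEL('APP.SVRCONN') CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       QMNAME('QM1') CONNAME('nodea(1414),nodeb(1414)') REPLACE

The same comma-separated list can also be supplied in the MQSERVER variable shown earlier.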
Please see the Infocenter topic Using WebSphere MQ with high availability configurations for details of hardware cluster and MIQM support.
Client Channel Definition Table files are discussed in the Infocenter topic Client Channel Definition Table file.

How to monitor an existing queue from WebSphere MQ?

I have a .NET application that needs to monitor a queue in WebSphere MQ. I need to react to each message without impacting the current process. The client application can't explicitly send me the same message.
Can I read a message without removing it from the queue? Can I be notified of each message? Can I configure MQ to duplicate the current queue?
Is there another solution?
If you are using WMQ v7 then you can do this without any impact to the existing applications other than to change the queue name for one of them.
Currently the message producer and consumer use the same queue. In v7 of WMQ you can create an alias over a topic so that the message producer thinks it's a queue. Then you can create two administrative, durable subscriptions such that one points to the existing input queue and another points to a queue dedicated to your new application.
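A hedged MQSC sketch of that setup, mirroring the pub/sub pattern shown earlier (all object names here are assumptions, with APP.IN standing in for the queue both applications use today):

DEFINE TOPIC('APP.IN.TOPIC') TOPICSTR('app/in') REPLACE
* the producer keeps putting to the name APP.IN, now an alias over the topic
* (the original local queue is deleted and recreated as APP.IN.MAIN first)
DEFINE QALIAS('APP.IN') TARGET('APP.IN.TOPIC') TARGTYPE(TOPIC) REPLACE
* one durable subscription per consumer: the existing reader and your monitor
DEFINE QLOCAL('APP.IN.MAIN') REPLACE
DEFINE SUB('APP.SUB.MAIN') TOPICOBJ('APP.IN.TOPIC') DEST('APP.IN.MAIN') REPLACE
DEFINE QLOCAL('APP.IN.MON') REPLACE
DEFINE SUB('APP.SUB.MON') TOPICOBJ('APP.IN.TOPIC') DEST('APP.IN.MON') REPLACE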
Of course you are already using v7 since v6 goes out of service next year, right? You can upgrade the QMgr to v7 which enables this behavior while still using v6 client code for the apps.
If you are using WMQ v6 then the MirrorQ program might work for you.
You could change from using a queue to a durable topic and have both your reader and your browser subscribe to it.
You could also create a distribution list on your queue manager. A distribution list is used to send a copy of the same message to multiple queues. You would then have a processing queue and a browsing/monitoring queue.
