IBM MQ transmit queue messages not placed on dead letter queue

I currently have an XMIT queue served by an SDR channel, with a QREMOTE resolving to it. In a scenario where either the local or the remote channel is down, I would like to forward the messages on the XMIT queue to the DLQ. It appears that in this scenario, messages remain on the XMIT queue until the channel is reestablished. Is it possible to do this?

I'm thinking not. From an IBM Redpaper: http://www.redbooks.ibm.com/redpapers/pdfs/redp0021.pdf
A transmission queue is a local queue with the USAGE(XMITQ) attribute configured. It is a staging point for messages that are destined for a remote queue manager. Typically, there is one transmission queue for each remote queue manager to which the local queue manager might connect directly. If the destination is unavailable, messages build up on the transmission queue until the connection can be successfully completed. Transmission queues are transparent to the application. When an application opens a remote queue, the queue manager internally creates a reference to the relevant transmission queue and messages are put there.
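To see this behaviour for yourself, here is a minimal sketch using the IBM MQ classes for Java (the queue manager and transmission queue names are placeholders, since the question does not give any, and a bindings-mode connection is assumed). It simply inquires the current depth of the XMITQ, which is where the messages sit while the channel is down; the queue manager will not move them to the DLQ on its own.

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class XmitqDepthCheck {
    public static void main(String[] args) throws MQException {
        // Bindings-mode connection to the local queue manager (placeholder name).
        MQQueueManager qmgr = new MQQueueManager("QM1");

        // Open the transmission queue for inquiry only; we are not getting
        // messages, just observing how many are waiting for the channel.
        MQQueue xmitq = qmgr.accessQueue("REMOTE.QM.XMITQ",
                CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING);

        int depth = xmitq.getCurrentDepth();
        System.out.println("Messages waiting on the XMITQ: " + depth);
        // While the SDR channel is stopped this depth grows; forwarding the
        // messages to the DLQ would have to be done by an administrator or
        // an application, not by the queue manager itself.

        xmitq.close();
        qmgr.disconnect();
    }
}
```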

Related

Routing messages between queue managers using WebSphere Message Broker

How can I use a WebSphere Message Broker instance to route messages between queues residing in two queue managers? The message broker instance can only be associated with one queue manager at creation time. So I create an MQInput node and messages are put on the specific source queue. My concern is how to route these messages to a second queue residing in another queue manager using the same broker instance. How? I am using WebSphere Message Broker version 8.0.0.8. Not yet into IIB.
Below is a simple and efficient way of doing it.
Suppose your broker is on QM1, you have a local queue on QM2 named LQ_QM2, and you want the messages to go to LQ_QM2. Follow these steps:
1. At QM1, create a local queue with usage 'Transmission'. Let us name this transmission queue "QM2".
2. At QM1, create a sender channel named "QM1.QM2" with the proper connection name (the host(port) of the target queue manager, e.g. 10.1.5.2(1144)) and set its transmission queue to QM2 (the one we created in step 1).
3. At QM2, create a receiver channel named "QM1.QM2".
4. At QM1, create a remote queue definition; let's name it RQ_LQ_QM2. Set its remote queue name to LQ_QM2, its transmission queue to QM2, and its remote queue manager name to QM2.
The messages which you want to send to the queue LQ_QM2 can now be written by the broker to RQ_LQ_QM2 on QM1 itself, as sketched below.
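In the message flow itself you would typically just point an MQOutput node at RQ_LQ_QM2. As a standalone illustration of what that put looks like, here is a rough sketch using the IBM MQ classes for Java (a bindings-mode connection to QM1 is assumed; the object names come from the steps above):

```java
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PutViaRemoteQueueDef {
    public static void main(String[] args) throws Exception {
        // Bindings-mode connection to the broker's queue manager QM1.
        MQQueueManager qm1 = new MQQueueManager("QM1");

        // Open the remote queue definition. QM1 quietly resolves it to the
        // transmission queue "QM2", and the sender channel does the delivery.
        MQQueue out = qm1.accessQueue("RQ_LQ_QM2", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.writeString("payload destined for LQ_QM2 on QM2");
        out.put(msg, new MQPutMessageOptions());

        out.close();
        qm1.disconnect();
    }
}
```

The application (or broker flow) never mentions QM2's host or channel; the remote queue definition, transmission queue and channel pair handle the routing.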
If you can't do the above MQ configuration and must use only Message Broker capabilities, then the way to do it in WMB 8 would be to use Java and write MQ client code using the MQ API libraries. You would then establish a remote connection to the remote queue manager over a SVRCONN channel and put messages on the remote queue manager's queue.
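A rough sketch of that client-code alternative follows (the host, port and SVRCONN channel name are placeholders; use whatever is actually defined on QM2):

```java
import java.util.Hashtable;

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class DirectClientPut {
    public static void main(String[] args) throws Exception {
        // Client-mode connection properties for the remote queue manager QM2.
        Hashtable<String, Object> props = new Hashtable<>();
        props.put(CMQC.HOST_NAME_PROPERTY, "10.1.5.2");   // placeholder host
        props.put(CMQC.PORT_PROPERTY, 1414);              // placeholder port
        props.put(CMQC.CHANNEL_PROPERTY, "QM2.SVRCONN");  // placeholder channel

        MQQueueManager qm2 = new MQQueueManager("QM2", props);

        // Put straight onto the local queue on the remote queue manager.
        MQQueue lq = qm2.accessQueue("LQ_QM2", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.writeString("payload written directly to LQ_QM2");
        lq.put(msg);   // default put options

        lq.close();
        qm2.disconnect();
    }
}
```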

What is the difference between remote, local and alias queues

Can somebody help me understand the basics of these 3 queues, with examples? When do we use each of the 3?
Perhaps a more simple explanation: think of a local queue as a queue which exists on the queue manager on which it is defined; you can PUT messages to and GET messages from a local queue. A remote queue is like a pointer to a queue on another queue manager, which is usually on a different host. Messages can therefore be PUT to it (and they will usually arrive on a local queue at that remote host), but you cannot GET messages from a remote queue.
Simply put, a queue manager only ever hosts messages on local or transmission queues defined on that queue manager. If you want a message to go to another queue manager, you use definitions that tell the queue manager on which the 'put' is running how to route the message to the destination queue manager; this ends up with a message on a transmission queue, which is then picked up and sent down a channel towards that destination. Alias queues are just an opportunity to use a different name for another queue. Remote queues are definitions on one queue manager with information about where the message should be routed.
MQ documentation:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
Another description:
https://www.scribd.com/doc/44887119/Different-Types-Queues-in-Websphere-MQ
A queue is known to a program as local if it is owned by the queue manager to which the program is connected; the queue is known as remote if it is owned by a different queue manager. The important difference between these two types of queue is that you can get messages only from local queues. (You can put messages on both types of queue.)
References:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzal.doc/fg10950_.htm
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
If an application is connected to a queue manager that hosts a queue, that queue is a local queue for the application. If the queue is hosted by a queue manager located remotely, it is a remote queue for the application. Keep in mind that we always read messages from a local queue. A message put on a remote queue is routed to the target local queue via the remote queue definition, the transmission queue and the channel.
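As a small illustration of the put/get distinction, here is a hedged sketch using the IBM MQ classes for Java. The queue names APP.LOCAL.Q, APP.REMOTE.Q and APP.ALIAS.Q are made up for the example and are assumed to be defined as a local queue, a remote queue definition and an alias queue respectively on a queue manager named QM1:

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class LocalRemoteAliasDemo {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");   // bindings mode

        // 1. Local queue: both PUT and GET are allowed.
        MQQueue local = qmgr.accessQueue("APP.LOCAL.Q",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_INPUT_AS_Q_DEF);
        local.close();

        // 2. Remote queue definition: a PUT is fine, the message is routed on...
        MQQueue remote = qmgr.accessQueue("APP.REMOTE.Q", CMQC.MQOO_OUTPUT);
        MQMessage msg = new MQMessage();
        msg.writeString("routed to the other queue manager");
        remote.put(msg);
        remote.close();

        // ...but opening it for input is rejected, because you can never GET
        // from a remote queue definition.
        try {
            qmgr.accessQueue("APP.REMOTE.Q", CMQC.MQOO_INPUT_AS_Q_DEF);
        } catch (MQException e) {
            System.out.println("Open for input failed, reason " + e.reasonCode);
        }

        // 3. Alias queue: just another name; it behaves like its target queue.
        MQQueue alias = qmgr.accessQueue("APP.ALIAS.Q", CMQC.MQOO_OUTPUT);
        alias.close();

        qmgr.disconnect();
    }
}
```

The failed open for input typically reports reason code 2045 (MQRC_OPTION_NOT_VALID_FOR_TYPE), but the important point is simply that a GET is never valid against a remote queue definition.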

Why To Use Transmission Queue

Why do we use a transmission queue to store messages when sending a message to a remote queue, when we could also store the message in a local queue? Creating a transmission queue takes memory and space.
You need to learn more about MQ. A good place to start is the MQ Primer.
You will learn that a transmission queue (aka XMITQ) is a local queue that is used by the MCA (Message Channel Agent) to transfer messages from the local queue manager to the remote queue manager. Messages should only be in the XMITQ very briefly.

Messages from a remote queue to another remote queue

We have 2 MQ queue managers and one WAS server. We need to send messages from QM01 to QM02 and then from QM02 to the WAS server.
To do this we have built a sender channel between QM01 and QM02; the message is put on a remote queue on QM01 which resolves to another remote queue on QM02 that points to the WAS Service Integration Bus queue.
Is there any harm in sending messages as follows?
Remote Queue on QM01 ==> Remote Queue on QM02 ==> Queue on SI Bus on WAS
Yes, this is a perfectly valid method of hopping messages from one queue manager to another, especially if the queue managers are not clustered. There is no harm to the message or the environment with this approach, though it does have a little more administrative overhead than a cluster-based solution.

Where is a message under syncpoint control stored prior to COMMIT?

With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
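To make the sequence concrete, here is a minimal sketch using the IBM MQ classes for Java (the queue manager name and the remote queue definition APP.REMOTE.Q are placeholders; the definition is assumed to resolve to a transmit queue as described above):

```java
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SyncpointPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");   // bindings mode

        // Open the remote queue definition; the message will actually land on
        // whichever transmit queue it resolves to.
        MQQueue q = qmgr.accessQueue("APP.REMOTE.Q",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.writeString("in flight until MQCMIT");

        // Put under syncpoint: the message is written to the transaction log
        // (and, if persistent, the queue file), and the queue depth goes up,
        // but the channel agent cannot see it yet.
        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT;
        q.put(msg, pmo);

        // ... other work in the same unit of work ...

        qmgr.commit();   // MQCMIT: the message becomes available to the channel
        // (qmgr.backout() would discard it instead)

        q.close();
        qmgr.disconnect();
    }
}
```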
