Where is a message under syncpoint control stored prior to COMMIT? - ibm-mq

With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?

Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr: the message will resolve locally either to a user-defined transmit queue for an SDR or SVR channel, or to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
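You can watch this from MQSC. As a minimal sketch (assuming the destination is clustered, so the message resolves to SYSTEM.CLUSTER.TRANSMIT.QUEUE), the depth already counts the uncommitted message, while UNCOM shows that it is still inside a unit of work until MQCMIT is issued:
* Depth already counts the message that was PUT under syncpoint
DISPLAY QLOCAL('SYSTEM.CLUSTER.TRANSMIT.QUEUE') CURDEPTH
* UNCOM(YES) indicates uncommitted puts or gets are pending on the queue
DISPLAY QSTATUS('SYSTEM.CLUSTER.TRANSMIT.QUEUE') TYPE(QUEUE) CURDEPTH UNCOM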

Related

IBM MQ transmit queue messages not placed on dead letter queue

I currently have an XMIT queue with an SDR channel pointed to a QREMOTE. In a scenario where either the local or the remote channel is down, I would like to forward the messages on the XMIT queue to the DLQ. It appears that in this scenario, messages remain on the XMIT queue until the channel is re-established. Is it possible to do this?
I'm thinking not. From an IBM Redpaper: http://www.redbooks.ibm.com/redpapers/pdfs/redp0021.pdf
A transmission queue is a local queue with the USAGE(XMITQ) attribute
configured. It is a staging point for messages that are destined for a
remote queue manager. Typically, there is one transmission queue for
each remote queue manager to which the local queue manager might
connect directly. If the destination is unavailable, messages build up
on the transmission queue until the connection can be successfully
completed. Transmission queues are transparent to the application.
When an application opens a remote queue, the queue manager
internally creates a reference to the relevant transmission queue and
messages are put there.
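For context, a minimal sketch of the kind of setup described in the question (all object names here are illustrative). Neither the XMITQ nor the SDR channel has an attribute that diverts messages to the DLQ while the channel is down; they simply accumulate on the XMITQ until the channel runs again:
* Transmission queue named for the remote queue manager, and the SDR channel that drains it
DEFINE QLOCAL('QM2') USAGE(XMITQ)
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') XMITQ('QM2')
* While 'QM1.TO.QM2' is stopped or retrying, messages simply build up on 'QM2';
* there is no queue or channel attribute that reroutes them to the DLQ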

Can channels send a message to different queues?

I was reading some information about the difference between message channels and message queues:
I understand that the channel is used for connecting to a queue
manager and not to a queue.
So can a channel retrieve/send messages to different queues, or just to one particular queue? When a producer needs to place a message on a queue, it specifies the name of the queue and the queue manager; but if that information is specified by the producer, the channel does not need to know it, right?
When the messaging style is Publish/Subscribe, is a sender/receiver channel always used?
A message channel connects together two queue managers. There are various different pairs of channel types that have slightly different behaviours, but all those types which send from one queue manager to the other are the same from the perspective of your question. For the rest of this answer I will use the SENDER-RECEIVER pair.
A SENDER channel will ALWAYS read from one queue - a transmission queue. It is named on the SENDER channel definition. The transmission queue is a safe storage area for the message until it is successfully transmitted to the target queue manager.
An application connected to the sending queue manager can put messages to many different queues on the target queue manager and they will all, initially, be stored on the transmission queue.
This is possible because the queue manager adds a special header (called a transmission header - MQXQH) to the front of the message while it resides on the transmission queue. This header contains the target queue name and the target queue manager name as provided originally by the message producer. The channel does not know this information, it is provided by the producer.
Once the channel has moved the message across the network to the target queue manager, the RECEIVER channel removes the transmission header and uses the data in it, the queue name and queue manager name, to put the message to the appropriate queue.
In this way a single channel pair can deliver messages to many different queues.
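As a minimal sketch of that behaviour (object names are illustrative, not from the question), two remote queue definitions resolve to the same transmission queue, so a single SENDER-RECEIVER pair carries messages for both target queues; the MQXQH on each message tells the RECEIVER where to put it:
* On QM1: two destinations on QM2 that both resolve to the transmission queue 'QM2'
DEFINE QLOCAL('QM2') USAGE(XMITQ)
DEFINE QREMOTE('ORDERS') RNAME('ORDERS.IN') RQMNAME('QM2') XMITQ('QM2')
DEFINE QREMOTE('PAYMENTS') RNAME('PAYMENTS.IN') RQMNAME('QM2') XMITQ('QM2')
* One SENDER reads the XMITQ; the matching RECEIVER on QM2 puts each message
* to the queue named in its transmission header
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') XMITQ('QM2')
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(RCVR) TRPTYPE(TCP)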

Why To Use Transmission Queue

Why do we use a transmission queue to store messages when sending a message to a remote queue, when we could also store the message in a local queue?
Creating a transmission queue takes memory and space.
You need to learn more about MQ. A good place to start is the MQ Primer.
You will learn that a transmission queue (aka XMITQ) is a local queue that is used by the MCA (Message Channel Agent) to transfer messages from the local queue manager to the remote queue manager. Messages should only be in the XMITQ very briefly.
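A minimal sketch of what that looks like in MQSC (queue and QMgr names are illustrative): the XMITQ is nothing more than a local queue with USAGE(XMITQ), and while the channel is healthy the MCA keeps it open for input and drains it almost immediately:
* An XMITQ is just a local queue with one attribute changed
DEFINE QLOCAL('QM2') USAGE(XMITQ)
* While the channel runs, the MCA has it open (IPPROCS > 0) and depth stays near zero
DISPLAY QSTATUS('QM2') TYPE(QUEUE) IPPROCS CURDEPTH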

Remote Queue Manager backup using dmpmqcfg is not working

While taking a backup of a remote queue manager using the dmpmqcfg command, I am getting "MQSC timed out waiting for a response from the command server". Below is the command I am using.
dmpmqcfg -m <Queue Manager> -r <remote qmgr> -x all -a -o mqsc > D:\RemoteQueueManagerBackup\RemoteQmgr.txt
Before running this command I created server and requester channels at both queue manager ends; when the above command is executed the channels come into running state, but the backup is still not taken.
For a few queue managers it works fine, and for a few it does not. I compared all the properties between a working queue manager and the others, and both look the same.
Update
An XMITQ named for the remote queue manager is defined at both ends, e.g. for queue manager A the XMITQ is named B, and for queue manager B the XMITQ is named A. Server and requester channels are created. When the command is triggered the channels come into running state.
Any errors returned from dmpmqcfg? - No.
Whether DLQ is enabled and a queue of the specified name defined? - Yes.
Presence or absence of messages on the DLQ at either end? - Messages are being stored in the remote queue manager's dead letter queue.
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)? - Channels come into running state automatically (triggering is enabled); they are not started manually.
Auth-related errors? - The only error we are seeing is:
AMQ9544: Messages not put to destination queue.
Explanation: During the processing of channel "x.Y" one or more
messages could not be put to the destination queue and attempts were
made to put them to a dead-letter queue. The location of the queue is
2, where 1 is the local dead-letter queue and 2 is the remote
dead-letter queue
Action: Examine the contents of the DLQ. Each message is contained in
a structure that describes why the message was put to the queue, and
to where it was originally addressed. Also look at previous error
messages to see if the attempt to put messages to a dlq failed. The
program identifier (PID) of the processing program was '5608(10796)'
Examined the remote dead letter queue; the reason on the messages is:
MQRC_PERSISTENCE_NOT_ALLOWED
You do not mention the details of the channels and their XMitQs. In order for the messages to get to the remote machine and the replies to get back each QMgr needs to be able to resolve a path to the other. That means something must have the name of the remote QMgr. That something must either be the XMitQ itself or a QMgr alias that points to the XMitQ.
For example you have <Queue Manager> and <remote qmgr>. The local QMgr may have a transmit queue named <remote qmgr> which resolves that destination. Since you do not report any errors from dmpmqcfg, I will assume the messages are getting sent out OK.
That means the messages are not getting back. This may be because the remote QMgr's XMitQ has a name like <Queue Manager>.XMITQ instead of <Queue Manager>. This can be solved by defining a QMgr alias:
DEF QREMOTE('<Queue Manager>') +
XMITQ('<Queue Manager>.XMITQ') +
RNAME(' ') +
RQMNAME('<Queue Manager>') +
REPLACE
If this is in fact what's wrong, you should see some messages in the DLQ on the remote QMgr - assuming that one is defined and specified in the QMgr attributes.
If this does not solve the problem, please provide additional information including:
Any errors returned from dmpmqcfg
Whether DLQ is enabled and a queue of the specified name defined
Presence or absence of messages on the DLQ at either end
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)
Platforms of the QMgrs involved
Any relevant entries from the error logs, including auths errors (for example, mqm is a valid ID on Linux but not on Windows so would generate auths errors in this scenario if the sending QMgr were Linux and the remote QMgr Windows)
Update responding to additional info in question
So it looks like you have at least a couple of issues. The one that you have discovered is that temporary dynamic queues do not take persistent messages. If you think about it, this makes perfect sense. If a message is persistent, this tells MQ to take all precautions against losing it. But a temporary dynamic queue is deleted when the last input handle is released, discarding any messages that it holds.
Any attempt to put a persistent message onto a temporary dynamic queue sends conflicting signals to MQ. Should it accept a persistent message that it knows will be implicitly deleted? Or should it not delete the temporary dynamic queue in order to preserve the messages? Rather than try to guess the user's intent, it simply disallows the action. Thus, your persistent reply messages arrive at the local QMgr, find a temporary dynamic queue, and then are diverted to a DLQ.
You have already found the solution to this problem. Well, one of the solutions to this problem, anyway -- alter DEFPSIST so that the messages are non-persistent. Another solution would be to use the client connection capabilities of dmpmqcfg to connect directly to the remote QMgrs rather than routing through a local QMgr.
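If you take the client-connection route, the invocation might look something like the sketch below; the SVRCONN channel name, host, and port are assumptions you would replace with your own values, and -c tells dmpmqcfg to connect directly to the remote QMgr as a client instead of routing commands through a local QMgr:
dmpmqcfg -m <remote qmgr> -x all -a -o mqsc -c "DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('remotehost(1414)')" > D:\RemoteQueueManagerBackup\RemoteQmgr.txt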
As to the remaining few QMgrs, you need to run the same diagnostic again. Check for error messages, depth in DLQ at both ends, channels running, auth errors. Remember that the resource errors, auth errors, routing problems, etc. can occur at either end so look for error messages and misrouted messages at both QMgrs. Also, verify channels by sending messages to and from both QMgrs to a known queue such as SYSTEM.DEFAULT.LOCAL.QUEUE or an application queue.
Another useful technique when playing a game of "where's my message" is to trace the message flow by stopping the channels in both directions prior to sending the commands. You can then run dmpmqcfg and see depth on the outbound XMitQ to verify the commands were sent. (You will have to GET-enable the XMitQ if you want to browse the messages since the channel agent GET-disables it. This will let you verify their persistence, expiry values, etc.)
Assuming the commands look OK you start the outbound channel only and let the messages flow to the remote QMgr where they are processed. Since the return channel is still stopped, the replies stack up in the return XMitQ. You can view them there to determine things like their persistence, expiry, and return codes/results of the command. If they look OK, start the channel and then look for them on the local QMgr.
For the few QMgrs where you still have issues, you should easily be able to find out where the messages are getting lost or being discarded. Keep in mind that non-persistent messages are sent across a channel outside any units of work so if there is no valid destination (or they aren't authorized to it) they are discarded silently. This diagnostic method of trapping them on XMitQs isolates each step so that if they are being discarded you can find out where and why.
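A minimal sketch of that trapping technique in MQSC (channel and queue names are illustrative); browse the trapped messages with a tool such as the amqsbcg sample, then restart the channels:
* Stop the outbound channel so the command messages stay on the XMITQ
STOP CHANNEL('A.TO.B')
* ... run dmpmqcfg, then confirm the commands reached the XMITQ
DISPLAY QLOCAL('B') CURDEPTH
* The channel agent GET-inhibits the XMITQ; re-enable GET before browsing
ALTER QLOCAL('B') GET(ENABLED)
* When done, let the messages flow again
START CHANNEL('A.TO.B')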

How does the DEADQ work for SVRCONN channel in MQ?

I've a question about the DEADQ in MQ. I know that the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is full right now, will the message sent by the client application go to the DEADQ, or will the put operation just return a failure saying the queue is full?
If it works the latter way, does that mean the DEADQ is not used for client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. Best the QMgr can do is to send back a report message, if the app has requested it, and if the app specified a reply-to queue.
When an application is connected over a client channel the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that "correct action" does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
The one exception to this is when using JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified or the app is not authorized to it or if it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
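For completeness, the backout threshold and backout queue that the JMS classes consult are ordinary attributes of the input queue. A minimal sketch, with illustrative queue names:
* Queue the JMS classes move a poison message to once its backout count exceeds BOTHRESH
DEFINE QLOCAL('APP.QUEUE.BACKOUT')
ALTER QLOCAL('APP.QUEUE') BOTHRESH(3) BOQNAME('APP.QUEUE.BACKOUT')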
