Remote Queue Manager backup using dmpmqcfg is not working - ibm-mq

While taking a backup of a remote queue manager using the dmpmqcfg command, I get "MQSC timed out waiting for a response from the command server." Below is the command I am using:
dmpmqcfg -m <Queue Manager> -r <remote qmgr> -x all -a -o mqsc > D:\RemoteQueueManagerBackup\RemoteQmgr.txt
Before running this command I created server and requester channels at both queue manager ends. When the above command is executed the channels come into running state, but the backup is not taken.
For a few queue managers it works fine and for a few it does not. I compared all the properties between a working queue manager and a failing one and both look the same.
Update
A transmission queue named after the partner queue manager is defined at both ends, e.g. for queue manager A the XMitQ is named B, and for queue manager B the XMitQ is named A. Server and requester channels are created. When the command is triggered, the channels come into running state.
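For illustration, the definitions look roughly like the following MQSC sketch (queue manager names A and B, the channel names and the host/port are placeholders, not the real values):
* On queue manager A (mirror the definitions on B, swapping the names)
DEFINE QLOCAL('B') USAGE(XMITQ) TRIGGER TRIGTYPE(FIRST) +
       INITQ('SYSTEM.CHANNEL.INITQ') TRIGDATA('A.TO.B') REPLACE
DEFINE CHANNEL('A.TO.B') CHLTYPE(SVR) TRPTYPE(TCP) +
       XMITQ('B') CONNAME('hostB(1414)') REPLACE
DEFINE CHANNEL('B.TO.A') CHLTYPE(RQSTR) TRPTYPE(TCP) +
       CONNAME('hostB(1414)') REPLACE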
Any errors returned from dmpmqcfg? - No.
Whether DLQ is enabled and a queue of the specified name defined? - Yes.
Presence or absence of messages on the DLQ at either end? - Messages are being stored in the remote queue manager's dead letter queue.
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)? - Channels are running automatically (triggering is enabled), so no.
Auth-related errors? The only error we are seeing is:
AMQ9544: Messages not put to destination queue.
Explanation: During the processing of channel "x.Y" one or more
messages could not be put to the destination queue and attempts were
made to put them to a dead-letter queue. The location of the queue is
2, where 1 is the local dead-letter queue and 2 is the remote
dead-letter queue
Action: Examine the contents of the DLQ. Each message is contained in
a structure that describes why the message was put to the queue, and
to where it was originally addressed. Also look at previous error
messages to see if the attempt to put messages to a dlq failed. The
program identifier (PID) of the processing program was '5608(10796)'.
Examined the remote dead letter queue; the reason the messages were put there is:
MQRC_PERSISTENCE_NOT_ALLOWED

You do not mention the details of the channels and their XMitQs. In order for the messages to get to the remote machine and the replies to get back, each QMgr needs to be able to resolve a path to the other. That means something must have the name of the remote QMgr. That something must be either the XMitQ itself or a QMgr alias that points to the XMitQ.
For example, you have <Queue Manager> and <remote qmgr>. The local QMgr may have a transmit queue named <remote qmgr>, which resolves that destination. Since you do not report any errors from dmpmqcfg, I will assume the messages are getting sent out OK.
That means the messages are not getting back. This may be because the remote QMgr's XMitQ has a name like <Queue Manager>.XMITQ instead of <Queue Manager>. This can be solved by defining a QMgr alias:
DEF QREMOTE('<Queue Manager>') +
XMITQ('<Queue Manager>.XMITQ') +
RNAME(' ') +
RQMNAME('<Queue Manager>') +
REPLACE
If this is in fact what's wrong, you should see some messages in the DLQ on the remote QMgr - assuming that one is defined and specified in the QMgr attributes.
If this does not solve the problem, please provide additional information, including (the MQSC sketch after this list shows one way to gather several of these):
Any errors returned from dmpmqcfg
Whether DLQ is enabled and a queue of the specified name defined
Presence or absence of messages on the DLQ at either end
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)
Platforms of the QMgrs involved
Any relevant entries from the error logs, including authorization errors (for example, mqm is a valid ID on Linux but not on Windows, so it would generate authorization errors in this scenario if the sending QMgr were Linux and the remote QMgr Windows)
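If it helps, here is a rough MQSC sketch (placeholder object names) of commands, run in runmqsc at each end, that gather several of these items:
* Is a DLQ configured, and does it currently hold messages?
DISPLAY QMGR DEADQ
DISPLAY QLOCAL('SYSTEM.DEAD.LETTER.QUEUE') CURDEPTH
* Channel state, and whether the channel runs when started manually
DISPLAY CHSTATUS('A.TO.B')
START CHANNEL('A.TO.B')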
Update responding to additional info in question
So it looks like you have at least a couple of issues. The one that you have discovered is that temporary dynamic queues do not take persistent messages. If you think about it, this makes perfect sense. If a message is persistent, this tells MQ to take all precautions against losing it. But a temporary dynamic queue is deleted when the last input handle is released, discarding any messages that it holds.
Any attempt to put a persistent message onto a temporary dynamic queue sends conflicting signals to MQ. Should it accept a persistent message that it knows will be implicitly deleted? Or should it not delete the temporary dynamic queue in order to preserve the messages? Rather than try to guess the user's intent, it simply disallows the action. Thus, your persistent reply messages arrive at the local QMgr, find a temporary dynamic queue, and then are diverted to a DLQ.
You have already found the solution to this problem. Well, one of the solutions to this problem, anyway -- alter DEFPSIST so that the messages are non-persistent. Another solution would be to use the client connection capabilities of dmpmqcfg to connect directly to the remote QMgrs rather than routing through a local QMgr.
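As a sketch of that second option only, and assuming a SVRCONN channel named SYSTEM.ADMIN.SVRCONN exists on the remote QMgr (substitute your own host, port and output path), dmpmqcfg can connect as a client directly to the remote QMgr:
dmpmqcfg -m <remote qmgr> -x all -a -o mqsc -c "DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('remotehost(1414)')" > D:\RemoteQueueManagerBackup\RemoteQmgr.txt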
As to the remaining few QMgrs, you need to run the same diagnostic again. Check for error messages, depth in DLQ at both ends, channels running, auth errors. Remember that the resource errors, auth errors, routing problems, etc. can occur at either end so look for error messages and misrouted messages at both QMgrs. Also, verify channels by sending messages to and from both QMgrs to a known queue such as SYSTEM.DEFAULT.LOCAL.QUEUE or an application queue.
Another useful technique when playing a game of "where's my message" is to trace the message flow by stopping the channels in both directions prior to sending the commands. You can then run dmpmqcfg and see depth on the outbound XMitQ to verify the commands were sent. (You will have to GET-enable the XMitQ if you want to browse the messages since the channel agent GET-disables it. This will let you verify their persistence, expiry values, etc.)
Assuming the commands look OK you start the outbound channel only and let the messages flow to the remote QMgr where they are processed. Since the return channel is still stopped, the replies stack up in the return XMitQ. You can view them there to determine things like their persistence, expiry, and return codes/results of the command. If they look OK, start the channel and then look for them on the local QMgr.
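As a rough MQSC sketch of that sequence (queue manager names A and B and the channel names are placeholders):
* On QMgr A: stop the outbound channel, run dmpmqcfg, then confirm the
* command messages are waiting on the outbound XMitQ
STOP CHANNEL('A.TO.B')
DISPLAY QLOCAL('B') CURDEPTH GET
* GET-enable the XMitQ if you want to browse the messages (for example
* with the amqsbcg sample), then release them by starting the channel
ALTER QLOCAL('B') GET(ENABLED)
START CHANNEL('A.TO.B')
* On QMgr B: with the return channel still stopped, the replies stack up
* on B's XMitQ back to A and can be inspected the same way
DISPLAY QLOCAL('A') CURDEPTH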
For the few QMgrs where you still have issues, you should easily be able to find out where the messages are getting lost or being discarded. Keep in mind that non-persistent messages are sent across a channel outside any units of work so if there is no valid destination (or they aren't authorized to it) they are discarded silently. This diagnostic method of trapping them on XMitQs isolates each step so that if they are being discarded you can find out where and why.

Related

What is the difference between remote, local and alias queues

Can somebody help me understand the basics of these 3 queues with an example? When do we use each of the 3?
Perhaps a simpler explanation: think of a local queue as a queue which exists on the queue manager on which it is defined; you can PUT messages to and GET messages from a local queue. A remote queue is like a pointer to a queue on another queue manager, which is usually on a different host. Messages can therefore be PUT to it (and they will usually arrive on a local queue at that remote host), but you cannot GET messages from a remote queue.
Simply put, a queue manager only ever hosts messages on local or transmission queues defined on that queue manager. If you want a message to reach another queue manager, you use definitions which tell the queue manager that the PUT is running on how to route the message to the destination queue manager; this ends up with a message on a transmission queue, which is then picked up and sent down a channel towards that destination. Alias queues are just an opportunity to use a different name for another queue. Remote queues are definitions on one queue manager with information about where the message should be routed.
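To make the three types concrete, here is a minimal MQSC sketch with made-up names (QMB is the remote queue manager and its name doubles as the XMitQ name):
* Local queue, hosted on this queue manager
DEFINE QLOCAL('APP.INPUT.QUEUE')
* Remote queue definition: a pointer to a queue hosted on QMB; a put to it
* resolves to the transmission queue named 'QMB'
DEFINE QLOCAL('QMB') USAGE(XMITQ)
DEFINE QREMOTE('APP.INPUT.REMOTE') RNAME('APP.INPUT.QUEUE') +
       RQMNAME('QMB') XMITQ('QMB')
* Alias queue: simply another name for an existing queue
DEFINE QALIAS('APP.INPUT.ALIAS') TARGET('APP.INPUT.QUEUE')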
MQ documentation:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
Another description:
https://www.scribd.com/doc/44887119/Different-Types-Queues-in-Websphere-MQ
A queue is known to a program as local if it is owned by the queue manager to which the program is connected; the queue is known as remote if it is owned by a different queue manager. The important difference between these two types of queue is that you can get messages only from local queues. (You can put messages on both types of queue.)
References:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_7.0.1/com.ibm.mq.csqzal.doc/fg10950_.htm
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.explorer.doc/e_queues.htm
An application is connected to a local queue manager; a queue hosted by that queue manager is a local queue for that application. A queue hosted by a remotely located queue manager is a remote queue for that application. Keep in mind that we always read messages from a local queue. A message put to a remote queue is routed to the target local queue via the remote queue definition on the local queue manager, through the transmission queue and the channel.

RabbitMQ Consumer Disconnect Event

Is there any way we can know when a consumer disconnects from a queue or when a queue is deleted?
The requirement is as follows:
I'm building a system in which multiple clients can subscribe to certain events from the system. All clients create their own queues and register themselves with the system using some sort of authentication. The system, as the events are generated, filters the events and forwards them to the clients that are eligible for them.
I have implemented a POC for most of it and it works well. An issue that I'm not able to fix is that, if a client simply disconnects from the queue (due to program termination or the like), the registration still exists and the system keeps trying to push messages to that client.
So we would like to be notified when a client disconnects or a queue gets deleted, so that we can remove that client's registration data and no longer push messages to it.
Have your publisher use confirms (aka publisher acknowledgements) and make each client queue exclusive and transient, so that only one client at a time consumes from a given queue and the queue is deleted after that client disconnects.
If you publish a message that is routed to only one queue and that queue is gone (assuming you use publisher confirms and publish the message with the mandatory flag set), the publisher will be notified that the message cannot be routed, with the message returned to it, so you can stop publishing messages to that client.
For details, see the "How Confirms Work" section in the RabbitMQ blog post "Introducing Publisher Confirms" and the official Confirms (aka Publisher Acknowledgements) documentation.

Intended use of Transmission Queue

This is a very basic question about IBM WebSphere MQ V7.
Regarding the transmission queue, my understanding is that it is only used with a remote queue definition that resides on the same queue manager. Therefore, if I want to put a message to the queue, I need to put it to the remote queue.
It is like this.
App --> Remote queue --> Transmission Queue
My question is:
Is it possible to put the message directly into transmission queue like this?
App --> Transmission Queue
--Modified on 2014.03.17 --
I found a way to put a message directly onto the transmission queue. I do not know whether this is common practice, but in order to do it I needed to prepend an MQXQH header to the message. I tried it and confirmed that it works. See the Infocenter reference here.
Do not ever put directly to a transmission queue. It is dangerous if you do not know what you are doing.
You should put your message to a remote queue. A remote queue is not the same as a local queue. A remote queue is simply a pointer to a queue on another queue manager.
Although it is possible to put messages directly on the XMitQ, there is considerable risk in allowing that to occur, so most admins will prevent applications from directly accessing that queue. As you have found, it is possible to construct a message with the transmission queue header (MQXQH) and, behind that, a normal message with the MQMD and payload. (This is, in fact, exactly how the MCA works.)
The problem here is that the QMgr does not check the values in the MQMD residing in the payload so you can put mqm as the MQMD.UserID and then address the message to the remote command queue and grant yourself admin access to that remote QMgr.
Security-conscious administrators typically use two security controls to prevent this. First, they disallow direct access to the XMitQ. That helps for outbound messages. More importantly, they set the MCAUSER of their RCVR/RQSTR/CLUSRCVR channels to a non-admin user ID that is not authorized to put messages onto any sensitive queues.
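As a sketch of those two controls in MQSC (channel, queue and group names here are made up, and exact commands vary by version and platform):
* Run the inbound channel under a non-administrative MCA user ID
ALTER CHANNEL('QMA.TO.QMB') CHLTYPE(RCVR) MCAUSER('nobody')
* Remove the application group's authority to put directly to the XMitQ
SET AUTHREC PROFILE('QMB') OBJTYPE(QUEUE) GROUP('appgroup') AUTHRMV(PUT)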
The other issue is, of course, that what you describe completely defeats WMQ's name resolution. By embedding routing into the app, you prevent the administrator from adjusting channel weights, cluster settings, failover and load distribution at the network level. Need to redistribute traffic? Redeploy the code. Not a good plan.
So for security reasons and because you paid a lot of money to get WMQ's reliability - much of which comes from dynamic addressing and name resolution features - coding apps to write directly to the XMitQ is strongly discouraged.
You should not be using the transmission queue directly. It is used by the message channel agent (MCA) as temporary storage when sending messages across to a remote queue manager.
This is distributed queuing, i.e. you put a message to Queue Manager A and want it routed to a local queue on Queue Manager B. So you define a reference on QM-A referring to the local queue on QM-B. This reference is the 'remote queue definition'.
The remote queue definition specifies the transmission queue name. The transmission queue is bound to the MCA, which in turn knows about the remote QM.

How does the DEADQ work for SVRCONN channel in MQ?

I've a question about the DEADQ in MQ. I know that the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or PUT-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is full right now, will the message sent by the client application go to the DEADQ, or will the put operation just return with a failure saying the queue is full?
If it works as the latter, does it mean the DEADQ is not used for client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. The best the QMgr can do is to send back a report message, if the app has requested one and specified a reply-to queue.
When an application is connected over a client channel the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that "correct action" does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
The one exception to this is when using JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified or the app is not authorized to it or if it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
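For reference, a small MQSC sketch (hypothetical queue names) of the backout settings that drive this behaviour:
* Backout queue for poison messages, plus the threshold the JMS classes check
DEFINE QLOCAL('APP.REQUEST.BACKOUT')
ALTER QLOCAL('APP.REQUEST') BOTHRESH(3) BOQNAME('APP.REQUEST.BACKOUT')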

Where is a message under syncpoint control stored prior to COMMIT?

With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
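You can watch this happen from MQSC; a sketch, assuming the message resolved to the cluster transmit queue:
* Depth includes the uncommitted message; UNCOM shows that uncommitted
* changes are pending, so the channel cannot see the message yet
DISPLAY QLOCAL('SYSTEM.CLUSTER.TRANSMIT.QUEUE') CURDEPTH
DISPLAY QSTATUS('SYSTEM.CLUSTER.TRANSMIT.QUEUE') TYPE(QUEUE) CURDEPTH UNCOM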
