I have a question about the DEADQ in MQ. I know that the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is full right now, will the message sent by the client application go to the DEADQ, or will the put operation just return a failure saying the queue is full?
If it works as the latter, does that mean the DEADQ is not used in client-server mode, i.e. through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. The best the QMgr can do is to send back a report message, if the app has requested it and has specified a reply-to queue.
When an application is connected over a client channel, the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that action does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
The one exception to this is when using the JMS classes. These will move a poison message (one whose backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified, or the app is not authorized to it, or it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
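To illustrate the client-side behaviour described above, here is a minimal sketch using the MQ classes for Java (the host, channel, queue manager and queue names are made-up placeholders): the put over the SVRCONN connection simply fails with a queue-full reason code and the application decides what to do next; nothing is routed to the DLQ.

import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class QueueFullDemo {
    public static void main(String[] args) throws Exception {
        // Client (SVRCONN) connection details; all placeholders.
        MQEnvironment.hostname = "mqhost.example.com";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "APP.SVRCONN";

        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue queue = qmgr.accessQueue("APP.TARGET.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.writeString("hello");
        try {
            queue.put(msg, new MQPutMessageOptions());
        } catch (MQException e) {
            if (e.reasonCode == CMQC.MQRC_Q_FULL) {
                // The QMgr reports the condition straight back to the client.
                // The app decides what happens next: retry later, raise an alert, etc.
                System.err.println("Destination queue is full, reason " + e.reasonCode);
            } else {
                throw e;
            }
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }
}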
Related
While taking a backup of a remote queue manager using the dmpmqcfg command I am getting "MQSC timed out waiting for a response from the command server." Below is the command I am using.
dmpmqcfg -m <Queue Manager> -r <remote qmgr> -x all -a -o mqsc > D:\RemoteQueueManagerBackup\RemoteQmgr.txt
Before running this command I created server and requester channels at both queue manager ends. When the above command is executed the channels go into running state, but I am not able to take the backup.
For a few queue managers it works fine and for a few it does not. I checked all the properties between a working queue manager and the others; both look the same.
Update
An XMitQ named after the remote queue manager is defined at both ends, e.g. queue manager A has an XMitQ named B, and queue manager B has an XMitQ named A. Server and requester channels are created. When the command is triggered the channels go into running state.
Any Errors returned from dmpmqcfg? --- No.
Whether DLQ is enabled and a queue of the specified name defined? - Yes
Presence or absence of messages on the DLQ at either end? - Messages are getting stored in the remote queue manager's dead letter queue.
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)? - Channels come into running state automatically (triggering is enabled), so no manual start is needed.
Auth related errors? The only error we are seeing is:
AMQ9544: Messages not put to destination queue.
Explanation: During the processing of channel "x.Y" one or more
messages could not be put to the destination queue and attempts were
made to put them to a dead-letter queue. The location of the queue is
2, where 1 is the local dead-letter queue and 2 is the remote
dead-letter queue
Action: Examine the contents of the DLQ. Each message is contained in
a structure that describes why the message was put to the queue, and
to where it was originally addressed. Also look at previous error
messages to see if the attempt to put messages to a dlq failed. The
program identifier (PID) of the processing program was '5608(10796)'.
Examined the remote dead letter queue. The reason on the messages is:
MQRC_PERSISTENCE_NOT_ALLOWED
You do not mention the details of the channels and their XMitQs. In order for the messages to get to the remote machine and the replies to get back each QMgr needs to be able to resolve a path to the other. That means something must have the name of the remote QMgr. That something must either be the XMitQ itself or a QMgr alias that points to the XMitQ.
For example you have <Queue Manager> and <remote qmgr>. The local QMgr may have a transmit queue named <remote qmgr> which resolves that destination. Since you do not report any errors from dmpmqcfg, I will assume the messages are getting sent out OK.
That means the messages are not getting back. This may be because the remote QMgr's XMitQ has a name like <Queue Manager>.XMITQ instead of <Queue Manager>. This can be solved by defining a QMgr alias:
DEF QREMOTE('<Queue Manager>') +
XMITQ('<Queue Manager>.XMITQ') +
RNAME(' ') +
RQMNAME('<Queue Manager>') +
REPLACE
If this is in fact what's wrong, you should see some messages in the DLQ on the remote QMgr - assuming that one is defined and specified in the QMgr attributes.
If this does not solve the problem, please provide additional information including:
Any errors returned from dmpmqcfg
Whether DLQ is enabled and a queue of the specified name defined
Presence or absence of messages on the DLQ at either end
Whether channels between the two QMgrs run when manually started (this can be a triggering problem instead of name resolution)
Platforms of the QMgrs involved
Any relevant entries from the error logs, including auths errors (for example, mqm is a valid ID on Linux but not on Windows so would generate auths errors in this scenario if the sending QMgr were Linux and the remote QMgr Windows)
Update responding to additional info in question
So it looks like you have at least a couple of issues. The one that you have discovered is that temporary dynamic queues do not take persistent messages. If you think about it, this makes perfect sense. If a message is persistent, this tells MQ to take all precautions against losing it. But a temporary dynamic queue is deleted when the last input handle is released, discarding any messages that it holds.
Any attempt to put a persistent message onto a temporary dynamic queue sends conflicting signals to MQ. Should it accept a persistent message that it knows will be implicitly deleted? Or should it not delete the temporary dynamic queue in order to preserve the messages? Rather than try to guess the user's intent, it simply disallows the action. Thus, your persistent reply messages arrive at the local QMgr, find a temporary dynamic queue, and then are diverted to a DLQ.
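If you want to see that behaviour in isolation, here is a minimal sketch with the MQ classes for Java (the queue manager name and dynamic queue prefix are placeholders, and it assumes SYSTEM.DEFAULT.MODEL.QUEUE still has its default DEFTYPE of TEMPDYN): it builds a temporary dynamic queue and attempts a persistent put, which the queue manager rejects.

import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class TempDynPersistenceDemo {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");   // bindings connection; name is a placeholder

        // Opening the model queue creates a temporary dynamic queue.
        MQQueue tempQ = qmgr.accessQueue("SYSTEM.DEFAULT.MODEL.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_INPUT_EXCLUSIVE,
                null,              // default queue manager name
                "DEMO.REPLY.*",    // prefix for the generated dynamic queue name
                null);             // default alternate user ID

        MQMessage msg = new MQMessage();
        msg.persistence = CMQC.MQPER_PERSISTENT;   // this is what gets rejected
        msg.writeString("will not be accepted");
        try {
            tempQ.put(msg, new MQPutMessageOptions());
        } catch (MQException e) {
            // Expect the persistence-not-allowed reason code seen in the question:
            // a temporary dynamic queue cannot hold persistent messages.
            System.err.println("Put failed with reason code " + e.reasonCode);
        } finally {
            tempQ.close();     // the temporary dynamic queue is deleted here
            qmgr.disconnect();
        }
    }
}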
You have already found the solution to this problem. Well, one of the solutions to this problem, anyway -- alter DEFPSIST so that the messages are non-persistent. Another solution would be to use the client connection capabilities of dmpmqcfg to connect directly to the remote QMgrs rather than routing through a local QMgr.
As to the remaining few QMgrs, you need to run the same diagnostic again. Check for error messages, depth in DLQ at both ends, channels running, auth errors. Remember that the resource errors, auth errors, routing problems, etc. can occur at either end so look for error messages and misrouted messages at both QMgrs. Also, verify channels by sending messages to and from both QMgrs to a known queue such as SYSTEM.DEFAULT.LOCAL.QUEUE or an application queue.
Another useful technique when playing a game of "where's my message" is to trace the message flow by stopping the channels in both directions prior to sending the commands. You can then run dmpmqcfg and see depth on the outbound XMitQ to verify the commands were sent. (You will have to GET-enable the XMitQ if you want to browse the messages since the channel agent GET-disables it. This will let you verify their persistence, expiry values, etc.)
Assuming the commands look OK you start the outbound channel only and let the messages flow to the remote QMgr where they are processed. Since the return channel is still stopped, the replies stack up in the return XMitQ. You can view them there to determine things like their persistence, expiry, and return codes/results of the command. If they look OK, start the channel and then look for them on the local QMgr.
For the few QMgrs where you still have issues, you should easily be able to find out where the messages are getting lost or being discarded. Keep in mind that non-persistent messages are sent across a channel outside any units of work so if there is no valid destination (or they aren't authorized to it) they are discarded silently. This diagnostic method of trapping them on XMitQs isolates each step so that if they are being discarded you can find out where and why.
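If you would rather inspect the trapped messages programmatically than with a browse tool, a sketch along these lines with the MQ classes for Java (the queue manager and XMitQ names are placeholders, and it assumes you have GET-enabled the XMitQ and stopped the channel as described above) prints the persistence, expiry and format of each message sitting on the XMitQ:

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BrowseXmitq {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QMA");    // placeholder
        MQQueue xmitq = qmgr.accessQueue("QMB",             // placeholder XMitQ name
                CMQC.MQOO_BROWSE | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_BROWSE_NEXT | CMQC.MQGMO_NO_WAIT;

        try {
            while (true) {
                MQMessage msg = new MQMessage();
                xmitq.get(msg, gmo);   // browse only, nothing is removed
                System.out.printf("persistence=%d expiry=%d format=%s%n",
                        msg.persistence, msg.expiry, msg.format);
            }
        } catch (MQException e) {
            if (e.reasonCode != CMQC.MQRC_NO_MSG_AVAILABLE) {
                throw e;   // anything other than "no more messages" is a real error
            }
        } finally {
            xmitq.close();
            qmgr.disconnect();
        }
    }
}

Note that each message browsed here starts with the MQXQH transmission queue header, so the format shown is that of the header rather than of the application payload.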
Is it possible to configure the topic to store a copy of just the last message and send this to new connections without knowing client identifiers or other info?
Update:
From the info provided by Shashi I found two pages that describe a use case similar to mine (applied to stock prices), using a retroactive consumer and a subscription recovery policy. However, I'm not getting the desired behaviour. What I currently do is:
Include the following lines in the ActiveMQ configuration, in the policyEntry for topic=">":
<subscriptionRecoveryPolicy>
<fixedCountSubscriptionRecoveryPolicy maximumSize="1"/>
</subscriptionRecoveryPolicy>
Add consumer.retroactive=true to the URL used to connect to the broker (using activemq-cpp).
Set the consumer as durable. (I strongly suspect this is not what I want, since I only need the last message, but without it I didn't get any message when starting the consumer a second time.)
Start up the broker.
Start the consumer.
Send a message to the topic using the activemq web admin console. (I receive it in the consumer, as expected)
Stop consumer.
Send another message to the topic.
Start consumer. I receive the message, also as expected.
However, if the consumer receives a message, then goes offline (the process is stopped), and then I restart it, it doesn't get that last message back.
The goal is that whenever the consumer starts it gets the last message, no matter what (except, obviously, when no messages have been sent to the topic).
Any ideas on what I'm missing?
Background:
I have a device which publishes its data to a topic whenever that data changes. A variable number of consumers may be connected to this topic, from 0 to fewer than 10. There is only one publisher on the topic and it always publishes all of its data as a single message (little data, just a couple of fields of a sensor reading). The publication rate is variable and not necessarily time based; when something changes, a new updated message is sent to the broker.
The problem is that when a new consumer connects to the topic it has no data of the device readings until a new message is sent to the topic by the device. This could be solved by creating an additional queue so new connections can subscribe to the topic and then request the current reading from the device through the queue (the device would consume the queue message, which would be a request for data, and then respond on the same queue).
But since the messages sent to the topic are always information-complete, I was wondering whether it is possible to configure the topic to store a copy of just the last message and send it to new connections without knowing client identifiers or other info.
Current broker in use is ActiveMQ.
What you want is to have retroactive consumers and to set the lastImageSubscriptionRecoveryPolicy subscription recovery policy on the topic. Shashi is correct in saying that the following syntax for setting a consumer to be retroactive works only with OpenWire:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
In your case, what you can do is to configure all consumers to be retroactive in broker config with alwaysRetroactive="true". I tested that this works even for the AMQP protocol (library qpid-jms-client) and I suspect it will work for all protocols.
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic="FOO.>" alwaysRetroactive="true">
        <subscriptionRecoveryPolicy>
          <lastImageSubscriptionRecoveryPolicy />
        </subscriptionRecoveryPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
The configuration example is taken from https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/resources/org/apache/activemq/test/retroactive/activemq-message-query.xml
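To see the effect from the AMQP side, here is a minimal qpid-jms consumer sketch (the broker URL and topic name are assumptions, and the topic must fall under the "FOO.>" policyEntry above). A brand-new, non-durable subscriber like this should be handed the retained last image as soon as it subscribes:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.qpid.jms.JmsConnectionFactory;

public class LastImageConsumer {
    public static void main(String[] args) throws Exception {
        JmsConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createTopic("FOO.SENSOR"));

        // With lastImageSubscriptionRecoveryPolicy the broker replays the most
        // recent message to this new subscriber even though it joined late.
        Message message = consumer.receive(5000);
        if (message instanceof TextMessage) {
            System.out.println("Last image: " + ((TextMessage) message).getText());
        } else {
            System.out.println("No retained message received within 5 seconds");
        }
        connection.close();
    }
}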
Messaging providers (WebSphere MQ for example) have a feature called Retained Publication. With this feature the last published message on a topic is retained by the messaging provider and delivered to a new consumer who comes in after a message has been published on a given topic.
Retained Publication may be supported by ActiveMQ in its native interface. This link talks about consumer.retroactive, which is available for OpenWire only.
A publisher will tell the messaging provider to retain a publication by setting a property on the message before publishing. Below is how it is done using WebSphere MQ.
// set as a retained publication
msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
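For context, here is what that property setting might look like inside a small, self-contained publisher using the WebSphere MQ classes for JMS; treat it as a sketch, since the connection details and topic name are placeholders:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.jms.JmsConstants;
import com.ibm.msg.client.wmq.WMQConstants;

public class RetainedPublisher {
    public static void main(String[] args) throws Exception {
        // Client connection details are placeholders.
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setHostName("mqhost.example.com");
        cf.setPort(1414);
        cf.setChannel("APP.SVRCONN");
        cf.setQueueManager("QM1");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("topic://Price/Sensor1");   // placeholder topic
        MessageProducer producer = session.createProducer(topic);

        TextMessage msg = session.createTextMessage("latest sensor reading");
        // Ask the queue manager to retain this publication and deliver it to
        // subscribers that join later.
        msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
        producer.send(msg);

        connection.close();
    }
}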
This is very basic question about IBM WebSphere MQ V7.
Regarding the transmission queue, my understanding is that it is only used together with a remote queue definition that resides in the same queue manager. Therefore, if I want to put a message to the queue, I need to put it to the remote queue.
It is like this.
App --> Remote queue --> Transmission Queue
My question is:
Is it possible to put the message directly into transmission queue like this?
App --> Transmission Queue
--Modified on 2014.03.17 --
I found a way to put a message directly onto the transmission queue. I do not know whether this is common practice, but in order to do it I needed to prepend an MQXQH header to the message. I tried it and confirmed it works. See the Infocenter reference here.
Do not ever put directly to a transmission queue. It is dangerous if you do not know what you are doing.
You should put your message to a remote queue. A remote queue is not the same as a local queue. A remote queue is simply a pointer to a queue on another queue manager.
Although it is possible to put messages directly on the XMitQ, there is considerable risk in allowing that to occur, so most admins will prevent applications from directly accessing that queue. As you have found, it is possible to construct a message with the transmission queue header and, behind that, a normal message with the MQMD and payload. (This is, in fact, exactly how the MCA works.)
The problem here is that the QMgr does not check the values in the MQMD residing in the payload so you can put mqm as the MQMD.UserID and then address the message to the remote command queue and grant yourself admin access to that remote QMgr.
Security-conscious administrators typically use two security controls to prevent this. First, they disallow direct access to the XMitQ. That helps for outbound messages. More importantly, they set the MCAUSER of their RCVR/RQSTR/CLUSRCVR channels to a non-admin user ID that is not authorized to put messages onto any sensitive queues.
The other issue is, of course, that what you describe completely defeats WMQ's name resolution. By embedding routing into the app, you prevent the administrator from adjusting channel weights, cluster settings, failover and load distribution at the network level. Need to redistribute traffic? Redeploy the code. Not a good plan.
So for security reasons and because you paid a lot of money to get WMQ's reliability - much of which comes from dynamic addressing and name resolution features - coding apps to write directly to the XMitQ is strongly discouraged.
You should not be using the transmission queue directly. It is used by the message channel agent (MCA) as temporary storage when sending messages across to a remote queue manager.
This is distributed queuing, i.e. you publish a message to queue manager A and want it routed to a local queue on queue manager B. So you define a reference on QM-A referring to the local queue on QM-B. This reference is the 'remote queue definition'.
The remote queue definition specifies the transmission queue name. The transmission queue is bound to the MCA, which in turn knows about the remote QM.
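In application code the "App --> Remote queue" step from the question is just an ordinary open-and-put against the name of the remote queue definition; the queue manager resolves it to the transmission queue for you. A minimal sketch with the MQ classes for Java (all object names are placeholders):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PutToRemoteQueue {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QMA");   // local queue manager (placeholder)

        // "ORDERS.ON.QMB" is assumed to be a QREMOTE definition on QM-A that
        // points at a local queue on QM-B; the app never names the XMitQ and
        // never builds an MQXQH itself.
        MQQueue remoteQ = qmgr.accessQueue("ORDERS.ON.QMB",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.writeString("payload");
        remoteQ.put(msg, new MQPutMessageOptions());   // QM-A resolves this to the XMitQ

        remoteQ.close();
        qmgr.disconnect();
    }
}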
I'm dealing with a standalone MQ JMS application. Our app needs to be aware that the client has already consumed the message the producer put on the queue. The client app is not under our control, so we cannot ask them to write something like msg.acknowledge() on their side (msg.acknowledge() is not the right approach in my situation). I searched previous answers on Stack Overflow and found the following, which is quite close to what I want:
https://stackoverflow.com/questions/6521117/how-to-guarantee-delivery-of-the-message-in-jms
Do the JMS spec or the various implementations support delivery confirmation of messages?
My question is, is there any other way to achieve this in the MQ API or JMS API? I need to do the coding only on the message producer side; it can be a queue or a topic.
Another question: with the JMS acknowledge mode CLIENT_ACKNOWLEDGE, is the producer irrelevant? I always believed that this mode would block the application in the send() call until the client consumed the message and called msg.acknowledge(), but it seems that is not the case. The producer just exits after the message is delivered, and the message stays in the queue until the client calls acknowledge(). Is it possible to make the producer app wait until the message is acknowledged by the client?
If my concept is not right, just correct me, thanks.
The main intention of message queuing is to decouple producer and consumer. The producer does not need to wait for the message to be consumed; it can continue its job. Ideally, if the producer needs to know whether the message has been processed by the consumer, it should wait for the consumer to send a response message on another queue.
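For example, here is a minimal request/reply sketch in plain JMS (the queue names are placeholders, and it assumes the consuming application copies the request's message ID into the JMSCorrelationID of its reply, which is the usual convention):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class RequestReplyProducer {
    // The ConnectionFactory would come from JNDI or a provider-specific class
    // such as MQConnectionFactory; it is left abstract here.
    public static void sendAndWait(ConnectionFactory cf) throws Exception {
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Queue requestQ = session.createQueue("APP.REQUEST");   // placeholder
        Queue replyQ = session.createQueue("APP.REPLY");       // placeholder

        TextMessage request = session.createTextMessage("do some work");
        request.setJMSReplyTo(replyQ);
        session.createProducer(requestQ).send(request);

        // Block until the consumer confirms processing by replying with a
        // message correlated to the request's message ID.
        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer replyConsumer = session.createConsumer(replyQ, selector);
        Message reply = replyConsumer.receive(30000);   // give up after 30 seconds
        if (reply == null) {
            System.out.println("No confirmation received; the request may not have been processed yet");
        } else {
            System.out.println("Consumer confirmed processing of the request");
        }
        connection.close();
    }
}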
Message acknowledgement has nothing to do with producer. Message acknowledgement is the way a consumer tells the messaging provider to remove the message from a queue after the message has been delivered to an application.
There is auto acknowledge, where the JMS provider (like MQ JMS), after delivering a message to the application, tells the messaging provider to remove the message from the queue. Then there is client acknowledge, where, after receiving a message, the application explicitly tells the messaging provider to remove the message from the queue.
Is there a reason why the producer has to wait for the consumer to receive the message? One way, though not elegant, could be this: once the message is sent, use the message ID of the sent message and try to browse for that message. If the message is not found, you can assume it has been consumed.
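A sketch of that browse-based check in plain JMS (the queue name is a placeholder, and messageId is assumed to be the value returned by getJMSMessageID() after the send):

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

public class ConsumedCheck {
    // Returns true if the message is no longer on the queue, i.e. presumably consumed.
    public static boolean probablyConsumed(ConnectionFactory cf, String messageId) throws Exception {
        Connection connection = cf.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("APP.TARGET.QUEUE");   // placeholder
            // JMSMessageID is one of the header fields a selector may reference.
            QueueBrowser browser = session.createBrowser(queue,
                    "JMSMessageID = '" + messageId + "'");
            Enumeration<?> matches = browser.getEnumeration();
            return !matches.hasMoreElements();
        } finally {
            connection.close();
        }
    }
}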
With WebSphere MQ, where is a message addressed to a remote queue manager and PUT under syncpoint control stored until MQCMIT is issued?
Messages that are addressed to a remote queue manager resolve to a transmit queue. Which transmit queue they resolve to depends on how the message will be sent to the remote QMgr. The message will resolve locally to either a user-defined transmit queue for a SDR or SVR channel, or it will resolve to SYSTEM.CLUSTER.TRANSMIT.QUEUE for a clustered channel.
For any message that is put under syncpoint, the message is written to the transaction logs and, if the message is persistent, to the queue file. The queue depth increases to reflect that the message is in the queue but the message is not made available to other programs (such as the channel agent) until the COMMIT occurs.
So if your message is going to a clustered queue manager and you PUT under syncpoint, you will see the depth of the cluster transmit queue increase. At that point the message is at least in the transaction log and possibly also written to the queue file. As soon as the message is committed, it becomes available to the channel.
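To make this concrete, here is a minimal sketch of a put under syncpoint using the MQ classes for Java (the queue manager and remote queue names are placeholders): the depth of the resolved transmit queue rises at the put, but the channel only sees the message after the commit.

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class SyncpointPutDemo {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");   // placeholder

        // A remote or clustered queue that resolves locally to a transmit queue.
        MQQueue remoteQ = qmgr.accessQueue("APP.REMOTE.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.persistence = CMQC.MQPER_PERSISTENT;
        msg.writeString("order 42");

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT;   // put inside a unit of work
        remoteQ.put(msg, pmo);

        // At this point the transmit queue depth has increased, but the channel
        // agent cannot see the message; it is held in the unit of work.
        qmgr.commit();   // MQCMIT: the message now becomes available to the channel

        remoteQ.close();
        qmgr.disconnect();
    }
}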