MQ - Handling of messages exceeding max length moving to dead queue

This is a general question. Say I have a queue manager locally, with a transmission queue/remote queue definition set up through which I connect to a queue on a destination queue manager. If the destination queue's maximum message length is 1000 and I put a message longer than that, it automatically moves to the destination queue manager's dead letter queue (provided that my transmission queue's max message length is larger than what I put). This is the expected behaviour. But is there any way in the MQ world to handle this and not move the message to the dead letter queue? Or is it solely the responsibility of the application that puts the message to stay under the max length?
Thanks in advance.

Changing the default Maximum Message Length (i.e. MAXMSGL) from 4MB to a small value is a bad idea.
Myth-busting: MQ does NOT allocate space based on the value of the maximum message length field. Setting it to a very small value or to a very large value has no bearing on disk space; MQ only writes the actual size of the real message.
Secondly, the application team should tell the MQ admin the largest message the application will send. If they say 10MB, then the MQ admin can increase the maximum message length to 10MB, or something a little larger, e.g. 12MB.
The largest value that can be used is 100MB.
Note: The MQAdmin will need to increase the max. message length for: channel, XMITQ, local queue and the dead letter queue for any message larger than the default size of 4MB or it will not flow.
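That note can be sketched in MQSC; all object and queue manager names below are placeholders, and 12MB (12582912 bytes) matches the example figure above:

```
* Raise MAXMSGL on every object the message passes through:
* the target local queue, the transmission queue, the sender channel,
* the dead letter queue, and the queue manager itself.
ALTER QLOCAL(APP.TARGET.QUEUE) MAXMSGL(12582912)
ALTER QLOCAL(DEST.QMGR.XMITQ) MAXMSGL(12582912)
ALTER CHANNEL(QM1.TO.QM2) CHLTYPE(SDR) MAXMSGL(12582912)
ALTER QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) MAXMSGL(12582912)
ALTER QMGR MAXMSGL(12582912)
```

The receiving side's channel and queues need the same treatment, since a message is rejected at the first object whose limit it exceeds.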

Thanks Roger and JoshMc. In fact I tried both options, client to QM and between QM and QM. Client to QM is fine, as the client receives the error code and basically nothing happens. The concern is only between sender QM and receiver QM. What I have mostly seen is that there is only one transmission queue, with its own maximum message length, connecting to a particular queue manager, and all the different remote connections/queues use that transmission queue. So if the sender makes the mistake of sending a larger message than the destination queue can accept, it usually ends up passing through the transmission queue but failing at the destination and landing on the destination's dead letter queue. Now the destination's owner is alerted, or needs to take some remediation, for a mistake that he didn't commit. That's the whole reason I asked this question. Thanks a lot to you both for shedding more light on this and spending your time on it.
I think Morag Hughson has given me something to try out, but it will still have its own negative impact. I was looking for something along those lines, where we can control, at the MQ level, that the message is not allowed to go to the destination's dead letter queue.

Related

Spring JMS listener - Max number of redeliveries in CLIENT_ACKNOWLEDGE mode

I would like to understand the maximum number of redeliveries in CLIENT_ACKNOWLEDGE mode if you are not doing the acknowledgement.
Is there a configured maximum? If so, what is that property and can we override it?
If there is no maximum, will the message stay in the queue forever? Is there any way to clear it?
It's not part of the JMS specification; some vendors have a mechanism to deliver to a dead letter queue after some number of (configurable) attempts.
Recent brokers provide the delivery count in the JMSXDeliveryCount header so you can decide to throw it away when the count reaches some number.
If you are using CLIENT_ACKNOWLEDGE and never acknowledge, the message won't be redelivered at all (unless you close the consumer/connection).
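The decision described above, discarding or diverting a message once JMSXDeliveryCount passes a threshold, can be sketched as plain logic. The threshold of 5 is an arbitrary illustration; in real code the count would come from `message.getIntProperty("JMSXDeliveryCount")` on the received JMS message:

```java
// Sketch of a poison-message cutoff based on the JMS-defined
// JMSXDeliveryCount property. MAX_DELIVERIES is an illustrative value;
// real code reads the count from the received javax.jms.Message.
public class RedeliveryCutoff {
    static final int MAX_DELIVERIES = 5;

    // True when the message should be diverted to a side queue (or
    // discarded) rather than processed again. The first delivery has
    // count 1, so only genuine redeliveries past the cutoff qualify.
    static boolean shouldDivert(int jmsxDeliveryCount) {
        return jmsxDeliveryCount > MAX_DELIVERIES;
    }

    public static void main(String[] args) {
        System.out.println(shouldDivert(1)); // false: first delivery
        System.out.println(shouldDivert(6)); // true: exceeded the cutoff
    }
}
```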

JMS Priority Messages Causing Starvation of Lower Priority Messages

I have a queue that is loaded with high priority JMS messages throughout the day, and I want to get them out the door quickly. The queue is also loaded periodically with lower priority messages in large batches. The problem I see on busy days is that there are always enough high priority messages at the front of the queue that none of the lower priority messages get selected until that volume drops off. Often they will sit on the queue until the middle of the night. The app is distributed over a number of servers, but the CPUs are not even breathing hard; JMS seems to be the choke point.
My hunch is to implement some sort of aging algorithm that increases priority for messages that have been on the queue for a very long time, but of course, that is what middleware is supposed to do for me. I can't imagine that the JMS provider (IBM WebSphere MQ) or the application server (TIBCO BusinessWorks) doesn't have some facility to cope with this. So before I go write some code, I thought I would ask: is there any way to get either of these technologies to help me with this problem?
The BusinessWorks activity that is reading the queue is a JMS SOAP Event Source, but I could turn it into a JMS Queue Receiver activity or whatever.
All thoughts on how to solve this are welcome :-) TIA
That's like tying one hand behind your back and then complaining that you cannot swim properly. D'oh! First off, whose bright idea was it to mix messages? Just because you can do something does not mean you should.
The app is distributed over a number of servers, but the CPUs are not even breathing hard; JMS seems to be the choke point.
Well then, the solution is easy. Put high priority messages into queue "A" (the existing queue) and low priority messages into a new queue "B". Next, startup another instance of your JMS application to read the messages off queue "B".
Also, JMS is probably not the choke-point. It is what the application is doing with the message data after the JMS layer picks up the message that is taking a long time (i.e. backend work).
Finally, how many instances of your JMS application are running against the existing queue? If you are only running one instance, why? If you have lots of CPU capacity, why not run 10 instances of your JMS application and do some true parallel processing of messages?
If you really want to keep your messages mixed on the same queue and have the high priority messages processed first, and yet your volume is such that you sometimes cannot work through it all until the middle of the night, then you quite simply do not have enough processing applications. MQ is a parallel processing system; it is designed to allow many applications to put to or get from a queue at once. Make use of this by running more of your getting applications at the same time. They will work through your high priority messages quicker and then get back to processing the lower priority ones.
From your description it's clear that you want the high priority messages to be processed first. In that case the lower priority messages will have to wait.
MQ will not increase the priority of messages that have been sitting in a queue for a long time. How would it know that it has to change a property of a message? You will need to develop an application to do that.
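The aging idea from the question, an external sweeper that re-puts long-waiting messages at a boosted priority, could be sketched like this. JMS priorities run 0 to 9; the one-level-per-interval boost rule and the interval itself are arbitrary illustration values:

```java
import java.time.Duration;

// Sketch of the aging algorithm an external sweeper application could
// apply before re-putting a message, since MQ itself will not change a
// message's priority. Boost rule and interval are illustrative only.
public class PriorityAging {
    static final int MAX_JMS_PRIORITY = 9; // JMS priorities are 0..9

    // One priority level gained per full boostInterval spent waiting,
    // capped at the JMS maximum of 9.
    static int agedPriority(int originalPriority, Duration timeOnQueue, Duration boostInterval) {
        long boosts = timeOnQueue.toMillis() / boostInterval.toMillis();
        return (int) Math.min(MAX_JMS_PRIORITY, originalPriority + boosts);
    }

    public static void main(String[] args) {
        // A priority-2 message that has waited 3 hours, boosting hourly.
        System.out.println(agedPriority(2, Duration.ofHours(3), Duration.ofHours(1))); // 5
    }
}
```

In practice the sweeper would browse the queue, check each message's age, and re-put old ones with the boosted priority (destructively, under syncpoint), which does reorder delivery but also changes message identity, so it is not free of side effects.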
Segregating messages based on priority (for example, high priority messages put to one queue and lower priority messages to another) could be one option to look at.
A second option would be to change the delivery sequence (MSGDLVSQ) to FIFO. This causes messages to be delivered to consumers in the order they arrived on the queue. But note this ignores message priority: if a lower priority message is followed by a higher priority message, the higher priority message will wait until the lower priority message has been delivered.

Websphere MQ 7.1 on z/OS and big messages

How to handle messages bigger than 4MB on z/OS? I can't use segmentation, because it is not supported on z/OS.
Some operating systems have even stricter limits, measured in kB.
What is the common approach in this case?
You are not hitting a z/OS limit but rather the default maximum message length on WebSphere MQ. Note that the Infocenter says "On z/OS, specify a value in the range zero through 100 MB (104 857 600 bytes)".
To fix this, change the MAXMSGL on any queues and channels through which the message might pass. Don't forget to update the Dead Letter Queue's MAXMSGL as well as transmission queues.
Be aware that MAXMSGL is there to save you! Many people set the value to its highest possible size and then run out of disk space. If the application hits a soft limit such as MAXMSGL or MAXDEPTH the effect is limited and generally recoverable. If the disk space is exhausted, the entire QMgr comes to a screeching halt and all connected apps are impacted.
For more on this, please see the Mice and Elephants article on developerWorks.
Update, based on comments asking about specifics of HP NonStop and WMQ V5.3:
Please see the WMQ V5.3 manuals available in the WMQ documentation library; the second link is the System Administration Guide for WMQ V5.3 on HP NonStop. Message lengths are discussed on page 4:
The default maximum message length is 4 MB, although you can increase this to a maximum length of 100 MB (where 1 MB equals 1 048 576 bytes). In practice, the message length might be limited by:
The maximum message length defined for the receiving queue
The maximum message length defined for the queue manager
The maximum message length defined by the queue
The maximum message length defined by either the sending or receiving application
The amount of storage available for the message
So there's no arbitrarily small max message length on HP NonStop or associated with V5.3 of WMQ.
Maybe message grouping can help you there.
Of course the applications then have to be customized.
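That application-side customization would include a splitting step like the one below: cut the payload into chunks that fit under MAXMSGL and send each chunk as one member of a logical group (in JMS, for example, via the JMSXGroupID and JMSXGroupSeq properties, with the receiver reassembling in sequence). Only the chunk arithmetic is shown; the actual put calls are broker-specific:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of splitting a large payload for message grouping. Each chunk
// would be sent as one message of a logical group (e.g. tagged with the
// JMS-defined JMSXGroupID and JMSXGroupSeq properties) and the consumer
// reassembles them in sequence order.
public class GroupSplitter {

    static List<byte[]> split(byte[] payload, int maxChunkBytes) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxChunkBytes) {
            int end = Math.min(payload.length, off + maxChunkBytes);
            chunks.add(Arrays.copyOfRange(payload, off, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] big = new byte[10 * 1024 * 1024];           // 10 MB payload
        List<byte[]> chunks = split(big, 4 * 1024 * 1024); // stay under the 4 MB default
        System.out.println(chunks.size());                 // 3
    }
}
```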

Is it possible to declare a maximum queue size with AMQP?

As the title says — is it possible to declare a maximum queue size and broker behaviour when this maximum size is reached? Or is this a broker-specific option?
I ask because I'm trying to learn about AMQP, not because I have this specific problem with any specific broker… But broker-specific answers would still be insightful.
AFAIK you can't declare maximum queue size with RabbitMQ.
Also, there's no such setting in the AMQP spec:
http://www.rabbitmq.com/amqp-0-9-1-quickref.html#queue.declare
Depending on why you're asking, you might not actually need a maximum queue size. Since version 2.0, RabbitMQ will seamlessly persist large queues to disk instead of storing all the messages in RAM. So if your concern is the broker crashing because it exhausts its resources, this actually isn't much of a problem in most circumstances, assuming you aren't strapped for hard disk space.
In general this persistence actually has very little performance impact, because by definition the only "hot" parts of the queue are the head and tail, which stay in RAM; the majority of the backlog is "cold" so it makes little difference that it's sitting on disk instead.
We've recently discovered that at high throughput it isn't quite that simple - under some circumstances the throughput can deteriorate as the queue grows, which can lead to unbounded queue growth. But when that happens is a function of CPU, and we went for quite some time without hitting it.
You can read about RabbitMQ's maximum queue length implementation here: http://www.rabbitmq.com/maxlength.html
Rather than blocking the addition of incoming messages, it drops messages from the head of the queue.
You should definitely read about Flow control here:
http://www.rabbitmq.com/memory.html
With Qpid, yes.
You can configure a maximum queue size and a policy for when the maximum is reached: ring (overwrite from the head), ignore (reject) new messages, or break the connection.
You also have LVQ (last value) queues, which are very configurable.
There are some things that you can't do with brokers, but you can do in your app. For instance, there are two AMQP methods, basic.get and queue.declare, which return the number of messages in the queue. You can use this to periodically get a count of outstanding messages and take action (like start new consumer processes) if the message count gets too high.
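A sketch of that monitoring decision, with the depth passed in as a plain parameter; with the RabbitMQ Java client it would come from `channel.queueDeclarePassive(queueName).getMessageCount()`, and the thresholds here are illustrative:

```java
// Sketch of app-side queue monitoring: decide how many extra consumer
// processes to start for an observed backlog. The depth is a parameter
// here; in real code it comes from a passive queue.declare (e.g.
// channel.queueDeclarePassive(...).getMessageCount() in the RabbitMQ
// Java client). Thresholds are illustrative.
public class DepthMonitor {

    // One extra consumer per 'messagesPerConsumer' outstanding messages
    // beyond the acceptable target depth (rounded up).
    static int extraConsumersNeeded(int depth, int targetDepth, int messagesPerConsumer) {
        if (depth <= targetDepth) return 0;
        int backlog = depth - targetDepth;
        return (backlog + messagesPerConsumer - 1) / messagesPerConsumer; // ceiling division
    }

    public static void main(String[] args) {
        System.out.println(extraConsumersNeeded(500, 100, 200)); // 2
        System.out.println(extraConsumersNeeded(50, 100, 200));  // 0
    }
}
```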

queue storage filesystem full in websphere mq

We came across a scenario where disk space was occupied by empty queues in a Linux environment.
Our queue manager ended unexpectedly when the file system became full, and we had to empty the queue file to bring the queue manager back.
But we don't actually have any messages in the queue at all. This is showing for one particular queue.
Why is the disk space held here? What is the root cause?
WMQ does not shrink the queue files in real time. For example, you have 100 messages on a queue and you consume the first one. WMQ does not then shrink the file and move all the messages up by one position. If it tried to do that for each message, you'd never be able to get the throughput that you currently see in the product.
What does occur is that WMQ will shrink the queue files at certain points in the processing lifecycle. There is some latency between a queue becoming empty and the file under it shrinking, but this latency is normally so small as to be unnoticeable.
The event you are describing could in theory occur under some very specific conditions, but it would be extremely rare. In fact, in the 15 years I've been working with WMQ, I've only seen a couple of instances where the latency in shrinking a queue file was even noticeable. I would guess that what is actually going on here is that one of your assumptions or observations is faulty. For example:
Was the queue actually empty?
The queue was most definitely empty after you blew away the file. How do you know it was empty before you blew away the file?
If there were non-persistent messages on any queue, the queue will be empty after the QMgr restarts. This is another case where the queue can appear to be empty after the QMgr is restarted but was not at the time of failure.
If a message is retrieved from a queue under syncpoint, the queue depth decrements but the message is still active in the queue file. If a queue is emptied in a single transaction, it retains its full depth until the COMMIT occurs. This can make it look like the queue is empty when it is not.
Was it actually the queue file that filled up the file system?
Log extents can fill the file system, even with circular logs. For example, with a large value for secondary extents log files can expand significantly and then disappear just as quickly.
FDC files can fill up the file system, depending on how the allocations were made.
Was it even MQ?
If the QMgr shares filesystem space with other users or apps, transient files can fill up the space.
One of the issues that we see very frequently is that an application will try to put more than 5,000 messages on a queue and receive a QFULL error. The very first thing most people then do is set MAXDEPTH(999999999) to make sure this NEVER happens again. The problem with this is that QFULL is a soft error from which an application can recover but filling up the filesystem is a hard error which can bring down the entire QMgr. Setting MAXDEPTH(999999999) trades a manageable soft error for a fatal error. It is the responsibility of the MQ administrator to make sure that MAXDEPTH and MAXMSGL on the queues are set such that the underlying filesystem does not fill. In most shops additional monitoring is in place on all the filesystems to raise alerts well before they fill.
So to sum up, WMQ does a very good job of shrinking queue files in most cases. In particular, when a queue empties this is a natural point of synchronization at which the file can be shrunk and this usually occurs within seconds of the queue emptying. You have either hit a rare race condition in which the file was not shrunk fast enough or there is something else going on here that is not readily apparent in your initial analysis. In any case, manage MAXDEPTH and MAXMSGL such that no queue can fill up the filesystem and write the code to handle QFULL conditions.
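Handling QFULL as the recoverable soft error described above can be sketched as a bounded retry with backoff. The actual MQPUT is simulated by a predicate here; with IBM MQ classes for Java the condition would surface as an MQException whose reason code is 2053 (MQRC_Q_FULL). Attempt counts and backoff values are illustrative:

```java
import java.util.function.IntPredicate;

// Sketch of QFULL handling: retry the put with exponential backoff
// rather than escalating MAXDEPTH. 'putSucceeds' stands in for the real
// MQPUT (which would signal QFULL via reason code 2053, MQRC_Q_FULL)
// and receives the attempt number.
public class QFullRetry {

    // Returns the 1-based attempt number that succeeded, or -1 if the
    // queue stayed full (or the thread was interrupted while waiting).
    static int putWithRetry(IntPredicate putSucceeds, int maxAttempts, long baseBackoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (putSucceeds.test(attempt)) return attempt;
            try {
                // QFULL is recoverable: back off exponentially, then retry.
                Thread.sleep(baseBackoffMillis << (attempt - 1));
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return -1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Simulate a queue that has drained by the third attempt.
        System.out.println(putWithRetry(attempt -> attempt >= 3, 5, 1)); // 3
    }
}
```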
