I am wondering what the columns of the Oracle view GV$PERSISTENT_QUEUES really mean.
The Documentation:
ENQUEUED_MSGS NUMBER Number of messages enqueued
DEQUEUED_MSGS NUMBER Number of messages dequeued
Note: This column will not be incremented until all the subscribers of the message have dequeued the message and its retention time has elapsed.
...
ENQUEUED_EXPIRY_MSGS NUMBER Number of messages enqueued with expiry
ENQUEUED_DELAY_MSGS NUMBER Number of messages enqueued with delay
MSGS_MADE_EXPIRED NUMBER Number of messages expired by time manager
MSGS_MADE_READY NUMBER Number of messages made ready by time manager
...
ENQUEUE_TRANSACTIONS NUMBER Number of enqueue transactions
DEQUEUE_TRANSACTIONS NUMBER Number of dequeue transactions
Oracle Documentation (11.2)
My Questions:
How can the number of dequeued messages be larger than the number of enqueued messages?
If messages with a certain delay get added to the queue, do they get counted at ENQUEUED_MSGS and ENQUEUED_DELAY_MSGS?
If a message with a certain delay gets delivered after the delay, will it get counted at DEQUEUED_MSGS and MSGS_MADE_READY?
If so, how can MSGS_MADE_READY be larger than ENQUEUED_DELAY_MSGS?
What do the fields ENQUEUED_EXPIRY_MSGS and MSGS_MADE_EXPIRED mean?
What's the difference between ENQUEUED_MSGS and ENQUEUE_TRANSACTIONS (and likewise for dequeuing)?
Thank you in advance for your help!
I am pretty sure I have found the answer to most of the above questions.
DEQUEUED_MSGS can be greater than ENQUEUED_MSGS after a database restart. Queue entries that are still in the queue table survive the restart; once the database is back up they get dequeued and counted in DEQUEUED_MSGS, but they are never counted in ENQUEUED_MSGS again, because the GV$ counters start from zero after an instance restart.
The column ENQUEUED_MSGS is the total number of messages that were enqueued into the queue.
The column ENQUEUED_DELAY_MSGS is the total number of messages enqueued with a delay.
ENQUEUED_MSGS - ENQUEUED_DELAY_MSGS = all messages that were enqueued without a delay
The same relationship holds for DEQUEUED_MSGS (all messages) and MSGS_MADE_READY (delayed messages only).
I don't know yet what ENQUEUE_TRANSACTIONS and DEQUEUE_TRANSACTIONS mean (maybe DEQUEUE_TRANSACTIONS describes the number of dequeues of one message in a multi-consumer queue), but I won't use those columns.
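For reference, here is a minimal JDBC sketch (connection string and credentials are placeholders; it only assumes SELECT access on GV$PERSISTENT_QUEUES and the Oracle JDBC driver on the classpath) that reads the counters and derives the "enqueued without delay" figure from the relationship above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueueCounters {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; any account that can read GV$PERSISTENT_QUEUES will do.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT queue_name, enqueued_msgs, enqueued_delay_msgs, "
                   + "       enqueued_msgs - enqueued_delay_msgs AS enqueued_no_delay, "
                   + "       dequeued_msgs, msgs_made_ready "
                   + "  FROM gv$persistent_queues")) {
                while (rs.next()) {
                    System.out.printf("%s: %d enqueued (%d without delay), %d dequeued, %d made ready%n",
                            rs.getString("QUEUE_NAME"),
                            rs.getLong("ENQUEUED_MSGS"),
                            rs.getLong("ENQUEUED_NO_DELAY"),
                            rs.getLong("DEQUEUED_MSGS"),
                            rs.getLong("MSGS_MADE_READY"));
                }
            }
        }
    }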
Related
I would like to understand what the maximum number of redeliveries is with CLIENT_ACKNOWLEDGE if you never acknowledge the message.
Is there a maximum number configured? If so, what is that property and can we override it?
If there is no maximum, will the message stay in the queue forever? Is there any way to clear it?
It's not part of the JMS specification; some vendors have a mechanism to deliver to a dead letter queue after some number of (configurable) attempts.
Recent brokers provide the delivery count in the JMSXDeliveryCount header, so you can decide to discard the message once the count reaches some threshold.
If you are using CLIENT_ACKNOWLEDGE and never acknowledge, the message won't be redelivered at all (unless you close the consumer/connection or recover the session).
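As a rough illustration (the queue name, the receive timeout and the five-attempt cutoff are all assumptions, since JMS itself defines no redelivery limit), a consumer could inspect JMSXDeliveryCount and give up on a message after a few attempts:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class DeliveryCountConsumer {
        // Assumed cutoff: JMS itself defines no redelivery limit.
        private static final int MAX_ATTEMPTS = 5;

        public static void consumeOne(ConnectionFactory factory, String queueName) throws Exception {
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));

                Message message = consumer.receive(5000);
                if (message == null) {
                    return;
                }

                // JMSXDeliveryCount is an optional provider property; most modern brokers
                // set it (1 on first delivery, incremented on each redelivery).
                int deliveryCount = message.propertyExists("JMSXDeliveryCount")
                        ? message.getIntProperty("JMSXDeliveryCount")
                        : 1;

                if (deliveryCount > MAX_ATTEMPTS) {
                    // Give up: acknowledge so the broker stops redelivering
                    // (or forward it to your own "dead letter" destination first).
                    message.acknowledge();
                } else {
                    try {
                        process(message);
                        message.acknowledge();
                    } catch (Exception e) {
                        // Leave it unacknowledged and recover the session so the
                        // broker redelivers it with an incremented delivery count.
                        session.recover();
                    }
                }
            } finally {
                connection.close();
            }
        }

        private static void process(Message message) { /* application logic */ }
    }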
This is a general question. Say I have a queue manager locally, with a transmission queue/remote queue definition through which I connect to a queue on a destination queue manager. If the destination queue's maximum message length is 1000 and I put a message longer than that, it automatically goes to the destination queue manager's dead letter queue (provided that my transmission queue's maximum message length is larger than what I put). That is the expected behaviour. But is there any way in the MQ world to handle this and not move the message to the dead letter queue? Or is it solely the responsibility of the putting application not to exceed the maximum length?
Thanks in advance.
Changing the default maximum message length (i.e. MAXMSGL) from 4MB to a small value is a bad idea.
Myth-busting: MQ does NOT allocate space based on the value of the maximum message length attribute. Setting it to a very small or a very large value has no bearing on disk space; MQ only ever writes the actual size of the real message.
Secondly, the application team should tell the MQAdmin the size of the largest message the application will send. If they say 10MB, then the MQAdmin can increase the maximum message length to 10MB or something a little larger, e.g. 12MB.
The largest value that can be used is 100MB.
Note: The MQAdmin will need to increase the maximum message length on the channel, the XMITQ, the local queue and the dead letter queue for any message larger than the default size of 4MB, or it will not flow.
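Since the underlying question is whether the putting application has to police the size itself, here is a rough sketch of such an application-side guard using the IBM MQ classes for Java. The queue manager and queue names are placeholders, and the 1000-byte limit is the figure from the question; the sender has to know it up front, because it cannot inquire the destination queue's MAXMSGL across the channel:

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class GuardedPut {
        public static void main(String[] args) throws Exception {
            // Placeholder names: adjust to your environment.
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue remoteQ = qmgr.accessQueue("DEST.REMOTE.Q", CMQC.MQOO_OUTPUT);

            // Limit agreed with the destination MQAdmin (1000 bytes, as in the question).
            final int destinationMaxMsgLen = 1000;

            byte[] payload = "hello".getBytes();
            if (payload.length > destinationMaxMsgLen) {
                // Reject (or split/route locally) instead of letting the message
                // land on the destination's dead letter queue.
                System.err.println("Message too big for destination, not sending");
                return;
            }

            MQMessage msg = new MQMessage();
            msg.write(payload);
            try {
                remoteQ.put(msg, new MQPutMessageOptions());
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_MSG_TOO_BIG_FOR_Q
                        || e.reasonCode == CMQC.MQRC_MSG_TOO_BIG_FOR_Q_MGR) {
                    // Local limits (XMITQ / queue manager MAXMSGL) caught it here.
                    System.err.println("Put rejected locally, reason code " + e.reasonCode);
                } else {
                    throw e;
                }
            } finally {
                remoteQ.close();
                qmgr.disconnect();
            }
        }
    }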
Thanks Roger and JoshMc. In fact I tried both options, client to QM and between QM and QM. Client to QM is fine, as the client receives the error code and basically nothing happens. The concern is only between the sender QM and the receiver QM. What I have mostly seen is that there is only one transmission queue, with its own maximum message length, used to connect to a particular queue manager, and all the different remote connections/queues use that transmission queue. So if the sender makes the mistake of sending a larger message than the destination queue can accept, it usually ends up passing through the transmission queue but failing at the destination and landing on the destination's dead letter queue. Now the destination owner is alerted, or needs to take some remediation, for a mistake that he didn't commit. That's the whole reason for me asking this question. Thanks a lot to you both for shedding more light and spending your time on this.
I think Morag Hughson has given me something to try out, but it will still have its own negative impact. I was looking for something like that, where we can control at the MQ level whether the message is allowed to reach the destination's dead letter queue.
According to the SQS documentation, the maximum number of in-flight messages is 120,000 for standard queues. However, sometimes I see my queues maxing out at lower numbers, such as here:
Does anyone know why this might be the case? I have code that dynamically changes the number of SQS listeners depending on the number of messages in the queue, but I don't want to do anything if I've hit the maximum. My problem now is that the max limit doesn't seem to be consistent: some queues go to 120K, but this one is stuck at 100K, and as far as I can tell there is no setting that lets me change this limit.
ApproximateNumberOfMessagesNotVisible indicates the number of in-flight messages, as you rightly said. It depends on how many consumers you have and what the throughput of each consumer is.
If the actual number is capping at 100k, then your consumers are swamped and have no more receiving capacity.
Anyway, it would be better to provide more info on the use case, as 100k in-flight messages looks out of the ordinary and you may not be using the right solution for your problem.
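For illustration, here is a small sketch along the lines of what the question describes (AWS SDK for Java v2; the queue URL and the scale-up rule are assumptions): read both approximate counts and only add listeners while there is backlog and headroom below the cap:

    import java.util.Map;
    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
    import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

    public class InFlightMonitor {
        // Standard-queue in-flight ceiling from the SQS documentation.
        private static final long MAX_IN_FLIGHT = 120_000;

        public static boolean canScaleUp(SqsClient sqs, String queueUrl) {
            GetQueueAttributesRequest request = GetQueueAttributesRequest.builder()
                    .queueUrl(queueUrl)
                    .attributeNames(
                            QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES,
                            QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE)
                    .build();

            Map<QueueAttributeName, String> attrs =
                    sqs.getQueueAttributes(request).attributes();

            long visible = Long.parseLong(
                    attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES));
            long inFlight = Long.parseLong(
                    attrs.get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES_NOT_VISIBLE));

            // Only add listeners while there is backlog and headroom below the cap.
            return visible > 0 && inFlight < MAX_IN_FLIGHT;
        }
    }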
I want to understand how ActiveMQ's prefetch limit works. Are all the messages sent in one burst? What happens if there are concurrent consumers?
What is the difference between a prefetch limit of 0 and 1?
Read the link recommended by @Tim Bish -- the quotes I offer are from that page.
So ActiveMQ uses a prefetch limit on how many messages can be streamed
to a consumer at any point in time. Once the prefetch limit is
reached, no more messages are dispatched to the consumer until the
consumer starts sending back acknowledgements of messages (to indicate
that the message has been processed). The actual prefetch limit value
can be specified on a per consumer basis.
Specifically on the 0 versus 1 prefetch limit difference:
If you have very few messages and each message takes a very long time
to process you might want to set the prefetch value to 1 so that a
consumer is given one message at a time. Specifying a prefetch limit
of zero means the consumer will poll for more messages, one at a time,
instead of the message being pushed to the consumer.
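As a rough sketch (the broker URL and queue name are placeholders), the prefetch limit can be set either on the connection factory for all queue consumers or per destination via a destination option:

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PrefetchExample {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL

            // Option 1: set the prefetch policy for all queue consumers created from this factory.
            factory.getPrefetchPolicy().setQueuePrefetch(1);

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Option 2: override per destination with a destination option.
            // Prefetch 0 means the consumer polls the broker for messages one at a time
            // instead of having them pushed to it.
            Queue queue = session.createQueue("ORDERS?consumer.prefetchSize=0");

            MessageConsumer consumer = session.createConsumer(queue);
            System.out.println(consumer.receive(1000)); // polls the broker for one message
            connection.close();
        }
    }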
I have a need to dequeue messages coming from an Oracle Queue on a continuous basis.
As far as I can tell, we can dequeue the messages in two ways: either through the asynchronous auto-notification approach, or by a manual polling process where one dequeues one message at a time.
I can't go for the asynchronous notification feature, as the number of messages received could go up to 1,000 within 5 minutes during peak hours, and
I do not want to overload the database by spawning multiple callback procedures in the background.
With the manual polling process, I can create a one-time scheduler job that runs 24*7 and calls a stored proc that dequeues the messages in a loop in WAIT mode (kind of listening for a message).
The problem with this approach is that
1) the scheduler job runs continuously and occupies one permanent job slot
2) the stored procedure does not EXIT as it runs in a loop waiting for messages.
Are there any alternative/better solutions where I do not need to have a job/procedure running continuously looking for messages?
Can I use the auto-notification approach to get a notification for the very first message, unsubscribe the subscriber, dequeue the remaining messages, and
subscribe to the queue again when there are no more messages? Is this a safe approach, and will I lose any messages between subscription and unsubscription?
BTW, We use Oracle 10gR2 database, so I can't use PURGE ON NOTIFICATION option.
Appreciate your expert solution!!
You're right, it's not a good idea to use auto-notification for a high-volume queue.
At one client I've seen a one-time scheduler job which runs 24*7; it seems to work reasonably well, and they can enqueue a special "STOP" message (which goes to the top of the queue) that the job listens for and then stops processing messages.
However, generally I'd lean towards a job that runs regularly (e.g. once per minute, or whatever granularity is suitable for you) which would dequeue all the messages. I'd put the dequeue in a loop with a loop counter and a "maximum messages" limiter based on the maximum number of messages you'd expect in a 1-minute period. The job would keep processing messages until (a) there are no more messages in the queue, or (b) the maximum limit has been reached.
You can then set the schedule for the job based on the maximum delay you want to see between an enqueue and a dequeue. E.g. if it doesn't matter if a message isn't processed within 5 minutes, you could set the job to run once every 5 minutes.
The maximum limit needs to be quite a high figure - e.g. 10x or 100x the expected maximum number - otherwise a spike could flood your queue and it might not keep up. The idea of the maximum limit is to ensure that the job never runs forever. This should give ops enough time to detect a problem with the queue (e.g. if some rogue process is flooding the queue with bogus messages).
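The answer above has the scheduler call a PL/SQL procedure; purely to illustrate the bounded-loop idea, here is what the same logic looks like against the generic JMS API (which Oracle AQ also exposes). The connection factory, queue name, timeout and limit are all placeholders:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class ScheduledDrainer {

        /**
         * Drain the queue until it is empty or maxMessages have been processed.
         * Intended to be triggered on a fixed schedule (e.g. once a minute).
         */
        public static int drain(ConnectionFactory factory, String queueName, int maxMessages)
                throws Exception {
            int processed = 0;
            Connection connection = factory.createConnection();
            try {
                connection.start();
                // Transacted session: commit per message so a crash never loses work.
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));

                while (processed < maxMessages) {
                    // Short wait instead of blocking forever: the job must always finish.
                    Message message = consumer.receive(1000);
                    if (message == null) {
                        break;              // (a) no more messages in the queue
                    }
                    process(message);
                    session.commit();
                    processed++;            // (b) loop counter towards the safety limit
                }
            } finally {
                connection.close();
            }
            return processed;
        }

        private static void process(Message message) { /* application logic */ }
    }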