One slow ActiveMQ consumer causing other consumers to be slow - performance

I'm looking for help regarding a strange issue where a slow consumer on a queue causes all the other consumers on the same queue to start consuming messages at 30 second intervals. That is, all consumers except the slow one stop consuming messages as fast as they can and instead wait for some magical 30s barrier before consuming.
The basic flow of my application goes like this:
a number of producers place messages onto a single queue. Messages can have different JMSXGroupIDs
a number of consumers listen to messages on that single queue
as standard practice the JMSXGroupIDs get distributed across the consumers
at some point one of the consumers becomes slow and can't process messages very quickly
the slow consumer ends up filling its prefetch buffer on the broker and AMQ recognises that it is slow (default behaviour)
at that point - or some 'random' but close time later - all consumers except the slow one start to only consume messages at the same 30s intervals
if the slow consumer becomes fast again then things very quickly return to normal operation and the 30s barrier goes away
I'm at a loss for what could be causing this issue, or how to fix it. Please help.
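For context, the producers assign groups along these lines (a minimal JMS sketch; the broker URL, queue name, group id and payload are placeholders):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class GroupedProducer {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("work.queue"));

                TextMessage msg = session.createTextMessage("payload");
                // All messages sharing a JMSXGroupID are routed to the same consumer.
                msg.setStringProperty("JMSXGroupID", "customer-42");
                producer.send(msg);
            } finally {
                conn.close();
            }
        }
    }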
More background and findings
I've managed to reliably reproduce this issue on AMQ 5.8.0, 5.9.0 (where the issue was originally noticed) and 5.9.1, on fresh installs and existing ops-managed installs, and on different machines, some virtual and some not. All are Linux installs, with different OS and Java versions.
It doesn't appear to be affected by anything prefetch related; that is, changing the prefetch value from 1 to 10 to 1000 didn't stop the issue from happening.
[red herring?] Enabling debug logs on the AMQ instance shows logs relating to the periodic check for messages that can be expired. The queue doesn't have an expiry policy, so I can only think that the scheduled expireMessagesPeriod check is just waking AMQ up in such a way that it then sends messages to the non-slow consumers.
If the 30s mode is entered, then left, then entered again, the seconds-past-the-minute times are always the same, for example 14s and 44s past the minute. This is true across all consumers and all machines hosting those consumers. Those barrier points do change after restarts of AMQ.

While not strictly a solution to the problem, further investigation has uncovered the root cause of this issue.
TL;DR - It's known behaviour and won't be fixed before Apollo
More Details
Ultimately this is caused by the maxPageSize property and the fact that AMQ will only apply selection criteria to messages in memory. Generally these are message selectors (property = value), but in my case they are JMSXGroupID=>Consumer assignments.
As messages are received by the queue they get paged into memory and placed into a collection (named pagedInPendingDispatch in the source). To dispatch messages AMQ will scan through this list of messages and try to find a consumer that will accept it. That includes checking the group id, message selector and prefetch buffer space. For our use case we aren't using message selectors but we are using groups. If no consumer can take the message then it is left in the collection and will be checked again at the next tick.
In order to stop the pagedInPendingDispatch collection from eating up all the available resources, there is a suggested limit on the size of this collection, configured via the maxPageSize property. This property isn't actually a maximum; it's more a hint as to whether, under normal conditions, new message arrivals should be paged into memory or paged to disk.
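For reference, both properties discussed here live on a destination policy; a minimal sketch using an embedded broker follows (the same attributes map onto a policyEntry element in activemq.xml; the values are examples, and 30000 ms is the documented default for expireMessagesPeriod, which lines up with the 30s interval described above):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class BrokerPolicySketch {
        public static void main(String[] args) throws Exception {
            PolicyEntry policy = new PolicyEntry();
            policy.setQueue(">");                    // apply to all queues
            policy.setMaxPageSize(200);              // hint for how many messages to page into memory
            policy.setExpireMessagesPeriod(30_000);  // periodic expiry check in ms; 0 disables it

            PolicyMap policies = new PolicyMap();
            policies.setDefaultEntry(policy);

            BrokerService broker = new BrokerService();
            broker.setDestinationPolicy(policies);
            broker.start();
        }
    }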
With these two pieces of information and a slow consumer, it turns out that eventually all the messages in the pagedInPendingDispatch collection end up being consumable only by the slow consumer, and hence the collection effectively gets blocked and no other messages get dispatched. This explains why the slow consumer wasn't affected by the 30s interval: it already had maxPageSize messages awaiting delivery.
This doesn't explain why I was seeing the non-slow consumers receive messages every 30s, though. As it turns out, paging messages into memory has two modes: normal and forced. Normal follows the process outlined above, where the size of the collection is compared to the maxPageSize property; when forced, however, messages are always paged into memory. This mode exists to allow you to browse through messages that aren't in memory. As it happens, this forced mode is also used by the expiry mechanism to allow AMQ to expire messages that aren't in memory.
So what we have now is a collection of messages in memory that are all targeted for dispatch to the same consumer, a consumer that won't accept them because it is slow or blocked. We also have a backlog of messages awaiting delivery to all consumers. Every expireMessagesPeriod milliseconds a task runs that force-pages messages into memory to check whether they should be expired or not. This adds those messages onto the paged-in collection, which now contains maxPageSize messages for the slow consumer and N more messages destined for any consumer. Those messages get delivered.
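A toy model of that sequence (this is not the broker's code, just an illustration of the description above; the numbers and group names are made up):

    import java.util.*;

    public class PagedDispatchToy {
        record Msg(String groupId) {}

        public static void main(String[] args) {
            int maxPageSize = 200;                   // broker hint, not a hard cap
            Deque<Msg> store = new ArrayDeque<>();   // messages not yet paged into memory
            List<Msg> pagedIn = new ArrayList<>();   // stands in for pagedInPendingDispatch

            // Backlog: the first maxPageSize messages all belong to the slow consumer's group.
            for (int i = 0; i < maxPageSize; i++) store.add(new Msg("slow-group"));
            for (int i = 0; i < 50; i++) store.add(new Msg("other-group"));

            // Normal paging only fills the collection up to maxPageSize...
            while (pagedIn.size() < maxPageSize && !store.isEmpty()) pagedIn.add(store.poll());

            // ...so with the slow consumer's prefetch full, nothing is dispatchable.
            System.out.println("dispatchable now: " + countForOthers(pagedIn));              // 0

            // The expiry task force-pages more messages in every expireMessagesPeriod ms...
            for (int i = 0; i < 20 && !store.isEmpty(); i++) pagedIn.add(store.poll());

            // ...and suddenly the other consumers have messages again.
            System.out.println("dispatchable after forced page-in: " + countForOthers(pagedIn)); // 20
        }

        static long countForOthers(List<Msg> msgs) {
            return msgs.stream().filter(m -> !m.groupId().equals("slow-group")).count();
        }
    }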
QED.
References
Ticket referring to this issue but for message selectors instead
Docs relating to the configuration properties
Somebody else with this issue but for selectors

Related

Number of messages consumed from MQTT broker seems to be capped

I'm running a Go service that uses the Paho Go MQTT client for subscribing to a topic. The clients that produce the MQTT messages (also Paho, but on Android devices) log when they produce, and my service logs when it receives. As you can see from this graph, there seems to be a pretty consistent "cap" right below 36,000 messages per day on the receiving side. The graphs follow each other almost perfectly up to the cap, but then it seems that the Go service caps out at slightly below 600 messages per minute, which means around 10 msgs per second.
Where should I look for the solution to this? I cannot find any setting (options) that could explain this cap.
As per the comments, paho.mqtt.golang defaults to ordered delivery of messages (the MQTT spec provides some guarantees re message ordering, and calling handlers in a goroutine may break this). The upshot of this is that messages will be delivered one by one and, if your handler is not keeping up, a queue may form (at QOS 1+ the broker needs to retain messages as it may be necessary to resend them).
Some brokers limit the number of messages queued for a client; for example the max_queued_messages option in Mosquitto defaults to 1000 (this default was lower in Mosquitto 1.X) and, if the queue exceeds the limit, "messages will be silently dropped".
This is what appears to have been happening here; the application was not keeping up with incoming messages so the broker began dropping messages when the queue exceeded a limit.
In many cases using the paho.mqtt.golang option ClientOptions.SetOrderMatters(false) will help; with this option set the message handler will be called in a separate goroutine (so the handler must be threadsafe). Alternatively, start a goroutine within the handler, but note that this approach results in the ACK being sent before the handler completes (which may result in message loss if your application terminates unexpectedly).

Spring JMS Message Listener - DMLC - what is benefit of polling?

I know the DefaultMessageListenerContainer polls by design, and that the receiveTimeout, which sets the polling interval, defaults to 1 second.
The way I understand it is that the DMLC will issue a get, and waits the 'receiveTimeout' defined interval (1 second) before it times out and issues another get.
From what I have read, we can set this receiveTimeout value to a larger value and have NO effect on messages getting picked up from the MQ, because the active 'get' will sit on the listener until a message arrives... and once/if the timeout interval expires it will just submit another get, which remains active on the queue until a message arrives.
So my question is, what is the benefit of a smaller receiveTimeout interval? If we are always going to process a message when it arrives, why on earth would we want to poll the queue every second?
We are running many large applications, and the polling is simply running the CPU usage/bill through the roof, and I cannot find a justification for this.
Yes - the 1 second receive timeout can be very CPU intensive with a large number of queues.
The general idea for the DefaultMessageListenerContainer was to wait for a bit (1 second seems to be a very short wait period), and then, if you don't get a message, it actually tears everything down and does a full reconnect. This is kind of a poor man's error handling: "If I haven't heard from the broker, assume that something is broken, drop everything and reconnect." If the reconnect were not so expensive, it might not be a bad strategy. Or if you have only one queue. Or maybe you are expecting 10 messages a second and do want to reconnect if a second goes by. If you have a reasonable number of destinations, the reconnect traffic can get downright abusive.
For IBM MQ, failures on the JMS connection/session are reliably picked up. You don't have the, "it just sits there not getting any messages for some reason" scenario. So setting the timeout to 10 minutes (whatever) would be fine.
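As a minimal sketch of raising that timeout (the connection factory, queue name and 10-minute value are examples; newer Spring versions use jakarta.jms instead of javax.jms):

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageListener;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ListenerConfig {
        public static DefaultMessageListenerContainer container(ConnectionFactory cf) {
            DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
            dmlc.setConnectionFactory(cf);
            dmlc.setDestinationName("MY.QUEUE");
            dmlc.setMessageListener((MessageListener) msg -> { /* handle message */ });
            // Each receive() now blocks for up to 10 minutes before the container
            // loops again, instead of waking up (and possibly reconnecting) every second.
            dmlc.setReceiveTimeout(600_000);
            return dmlc;
        }
    }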
Note that if you are running in a JEE application server, and your JMS connections are managed by the JCA, then that layer is responsible for detecting bad connections and you don't have to worry about it up in the application layer.
With Camel and for SpringBoot GitHub might be useful.

Does EventStoreDB provide message ordering by an event-key on the consumer side?

I have been exploring EventStoreDB and trying to understand more about the ordering of messages on the consumer side. Read about persistent subscriptions and also the Pinned consumer strategy here.
I have a scenario wherein inventory updates get pushed to eventstore and different streams get created by the different unique inventoryIds in the inventory event.
We have multiple consumers with the same consumerGroup name to read these inventory events. We are using Pinned Persistent Subscription with ResolveLinkTos enabled.
My question:
Will every message from a particular stream always go to the same consumer instance of the consumerGroup?
If the answer to the above question is yes, will every message from that particular stream reach the particular consumer instance in the same order as the events were ingested?
The documentation has a warning that ordered message processing using persistent subscriptions is not guaranteed. Any strategy delivers messages with the best-effort level of ordering guarantees, if applicable.
There are a few reasons for this, some of those are:
Spreading out messages across the consumers of a group leads to a non-linearised checkpoint commit. It means that some messages can be processed before other messages.
Persistent subscriptions attempt to buffer messages, but when a timeout happens on the client side, the whole buffer is redelivered, which can eventually break the processing order
Built-in retry policies essentially can break the message order at any time
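As a toy illustration of the retry point above (plain Java, not the EventStoreDB client API): if delivery of event 1 times out unacknowledged, it gets re-queued and ends up being processed after event 2.

    import java.util.*;

    public class RetryReorderToy {
        public static void main(String[] args) {
            Deque<Integer> pending = new ArrayDeque<>(List.of(1, 2));
            List<Integer> processed = new ArrayList<>();
            Set<Integer> timedOutOnce = new HashSet<>();

            while (!pending.isEmpty()) {
                int event = pending.poll();
                // The first delivery of event 1 "times out" without an ack,
                // so the server re-queues it behind what is already in flight.
                if (event == 1 && timedOutOnce.add(1)) {
                    pending.addLast(1);
                    continue;
                }
                processed.add(event);
            }
            System.out.println(processed);  // [2, 1] - event 1 now arrives after event 2
        }
    }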
Most event log-based brokers, if not all, don't even attempt to guarantee ordered message delivery across multiple consumers. I often hear "but Kafka does it", ignoring the fact that Kafka delivers messages from one partition to at most one consumer in a group. There's no load balancing of one partition between multiple consumers due to exactly the same issue. That being said, EventStoreDB is still not a broker, but a database for events.
So, here are the answers:
Will every message from a particular stream always go to the same consumer instance of the consumer group?
No. It might work most of the time, but it will eventually break.
will every message from that particular stream reach the particular consumer instance in the same order as the events were ingested?
Most of the time, yes, but again, if a message is being retried, you might get the next message before the previous one is Acked.
Overall, load-balanced ordered processing of messages that aren't pre-partitioned on the server is not an easy task. At most, you get messages redelivered if the checkpoint fails to persist at some point and the consumers restart.

WMQ Message Logging Scenarios v 7.5

In the following scenario, I'm curious what happens with respect to what's in the active log files of the queue manager in question. Linear logging is being used.
What activity (if any) is experienced by the MQ active logs during a scenario where a queue with, say, 100 messages is being read with a JMS context attribute (looking for a specific message) that, for the sake of this argument, it will NEVER find? All messages are read off the queue, but none are committed. The messages therefore were never actually deleted from the queue; does the queue manager, however, record such GET operations so as to recover these "in flight" conditions, should the queue manager crash while this is happening? We recently experienced a situation where the dequeue rate from a specific queue was in the 4000-4500 msg/min range, while the queue depth was only about 2500. We surmise that more than one such process thread was trying to read off a JMS message by context (sort of like with a correlation ID, I suppose), but without any hope of ever actually finding the message it was looking for (due to a probable misconfiguration). During this time, the active logs filled up rapidly. Is it likely that such wanton dequeue rates as we saw were the culprit?
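To make the described read pattern concrete, it presumably looked something like the sketch below (plain JMS; the queue name and correlation ID are placeholders, and whether this exactly matches the misconfigured application is an assumption):

    import javax.jms.*;

    public class SelectorScanner {
        // cf and queueName are assumed to come from the application's MQ/JNDI configuration.
        static void scanForever(ConnectionFactory cf, String queueName, String wantedCorrelId)
                throws JMSException {
            Connection conn = cf.createConnection();
            try {
                conn.start();
                // Transacted session: every receive() is a destructive get under syncpoint.
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
                while (true) {
                    Message m;
                    while ((m = consumer.receive(1000)) != null) {
                        if (wantedCorrelId.equals(m.getJMSCorrelationID())) {
                            session.commit();   // found it: the get becomes permanent
                            return;
                        }
                    }
                    // Never found: back out every get, the messages reappear on the queue,
                    // and the scan starts over - so the same messages are "dequeued" repeatedly.
                    session.rollback();
                }
            } finally {
                conn.close();
            }
        }
    }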
MQ writes log records for persistent messages during get and put. More details can be found here:
http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.dev.doc/q023070_.htm

ActiveMQ: Slow processing consumers

Concerning ActiveMQ: I have a scenario where I have one producer which sends small (around 10KB) files to the consumers. Although the files are small, the consumers need around 10 seconds to analyze them and return the result to the producer. I've researched a lot, but I still cannot find answers to the following questions:
How do I make the broker store the files (completely) in a queue?
Should I use ObjectMessage (because the files are small) or blob messages?
Because the consumers are slow processing, should I lower their prefetchLimit or use a round-robin dispatch policy? Which one is better?
And finally, in the ActiveMQ FAQ, I read this - "If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer.". So my question here is, does ActiveMQ guarantee that only 1 consumer will process the message (and therefore there will be only 1 answer to the producer), or not? When does the consumer acknowledge a message (in the default, automatic acknowledge settings) - when receiving the message and storing it in a session, or when the onMessage handler finishes? And also, because the consumers are so slow in processing, should I change some "timeout limit" so the broker knows how much to wait before giving the work to another consumer (this is kind of related to my previous questions)?
Not sure about others, but here are some thoughts.
First: I am not sure what your exact concern is. ActiveMQ does store messages in a data store; all data need NOT reside in memory in any single place (either broker or client). So you should actually be good in that regard; earlier versions did require that all ids needed to fit in memory (not sure if that was resolved), but even that memory usage would be low enough unless you had tens of millions of in-queue messages.
As to ObjectMessage vs. blob: a raw byte array (blob) should be the most compact representation, but since all of these get serialized for storage, it only affects memory usage on the client. Prefetch mostly helps with access latency, but given that your consumers are slow to process, you probably don't need any prefetching; so yes, either set it to 1 or 2, or disable it altogether.
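A sketch of lowering the prefetch, either on the connection factory or per destination (the broker URL, queue name and the value of 1 are examples):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.ActiveMQPrefetchPolicy;

    public class LowPrefetchFactory {
        public static ActiveMQConnectionFactory create() {
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            ActiveMQPrefetchPolicy prefetch = new ActiveMQPrefetchPolicy();
            prefetch.setQueuePrefetch(1);   // hand each consumer one message at a time
            cf.setPrefetchPolicy(prefetch);
            return cf;
        }
    }

    // Equivalent per-destination option when creating the consumer's queue:
    //   session.createQueue("work.queue?consumer.prefetchSize=1");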
As to guarantees: the best that distributed message queues can guarantee is either at-least-once (with possible duplicates) or at-most-once (no duplicates, but messages can be lost). It is usually better to take at-least-once and have clients de-duplicate using client-provided ids. How acknowledgement is sent is defined by the JMS specification, so you can read more about JMS; this is not ActiveMQ specific.
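For the acknowledgement-timing question, one option is to acknowledge explicitly only after the slow processing finishes; a sketch (the queue name and analyze() are placeholders, and note that in CLIENT_ACKNOWLEDGE mode acknowledge() covers all messages received so far on the session):

    import javax.jms.*;

    public class SlowAnalysisConsumer {
        static void consume(Connection conn) throws JMSException {
            Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("work.queue"));
            consumer.setMessageListener(msg -> {
                try {
                    analyze(msg);       // the ~10 second analysis from the question
                    msg.acknowledge();  // only now is the broker told the message is done
                } catch (Exception e) {
                    // no acknowledge: the broker will eventually redeliver (at-least-once)
                }
            });
            conn.start();
        }

        static void analyze(Message msg) { /* placeholder for the real work */ }
    }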
And yes, you should set the timeout high enough that the worker can typically finish its work, including all network latencies. This can slow down retransmission of dropped messages (if a worker dies), but that is probably not a problem for you.
