Given MassTransit is configured with a concurrency of 1
and has a retry policy of 1 hour for failed messages
and the queue starts with 2 messages
and consuming the first message fails:
Does MassTransit
1) wait for an hour before trying the first message again while the second message stays enqueued
or
2) wait for an hour before trying the first message again while proceeding to try the second message?
Simple answer: 1.
There are two ways to retry using MassTransit.
.UseMessageRetry(r => r.???);
This retry happens in memory and keeps the message locked. It also counts as active message consumption, so if a prefetch count or concurrency limit is used, the retried message continues to count towards that limit.
.UseScheduledRedelivery(r => r.???);
This reschedules the message for delivery using a scheduler (which may be supported by the broker, or via Quartz.NET). It does not block subsequent messages and will enqueue the message for future delivery.
Both are covered in the MassTransit documentation.
Related
I'm running a Go service that uses the Paho Go MQTT client to subscribe to a topic. The clients that produce the MQTT messages (also Paho, but on Android devices) log when they publish, and my service logs when it receives. Comparing the two, there is a pretty consistent "cap" just below 36,000 messages per day on the receiving side. The graphs follow each other almost perfectly up to the cap, but then the Go service tops out at slightly below 600 messages per minute, which is around 10 messages per second.
Where should I look for the solution to this? I cannot find any client option that could explain this cap.
As per the comments, paho.mqtt.golang defaults to ordered delivery of messages (the MQTT spec provides some guarantees regarding message ordering, and calling handlers in a goroutine may break this). The upshot is that messages are delivered one by one and, if your handler is not keeping up, a queue may form (at QoS 1+ the broker needs to retain messages because it may need to resend them).
Some brokers limit the number of messages queued for a client; for example the max_queued_messages option in Mosquitto defaults to 1000 (this default was lower in Mosquitto 1.X) and, if the queue exceeds the limit, "messages will be silently dropped".
This is what appears to have been happening here; the application was not keeping up with incoming messages so the broker began dropping messages when the queue exceeded a limit.
In many cases using the paho.mqtt.golang option ClientOptions.SetOrderMatters(false) will help; with this option set, the message handler is called in a separate goroutine (so the handler must be thread-safe). Alternatively, start a goroutine within the handler, but note that this approach results in the ACK being sent before the handler completes (which may result in message loss if your application terminates unexpectedly).
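For illustration, here is a minimal sketch of the first approach; the broker URL, client ID, and topic are placeholders, not taken from the question:

```go
// A minimal sketch, not the asker's code: broker URL, client ID, and topic are
// placeholders. With SetOrderMatters(false) the message handler is invoked in
// its own goroutine, so a slow handler no longer serialises delivery (the
// handler must be thread-safe).
package main

import (
    "fmt"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
    opts := mqtt.NewClientOptions().
        AddBroker("tcp://broker.example.com:1883"). // placeholder broker
        SetClientID("go-subscriber").               // placeholder client ID
        SetOrderMatters(false)                      // allow concurrent handler calls

    client := mqtt.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    handler := func(c mqtt.Client, m mqtt.Message) {
        // Must be safe to run concurrently with other invocations.
        fmt.Printf("got %d bytes on %s\n", len(m.Payload()), m.Topic())
    }

    // QoS 1 subscription on a placeholder topic.
    if token := client.Subscribe("devices/#", 1, handler); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    select {} // keep the subscriber running
}
```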
EDIT: Solved this one while I was writing it up :P -- I love that kind of solution. I figured I'd post it anyway; maybe someone else will have the same problem and find my solution. I don't care about points/karma, etc. I had already written the whole thing up, so I figured I'd post it along with the solution.
I have an SQS FIFO queue. It is using a dead-letter queue. Here is the setup:
I have a single producer microservice, and I have 10 ECS images that are running as consumers.
It is important that we process the messages close to the time they are delivered in the queue for business reasons.
We're using a fairly recent version of the AWS SDK Golang client package for both producer and consumer code (if important, I can go look up the version, but it is not terribly outdated).
I capture the logs for the producer so I know exactly when messages were put in the queue and what the messages were.
I capture aggregate logs for all the consumers, so I have a full view of all 10 consumers and when messages were received and processed.
Here's what I see under normal conditions looking at the logs:
Message put in the queue at time x
Message received by one of the 10 consumers at time x
Message processed by consumer successfully
Message deleted from queue by consumer at time x + (0-2 seconds)
Repeat ad infinitum for up to about 700 messages / day at various times per day
But the problem I am seeing now is that some messages are not being processed in a timely manner. Occasionally we deliberately fail processing a message because of the state of the system for that message (e.g. a user is still logged in, so the consumer should back off and retry... which it does). The problem is that when a consumer fails a message, the queue stops delivering any other messages to any of the consumers.
"Failure to process a message" here just means the message was received, but the consumer declared it a failure, so we just log an error, and do not proceed to delete it from the queue. Thus, the visibility timeout (here 5m) will expire and it will be re-delivered to another consumer and retried up to 10 times, after which it will go to the dead letter queue.
After delving into the logs and analyzing it, here's what I'm seeing:
1. Process begins like above (message produced, consumed, deleted).
2. New message received at time x by a consumer.
3. Consumer fails -- logs an error and just returns (does not delete).
4. Same message is received again at time x + 5m (visibility timeout).
5. Consumer fails -- logs an error and just returns (does not delete).
6. Repeat up to 10x -- message goes to the dead-letter queue.
7. New message received, but it is now 50 minutes late!
8. Now all messages that were put in the queue between steps 2-7 are 50 minutes late (5m visibility timeout * 10 retries).
All the docs I've read tell me the queue should not behave this way, but I've verified it several times in our logs. Sadly, we don't have a paid AWS support plan, or I'd file a ticket with them. But consider the fact that we have 10 separate consumers all reading from the same queue. They only read from this queue; the service doesn't use any other queues.
For deduplication we are using content-based deduplication (the automated hash of the message body). Messages are small JSON documents.
My expectation was that if a single bad message hits its visibility timeout, the queue would still happily deliver any other messages it has available while there are available consumers.
OK, so turns out I missed this little nugget of info about FIFO queues in the documentation:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
"When you receive a message with a message group ID, no more messages for the same message group ID are returned unless you delete the message or it becomes visible."
I was indeed using the same message group ID for every message; I hadn't given it a second thought. Just be aware: if you do that and any one of your messages fails to process, it will back up all other messages in the queue until that message is finally dealt with. The solution for me was to change the message group ID. There is a business-logic ID I can append to it that will work for me.
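For anyone hitting the same thing, here is a rough sketch of what the fix can look like with the AWS SDK for Go (v1); the region, queue URL, and the "orders-<id>" group naming are made-up placeholders, not our actual code:

```go
// A sketch, not our actual code: region, queue URL, and the "orders-" group
// naming are placeholders. Scoping MessageGroupId to a business entity means a
// failing message only blocks later messages in the same group, not the whole
// queue; FIFO ordering is still guaranteed within each group.
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sqs"
)

func send(svc *sqs.SQS, queueURL, orderID, body string) error {
    _, err := svc.SendMessage(&sqs.SendMessageInput{
        QueueUrl:    aws.String(queueURL),
        MessageBody: aws.String(body),
        // Previously a single constant group ID serialised the whole queue.
        MessageGroupId: aws.String(fmt.Sprintf("orders-%s", orderID)),
        // Content-based deduplication is enabled on the queue, so no
        // MessageDeduplicationId is set here.
    })
    return err
}

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := sqs.New(sess)
    err := send(svc, "https://sqs.us-east-1.amazonaws.com/123456789012/example.fifo",
        "42", `{"event":"user-logout"}`)
    if err != nil {
        panic(err)
    }
}
```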
I am in charge of maintaining production software written in Go which uses RabbitMQ as its message queue.
Consider the following situation:
A number of goroutines publish to a queue named logs.
Another set of goroutines reads from the queue and writes the messages to a MongoDB collection.
Each publisher and consumer has its own connection and its own channel; they work in an infinite loop and never die. (The connections and channels are established when the program starts.)
autoAck, exclusive, and noWait are all set to false, and prefetch is set to 20 with global set to false for all channels. All queues are durable, with autoDelete, exclusive, and noWait all set to false.
The basic assumption was that each message in the queue would be delivered to one and only one consumer, so each message would be inserted in the database exactly once.
The problem is that there are duplicate messages in the MongoDB collection.
I would like to know if it is possible that more than one consumer gets the same message causing them to insert duplicates?
The one case I could see with your setup where a message would be processed more than once is if one of the consumers has an issue at some point.
The situation would follow such a scenario:
The consumer gets a bunch of messages from the queue.
The consumer starts processing a message.
The consumer commits the message to MongoDB.
Due to a RabbitMQ channel/connection issue, or some other issue on the consumer side, the consumer never acknowledges the message.
Because it has not been acknowledged, the message is requeued at the top of the queue.
The same message is processed again, causing the duplication.
Such cases should show some errors in your consumers' logs.
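One way to guard against that scenario is to make the MongoDB write idempotent and only ack after the write succeeds. Here is a rough sketch, assuming the rabbitmq/amqp091-go and mongo-driver packages and assuming (this is not stated in the question) that publishers set a unique MessageId on each message:

```go
// A sketch, not the asker's code: it assumes publishers set a unique MessageId
// on every message. The MongoDB write is an upsert keyed on that id, and the
// ack is only sent after the write succeeds, so a redelivered message cannot
// create a second document.
package main

import (
    "context"
    "log"

    amqp "github.com/rabbitmq/amqp091-go"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func consume(ch *amqp.Channel, logs *mongo.Collection) error {
    // Same settings as described above: manual ack, prefetch 20, global false.
    if err := ch.Qos(20, 0, false); err != nil {
        return err
    }
    deliveries, err := ch.Consume("logs", "", false /* autoAck */, false, false, false, nil)
    if err != nil {
        return err
    }
    for d := range deliveries {
        // Upsert keyed on the message id: a redelivery overwrites the same
        // document instead of inserting a duplicate.
        _, err := logs.UpdateOne(
            context.Background(),
            bson.M{"_id": d.MessageId},
            bson.M{"$set": bson.M{"body": string(d.Body)}},
            options.Update().SetUpsert(true),
        )
        if err != nil {
            log.Printf("mongo write failed, requeueing: %v", err)
            _ = d.Nack(false, true)
            continue
        }
        _ = d.Ack(false)
    }
    return nil
}
```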
I am using WebLogic 11g, but this question applies to JMS messaging in general.
Let's assume I have messages in the queue in the order 5-4-3-2-1.
If message #1 fails to deliver and there is a redelivery delay of 30 seconds on the JMS queue, will the messages behind it get delivered during those 30 seconds, or will they also have to wait for 30 seconds in this case?
I found the answer; leaving a reference for the future.
The following article, http://middlewaremagic.com/weblogic/?p=6334, lists this in its Poison Messages section:
"Note that messages with a redelivery delay do not prevent other messages from being delivered"
We have 10 messages in ActiveMQ and we started 2 consumers, but only the first consumer is consuming and processing the messages. The second consumer is not consuming any messages.
If I send one more message to the queue while the first consumer is busy processing, the second consumer consumes and processes only that particular message (the one sent while the first consumer was busy). After that, it does not consume any of the pending messages.
In short, all of the pending messages are processed by the first consumer, not the remaining consumers.
I want all consumers to be involved in processing the pending messages.
Thanks.
I think what you are looking at is the prefetch limit causing one consumer to hog a bunch of messages up front, thereby starving the other consumers. You need to lower the consumer prefetch limit so that the broker won't eagerly dispatch messages to the first connected consumer, allowing other consumers to come online and help balance the load.
In your case a prefetch limit of one would allow all consumers to jump in and get some work.
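The question doesn't say which client library is in use, so purely as an illustration, here is a sketch with a Go STOMP client (go-stomp) where the prefetch is set per subscription via ActiveMQ's activemq.prefetchSize header; with the Java/JMS client the equivalent would be the consumer.prefetchSize destination option or the connection's prefetch policy. The broker address and queue name are placeholders:

```go
// A sketch assuming the go-stomp client; broker address and queue name are
// placeholders. With a prefetch of 1 the broker dispatches one message at a
// time per consumer, so both consumers share the backlog instead of the first
// one hogging all 10 messages.
package main

import (
    "log"

    "github.com/go-stomp/stomp/v3"
)

func main() {
    conn, err := stomp.Dial("tcp", "localhost:61613") // placeholder broker address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Disconnect()

    sub, err := conn.Subscribe("/queue/work", stomp.AckClientIndividual,
        stomp.SubscribeOpt.Header("activemq.prefetchSize", "1")) // prefetch = 1
    if err != nil {
        log.Fatal(err)
    }

    for msg := range sub.C {
        // ... process msg.Body ...
        if err := conn.Ack(msg); err != nil {
            log.Printf("ack failed: %v", err)
        }
    }
}
```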