Resuming consumption of messages from a previously created durable subscription - RabbitMQ - AMQP 1.0

I have an issue we are encountering in our RabbitMQ instance: we publish to the default topic, and dynamic consumers come and go.
The scenario is: consumer B is consuming messages and, for some reason, goes down; when it later comes back up, it does not resume consumption of messages from its previously created durable subscription.
We need to re-create the binding for it, which is a big concern.
We are seeing this issue only with AMQP 1.0.
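For reference, here is a minimal sketch of how resuming a durable subscription normally looks when consuming over AMQP 1.0 through JMS. It assumes the Apache Qpid JMS client, a broker that supports the durable-subscription mapping, and placeholder host, topic, and subscription names; the essential point is that the client ID and subscription name must be identical across restarts, since the broker uses that pair to locate the existing subscription:

```java
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

public class DurableResume {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        Connection connection = factory.createConnection("guest", "guest");
        // Must be set before the connection is used, and must be the same
        // value every time this consumer starts.
        connection.setClientID("consumer-B");
        connection.start();

        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Topic topic = session.createTopic("my-topic");

        // Re-attaching with the same subscription name resumes the existing
        // durable subscription instead of creating a new one (and a new
        // underlying queue/binding).
        MessageConsumer consumer = session.createDurableConsumer(topic, "sub-B");
        Message message = consumer.receive(5000);
        if (message != null) {
            message.acknowledge();
        }
        connection.close();
    }
}
```

If either the client ID or the subscription name changes on restart, the broker sees a brand-new subscription, which would match the behaviour described above.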

Related

MassTransit consumers didn't acknowledge some messages

I have a question about some strange consumer behaviour.
Recently we had a strange situation in our production environment. Two consumers on two different microservices were stuck on some messages. The first one was holding 20 messages from a RabbitMQ queue and the second one 2 messages, and they weren't processing them. These messages were visible as Unacked in RabbitMQ for two days. They went back to the Ready state only when those two microservices were restarted. At the time the consumers took these messages, the whole program was processing thousands of messages per hour, so basically our Saga and all consumers were working. Once the messages went back to the Ready state they were processed within a second, so I don't think the problem is with the messages themselves.
The messages are published by a Saga to an exchange, and besides these two stuck consumers we also have an EventLogger consumer subscribed to all messages; the EventLogger processed these 22 messages normally, without any problems (from its own queue). We also have Application Insights connected to the consumers, and there is no record of these 22 messages being received by the two stuck consumers (there are records of the EventLogger receiving them).
The other day we had the same issue with one message in the test environment.
Recently we updated the MassTransit version in our project from 6.2.0 to 7.1.6, and before that we didn't notice any similar issues with consumers, but maybe that's just a coincidence. We also have retry, redelivery, circuit breaker and in-memory outbox mechanisms, but I don't think the problem is with them, because the consumers didn't even start to process these 22 messages.
Do you have any suggestions as to what could have happened to these consumers?
Usually when a consumer doesn't even start to consume a message once it has been delivered to MassTransit by RabbitMQ, it could be an issue resolving the consumer from the container, such as a dependency on another backing service (database, log server, file, network connection, device, etc.).
The message remains unacknowledged on the broker because the transport/delivery mechanism to the consumer is waiting for a resource to become available. If there isn't anything in the logs for that time period indicating an issue with a resource, it's hard to know what could have blocked those messages from being consumed. The fact that they were ultimately consumed once the services were restarted seems to indicate the message content itself was fine.
Monitoring the lack of message consumption (and the likely associated queue depth increase) would give an indication that the situation has occurred. If it happens again, I'd increase the logging detail levels so that the issue can be identified.
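As a starting point for that monitoring, here is a minimal sketch that polls RabbitMQ's management HTTP API for a queue's counters, assuming the management plugin is enabled on localhost:15672, default guest credentials, and a placeholder queue name:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // %2F is the URL-encoded default vhost "/"; the queue name is a placeholder.
        String url = "http://localhost:15672/api/queues/%2F/my-consumer-queue";
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body includes "messages" (ready) and "messages_unacknowledged";
        // an unacked count that stays flat for hours is the symptom described above.
        System.out.println(response.body());
    }
}
```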

MassTransit - how to handle MessageLockLostException scenario

In the event where the consumer is executing and the lock on the message is lost in the meantime, due to the lock duration expiring or an intermittent connection loss, how can this scenario be handled from within the consumer? Currently, if the consumer finishes, a MessageLockLostException is thrown, and another instance of the consumer will already have been started.
To solve that, we wrote idempotent consumers. However, there is still an issue: the first instance (the one that was executing prior to losing the lock) throws an exception, which causes the message to go to the {queueName}_error queue by default, and the MassTransit pipeline seems to ignore the fact that the message lock has been lost at that point and that a new consumer instance for the same message has been started.
So the summary of this predicament is that we can end up with a message both successfully acknowledged and sitting in the error queue. I am looking to hear whether anyone has successfully dealt with a similar scenario, or whether there are hooks in MassTransit through which a consumer can work out whether it still holds an active lock on the message it is processing.
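I'm not aware of a hook that exposes the lock state to the consumer, but for what it's worth, the deduplication half of the idempotent-consumer approach described above can be reduced to a sketch like the following. The class and method names are hypothetical, and the in-memory set stands in for what would need to be a shared store (a database table with a unique constraint, for example) so that competing consumer instances see each other's work:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentHandler {
    // In production this would be a shared, durable store rather than
    // process-local memory.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    public void handle(String messageId, Runnable businessLogic) {
        // add() returns false if the ID was already recorded, i.e. this is a
        // duplicate delivery caused by a lost lock and a redelivery.
        if (!processed.add(messageId)) {
            return; // duplicate: acknowledge without reprocessing
        }
        businessLogic.run();
    }
}
```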

Apache Kafka: How to check, that an event has been fully handled?

I am facing an issue when decoupling two systems with an event/message broker like Apache Kafka. The issue relates to a frontend triggering actions in a backend:
How does the producer (frontend service) know that a published event has been properly handled by all the backend services (the consumers), if the publisher knows neither the "identities" nor the number of consuming backends?
To be precise: users can change, for example, their email address using a frontend UI. An associated service publishes that "change request" event to an appropriate topic within Kafka. The UI form is then "locked" to prevent subsequent change requests until the change event has been fully processed by every consumer. But it's unclear how to detect this state.
You can use another topic to publish handled jobs, so your frontend publishes to one topic and your backend publishes to another once it is done.
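A minimal sketch of that idea with the Kafka Java client, using placeholder topic names and request IDs: after handling a change request, the backend publishes a completion event keyed by the original request ID, and the frontend side consumes the results topic to decide when to unlock the form:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompletionPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed by the request ID the frontend generated, so the frontend
            // can correlate this completion with its pending request.
            producer.send(new ProducerRecord<>("change-request-results",
                    "request-42", "email-change-applied"));
        }
    }
}
```

Since the number of consumers is unknown to the publisher, the frontend would also need some out-of-band way to know how many completion events to wait for.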
In Kafka terms, neither the producer nor consumer are considered backend - they're both clients connecting to a broker, which is generally considered to be the backend.
A producer will know that it has produced a message successfully, by virtue of the acks setting. A consumer will read a message, and then at a later point, its offset will be updated to a point corresponding to the last message it read. However, there is generally no interaction between a producer and a consumer, and they are generally completely unaware of one another.
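To illustrate the producer side of that, a minimal sketch with acks=all and a send callback; the broker address and topic are placeholders, and the callback's exception argument is non-null only if the broker never acknowledged the write:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AckAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Wait for all in-sync replicas before the send is considered successful.
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(
                    new ProducerRecord<>("change-requests", "request-42", "new@example.org"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace(); // broker never acknowledged
                        } else {
                            System.out.println("committed at offset " + metadata.offset());
                        }
                    });
        } // close() flushes, so the callback fires before the program exits
    }
}
```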

ActiveMQ not delivering/dispatching persistent messages on queues

I am using ActiveMQ v5.10.0 and having an issue almost every weekend where my ActiveMQ instance stops delivering persistent messages sent on queues to the consumers. I have not been able to figure out what could be causing this.
While the issue was happening, I tried the following things:
1. I added a new consumer on the affected queue, but it didn't receive any messages.
2. I restarted the original consumer, but it didn't receive any messages after the restart.
3. I purged the messages that were held on the queue, but then messages started accumulating again and the broker didn't deliver any of the new messages. When I purged, the expiry count didn't increase, and neither did the dequeue and dispatch counters.
4. I sent 100 non-persistent messages on the affected queue; surprisingly, the consumer received those messages.
5. I sent 100 persistent messages on that queue; the broker didn't deliver any of them and held all the messages.
6. I created a fresh new queue and sent 100 persistent messages; none of them was delivered to the consumer, whereas all the non-persistent messages were delivered.
The same things happen if I send persistent or non-persistent messages from STOMP producers. Surprisingly, all this happened only for queues; topic consumers were able to receive persistent as well as non-persistent messages.
I have already posted this on ActiveMQ user forum: http://activemq.2283324.n4.nabble.com/Broker-not-delivering-persistent-messages-to-consumer-on-queue-td4691245.html but no one from ActiveMQ has suggested anything.
The jstack output also isn't very helpful.
More details:
1. I am not using selectors or the message groups feature.
2. I have disabled producer flow control in my setup.
I would like some suggestions as to what configuration values might cause this issue: memory limits, message TTL, etc.
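One thing worth ruling out, given that only persistent messages are affected, is the persistence store hitting its systemUsage limit; with producer flow control disabled, the broker may keep accepting sends while dispatch stalls. This is an assumption, not a confirmed diagnosis. A sketch that reads the broker's usage percentages over JMX, assuming remote JMX is enabled and using placeholder JMX URL and broker name:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerUsageCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName broker =
                    new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            // Values at or near 100 point at the corresponding systemUsage limit.
            System.out.println("StorePercentUsage:  " + mbs.getAttribute(broker, "StorePercentUsage"));
            System.out.println("MemoryPercentUsage: " + mbs.getAttribute(broker, "MemoryPercentUsage"));
            System.out.println("TempPercentUsage:   " + mbs.getAttribute(broker, "TempPercentUsage"));
        } finally {
            connector.close();
        }
    }
}
```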

ActiveMQ with slow consumer skips 200 messages

I'm using ActiveMQ along with Mule (a kind of ESB based on Spring).
We have a fast producer and a slow consumer.
It's a synchronous configuration with only one consumer.
Here is the consumer configuration in Spring style: http://pastebin.com/vweVd1pi
The biggest requirement is to keep the order of the messages.
However, after hours of running this code, ActiveMQ suddenly skips 200 messages and sends the next ones. The 200 messages are still there in ActiveMQ; they are not lost.
Our client (Mule) has some custom code to check the order of the messages, using a unique identifier.
I already had this issue a few months ago. We changed the consumer to use the parameter "jms.prefetchPolicy.queuePrefetch=1". That seemed to work and to be the fix we needed, until now, when the issue reappeared on another consumer.
Is it a bug, or a configuration issue?
I can't speak to the requirement from a Mule perspective, but there are a couple of broker features that you should take a look at. There are two ways to guarantee message ordering in ActiveMQ:
Message groups are a way of ensuring that a set of related messages will be consumed by the same consumer in the order that they are placed on a queue. To use them you need to specify a JMSXGroupID header on related messages and assign them an incrementing JMSXGroupSeq number. If a consumer dies, the remaining messages from that group will be sent to another single consumer, while still preserving order (see the sketch below).
Total message ordering applies to all messages on a topic. It is configured on the broker on a per-destination basis and requires no particular changes to client code. It comes with a synchronisation overhead.
Both features allow you to scale out to more than one consumer.
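A minimal producer-side sketch of the message-groups approach described above, assuming the ActiveMQ 5.x JMS client and placeholder broker URL, queue name, and group ID:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class GroupedProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("ORDERS"));

            for (int seq = 1; seq <= 3; seq++) {
                TextMessage message = session.createTextMessage("step " + seq);
                // All messages sharing a JMSXGroupID go to one consumer, in order;
                // JMSXGroupSeq is the incrementing sequence within the group.
                message.setStringProperty("JMSXGroupID", "ORDER-1234");
                message.setIntProperty("JMSXGroupSeq", seq);
                producer.send(message);
            }
        } finally {
            connection.close();
        }
    }
}
```

No consumer-side changes are needed; the broker pins each group to a single consumer.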
