Handle AWS SQS Dead Letter Queue - spring

I've configured a DLQ in AWS following this article:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue.html
This is the consumer I have in a Spring Boot application:
@SqsListener(value = "${cloud.aws.endpoint.queue-name}", deletionPolicy = SqsMessageDeletionPolicy.DEFAULT)
private void getMessage(final Event event) {
    // process event
}
However, when there is an exception during event processing (say an NPE, or an exception calling an external REST API), the messages that failed to be processed do not flow into the dead letter queue.
Am I missing anything?

It's difficult to help with the little information you provide, but check the following:
Perhaps your queue is not properly set up
Dead letter queue redirection is not related to the consumer setup but to the way the queues were configured at creation in AWS. So use the AWS CLI get-queue-attributes command
https://docs.aws.amazon.com/cli/latest/reference/sqs/get-queue-attributes.html
to check whether your source queue has the proper RedrivePolicy (a programmatic version of this check is sketched after the quote below). Also check that maxReceiveCount is set up properly on your source queue. From the doc:
maxReceiveCount – The number of times a message is delivered
to the source queue before being moved to the dead-letter queue.
When the ReceiveCount for a message exceeds the maxReceiveCount for a queue,
Amazon SQS moves the message to the dead-letter queue.
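If you'd rather script this check than run the CLI by hand, here is a minimal sketch using the AWS SDK for Java v2 (the queue name and the sample output in the comment are illustrative; a null result means no RedrivePolicy, i.e. no DLQ is attached):
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class RedrivePolicyCheck {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            String queueUrl = sqs.getQueueUrl(b -> b.queueName("my-queue-name")).queueUrl();
            String redrivePolicy = sqs.getQueueAttributes(b -> b
                    .queueUrl(queueUrl)
                    .attributeNames(QueueAttributeName.REDRIVE_POLICY))
                .attributes()
                .get(QueueAttributeName.REDRIVE_POLICY);
            // A properly configured source queue prints something like:
            // {"deadLetterTargetArn":"arn:aws:sqs:eu-west-1:123456789012:my-queue-name-dlq","maxReceiveCount":"5"}
            System.out.println(redrivePolicy);
        }
    }
}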
If the dead letter queue has not been configured properly, follow the AWS CLI create-queue example
https://docs.aws.amazon.com/cli/latest/reference/sqs/create-queue.html
to attach a dead letter queue to the source queue.
Try to send a message to the queue outside of Spring, using only the CLI's send-message command (https://docs.aws.amazon.com/cli/latest/reference/sqs/send-message.html), and ensure that the dead letter queue gets properly populated.
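If you prefer to keep that smoke test in code rather than the CLI, a minimal sketch with the AWS SDK for Java v2 (both queue names are placeholders; the test message only lands on the DLQ after the consumer has failed maxReceiveCount times):
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class DlqSmokeTest {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            String sourceUrl = sqs.getQueueUrl(b -> b.queueName("my-queue-name")).queueUrl();
            sqs.sendMessage(b -> b.queueUrl(sourceUrl).messageBody("{\"test\":true}"));

            // After the consumer has failed maxReceiveCount times, the message
            // should show up on the DLQ:
            String dlqUrl = sqs.getQueueUrl(b -> b.queueName("my-queue-name-dlq")).queueUrl();
            String depth = sqs.getQueueAttributes(b -> b.queueUrl(dlqUrl)
                    .attributeNames(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES))
                .attributes()
                .get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES);
            System.out.println("DLQ depth: " + depth);
        }
    }
}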
Perhaps SqsListener is not pointing to the correct source queue
Messages may not be reaching the source queue that has the dead-letter queue attached. cloud.aws.endpoint.queue-name should be the property path that resolves to the queue name in your application.yml, as you defined it:
cloud:
  aws:
    endpoint:
      queue-name: "my-queue-name"
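For reference, a minimal listener sketch wired to that property (Spring Cloud AWS; Event is the type from the question). The deletion policy matters for redrive: ON_SUCCESS, shown here as one explicit choice, deletes the message only when the method returns normally, so a thrown exception leaves it on the queue for SQS to redrive to the DLQ once maxReceiveCount is exceeded:
import org.springframework.cloud.aws.messaging.listener.SqsMessageDeletionPolicy;
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;
import org.springframework.stereotype.Component;

@Component
public class EventListener {

    // Let exceptions propagate: failed messages stay on the queue so the
    // queue's redrive policy can move them to the DLQ.
    @SqsListener(value = "${cloud.aws.endpoint.queue-name}",
                 deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    public void getMessage(Event event) {
        // process event
    }
}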

Related

How does DLQ work in an Azure Service Bus queue?

I am learning how the DLQ works in an Azure Service Bus queue, i.e., unconsumed messages end up in the DLQ. I have enabled dead-lettering on message expiration (deadLetteringOnMessageExpiration).
References:
Azure Service Bus - Subscriptions and DLQ
Azure Service Bus - move message from DLQ to main
https://learn.microsoft.com/en-us/azure/service-bus-messaging/enable-dead-letter
ARM template:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-resource-manager-namespace-queue
Questions:
If deadLetteringOnMessageExpiration is enabled, will a DLQ be available for that queue?
If yes, how can I process messages from the DLQ? (I guess I can view such messages, but I'm not sure what will happen next.)
My goal is to create a queue with a DLQ where unprocessed messages can be processed at some point. What is the best way to achieve that?
If deadLetteringOnMessageExpiration is enabled, would DLQ be available for that queue?
The dead-letter queue is always there for queues and subscriptions, regardless of how you configure your entity.
If yes, how can I process messages from DLQ?
Up to you. You can peek DLQ-ed messages, receive and process them, etc. It really depends on how you want to handle those dead-lettered messages in the context of your system.
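To make "receive and process" concrete, here's a minimal sketch with the azure-messaging-servicebus Java SDK (connection string and queue name are placeholders). The DLQ is addressed as a sub-queue of the main entity, and completing a message removes it from the DLQ:
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.SubQueue;

public class DlqProcessor {
    public static void main(String[] args) {
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
            .connectionString(System.getenv("SERVICEBUS_CONNECTION_STRING"))
            .receiver()
            .queueName("my-queue")
            .subQueue(SubQueue.DEAD_LETTER_QUEUE) // the DLQ lives under the main entity
            .buildClient();

        for (ServiceBusReceivedMessage message : receiver.receiveMessages(10)) {
            // Inspect/repair/resubmit the message here; completing it removes it from the DLQ.
            System.out.println(message.getBody());
            receiver.complete(message);
        }
        receiver.close();
    }
}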

Create SQS queue for consuming events with Spring Boot

I have a spring boot application which needs to consume different types of events. The events will be published by another service to an SQS queue.
My question is, do I have to create one SQS queue where all the events will be published/consumed or do I have to create one SQS queue for each and every event that will be published/consumed?
It is possible to create a queue for each event type, but I don't recommend it, as you'll have to add a listener for each queue every time you add a new event type.
The first thing you can do is create a message structure that holds the topic inside it. Once your SQS handler receives a message, it checks the topic and routes it to the right handler or handlers.
I still don't like this option, because it means you are implementing a message broker yourself, and I don't think that's what you are aiming for.
The best solution would be to use SNS and publish each message to a different topic, then configure SNS to route the messages to SQS and let your Spring SQS message handler pick the right handler by topic; a sketch follows below.
While this solution is similar to the previous one, using SNS gives you the ability to publish a message to more than one queue (client), and the topic stays outside the message body, which is where it should be.
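A hedged sketch of the dispatch side (plain Java with Jackson; the topic ARNs and handler names are made up for illustration). It assumes SNS raw message delivery is off, so each SQS body is the standard SNS JSON envelope carrying TopicArn and Message:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;
import java.util.function.Consumer;

public class SnsEnvelopeDispatcher {
    private final ObjectMapper mapper = new ObjectMapper();
    // One handler per SNS topic; the ARNs are placeholders.
    private final Map<String, Consumer<String>> handlers = Map.of(
            "arn:aws:sns:eu-west-1:123456789012:order-created", this::onOrderCreated,
            "arn:aws:sns:eu-west-1:123456789012:order-cancelled", this::onOrderCancelled);

    public void dispatch(String sqsBody) throws Exception {
        JsonNode envelope = mapper.readTree(sqsBody);
        String topicArn = envelope.get("TopicArn").asText();
        String payload = envelope.get("Message").asText(); // the original event JSON
        Consumer<String> handler = handlers.get(topicArn);
        if (handler == null) {
            throw new IllegalArgumentException("No handler for topic " + topicArn);
        }
        handler.accept(payload);
    }

    private void onOrderCreated(String payload) { /* deserialize and handle */ }
    private void onOrderCancelled(String payload) { /* deserialize and handle */ }
}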

Requeue received SQS message through lambda

What works: Using the AWS SQS SDK for .NET, I'm able to receive a message batch and delete individual messages within the visibility timeout window. I can also do nothing and effectively requeue a message, which then gets dead-lettered if it's requeued a configured number of times.
What doesn't work: I'm trying to do the same thing using a Lambda. I've created a trigger which works, meaning SQS triggers the Lambda and sends it a batch of messages. These messages seem to get deleted from the queue automatically when this happens, and I have no control over deleting an individual message or requeuing it.
Throwing an exception in the Lambda seems to make all the messages in the batch remain in the queue. Is there a more elegant way to do this, and is there a way to do it for individual messages instead of the entire batch?
When you use the SQS Lambda trigger, your messages are automatically deleted from the queue on successful processing.
If you want to, you can poll messages from SQS instead of having them trigger your Lambda: just don't configure a trigger for the function and have it execute every X minutes via a CloudWatch Events rule, polling your SQS queue for new messages on every execution. I really don't see why you'd do that, though, since the trigger and the automatic deletion of messages are very handy. If you want to send failed messages to a DLQ, simply set the DLQ on your source SQS queue itself. You can then customise maxReceiveCount, and once this threshold is reached, messages will go to the configured DLQ.
From the docs:
If a message fails processing multiple times, Amazon SQS can send it
to a dead letter queue. Configure a dead letter queue on your source
queue to retain messages that failed processing for troubleshooting.
Set the maxReceiveCount on the queue's redrive policy to at least 5 to
avoid sending messages to the dead letter queue due to throttling.
Set the batch size to one and throw an exception if you want to requeue it.
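Combining both answers, a minimal sketch of the Lambda side (Java runtime, aws-lambda-java-events; the failure condition is made up). With the trigger's batch size set to 1, an exception thrown from the handler prevents the single message from being deleted; after maxReceiveCount failed receives, SQS moves it to the DLQ configured on the source queue:
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class RequeueOnFailureHandler implements RequestHandler<SQSEvent, Void> {

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        // With batch size 1 there is exactly one record per invocation.
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            process(message.getBody());
        }
        return null; // normal return: Lambda deletes the message from the queue
    }

    private void process(String body) {
        // Illustrative failure: any exception that propagates out of
        // handleRequest leaves the message on the queue for another attempt.
        if (body.contains("poison")) {
            throw new IllegalStateException("processing failed, message will be retried");
        }
    }
}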

Do messages get deleted from the queue after a read operation in IBM MQ?

I am using NiFi to get data from IBM MQ, and it is working fine. My question: once a message is read from an MQ queue, does it get deleted from the queue? How can I just read messages from the queue without deleting them?
My question is once the message is read from an MQ queue, does it get
deleted from the queue?
Yes, that is the default behavior.
How to just read messages from the queue without deleting them from
the queue?
You use the option MQGMO_BROWSE_FIRST followed by MQGMO_BROWSE_NEXT on the MQGET API calls.
You can also open the queue for browsing only, i.e., the MQOO_BROWSE option on the MQOPEN API call.
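A small sketch with the IBM MQ classes for Java showing both pieces together (queue manager and queue names are examples; a bindings-mode connection is assumed):
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BrowseQueue {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");
        // Open for browsing only: messages stay on the queue.
        MQQueue queue = qmgr.accessQueue("DEV.QUEUE.1",
                CMQC.MQOO_BROWSE | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_BROWSE_FIRST | CMQC.MQGMO_NO_WAIT;
        try {
            while (true) {
                MQMessage message = new MQMessage();
                queue.get(message, gmo); // browse, not a destructive get
                byte[] payload = new byte[message.getDataLength()];
                message.readFully(payload);
                System.out.println(new String(payload));
                gmo.options = CMQC.MQGMO_BROWSE_NEXT | CMQC.MQGMO_NO_WAIT;
            }
        } catch (MQException e) {
            if (e.reasonCode != CMQC.MQRC_NO_MSG_AVAILABLE) {
                throw e; // anything other than "end of queue" is a real error
            }
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }
}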
It sounds as if you would like to use a "publish/subscribe" model rather than a "point-to-point" model.
From ActiveMQ:
Topics
In JMS a Topic implements publish and subscribe semantics. When you publish a message it goes to all the subscribers who are interested - so zero to many subscribers will receive a copy of the message. Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
Queues
A JMS Queue implements load balancer semantics. A single message will be received by exactly one consumer. If there are no consumers available at the time the message is sent it will be kept until a consumer is available that can process the message. If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer. A queue can have many consumers with messages load balanced across the available consumers.
If you have a queue, when a consumer consumes a message, it is removed from the queue so that the next consumer consumes the next message. With a topic, multiple consumers can be subscribed to that topic and each retrieve the same message, without excluding one another.
If neither of these work for you, I'm not sure what semantics you're looking for -- a "queue" which doesn't delete the message when it is consumed will never let a consumer access any but the first message.
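To make the distinction concrete, a small plain-JMS subscriber sketch (javax.jms with the ActiveMQ client; broker URL and topic name are examples). Every subscriber with an active subscription gets its own copy of each published message:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("events.example");
        MessageConsumer subscriber = session.createConsumer(topic);

        // Each subscriber receives its own copy; a queue consumer would compete instead.
        subscriber.setMessageListener(message -> System.out.println("received: " + message));

        Thread.sleep(60_000); // keep the subscription alive briefly for the demo
        connection.close();
    }
}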

MassTransit - publish to all consumer instances

I am looking for a way for each consumer instance to receive a message that is published to RabbitMQ via MassTransit. The scenario: we have multiple microservices that need to invalidate a cache on notification. Plain pub-sub won't work here, as there will be five consumers of the same type (it's the same code per service instance), so only one of them would receive the message in a traditional pub-sub setup.
Message observation could be an option, but that means the messages would never be consumed and would hang around forever on the bus.
Can anyone suggest a pattern to use in the context of MassTransit?
Thanks in advance.
You should create a management endpoint in each service, which could even be a temporary queue (just request a receive endpoint without a queue name and one will be dynamically generated). Then, put your queue invalidation consumers on that endpoint. Each service instance will receive a unique instance of the message (when Publish is called), and those queues and bindings will automatically be removed once the service exits.
This is exactly how the bus endpoint works, but in your case, you're creating a receive endpoint which can have consumer message type bindings, so that published messages are received, one copy per service.
cfg.ReceiveEndpoint(e => { e.Consumer<CacheInvalidationConsumer>(); }); // consumer type is illustrative
Note that the queue name is not specified, and will be automatically generated uniquely.
