DLQ redrive failed events back to DynamoDB streams? - aws-lambda

I have a DynamoDB stream triggering a Lambda, and I want to push any failed events to a DLQ.
If the source of a DLQ is an SQS queue, it looks like you can do something called a redrive back to the source queue, where messages in the DLQ will be moved back to the source queue.
I am guessing that this isn't possible if the source is a DynamoDB stream?

AWS doesn't currently provide a mechanism to replay failed DynamoDB stream records from a DLQ. The messages in the DLQ carry the metadata of the failed batch rather than the actual failed records.
If you need to replay failed DynamoDB stream records, you can do it with a two-step approach (a sketch follows below):
1. Get the shard iterator from the event metadata.
2. Using the shard iterator, fetch the actual failed records from the DynamoDB stream and process them accordingly.
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html
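For illustration, here's a minimal Python (boto3) sketch of those two steps. It assumes the DLQ message carries the standard Lambda on-failure metadata (a DDBStreamBatchInfo object with streamArn, shardId, startSequenceNumber and batchSize); verify the payload your setup actually produces before relying on these field names:
```
import json
import boto3

streams = boto3.client("dynamodbstreams")

def fetch_failed_records(dlq_message_body):
    meta = json.loads(dlq_message_body)["DDBStreamBatchInfo"]

    # Step 1: position an iterator at the first record of the failed batch.
    iterator = streams.get_shard_iterator(
        StreamArn=meta["streamArn"],
        ShardId=meta["shardId"],
        ShardIteratorType="AT_SEQUENCE_NUMBER",
        SequenceNumber=meta["startSequenceNumber"],
    )["ShardIterator"]

    # Step 2: read the actual records back from the stream.
    records = []
    while iterator and len(records) < meta["batchSize"]:
        resp = streams.get_records(ShardIterator=iterator)
        if not resp["Records"]:
            break  # caught up to the end of the shard
        records.extend(resp["Records"])
        iterator = resp.get("NextShardIterator")
    return records
```
Keep in mind that DynamoDB stream records are retained for only 24 hours, so any replay has to happen within that window.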

Related

How to reprocess failed events from Kinesis?

I have a Lambda producer which calls putRecords to write to a Kinesis stream. Sometimes while writing to Kinesis I get an Internal Service Failure. What is the best way to handle such cases where the Lambda fails to write to Kinesis? I have a retry mechanism on my producer Lambda, but even after the retry attempts it fails to write in some cases.
A good approach could be to use a dead-letter queue (DLQ) with your Lambda function. You can configure an SNS topic or SQS queue as the DLQ, write all the failed events there, and retry processing them later.
https://aws.amazon.com/about-aws/whats-new/2016/12/aws-lambda-supports-dead-letter-queues/
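On the producer side it's also worth noting that PutRecords can partially fail without raising an exception. Here is a hedged boto3 sketch of inspecting the per-record results and forwarding rejects to a queue (the queue URL is a made-up placeholder):
```
import json
import boto3

kinesis = boto3.client("kinesis")
sqs = boto3.client("sqs")

# Placeholder URL for the queue that collects rejected records.
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/kinesis-failed-records"

def put_with_dlq(stream_name, records):
    """records: list of {'Data': bytes, 'PartitionKey': str} entries."""
    resp = kinesis.put_records(StreamName=stream_name, Records=records)
    if resp["FailedRecordCount"] == 0:
        return

    # The response entries line up one-to-one with the request; failed
    # entries carry an ErrorCode instead of a SequenceNumber.
    for req, result in zip(records, resp["Records"]):
        if "ErrorCode" in result:
            sqs.send_message(
                QueueUrl=DLQ_URL,
                MessageBody=json.dumps({
                    "partitionKey": req["PartitionKey"],
                    "data": req["Data"].decode("utf-8"),  # assumes UTF-8 payloads
                    "errorCode": result["ErrorCode"],
                }),
            )
```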

How DLQ works in Azure Service bus queue?

I am learning how the DLQ works in an Azure Service Bus queue, i.e., unconsumed messages end up in the DLQ. I have enabled dead-lettering on message expiration (deadLetteringOnMessageExpiration).
References:
Azure Service Bus - Subscriptions and DLQ
Azure Service Bus - *move* message from DLQ to main
https://learn.microsoft.com/en-us/azure/service-bus-messaging/enable-dead-letter
ARM template:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-resource-manager-namespace-queue
Questions:
If deadLetteringOnMessageExpiration is enabled, would a DLQ be available for that queue?
If yes, how can I process messages from the DLQ? (I guess I can view such messages in the portal, but I'm not sure what happens next.)
My goal is to create a queue with a DLQ where unprocessed messages can be processed at some point. What is the best way to achieve that?
If deadLetteringOnMessageExpiration is enabled, would DLQ be available for that queue?
The dead-letter queue always exists for queues and subscriptions, regardless of how you configure your entity.
If yes, how can I process messages from DLQ?
Up to you. You can peek DLQ'ed messages, receive and process them, etc. It really depends on how you want to handle those dead-lettered messages in the context of your system.
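As a concrete illustration, here is a small Python sketch using the azure-servicebus SDK that drains the DLQ and resubmits each message to the main queue. The connection string and queue name are placeholders, and resubmitting is just one possible way to handle a dead-lettered message:
```
from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSubQueue

CONN_STR = "<your Service Bus connection string>"  # placeholder
QUEUE = "myqueue"                                  # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # The DLQ is addressed as a sub-queue of the main entity.
    dlq = client.get_queue_receiver(queue_name=QUEUE,
                                    sub_queue=ServiceBusSubQueue.DEAD_LETTER)
    sender = client.get_queue_sender(queue_name=QUEUE)
    with dlq, sender:
        for msg in dlq.receive_messages(max_message_count=10, max_wait_time=5):
            # Resubmit the body to the main queue, then settle the DLQ copy.
            sender.send_messages(ServiceBusMessage(str(msg)))
            dlq.complete_message(msg)
```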

Multiple processes on Single SQS with in a Lambda function

Currently, we have an SQS queue subscribed to a standard SNS topic, which triggers a Lambda to publish some data based on these events to a downstream X.
We have come up with another use case where we want to listen to this existing SNS topic and publish another set of data based on these events to downstream Y. In the future we might have yet another use case where we listen to the same topic and publish data to downstream Z.
I was wondering if we can reuse this existing SQS queue and Lambda for these new use cases. I am curious how we would handle failure scenarios in case one of the publishes fails. A failure in one process out of X will send the message to the DLQ, from where a redrive would be required, so all the consumer processes of this message within the Lambda would have to process the redriven message again?
Another way could be to have a separate SQS queue and a separate Lambda for each such use case.
Has someone had a similar problem, and which approach was followed: one of the above two, or anything else that could help reuse some of the existing infra?
You should subscribe multiple Amazon SQS queues to the Amazon SNS topic, so that each of them receives a copy of the message. Each SQS queue can have its own Dead Letter Queue for error handling.
It is also possible to subscribe the AWS Lambda function to the Amazon SNS topic directly, without using an Amazon SQS queue. The Lambda function can send failed messages directly to a Dead Letter Queue.
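A rough boto3 sketch of that fan-out wiring (the ARNs are placeholders, and the SQS access policies that let SNS deliver to the queues are omitted):
```
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:events"  # placeholder

# One queue per downstream; each can have its own RedrivePolicy/DLQ.
for queue_arn in [
    "arn:aws:sqs:us-east-1:123456789012:downstream-x",   # placeholder
    "arn:aws:sqs:us-east-1:123456789012:downstream-y",   # placeholder
]:
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=queue_arn,
        # Deliver the payload without the SNS JSON envelope.
        Attributes={"RawMessageDelivery": "true"},
    )
```
Each queue then triggers its own Lambda, so a failure publishing to downstream Y only redrives Y's copy of the message.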

Create SQS queue for consuming events with spring boot

I have a Spring Boot application which needs to consume different types of events. The events will be published by another service to an SQS queue.
My question is: do I have to create one SQS queue where all the events will be published/consumed, or one SQS queue for each and every event type?
It is possible to create a queue for each message type, but I don't recommend it, as you'll have to add a listener to each queue every time you add a new message type.
The first thing you can do is create a message structure that holds the topic inside it. Once your SQS handler receives a message, it checks the topic and routes the message to the right handler or handlers.
I still don't like this option, because it means you are implementing a message broker yourself, and I don't think that's what you are aiming for.
The best solution would be to use SNS and publish each message type to a different topic. Then configure SNS to route each topic's messages to the right SQS queue, and let your Spring SQS message handler pick the right handler by topic.
While this solution is similar to the previous one, using SNS gives you the ability to publish a message to more than one queue (and thus more than one consumer), and the topic lives outside the message body, which is where it should be.
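The question is about Spring Boot, but the dispatch idea is the same in any SDK. A minimal Python sketch, with hypothetical handler names, of picking the handler from the topic in the SNS envelope (this assumes raw message delivery is left disabled, so SNS wraps the payload in its JSON envelope):
```
import json

# Hypothetical handlers, one per event type.
def handle_order_created(payload): ...
def handle_order_cancelled(payload): ...

HANDLERS = {
    "order-created": handle_order_created,
    "order-cancelled": handle_order_cancelled,
}

def dispatch(sqs_message_body):
    envelope = json.loads(sqs_message_body)
    # The envelope's TopicArn identifies the SNS topic the message came from.
    topic = envelope["TopicArn"].rsplit(":", 1)[-1]
    HANDLERS[topic](json.loads(envelope["Message"]))
```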

Requeue received SQS message through lambda

What works: Using the AWS SQS SDK for .NET, I'm able to receive a message batch and delete individual messages within the visibility timeout window. I can also do nothing and effectively requeue the message, which then gets dead-lettered if it's requeued a configured number of times.
What doesn't work: I'm trying to do the same thing using a Lambda. I've created a trigger which works, meaning SQS triggers the Lambda, sending it a batch of messages. These messages seem to get deleted from the queue automatically when this happens. I have no control over deleting an individual message or requeuing it.
Throwing an exception in the Lambda seems to make all the messages in the batch remain in the queue. Is there a more elegant way to do this, and is there also a way to do it for individual messages instead of the entire batch?
When you use the SQS Lambda trigger, your messages will be automatically deleted from the queue on successful processing.
If you want to, you can poll messages from SQS instead of having the messages trigger your Lambda. To do that, just don't configure a trigger for your Lambda function and have it execute every X amount of time via a CloudWatch Events rule, polling your SQS queue for new messages on every execution. I really don't see why you'd do that, though, since having the trigger and auto-deletion of messages is very handy. If you want to send failed messages to a DLQ, simply set the DLQ on your source SQS queue itself. You can then customise maxReceiveCount and, once this threshold is reached, messages will go to the configured DLQ.
From the docs:
If a message fails processing multiple times, Amazon SQS can send it to a dead letter queue. Configure a dead letter queue on your source queue to retain messages that failed processing for troubleshooting. Set the maxReceiveCount on the queue's redrive policy to at least 5 to avoid sending messages to the dead letter queue due to throttling.
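For reference, a hedged boto3 sketch of attaching such a redrive policy to the source queue (URL and ARN are placeholders, and the DLQ must already exist):
```
import json
import boto3

sqs = boto3.client("sqs")

SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:jobs-dlq"                     # placeholder

sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            "maxReceiveCount": "5",  # matches the docs' recommendation above
        })
    },
)
```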
Set the batch size to one and throw an exception if you want to requeue it.
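A minimal sketch of that batch-size-one pattern (process() is a hypothetical stand-in for your business logic):
```
def process(body):
    # Hypothetical business logic; return False to signal a failure.
    return True

def handler(event, context):
    # With BatchSize set to 1 on the trigger, there is exactly one record.
    record = event["Records"][0]
    if not process(record["body"]):
        # Raising stops Lambda from deleting the message: it becomes visible
        # again and, after maxReceiveCount failed receives, is dead-lettered.
        raise RuntimeError(f"requeue message {record['messageId']}")
```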
