I wonder if there is any configuration that helps control the throughput of an SQS queue. For example, limiting the queue to sending only 30 requests/sec.
We are going to work on a Spring Boot application which will be deployed on two ECS containers to support a clustered environment. This application will accept requests and drop messages into SQS. Another flow in the application should pick messages from the queue and process them. As the same application will be running on two different servers in the cluster, I am not sure which server will pick a message from the queue. How can I make sure that only one server picks up each message from the queue? It could be either server.
Ordinary SQS queues do not even guarantee that a message appears only once on the queue - see the AWS Standard SQS Queue docs.
Using a reasonable value for the visibility timeout (the period during which a message can't be seen by other consumers) relative to the time it takes to consume a message should solve it.
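For illustration, a minimal boto3 sketch of setting the visibility timeout as a queue attribute (the queue URL and the 60-second value are assumptions, not anything from the question):

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL for illustration.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"

# Hide each received message from other consumers for 60 seconds,
# which should comfortably exceed the time needed to process one message.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "60"},
)
```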
Alternatively you can use an SQS FIFO queue, but it's much slower and can, in my experience, get stuck on a corrupt message.
Currently, we have an SQS queue subscribed to a standard SNS topic, which triggers a Lambda to publish some data based on these events to a downstream X.
We have come up with another use case where we want to listen to this existing SNS topic and publish another set of data based on these events to downstream Y. In the future we might have yet another use case where we want to listen to this existing SNS topic and publish another set of data based on these events to downstream Z.
I was wondering if we can reuse this existing SQS queue and Lambda for these new use cases. I am just curious how we can handle failure scenarios in case one of the publishes fails. Failure of 1 process out of X will send the message to the DLQ, from where a redrive would be required, so all the consumer processes of this message within the Lambda will have to process this redriven message again?
Another way could be to have a separate SQS queue and a separate Lambda for each of these use cases.
Has someone had a similar problem statement? What was the approach followed out of the above two, or anything else that could help reuse some of the existing infra?
You should subscribe multiple Amazon SQS queues to the Amazon SNS topic, so that each of them receives a copy of the message. Each SQS queue can have its own Dead Letter Queue for error handling.
It is also possible to subscribe the AWS Lambda function to the Amazon SNS topic directly, without using an Amazon SQS queue. The Lambda function can send failed messages directly to a Dead Letter Queue.
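As a rough boto3 sketch of that fan-out (the topic and queue ARNs are placeholders; RawMessageDelivery is optional but strips the SNS envelope):

```python
import boto3

sns = boto3.client("sns")

# Placeholder ARNs for illustration.
topic_arn = "arn:aws:sns:us-east-1:123456789012:events"
queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:downstream-y",
    "arn:aws:sqs:us-east-1:123456789012:downstream-z",
]

# Each subscribed queue receives its own copy of every message,
# so each downstream consumer can fail and redrive independently.
for queue_arn in queue_arns:
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )
```

Note that each queue also needs an access policy that allows the SNS topic to send messages to it.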
What works: Using the AWS SQS SDK for .NET, I'm able to receive a message batch and delete individual messages within the message visibility timeout window. I can also do nothing and effectively requeue the message, which then gets dead-lettered if it's requeued a configured number of times.
What doesn't work: I'm trying to do the same thing using a Lambda. I've created a trigger which works, meaning SQS triggers the Lambda and sends it a batch of messages. These messages seem to get deleted from the queue automatically when this happens. I have no control over deleting an individual message or requeuing it.
Throwing an exception in the Lambda seems to make all the messages in the batch remain on the queue. Is there a more elegant way to do this, and is there a way to do it for individual messages instead of the entire batch?
When you use the SQS Lambda trigger, your messages will be automatically deleted from the queue upon successful processing.
If you want to, you can poll messages from SQS instead of having the messages trigger your Lambda. To do that, just don't configure a trigger for your Lambda function and have it execute every X amount of time via a CloudWatch Event. Upon every execution, you then poll your SQS queue for new messages. I really don't see why you'd do that, though, since having the trigger and auto-deletion of messages is very handy. If you want to send failed messages to a DLQ, simply set the DLQ on your source SQS queue itself. You can then customise maxReceiveCount and, once this threshold is reached, messages will go to the configured DLQ.
From the docs:
If a message fails processing multiple times, Amazon SQS can send it to a dead letter queue. Configure a dead letter queue on your source queue to retain messages that failed processing for troubleshooting. Set the maxReceiveCount on the queue's redrive policy to at least 5 to avoid sending messages to the dead letter queue due to throttling.
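Configuring that redrive policy could look like this in boto3 (the queue URL and DLQ ARN are placeholders):

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder values for illustration.
source_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:jobs-dlq"

# After 5 failed receives, SQS moves the message to the DLQ.
sqs.set_queue_attributes(
    QueueUrl=source_queue_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```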
Set the batch size to one and throw an exception if you want to requeue it.
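A minimal handler along those lines (batch size 1 assumed; process_order is a hypothetical stand-in for your own logic):

```python
import json


def handler(event, context):
    # With batch size 1 there is exactly one record per invocation.
    for record in event["Records"]:
        body = json.loads(record["body"])
        # Raising here leaves the message on the queue; after
        # maxReceiveCount failed receives it moves to the DLQ.
        process_order(body)


def process_order(body):
    # Hypothetical business logic; replace with your own processing.
    print("processing", body)
```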
Is there a way to create new EC2 instances as the number of messages in a RabbitMQ queue increases?
Taking for granted that you know how to set up an Auto Scaling Group, you can configure your group to adjust its capacity according to demand, in response to Amazon CloudWatch metrics.
The thing is, you can store your own metrics in CloudWatch using the PutMetricData function.
So you should:
somehow send to CloudWatch the number of messages RabbitMQ is managing, maybe with a cron script (see the sketch after this list);
check that CloudWatch is receiving your data;
create a Launch Template for your scaling EC2 instances;
create an Auto Scaling Group with a trigger tied to your new CloudWatch metric.
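The cron script's metric push could look roughly like this in boto3 (the namespace, metric name, and get_rabbitmq_queue_depth helper are all illustrative assumptions):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def get_rabbitmq_queue_depth():
    # Hypothetical helper: query the RabbitMQ management API
    # (or rabbitmqctl) for the number of ready messages.
    return 42


# Publish the depth as a custom metric the Auto Scaling policy can track.
cloudwatch.put_metric_data(
    Namespace="Custom/RabbitMQ",
    MetricData=[
        {
            "MetricName": "QueueDepth",
            "Value": get_rabbitmq_queue_depth(),
            "Unit": "Count",
        }
    ],
)
```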
What is the most efficient way to receive messages from an Amazon SQS queue?
I've been using the Peddler gem to create, register, and subscribe to an Amazon SQS queue that captures Amazon Marketplace order changes. All good there; the SQS queue is receiving the messages fine. I'm a bit fuzzy on the next step and need some help before I go down a rabbit hole.
It seems like the SQS queue should just be like a webhook that I can subscribe to in order to receive notices. But I'm not seeing that option anywhere.
But then it looks like I can use the Shoryuken gem, or maybe Amazon's own AWS SDK for Ruby, to create workers that poll the queue in order to get notified of new messages.
Is the Shoryuken gem the most efficient way to pull messages from SQS? Or is there a better way?
IMO Shoryuken is currently the most efficient way for polling SQS messages in Ruby.
You can go ahead and use only the aws-sdk; that would work, with certain limitations. If you go down that path, you will end up implementing a lot of stuff around the aws-sdk that Shoryuken already does. With the SDK you can receive messages in a loop, call a Ruby class to consume them, and so on. Shoryuken is a process for polling messages that uses multithreading for performance. Besides that, a single process can receive messages from multiple queues.
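For reference, the receive/process/delete loop that Shoryuken wraps looks roughly like this (sketched in Python/boto3 for illustration; the Ruby aws-sdk exposes the same SQS operations, and the queue URL and handle function are placeholders):

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL for illustration.
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"


def handle(body):
    # Hypothetical consumer; replace with your own processing.
    print(body)


while True:
    # Long polling: wait up to 20 seconds for messages instead of busy-looping.
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in resp.get("Messages", []):
        handle(message["Body"])
        # Delete only after successful processing, or the message
        # reappears once its visibility timeout expires.
        sqs.delete_message(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],
        )
```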
It seems like the SQS queue should just be like a webhook that I can subscribe to in order to receive notices. But I'm not seeing that option anywhere.
That is not SQS; the service that works like that is AWS SNS. If Amazon Marketplace can also integrate with SNS, you can implement a pub/sub that calls webhooks.
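If Amazon Marketplace did offer an SNS integration, subscribing a webhook would be a single call (the topic ARN and endpoint URL below are placeholders; SNS first sends a confirmation request that your endpoint must acknowledge):

```python
import boto3

sns = boto3.client("sns")

# Placeholder topic ARN and endpoint for illustration.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:marketplace-orders",
    Protocol="https",
    Endpoint="https://example.com/sns/webhook",
)
```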
PS: Shoryuken author here :)