Is there a way to create new EC2 instances based on an increasing number of messages in a RabbitMQ queue?
Assuming you know how to set up an Auto Scaling group, you can configure the group to adjust its capacity according to demand, in response to Amazon CloudWatch metrics.
The key point is that you can store your own metrics in CloudWatch using the PutMetricData API.
So you should:
send the number of messages RabbitMQ is managing to CloudWatch, for example with a cron script;
check that CloudWatch is receiving your data;
create a Launch Template for your scaling EC2 instances;
create an Auto Scaling group with a scaling trigger tied to your new CloudWatch metric.
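The first step above can be sketched as a small cron script. This is only a sketch: the management URL, vhost, queue name, namespace and metric name are all assumptions, and the actual PutMetricData call (shown in comments) requires boto3 and AWS credentials.

```python
import json
from urllib.request import urlopen

# Assumed RabbitMQ management API endpoint (default vhost "/" is %2F).
MGMT_URL = "http://localhost:15672/api/queues/%2F/work-queue"

def queue_depth(api_body: str) -> int:
    """Extract the message count from a RabbitMQ management API response."""
    return json.loads(api_body)["messages"]

def metric_payload(depth: int) -> dict:
    """Build the keyword arguments for CloudWatch's PutMetricData call."""
    return {
        "Namespace": "Custom/RabbitMQ",   # namespace name is an assumption
        "MetricData": [{
            "MetricName": "QueueDepth",
            "Value": float(depth),
            "Unit": "Count",
        }],
    }

if __name__ == "__main__":
    # depth = queue_depth(urlopen(MGMT_URL).read().decode())
    # boto3.client("cloudwatch").put_metric_data(**metric_payload(depth))
    pass
```

Your scaling policy then alarms on `Custom/RabbitMQ` / `QueueDepth` crossing a threshold.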
I have successfully deployed an AWS Lambda function to receive Image Scan events from AWS ECR. The region I am using is ap-southeast-1. However, I have noticed that the Lambda function cannot receive events from AWS ECR in another region (e.g. eu-central-1).
Is there a way to make my Lambda function receive events from AWS ECR in another region without having to deploy it in multiple regions?
Thanks!
Genzer
This depends on how ECR sends events to EventBridge. I'm not certain, but most AWS services emit events within the same region only, so eu-central-1 events stay in eu-central-1. The easiest workaround would be to deploy the same function in all regions.
You can also leverage API Gateway's multi-region abilities. This blog shows a slightly different use case but may be helpful in understanding how to call a cross-region Lambda: https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/
In all cases, you need to create a rule in each region whose ECR events you want to receive, and send those events to a same-region target.
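A per-region rule like that could be created with boto3 along these lines. The rule name, target id and the event pattern here are illustrative assumptions; check the ECR EventBridge documentation for the exact detail-type your account emits.

```python
import json

def ecr_scan_rule_params(lambda_arn: str):
    """Build put_rule / put_targets parameters for one region.

    The rule name, target id and pattern are illustrative only.
    """
    rule = {
        "Name": "ecr-scan-complete",
        "EventPattern": json.dumps({"source": ["aws.ecr"]}),
        "State": "ENABLED",
    }
    targets = {
        "Rule": "ecr-scan-complete",
        "Targets": [{"Id": "scan-handler", "Arn": lambda_arn}],
    }
    return rule, targets

# For each region you care about (requires boto3 and permissions):
#   events = boto3.client("events", region_name="eu-central-1")
#   rule, targets = ecr_scan_rule_params(my_target_arn)
#   events.put_rule(**rule)
#   events.put_targets(**targets)
```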
My understanding of EventBridge events is similar to blr's, but I've gotten around this by having the events go to a queue first, which the Lambda can then be subscribed to across regions. It seems to be a bit lower overhead than deploying the Lambda function in every region.
I wonder if there is any configuration that helps control the throughput of an SQS queue. For example, limiting the queue to delivering only 30 requests/sec.
I have multiple EC2 instances in an Auto Scaling group. They all run the same Java application. In the application, I want to trigger some functionality every month, so I have a function that uses Spring Scheduling and runs monthly. But that function runs on every single EC2 instance in the Auto Scaling group, while it must run only once. How should I approach this issue? I am thinking of using services like Amazon SQS, but they would have the same problem.
To be more specific about what I have tried: in one attempt, the function puts a record with a key unique to this month into a database which is shared among all the EC2 instances. If the record for this month is already there, the put request is ignored. Now the problem transfers to the reading part: I have a function that reads the database and does the job, but that function is run by every single EC2 instance.
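The conditional put described above can be arranged so that whichever instance wins the write also runs the job, which removes the separate reading step. A sketch, assuming DynamoDB as the shared database; the table and attribute names are made up, and the actual call needs boto3:

```python
import datetime

def monthly_claim_params(table: str, today: datetime.date) -> dict:
    """DynamoDB PutItem parameters that can succeed only once per month."""
    return {
        "TableName": table,
        "Item": {"JobMonth": {"S": today.strftime("%Y-%m")}},
        # The put fails if this month's record already exists.
        "ConditionExpression": "attribute_not_exists(JobMonth)",
    }

# On each instance (requires boto3):
#   ddb = boto3.client("dynamodb")
#   try:
#       ddb.put_item(**monthly_claim_params("monthly-jobs", datetime.date.today()))
#       run_monthly_job()   # only the instance whose put succeeded gets here
#   except ddb.exceptions.ConditionalCheckFailedException:
#       pass                # another instance already claimed this month
```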
Interesting! You could put a configuration on one of the servers to trigger the monthly activity, but individual instances in an Auto Scaling group should be treated as identical, disposable systems that could be replaced during the month. So there would be no guarantee that this specific server would still be around in one month.
I would suggest you take a step back and look at the monthly event as something that is triggered external to the servers.
I'm going to assume that the cluster of servers is running a web application and there is a Load Balancer in front of the instances that distributes traffic amongst the instances. If so, "something" should send a request to the Load Balancer, and this would be forwarded to one of the instances for processing, just like any normal request.
This particular request would be to a URL used specifically to trigger the monthly processing.
This leaves the question of what is the "something" that sends this particular request. For that, there are many options. A simple one would be:
Configure Amazon CloudWatch Events to trigger a Lambda function based on a schedule
The AWS Lambda function would send the HTTP request to the Load Balancer
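That scheduled Lambda function can be tiny; it just needs to send a request to the trigger URL. A sketch, where the Load Balancer DNS name and the path are placeholders:

```python
from urllib.request import Request, urlopen

# Placeholder: your Load Balancer's DNS name and the trigger path.
TRIGGER_URL = "http://my-alb.example.com/internal/run-monthly-job"

def build_trigger_request(url: str) -> Request:
    """Build the POST request the scheduled Lambda sends to the Load Balancer."""
    return Request(url, method="POST")

def lambda_handler(event, context):
    # CloudWatch Events invokes this once a month; the Load Balancer
    # forwards the request to exactly one healthy instance.
    with urlopen(build_trigger_request(TRIGGER_URL), timeout=10) as resp:
        return {"status": resp.status}
```

You would also want to protect that URL (e.g. a security-group rule or a shared secret) so only the scheduler can hit it.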
I have an architecture which looks like as follows:-
Multiple SNS -> (AWS Lambda or SQS with Poller)??? -> DynamoDB
So, basically, multiple SNS topics feed either AWS Lambda or SQS with a poller, and that component pushes data to DynamoDB.
But this "?" component does a lot of message transformation in between. For such a case, I can use either AWS Lambda or SQS with a poller: with AWS Lambda I can do the transformation in the Lambda function, and with SQS I can do it in the poller. With AWS Lambda, I see one problem: the code would become quite large, as the transformation is quite complex (it has a lot of rules), so I am thinking of using SQS. But before finalising on SQS, I wanted to know the drawbacks of SQS that AWS Lambda removes.
Please help. Let me know if you need further information.
Your question does not contain much detail, so I shall attempt to interpret your needs.
Option 1: SQS Polling
Information is sent to an Amazon SNS topic
An SQS queue is subscribed to the SNS topic
An application running on Amazon EC2 instance(s) regularly polls the SQS queue to ask for messages
If a message is available, the data in the message is transformed and saved to an Amazon DynamoDB table
This approach is good if the transformation takes a long time to process. The number of EC2 instances can be scaled based upon the amount of work in the queue. Multiple messages can be received at the same time. It is a traditional message-based approach.
Option 2: Using Lambda
Information is sent to an Amazon SNS topic
An AWS Lambda function is subscribed to the SNS topic
The Lambda function is invoked whenever a message is sent to the SNS topic
The Lambda function transforms the data in the message and saves it to an Amazon DynamoDB table
AWS Lambda functions were limited to five minutes of execution time when this was written (the limit is now 15 minutes), so this approach will only work if the transformation process can be completed within that timeframe.
No servers are required because Lambda will automatically run multiple functions in parallel. When no work is to be performed, no Lambda functions execute and there is no compute charge.
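Option 2 could be sketched like this. The transform shown is only a stand-in for the real rule set, the table and field names are assumptions, and the DynamoDB write is left as a comment:

```python
import json

def transform(payload: dict) -> dict:
    """Stand-in for the complex, rule-based transformation."""
    return {
        "id": {"S": payload["id"]},
        "value": {"N": str(payload["value"] * 2)},  # example rule: double it
    }

def lambda_handler(event, context):
    # SNS delivers the original message inside Records[].Sns.Message.
    items = [transform(json.loads(r["Sns"]["Message"])) for r in event["Records"]]
    # for item in items:
    #     boto3.client("dynamodb").put_item(TableName="my-table", Item=item)
    return items
```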
Between the two options, using AWS Lambda is generally more efficient and scalable, but the better choice depends upon your specific workload.
We can now use SQS messages to trigger AWS Lambda Functions.
28 JUN 2018: AWS Lambda Adds Amazon Simple Queue Service to Supported Event Sources
Moreover, you are no longer required to run a message polling service or create an SQS-to-SNS mapping.
The AWS Serverless Application Model supports a new event source as follows:
Type: SQS
Properties:
  Queue: arn:aws:sqs:us-west-2:213455678901:test-queue
  BatchSize: 10
The AWS Console also supports configuring this trigger.
Further details:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
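With the SQS event source configured, the handler simply receives batches of records; the processing below is only a sketch:

```python
def lambda_handler(event, context):
    # Each invocation receives up to BatchSize records from the queue.
    bodies = [record["body"] for record in event["Records"]]
    for body in bodies:
        pass  # process each message body here
    return {"processed": len(bodies)}
```

If the handler raises, the whole batch returns to the queue and is retried, so per-message error handling is worth thinking about.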
I am new to RabbitMQ and am evaluating it for my next project. Is it possible to use AWS Auto Scaling with RabbitMQ? How would multiple instances coordinate messages across their queues? I see that RabbitMQ has clustering capabilities, but it appears not to fit an autoscaling model. I did find this post:
How to set up autoscaling RabbitMQ Cluster AWS
It fixed the scale-up issues but did not address what to do when instances scale down. The issue with scaling down is the potential for messages still sitting in the queues when an instance is removed. Clustering is OK, but I would like to leverage autoscaling whenever possible.