How can I inform my SQS polling that my spot EC2 is going to be terminated in the next two minutes? - amazon-ec2

How can I inform my SQS polling that my EC2 instance is going to be terminated in the next two minutes?
I have used SQS and an Auto Scaling group; my EC2 instances poll the SQS queue for requests. An EC2 instance takes a task request from SQS and executes it.
Problem scenario: since I am using spot EC2, AWS terminates instances to manage capacity. I want to stop new task execution on an EC2 instance once it receives a termination request from Auto Scaling or from AWS.
How can I deal with it?

Normally you would set up a CloudWatch Events (CWE) rule:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EC2 Spot Instance Interruption Warning"
  ]
}
The CWE rule would trigger a lambda function, and the function could perform a number of actions on your instance, depending on what you want to do. This is use-case and application specific, and depends how your app gets notified about such events and what it does.
It could, for example:
use SSM Run Command to execute bash/PowerShell commands on your instance to do cleanup before termination occurs.
call an HTTP endpoint on your instance which is exposed by your application. This way your application can directly get notified that it is going to be terminated soon.
copy some logs or data files to S3 before the instance gets terminated.
and more.
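For illustration, here is a minimal sketch of such a Lambda function, assuming the SSM Run Command option; the cleanup script path is hypothetical, and the instance ID is taken from the interruption event's detail:

import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # The Spot interruption warning event carries the instance ID in its detail.
    instance_id = event["detail"]["instance-id"]

    # Run a cleanup command on the instance via SSM Run Command.
    # The script path below is hypothetical; replace it with your own cleanup logic
    # (e.g. stop pulling new SQS messages, finish in-flight work, upload logs to S3).
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["/opt/myapp/stop-polling.sh"]},
    )
    return {"instance": instance_id, "action": "cleanup command sent"}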

Related

AWS Cloudwatch Subscription Filter and Dead Letter Queue for Lambda

I am using CloudWatch log subscription filters to get logs from a specific log group and send them to a Lambda function, which after processing will send the results to another service. I wonder if there's any possibility of sending events that the Lambda fails to process to a Dead Letter Queue, noting that in the above setup we have no SNS/SQS configured to trigger the Lambda.
Destinations gives you the ability to handle the Failure of function invocations along with their Success. When a function invocation fails, such as when retries are exhausted or the event age has been exceeded (hitting its TTL), Destinations routes the record to the destination resource for every failed invocation for further investigation or processing.
To configure a destination for a Lambda function, kindly refer to the AWS documentation on Lambda destinations.
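As a rough sketch (the function name and queue ARN below are placeholders), an on-failure destination for asynchronous invocations can be configured with the AWS SDK like this:

import boto3

lambda_client = boto3.client("lambda")

# Route failed asynchronous invocations of "my-log-processor" (placeholder name)
# to an SQS queue acting as a dead-letter destination (placeholder ARN).
lambda_client.put_function_event_invoke_config(
    FunctionName="my-log-processor",
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:failed-log-events"}
    },
)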

AWS SQS List Triggers from SDK

I'm looking for a method to programmatically identify the triggers associated with an SQS queue. Looking through the SQS sdk docs, it doesn't seem this is possible. I had thought instead to try from the other end, and it appears the Lambda ListEventSourceMappings function would likely do what I want, since I'm able to provide it with the queue ARN. However, this requires the ListSourceMappings permission on all lambdas (*), which isn't really ideal - though it shouldn't really hurt, just not what I want. Is there another mechanism for this that I'm missing, or another approach?
Lambda polls SQS queues. It doesn't appear that way in the console, because they hide some of the details from you, but behind the scenes there is a process running within the AWS Lambda system that is polling your SQS queue and invoking your Lambda function when a message is available.
SQS doesn't push messages to Lambda (or anywhere else). SQS just holds messages and hands them out to anything that asks for them. So from an SQS perspective, there is no knowledge of who the message consumers are.
Given the above, the only way to find what you want is to use the Lambda ListEventSourceMappings API.
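A minimal sketch of that lookup, assuming a placeholder queue ARN:

import boto3

lambda_client = boto3.client("lambda")

# Find the Lambda functions whose event source mapping points at this queue.
# The queue ARN below is a placeholder.
response = lambda_client.list_event_source_mappings(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue"
)
for mapping in response["EventSourceMappings"]:
    print(mapping["FunctionArn"], mapping["State"])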

My AWS EC2 instance turns off automatically. How can I turn it back on automatically?

I have an AWS EC2 instance, but lately it has stopped working for reasons I don't know.
Does AWS have a way to automatically turn the instance back on if it shuts down?
The same thing happened to my staging instance.
Yes, follow this tutorial, but make the CloudWatch event fire when an EC2 instance is stopped instead: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/LogEC2InstanceState.html
One general strategy for reacting to events in AWS is:
Define a CloudWatch Event that "listens" for a certain event to happen
Define a Lambda function that gets triggered when the event happens
Write some code in the Lambda function that does something
In your case this would be
Your CloudWatch event listens for the "EC2 instance stopped" event
Your Lambda function gets triggered, and CloudWatch passes in your stopped EC2 instance's details as a parameter
You write some Lambda code that starts the EC2 instance
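A minimal sketch of such a Lambda handler, assuming the rule matches the "EC2 Instance State-change Notification" event with state "stopped":

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # The state-change event carries the instance ID in its detail.
    instance_id = event["detail"]["instance-id"]

    # Start the instance that was just reported as stopped.
    ec2.start_instances(InstanceIds=[instance_id])
    return {"started": instance_id}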

Best way to schedule one-time events in serverless environments

Example use case
Send the user a notification 2 hours after signup.
Options considered
setTimeout(() => { /* send notification */ }, 2*60*60*1000); is not an option in serverless environments since the function terminates after execution (so it has to be stateless).
CloudWatch events can schedule lambda invocations using cron expressions - but this was designed for repetitive invocations (there's a limit of 100 rules/region).
I have not seen scheduling options in AWS SNS/SQS or GCP Pub/Sub. Are there alternatives with scheduling?
I want to avoid (if possible) setting up a dedicated message broker (overkill) or stateful/non-serverless instance - is there a serverless way to do this?
I can queue the events in a database and invoke a lambda function every minute to poll the database for events to execute in that minute... is there a more elegant solution?
Use AWS Step Functions; they are like serverless functions that don't have the 15-minute limit that AWS Lambda does. You can design a workflow in AWS Step Functions that integrates with API Gateway, Lambda and SNS to send email and text notifications as follows:
Create a REST API via API gateway that will invoke a Lambda function passing in for example, the destination address (email, phone #) of the SNS notification, when it should be sent, notification method (e.g. email, text, etc.).
The Lambda function on invocation will invoke the Step function passing in the data (Lambda is needed because API Gateway currently can't invoke Step functions directly).
The Step Function is basically a workflow: you can define states for waiting (like waiting for the specified time to send the notification, e.g. 30 seconds), and states for invoking other Lambda functions that can use SNS to send out email and/or text notifications.
A rudimentary example is provided by AWS w/ their Task Timer example.
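A rough sketch of that idea, assuming placeholder account IDs, a placeholder IAM role, and a placeholder notification Lambda; the state machine simply waits until a caller-supplied timestamp and then invokes the Lambda that publishes the SNS notification:

import json
import boto3

sfn = boto3.client("stepfunctions")

# State machine definition: wait until the timestamp supplied in the input,
# then invoke a (placeholder) Lambda that sends the notification via SNS.
definition = {
    "StartAt": "WaitUntilSendTime",
    "States": {
        "WaitUntilSendTime": {
            "Type": "Wait",
            "TimestampPath": "$.sendAt",
            "Next": "SendNotification",
        },
        "SendNotification": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-notification",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="one-time-notification",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-role",  # placeholder
)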
On GCP, native support for this is coming, but not very soon. So today, the solution is to poll a database.
You can do that with Datastore/Firestore with the execution datetime indexed (to avoid reading all the documents each minute). But be careful of traffic spikes; you could create a hotspot.
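A minimal sketch of that polling query, assuming a hypothetical "scheduled_events" collection with an indexed "execute_at" field:

from datetime import datetime, timezone
from google.cloud import firestore

db = firestore.Client()

# Fetch events whose scheduled execution time has passed (run e.g. once per minute).
# Collection and field names are hypothetical.
now = datetime.now(timezone.utc)
due = db.collection("scheduled_events").where("execute_at", "<=", now).stream()

for doc in due:
    event = doc.to_dict()
    # ... execute the event, then delete or mark it as done ...
    doc.reference.delete()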
You can use Cloud Scheduler on Google Cloud Platform. As is stated in the official documentation:
Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. You can automate everything, including retries in case of failure to reduce manual toil and intervention. Cloud Scheduler even acts as a single pane of glass, allowing you to manage all your automation tasks from one place.
Here you can check a quickstart for using it with Pub/Sub and Cloud Functions.

Schedule a task in EC2 Auto Scaling Group

I have multiple EC2s on an autoscaling group. They all run the same java application. In the application, I want to trigger a functionality every month. So, I have a function that uses Spring Schedule and runs every month. But, that function is run on every single EC2 instance in the autoscaling group while it must run only once. How should I approach this issue? I am thinking of using services like Amazon SQS but they would have the same problem.
To be more specific about what I have tried: in one attempt, the function puts a record with a key unique to this month into a database which is shared among all the EC2 instances. If the record for this month is already there, the put request is ignored. Now the problem transfers to the reading part: I have a function that reads the database and does the job, but that function is run by every single EC2 instance.
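For reference, the conditional write described above could be sketched like this, assuming DynamoDB (the question only says "a database") and hypothetical table and attribute names:

from datetime import datetime, timezone
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def try_claim_month(table="monthly-jobs"):
    # Key unique to the current month, e.g. "2024-05".
    month_key = datetime.now(timezone.utc).strftime("%Y-%m")
    try:
        dynamodb.put_item(
            TableName=table,
            Item={"month": {"S": month_key}},
            # The write only succeeds for the first instance that attempts it this month.
            ConditionExpression="attribute_not_exists(#m)",
            ExpressionAttributeNames={"#m": "month"},
        )
        return True  # this instance's write went through
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another instance already wrote this month's record
        raise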
Interesting! You could put a configuration on one of the servers to trigger a monthly activity, but individual instances in an Auto Scaling group should be treated as identical, fragile systems that could be replaced during a month. So, there would be no guarantee that this specific server would be around in one month.
I would suggest you take a step back and look at the monthly event as something that is triggered external to the servers.
I'm going to assume that the cluster of servers is running a web application and there is a Load Balancer in front of the instances that distributes traffic amongst the instances. If so, "something" should send a request to the Load Balancer, and this would be forwarded to one of the instances for processing, just like any normal request.
This particular request would be to a URL used specifically to trigger the monthly processing.
This leaves the question of what is the "something" that sends this particular request. For that, there are many options. A simple one would be:
Configure Amazon CloudWatch Events to trigger a Lambda function based on a schedule
The AWS Lambda function would send the HTTP request to the Load Balancer
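A minimal sketch of that Lambda function, assuming a hypothetical endpoint on the load balancer reserved for the monthly job:

import urllib.request

# Hypothetical URL on the load balancer that triggers the monthly processing.
TRIGGER_URL = "https://my-load-balancer.example.com/internal/run-monthly-job"

def handler(event, context):
    # A scheduled CloudWatch Events rule invokes this once a month; the request
    # is routed by the load balancer to exactly one healthy instance.
    req = urllib.request.Request(TRIGGER_URL, method="POST")
    with urllib.request.urlopen(req, timeout=30) as resp:
        return {"status": resp.status}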
