It is possible to schedule an instance stop, start, or image creation, but I can't figure out how to schedule an instance type change.
The idea is to switch to a cheaper instance type during off-peak hours (night time) and move back to the regular one in the morning. Load balancing won't work here.
The flow would look like:
1) Stop Instance
2) Create AMI
3) Change instance type
4) Start Instance
Have you tried ModifyInstanceAttribute? It can be applied to stopped instances and lets you change the instance type. You can call it from inside a Lambda using the Node.js SDK (https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#modifyInstanceAttribute-property).
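For example, a scheduled Lambda along these lines might do it (a minimal sketch using the SDK v2 promise API; the instance ID and target type are placeholders, and error handling and the optional AMI step are left out):

```js
// Minimal sketch: stop -> change type -> start, from a scheduled Lambda.
// Assumes AWS SDK for JavaScript v2; INSTANCE_ID and NIGHT_TYPE are placeholders.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

const INSTANCE_ID = 'i-0123456789abcdef0'; // your instance
const NIGHT_TYPE = 't3.small';             // cheaper off-peak type (example)

exports.handler = async () => {
  // 1) Stop the instance and wait until it is fully stopped
  await ec2.stopInstances({ InstanceIds: [INSTANCE_ID] }).promise();
  await ec2.waitFor('instanceStopped', { InstanceIds: [INSTANCE_ID] }).promise();

  // 2) Change the instance type (only possible while the instance is stopped)
  await ec2.modifyInstanceAttribute({
    InstanceId: INSTANCE_ID,
    InstanceType: { Value: NIGHT_TYPE },
  }).promise();

  // 3) Start the instance again
  await ec2.startInstances({ InstanceIds: [INSTANCE_ID] }).promise();
};
```

A second scheduled Lambda (or a branch on the current hour) would switch back to the regular type in the morning.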
All,
I'm really stuck and have tried almost everything. Can someone please help?
I provision 2 instances when creating my Auto Scaling group. I trigger a Lambda (which manipulates the tags) to change each instance's name to a unique one.
Desired State
I want the first invocation of the Lambda to give the first instance the name "web-1".
Then the second invocation would run and assign the name "web-2".
Current State
I start by searching running instances to see whether "web-1" exists or not.
So in this case my Lambda executes twice and gives both instances the same name (web-1, web-1).
How do I get around this? I know the problem is due to Lambda listening to CloudWatch events: in my case the ASG launch creates 2 events at the same time, leading to the problem I have.
Thanks.
You are running into a classic multi-threading issue. Both Lambda invocations execute simultaneously, see the same "unused" web-1 name, and tag both instances with it.
What you need is an atomic operation that gives each Lambda execution "permission" to proceed. You can try using a helper DynamoDB table to serialize the tag attempts.
1) Have your Lambda function decide which tag to set (web-1, web-2, etc.).
2) Check a DynamoDB table to see if that tag has been set in the last 30 seconds. If so, someone else got to it first, so go back to step 1.
3) Try to write your "ownership" of the sought-after tag to DynamoDB along with the current timestamp, using attribute_not_exists or another DynamoDB condition to ensure only one simultaneous write succeeds.
4) If the write fails, go back to step 1.
5) If the write succeeds, you're free to set your tag.
The reason for the timestamps is to allow "web-1" to be terminated and a newly launched EC2 instance to be labelled "web-1" again.
The above logic is not proven to work, but it should hopefully give enough guidance to develop a working solution.
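A rough sketch of that claim-then-tag flow, assuming a hypothetical DynamoDB table named instance-names with partition key TagName (the 30-second expiry check is omitted for brevity):

```js
// Rough sketch only: claim a name via a conditional DynamoDB write, then tag.
// Table 'instance-names' (hypothetical) has partition key 'TagName'.
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();
const ec2 = new AWS.EC2();

async function claimTag(instanceId, candidates) {
  for (const name of candidates) {          // e.g. ['web-1', 'web-2']
    try {
      // Conditional put: succeeds for exactly one concurrent writer
      await ddb.put({
        TableName: 'instance-names',
        Item: { TagName: name, InstanceId: instanceId, ClaimedAt: Date.now() },
        ConditionExpression: 'attribute_not_exists(TagName)',
      }).promise();

      // We won the race for this name -- tag the instance with it
      await ec2.createTags({
        Resources: [instanceId],
        Tags: [{ Key: 'Name', Value: name }],
      }).promise();
      return name;
    } catch (err) {
      if (err.code !== 'ConditionalCheckFailedException') throw err;
      // Someone else claimed this name first; try the next candidate
    }
  }
  throw new Error('No free name left');
}
```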
I wanted to make sure there is no better way to do what I am doing without having to register a new AMI each time I make a change to my instance.
My current workflow is the following:
Create an AMI with my default settings and all my cronjobs (let's call it A1)
Create a cronjob on another instance which places a spot request for an A1 instance every X hours
A1 runs its jobs and automatically shuts itself down
Terminate the spot request for the A1 instance
Edit the files in A1 if there are changes that need to be made
The problem I have is that I often need to make changes to my A1 instance, and then I have to edit the other instance so that the file which places the spot request reflects the new AMI. As this is done with tons of instances, it gets a little messy.
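For context, the spot-request step boils down to a call like the following (a sketch with the Node.js SDK v2; the AMI ID, price, and type are placeholders), and it's the hard-coded ImageId that has to be edited after every re-registration:

```js
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();

const params = {
  InstanceCount: 1,
  Type: 'one-time',
  SpotPrice: '0.05',                  // example maximum price
  LaunchSpecification: {
    ImageId: 'ami-0abcdef1234567890', // <-- must be updated each time A1 changes
    InstanceType: 'm5.large',         // example type
  },
};

ec2.requestSpotInstances(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.SpotInstanceRequests[0].SpotInstanceRequestId);
});
```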
Is it possible to change only the "content" of an instance without having to register a new AMI ID? That way the spot requests could keep referencing the same ID.
Any tip much appreciated. Thanks!
I use one spot instance and would like to be emailed when the price for my instance size and region rises above a threshold. I can then take appropriate action and shut down and move the instance to another region if needed. Any ideas on how to be alerted to the prices?
There are two ways to go about this that I can think of:
1) Since you only have one instance, you could set a CloudWatch alarm for your instance's region that notifies you when the spot price rises above what you're willing to pay per hour.
If you create an alarm, point it at the EstimatedCharges metric for the AmazonEC2 service, and choose a period of one hour, you are basically telling CloudWatch to email you whenever the hourly charge for your instance in its region exceeds the threshold you're willing to pay.
Once you get the email, you can then shut the instance down and start one up in another region, and leave it running with its own alarm.
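That alarm could look something like this (a sketch with the SDK v2; the alarm name, threshold, and SNS topic ARN are placeholders, and note that billing metrics require billing alerts to be enabled and live in us-east-1):

```js
const AWS = require('aws-sdk');
// Billing metrics are only published in us-east-1
const cw = new AWS.CloudWatch({ region: 'us-east-1' });

cw.putMetricAlarm({
  AlarmName: 'ec2-charges-too-high',          // hypothetical name
  Namespace: 'AWS/Billing',
  MetricName: 'EstimatedCharges',
  Dimensions: [
    { Name: 'ServiceName', Value: 'AmazonEC2' },
    { Name: 'Currency', Value: 'USD' },
  ],
  Statistic: 'Maximum',
  Period: 3600,                               // one hour, as described above
  EvaluationPeriods: 1,
  Threshold: 10.0,                            // your own dollar threshold
  ComparisonOperator: 'GreaterThanThreshold',
  // SNS topic with an email subscription, so the alarm emails you
  AlarmActions: ['arn:aws:sns:us-east-1:123456789012:spot-alerts'],
}, (err) => {
  if (err) console.error(err);
});
```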
2) You could automate the whole process with a client program that polls for changes in the spot price for your instance size in your desired regions.
This has the advantage that you could go one step further and use the same program to trigger instance shutdowns when the price rises and start another instance in a different region.
Amazon recently released a sample program to detect changes in spot prices by region and instance type: How to Track Spot Instance Activity with the Spot-Notifications Sample Application.
Simply combine that with the EC2 command-line tools to stop and start instances, and you won't need to do it manually yourself.
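If you'd rather roll your own, a polling loop along these lines is one way to do it (a sketch only; the region, topic ARN, instance type, and threshold are placeholders):

```js
// Rough polling sketch (SDK v2): check the latest spot price, alert via SNS.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });
const sns = new AWS.SNS({ region: 'us-east-1' });

const THRESHOLD = 0.05; // max $/hour you are willing to pay

async function checkSpotPrice() {
  const res = await ec2.describeSpotPriceHistory({
    InstanceTypes: ['m5.large'],         // your instance size
    ProductDescriptions: ['Linux/UNIX'],
    StartTime: new Date(),               // just the most recent price
    MaxResults: 1,
  }).promise();

  const price = parseFloat(res.SpotPriceHistory[0].SpotPrice);
  if (price > THRESHOLD) {
    await sns.publish({
      TopicArn: 'arn:aws:sns:us-east-1:123456789012:spot-alerts', // emails you
      Subject: 'Spot price above threshold',
      Message: `Current spot price is $${price}/hr (threshold $${THRESHOLD}/hr).`,
    }).promise();
  }
}

setInterval(checkSpotPrice, 5 * 60 * 1000); // poll every 5 minutes
```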
If you have one instance and Auto Scaling needs to create one more, then you have two instances. But when Auto Scaling wants to remove one because it's no longer needed, either the new or the old one may be removed.
So the instance that had the Elastic IP may now be the one that's removed...
How can I make sure an Elastic IP is always attached to one of the instances in an Auto Scaling group?
Thank you
Hmm... You could run a small script that checks whether the IP is available and attaches it to one of your instances. You could write it so that when an instance launches, it automatically attaches the Elastic IP to itself if that IP is available.
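A sketch of that self-attach script (SDK v2), e.g. run from user data at boot; the region and allocation ID are placeholders, and note that two instances booting at the same moment could still race:

```js
// Sketch: on boot, attach the Elastic IP to this instance if it's free.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });
const meta = new AWS.MetadataService();

const ALLOCATION_ID = 'eipalloc-0123456789abcdef0'; // your Elastic IP

meta.request('/latest/meta-data/instance-id', async (err, instanceId) => {
  if (err) throw err;

  // Check whether the Elastic IP is currently unassociated
  const res = await ec2.describeAddresses({ AllocationIds: [ALLOCATION_ID] }).promise();
  if (res.Addresses[0].AssociationId) {
    console.log('Elastic IP already attached elsewhere; nothing to do.');
    return;
  }

  // It's free -- attach it to this instance
  await ec2.associateAddress({
    AllocationId: ALLOCATION_ID,
    InstanceId: instanceId,
  }).promise();
  console.log(`Attached ${ALLOCATION_ID} to ${instanceId}`);
});
```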
You could create two scaling groups as described here.
I have two EC2 instances. I want one instance, when it finishes a job, to signal the other to do other work.
So, how do I make them communicate? I don't want to use cURL, as it seems expensive. I think AWS should have a simple solution, but I still can't find relevant help in the documentation.
:(
Also, how do I send data between two instances quickly without going through SSH? I know it can be done over SSH, but that seems slow. Once again, is there any tool EC2 provides to do that?
Actually, I need two methods:
1) Instance A tells Instance B to grab the data from Instance A.
This was answered by Adrian: I can use SQS. I will try that.
2) Once Instance B gets the signal, the (EBS) data on Instance A needs to be transferred to Instance B. The amount of data can be big even if I zip it; it is around 50 MB. And I need Instance B to get the data fast so that it has enough time to process it before the next interval comes in.
So, I am thinking of either these methods:
a) Instance A dumps the data from the DB and uploads it to S3, then signals Instance B. Instance B gets the data from S3.
b) Instance A dumps the data from the DB, then signals Instance B. Instance B establishes an SSH (or other) connection to Instance A and grabs the data.
The data may need to be stored permanently, but that is not a concern at the moment; it is mainly for Instance B to process.
This is a simple scenario. I'm also wondering what the proper approach would be if I scale it out to multiple instances. :)
Thanks.
Amazon has a special service for this -- it's called SQS, and it allows instances to send messages to each other through special queues. There are SDKs for SQS in various languages, like Java and PHP. This should serve your signaling needs.
For actually sending the bulky data over, it's best to use S3 (and send the object key in the SQS message). You're right that you're introducing latency by adding an extra middleman, but you'll find that S3 is very fast from EC2 instances (if you keep them in the same region, that is), and, more important than performance, S3 is very reliable. If you try to manage the transfer yourself through SSH, you'll have to work out a lot of error checking and retry logic that S3 handles for you. You can use S3FS to easily write and read to/from S3 from EC2.
Edited to address your updated question.
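A sketch of that S3-plus-SQS pattern (SDK v2; the bucket name, queue URL, and key layout are placeholders): Instance A uploads the dump and sends the object key, Instance B long-polls and downloads.

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const sqs = new AWS.SQS();

const BUCKET = 'my-job-bucket';
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/job-queue';

// Instance A: upload the ~50 MB dump, then signal B with the object key
async function produce(dumpBuffer) {
  const key = `dumps/${Date.now()}.zip`;
  await s3.upload({ Bucket: BUCKET, Key: key, Body: dumpBuffer }).promise();
  await sqs.sendMessage({ QueueUrl: QUEUE_URL, MessageBody: key }).promise();
}

// Instance B: long-poll the queue, fetch the dump, delete the message
async function consume() {
  const res = await sqs.receiveMessage({
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 20, // long polling avoids busy-waiting
  }).promise();
  if (!res.Messages) return; // nothing to do yet

  const msg = res.Messages[0];
  const obj = await s3.getObject({ Bucket: BUCKET, Key: msg.Body }).promise();
  // ... process obj.Body (a Buffer) ...
  await sqs.deleteMessage({
    QueueUrl: QUEUE_URL,
    ReceiptHandle: msg.ReceiptHandle,
  }).promise();
}
```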
You may want to look at SNS... which is kind of like push SQS.
How fast do you need this communication to be? SSH is pretty darn fast. The only thing that I can think of that might be faster is raw sockets (from within whatever program is running the jobs).
You could use a distributed workflow managing service.
If Instance B has already completed its task, it can go on to pick another one. Usually, you would want Instance B to signal that it has "picked up" a task and is working on it; other instances should then try to pick up other tasks from your list. You need a central service which knows which tasks have already been picked up and which ones are left for grabs.
When Instance B completes a task successfully, it should signal the central service that it is free for a new task, and pick one up if there is anything left.
If it fails to complete the task, the central service should be able to detect this (via heartbeats and timeouts you define) and put the task back on the list so that some other instance can pick it up.
Amazon SWF is the central service which will provide you with all of this.
For the data required by each instance, put it in a central store like S3, and structure the S3 paths so that each task knows where to download its data from, without having to sync up.
e.g. data for task 1 could be placed in something like s3://my-bucket/task1
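A tiny sketch of that convention (the bucket name and key layout here are just examples):

```js
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function fetchTaskInput(taskId) {
  // By convention, task N reads its input from s3://my-bucket/task<N>/input.zip
  const obj = await s3.getObject({
    Bucket: 'my-bucket',
    Key: `task${taskId}/input.zip`,
  }).promise();
  return obj.Body; // Buffer with the task's input data
}
```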