Using reserved instances in an Elastic Beanstalk Load Balancer - amazon-ec2

I have been running an Elastic Beanstalk load-balanced application for a year now. I'm looking for ways to cut costs and have discovered I could potentially use Reserved EC2 Instances instead of the On-Demand instances we are currently using. Currently, my load balancer uses two instances.
I want to make the switch but am unsure about how the process is actually done. I want everything to be crystal clear before doing anything.
From my understanding, if I reserve two instances of the same type as used in my app (t2.large with Linux) for the same Availability Zones (one in eu-west-1b, another in eu-west-1c), I could use these instances for the load balancer. Will the same-type instances I currently have deployed immediately fall under the rates of a reserved instance? Will I have to rebuild my environment and build two new instances that match the reserved ones?

A Reserved Instance is a method of pre-paying for Amazon EC2 capacity.
If you were to buy two Reserved Instances (in your case, 2 x t2.large Linux), then for every hour of the year while the Reserved Instances are valid you will be entitled to run the matching instance types (2 x t2.large Linux) at no hourly charge.
There is no need to identify which instance is a Reserved Instance. Rather, the billing system will pick a matching instance that is running each hour and will not bill any hourly charges.
Therefore, if these are the only matching instances you are running, then they will (by default) be identified as Reserved Instances and will not receive hourly charges. If you run other instances, however, there is no way to control which instance(s) receive the pricing benefit.
It is possible to purchase a Reserved Instance with, or without, identifying the Availability Zone. If an AZ is selected, then the pricing benefit of the Reserved Instance only matches an instance running in that AZ, and there is also a capacity reservation to give you priority when running instances that match the Reserved Instance. If no AZ is selected, then the pricing benefit applies across any instances running in that region, but there is no capacity reservation.
Bottom line: Yes, it will apply immediately (for the number of instances for which you have purchased Reserved Instances). There is no need to start/stop/rebuild anything.
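As a quick sanity check from the API side, you can compare your active Reserved Instances against the matching instances you have running. Here is a rough boto3 sketch; the region and instance type are just the examples from the question:
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

# Active reservations currently providing a billing benefit
active_ris = ec2.describe_reserved_instances(
    Filters=[{'Name': 'state', 'Values': ['active']}]
)['ReservedInstances']

# Running instances of the matching type
running = ec2.describe_instances(
    Filters=[{'Name': 'instance-type', 'Values': ['t2.large']},
             {'Name': 'instance-state-name', 'Values': ['running']}]
)

reserved_count = sum(ri['InstanceCount'] for ri in active_ris)
running_count = sum(len(r['Instances']) for r in running['Reservations'])
print('Reserved: %d, running: %d' % (reserved_count, running_count))
If the running count is less than or equal to the reserved count, every matching instance hour is being billed at the reserved rate.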

For anyone looking for a bit more certainty than John's (correct) answer, here is the relevant passage from the official AWS docs on the subject:
In this scenario, you have a running On-Demand Instance (T2) in your account, for which you're currently paying On-Demand rates. You purchase a Reserved Instance that matches the attributes of your running instance, and the billing benefit is immediately applied. Next, you purchase a Reserved Instance for a C4 instance. You do not have any running instances in your account that match the attributes of this Reserved Instance. In the final step, you launch an instance that matches the attributes of the C4 Reserved Instance, and the billing benefit is immediately applied.
From here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html

Related

Deleting a stack with a Lambda in a VPC takes a long time

I have created a stack containing a Lambda function inside a VPC using CloudFormation. When I try to delete the entire stack, it takes 40-45 minutes.
My IAM role has the following permissions:
Action:
- ec2:DescribeInstances
- ec2:CreateNetworkInterface
- ec2:AttachNetworkInterface
- ec2:DescribeNetworkInterfaces
- ec2:DeleteNetworkInterface
- ec2:DetachNetworkInterface
- ec2:ModifyNetworkInterfaceAttribute
- ec2:ResetNetworkInterfaceAttribute
- autoscaling:CompleteLifecycleAction
- iam:CreateRole
- iam:CreatePolicy
- iam:AttachRolePolicy
- iam:PassRole
- lambda:GetFunction
- lambda:ListFunctions
- lambda:CreateFunction
- lambda:DeleteFunction
- lambda:InvokeFunction
- lambda:GetFunctionConfiguration
- lambda:UpdateFunctionConfiguration
- lambda:UpdateFunctionCode
- lambda:CreateAlias
- lambda:UpdateAlias
- lambda:GetAlias
- lambda:ListAliases
- lambda:ListVersionsByFunction
- logs:FilterLogEvents
- cloudwatch:GetMetricStatistics
How can I improve the deletion time of the stack?
When a Lambda function executes within your VPC, an Elastic Network Interface (ENI) is created in order to give it network access. You can think of an ENI as a virtual NIC. It has a MAC address and at least one private IP address, and is "plugged into" any resource that connects to the VPC network and has an IP address inside the VPC (EC2 instances, RDS instances, ELB, ALB, NLB, EFS, etc.).
While it does not appear to be explicitly documented, these interfaces as used by Lambda appear to be mapped 1:1 to container instances, each of which hosts one or more containers, depending on the size of each container's memory allocation. The algorithm Lambda uses for provisioning these machines is not documented, but there is a documented formula for approximating the number that Lambda will create:
You can use the following formula to approximately determine the ENI requirements.
Projected peak concurrent executions * (Memory in GB / 3GB)
https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
This formula suggests that you will see more ENIs if you have either high concurrency or large memory footprints, or fewer ENIs if neither of those conditions holds. (The reason for the 3 GB boundary seems to be the smallest instance Lambda appears to use in the background, the m3.medium general-purpose EC2 instance. You can't see these among your EC2 instances, and you are not billed for them.)
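To make the formula concrete, here is a tiny sketch; the concurrency and memory figures are made-up examples:
def projected_enis(peak_concurrency, memory_gb):
    # Projected peak concurrent executions * (Memory in GB / 3 GB)
    return peak_concurrency * (memory_gb / 3.0)

# e.g. 100 concurrent executions of a 1.5 GB function -> roughly 50 ENIs
print(projected_enis(100, 1.5))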
Lambda doesn't shut down containers or their host instances immediately after function execution because it might need them for reuse on subsequent invocations, and since containers (and their host instances) are not destroyed right away, neither are their associated ENIs. Tearing them down immediately would be inefficient. In any event, the delay is documented:
There is a delay between the time your Lambda function executes and ENI deletion.
http://docs.aws.amazon.com/lambda/latest/dg/vpc.html
This makes sense, when we consider that the Lambda infrastructure's priorities should be focused on making resources available as needed and keeping them available for quick access performance reasons -- so tearing things down again is a secondary consideration that the service attends to in the background.
In short, this delay is normal and expected.
Presumably, CloudFormation has used tags to identify these interfaces, since it isn't readily apparent how to otherwise distinguish among them.
ENIs are visible in the EC2 console's left-hand navigation pane under Network Interfaces, so it is possible to delete them yourself and hasten the process. Note, however, that this action (assuming the system allows it) needs to be undertaken with due caution: if you delete an ENI that is attached to a container instance that Lambda subsequently tries to use, Lambda will not know the interface is missing, and the function will time out or throw errors at least until Lambda decides to destroy the attached container instance.
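If you want to see the interfaces Lambda has created, here is a rough boto3 sketch. The description prefix used in the filter is based on observation rather than formal documentation, so treat it as an assumption:
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')  # region is an example

# Lambda-managed interfaces can usually be recognised by their description,
# which (by observation) begins with "AWS Lambda VPC ENI"
resp = ec2.describe_network_interfaces(
    Filters=[{'Name': 'description', 'Values': ['AWS Lambda VPC ENI*']}]
)
for eni in resp['NetworkInterfaces']:
    print(eni['NetworkInterfaceId'], eni['Status'], eni['Description'])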

Does purchasing a reserved instance mean there will always be an instance for me?

I am a little bit confused with the way EC2 pricing works with reserved instances. It is my understanding that EC2 reserved instances are just a way to price instances.
For my application I need to randomly create a new instance and terminate the current one, but that has to happen with no major delays, as sometimes happens with spot instances where you have to bid the right price or wait for the price to drop to your level.
Would purchasing a reserved instance mean there will always be an instance available for me?
Any clarification on this will be appreciated.
Thanks
Unless there is some other system issue, On-Demand instances always launch. A reservation is simply a billing feature: it bills a matching On-Demand instance at the reservation price.
Reservations have two parts to them: a cost saving and a capacity "guarantee." I'm putting that in quotes because there's no way to completely guarantee capacity.
So, for instance, if you have a reservation for a m3.xlarge running Linux in us-east-1a, and there's a run on that instance type in that AZ, you'll be given priority over someone requesting that instance without a reservation.
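If the capacity side matters to you, the reservation has to name an Availability Zone. Here is a rough boto3 sketch for locating such a zonal offering; the instance type, platform and AZ are placeholders, and the purchase call is left commented out because it spends real money:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

offers = ec2.describe_reserved_instances_offerings(
    InstanceType='m3.xlarge',
    ProductDescription='Linux/UNIX',
    AvailabilityZone='us-east-1a',  # naming an AZ is what adds the capacity reservation
)
for o in offers['ReservedInstancesOfferings']:
    print(o['ReservedInstancesOfferingId'], o['Duration'], o['FixedPrice'])

# ec2.purchase_reserved_instances_offering(
#     ReservedInstancesOfferingId=offers['ReservedInstancesOfferings'][0]['ReservedInstancesOfferingId'],
#     InstanceCount=1,
# )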

Querying instance details long after it is terminated

In Amazon EC2, the Instances page shows details of a machine such as its IP, size, key pair, security group, how long it has run, etc.
Once the instance is terminated, the line item stays visible for about an hour. Within this period, we can still see the details of the machine as it was while running, but once the line item gets removed, there is no way to know them.
Say some instances are manually launched, used for some time and then terminated. An hour after that event there is no way to find out what happened.
There is a detailed-billing feature, but it only provides the instance IDs and sizes. I am interested in the key pair, IP, OS, security group and the name of the machine if any. Is there any way to find them out?
Edit
I understand that I can have a cron job periodically list all instances (and their details) and store them in a database. The thing is, to host that cron process I would need a machine running 24x7. What I need is some sort of hook, a callback, an event.
Even if one is not readily available, can such a solution be built?
Once the instance has been terminated, like you mentioned, most of the information will be available through the API before it completely disappears after an hour or so. (The IP address and DNS name will not be available, since every time you stop or terminate an instance the IP address is relinquished.) After the instance completely disappears, it is gone for good.
The workaround is to query the instances API every so often and save the state and instance information. You can save it in memory, a database, or just text files, depending on what you are trying to do or what application you are building.
Here's an example of saving the instance information into a Python dictionary in memory using the boto Python interface to the API:
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')  # region name is only an example
instance_dict = {}

reservations = conn.get_all_instances()
for res in reservations:
    for instance in res.instances:  # a reservation can hold several instances
        if instance.id == 'i-xxxxxx':
            instance_dict[instance.id] = instance
The dictionary instance_dict will always have the IP address, DNS and other instance info for the duration of your program as long as you don't overwrite it. To terminate the instance you can run something like:
instance_dict['i-xxxxxx'].terminate()
but later you can still use:
instance_dict['i-xxxxxx'].ip_address
and:
instance_dict['i-xxxxxx'].dns_name
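If you specifically want the attributes asked about in the question (key pair, IP, security groups, name tag), the cached boto instance object exposes them directly. A rough sketch, using boto 2 attribute names that are worth double-checking against your boto version:
cached = instance_dict['i-xxxxxx']
snapshot = {
    'key_pair': cached.key_name,
    'public_ip': cached.ip_address,
    'private_ip': cached.private_ip_address,
    'security_groups': [g.name for g in cached.groups],
    'name': cached.tags.get('Name'),  # only present if the instance was tagged
    'image_id': cached.image_id,      # look up the AMI to infer the OS
}
print(snapshot)
You could dump such a dictionary to a text file or database on each polling run to keep a durable record after the instance disappears from the API.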
Hope this helps.

Amazon EC2 Spot Alert

I use 1 spot instance and would like to be emailed when prices for my instance size and region are above a threshold. I can then take appropriate action and shut down and move instance to another region if needed. Any ideas on how to be alerted to the prices?
There are two ways to go about this that I can think of:
1) Since you only have one instance, you could set a CloudWatch alarm for your instance in a region that will notify you when the spot price rises above what you're willing to pay hourly.
If you create an Alarm, and tell it to use the EstimatedCharges metric for the AmazonEC2 service, and choose a period of an hour, then you are basically telling CloudWatch to send you an email whenever the hourly spot price for your instance in the region it's running in is above your threshold for wanting to pay.
Once you get the email, you can then shut the instance down and start one up in another region, and leave it running with its own alarm.
2) You could automate the whole process with a client program that polls for changes in the spot price for your instance size in your desired regions.
This has the advantage that you could go one step further and use the same program to trigger instance shutdowns when the price rises and start another instance in a different region.
Amazon recently released a sample program to detect changes in spot prices by region and instance type: How to Track Spot Instance Activity with the Spot-Notifications Sample Application.
Simply combine that with the EC2 command-line tools to stop and start instances, and you don't need to do it manually.
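As a rough illustration of approach 2, here is a minimal boto3 poller. The instance type, region and threshold are placeholders, and this is a sketch rather than Amazon's sample application:
import datetime
import boto3

THRESHOLD = 0.05  # USD per hour you are willing to pay

ec2 = boto3.client('ec2', region_name='us-east-1')
history = ec2.describe_spot_price_history(
    InstanceTypes=['m3.medium'],
    ProductDescriptions=['Linux/UNIX'],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
)
latest = max(history['SpotPriceHistory'], key=lambda p: p['Timestamp'])
if float(latest['SpotPrice']) > THRESHOLD:
    print('Spot price %s exceeds threshold, consider moving regions' % latest['SpotPrice'])
Run it on a schedule (cron, or a small always-on instance) and send yourself an email or SNS notification instead of printing.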

Amazon EC2 reserved and retired instances

I can see from billing that we purchased 4 reserved EC2 instances in 2 batches of 2 earlier this year.
We are currently using 2 EC2 instances.
In the list of purchased reserved instances, I can see 2 listed as active, and 2 listed as retired. Can you tell me what "retired" means and if they are still usable?
Thanks
"Retired" means that a reserved instance purchase is no longer in effect.
Usually this would be because the term expired (1 year, 3 years, etc). However, according to this thread, it looks like it could also mean that there was a problem processing payment.
Either way, retired instances are no longer usable.
To add, since I think this point is important enough to be its own answer: Amazon WILL NOT notify you by email or otherwise that your reserved instances are about to expire (at least they didn't notify me).
So if you suddenly get a huge bill, chances are your term has run out. At that point Amazon marks your reserved instances as retired and bills the instances that were previously covered by the reservation at the normal On-Demand hourly rate.
So it is important to monitor exactly when your term expires, to avoid being surprised when your reserved instances retire.
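One low-effort way to monitor this is to list your reservations with their end dates from the API. A minimal boto3 sketch:
import boto3

ec2 = boto3.client('ec2')

# Print every reservation with its state and end date so expirations don't surprise you
for ri in ec2.describe_reserved_instances()['ReservedInstances']:
    print(ri['ReservedInstancesId'], ri['InstanceType'], ri['State'], ri['End'])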
We have a different situation with the same symptom: instances we're using, which happen to be of a type we reserved, are described under "Events" in the console as "The instance is running on degraded hardware". We did get an email about it, and stopping the instance (in my case I had to force it on the second try) and then starting it again got us back on shiny new hardware.
In addition to the previously mentioned term expiration or billing issues, one other reason for a retired reservation is that you modified your reservation, i.e., converted one instance size into another. For example, converting 1x m1.xlarge into 8x m1.small.
