Simple question:
If I had six identical EC2 instances process data for exactly ten minutes and then shut down, would I be charged for six hours or for one hour?
Update: EC2 and EBS are now based on usage down to the second
Old answer
Granularity for charges is measured down to the hour.
From the AWS pricing site http://aws.amazon.com/ec2/pricing/:
Pricing is per instance-hour consumed for each instance, from the time
an instance is launched until it is terminated or stopped. Each
partial instance-hour consumed will be billed as a full hour.
Unless you are keeping your usage under a free-tier threshold, the second you use an EC2 instance you're charged for the full hour. If you go one second over the first hour, you're charged for a full second hour.
One caveat: Spot Instances.
If spot instances are interrupted by AWS (not you) before reaching a full hour of use, you're not charged at all. If you interrupt the spot instance, you're charged for the partial hour of usage (rounded up to a full hour, as with on-demand instances).
AWS has introduced per second billing for EC2/EBS effective October 2, 2017.
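To make the difference concrete for the original six-instance question, here is a rough sketch in Python. The hourly rate is a placeholder, and the 60-second minimum for per-second billing is my understanding of the Linux pricing; check the pricing page for your instance type and region.

    import math

    HOURLY_RATE = 0.10  # placeholder on-demand $/hour; use your instance type's actual rate

    def old_billable_hours(run_seconds):
        # Old model: each partial instance-hour is rounded up to a full hour.
        return math.ceil(run_seconds / 3600)

    def new_billable_seconds(run_seconds):
        # Per-second model: billed by the second, with (I believe) a 60-second minimum.
        return max(run_seconds, 60)

    instances = 6
    run_seconds = 10 * 60  # each instance runs for ten minutes

    old_hours = instances * old_billable_hours(run_seconds)
    new_seconds = instances * new_billable_seconds(run_seconds)

    print(f"Old billing: {old_hours} instance-hours   -> ${old_hours * HOURLY_RATE:.2f}")
    print(f"New billing: {new_seconds} instance-seconds -> ${new_seconds * HOURLY_RATE / 3600:.2f}")

So under the old model the answer was six instance-hours; under per-second billing you pay for one hour's worth of compute in total.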
New EC2 Billing
On the free tier, you get 750 hours per month across all your EC2 instances.
Different cases:
Case 1: You run one instance for 10 minutes and then stop it. It will be counted as 1 hour.
Case 2: You run two instances, each for 5 or 10 minutes, and then stop them. They will be counted as 2 hours.
Case 3: You run one instance for 10 minutes, stop it, and then start it again for another 5 or 10 minutes. It will be counted as 2 hours.
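As a rough sketch of how those cases tally under the hourly rounding described above (each start-to-stop run is rounded up separately):

    import math

    def billable_hours(runs_in_seconds):
        # Each start-to-stop run is rounded up to a full instance-hour on its own.
        return sum(math.ceil(seconds / 3600) for seconds in runs_in_seconds)

    print(billable_hours([10 * 60]))           # Case 1: one 10-minute run           -> 1 hour
    print(billable_hours([5 * 60, 10 * 60]))   # Case 2: two instances, one run each -> 2 hours
    print(billable_hours([10 * 60, 5 * 60]))   # Case 3: stop, then start again      -> 2 hours

All of those billed hours count against the 750-hour monthly allowance.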
The free tier only covers t2.micro instances and a limited set of operating systems.
Knowing this can help you avoid unwanted billing.
I hope this clears up some doubts.
Note: This is just my understanding as of today. Please check the AWS pricing docs for up-to-date information.
Related
We have an algorithm that reuses AWS EC2 instances for jobs. This was very useful when payments were based on time rounded up to the hour. Now, due to the change in AWS policy, payments can be based on time expressed in minutes. At first glance there is no reason to keep the reuse algorithm, because a job lasts anywhere from 10 minutes to many hours. Are there any reasons why this algorithm could still be useful?
Unless you're running an EC2 instance for less than a minute (which it sounds like you're not) and need per-second billing, or are using an EBS volume with it, I would say probably not. See more details here
Recently I was required to reboot my EC2 instance due to an AWS maintenance alert. After reboot I noticed my CPU credit balance was consumed. Why is that? What's going on?
Stopping and starting a T2 Standard instance moves your instance to a new host system, clears your credit balance, and then ordinarily¹ gives you a baseline of 30 credits per vCPU to ensure a smooth start-up.
T2 Standard instances get 30 launch credits per vCPU at launch or start. For example, a t2.micro has one vCPU and gets 30 launch credits, while a t2.xlarge has four vCPUs and gets 120 launch credits.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html
Rebooting doesn't do this, but restarting (that is, a stop followed by a start) does, and the stop/start required for most maintenance events is a restart, not a reboot.
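A minimal sketch of that launch-credit arithmetic; the t2.micro and t2.xlarge vCPU counts come from the quote above, while the other sizes are listed from memory, so double-check them against the T2 docs:

    LAUNCH_CREDITS_PER_VCPU = 30

    # vCPU counts per instance size; t2.micro and t2.xlarge are from the quote above,
    # the others are from memory and should be verified.
    T2_VCPUS = {"t2.micro": 1, "t2.small": 1, "t2.medium": 2, "t2.xlarge": 4}

    for size, vcpus in T2_VCPUS.items():
        print(f"{size}: {vcpus * LAUNCH_CREDITS_PER_VCPU} launch credits after a launch or start")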
Tip
To ensure that your workloads always get the performance they need, switch to T2 Unlimited or consider using a larger T2 instance size.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html
T2 Unlimited machines are allowed to borrow against future CPU credit earnings for the upcoming 24 hours, so they don't receive the initial credit balance. You aren't charged extra for these borrowed credits unless your workload is so heavy that, over the subsequent 24-hour period, you continue to use credits at a rate that exceeds what you could have earned.
¹ Ordinarily, that is, unless you have performed more than 100 stop/starts or launches of T2 Standard machines in the past 24 hours, or your account is new; this limit exists to prevent gaming the system. New accounts gradually ramp up to the 100 threshold.
I launched an m3.medium reserved instance yesterday, but the usage already shows 163 hours and I have been charged 5 dollars. What does usage of that many hours mean?
PS: I launched an on-demand instance before, but only ran it for 2 hours and then stopped it. After purchasing the reserved instance, I restarted it.
You can see Amazon's billing documentation for an explanation of this pricing.
http://aws.amazon.com/ec2/purchasing-options/reserved-instances/
Light and Medium Utilization Reserved Instances also are billed by the instance-hour for the time that instances are in a running state; if you do not run the instance in an hour, there is zero usage charge. Partial instance-hours consumed are billed as full hours. Heavy Utilization Reserved Instances are billed for every hour during the entire Reserved Instance term (which means you’re charged the hourly fee regardless of whether any usage has occurred during an hour).
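One possible explanation, if your reservation happens to be a Heavy Utilization one: the hours shown are hours of the reservation term, not hours the instance was actually running. A hedged sketch of that accrual (the hourly rate and elapsed hours are placeholders, not your actual figures):

    # Assumption: a Heavy Utilization RI accrues its hourly fee for every hour of the
    # term, whether or not the instance is running (per the quote above).
    RI_HOURLY_RATE = 0.031       # placeholder $/hour; use your reservation's actual rate
    hours_elapsed_in_term = 163  # placeholder; roughly a week of wall-clock time

    print(f"{hours_elapsed_in_term} RI hours -> ${hours_elapsed_in_term * RI_HOURLY_RATE:.2f}")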
EC2 instance usage is calculated hour by hour. If you just start and stop an instance, it is still counted as one hour.
How does Heroku handle this? By the minute or by the hour?
Let's assume my app usage exceeds the 750 free dyno hour limit.
Heroku prorates to the second. A dyno costs $0.05 per hour, so if you go over 750 hours you will be charged $0.05 per hour, or $0.000833333 per minute. In fact, pretty much all add-ons follow the same billing model.
You can read about billing and charges at https://devcenter.heroku.com/articles/usage-and-billing#cost
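A minimal sketch of that proration, using the 750 free dyno hours and the $0.05/hour rate mentioned above (treat the rate as an example and check Heroku's current pricing):

    FREE_DYNO_HOURS = 750
    RATE_PER_HOUR = 0.05  # example rate from above; check current Heroku pricing

    def monthly_charge(dyno_hours_used):
        # Heroku prorates to the second, so fractional hours are billed exactly.
        billable_hours = max(0.0, dyno_hours_used - FREE_DYNO_HOURS)
        return billable_hours * RATE_PER_HOUR

    print(f"${monthly_charge(750):.2f}")  # within the free allowance -> $0.00
    print(f"${monthly_charge(770):.2f}")  # 20 hours over the limit   -> $1.00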
I will say, though, that the previous answer seems more accurate for a web dyno than for a worker dyno. Heroku's automated sleep cycle for your web dyno tries to prevent it from running too long while idle, say for more than an hour. A free web dyno must sleep at least six hours per day for it not to incur charges. As long as you keep your web dyno scaled to 1 and it sleeps, it should be free.
That said, when you add your first worker dyno, those same automations aren't applied to it. It presumably won't be triggered to sleep after idling for an hour. This means that unless you manage it, you'll likely be charged $34.50 per worker dyno per month. I wouldn't exactly call this lying to the customer, but most people start off with that first free dyno, get comfortable with it, and then innocently assume the next dyno will behave the same way. It won't, and you'll likely end up paying more than you'd bargained for: $414 per year for a dyno. Compare this with Amazon's t2.micro cost of $150 per year for one instance, or $75 per year at a 50% duty cycle.
As they say, "the devil is in the details". Heroku might be cheap for vanity websites but it's a bit costly if you have a database and worker thread (without any scaling otherwise whatsoever).
We have three EC2 instances, one in each availability zone (AZ) in the eu-west-1 region. They are load-balanced using ELB. We'd like to monitor how many instances are registered at the load balancer, using CloudWatch. The problem is: I don't really understand the HealthyHostCount metric.
For a deployment, we'd like to be able to de-register a single instance (take it out of the LB) without being notified. So the alarm would be: Notify if there is only 1 healthy instance left behind the loadbalancer for 5 minutes.
As far as I understand, HealthyHostCount (HHC) is the number of healthy instances that are registered with a given ELB, averaged over all AZs. If everything is okay, the HHC should be 1 (no matter over what period of time) because there is 1 instance in each AZ.
A couple of days ago, someone deployed without re-registering the instances, so there was only 1 instance being balanced. When we noticed that, we created an alarm that was to notify us when the average HHC sunk below 0.6 after 5 minutes. (If only 1 instance is registered in ELB, the HHC should average 0.33 for any period of time.) However, the alarm never changed to state "ALARM."
When I checked the HHC in CloudWatch, the numbers didn't make sense (a sum of 10.0 for a 5-minute interval is all I remember now).
It's all a big mess to me. Any time I think I understand the metric, the CloudWatch charts are all gibberish to me.
Could someone please explain how to use HHC to get an alarm when only 1 instance is registered? Is average HHC the way to go or should I use another metric?
The HealthyHostCount metric records one data value with the count of available hosts for each availability zone, each time a health check is executed. Your ELB health check has an Interval parameter that defines how many health checks are executed per minute.
If you are watching a Per-AZ metric, with a health check Interval of 10 seconds, with 2 healthy hosts in that AZ, you will see 6 data points per minute (60/10) with a value of 2. The average, max and min will be 2, but the sum will be 6*2=12.
If you have 3 AZs with 2 hosts each, again with an Interval=10, but you are looking at the Per-LB metric, you will see 3*6=18 data points per minute, each one with a value of 2. The average, max and min will be 2, but the sum will be 18*2=36
I recommend setting an interval value that divides 60 seconds evenly (5, 6, 10, 15, 20, 30 or 60 seconds).
In your case, if your interval is 30 seconds and you have 3 AZs with 1 server per AZ, you should expect 2 data points per AZ per minute. So set up a Per-LB alarm with a Period of 1 minute, for the Sum of HealthyHostCount, that triggers when the value is LowerOrEqual to 2 (2 data values * 1 healthy AZ * 1 healthy server = 2; the other 4 data values from the AZs with no healthy host should be 0, so they won't affect the sum).
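To make that data-point arithmetic concrete, a small sketch using the numbers from this answer (interval of 30 seconds, 3 AZs, 1 server per AZ):

    def datapoints_per_minute(interval_seconds, az_count):
        # Each AZ reports one HealthyHostCount value per health-check interval.
        return (60 // interval_seconds) * az_count

    def expected_sum_per_minute(interval_seconds, healthy_per_az):
        # At the Per-LB level, every data point carries the healthy-host count of its AZ.
        return (60 // interval_seconds) * sum(healthy_per_az)

    print(datapoints_per_minute(30, 3))            # 6 data points per minute
    print(expected_sum_per_minute(30, [1, 1, 1]))  # all healthy         -> sum = 6
    print(expected_sum_per_minute(30, [1, 0, 0]))  # only one AZ healthy -> sum = 2 (alarm threshold)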
UPDATE:
It turns out that the number of health checks executed also depends on the number of internal instances that make up the ELB (usually one per AZ), so if you are experiencing a traffic spike, or enough load to saturate a single ELB-internal instance, the number of internal servers inside the ELB will grow and you will unexpectedly see more data points. This only affects the sum value if you have lots of traffic. I didn't see this issue with a peak load of 6k RPM distributed across 3 AZs. If this is your scenario, then using the average is a safer bet, and I would recommend LowerThan 0.65 as your threshold.
The link also makes me wonder how the Cross-Zone Load Balancing feature affects the number of data points...
This is an area where the CloudWatch web console doesn't expose everything that CloudWatch can do. As the docs explain, HealthyHostCount is a per-availability-zone metric. The console lets you view HealthyHostCount by availability zone (but across all load balancers) or by load balancer (but across all zones), but not sliced both ways.
If you only have one load balancer, the simplest thing would be to set up one alarm on each of the per-zone metrics. If you have multiple load balancers, then you should be able to use the API to create alarms sliced by both availability zone and load balancer (again, one alarm per zone for each load balancer), but you can't do this from the web UI as far as I know.
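As a hedged sketch of that API route with boto3 (the alarm names, load balancer name, SNS topic, thresholds, and the classic-ELB AWS/ELB namespace are all assumptions to adapt to your own setup):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    # One alarm per (load balancer, availability zone) slice of HealthyHostCount,
    # which is the combination the web console won't let you pick.
    for az in ["eu-west-1a", "eu-west-1b", "eu-west-1c"]:
        cloudwatch.put_metric_alarm(
            AlarmName=f"healthy-hosts-{az}",
            Namespace="AWS/ELB",  # classic ELB; ALBs/NLBs use different namespaces
            MetricName="HealthyHostCount",
            Dimensions=[
                {"Name": "LoadBalancerName", "Value": "my-load-balancer"},  # placeholder
                {"Name": "AvailabilityZone", "Value": az},
            ],
            Statistic="Minimum",
            Period=300,                # 5 minutes, as in the question
            EvaluationPeriods=1,
            Threshold=1,
            ComparisonOperator="LessThanThreshold",  # fires when the zone has no healthy host
            AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],  # placeholder topic
        )

Adjust the Threshold and ComparisonOperator to match exactly when you want to be paged.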