Amazon AWS EC2 usage [closed] - amazon-ec2

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I have two questions on Amazon AWS EC2 usage computation:
If I install a packaged AMI (Fedora Core 5 with Tomcat and MySQL) and configure it to point to my subdomain demo.mydomain.com, how are computational hours calculated? The application I am publishing is not mission-critical; it is more of a demo for a web-based tutorial that showcases my application-development portfolio. It therefore makes no sense for me to invest in a full-blown remote virtual dedicated server or similar.
Say, in a 24 hour window:
0000 (hours) - Accessed by one user for 15 mins
0100 (hours) - Not accessed
0200 (hours) - Not accessed
0300 (hours) - Accessed by one user for 30 mins
.
.
.
1300 (hours) - Not accessed
1400 (hours) - Accessed by one user for 15 mins
.
.
.
2200 (hours) - Accessed by four users for 60 mins
2300 (hours) - Accessed by eight users for 60 mins
2400 (hours) - Not accessed
Total: 180 mins (3 hours)
Assuming that average use is consistent with the above hours for 30 days (to get a monthly total):
30 days x 3 hours/day = 90 hours (per month)
Here are my questions:
Am I billed for the time the service is not accessed, even though the web application is available online?
Are charges based on whole block hours or on per-minute usage?
For example, at 0000 hours the service was used for 15 minutes; am I charged for the remaining 45 minutes?
Can anyone advise?
Thanks

Pricing is per instance-hour consumed for each instance type, from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour.
http://aws.amazon.com/ec2/
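A minimal sketch of what that pricing rule means for the scenario in the question (the hourly rate is an assumed placeholder, not an actual AWS price): billing follows instance uptime from launch to termination, not user traffic.

```python
import math

HOURLY_RATE = 0.10  # USD per instance-hour -- assumed for illustration only

def billed_hours(seconds_running: float) -> int:
    """Under the classic EC2 model, each partial instance-hour
    is rounded up to a full hour."""
    return math.ceil(seconds_running / 3600)

# A demo instance left launched 24/7 for 30 days:
always_on = billed_hours(30 * 24 * 3600)   # 720 billed hours
# What you would pay if only the ~3 hours/day of actual user access
# counted (it does not -- billing tracks uptime, not requests):
access_only = billed_hours(30 * 3 * 3600)  # 90 billed hours

print(always_on * HOURLY_RATE)
print(access_only * HOURLY_RATE)
```

So for an always-on demo box, the answer to question 1 is yes: you pay for every hour the instance is running, accessed or not.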

Related

How can I estimate the maximum number of requests per second for JMeter with 8 users?

The scenario is:
Total number of users: 50,000
Ramp-up time: 2 minutes
Test duration: 5 minutes
However, I only have login credentials for 8 users.
Please advise: how can I send 50,000 requests in 2 minutes with 8 users?
Requests per second (RPS) is a result of your load test execution; you cannot dictate it beforehand.
Typically, you have a number in mind, e.g. 15 RPS, based on your application's history or research you have done. While you run the load test, you assert that the actual RPS >= the expected RPS, and report your findings to the business and development teams accordingly.
Various factors, such as server configuration, network, and think time, can affect the result. With a low server configuration (1 vCPU and 1 GB RAM) you can expect a relatively low RPS, and this number will improve as you increase server capacity.
Perhaps, follow this thread.
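The assert-after-the-fact idea above can be sketched as follows (the request count and target are made-up numbers for illustration; in practice the totals would come from JMeter's summary report):

```python
# RPS is derived from test results, not chosen up front.

def achieved_rps(total_requests: int, duration_seconds: float) -> float:
    """Compute the rate the server actually sustained."""
    return total_requests / duration_seconds

expected_rps = 15.0     # target from history/research (assumed)
total_requests = 5400   # e.g. taken from the JMeter summary report
duration_seconds = 300  # 5-minute test

actual = achieved_rps(total_requests, duration_seconds)  # 18.0
assert actual >= expected_rps, "server did not sustain the expected RPS"
```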

What will be Concurrent user count to load test? - I have Google Analytics report [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
Improve this question
I am working on a web app whose peak occurs only once a year. The Google Analytics reports say that on the peak day there were around 3,000 sessions between the two peak hours (15,000 sessions at 12:00 and 18,000 at 13:00), and users spend almost 8 minutes on the site. Every year during that time our application fails. So at what concurrency should I run my load test?
One article I read gives this formula: concurrent visitors = hourly visitors * time spent on site / 3600. If this is right, we only need to test up to 400 users. But on Google Analytics' real-time monitor I saw 3,800 users. Is this right?
3600 is the number of seconds in an hour, so if I understand you correctly, your calculation should be as follows:
18,000 users per hour * 8 minutes * 60 seconds per minute / 3600 = 2,400
which is much closer to 3,800.
I guess that your visitors weren't evenly distributed over one hour, that will explain the peak. If the cost difference of preparing for 4k visitors is less than the potential losses that can be caused by downtime on that critical day, I would go for it.
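The corrected arithmetic from the answer can be written out directly (figures are the ones given in the question):

```python
# Concurrent users = hourly sessions * average session length (s) / 3600.
def concurrent_users(sessions_per_hour: int, avg_session_seconds: float) -> float:
    """Estimate average concurrency from hourly traffic (Little's law)."""
    return sessions_per_hour * avg_session_seconds / 3600

# 18,000 sessions in the peak hour, ~8 minutes spent on site:
print(concurrent_users(18_000, 8 * 60))  # 2400.0
```

Note this is an average over the hour; uneven arrival within the hour explains why the real-time monitor can briefly show a higher number like 3,800.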

How exactly does AWS EC2 count hourly costs?

Simple question:
If I had six identical EC2 instances process data for exactly ten minutes and turn off would I be charged six hours or one hour?
Update: EC2 and EBS are now based on usage down to the second
Old answer
Granularity for charges is measured down to the hour.
From the AWS pricing site http://aws.amazon.com/ec2/pricing/:
Pricing is per instance-hour consumed for each instance, from the time
an instance is launched until it is terminated or stopped. Each
partial instance-hour consumed will be billed as a full hour.
Unless you are calculating time to stay under a free-tier threshold, the second you use an EC2 instance you're charged for the full hour. If you go one second over the first hour, you're charged for a second full hour.
One caveat: Spot Instances.
If spot instances are interrupted by AWS (not you) before reaching a full hour of use, you're not charged at all. If you interrupt the spot instance yourself, you're charged for the partial hour of usage (which is rounded up to a full hour, as with on-demand instances).
AWS has introduced per second billing for EC2/EBS effective October 2, 2017.
New EC2 Billing
Under the free tier, you get 750 hours per month across all your EC2 instances.
Different cases:
Case 1: You have created and run one instance for 10 minutes, then stopped it. It is counted as 1 hour.
Case 2: You have two instances, each running for 5 or 10 minutes, then stopped. They are counted as 2 hours.
Case 3: You have one instance running for 10 minutes, stopped, then started again for another 5 or 10 minutes. It is counted as 2 hours.
The free tier only covers t2.micro instances and a limited set of operating systems.
This way, you can prevent unwanted billing.
I hope this clears up some doubts.
Note: this is just my understanding as of today. Please check the AWS pricing docs for up-to-date information.
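The three cases above, plus the newer per-second model mentioned earlier, can be sketched as follows (the one-minute minimum per run reflects AWS's per-second billing terms; treat the exact rules as something to verify against the pricing docs):

```python
import math

def hourly_billed(run_seconds: list[float]) -> int:
    """Pre-Oct-2017 model: each run (launch..stop) is rounded up
    to whole hours; stopping and restarting begins a new billed hour."""
    return sum(math.ceil(s / 3600) for s in run_seconds)

def per_second_billed(run_seconds: list[float], minimum: float = 60) -> float:
    """Post-Oct-2017 per-second model (assumed one-minute minimum
    per run), expressed in hours."""
    return sum(max(s, minimum) for s in run_seconds) / 3600

# Case 1: one instance, one 10-minute run
assert hourly_billed([600]) == 1
# Case 2: two instances, ~10 minutes each
assert hourly_billed([600]) + hourly_billed([600]) == 2
# Case 3: one instance, stopped and restarted (10 min + 10 min)
assert hourly_billed([600, 600]) == 2
# The same 20 minutes under per-second billing: ~0.33 hours
print(round(per_second_billed([600, 600]), 2))
```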

Do partial instance-hours appear frequently with EC2 on-demand instances? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Because pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated. Each partial instance-hour consumed will be billed as a full hour.
Here is my question:
Do partial instance-hours occur frequently or rarely?
In what kind of context do partial instance-hours occur frequently?
Does anyone have experience with this?
Partial hours happen most frequently when using systems that scale often. For example, in my system I launch 10-20 extra servers each Saturday and Sunday to handle the extra traffic. When these servers are stopped, I am charged a partial hour. Amazon has a new feature for auto scaling groups that tells it to terminate (if it has to) the servers closest to the hour marker in order to save money.
Other possible uses are for services like MapReduce where a large number of instances will be started and then when the job is complete they will be terminated.
My experience, though, is that the actual cost of partial hours is insignificant for me. Maybe it adds up if you're using larger servers, but I'm using c1.medium and I barely notice the ~$5 I get charged on a weekend for partial hours.

How do I use ELB's HealthyHostCount for monitoring in CloudWatch?

We have three EC2 instances, one in each availability zone (AZ) in the eu-west-1 region. They are load-balanced using ELB. We'd like to monitor how many instances are registered at the load balancer, using CloudWatch. The problem is: I don't really understand the HealthyHostCount metric.
For a deployment, we'd like to be able to de-register a single instance (take it out of the LB) without being notified. So the alarm would be: Notify if there is only 1 healthy instance left behind the loadbalancer for 5 minutes.
As far as I understand, HealthyHostCount (HHC) is the number of healthy instances that are registered with a given ELB, averaged over all AZs. If everything is okay, the HHC should be 1 (no matter over what period of time) because there is 1 instance in each AZ.
A couple of days ago, someone deployed without re-registering the instances, so only 1 instance was being balanced. When we noticed, we created an alarm that was to notify us when the average HHC dropped below 0.6 for 5 minutes. (If only 1 instance is registered in the ELB, the HHC should average 0.33 over any period of time.) However, the alarm never changed to state "ALARM."
When I checked the HHC in CloudWatch, the HHC were numbers that didn't make sense (sum of 10.0 for a 5-minute interval is all I remember now).
It's all a big mess to me. Any time I think I understand the metric, the CloudWatch charts are all gibberish to me.
Could someone please explain how to use HHC to get an alarm when only 1 instance is registered? Is average HHC the way to go or should I use another metric?
The HealthyHostCount metric records one data value with the count of available hosts for each availability zone, each time a health check is executed. Your ELB health check has an Interval parameter that defines how many health checks are executed per minute.
If you are watching a Per-AZ metric, with a health check Interval of 10 seconds, with 2 healthy hosts in that AZ, you will see 6 data points per minute (60/10) with a value of 2. The average, max and min will be 2, but the sum will be 6*2=12.
If you have 3 AZs with 2 hosts each, again with an Interval=10, but you are looking at the Per-LB metric, you will see 3*6=18 data points per minute, each one with a value of 2. The average, max and min will be 2, but the sum will be 18*2=36
I recommend setting an Interval value that divides 60 seconds evenly (5, 6, 10, 15, 20, 30 or 60 seconds).
In your case, if your interval is 30 seconds, and you have 3 AZs and 1 server per AZ: You should expect 2 data points per AZ per minute, so set-up an alarm Per-LB, with a Period of 1 minute, for Sum of HealthyHostCount that triggers when value is LowerOrEqual than 2 (2 data values * 1 Healthy AZ * 1 healthy server = 2, the other 4 data values of the unhealthy AZs should be 0 so they won't affect the sum).
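The data-point arithmetic from the two examples above can be sketched like this (the figures match the answer's scenarios):

```python
# One HealthyHostCount data point per health check per AZ.
def datapoints_per_minute(interval_seconds: int, num_azs: int = 1) -> int:
    """Expected data points per minute; the Interval should
    divide 60 evenly (5, 6, 10, 15, 20, 30 or 60 seconds)."""
    assert 60 % interval_seconds == 0, "pick an interval that divides 60"
    return (60 // interval_seconds) * num_azs

# Per-AZ metric, Interval=10, 2 healthy hosts in the AZ:
points = datapoints_per_minute(10)  # 6 points/minute
print(points * 2)                   # Sum = 12; avg/min/max stay 2

# Per-LB metric, 3 AZs, Interval=30, 1 healthy server per AZ:
points = datapoints_per_minute(30, num_azs=3)  # 6 points/minute
# Alarm from the answer: Sum <= 2 when only one AZ is still healthy
# (2 data values * 1 healthy AZ * 1 healthy server).
print(points)
```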
UPDATE:
It turns out that the number of health checks executed also depends on the number of internal instances that make up the ELB (usually one per AZ), so if you suffer a traffic spike, or enough load to saturate a single ELB-internal instance, the number of internal servers inside the ELB will grow and you will unexpectedly see more data points. This may affect the Sum value, but only if you have lots of traffic. I didn't see this issue with a peak load of 6k RPM distributed across 3 AZs. If this is your scenario, then using Average is a safer bet, but I would recommend a threshold of LowerThan 0.65.
The link also makes me wonder how does the Cross-Zone Load Balancing feature affects the amount of data points...
This is an area where the CloudWatch web console doesn't expose everything that cloud watch can do. As the docs explain, HealthyHostCount is a per availability zone metric. The console lets you have HealthHostCount by availability zone (but across all load balancers) or by load balancer (but across all zones) but not sliced both ways.
If you only have one load balancer the simplest thing would be to setup one alarm on each of the per zone metrics. If you have multiple availability zones then you should be able to use the api to create an alarm slicing across availability zone and load balancer (again, one alarm per load balancer) but you can't do this from the web UI as far as I know.
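A hypothetical sketch of that API-only approach: one alarm per availability zone, slicing HealthyHostCount by both load balancer and AZ. The names ("my-elb", the SNS topic ARN) are placeholders; the kwargs target the classic-ELB `AWS/ELB` CloudWatch namespace.

```python
def healthy_host_alarm(lb_name: str, az: str, sns_topic_arn: str) -> dict:
    """Build kwargs for CloudWatch's put_metric_alarm, sliced by
    load balancer AND availability zone (classic ELB)."""
    return {
        "AlarmName": f"{lb_name}-{az}-healthy-hosts",
        "Namespace": "AWS/ELB",
        "MetricName": "HealthyHostCount",
        "Dimensions": [
            {"Name": "LoadBalancerName", "Value": lb_name},
            {"Name": "AvailabilityZone", "Value": az},
        ],
        "Statistic": "Minimum",
        "Period": 300,  # 5 minutes, per the question
        "EvaluationPeriods": 1,
        "Threshold": 1.0,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

params = healthy_host_alarm("my-elb", "eu-west-1a", "arn:aws:sns:TOPIC")
# To actually create the alarm (requires AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
```

Repeating this per AZ gives one alarm per zone without relying on the console's either/or slicing.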
