Amazon EC2: set max cost per month

I see a lot of similar questions but no direct answer. I just want to play around with an instance to run some light hourly data aggregation and familiarize myself with EC2 instances. However, I don't feel I understand what my risk is. Is there a way to set my max cost per month? I don't care if my instances get shut down when the cap is hit; I just want limited liability for my experiment.
Also, running my own log monitoring is not really a solution I will undertake for such a simple experiment. I just want to know if there is some easy way of limiting liability.

The short answer is no: you won't be able to set a hard cap that stops services automatically.
You can, however, easily monitor your costs, starting with billing alerts. The Account Activity page shows you a nearly real-time accounting of the costs you have incurred so far.
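For illustration, here is a minimal sketch of such a billing alert using today's boto3 SDK (the console can do the same thing). Billing metrics must be enabled in the account's billing preferences first, and the threshold and SNS topic ARN below are placeholders you would supply yourself:

```python
# Sketch: a CloudWatch billing alarm that notifies you when estimated monthly
# charges cross a threshold. Billing metrics are only published in us-east-1.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-20-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # the billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=20.0,             # alert once estimated charges exceed $20
    ComparisonOperator="GreaterThanThreshold",
    # hypothetical SNS topic that emails you; create it separately
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```

Note that the alarm only notifies; it does not stop instances, which is exactly the limitation described above.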

Related

Can an AWS instance-reuse algorithm still be useful when switching to per-minute billing?

We have an algorithm that reuses AWS EC2 instances for jobs. This was very useful when usage was billed in whole hours (rounded up). Now, following the change in AWS policy, billing is by the minute. At first glance there is no reason to keep the reuse algorithm, because a job lasts at least 10 minutes and up to many hours. Are there any reasons why this algorithm could still be useful?
Unless you're running an EC2 instance for less than a minute (which it sounds like you're not) and need per-second billing, or you're using an EBS volume with it, I would say probably not.
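A back-of-the-envelope illustration of the difference (all prices made up):

```python
# Why instance reuse paid off under hourly billing but barely matters once
# you are billed per minute. Figures are illustrative only.
HOURLY_RATE = 0.10   # $/hour, hypothetical instance price
JOB_MINUTES = 10
JOBS = 6

# Hourly billing: each job on a fresh instance is rounded up to a full hour;
# packing all six jobs onto one reused instance bills a single hour.
fresh_hourly = JOBS * HOURLY_RATE
reused_hourly = ((JOBS * JOB_MINUTES + 59) // 60) * HOURLY_RATE  # ceil to hours

# Per-minute billing: you pay for actual runtime either way.
per_minute = JOBS * JOB_MINUTES / 60 * HOURLY_RATE

print(f"{fresh_hourly:.2f} {reused_hourly:.2f} {per_minute:.2f}")  # 0.60 0.10 0.10
```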

What are the correct CloudWatch/Auto Scaling settings for extremely short traffic spikes on Amazon Web Services?

I have a site running on amazon elastic beanstalk with the following traffic pattern:
~50 concurrent users normally.
~2000 concurrent users for 1-2 minutes when a post is made to the Facebook page.
Amazon Web Services claims to be able to scale rapidly to meet challenges like this, but the "greater than x for more than 1 minute" setup in CloudWatch doesn't appear to be fast enough for this traffic pattern.
Usually within seconds all the EC2 instances crash, killing all CloudWatch metrics, and the whole site is down for 4-6 minutes. So far I've yet to find a configuration that works for this scenario.
A graph of a smaller event that also killed the site showed the same pattern (image not reproduced here).
Are these links posted predictably? If so, you can use Scaling by Schedule, or alternatively you might change the DESIRED-CAPACITY value of the Auto Scaling group, or even trigger as-execute-policy to scale out right before your link is posted.
Did you know you can have multiple scaling policies in one group? So you might have a special Auto Scaling policy for your case, something like SCALE_OUT_HIGH, which adds, say, 10 more instances at once. Take a look at the as-put-scaling-policy command (a sketch of both ideas follows below).
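The legacy as-* commands have since been folded into the unified AWS CLI and SDKs; here is a sketch of both ideas using boto3. The group name, policy name, schedule, and sizes are illustrative, not from the original answer:

```python
# Sketch: pre-warm capacity on a schedule, plus an aggressive one-shot
# scale-out policy you can fire right before a post goes out.
import boto3

autoscaling = boto3.client("autoscaling")

# Scaling by Schedule: raise desired capacity shortly before the known post time.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",            # hypothetical group name
    ScheduledActionName="prewarm-before-post",
    Recurrence="50 8 * * *",                   # cron, UTC: 08:50 daily
    DesiredCapacity=12,
)

# A SCALE_OUT_HIGH-style policy that adds 10 instances at once
# (the boto3 equivalent of as-put-scaling-policy).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="SCALE_OUT_HIGH",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=10,
    Cooldown=300,
)

# Fire it manually (the as-execute-policy equivalent).
autoscaling.execute_policy(AutoScalingGroupName="web-asg",
                           PolicyName="SCALE_OUT_HIGH")
```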
Also, you need to profile your code and find the bottlenecks.
What HTTPD do you use? Consider switching to Nginx, as it is much faster and consumes far fewer resources than Apache. Try using Memcached; a NoSQL store like Redis is a fine option for heavy reads and writes as well.
The suggestion from AWS was as follows:
We are always working to make our systems more responsive, but it is
challenging to provision virtual servers automatically with a response
time of a few seconds as your use case appears to require. Perhaps
there is a workaround that responds more quickly or that is more
resilient when requests begin to increase.
Have you observed whether the site performs better if you use a larger
instance type or a larger number of instances in the steady state?
That may be one method to be resilient to rapid increases in inbound
requests. Although I recognize it may not be the most cost-effective,
you may find this to be a quick fix.
Another approach may be to adjust your alarm to use a threshold or a
metric that would reflect (or predict) your demand increase sooner.
For example, you might see better performance if you set your alarm to
add instances after you exceed 75 or 100 users. You may already be
doing this. Aside from that, your use case may have another indicator
that predicts a demand increase, for example a posting on your
Facebook page may precede a significant request increase by several
seconds or even a minute. Using CloudWatch custom metrics to monitor
that value and then setting an alarm to Auto Scale on it may also be a
potential solution.
So I think the best answer is to run more instances during low-traffic periods and use custom metrics to predict traffic from an external source. I am going to try, for example, monitoring Facebook and Twitter for posts with links to the site and scaling up straight away.
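As a rough illustration of that plan (not a tested setup), the external signal can be pushed as a CloudWatch custom metric and alarmed on; every name and ARN here is hypothetical:

```python
# Sketch: publish a "post spotted" signal as a custom metric, then alarm on it
# so Auto Scaling reacts before user traffic shows up in the built-in metrics.
import boto3

cloudwatch = boto3.client("cloudwatch")

# The poller publishes 1 whenever it sees a fresh Facebook/Twitter post
# linking to the site.
cloudwatch.put_metric_data(
    Namespace="Site/Traffic",
    MetricData=[{"MetricName": "IncomingPostSpike", "Value": 1, "Unit": "Count"}],
)

# Alarm on a single data point; the action is the policy ARN returned by
# put_scaling_policy (e.g. the SCALE_OUT_HIGH policy sketched earlier).
cloudwatch.put_metric_alarm(
    AlarmName="scale-out-on-post",
    Namespace="Site/Traffic",
    MetricName="IncomingPostSpike",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:..."],  # hypothetical policy ARN
)
```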

Capacity planning and estimating costs with Amazon Web Services for 1M users

I'm in the planning stages of estimating server costs for my web application. How can I determine how many Amazon EC2 instances I will need to handle a database-backed web application with 1M active users? How should I go about filling out the monthly calculator on Amazon's site?
http://calculator.s3.amazonaws.com/calc5.html
The web application will be somewhat akin to a social networking site. The transfers will most likely be small, but there will be anywhere from 100,000 to 500,000 data transfers from users to the servers on a daily basis.
To get an accurate estimate of your costs, you will have to understand the application architecture and usage patterns, and how many servers (instances), how much storage, and how much data transfer you expect to use.
Take a look at this video: https://www.youtube.com/watch?v=PsEX3W6lHN4&list=PLhr1KZpdzukcAtqFF32cjGUNNT5GOzKQ8 - it might help you understand how to use the calculator and fill in the different values.
Jin
Capacity planning is on you; it's specific to the app, so nobody, including Amazon, can advise you there. Regarding cost estimation: yes, you can use the monthly calculator. The one thing I would suggest is that when you do your capacity planning, do your homework on which AWS services you are going to use. For each service, find out the unit of measure used for pricing; once you know that, plan your capacity so you can project how much of each service's unit of measure you will use per month. The one exception is reserved instances: since you reserve them for a one- or three-year term with an upfront fee, you will want to spread that cost across the term you choose.
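As an illustration of spreading the reservation fee across the term (all prices made up):

```python
# Effective monthly cost of a reserved instance = amortized upfront fee plus
# the discounted hourly rate. Figures are illustrative only.
UPFRONT = 1000.00        # one-time reservation fee, $
HOURLY = 0.05            # discounted hourly rate with the reservation, $
TERM_MONTHS = 36         # three-year term
HOURS_PER_MONTH = 730    # the convention AWS's calculator uses

monthly = UPFRONT / TERM_MONTHS + HOURLY * HOURS_PER_MONTH
print(f"effective monthly cost: ${monthly:.2f}")  # $64.28
```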

Prototyping for Amazon EC2

How do people (and startup companies) actually go about prototyping/deploying things on Amazon and keep costs reasonable? Last month we were experimenting with some specific applications, running our own Hadoop cluster, and managed to spend almost $1.5k just on tests. Sure, they have micro instances, but what if your application is so intensive it actually requires a larger instance even to test? So I'd like some input on how people go about doing this.
Several key issues:
Consider a local testbed for some purposes & consider if a given test really needs EC2. If it's really so hard to wrangle 2-4 machines to use as a testbed for Hadoop, there's a different problem. Get your head around whatever you're going to run, how Hadoop will play a role, and kick the tires on that. In time, you will also want to change your grid, upgrade software, tinker with other ideas, etc. When you go to EC2, you'll have smoothed some rough edges already.
Don't use a larger capacity machine than you need while getting the hang of things. If you're not pushing lots of data or compute cycles through at this stage, don't bother with cluster compute nodes, massive RAM instances, etc. Just focus on getting things set up correctly.
When you are ready to retarget to more powerful machines, try a few different machine setups. Maybe the cluster compute instances will pay off, maybe you don't need that kind of throughput: until you know your bottlenecks, don't overspend.
Be sure to use spot instances frequently during the testing phase. You will typically pay about 50% of the on-demand price.
If you get to a point where you want to pay for on-demand instances, have a separate instance start and stop Hadoop instances as needed - unless you need a big cluster all on cluster compute instances.
Prepare your AMIs to get launched as quickly as possible (under 1 minute) and never leave anything running overnight or over a weekend if it isn't necessary.
Until you get the system set up and running, you're basically paying tuition to learn how to get everything tailored to your needs. Just pay the "tuition" to learn each lesson (configurations, bottlenecks, scaling up, etc.), rather than try to take on everything at once. When you approach it as a series of lessons to be learned, it is less painful to spend the money, but as long as you know what you're about to test and learn, you will also spend money more judiciously.
Finally, compare the $1500 to the labor costs of this learning experience - it probably isn't a big deal in the long run. Once you know that something is going to be a reasonable block of computational effort, it's well engineered, and will finish quickly (albeit on many machines), it isn't so painful to spend money on it. Right now, it's hard to appreciate what you're learning because it doesn't yet benefit your org's goals.
To address the cost issue while doing a proof of concept on the Amazon cloud:
I created a lightweight Java application using the Amazon AWS API which creates the Amazon cloud instances when I want to run a test on them. Once the test finishes (or fails to start), the application terminates the instances immediately and sends out a diagnostic mail.
So no Amazon instance is kept running or sitting idle, which can happen if you create/terminate manually or through a separate program.
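The author's tool was written in Java; a minimal sketch of the same create-test-terminate pattern using Python and boto3 might look like this (the AMI ID, instance type, and run_test hook are all hypothetical):

```python
# Sketch: launch an instance, run a test against it, and terminate it no
# matter what happened, so nothing sits idle accruing charges.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.run_instances(
    ImageId="ami-12345678",      # hypothetical test AMI
    InstanceType="m3.large",
    MinCount=1,
    MaxCount=1,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

try:
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
    run_test(instance_ids)       # hypothetical: your test harness
finally:
    # The finally block guarantees cleanup even if the test fails to start.
    ec2.terminate_instances(InstanceIds=instance_ids)
```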
Consider using spot instances. If you overbid, you can be almost certain your instance won't be terminated. In the long run their price is on the level of reserved instances, but you don't need to pay upfront. I believe you could also schedule the tests for off-peak hours to get even better prices, or switch to on-demand if the spot price exceeds the on-demand price - Hadoop should handle it nicely. Check this article about spot instances; it also references two other articles that analyze the potential of spot instances.
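If you want to gauge how safe a given bid would be, recent price history is available from the API. A small sketch with boto3 (the instance type is just an example):

```python
# Sketch: pull a week of spot price history to see the range before bidding.
import datetime
import boto3

ec2 = boto3.client("ec2")

history = ec2.describe_spot_price_history(
    InstanceTypes=["m3.xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.datetime.now(datetime.timezone.utc)
              - datetime.timedelta(days=7),
)
prices = [float(p["SpotPrice"]) for p in history["SpotPriceHistory"]]
print(f"last week: min ${min(prices):.4f}, max ${max(prices):.4f}")
```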

AWS Amazon EC2 Spot pricing

I'd like a non-Amazon answer to this quandary...
It looks like, via spot instance pricing, you could run an instance for 22 or 23 cents an hour, for as many hours as you want, because the historical charts for hours/days/months show the spot price never goes over 21 (22?) cents per hour. That's like half the non-reserved instance cost for the same-sized instance, and it's even less than a reserved instance would ever work out to per hour. With no commitment.
Am I missing something? Do I have a complete and total misunderstanding of the spot/bid/ask instance mechanism? Or is this a cheap way to get a 24/7 instance while Amazon has a bunch of extra capacity?
Jeremy
No, you are not missing anything. I asked the same question many times when I first looked at Spot, followed by "why doesn't everyone use this all the time?"
So what's the downside? Amazon reserves the right to terminate a Spot instance at any time for any reason. Now, a normal "on-demand" instance might die at any time too, but Amazon goes to great lengths to keep them online and warns customers well in advance (days/weeks) if the host server needs to be powered down for maintenance. If you have a Spot instance running on a server they want to reboot... they will just shut it off. In practice, both are pretty reliable (but NOT 100%!), and many roles can run 24/7 on Spot without issues. Just don't go whining to Amazon that your Spot instance got shut off and your entire database was stored on the ephemeral drive... of course, if you do that on ANY instance, you are taking a HUGE (and very stupid) risk.
Some companies are saving tons of money with Spot. Here's a writeup on Vimeo saving 50%, and one on Pinterest saving 60%+ ($54/hr => $20/hr).
Why don't more companies use Spot for their instances? Many of the companies buying EC2 instance hours aren't very price-sensitive and are very, very risk-averse, especially when it comes to outages and to operational events that sap engineering effort. They don't want to deal with the hassle to save a few bucks, especially if AWS fees aren't a significant cost center compared to personnel. And for 24/7 instances, they already pay half price via "reserved instances", so the savings aren't as dramatic as they seem versus full-priced "on-demand" instances. Spot also isn't fully relevant to the largest customers: you can be nearly certain that when a customer gets to be the size of a Netflix, they 1) need to coordinate with Amazon on capacity planning, because you can't just spin up half a datacenter on a whim, and 2) get significant volume discounts that bring their usage costs down into the Spot price range anyway. Plus, the first tier of cost cutting is to reclaim hardware that isn't really needed; at my last company, one guy found a bug where, as we cycled through boxes, we would "forget" about some of them, and shutting that down saved $100+k/month (yikes). Once companies burn through that fat, they start looking at Spot.
There's a second, less discussed reason Spot doesn't get used... it's a different API. Think about how this interacts with "organizational inertia"... Working at a company that continuously spends $XX/hr on EC2 (and coming from a company that spent $XXXX/hr), engineers start instances with the tools they are given. Our Chef deployment doesn't know how to talk to Spot. Rightscale (previous place) defaulted to launching on-demand instances. With some quantity of work, I could probably figure out how to launch a Spot instance, but why bother if my priority is to get role XYZ up and running by tomorrow? I'm not about to engineer a Spot-based solution just for my one role and then evangelize why that was a good idea; it's got to be an org-wide decision. If you read the Pinterest case study I linked above, you'll notice they talk about migrating their whole deployment from $54/hr to $20/hr on Spot. Reading between the lines, they didn't choose to launch Spot instances one by one; one day, they woke up and made a company-wide decision to "solve the Spot problem" and migrate their deployment tools to using Spot by default (probably with support for a flag that keeps their DB instances off Spot). I can't imagine how much money Amazon has made by making Spot a different API instead of a flag on the normal EC2 API. Hint: it's boatloads... as in, you could buy a boat and then fill it with cash until it sinks.
So if you are willing to tolerate slightly higher risk and / or you are somewhat price-sensitive ... then, yes, you absolutely can save a crapton of money by running your service under Spot 24/7.
Just make sure you are double-prepared to unexpectedly lose your instance (i.e., take backups)... something you ALREADY need to be prepared for with an "on-demand" instance, which doesn't have 100.0% uptime either.
Think of it this way:
Instead of getting something 99.9% reliable, you are getting something 99.5% reliable and paying half-price
(I made those numbers up to convey the idea, but they probably aren't too far off from the truth).
So long as your bid price is above the spot instance market price, you can continue to run whatever spot instances you want, and only pay the market price.
However, when the market price goes above your bid price, you lose your instances, without any warning; they just terminate. While the spot price rarely spikes, and when it does it tends to come back down quickly, for many applications the possibility of losing all your instances without warning is unacceptable. You can insulate yourself against that possibility by bidding higher, but then you risk actually having to pay that much.
TL;DR: If your application is tolerant to sudden termination, then spot instances are great. But there is a risk involved in using them.
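For concreteness, here is what placing a bid looks like with boto3's classic request_spot_instances call; the AMI ID, instance type, and bid are all hypothetical:

```python
# Sketch: place a spot request with an explicit bid. You pay the market
# price while it stays below the bid; the instance is terminated without
# warning if the market rises above it.
import boto3

ec2 = boto3.client("ec2")

ec2.request_spot_instances(
    SpotPrice="0.25",            # the bid: a price cap, not what you pay
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",   # hypothetical AMI
        "InstanceType": "m1.large",
    },
)
```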
I think these answers are slightly missing the point...
You need to select the most appropriate pricing for your workload and architect your solution with this in mind. AWS offers 3 pricing types:
Reserved Instances (low cost, high reliability, but pay up-front)
On-demand instances (highest cost, high reliability, but pay as you go)
Spot Instances (generally lowest cost, but can terminate unexpectedly)
Reserved Instances
- Use these for cost savings on long running / constant / predictable workloads.
On-demand Instances
- Use these for temporary workloads e.g. development / proof of concept / unpredictable workloads that can't be interrupted.
Spot Instances
- Use these for transitory workloads. Ensure applications are designed with this in mind (e.g. maintain state somewhere permanent and support the ability for new instances to resume where previous ones left off).
A useful design pattern can be to have a "pilot light" instance and use auto scaling to bring spot instances on as required, with a bit of cunning to bring on-demand instances on if spot instances fail to appear (see the sketch after the TL;DR below).
TL;DR: Spot Instances are suitable for workloads that can pause and resume, but are not mission critical. They can be subject to extraordinary peaks (e.g. N. California m2.2xlarge spot price is usually $0.11/hr but has sustained peaks of $10.00/hr!).
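A rough sketch of that pilot-light fallback using the modern run_instances call, which can request spot capacity directly; all identifiers and prices are hypothetical, and real error handling would be more discriminating than catching every ClientError:

```python
# Sketch: try spot first, fall back to on-demand if no spot capacity appears.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")
spec = {"ImageId": "ami-12345678",   # hypothetical AMI
        "InstanceType": "m2.2xlarge",
        "MinCount": 1, "MaxCount": 1}

try:
    ec2.run_instances(
        InstanceMarketOptions={"MarketType": "spot",
                               "SpotOptions": {"MaxPrice": "0.25"}},
        **spec,
    )
except ClientError:
    # No capacity at our price cap: pay on-demand rather than go down.
    ec2.run_instances(**spec)
```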
Or is this a cheap way to get a 24/7 instance while Amazon has a bunch of extra capacity?
Spot on, if your bid price always remains above the spot price.
I couldn't find any other explicit mention of when they will terminate your instance.
I would have assumed it would be when they require that capacity for customers willing to pay full price for the instance, but then again, the spot price could technically go above the on-demand price.
