I'm writing a Ruby program that needs raw CPU power (I know Ruby is a very bad choice for this!), but I don't have a powerful computer, so I wanted to rent something online that I pay for per hour.
Any ideas? Something simple to use yet very powerful, with multiple cores. I took a look at Amazon EC2, that's a possibility. Anything else, more CPU-oriented?
As far as on-demand compute power goes, Amazon's EC2 is a good choice. You can either pay the regular on-demand rates, or you can go to their discounted spot market, which is similar except that your instance can and will be terminated without warning when demand picks up again.
It's best to have a system that either uses a persistent EBS drive to save results, or saves them to something like S3 frequently.
If you can parallelize your processing, try to split it across the most cost-effective instance type instead of paying a premium for a single large instance. For example, the XL Hi-CPU On-Demand Instance gives you 20 compute units for $0.68/hr, compared with the 4XL Cluster Compute Instance, which gives only 33.5 for $1.60/hr.
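To make the comparison concrete, here's a quick back-of-the-envelope calculation in Python using the figures quoted above (the instance API names in the comments are from memory, so treat them as illustrative):

```python
# Compute units per dollar-hour for the two instance types quoted above.
instances = {
    "XL Hi-CPU (c1.xlarge)": (20.0, 0.68),          # (compute units, $/hr)
    "4XL Cluster Compute (cc1.4xlarge)": (33.5, 1.60),
}

for name, (ecu, price) in instances.items():
    print(f"{name}: {ecu / price:.1f} compute units per dollar-hour")
```

The Hi-CPU instance comes out at roughly 29 compute units per dollar-hour versus about 21 for the cluster instance, which is the point being made.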
Remember that a single Ruby process can only use one CPU core unless you're using a combination of JRuby and threads. If that's not the case, you will need to run multiple processes to make full use of the machine.
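As a sketch of the idea (shown in Python for brevity; in Ruby you'd typically fork worker processes, e.g. with Process.fork, to the same effect):

```python
# Fan CPU-bound work out across all cores with one worker process per core.
from multiprocessing import Pool, cpu_count

def crunch(chunk):
    # stand-in for the real CPU-bound work
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    # split the job into as many chunks as there are workers
    chunks = [range(n, n + 1_000_000) for n in range(0, 8_000_000, 1_000_000)]
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(crunch, chunks)
    print(sum(results))
```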
SimpleWorker - I think it's simpler than EC2.
We are considering using Azure Functions to run some compute on. However, our computations could take up a lot of memory, let's say more than 5 GB.
As I understand it, there is no easy way to scale Azure Functions based on memory usage, i.e. if you reach 15 GB, start a new instance (since you don't want it to run over the maximum memory of your instance)?
Is there a way around this limitation?
OR
Is there another technical alternative to azure functions that provide pay per use and allows rapid scaling on demand?
Without any further details about what you are actually trying to compute, it is very hard to give you any meaningful advice.
But there are a few things you could consider.
For example, if you are processing CSV files and you know you need 1 GB of memory to process a file, then "spin up" an AWS Lambda per file. You could use serverless orchestration like AWS Step Functions to coordinate your processing. This way you are effectively splitting up your compute-intensive tasks.
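A minimal sketch of what such a per-file Lambda handler could look like in Python; the event shape, bucket, and key are hypothetical:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # hypothetical event shape: one S3 object per invocation
    bucket, key = event["bucket"], event["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(body)))
    # ... do the real per-row work here ...
    return {"key": key, "rows": len(rows)}
```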
Another option would be to automate "pay per use" yourself. You could use automation tools like Terraform to start an EC2 spot instance, run your computing task, and shut the instance down once it finishes. That is more or less pay as you go, with a bit of operations overhead on your part.
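The same idea scripted directly against the EC2 API with boto3 rather than Terraform; the AMI, instance type, and region are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder AMI with your task baked in
    InstanceType="m5.xlarge",        # illustrative
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... run / monitor your task, then clean up so the billing stops:
ec2.terminate_instances(InstanceIds=[instance_id])
```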
There are also other services like AWS Fargate, which are marketed as "Serverless compute for containers", allowing you to run Docker containers in a pay per use manner. Fargate allows provisioning of up to 30GB of memory.
Another option would be to use services like ElastiCache (managed Redis/Memcached) to "externalize" your memory.
Which of these options would be the best for you is hard to tell, because it depends on your constraints: do you need everything to be stored in the "same memory", or can it be split up? What are your data structures? Can the data be processed in chunks (to minimise memory usage)? Is latency important? How long does it take, and how often do you need to run your tasks?
Note: There are probably equivalent Azure services to the AWS services I talked about in this answer.
Which is better in terms of performance, 2 medium role instances or 4 small role instances?
What are the pros and cons of each configuration?
The only real way to know whether you gain an advantage from larger instances is to try and measure, not to ask here. This article has a table that says a medium instance has everything twice as large as a small one. However, in real life your mileage may vary, and how this affects your application only you can measure.
Smaller roles have one important advantage - if instances fail separately, you get smaller performance degradation. Given the "guaranteed uptime" requirement of having at least two instances, you have to choose between two medium and four small instances. If one small instance fails you lose 1/4 of your performance, but if one medium instance fails you lose half of your performance.
Instances will fail if, for example, you have an unhandled exception inside Run() of your role entry point descendant; sometimes something just goes wrong big time, your code can't handle it, and the instance is better off just restarting. Not that you should deliberately aim for such failures, but you should expect them and take measures to minimize their impact on your application.
So the bottom line is - it's impossible to say which gets better performance, but uptime implications are just as important and they are clearly in favor of smaller instances.
Good points by @sharptooth. One more thing to consider: when scaling in, the fewest number of instances is one, not zero. So, say you have a worker role that does some nightly task for an hour, and it requires either 2 Medium or 4 Small instances to get the job done in that timeframe. When the work is done, you may want to save costs by scaling to one instance and letting it run as one instance for 23 hours until the next nightly job. With a single Small instance, you'll burn 23 core-hours, and with a single Medium instance, you'll burn 46 core-hours. This thinking also applies to your Web role, but probably more so, since you will probably have a minimum of two instances to make sure you get the uptime SLA (it may not be as important to have an SLA on your worker if, say, your end user never interacts with it and it's just for utility purposes).
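For what it's worth, the idle-cost arithmetic above, spelled out:

```python
# One Small (1 core) vs one Medium (2 cores) idling between nightly jobs.
idle_hours = 23
cores = {"Small": 1, "Medium": 2}
for size, n in cores.items():
    print(f"one {size} instance idling: {idle_hours * n} core-hours")
# one Small instance idling: 23 core-hours
# one Medium instance idling: 46 core-hours
```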
My general rule of thumb when sizing: Pick the smallest VM size that can properly do the work, and then scale out/in as needed. Your choice will primarily be driven by CPU, RAM, and network bandwidth needs (and don't forget you need network when moving data between Compute and Storage).
For a start, you won't get the guaranteed uptime of 99% unless you have at least 2 role instances; this allows one to die and be restarted while the other one takes the burden. Otherwise, it is a case of how much you want to pay and what specs you get for each. It has not caused me any hassle having more than one role instance - Azure hides the difficult stuff.
One other point maybe worth a mention: if you use four small roles, you would be able to run two in one datacenter and two in another datacenter, and use Traffic Manager to route users to whichever is closer. This might give you some performance gains.
Two mediums will give you more room to cache things at the compute level; with more served from cache rather than coming off SQL Azure, it is going to be faster.
Ideally you have to follow @sharptooth and measure and test. This is all very subjective, and I second David: you want to start as small as possible and scale outwards. We run this way. You really want to think about designing your app around sharding, as this fits the Azure model better than the traditional approach of just getting a bigger box to run everything on; at some point you run into limits with the bigger-box approach, e.g. SQL Azure connection limits.
Tools like JMeter are your friend here and should give you some means to test your app.
http://jmeter.apache.org/
How do people (and startup companies) actually go about prototyping/deploying things on Amazon and keep costs reasonable? Last month we were experimenting with some specific applications, running our own Hadoop cluster, and managed to spend almost $1.5k just for tests. Sure, they have micro instances, but what if your application is so intensive it actually requires a larger instance just to test? So I'd like some input on how people go about doing this.
Several key issues:
Consider a local testbed for some purposes & consider if a given test really needs EC2. If it's really so hard to wrangle 2-4 machines to use as a testbed for Hadoop, there's a different problem. Get your head around whatever you're going to run, how Hadoop will play a role, and kick the tires on that. In time, you will also want to change your grid, upgrade software, tinker with other ideas, etc. When you go to EC2, you'll have smoothed some rough edges already.
Don't use a larger capacity machine than you need while getting the hang of things. If you're not pushing lots of data or compute cycles through at this stage, don't bother with cluster compute nodes, massive RAM instances, etc. Just focus on getting things set up correctly.
When you are ready to retarget to more powerful machines, try a few different machine setups. Maybe the cluster compute instances will pay off, maybe you don't need that kind of throughput: until you know your bottlenecks, don't overspend.
Be sure to use spot instances frequently during the testing phase. You will typically pay about 50% of the on-demand price.
If you get to a point where you want to pay for on-demand instances, have a separate instance start and stop Hadoop instances as needed - unless you need a big cluster all on cluster compute instances.
Prepare your AMIs to get launched as quickly as possible (under 1 minute) and never leave anything running overnight or over a weekend if it isn't necessary.
Until you get the system set up and running, you're basically paying tuition to learn how to get everything tailored to your needs. Just pay the "tuition" to learn each lesson (configurations, bottlenecks, scaling up, etc.), rather than try to take on everything at once. When you approach it as a series of lessons to be learned, it is less painful to spend the money, but as long as you know what you're about to test and learn, you will also spend money more judiciously.
Finally, compare the $1500 to the labor costs of this learning experience - it probably isn't a big deal in the long run. Once you know that something is going to be a reasonable block of computational effort, it's well engineered, and will finish quickly (albeit on many machines), it isn't so painful to spend money on it. Right now, it's hard to appreciate what you're learning because it doesn't yet benefit your org's goals.
To address the cost issue while doing a proof of concept of using the Amazon cloud:
I created a lightweight Java application using the Amazon AWS API which creates the Amazon cloud instances when I want to run a test on them. Once the test finishes (or fails to start), the application terminates the instances immediately and sends out a diagnostic mail.
So no Amazon instance is kept running or sitting idle, which can happen if you create/terminate manually or through a separate program.
Consider using spot instances. If you overbid, you can be almost sure they won't be terminated. In the long run they cost about the same as reserved instances, but you don't need to pay upfront. I believe you could also schedule the tests for off-peak hours, getting even better prices, or switch to on-demand if the spot instance price exceeds the on-demand one - Hadoop should handle it nicely. Check this article about spot instances. It also references two other articles that analyze the potential of spot instances.
I'd like a non-Amazon answer to this quandary...
It looks like, via spot instance pricing, you could run an instance for 22 or 23 cents an hour, for as many hours as you want, because the historical charts for hours/days/months show the spot price never goes over 21 (22?) cents per hour. That's about half the non-reserved instance cost for the same sized instance, and it's even less than a reserved instance would ever work out to be per hour. With no commitment.
Am I missing something? Do I have a complete and total misunderstanding of the spot/bid/ask instance mechanism? Or is this a cheap way to get a 24/7 instance while Amazon has a bunch of extra capacity?
Jeremy
No, you are not missing anything. I asked the same question many times when I first looked at Spot, followed by "why doesn't everyone use this all the time?"
So what's the downside? Amazon reserves the right to terminate a Spot instance at any time for any reason. Now, a normal "on-demand" instance might die at any time too, but Amazon goes to great lengths to keep them online and gives customers warnings well in advance (days / weeks) if the host server needs to be powered down for maintenance. If you have a Spot instance running on a server they want to reboot ... they will just shut it off. In practice, both are pretty reliable (but NOT 100%!!), and many roles can run 24/7 on Spot without issues. Just don't go whining to Amazon that your Spot instance got shut off and your entire database was stored on the ephemeral drive... of course if you do that on ANY instance, you are taking a HUGE (and very stupid) risk.
Some companies are saving tons of money with Spot. Here's a writeup on Vimeo saving 50%, and one on Pinterest saving 60%+ ($54/hr => $20/hr).
Why don't more companies use Spot for their instances? Many of the companies buying EC2 instance hours aren't very price-sensitive and are very, very risk-averse, especially when it comes to outages and to operational events that sap engineering effort. They don't want to deal with the hassle to save a few bucks, especially if AWS fees aren't a significant cost-center versus personnel. And for 24/7 instances, they already pay 1/2 price via "reserved instances", so the savings aren't as dramatic as they seem versus full-priced "on-demand" instances.

Spot isn't fully relevant to large customers either. You can be nearly certain that when a customer gets to be the size of a Netflix, they 1) need to coordinate with Amazon on capacity planning, because you can't just spin up 1/2 a datacenter on a whim, and 2) get significant volume discounts that bring their usage costs down into the Spot price range anyway. Plus, the first tier of cost cutting is to reclaim hardware that isn't really needed; at my last company, one guy found a bug where, as we cycled through boxes, we would "forget" about some of them, and shutting that down saved $100+k / month (yikes). Once companies burn through that fat, they start looking at Spot.
There's a second, less discussed reason Spot doesn't get used... It's a different API. Think about how this interacts with "organizational inertia" .... Working at a company that continuously spends $XX / hr on EC2 (and coming from a company that spent $XXXX / hr), engineers start instances with the tools they are given. Our Chef deployment doesn't know how to talk to spot. Rightscale (prev place) defaulted to launching on-demand instances. With some quantity of work, I could probably figure out how to make a spot instance, but why bother if my priority is to get role XYZ up and running by tomorrow? I'm not about to engineer a spot-based solution just for my one role and then evangelize why that was a good idea; it's gotta be an org-wide decision. If you read the Pinterest case-study I linked above, you'll notice they talk about migrating their whole deployment over from $54/hr to $20/hr on spot. Reading between the lines, they didn't choose to launch Spot instances 1-by-1; one day, they woke up and made a company-wide decision to "solve the spot problem" and 'migrate' their deployment tools to using Spot by default (probably with support for a flag that keeps their DB instances off Spot). I can't imagine how much money Amazon has made by making Spot a different API instead of being a flag on the normal EC2 API; Hint: it's boatloads .. as in, you could buy a boat and then fill it with cash until it sinks.
So if you are willing to tolerate slightly higher risk and / or you are somewhat price-sensitive ... then, yes, you absolutely can save a crapton of money by running your service under Spot 24/7.
Just make sure you are double-prepared to unexpectedly lose your instance (ie, take backups) .... something you ALREADY need to be prepared for with an "on-demand" instance that doesn't have 100.0% uptime either.
Think of it this way:
Instead of getting something 99.9% reliable, you are getting something 99.5% reliable and paying half-price
(I made those numbers up to convey the idea, but they probably aren't too far off from the truth).
So long as your bid price is above the spot instance market price, you can continue to run whatever spot instances you want, and only pay the market price.
However, when the market price goes above your bid price, you lose your instances. Without any warning. They just terminate. While the spot price rarely spikes, and when it does it tends to come back down again quickly, for many applications the possibility of losing all your instances without warning is unacceptable. You can insulate yourself against that possibility by bidding higher, but then you risk having to pay that much.
TL;DR: If your application is tolerant to sudden termination, then spot instances are great. But there is a risk involved in using them.
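As a sketch, this is what placing such a bid looks like with the spot-request API via boto3; the bid price, AMI, and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
resp = ec2.request_spot_instances(
    SpotPrice="0.25",                    # your bid ceiling in $/hr
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",       # placeholder
        "InstanceType": "m1.large",
    },
)
# AWS runs the instance whenever the market price is below your bid,
# billing you the market price, not the bid itself.
print(resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```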
I think these answers are slightly missing the point...
You need to select the most appropriate pricing for your workload and architect your solution with this in mind. AWS offers 3 pricing types:
Reserved Instances (low cost, high reliability, but pay up-front)
On-demand instances (highest cost, high reliability, but pay as you go)
Spot Instances (generally lowest cost, but can terminate unexpectedly)
Reserved Instances
- Use these for cost savings on long running / constant / predictable workloads.
On-demand Instances
- Use these for temporary workloads e.g. development / proof of concept / unpredictable workloads that can't be interrupted.
Spot Instances
- Use these for transitory workloads. Ensure applications are designed with this in mind (e.g. maintain state somewhere permanent and support the ability for new instances to resume where previous ones left off).
A useful design pattern can be to have a "pilot light" instance and use auto-scaling to bring spot instances on as required, and with a bit of cunning bring on-demand instances on if spot-instances fail to appear.
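A hedged sketch of the fallback half of that pattern in Python with boto3, assuming a spot request like the one in the earlier sketch has already been placed (all IDs and types are placeholders):

```python
import time

import boto3

ec2 = boto3.client("ec2")
req_id = "sir-xxxxxxxx"                  # from a prior request_spot_instances

deadline = time.time() + 300             # 5-minute grace period
while time.time() < deadline:
    req = ec2.describe_spot_instance_requests(
        SpotInstanceRequestIds=[req_id]
    )["SpotInstanceRequests"][0]
    if req["State"] == "active":         # spot capacity appeared
        break
    time.sleep(15)
else:
    # spot never showed up: cancel the bid and fall back to on-demand
    ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=[req_id])
    ec2.run_instances(ImageId="ami-xxxxxxxx", InstanceType="c5.large",
                      MinCount=1, MaxCount=1)
```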
TL;DR: Spot Instances are suitable for workloads that can pause and resume, but are not mission critical. They can be subject to extraordinary peaks (e.g. N. California m2.2xlarge spot price is usually $0.11/hr but has sustained peaks of $10.00/hr!).
Or is this a cheap way to get an 24/7 instance while Amazon has a bunch of extra capacity?
Spot on, if your bid price always remains above the spot price.
I couldn't find any other explicit mention of when they will terminate your instance.
I would have assumed it would be when they would require that capacity for customers willing to pay full charges for the instance, but then again, the spot price could technically go above the on-demand price.
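If you'd rather check the historical behavior yourself than rely on the charts, you can pull the price history via the API. A small boto3 sketch (instance type and region are illustrative, and only the first page of results is read):

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
end = datetime.now(timezone.utc)
history = ec2.describe_spot_price_history(
    InstanceTypes=["m1.large"],              # illustrative type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=end - timedelta(days=30),
    EndTime=end,
)["SpotPriceHistory"]                        # first page only, for brevity
prices = [float(p["SpotPrice"]) for p in history]
print("30-day min/max:", min(prices), max(prices))
```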
I am looking at the pricing of various cloud computing platforms, particularly Amazon's EC2, and a lot of the quotes are based on a unit called an Instance-Hour.
I am trying to get a handle on the exact definition of an instance-hour to better compare the costs of continuing to host a web-application versus putting it out on the cloud.
(1) Does it correspond to any of the Windows performance counters, in such a way that I could benchmark our current implementation and use it in their pricing calculators?
(2) How does a multi-processor instance figure into the instance-hour calculation?
An instance hour is simply a regular hour during which the instance was available to you, whether you used it or not. Amazon has priced their different instance types differently, so you pay for the type of resource you are getting, not for how much you use it.
So... 1. No, it's just a regular hour. 2. It doesn't; it's already factored into the price you pay for the instance per hour.
Note also that instance hours are billed rounded up (for Amazon EC2). So starting up an instance and immediately shutting it down again incurs the cost of 1 instance hour.
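In other words (a trivial sketch of the rounding rule):

```python
import math

def billed_hours(runtime_minutes):
    # hourly billing rounds up, so even a 5-minute run costs a full hour
    return max(1, math.ceil(runtime_minutes / 60))

print(billed_hours(5))    # -> 1
print(billed_hours(61))   # -> 2
```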
If you plan to start/stop an AWS EMR cluster rapidly within a single hour, and you want to avoid being billed for a full hour each time you do so, then:
start the cluster with the --alive argument from the CLI, which tells it to keep running;
then rapidly add steps to the same cluster instead (a sketch follows at the end):
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/AddingStepstoaJobFlow.html
Don't forget to stop the cluster when you're done! :)
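To make the "add steps to a running cluster" idea concrete, here is a minimal boto3 sketch; the job-flow ID and JAR location are placeholders:

```python
import boto3

emr = boto3.client("emr")
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",                 # the long-running cluster
    Steps=[{
        "Name": "my-step",                       # placeholder
        "ActionOnFailure": "CONTINUE",           # keep the cluster alive on failure
        "HadoopJarStep": {
            "Jar": "s3://my-bucket/my-job.jar",  # placeholder
            "Args": ["arg1", "arg2"],
        },
    }],
)
```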