What exactly is a Heroku compute on a Dyno?

Heroku describes their dynos here, listing the amount of memory each one has as well as its Compute resources. Nowhere do I see the definition of a "Compute".
When I run this command on the performance-l dynos it tells me it has 8 cores.
grep -c processor /proc/cpuinfo
I don't see how this relates to the 46x Compute that's on the chart. It seems like an arbitrary number to me and I don't understand exactly what it is.

Heroku's compute units are just Amazon's compute units (because Heroku runs on top of AWS).
One compute unit on AWS is defined as the CPU capacity of a 1.0-1.2 GHz 2007-era server processor.
Keep in mind though: these units are typically pretty variable depending on how many other active dynos are on the same underlying EC2 host.

Related

On memory-intensive on-demand compute

We are considering using Azure Functions to run some compute on. However, our computations could take up a lot of memory, let's say more than 5 GB.
As I understand it, there is no easy way to scale Azure Functions based on memory usage, i.e. if you reach 15 GB, start a new instance (since you don't want it to run over the maximum memory of your instance)?
Is there a way around this limitation?
OR
Is there another technical alternative to azure functions that provide pay per use and allows rapid scaling on demand?
Without any further details about what you are actually trying to compute, it is very hard to give you any meaningful advice.
But there are a few things you could consider.
For example, if you are processing CSV files and you know you need 1 GB of memory to process a file, then "spin up" an AWS Lambda function per file. You could use serverless orchestration such as AWS Step Functions to coordinate your processing. This way you are effectively splitting up your compute-intensive tasks.
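For illustration, a minimal Python/boto3 sketch of that "one Lambda per file" fan-out (not from the original answer): the bucket, prefix and function name are hypothetical, and in practice you would likely let S3 event notifications or a Step Functions state machine drive the fan-out instead of a loop in a script.

import json
import boto3

# Hypothetical fan-out: invoke one Lambda per CSV object in a bucket.
s3 = boto3.client("s3")
lam = boto3.client("lambda")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-input-bucket", Prefix="csv/"):
    for obj in page.get("Contents", []):
        lam.invoke(
            FunctionName="process-csv-file",   # placeholder function name
            InvocationType="Event",            # asynchronous, fire-and-forget
            Payload=json.dumps(
                {"bucket": "my-input-bucket", "key": obj["Key"]}
            ).encode("utf-8"),
        )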
Another option would be to automate "pay per use" yourself. You could use automation tools like Terraform to start an EC2 Spot Instance, run your computing task and, after it finishes, just shut the EC2 instance down. That is more or less pay-as-you-go with a bit of operations overhead on your part.
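If you prefer a script to Terraform, a rough boto3 equivalent of the "start a Spot Instance, run the task, shut it down" pattern could look like this sketch; the AMI ID, instance type and user-data script are placeholders, not a recommendation.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single Spot Instance (placeholder AMI and job script).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # hypothetical AMI
    InstanceType="r5.xlarge",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
    UserData="#!/bin/bash\n/opt/myjob/run.sh\n",  # hypothetical job script
)
instance_id = resp["Instances"][0]["InstanceId"]

# ... wait until the job reports completion (e.g. via S3 or SQS) ...

# Terminate as soon as the work is done so you stop paying for the instance.
ec2.terminate_instances(InstanceIds=[instance_id])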
There are also other services like AWS Fargate, which are marketed as "Serverless compute for containers", allowing you to run Docker containers in a pay per use manner. Fargate allows provisioning of up to 30GB of memory.
Another option would be to use services like ElastiCache (Redis or Memcached) to "externalize" your memory.
Which of these options would be the best for you is hard to tell, because it depends on your constraints: do you need everything to be stored in the "same memory" or can it be split up, what are your data structures, can the work be processed in chunks (to minimise memory usage), is latency important, how long does it take, how often do you need to run your tasks, and so on.
Note: There are probably equivalent Azure services to the AWS services I talked about in this answer.

Behavior of EC2 startup times

I have a use case where we have a very large computation job, which can be broken up into many small units of work fairly efficiently. There could be, let's say, 1,000 hours of computational work for an m4.large instance. If I wanted the result back within the next 10 minutes, that would mean I would need 6,000 instances to get the job done in time.
So far I have set up AWS Batch, and I haven't used more than the 20 m4.large instances your account comes with. I know I can ask AWS to raise the number of instances, but I still don't really know what the behaviour is if you suddenly try to provision thousands of on-demand instances, or whether AWS limits how many instances you can use.
So my question is: am I able to launch thousands of m4.large instances on-demand? And if so, what sort of times would I be looking at for all instances to reach the Running state?
I have done this many times with ~100 instances but never in the thousands of instances.
STEP 1: Open a support ticket with AWS. You will need to get your account approved, credit checked, etc. My customers are very big companies, so for them the credit and approval process is easy. If you are a little guy, I don't know.
STEP 2: Think through your VPC design and how you will address that many instances. It is one thing to have 5 instances going through a NAT Gateway, but a hundred systems will bring Internet connectivity to its knees.
STEP 3: Think through the networking bandwidth required. Do you need placement groups or very high-speed intranet or Internet connectivity?
STEP 4: Be prepared that you cannot launch all instances with a specific instance type (capacity not available error). Have a selection of instances that you can fall back on.
STEP 5: Create your own software (I use Python) to launch the instances, perform updates, install software, etc. You can then poll the instances using the Boto3 EC2 API to determine when all the instances are running. The time taken for 1,000 instances won't be much different than for 1 instance.
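A minimal sketch of STEP 5, assuming Boto3 and a placeholder AMI; a real script would also handle the capacity errors mentioned in STEP 4, and Boto3 also ships an "instance_running" waiter that can do this polling for you.

import time
import boto3

ec2 = boto3.client("ec2")

# Launch a batch of workers (placeholder AMI, type and count).
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="m4.large",
    MinCount=100,
    MaxCount=100,
)
ids = [i["InstanceId"] for i in resp["Instances"]]

# Poll until every instance reaches the "running" state.
while True:
    desc = ec2.describe_instances(InstanceIds=ids)
    states = [i["State"]["Name"]
              for r in desc["Reservations"]
              for i in r["Instances"]]
    if all(s == "running" for s in states):
        break
    time.sleep(15)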
Now for the real world. If your job takes 1,000 hours, launching 1,000 instances will not reduce it to 1 hour unless you have a really scalable software design with minimum inter-machine communications required. Once you go beyond 10 systems, networking bandwidth and communications overhead becomes an issue. Even though AWS's resources are huge, launching 1,000 EC2 instances at one time by one customer is not a common launch case.
I would also NOT launch 1,000 instances to get processing down to 10 minutes. It can take 10 minutes for your instances to come online, get updated, synchronize, etc. This means that you will be spending 50% of your budget on waiting time. For really large jobs today we prefer to use Hadoop / Spark where scaling to hundreds of machines is realistic.
You can contact AWS Customer Service to increase your EC2 limits (use the link shown in the Limits section of the EC2 management console). They will verify your use-case.
You might also consider using Spot Pricing to lower your costs. Spot instances take longer to provision.
Sample use-case: Gigaom | Cycle Computing once again showcases Amazon’s high-performance computing potential
There are also services like Spotinst that can help you provision servers at the lowest possible cost.

Rapid AWS autoscaling

How do you configure AWS autoscaling to scale up quickly? I've setup an AWS autoscaling group with an ELB. All is working well, except it takes several minutes before the new instances are added and are online. I came across the following in a post about Puppet and autoscaling:
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date.
http://puppetlabs.com/blog/rapid-scaling-with-auto-generated-amis-using-puppet/
Is this true? Can time to scale be reduced to a few seconds? Would using puppet add any performance boosts?
I also read that smaller instances start quicker than larger ones:
Small Instance 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 5 and 6 minutes us-east-1c
Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform with a base install of CentOS 5.3 AMI
Amount of time from launch of instance to availability:
Between 11 and 18 minutes us-east-1c
Both were started via command line using Amazons tools.
http://www.philchen.com/2009/04/21/how-long-does-it-take-to-launch-an-amazon-ec2-instance
I note that the article is old and my c1.xlarge instances are certainly not taking 18 min to launch. Nonetheless, would configuring an autoscale group with 50 micro instances (with a scale-up policy of a 100% capacity increase) be more efficient than one with 20 large instances? Or potentially creating two autoscale groups, one of micros for quick launch time and one of large instances to add CPU grunt a few minutes later? All else being equal, how much quicker does a t1.micro come online than a c1.xlarge?
You can increase or decrease the reaction time of the autoscaler by adjusting the "--cooldown" value (in seconds).
Regarding the instance types to use, this is mostly driven by the application type, and a decision on this topic should be taken after close performance monitoring and production tuning.
The time to scale can be lowered from several minutes to a few seconds if the AMI you use for a group of nodes is already up to date. This way, when Puppet runs on boot, it has to do very little, if anything, to configure the instance with the node's assigned role.
The advice here is talking about having your AMI (The snapshot of your operating system) as up to date as possible. This way, when auto scale brings up a new machine, Puppet doesn't have to install lots of software like it normally would on a blank AMI, it may just need to pull some updated application files.
Depending on how much work your Puppet scripts do (apt-get install, compiling software, etc) this could save you 5-20 minutes.
The two other factors you have to worry about are:
How long it takes your load balancer to determine you need more resources (e.g. a policy that dictates "new machines should be added when CPU is above 90% for more than 5 minutes" would be less responsive and more likely to lead to timeouts compared to "new machines should be added when CPU is above 60% for more than 1 minute"; see the sketch after this list)
How long it takes to provision a new EC2 instance (smaller instance types tend to take shorter times to provision)
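As a sketch of the first factor, here is a hedged boto3 example of the more aggressive alarm (60% CPU for one 60-second period instead of 90% for 5 minutes); the alarm name, group name and policy ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

scale_out_policy_arn = "<scale-out-policy-arn>"   # placeholder

# Trigger after a single 60-second period above 60% CPU.
cloudwatch.put_metric_alarm(
    AlarmName="asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=60.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out_policy_arn],
)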
How soon the ASG responds would depend on 3 things:
1. Step - how much to increase by, as a percentage or a fixed number. With a large step you can increase rapidly; the ASG will launch the entire step in one go (a sketch combining step size and cooldown follows this list).
2. Cooldown period - this governs how soon the next increase can happen. If the previous increase is still within the defined cooldown period (in seconds), the ASG will wait and not take action on the next increase yet. A small cooldown period enables the next step sooner.
3. AMI type - how much time an AMI takes to launch depends on the type of AMI; many factors come into play. All things being equal, fully baked AMIs launch much faster.
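Combining the step and cooldown points, a hedged boto3 sketch of a step scaling policy (group and policy names are placeholders); note that step scaling uses an estimated instance warmup rather than the simple-scaling cooldown.

import boto3

autoscaling = boto3.client("autoscaling")

# Add more capacity the further the metric is above the alarm threshold.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",        # placeholder group name
    PolicyName="scale-out-steps",
    PolicyType="StepScaling",
    AdjustmentType="PercentChangeInCapacity",
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20,
         "ScalingAdjustment": 50},         # +50% capacity for a small breach
        {"MetricIntervalLowerBound": 20,
         "ScalingAdjustment": 100},        # +100% capacity for a large breach
    ],
    EstimatedInstanceWarmup=120,
)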

Prototyping for Amazon EC2

How do people (and startup companies) actually go about prototyping/deploying things on Amazon and keep costs reasonable? Last month we were experimenting with some specific applications and running our own Hadoop cluster, and managed to spend almost $1.5k just for tests. Sure, they have micro instances, but what if your application is so intensive it actually requires a larger instance even to test? So I'd like some input as to how people go about doing this.
Several key issues:
Consider a local testbed for some purposes & consider if a given test really needs EC2. If it's really so hard to wrangle 2-4 machines to use as a testbed for Hadoop, there's a different problem. Get your head around whatever you're going to run, how Hadoop will play a role, and kick the tires on that. In time, you will also want to change your grid, upgrade software, tinker with other ideas, etc. When you go to EC2, you'll have smoothed some rough edges already.
Don't use a larger capacity machine than you need while getting the hang of things. If you're not pushing lots of data or compute cycles through at this stage, don't bother with cluster compute nodes, massive RAM instances, etc. Just focus on getting things set up correctly.
When you are ready to retarget to more powerful machines, try a few different machine setups. Maybe the cluster compute instances will pay off, maybe you don't need that kind of throughput: until you know your bottlenecks, don't overspend.
Be sure to use spot instances frequently during the testing phase. You will typically pay about 50% of the on-demand price.
If you get to a point where you want to pay for on-demand instances, have a separate instance start and stop Hadoop instances as needed - unless you need a big cluster all on cluster compute instances.
Prepare your AMIs to get launched as quickly as possible (under 1 minute) and never leave anything running overnight or over a weekend if it isn't necessary.
Until you get the system set up and running, you're basically paying tuition to learn how to get everything tailored to your needs. Just pay the "tuition" to learn each lesson (configurations, bottlenecks, scaling up, etc.), rather than try to take on everything at once. When you approach it as a series of lessons to be learned, it is less painful to spend the money, but as long as you know what you're about to test and learn, you will also spend money more judiciously.
Finally, compare the $1500 to the labor costs of this learning experience - it probably isn't a big deal in the long run. Once you know that something is going to be a reasonable block of computational effort, it's well engineered, and will finish quickly (albeit on many machines), it isn't so painful to spend money on it. Right now, it's hard to appreciate what you're learning because it doesn't yet benefit your org's goals.
To address the cost issue while doing a proof of concept on the Amazon cloud:
I created a lightweight Java application using the Amazon AWS API, which creates the Amazon cloud instances when I want to run a test on them. Once the test finishes, or fails to start, the application terminates the instances immediately and sends out a diagnostic mail.
So no Amazon instance is kept running or sitting idle, which can happen if you create/terminate manually or through a separate program.
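The original answer used Java; a rough Python/boto3 sketch of the same idea might look like this, where run_tests and send_diagnostic_mail are hypothetical helpers and the instance parameters are placeholders.

import boto3

ec2 = boto3.resource("ec2")

# Start the test instances (placeholder AMI, type and count).
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m4.large",
    MinCount=2,
    MaxCount=2,
)
ids = [i.id for i in instances]

try:
    run_tests(ids)   # hypothetical: run the test suite against the instances
finally:
    # Terminate no matter what, so nothing is left running (and billing).
    boto3.client("ec2").terminate_instances(InstanceIds=ids)
    send_diagnostic_mail(ids)   # hypothetical notification step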
Consider using spot instances. If you overbid, you can be almost sure they won't be terminated. In the longer run they are priced at about the level of reserved instances, but you don't need to pay upfront. I believe you could also schedule the tests for off-peak hours, reaching even better prices, or switch to on-demand if the spot price exceeds the on-demand price - Hadoop should handle it nicely. Check this article about spot instances. It also references two other articles that analyze the potential of spot instances.
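To illustrate the "switch to on-demand if the spot price exceeds on-demand" idea, a hedged boto3 sketch that checks the most recent spot price; the on-demand price is hard-coded here purely as an assumption.

import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ON_DEMAND_PRICE = 0.10   # assumed on-demand $/hour for the chosen instance type

history = ec2.describe_spot_price_history(
    InstanceTypes=["m4.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.datetime.utcnow(),
)
spot_price = float(history["SpotPriceHistory"][0]["SpotPrice"])

use_spot = spot_price < ON_DEMAND_PRICE
print(f"current spot price: {spot_price}, use spot: {use_spot}")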

Can someone explain the concept of an "instance-hour" as used by cloud computing providers?

I am looking at the pricing of various cloud computing platforms, particularly Amazon's EC2, and a lot of the quotes are based on a unit called an Instance-Hour.
I am trying to get a handle on the exact definition of an instance-hour to better compare the costs of continuing to host a web-application versus putting it out on the cloud.
(1) Does it correspond to any of the Windows performance counters in such a way that I could benchmark our current implementation and use it in their pricing calculators?
(2) How does a multi-processor instance figure into the instance-hour calculation?
An instance-hour is simply a regular hour during which the instance was available to you, whether you used it or not. Amazon has priced their different types of instances differently, so you pay for the type of resource you are getting, not how much you use it.
So... 1. No, it's just a regular hour. 2. It doesn't; it's already factored into the price you pay for the instance per hour.
Note also that instance hours are billed rounded up (for Amazon EC2). So starting up an instance and immediately shutting it down again incurs the cost of 1 instance hour.
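A tiny worked example of that rounding, purely illustrative (the hourly rate is an assumption):

import math

hourly_rate = 0.10     # assumed $/hour for the instance type
minutes_used = 61      # the instance ran just over an hour

billed_hours = math.ceil(minutes_used / 60)   # hours are rounded up
print(billed_hours * hourly_rate)             # 2 hours billed -> $0.20, not ~$0.10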
If you plan to start/stop an AWS EMR cluster repeatedly within a single hour, and you want to avoid being billed for a full hour each time you do so, then
start the cluster with the --alive argument from the CLI, which means to leave it running,
and then rapidly add steps to the same cluster instead (see the sketch at the end of this answer):
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/AddingStepstoaJobFlow.html
Don't forget to stop the cluster when you're done! :)
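For reference, a hedged boto3 sketch of adding a step to an already-running cluster; the cluster ID, JAR location and arguments are placeholders.

import boto3

emr = boto3.client("emr")

# Add another step to a cluster started with --alive / keep-alive,
# instead of paying for a new cluster (and a new billed hour) per job.
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",                       # placeholder cluster ID
    Steps=[{
        "Name": "my-extra-step",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "s3://my-bucket/jobs/my-job.jar",   # placeholder JAR
            "Args": ["arg1", "arg2"],
        },
    }],
)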
