Heroku Hobby Plan pricing calculation for 100 applications

I am very confused by the Heroku Hobby plan, which is priced at $7/dyno/month. Say I have deployed 100 applications on the Hobby plan; how much will it cost me to host them? Will it cost me as follows:
100 applications * $7 = $700/month?
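For what it's worth, that arithmetic is right if every app runs one always-on Hobby dyno. A minimal sketch (the helper function is mine, and, per the answers below, dynos that run less than a full month are prorated, so this is the upper bound):

    HOBBY_PRICE_PER_DYNO = 7.00  # USD per always-on Hobby dyno per month

    def monthly_cost(num_apps, dynos_per_app=1):
        # Each always-on Hobby dyno is capped at the advertised $7/month.
        return num_apps * dynos_per_app * HOBBY_PRICE_PER_DYNO

    print(monthly_cost(100))  # 100 apps * 1 dyno * $7 = 700.0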

Related

How does Heroku charge free-plan users after they reach the 1,000 free dyno hours?

I have a question regarding Heroku billing. Every account gets 1,000 free dyno hours per month (if we add credit card info). That is technically enough for one app, but if there is more than one and they are awake all the time, then I will exceed the 1,000 hours per month. Do you know how I will be charged for extra hours after the 1,000 hours are reached?
Second question: the other dyno plans (Hobby, Standard, etc.) charge a monthly rate like $7, $25, etc. I know these paid plans never let your app sleep and add more RAM, free SSL, etc., but does the usage pricing still work like the free plan (1,000 free dyno hours and then a charge for extra hours), or with these plans will I never be charged more than the $7, $25, etc. for dyno-hour usage?
You won't be charged after the 1,000 hours are reached. Your dyno app will just shut down and not run.
Paid dynos don't use free dyno hours. If you run the app continuously for a month, it will cost $7, $25, etc. If it runs for just a day, you will be charged 1/30 of the $7, $25, etc. Heroku bills hourly usage, but the charge won't exceed the advertised monthly price.
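A rough sketch of that capped proration, assuming a 720-hour billing month (the helper is hypothetical, not a Heroku API):

    def prorated_charge(monthly_price, hours_run, hours_in_month=720):
        # Bill for actual runtime, never exceeding the advertised monthly price.
        return min(monthly_price, monthly_price * hours_run / hours_in_month)

    print(prorated_charge(7.00, 24))    # one day of a $7 dyno: 1/30 of $7, about 0.23
    print(prorated_charge(25.00, 900))  # more than a month's worth of hours: capped at 25.0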

What's the minimum time counted for billing in Heroku?

EC2 instance hours are calculated hour by hour: if you just start and stop an instance, it is still counted as a full hour.
How does Heroku handle this? By the minute or by the hour?
Let's assume my app usage exceeds the 750 free dyno hour limit.
Heroku prorates to the second. A dyno costs $0.05 per hour, so if you go over 750 hours you will be charged at $0.05 per hour, or $0.000833333 per minute. In fact, pretty much all add-ons follow the same billing model too.
You can read about billing and charges at https://devcenter.heroku.com/articles/usage-and-billing#cost
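A quick sketch of that overage math, using the $0.05/hour rate and the 750 free hours quoted above (illustrative only):

    FREE_HOURS = 750          # free dyno hours per month
    RATE_PER_HOUR = 0.05      # USD per dyno hour beyond the allowance

    def overage_charge(hours_used):
        # Only hours beyond the free allowance are billed, prorated to the second.
        billable = max(0.0, hours_used - FREE_HOURS)
        return billable * RATE_PER_HOUR

    print(overage_charge(760))    # 10 extra hours  -> 0.5
    print(overage_charge(750.5))  # 30 extra minutes -> 0.025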
I will say, though, that the previous answer applies more to a web dyno than to a worker dyno. Heroku's automated sleep cycle tries to stop a web dyno from running when it has been idle for more than an hour, and a free web dyno must sleep at least six hours per day for it not to incur charges. As long as you set the scaling to 1 for your web dyno and it sleeps, it should be free.
That said, when you add your first worker dyno, those same automations aren't applied: it won't be triggered to sleep when it idles for an hour. This means that unless you manage it yourself, you'll likely be charged $34.50 per worker dyno per month. I wouldn't exactly call this lying to the customer, but most people start off with that first free dyno, get comfortable with it, and innocently assume the next dyno will behave the same way. It won't, and you'll likely end up paying more than you bargained for. That's $414 per year for a dyno. Compare this with Amazon's t2.micro at $150 per year for one instance, or $75 per year at a 50% duty cycle.
As they say, "the devil is in the details". Heroku might be cheap for vanity websites, but it gets costly once you have a database and a worker thread (even without any other scaling).
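For reference, the comparison works out as follows (a sketch using the answerer's figures, which are not current AWS or Heroku prices):

    worker_dyno_monthly = 34.50                  # per the answer above
    worker_dyno_yearly = worker_dyno_monthly * 12
    t2_micro_yearly = 150.00                     # the answerer's figure for one always-on instance

    print(worker_dyno_yearly)                              # 414.0
    print(round(worker_dyno_yearly / t2_micro_yearly, 2))  # 2.76, i.e. ~2.8x the EC2 cost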

Amazon ec2 price calculations per visit

I'm trying to build a financial model to run some projections for a web project. I want to use Amazon EC2, but I don't know how to translate traffic levels (i.e. visitors, length of visit, etc.) into instance costs.
Can someone help me with the calculations?
For example, if I have 1,000 visits to the site of 3 minutes each, how many instances will that use? What about 10,000, 100,000, and so on?
Any help is greatly appreciated.
EC2 instances are priced by the number of hours the instance is ON, regardless of CPU usage. Think of it like renting a dedicated machine for X hours, not like buying CPU cycles.
The variable part of the cost is bandwidth consumed, so you would have to figure out how much bandwidth the 1,000 users will consume during those 3 minutes.
Amazon has a calculator to help you figure out your costs: http://calculator.s3.amazonaws.com/calc5.html
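So the variable to model is data transfer, not instance count. A hedged starting point (the per-visit payload and per-GB rate below are placeholder assumptions; replace them with your own measurements and Amazon's current transfer pricing):

    def monthly_bandwidth_cost(visits, mb_per_visit=2.0, usd_per_gb=0.12):
        # Data transferred out, converted to GB, times the per-GB transfer rate.
        # mb_per_visit and usd_per_gb are illustrative placeholders only.
        gb_out = visits * mb_per_visit / 1024
        return gb_out * usd_per_gb

    for visits in (1000, 10000, 100000):
        print(visits, round(monthly_bandwidth_cost(visits), 2))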

How can I do a capacity planning of my web application and decide the deployment architecture?

I have an ASP.NET web application deployed on a small AWS instance (dual-core AMD, 2.60 GHz, 1.7 GB RAM). I would like to perform load testing on this server for 300 concurrent users, and then draw up tentative capacity planning and a deployment architecture for the 250,000 registered users my application is meant to serve.
I am very new to this area and have not done any kind of load testing before.
The use case and scenario of my application are as follows:
Scenario: 250,000 registered users in the database
Concurrency: 5%-7%, approximately 17,500 concurrent users
Each user has a book shelf, and assume each user is subscribed to 10 books. Each book is around 25 MB in size, with 400 pages.
Use cases
User login
Database authentication & authorization
View book shelf with book images
Book Shelf (.swf) - 400 KB (loaded for each user)
10 book images - approximately 20 KB per image
catalog.xml - 30 KB per user
Note: approximately 650 KB of data gets downloaded onto the client machine
Browse book: on clicking a book image, the following files (and sizes) are downloaded to the client machine
One time:
Reader.swf - 950 KB (first download)
XML data of approximately 100 KB per book (on click):
Book.xml
Annotation.xml
Catalog.xml
Usersettings.xml - 40 KB * 4 = 160 KB per user (.swf)
Note: approximately 1,200 KB of data gets downloaded onto the client machine
Could someone please suggest how I can proceed with this?
Thanks very much in advance,
Amar
Completing the first goal (test 300 users) is pretty straightforward - choose a load testing tool, build the scenarios and test. Then tune/optimize and repeat.
But I think your bigger question is how to approach testing and planning for your full capacity, which you say is ~17,500 concurrent users. First, make sure that number (7% of the user base) is the peak concurrency, not the average. You need to test the peak.
So assuming that you are planning a load-balanced multiple-server cluster to handle that load, the next step is to determine the maximum capacity of a single web/app server, without the load-balancer in place. This gives you a baseline that you can use to judge the performance of the cluster. This is a really important step and many of our clients skip this step, to their own detriment. It is important because there are many conditions under which a load-balanced system does not scale linearly with the number of servers in the cluster. Ideally it should and good systems get pretty close. You'd be surprised how frequently we see systems that don't scale well at all. We've even seen a few systems that actually have lower capacity as a cluster than a single server could handle on its own.
Once you have that baseline established, you can make a preliminary estimate about the total number of servers you'll need and you can build your cluster. I recommend next testing with 2 web/app servers. This should nearly double your capacity. If it doesn't then you need to determine why before moving on to larger tests. Likely candidates are the load balancer setup or the database (if a single database server is servicing all the web/app servers). Occasionally something more fundamental to the application architecture is at play.
When you are satisfied that scaling from 1 to 2 servers is performing optimally, then you can proceed to scale up to your full cluster and test maximum capacity. Be prepared to back-track if you don't see the scalability you expected - test with 3, 4, 5 servers, etc.
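To make that last step concrete, here is a back-of-the-envelope sketch; the per-server capacity and the scaling efficiency are hypothetical values you would measure in the baseline and two-server tests described above:

    import math

    def servers_needed(peak_users, users_per_server, scaling_efficiency=0.85):
        # Effective per-server capacity shrinks as the cluster grows;
        # derive scaling_efficiency from your 1-server vs 2-server tests.
        return math.ceil(peak_users / (users_per_server * scaling_efficiency))

    print(servers_needed(17500, 300))  # e.g. a 300-user single-server baseline -> 69 servers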
I hope that helps! Good luck :>
This link: http://support.microsoft.com/kb/231282, has links to some tools to stress test your website.
This is obviously a complicated area. You may have 250,000 registered users (really that many?), but how many are concurrent, and which areas of the website will they be using?
All of these things (and many more), will impact the capacity planning for your system.

Average EC2 Uptime?

Curious as to what 99.95% uptime REALLY means: is it really going to go down 7 minutes a month? Please post your longest/average uptimes on EC2, thanks.
Usually uptime is calculated on a yearly basis. So if you have a Service Level Agreement for 99.95%, this means:
365 * 0.0005 = 0.1825 days, or 4.38 hours
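That conversion generalizes to any SLA percentage; a quick sketch (assuming a 365-day year, as above):

    def allowed_downtime_hours(sla_percent, days=365):
        # The downtime budget is (1 - SLA) of the total period.
        return days * 24 * (1 - sla_percent / 100)

    print(round(allowed_downtime_hours(99.95), 2))      # 4.38 hours per year
    print(round(allowed_downtime_hours(99.95, 30), 3))  # 0.36 hours per 30-day month

Note that at 99.95%, a 30-day month allows about 21.6 minutes of downtime, not 7.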
If during a year of service there is an outage and your system is down for more than that, then you are entitled to compensation.
As for your question: I have had a server running non-stop on EC2 for about 3 months now. I would say that their uptime is good, but if you have a mission-critical application you definitely need a fail-over solution. Good uptime only means they can respond to an outage quickly. Even 99.9999% uptime won't save you if you aren't prepared for an outage.
Read the SLA carefully (http://aws.amazon.com/ec2-sla/): they only count "Region Unavailable" as downtime, and what's more, they only count it if the region is down for 5 consecutive minutes.
"Annual Uptime Percentage" is calculated by subtracting from 100% the percentage of 5 minute periods during the Service Year in which Amazon EC2 was in the state of "Region Unavailable."
By my count this means any downtime of less than 4 minutes is not countable. Also, if they do break the SLA, they are only on the hook for 10% of the bill for the month in which you had the largest downtime.
So if they were down for all of January and your bill was $100, they would apply a $10 credit to your account.
I would have a hard time convincing my boss that this is a serious product with an SLA like that.
SLAs are useless. They only measure how much risk the company is willing to take on and have no bearing on actual uptime. I've seen SLAs with heavy penalties offered when the company knew it could not meet them, just to land the sale.
I have one client with 400+ days of EC2 uptime and another with 300+ days, as measured by web pulse. This is by far the most reliable service I've worked with.
For my single instance running in the US-East availability zone, 9 months, 0 downtime.
Since Amazon switched to provide an SLA, I've never had an instance go down on me. When I've had instances go down in the past, Amazon has always sent a message informing me that the instance is degraded before it actually disappeared, so I've had time to start up a new instance.
The previous answer makes a good point, though: EC2's service model dictates that you write your apps to handle failover to a new server if you're not prepared for extended downtime.
conrad@papa ~ $ uptime
04:42:36 up 495 days, 8:51, 8 users, load average: 0.02, 0.02, 0.00
Checking out the AWS Service Health Dashboard will give you a good idea of any current or past issues. My experience is that AWS uptime is better than most "traditional" hosting options (even a full-blown redundant RackSpace setup...).
However, choosing AWS simply for uptime is like buying a car for the keychain (OK, almost... ;)). With an architecture that takes advantage of AWS, the big benefit is scaling (without upfront costs).
SLAs... Guaranteed uptime...
These are all very nice taglines. But when the servers aren't available for an hour (March 1, 2012, in the EU region) and the clients start calling, it won't help you that they still have 300 days of uptime.
And when lightning struck one of their three datacenters in the EU, we all found out that they have no off-site redundancy, and that having 3 datacenters doesn't mean a thing.
One must love the phrase "degraded performance", which actually means: "cross your fingers and pray that your data will still be available after the catastrophe passes".
I'm still trying to find any official or unofficial statistics about the availability percentages of all of their datacenters.
No luck thus far...
I've never had downtime on EC2; however, I keep local backups, make daily images of my machines, and copy them to another availability zone, just in case. I use Twilio to alert me with a phone call to all my devices if a machine is unreachable. Then I can just log in to EC2 and fire up a machine in another availability zone; worst case, I'll be down for a few minutes.
Which in my case is potentially pretty sucky, because my machines are doing 24/7 forex trading.
My rule: know the potential cost of downtime, and be willing to invest that much in redundancy, assuming downtime will happen - because it will.
That said, EC2 has never let me down. It probably helps that my servers are not in an area of the country where natural disasters are common. If you're in an earthquake zone, Tornado Alley, or a potential hurricane path, downtime truly is inevitable.
