How do you decide when to upgrade servers vs. add more servers? [closed] - amazon-ec2

I run a small SaaS website with a (hopefully) growing number of customers. Right now, I have three Amazon EC2 instances. A micro instance runs the web frontend (Rails), a small instance runs the API layer (Rails), and a micro instance runs the data layer (Postgres). Please don't judge; this is more than adequate for my needs at this time.
As I add additional customers, I know that eventually I am going to have to a) increase the horsepower of the existing servers and/or b) load-balance the web/API servers and cluster the database servers.
My question is - how do you decide from a cost/benefit standpoint when to upgrade servers (i.e. micro-->small-->medium-->large) vs. adding additional servers of the same type? I understand that there are benefits to load balancing (such as keeping you online in the event of a server crash or an issue with an availability zone).
Obviously, Amazon charges more for servers with more memory and processing power, but they also charge for ELB and similar services. If I were to increase to two servers in each layer today, that would double my costs, plus the cost of ELB (not including data transfer costs). It seems like a bottleneck this early on would be best addressed by upgrading to medium or better servers.
What are some good rules of thumb for when to build up as opposed to out? Please keep in mind that my choice of software (cough Rails) is very memory intensive when processing large amounts of data.

You mentioned some pros of scaling out (redundancy), but not the cons: specifically, more complex deployments and increased overhead (more operating-system resources being used).
It isn't just up vs out, it is up vs out for each layer. The db tier generally wants to scale up since it avoids clustering/replication headaches. The application tier can go either way. Web servers scale out nicely since they are handling requests and the requests are separate.
Specific to Amazon and its current pricing (https://aws.amazon.com/ec2/pricing/), scaling up vs. out is roughly equivalent in cost, with scaling up (to about a large instance) slightly ahead.
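For a rough feel of the arithmetic, here is a minimal sketch comparing the two paths. The hourly prices and the ELB fee below are illustrative placeholders, not current AWS rates, so plug in the numbers from the pricing page before drawing conclusions.

```python
# Back-of-the-envelope cost comparison: scale up vs. scale out.
# All prices below are illustrative placeholders -- substitute the
# current values from https://aws.amazon.com/ec2/pricing/.

HOURS_PER_MONTH = 730

price_per_hour = {
    "small":  0.023,   # hypothetical on-demand rate
    "medium": 0.046,   # hypothetical on-demand rate
}
elb_per_hour = 0.025   # hypothetical load balancer hourly fee

def monthly(cost_per_hour):
    return cost_per_hour * HOURS_PER_MONTH

# Option A: scale up -- replace one small with one medium, no ELB needed.
scale_up = monthly(price_per_hour["medium"])

# Option B: scale out -- two smalls behind a load balancer
# (data transfer and per-request charges ignored here).
scale_out = monthly(2 * price_per_hour["small"] + elb_per_hour)

print(f"scale up  (1x medium):      ${scale_up:7.2f}/month")
print(f"scale out (2x small + ELB): ${scale_out:7.2f}/month")
```

With these placeholder numbers the raw compute is roughly a wash; the ELB hourly fee is what puts scaling out slightly behind, which matches the observation above.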

Scaling up has the benefit of a faster CPU and more RAM, so your app's performance may increase as a result. Yes, there comes a point where scaling out wins over scaling up, but from my perspective, we have chosen to scale up whenever possible (we're on AWS as well), as we notice a performance boost to our app each time we do, in addition to gaining additional capacity as our user base grows.

Related

Microservices vs Monolithic Architecture [closed]

What are the advantages and disadvantages of microservices and monolithic architecture?
When should you choose a microservice architecture over a monolithic architecture?
This is a very important question because a few people get lured by all the buzz around microservices, and there are tradeoffs to consider. So, what are the benefits and challenges of microservices (when compared with the monolithic model)?
Benefits
Deployability: more agility to roll out new versions of a service due to shorter build+test+deploy cycles. Also, flexibility to employ service-specific security, replication, persistence, and monitoring configurations.
Reliability: a microservice fault affects that microservice alone and its consumers, whereas in the monolithic model a service fault may bring down the entire monolith.
Availability: rolling out a new version of a microservice requires little downtime, whereas rolling out a new version of a service in the monolith requires a typically slower restart of the entire monolith.
Scalability: each microservice can be scaled independently using pools, clusters, grids. The deployment characteristics make microservices a great match for the elasticity of the cloud.
Modifiability: more flexibility to use new frameworks, libraries, datasources, and other resources. Also, microservices are loosely-coupled, modular components only accessible via their contracts, and hence less prone to turn into a big ball of mud.
Management: the application development effort is divided across teams that are smaller and work more independently.
Design autonomy: the team has freedom to employ different technologies, frameworks, and patterns to design and implement each microservice, and can change and redeploy each microservice independently.
Challenges
Deployability: there are far more deployment units, so there are more complex jobs, scripts, transfer areas, and config files to oversee for deployment. (For that reason, continuous delivery and DevOps are highly desirable for microservice projects.)
Performance: services more likely need to communicate over the network, whereas services within the monolith may benefit from local calls. (For that reason, the design should avoid "chatty" microservices; see the batching sketch after this list.)
Modifiability: changes to the contract are more likely to impact consumers deployed elsewhere, whereas in the monolithic model consumers are more likely to be within the monolith and will be rolled out in lockstep with the service. Also, mechanisms to improve autonomy, such as eventual consistency and asynchronous calls, add complexity to microservices.
Testability: integration tests are harder to set up and run because they may span different microservices in different runtime environments.
Management: the effort to manage operations increases because there are more runtime components, log files, and point-to-point interactions to oversee.
Memory use: several classes and libraries are often replicated in each microservice bundle and the overall memory footprint increases.
Runtime autonomy: in the monolith the overall business logic is collocated. With microservices the logic is spread across microservices. So, all else being equal, it's more likely that a microservice will interact with other microservices over the network--that interaction decreases autonomy. If the interaction between microservices involves changing data, the need for a transactional boundary further compromises autonomy. The good news is that to avoid runtime autonomy issues, we can employ techniques such as eventual consistency, event-driven architecture, CQRS, cache (data replication), and aligning microservices with DDD bounded contexts. These techniques are not inherent to microservices, but have been suggested by virtually every author I've read.
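To make the "chatty microservices" point concrete, here is a minimal sketch of the difference between one-call-per-item and a batched endpoint. The pricing-service URL and its `/prices` routes are hypothetical, invented purely for illustration; the point is that the batched contract pays the network latency once instead of N times.

```python
# Chatty vs. batched calls between services.
# The pricing-service URL and routes below are hypothetical.
import requests

PRICING_SERVICE = "http://pricing-service.internal"

def get_prices_chatty(product_ids):
    # One network round trip per product: N calls, N x latency.
    return {
        pid: requests.get(f"{PRICING_SERVICE}/prices/{pid}").json()["price"]
        for pid in product_ids
    }

def get_prices_batched(product_ids):
    # One round trip for the whole list: latency is paid once.
    resp = requests.get(
        f"{PRICING_SERVICE}/prices", params={"ids": ",".join(product_ids)}
    )
    return {item["id"]: item["price"] for item in resp.json()}
```

Designing coarser-grained contracts like the batched one is one of the simplest ways to keep the performance cost of the network from dominating.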
Once we understand these tradeoffs, there's one more thing we need to know to answer the other question: which is better, microservices or a monolith? We need to know the non-functional requirements (quality attribute requirements) of the application. Once you understand how important performance is vs. scalability, for example, you can weigh the tradeoffs and make an educated design decision.
While I'm relatively new to the microservices world, I'll try to answer your question as completely as possible.
When you use the microservices architecture, you get increased decoupling and separation of concerns, since you are literally splitting up your application.
As a result, your codebase will be easier to manage (each service stays up and running independently of the others). Therefore, if you do this right, it will be easier in the future to add new features to your application. With a monolithic architecture, this can become very hard to do once your application is big (and you can assume that at some point in time it will be).
Deploying the application is also easier, since you build the independent microservices separately and deploy them on separate servers. This means you can build and deploy services whenever you like without having to rebuild the rest of your application.
Since the different services are small and deployed separately, it's obviously easier to scale them, with the advantage that you can scale specific services of your application (with a monolith, you scale the complete "thing", even if only a specific part of the application is under excessive load).
However, for applications that are not expected to become too big to manage, it is better to stay with the monolithic architecture, since the microservices architecture involves some serious difficulties. I said that it is easier to deploy microservices, but this is only true in comparison with big monoliths. With microservices you have the added complexity of distributing the services to different servers in different locations, and you need to find a way to manage all of that. Building microservices will help you in the long run if your application gets big, but for smaller applications it is just easier to stay monolithic.
@Luxo is spot on. I'd just like to offer a slight variation and bring in the organizational perspective. Not only do microservices allow the applications to be decoupled, they may also help on an organizational level. The organization, for example, would be able to divide into multiple teams, where each team develops the set of microservices it provides.
For example, in larger shops like Amazon, you might have a personalization team, an e-commerce team, an infrastructure services team, etc. If you'd like to get into microservices, Amazon is a very good example. Jeff Bezos made it a mandate that teams must go through another team's services if they need access to shared functionality. See here for a brief description.
In addition, engineers from Etsy and Netflix had a small debate on Twitter back in the day about microservices vs. monoliths. The debate is a little less technical, but it can offer a few insights as well.

Akamai vs CloudFront [closed]

What are the advantages of using Akamai vs. CloudFront? From what I've read, Akamai seems to be more expensive, but they seem to have a larger network for their CDN. CloudFront, on the other hand, is newer, and Amazon even used Akamai for their e-commerce site when CloudFront was launched in 2008. This might have changed since then, which would not surprise me.
I like CloudFront because my application will be hosted on AWS, so there might be significant benefits from using CloudFront rather than Akamai. CloudFront seems to be better documented too, and their API is easily accessible, whereas Akamai's isn't. I'm hoping to get pros and cons between choosing Akamai vs. CloudFront. Thanks in advance!
Each service performs differently in different regions. Amazon CloudFront can be better in the APAC region, while Akamai might be better in southern Europe and the Middle East.
Since this is a physical service that depends on the actual location of their PoP (Point of Presence) servers, you need to measure where most of your users are, and choose the better service for that region.
You can see such a comparison about the CDN performance in different regions here: http://media.amazonwebservices.com/FS_WP_AWS_CDN_CloudFront.pdf
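If you want to run your own quick check rather than rely only on a published comparison, a minimal sketch like the one below, run from machines near your actual users, gives a first approximation. The two test URLs are placeholders for the same object served through each CDN; they are not real endpoints.

```python
# Rough latency check for the same object served through two CDNs.
# Run this from vantage points located near your users; the URLs
# below are placeholders for your own test objects.
import time
import urllib.request

TEST_URLS = {
    "cloudfront": "https://dxxxxxxxxxxxx.cloudfront.net/test.jpg",
    "akamai":     "https://cdn.example.com/test.jpg",
}

def fetch_time(url, attempts=5):
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return min(timings)  # best case approximates warmed-cache latency

for name, url in TEST_URLS.items():
    print(f"{name:11s} {fetch_time(url) * 1000:8.1f} ms")
```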
The main difference between CloudFront and Akamai is the number of PoP servers. CloudFront uses a "Super PoP" approach, which means far fewer (edge) locations (54 as of January 2016 - see the complete list here), compared to the thousands that Akamai has around the world. This is why CloudFront costs less than Akamai.
Having more PoPs was crucial in the early days of the Internet. But as the Internet develops around the world, the difference in performance is shrinking.
There are even caching benefits to the "Super PoP" approach: there is a better chance of finding an object in the cache if you have fewer, larger cache servers.
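A toy simulation of that effect, assuming Zipf-like object popularity and LRU caches, might look like the following; the object counts, cache sizes, and popularity curve are arbitrary illustrative choices, not CDN data.

```python
# Toy LRU simulation: one large "Super PoP" cache vs. the same total
# capacity split across 20 small PoPs, each seeing a slice of the traffic.
import random
from collections import OrderedDict

def lru_hit_rate(requests, cache_size):
    cache, hits = OrderedDict(), 0
    for key in requests:
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        else:
            cache[key] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(requests)

random.seed(1)
objects = list(range(10_000))
weights = [1 / (i + 1) for i in objects]          # a few objects are very hot
requests = random.choices(objects, weights=weights, k=200_000)

total_capacity = 2_000
print("1 large cache  :", round(lru_hit_rate(requests, total_capacity), 3))

shards = [[] for _ in range(20)]
for key in requests:
    shards[random.randrange(20)].append(key)
hits = sum(lru_hit_rate(s, total_capacity // 20) * len(s) for s in shards)
print("20 small caches:", round(hits / len(requests), 3))
```

The single large cache sees all the traffic and holds the entire hot set, so its hit rate comes out noticeably higher than the average across the small caches.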
If you are hosting your web servers in EC2, you will probably get better performance and surely better pricing from CloudFront. If not, you should check the performance and pricing between the various providers.
Note that you don't have to be exclusive, as many big content providers use several CDNs rather than a single one.
Akamai is a more expensive solution, but not for nothing. It's targeted more towards enterprise customers, whereas CloudFront is like EC2 - easy to set up and pay as you go. So you probably won't find as much publicly available data on Akamai as on CloudFront.
Here is a (not so useful) comparison - http://www.cdnplanet.com/compare/cloudfront/akamai/
For more about Akamai's network size, you can read this and this.

Need some help choosing between Amazon EC2 and VPS [closed]

At my company we are looking at hosting a blog and a CMS. We are still in the process of building the product and haven't made it live yet. We are looking at some hosting options, and we need complete root shell access to the system. So, I have the following two questions.
1.) Should we go for Amazon EC2 or a VPS, considering our present requirements which I stated above, and also considering that we may need to scale in the future?
2.) If a VPS is the way to go for us, could you please recommend a good service? Also, which plan should we go for, and how much would it cost?
Thank you.
Disclosure: I used to work for Linode.
Speaking objectively, I've heard from several customers who have migrated both from EC2 and to EC2 that Amazon EC2 is a bit difficult to work with for hosting Web services. From the cost per resource to the various quirks of their service -- last I heard, EC2 is designed more for utility computing than for running a Web site and its associated services. I would recommend EC2 more for these kinds of applications:
Processing videos and other multimedia.
Throwaway computing, where nodes are added and removed as demand goes up and down.
Any service where CPU is the bottleneck.
A VPS is a much better choice for you, as you get root and -- if a company does its VPS service right -- scaling is ridiculously easy. If you plan for scalability from the get-go with a load balancing solution, you can add a node with Linode in a few minutes.
The two front runners in the VPS market are Slicehost and Linode. Each has its advantages and disadvantages. Again speaking objectively, Linode's cost per resource is better than Slicehost's, and Linode offers a few services Slicehost does not. Both have fairly active and helpful communities, and both are reliable services. Here's a comparison of both where Linode was ultimately chosen, and a discussion on Slicehost's forums with customers taking both sides.
I'm happy to answer any questions you have, on StackExchange or off.
Go with Linode. You won't regret it. I was a customer long before I was hired.
Another thought I just had is that it's unwise to put all your eggs in one basket; I recently completed full support for the wonderful libcloud project, and Slicehost is fully supported as well, as is EC2. Regardless of what platform you choose, management tools are catching up with cloud ideals.
EC2 is only reasonable if you plan on taking advantage of the scaling. With your dev server, I'm sure you're going to want it up at all times, and with that, I think the cheapest instance is something like $70 a month at Amazon.
Just go for Linode. Great community, and all that for only $20/mo.
While I agree with other answers that EC2 is more for data processing than web hosting, note that EC2 now offers free micro instances for one year, so you can sign up for one, play with it yourself, and see which option is best for you.
If you're not planning on scaling up and down on a regular basis I would recommend a VPS. Jed Smith mentioned two options for that and another choice for a VPS is http://prgmr.com/xen/ which I've used and am happy with. They don't offer as many options as Slicehost or Linode, but they offer more RAM per dollar than most other providers I've seen. They also don't offer any wizards or ajax console access or other high level features. However, if you're ok with setting everything up via a command line console they are an option you should consider.
I have been using https://www.atum.com for years. I was with Amazon, but it just didn't cut it for my needs. We have heavy RAM and disk requirements, and I found the I/O and RAM performance to be quite poor.
I used Linode for a while; they were quite good as well. I went to Atum mainly because of a friend who was with them and had good things to say about the performance. I have a lot of customers in Canada, and that is where their datacenter is; because of the Patriot Act, it had to be in Canada. Atum VPS has been great to me so far =)
We have been using vps.net for a while and are quite satisfied with it.
I love Linode's offerings as well, though I haven't used it yet.

Why would you not want to use Cloud Computing [closed]

Our company is considering moving from hosting our own servers to EC2 and I was wondering if this was a good idea.
I have seen a lot of discussion about whether cloud computing (and specifically EC2) can do x, or can do y, but my real question is: why would you NOT want to use it?
If you were setting up a business, what are the reasons (outside of cost) that you would choose to go through the trouble of managing your own servers?
I know there are a lot of cost calculations you can put in regarding bandwidth, disk usage etc, but there are of course, other costs regarding maintenance of your own server. For the sake of this discussion I am willing to consider the costs roughly equal.
I seem to remember that Joel Spolsky wrote a little blurb on this at one time, but I was unable to find it.
Anyone have any reasons?
Thanks!
I can think of several reasons not to use EC2 (and I am talking about EC2, not grid computing in general):
Reliability: Amazon makes no guarantee as to the availability / down time / safety of EC2
Security: Amazon does not make any guarantee as to whom it will disclose your data to
Persistence: ensuring persistence of your data (including the effort to set up the system) is complicated on EC2
Management: there are very few integrated management tools for a cloud deployed on EC2
Network: the virtual network that allows EC2 instances to communicate has some quite painful limitations (latency, no multicast, arbitrary topological location)
And to finish:
Cost: in the long run, if you are not using EC2 to absorb peak traffic, it is going to be much more costly than investing in your own servers (cheapo servers like Supermicro cost just a couple of hundred bucks...)
On the other hand, I still think EC2 is a great way to soak up non-sensitive peak traffic, if your architecture allows it.
Some questions to ask:
What is the expected uptime, and how does downtime affect your business? What sort of service level agreement can you get, what are the penalties for missing it, and how confident are you that the SLA uptime goals will be met? (They may be better or worse at keeping the systems up than you are.)
How sensitive is the data you're proposing to put into the cloud? Again, we get into the questions of how secure the provider promises to be, what the contractual penalties and indemnities are, and how confident you are that the provider will live up to the agreement. Further, there may be external requirements. If you deal with health-related data in the US, you are subject to very strict requirements. If you deal with credit card data, you also have responsibilities (contractual, not legal).
How easy will it be to back out of the arrangement, should service not be what was expected, or if you find a better deal elsewhere? This includes not only getting your data back, but also some version of the applications you've been using. Consider the possibilities of your provider going bankrupt (Amazon isn't going to go bankrupt any time soon, but they could split off a cloud provider which could then go bankrupt), or having an internal reorganization. Bear in mind that a company in serious trouble may not be able to live up to your expectations of service.
How much independence are you going to have? Are you going to be running their software or software you pick? How easy will it be to reconfigure?
What is the pricing scheme? Is it possible for the bills to hit unacceptable levels without adequate warning? (A billing-alarm sketch follows these questions.)
What is the disaster plan? Ideally, it's running your software on servers in a different location from where the disaster hit.
What does your legal department (or retained corporate attorney) think of the contract? Is there a dispute resolution mechanism, and, if so, is it fair to you?
Finally, what do you expect to get out of moving to the cloud? What are you willing to pay? What can you compromise on, and what do you need?
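On the pricing question above, present-day AWS does at least let you put a guard rail on surprise bills. A minimal sketch using boto3 and a CloudWatch billing alarm might look like the following; it assumes billing metrics are enabled on the account, and the SNS topic ARN is a placeholder.

```python
# Guard rail against surprise bills: alarm when estimated monthly charges
# exceed a threshold. Assumes billing metrics are enabled in the account;
# the SNS topic ARN below is a placeholder.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # billing metrics live here

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=6 * 60 * 60,          # the billing metric updates a few times a day
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
    AlarmDescription="Estimated charges exceeded $100 this month",
)
```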
Highly sensitive data might be better to control yourself. And there's legislation; some privacy-sensitive information, for example, might not be allowed to leave the country.
Also, except for Microsoft Azure in combination with SDS, the data stores tend not to be relational, which is a nuisance in certain cases.
Maybe the concern that a company that big is more likely to be approached by an Agent Smith from the government to spy on everyone than some small provider somewhere.
Big company - more customers - more data to aggregate and recognize patterns - more resources to organize a sophisticated watch system.
Maybe it's more of a fantasy but who ever knows?
Just because you're not paranoid doesn't mean you're not being watched.
The big one is: if Amazon goes down, there's nothing you can do to bring it back up.
I'm not talking about doomsday scenarios where the company disappears. I mean that you're at the mercy of their downtime, with little recourse of your own.
Security -- you don't know what is being done to your data
Dependency -- your business is now directly intertwined with the provider
There are different kinds of cloud computing, with lots of different vendors providing it. It would make me nervous to code my apps to work with a single cloud vendor that I specifically had to code for. With Amazon and Microsoft, I believe you need to code specifically for the platform - maybe Google too.
That said, I recently jettisoned my own dedicated servers and moved to Rackspace's Mosso Cloud platform (which requires no proprietary coding), and I am really, really pleased with it so far. It cut my costs in half, and performance is way better than before. My SQL Server databases are now running on 64-bit Enterprise SQL Server versions with 32 GB of RAM - that would have cost me a fortune on my previous provider's infrastructure.
As far as being out of luck when the cloud is down, the same was true if my dedicated server went down. It never did, but if there had been a hardware crash on my dedicated server, I am not sure it would have been back online any quicker than Rackspace could bring their cloud back up.
Lack of control.
Putting your software on someone else's cloud represents handing over some control. They might institute a file upload size limit, or memory limits that could ruin your application. A security vulnerability in their control panel could get your site hacked.
Security issues are not relevant if your application does its own encryption. Amazon is then storing encrypted data that they have no way of decrypting.
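As a sketch of what "does its own encryption" can mean in practice: the data is encrypted before it ever leaves your machine, so the provider only stores ciphertext. This uses the third-party cryptography package; the file names and the upload call are placeholders, and key management (keeping the key off the cloud) is the part that actually matters.

```python
# Encrypt locally before uploading, so the provider only ever sees ciphertext.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

# Generate once and store securely outside the cloud provider.
key = Fernet.generate_key()
with open("secret.key", "wb") as f:
    f.write(key)

fernet = Fernet(key)

plaintext = b"customer records, invoices, anything sensitive"
ciphertext = fernet.encrypt(plaintext)

# upload_to_cloud(ciphertext)  # placeholder for your S3/EC2 upload call

# Later, after downloading the ciphertext back:
assert fernet.decrypt(ciphertext) == plaintext
```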
But in addition to the uptime issues, Amazon could decide to increase their prices to whatever they want. If you're dependent on them, you'll just have to pay it.
It depends how much you trust your own infrastructure compared to a third-party cloud service. In my opinion, most businesses (at least those not IT-related) should choose the latter.
Another thing you lose with the cloud is the ability to choose exactly what operating system you want to run. For example, the latest Fedora Linux kernel available on EC2 is FC8, and the latest Windows version is Server 2003.
Besides the issues raised regarding dependability, reliability, and cost is the issue of data ownership. When you locate data on someone else's server, you no longer control who views, accesses, modifies, or uses that data. While the cloud operators can limit your access, you possess no way of limiting theirs or limiting who they give access to. Yes, you can encrypt all the data on the server but you lack any way of knowing who possesses root access to the server itself and any means to stop others from downloading your encrypted data and cracking it open. You lose control over your data; depending on what type of apps you are running and the proprietary nature of the data involved, this could engender corporate security and/or liability risks.
The other factor to consider is what would happen to your company if Amazon and/or EC2 were to suddenly vanish overnight. While a seemingly preposterous position, it could happen. Would you be able to quickly fill the hole and restore service, or would your potentially revenue generating apps languish while the IT staff scramble to obtain servers and bandwidth to get them back online? Also, what would happen to your data? The cloud hard drive holding all your information still exists, somewhere, and could pose a potential liability risk depending on the information you stored there--items such as personal information, business transaction records etc.
If I were starting my own business now, I would go through the hassle of purchasing and maintaining my own servers so I retained data ownership. I could control root access to the hardware, as well as control who can access and modify the data.
Unanswered security questions.
Really, do you want your IP out there, where you're not the one in control of it?
Most cloud computing environment are at least partially vendor specific. There's no good way to move stuff from one cloud to another without having to do a lot of rewriting. That sort of lock-in puts you at the mercy of one vendor when it comes to downtime, price increases, etc. If you rent or own your own servers, hosting providers and colos are pretty much interchangeable. You always have the option of moving somewhere else.
This may change in the future, as these things become standardized, but for now tying yourself to the cloud means tying yourself to a specific vendor.
This is kind of like the "Why would you use Linux" comment I received from management many years ago. The response I got was that it is a solution in search of a problem.
So what are your goals and objectives in moving to EC2?
I'd be interested to know if you'd still want to move to a cloud, if it was your own.
Cloud computing has brought parallel programming a little closer to the masses, but you still have to understand how best to use it - otherwise you're going to waste compute cycles and bandwidth.
Re-architecting your application for most efficient use of a cloud computing service is non-trivial.
Besides what has already been said here, we have to consider uniformity across the business. Are all of your applications going to be hosted in the cloud, or only most of them? Is "most" enough to pull the trigger on using the cloud when you still have to have personnel to handle a few special servers?
In particular, there might be special hardware that you need to communicate with, such as modems to accept incoming data, or voice cards that make automated phone calls. I don't know how such things could be handled in a cloud environment.

How Much Traffic Can Shared Web Hosting Take? [closed]

I have a cheap shared hosting plan with Reliablesite.net ($5/month).
I've been making a small site I want to start promoting in a few weeks and I was going to roadtest it by hosting it with the shared plan I already have.
My issue is that I don't know at what point I should move onto clustered hosting / dedicated hosting.
Questions
What pageviews/day can a shared hosting plan be expected to handle?
What can standard shared database servers take without choking up or me getting rude emails from my hosting provider?
In my experience, a shared hosting environment like Reliablesite.com can take around 10,000-20,000 unique users per day, or 100,000-200,000 pageviews/day. That number can vary, depending on your site. For optimization, it is important to reduce the number of DB queries (I keep it to a maximum of 6-7 per page render) and to be careful when programming. Using ASP.NET MVC gave a nice performance improvement for me, but a well-written WebForms app can perform well too. If you are using some other tech stack, like PHP/MySQL, I don't know the numbers.
When you exceed those numbers, you will have enough money from Google AdSense to go with a VPS or dedicated plan.
Just to add something regarding page render / DB query performance: using a multiple-resultset sproc or query is a great way to reduce the number of DB requests!
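As an illustration of cutting down round trips (the answer above is about ASP.NET, but the idea is stack-agnostic), here is a minimal sketch using Python's built-in sqlite3: the first version issues one query per post for its comment count, the second gets everything the page needs in a single query. On shared hosting, the savings come from avoiding a round trip to the database server for every query.

```python
# N+1 queries vs. a single joined query -- the same data, far fewer round trips.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT);
    INSERT INTO posts VALUES (1, 'Hello'), (2, 'Scaling');
    INSERT INTO comments VALUES (1, 1, 'hi'), (2, 1, 'yo'), (3, 2, 'nice');
""")

# Chatty: 1 query for the posts + 1 query per post (N+1 total).
posts = db.execute("SELECT id, title FROM posts").fetchall()
for post_id, title in posts:
    (count,) = db.execute(
        "SELECT COUNT(*) FROM comments WHERE post_id = ?", (post_id,)
    ).fetchone()
    print(title, count)

# Better: one query returns everything the page needs.
rows = db.execute("""
    SELECT p.title, COUNT(c.id)
    FROM posts p LEFT JOIN comments c ON c.post_id = p.id
    GROUP BY p.id, p.title
""").fetchall()
print(rows)
```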
Traffic usually is not a problem on shared hosting. The only problems you may encounter are RAM and CPU restrictions. But if your application is written correctly, it can operate well within these limitations.
Hints:
use a memory profiler to debug and optimize your web application
use a CDN for storing media files
If you need some numbers, a properly written web application that uses a CDN for storing media files can handle at least 10k unique visitors per day on shared hosting.
It would be best if you ask your provider these questions. Every provider is going to be different.
Generally what happens is that the provider can handle the requests, but they'll simply shut down your site once it reaches a certain threshold.
It also depends on the amount of bandwidth you have opted for and how much traffic you are expecting. My blog is on shared hosting, and 4k was once my maximum in a day, and I didn't feel any difference in performance. Don't worry unless your site appears on the front page of Digg or some high-traffic websites link to your site.
I have been using MySQL on shared hosting for a while, mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting could barely support more than 3 or 4 people on the website at one time without almost crashing.
Theoretically, I think shared hosting with most services could efficiently support about 60 users per hour max, if your users all came one or two at a time. This would work out to about 1,500 users in one day. This is highly unlikely, however, because a lot of users tend to be online at certain times of the day, and you also have to factor in that shared servers get sloppy a lot due to abuse from others on the server.
I have heard from reliable sources that some VPS hosting at 40-50 dollars per month has supported 500,000 hits per month. I'm not sure what the websites' configurations were, though; I doubt the sites ran many dynamic DB queries, or possibly they were simply static.
One other thing that is common on shared hosting is separating the file hosting from the database hosting. Sometimes your files will be served fine, but the database that runs your actual website will lag badly due to abuse from your neighbors.
I suggest ensuring that your application is ready for large amounts of traffic: even if you are on a super-duper web server, if your app is badly written, you will lose potential clients. Some of the easiest optimizations for an existing web app are to reduce the number of DB connections, so read up on caching and partial caching.
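As a minimal, framework-agnostic sketch of the caching idea: cache the result of an expensive query for a short TTL so repeated page views don't hit the database at all. The function names here are placeholders; most web frameworks ship an equivalent (fragment caching, output caching) that you should prefer in practice.

```python
# Tiny time-based cache for an expensive query result.
import time

_cache = {}  # key -> (expires_at, value)

def cached(key, ttl_seconds, compute):
    now = time.time()
    entry = _cache.get(key)
    if entry and entry[0] > now:
        return entry[1]            # cache hit: no DB work
    value = compute()              # cache miss: do the expensive query once
    _cache[key] = (now + ttl_seconds, value)
    return value

def latest_articles():
    # Placeholder for the real database query.
    time.sleep(0.2)
    return ["article 1", "article 2"]

# First call hits the "database"; calls in the next 60 seconds are served
# from memory.
print(cached("homepage:articles", 60, latest_articles))
print(cached("homepage:articles", 60, latest_articles))
```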
