How Much Traffic Can Shared Web Hosting Take? [closed] - hosting

I have a cheap shared hosting plan with Reliablesite.net ($5/month).
I've been making a small site I want to start promoting in a few weeks and I was going to roadtest it by hosting it with the shared plan I already have.
My issue is that I don't know at what point I should move onto clustered hosting / dedicated hosting.
Questions:
How many pageviews per day can a shared hosting plan be expected to handle?
What can standard shared database servers take without choking up, or without me getting rude emails from my hosting provider?

In my experience, a shared hosting environment like Reliablesite.net can handle around 10,000-20,000 unique users per day, or 100,000-200,000 pageviews per day. That number varies depending on your site. For optimization, it is important to reduce the number of DB queries (I keep it to a maximum of 6-7 per page render) and to be careful when programming. Using ASP.NET MVC gave me a nice performance improvement, but a well-written WebForms app can perform well too. If you are using some other stack, like PHP/MySQL, I don't know the numbers.
When you exceed those numbers, you will have enough money from Google AdSense to move to a VPS or dedicated plan.
Just to add something regarding page-render and DB-query performance: a stored procedure or query that returns multiple result sets is a great way to reduce the number of DB round trips!
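For illustration, here is a minimal sketch of that idea in Python, assuming a MySQL backend and the mysql-connector-python driver; the connection details and the stored procedure name get_page_data are hypothetical placeholders, not anything from a real site:

```python
# Minimal sketch: fetch everything one page render needs in a single DB round trip.
# Assumes mysql-connector-python and a hypothetical stored procedure
# `get_page_data` that SELECTs the header, article list, and sidebar data
# as three result sets.
import mysql.connector

def load_page_data(user_id):
    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="site"
    )
    try:
        cur = conn.cursor()
        cur.callproc("get_page_data", (user_id,))
        # stored_results() yields one cursor per result set produced by the proc
        header, articles, sidebar = [rs.fetchall() for rs in cur.stored_results()]
        return {"header": header, "articles": articles, "sidebar": sidebar}
    finally:
        conn.close()
```

The same pattern applies whatever the stack: one stored procedure returns every result set the page needs, so the render costs a single round trip instead of six or seven.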

Traffic usually is not a problem on shared hosting. The only problems you may encounter are RAM and CPU restrictions. But if your application is written correctly, it can operate well within these limitations.
Hints:
use a memory profiler to debug and optimize your web application
use a CDN for storing media files
If you need some numbers: a properly written web application that uses a CDN for media files can handle at least 10k unique visitors per day on shared hosting.
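As a rough illustration of the first hint, Python's built-in tracemalloc module can show which lines allocate the most memory; handle_request below is just a hypothetical stand-in for whatever code path you want to profile:

```python
# Sketch: find which lines allocate the most memory during a request.
# `handle_request` is a hypothetical stand-in for your own code path.
import tracemalloc

def handle_request():
    return [str(i) * 10 for i in range(100_000)]  # deliberately wasteful

tracemalloc.start()
handle_request()
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)  # top five allocation sites: file, line, size, count
tracemalloc.stop()
```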

It would be best to ask your provider these questions. Every provider is going to be different.
Generally what happens is that the provider can handle the requests, but they'll simply shut down your site once it reaches a certain threshold.

It also depends on the amount of bandwidth you have opted for. How much traffic are you expecting? My blog is on shared hosting, 4k visits was once my maximum in a day, and I didn't feel any difference in performance. Don't worry unless your site appears on the front page of Digg or some high-traffic website links to your site.

I have been using MySQL on shared hosting for a while, mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing.
Theoretically, I think shared hosting with most services could efficiently support about 60 users per hour at most, if your users all came one or two at a time. That would work out to about 1,500 users in one day. This is highly unlikely, however, because a lot of users tend to be online at certain times of the day, and you also have to factor in that shared servers often get sloppy due to abuse from others on the server.
I have heard from reliable sources that some VPS hosting at 40-50 dollars per month has supported 500,000 hits per month. I'm not sure what those websites' configurations were, though; I doubt the sites ran many dynamic DB queries, or possibly they were simply static.
One other thing that is common on shared hosting is separating the file hosting from the database hosting. Sometimes your files will be served fine, but the database that runs your actual website will lag badly due to abuse from your neighbors.

I suggest ensuring that your application is ready for large amounts of traffic: even if you are on a super-duper web server, if your app is badly written you will lose potential clients. One of the easiest optimizations for an existing web app is to reduce the number of DB connections and queries, so read up on caching and partial caching.
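Here is a minimal sketch of that caching idea, using only the Python standard library; fetch_article_from_db is a hypothetical stand-in for an expensive query, and the 60-second TTL is arbitrary:

```python
# Sketch: a tiny TTL cache so repeated page views skip the database.
# `fetch_article_from_db` is a hypothetical stand-in for an expensive query.
import time

_cache = {}          # article_id -> (expires_at, value)
TTL_SECONDS = 60

def fetch_article_from_db(article_id):
    time.sleep(0.1)  # pretend this is a slow DB round trip
    return {"id": article_id, "body": "..."}

def get_article(article_id):
    now = time.time()
    hit = _cache.get(article_id)
    if hit and hit[0] > now:                 # fresh entry: serve from memory
        return hit[1]
    value = fetch_article_from_db(article_id)
    _cache[article_id] = (now + TTL_SECONDS, value)
    return value

if __name__ == "__main__":
    get_article(1)   # slow: hits the "database"
    get_article(1)   # fast: served from the cache
```

Even a cache this crude means a page viewed 100 times a minute hits the database once a minute instead of 100 times.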

Related

How do you decide when to upgrade servers vs. add more servers? [closed]

I run a small SAAS website with a (hopefully) growing number of customers. Right now, I have three Amazon EC2 instances. A micro instance runs the web frontend (Rails), a small instance runs the API layer (Rails), and a micro instance runs the data layer (Postgres). Please don't judge; this is more than adequate for my needs at this time.
As I add additional customers, I know that eventually I am going to have to a) increase the horsepower of the existing servers and/or b) load-balance the web/API servers and cluster the database servers.
My question is - how do you decide from a cost/benefit standpoint when to upgrade servers (i.e. micro-->small-->medium-->large) vs. adding additional servers of the same type? I understand that there are benefits to load balancing (such as keeping you online in the event of a server crash or an issue with an availability zone).
Obviously, Amazon charges more for servers with more memory and processing power, but they also charge for ELB and services like that. If I were to increase to two servers in each layer today, that would double my costs, plus the costs for ELB (not including data transfer costs). It seems like a bottleneck this early on would be best addressed by upgrading to medium or better servers.
What are some good rules of thumb for when to build up as opposed to out? Please keep in mind that my choice of software (cough Rails) is very memory intensive when processing large amounts of data.
You mentioned some pros of scaling out (redundancy), but forgot the cons: specifically, more complex deployments and increased overhead (more operating system resources being used).
It isn't just up vs out, it is up vs out for each layer. The db tier generally wants to scale up since it avoids clustering/replication headaches. The application tier can go either way. Web servers scale out nicely since they are handling requests and the requests are separate.
Specific to Amazon and their pricing right now (https://aws.amazon.com/ec2/pricing/), scaling up vs. out seems about equivalent in cost, with scaling up to around a large instance being slightly ahead.
Scaling up has the benefit of faster CPU + more RAM, so your app's performance may increase as a result. Yes, there's a time when out vs. up will win, but from my perspective, we have chosen to scale up whenever possible (we're on AWS as well), as we notice a performance boost to our app each time we do, in addition to allowing for additional capacity as our user base grows.
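One way to make that call less hand-wavy is to compare monthly cost per unit of capacity, including any fixed cost (such as the load balancer) that scaling out introduces. The sketch below uses made-up prices and capacity figures purely for illustration; plug in current EC2/ELB pricing and your own benchmark numbers:

```python
# Sketch: compare scaling up vs scaling out by monthly cost per unit of capacity.
# Every price and capacity figure below is a made-up placeholder.
def cost_per_capacity(instance_cost, capacity_units, instance_count=1,
                      fixed_overhead=0.0):
    """Total monthly cost divided by the total capacity it buys."""
    total_cost = instance_cost * instance_count + fixed_overhead
    return total_cost / (capacity_units * instance_count)

# Scale up: one larger instance, no load balancer required.
up = cost_per_capacity(instance_cost=70.0, capacity_units=4)

# Scale out: two smaller instances behind a load balancer (fixed monthly fee).
out = cost_per_capacity(instance_cost=35.0, capacity_units=2,
                        instance_count=2, fixed_overhead=18.0)

print(f"scale up:  ${up:.2f} per capacity unit")
print(f"scale out: ${out:.2f} per capacity unit")
```

Once the scale-out figure drops below the scale-up one (typically after the load-balancer fee is spread over enough instances, or once redundancy itself is worth paying for), that is the point to go out rather than up.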

how does one identify why a website is slow? [closed]

I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic. Maybe a DoS attack or exceptionally high traffic. Interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought of this problem. I have almost no idea how running a server/website works. So if someone could highlight a few approaches, it would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should be figuring out at this point is what the difference is between you and the user that reports your site is slow. It might be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted at the same webhoster? If so, this could indicate a network problem. Normally this could also indicate a resource problem at the webserver, but in that case the site would also be slow for you.
If neither of the above leads to an answer, you can assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
This is a great tool to find the speed of web pages and tells you what makes it slow: https://developers.google.com/speed/pagespeed/insights
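If you want raw numbers from the user's side (or from a machine near them), a small script that splits a request into DNS lookup, TCP connect, and time-to-first-byte shows which leg is slow. A rough sketch using only the Python standard library, with example.com as a placeholder host:

```python
# Sketch: split a request into DNS, TCP connect, and time-to-first-byte so you
# can compare your numbers with the complaining user's. example.com is a placeholder.
import socket
import time
import http.client

HOST = "example.com"

t0 = time.perf_counter()
ip = socket.getaddrinfo(HOST, 80)[0][4][0]               # DNS lookup
t1 = time.perf_counter()

probe = socket.create_connection((ip, 80), timeout=10)   # TCP handshake only
t2 = time.perf_counter()
probe.close()

conn = http.client.HTTPConnection(HOST, 80, timeout=10)
conn.request("GET", "/")
resp = conn.getresponse()                                # headers in = first byte
t3 = time.perf_counter()
body = resp.read()
conn.close()

print(f"DNS lookup:     {(t1 - t0) * 1000:7.1f} ms")
print(f"TCP connect:    {(t2 - t1) * 1000:7.1f} ms")
# note: the HTTPConnection opens its own connection, so this figure
# includes a second DNS lookup + connect on top of the request itself
print(f"GET + 1st byte: {(t3 - t2) * 1000:7.1f} ms  ({resp.status}, {len(body)} bytes)")
```

Run it from your own machine and from somewhere close to the complaining user (or have them run it) and compare the three numbers.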
I think one important thing missing from the above answers is server location, which can play a vital role in web performance.
When someone says that a web page takes a long time to open, that usually means high latency, and high latency can be caused by server location.
As the owner of the web page, you and the server are probably co-located (or close to each other), so you experience low latency.
But if the client is in another country, far from the server, the latency increases drastically, and hence the site feels slow.
Another factor is caching, which drastically affects latency.
Take Facebook as an example: they have servers all over the world to reduce latency (and for several other advantages), and they use a huge caching system for their hot data (trending topics), whereas cold data (old data) is stored on hard disk, so it takes longer to load an older photo or post.
So a user might complain about slowness because they were trying to load some cold data.
I can think of these few reasons (the first two are already mentioned above):
High latency due to the location of the client
Server memory might need to be increased
The number of service calls made from the page
A service that was down at the time of the complaint could prevent the page from loading
The server load might have been too high at the time of the poor experience; the server might need more resources (e.g. adding another server/web server to the cluster)
Check whether any background job was running on the server at that time
It is important to check the logs and the schedules of batch jobs to determine what was running at that time.
Hope this helps.
Normally the user takes page loading time as the measure of whether the site is slow. But if you really want to know what is taking the most time, you can open the browser's developer tools by pressing F12. If your browser is Chrome, click on the Network tab and see what calls your application is making and which ones take the longest. If you are using Firefox, you may need to install Firebug; once you have that, again press F12 and click on Net.
One reason could be that the user's role is different from yours. You might have, say, administrator privileges (something like a superuser role), and the code might simply allow everything for that role, meaning it does not really do much conditional checking of what is allowed or not. Sometimes it is considerable overhead to fetch all of a user's privileges and evaluate those checks; of course, it depends on how the authorization is implemented. That means the page might genuinely be slow for specific roles. Hence, you should find out the role of the user and see whether that is the reason.
It could simply be an issue with the connection of the person connecting to your site, or it's possible it was a temporary issue and by the time you checked your site everything was dandy. You could check your logs, or ask your host whether there was an issue at the time the slowdown occurred.
This can be a memory issue, and it may be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code structure of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem a bit of a simple answer but, the less code you run at the server and the more code you run at the client the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application which represents W3C best practices and runs the minimum amount of code at the server and the maximum amount of code at the client: http://developersfound.com/W3C_MVC_EX.zip This codebase is also MVC compliant.
This codebase comes with a MySQL database dump, php and client side code. To see this code in action you will need to restore the SQL dump to a MySQL instance (sql dump came from MySQL 8 Community) and add the user and schema permissions that are found in the php file (conn_include.php); setting the user to have execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Most popular Node frameworks also run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, such as Yii 2.
Two of the most important rules to follow for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML or vice versa (unless there is a very good excuse, such as SEO), and never write code to run at the server if it can run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give much control over what code runs at the client or the server and are fairly inefficient, but you can get prototypes up and running more quickly with less code.
In contrast, the old-fashioned way takes a bit more elbow grease, but you have complete control over what runs at the server and what runs at the client.
As an additional bit of optimisation advice, avoid pass-through queries and triggers and instead opt for stored procedures. Historically, stored procedures didn't exist when MVC first appeared as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
Hope this advice helps.

Need some help choosing between Amazon EC2 and VPS [closed]

At my company we are looking at hosting a blog and a CMS. We are still in the process of building the product and haven't made it live yet. We are looking at some hosting options, and we need complete root shell access to the system. So I have the following two questions:
1.) Should we go for Amazon EC2 or a VPS, considering our present requirements stated above and also that we may need to scale in the future?
2.) If a VPS is the way to go for us, could you please recommend a good service? Also, which plan should we go for, and how much would it cost?
Thank you.
Disclosure: I used to work for Linode.
Speaking objectively, I've heard from several customers who have migrated from EC2 (as well as to EC2) that Amazon EC2 is a bit difficult to work with for hosting Web services. From the cost per resource to the various quirks of their service -- last I heard, EC2 is designed more for utility computing than for running a Web site and its associated services. I would recommend EC2 more for these kinds of applications:
Processing videos and other multimedia.
Throwaway computing, where nodes are added and removed as demand goes up and down.
Any service where CPU is the bottleneck.
A VPS is a much better choice for you, as you get root and -- if a company does its VPS service right -- scaling is ridiculously easy. If you plan for scalability from the get-go with a load balancing solution, you can add a node with Linode in under a few minutes.
The two front runners in the VPS market are Slicehost and Linode. Each have their advantages and disadvantages. Again speaking objectively, Linode's cost per resource is better than Slicehost's, and Linode offers a few services Slicehost does not. Both have fairly active and helpful communities, and both are reliable services. Here's a comparison of both where Linode was ultimately chosen, and a discussion on Slicehost's forums with customers taking both sides.
I'm happy to answer any questions you have, on StackExchange or off.
Go with Linode. You won't regret it. I was a customer long before I was hired.
Another thought I just had is that it's unwise to put all your eggs in one basket; I recently completed full support for the wonderful libcloud project, and Slicehost is fully supported as well, as is EC2. Regardless of what platform you choose, management tools are catching up with cloud ideals.
EC2 is only reasonable if you plan on taking advantage of the scaling. With your dev server, I'm sure you're going to want it up at all times, and with that I think the cheapest instance is around $70 a month at Amazon.
Just go for Linode. Great community and all that for only $20/mo.
While I agree with other answers that EC2 is more for data processing than web hosting, EC2 now offers free micro instances for one year, so you can sign up for one, play with it yourself, and see which is the best option for you.
If you're not planning on scaling up and down on a regular basis I would recommend a VPS. Jed Smith mentioned two options for that and another choice for a VPS is http://prgmr.com/xen/ which I've used and am happy with. They don't offer as many options as Slicehost or Linode, but they offer more RAM per dollar than most other providers I've seen. They also don't offer any wizards or ajax console access or other high level features. However, if you're ok with setting everything up via a command line console they are an option you should consider.
I have been using https://www.atum.com for years. I was with Amazon, but it just didn't cut it for my needs: we have heavy RAM and disk requirements, and I found the I/O and RAM to be quite poor.
I used Linode for a while; they were quite good as well. I went to Atum mainly because of a friend who was with them and had good things to say about the performance. I have a lot of customers in Canada, and that is where their datacenter is; because of the Patriot Act it had to be in Canada. Atum VPS has been great to me so far =)
We have been using vps.net for a while and are quite satisfied with it.
I love Linode's offerings as well, though I haven't used them yet.

What are the advantages and disadvantages of self-hosting? [closed]

What are the advantages and disadvantages of self-hosting something like an SVN repository? All links and ideas are appreciated.
Off the top of my head:
Advantages of Self-hosting
Flexibility. On my own machine I can install whatever I want. If I would like to use a VCS like Bazaar with Loggerhead instead of Trac, then right now there isn't really much choice beyond Launchpad, which has its warts.
Save money. Costs add up over time, especially for large teams.
The free plans offered by sites like Assembla are not private; anybody can have access to your code.
Advantages of paid hosting (e.g. GitHub, Assembla, Google Code)
Robustness. You don't need to worry about your server catching fire, because it's become somebody else's problem.
Less hassle. You don't need to do all the system administration and tweaking of conf files; instead you can just focus on the coding.
For production you should only use self-hosting if you are a professional sysadmin. Can you answer yes to the following questions (a bit Linux-oriented, but you should get the idea):
Can you react to a system failure in minutes? (You need sleep, at least. Do you have somebody to look after the system while you are asleep?)
Can you spot a system break-in?
Can you remove exploits from your system?
Can you recompile the kernel if you can't remove an exploit any other way?
Can you configure the system for optimal performance?
Are you willing to pay for a UPS, backup storage and an alternative internet provider?
If you can answer yes to these questions, the benefits are very attractive and I would go with it.
On the other hand, hosting a development environment can be managed by an administrator of any level, especially with easy-to-use server distributions like Ubuntu.
You specifically asked about hosting a Subversion repository, so the first disadvantage that comes to mind is data protection. I personally would never trust a third party with my source code, except for open source code or the code of an unimportant side project. Source code is a very important asset for an ISV, so trusting a third party to protect your source code doesn't sound like a good idea.
And even if it's not about source code, outsourcing other critical parts of your business, such as email or accounting/invoicing*, is just asking for trouble. And it's not as if you no longer have to care about backups when you outsource your data hosting: you should still back up your data, in case the hosting company screws up.
*) By outsourcing accounting/invoicing I mean all those new hosted invoicing apps, not getting help from an accountant, of course.
I find the web interfaces of external hosts to be a hassle. Plus, you can have however much space you want on your own machine. Like you said, though, the maintenance can be a burden with self-hosting.
How big is your project? If it is not too big, just get an account at http://www.beanstalkapp.com
That is what I did. I do not have to worry about any setup and can focus on the actual development.
If your situation is more complex, self-hosting is worth considering. But keep in mind that you would have to take care of backups too, and that a server update can screw a lot of things up.
This ties into the server catching fire, but one key advantage of external hosting is that it's (presumably) backed up automatically. Doing your own backups is a hassle, and ends up being less reliable than what you'd get from Google.
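That said, scripting the backup removes most of the hassle. Here is a minimal sketch for the SVN case from the question, assuming svnadmin is on the PATH; the repository and backup paths are hypothetical placeholders:

```python
# Sketch: nightly gzipped dump of a self-hosted Subversion repository.
# REPO_PATH and BACKUP_DIR are hypothetical placeholders.
import datetime
import gzip
import pathlib
import subprocess

REPO_PATH = "/var/svn/myproject"
BACKUP_DIR = pathlib.Path("/backups/svn")

def backup_repo():
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    target = BACKUP_DIR / f"myproject-{stamp}.svndump.gz"
    dump = subprocess.run(
        ["svnadmin", "dump", "--quiet", REPO_PATH],   # full dump to stdout
        check=True, stdout=subprocess.PIPE,
    )
    with gzip.open(target, "wb") as fh:
        fh.write(dump.stdout)                         # compress to disk
    return target

if __name__ == "__main__":
    print(f"wrote {backup_repo()}")
```

Run it nightly from cron and copy the resulting file somewhere off-site, and the reliability gap versus a hosted service shrinks considerably.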
With self-hosting there comes great responsibility.
you have to back up everything
you need spare parts for your hardware
if your data is important you need redundant hardware
you don't have real holidays; if something breaks, you have to fix it
In addition to what others have already mentioned, there are also benefits specific to using cloud services by companies like Amazon, Yahoo, Google, Microsoft, etc.
Despite what some might claim, self-hosting is not inherently "safer." In most cases, it's quite the opposite actually. This is because most small-to-medium-sized companies do not have the resources to provide the level of reliability and redundancy that mega-corporations like Microsoft or Amazon can. Unless you are hosting source code for a top-secret defense project or other projects where the threat of espionage is very real, the greatest threats to your code and your business are more mundane things like server/network downtime.
Redundancy: Cloud services provide levels of redundancy that most businesses simply cannot obtain on their own. This includes data redundancy (back-ups/RAID), hardware redundancy (components/equipment), and geographical redundancy (multiple server locations across the globe). If a natural disaster hits your city, is your data going to be safe?
Multi-tenancy: Each small business by itself cannot afford 24/7 support staff and multi-million dollar equipment. But pooling their resources together through a Cloud service affords them (through centralization and better resource utilization/higher efficiency) access to a much higher level of service.
Security: Related to multi-tenancy, by centralizing the data of thousands of businesses, this allows security resources to be much more tightly focused.
Lastly, it should be noted that most commercial hosting providers offer co-location and dedicated hosting. And even cloud service providers allow customers to configure their "server" however they want, and installing/running whatever applications on it they want. So you can have a great deal more freedom than that offered by $10/month web hosting.

Why is p2p web hosting not widely used? [closed]

We can see the growth of systems using peer to peer principles.
But there is an area where peer to peer is not (yet) widely used: web hosting.
Several projects have already been launched, but there is no big solution that lets users both use and contribute to a peer-to-peer web host.
I don't mean non-open projects (like Google Web Hosting, which uses Google's resources, not users'), but open projects, where each user contributes to the global web hosting by making their own resources (CPU, bandwidth) available.
I can think of several assets of such systems:
automatic load balancing
better locality
no storage limits
free
So, why is such a system not yet widely used?
Edit
I think that the "97.2%, plz seed!!" problem occurs because all users do not seed all the files. But if a system where all users equally contribute to all the content is built, this problem does not occur any more. Peer to peer storage systems (like Wuala) are reliable, thanks to that.
The problem of proprietary code is pertinent, as well of the fact that a user might not know which content (possibly "bad") he is hosting.
I add another problem: the latency which may be higher than with a dedicated server.
Edit 2
The confidentiality of code and data can be achieved by encryption. For example, with Wuala, all files are encrypted, and I think there is no known security breach in this system (but I might be wrong).
It's true that seeders would not have many benefits, or only a few. But it would prevent people from being dependent on web hosting companies. And such a decentralized way to host websites is closer to the original idea of the internet, I think.
This is what Freenet basically is,
Freenet is free software which lets you publish and obtain information on the Internet without fear of censorship. To achieve this freedom, the network is entirely decentralized and publishers and consumers of information are anonymous. Without anonymity there can never be true freedom of speech, and without decentralization the network will be vulnerable to attack.
[...]
Users contribute to the network by giving bandwidth and a portion of their hard drive (called the "data store") for storing files. Unlike other peer-to-peer file sharing networks, Freenet does not let the user control what is stored in the data store. Instead, files are kept or deleted depending on how popular they are, with the least popular being discarded to make way for newer or more popular content. Files in the data store are encrypted to reduce the likelihood of prosecution by persons wishing to censor Freenet content.
The biggest problem is that it's slow, both in transfer speed and (mainly) latency. Even if you can get lots of people with decent upload throughput, it will still never be as quick as a dedicated server or two. The speed is fine for what Freenet is (publishing data without fear of censorship), but not for hosting your website.
A bigger problem is that the content has to be static files, which rules out its use for the majority of high-traffic websites. To serve dynamic data, each peer would have to execute code (scary) and would probably have to retrieve data from a database (another big delay, again because of the latency).
I think "cloud computing" is about as close to P2P web hosting as we'll see for the time being.
P2P website hosting is not yet widely used, because the companion technology allowing higher upstream rates for individual clients is not yet widely used, and this is something I want to look into*.
What is needed for this is called Wireless Mesh Networking, which should allow the average user to utilise the full upstream speed that their router is capable of, rather than just whatever some profiteering ISP rations out to them, while they relay information between other routers so that it eventually reaches its target.
In order to host a website P2P, a sort of combination of technology is required between wireless mesh communication, multiple-redundancy RAID storage, torrent sharing, and some kind of encryption key hierarchy that allows various users different abilities to change the data that is being transmitted, allowing something dynamic such as a forum to be hosted. The system would have to be self-updating to incorporate the latter, probably by time-stamping all distributed data packets.
There may be other possible catalysts that would cause the widespread use of p2p hosting, but I think anything that returns the physical architecture of hardware actually wiring up the internet back to its original theory of web communication is a good candidate.
Of course as always, the main reason this has not been implemented yet is because there is little or no money in it. The idea will be picked up much faster if either:
Someone finds a way to largely corrupt it towards consumerism
Router manufacturers realise there is a large demand for WiMesh-ready routers
There is a global paradigm shift away from the profit motive and towards creating things only to benefit all of humanity by creating abundance and striving for optimum efficiency
*see p2pint dot darkbb dot com if you're interested in developing this concept
For our business I can think of 2 reasons not to use peer hosting:
Responsiveness. Peer-hosted solutions are often reliable because of the massive number of shared resources, but they are also notoriously unstable, so the browsing experience will be intermittent.
Proprietary data/code. If I've written custom logic for my site I don't want everyone on the network having access. You also run into privacy issues with customer data.
If I were to donate some of my PCs CPU and bandwidth to some p2p web hosting service, how could I be sure that it wouldn't end up being used to serve child porn or other similarly disgusting content?
How many times have you seen "97.2%, please seed!!" for any random torrent?
Just imagine the havoc if even a small portion of the web became unavailable in this way.
It sounds like this idea would add a lot of cost to the individual seeder (bandwidth) without a lot of benefit.
