I've been doing a good bit of research into website performance lately, and I'd say I've gained a fair amount of knowledge about best practices for improving performance and reducing bandwidth requirements through tweaks such as gzipping, content caching, and image and script optimization.
My problem is that I've found plenty of case studies from hugely popular sites such as Facebook, Google and Amazon, but what I really want is findings and figures for somewhat smaller sites, say 50-250k visitors a month.
I'm looking for what was gained from investing time into performance optimization e.g. significant speed improvements, reduced bounce rate, reduced running costs, and all the analytics stuff.
For Facebook or Google, a 5% performance tuning improvement can save a lot of money. I have done a lot of performance analysis for clients, and they often start with tuning questions, but more than 90% of the time the greatest performance gain comes from looking at the application itself: you cannot tune a tanker to run like a Porsche. These are some findings I put together on Top J2EE Web Application Performance Problems.

If the site uses Drupal or WordPress, turn on the built-in caching before going to production. Those packages can also combine JavaScript and CSS into single files, which reduces network round trips. If you have a site with a lot of content, increase the memory allocated to the various buffers in your database. For a site with very static content, configure the web server (Apache, for example) to compress the HTML it serves, and set the content expiration policy correctly. Try to optimize image file sizes as well; on a lot of sites the images can be reduced further without losing much visible quality. And make sure the web server has enough physical memory.

Most out-of-the-box server configurations are reasonably well optimized, so there are usually only a few things left to do, and for the type of site you are looking at I don't think you need to worry too much. If you have some files, such as Flash or PDF, that are extremely popular, consider putting them on a CDN (cloud) so that someone else's expertise takes care of the bandwidth for you. Those solutions have become pretty affordable even for small and mid-sized sites.
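As a quick sanity check once compression and expiration are configured, you can verify from the outside that the headers are actually being sent. A minimal sketch using only the Python standard library; the URL is a placeholder for your own site:

```python
# Check whether a page comes back gzip-compressed and what caching headers it
# sends. urllib does not auto-decompress, so the headers reflect the server.
import urllib.request

url = "https://www.example.com/"  # placeholder: use your own site
req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    print("Content-Encoding:", resp.headers.get("Content-Encoding"))  # expect "gzip"
    print("Cache-Control:   ", resp.headers.get("Cache-Control"))
    print("Expires:         ", resp.headers.get("Expires"))
```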
I'm currently developing my portfolio website using Nuxt3 in the frontend and Netlify for hosting. The site contains a fair number of videos, and although most mp4 files are not excessively large (1.2-1.4 MB), requesting them directly from my server has put a strain on my site's loading times.
Aside from lazy-loading and compressing, what further steps could I take to optimize the loading speed of my videos? I am aware of CDNs such as Amazon CloudFront and Cloudinary, but I'm uncertain which would be most suitable for a small portfolio project.
Since this is quite a general question, any pointers to other techniques and best practices are much appreciated. Thank you for the help!
Like images, video has a billion things you can optimize and fine-tune.
If it's a small portfolio project, just use Cloudinary. It will be super simple, highly optimized for you, will probably fall under the free tier, and won't require reading a 400-page book on codecs, containers, buffering, and so on.
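For what it's worth, most of the gain with Cloudinary comes from URL-based delivery transformations such as automatic quality and format selection. A rough sketch of what such a delivery URL looks like; the cloud name and public id below are placeholders, so check the exact transformation syntax against Cloudinary's docs:

```python
# Build a Cloudinary-style delivery URL asking for automatic quality (q_auto)
# and format (f_auto) for a video. All identifiers below are placeholders.
cloud_name = "my-portfolio"        # your Cloudinary cloud name (hypothetical)
public_id = "projects/demo-reel"   # the uploaded video's public id (hypothetical)

url = (
    f"https://res.cloudinary.com/{cloud_name}/video/upload/"
    f"q_auto,f_auto/{public_id}.mp4"
)
print(url)
```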
I am trying to reduce the latency on the site goldealers.co.uk.
The site appears to have a latency of anywhere between 950ms and 1500ms.
I have checked:
Processes
RAM usage
HTTP connections
Ping
Removing ALL plugins
Removing plugins doesn't make the slightest bit of difference.
The server is a cloud VPS with a dedicated 1.5 GHz processor and 1 GB of RAM.
My question:
Is latency a server / programming problem?
Do WordPress sites generally have high latency?
I have checked the latency on Forbes.com (a WordPress site), and it has a latency of only 151 ms!
I will soon be working on caching, adding expires headers, possibly using a CDN for images etc... but to be honest, there is no point if it takes over 1 second to even start to return any data.
Any advice that you can provide is much appreciated.
Your analysis and priorities are correct: start with the base page load time first, then optimize the remaining front-end components later.
In general WordPress sites by default can be a bit slow to deliver the HTML pages. Times in the range you mentioned 1-1.5 seconds are not uncommon. (For comparison, an unoptimized WordPress site I run is in the 1-3 second range.)
I would look into two areas:
Basic speed on that host
Database query speed
It could be that your webhost does not have a very fast connection. You can test this (and eliminate the WordPress part of the equation) by fetching a static file. On your site, for example, I can pull the robots.txt file down in about 0.3 seconds. The speed to serve a static file is about your minimum baseline.
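For example, here is a rough way to measure that baseline from your own machine, as a minimal sketch using only the standard library (any small static asset on the site will do):

```python
# Time how long the server takes to return a small static file; this is the
# approximate floor for any dynamic page on the same host.
import time
import urllib.request

url = "http://goldealers.co.uk/robots.txt"  # any small static asset works
start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    resp.read()
print(f"Fetched {url} in {time.perf_counter() - start:.3f} s")
```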
Next I would look at the MySQL database query speed. Is MySQL being served on the same host or a different one? The Debug Queries plugin can show you the exact queries being made and performance for each. If the DB queries appear to be the problem, the DB Cache Reloaded plugin can sometimes be helpful. It adds an additional layer of caching for frequent DB calls.
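If you want to time an individual query outside of WordPress and the plugin, a rough sketch along these lines can help isolate the database. This assumes the third-party PyMySQL driver; the host, credentials, and query below are placeholders:

```python
# Time one query directly against MySQL to see whether the database, rather
# than PHP/WordPress, is where the time goes. Credentials are placeholders.
import time
import pymysql

conn = pymysql.connect(host="localhost", user="wp_user",
                       password="secret", database="wordpress")
try:
    with conn.cursor() as cur:
        start = time.perf_counter()
        cur.execute(
            "SELECT option_name, option_value FROM wp_options "
            "WHERE autoload = 'yes'"
        )
        rows = cur.fetchall()
    print(f"{len(rows)} rows in {time.perf_counter() - start:.3f} s")
finally:
    conn.close()
```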
There are also some good suggestions in the answers to this SO question: How can I figure out why my site pages load so slowly?
Your latency is almost certainly a server-related issue. You said you have a VPS, and most VPS installations come with all Apache modules enabled, many of which you DO NOT NEED for WordPress.
Eliminating the modules you don't need reduces how much memory each Apache/PHP process consumes.
I've answered this question here on Stack Overflow: How can I figure out why my Wordpress pages load so slowly?
When I took a look at your site, I saw that a lot of time is being lost on Facebook widgets. Testing from different locations around the world, it looks like you are losing 2-3 seconds just on the Facebook widgets. Drop those and you will have a much faster site.
Is cloud hosting the way to go? Or is there something better that delivers fast page loads?
The reason I ask is that I run a BuddyPress site on a Bluehost dedicated server, but it seems to run slowly at most times of the day. This scares me because at the moment the site's not live, and I'm afraid that when it gets traffic it'll get worse and my visitors will lose interest. I use Amazon Cloud to handle all my media, JS, and CSS files, along with a caching plugin, but it still loads slowly at times.
I feel like the problem is Bluehost, because I visit other sites running BuddyPress and they seem to load instantly. I'm not web-hosting savvy, so can someone please point me in the right direction here?
The hosting choice depends on many factors such as technical requirements, growth rates, burst rates, budgets and more.
Bigger Hardware
To scale up a hosting operation, your first option is often just a more powerful server, VPS, or cloud instance. The point is not so much cloud vs. dedicated, but that you simply bring more compute power to the problem. Cloud can make scaling up easier, often with a few clicks.
Division of Labor
The next step is often division of labor. You offload the database, static content, caching, or other items to specific servers or services. For example, you could offload static content to a CDN, or move the database to its own dedicated server.
Once again, cloud vs non-cloud is not the issue. The point is to bring more resources to your hosting problems.
Pick the Right Application Stack
I cannot stress enough how important it is to pick the right underlying technology for your needs. For example, I recently helped a client switch from an Apache/PHP stack to a Varnish/Nginx/PHP-FPM stack for a very busy WordPress operation (>100 million page views/month). This change boosted capacity by nearly 5x with modest hardware changes.
Same App. Different Story
Also, just because you are using a specific application does not mean the same hosting setup will work for you. I don't know about the specific app you are using, but with Drupal, WordPress, Joomla, vBulletin and others, the plugins, site design, themes and other items are critical to overall performance.
To complicate matters, user behavior is something to consider as well. Consider a discussion forum that has a 95:1 read:post ratio. What if you do something in the design that encourages more posts, and the ratio moves to 75:1? That means more database writes, less effective caching, and so on.
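To put rough, purely illustrative numbers on that shift:

```python
# Back-of-the-envelope: what share of requests are writes at each ratio.
for reads, writes in [(95, 1), (75, 1)]:
    total = reads + writes
    print(f"{reads}:{writes} -> {writes / total:.2%} of requests are posts (writes)")
```

Going from roughly 1.0% to 1.3% writes is about a 25-30% increase in write traffic for the same number of visitors, which is exactly the kind of change that erodes cache hit rates.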
In short, details matter, so get a good understanding of your application before you start to scale out hosting.
A hosting service is part of the solution. Another part is proper server configuration.
For instance, this guy optimized his setup to serve 10 million requests a day from a micro instance on AWS.
I think you should look at your server config first, then shop for other hosts. If you can't control server configuration, try AWS, Rackspace or other cloud services.
Just an FYI: you can sign up for AWS and use a micro instance free for one year. In the link I posted, he optimized on that same server. You might have to upgrade to a small instance, though, because Amazon has stated that micro instances are only meant to handle spikes, not sustained traffic.
Good luck.
What kind of performance gain would I get from ditching Apache for NGINX if I have a very low-traffic web site (e.g. 1,000 unique visitors a day, approx. 5 requests/sec at peak load, and approx. 50 MB of traffic per day, since lots of photos are being displayed)?
Specifically, what gains (if any) would I have for:
Loading speed of the web site from the web user perspective
Server load
Concurrency
Again, this is for a low traffic web site and I'm running on a VPS.
With such low traffic, I am not sure you need to go through the trouble of changing your web server: it looks like "premature optimisation" to me.
Well, at least, if those 1,000 visitors don't visit too many pages, and don't all arrive at exactly the same time.
You'd probably see far better gains for your users (and that's what matters!) by activating gzip compression for JS/CSS/HTML, and/or combining your JS/CSS files into one file instead of several, for instance.
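As a tiny illustration of what gzip does for text assets (the content below is made up, standard library only):

```python
# Compress a repetitive CSS-like payload and compare sizes; text assets
# usually shrink dramatically under gzip.
import gzip

payload = ("body { margin: 0; padding: 0; font-family: sans-serif; }\n" * 200).encode()
compressed = gzip.compress(payload)
print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes "
      f"({len(compressed) / len(payload):.1%} of original)")
```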
On that note, running YSlow on your website and following some of the advice it gives you will probably bring more speed to your users than changing servers.
Just to be clear: I'm not saying you shouldn't optimize your server, but with such low traffic it is probably more worthwhile to make your pages display faster, at least at first.
Is your Apache server taking too much CPU or RAM? I switched from Apache to Nginx to save memory, especially for serving static files: I seem to be using about 75% less memory with Nginx.
Like the other comment said, are you sure that Apache is the bottleneck? If you are not swapping, then you have enough memory. I don't think you will save any significant server-side latency.
What various methods and technologies have you used to successfully address the scalability and performance concerns of a website? I am an ASP.NET web developer exploring .NET Remoting with WCF and SQL clustering, and I'm curious what other approaches exist (such as the 'cloud') and in which cases you would apply each one (for example, method A for roughly X 'active' users).
An example of what I mean, a myspace case study: http://highscalability.com/myspace-architecture
This is a very broad question making it difficult to answer, but I'll try and provide a few general suggestions.
1 - Unless you are doing something seriously wrong, you likely won't need to worry about performance or scale until you hit a significant amount of traffic (over 1 million page views a month).
2 - Your biggest performance problems initially are likely to be the page load times from other countries. Try the Gomez Instance Site Test to see the page load times from around the world, and use YSlow as a guide for optimizing.
3 - When you do start hitting performance problems, the cause will most likely be database work first. Use SQL Server Profiler to examine your SQL traffic for long-running queries to optimize, and use sys.dm_db_missing_index_details to look for indexes you should add.
4 - If your web servers start becoming the performance bottleneck, use a profiler (such as the ANTS Profiler) to look for ways to optimize your web page code.
5 - If your web servers are well optimized and still running too hot, look for more caching opportunities, but you're probably going to need to simply add more web servers.
6 - If your database is well optimized and still running too hot, then look at adding a distributed caching system. This probably won't happen until you're over 10 million page views a month.
7 - If your database is starting to get overwhelmed even with distributed caching, then look at a sharding architecture. This probably won't happen until you're over 100 million page views a month.
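To illustrate the idea in point 7: sharding usually means routing each user's rows to one of several database servers by hashing or modding a stable key. A minimal sketch; the shard count and host names are made up:

```python
# Route each user's data to one of N database shards using a stable key.
NUM_SHARDS = 4
SHARD_HOSTS = [f"db-shard-{i}.internal" for i in range(NUM_SHARDS)]  # hypothetical hosts

def shard_for(user_id: int) -> str:
    """Pick the database host that owns this user's rows."""
    return SHARD_HOSTS[user_id % NUM_SHARDS]

print(shard_for(42))    # -> db-shard-2.internal
print(shard_for(1001))  # -> db-shard-1.internal
```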
I've worked on a few sites that get millions of hits per month. Here are some basics:
Cache, cache, cache. Caching is one of the simplest and most effective ways to reduce load on your web server and database. Cache page content, queries, expensive computations, anything that is I/O bound. Memcache is dead simple and effective (see the sketch below).
Use multiple servers once you are maxed out. You can have multiple web servers and multiple database servers (with replication).
Reduce the overall number of requests to your web servers. This means caching JS, CSS and images using expires headers. You can also move your static content to a CDN, which will speed up your users' experience.
Measure & benchmark. Run Nagios on your production machines and load test on your dev/qa server. You need to know when your server will catch on fire so you can prevent it.
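A minimal sketch of the "cache, cache, cache" point above, assuming a memcached server on localhost and the third-party pymemcache client (the render function is a stand-in for whatever is expensive on your site):

```python
# Cache the result of an expensive render/query in memcached for five minutes.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes memcached is running locally

def render_homepage() -> bytes:
    # stand-in for the expensive DB queries / templating you want to avoid
    return b"<html>...</html>"

def get_homepage_html() -> bytes:
    html = cache.get("homepage_html")
    if html is None:                       # miss: do the work once, then cache it
        html = render_homepage()
        cache.set("homepage_html", html, expire=300)
    return html
```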
I'd recommend reading Building Scalable Websites, it was written by one of the Flickr engineers and is a great reference.
Check out my blog post about scalability too, it has a lot of links to presentations about scaling with multiple languages and platforms:
http://www.ryandoherty.net/2008/07/13/unicorns-and-scalability/
There is Velocity from MS, Memcached now has a port to .NET, and there is also indeXus.Net.