Speed up loading time of a web page with huge GIFs - performance

I have a website with lots of huge GIF images. I have limited each page to 5 images, but the loading time is still very high (over 60 seconds). The images are around 2MB in size.
Is there a way to speed up loading? Because of the nature of the images, I think they cannot be compressed (again) without decreasing quality significantly. The images are "soundless mini videos" of funny situations.
I also thought about creating multiple connections to download images faster (as many download accelerators do), but I doubt that is possible on the client side.
I also tried loading images one by one (i.e., waiting for the first image to finish downloading before adding the next through the DOM), but the total time increased (fewer connections = slower total download speed).
Does anyone have an idea?
UPDATE: Solved by using CloudFlare (see answer below).

I solved the problem by using CloudFlare:
CloudFlare protects and accelerates any website online. Once your website is a part of the CloudFlare community, its web traffic is routed through our intelligent global network. We automatically optimize the delivery of your web pages so your visitors get the fastest page load times and best performance.
Now my website loads in seconds instead of minutes; it looks like my hosting service was the bottleneck.
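For anyone who cannot put a CDN in front of the site, it is still worth testing whether the GIFs can be re-saved losslessly before assuming they cannot be compressed again. Below is a minimal sketch using the Pillow library; the file names are hypothetical, and the savings on already well-encoded GIFs may be small:

    from PIL import Image, ImageSequence

    def optimize_gif(src_path, dst_path):
        # Re-encode every frame and let Pillow optimize the palette; quality
        # is unchanged, but savings depend on how the source was encoded.
        im = Image.open(src_path)
        frames = [frame.copy() for frame in ImageSequence.Iterator(im)]
        frames[0].save(
            dst_path,
            save_all=True,                         # keep the animation
            append_images=frames[1:],
            optimize=True,                         # palette optimization
            loop=im.info.get("loop", 0),
            duration=im.info.get("duration", 100)  # frame delay in ms
        )

    optimize_gif("funny.gif", "funny-small.gif")   # hypothetical file names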

Related

No load time difference with and without Varnish

I'm trying to cache static files on my server using Varnish Cache. I configured Varnish to cache files with image extensions (.jpg, .png, etc.). After that I opened my website, debugged it with the browser developer tools, and checked the load time of all the images on my site, and there is no difference in load time whether I use Varnish or not. There is a "HIT" in the X-Cache entry in the response header, so the images are available in my cache, right? Any idea what I could be doing wrong?
PS: I'm using nginx as the backend server.
Varnish shouldn't have a real impact on static files, especially when they're stored on an SSD. Very heavily frequented sites may be an exception, particularly when the data sits on a (slow) HDD: there you have a huge amount of I/O that can be greatly reduced by caching the images in RAM with Varnish. But those are the special cases where caching static files makes sense. It's also worth noting that nginx is a very fast web server that is very good at delivering static files.
The main purpose of Varnish is caching HTML generated by a server-side backend like PHP, ASP.NET, and other languages designed for this task. Compared with serving static files, generating dynamic content is very time-consuming: the backend has to run database queries, which are very common in web applications today, or parse templates. WordPress is a widespread CMS and a good example of this: several tens of thousands of lines of PHP code are executed on a single request, and depending on the number of plugins, 100 database queries or more are no exception.
So the server has a lot of work to do on every request. For you as a site owner, this has the following effects:
The load time of the page increases, which leads to problems when it gets too high:
Visitors are not very patient and will leave your page if they think it isn't fast enough. An online shop making $100k per day can lose up to $2.5 million per year from a 1-second delay (see https://blog.kissmetrics.com/loading-time/ for more information)
Given this, it's not surprising that Google uses load time as a ranking signal (see http://www.shoutmeloud.com/google-started-ranking-websites-based-on-load-time-and-speed.html)
Depending on the number of visitors, it can cost you money for more (or more powerful) servers
Varnish can store the HTML generated by a backend in RAM or on a hard drive; especially with an SSD, the latter makes sense. Depending on the structure and use of your site, Varnish will at least improve the speed of your page and may also save money because fewer (and less powerful) servers will do the job.
When Varnish is used as a frontend for dynamically generated content, you'll notice a real difference; depending on the application, even a big one. I configured Varnish for a vBulletin-based forum and improved the page load time by about 5x.
To summarize: you should focus on caching dynamic pages instead of static things like images or client scripts, because in most cases the web server is already good enough to deliver those. When static content is really slow, it can probably be improved with a CDN. Or maybe your web server is not configured for optimal speed; perhaps no cache lifetime is defined for images, for example. That can have a negative impact on performance, especially for larger files. But without further details about your application and configuration, it's not possible to investigate the performance issue and give concrete tips on how it can be improved.
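To see what Varnish is actually doing, you can compare two consecutive fetches of the same object and inspect the cache headers and timings. A minimal sketch with the Python requests library; the URL is a placeholder, and the X-Cache header name is taken from the question:

    import time
    import requests

    url = "http://example.com/images/logo.png"  # placeholder: any cached asset

    for attempt in (1, 2):
        start = time.perf_counter()
        resp = requests.get(url)
        elapsed = (time.perf_counter() - start) * 1000
        # On the second request, X-Cache should report HIT and Age should grow
        # if Varnish is really serving the object from cache.
        print(f"attempt {attempt}: status={resp.status_code} "
              f"X-Cache={resp.headers.get('X-Cache')} "
              f"Age={resp.headers.get('Age')} time={elapsed:.1f} ms")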

Slow Joomla site from time to time

The Joomla 2.5 site (shared hosting) http://dppumps.eu/home (temporary homepage) is extremely slow from time to time. When visiting it after a long time, it may take up to 30 seconds to load, and several times I have gotten a 500 Internal Server Error after it loaded for a long time. When refreshed, it takes from 0.5 to 4 seconds to load. How is it possible to have such a big range in loading time? Maybe a server issue, or something in my script? Thank you. (I have created numerous sites with Joomla 2.5 hosted at the same hosting company with no problems.)
You really do need to do some research before you ask questions like this.
First thing you should always do is run a site speed test using Pingdom or something similar. I've run a test on yours, and you're initially loading 11.1 MB of data.
6.6 MB of this data is from 2 Flash video files which cannot even be found. You then have to take into consideration that you've most likely not used web compression when saving images for your website.
In addition to this, you may want to consider enabling Joomla's GZip compression in the Global Configuration and enable the System Cache plugin. Should you not want to do this, I would strongly suggest using a caching plugin such as JCH Optimize.
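If you enable GZip and want to verify it took effect, comparing the transferred sizes with and without compression shows it directly. A minimal sketch with the Python requests library; the URL is a placeholder:

    import requests

    url = "http://example.com/"  # placeholder URL

    plain = requests.get(url, headers={"Accept-Encoding": "identity"})
    gzipped = requests.get(url, headers={"Accept-Encoding": "gzip"})

    # requests transparently decompresses the body, so use Content-Length
    # for the on-the-wire size of the compressed response.
    print("Content-Encoding:", gzipped.headers.get("Content-Encoding"))
    print("uncompressed bytes:", len(plain.content))
    print("compressed bytes:  ", gzipped.headers.get("Content-Length"))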
Article upon article has been written about slow websites, so please look around to see which methods suit you best.

How can I improve the performance of this architecture?

I'm running a website that is CPU heavy due to a lot of thumbnailing of images.
This is how I currently do things:
User uploads image to server
Server keeps a copy, and stores the image on Amazon S3
When a thumbnail is requested, the server uses the local copy to generate it and then stores it on S3; then it gives the S3 URL to the client
Subsequent requests are optimized like this: the server caches the S3 URL in memcached so it won't do the work again; the server never regenerates a thumbnail if the file already exists; and the server uses mid-sized thumbnails to generate small ones, so it doesn't work with large files when not necessary. (A minimal sketch of this flow is below.)
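A minimal sketch of that request path, assuming the python-memcached client and hypothetical helper functions (generate_thumbnail, upload_to_s3):

    import memcache  # python-memcached; any memcached client works

    mc = memcache.Client(["127.0.0.1:11211"])

    def generate_thumbnail(image_id, size):
        # Hypothetical helper: resize the local copy, return its path.
        return f"/tmp/{image_id}_{size}.jpg"

    def upload_to_s3(local_path):
        # Hypothetical helper: push the file to S3, return its public URL.
        name = local_path.rsplit("/", 1)[-1]
        return f"https://s3.amazonaws.com/my-bucket/{name}"

    def thumbnail_url(image_id, size):
        key = f"thumb:{image_id}:{size}"
        url = mc.get(key)
        if url is None:  # cache miss: do the expensive work exactly once
            url = upload_to_s3(generate_thumbnail(image_id, size))
            mc.set(key, url)
        return url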
Now, I'm hosting on a Linode 4G instance (8 cores with 4x priority, 4GB RAM), and despite my optimizations and a memcached hit ratio of 70%, my average CPU is 170%. I'm constantly seeing all 8 CPUs working, with frequent spikes of 100% on many of them at the same time.
I'm using nginx and gunicorn to serve a Django application, and the thumbnails are generated with PIL.
How can I improve this architecture?
I was thinking about a few possibilities:
#1. Easiest: add a second identical server with a load balancer in front, so that they'd share the load.
The problem with this is that the two servers would not share the local image cache. Could I solve this by placing that cache on a network drive, or would the latency ultimately hinder the gains?
#2. A little harder: split the thumbnailing code out of my app as a separate web service running on a second server. This way the main application and database would not suffer from high CPU usage, and the web pages would be served fast. The thumbnails are already served asynchronously with JavaScript anyway.
Can anyone recommend some other solution?
Are you sure your performance problems come from thumbnails? OK, I suppose you've checked that.
You can downsize and upload the 2 thumbnails to S3 immediately (or shortly) after the user uploads the image. This way you should be able to save the unnecessary CPU load you're now spending on every HTTP request checking those thumbnails and doing IPC with memcached.
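A sketch of that approach, generating both thumbnail sizes once at upload time with PIL and boto3; the bucket name and sizes are hypothetical:

    import io
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")
    BUCKET = "my-thumbnails"                        # hypothetical bucket
    SIZES = {"mid": (300, 300), "small": (60, 60)}  # hypothetical sizes

    def process_upload(image_id, file_obj):
        # Resize once at upload time, so request handlers never touch PIL.
        im = Image.open(file_obj).convert("RGB")    # JPEG output needs RGB
        for name, size in SIZES.items():
            thumb = im.copy()
            thumb.thumbnail(size, Image.LANCZOS)    # keeps aspect ratio
            buf = io.BytesIO()
            thumb.save(buf, format="JPEG", quality=85)
            buf.seek(0)
            s3.upload_fileobj(buf, BUCKET, f"{image_id}/{name}.jpg")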
In a way your problem is a "good" problem to have (or at least it could have been a lot worse), in that there are no dependencies between separate image resizing tasks, so you can trivially distribute them over multiple servers. A few comments:
Have you checked to see if there is anything you can do to make the image resizing operations faster? (Google brought this up, don't know if it's any help: http://dmmartins.appspot.com/blog/speeding-up-image-resizing-with-python-and-pil) Even if you still find you need to add more servers, anything you can do to make each resize operation more efficient will make each server go farther. (A small illustration follows after these comments.)
If your user base keeps growing, you will eventually need to "scale out", but in the short term, it is possible you could solve the problem simply by paying another $80 for the next "tier" of service (8 cores at 8x priority).
Is image resizing really your app's only bottleneck? If image resizing was "free", how much further can you scale on your existing server before rendering pages, running DB queries, etc. would limit throughput? If you don't know, it would be good to do some simulated load testing and find out. I ask because if rendering pages, DB queries, etc. are also bottlenecks, or are soon to become bottlenecks, you are going to have to distribute the app anyways. In that case, you might as well keep thumbnailing in the main app, and distribute it right now, rather than making your thumbnailing run as a web service on a 2nd server.
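In the spirit of the link in the first comment, one standard PIL trick is JPEG draft mode, which lets the decoder downscale while decoding so you never hold the full-size bitmap in memory. A minimal sketch (JPEG-only; file names are hypothetical):

    from PIL import Image

    im = Image.open("large_photo.jpg")       # hypothetical input file
    # draft() lets the JPEG decoder decode at 1/2, 1/4 or 1/8 scale, which
    # is much cheaper than decoding full size and resizing afterwards.
    im.draft("RGB", (300, 300))
    im.thumbnail((300, 300), Image.LANCZOS)  # finish with a quality resize
    im.save("thumb.jpg", quality=85)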
Regardless of whether you distribute the main app, or split out thumbnailing into a separate app on a different server, you need some kind of authoritative store to keep track of where each thumbnail is kept on S3. You can keep that information in memcached, in a database, or wherever you want. It doesn't really matter. Even if you keep it in memcached, that doesn't mean you can't share the cache between 2 servers -- 1 server can connect to a memcached instance running on the other server.
You asked if "the latency" of checking a cache which is held on a different server will "hinder the gains". I don't think you need to worry about that. Your problem is throughput, not latency. Those high-latency network operations parallelize very well. So if you just service more requests in parallel, you can still make full use of your CPUs (which is the resource bottleneck right now).

High latency on my WordPress site

I am trying to reduce the latency on my site goldealers.co.uk
The site appears to have a latency of anywhere between 950ms and 1500ms.
I have checked:
Processes
RAM usage
HTTP connections
Ping
Removing ALL plugins
Removing plugins doesn't make the slightest bit of difference.
The server is a VPS cloud server with a dedicated 1.5GHz processor and 1GB RAM.
My question:
Is latency a server / programming problem?
Do WordPress sites generally have a high latency?
I have checked the latency on Forbes.com (a WordPress site) - this only has a latency of 151ms!!!
I will soon be working on caching, adding expires headers, possibly using a CDN for images etc... but to be honest, there is no point if it takes over 1 second to even start to return any data.
Any advice that you can provide is much appreciated.
Your analysis and priority are correct - starting with the base page load time first, then later optimizing the remaining front-end components.
In general WordPress sites by default can be a bit slow to deliver the HTML pages. Times in the range you mentioned 1-1.5 seconds are not uncommon. (For comparison, an unoptimized WordPress site I run is in the 1-3 second range.)
I would look into two areas:
Basic speed on that host
Database query speed
It could be that your webhost does not have a very fast connection. You can test this (and eliminate the WordPress part of the equation) by fetching a static file. On your site, for example, I can pull the robots.txt file down in about 0.3 seconds. The speed to serve a static file is about your minimum baseline.
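A quick way to quantify that baseline is to time a static file against the generated page; the difference approximates the time PHP and MySQL spend building the page. A sketch with the Python requests library, using the URLs from the question:

    import time
    import requests

    def timed_get(url):
        start = time.perf_counter()
        requests.get(url)
        return time.perf_counter() - start

    static = timed_get("http://goldealers.co.uk/robots.txt")
    dynamic = timed_get("http://goldealers.co.uk/")
    print(f"static: {static:.3f}s  dynamic: {dynamic:.3f}s  "
          f"overhead: {dynamic - static:.3f}s")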
Next I would look at the MySQL database query speed. Is MySQL being served on the same host or a different one? The Debug Queries plugin can show you the exact queries being made and performance for each. If the DB queries appear to be the problem, the DB Cache Reloaded plugin can sometimes be helpful. It adds an additional layer of caching for frequent DB calls.
There are also some good suggestions in the answers to this SO question: How can I figure out why my site pages load so slowly?
Your latency is almost certainly a server-related issue. You said you have a VPS, and most VPS installations come with all Apache modules enabled - all of which you DO NOT NEED for WordPress.
Eliminating all of the modules you don't need reduces how much memory each PHP instance will consume.
I've answered this question here on Stack Overflow: How can I figure out why my WordPress pages load so slowly?
When I took a look at your site, I saw that a lot of time is being killed on Facebook widgets. Testing from different locations around the world, it looks like you are losing 2-3 seconds just on the Facebook widgets. Drop those and you will have a much faster site.

Optimal delivery method for a large quantity of images

I have a website centered around an online chat application where each user can have up to several hundred contacts. Each contact has their own profile image. I want the contact's profile image to be loaded next to their name. However, having the user download 100+ images every time they load the site seems intensive (studies have shown that as much as 40% of users don't utilize their cache). Each image is around 60x60 pixels in dimension.
When I search on google or sign on to facebook, dozens of images are served nearly instantaneously. Beyond just having fast servers and a good connection, what are the optimal methods for delivering so many images to the user?
Possible approaches I have come up with are:
Storing each user's profile image in a database, constructing one combined image in a PHP file, then having the user download that and using CSS to display each profile image (see the sketch after this list). However, this seems extremely intense on the server, and referencing such a large file so many times might take a toll on the user's browser.
Using nginx rather than Apache to serve the images (nginx generally works better for serving static content such as this). However, this seems more like an optimization of a solution, rather than a solution in itself.
I am also aware that data can be delivered across persistent HTTP connections, so multiple requests do not have to be made to the server for multiple files. However, exactly how many files can be delivered across one persistent connection? Would this persistent model mean that just having the images load as separate files would not necessarily be a bad idea?
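For the first approach in the list above (one combined image addressed with CSS), here is a minimal sprite-sheet sketch with PIL; the file names and grid layout are hypothetical:

    from PIL import Image

    TILE = 60  # avatar size from the question

    def build_sprite(avatar_paths, out_path, columns=10):
        # Stitch avatars into one sheet; the client downloads a single image
        # and CSS background-position selects each contact's face.
        rows = -(-len(avatar_paths) // columns)  # ceiling division
        sheet = Image.new("RGBA", (columns * TILE, rows * TILE))
        for i, path in enumerate(avatar_paths):
            avatar = Image.open(path).resize((TILE, TILE))
            x, y = (i % columns) * TILE, (i // columns) * TILE
            sheet.paste(avatar, (x, y))
        sheet.save(out_path)
        # The i-th avatar is then shown with CSS along the lines of:
        #   background: url(sprite.png) -<x>px -<y>px;

    build_sprite(["alice.png", "bob.png"], "sprite.png")  # hypothetical files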
Any suggestions, solutions, and/or notes on personal experiences with relevant matters would be greatly appreciated. Scalability is extremely important here, as well as cross-browser support (IE7+, Opera, Firefox, Chrome, Safari)
EDIT: I AM NOT USING JQUERY.
Here's a jQuery plugin that delays loading images until they're actually needed (i.e., it only loads images "above the fold"):
http://www.appelsiini.net/2007/9/lazy-load-images-jquery-plugin
An alternative may be to use Flash to display just the images. The advantage is that Flash has a much stronger local cache that you have programmatic control over.