I have a website, www.fullyawaken.com, and the images on it load very erratically. Sometimes they load in 5 seconds, other times it takes a full 30-50 seconds. I've compressed all the images as much as possible and none of them is larger than 500 KB. Has anyone had this problem before? What can I do? I'm trying to do as much as possible to speed up load times - I've enabled gzip compression and optimized almost everything I can think of, but I don't know how to deal with this issue.
To start with, I tested your website with PageSpeed and GTmetrix and got the results below.
https://gtmetrix.com/reports/www.fullyawaken.com/sewTPgWd
Though you have taken care of performance to some extent, it suggests you can optimize your website further and has recommendations as well.
Most of them are along the lines of:
Minify JavaScript
Defer parsing of JavaScript
Optimize images (it's possible to compress them more than you have)
Serve resources from a consistent URL
If you are not sure how to do any of these, the report includes suggestions too. I would do these things first; if the problem still persists, you can look at design changes or dig deeper.
Let me know if you can't see those results; I have a PDF report.
As part of my routine scan through things like PageSpeed Insights, I decided to put my images on separate hosts to improve parallel downloads. Images are now served from a subdomain like this:
http://n26eh5.i.example.com/img/something.png
where that n26eh5 is the file's modification time - fantastic for caching, since it changes immediately and automatically whenever the file does. Two birds with one stone, right?
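In PHP, generating such a URL boils down to something like the sketch below (the helper name and the base-36 encoding of the timestamp are illustrative assumptions, not details from the post):

```php
<?php
// Simplified sketch of how such a URL can be generated (PHP assumed;
// the helper name and base-36 encoding of the timestamp are illustrative).
function imageUrl($relativePath, $docRoot = '/var/www/htdocs')
{
    // Encode the file's modification time as a short base-36 token,
    // e.g. "n26eh5". Any change to the file changes the token, so
    // clients automatically fetch the new version.
    $mtime = filemtime($docRoot . '/' . ltrim($relativePath, '/'));
    $token = base_convert((string) $mtime, 10, 36);

    // Serve the image from the wildcard subdomain *.i.example.com.
    return sprintf('http://%s.i.example.com/%s', $token, ltrim($relativePath, '/'));
}

// Usage in a template:
// echo '<img src="' . imageUrl('img/something.png') . '" alt="...">';
```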
Sure enough, I ended up scoring full points on the parallel downloads front.
Today I ran a test on another site. It's now telling me that I have too many DNS lookups.
Now... I have *.i.example.com set up as a wildcard vhost, but is that DNS lookup going to be an issue? With DNS caching, is it even a problem? After all, the browser will only fetch that image the first time it is requested and load it from cache every time afterwards.
Should I look for a balance, or continue using the wildcard subdomain as I am now?
This question is mentioned in the talk Re-evaluating Front-end Performance Best Practices by Ben Vinegar: https://www.youtube.com/watch?v=yfDM4M6MC8E
The takeaway from it is that it is not a good idea to use more than two hostnames in this way. My own impression from the talk is that it isn't really worthwhile doing at all: there are most likely other things you can improve for performance anyway, you will probably have to do extra coding to make this work, and we should try to write as little code as possible to reach a solution.
He also said he probably should have answered the question and felt bad for not doing so.
I think my question says it all. With many successful sites using lots of user-uploaded images (e.g., Instagram), it just seems, in my perhaps naive opinion, that source code minification might be missing the boat.
Regardless of the devices one is coding for (mobile/desktop/both), wouldn't developers' time be better spent worrying about the images they are serving up rather than their code sizes?
To optimize for mobile browsers' slower connections, I was thinking it would probably be best to have multiple sizes of each image and write code to serve up the smallest one if the user is on a phone.
Does that sound reasonable?
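Something like this rough server-side sketch is what I have in mind (the phone-detection heuristic and the "-small" file naming are illustrative assumptions, not an existing implementation):

```php
<?php
// Keep several pre-scaled copies of each image and serve the smallest
// acceptable one to phones. The detection heuristic and the "-small"
// naming convention are illustrative only.
function responsiveImageSrc($basePath)
{
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $isPhone = (bool) preg_match('/iPhone|Android.+Mobile|Windows Phone/i', $ua);

    if ($isPhone) {
        // e.g. img/hero.jpg -> img/hero-small.jpg
        $small = preg_replace('/\.(jpe?g|png|gif)$/i', '-small.$1', $basePath);
        if (is_file($_SERVER['DOCUMENT_ROOT'] . '/' . ltrim($small, '/'))) {
            return $small;
        }
    }
    return $basePath;                 // fall back to the full-size image
}

// Usage: echo '<img src="/' . responsiveImageSrc('img/hero.jpg') . '" alt="">';
```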
First, I don't know that many developers do "worry" about minification of scripts. Plenty of public facing websites don't bother.
Second, minification is the removal of unnecessary whitespace, whereas decreasing the size of an image usually entails reducing its quality, so there is some difference.
Third, I believe that if it weren't so easy to add a minification step to a deployment process, it would be even less popular. True, it doesn't save much bandwidth, but if all it takes is a few minutes to configure a deployment script to do it, then why not?
100 KB for a medium-sized JS library is no lightweight. You should optimize your site as well as you can. If minifying a script gets it to half its size, and gzipping saves another third on top of that, why wouldn't you?
I don't know how you do your minifying, but there are many scripts that automate the process and will often bundle all of your JS into one package on the fly. This saves bandwidth and hassle.
My philosophy is always "use what you need". If it is possible to save bandwidth for you and your users without compromising anything, then do it. If you can compress your images way down and they still look good, then do it. Likewise, if you have some feature that your web application absolutely must have, and it takes quite a bit of space, use it anyway.
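If you'd rather measure what gzipping actually saves on your own files than take the percentages above on faith, a few lines of PHP will do it (the path is a placeholder; run it once on the original file and once on the minified copy to see how the two steps combine):

```php
<?php
// Compare the raw and gzipped size of a file. Run it against both the
// original and the minified copy to see how minification and gzip combine.
$file = 'js/app.js';                        // placeholder path
$raw  = file_get_contents($file);
$gz   = gzencode($raw, 9);                  // roughly what mod_deflate achieves

printf("raw:     %7d bytes\n", strlen($raw));
printf("gzipped: %7d bytes (%.0f%% of raw)\n",
    strlen($gz), 100 * strlen($gz) / strlen($raw));
```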
This is kind of a strange title, so let me explain:
We have a web application (PHP, Zend Framework) that is quite successful. Over time traffic grew and performance degraded (from tens of requests averaging 80 ms to tens of thousands of requests averaging over 600 ms). We didn't expect so much traffic when first designing the application, so no big surprise. We decided to look into the many things that could improve the performance.
A few days into the effort, a production bug appeared that needed to be fixed. Since the first changes we had made to clean up some queries and caching code were already done and tested, we figured we could just add them to the update. None of the changes had really improved performance much in local testing or staging, but anyway.
But yeah, they did in production. Our graphs plunged to almost zero and we were devastated, thinking the update had somehow made all the traffic disappear. But as we looked closer, the graphs were back at 80 ms and almost invisible next to the 600 ms mountains ;)
So we completely fixed the performance problems with changes we didn't even think would make a difference. Total success, but of course we want to understand which of those changes made the difference.
How would you tackle this problem?
Some background:
PHP application using Zend Framework, MySQL as the database, and Memcache for caching (see the sketch after this list for the general pattern).
We get our performance graphs and insight into the application from NewRelic.com, but I can't really find the reason for the better performance there.
Using JMeter we could reproduce the bad performance on our dev servers, and more or less the better performance of the updated version as well.
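(To make the caching part concrete: the Memcache layer sits in front of MySQL in the usual read-through pattern, roughly like the simplified sketch below; the key name, TTL and query are placeholders rather than the real code.)

```php
<?php
// Simplified read-through cache in front of MySQL (placeholders only,
// not the application's real keys, TTLs or queries).
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

function getUserStats(Memcache $memcache, PDO $db, $userId)
{
    $key = 'user_stats_' . (int) $userId;

    $cached = $memcache->get($key);
    if ($cached !== false) {
        return $cached;                          // cache hit: skip the database
    }

    $stmt = $db->prepare('SELECT visits, score FROM user_stats WHERE user_id = ?');
    $stmt->execute(array((int) $userId));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    $memcache->set($key, $row, 0, 300);          // cache miss: store for 5 minutes
    return $row;
}
```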
The only idea I have right now is to start with the old version, load test it, add one commit, load test it, add another feature, load test it... but this doesn't sound like much fun or very effective.
Update: We found the reason for the performance problems; I will add an answer later to explain what we did and what the cause was. (Or how are updates and solutions to such questions usually handled?)
Update 2: Will add solution and way to find it as answer.
I think the easiest way would be to use XDebug or Zend Studio to debug your application.
Running it through the profiler will show you a breakdown of the execution flow, and all methods called, how long they took, and how much memory you used. The profiler should reveal if some block of code is called many times, or if there is something that simply takes a long time to execute sometimes.
If you do see 20ish millisecond responses from the profiler, then I would run a load tester in the background while I profiled on a different machine to see if heavy load seems to explain some of the time increases, and if so, what exactly is taking longer.
To me, that is the easiest way to see what is taking so long, rather than loading different versions of the code and seeing how long they take. Doing it that way, you at least know which branch had the speed problem, but you are still left to hunt down why it is taking so long, as it may not be as simple as one piece of code being changed or optimized. It could be a combination of things.
I use Zend Studio for profiling and it is a huge time saver with that feature. XDebug's profiler is very similar, AFAIK.
Docs:
http://files.zend.com/help/Zend-Studio/profiling.htm
http://xdebug.org/docs/profiler
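If running a full profiler isn't convenient on a particular box, a crude microtime()-based timer around suspect code paths can at least narrow things down first (a throwaway sketch with a made-up helper; no substitute for Xdebug or Zend Studio):

```php
<?php
// Throwaway timing helper: wrap suspect code paths and log anything slow.
function timed($label, $fn)
{
    $start   = microtime(true);
    $result  = call_user_func($fn);
    $elapsed = (microtime(true) - $start) * 1000;

    if ($elapsed > 50) {                         // log anything over 50 ms
        error_log(sprintf('[slow] %s took %.1f ms', $label, $elapsed));
    }
    return $result;
}

// Usage (assuming $db is a PDO connection):
// $rows = timed('front page query', function () use ($db) {
//     return $db->query('SELECT 1')->fetchAll();
// });
```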
Ideally you need to profile the old version of the app and the new version with the same realistic data, but I somehow doubt you're going to have the time or inclination to do that.
What you could do is start by comparing the efficiency of the DB queries you've rewritten against the previous versions; also look at how often they're called, and at what effect the caching you've introduced has on that.
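A concrete way to do that comparison is to ask MySQL for the execution plan of the old and the rewritten query via EXPLAIN and diff the output (a sketch; $db is assumed to be an existing PDO connection and the tables are hypothetical):

```php
<?php
// Fetch MySQL's execution plan for a query so the old and rewritten
// versions can be compared side by side ($db is an existing PDO
// connection; the tables are hypothetical).
function explainQuery(PDO $db, $sql, array $params = array())
{
    $stmt = $db->prepare('EXPLAIN ' . $sql);
    $stmt->execute($params);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// Run this once for the old query and once for the rewritten one, and
// watch the "type", "key" and "rows" columns: type=ALL (a full table
// scan) or a huge row estimate marks the query worth digging into.
print_r(explainQuery(
    $db,
    'SELECT o.id, o.total FROM orders o JOIN users u ON u.id = o.user_id WHERE u.id = ?',
    array(42)
));
```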
What I would also do is change the process going forward so that you introduce change as a flow (continuous integration/deployment style) so that you can see the impact of individual changes more clearly.
So what was the problem? Two additional ' characters in a MySQL query. The numeric value going into the method accidentally was a string, so the ORM put ' around it. Normally these problems are caught by the optimizer, but in this case it was a quite complicated combination of JOINs; perhaps that's why it was missed. Because this was also the most-used query, every execution of it was a tiny bit slower - and that made all the difference in the end.
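Boiled down to plain PDO (our real code goes through the ORM, so this is only an illustration of the same mistake, assuming $db is a PDO connection):

```php
<?php
// Illustration of the mistake in plain PDO. user_id is an INT column,
// but the value arrived as a string, so the ORM quoted it: the generated
// SQL said WHERE user_id = '42' instead of WHERE user_id = 42. Normally
// the optimizer copes with that, but in our complicated JOIN it did not,
// and every execution of the busiest query got a little slower.
$userId = $_GET['id'];                              // arrives as the string "42"

$stmt = $db->prepare('SELECT id, total FROM orders WHERE user_id = :id');

// The fix: make sure the value is an integer before it reaches the query
// builder (or bind it explicitly as one).
$stmt->bindValue(':id', (int) $userId, PDO::PARAM_INT);
$stmt->execute();
```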
When you simply cannot optimize and scale locally any more, take a look here:
http://www.zend.com/en/products/php-cloud/
In terms of HTTP request performance, should I pick AJAX or Flash? To be more specific, I'm more into Flash than AJAX, and I'm currently working on a large-scale web project. I wanted to try AJAX for once, and now it's getting too messy for me. Before it gets more complicated, I thought maybe I could run Flash in the background for HTTP requests and drive it from JavaScript.
I couldn't find any benchmark on the Internet, but I think AJAX is faster than Flash. What's your personal experience? Is there much of a difference between Flash and AJAX?
Flash and JS both use the browser to send HTTP requests so I don't see any reason there would be a difference in performance between them.
From my personal experience, AJAX tends to be a little faster than Flash, depending on what movie you're showing. If your movie is extremely large it will take longer, but for small content they're virtually as fast; the difference is barely noticeable. However, keep in mind I'm testing on a fairly good laptop; on other devices and machines, like cell phones, the difference might be bigger (probably Flash would be slower).
Hope this helps a bit!
N.S.
I agree that AJAX is generally faster than Flash at performing a similar request, but really the speed difference should be a negligible consideration. Having the additional requirement of a Flash movie just to act as an HTTP communication tool seems like a bad idea, because you are still going to need a JavaScript solution for when Flash is unavailable.
I wonder where the proof is in any of these responses. I've used both: I started off doing lots of HTML and JS programming, used AJAX when it was first getting traction, and found it to be okay with regard to performance. AMF3 is faster than JSON, hands down. Why? Not because of differences in the HTTP standard that they both hitch a ride on, but because of the way the data itself is represented (the compression schemes and the serialization/deserialization mechanisms make all the difference).
Go ahead and check it out for yourself: http://www.jamesward.com/census2/
(After all, the best proof is a test.)
Dojo JSON with gzip compression comes closest to AMF3, but still produces a payload about 160% the size of the AMF payload, and something one and a half times larger is, in my opinion, never going to be faster given equivalent bandwidth. I believe that with the latest JavaScript engines, the time to deserialize the data directly in the browser versus having the Flash plugin do that work might make JSON faster for small payloads, but for large amounts of data I don't think that processing-time difference would make up for the payload size.
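If you want to sanity-check the JSON half of that comparison against your own data, measuring raw versus gzipped JSON size takes a few lines of PHP (the sample data is made up; the AMF3 number has to come from your own Flash tooling or the census benchmark linked above):

```php
<?php
// Measure the raw and gzipped size of a JSON payload. The sample data
// is made up; substitute a realistic result set from your application.
$data = array();
for ($i = 0; $i < 5000; $i++) {
    $data[] = array('id' => $i, 'name' => 'row ' . $i, 'score' => $i * 3.14);
}

$json = json_encode($data);
$gz   = gzencode($json, 9);

printf("JSON:        %d bytes\n", strlen($json));
printf("JSON + gzip: %d bytes\n", strlen($gz));
// Compare these against the AMF3 payload size reported by your Flash
// tooling (or the census2 benchmark above) for the same data.
```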
Which is the BEST way to optimize a web site for faster download without affecting my ranking?
How do I optimize my CSS file and images?
If you have Firefox, download Yahoo's YSlow plug-in. It will look at your site and suggest things you can do to speed it up.
http://developer.yahoo.com/yslow/
That's two completely different questions.
To optimize download speed, use something like Minify.
To rank higher in search engines, provide content that people want to see in an easily accessible way, and the high search ranking will come by itself.
Make pages primarily for users, not for search engines. Don't deceive your users or present different content to search engines than you display to users, which is commonly referred to as "cloaking."
Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you'd feel comfortable explaining what you've done to a website that competes with you. Another useful test is to ask, "Does this help my users? Would I do this if search engines didn't exist?"
Source
If you use a lot of images in your CSS, CSS sprites can reduce the number of HTTP requests needed to generate your page. SpriteMe can help build the sprite for you.
As Mark Byers noted, these are two separate questions. In regards to web performance, you might be interested in the Performance Advent Calendar 2009, an interesting series about different ways you can improve the performance of your web site.
Check out Google Page Speed for a Firebug plugin that will test and recommend things you can do to speed up your pages.
http://code.google.com/speed/page-speed/
As for SEO, that is a far more complicated issue.
The best advice I can give you is to make your content the best it possibly can be, and include your keywords in your page title and page URL.
Those three things will make more impact than all the other advice you will hear.
“which is the BEST way to optimise a web site for faster download”
HTML/CSS/JavaScript
Gzipping. In my limited experience, this tends to reduce HTML, CSS and JavaScript file size by about 60–70%. If you do nothing else, do this.
After you’ve done that, minifying your code can knock off another 5–10%.
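(Gzipping is normally switched on at the web server via mod_deflate or the equivalent, but if you only control the PHP side, a minimal sketch looks like this.)

```php
<?php
// Compress dynamic PHP output when the web server's gzip module isn't
// available. ob_gzhandler checks the browser's Accept-Encoding header
// and only compresses when it's supported; the guard avoids doubling
// up with zlib.output_compression.
if (!ini_get('zlib.output_compression')) {
    ob_start('ob_gzhandler');
}

// ... render the page as usual ...
echo '<html><body>Hello</body></html>';
```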
Images
JPG for photographs: set the quality as low as you can without it looking unacceptably bad. (You can generally halve the file size, with no noticeable difference to the image, using 70–80% quality.)
PNG for images with gradients and whatnot. Use pngcrush to get these as small as possible.
GIF for small images that only use a few different colours. Save them with the smallest colour palette that works. (In Photoshop, see Save for Web.)
Other
After you’ve done all that, pop your stylesheets, JavaScript files and images up on a CDN, e.g. Amazon Cloudfront. This will almost certainly deliver the files quicker than whatever server you have.
Have as few stylesheet, JavaScript and image files as possible, to reduce the number of HTTP requests. Spriting your CSS background images can help with this.
The Basics
But, as always, measure. Don’t blindly apply anything we’ve told you here. Measure how quickly your site’s downloading. Make a change. See if it speeds up your site, or slows it down. And balance any speed improvement you get against other factors, e.g. how much more difficult the code is to understand and maintain.
“without affecting my ranking”
That’s pretty much up to Google. I haven’t heard of any of the approaches suggested here causing issues for search engines. If you want to show up high in search results, get everyone on the internet to link to you.
For reducing bandwidth, I have been using Web Site Maestro for a few years; it's great.