This is a very vague question, but I have noticed that Polymer (both version 1.0 and 2.0) is very slow to load in Mozilla Firefox when you have a large number of custom elements.
Does anyone know any high-level, general strategies for improving the load time?
The page loads near instantly in Chrome, but can take up to 4 seconds in Firefox. I can see on the network tab that the resources are arriving just as fast, but the in-browser render time is significantly slower.
Even a method to test exactly what is taking so long to load in Firefox would be very helpful. Right now, I can see each network request, but not the browser processing time itself.
Thanks in advance!
Edit: the elements are all minified and vulcanized into one file.
Try loading webcomponents-lite.min.js instead of webcomponents.js; this should give you much better performance in Firefox.
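A minimal sketch of how that swap might look (the file paths and the feature checks below are assumptions, not part of the original answer): feature-detect native support and only load the lighter polyfill when the browser actually needs it.

(function () {
  // Only load the polyfill when the browser lacks native web components support.
  var hasNativeSupport =
    'customElements' in window &&
    'content' in document.createElement('template') &&
    'import' in document.createElement('link');
  if (!hasNativeSupport) {
    // document.write keeps the polyfill synchronous, so it is parsed before
    // the vulcanized element bundle referenced later in index.html.
    document.write(
      '<script src="bower_components/webcomponentsjs/webcomponents-lite.min.js"><\/script>'
    );
  }
})();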
Try using the PRPL-pattern.
It's described here: https://developers.google.com/web/fundamentals/performance/prpl-pattern/
Or use the optimizations polymer build gives you:
https://www.polymer-project.org/2.0/docs/tools/polymer-cli
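As a rough illustration of the "lazy-load" part of PRPL (the element and file names below are assumptions, loosely following the Polymer starter kit pattern), each view's bundle is imported only when its route becomes active, instead of being vulcanized into one big file:

// Inside the app shell element: import the fragment for the active page on demand.
_pageChanged(page) {
  var resolvedPageUrl = this.resolveUrl('my-' + page + '-view.html');
  Polymer.importHref(
    resolvedPageUrl,
    null,   // onload callback (not needed here)
    null,   // onerror callback
    true    // async: do not block rendering of the shell
  );
}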
I am looking for some advice on how to track application performance; the application is developed using ReactJS, and I am building it with webpack.
First of all I will just present what I have done and what the application is expected to do:
I need to render a lot of, let's just call them widgets, that update in real time and present a lot of data. Each widget renders about 50 to 80 values, and these updates might arrive from the server all at once, so they should appear instantly when the data is received. Consider that I might have around 25 to 30 widgets that need to update in real time.
Let me tell you a little bit about the implementation:
I have used the smart/dumb pattern for ReactJS components
The actual data is stored in application state and is distributed by the smart components to dumb components through props
I am using Pure Render Mixin to avoid unnecessary rendering
I am also using immutable data to ensure Pure Render Mixin works as intended, that is, it is accurate in determining whether a render is necessary while also being really fast (see the sketch after this list).
There are no odd callback bindings that might trigger re-rendering of components; this has already been double-checked.
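A minimal sketch of the setup described above, assuming the React createClass API of that era; the component and prop names are illustrative, not the actual code:

import React from 'react';
import PureRenderMixin from 'react-addons-pure-render-mixin';
import { List } from 'immutable';

// A "dumb" widget: it only receives data through props and never owns state.
const Widget = React.createClass({
  mixins: [PureRenderMixin],

  propTypes: {
    // Immutable.List of numbers pushed from the server by the smart parent.
    values: React.PropTypes.instanceOf(List).isRequired,
  },

  render() {
    // Because `values` is immutable, the shallow comparison performed by
    // PureRenderMixin is both cheap and correct: a new reference means new data.
    return (
      <ul>
        {this.props.values.toArray().map((v, i) => (
          <li key={i}>{v}</li>
        ))}
      </ul>
    );
  },
});

export default Widget;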
Now the issues I am having:
with about 5-6 widgets, meaning around 400-500 values that need to render each second, it works very well in Chrome and decently in Firefox.
with about 25-30 widgets, the application still works decently in Chrome, but it starts to act slow in Firefox; by slow I mean user interactions can be delayed by around one second, which is really unacceptable.
What I have tried:
used Chrome dev tools to measure the performance; that didn't help too much. What I could see, though, is that everything looks alright, and there is no way I could read all the graphs this tool provides (and I've read a lot of articles about it).
tried to use Firebug in Firefox. That's an amazing tool, but not in this case; just opening it under the above-mentioned load (30 widgets) makes Firefox freeze, and the profiler gave me nothing.
as a last resort, I used the default dev tools in Firefox, which have a Performance tab. That gave me some information about which parts of the application put the most load on the browser: it seemed to be some heavy computation determining the min/max of an Immutable.List (see the sketch below).
Unfortunately the application still has performance issues, it is of high importance to get it working perfectly, and the Firefox profiler doesn't give me any other leads.
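For context, the hot spot is roughly equivalent to something like this (a simplified sketch, not the actual code; the variable names are made up):

// Every incoming update re-scans the widget's Immutable.List for its bounds.
var bounds = {
  min: widgetValues.min(),  // Immutable.List#min is O(n) on every call
  max: widgetValues.max(),  // Immutable.List#max is O(n) on every call
};
// Since the List never changes once created, this result could in principle
// be computed in a single pass and cached per List instance.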
So my questions would be:
what would be the next action to take in order to pin down the performance issues (as precisely as possible: class/method/at least the file)?
did you guys use any performance-testing tool that gives you insight into what the hell is happening?
is there something else to consider to improve the overall functionality, especially targeting multiple browsers? (Firefox, Chrome, IE11)
So I am a backend developer who is dipping my toes into developing a responsive site that, for SEO reasons, needs to be able to show up to 1000 responsive "containers" for search results, i.e. ...
[1]: http://107.6.139.93/Melbourne.homes
So what seems to be happening is that the browser locks up trying to render all these containers, or something like that? For searches with fewer than 300 results, the delay is tolerable, i.e. ...
[2]: http://107.6.139.93/Viera.homes
To be honest, I'm somewhat over my head here (I'm a database guy) and I'm trying to learn, but I have no idea if it's even going to be possible to improve performance without using pagination (something my client is very much against).
I'm wondering if anyone here has any insights into my issues.
EDIT - the same "lock-up" delay occurs when you resize the browser and wait for the responsiveness to kick in.
I think you're overloading the browser's memory; you can't solve this with CSS. It's the whole package (images and content).
You could solve this by using infinite scroll and thus only load content when the user scrolls (see the sketch below). There are some things you have to look into before throwing yourself into infinite scrolling, especially at the SEO level.
You might want to read this:
http://googlewebmastercentral.blogspot.nl/2014/02/infinite-scroll-search-friendly.html
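Roughly, the idea looks like this (a sketch only; the endpoint, markup, and threshold are assumptions):

var page = 0;
var loading = false;

function loadMoreResults() {
  if (loading) return;
  loading = true;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/search-results?page=' + (++page));
  xhr.onload = function () {
    // Append the next batch of result containers to the list.
    document.getElementById('results').insertAdjacentHTML('beforeend', xhr.responseText);
    loading = false;
  };
  xhr.send();
}

window.addEventListener('scroll', function () {
  // When the user is within 500px of the bottom, fetch the next page.
  if (window.innerHeight + window.pageYOffset >= document.body.offsetHeight - 500) {
    loadMoreResults();
  }
});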
I am in the process of speeding up the perceived load time of a website aimed specifically at mobile platforms (iPhone primarily, Android secondarily, who cares about the rest...). I have already tried several general techniques for speeding up load times for normal websites, but I was wondering if anyone had specific pointers for mobile web performance.
I am already bundling scripts, spriting images, lazy loading as much as possible, putting fixed sizes on things, linking CSS in the head, etc. I want insights specifically for mobile.
For example, I have heard that the iPhone will only cache files smaller than 25 KB, so splitting a script/file into 25 KB chunks may sometimes give an overall boost, since they can now be cached even though it causes additional connections to be made. Any other insights like this would be much appreciated.
Also, does anyone know of a good tool to test load times on the iPhone?
OK, here are some measuring tools for you...
Steve Souders' mobile bookmarklet - http://stevesouders.com/mobileperf/mobileperfbkm.php (a bit limited on the timing front but has lots of other interesting features)
Stoyan Stefanov's iOS app for exploring page load - http://calendar.perfplanet.com/2011/i-see-http/ (very new, so not sure about its limitations)
3P Mobile has its own iOS browser in beta that produces waterfalls - the Android version was very good.
As far as optimisation goes...
The mobile cache may be small, but you still have access to localStorage, i.e. include CSS/JS inline, then extract and save it to localStorage - Bing's mobile site does this (see the sketch after this list).
Data URIs are another way of reducing requests, but of course the user loses the option to turn off images.
Ensure you make good use of keep-alive and connection pipelining - splitting between multiple hostnames can get in the way of these, so be careful if you do this.
Caching varies wildly between different phones and OS versions; I saw an article on it recently and will see if I can find it again.
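A minimal sketch of that localStorage trick, under the assumption that the first response inlines the CSS in a style tag with a known id (the key name is made up, and the cookie handshake a real implementation needs to tell the server what the client has cached is left out):

(function () {
  var KEY = 'site-css-v1';
  var cached = null;
  try { cached = window.localStorage.getItem(KEY); } catch (e) { /* storage disabled */ }

  if (cached) {
    // Repeat visit: inject the cached CSS immediately, no network request needed.
    var style = document.createElement('style');
    style.textContent = cached;
    document.head.appendChild(style);
  } else {
    // First visit: the page shipped its CSS inline in <style id="inline-css">;
    // stash it for next time.
    var inline = document.getElementById('inline-css');
    if (inline) {
      try { window.localStorage.setItem(KEY, inline.textContent); } catch (e) {}
    }
  }
})();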
Yahoo! has a nice page of rules for squeezing the most performance out of your site.
They also have a Firefox plugin (YSlow) that automatically checks whether your site follows those rules and suggests improvements. The plugin also covers performance problems that may affect mobile browsers.
I'd like to answer your call for a mobile web site profiling tool - WebDevTools:
https://play.google.com/store/apps/details?id=com.voltcode.webdevtools
disclaimer - I am connected to the company that released this software
While it is not targeted at iPhones, you can definitely test on various Android models and get the load times.
In my experience, the iPhone 4 and later load pages as well as or better than the HTC Desire and similar devices - both in general speed and in HTTP concurrency, etc.
Check out this testing solution:
Tutorial: Does Your Website Load in 3 Seconds?
http://blog.testobject.com/2013/09/does-your-website-load-in-3-seconds.html
I am trying to quantify "site slowness". In the olden days you just made sure that your HTML was lightweight, images were optimized, and servers were not overloaded. In high-end sites built on top of modern content management systems there are a lot more variables: third-party advertising, trackers and various other callouts, the performance of the CDN (interestingly enough, sometimes content delivery networks make things worse), JavaScript execution, CSS overload, as well as all kinds of server-side issues like long queries.
The obvious answer is for every developer to clear the cache and continuously look at the "net" section of the Firebug plugin. What other ways to measure "site dragging ass" have you used?
YSlow is a tool (browser extension) that should help you.
YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites.
Firebug, the must-have Firefox extension for web developers, can measure the loading time of different elements on your webpage. At least you can rule out CSS, JavaScript, and other elements taking too much time to load.
If you do need to shrink JavaScript and CSS loading times, there are various JavaScript and CSS compressors out there on the web that simply strip unnecessary text such as newline characters and comments. Of course, keep an ordinary version on the side for development's sake.
If you use PNGs, I recently came across a PNG optimizer called OptiPNG that can shrink PNG file sizes.
"Page Load time" is really not easy to define in general.
It depends on the browser you use, because different browsers may do more requests in parallel, because javascript has differents speeds in different browsers and because rendering time is different.
Therefore you can only really measure your true page load time using the browser you are interested in.
The end of the page load can also be difficult to define, because there might be an Ajax request after everything is visible on the page. Does that count towards the page load or not?
And last but not least, the real page load time might not matter that much, because "perceived performance" is what matters. For the user, what matters is when he or she has enough information to proceed.
Markus
I'm not aware of any way (at least none I could tell you :] ) that would automatically measure your page's perceived load time.
Use AOL Pagetest for IE and YSlow for Firefox (see the link above) to get a "feeling" for your load time.
Get yourself a proper debugging proxy installed (I thoroughly recommend Charles)
Not only will you be able to see a full breakdown of response times / sizes, you can save the data for later analysis / comparison, as well as fiddle with the requests / responses etc.
(Edit: Charles' support for debugging SOAP requests is worth the pittance of its shareware fee - it's saved me a good half a day of hair-loss this week alone!)
I routinely use webpagetest.org, which you can use to run performance tests from different locations, in different browsers (although only MSIE 7-9), with different settings (number of iterations, connection speed, first run vs. second visit, excluding specific requests if you want, credentials if needed, ...).
The result is a very detailed report of page loading time, which also provides advice on how to optimize.
It really is a great (free) tool!
Last time I worked on a high-volume website, we did several things, including:
We used YSlow to get an analysis of the individual factors affecting page load: https://addons.mozilla.org/en-US/firefox/addon/5369
We monitored performance using an external, commercial tool called Gomez - http://www.gomez.com/instant-test-pro/
We stress tested using a continuous integration build, using Apache JMeter. http://jmeter.apache.org/
If you want a quick look, say a first approximation, I'd go with YSlow and see what the major factors affecting page load time in your app are.
Well, call me old-fashioned, but..
time curl -L http://www.example.com/path
on Linux :) Other than that, I'm a big fan of YSlow, as previously mentioned.
PageSpeed is an online checking tool by Google, which is very accurate and reliable:
https://developers.google.com/pagespeed/
If it's ASP.NET you can use Trace.axd.
Yahoo provides YSlow, which can be great for checking JavaScript.
YSlow as mentioned above.
And combine this with Fiddler. It is good if you want to see which page objects are taking the most bandwidth, which are being compressed at the server, unexpected round-trips, and what is being cached. And it can give you a general idea about processing time in the client web browser as compared to the time taken between server and client.
ApacheBench (ab). Use
ab -c <number of CPUs on server> -n 1000 url
to get a good approximation of how fast your page is.
In Safari, the Network Timeline (available under the Develop menu, which you have to specifically enable) gives useful information about loading time of individual page components, as well as showing when each component started loading.
YSlow is good, and HttpWatch for IE is great as well. However, both miss the metric most important to a user: "When is the page - above the fold - ready for use by the user?" I don't think that one has been solved yet...
There are obviously several ways to identify the response time, but the challenge has always been how to measure the rendering time that is spent in browser.
We have a controlled test phase in which we use several automated tools for testing the application. One of the outputs we generate from this test is a Fiddler trace for each transaction (a click). We can then analyse the Fiddler trace to get the time to last byte and subtract it from the overall time the page took.
Something like this:
1. A = total response time as measured by an automated tool (in our case we use QTPro)
2. B = time to last byte (server + network time, from the Fiddler trace)
3. C = A - B (approximately the rendering time, or the time spent in the browser)
Everything explained above can be made into a standard test process, and at the end of the test we can generate a breakdown of the time spent at each layer, e.g. rendering time, network time, database calls, etc.
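For a rough in-browser equivalent of that A/B/C breakdown, the Navigation Timing API can be used (a sketch only, not part of the process described above; treating the document's responseEnd as "time to last byte" is an approximation):

window.addEventListener('load', function () {
  // Defer one tick so loadEventEnd is populated.
  setTimeout(function () {
    var t = window.performance.timing;
    var total = t.loadEventEnd - t.navigationStart;    // ~ A: total load time
    var lastByte = t.responseEnd - t.navigationStart;  // ~ B: server + network time
    var rendering = total - lastByte;                  // ~ C: time spent in the browser
    console.log('total:', total + 'ms',
                '| network+server:', lastByte + 'ms',
                '| render/processing:', rendering + 'ms');
  }, 0);
});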