How to figure out what's causing long DOM processing time?

I have a landing page to which I'm driving traffic through PPC. For a variety of reasons, I've come to believe that, even though the site is highly performant for me, it isn't for my actual visitors. So, I turned on AWS CloudWatch.
For me, with cleared caches, the page loads at around 0.9 seconds in Safari, 1.75 seconds in Firefox, and 2.25 seconds in Chrome. If I were in micro-optimization mode, I'd worry about that Chrome number, but right now, my issue is much bigger. According to CloudWatch, my real users are experiencing an average load time of 12.1 seconds! And of that 12.1 seconds, DOM processing is taking about 11 seconds:
Now, I'm not sure if I have to worry about the full 11 seconds, or just the part I've marked "A" — the processing that happens before it starts loading the DOM — but either way, how do I figure out what's causing this?
I know there's a way to simulate low network speed in devtools, but maybe there's also a way to throttle the CPU? Or maybe there's a way to "look" at the waterfall in devtools and figure out which pieces are blocking the DOMContentLoaded event? Then, even though they're fast for me, I can try optimising those pieces. Or maybe there's a deeper level of diagnostics I can enable in CloudWatch? Or maybe there's some other option I haven't considered.
FWIW, almost all of my visitors are on Android devices, and they're about 70% Chrome, 10% Android Browser.
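One way to see where those 11 seconds actually go for real users is to break the load into phases with the Navigation Timing API and ship the numbers to whatever analytics you already run. A minimal sketch, assuming you replace the console.log with your own beacon (mapping "A" to the domProcessing phase below is my guess, so check it against CloudWatch's definitions):

```javascript
// Rough breakdown of where page-load time goes for a real visitor,
// using the Navigation Timing API (supported by Android Chrome).
window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd is populated.
  setTimeout(function () {
    var t = performance.timing;
    var phases = {
      network: t.responseEnd - t.navigationStart,          // DNS + connect + request + response
      domProcessing: t.domInteractive - t.responseEnd,     // parse/execute before the DOM is interactive (likely "A")
      domContentLoaded: t.domContentLoadedEventEnd - t.domInteractive,
      subResources: t.loadEventStart - t.domContentLoadedEventEnd,
      total: t.loadEventEnd - t.navigationStart
    };
    console.log(JSON.stringify(phases)); // placeholder: send to your own endpoint instead
  }, 0);
});
```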

Related

ReactJS heavy load application performance issue

I am looking for some advice on how to track application performance; the application is developed using ReactJS, and I am building it with webpack.
First of all, I'll present what I have done and what the application is expected to do:
I need to render a lot of, let's just call them widgets, that update in real time, presenting a lot of data. Each widget renders about 50 to 80 values, and these updates may arrive from the server all at once, so they should be reflected immediately when the data is received. Consider that I might have around 25 to 30 widgets that need to update in real time.
Let me tell you a little bit about the implementation:
I have used the smart/dumb pattern for ReactJS components
The actual data is stored in application state and is distributed by the smart components to dumb components through props
I am using PureRenderMixin to avoid unnecessary rendering
I am also using Immutable data, to make sure PureRenderMixin works as intended: accurate in determining whether a render is necessary, and at the same time fast, really fast.
There are no odd callback bindings that might trigger re-rendering of components; this has already been double-checked.
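For reference, a minimal sketch of the dumb half of that pattern, assuming the data arrives as an Immutable structure (the Widget name and props are illustrative, not your actual code):

```javascript
var React = require('react');
var PureRenderMixin = require('react-addons-pure-render-mixin');

// Dumb widget: receives an Immutable.Map via props and just renders values.
// PureRenderMixin's shallow compare is both cheap and accurate here because
// Immutable structures are replaced on update, never mutated in place.
var Widget = React.createClass({
  mixins: [PureRenderMixin],
  render: function () {
    var values = this.props.data.get('values'); // Immutable.List of numbers
    return (
      <div className="widget">
        {values.map(function (v, i) {
          return <span key={i}>{v}</span>;
        }).toArray()}
      </div>
    );
  }
});

module.exports = Widget;
```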
Now the issues I am having:
with about 5-6 widgets, meaning around 400-500 values that need to render each second, it works very well in Chrome and decently in Firefox.
adding about 25-30 widgets leaves the application still working decently in Chrome, but it starts to get slow in Firefox; by slow I mean user interactions can be delayed by around 1 second. That is really unacceptable.
What I have tried:
used Chrome dev tools to measure the performance; that didn't help much. What I could see, though, is that everything looked alright, and there is no way I could read all the graphs this tool provides. (And I've read a lot of articles about it.)
tried to use Firebug in Firefox. It's an amazing tool, but not in this case; just opening it under the load mentioned above (30 widgets) makes Firefox freeze, and the profiler gave me nothing.
as a last resort, I used Firefox's default dev tools, which have a performance tab. That got me some information about which parts of the application put the most load on the browser. It seemed to be some heavy computation determining the min/max of an Immutable.List.
Unfortunately the application still has performance issues, it is of high importance to get it working perfectly, and the Firefox profiler doesn't give me any other leads.
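Given that min/max lead, one low-risk change worth trying before deeper profiling: compute those aggregates once per data update instead of on every render. A sketch, assuming the list is only ever replaced, never mutated (minMax is a hypothetical helper, not part of Immutable.js):

```javascript
// Cache min/max per Immutable.List instance. Because Immutable values are
// replaced rather than mutated, reference equality tells us the cache is valid.
var cache = { list: null, min: null, max: null };

function minMax(list) {
  if (cache.list !== list) {
    // Single pass instead of separate list.min() and list.max() traversals.
    var min = Infinity, max = -Infinity;
    list.forEach(function (v) {
      if (v < min) { min = v; }
      if (v > max) { max = v; }
    });
    cache = { list: list, min: min, max: max };
  }
  return cache;
}
```

Calling minMax(list) from render then costs a reference check per frame, and a single traversal only when the data actually changes.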
So my questions would be:
what would be the next action to take in order to pinpoint the performance issues? (as precisely as possible: class/method/at least file)
did you guys use any performance testing tool that gives you an insight into what the hell is happening?
is there something else to consider to improve the overall functionality, especially targeting multiple browsers? (Firefox, Chrome, IE11)

Why does a cached file have a high Waiting (TTFB) or Content Loaded ms value?

I'm looking at a waterfall in Chrome's Developer Tools of several CSS and JavaScript files.
When refreshing the page, several of the files load from the browser cache, as expected. These take 1ms to load most of the time. However some files, and it seems to be the same offenders on each refresh, are taking quite a bit longer: something between 400ms and 800ms.
The waterfall timeline in Chrome's network tab shows that in some cases this time is spent in TTFB (time to first byte). This doesn't make any sense to me: if the browser is getting the file from its cache, it should be reading it from the hard drive, not the server, so why is there a TTFB at all?
For other files, or sometimes on a different refresh, I see the time blamed on content download time. Again, coming from cache this should be pretty much instantaneous, yet I'm seeing it take over half a second sometimes.
Can anyone shed some light on what's happening here?
This is a web app I'm working on, so I'm afraid I don't have a link I can share.
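One way to check whether those requests are genuinely served from cache, and which phase the time lands in, is the Resource Timing API rather than eyeballing the waterfall. A sketch you could paste into the console (note that cross-origin resources report zeroed timings unless the server sends Timing-Allow-Origin):

```javascript
// Dump the phase breakdown for every CSS/JS resource on the page.
// transferSize === 0 (where supported) indicates the response came from cache.
performance.getEntriesByType('resource')
  .filter(function (r) { return /\.(css|js)(\?|$)/.test(r.name); })
  .forEach(function (r) {
    console.log(r.name, {
      fromCache: r.transferSize === 0,
      ttfb: r.responseStart - r.requestStart,
      download: r.responseEnd - r.responseStart,
      total: r.duration
    });
  });
```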

Can I find out if there are other browser windows running webgl?

I am creating a three.js-powered website, and on some browsers (I am looking at you, Firefox), if other tabs are also running WebGL, my performance stutters.
I would like to know if there is a way to find out (in the browser) whether other tabs are consuming WebGL resources, so that I can alert the user.
I appreciate any and all comments!
That would be a security violation, so no, you can't do that.
Update:
I'll just add: you could include stats.js (more likely, you'll need to do something very similar to what stats.js does) to get an idea of what the average frame rate is like and look for dips in that performance, and if it is regularly dropping, alert the user. That said, you are likely to get the calculation wrong, and there are always many variables you can't control which can affect performance. Most of those can be resolved with a browser restart; in particular, Firefox doesn't seem to release its GPU memory across page reloads, and when that memory is full the performance drops badly.
In any case, any warnings you give out should probably not be intrusive for the user in any way.
Also note that properly written WebGL programs (using requestAnimationFrame as intended) should, to my knowledge, not consume much in the way of compute resources when the tab is in the background, though this may vary per browser. A tab in the background will still consume memory, however (GPU memory, plus the JavaScript code and objects).
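A minimal sketch of that stats.js-style check, sampling the requestAnimationFrame rate and warning once if it stays low; the threshold, sample window, and warnUser are all placeholders to tune for your scene:

```javascript
// Rolling FPS sampler: warn once if the average frame rate stays below
// THRESHOLD_FPS for a few consecutive sample windows.
var THRESHOLD_FPS = 30;   // placeholder: tune for your scene
var SAMPLE_MS = 2000;
var frames = 0, badSamples = 0, warned = false;
var last = performance.now();

function warnUser() {
  // Placeholder: show a small, non-intrusive notice, never a modal.
  console.warn('Rendering is slow; closing other WebGL tabs may help.');
}

function tick(now) {
  frames++;
  if (now - last >= SAMPLE_MS) {
    var fps = frames * 1000 / (now - last);
    badSamples = fps < THRESHOLD_FPS ? badSamples + 1 : 0;
    if (badSamples >= 3 && !warned) {
      warned = true;
      warnUser();
    }
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```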

Slow javascript execution in chrome, profiler yields "(program)"

How would I go about determining what the hangups are in my javascript app when the profiler puts (program) at the top with 80%? Is my logic too complex for the hotspot tracking to occur? Is my memory footprint too big? What is generally the cause of this?
More Information:
There are no elements on the form save the one canvas tag
There are no waiting communications (xhr)
http://i.imgur.com/j6mu1.png
Idle cycles ("doing nothing") will also render as "(program)" (you may profile this SO page for a few seconds and get 100% of (program)), so this is not a sign of something bad in itself.
The other thing is when you actually see your application lagging. Then (program) will be contributed by the V8 bindings code (and the WebCore code it invokes, which is essentially anything: DOM/CSS operations, painting, memory allocations and GCs, what not). If that is the case, you can record a Timeline of your application (switch to the Timeline panel in Developer Tools and press the Record button in the bottom status bar, then run your application for a while).
You will see many internal events with their timings as horizontal bars: reflows, style recalculations, timers fired, GC events, and more. (Btw, the latest Chromium versions have an improved memory profiler utilization timeline, so you will be able to monitor the memory used by certain internal factors, too.)
To diagnose memory problems (multiple allocations entailing multiple full GC cycles) you may use the Profiles panel. Take a heap snapshot before the intensive part of your code starts, and another one after this code has run for some time, then compare the two snapshots (using the right-hand SELECT at the bottom) to see which allocations have taken place, along with their memory impact.
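When (program) dominates and the JS profile stays flat, manually instrumenting your own suspects can also narrow things down. A sketch using the User Timing API; drawFrame and the section names are placeholders for whatever your canvas code actually does:

```javascript
function drawFrame() {
  // placeholder for your actual canvas drawing code
}

// Bracket a suspect section; the measure shows up in the Timeline panel
// and can also be read back programmatically.
performance.mark('draw-start');
drawFrame();
performance.mark('draw-end');
performance.measure('draw', 'draw-start', 'draw-end');

performance.getEntriesByType('measure').forEach(function (m) {
  console.log(m.name + ': ' + m.duration.toFixed(1) + ' ms');
});
```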
To check if it's getting slow due to a memory issue, use: chrome://memory
Also you can check chrome://profiler/ for possible hints of what is happening.
Another option is to post your javascript code here.
See this link: it will help you understand Firebug's profiler output.
I would say you should check which methods are taking the most time (%), then minimize the unwanted procedures in them. In your figure, some draw method running in the background is consuming around 14%; maybe that's why your JS is loading slowly. You should determine what's taking the time. Both FF and Chrome have a feature that shows the network traffic. Have a look at YSlow as well; they have a great addon to Firebug.
I would suggest Chrome's auditing tools, which can tell you a lot about why this is happening. You should probably include more information about:
how long did it take to connect to server?
how long did it take to transfer content?
how much other stuff are you loading on that page simultaneously?
anyway, even without all that, here's a checklist to improve performance for you (several items are illustrated in the sketch after the list):
make sure your javascript is treated and served as static content, e.g. via nginx/apache/whatever directly or cdn, not hitting your application framework
investigate if you can make use of CDN for serving javascript, sometimes even pointing different domain names to your server makes a positive impact, e.g. instead of http://example.com/blah.js -> http://cdn2.example.com/blah.js
make sure your js is served with proper expiration headers; don't re-download it every time the client refreshes a page
turn on gzipping of js content
minify your js using the different tools available (e.g. the Google Closure Compiler)
combine your scripts (reduces the number of requests)
put your script tags just before the closing </body> tag
investigate and cleanup/optimize your onload and document.ready hooks
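If your server happens to be Node (an assumption; the same settings exist in nginx and Apache), the gzip, expiration-header, and static-serving items from the checklist look roughly like this with Express and the compression middleware:

```javascript
var express = require('express');
var compression = require('compression');

var app = express();

// Gzip JS/CSS/HTML responses on the fly.
app.use(compression());

// Serve static assets with a far-future cache lifetime, so combined,
// minified bundles are downloaded once and then come from the browser cache.
// 'immutable' is only safe if filenames change on every deploy (versioned builds).
app.use('/static', express.static('public', {
  maxAge: '365d',
  immutable: true
}));

app.listen(8080);
```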
Have a look at the YSlow plugin and Google PageSpeed; both are very useful in improving performance.

Firefox cache bug

This is a bug/issue which has cost me time for at least 3 years now.
I have complex, dynamic pages in ASP.NET which use a lot of javascript (which is more or less static).
Now I have a behaviour which happens only in Firefox, and then only once every few ten thousand requests.
Users are playing games on my site, so they are hitting the same page again and again, every day. And then the game locks up with javascript errors on the page. I have never been able to find out what exactly happens. Is a file corrupt, perhaps?
Shift-F5 or simple reloading does not help. If the user clears his cache, the problem is gone.
This has been reported hundreds of times now. Every time the user has been a Firefox user, every time, clearing the cache fixed the issue.
I can't nail down the bug since I can't reproduce it.
There are lots of reports that Firefox is caching files which it shouldn't cache. But that doesn't seem to be the issue in my case. Something else is going on.
Anyone got an idea what's going on?
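Not an answer to the root cause, but one common mitigation while it stays unreproducible: version the script URLs on every deploy, so a suspect cached copy can never survive a release. A sketch; APP_VERSION is a placeholder you would emit server-side (e.g. from your ASP.NET build number):

```javascript
// Hypothetical cache-busting loader: stamping the version into the URL
// means each deploy forces a fresh download, bypassing any corrupted
// cached copy without users having to clear their cache.
var APP_VERSION = '2013.05.17.1'; // placeholder: emit from your build

function loadScript(src) {
  var s = document.createElement('script');
  s.src = src + '?v=' + encodeURIComponent(APP_VERSION);
  document.getElementsByTagName('head')[0].appendChild(s);
}

loadScript('/js/game.js'); // placeholder path
```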
