How would I go about determining what the hangups are in my JavaScript app when the profiler puts (program) at the top with 80%? Is my logic too complex for hotspot tracking to work? Is my memory footprint too big? What is generally the cause of this?
More Information:
There are no elements on the form save the one canvas tag
There are no waiting communications (xhr)
http://i.imgur.com/j6mu1.png
Idle cycles ("doing nothing") will also render as "(program)" (you may profile this SO page for a few seconds and get 100% of (program)), so this is not a sign of something bad in itself.
The other thing is when you actually see your application lagging. In that case, (program) time is contributed by the V8 bindings code (and the WebCore code it invokes, which is essentially anything: DOM/CSS operations, painting, memory allocations and GCs, and so on). If that is the case, you can record a Timeline of your application (switch to the Timeline panel in Developer Tools and press the Record button in the bottom status bar, then run your application for a while). You will see many internal events with their timings as horizontal bars: reflows, style recalculations, timers fired, GC events, and more. (By the way, the latest Chromium versions have an improved memory utilization timeline, so you will be able to monitor the memory used by certain internal factors, too.)
To diagnose memory problems (multiple allocations entailing multiple full GC cycles) you may use the Profiles panel. Take a heap snapshot before the intensive part of your code starts, and another one after this code has run for some time. Then compare the two heap snapshots (using the SELECT on the right at the bottom) to see which allocations have taken place, along with their memory impact.
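If it helps to know when to take those two snapshots, a rough sketch like the following can bracket the suspect code (Chrome-only; performance.memory is non-standard, and runIntensivePart() is just a placeholder for your own allocation-heavy section):

function logHeap(label) {
  if (performance.memory) {
    console.log(label, Math.round(performance.memory.usedJSHeapSize / 1024), 'KB of JS heap in use');
  }
}

logHeap('before');
runIntensivePart(); // placeholder for the allocation-heavy part of your code
logHeap('after');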
To check if it's getting slow due to a memory option use: chrome://memory
Also you can check chrome://profiler/ for possible hints of what is happening.
Another option is to post your javascript code here.
See this link: it will help you understand Firebug's profiler output.
I would say you should check which methods are taking the largest percentage and see whether you can trim unwanted work from them. In your screenshot, some draw method is consuming around 14% while running in the background; that may be why your JS is running slowly. You should determine what's taking the time. Both Firefox and Chrome have a feature that shows network traffic. Have a look at YSlow as well; it is a great add-on to Firebug.
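To confirm what the profiler hints at, you can also time the suspected routine directly. A minimal sketch (draw() is just a stand-in for whatever method shows up high in your profile):

function instrumentedDraw() {
  console.time('draw');
  draw(); // stand-in for the actual draw method from the profile
  console.timeEnd('draw'); // prints "draw: X ms" in the console
}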
I would suggest some of Chrome's auditing tools, which can tell you a lot about why this is happening; you should probably include more information about:
how long did it take to connect to server?
how long did it take to transfer content?
how much other stuff are you loading on that page simultaneously?
Anyway, even without all that, here's a checklist to improve performance for you:
make sure your javascript is treated and served as static content, e.g. via nginx/apache/whatever directly or cdn, not hitting your application framework
investigate if you can make use of CDN for serving javascript, sometimes even pointing different domain names to your server makes a positive impact, e.g. instead of http://example.com/blah.js -> http://cdn2.example.com/blah.js
make sure your js is served with proper expiration headers, don't re-download it every time client refreshes a page
turn on gzipping of js content
minify your js using the different tools available (e.g. the Google Closure Compiler)
combine your scripts (reduces the number of requests)
put your script tags just before the closing </body> tag
investigate and clean up/optimize your onload and document.ready hooks (see the sketch after this list)
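As a rough sketch of the last item (placeholder function names, not a definitive implementation), keep the load handler thin and push non-essential work past the first paint:

window.addEventListener('load', function () {
  initCriticalUI(); // placeholder: only what the user needs right away

  setTimeout(function () {
    initAnalytics(); // placeholder: heavy, non-essential work runs a tick later
  }, 0);
});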
Have a look at the YSlow plugin and Google PageSpeed, both very useful in improving performance.
Related
I am looking for some advice on how to track application performance; the application is developed using ReactJS, and I am building it with webpack.
First of all I will just present what I have done and what the application is expected to do:
I need to render a lot of, let's just call them widgets, that update in real time, presenting a lot of data. To give a sense of scale, each widget renders about 50 to 80 values; these updates might be received from the server side all at once, so they should happen instantly when the data is received. Consider that I might have around 25 to 30 widgets that need to update in real time.
Let me tell you a little bit about the implementation:
I have used the smart/dumb pattern for ReactJS components
The actual data is stored in application state and is distributed by the smart components to dumb components through props
I am using Pure Render Mixin to avoid unnecessary rendering
Also using Immutable data so that Pure Render Mixin works as intended, i.e. it is accurate in determining whether a render is necessary while staying really fast (a sketch follows this list)
There are no weird callback bindings that might trigger re-rendering of components; this has already been double-checked.
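For context, a minimal sketch of the kind of check Pure Render Mixin performs, written out by hand with made-up component and prop names; with Immutable data the reference comparison is both cheap and accurate:

var React = require('react');

var WidgetValue = React.createClass({
  shouldComponentUpdate: function (nextProps) {
    // Immutable data never mutates in place, so reference equality is enough.
    return nextProps.value !== this.props.value;
  },
  render: function () {
    return React.createElement('span', null, String(this.props.value));
  }
});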
Now the issues I am having:
with about 5-6 widgets, meaning around 400-500 values that need to render each second, it works very well in Chrome and decently in Firefox.
adding about 25-30 widgets keeps the application working decently in Chrome, but it starts to get slow in Firefox; by slow I mean user interactions can be delayed by around 1 second. That is really unacceptable.
What I have tried:
used Chrome dev tools to measure the performance; that didn't help much. What I could see, though, is that everything looks alright, and there is no way I could read all the graphs this tool provides (and I've read a lot of articles about it).
tried to use Firebug in Firefox. That's an amazing tool, but not in this case: just opening it with the above-mentioned load (30 widgets) gets Firefox to freeze, and the profiler gave me nothing.
as a last resort, I used the default dev tools from Firefox, which have a performance tab. That gave me some information about which parts of the application put the most load on the browser. It seemed to be some heavy computation determining the min/max of an Immutable.List.
Unfortunately the application still has performance issues, it is of high importance to get it working perfectly, and the Firefox profiler doesn't give me any other leads.
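For reference, a sketch of how that min/max computation could be cached per Immutable.List instance (made-up names, not the actual code), so it is recomputed only when the data actually changes:

var Immutable = require('immutable');

var cache = { list: null, extent: null };

function getExtent(values) {
  if (cache.list === values) {
    return cache.extent; // same Immutable.List instance, reuse the result
  }
  cache.list = values;
  cache.extent = { min: values.min(), max: values.max() };
  return cache.extent;
}

// Usage:
var data = Immutable.List([3, 7, 1, 9]);
console.log(getExtent(data)); // { min: 1, max: 9 }
console.log(getExtent(data) === getExtent(data)); // true, cached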
So my questions would be:
what would be the next action to take in order to determine performance issues? (as much as possible where they are: class/method/at least file)
did you guys use any performance testing tool that gives you an insight of what the hell is happening?
is there something else to consider to improve the overall functionality, especially targeting multiple browsers? (Firefox, Chrome, IE11)
I am creating a three.js powered website, and on some browsers (I am looking at you firefox), if other tabs are also running webGL, my performance stutters.
I would like to know if there is a way to find out (in the browser) if other tabs are consuming webgl services so that I can alert the user.
I appreciate any and all comments!
That would be a security violation, so no, you can't do that.
Update:
I'll just add: you could include stats.js (or, more likely, do something very similar to what stats.js does) to get an idea of what the average frame rate is like and look for dips in that performance, and if it is regularly dropping, then alert the user. That said, you are likely to get the calculation wrong, and there are always many variables you can't control which can affect performance. Most of those can be resolved with a browser restart; in particular, Firefox doesn't seem to release its GPU memory across page reloads, and when that memory is full the performance drops badly.
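A rough sketch of that idea (not stats.js itself; the 30 fps threshold and 120-sample window are arbitrary choices):

var samples = [];
var last = performance.now();

function tick(now) {
  samples.push(1000 / (now - last)); // instantaneous FPS for this frame
  last = now;
  if (samples.length > 120) samples.shift(); // keep roughly the last 2 seconds

  var avg = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
  if (samples.length === 120 && avg < 30) {
    console.warn('Sustained low frame rate (~' + avg.toFixed(1) + ' fps)');
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);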
In any case, any warnings you give out should probably not be intrusive for the user in any way.
Also note that properly written WebGL programs (using requestAnimationFrame as intended) should, to my knowledge, not consume much in the way of compute resources when the tab is in the background, though this may also vary per browser. But a tab in the background will still consume memory (GPU memory, and JavaScript code and objects).
Can someone explain to me why the TWebBrowser control works so slowly in all XE editions of Delphi, including XE5 and possibly XE6? To test this, create a new Delphi project and put a TWebBrowser control in it. In the form's OnShow event, navigate to this website:
http://ie.microsoft.com/testdrive/Performance/setImmediateSorting/Default.html
Please test this on Windows 7 or later. When navigation is complete, run the setImmediate test and watch the results. It takes a huge amount of time to complete the test, about a minute to finish.
When you open the real Internet Explorer browser and do the same thing, the test completes almost instantly (~200 milliseconds).
Some additional weird information:
When you recreate this procedure on old versions of Delphi (Delphi 7, to be precise), the web control works as fast as it should and the test completes instantly. But the HTML5 speed test (the alternative test on that page) still runs slowly.
Another weird thing is that the same slow behavior can be seen in C++Builder but not in Visual Studio products. Is Microsoft deliberately slowing down TWebBrowser in Embarcadero products?? I can't believe that.
I was trying to overcome this problem with different methods, such as:
Trying different feature options in the registry, for example:
FEATURE_GPU_RENDERING,
FEATURE_BROWSER_EMULATION (11001),
FEATURE_ALIGNED_TIMERS (undocumented option),
FEATURE_ALLOW_HIGHFREQ_TIMER (undocumented option),
Setting timeBeginPeriod(1) - no effect.
Please, if someone has any clue how to fix this issue, share that information with me.
UPDATE1
I made a standalone test app if anyone cares. It can be downloaded here: http://mp.org.pl/download/ietest.zip It contains the source and an exe app with an htm file. The HTM file contains a JS procedure that works 10 times faster in standalone IE than in the TWebBrowser control. It uses setImmediate as a test (the same procedure used in the test described above), but it may be easier for testing this way.
I can also see the behavior described (in your original post and in the comments). I have a few thoughts, but not necessarily an answer.
One should expect some difference in performance between the WebBrowser control and IE, in part because your Delphi app will need to build in support for certain features/APIs that IE supports out of the box.
For example, the WebBrowser control fires notifications related to tabbed browsing (old, but relevant), but it does not intrinsically handle those notifications or update the UI. You have to respond to the notifications and draw the tabs yourself. By default, IE is hardware accelerated and uses certain Windows APIs that may not be directly supported by Delphi's VCL (for resource/performance) reasons. (Hardware acceleration could account for some of the performance differences you've noticed.)
(And, for the record, I don't believe a list of differences between IE and the WebBrowser control was ever documented. I certainly don't remember seeing one in the portfolio.)
Also, the default values for various feature controls vary between IE and applications hosting the WebBrowser control. Part of the reason for this stems from the idea that IE needs to highlight performance over compatibility whereas applications generally need to emphasize compatibility over performance. You may wish to review the feature control reference to see if there are other FCKs you need to enable for your app.
Second, your loops are very tight, perhaps too tight. You've got one request piling on earlier requests, and you're not really leaving much room for processing, even with setImmediate. (IIRC, we're not really supposed to use anything smaller than 250ms for setInterval without risking performance hits from the sheer number of requests.) The remarks in the setImmediate reference page provide some guidance, as does this article on requestAnimationFrame.
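To illustrate the difference (a sketch, not the actual test code; doUnitOfWork() is a placeholder): a setImmediate loop re-queues itself as fast as the host allows, while requestAnimationFrame paces the work to the display's refresh rate:

function tightLoop() {
  doUnitOfWork(); // placeholder for the real work
  setImmediate(tightLoop); // queues again immediately, leaving little idle time
}

function pacedLoop() {
  doUnitOfWork();
  requestAnimationFrame(pacedLoop); // waits for the next frame (~16 ms at 60 Hz)
}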
One reason why dragging the window appears to improve performance may be the priority of window-drag repaint requests. They may be forcing your loops to hold long enough (or even break) to allow other events to process. Hard to say without tracing the system with a debugger.
Have you ever had to add application.processMessages() to your Delphi apps in order to allow the system a chance to handle the work you've already assigned? A similar need may be coming into play given the nature of your test.
Performance testing and timing is a tricky thing. You need to make sure the test isn't imposing so much overhead that it interferes with the actual work you're trying to perform.
Finally, there were some questions about the document mode of the page as it's loaded into your project. When I first started messing around with your sample, I couldn't get project4 to load slowtest.html in anything other than IE5 quirks mode (notoriously slow). Here's what eventually started working for me:
<!DOCTYPE html>
<!-- saved from url=(0023)http://www.contoso.com/ -->
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge"/>
<script type="text/javascript">
...
(Note: I deleted your initial doctype declaration and rewrote it to resolve a syntax error that was being reported by the F12 tools debugger.)
A few key style points here:
I used a mark of the web to load the page in the Internet zone. I find this makes it easier to load the page in edge mode, as pages in the intranet zone are loaded in compatibility view by default (unless you map the zones differently).
The x-ua-compatible header needs to be one of the first in the head block. It can follow title, but not much else.
Stylistically, elements need to be specified in lower case these days. There's a possibility that not following the current conventions forces the parser to fall back to an earlier rendering that supports the conventions.
Once I was able to control the documentMode at runtime, I found results I expected: older document modes ran more slowly. I also found that using requestAnimationFrame instead of setImmediate led to even better performance, but also surfaced the timing issue almost immediately.
In the end, this may be a case where the test highlights a problem, but not necessarily one you're trying to solve. (Insert Inigo meme here.) I get that you're trying to resolve a bottleneck. Are you sure you've found the correct bottleneck?
You may not be able to replicate the same performance of the native browser, but perhaps you can refactor the code to perform adequately without the extra overhead? Is there anything that might be better handled with a worker or some other implementation technique?
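For instance, a minimal worker sketch (worker.js, heavyComputation(), and the message shape are all hypothetical names) that keeps the heavy loop off the UI thread:

// main page
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result from worker:', e.data);
};
worker.postMessage({ iterations: 100000 });

// worker.js
// self.onmessage = function (e) {
//   var result = heavyComputation(e.data.iterations); // hypothetical function
//   self.postMessage(result);
// };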
Hope this helps...
My site's pages load very slowly. Usually there's a 2-3 second lag before the page renders, and I cannot figure out why.
My site is powered by WordPress v3.4.2.
I'm on a dedicated virtual server with plenty of resources and bandwidth.
There are no huge images loading.
My CSS files load before JS scripts.
I've spent a lot of time trying to optimize the site within the constraints of the platform (Wordpress + plugins, etc). I don't expect my site to be SUPER fast, but I need it to not be SO slow.
I'm using Chrome's developer tools to audit my site, but the suggestions do not appear to explain the long load time (unused CSS rules, etc). When I look at the timeline, I see a load time of about 2.7 seconds initially, but I can't figure out why. Can anyone help me get to the bottom of this?
My site is located here. The homepage has some extra scripts, so it may be more helpful to look at this page.
I found this superb guide which really helped me fight through the mire of optimising Apache for use with WordPress:
http://thethemefoundry.com/blog/optimize-apache-wordpress/
You said you have a virtual server so chances are it's currently set up to load EVERY module - you'll find a great speed boost here if you eliminate unnecessary modules. Keep a backup of your config file in case you screw it up.
Also, use the top command over SSH to see how much memory PHP is using; probably a lot at the moment. This will all be improved by eliminating modules as per the above link. You don't mention how much memory you have on your VPS, but there's a good chance your performance issues come from memory thrashing, which will be mitigated significantly by reducing how much memory each PHP instance consumes using the link above.
Also, it matters to find out where your performance issues are actually coming from. A great little plugin called WP Tuner helps me locate performance bottlenecks. The original plugin is no longer compatible, but someone else has written an upgrade:
http://www.wwvalue.com/tuts/tut-wp/wordpress-profiler-tuner-revised.html
That will help you identify which specific parts of the page are taking the longest to load so you will immediately find your performance bottleneck.
In addition, a cool plugin called Debug Queries is useful for tracking down performance issues, although the WordPress profiler above does track queries too.
Finally – I can’t recommend highly enough this WordPress.org discussion on performance, and specifically on W3 Total Cache vs Super Cache (both are excellent).
It’s a fantastic read for anyone looking for split-second response times:
http://wordpress.org/support/topic/wp-super-cache-vs-w3-total-cache
I use W3 Total Cache on one of my sites and WP Super Cache on another. Both are great; I used both so I could learn about both. I would say use WP Super Cache plus all the other tools the guy at the link above recommends if you're looking for extreme performance, but if you want immediate results, W3 Total Cache is more comprehensive in its initial setup.
Hope that helps.
use a caching plugin,
put JS files at the bottom,
try a different web host (the DB server may be slow sometimes),
minify CSS and JS,
make fewer HTTP requests,
make sure external services (like FB and others) are not slowing things down (remove them and see if it helps),
run YSlow or a similar test,
try to use Typekit or Google Fonts instead of Cufón
Have you tried http://wordpress.org/extend/plugins/wp-super-cache/ or a similar caching plugin?
I am trying to quantify "site slowness". In the olden days you just made sure that your HTML was lightweight, images optimized and servers not overloaded. In high end sites built on top of modern content management systems there are a lot more variables: third party advertising, trackers and various other callouts, the performance of CDN (interestingly enough sometimes content delivery networks make things worse), javascript execution, css overload, as well as all kinds of server side issues like long queries.
The obvious answer is for every developer to clear the cache and continuously look at the "net" section of the Firebug plugin. What other ways to measure "site dragging ass" have you used?
YSlow is a tool (browser extension) that should help you.
YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites.
Firebug, the must-have Firefox extension for web developers, can measure the loading time of different elements on your webpage. At least you can rule out CSS, JavaScript, and other elements taking too much time to load.
If you do need to shrink JavaScript and CSS loading times, there are various JavaScript and CSS compressors out there on the web that simply strip unnecessary text from them, like newline characters and comments. Of course, keep an ordinary version on the side for development's sake.
If you use PNGs, I recently came across a PNG optimizer that can shrink PNG sizes called OptiPNG.
"Page Load time" is really not easy to define in general.
It depends on the browser you use, because different browsers may do more requests in parallel, because JavaScript runs at different speeds in different browsers, and because rendering time is different.
Therefore you can only really measure your true page load time using the browser you are interested in.
The end of the page load can also be difficult to define, because there might be an Ajax request after everything is visible on the page. Does that count toward the page load or not?
And last but not least, the real page load time might not matter that much, because the "perceived performance" is what matters. For the user, what matters is when they have enough information to proceed.
Markus
I'm not aware of any way (at least none I could tell you about :] ) that would automatically measure your page's perceived load time.
Use AOL Pagetest for IE and YSlow for Firefox (link see above) to get a "feeling" for your load time.
Get yourself a proper debugging proxy installed (I thoroughly recommend Charles)
Not only will you be able to see a full breakdown of response times / sizes, you can save the data for later analysis / comparison, as well as fiddle with the requests / responses etc.
(Edit: Charles' support for debugging SOAP requests is worth the pittance of its shareware fee - it's saved me a good half a day of hair-loss this week alone!)
I routinely use webpagetest.org, which you can use to perform performance tests from different locations, in different browsers (although only MSIE 7-9), with different settings (number of iterations, connection speed, first run vs. 2nd visit, excluding specific requests if you want, credentials if needed, ...).
The result is a very detailed report of page loading time which also provides advice on how to optimize.
It really is a great (free) tool!
Last time I worked on a high-volume website, we did several things, including:
We used YSlow to get an analysis of the individual factors affecting page load: https://addons.mozilla.org/en-US/firefox/addon/5369
We set up performance monitoring using an external, commercial tool called Gomez: http://www.gomez.com/instant-test-pro/
We stress tested using a continuous integration build, using Apache JMeter. http://jmeter.apache.org/
If you want a quick look, say a first approximation, I'd go with YSlow and see what the major factors affecting page load time in your app are.
Well, call me old-fashioned, but...
time curl -L http://www.example.com/path
in linux :) Other than that, I'm a big fan of YSlow as previously mentioned.
PageSpeed is an online checking tool by Google, which is very accurate and reliable:
https://developers.google.com/pagespeed/
If it's ASP.NET, you can use Trace.axd.
Yahoo provides YSlow, which can be great for checking JavaScript.
YSlow as mentioned above.
And combine this with Fiddler. It is good if you want to see which page objects are taking the most bandwidth, which are being compressed at the server, unexpected round-trips, and what is being cached. It can also give you a general idea of processing time in the client web browser as compared to the time taken between server and client.
Apache Benchmark. Use
ab -c <number of CPUs on server> -n 1000 url
to get a good approximation of how fast your page is.
In Safari, the Network Timeline (available under the Develop menu, which you have to specifically enable) gives useful information about loading time of individual page components, as well as showing when each component started loading.
YSlow is good, and HttpWatch for IE is great as well. However, both miss the most important metric to a user: when is the page, above the fold, ready for use by the user? I don't think that one has been solved yet...
There are obviously several ways to identify the response time, but the challenge has always been how to measure the rendering time that is spent in browser.
We have a controlled test phase in which we use several automated tools for testing the application. One of the outputs we generate from this test is a Fiddler trace for each transaction (a click). We can then analyse the Fiddler trace to find the time to last byte and subtract it from the overall time the page took.
Something like this:
1. A = total response time as measured by an automated tool (in our case we use QTPro)
2. B = time to last byte (server + network time, from the Fiddler trace)
3. C = A - B (approximate rendering time, or the time spent in the browser)
All of the above can be made into a standard test process, and at the end of the test we can generate a break-up of the time spent at each layer, e.g. rendering time, network time, database calls, etc.
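For what it's worth, a similar A/B/C breakdown can be approximated in the browser itself with the Navigation Timing API. A sketch (timings in milliseconds from navigation start):

window.addEventListener('load', function () {
  // Wait one tick so loadEventEnd has been recorded.
  setTimeout(function () {
    var t = performance.timing;
    var total = t.loadEventEnd - t.navigationStart; // roughly A
    var lastByte = t.responseEnd - t.navigationStart; // roughly B (server + network)
    var render = total - lastByte; // roughly C (time spent in the browser)
    console.log('total:', total, 'ms, last byte:', lastByte, 'ms, render:', render, 'ms');
  }, 0);
});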