When analyzing the waterfall diagram on GTmetrix.com I noticed that my page loads the asset noop.js very slowly (over 3 seconds) from https://www.paypalobjects.com/muse/noop.js, and its status is even shown as "incomplete".
Why does this happen? And what should I do to avoid this?
Thanks.
"Noop" means "no operation", so this is apparently intentional. The referrer is an analytics package, so it would seem the request can be safely ignored.
It's loaded asynchronously and doesn't affect anything else on the page.
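If you want to convince yourself that a script like this can't hold up the page, this is roughly how analytics packages inject such a script (a sketch; the exact mechanism PayPal's package uses may differ):

// Illustrative sketch: injecting a script asynchronously.
// The async flag means the browser fetches it in the background,
// so HTML parsing and rendering never wait for it.
var s = document.createElement('script');
s.src = 'https://www.paypalobjects.com/muse/noop.js';
s.async = true; // dynamically injected scripts are async by default anyway
document.head.appendChild(s);

A slow or even incomplete response to a request like this only shows up in the waterfall; it doesn't delay anything the user sees.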
I cleared my cache and cookies, reloaded a website, and it loads on my screen in less than a second. Yet when I go to GTmetrix.com and test the same site, the 'Fully Loaded Time' is much longer than what I experience on my computer, even on the highest unthrottled broadband setting.
On the GTmetrix site, it says the 'Fully Loaded Time' is "the point after the Onload event fires and there has been no network activity for 2 seconds." Yet on some well-optimized sites, I see the Fully Loaded Time as being under 2 seconds. How can it be under 2 seconds when it has to calculate the point at which there's no network activity for 2 seconds?
Also, I went to the w3schools.com definition of the onload event, and it says it is an event programmed to occur when objects such as images, script files, CSS files, etc. are fully loaded. So I'm assuming the Fully Loaded Time covers not just the images/JS/CSS files but also all the extra stuff that finishes loading after them.
For the average user, then, is the 'Fully Loaded Time' from GTmetrix not much of a concern, since most of the website's information loads quickly - unless it is some sort of web app that needs its programmatic functions fully loaded as well?
How important is the 'Fully Loaded Time' metric, and could you give me a use case where it would be important for the page to load completely? For example, I go to Amazon.com and everything loads in under a second and I can begin shopping right away, yet GTmetrix says the Fully Loaded Time is 14.7 seconds, and even the video screen capture on GTmetrix shows the page loading rather slowly, though the images and site structure seem to finish loading by the halfway mark.
I'm trying to understand the glossary terms of page load speeds better and this is confusing me. Thanks.
"the point after the Onload event fires and there has been no network activity for 2 seconds."
So what it does is load the resources on the page and watch for a 2-second window in which no further resources are requested.
At that point it looks back at the last time a resource finished loading and reports that as the load time.
That is how it can be under 2 seconds: it waits for the first 2-second break in network activity, then uses the time of the last resource download as the "fully loaded" time.
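If you want to approximate that logic yourself, here is a rough sketch (not GTmetrix's actual code) using the browser's Resource Timing API:

// Rough approximation of "fully loaded": walk the resource requests
// in start order and stop at the first 2-second gap of network silence.
function approximateFullyLoaded() {
  var entries = performance.getEntriesByType('resource')
    .slice()
    .sort(function (a, b) { return a.startTime - b.startTime; });
  var lastActivity = 0; // ms since navigation start
  for (var i = 0; i < entries.length; i++) {
    // A request starting 2s+ after the last response ended marks the gap.
    if (entries[i].startTime - lastActivity >= 2000) break;
    lastActivity = Math.max(lastActivity, entries[i].responseEnd);
  }
  return lastActivity; // the "fully loaded" point, in ms
}

Run it from the console once the page settles and you get a number in the same spirit as GTmetrix's metric.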
"For the average user, is then 'Fully Loaded Time' from GTmetrix not much of a concern"
Fully Loaded Time is not as important as metrics like First Contentful Paint, Time to First Byte, etc.
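Both of those can be read straight out of the browser; a minimal sketch:

// First Contentful Paint, reported by the Paint Timing API.
new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', Math.round(entry.startTime), 'ms');
    }
  });
}).observe({ type: 'paint', buffered: true });

// Time to First Byte, from the Navigation Timing API.
var nav = performance.getEntriesByType('navigation')[0];
if (nav) console.log('TTFB:', Math.round(nav.responseStart), 'ms');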
Fully Loaded Time is still a useful diagnostic tool, though, to see whether you are sending far too much down the wire or to identify problems where non-essential resources are "hanging".
Additionally, it is useful to see whether any lazy loading is working as expected, since one important criterion for mobile is not sending unneeded information down the wire, to preserve a user's data plan (if you send me 20 MB of off-screen images that I never see, that is a waste of my data).
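As a concrete illustration, a minimal lazy-loading sketch (the data-src attribute is just an illustrative convention; modern browsers also support loading="lazy" on images natively):

// Off-screen images keep their real URL in data-src and are only
// fetched once they approach the viewport, so unseen images never
// cost the user any data.
var lazyObserver = new IntersectionObserver(function (entries, obs) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      var img = entry.target;
      img.src = img.dataset.src; // triggers the actual download
      obs.unobserve(img);
    }
  });
}, { rootMargin: '200px' }); // start fetching shortly before visibility

document.querySelectorAll('img[data-src]').forEach(function (img) {
  lazyObserver.observe(img);
});

If that is working, Fully Loaded Time should reflect only the images the test's viewport actually needed.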
Fully Loaded Time also matters for things like anchored headings. If a page takes 15 seconds to load, then when I go to "yoursite.com/page#interesting-heading" there may be a lot of Cumulative Layout Shift while things are still loading.
If you are trying to work out what Google etc. consider important, then the answer I gave on the scoring updates to Google PageSpeed Insights (same engine as GTmetrix) illustrates what they actually score on and the weighting given to each criterion, which is a good insight into what your users are likely to care about, what will increase conversions, and what will increase time on site.
I'm writing a Chrome extension and I want to measure how it affects performance; specifically, at the moment I'm interested in how it affects page load times.
I picked a certain page I want to test, recorded it with Fiddler, and I use this recording as the AutoResponder in Fiddler. This allows me to measure load times without network traffic delays.
Using this technique I found out that my extension adds ~1200ms to the load time. Now I'm trying to figure out what causes the delay and I'm having trouble understanding the DevTools Performance results.
First of all, it seems there's a discrepancy in the reported load time:
On one hand, the summary shows a range of ~13s, but on the other hand, the load event arrived after ~10s (which I also corroborated using performance.timing.loadEventEnd - performance.timing.navigationStart):
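For reference, this is how I'm reading the load time in the page (the second part is a sketch of the modern equivalent, since performance.timing is deprecated in favour of Navigation Timing Level 2):

window.addEventListener('load', function () {
  // loadEventEnd is only populated once the load handlers have run,
  // hence the setTimeout.
  setTimeout(function () {
    var t = performance.timing;
    console.log('Load time:', t.loadEventEnd - t.navigationStart, 'ms');

    var nav = performance.getEntriesByType('navigation')[0];
    if (nav) console.log('Load time (Nav Timing 2):', Math.round(nav.loadEventEnd), 'ms');
  }, 0);
});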
The second thing I don't quite understand is how the number add up (or rather don't add up). For example, here's a grouping of different categories during load:
Neither of these columns sums to 10s or to 13s.
When I group by domain I can get different rows for the extension and for the rest of the stuff:
But it seems that the extension only adds ~250ms, which is much lower than the observed difference in load times.
I assume that these numbers represent just CPU time and do not include any wait time. Is this correct? If so, it's OK that the numbers don't add up, and it's possible that the extension doesn't spend all its time doing CPU-bound work.
Then there's also the mysterious [Chrome extensions overhead], which doesn't explain the difference in load times either. Judging by the fact that it's a separate line from my extension, I thought the two were mutually exclusive, but when I dive deeper into the specifics, I find my extension's functions under the [Chrome extensions overhead] subdomain:
So to summarize, this is what I want to be able to do:
Calculate the total CPU time my extension uses - it seems it's not enough to look under the extension's name, and its functions might also appear in other groups.
Understand whether the delay in load time is caused by CPU processing or by synchronous waiting. If it's the latter, find where my extension is doing a synchronous wait, because I'm pretty sure that I didn't call any blocking APIs. (A rough instrumentation sketch follows below.)
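For the second point, one approach (a sketch; doWork is a hypothetical stand-in for my content script's entry point) is to bracket the suspect code with User Timing marks, which show up as named spans in the Timings track of the Performance panel:

// If the 'extension-work' span is long but the CPU time attributed
// to it is short, the time is being spent waiting, not computing.
performance.mark('extension-start');
doWork(); // hypothetical: whatever the content script actually does
performance.mark('extension-end');
performance.measure('extension-work', 'extension-start', 'extension-end');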
Update
Eventually I found out that the reason for the slowdown was that we also activated Chrome accessibility whenever our extension was running, and that is what caused the drastic slowdown. Without accessibility, the extension had a very minor effect. I still wonder, though, how I could have seen in the profiler that my problem was accessibility. It could have saved me a ton of time... I will try to look at it again later.
The built-in Sitecore rendering stats page at http://<sitename>/sitecore/admin/stats.aspx is really helpful for identifying inefficient and slow-loading XSLT renderings. Recently I've started switching to .ascx sub layouts to take advantage of the Sitecore C# API, which can help improve performance when used correctly.
However, I've noticed that sub layouts (as opposed to XSLT renders) are not reported correctly on the stats page. See the screenshot below....
I know for a fact that this sub layout takes about 1.8 seconds to generate (I measured this in the code-behind). Caching is turned off, and I've refreshed the page 20 times to ensure I get an average. You will see that the "Avg. items" is always 0 - I can live with that - but the "Avg. time (ms)" is less than 1 ms, which is clearly wrong.
Does anyone have any insights into this? Has anyone found a way to get it to work correctly?
Judging whether a statistic is right/wrong is going to rely on understanding exactly what it is measuring.
Digging around in Sitecore.Diagnostics.Statistics using Reflector I note the following:
Sitecore.Web.UI.WebControl contains a field m_timer
This is 'started' in the BeforeRender() method and 'stopped' in the AfterRender() method
Data from that timer is sent to Statistics.AddRenderingData() and is logged against the control
This means it is measuring the time taken to render the control. For an XSLT rendering that includes the processing time for preparing all of its data, but since much of the work in a normal ASCX is done before the Render stage, the statistic is much less useful there. Incorporating the Load stage in the timing would inadvertently include the processing time of all child components, since the Load sequence is chained and called recursively, so that probably wouldn't help much either.
I suspect there is no good way to measure the processing time of a specific ASCX control (excluding its children) without first acquiring cumulative data and then post-processing the call chain to split the time apart. This is the sort of thing Red Gate ANTS does really well, though given the overhead it might not be a good idea to run it on a live production system.
I was simply cruising through the MODX options and I noticed the option to cache snippets. I was wondering what kind of effect (downsides) this would have on my site. I know that caching improves the site's loading time by keeping pages "cached" after the first request and then only reloading what has changed, but this all seems too good to be true. My question is simple: are there any downsides to caching snippets? Cheers, Marco.
Great question!
The first rule of MODX is: (almost) always cache. They've said so in their own blog.
As you said, the loading time will be lower. Let's get the basics down first: when you choose to cache a page, the page with all its output is stored as a file in your cache folder. If you have a small and simple site, you might not see much difference between caching and not caching, but if you have a complex one with lots of chunks-in-chunks, snippets parsing chunks, etc., the difference is enormous. Some of the websites I've made go down 15-30 levels to parse the content in some sections. Building all of this fresh from the database can take up to a couple of seconds, while loading a flat file takes only a few milliseconds. That is a HUGE difference (remember that).
Now: you can cache both snippets and chunks - important to remember. You can also cache one chunk while leaving the next level uncached. Using MODX's brilliant markup, you can choose what to cache and what not to cache, but in general you want as much as possible cached.
You ask about the downsides. There are none, but there are a few cases where you can't use cached snippets/chunks. As mentioned earlier, the cached output is stored per page. That means you can't cache a page (or URL, or whatever you want to call it) that displays different content based on, for example, GET parameters: you can't cache a search result (because the content changes), or a page with pagination (?page=1, ?page=2, etc. would produce different output at the same page). Another case is when a snippet's output is random or different every time. Say you put a random quote in your header; that snippet needs to be uncached, or you will just see the first random result every time. In all other cases, use caching.
Also remember that every time you save a change in the manager, the cache is wiped. So if you, for example, display the latest news articles on your front page, that can still be cached, because the output will not change until you add or edit a resource, at which point the cache is cleared anyway.
To sum it all up: caching is GREAT and you should use it as much as possible. I usually make all my snippets/chunks cached, and if I run into problems, that is the first thing I check.
Using caching makes your web server respond quicker (good for the user) and produces fewer queries to the database (good for you). All in all, caching is a gift. Use it.
There are no downsides to caching, and honestly I wonder what made you think there were?
You should always cache everything you can - there's no point in executing something on every page load when the output is exactly the same as before. By caching the output, you bypass the processing time and improve performance.
Assuming MODX Revolution (2.x), all template tags you use can be called both cached and uncached.
Cached:
[[*pagetitle]]
[[snippet]]
[[$chunk]]
[[+placeholder]]
[[%lexicon]]
Uncached:
[[!*pagetitle]] - this is pointless
[[!snippet]]
[[!$chunk]]
[[!+placeholder]]
[[!%lexicon]]
In MODX Evolution (1.x) the tags are different and you don't have as much control.
Some time ago I wrote about caching in MODX Revolution on my blog and I strongly encourage you to check it out as it provides more insight into why and how to use caching effectively: https://www.markhamstra.com/modx/2011/10/caching-guidelines-for-modx-revolution/
(PS: If you have MODX specific questions, I'd suggest posting them on forums.modx.com - there's a larger MODX audience there that can help)
No, this is NOT the "my page doesn't have any traffic and it has to be reloaded" issue.
We have 4 dynos for an alpha application. The reason we do is that each page takes over 2 seconds to load - even little things like rendering a text string (no layouts, ERB, or anything).
If I watch our logs, our longer queries report response times in the 300-700ms range, which is far shorter than 2 seconds.
The DNS is cached, and given the overall load time this isn't a slow-DNS issue. Besides, DNS shouldn't affect subsequent page loads anyway, right?
Any thoughts on how to get to the bottom of this would be appreciated.
Here are two screenshots to show what I mean.
http://dl.dropbox.com/u/7175041/Screenshots/qo.png
http://dl.dropbox.com/u/7175041/Screenshots/qq.png
Thanks!
First thing I'd do is switch on New Relic Basic - it's a free performance monitor integrated with Heroku. That'll help you get a bearing on where the trouble is coming from.
I take it that you don't see similar results locally? If not, skip this step; but if you do, you can also run New Relic locally and interrogate all of your queries for response times.
I'd stay away from things like the Benchmark library - that was my first thought when troubleshooting a speed issue, but Benchmark will necessarily ignore parts of your app outside the pure Ruby layer, and if that's where you're slow, New Relic catches it anyway.
Finally, if all else fails, a support ticket with Heroku's team has always been extremely helpful to me. Just make sure you check the box that lets them clone your app; it makes things a lot easier for them.
Let us know what you find out - I'm curious to see what the particular gremlin is!