dns-prefetch + preconnect vs. browser cache - performance

To improve page load time I want to use dns-prefetch and preconnect for external JavaScript files.
<link rel="dns-prefetch" href="https://example.com">
<link rel="preconnect" href="https://example.com">
What happens if the resource (in my case the external JavaScript) is already in the browser cache? Do dns-prefetch and preconnect add page load time unnecessarily? In other words: are dns-prefetch and preconnect only useful on the first page load?

On a repeat visit, preconnect/dns-prefetch will indeed be useless if all resources are served from the cache. But they will not increase page load time: they happen in parallel with page loading, and cache reads do not wait for DNS resolution or the TCP/TLS handshake. So the only drawback is that you create an unused TCP connection and slightly increase the load on the server.
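One way to verify this yourself on a repeat visit is the Resource Timing API. A minimal sketch, assuming the common heuristic that a cache hit transfers zero bytes over the network while still having a body; `isLikelyCacheHit` is a hypothetical helper name, not a browser API:

```javascript
// Heuristic check whether a resource timing entry was served from cache:
// a cached response transfers no bytes (transferSize === 0) even though
// its decoded body is non-empty. Not an official definition, but widely used.
function isLikelyCacheHit(entry) {
  return entry.transferSize === 0 && entry.decodedBodySize > 0;
}

// In a browser you would feed it real entries, e.g.:
// performance.getEntriesByType("resource")
//   .filter((e) => e.name.endsWith(".js"))
//   .forEach((e) => console.log(e.name, isLikelyCacheHit(e)));
```

For cached entries you will also see the DNS and connect phases collapse to zero duration, which is exactly why the prefetch/preconnect hints cannot slow such a load down.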

Related

What difference does it make if I add think time to my virtual users as opposed to letting them execute requests in a loop as fast as they can?

I have a requirement to test that a Public Website can serve a defined peak number of 400 page loads per second.
From what I read online, when testing web page performance, virtual users (threads) should be configured to pause and "think" on each page they visit, in order to simulate the behavior of a real live user before sending a new page load request.
I must use remote load generator machines to generate the necessary load, and I have a limit on how many virtual users I can run per load generator. This means that if I make each virtual user pause and "think" for x seconds on each page, that user will generate far less load than it would if it executed as fast as it could with no configured think time. I would therefore need more users, and implicitly more load generator machines, to achieve my desired "page loads per second", which would be more costly in the end.
If my only requirement is to prove that the server can serve 400 page loads per second, I would like to know what difference it really makes whether I add think times (and therefore use more virtual users) or not.
Why is "think time" generally considered something that should be added when testing web page performance?
A virtual user which is "idle" (doing nothing) has a minimal resource footprint (mainly its thread stack size), so I don't think you will need more machines.
A well-behaved load test must represent real-life usage of the application with 100% accuracy. If you're testing a website, each JMeter thread (virtual user) must mimic a real user using a real browser, with all the related features like:
handling embedded resources (images, scripts, styles, fonts, sounds, etc.)
using caching properly
getting and sending back cookies
sending appropriate headers
processing AJAX requests like browser does
The most straightforward example of the difference between 400 users without think times and 4000 users with think times is that the 4000 users will open 4000 connections and keep them open, while the 400 users will open only 400 connections.
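The relationship between throughput, response time and think time is just Little's Law: concurrent users = throughput × (response time + think time). A small sketch of that arithmetic, with an assumed 0.5 s response time for illustration:

```javascript
// Little's Law: concurrent users needed = throughput * (response + think time).
// The 0.5 s response time below is an illustrative assumption, not a measurement.
function requiredUsers(pagesPerSecond, responseTimeSec, thinkTimeSec) {
  return Math.ceil(pagesPerSecond * (responseTimeSec + thinkTimeSec));
}

// 400 pages/s with no think time: a small pool of users looping flat-out.
console.log(requiredUsers(400, 0.5, 0));   // 200 users
// The same 400 pages/s with a 9.5 s think time per page:
console.log(requiredUsers(400, 0.5, 9.5)); // 4000 users, holding 4000 connections
```

The server sees the same request rate in both cases, but a very different number of simultaneously open connections, which is the realism the answer above is pointing at.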

Joomla huge idle time on load

My friend and I are trying to figure out why we consistently see over 15 s of idle time on load, as you can see in the screenshot.
We are using Joomla version 3.5.1 and the Land Box theme. Any idea why we have such a huge idle time on loading?
Here's a webpage test view of the same page.
There's a 10-second wait until the first byte; my first suspicion is that the server may be overloaded.
Then there's close to another 10 seconds until the page is loaded, and the reason for this is the large number of files: about 100 in total (40 JS files, 37 images, and 40 CSS and other files).
Hope this helps!

What does the time elapsed between network requests mean?

As you can see the second request starts 784ms after the beginning. I wonder why that request starts at 784ms instead of 741ms (directly after the first one)?
What happens in the 43ms in between the first and second request? There is no indicator showing that there is a blocking request.
The 43 ms between the two requests is parsing time, i.e. the time the browser needs to parse the response of the page URL you requested and discover embedded resources like images, JavaScript files, etc., which then need to be loaded.
So there will always be a gap between the first request and the second.
The same happens for requests to other resources like CSS files: their contents first need to be interpreted after download to find the further resources they reference, e.g. images, web fonts or other CSS files.
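You can recover this gap yourself from Resource Timing data. A sketch using the numbers from the question; `parseGapMs` is a hypothetical helper, and in a browser the two timestamps would come from `performance.getEntriesByType("navigation")` and `performance.getEntriesByType("resource")`:

```javascript
// The gap between when the first response finished arriving and when the
// second request started is time the browser spent parsing the HTML to
// discover that sub-resource. Timestamps are in milliseconds since
// navigation start, as in the browser's timing APIs.
function parseGapMs(firstResponseEndMs, secondRequestStartMs) {
  return secondRequestStartMs - firstResponseEndMs;
}

// First request ends at 741 ms, second starts at 784 ms:
console.log(parseGapMs(741, 784)); // 43 ms spent parsing
```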

Why does the same html page take 25 sec to load on one server and 2 sec to load on another?

I have the exact same HTML sitting on two different servers. Both pages call things like stylesheets and images from the same servers (not each from its local server). In other words, these pages are identical except that they live on two different servers. It's all static HTML. The only DNS lookups are for images.
On one server it takes 25 seconds to load, and it appears most of that is spent waiting on the HTML page itself:
http://tools.pingdom.com/fpt/#!/CmGSycTZd/http://205.158.110.184/contents/mylayout/2
On the other server it takes under 2 seconds to load:
http://tools.pingdom.com/fpt/#!/rqg73fi7V/http://socialmediaphyte.com/TEST/image-dns-testing-ImageON.html
The only difference I can identify from Pingdom is "Connection": the slow server responds with "close" and the fast server responds with "Keep-Alive". Is THAT the most likely issue, or is it possibly something else? (And if you know the remedy for your suspected cause, that would be wonderful.)
Thanks!
Not using keep-alive will slow the overall load time a bit, because you incur the additional overhead of establishing a new connection for each resource rather than re-using one or more connections. This shouldn't account for a 23-second difference, though.
The Firebug Net Panel for Firefox can be of great assistance in seeing what is taking so long. It shows how long each resource requested by the page took to load, and how long each phase of the request took.
Other factors could include one server using gzip compression on pages while the other does not, or the slow server could simply be overloaded.
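A back-of-the-envelope sketch of why keep-alive alone can't explain 23 seconds. The resource count and RTT below are illustrative assumptions, and the model deliberately ignores that real browsers open several connections in parallel (which shrinks the penalty further):

```javascript
// Each brand-new TCP connection costs roughly one round trip for the
// handshake (ignoring TLS). With keep-alive, one connection is reused;
// without it, every resource pays that setup cost again.
function connectionOverheadMs(resourceCount, rttMs, keepAlive) {
  const newConnections = keepAlive ? 1 : resourceCount;
  return newConnections * rttMs;
}

// Assume 50 resources and a 100 ms round trip:
console.log(connectionOverheadMs(50, 100, true));  // ~100 ms with keep-alive
console.log(connectionOverheadMs(50, 100, false)); // ~5000 ms without it
```

Even this worst-case serial estimate is about 5 seconds, so most of the 23-second gap has to come from something else, such as the slow server's time to first byte.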

How does Chartbeat measure Server Load Time?

It seems that Chartbeat is reporting my Server Load Time incorrectly.
Does anyone know how Chartbeat measures this data so I can make this metric more accurate?
Chartbeat's server load time is a measure of how long it takes to load the HTML from your server, measured from servers on the US east coast.
http://chartbeat.com/faq/
From Chartbeat: How fast is my site? What information do I get from Load Times?
User Page load
...
It is calculated by timing the two pieces of Chartbeat JavaScript on the site. This means it includes all the JavaScript and embedded items you have on the page, and tells you how long it's taking to load in the user's browser.
If the load times look incorrect, check to make sure that you have:
<script type="text/javascript">var _sf_startpt=(new Date()).getTime()</script>
in the head of your page, before your other resources load, and the other snippet they give you just before the closing body tag.
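The snippet works on a simple two-timestamp principle: capture one timestamp as early as possible in the head, capture another once the page has loaded, and report the difference. A sketch of that idea; `loadTimeMs` is a hypothetical helper, not Chartbeat's actual code:

```javascript
// Two-timestamp measurement: the difference between a timestamp taken
// early in <head> and one taken at load time approximates the
// user-perceived page load time.
function loadTimeMs(startTimestamp, endTimestamp) {
  return endTimestamp - startTimestamp;
}

// In the page this is roughly:
//   <head>:  var _sf_startpt = (new Date()).getTime();
//   onload:  loadTimeMs(_sf_startpt, (new Date()).getTime());
console.log(loadTimeMs(1000, 3500)); // 2500 ms perceived load time
```

This is why the placement matters: if `_sf_startpt` is set late in the page, the measured interval shrinks and the reported load time looks artificially fast.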
