DNS prefetch of domains that result in a SPOF - performance

What happens if I inject
<link rel="dns-prefetch" href="//www.example.com" />
into the head of the document and the domain example.com is down (a SPOF)?
Will it affect the page load time?

No (well, effectively anyway).
For starters, DNS resolution isn't (usually) tied to the service itself and is cached along the way in the network, so even when a service is down its domain will usually still resolve fine.
Even assuming the DNS resolutions are timing out, it still won't have any impact. The dns-prefetch hints are just that, hints; they are not required to complete loading the page, so the browser can keep trying to resolve the domain while the page does whatever it needs to, and it won't delay anything else.
The slight caveat is that the browser may limit concurrent DNS lookups to work around buggy home routers (Chrome limits them to 6, but that is subject to change). In theory the dns-prefetch hint could tie up one of those concurrent lookup slots, but practically speaking that's not very likely, and the impact would be minimal (probably not even measurable).
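If you want to see for yourself that the lookup is decoupled from the HTTP service, you can time a DNS resolution on its own. A minimal Python sketch, where www.example.com stands in for the prefetched domain:
    # Time a DNS lookup by itself, independent of any HTTP request.
    # "www.example.com" is a placeholder for the domain being prefetched.
    import socket
    import time

    start = time.perf_counter()
    try:
        infos = socket.getaddrinfo("www.example.com", 443)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"resolved to {infos[0][4][0]} in {elapsed_ms:.1f} ms")
    except socket.gaierror as exc:
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"resolution failed after {elapsed_ms:.1f} ms: {exc}")
Run this while the origin's web server is down and the lookup will typically still succeed quickly, which is why the hint is harmless even for a SPOF domain.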

Are there any significant performance benefits with HTTP/2 multiplexing as compared with HTTP/2 Server Push?

HTTP/2 multiplexing reuses the same TCP connection, thereby removing connection setup time to the same host.
But with HTTP/2 Server Push, are there any significant performance benefits beyond saving the round-trip time that HTTP/2 multiplexing still spends requesting each resource?
I gave a presentation about this, which you can find here.
In particular, the demo (starting at 36:37) shows the benefits that you can have with multiplexing alone, and then by adding HTTP/2 Push.
Spoiler: the combination of HTTP/2 multiplexing and Push yields astonishingly better results than HTTP/1.1.
Then again, every case is different, so you have to actually measure your case.
But the potential of HTTP/2 to yield better performance than HTTP/1.1 is really large, and many (most?) cases will benefit from this.
I'm not sure what exactly you're asking here, or if it's a good fit for StackOverflow, but I will attempt to answer nonetheless. If this is not the answer you are looking for, then please rephrase the question so we can understand what exactly it is you are looking for.
You are right that HTTP/2 uses multiplexing, which does negate the need for multiple connections (and the time and resources needed to set them up and manage them). However, it's much more than that: multiplexing isn't limited the way connections are (browsers will typically limit connections to 4-6 per host), and it also allows "similar" hosts (same IP and same certificate but a different hostname) to share a connection. Basically it solves the queuing of resources that HTTP/1's one-at-a-time request/response cycle causes, and it reduces the need for the multiple connections that HTTP/1 requires as a workaround, which in turn reduces the need for other workarounds like sharding, sprite files, concatenation, etc.
And yes, HTTP/2 Server Push saves one round trip. When you request a webpage, the server sends both the HTML and the CSS needed to draw the page, because it knows you will need the CSS: it's pointless to send you just the HTML, wait for your web browser to receive it, parse it, see that it needs CSS, request the CSS file, and then wait for that to download.
I'm not sure if you're implying that round-trip times are so low that there is little to gain from HTTP/2 Server Push because, thanks to HTTP/2 multiplexing, there is no longer any delay in requesting a file? If so, that is not the case: there are significant gains to be made in pushing resources, particularly blocking resources like CSS, which the browser will wait for before drawing a single thing on screen. While multiplexing reduces the delay in sending a request, it does not reduce the latency of the request travelling to the server, nor of the server responding to it and sending the resource back. While these delays sound small, they are noticeable and make a website feel slow.
So yes, at present, the primary gain for HTTP/2 Server Push is in reducing that round trip time (basically to zero for key resources).
However, we are in the infancy of this, and there are potential other uses, for performance or other reasons. For example, you could use it as a way of prioritising content: an important image could be pushed early when, without this, a browser would likely request CSS and JavaScript first and leave images until later. Server Push could also negate the need to inline CSS (which bloats pages with copies of style sheets and may require JavaScript to then load the proper CSS file), another HTTP/1.1 performance workaround. I think it will be very interesting to watch what happens with HTTP/2 Server Push over the coming years.
That said, there are still some significant challenges with HTTP/2 Server Push. Most importantly, how do you avoid wasting bandwidth by pushing resources that the browser already has cached? A cache-digest HTTP header will likely be added for this, but it is still under discussion. That leads to the question of how best to implement HTTP/2 Server Push, for web browsers, web servers and web developers alike. The HTTP/2 spec is a bit vague on how this should be implemented, which leaves different web servers providing different methods for applications to signal that a resource should be pushed.
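As an illustration of one common signalling convention (not mandated by the spec): some servers, such as Apache's mod_http2 and H2O, will push a resource when the application adds a preload Link header to the response. A minimal Flask sketch, assuming such a server sits in front of the app; the stylesheet path is illustrative:
    # Sketch: signal an HTTP/2 push via a "Link: rel=preload" response header.
    # Flask itself does not push anything; this assumes a front-end server
    # (e.g. Apache mod_http2 or H2O) that turns the header into a PUSH_PROMISE.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response(
            "<html><head>"
            '<link rel="stylesheet" href="/static/style.css">'
            "</head><body>Hello</body></html>"
        )
        # Hypothetical resource path; the front-end server may push it.
        resp.headers["Link"] = "</static/style.css>; rel=preload; as=style"
        return resp

    if __name__ == "__main__":
        app.run()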
As I say, I think this is one of the parts of HTTP/2 that could lead to some very interesting applications. We live in interesting times...

Web app initial load time

I am using a shared hosting plan at Bluehost to host a golf tournament live scoring mobile web app. I am caching everything I can on Cloudflare, and have spent quite some time on overall optimization of the initial download and rendering times. There might be more I could do, but without question my single biggest issue is the initial call to my website: www.spanishpointscup.org. Sometimes this seems to be related to DNS lookup and other times to Waiting (TTFB).
Below are two screenshots of the network calls that show variations in accessing my index.html. Sometimes this initial file load can take even longer. Very rarely do any of the other downloaded files create a long delay, so my only focus now is the initial file load. I think that even if I had server-side rendering, I would still have this issue.
Does anyone have specific recommendations that they are confident will help me? Switch to VPS or other host? Thank you.
This is typical when you use a shared server.
The DNS has nothing to do with the issue. DNS affects the request, not the response; it is the browser that must resolve the domain name to an IP address.
The delay you are seeing is due to the server being busy: your page is sitting in a queue waiting behind other processes. Possibly you have a CPU-grabbing neighbor on your shared server, or Bluehost has some performance issues.
You will likely notice some image files take an excessively long time to transmit. Which image is slow will appear to be random with each fresh (not in cache) page load.
UPDATE
After further review I noticed that the "wait" times are excessive. Wait time is shown in green on your waterfall; notice how the transmit time (blue) is short by comparison. The wait time is how long it takes the server to retrieve the page from disk and put it into the transmit buffer, and 300-400 milliseconds is excessive.
Find a new service provider.
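Before committing to a move, it can help to quantify the wait time over repeated requests rather than relying on a couple of waterfalls. A rough Python sketch using the third-party requests library (the URL is the one from the question):
    # Sample time-to-first-byte for a page over several requests.
    # Requires the third-party "requests" library (pip install requests).
    import requests

    URL = "http://www.spanishpointscup.org/"

    samples = []
    for _ in range(10):
        # stream=True returns as soon as the response headers arrive,
        # so r.elapsed approximates DNS + connect + server wait (TTFB).
        r = requests.get(URL, stream=True)
        samples.append(r.elapsed.total_seconds() * 1000)
        r.close()

    samples.sort()
    print(f"min {samples[0]:.0f} ms, "
          f"median {samples[len(samples) // 2]:.0f} ms, "
          f"max {samples[-1]:.0f} ms")
A consistently high median points at the server itself; a low median with occasional spikes points at a noisy neighbor or queuing.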

Shouldn't CloudFlare with "Cache Everything" cache everything?

I have a CloudFlare account and found out that if I use page rules, I can use a more aggressive cache setting called "Cache Everything". From what I read, it should basically cache everything. I tested it on a site that is completely static, and I set the expiration time to 1 day.
Now, after a few days of looking at how many requests have been served from the cache versus not, there's no change: still about 25% of the requests have not been served from the cache.
The two rules I've added for Cache Everything are:
http://www.example.com/
and
http://www.example.com/*
Both with Cache Everything and 1 day expiration time.
So my questions are: have I misinterpreted the use of Cache Everything (I thought I should only get one request per page/file each day using this setting), or is something wrong with my rules? Or do I maybe need to wait a few days for the cache to kick in?
Thanks in advance
"Or maybe do I need to wait a few days for the cache to kick in?"
Our caching is really designed to function based on the number of requests for the resources (a minimum of three requests), and works basically off of the "hot files" on your site (those frequently requested). It is also very much data-center specific: if we get a lot of requests in one data center, for example, then we would cache the resources in that data center.
Also keep in mind that our caching will not cache third-party resources that are on your site (calls to ad platforms, etc.).
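One way to check which requests Cloudflare actually served from its cache is the CF-Cache-Status response header. A small Python sketch using the third-party requests library (the URL is a placeholder; substitute a page covered by your rule):
    # Check whether Cloudflare served a URL from its edge cache.
    import requests

    r = requests.get("http://www.example.com/")
    # Typical values: HIT (served from cache), MISS, EXPIRED; the header
    # is absent entirely if the response was not eligible for caching.
    print(r.headers.get("CF-Cache-Status", "header not present"))
Requesting the same URL a few times from one location and watching the header flip from MISS to HIT is a quick way to observe the "minimum of three requests" behaviour described above.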

How to slow down a WWW server at the nameserver level?

For scientific purposes I would like to know how to slow down a web server at the DNS level.
Is it possible via the TTL setting?
Thank you,
Ralph
It should not be possible to slow down the speed of a website (HTTP) solely by modifying the DNS response.
However, you could easily slow down the initial page load via DNS by modifying the DNS server to take an abnormally long time before returning its results. The problem is that this will really only affect the initial load of the website, as after that web browsers, computers, and ISPs will cache the result.
The TTL you spoke of only affects how long the DNS result should be cached, which generally has minimal effect on the speed of the website. That said, it would theoretically be possible to set the DNS TTL to a value close to 0, requiring the client to re-look up the IP via DNS on nearly every page load. This would make nearly every new page from the website load very slowly.
However, the problem with this attack is that in the real world, vendors and ISPs often don't follow the rules exactly. There are numerous ISPs and even some consumer devices that don't honor low TTL values in DNS replies and will cache the DNS result for a decent period of time regardless of what the DNS server asked for.
So, from my experience lowering TTLs to very low values while transferring services to new IPs, and seeing ridiculously long caching times regardless, I would say that while an attack such as this may work, it would depend hugely on which DNS resolver each victim is using, and in most cases it would cause close to no delay after the initial page load.
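To see how resolvers actually treat your TTL, you can query a record and watch the TTL a caching resolver reports. A sketch assuming the third-party dnspython package (2.x API); example.com is a placeholder:
    # Inspect the TTL a resolver currently reports for an A record.
    # Requires dnspython 2.x (pip install dnspython).
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "A")
    for record in answer:
        print("address:", record.address)
    print("TTL reported by the resolver:", answer.rrset.ttl, "seconds")
Running this twice in quick succession against a caching resolver typically shows the TTL counting down; a resolver that keeps reporting a large TTL despite a tiny value at the authoritative server is exhibiting exactly the rule-breaking caching described above.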

Causes of high network latency

I have a site that is moving incredibly slowly right now. Both Safari's inspector and Firebug are reporting that most of the load time is due to latency. The actual download is happening in less than a second. There's a lot of database activity in play (though the metrics on that indicate that it's pretty healthy), but what else can cause really high latency? Is it a purely network thing or are there changes I can make to the app to improve the latency numbers?
I'm using YSlow to help identify performance improvements, but on the whole, I don't see it reporting anything that seems crazy unreasonable. Opportunities for improvement, certainly, but nothing that seems like it would cause the huge load times I'm seeing.
Thanks.
UPDATE
Some background and metrics, in case they're useful. This is a CakePHP application and I'm using my UsersController::login action as the benchmark. To identify how much of a factor the application code is, I've printed a stack trace immediately upon entering UsersController::beforeFilter(). Here's the output:
UsersController::beforeFilter() - APP/controllers/users_controller.php, line 13
Controller::startupProcess() - CORE/cake/libs/controller/controller.php, line 522
Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 187
Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171
[main] - APP/webroot/index.php, line 83
Load times, as shown by Safari's inspector, range from 11.2 seconds to 52.2 seconds. This would seem to point away from the application code and toward something with my host, but maybe I'm completely misinterpreting or oversimplifying this?
If you cannot directly identify a slow component of your application, there are a number of other steps along the way that can certainly slow your site down. Whenever I'm experiencing unusually long request times, I typically start by looking at the local DNS and then move on to my hosted DNS. Sometimes a cache refresh (on their part, not yours) can cause a lot of slow lookups until their database has caught up.
Otherwise, they might actually have a service outage and your requests are being sent to their secondary or backup server. If everything seems fine in terms of domain resolution, your hosting provider might be experiencing a service outage, which can take a number of different shapes: serving static content from backups, or over-allocating shared resources until everything is running as it should. You can experience a lot of what they call throttling on shared cloud architectures when they have a box go down. On the plus side, you don't have a total outage in that circumstance.
One time, in a shared grid configuration, I had a processor go to hell. The bizarre part was that static content was still being served from a backup, but it was still polling our database (which was on a different server) and causing our account to be throttled because of over-allocation on the backup. It wasn't our fault, but the host started sending nasty emails about our excessively long polls. The moral of the story is: if it's not your application, and the slowdown came out of the blue, somewhere along the line I'll bet you'll find a hardware failure or misconfiguration.
Also, now that I think of it, if you are syndicating outside content (be it server-side or browser-side), it might not be in your chain of responsibility at all. If you are serving ads from a subscriber service, for example, they might be having a high-load period or an outage. These are just the steps I would take to narrow down the culprit.
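One concrete first step along those lines is to time the DNS lookup and the server response separately, so you know which party to investigate. A rough standard-library Python sketch; the host and path are placeholders standing in for the question's login action:
    # Split page latency into DNS time vs. connect + server wait.
    # Host and path are placeholders; substitute your own site.
    import http.client
    import socket
    import time

    HOST = "www.example.com"

    t0 = time.perf_counter()
    socket.getaddrinfo(HOST, 80)            # DNS lookup only
    t1 = time.perf_counter()

    # The connection does its own lookup too, but the OS cache normally
    # serves it, so this leg mostly measures connect + server wait.
    conn = http.client.HTTPConnection(HOST, timeout=60)
    conn.request("GET", "/users/login")     # illustrative path
    resp = conn.getresponse()               # returns once headers arrive
    t2 = time.perf_counter()

    print(f"DNS: {(t1 - t0) * 1000:.0f} ms, "
          f"connect + server wait: {(t2 - t1) * 1000:.0f} ms "
          f"(HTTP {resp.status})")
    conn.close()
If the second number dominates and stays in the tens of seconds, the application or host is the culprit; if the first one does, start with DNS.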
Probably this will not be the solution for you, but when I had a dog-slow Safari (and FF too), I simply changed my DNS servers to OpenDNS (208.67.222.222, 208.67.220.220) and all my problems were resolved.
