Why does Firefox always load the next page in the menu as well as the page I have requested?

I am working on a new website. While testing some of the functionality I had a number of debug statements and was watching the logs. It seems that Firefox (at least) loads the "next" page in the menu as well as the page I have clicked on. If I have menu items A B C D E and click on B then I see a request for mysite.com/B and then a request for mysite.com/C in the logs, and so on.
Is this some kind of look-ahead performance thing? Is there any way to avoid it (setting an attribute on the link, maybe)? The problem is that the second page in my menu is somewhat heavier, as it loads a lot of data from a web service. I'm happy for people to do that if they want to use the functionality, but I would rather not have every visitor to the front page load it unnecessarily. Is this behaviour consistent across browsers?

Yes, Firefox will prefetch links to improve perceived performance for the user. You can read more about the functionality in Firefox here: https://developer.mozilla.org/en-US/docs/Link_prefetching_FAQ
You can't disable this on your visitors' behalf from the server, but prefetch requests include the header X-moz: prefetch, which you can use to determine whether a request is in fact a prefetch and, for example, return a blank page for prefetch requests. You can then use Cache-Control: must-revalidate to make sure the full page loads when it is actually requested by the user.
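As a rough sketch of that header check, assuming a Node/Express server (the framework and the /B route are my own illustration, not something from the question or answer):

const express = require('express');
const app = express();

app.get('/B', (req, res) => {
  // Firefox marks prefetch requests with an "X-moz: prefetch" header.
  if (req.get('X-moz') === 'prefetch') {
    // Serve a cheap empty response and force revalidation on the real visit.
    res.set('Cache-Control', 'must-revalidate');
    return res.status(200).send('');
  }
  // Real visit: render the full (expensive) page.
  res.send('<!-- full page markup would go here -->');
});

app.listen(3000);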
If you happen to be using WordPress for your site, you can remove the link tags that trigger the prefetching by using:
WordPress 3.0+:
// remove the auto-loaded rel=next post link from the header
remove_action('wp_head', 'adjacent_posts_rel_link_wp_head');
Older versions:
// remove the auto-loaded rel=next post link from the header
remove_action('wp_head', 'adjacent_posts_rel_link');

Yes, it's called prefetching. It can be turned off in the client; see the FAQ:
https://developer.mozilla.org/en-US/docs/Link_prefetching_FAQ
I'm not aware of a way to turn it off from the server side.

Related

What are full page reloads, and why did we need to do full page reloads without AJAX?

I was reading up on AJAX and how it empowers us to exchange data with a server behind the scenes and consequently avoid full page reloads. My confusion lies here: I don't really understand what full-page reloads mean. It's probably because I've been working with AJAX/React from the start, so I have never really seen a webpage of mine fully reload when I access stuff from a database or an API.
It would be great if someone could explain what they are and why we needed them before AJAX.
A full page load is where the entire page is downloaded from the server. A page typically consists of several sections: header, footer, navigation, and content. In a classic web application without AJAX, a user clicks on a link to another page, and has to download the full page, even though only the main content is changing. The header, footer, and navigation all get downloaded again even though they don't change.
With AJAX there is the opportunity to only change the parts of the page that will change. When a user clicks on the link, JavaScript loads just the content for that link and inserts it into the current page. The header, footer, and navigation don't need to reload.
This introduces other problems that need attention.
When AJAX inserts new content into the page, the URL doesn't change. That makes it difficult for users to bookmark or link to specific content. Well written AJAX applications use history.pushState() to update the URL when loading content via AJAX.
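As a rough sketch of that idea (the #content element and the URLs are made up for illustration):

// After inserting new content via AJAX, record the new URL in the history.
function showContent(url, html) {
  document.querySelector('#content').innerHTML = html;
  history.pushState({ url: url }, '', url);
}

// Handle the back/forward buttons by re-fetching the recorded content.
window.addEventListener('popstate', function (event) {
  if (event.state && event.state.url) {
    fetch(event.state.url)
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.querySelector('#content').innerHTML = html;
      });
  }
});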
There are then two paths to get to every piece of content. Users can either load the URL containing that content directly, or load the content into some other page by following a link. Web developers need to test and ensure both work.
Search engines have trouble crawling AJAX powered sites. For best compatibility, you need to employ server side rendering (SSR) or pre-rendering to serve initial content on a page load that doesn't require JavaScript.
Even for Googlebot (which executes JavaScript), care must be taken to make an AJAX-powered site crawlable. Googlebot doesn't simulate user actions like clicking, scrolling, hovering, or moving the mouse, so:
Content needs to appear on page load without any user interaction.
You must use <a href=...> links for navigation so that Googlebot can find other pages by scanning the document object model (DOM). For users, JavaScript can intercept clicks on those links and prevent a full page load by using return false from the onclick handler or event.preventDefault() in the click handler, as sketched below.
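A minimal sketch of that progressive-enhancement pattern; the data-ajax-nav attribute and the #content element are invented for the example:

// Real <a href> links for crawlers; JavaScript upgrades them for users.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-ajax-nav]');
  if (!link) return;
  event.preventDefault();                    // stop the full page load
  fetch(link.href)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.querySelector('#content').innerHTML = html;
      history.pushState({}, '', link.href);  // keep the URL bookmarkable
    });
});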

Too many FB share button requests

I'm optimizing my site's speed. One of the main issues I'm facing is the homepage.
On the homepage, each article has FB/TW share buttons.
I only inserted the scripts in the footer once, but I'm getting a bunch of FB/TW share button requests.
Is this normal, or is there something I need to do?
For every Like/Share button that you have, your browser needs to make a request to get its content. This only happens after the browser has received the page from your server, so it does not affect the initial load time.
As CBroe mentions, each button is displayed in an iframe. These are loaded either all at the same time or one after another, depending on your browser settings. During this time your browser is not blocked, so your user can already interact with the page.
If you want to reduce the load, the only option is to remove the buttons. I assume you have some index/home page where you load all the articles, with a button for each of them? If you are really concerned about this, you could consider only showing the buttons on the article pages themselves.
But since this is normal behaviour and your page is not blocked by loading all the iframes, this is not a big issue, nor is it something you can optimise yourself.

How to circumvent cache revalidation on browser refresh?

Sometimes it is useful to measure your site's performance in a fully cached situation. But browsers make this hard to test, because on every manual page refresh they revalidate all items, which results in a request for every resource on the web server. Valid cache items get an HTTP 304 response, invalid ones a 200 OK. So you end up with the wrong timings for this particular use case because of the latency to your web server.
One solution is to open a new tab and then enter the site's URL, which gives the expected behaviour: cached items are served from disk. As soon as you hit refresh, the items are revalidated again.
This workflow (open tab, open tab, and so on) is pretty awkward, so I want to ask whether anybody knows a better way to achieve this. Maybe there is a nicely hidden shortcut I have missed so far.
Just add a bookmark and fill in the following in the 'URL' field: javascript:location.href=location.href;
It makes perfect sense for browsers to revalidate all resources of a given URL whenever the 'reload' button is clicked, as this best matches the intention of the user doing so; in other words, a 'reload' is expected to result in the display of fresh data, instead of simply serving what's already in the browser cache. In addition, all major browsers have implemented a way of actually fetching every single resource, bypassing any caching instructions in the HTTP headers (so not even a '304 Not Modified' round trip), using shortcuts like Shift/Ctrl + R/F5.
In Firefox, just focus the URL bar (Ctrl-L on Windows and Linux, Cmd-L on Mac) and hit Enter. This is treated like a normal top-level load of the URL, not a reload.
Unfortunately, other browsers handle that sort of thing differently. The simplest cross-browser approach is probably a little harness page that loads the thing you want to test in an iframe and has a button that sets subFrameWindow.location.href = subFrameWindow.location.href to trigger a new load.
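A bare-bones version of such a harness, written as plain JavaScript; the page URL is a placeholder, and the page under test must be on the same origin for the location assignment to be readable:

// Build the harness: an iframe plus a button that re-assigns the frame's
// location, which counts as a normal load (cache hits) rather than a reload.
var frame = document.createElement('iframe');
frame.src = '/page-under-test.html';      // placeholder, same-origin page
frame.style.width = '100%';
frame.style.height = '600px';

var button = document.createElement('button');
button.textContent = 'Load again (no revalidation)';
button.onclick = function () {
  var win = frame.contentWindow;
  win.location.href = win.location.href;  // normal load, served from cache
};

document.body.appendChild(button);
document.body.appendChild(frame);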

How to refresh jquery mobile multipage document after deploying new software

I have a jqm multipage document (index.html) that includes several pages and other assets (JS, CSS, etc.). I have my server configured to use ETags for the HTML, CSS, and JS files. The request/response headers are set appropriately and it works as expected.
During use of my application there are no requests to index.html (with the exception of signing off), so there is never really a chance for the browser to see whether there is a new file out there, let alone new versions of its CSS and JS files (unless the user signs off, requests the page again, or does a refresh). If I deploy new software, how might I notify the user that new software is available and/or somehow force a refresh of the index.html file?
My initial thought was to store the version number on the client and periodically make AJAX requests to the server to check it. If new software is available, display a link informing the user of the new software; clicking the link would fetch it (reload index.html).
I'm curious how others have done this? Thoughts? Recommendations?
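One way to sketch that version-check idea (the /version.json endpoint, the #update-banner element, and the five-minute interval are all assumptions of mine, not part of the question):

// Version baked into the page at deploy time (assumption).
var CURRENT_VERSION = '1.0.0';

// Periodically ask the server which version is deployed.
setInterval(function () {
  $.getJSON('/version.json', function (data) {
    if (data.version !== CURRENT_VERSION) {
      // New software available: show a banner the user can tap to reload.
      $('#update-banner')
        .text('A new version is available - tap to reload')
        .show()
        .off('click')
        .on('click', function () {
          window.location.reload();   // re-requests index.html; the ETag check returns the new file
        });
    }
  });
}, 5 * 60 * 1000);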
If you link to a multi-page document, you must use a data-ajax="false" attribute on the link to cause a full page refresh, due to the limitation above where we only load the first page node in an Ajax request because of potential hash collisions. There is currently a subpage plugin that makes it possible to load in multi-page documents.
Reference: http://jquerymobile.com/demos/1.1.0/docs/pages/page-navmodel.html
Please also look at this:
Cases when Ajax navigation will not be used
Under certain conditions, normal http requests will be used instead of Ajax requests. One case where this is true is when linking to pages on external websites. You can also specify that a normal http request be made through the following link attributes:
rel=external
target (with any value, such as "_blank")
If the page is loaded via a normal http request instead of Ajax, the problem should be solved.

Firefox 3 doesn't allow 'Back' to a form if the form resulted in a redirect last time

Greetings,
Here's the problem I'm having. I have a page which redirects directly to another page the first time it is visited. If the user clicks 'back', though, the page behaves differently and instead displays content (tracking session IDs to make sure this is the second time the page has been loaded). To do this, I tell the user's browser to disable caching for the relevant page.
This works well in IE7, but Firefox 3 won't let me click 'back' to a page that resulted in a redirect. I assume it does this to prevent the typical back-->redirect again loop that frustrates so many users. Any ideas for how I may override this behavior?
Alexey
EDIT: The page we redirect to is an external site over which we have no control. Server-side redirects won't work because they wouldn't create a history entry for the back button in the browser.
To quote:
Some people in the thread are talking about server-side redirect, and redirect headers (same thing)... keep in mind that we need client-side redirection which can be done in two ways:
a) A META header - Not recommended, and has some problems
b) Javascript, which can be done in at least three ways ("location", "location.href" and "location.replace()")
The server side redirect won't and shouldn't activate the back button, and can't display the typical "You'll be redirected now" page... so it's no good (it's what we're doing at the moment, actually.. where you're immediately redirected to the "lucky" page).
I think the Mozilla team takes a step in the right direction by breaking this particularly annoying pattern. Finding a way around it somewhat defeats the purpose, doesn't it?
Instead of redirecting on first encounter, you could simply make your page render differently when a user hits it the first time. Should be easy enough on the server side, since you already have the code that is able to make that distinction.
You can get around this by creating an iframe and saving the state of the page in a form field inside the iframe before doing the redirect. All browsers save the form fields of an iframe.
This page has a really good description of how to get it working. It's the same technique Google Maps uses when you click on map search results.
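Roughly, the idea looks like this; the state-frame iframe (pointing at a same-origin page containing a hidden input with id saved-state) and the field names are made up for the sketch:

// Before redirecting away, stash state in a form field inside a hidden iframe;
// browsers restore form-field values when the user navigates back.
function saveStateAndRedirect(state, targetUrl) {
  var frame = document.getElementById('state-frame');
  frame.contentDocument.getElementById('saved-state').value = state;
  window.location.href = targetUrl;
}

// On load, check whether the iframe still holds state from the earlier visit.
window.addEventListener('load', function () {
  var frame = document.getElementById('state-frame');
  var field = frame.contentDocument.getElementById('saved-state');
  if (field && field.value) {
    // Second visit: render the page's content instead of redirecting again.
  }
});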
I'm strongly in favor of the Firefox behaviour.
The most basic way to redirect is to let the server send an HTTP 302 status code plus a Location header back to the client. This way the client (typically a browser) will not place the request URI into its history, but simply requests the new URI instead.
Now it seems that Firefox has started to apply the same behaviour to pages that trigger a redirect themselves, e.g. via JavaScript in the onload event.
If you want the browser not to display a page, I think the best solution is for the server not to send the page in the first place.
It's possibly there to eliminate repeated actions.
A common way people do things is:
page 1 -> [Action] -> page 2 -> redirect to page 2 without the action parameters.
If you were permitted to click the back button in this situation and visit the page without the redirect, the action would be blindly re-performed.
Instead, Firefox presumes the server sent a redirect header for a good reason.
It's worth noting that you can still have content delivered after the redirect header: sending a redirect header (at least in PHP) doesn't terminate execution, so in theory, if you were to ignore the redirect response, the page could end up doing weird stuff.
(I circumvent this by making sure all our redirects go through the same function call, which explicitly terminates right after sending the redirect, because people tend to assume that's how it behaves when coding.)
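For what it's worth, here is what that post/redirect/get flow looks like in an Express-style handler (the framework, route, and the comment store are my own illustration; the answer above talks about PHP, where the same "terminate after redirect" caveat applies):

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// Toy in-memory store for the example.
const comments = [];
function saveComment(body) { comments.push(body.text); }
function renderComments() {
  return '<ul>' + comments.map(function (c) { return '<li>' + c + '</li>'; }).join('') + '</ul>';
}

// Post/Redirect/Get: do the action on POST, then redirect so that "back"
// and "refresh" land on a harmless GET instead of re-performing the action.
app.post('/comments', (req, res) => {
  saveComment(req.body);
  res.redirect(303, '/comments');   // 303 "See Other": always re-fetched with GET
  return;                           // redirect() does not halt the handler by itself
});

app.get('/comments', (req, res) => {
  res.send(renderComments());
});

app.listen(3000);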
In the URL bar of Firefox, type about:config.
Change this setting:
browser.sessionstore.postdata
Change it from 0 to 1.
