Sometimes it is useful to measure your site's performance in a fully cached situation. But browsers make this hard to test, because on every manual page refresh they revalidate all items, which results in a request for every resource on the webserver. Valid cache items get an HTTP 304 response, invalid ones a 200 OK. So you end up with wrong timings for this particular use case because of the latency to your webserver.
One solution is to open a new tab and enter the site's URL, which gives the behaviour I expect: cached items are served from disk. As soon as you hit refresh, the items are revalidated again.
This workflow (open tab, open tab, and so on) is tedious, so I want to ask whether anybody knows a better way to achieve this. Maybe there is a well-hidden shortcut I have missed so far.
Just add a bookmark and fill in the following in the 'URL' field: javascript:location.href=location.href;
It makes perfect sense for browsers to revalidate all resources of a given URL whenever the 'reload' button is clicked, as this best matches the intention of the user doing so - in other words, a 'reload' is expected to result in the display of 'fresh' data, instead of simply serving what's already in the browser cache. In addition to this, all major browsers have implemented a way of actually re-fetching every single resource, bypassing the cache and any caching instructions in the HTTP headers entirely, using shortcuts like Shift/Ctrl + R/F5.
In Firefox, just focus the URL bar (Ctrl-L on Windows and Linux, Cmd-L on Mac) and hit Enter. This is treated like a normal top-level load of the URL, not a reload.
Unfortunately, other browsers handle that sort of thing differently. The simplest cross-browser approach is probably a little harness page that loads the page you want to test in an iframe and has a button that sets subFrameWindow.location.href = subFrameWindow.location.href to trigger a new load.
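For illustration, a minimal sketch of such a harness could look like this; the target URL and sizing are placeholders, and it assumes the framed page is same-origin so its location can be reassigned from script:

// Harness sketch: load the page under test in an iframe and re-navigate it on demand,
// so cached resources are reused instead of being revalidated.
var frame = document.createElement('iframe');
frame.src = 'http://example.com/page-under-test';   // placeholder URL
frame.style.width = '100%';
frame.style.height = '600px';

var button = document.createElement('button');
button.textContent = 'Load again';
button.onclick = function () {
  var subFrameWindow = frame.contentWindow;
  // Assigning href triggers a normal navigation inside the frame, not a reload.
  subFrameWindow.location.href = subFrameWindow.location.href;
};

document.body.appendChild(button);
document.body.appendChild(frame);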
As I understand it, this is how browser caching works. Assume a far-future expiration header has been set to, let's say, a year, and foo.js is set to be cached. Here are some scenarios:
First visit to the page: the server returns 200 and foo.js is cached for a year.
Next visit: the browser finds foo.js in the cache but has to check with the server whether it has been modified. If not, the server returns a 304 - Not Modified.
The user is already on the page (and foo.js is in the cache) and clicks a link to another page: the browser uses the cached version of foo.js without a round trip to the server and reports a 200 (Cached).
The user is already on the page (and foo.js is in the cache) and for some reason hits F5/Reload: the browser checks the cache but still has to make a round trip to the server to ask whether foo.js has been modified. If not, the server returns a 304.
As you can see, whenever a page is refreshed, the browser always makes a trip to the server to check whether the file has been modified. I know this is not much data, and the server only returns the header information, but in some cases the round-trip time is extremely important.
The question is: is there a way I can avoid this, since I'm already setting the expiration for the files? I just want the browser to always fetch them from the cache until the expiration has passed, or until I replace the file with something else (by versioning it).
From what I understand, pressing F5/Ctrl-R is a browser-specific action, so the behaviour is left up to the browser.
What if the user clears the cache before taking another action? Even if there were an HTTP specification to force use of the cache on F5, there would be no guarantee you could achieve what you need.
Simply configure and code things to cache wherever possible and leave the rest to the user.
It looks like when you navigate to a page (that is, by entering an address in the URL bar or clicking a link), resources are fetched from the cache without any request to the server. But when you refresh the page, the browser sends a conditional request for each resource and so pays the round-trip time.
This is clearer in the Network tab of IE's Developer Tools. If you look at the Initiator column, it says navigate for the first case and refresh for Ctrl+R or F5.
You can override the F5 and Ctrl+R behaviour by adding an event listener for those keys, preventing the default behaviour with event.preventDefault() (or something similar), and then doing a window.location = window.location. This causes a page navigation instead of a refresh.
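A rough, untested sketch of that listener (key handling differs somewhat between browsers):

document.addEventListener('keydown', function (event) {
  var isF5 = (event.key === 'F5' || event.keyCode === 116);
  var isCtrlR = event.ctrlKey && (event.key === 'r' || event.key === 'R' || event.keyCode === 82);
  if (isF5 || isCtrlR) {
    event.preventDefault();              // stop the browser's own refresh
    window.location = window.location;   // trigger a navigation instead
  }
});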
Also, I didn't test the case where the cached resource has actually changed on the server. If that turns out to be a problem, you can solve it by version-numbering your resources and generating HTML with URLs that point to the latest version of each resource (much like the cache-manifest approach of HTML5 offline applications).
EDIT: This doesn't solve the problem if the user clicks the browser's refresh button, however; the onbeforeunload event may help in that case.
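To illustrate the version-numbering idea from the previous paragraph (the path and query-string scheme below are just placeholders, not a prescribed convention):

// Inject a script whose URL carries a version marker; bump the marker whenever foo.js
// changes, so far-future caching can be used without ever needing revalidation.
var script = document.createElement('script');
script.src = '/js/foo.js?v=42';   // hypothetical version number
document.getElementsByTagName('head')[0].appendChild(script);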
I am using Opera and sometimes a page keeps on loading even though all content has already been presented. How do I find out which elements are still to be loaded, or what causes the ongoing loading process?
Even though all content seems to be 'presented', the page may still be loading images, JavaScript, CSS, or other resources referenced by it. This process performed by the browser isn't referred to as "AJAX", as you have tagged your question. AJAX is the asynchronous invocation of JavaScript to retrieve or submit data without requiring page refreshes.
As for examining which resources are causing your page to appear to be still "loading"...
I use Firebug's network tab to look at pending requests for resources in Firefox. It shows every resource your browser requests, how long it takes to retrieve, and the entire request & response headers and body.
Google Chrome has something similar built in; just hit F12 to bring up the Developer Tools.
I would assume Opera has something similar, although I am not sure of its name.
So I am concerned with
webdriver.navigate().back();
in particular. After reading How does the Back button in a web browser work?
it made me wonder: how can I make sure the back button behaves as expected?
Here are different ways of performing "back" navigation. How would you go about detecting which approach to use? Listen for whether a POST or GET is being made? Listen for AJAX requests and choose the appropriate plan?
a) navigate().back() (essentially hitting the back button in Firefox)
b) make a GET request to the previous page's URL
c) click "return to results" on the current page
With a), back() sometimes does not work correctly for AJAX sites with no breadcrumbs, or for POST search results, for example, where pressing back will prompt an alert message.
With b), my concern is that the URL may not match up,
e.g. dynamic URLs with unique hashed session-ID parameters:
http://www.aa.com/results.php?sessionid=29756293changeseverytime
So how do I create a contingency to make sure back navigation works as expected across a variety of web apps and sites (there is a lot of variability in how the back button behaves)?
Why don't you store the location of the page you want to verify, hit a link, use goBack, and then verify the location by checking the variables (the one you stored and the location of the page you land back on)?
By the way, if your site uses AJAX, I suggest you use the pause function that waits for the AJAX library to fully load, or set the execution speed of your tests (maybe combine the two in rare cases).
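A rough sketch of that check, written here against the Node selenium-webdriver bindings (the question uses the Java API, so treat the link text and URL below as illustrative only):

const { Builder, By } = require('selenium-webdriver');

(async function verifyBackNavigation() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://www.aa.com/results.php');            // page you expect to return to
    const expectedUrl = await driver.getCurrentUrl();              // store the location before leaving

    await driver.findElement(By.linkText('Some result')).click();  // hypothetical link
    await driver.navigate().back();

    const actualUrl = await driver.getCurrentUrl();
    if (actualUrl !== expectedUrl) {
      throw new Error('Back landed on ' + actualUrl + ' instead of ' + expectedUrl);
    }
  } finally {
    await driver.quit();
  }
})();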
I am implementing a web application using ASP.NET and C#. One of the pages has a requirement that it must always be fetched from the server rather than from the local browser cache. I have been able to achieve this.
We have a back button in the application, which simply invokes the javascript:history.back() method. The problem is that when the back button is clicked to navigate to the page that must always be reloaded from the server, the browser displays a "Web page expired" message.
The intent here is to force the browser to reload the page rather than display the web page expired message.
Any help would be highly appreciated. Thanks a ton in advance.
You will probably need to change the implementation to make the browser load the URL explicitly:
window.location.href = 'http://....';
instead of invoking the back button, since the intention of the back button is to get the last page from the cache.
(If browsers did not act that way, they would re-send your form data multiple times when using the back button during a registration process or similar.)
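For example, the application's back button could be wired up along these lines (the element id and target URL are assumptions, not your actual page names):

document.getElementById('backButton').onclick = function () {
  // Navigate explicitly so the browser requests the page again,
  // instead of pulling the expired copy out of the history cache.
  window.location.href = '/PageThatMustAlwaysReload.aspx';
};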
You mean you want to control browser behaviour, which is not possible; I doubt you can solve it that way. You could set the expiration time to a small value (one minute, perhaps?) so that the page is still valid if the user navigates back quickly.
Greetings,
Here's the problem I'm having. I have a page which redirects directly to another page the first time it is visited. If the user clicks 'back', though, the page behaves differently and instead displays content (tracking session IDs to make sure this is the second time the page has been loaded). To do this, I tell the user's browser to disable caching for the relevant page.
This works well in IE7, but Firefox 3 won't let me click 'back' to a page that resulted in a redirect. I assume it does this to prevent the typical back-->redirect again loop that frustrates so many users. Any ideas for how I may override this behavior?
Alexey
EDIT: The page we redirect to is an external site over which we have no control. Server-side redirects won't work because they wouldn't create a history entry for the 'back' button in the browser.
To quote:
Some people in the thread are talking about server-side redirects and redirect headers (the same thing)... keep in mind that we need client-side redirection, which can be done in two ways:
a) A META header - Not recommended, and has some problems
b) Javascript, which can be done in at least three ways ("location", "location.href" and "location.replace()")
A server-side redirect won't and shouldn't activate the back button, and can't display the typical "You'll be redirected now" page... so it's no good (it's what we're doing at the moment, actually, where you're immediately redirected to the "lucky" page).
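To make option (b) from the quote concrete, these are the three script-level redirects; only the last one keeps the redirecting page out of the session history, which is exactly what the back-button behaviour hinges on (example.com stands in for the real destination):

// 1. Assigning to location (leaves a history entry for the redirecting page):
// window.location = 'http://example.com/next';

// 2. Assigning to location.href (equivalent to the above):
// window.location.href = 'http://example.com/next';

// 3. location.replace() replaces the current history entry,
//    so Back skips the redirecting page entirely:
window.location.replace('http://example.com/next');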
I think the Mozilla team takes a step in the right direction by breaking this particularly annoying pattern. Finding a way around it somewhat defeats the purpose, doesn't it?
Instead of redirecting on first encounter, you could simply make your page render differently when a user hits it the first time. Should be easy enough on the server side, since you already have the code that is able to make that distinction.
You can get around this by creating an iframe and saving the state of the page in a form field in the iframe before doing the redirect. All browsers save the form fields of an iframe.
This page has a really good description of how to get it working. This is the same technique Google Maps uses when you click on map search results.
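A rough sketch of that technique, with all ids, file names, and the destination URL made up for illustration:

// Assumes the page contains a hidden same-origin iframe, e.g.
//   <iframe id="stateFrame" src="state.html" style="display:none"></iframe>
// where state.html holds nothing but <form><textarea id="pageState"></textarea></form>.
window.onload = function () {
  var frame = document.getElementById('stateFrame');
  var field = frame.contentWindow.document.getElementById('pageState');

  if (field.value === 'seen') {
    // Back navigation: the browser restored the iframe's form field, so show the content.
    showContent();                                            // hypothetical rendering function
  } else {
    // First visit: record the state, then redirect.
    field.value = 'seen';
    window.location.href = 'http://external.example.com/';    // placeholder destination
  }
};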
I'm strongly in favour of the Firefox behaviour.
The most basic way to redirect is to let the server send HTTP status code 302 plus a Location header back to the client. This way the client (typically a browser) will not place the request URI in its history, but will simply issue the same request to the URI given in the Location header.
Now it seems that Firefox has started to apply this behaviour also to server responses that attempt redirection in other ways, e.g. via JavaScript in the onload event.
If you want the browser not to display a page, I think the best solution is if the server does not send the page in the first place.
It's possibly an aid to eliminating repeated actions.
A common pattern is:
page 1 -> [Action] -> page 2 -> redirect to page 2 without the action parameters.
Now if you were permitted to click the back button in this situation and revisit the page without the redirect, the action would be blindly re-performed.
Instead, Firefox presumes the server sent a redirect header for a good reason.
Note, though, that you can still have content delivered after the redirect header: sending a redirect header (at least in PHP) doesn't terminate execution, so in theory, if you were to ignore the redirect request, you would see the page doing weird things.
(I work around this because all our redirects go through the same function call, where I explicitly terminate execution directly after sending the redirect, since people writing code assume that is how it behaves.)
In Firefox's URL bar, type about:config and change the following setting:
browser.sessionstore.postdata
Change it from 0 to 1.