How can I use Fiddler to confirm that HTTP caching is working? Is there another better way?
You can confirm caching by having a page fetch a resource and noting that no request for the resource appears in Fiddler. I can't think of a better way to do it. Works for me.
Right-click the URL in Fiddler and click Properties; you can check the cache info in that popup under "WININET CACHE INFO".
Browse the site with Fiddler as the proxy. In each response's details there is a "Caching" tab. This shows useful info about the response headers - e.g. what the different Cache-Control and Expires values mean.
I think the best way is to use the method demonstrated in most caching tutorials: have a label on the page that displays the current server time. If the value is cached, you will not see it update on subsequent page refreshes until the cache is regenerated.
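A minimal sketch of that technique, assuming a Node/Express server (not from the original answer - any stack with output caching works the same way):

    // Cache the rendered body server-side for 60 seconds. Refreshing the
    // page should show a frozen timestamp until the cache entry expires.
    const express = require("express");
    const app = express();

    let cachedBody = null;
    let cachedAt = 0;

    app.get("/time", (req, res) => {
      if (!cachedBody || Date.now() - cachedAt > 60 * 1000) {
        cachedBody = "Server time: " + new Date().toISOString();
        cachedAt = Date.now();
      }
      res.send(cachedBody);
    });

    app.listen(3000);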
If your requirement is more complex (you need to use Fiddler), Anthony's suggestion is the one I have used successfully in the past.
Fiddler will definitely help with this. You'll either see the server respond with an HTTP 304 response (Not Modified - which tells the client that the cached item is still valid), or, for content that has its expiry set correctly, you won't see a request at all.
In fact, you'll find Firefox plus Firebug will do this for you too.
My goal is to cache images in the browser's cache.
Current setup
A Cache-Control: public, max-age=10368000 header is attached to the requested image so that the image gets cached.
Furthermore, an ETag: f5f9f1fb39790a06c5b93735ac6e2397 header is attached as well, to check whether the image has changed once max-age is reached.
When the same image is requested again, the If-None-Match header's value is checked to see if the image has changed. If it has not, 304 is returned.
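For illustration, the intended exchange looks roughly like this (header values taken from the setup above; the image path is made up):

    First request:

    GET /images/photo.jpg HTTP/1.1

    HTTP/1.1 200 OK
    Cache-Control: public, max-age=10368000
    ETag: f5f9f1fb39790a06c5b93735ac6e2397

    Revalidation once max-age has passed:

    GET /images/photo.jpg HTTP/1.1
    If-None-Match: f5f9f1fb39790a06c5b93735ac6e2397

    HTTP/1.1 304 Not Modified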
I am using Fiddler to inspect HTTP traffic (note: the behaviour described below was already happening before Fiddler, so Fiddler is not the problem).
Chrome version - 77.0.3865.90 (Official Build) (64-bit)
Expected behaviour
I expect the image to be cached for 10368000 seconds. Until 10368000 seconds have passed, the image is served from the browser's cache on each request. Once 10368000 seconds have passed, a request is made to the server, which checks the ETag. If the image has not changed on the server, the server returns 304 to the client and the caching period is extended for another 10368000 seconds.
Behaviour with Google Chrome
Image gets requested for the first time. Below are the request headers
Image is returned successfully with 200 status. Below are the response headers
Fiddler shows that the image was requested and served.
Furthermore, the fact that the image is cached can be seen in ~/Library/Caches/Google/Chrome, as it appears there.
I click on the address bar in the browser and press Enter to request the image again. Below are the request headers
Image is returned successfully with 304 status. Below are the response headers
Fiddler shows that the image was served from the cache.
Why did Chrome make an additional request for the image and served it from the cache only after it received 304 response?
I see that when requesting the image a second time, a Cache-Control: max-age=0 request header is set. As I understand it, this means that Chrome wants to re-validate the cached image before serving it. But why?
Some people online have said that the ETag header is the reason Chrome makes sure images are valid. But even if I do not include the ETag header and just have Cache-Control: public, max-age=10368000 in the first response, Cache-Control: max-age=0 still appears in the second request's headers.
I have also tried excluding public, making it private, etc. I also added an Expires and Last-Modified pair, with and without max-age=10368000, and I get the same behaviour.
Furthermore, in dev tools I have NOT checked Disable cache. So the cache is enabled for sure, which also makes sense because the image was served from it after the 304 was returned.
This exact behaviour also happens when, instead of clicking the address bar and pressing Enter, I press the Refresh arrow (CMD + R). If I do a hard refresh (CMD + SHIFT + R), the image is requested as if for the first time, which makes sense.
Behaviour with Firefox
Firefox works exactly as expected.
Image gets requested for the first time. Below are the request headers
Image is returned successfully with 200 status. Below are the response headers
Fiddler shows that the image was requested and served.
Furthermore, if I hover over the response status, it says OK.
I click on the link bar in browser and press Enter to request the image again. Below are the request headers
Image is returned successfully with 200 status. Below are the response headers
BUT Firefox shows that it was cached.
FURTHERMORE, Fiddler detected no activity on the second request, so the image was served from Firefox's cache - unlike Chrome, which made another request to the server just to receive a 304 from it.
Thank you very much for your time and help. I highly appreciate it and am happy to provide any other information. Really looking forward to what you think about this.
Have a nice day :)
Why did Chrome make an additional request for the image and served it from the cache only after it received 304 response?
Because you told it to - whether you realise it or not! When you browse forwards and backwards, the cached image will be used. When you click refresh or press F5 (Cmd + R on macOS) on a page, you are explicitly asking Chrome to double-check whether the page is still the correct one. Clicking in the URL bar and pressing Enter is the same thing: Chrome assumes that the only reason you'd do that on a URL you are already on is that you want it to check with the server.
I see that when requesting the image second time Cache-Control: max-age=0 request header is set. As I understand, it means that Chrome wants to re-validate if cached image is valid before serving it. But why?
Correct. See the answer to this question. As to why: because, as per the above, you gave Chrome a signal that you wanted to refresh.
There is also the Cache-Control immutable directive, which is designed to tell the browser never to re-fetch a resource that is in the cache - even on refresh.
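For example, the policy from the question with immutable added would look like this:

    Cache-Control: public, max-age=10368000, immutable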
Firefox works exactly as expected.
Interesting. I actually like the way Chrome does this, certainly for reload (F5/Cmd + R). I appreciate that it confused you (though hopefully it will be less confusing now that you understand it), but testing like this is not the norm, and most people don't click the URL bar and then hit Enter without changing the URL unless they want to recheck the current page with the server.
A cookie has been set with the SameSite=Strict attribute. When JavaScript tries to read the cookie before making an XHR request, the cookie seems to be unavailable, but the developer tools show that the cookie exists. This problem happens only in recent versions of Firefox. Not sure if I am missing anything; the domain and the path are set correctly on the cookie.
Apparently, it depends on how you get to the page that performs the XHR request. If you get there by clicking a link on another website (say, following a link in your webmail client), the Strict cookies will not be available, even in subsequent XHR requests! This behaviour seems to be different in Firefox than in other browsers.
Some more info you may find here: https://www.netsparker.com/blog/web-security/same-site-cookie-attribute-prevent-cross-site-request-forgery/
I still don't know why - I think it is Firefox-specific behaviour - but I fixed it by using location.replace("") to reload the page. You can also use "lax" or "none" instead of "strict" to fix it.
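A minimal sketch of that workaround (the cookie name session_id is hypothetical - adapt it to your app):

    // If the SameSite=Strict cookie is missing (e.g. we arrived here via a
    // cross-site link), re-enter the page with a same-site navigation so
    // the browser attaches the Strict cookie on subsequent requests.
    if (!document.cookie.split("; ").some(function (c) {
      return c.indexOf("session_id=") === 0;
    })) {
      location.replace(location.href);
    }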
As I understand it, this is how browser caching works. Assume a far-future expiry header has been set to, let's say, a year, and foo.js is set to be cached. Here are some scenarios:
First visit to the page: the server returns a 200 and foo.js is cached for a year.
Next visit: the browser checks the cache but has to check with the server whether foo.js has been modified. If not, the server returns a 304 - Not Modified.
The user is already on the page (and foo.js is in the cache) and clicks a link to go to another page: the browser looks at the cached version of foo.js and serves it without doing a round trip to the server, returning a 200 (Cached).
The user is already on the page (and foo.js is in the cache) and for some reason hits F5/Reload: the browser checks the cache but has to do a round trip to the server to check whether foo.js has been modified. If not, the server returns a 304.
As you can see, whenever a page is refreshed, the browser always makes a trip to the server to check whether the file has been modified. I know this is not much, and the server only returns header info, but the round-trip time is in some cases extremely important.
The question is: is there a way to avoid this, given that I'm already setting the expiry for the files? I just want the browser to always fetch from the cache until the expiry has passed, or until I replace the file with something else (by versioning it).
From what I understand, pressing F5/Ctrl-R is a browser-specific action, thus leaving the control with the browser.
What if the user clears the cache before clicking another action? Even if there were an HTTP specification to force use of the cache on F5, there would be no guarantee that you could achieve what you need.
Simply configure and code to cache wherever possible and leave the rest to the user.
It looks like when you navigate to a page (that is, entering an address in the URL bar or clicking a link), resources are fetched from the cache without a conditional request to the server. But when you refresh the page, the browser does send the conditional request, and so incurs the round trip.
This is clearer in the Network tab of IE's Developer Tools. If you look at the Initiator column, it says navigate in the first case and refresh for Ctrl+R or F5.
You can override the F5 and Ctrl+R behaviour by adding an event listener for them, setting window.location = window.location, and preventing the default behaviour with event.preventDefault() or something similar. This causes a page navigation instead of a refresh; a sketch follows.
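A rough sketch of that idea (untested across browsers, so treat it as a starting point rather than a drop-in fix):

    // Intercept F5 and Ctrl+R / Cmd+R and turn the refresh into a plain
    // navigation, so cached resources are reused without revalidation.
    document.addEventListener("keydown", function (event) {
      var isRefresh = event.key === "F5" ||
        ((event.ctrlKey || event.metaKey) && event.key.toLowerCase() === "r");
      if (isRefresh) {
        event.preventDefault();                  // stop the normal refresh
        window.location = window.location.href;  // navigate instead
      }
    });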
Also, I didn't test the case where the cached resource has actually changed on the server. If that turns out to be a problem, you can solve it by version-numbering the resources and generating HTML with URLs that point to the latest version of each resource (similar to the cache-manifest approach in HTML5 offline applications); see the sketch below.
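A hypothetical illustration of that versioning idea (the file name and version value are made up):

    // A build step bumps the version whenever foo.js changes; the new URL
    // bypasses the old cache entry, while unchanged files stay cached.
    var FOO_JS_VERSION = "3";
    var script = document.createElement("script");
    script.src = "/static/foo.js?v=" + FOO_JS_VERSION;
    document.head.appendChild(script);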
EDIT: The key-listener approach, however, doesn't solve the problem if the user clicks the browser's refresh button; the onbeforeunload event may help in that case.
I am experiencing random occurrences of caching of Ajax requests created through jQuery's get.
The jQuery gets are done in the most straightforward, conventional way (route + params + callback).
I am already using
$.ajaxSetup({cache:false});
But it doesn't seem to always work. I understand how ajaxSetup's no-cache option works, and I see the added random parameter being appended to my request URL.
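For reference, with cache:false jQuery appends a timestamp parameter named _ so that each request URL is unique. A request like this (the URL and parameters are made up):

    $.get("/api/items", { id: 3 }, function (data) {
      console.log(data);
    });

actually goes out as something like /api/items?id=3&_=1569859200000, which the browser cannot have cached.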
My current browser is IE 8.0
Does anyone know of another solution besides the ajaxSetup way...
The browser itself is simply not allowed/able to cache requests with distinct parameters, such as the one added by {cache:false}.
It sounds like the caching is happening somewhere else in your chain, possibly in your web server/app.
Use Firebug's Net tab to check exactly what is being requested by the browser and what the URLs are exactly, then take it from there.
It turns out my assumption about caching of Ajax requests was wrong.
The real issue was caching of the subsequent redirect-to-action requests that took place on the server (in response to the original Ajax call).
The solution ended up being the following attribute.
[OutputCache(Location = OutputCacheLocation.None)]
It can be applied at either the controller level or the action level.
If you add an image to your browser's DOM, IE6 will not check its cache to see if it has already downloaded the image but will instead re-retrieve it from the server. I have not found any combination of HTTP response headers (on the ensuing image request) that convinces IE6 it can cache the image: Cache-Control, Expires, Last-Modified.
Some suggest you can return a 304 for the subsequent image requests to tell IE6 "you already got it", but I want to avoid the whole round trip to the server in the first place.
Maybe this will work? (It relies on the same behaviour as hovering over links that have a CSS background image.) A sketch of the idea follows.
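If preloading is the idea, a minimal sketch might be (the image URL is made up, and this is only a guess at the technique the answer hints at):

    // Preload the image once so it is already in memory when it is later
    // added to the DOM, instead of being re-fetched from the server.
    var preloaded = new Image();
    preloaded.src = "/images/icon.png";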
A quick Google search mentions the "Expires" header, which you've already tried. Digging deeper, it mentions the ETag header:
http://mir.aculo.us/2005/08/28/internet-explorer-and-ajax-image-caching-woes
Hope this helps.