How to prevent JavaScript files from being cached in IIS - caching

I have a weird problem: JavaScript files are being cached in IIS.
What I have done so far:
Disabled caching on my website in IIS
Disabled the cache in Chrome dev tools
Added a timestamp to the URL on the page to prevent any caching
Double-checked that the files on disk are updated (they were)
But my scripts and CSS do not update to the latest version on disk until I do an IISReset.
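For reference, disabling client caching for a site in IIS is typically done with a web.config entry along these lines (an illustrative sketch, not necessarily the exact configuration used here):

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Tell clients not to cache static files served by this site -->
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</configuration>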

Your attempt to break the cache is short-circuited by the fact that you are getting the cached version of the content page, with the old timestamp, so you never request the JavaScript file with the new timestamp until something else forces the content page to reload.
You can check this by using the same cache-breaking scheme on the content page itself, in addition to the JavaScript file.
This is not a recommended solution, though, as you will bypass the cache altogether for the content page.
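As a sketch of the timestamp scheme under discussion (the file path and variable names here are hypothetical), the script can be injected with a fresh query string so the browser treats every request as a new URL:

// Hypothetical cache-busting loader: Date.now() makes each URL unique, so
// the browser cannot serve a stale cached copy of app.js. Note this only
// helps if the page containing this snippet is itself served fresh.
var script = document.createElement('script');
script.src = '/scripts/app.js?v=' + Date.now();
document.head.appendChild(script);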
Every answer I've seen for this problem has worked sometimes and failed other times. If I find a solution, I'll post it.

Related

Is Pushstate inferior to Hashbangs when it comes to caching?

There are several advantages to HTML5 Pushstate in comparison to hashbangs; in fact, Google is now encouraging the use of Pushstate. The only Pushstate disadvantage being publicly discussed is the fact that non-modern browsers do not support it. However, to me it seems that Pushstate is also disadvantageous when it comes to caching. I might be wrong, hence this question.
Is Pushstate inferior to Hashbangs when it comes to caching pages?
Here is a case where it seems that Pushstate is bad at caching.
Pushstate
Bob navigates to eg.com/page1, the full page is downloaded, rendered and cached.
Bob clicks a button, eg.com/json/page2 is downloaded and cached.
The browser processes the JSON and re-renders parts of Bob's page.
Pushstate changes the displayed browser address to eg.com/page2.
Bob closes the browser, then re-opens it and directly visits eg.com/page2. The full page is downloaded, rendered and cached.*
* Despite the fact that it is already theoretically available in the cache under the guise of eg.com/json/page2
Hashbangs
Alice navigates to eg.com/#!page1, eg.com/index.html is downloaded and cached.
eg.com/json/page1 is downloaded and cached.
The browser processes the JSON and renders Alice's page.
Alice clicks a button, eg.com/json/page2 is downloaded and cached, the displayed browser address is changed to eg.com/#!page2
The browser processes the JSON and re-renders parts of Alice's page.
Alice closes the browser, then re-opens it and directly visits eg.com/#!page2. NOTHING is downloaded and everything is loaded from the cache, unlike with Pushstate.
Summary
I have numerous similar cases in mind. The question is whether or not this is indeed valid; I may be missing something that is leading me to wrong conclusions. Is Pushstate inferior to Hashbangs when it comes to caching pages?
I think that pushstate is inferior, but if you are building a SPA correctly the differences should not be significant:
Assuming that you are using one of the latest frameworks, your index.html page should be relatively small, with a few <script> tags (produced by tools like webpack, SystemJS, etc.).
The JS files referenced by these tags do get cached normally, so the only difference between the two methods is fetching index.html for every pushstate URL, as opposed to fetching it once in hashbang mode; the sketch below shows the flow in question.
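A minimal sketch of the pushstate flow being compared, using the URLs from the question (the JSON shape and the #content container are assumptions):

// On a button click: fetch the JSON fragment, re-render part of the page,
// then update the address bar with pushState. No full page load happens on
// the click, but a later direct visit to /page2 still downloads the full page.
async function navigate(page) {
  const res = await fetch('/json/' + page);           // e.g. eg.com/json/page2
  const data = await res.json();                      // assumed shape: { html: '...' }
  document.querySelector('#content').innerHTML = data.html;
  history.pushState({ page: page }, '', '/' + page);  // address bar shows eg.com/page2
}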
I got the idea from the following question:
https://webmasters.stackexchange.com/questions/65694/is-this-way-of-using-pushstate-seo-friendly

Any firefox plugin that can cache entire page and load it when accessed using url

I am looking for a Firefox plugin that can store an entire page in the cache and load it whenever that address is typed, only fetching content from the server when I press something like a "reload cache" button.
My use case: I regularly use the documentation for Laravel, Bootstrap, jQuery, etc., but each time I am accessing it from the internet, even though these pages go unchanged for many days. I don't want to store the webpage as a file and open that; I would rather have this in the comfort of my web browser than by opening files.
I think Firefox already does this. Load a page in a tab, then go to the top-left Firefox menu > Developer > Work Offline.
Reload that page in the tab: it comes from the cache.
Whenever you go into Work Offline mode, pages load from the cache.

Can you force a browser to always fetch the cached files and not do a round trip for a 304?

As I understand it, this is how browser caching works. Assume a far-future header has been set to, let's say, a year, and foo.js is set to be cached. Here are some scenarios:
First visit to the page: the server returns 200 and foo.js is cached for a year.
Next visit: the browser checks its cache but has to ask the server whether foo.js has been modified. If not, the server returns a 304 Not Modified.
The user is already on the page (and foo.js is in the cache) and clicks a link to another page: the browser takes the cached version of foo.js and serves it without a round trip to the server, reporting a 200 (Cached).
The user is already on the page (and foo.js is in the cache) and for some reason hits F5/Reload: the browser checks its cache but has to do a round trip to the server to ask whether foo.js has been modified. If not, the server returns a 304.
As you can see, whenever a page is refreshed, the browser always makes a trip to the server to check whether the file has been modified. I know this is not a lot, and the server only returns header info, but round-trip time in some cases is extremely important.
The question is: is there a way to avoid this, given that I'm already setting the expiration for the files? I just want the browser to always fetch from the cache until the expiration has passed, or until I replace the file with something else (by versioning it).
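For reference, the far-future headers described above would look something like this (values illustrative). The immutable directive, where supported, additionally tells the browser not to revalidate even on reload, which is close to what is being asked for:

Cache-Control: max-age=31536000, immutable
Expires: Thu, 31 Dec 2026 23:59:59 GMT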
From what I understand, pressing F5/Ctrl-R is a browser-specific action, so control stays with the browser.
And what if the user clears the cache before clicking another action? Even if there were an HTTP mechanism to force the cache to be used on F5, there would be no guarantee that you'd achieve what you need.
Simply configure and code for caching wherever possible, and leave the rest to the user.
It looks like, when you navigate to a page (that is, by entering an address in the URL bar or clicking a link), resources are fetched from the cache without a revalidation request to the server. But when you refresh the page, the browser does send that conditional request, and hence incurs the RTT.
This is clearer in the Network tab of IE's Developer Tools: the Initiator column says navigate for the first case and refresh for Ctrl+R or F5.
You can override the F5 and Ctrl+R behavior by adding an event listener for them, doing a window.location = window.location, and preventing the default behavior with event.preventDefault() or something similar. This causes a page navigation instead of a refresh, as in the sketch below.
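A sketch of that idea (key handling simplified and untested across browsers):

// Intercept F5 and Ctrl+R, cancel the refresh, and navigate to the same URL
// instead, so cached resources are reused without revalidation round trips.
document.addEventListener('keydown', function (e) {
  var isRefresh = e.key === 'F5' || (e.ctrlKey && (e.key === 'r' || e.key === 'R'));
  if (isRefresh) {
    e.preventDefault();                      // stop the browser's refresh
    window.location = window.location.href;  // trigger a navigation instead
  }
});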
Also, I didn't test the case where the cached resource has actually changed on the server. If that turns out to be a problem, you can solve it by version-numbering the resources and generating HTML with URLs that point to the latest version of each resource (similar to the cache-manifest problem with HTML5 offline applications).
EDIT: This doesn't help if the user clicks the browser's refresh button, however; the onbeforeunload event may help in that case.

Firefox does not load page from offline cache manifest, but works fine on Chrome (+ cache manifest troubleshooting tips)

This is basically identical to this SO question, which has not been answered: Offline Web App not caching on Firefox but ok on Chrome
I'm experiencing the same problem, and I'll provide a bit more information because it might help someone trying to figure out the arcane mysteries and traps of implementing a cache manifest.
The problem: hit page reload/refresh and the page should reload from the offline cache, but it doesn't.
It works fine in Chrome 23: when I disconnect from the internet and refresh the page, it loads fine and the console shows Application Cache NoUpdate event.
However, Firefox 15.0.1 shows me my 404 page.
Troubleshooting (Firefox): Go to Firebug and click on DOM, then applicationCache. The status shows 0, which means uncached (the page has not been stored offline).
Troubleshooting (Firefox): Go to Firefox Options -> Advanced and look at Offline Web Content and User Data. It shows that my domain is using 1.4MB of data for offline use. (This is a good place to check for whether your page has been cached).
Troubleshooting (Firefox): Open a new tab and go to about:cache. Here you should see a heading for Offline cache device. Here you can see the Cache Directory, which is where your offline cache files are saved. If you click on List Cache Entries you'll see the files in your offline cache manifest.
There are 2 things that I find strange here: 1) when I click on any of the files in the list, it goes to a blank page that says "Cache entry information: The cache entry you selected is not available." However, the files do exist and are not blank in the Cache Directory. 2) Although all of the files from the cache manifest are listed, the page that I'm calling is not listed here (in Chrome DevTools it shows up in the manifest as Master: it is cached automatically even though it's not explicitly listed in the cache manifest file).
Here's what I see when I'm online, starting from a cold (empty) cache:
When I load the page, the console shows the checking, downloading, progress, and cached events, but the cache status is uncached. Basically, the cache files are downloaded, but they cannot be accessed.
Firebug's DOM applicationCache says: 0 items in offline cache (this contradicts what is shown in about:cache and Options -> Advanced). The status is 1, which means idle.
In Firebug's Net tab, the page itself shows a GET request with a 200 OK response. The Expires setting shows Wed Dec 31 1969, which I think means that the page will always be fetched. The other files show a 304 Not Modified response, which means they are being loaded from the browser cache, not the offline cache (the analogous response from Chrome for these files is 200 OK (from cache), meaning they are loaded from the offline cache, not the browser cache).
When I'm offline: With the "uncached" cache the GET request fails and it loads the offline fallback 404 page with a 200 OK (BFCache) response.
It seems the offline cache is downloaded because it physically exists on disk and progress events are shown in the console, but Firefox never fires the cached event, so some of the resources have not downloaded successfully. The files are all png, js, or php format, so no crazy file formats. Chrome downloads the exact same files into the cache without a problem. I've also tried mobile Safari and it successfully reloads the page from the offline cache.
Are there any known issues with Firefox not caching certain file types? I use the .html.php extension on some of my files. I also generate the manifest dynamically using a PHP file, so it only includes files that exist, and it hashes them to detect changes and update the manifest.
Next Steps: I will try a bare-bones manifest to see if I can get it to work, then add files one by one to see which file triggers the error. Perhaps Firefox doesn't like the fact that I'm dynamically generating the cache manifest instead of manually updating a static file?
I've learned a lot about the arcane intricacies of cache manifest, but I'm more of a hacker than a computer expert. Has anyone else experienced this quirkiness with Firefox?
The beginning of the page:
<!DOCTYPE html>
<html manifest="/directory/manifest.php">
and then the manifest.php is just
<?php
header('Content-Type: text/cache-manifest');
echo "CACHE MANIFEST\n";
// ...followed by one line per file...
etc. It uses RecursiveDirectoryIterator to get all the files in the directory (except the cache itself, which is included by default).
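For troubleshooting which event Firefox stops at, a small logging sketch can be dropped into the page (these are the standard applicationCache events; nothing here is specific to this site):

// Log every application cache event. On a healthy first load you should see
// checking -> downloading -> progress (per file) -> cached; if 'error' fires
// instead of 'cached', the manifest download failed partway through.
['checking', 'downloading', 'progress', 'cached',
 'noupdate', 'updateready', 'obsolete', 'error'].forEach(function (name) {
  window.applicationCache.addEventListener(name, function (e) {
    console.log('appCache:', name, e);
  }, false);
});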

Does HTML 5 offline functionality work for browsers?

I've been at this for a few days now and am becoming more and more frustrated. I'm getting inconsistent offline functionality across Chrome, FF, and Safari, which I've just started using.
I'm developing a sandbox app using ASP.NET MVC 3. Below is the structure of my application:
Controllers/CarController
Views/Car/Edit
Views/Car/EditOffline
Views/Car/Index
Out of the 3 views, Index is the only one that has the manifest attribute defined. Index is the view that is initially requested. Below are the contents of my manifest:
CACHE MANIFEST
FALLBACK:
Car/Edit Car/EditOffline
#Version 1
Upon the first request of Index, the browser creates 3 entries in the Application Cache. They are:
localhost/Sandbox/Car, type = master
localhost/Sandbox/Car/EditOffline, type = fallback
localhost/Sandbox/Offline., type = manifest
The way I've been simulating offline behavior for all 3 browsers is by explicitly stopping IIS. After Index has been requested, I shut down IIS and make a request to the Edit action. The result is that EditOffline gets served up. Now, when I request the Index view again, I get a 404 error, but why? I thought the browser would have served up the cached version of that page. When I re-request the Edit view (while still offline), I also get a 404 error, but why? The browser served the EditOffline view previously, so why do I get a 404 now? In FF, I've gotten it to work as expected a few times, but I made no code changes. I explicitly deleted the offline cache, restarted the server, re-requested the Index view, and it magically worked.
It looks like your initial request is for http://localhost/Sandbox/Car; is that the URL you then get a 404 on? The manifest works by URL, but it knows nothing about default pages or any other server configuration. So http://localhost/Sandbox/Car is a different page from http://localhost/Sandbox/Car/Index as far as the application cache is concerned. The view involved is largely irrelevant to the caching, other than that you've included a reference to the manifest file in it.
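If both URLs need to work offline, they can be listed explicitly in a CACHE section; a sketch, assuming the /Sandbox application root from the question:

CACHE MANIFEST
#Version 2

CACHE:
/Sandbox/Car
/Sandbox/Car/Index

FALLBACK:
/Sandbox/Car/Edit /Sandbox/Car/EditOffline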
