jQuery .get caching working too well?

I'm using the jQuery .get() function to load in a template file and then display the loaded HTML into a part of the page by targeting a particular DOM element. It works well but I've recently realised for reasons that confound me that it is caching my template files and masking changes I have made.
Don't get me wrong ... I like caching as much as the next guy. I want it to cache when there's no timestamp difference between the client's cache and the file on the server. However, that's not what is happening. To make it even odder, using the same function to load the templates, some template files pick up the updates and others do not (preferring the cached version over the recent changes).
Below is the loading function I use.
function LoadTemplateFile(DOMlocation, templateFile, addEventsFlag) {
    $.get(templateFile, function (data) {
        $(DOMlocation).html(data);
    });
}
Any help would be greatly appreciated.
NEW DETAILS:
I've been doing some debugging and can now see that the "data" variable that comes back to the success function does have the newer information, but for reasons that are not yet clear to me, what's inserted into the DOM is an old version. How in the heck this could happen has now become my question.

You can turn caching off for the jQuery.get() call. See the jQuery.ajax() doc for details on the cache: false option. The details of how caching works are browser-specific, so I don't think you can control how it works, just whether it's on or off for any given call.
FYI, when you turn caching off, jQuery bypasses the browser caching by appending a unique timestamp parameter to the end of the URL which makes it not match any previous URL, thus there is no cache hit.
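For reference, here is a minimal sketch of the asker's loader rewritten with $.ajax and cache: false (the function and parameter names are taken from the question; the dataType is an assumption):

function LoadTemplateFile(DOMlocation, templateFile, addEventsFlag) {
    $.ajax({
        url: templateFile,
        cache: false,      // jQuery appends a unique timestamp parameter to bypass the cache
        dataType: 'html',  // assumption: the templates are HTML fragments
        success: function (data) {
            $(DOMlocation).html(data);
        }
    });
}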
You can probably also control cache lifetime and several other caching parameters on your server which will set various HTTP headers that instruct the browser what types of caching to allow. When developing your app, you probably want caching off entirely. For deployment, I would suggest that you want it on and then when you rev your app, the safest way to deal with caching is to change your URLs slightly so the new versions of your URLs are different. That way, you always get max caching, but also users get new versions immediately.

$.get() will always cache by default, especially on IE. You will need to manually append a query string, or use the ajaxSetup method, to bust the cache.
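As a rough sketch of the manual query-string approach (the helper name is illustrative, not from the answer):

// Append a timestamp so each request URL is unique and never hits the cache.
function getFresh(url, callback) {
    var separator = url.indexOf('?') === -1 ? '?' : '&';
    $.get(url + separator + '_ts=' + new Date().getTime(), callback);
}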

As an alternative, I found in the jQuery docs that you can just override the default caching. For me, that meant I didn't have to convert all of my $.get calls over to $.ajax. Note that this also overrides the default behavior for all other types of calls, like $.getScript, whose default is cache: false (the opposite of $.get).
https://api.jquery.com/jquery.getscript/
$.ajaxSetup({
    cache: false
});

Related

How to make visitors of my page automatically see the updated version (prevent caching) [duplicate]

Is there a way I can put some code on my page so that when someone visits the site, it clears the browser cache, so they can view the changes?
Languages used: ASP.NET, VB.NET, and of course HTML, CSS, and jQuery.
If this is about .css and .js changes, then one way is "cache busting" by appending something like "_versionNo" to the file name for each release. For example:
script_1.0.css // This is the URL for release 1.0
script_1.1.css // This is the URL for release 1.1
script_1.2.css // etc.
or after the file name:
script.css?v=1.0 // This is the URL for release 1.0
script.css?v=1.1 // This is the URL for release 1.1
script.css?v=1.2 // etc.
You can check this link to see how it could work.
Look into the cache-control and expires META tags.
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<META HTTP-EQUIV="EXPIRES" CONTENT="Mon, 22 Jul 2002 11:12:01 GMT">
Another common practice is to append a constantly-changing string to the end of the requested files. For instance:
<script type="text/javascript" src="main.js?v=12392823"></script>
Update 2012
This is an old question but I think it needs a more up to date answer because now there is a way to have more control of website caching.
In Offline Web Applications (which is really any HTML5 website) applicationCache.swapCache() can be used to update the cached version of your website without the need for manually reloading the page.
This is a code example from the Beginner's Guide to Using the Application Cache on HTML5 Rocks explaining how to update users to the newest version of your site:
// Check if a new cache is available on page load.
window.addEventListener('load', function(e) {
    window.applicationCache.addEventListener('updateready', function(e) {
        if (window.applicationCache.status == window.applicationCache.UPDATEREADY) {
            // Browser downloaded a new app cache.
            // Swap it in and reload the page to get the new hotness.
            window.applicationCache.swapCache();
            if (confirm('A new version of this site is available. Load it?')) {
                window.location.reload();
            }
        } else {
            // Manifest didn't change. Nothing new on the server.
        }
    }, false);
}, false);
See also Using the application cache on Mozilla Developer Network for more info.
Update 2016
Things change quickly on the Web.
This question was asked in 2009 and in 2012 I posted an update about a new way to handle the problem described in the question. Another 4 years passed and now it seems that it is already deprecated. Thanks to cgaldiolo for pointing it out in the comments.
Currently, as of July 2016, the HTML Standard, Section 7.9, Offline Web applications includes a deprecation warning:
This feature is in the process of being removed from the Web platform.
(This is a long process that takes many years.) Using any of the
offline Web application features at this time is highly discouraged.
Use service workers instead.
So does Using the application cache on Mozilla Developer Network that I referenced in 2012:
Deprecated This feature has been removed from the Web standards.
Though some browsers may still support it, it is in the process of
being dropped. Do not use it in old or new projects. Pages or Web apps
using it may break at any time.
See also Bug 1204581 - Add a deprecation notice for AppCache if service worker fetch interception is enabled.
Not as such. One method is to send the appropriate headers when delivering content to force the browser to reload:
Making sure a web page is not cached, across all browsers.
If you search for "cache header" or something similar here on SO, you'll find ASP.NET-specific examples.
Another way, less clean but sometimes the only option if you can't control the headers on the server side, is adding a random GET parameter to the resource that is being called:
myimage.gif?random=1923849839
I had a similar problem and this is how I solved it:
In the index.html file I've added a manifest:
<html manifest="cache.manifest">
In the <head> section I included a script that updates the cache:
<script type="text/javascript" src="update_cache.js"></script>
In the <body> section I've inserted an onload function:
<body onload="checkForUpdate()">
In cache.manifest I've put all the files I want to cache. What matters is that, in my case (Apache), it works just by updating the "version" comment each time. It is also an option to name files with "?ver=001" or something at the end of the name, but it's not needed. Changing just # version 1.01 triggers the cache update event.
CACHE MANIFEST
# version 1.01
style.css
imgs/logo.png
#all other files
It's important to include points 1, 2 and 3 only in index.html. Otherwise
GET http://foo.bar/resource.ext net::ERR_FAILED
occurs, because every "child" file tries to cache the page while the page is already cached.
In the update_cache.js file I've put this code:
function checkForUpdate()
{
    if (window.applicationCache != undefined && window.applicationCache != null)
    {
        window.applicationCache.addEventListener('updateready', updateApplication);
    }
}
function updateApplication(event)
{
    // Status 4 is UPDATEREADY: a new cache has been downloaded and is ready to swap in.
    if (window.applicationCache.status != 4) return;
    window.applicationCache.removeEventListener('updateready', updateApplication);
    window.applicationCache.swapCache();
    window.location.reload();
}
Now you just change your files, and in the manifest you have to update the version comment. Visiting the index.html page will then update the cache.
The parts of this solution aren't mine; I found them on the internet and put them together so that it works.
For static resources, the right kind of caching is to use query parameters with the value of each deployment or file version. This has the effect of clearing the cache after each deployment.
/Content/css/Site.css?version={FileVersionNumber}
Here is an ASP.NET MVC example.
<link href="@Url.Content("~/Content/Css/Reset.css")?version=@this.GetType().Assembly.GetName().Version" rel="stylesheet" type="text/css" />
Don't forget to update assembly version.
I had a case where I would take photos of clients online and would need to update the div if a photo changed. The browser was still showing the old photo, so I used the hack of appending a random GET variable, which would be unique every time. Here it is in case it helps anybody:
<img src="/photos/userid_73.jpg?random=<?php echo rand() ?>" ...
EDIT
As pointed out by others, the following is a much more efficient solution, since it reloads images only when they have changed, identifying the change by the file's modification time:
<img src="/photos/userid_73.jpg?modified=<?= filemtime("/photos/userid_73.jpg") ?>"
A lot of answers are missing the point - most developers are well aware that turning off the cache is inefficient. However, there are many common circumstances where efficiency is unimportant and default cache behavior is badly broken.
These include nested, iterative script testing (the big one!) and workarounds for broken third-party software. None of the solutions given here are adequate to address such common scenarios. Most web browsers cache far too aggressively and provide no sensible means to avoid these problems.
Updating the URL to the following works for me:
/custom.js?id=1
By adding a unique number after ?id= and incrementing it for new changes, users do not have to press Ctrl + F5 to refresh the cache. Alternatively, you can append a hash, or a string version of the current time or epoch, after ?id=
Something like ?id=1520606295
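A small sketch of generating such a URL with the current epoch time (the helper name is illustrative, not from the answer):

// Returns e.g. '/custom.js?id=1520606295'
function versionedUrl(path) {
    return path + '?id=' + Math.floor(Date.now() / 1000);
}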
<meta http-equiv="pragma" content="no-cache" />
Also see https://stackoverflow.com/questions/126772/how-to-force-a-web-browser-not-to-cache-images
Here is the MSDN page on setting caching in ASP.NET.
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.SetValidUntilExpires(False)
Response.Cache.VaryByParams("Category") = True
If Response.Cache.VaryByParams("Category") Then
    '...
End If
Not sure if this will really help you, but that's how caching should work in any browser. When the browser requests a file, it should always send a request to the server unless there is an "offline" mode. The server will read some parameters like the modified date or ETags.
The server will return a 304 Not Modified response, and the browser will have to use its cache. If the ETag doesn't validate on the server side, or the modified date the browser sent is older than the file's current modified date, the server should return the new content with the new modified date or ETags, or both.
If there is no caching data sent to the browser, I guess the behavior is undetermined; the browser may or may not cache files that don't say how they should be cached. If you set caching parameters in the response, it will cache your files correctly, and the server may then choose to return 304 Not Modified, or the new content.
This is how it should be done. Using random params or version numbers in URLs is more like a hack than anything else.
http://www.checkupdown.com/status/E304.html
http://en.wikipedia.org/wiki/HTTP_ETag
http://www.xpertdeveloper.com/2011/03/last-modified-header-vs-expire-header-vs-etag/
After reading, I saw that there is also an expiry date. If you have a problem, it might be that you have an expiry date set up. In other words, when the browser caches your file, since it has an expiry date, it shouldn't have to request it again before that date. In other words, it will never ask the server for the file and will never receive a 304 Not Modified; it will simply use the cache until the expiry date is reached or the cache is cleared.
So that is my guess: you have some sort of expiry date, and you should use Last-Modified, ETags, or a mix of them all, and make sure that there is no expiry date.
If people tend to refresh a lot and the file doesn't change often, then it might be wise to set a big expiry date.
My 2 cents!
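For illustration only, here is a minimal Node.js sketch (not from the answer; the file name and port are hypothetical) of the ETag revalidation flow described above, where the server answers 304 Not Modified when the client's If-None-Match matches:

const http = require('http');
const crypto = require('crypto');
const fs = require('fs');

http.createServer(function (req, res) {
    const body = fs.readFileSync('./template.html');   // hypothetical file being served
    const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

    if (req.headers['if-none-match'] === etag) {
        // The cached copy is still valid: empty body, browser reuses its cache.
        res.writeHead(304, { 'ETag': etag });
        res.end();
    } else {
        // New or changed content: send it along with the current ETag.
        res.writeHead(200, { 'ETag': etag, 'Cache-Control': 'private, max-age=0' });
        res.end(body);
    }
}).listen(8080);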
I implemented this simple solution that works for me (not yet on production environment):
function verificarNovaVersio() {
    var sVersio = localStorage['gcf_versio' + location.pathname] || 'v00.0.0000';
    $.ajax({
        url: "./versio.txt",
        dataType: 'text',
        cache: false,
        contentType: false,
        processData: false,
        type: 'post'
    }).done(function (sVersioFitxer) {
        console.log('App version: ' + sVersioFitxer + ', cached version: ' + sVersio);
        if (sVersio < (sVersioFitxer || 'v00.0.0000')) {
            localStorage['gcf_versio' + location.pathname] = sVersioFitxer;
            location.reload(true);
        }
    });
}
I have a little file located where the HTML files are:
"versio.txt":
v00.5.0014
This function is called on all of my pages, so on loading it checks whether the localStorage version value is lower than the current version and does a
location.reload(true);
...to force a reload from the server instead of from the cache.
(obviously, instead of localStorage you can use cookies or other persistent client storage)
I opted for this solution for its simplicity, because maintaining only a single file, "versio.txt", will force the full site to reload.
The query-string method is hard to implement and is also cached (if you change from v1.1 back to a previous version, it will load from cache, which means the cache is not flushed and all previous versions remain in the cache).
I'm a bit of a newbie and I'd appreciate your professional check & review to ensure my method is a good approach.
Hope it helps.
In addition to setting Cache-control: no-cache, you should also set the Expires header to -1 if you would like the local copy to be refreshed each time (some versions of IE seem to require this).
See HTTP Cache - check with the server, always sending If-Modified-Since
There is one trick that can be used: append a parameter/string to the file name in the script tag and change it when your file changes.
<script src="myfile.js?version=1.0.0"></script>
The browser interprets the whole string as the file path, even though what comes after the "?" are parameters. So what happens now is that the next time you update your file, you just change the number in the script tag on your website (for example <script src="myfile.js?version=1.0.1"></script>) and each user's browser will see that the file has changed and grab a new copy.
Force browsers to clear the cache or reload the correct data? I have tried most of the solutions described on Stack Overflow. Some work, but after a little while the browser eventually caches and displays the previously loaded script or file. Is there another way that would clear the cache (CSS, JS, etc.) and actually work on all browsers?
I found so far that specific resources can be reloaded individually if you change the date and time of your files on the server. "Clearing the cache" is not as easy as it should be. Instead of clearing the cache in my browsers, I realized that "touching" the cached files on the server actually changes the date and time of the source files (tested on Edge, Chrome and Firefox), and most browsers will then automatically download the most current, fresh copy of what's on your server (code, graphics, any multimedia too). I suggest you just copy the most current scripts to the server and "do the touch thing" before your program runs, so it changes the date of all your problem files to the most current date and time; the browser then downloads a fresh copy:
<?php
touch('/www/sample/file1.css');
touch('/www/sample/file2.js');
?>
then ... the rest of your program...
It took me some time to resolve this issue (many browsers act differently to different commands, but they all check the time of files and compare it to the copy downloaded in your browser; if the date and time differ, they do the refresh). If you can't go the supposed right way, there is always another usable and better solution. Best regards and happy camping. By the way, touch() or alternatives work in many programming languages, including JavaScript, Bash, sh and PHP, and you can include or call them from HTML.
For webpack users:
I added the build time along with the chunkhash in my webpack config. This solved my problem of invalidating the cache on each deployment. We also need to take care that index.html / asset.manifest is not cached in either your CDN or the browser. The chunk name config in webpack will look like this:
filename: `[chunkhash]-${Date.now()}.js`
or, if you are using contenthash, then
filename: `[contenthash]-${Date.now()}.js`
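For context, a minimal sketch of how that might sit in a webpack config's output section (the entry point and paths are illustrative):

// webpack.config.js
const path = require('path');

module.exports = {
    entry: './src/index.js',
    output: {
        path: path.resolve(__dirname, 'dist'),
        // [contenthash] changes when the file content changes; Date.now() changes on every build.
        filename: `[contenthash]-${Date.now()}.js`
    }
};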
This is the simple solution I used in one of my applications using PHP.
All JS and CSS files are placed in a folder with a version name. Example: "1.0.01"
root\1.0.01\JS
root\1.0.01\CSS
Created a helper and defined the version number there:
<?php
function system_version()
{
    return '1.0.07';
}
And linked the JS and CSS files like below:
<script src="<?= base_url(); ?>/<?= system_version();?>/js/generators.js" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" href="<?= base_url(); ?>/<?= system_version(); ?>/css/view-checklist.css" />
Whenever I make changes to any JS or CSS file, I change the system version in the helper, rename the folder, and deploy it.
I had the same problem. All I did was change the names of the files that are linked from my index.html file and then update those names inside index.html. Not the best practice, but if it works it works. The browser sees them as new files, so they get re-downloaded onto the user's device.
example:
I want to update a CSS file named styles.css, so I rename it to styless.css.
Then I go into index.html, update the <link> tag, and change the href from styles.css to styless.css.
In case anyone is interested, I've found my solution to get browsers to refresh .css and .js in the context of .NET MVC (.NET Framework 4.8) and the use of bundles.
I wanted to make browsers refresh cached files only after a new assembly is deployed.
Building on Paulius Zaliaduonis' response, my solution is as follows:
Store your application base URL in the web.config app settings (the HttpContext is not yet available at runtime during RegisterBundles...), then make this parameter change according to the configuration (debug, staging, release...) via the XML transform.
In BundleConfig.RegisterBundles, get the assembly version by means of reflection, and...
...change the default tag format of both styles and scripts so that the bundling system generates link and script tags appending a query string parameter on them.
Here is the code
public static void RegisterBundles(BundleCollection bundles)
{
    string baseUrl = System.Configuration.ConfigurationManager.AppSettings["by.app.base.url"].ToString();
    string assemblyVersion = Assembly.GetExecutingAssembly().GetName().Version.ToString();
    Styles.DefaultTagFormat = $"<link href='{baseUrl}{{0}}?v={assemblyVersion}' rel='stylesheet'/>";
    Scripts.DefaultTagFormat = $"<script src='{baseUrl}{{0}}?v={assemblyVersion}'></script>";
}
You'll get tags like
<script src="https://example.org/myscriptfilepath/script.js?v={myassemblyversion}"></script>
You just need to remember to build a new version before deploying.
Ciao
2023 onward
At the time of writing, many web browsers support the Clear-Site-Data HTTP header [MDN reference]. To instruct the client web browser to clear the cache for the website domain and subdomains, set the following header in the HTTP response from the server:
Clear-Site-Data: "cache"
Alternatively, the following header may be better supported across browsers, but it clears other website data, such as localStorage and cookies, in addition to the cache.
Clear-Site-Data: "*"
However, note that intermediate caches (e.g. a CDN) may not understand or respect this header, so they may still respond with previously cached data.
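As a rough illustration (an assumed Node.js setup, not from the answer), the header can be sent like this:

const http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {
        'Content-Type': 'text/html',
        'Clear-Site-Data': '"cache"'   // ask supporting browsers to drop the cache for this origin
    });
    res.end('<p>Cache cleared for this origin in supporting browsers.</p>');
}).listen(8080);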
Do you want to clear the cache, or just make sure your current (changed?) page is not cached?
If the latter, it should be as simple as
<META HTTP-EQUIV="Pragma" CONTENT="no-cache">

prevent ajax request caching in IE during post method

How to prevent IE from caching the request sent to the server?
I tried setting "Cache-Control: no-cache" in the HTTPS response object, but IE is still caching my request data.
Please find my project details below:
In my application I am sending a login request to the server. After I log in, if I take a memory dump using the WinHex tool, I am able to see the password details in memory.
I am clearing the dialog reference as well, but the request data is still getting cached.
Please suggest a workaround for this.
You could try adding a parameter with a random value to your URL; this will prevent the URL from always being the same.
Example:
Normal URL:
www.test.com/test.php
Fake different URL:
www.test.com/test.php?_dc=12353somerandomval
Make sure the _dc parameter always has a different value; you can (for example) use the JavaScript Date object's getTime() for this (it returns the current time in milliseconds, which will virtually always be different):
params: {
    _dc: new Date().getTime()
}
In a project I did a while back, I had the exact same issues. I searched around and saw a few things that recommended adding a timestamp to the request; that does work too, but this was the most elegant way that worked for me.
$(document).ready(function () {
    $.ajaxSetup({
        cache: false
    });
});

MVC3 OutputCache not working on Server and Client as expected

I'm having trouble using the OutputCache attribute in Microsoft's MVC3 framework.
Please imagine the following controller action, which can be used as part of an AJAX call to get a list of products based on a particular manufacturerId:
public JsonResult GetProducts(long manufacturerId)
{
    return Json(this.CreateProductList(manufacturerId), JsonRequestBehavior.AllowGet);
}
I want this result to be cached on the server to avoid making excessive database queries. I can achieve this by configuring the attribute thus:
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Server, VaryByParam = "manufacturerId")]
This works as I expected - the browser makes an initial request which causes the server to create and cache the result; subsequent requests from the same or a different browser get the cached version.
But... I also want the browser to cache these results locally; if I filter first on manufacturer X, then Y then go back to X, I don't want it to make another request for X's products - I want it to just use its cached version.
I can make this happen, by changing the OutputCache to this:
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Client)]
Here's the question: how do I combine these so that I can have both sets of behaviour? I tried setting the Location to ServerAndClient but this just made it behave the same as when Location was Server.
I'm sure that the problem has something to do with the "Vary: *" response header I get with ServerAndClient but I don't know how to get rid of it.
I welcome comments about the rationality of my intentions - the fact that I'm not getting the results I expect makes me think I might have some fundamental misunderstanding somewhere...
Many thanks.
PS: This is on a local dev environment, IIS Express from VS2010.
You can use OutputCacheLocation.Any which specifies
The output cache can be located on the browser client (where the
request originated), on a proxy server (or any other server)
participating in the request, or on the server where the request was
processed. This value corresponds to the HttpCacheability.Public
enumeration value.
You may also want to set Cache-Control: public in the HTTP header for these requests.
Edit
It seems that, depending on the .NET version of your web server, you may need to include Response.Cache.SetOmitVaryStar(true); within your controller action to remove the Vary: * header, as you suggest.
Details of the reason why are in the .NET 4 breaking changes release notes.

Should I be using POST or GET when retrieving JSON data into jqGrid in my ASP.NET MVC application?

I am using jqGrid in my ASP.NET MVC application. Currently I have mtype: 'POST' like this:
jQuery("#myGrid").jqGrid({
mtype: 'POST',
toppager: true,
footerrow: haveFooter,
userDataOnFooter: haveFooter,
But I was reading this article, and I see this paragraph:
Browsers can cache images, JavaScript, CSS files on a user's hard
drive, and it can also cache XML HTTP calls if the call is a HTTP GET.
The cache is based on the URL. If it's the same URL, and it's cached
on the computer, then the response is loaded from the cache, not from
the server when it is requested again. Basically, the browser can
cache any HTTP GET call and return cached data based on the URL. If
you make an XML HTTP call as HTTP GET and the server returns some
special header which informs the browser to cache the response, on
future calls, the response will be immediately returned from the cache
and thus saves the delay of network roundtrip and download time.
Given this is the case, should I switch my jqGrid mtype to use "GET" instead of "POST"? (It says XML and doesn't mention JSON.) If the answer is yes, then what would be a situation where I would ever want to use POST for the jqGrid mtype, as it seems to do the same thing without this caching benefit?
The problem which you describe could exist in Internet Explorer, but it will not exist in jqGrid if you use the default options.
If you look at the full URL which will be used you will see parameters like
nd=1339350870256
It has the same meaning as cache: false of jQuery.ajax. jqGrid adds the current timestamp to the URL to make it unique.
I personally like to use HTTP GET in jqGrid, but I don't like the usage of the nd parameter. I described the reason in an old answer. It would be better to use the prmNames: {nd:null} option of jqGrid, which removes the usage of the nd parameter in the URL. Instead, one can control the caching on the server side. For example, the setting
Cache-Control: private, max-age=0
is my standard setting. To set the HTTP header you just need to include the following line in the code of the ASP.NET MVC action:
HttpContext.Current.Response.Cache.SetMaxAge (new TimeSpan (0));
You can find more details in the answer.
It's important to understand that the header Cache-Control: private, max-age=0 doesn't prevent the caching of data, but the data will never be used without re-validation on the server. Using the ETag HTTP header option you can make the revalidation really work. The main idea is that the value of ETag always changes when the data on the server changes. If the previous data is already in the web browser cache, the web browser automatically sends an If-None-Match part in the HTTP request with the value of ETag from the cached data. So if the server sees that the data has not changed, it can answer with an HTTP response having 304 Not Modified status and an empty body. This allows the web browser to use the locally cached data.
In the answer and in this one you will find code examples of how to use the ETag approach.
If the data that the server sends changes, then you should use POST to avoid getting cached data every time you request it.
You should not use GET for all purposes. GET requests are supposed to be used for getting data from the server, not for saving or deleting operations. GET requests have some limitations: since the data you send to the server is appended as query strings, you can't send very large data using GET requests. Also, you should not use GET requests to send sensitive information to the server. You should use POST requests in all other cases, like adding, editing and deleting.
As far as I'm aware, jqGrid appends a unique key to every GET request, so you don't get any benefit from browser caching.
One way around the caching behavior is to make the GET unique each time the request is made. jQuery.ajax() does this with "cache: false" by appending a timestamp to the end of the request. You can replicate this behavior with something similar:
uri = uri + '?_=' + (new Date()).getTime(); // uri represents the URI to the endpoint

Issue with METHOD in prototype / Ajax.Request

I am trying to call the Yahoo API via Ajax to find the current weather:
var query = "select * from weather.forecast where location in ('UKXX0085','UKXX0061','CAXX0518','CHXX0049') and u='c'";
var url = 'http://query.yahooapis.com/v1/public/yql?q=' + encodeURIComponent(query) +'&rnd=1344223&format=json&callback=jsonp1285353223470';
new Ajax.Request(url, {
    method: 'get',
    onComplete: function(transport) {
        alert(transport.Status);       // says 'null'
        alert(transport.responseText); // says ''
    }
});
I noticed that instead of GET, Firebug says OPTIONS. What is it, and how can I force Prototype to use GET?
Here is the functionality which I am trying to recreate.
And here is the full URL which I am trying to access:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20weather.forecast%20where%20location%20in%20(%27UKXX0085%27%2C%27UKXX0061%27%2C%27CAXX0518%27%2C%27CHXX0049%27)%20and%20u%3D%27c%27&rnd=1344223&format=json&callback=jsonp1285353223470
After hours of trying to debug the same issue myself, I came to the following conclusion.
I believe this happens because of XSS counter-measures in newer browsers.
You can find very detailed information about these new counter-measures here:
https://developer.mozilla.org/en/http_access_control
Basically, a site can specify how "careful" the browser should be about allowing scripts from other domains. If your site, or a site from which you're loading external JavaScript code, includes one of these pieces of "browser advice", newer browsers will react by enforcing a stronger XSS policy.
For some reason, Prototype's Ajax.Request, under Firefox, seems to react by attempting to do an OPTIONS request, rather than a GET or POST, so perhaps Prototype has not been updated to correctly handle these new security conditions.
At least that was the conclusion in my case. Maybe this clue can help with your case...
