Every time I update my website's UI/jQuery,
users complain that things are not working for them and that they have bugs.
The users are not computer-savvy, so they don't know how to clear the browser's cookies or cache, and I have to connect to each of their computers and do it myself.
I spend many hours doing this, and they still complain.
Some of the users use Chrome, some Firefox.
I've Googled and found no solution for this.
Is there any client-side code that will command the browser to clear its cache,
or at least pop up a browser dialog asking the user to confirm the clear?
Regarding cache clearing: No, there isn't.
What you can do, however, is configure your web server to correctly serve expiration and cache validity headers for your content. (How to do this depends on your web server.)
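For example, a minimal sketch assuming a Node/Express server (the equivalent Cache-Control and Expires headers can be configured in Apache, IIS, PHP, etc.):
const express = require('express'); // hypothetical Express app serving the site's static files
const app = express();
app.use(express.static('public', {
  maxAge: '30d', // emits Cache-Control: max-age=2592000
  setHeaders: function (res) {
    // also send an explicit Expires date 30 days out
    res.set('Expires', new Date(Date.now() + 30 * 24 * 3600 * 1000).toUTCString());
  }
}));
app.listen(8080);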
You can also use "cache busting" versioned URLs. Instead of using, let's say,
<script src="script.js">
you can "version" the URL like this:
<script src="script.js?2012-12-03-13-06">
<!-- or instead of dates any other versioning scheme you like -->
and when said script is updated, also increment/change the query parameter accordingly. This causes browsers to treat the script as a new resource, since the changed URL isn't found in the cache.
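If bumping the query string by hand is easy to forget, the version can also be generated automatically. A minimal sketch, assuming a Node build or templating step and using the file's modification time as the version (hypothetical helper, not part of the original answer):
// Build a versioned URL from a file's modification time so it changes whenever the file does.
const fs = require('fs');
function versionedUrl(path) {
  const mtime = fs.statSync(path).mtime.getTime();
  return path + '?v=' + mtime;
}
console.log('<script src="' + versionedUrl('script.js') + '"></script>');
// e.g. <script src="script.js?v=1354540000123"></script>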
Regarding cookies: note that simply assigning an empty string to document.cookie does not actually delete existing cookies; each cookie has to be overwritten with an expiry date in the past (see the sketch below).
With browsers that allow entering JS code in the address bar, you can make a shortcut, let's say a favourite, with that clearing code as a javascript: URL (a bookmarklet).
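A minimal sketch of such client-side cookie clearing (it only reaches cookies visible to document.cookie, i.e. non-HttpOnly cookies for the current path and domain):
// Expire every cookie visible to this page by rewriting it with a past date.
document.cookie.split(';').forEach(function (cookie) {
  var name = cookie.split('=')[0].trim();
  document.cookie = name + '=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/';
});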
If you'd like to avoid caching altogether, you can use meta tags to tell the browser not to cache the site, though giving up caching is a considerable trade-off.
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
Basically I want to keep all the <link> and <script> requests in one php file, but some scripts only pertain to one page on the site. I don't want to create extra overhead requesting a script that isn't used on a page.
However, extra scripts are all on the home page. So for first-timers to the site, they would generally come through the home page. AFAIK, the script is then cached.
Would this offset the overhead problem I mentioned? i.e. if the file is cached, is there no noticeable overhead from requesting it on another page?
Yes, they will be cached.
However, a conditional HTTP request to check for a newer file will still be sent on every subsequent page request. The response from your server will effectively be "You already have the newest version" (a 304 Not Modified), so the browser will use the copy from its cache.
I find that HTTP requests are the great time killers of the web, especially for mobile devices, so at the very least you should use an Expires header so that when you hand the files to the browser you can say "*Don't request this file again for 30 days", etc.
Lastly, you can "prefetch" extra assets after page load so that it doesn't affect the user's load time but still caches things while the user is on your home page (see the sketch after the example below).
* As a caveat, if you do use an Expires header you will then need to do versioning through the folder path or a query parameter, because browsers will no longer check for new files. Something like this:
<script src="http://yoursite.com/js/compressed_files.js?v=1"></script>
I am trying to get Opera to re-request a page every time instead of just serving it from the cache. I'm sending the 'Cache-control: no-cache' and 'Pragma: no-cache' response headers but it seems as if Opera is just ignoring these headers. It works fine in other browsers - Chrome, IE, Firefox.
How do I stop Opera from caching pages? What I want to be able to do is have Opera re-request a page when the user clicks the Back button on the browser.
As a user, I absolutely detest pages that slow down my history navigation by forcing reloads when I use the back button. (If the browser you use on a daily basis paid attention to the various caching directives and let them affect history navigation the way you want as a developer, you'd probably notice some sites slowing down yourself...)
If you have a very strong use case for doing this I'd say your architecture might be "wrong" in some sense - for example, if you're switching between different "views" of constantly updating data and thus want to enforce re-load when users go back perhaps using Ajaxy techniques for loading the constantly changing data into the current page would be better?
Opera's implementation is on purpose - "caching" is seen as conceptually different from "history navigation", the former is more about storing things on disk and between sessions, the latter is switching back to a temporarily hidden page you just visited, in the state you left it.
However, if you really, really need it, there is a loophole in this policy that enables the behaviour you want. Sending "Cache-control: must-revalidate" will force Opera to re-load every page on navigation, but only if you're sending the page over https. (This is a feature requested by and intended for paranoid banks; it slows down far too many normal sites if applied on http.)
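A minimal sketch of that loophole, assuming a Node HTTPS server and hypothetical certificate files (the header only has this effect in Opera when the page is served over https):
// Serve a page with Cache-Control: must-revalidate over HTTPS so Opera re-requests it on Back.
const https = require('https');
const fs = require('fs');
const options = {
  key: fs.readFileSync('server-key.pem'), // hypothetical cert/key paths
  cert: fs.readFileSync('server-cert.pem')
};
https.createServer(options, function (req, res) {
  res.setHeader('Cache-Control', 'must-revalidate');
  res.end('<!doctype html><title>Revalidated on every navigation</title>');
}).listen(443);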
It sounds like your problem is related to this answer. After testing your header and the suggested headers, I could only reproduce your expected behavior in Internet Explorer.
SIMPLE SERVERSIDE CACHE CONTROL WITHOUT HEADERS OR FRONTEND SCRIPTS
Zero Dependency, Universal Language Edition
You can force re-caching globally without using a header by appending an md5 or sha1 checksum to your filename.
That way the browser will serve it from cache if the URL is an exact match, and otherwise treat it like a new resource.
Works in all browsers
Validates as strict HTML5 (originally did not, but this has been updated. Untested for XHTML, but probably not valid for that)
Does not require extra headers
Keeps frontend concerns and backend concerns nicely decoupled.
Does not require client side sanity checks or source validation.
Anything that can print html can do this consistently, including static content
If the content is not static, it is easy to extend runtime control to end users (with authentication, if desired), allowing simple page flags to determine whether minified, prettified, or debug source is returned.
Entirely encapsulates client cache control in the content serving mechanism, which makes things super simple to maintain.
As a side perk, it introduces versioned client-side caching automatically by deferring to the checksums the browser has cached, which can be useful if you have alternate versions and need to unit test a release package to determine its minimum stable dependency versions or something.
You never have to fiddle with your browser again to keep caching from interfering with your development process.
This approach also can be used for versioned images, video, audio, pdfs, etc. Pretty much any resource that is served as static data will operate similarly, cache on the first request for the content, and persist automatically without further consideration if the file does not change.
This is RFC valid markup. Notice the script and link tags have a get string:
?checksum=ba411cafee2f0f702572369da0b765e2
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Client Cache Control Example</title>
<meta name="description" content="You're only going to cache this when the content changes, and always when the content changes.">
<meta name="author" content="https://stackoverflow.com/users/1288121/mopsyd">
<!-- Example Stylesheet -->
<link rel="stylesheet" href="css/styles.css?checksum=ba411cafee2f0f702572369da0b765e2">
<!-- Example Script -->
<script src="js/scripts.js?checksum=ba411cafee2f0f702572369da0b765e2"></script>
</head>
<body>
</body>
</html>
The GET string ?checksum=ba411cafee2f0f702572369da0b765e2 refers to either an MD5 or SHA1 hash of the filesize of the resource. It can be obtained through a command line, language construct, or by hashing it from the value of the Content-Length: header. You then construct your href or src attribute by appending it as a GET string to the filename.
The browser will interpret these as distinct URLs and cache them separately.
The server will ignore the GET parameter if it is a static resource, but if it is served dynamically, then the GET parameter will be available to the interpreting language.
This means that whenever that hash changes in the links, the browser will cache that specific version independently one time, and then keep it forever, or until Expires: passes, whichever comes first.
Since the checksum is a direct reflection of the filesize, you can set Expires: to forever and it doesn't make much difference. You will still see your changes immediately as soon as that file changes even a single byte.
Generate your css or js source with whatever utilities you normally do.
Run an md5 or sha1 checksum on the filesize at runtime if you are serving dynamically, and at compile time if you are generating static content (like ApiGen docs, for example).
Serve the normal file with the hash as a GET string appended to the filename (eg: styles.css becomes styles.css?checksum=ba411cafee2f0f702572369da0b765e2)
Any change in the file forces a recache, which means you see the real value reflected immediately.
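A minimal sketch of those steps, assuming a Node helper (it follows the answer's scheme of hashing the file size; hashing the file contents works the same way):
// Compute the checksum for a static asset and emit the cache-busting tag described above.
const crypto = require('crypto');
const fs = require('fs');
function checksumTag(path) {
  const size = String(fs.statSync(path).size); // the answer keys the hash off the file size
  const sum = crypto.createHash('md5').update(size).digest('hex');
  return '<script src="' + path + '?checksum=' + sum + '"></script>';
}
console.log(checksumTag('js/scripts.js'));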
Optional, but rad: An additional benefit of this approach is that you can easily set up a dev GET flag that makes ALL frontend source resolve to prettified dev source, with any of your own custom debug functionality enabled, or that interprets versioning flags. You can add a redundant server-side check so the flag is only honored from a known development IP address, via proxy authentication, etc., and is otherwise ignored if you need it secure. I usually divide my frontend source up, whenever possible, similar to this:
This is what it is doing on live right now (minified production, cached, default, ?checksum=ba411cafee2f0f702572369da0b765e2).
This is what it ought to be doing on live right now, prettified enough for me to read (prettified production, never cached, ?debug_pretty_source=true).
This is what I use to figure out what isn't doing what it ought to on live if it exists in both of the previous (prettified with debug enabled, never cached, ACL/whitelist authorized, ?debug_dev_enable=true or similar).
You can apply the same principle to package releases by using version numbers instead of checksums, provided your versions don't change. Checksums are less readable but easier to automate and keep in sync with exact changes, but version suffixes are useful for testing package stability also, provided the version number reflects an immutable resource.
Found this whilst searching for a solution. No joy, so I wrote some JavaScript to solve the problem, which may be of use to others.
In <HEAD> above any other javascript:
<script>
if( typeof(opera) != 'undefined' ) { // only do for Opera
if (window.name == 'previously_loaded') { // will be "" before page is loaded
alert('Reloading Page from Server'); // for testing
window.name = ''; // prevent multiple reload
window.location.reload(true);
}
}
</script>
Now change window name so Opera detects it on subsequent load from cache:
window.name = 'previously_loaded';
Insert this line in one of your JS blocks that won't be executed during “window load”, otherwise it would cause an infinite reload. For me there was no need to refresh the page unless someone had exited via a link, so I just added it to my onclick/onunload function.
Before and after demos here with a few more notes. I intend to add it to my blog. I only have a few recent versions of Opera, so I would appreciate some tries of the demo before I get egg on my face.
Edit: Just realised that if a later-visited site changes the window name (it's persistent), then the back-tab reload won't happen. Just alter the above if statement to:
if (window.name != "") {
The demo worked fine when open in multiple tabs, but I vaguely recollect that window names should be unique, so I've altered the demo to generate a unique name.
window.name = new Date().getTime();
I have a web page that always needs to stay current. I do not want the browser to cache it. To that end, this meta tag is embedded with the page:
<meta name="Expires" content="Tue, 01 Jun 1999 19:58:02 GMT">
However, some browsers seem to ignore it. Chrome is particularly bad at it, though other browsers tend to do the same thing.
When I pick the page from the bookmarks bar, most of the time, it doesn't even hit the server, just loads it from cache. If I then press F5, it does go to the server and fetch a new copy.
Am I missing something simple? I thought the expires meta tag is the way it's done.
This is happening on an IIS 5.0 on Windows 2000.
Bottom line: it looks like meta tags inside the HTML do pretty much nothing. However, setting the expiration headers in the HTTP response does the trick nicely.
Send your expires headers using your server. Specifically, if you're using apache, look at this:
http://httpd.apache.org/docs/2.0/mod/mod_expires.html
This should help you:
<meta http-equiv="cache-control" content="no-cache" />
You can also configure the static content cache mechanism through IIS; you can learn how to do so here: http://support.microsoft.com/kb/247404.
<meta http-equiv="Cache-Control" content="private, no-store" />
Is really ALL you need, as stated here https://youtu.be/TNlcoYLIGFk?t=654 by Andrew Betts, elected W3C TAG member.
Using this, you will not need Pragma or Expires. In fact, the above will override the Expires directive.
You want to send an Expires header set to a date in the past (like your Meta tag).
Expires is the most widely respected cache header, but you can also use things like Last-Modified or ETags to get more specific control.
Meta tags are a somewhat outdated means of setting caching protocols, and most of the meta cache control properties are fairly deprecated (e.g. NO-CACHE). A lot of user agents ignore them.
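A minimal sketch of that validator-based control, assuming a plain Node server: the browser sends If-None-Match when revalidating, and a 304 response tells it to keep using its cached copy.
// Hypothetical handler: reply with an ETag, and with 304 Not Modified while it still matches.
const http = require('http');
const crypto = require('crypto');
const body = '<!doctype html><title>Cached page</title>';
const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';
http.createServer(function (req, res) {
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304, { 'ETag': etag }); // client copy is still valid
    res.end();
    return;
  }
  res.writeHead(200, { 'ETag': etag, 'Content-Type': 'text/html' });
  res.end(body);
}).listen(8080);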
There is a great article I used to read about browser caching and caching in general:
http://www.mnot.net/cache_docs/
It explains in great detail what works, what does not, and what is best to do.
In summary, there are a lot of mechanisms (HTML tags, HTTP headers) and types of cache (browser, proxy, gateways).
Send Cache-Control: no-cache to the client within the response headers.
Please specify what platform you are using so I can give a better response.
IE8 has a feature called InPrivate Filtering, which will block scripts it finds on webpages from more than 'n' different sites.
I'm listening to the most recent 'Security Now' podcast which is raving about this feature as being great.
At the very same time I'm screaming NOOO! What the *#&$ -- because my site (like many, many others) includes the following (jQuery + SWFObject), i.e. I'm using Google's CDN to host my jQuery.
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/swfobject/2.1/swfobject.js"></script>
So what's the deal - should I stop using jQuery and SWFObject from a CDN?
What's everybody else doing?
Edit: I couldn't find out if they keep a list of 'trusted sites' or not, but according to this from Microsoft the InPrivate filtering is per session. So at least someone has to actively enable it every session.
InPrivate Filtering is off by default and must be enabled on a per-session basis. To use this feature, select InPrivate Filtering from the Safety menu. To access and manage different filtering options for Internet Explorer 8, select InPrivate Filtering Settings from the Safety menu. To end your InPrivate Browsing session, simply close the browser window.
If your site has content that people would not want cached (a bank site, porn, or something else "sensitive"), then I would not use an externally hosted file. I would also consider hosting it yourself if your site is just totally broken when the file does not load. But if your site is anything else, I wouldn't worry about it. I don't think this is a feature most people will use if they want to hide their tracks. And if they really want to, let them deal with the consequences.
This may seem silly but since IE8 is out, why don't you test your site with InPrivate turned on and see how it behaves? Also if you can report back your findings here that would be great :)
It looks like there's a significant chance these scripts will be blocked when InPrivate is enabled, but it ultimately depends on each user's browsing habits.
If a user visits 10 sites in regular mode that all link to files from the same third-party domain, links to files on that domain will be blocked when InPrivate is enabled.
So while you won't be able to take advantage of the CDN, you should host files like this yourself if you need them to work reliably.
InPrivate Blocking keeps a record of third-party items like the one above as you browse. When you choose to browse with InPrivate, IE automatically blocks sites that have “seen” you across more than ten sites. You can also manually choose items to block or allow, or obtain information about the third-party content directly from the site by clicking the “More information from this website” link. Note that Internet Explorer will only record data for InPrivate Blocking when you are in “regular” browsing mode, as no browsing history is retained while browsing InPrivate. An easy way to think of it is that your normal browsing determines which items to block when you browse InPrivate.
Disclaimer: I haven't actually tested any of this as I don't have IE8, but the document you linked to is pretty clear about this.
You should host the JS files on your own site.
Here's another reason to host the JS file on your site.
I've always wondered, would it be possible to have a safe fallback in the event the CDN is down/unavailable?
Something like:
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
<script type="text/javascript">
if (typeof jQuery == 'undefined') {
document.write(unescape("%3Cscript src='local/jquery.min.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>
I think only a low percentage of people would be using IE8 and turning on InPrivate Browsing. Google's CDN is said to serve the file from a server near the user accessing the website, so performance is improved (not directly quoted). IE has caused me numerous problems in the past, and I dropped support for it.
Does it work from the domain name of the site, e.g. ajax.googleapis.com, or does it resolve the name? If it just logs the domain, couldn't you just wrap it in a CNAME, e.g. js.yourdomain.com -> ajax.googleapis.com?
Is it possible to clear all site cache? I would like to do this when the user logs out or the session expires instead of instructing the browser not to cache on each request.
As far as I know, there is no way to instruct the browser to clear all the pages it has cached for your site. The only control that you, as a website author, have over caching of a page occurs when the browser tries to access that page. You can specify that cached versions of your pages should expire at a certain time using the Expires header, but even then the browser won't actually clear the page from its cache at that time.
I certainly hope not - that would give the web site destructive powers over the client machine!
If security is your main concern here, why not use HTTPS? Browsers don't cache content received via HTTPS (or cache it only in memory).
One tricky way to mimic this would be to include the session-id as a parameter when referencing any static piece of content on the site. When the user establishes the session, the browser will recognize all the pieces of content as new due to the inclusion of this parameter. For the duration of the session the browser will use the static content from its cache. After the user logs out and logs back in again, the session-id parameter for the static content will be different, so the browser will recognize it as completely new content and will download everything again.
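A minimal sketch of that trick, assuming a hypothetical server-side helper that builds asset URLs when the page is rendered:
// Append the session id to static asset URLs so a new session sees brand-new URLs.
function assetUrl(path, sessionId) {
  return path + '?sid=' + encodeURIComponent(sessionId);
}
// e.g. while rendering the page (sessionId comes from whatever session layer you use):
var tag = '<link rel="stylesheet" href="' + assetUrl('css/styles.css', 'abc123') + '">';
// -> <link rel="stylesheet" href="css/styles.css?sid=abc123">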
That being said, this is a hack and I wouldn't recommend pursuing it. Why do you want the user's cache to be cleared after their session expires? There's probably a better solution for your situation than what you are currently asking for.
If you are talking about asp.net cache objects, you can use this:
' Remove every item currently stored in the ASP.NET cache
For Each elem As DictionaryEntry In Cache
    Cache.Remove(CStr(elem.Key))
Next
to remove items from the cache, but that may not be the full extent of what you are trying to accomplish.