High-performance passive-access-optimised dynamic REST web pages - caching

This question is about a caching framework, either already existing or to be implemented, for the REST-inspired behaviour described below.
The goal is that GET and HEAD requests should be handled as efficiently as requests to static pages.
In terms of technology, I am thinking of Java Servlets and MySQL to implement the site. (Good reasons may yet change my choice of technology.)
The web pages should support GET, HEAD and POST; GET and HEAD being much more frequent than POST. The page content will not change with GET/HEAD, only with POST. Therefore, I want to serve GET and HEAD requests directly from the file system and only POST requests from the servlet.
A first (slightly incomplete) idea is that the POST request would pre-calculate the HTML for successive GET/HEAD requests and store it in the file system. GET/HEAD would then always obtain the file from there. I believe that this could easily be implemented in Apache with conditional URL rewriting.
The more refined approach is that GET would serve the HTML from the file system (and HEAD would use it, too) if there is a pre-computed file, and otherwise would invoke the servlet machinery to generate it on the fly. POST in this case would not generate any HTML, but only update the database appropriately and delete the HTML file from the file system as a flag to have it generated anew with the next GET/HEAD. The advantage of this second approach is that it handles the “initial phase” of the web pages, where no POST has been called yet, more gracefully. I believe that this lazy-generate-and-store approach could be implemented in Apache by providing an error handler, which would invoke the servlet in case of “file not found but should be there”.
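Sketched as a servlet, the POST side of this second approach could look like the following (a sketch only; updateDatabase and cacheFileFor are hypothetical names for my database layer and URL-to-file mapping):

import java.io.File;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PageServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        updateDatabase(req);                             // hypothetical: apply the POSTed change
        File cached = cacheFileFor(req.getRequestURI()); // hypothetical URL-to-file mapping
        if (cached.exists() && !cached.delete()) {
            log("could not invalidate " + cached);       // next GET might still see stale HTML
        }
        resp.sendRedirect(req.getRequestURI());          // POST-redirect-GET regenerates lazily
    }

    private void updateDatabase(HttpServletRequest req) { /* MySQL update */ }
    private File cacheFileFor(String uri) { return new File("/var/www/cache", uri + ".html"); }
}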
In a later round of refinement, to save bandwidth, the cached HTML files should also be available in a gzipped version, served when the client advertises support for it (Accept-Encoding: gzip). I believe that the basic mechanisms should be the same as for the uncompressed HTML files.
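The generation step could simply write both variants side by side; a sketch (the .gz-next-to-.html layout is just my assumption of how Apache would pick the compressed file):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.zip.GZIPOutputStream;

// Write page.html and page.html.gz together, so Apache can serve the
// compressed variant to clients that send Accept-Encoding: gzip.
static void writeCachedPage(File htmlFile, String html) throws IOException {
    try (Writer w = new OutputStreamWriter(new FileOutputStream(htmlFile), "UTF-8")) {
        w.write(html);
    }
    File gzFile = new File(htmlFile.getPath() + ".gz");
    try (Writer w = new OutputStreamWriter(
            new GZIPOutputStream(new FileOutputStream(gzFile)), "UTF-8")) {
        w.write(html);                                   // closing also finishes the gzip stream
    }
}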
Since there will be many such REST-like pages, both approaches might occasionally need some mechanism to garbage-collect rarely used HTML files in order to save file space.
To summarise, I am confident that my GET/HEAD-optimised architecture can be cleanly implemented. I would like opinions on the idea as such in the first place (I believe it is good, but I may be wrong), and to hear whether somebody already has experience with such an architecture, or perhaps even knows a free framework implementing it.
Finally, I'd like to note that client caching is not the solution I am after, because multiple different clients will GET or HEAD the same page. Moreover, I absolutely want to avoid the servlet machinery during GET/HEAD requests when the pre-computed file exists. It should not even be invoked to provide cache-related HTTP headers in GET/HEAD requests, nor to dump a file to the output.
The questions are:
Are there better (standard) mechanisms available to reach the goal stated at the beginning?
If not, does anybody know about an existing framework like the one I consider?
I think that an HTTP cache does not reach my goal. As far as I understand, the HTTP cache would still need to invoke the servlet with a revalidation request in order to learn whether a POST has meanwhile changed the page. Since page changes will come at unpredictable points in time, an HTTP header stating an expiration time is not good enough.

Use Expires HTTP Header and/or HTTP conditional requests.
Expires
The Expires entity-header field gives the date/time after which the response is considered stale. A stale cache entry may not normally be returned by a cache (either a proxy cache or a user agent cache) unless it is first validated with the origin server (or with an intermediate cache that has a fresh copy of the entity). See section 13.2 for further discussion of the expiration model.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Conditional Requests
Decorate cacheable responses with Expires, Last-Modified and/or ETag headers. Make requests conditional with If-Modified-Since, If-None-Match and the other If-* headers (see the RFC).
e.g.
Last response headers:
...
Expires: Wed, 15 Nov 1995 04:58:08 GMT
...
Don't perform a new request on the resource before the expiration date (the Expires header); after that, perform a conditional request:
...
If-Modified-Since: Wed, 15 Nov 1995 04:58:08 GMT
...
If the resource wasn't modified, the 304 Not Modified response code is returned and the response has no body; otherwise, 200 OK is returned with a body.
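In the Java Servlet stack from the question, the conditional-GET handshake is largely built in: override getLastModified() and the container compares it against If-Modified-Since and answers 304 Not Modified by itself. A minimal sketch (the lookup and renderer are hypothetical):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ArticleServlet extends HttpServlet {
    @Override
    protected long getLastModified(HttpServletRequest req) {
        // last change of this page in ms since the epoch, or -1 if unknown;
        // the container uses this value for the If-Modified-Since comparison
        return lookupLastChange(req.getRequestURI());   // hypothetical lookup
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html;charset=UTF-8");
        resp.getWriter().write(renderPage(req));        // hypothetical renderer
    }

    private long lookupLastChange(String uri) { return -1; }
    private String renderPage(HttpServletRequest req) { return "<html>...</html>"; }
}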
Note: the HTTP RFC also defines the Cache-Control header.
See Caching in HTTP
http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html

Related

Do we need Etag and Last-Modified header when using CommonsChunkPlugin in webpack

I am using webpack to bundle all my asset files, so I get something like this:
bundle.7fb44c37b0db67437e35.js
vendor.495e9142a1f8334dcc8c.js
styles.bc5b7548c97362b683f5582818914909.css
I use chunkhash in the names so that once the browser caches a file, it doesn't fetch it again until the hash changes. For example, if I change something in the styles, bundle the files and deploy, only the hash of the styles file will change; the others won't, so the browser will request just the styles file from the server again and use the rest from its memory cache.
In the response headers I also have ETag and Last-Modified, and they change for every file each time I deploy the app. Should I remove them from the response? Can they confuse the browser into contacting the server to check whether the files have changed, even though the hash is still the same?
Great question. This depends largely on how the back end is implemented and how it calculates the header values. Are the files served from our own server, or from something else like S3? Are we using a CDN? Are we using a framework for our application server? Who calculates these headers, the web server or the application server?
For the purposes of this answer and to keep things simple let's assume we are using the popular server framework Express with no CDN or 3rd party hosting. Like most application servers, Express calculates ETag and Last-Modified based on the content of the file being served - not the name of the file.
The first time a browser requests one of our files, it will receive the ETag and Last-Modified for the resource. The next time the same resource is requested, the browser will send the cached ETag and Last-Modified headers to the server. The server then decides, based on these headers, whether the browser needs to download a new version of the resource or whether the cached version is current. If the cached resource is current, the server responds with a 304 Not Modified status code. The status code is the key to this whole caching system: it is how the browser decides whether it should use the cached resource.
To generate the ETag header, Express passes a binary Buffer representation of the response body to the etag module, which calculates a SHA-1 hash based on the contents of the Buffer. To generate the Last-Modified header, Express uses the file system's last-modified time (see lastModified in the docs).
When webpack builds a new bundle, the file's binary can change even if the chunkhash is the same, and the file system's modification time changes in any case. This causes Express to output a different ETag and Last-Modified, which means it will not respond with a 304 the next time the resource is requested. Without the 304 status code, the browser will unnecessarily re-download the bundle.
Answer
I think the best thing to do here is disable ETag and Last-Modified headers for these assets and instead use the Expires or Cache-Control: max-age header set to a date far in the future (usually 1 year). This way the browser will only re-download a bundle if it is expired or if it simply does not exist in the cache.
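The policy itself is stack-independent; a minimal sketch of it as a Java servlet filter (the stack and the one-year value are illustrative; map the filter to the hashed bundle URLs only):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hashed file names never change their content, so let the hash in the URL,
// not ETag/Last-Modified revalidation, drive cache invalidation.
public class HashedAssetCacheFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) res).setHeader("Cache-Control", "public, max-age=31536000");
        chain.doFilter(req, res);                       // no ETag/Last-Modified added here
    }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}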

Asking Chrome to bypass local cache for XmlHttpRequest like it's possible in Firefox?

As some of you may already know, there are some caching issues in Firefox/Chrome for requests that are initiated by the XmlHttpRequest object. These issues mean that the browser does not strictly follow the rules and does not go to the server for the new XSLT file (for example). The response does not have an Expires header (for performance reasons we can't use it).
Firefox has an additional property on the XHR object, "channel", on which you set the value Components.interfaces.nsIRequest.LOAD_BYPASS_CACHE to go to the server explicitly.
Does something like that exist for Chrome?
Let me immediately stop everyone who would recommend adding a timestamp or a random integer as the value of a GET parameter - I don't want the server to get requests for different URLs. I want it to get the original URL. The reason is that I want to protect the server from getting too many different requests for simple static files and from sending too much data to clients when it is not needed.
Hitting a static file with a generated GET parameter (like '?forcenew=12314') would render a 200 response the first time and a 304 for every following request with that same value of the random integer. I want to make requests that always return 304 if the target static file is identical to the client's version. This is, by the way, how web browsers should work out of the box, but XHR objects tend not to go to the server at all to ask whether the file has changed.
In my main project at work I had the exact same problem. My solution was not to append random strings or timestamps to GET requests, but to append a specific string instead.
If you have a revision number, e.g. a Subversion revision or the like from Git/Mercurial or whatever you are using, append that. Static files will get 304 responses until the moment a new revision is released. When the new release happens, a single 200 response is granted and it is back to happily generating 304 responses. :-)
This has the added bonus of being browser independent.
Should you be unlucky and not have a revision number, then make one up and increment it each time you make a release.
You should look into ETags. ETags are keys that can be generated from the contents of the file, so once the file on the server changes there will be a new ETag. Obviously this will be a server-side change, which is something that you will need to do given that you want a 200 and then subsequent 304s. Chrome and FF should respect these ETags, so you shouldn't need to do any crazy client-side hacks.
Chrome now supports Cache-Control: max-age=0 request HTTP header. You can set it after you open an XMLHttpRequest instance:
xhr.setRequestHeader( "Cache-Control", "max-age=0" );
This will instruct Chrome not to use a cached response without revalidation.
For more information check The State of Browser Caching, Revisited by Mark Nottingham and RFC 7234 Hypertext Transfer Protocol (HTTP/1.1): Caching.

Refreshing in RestSharp for Windows Phone

I implemented RestSharp successfully in my WP7 application, but one issue remains:
When I load resources from the server (for example a GET request on http://localhost:8080/cars), the first time the collection of (in this case) cars is successfully returned.
When I issue the same request a second time, I always get the same result as the first time, even when the resources have changed in the meantime. Looking at my server, the second time no request arrives at all.
I presume there's a caching mechanism implemented in RestSharp, but I see no way to invalidate the cache results.
Are there any ways to manually invalidate the RestSharp for Windows Phone cache results? (Or ways to force the library to get the results from the server)
You can control caching of resources by setting headers on the response your server sends back. If you do not want the resource to be cached, set the Cache-Control header to no-cache.
It is the server's job to specify how long a resource is good for; the client should do its best to respect that information.
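For example, the server side could set it like this (sketched as a Java servlet here; the header values are what matter, not the stack):

import javax.servlet.http.HttpServletResponse;

// Tell clients (including RestSharp's underlying HTTP stack) not to reuse
// this response without revalidating it with the server.
static void disableCaching(HttpServletResponse resp) {
    resp.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");
    resp.setHeader("Pragma", "no-cache");   // for HTTP/1.0 intermediaries
    resp.setDateHeader("Expires", 0);       // already expired
}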
If you really, really want to delete entries in the cache, you need to go via the WinINet API.
As a quick hack to avoid caching, you can append a unique value to the end of the query string. The current DateTime's ticks or a GUID are suitable.
e.g.
var uri = "http://example.com/myrequest?rand=" + DateTime.Now.Ticks;

Can I clear a specific URL in the browser's cache (using POST, or otherwise)?

The Problem
There's an item (foo.js) that rarely changes. I'd like this item to be stored in the browser's cache (using the Expires header). However, when it does change, I'd like the browser to update to the newest version.
The Attempt
Foo.js is returned with a far-future Expires header. It's cached by the browser and requires no round-trip query to the server. Just the way I like it. Now, when it changes...
Let's assume I know that the user's version of foo.js is outdated. How can I force a fresh copy of it to be obtained? I use xhr to perform a POST to foo.js. This should, in theory, force the browser to get a newer version of foo.js.
Unfortunately, this only seems to work in Firefox. Other browsers will use their cached copy, even if POST parameters are set.
WTF
First off, is there a way to do what I'm trying to do?
Second, why is there no sensible key/value type of cache in browsers? Why can I not simply include in the headers "Cache: some_key, some_expiration_time" and also specify "Clear-Cache: key1, key2, key3" (the keys would have to be domain-specific, of course)? Instead, we're stuck with either expensive round trips that ask "is the content new?", or the ridiculous "guess how long it'll be before you modify something" Expires header.
Thanks
Any comments on this matter are greatly appreciated.
Edits
I realize that adding a version number to the file would solve this. However, in my case it is not possible -- the call to "foo.js" is hardcoded into a bookmarklet.
You can just add a query string to the end of the URL; the server can ignore it, but the browser can't: it must treat it as a new request:
http://www.site.com/foo.js?v=1.12345
Many people use this approach, SO uses a hash of some sort, I use the build number (so users get a new version each build). If either of these is an option, you get the benefit of long duration cache headers, but still force a fetch of a new copy when needed.
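A minimal sketch of the build-number variant (hypothetical names; in Java here, but any server-side language works the same way):

// BUILD would be injected by the release process, e.g. the CI build number;
// every deployment then busts the cache exactly once per asset.
public final class Assets {
    public static final String BUILD = "1.12345";       // hypothetical build number

    public static String url(String path) {
        return path + "?v=" + BUILD;                    // e.g. /foo.js?v=1.12345
    }
}

Templates would then emit Assets.url("/foo.js") instead of the bare path.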
Why set your cache expiration so far in the future? If you set it to one day, for instance, the only overhead that you will incur (once a day) is the browser revalidating that it is the same file. If you still have not changed it, then you will not re-download the file; the server will respond with a not-modified response.
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it’s available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).
Generally speaking, these are the most common rules that are followed (don’t worry if you don’t understand the details, it will be explained below):
If the response’s headers tell the cache not to keep it, it won’t.
If the request is authenticated or secure (i.e., HTTPS), it won’t be cached.
A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if:
* it has an expiry time or other age-controlling header set, and is still within the fresh period, or
* the cache has seen the representation recently, and it was modified relatively long ago.
Fresh representations are served directly from the cache, without checking with the origin server.
If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.
Under certain circumstances — for example, when it’s disconnected from a network — a cache can serve stale responses without checking with the origin server.
If no validator (an ETag or Last-Modified header) is present on a response, and it doesn't have any explicit freshness information, it will usually — but not always — be considered uncacheable.
Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn’t changed.
http://www.mnot.net/cache_docs/#BROWSER
There is an excellent suggestion made in this thread: How can I make the browser see CSS and Javascript changes?
See the accepted answer by user, "grom".
The idea is to use the "modified" timestamp from the server to note when the file has been modified, and to add a version parameter to the end of the URL, making your CSS and JS files have URLs like this: my.js?version=12345678
This makes the browser think it is a new file, and so it does not refer to the cached version.
I am using a similar method in my app. It works pretty well. Of course, this would assume you are using something like PHP to process your HTML.
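The same idea sketched in Java instead of PHP (paths are illustrative): derive the version parameter from the file's modification time, so the URL changes exactly when the file does.

import java.io.File;

// e.g. versionedUrl("/js/my.js", "/var/www") -> "/js/my.js?version=1241136000000"
static String versionedUrl(String webPath, String docRoot) {
    long mtime = new File(docRoot, webPath).lastModified();  // 0 if the file is missing
    return webPath + "?version=" + mtime;
}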
Here is another link with a simpler implementation for WordPress: http://markjaquith.wordpress.com/2009/05/04/force-css-changes-to-go-live-immediately/
With these constraints, I guess your only option is to use window.location.reload(true) and force the browser to refresh all the cached items... it's not pretty.
You can invalidate the cache for a specific URL using the Cache-Control HTTP header.
On your desired URL you can run (with XHR/AJAX, for instance) a request with the following headers:
headers: {
  'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
  'Pragma': 'no-cache',
  'Expires': '0',
}
Your cache will be invalidated, and the next GET requests will return a brand-new result.

Check if web page is modifed / has expired with Ruby

I'm writing a crawler in Ruby, and I want to honour the headers that the server sends out in order to make the crawl more efficient. Is there a straightforward way in Ruby to determine whether a page needs to be re-downloaded by the client? I know I need to consider at least these headers:
Last-Modified
ETag
Cache-Control
Expires
What's the definitive way of determining this - is it specified anywhere?
You are right about the headers you will need to look at, but you need to consider that it is the server that sets these. If they are set correctly, then you can use them to make the decision, but none of them are required.
Personally, I would probably start by tracking the Expires value as I do the initial download, as well as logging the ETag. On the next pass I'd then look at Last-Modified, assuming the Expires or ETag showed some sign that I might need to re-download (or if they aren't even set). I wouldn't expect Cache-Control to be all that useful.
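The revalidation logic itself is language-independent; a sketch with java.net's HttpURLConnection (Ruby's Net::HTTP takes the same header names; etag and lastModified are the values saved from the previous response):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Send the stored validators; a 304 means the cached copy is still good.
static boolean needsRedownload(String url, String etag, String lastModified)
        throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    if (etag != null)         conn.setRequestProperty("If-None-Match", etag);
    if (lastModified != null) conn.setRequestProperty("If-Modified-Since", lastModified);
    return conn.getResponseCode() != HttpURLConnection.HTTP_NOT_MODIFIED;
}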
You'll want to read about the head method in Net::HTTP -- http://www.ruby-doc.org/stdlib/
