Once we open a link in a new tab in Firefox, the data corresponding to that web page (static or dynamic) gets stored in the browser cache. Then, when we switch back to that tab, Firefox extracts the page's data from the cache (rather than requesting it from the site's server) and paints it into the screen's frame buffer.
I want to know how Firefox fetches this data in the correct sequence.
What kind of mapping does Firefox use to extract the page data from its cache?
Firefox (like any other browser) uses heuristics to decide when and what to cache when no caching information is included in the resources. In that case, Firefox might still decide to cache the files for a certain period of time.
If you want to prevent Firefox from caching your resources altogether, you must include the following response header on your resources:
Cache-Control: no-cache, no-store
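For instance, a minimal sketch of attaching that header on a Node.js/Express server (Express and the file layout are my assumptions, not something from the question):

// Attach Cache-Control to every response so Firefox never caches these resources.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Cache-Control', 'no-cache, no-store');
  next();
});

app.use(express.static('public')); // the resources you don't want cached
app.listen(8080);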
As for the exact algorithm Firefox uses to fetch from its cache, I don't think it is public. Maybe somebody from Mozilla is able to answer this.
Related
Traditionally a browser will parse HTML and then send further requests to the server for all related data. This seems inefficient to me, since it might require a large number of requests, even though my server already knows that a browser that wants to use this web application will need all of its resources.
I know that JS and CSS could be inlined, but that complicates server-side code, and image data as base64 bloats the size of the data... I'm also aware that rendering can start before all assets are downloaded, which would potentially no longer work (depending on the implementation). I still feel that streaming an entire application in one go should be faster on slow connections than making tens of separate requests.
Ideally I would like the server to stream an entire directory into one HTTP response.
Does any model for this exist?
Does the reasoning make sense?
PS: If browser support for this is completely lacking, I'm wondering about a two-step approach: download a small JavaScript file which downloads a compressed web app file, extracts it, and plugs the resources into the page. Is anyone already doing something like this?
Update
I found one: http://blog.another-d-mention.ro/programming/read-load-files-from-zip-in-javascript/
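For the record, a rough sketch of that two-step approach using the JSZip library (the bundle name app.zip and the data-path attribute are hypothetical conventions of mine):

// Bootstrap script: fetch one zip bundle, unpack it with JSZip,
// and plug each resource into the page via blob URLs.
fetch('/app.zip')
  .then(response => response.arrayBuffer())
  .then(buffer => JSZip.loadAsync(buffer))
  .then(zip => {
    // Rewrite every image that declares its path inside the bundle.
    document.querySelectorAll('img[data-path]').forEach(img => {
      zip.file(img.dataset.path).async('blob').then(blob => {
        img.src = URL.createObjectURL(blob);
      });
    });
  });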
I started to research related issues in order to find the way to get the best results with what seems possible without changing web standards, and I wondered about caching. If I could send the last-modified date of every subresource of a page along with the initial HTML page, a browser could avoid sending If-Modified-Since requests once it has loaded every resource at least once. This would in effect be better than sending all resources with the initial request, since that would be beneficial only on the first load and detrimental on subsequent loads, where it is better for browsers to use their cache (as Barmar pointed out).
Now it turns out that even with a web extension you cannot get hold of the If-Modified-Since header, so you certainly can't tell the browser to use the cached version instead of contacting the server.
I then found this post from Facebook on how they tried to reduce traffic by hashing their static files and giving them a one-year expiry date. This would mean that the URL guarantees the content of the file. They still saw plenty of unnecessary If-Modified-Since requests, and they managed to convince Firefox and Chrome to change the behaviour of their reload buttons to no longer revalidate static resources. For Firefox this requires the new Cache-Control: immutable header; for Chrome it doesn't.
I then remembered that I had seen something like that before, and it turns out there is a solution to this problem which is more convenient than hashing the contents of resources and serving them from a database for at least ten years: just put a new version number in the filename. An even more convenient solution would be to add a version query string, but it turns out that that doesn't always work.
Admittedly, changing your filenames all the time is a nuisance, because files referencing those files also need to change. However, the files don't actually need to change. If you control the server, it might be as simple as writing a redirect rule to make sure that logo.vXXXX.png will be redirected to logo.png (where XXXX is the last-modified timestamp in seconds since the epoch)[1]. Now let your template system generate the timestamp automatically, as WordPress's wp_enqueue_script does (WordPress actually satisfies itself with the query-string technique). Now you can set the expiration date to the far future and use the immutable cache header. If browsers respect the cache control, you can safely ignore ETags and If-Modified-Since headers, since they are now completely redundant.
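A sketch of one way to implement that rule on a Node.js/Express server (Express is my assumption; here the "redirect" is done as an internal rewrite, and the same rule is typically a one-liner in Apache or nginx):

// Strip the version tag so /img/logo.v1510000000.png serves /img/logo.png,
// and mark every versioned resource as cacheable forever.
app.use((req, res, next) => {
  const m = req.url.match(/^(.*)\.v\d+(\.\w+)$/);
  if (m) {
    req.url = m[1] + m[2]; // /img/logo.v1510000000.png -> /img/logo.png
    res.set('Cache-Control', 'public, max-age=31536000, immutable');
  }
  next();
});
app.use(express.static('public'));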
This solution guarantees that the browser will never ask for cache validation, and yet you will never see a stale resource, without having to decide on an expiry date in advance.
It doesn't answer the original question here about how to avoid multiple requests for the resources of the same page on a clean cache, but ever after (as long as the browser cache doesn't get cleared), you're good! I suppose that's good enough for me.
[1] You can even avoid the server overhead of checking the timestamp of every resource every time a page references it by using the version number of your application instead. In debug mode, for development, one can use the timestamp to avoid having to bump the version on every modification of a file.
I am not really sure how to Google this, so I thought I could ask here.
If I have the same image posted on a page twice, will that slow down the load time, or will it remain the same since I am using the same resource?
The browser should be able to cache the image the first time it is requested from the source server; most popular browsers have this implemented. It should not have to load the image twice: it loads it once on the initial request to the server and uses the cached version the second time.
This also assumes the end user has browser caching enabled. If that is turned off (even if the browser supports it), the browser will make that extra request for the image, since there is no cache to pull from.
I have a website which is displayed to visitors via a kiosk. People can interact with it. However, since the website is not locally hosted and uses an internet connection, page loads are slow.
I would like to implement some kind of lazy caching mechanism such that, as people browse the pages, the pages and the resources they reference get cached, so that subsequent loads of the same page are instant.
I considered using HTML5 offline caching, but it requires me to specify all the resources in the manifest file, and this is not feasible for me, as the website is pretty large.
Is there any other way to implement this? Perhaps using HTTP caching headers? I would also need some way to invalidate the cache at some point to "push" the new changes to the browser...
The usual approach to handling problems like this is with HTTP caching headers, combined with smart construction of URLs for resources referenced by your pages.
The general idea is this: every resource loaded by your page (images, scripts, CSS files, etc.) should have a unique, versioned URL. For example, instead of loading /images/button.png, you'd load /images/button_v123.png, and when you change that file its URL changes to /images/button_v124.png. Typically this is handled by URL rewriting over static file URLs, so that, for example, the web server knows that /images/button_v124.png should really load the /images/button.png file from the web server's file system. The version numbers can be created by appending a build number, using a CRC of the file contents, or in many other ways.
Then you need to make sure that, wherever URLs are constructed in the parent page, they refer to the versioned URL. This obviously requires all URLs to be constructed dynamically, which can be accomplished either by adjusting the code used to generate your pages or by server-wide plugins which affect all text/html requests.
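As an illustration, a content-based version number might be generated at build time along these lines (a hypothetical Node.js helper; MD5 stands in for the CRC mentioned above):

// Derive a versioned URL from the file's contents, so the URL changes
// exactly when the file does.
const crypto = require('crypto');
const fs = require('fs');

function versionedUrl(publicDir, urlPath) {
  const data = fs.readFileSync(publicDir + urlPath);
  const hash = crypto.createHash('md5').update(data).digest('hex').slice(0, 8);
  return urlPath.replace(/(\.\w+)$/, '_v' + hash + '$1'); // /images/button.png -> /images/button_v4fe02a1c.png
}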
Then you set the Expires header for all resource requests (images, scripts, CSS files, etc.) to a date far in the future (e.g. 10 years from now). This effectively caches them forever. It means that all resources loaded by each of your pages will always be fetched from cache; cache invalidation never happens, which is OK because, when the underlying resource changes, the parent page will use a new URL to find it.
Finally, you need to figure out how you want to cache your "parent" pages. How you do this is a judgement call. You can use ETag/If-None-Match HTTP headers to check for a new version of the page every time, which will very quickly load the page from cache if the server reports that it hasn't changed. Or you can use Expires (and/or Max-Age) to reload the parent page from cache for a given period of time before checking the server.
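For the revalidate-every-time option, a sketch of what the parent page might look like on an Express server (Express is an assumption here, and renderPage is a hypothetical helper; Express generates the ETag and answers 304s for you):

// Parent pages carry no-cache so the browser revalidates on every visit;
// the server replies 304 when the generated ETag still matches.
app.get('/kiosk/:page', (req, res) => {
  res.set('Cache-Control', 'no-cache'); // always check, but reuse the cache on 304
  res.send(renderPage(req.params.page));
});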
If you want to do something even more sophisticated, you can always put a custom proxy server on the kiosk; in that case you'd have total, centralized control over how caching is done.
Is it somehow possible to search Firefox cache entries from an application which is online and doesn't ask the user to install anything on their computer? Does Mozilla provide any API or functions to do this?
Probably not from the cache you want; maybe something more along the lines of
HTML5 local storage or an Offline Web Application. Apart from that, it would be difficult to know what is in the entire cache due to security constraints, but you can use the server response headers to detect which items the browser thinks it has in cache (detect the 304 response in the HTTP headers, if you provide the proper headers).
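A small server-side sketch of that 304-based detection (Express assumed; the asset path and date are made up for illustration):

// If the browser sends back the validator we issued, it already has the
// file cached: answer 304 and log the hit.
app.get('/static/app.js', (req, res) => {
  const lastModified = 'Mon, 01 Jan 2018 00:00:00 GMT';
  if (req.headers['if-modified-since'] === lastModified) {
    console.log('client already has /static/app.js in its cache');
    return res.status(304).end();
  }
  res.set('Last-Modified', lastModified);
  res.sendFile(__dirname + '/static/app.js');
});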
I'm not sure whether the local cache is available online from any other computer, but you can type about:cache in the URL bar and it'll take you to a page that identifies entries in both the memory and disk caches.
Hope that helps.
Background
I'm building an app that links together recent web pages you've visited. To do this, I need to get the HTML for recent URLs using Cocoa.
Right now, I'm using an invisible WebView to do this.
As I understand it, if the URL isn't in the cache for my app, this is hitting web servers.
What I want
The chances are high that the URL I'm grabbing has already been cached by Safari as the page has already been visited.
I want my app to check Safari's cache for the URL first. If it's there, it should just use this data. If not, it should hit the web server and store the page in my app's cache.
I don't really want to have to parse Safari's cache.db file using sqlite3 - I've no idea whether this format will stay the same. I'm after something simpler and more high-level.
What I've tried
I know that you can set up your own NSURLCache using the method initWithMemoryCapacity:diskCapacity:diskPath:, but I don't want to try pointing it at the Safari cache in case that screws up Safari by writing to it.
Is there an easy, high level way of sharing the Safari cache?
UPDATE
Aha. I've just realised there may be a way to do this that I've been missing.
I could make a new instance of NSURLCache with initWithMemoryCapacity:diskCapacity:diskPath:, point it at the Safari cache, then specify a cache policy of NSURLRequestReturnCacheDataDontLoad for the URL request when loading the page.
If this fails, I could just try to load the page as normal. I'll try this out and update the question when I know more.
To be honest, you just can't do this.
Firstly, I'm pretty certain -[NSURLCache initWithMemoryCapacity:diskCapacity:diskPath:] won't work as you expect. It will instead blow away the old cache file to create its own, potentially highly upsetting Safari.
Secondly, NSURLCache is a composite cache: it caches data first in memory, and then moves it out to disk at some point. So even if you could properly access Safari's cache file (which you can't), you'd only be able to access the older cached data, not the most recent.