I have a couple of questions regarding service workers and Workbox.
1) When should a service worker be registered in a web app?
* If I register it directly in index.html and I have precached assets, then on first paint the precached assets are downloaded twice: once by the page request and once by Workbox, which delays the first paint.
* If I register it on page load, that issue is resolved, but API responses that should be cached only get cached on the second refresh of the page.
2) My web app uses a lot of third-party JS, CSS, GIFs and so on. Registering a separate route for each kind of request seems complex, so I tried regex patterns. Could anyone suggest a good regex to cache these assets while excluding certain image formats from caching? In short, what is the best way to cache third-party JS, CSS, API hits and other formats?
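One possible shape for such a pattern (a sketch: the extensions to include and exclude here are assumptions, and Workbox routing API names vary between versions):

```javascript
// A starting point: cache common static asset extensions from any origin,
// but skip the image formats you want excluded (.gif and .svg here, purely
// as an example).
const cacheableAssets = /\.(?:js|css|png|jpe?g|woff2?|gif|svg)(\?.*)?$/i;
const excludedImages = /\.(?:gif|svg)(\?.*)?$/i;

function shouldCache(url) {
  return cacheableAssets.test(url) && !excludedImages.test(url);
}

// With Workbox this predicate could back a single route instead of many,
// roughly (API names differ between Workbox versions):
// workbox.routing.registerRoute(
//   ({ url }) => shouldCache(url.href),
//   new workbox.strategies.StaleWhileRevalidate()
// );
```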
3) What is the best way to clear the cache? One option is expiration, but in my case I need to clear some cache contents when the user changes their location. How can I achieve this?
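A minimal sketch, assuming location-dependent entries live in caches whose names share a prefix such as `location-` (the naming scheme is an assumption):

```javascript
// Pure helper: pick out the cache names that hold location-dependent data.
function locationCaches(cacheNames, prefix = 'location-') {
  return cacheNames.filter((name) => name.startsWith(prefix));
}

// Browser-only part: call this when the user changes location.
async function clearLocationCaches() {
  const names = await caches.keys();
  await Promise.all(locationCaches(names).map((name) => caches.delete(name)));
}
```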
4) When should I use the cache and when IndexedDB?
5) What are the advantages of using the workbox-webpack plugin over using Workbox directly? Any major ones?
6) What are the best practices for registering, unregistering, and updating a service worker?
7) If I use the workbox-webpack plugin with a dynamic route to cache JS/CSS files, every build generates a JS/CSS file with a different hash, and each one gets cached, so the service worker ends up caching the same file under different hash values.
8) What is the best approach to forcefully delete a named cache, and a single file within a cache?
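Both deletions can be sketched with the Cache Storage API; the `CacheStorage` object is passed in explicitly here only so the helpers can be exercised outside a browser (in a page or service worker you would pass the global `caches`):

```javascript
// Delete an entire named cache.
async function deleteWholeCache(storage, name) {
  return storage.delete(name); // resolves to true if the cache existed
}

// Delete a single entry from a named cache.
async function deleteSingleEntry(storage, cacheName, url) {
  const cache = await storage.open(cacheName);
  return cache.delete(url); // resolves to true if a matching entry was removed
}

// In a service worker: deleteWholeCache(caches, 'old-cache');
//                      deleteSingleEntry(caches, 'assets', '/stale.js');
```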
I'm using Vue CLI PWA workbox plugin mode.
My app has more than 1000 assets, and precaching them results in very bad performance. Please check it out:
https://nikudu.com/
Is there a way to precache files more specifically?
For example, precache files by URL: on URL x/y only precache files 1, 5, 6 and on URL x/v precache files 7, 8, 2.
I have worked with PWAs in Angular, so I have no experience with Vue. That said, I would suggest pre-caching only your app shell (that is, the minimum static assets/JS you want to present to the user when the app is accessed offline).
For the other images you can take a lazy approach and cache each image only after it has been requested once (though I do not know whether this goes against your requirements).
From this point you could even expand your logic to determine which routes are used most and preload only those images in the background once the user lands on the app.
If you are interested, I wrote an article about service workers and caching strategies that is independent of Angular, so you can use the same concepts in your app too.
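The lazy cache-on-first-request idea can be sketched like this; the cache name is arbitrary, and the cache/fetcher parameters are injected only so the logic can run outside a service worker (in a real worker you would use the global `caches` and `fetch`):

```javascript
// Cache-on-first-use: serve from cache when possible, otherwise fetch the
// resource and store the response for next time.
async function cacheFirst(request, cache, fetcher) {
  const hit = await cache.match(request);
  if (hit) return hit;
  const response = await fetcher(request);
  if (response.ok) await cache.put(request, response.clone());
  return response;
}

// In a real service worker this would be wired up roughly as:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(
//     caches.open('runtime-images').then((cache) =>
//       cacheFirst(event.request, cache, fetch)
//     )
//   );
// });
```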
Two popular WordPress plugins, Super Cache and Contact Form 7: there are still some issues around _nonce AJAX calls. How can they be made to work together smoothly? There are definitely situations where you want ALL your posts (pages) to stay super-cached, not served dynamically without caching, while still preserving contact form functionality.
When "Cache rebuild" is not checked in the settings, Super Cache behaves as follows:
standard pages will NOT trigger a new cache file to be generated (i.e. an existing supercache file, if there is one, really is served without triggering regeneration);
pages with a wpcf7 form included WILL trigger regeneration of the supercache file while serving the existing one for the request. I think that is the point where things go wrong.
Question: how can I stop unwanted regeneration of pages that include a wpcf7 form? Based on the plugin author's documentation, my expectation is that the wpcf7 form has evolved to meet these AJAX requirements, such as _nonce calls. Any idea how to resolve this? Should it be:
hard-coded (e.g. excluding the _nonce call and/or URL parameter) within the wpcf7 form (which is against WP standards), or
handled via settings in Super Cache, or
is there any other idea, solution or alternative?
In this thread wpcf7 author says:
Contact Form 7 uses WP nonce as secret key. As nonce life is 24 hours by default, if cache clears less than 24 hours, the two plugins should work without problem, even if the page is cached.
I've read all over the place and I'm trying to figure out whether I understand how caching works in Drupal 6. We have a site with a real-time stock ticker. With Drupal caching enabled, the stock price ends up cached and frozen at a specific value. I figured I could handle this by putting the ticker in a block defined in a custom module and setting BLOCK_NO_CACHE, but if I understand correctly, when site caching is enabled the ENTIRE page gets cached, including any and all blocks on it, regardless of their individual cache settings. Is this correct? Am I therefore unable to take advantage of site caching if certain spots should not be cached? Does anyone know of another solution that would give me the best of both worlds: site caching, but also a real-time stock ticker? By the way, the ticker makes a JSON request to the Yahoo Finance API to get the quote.
You are correct: the BLOCK_NO_CACHE directive is only applicable at the block level. When page caching is enabled, Drupal caches the entire page (which includes the blocks as well), but this applies only to anonymous users. Drupal's philosophy is that content for anonymous users is always the same, so they are served the cached page. This does not apply to authenticated users, since different users may have access to different parts of the page (e.g. the links block looks different for an admin than for a regular user).
You might want to have a look at this discussion: BLOCK_NO_CACHE not working for anonymous users
And there is a solution, which you'll stumble upon in that discussion: the Ajax Blocks module. An extract from the module description:
Permits to load some blocks by additional AJAX request after loading the whole cached page when the page is viewed by anonymous user. It is suitable for sites which are mostly static, and the page caching for anonymous users is a great benefit, but there are some pieces of information that have to be dynamic.
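The same pattern applied to the stock ticker would look roughly like this: serve the cached page, then fill the ticker block from the client. The endpoint path, the element id, and the quote field names are all hypothetical here:

```javascript
// Pure formatting step, separated so it can be tested on its own.
function formatQuote(quote) {
  return `${quote.symbol}: ${quote.price.toFixed(2)}`;
}

// Browser part: after the cached page loads, fetch the live quote and fill
// the placeholder element. '/ajax/stock-quote' is a hypothetical endpoint
// that would proxy the Yahoo Finance JSON on the server side.
async function loadTicker() {
  const res = await fetch('/ajax/stock-quote');
  const quote = await res.json();
  document.getElementById('stock-ticker').textContent = formatQuote(quote);
}
```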
I have a website which is displayed to visitors via a kiosk. People can interact with it. However, since the website is not locally hosted and relies on an internet connection, the page loads are slow.
I would like to implement some kind of lazy caching mechanism such that as and when people browse the pages - the pages and the resources referenced by the pages get cached, so that subsequent loads of the same page are instant.
I considered using HTML5 offline caching - but it requires me to specify all the resources in the manifest file, and this is not feasible for me, as the website is pretty large.
Is there any other way to implement this? Perhaps using HTTP caching headers? I would also need some way to invalidate the cache at some point to "push" the new changes to the browser...
The usual approach to handling problems like this is with HTTP caching headers, combined with smart construction of URLs for resources referenced by your pages.
The general idea is this: every resource loaded by your page (images, scripts, CSS files, etc.) should have a unique, versioned URL. For example, instead of loading /images/button.png, you'd load /images/button_v123.png and when you change that file its URL changes to /images/button_v124.png. Typically this is handled by URL rewriting over static file URLs, so that, for example, the web server knows that /images/button_v124.png should really load the /images/button.png file from the web server's file system. Creating the version numbers can be done by appending a build number, using a CRC of file contents, or many other ways.
Then you need to make sure that, wherever URLs are constructed in the parent page, they refer to the versioned URL. This obviously requires dynamic code used to construct all URLs, which can be accomplished either by adjusting the code used to generate your pages or by server-wide plugins which affect all text/html requests.
Then you set the Expires header for all resource requests (images, scripts, CSS files, etc.) to a date far in the future (e.g. 10 years from now). This effectively caches them forever. It means that all resources loaded by each of your pages will always be fetched from cache; cache invalidation never happens, which is fine because when the underlying resource changes, the parent page uses a new URL to find it.
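For example, with Apache's mod_expires (assuming Apache and a `_v<hash>` naming scheme; other servers have equivalents):

```apache
# Far-future expiry for versioned static assets (requires mod_expires)
<FilesMatch "_v[0-9a-f]+\.(png|jpg|gif|js|css)$">
    ExpiresActive On
    ExpiresDefault "access plus 10 years"
</FilesMatch>
```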
Finally, you need to figure out how you want to cache your "parent" pages. How you do this is a judgement call. You can use ETag/If-None-Match HTTP headers to check for a new version of the page every time, which will very quickly load the page from cache if the server reports that it hasn't changed. Or you can use Expires (and/or Max-Age) to reload the parent page from cache for a given period of time before checking the server.
If you want to do something even more sophisticated, you can always put a custom proxy server on the kiosk-- in that case you'd have total, centralized control over how caching is done.
When I log in to my Gmail Inbox, it starts caching the mails one by one in JavaScript.
When I click on a mail in the Inbox, it doesn't send an Ajax request then to fetch the mail contents.
Instead it serves from an already cached JavaScript array.
Is there any good jQuery plugin to implement this?
I came across a few but they don't seem to be under active development.
http://plugins.jquery.com/project/jCache
http://plugins.jquery.com/project/jCacher
Any better plugin?
Edit 1:
My requirement is exactly the same as what Gmail is doing.
There is a ticket management system which shows a list of open tickets (say 100 tickets per page), and once you click on a ticket its details are displayed. I want to cache the details of all 100 tickets displayed on the page.
I am planning to implement the cache simply as an object of key-value pairs, but I am looking for a plugin that takes care of tasks like setting/getting values from the cache, auto-updating the cache periodically, and so on.
Storing the data in a JS object will be enough for me. I don't see any advantage in using HTML5 local storage, as:
* no offline browsing is required,
* I want to load fresh data every time a new window is opened, and
* I won't need a huge amount of memory.
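A plain key-value cache with per-entry expiry, along the lines described above, can be sketched as follows (the `createCache` name and the injectable clock are illustrative):

```javascript
// Minimal key-value cache with per-entry expiry.
// `now` is injectable for testing; it defaults to Date.now.
function createCache(ttlMs, now = Date.now) {
  const store = new Map();
  return {
    set(key, value) {
      store.set(key, { value, expires: now() + ttlMs });
    },
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (entry.expires <= now()) {
        store.delete(key); // stale: evict and force a fresh fetch
        return undefined;
      }
      return entry.value;
    },
  };
}

// Usage: cache ticket details on first click, serve from cache afterwards.
// const tickets = createCache(60 * 1000); // 1-minute freshness
// tickets.set(ticketId, detailsFromAjax);
```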
You could use the new HTML5 localStorage: http://diveintohtml5.ep.io/storage.html
I think Google is using HTML5 local storage rather than caching. They seem to be big fans of HTML5 and adopt anything as soon as it's available. If you must use cookies, I'd recommend this one.
As Pointy suggested, the cache implementation was indeed heavily dependent on my application. Hence I have written my own code to handle this requirement. Thanks to all.