I am using "caches" to cache in service worker my PWA assets and make it available offline.
When I change an asset, specifically a js file, I modify at least one byte in my service worker to trigger its native update: the service worker updates and retrieves all of its previously cached assets to refresh its caches.
Yet, server responds with a cached version of the file, and whereas I own the files served I have no control over Cache-Control http header.
How can I prevent browser caching of service-worker-cached resources? Versioning the files with a
"?v=" + version
suffix won't work, because this version cannot be passed to the <script> or <link> tags that reference the cached files in the HTML files, which are static, and the caches would then not recognize and serve the unversioned file names offline.
Since "caches.addAll" does not allow AFAIK any means to specify http request headers such as Cache-Control as fetch or XMLHttpRequest do, how can I prevent additional aggressive caching stages over my assets?
I am using plain Javascript and if possible I need it to be done without any additional library. Note also that meta http-equiv tags won't solve the problem for assets other than complete html.
You can bypass the browser's cache by explicitly constructing a Request object with a cache property set to an appropriate cache mode. 'reload' is a good choice, as it will bypass the browser's cache for the outgoing request, but it will update the browser's cache with the response (so you'll have a fresher browser cache overall). If you don't even want that update to be performed, you could use 'no-store'.
Here's some code showing how to do this concisely for an array of URLs that could be passed in to cache.addAll():
async function addAllBypassCache(cacheName, urls) {
  const cache = await caches.open(cacheName);
  // Build Request objects that bypass the HTTP cache for the outgoing
  // request (but still update the HTTP cache with the response).
  const requests = urls.map((url) => new Request(url, {
    cache: 'reload',
  }));
  await cache.addAll(requests);
}
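For instance, it could be called from the service worker's install handler like this (the cache name and URL list below are placeholders, not part of the original answer):

const PRECACHE_URLS = ['/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Wait until every asset has been re-fetched from the network and cached.
  event.waitUntil(addAllBypassCache('precache-v2', PRECACHE_URLS));
});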
I'd like to understand how to enable caching in Strapi for a specific (or any) API endpoint. At the moment, when the browser hits my endpoint, I don't see any caching-related headers in the response. Is there a way to use ETags and a long cache time so the JSON response can be cached?
There is one mention of ETags in the docs, but I'm not sure how to implement it. If anyone can provide more detailed information, it would be appreciated.
At least for static files, this can be done within Strapi itself. The middlewares documentation suggests that there is already a middleware called public, which sets a Cache-Control header with its maxAge when it serves files from the public/ directory.
But if you want your uploaded files cached (i.e. files within the public/uploads/ directory), that's not enough, because the middleware strapi-provider-upload-local (not yet documented) runs first.
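For the static case, the public middleware's maxAge can presumably be set in config/middleware.js, roughly like this (a sketch assuming Strapi v3's middleware config format; the value is in milliseconds):

module.exports = ({ env }) => ({
  settings: {
    // Built-in middleware serving the public/ directory;
    // maxAge controls the Cache-Control max-age it sends.
    public: {
      maxAge: 3600000
    }
  }
});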
A recently published package solves this issue:
npm i strapi-middleware-upload-plugin-cache
Simply activate it by adding or modifying the config/middleware.js file with the following content:
module.exports = ({ env }) => ({
  settings: {
    'upload-plugin-cache': {
      enabled: true,
      maxAge: 86400000
    }
  }
});
I suggest you manage this kind of thing outside of Strapi, with another service. For example, if you host your app on AWS, you can use CloudFront.
When I delete an image from my GraphQL server using cloudinary.uploader.destroy(public_id), it is deleted from the Cloudinary media library (https://cloudinary.com/console/media_library/folders/%2F),
but the image is still available if I access it via the Cloudinary delivery endpoint (https://res.cloudinary.com/db9rcrnuw/image/upload/v1576054005/47122.png).
I want those endpoints to stop serving the image as well when it is deleted.
Here, screen.basePath is the public_id of the image:
const screen = await ctx.prisma
  .deleteScreen({
    id: args.screenId
  })
  .$fragment(fragment);

if (screen.basePath.length === 5) {
  console.log(screen.basePath.length);
  // Remove the asset from Cloudinary using its public_id
  cloudinary.uploader.destroy(screen.basePath, function (error, result) {
    console.log(result, error);
  });
  return screen;
}
The short answer is that this is caused by a combination of not using the 'invalidate' parameter in your destroy API call and a difference between the URL format the resource is accessed with (i.e. with a full version number (v123456789), 'v1', or no version number) and the format your account is configured to send for invalidation.
The first thing to do is ensure that all destroy API calls include the 'invalidate' parameter set to 'true' if you'd like CDN invalidation.
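For example, using the destroy call from the question (a sketch assuming the v2 Node SDK argument order; only the options argument is new):

// Pass invalidate: true so the CDN-cached copy is purged along with the asset
cloudinary.uploader.destroy(screen.basePath, { invalidate: true }, function (error, result) {
  console.log(result, error);
});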
Regarding the URL formats:
The 'v1576054005' that is part of delivery URLs is a version number that is essentially the UNIX timestamp of the upload time of the asset. Its main purpose is to always return the latest image and avoid CDN caching (upload API responses return the URL with the latest upload version). A bit more information on this topic can be found in this article - https://support.cloudinary.com/hc/en-us/articles/202520912-What-are-image-versions.
Please note that there are three possible URL formats Cloudinary can send for invalidation at the CDN, and these are outlined here: https://support.cloudinary.com/hc/en-us/articles/360001208732-What-URL-conventions-are-invalidated
Invalidation requests are sent when you delete or overwrite an image using the Media Library UI, or when you use the SDK/API and also pass the 'invalidate' parameter set to 'true'.
By default, all accounts send invalidations for the default URL format the SDKs produce, which uses no version number for assets in the root of your account and a 'v1' placeholder for assets in folders (option 1 from the URL above).
If you were accessing the image with the full version component, then that URL isn't sent for invalidation by default, which is why you are likely getting a cached copy returned.
In your case, the URL that would've been sent for invalidation would be without a version component (as the resource is in the root folder) i.e.
https://res.cloudinary.com/db9rcrnuw/image/upload/47122.png
How you are building your URLs, e.g. whether you're using the SDK helper methods or taking the URL from the url or secure_url fields of the Upload API response (which use the full version number), determines the format and thus how your account should be configured to invalidate.
I suggest you email Cloudinary support (support@cloudinary.com) and share a link to this thread as well as some details on how the URLs you're using are generated, so that your account can be configured accordingly.
In my XPage I have an xe:djxDataGrid (dojox.grid.DataGrid) which uses xe:restService, which seems to use dojox.data.JsonRestStore.
Everything works fine without proxy but my client accesses the application via a proxy because of corporate policy. After a user updates data in the DataGrid it shows old values when accessed behind the proxy.
When the REST control/JsonRestStore sends an ajax GET request to get data, there is no Cache-Control parameter in the request headers, and Domino does not place an Expires parameter in the response headers. I believe that's why the old GET response gets cached by the proxy.
We have tried to disable the cache in browsers, but that does not help, which indicates the proxy is caching the requests.
I believe this could be solved either by:
Setting Cache-Control parameter in request headers OR
Setting Expires parameter in response headers
But I haven't found a way to set either of these. For the XPage itself, Domino sets an Expires: -1 response header, but not for the ajax GET request, which is:
/mypage.xsp/?$$viewid=!ddrg6o7q1z!&$$axtarget=view:_id1:_id2:callback1:restService1
This returns the JSON data to JsonRestStore and gets cached by the proxy.
One option is to try to get an exception added to the proxy so requests to this site would bypass the proxy cache. But exceptions are generally not easy to get through.
Any ideas? Thanks.
Update1
My colleague suggested that I could intercept the XHR GET requests made by dojox.data.JsonRestStore and add a time parameter to the URL to prevent caching. Here is my question about that:
Prevent cache in every Dojo xhr request on page
Update2
@SvenHasselbach has a great solution for preventing cache for all XHRs:
http://openntf.org/XSnippets.nsf/snippet.xsp?id=cache-prevention-for-dojo-xhr-requests
It seems to work perfectly: a &dojo.preventCache= parameter is added to the URLs, and the requests seem to return correct JSON with this parameter as well. But the DataGrid stops working when I use that code. Every XHR causes this error:
Tried with Firefox and Chrome. The first page of data still loads because the XHR interception is not yet in place, but the subsequent pages show only "..." in each cell.
The solution is Sven Hasselbach's code in the comment section of Julian Buss's blog, which needs to be slightly modified.
I changed xhrPost to xhrGet and did not place the code inside dojo.addOnLoad. When placed there, it was not effective for the first XHR made by the DataGrid/Store.
I also removed the headers modification because it overrides the existing headers. When the REST control requests data from the server with xhrGet, the URL is always the same and the rows requested are specified in an HTTP header like this:
Range: items=0-9
This header (and others) disappears when the original code is used. To just add headers we would have to take the existing headers from args and append to them. I didn't see a need for that, because it should be enough to add the parameter to the URL. Here is the extremely simple code I'm using:
// Keep a reference to the original xhrGet (only once)
if (!dojo._xhrGet) {
  dojo._xhrGet = dojo.xhrGet;
}

// Wrap xhrGet so every GET request gets the dojo.preventCache URL parameter
dojo.xhrGet = function (args) {
  args.preventCache = true;
  return dojo._xhrGet(args);
};
Now I'm getting all rows and all XHR Get URLs have &dojo.preventCache= parameter which is exactly what I wanted. Next we'll test in customer environment to see if this solves their problem.
Update
As Julian points out in his blog, I could also use a Web Site Rule to set Expires or Cache-Control HTTP response headers.
Update
The customer reports it's working now for them!
I'm developing an application which is supposed to serve different content for "normal" browser requests and for AJAX requests to the same URL
(in fact, it encapsulates the response HTML in a JSON object if the request is AJAX).
For this purpose, I'm detecting an AJAX request on the server side and processing the response appropriately; see the pseudocode below:
function process_response(request, response)
{
  if request.is_ajax
  {
    response.headers['Content-Type'] = 'application/json';
    response.headers['Cache-Control'] = 'no-cache';
    response.content = JSON( some_data... )
  }
}
The problem is that when the first AJAX request to the currently viewed URL is made, strange things happen in Google Chrome: if, right after the response arrives and is processed via JavaScript, the user clicks some link (a static one that redirects to another page) and then clicks the back button, he sees the returned JSON code instead of the rendered website (from the server logs I can say that no request is made). It seems that Chrome stores the latest response for the specific URL and doesn't take into account that it has a different Content-Type, etc.
Is that a bug in Chrome, or am I misusing the HTTP protocol?
--- update 12 11 2012, 12:38 UTC
Following PatrikAkerstrand's answer, I've found the following Chrome bug: http://code.google.com/p/chromium/issues/detail?id=94369
Any ideas how to avoid this behaviour?
You should also include a Vary header:
response.headers['Vary'] = 'Content-Type'
Vary is a standard way to control caching context in content negotiation. Unfortunately, it also has buggy implementations in some browsers, see Browser cache vary broken.
I would suggest using unique URLs.
Depending on your framework's capabilities, you can redirect (302) the browser to URL + .html to force the response format and make the cache key unique within the browser session. Then for AJAX requests you can still keep the suffix-less URL. Alternatively, you may suffix the AJAX URL with .json instead.
Other options are prefixing AJAX requests with /api or adding a cache-busting query param such as ?rand=1234, as in the sketch below.
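A rough sketch of the query-param approach (the URL '/articles/42' and the parameter name are only placeholders):

// Add a throwaway query parameter so each AJAX request has a unique URL
// and never collides with the full-page cache entry for the same path.
var url = '/articles/42?rand=' + new Date().getTime();
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.onload = function () {
  console.log(JSON.parse(xhr.responseText));
};
xhr.send();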
Setting Cache-Control to no-store did it in my case, while no-cache didn't. This may have unwanted side effects, though.
no-store: The response may not be stored in any cache. Although other directives may be set, this alone is the only directive you need in preventing cached responses on modern browsers.
Source: Mozilla Developer Network - HTTP Cache-Control
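Applied to the handler in the question, only the header value changes. Here is a sketch of how that might look in a hypothetical Express handler (Express and the route are used purely for illustration and are not part of the question):

const express = require('express');
const app = express();

app.get('/page', (req, res) => {
  if (req.xhr) {
    // AJAX: return JSON and forbid storing the response in any cache
    res.set('Cache-Control', 'no-store');
    res.json({ html: '<div>partial content</div>' });
  } else {
    // Normal navigation: return the full page
    res.set('Cache-Control', 'no-cache');
    res.send('<html><body>full page</body></html>');
  }
});

app.listen(3000);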
I'm using the jQuery .get() function to load in a template file and then display the loaded HTML in a part of the page by targeting a particular DOM element. It works well, but I've recently realised, for reasons that confound me, that it is caching my template files and masking changes I have made.
Don't get me wrong ... I like caching as much as the next guy. I want it to cache when there's no timestamp difference between the client's cache and the file on the server. However, that's not what is happening. To make it even odder ... using the same function to load the templates ... some template files are loading the updates and others are not (preferring the cached version over recent changes).
Below is the loading function I use.
function LoadTemplateFile(DOMlocation, templateFile, addEventsFlag) {
  $.get(templateFile, function (data) {
    $(DOMlocation).html(data);
  });
}
Any help would be greatly appreciated.
NEW DETAILS:
I've been doing some debugging and now see that the "data" variable that comes back to the success function does have the newer content, but for reasons that are not yet clear to me, what gets inserted into the DOM is an old version. How in heck this could happen has now become my question.
You can turn caching off in the jQuery.get() call. See the jQuery.ajax() doc for details on the cache: false option. The details of how caching works are browser-specific, so I don't think you can control how it works, just whether it's on or off for any given call.
FYI, when you turn caching off, jQuery bypasses the browser caching by appending a unique timestamp parameter to the end of the URL which makes it not match any previous URL, thus there is no cache hit.
You can probably also control cache lifetime and several other caching parameters on your server which will set various HTTP headers that instruct the browser what types of caching to allow. When developing your app, you probably want caching off entirely. For deployment, I would suggest that you want it on and then when you rev your app, the safest way to deal with caching is to change your URLs slightly so the new versions of your URLs are different. That way, you always get max caching, but also users get new versions immediately.
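For example, here is a sketch of the loader from the question rewritten as a $.ajax call with cache: false (the function and parameter names are simply those from the question):

function LoadTemplateFile(DOMlocation, templateFile, addEventsFlag) {
  $.ajax({
    url: templateFile,
    cache: false, // jQuery appends a unique timestamp param so the browser cache is bypassed
    success: function (data) {
      $(DOMlocation).html(data);
    }
  });
}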
$.get will always cache by default, especially on IE. You will need to manually append a query string, or use the ajaxSetup method to bust the cache.
As an alternative, I found in the jQuery docs that you can just override the default caching. For me, I didn't have to convert all of my $.get calls over to $.ajax. Note that this also overrides the default behavior for all other types of calls, like $.getScript, whose default is cache: false, the opposite of $.get.
https://api.jquery.com/jquery.getscript/
$.ajaxSetup({
  cache: false
});