I have an issue with TYPO3 and caching.
We have the following setup (sketched below):
1 Nginx load balancer (ip_hash, i.e. sticky sessions)
2 TYPO3 web instances
1 Redis cache shared by both TYPO3 instances
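A minimal sketch of the balancer part, assuming the two TYPO3 instances listen on hosts web1 and web2 (hypothetical names):

    upstream typo3_backend {
        ip_hash;          # clients from the same IP stick to the same backend
        server web1:80;
        server web2:80;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://typo3_backend;
        }
    }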
The issue is that when the first web server serves a given page, it gets cached. As long as the same web server is serving that page, the cached version is returned.
As soon as the page request is served by the other web server, the full cache gets reloaded.
I noticed that additional items are added to the cache although the page content has not changed.
Is there anything I could check to avoid these unnecessary cache reloads?
There are some considerations before scaling TYPO3 horizontally: https://stackoverflow.com/a/63594837/2819581
Basically, the database, the caches and some directories all carry state that is not independent of the others.
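One thing worth checking is that every cache both instances rely on really points at the shared Redis, and that both instances run with identical configuration (same encryptionKey, same extensions), since differing local state can make each server compute different cache entries. A minimal sketch for one cache, assuming Redis is reachable at host redis (hostname and database number are placeholders):

    // AdditionalConfiguration.php (identical on both instances)
    $GLOBALS['TYPO3_CONF_VARS']['SYS']['caching']['cacheConfigurations']['pages'] = [
        'backend' => \TYPO3\CMS\Core\Cache\Backend\RedisBackend::class,
        'options' => [
            'hostname' => 'redis',
            'port'     => 6379,
            'database' => 0,
        ],
    ];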
I have built a new site for a customer, taken over managing their domain, and moved it to new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who had previously visited the site keep seeing the old one, since it all loads from the cache. The old site no longer exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue and found a way to resolve it? Note that asking the users to delete the service worker from their browser won't work.
You can use cache busting to achieve this. As per KeyCDN:
Cache busting solves the browser caching issue by using a unique file
version identifier to tell the browser that a new version of the file
is available. Therefore the browser doesn’t retrieve the old file from
cache but rather makes a request to the origin server for the new
file.
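A minimal illustration, with hypothetical file name and version:

    <!-- old reference - may keep coming from cache -->
    <script src="/js/app.js"></script>

    <!-- busted reference - the changed URL forces a fresh request -->
    <script src="/js/app.js?v=2"></script>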
In case you want to update the service worker itself, you should know that for a service worker, an update is triggered if any of the following happens:
A navigation to an in-scope page.
Functional events such as push and sync, unless there's been an update check within the previous 24 hours.
Calling .register() only if the service worker URL has changed. However, you should avoid changing the worker URL.
Updating the service worker
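If the new hosting can serve a file at the exact URL the old worker was registered under (an assumption, since that path came from the previous developer), a common trick is to ship a self-destroying worker there, so the browser's next update check removes the old site:

    // self-destroying service worker, served at the old worker's URL
    self.addEventListener('install', () => {
      // take over from the old worker without waiting for tabs to close
      self.skipWaiting();
    });

    self.addEventListener('activate', (event) => {
      event.waitUntil(
        self.registration.unregister()
          .then(() => self.clients.matchAll({ type: 'window' }))
          // reload every open tab so it fetches the new site from the network
          .then((clients) => clients.forEach((client) => client.navigate(client.url)))
      );
    });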
Maybe using the Clear-Site-Data header would be the most thorough solution.
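For instance, sent from nginx on the new site ("storage" also drops service worker registrations in browsers that support the header; support is not universal, so treat this as a sketch):

    location / {
        add_header Clear-Site-Data '"cache", "storage"';
    }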
I have created a Progressive Web Application (PWA) with Angular 5.0 and .NET Core 2.0. It works fine in offline mode, but only static data is cached for offline use. I need to store previously requested network data in the service worker cache as well, so that I can fetch that data from the cache while offline.
You can also use the Angular service worker for this.
Data Groups - Cache External API Data
The data groups config allows you to cache external API calls, which makes it possible for your app to use an external data source without a network connection. This data is not known at build time, so it can only be cached at runtime. There are two possible strategies for caching data sources: freshness and performance.
api-freshness - This strategy will attempt to serve data from the network first, then fall back to the cache. You can set a maxAge property that defines how long to cache responses and a timeout that defines how long to wait before falling back to the cache.
api-performance - The performance cache will serve data from the cache first and only reach out to the network if the cache is expired.
An example can be found here, in the ngsw-config.json section.
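A minimal sketch of such a configuration, with hypothetical group names, URL patterns and limits:

    {
      "dataGroups": [
        {
          "name": "api-freshness",
          "urls": ["/api/**"],
          "cacheConfig": {
            "strategy": "freshness",
            "maxSize": 100,
            "maxAge": "1d",
            "timeout": "10s"
          }
        },
        {
          "name": "api-performance",
          "urls": ["/lookup/**"],
          "cacheConfig": {
            "strategy": "performance",
            "maxSize": 100,
            "maxAge": "7d"
          }
        }
      ]
    }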
Try to check HTTP Caching.
All you need to do is ensure that each server response provides the
correct HTTP header directives to instruct the browser on when and for
how long the browser can cache the response.
For further info, you can check the whole documentation. It provides examples and illustrations that help in understanding HTTP caching better.
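For illustration, a response carrying such directives might look like this (values are hypothetical):

    HTTP/1.1 200 OK
    Cache-Control: private, max-age=600
    ETag: "33a64df5"

With headers like these, the browser may reuse the response for ten minutes, and afterwards revalidate it with If-None-Match instead of downloading it again.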
We started caching our static pages in Akamai with a user-defined TTL (for example, 7 days). We want control over caching, so on the 7th day we will purge this cache and recreate it by curling all cached pages.
The issue is that Akamai serves pages from the geographically nearest node, so there is no control/validation over cache creation. My questions are:
A. How can I ensure purge happens in all nodes
B. How can I ensure while curling urls, cache is updated in all nodes.
C. Is there any better way of controlling cache in akamai?
From what I know, if you've configured a TTL in Akamai, the elements in cache become stale after the defined period, and when a request comes to that node after the cache has become stale, it will hit the origin / its parent node (if the server is a child) to refresh the stale content. You don't have to explicitly curl a URL to refresh it. Alternatively, if you want to forcibly refresh the cache, you can use the Akamai APIs or the Edgesuite interface to purge it manually.
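For the API route, a sketch of a Fast Purge (CCU v3) request, which propagates the purge to all edge nodes (host and URL are placeholders; the request must be signed with your EdgeGrid credentials):

    POST /ccu/v3/invalidate/url/production HTTP/1.1
    Host: your-account.purge.akamaiapis.net
    Content-Type: application/json

    { "objects": [ "https://www.example.com/page.html" ] }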
I am trying out a caching solution through ARR v2 on my local machine. I have a single-node server farm pointing to the original server, and all requests coming in to my local machine are filtered through an inbound rule and directed to that server farm.
In the ARR Cache action panel I have added a cache control rule to always cache all objects for 20 minutes for all requests directed to the server farm. I assume this overrides the memory cache duration setting in the server farm's caching action panel (correct me if I'm wrong).
Some other settings on IIS include:
Output caching is disabled for both user mode and kernel mode (a bit weird to me that objects are still cached in the kernel cache)
The server proxy setting is disabled in the ARR Cache action panel (I suppose this won't matter, because the server farm is also a proxy)
After completing all the settings, I can see that all objects are cached in my disk cache when I trigger a web request. Through "netsh http show cachestate" I can also observe that they are all cached in the kernel cache (the ARR memory cache).
However, when I trigger the web request again, although the request does not go to the original server, requests for HTML and images don't hit the memory cache: "netsh http show cachestate" shows their hit count remaining at '1' and their TTL refreshed in sync with the corresponding disk-cached objects. In contrast, CSS and JavaScript always hit the memory cache until they expire. In my MIME list they are all treated as static files, so I don't understand why they behave differently here.
Then I cleared the disk-cached objects (both primary and secondary) and triggered the web request again. Now (note that all objects are still in the memory cache), only HTML and images are re-cached to disk, while requests for CSS and JavaScript are still served from the memory cache, so they don't appear in the disk cache.
I have just started experimenting with ARR and don't fully understand the concepts yet. I hope someone can help.
Thanks in advance.
I'm trying to configure Varnish to cache range requests. I notice the http_range_support option, but everything I've read says that this will attempt to cache the entire file before satisfying the request. Is it possible to satisfy range requests without requiring that the entire file already be cached?
It depends on the Varnish version.
Since Varnish 3.0.2 you can stream uncached content while it caches the full object.
https://www.varnish-software.com/blog/http-streaming-varnish
"Basically, his code lifts the limitations of the 3.0 release and allows Varnish to deliver the objects, while they are being fetched, to multiple clients."
The feature is enabled via beresp.do_stream
https://www.varnish-software.com/blog/streaming-varnish-30
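A minimal VCL sketch for Varnish 3.0 (in Varnish 4 and later the flag moved to vcl_backend_response, and streaming became the default):

    sub vcl_fetch {
        # deliver the body to clients while the backend fetch
        # is still running, instead of buffering the whole object
        set beresp.do_stream = true;
    }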