Coherence triggered event upon creating cache - events

I am developing a web page where the server has to send a cache name to the client whenever a new cache is created via ConfigurableCacheFactory.ensureCache() or CacheService.ensureCache() by any other Extend client.
Is there any event I can listen for on the server side that is triggered after a Coherence cache is created in the cluster, so that I can send the newly created cache name to the client? Any help would be appreciated. Thank you!

Normally, I would start by looking at the LiveEvents feature. For example:
http://www.slideshare.net/OracleCoherence/coherence-live-events
http://docs.oracle.com/middleware/1213/coherence/develop-applications/api_live_events.htm#COHDG5611
http://docs.oracle.com/middleware/1213/coherence/tutorial/uem.htm#COHTU950
https://www.youtube.com/watch?v=ebEBv817Jn0
However, I don't think there is an event for a cache being created. You can certainly detect when a cache is created, but it is a relatively low-level capability, e.g. implementing (overriding) a ConfigurableCacheFactory, a BackingMapManager, or a backing map. All of these are very advanced. Perhaps there is a simpler approach you could consider, like just putting that new cache name somewhere in a registry or a thread-local variable?

Related

Can Digital Ocean Stateless Servers (I have 4 servers running on Digital Ocean) work with the Caching Policy I implemented on Spring Boot?

I have implemented caching in my Spring Boot REST application. My policy includes a time-based cache eviction strategy and an update-based cache eviction strategy. I am worried that, since I employ stateless servers, if a method is called to update certain data and it is handled by server instance A, the corresponding caches in server instances B, C, and D are not updated as well.
Is this an issue I would face, and is there a way to overcome it?
This is one of the oldest problems in software development: cache invalidation when you have multiple servers.
One way to handle it is to move the cache out of the individual servers and into something shared, such as another instance that holds the cache entries every other app refers to, or something like Redis (a centralized cache).
A second way is to broadcast a message so that each server knows to invalidate the entry once the data has been modified or deleted (a sketch of this approach follows the list). Here you run the risk of the message not being processed, leaving a stale entry on some server(s).
Another option is to have some sort of write-ahead log (like Kafka or Redis Streams) that is processed by each server, so they all process the events deterministically and end up with the same cache state.
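As a rough sketch of the broadcast approach, here it is in Node.js with Redis pub/sub rather than Spring Boot, purely for illustration; the channel name and cache keys are made up, and in a Spring application you would express the same idea through your cache manager plus a message broker.

const { createClient } = require('redis'); // node-redis v4-style client

const localCache = new Map(); // each instance's in-memory cache

async function startInvalidationListener() {
  const subscriber = createClient();
  await subscriber.connect();
  // Every instance subscribes; whichever instance handles the write publishes.
  await subscriber.subscribe('cache-invalidation', (key) => {
    localCache.delete(key); // drop the now-stale entry on this instance
  });
}

async function updateRecord(publisher, key, value) {
  // 1) persist the change (omitted), 2) refresh the local cache,
  // 3) tell every other instance to evict its copy of this key.
  localCache.set(key, value);
  await publisher.publish('cache-invalidation', key);
}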

What does the stale-while-revalidate cache strategy mean?

I am trying to implement different cache strategies using a Service Worker. For the following strategies the implementation is completely clear:
Cache first
Cache only
Network first
Network only
For example, to implement the cache-first strategy, in the fetch handler of the service worker I first ask the CacheStorage (or any other cache) for the requested URL; if a match exists I respondWith it, and if not I respondWith the result of a network request.
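For reference, a minimal cache-first fetch handler might look roughly like this (a sketch, not taken from the question):

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve the cached response when there is a match, otherwise hit the network.
      return cached || fetch(event.request);
    })
  );
});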
But for the stale-while-revalidate strategy, according to this definition from Workbox, I have the following questions:
First, about the mechanism itself: does stale-while-revalidate mean "use the cache until the network responds, then use the network data", or just "use the network response to renew the cached data for the next time"?
Second, if the network response is only cached for the next time, what scenarios are a real use case for that?
Third, if the network response should replace what the app shows immediately, how could that be done in a service worker? The fetch handler will already have resolved with the cached data, so the network data can no longer be returned (with respondWith).
Yes, it means exactly the latter. The idea is simple: respond immediately from the cache, then refresh the cache in the background for the next time.
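A minimal sketch of that in a plain service worker (the cache name is arbitrary, and error handling is omitted) could look like this:

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('swr-cache').then(async (cache) => {
      const cached = await cache.match(event.request);
      // Always start a network request so the cache is refreshed for next time.
      const network = fetch(event.request).then((response) => {
        cache.put(event.request, response.clone());
        return response;
      });
      // Respond with the cached copy if there is one; otherwise wait for the network.
      return cached || network;
    })
  );
});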
All scenarios where it is not important to always get the very latest version of the page/app =) I'm using the stale-while-revalidate strategy on two different web applications, one for public transportation services and one for displaying restaurant menu information. Many sites/apps are just fine with this, but of course not all.
One very important thing to note here on #2:
You could, e.g., use stale-while-revalidate only for static assets. This way your HTML, JS, CSS, images, etc. would be cached and quickly served to the user, but data fetched dynamically from an API could still be fresh. For some apps this works, for others not so well; it depends completely on the app. Of course, you have to remember not to change the semantics of your API if the user is running a previous version of the app.
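With Workbox, for example, scoping stale-while-revalidate to static assets could look something like the sketch below (assuming Workbox's module-based API; the route matching and cache names are just examples):

import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate, NetworkFirst } from 'workbox-strategies';

// Static assets: serve quickly from the cache, refresh in the background.
registerRoute(
  ({ request }) => ['style', 'script', 'image', 'font'].includes(request.destination),
  new StaleWhileRevalidate({ cacheName: 'static-assets' })
);

// Dynamic API data: prefer fresh responses, fall back to the cache when offline.
registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new NetworkFirst({ cacheName: 'api-data' })
);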
Not possible in any automatic way. What you could do, however, is implement a message channel between the Service Worker and the "regular JS code on the page" using the postMessage API. You could listen for certain messages on the page and then, from the Service Worker, send a message when an important change has happened and the cache has been updated. Then you could either show the user a prompt telling them that the page really needs to be reloaded right now, or even force-reload it from JS. You would of course need to put the logic for determining when an important update has happened into the Service Worker.
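A rough sketch of that message channel (the message type is made up):

// In the service worker: after updating the cache, notify every open page.
async function notifyPages() {
  const pages = await self.clients.matchAll({ type: 'window' });
  pages.forEach((page) => page.postMessage({ type: 'CACHE_UPDATED' }));
}

// In the page: listen for the notification, then prompt the user or reload.
navigator.serviceWorker.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'CACHE_UPDATED') {
    window.location.reload();
  }
});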

How to clear a browser cached service worker when the old site is no longer accessible?

I have built a new site for a customer, taken over managing their domain, and moved it to new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who previously visited the site keep seeing the old one, since it all loads from a cache. The old site no longer even exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue and know of a way to resolve it? Note, asking the users to delete the service worker from their browser won't work.
You can use cache busting to achieve this. As per KeyCDN:
Cache busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn’t retrieve the old file from cache but rather makes a request to the origin server for the new file.
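As a trivial illustration of the idea (the release identifier is hypothetical and would normally come from your build tool):

// Tie the asset URL to a release identifier so every deploy produces a URL
// the browser has never cached before and must fetch from the origin.
const RELEASE = '2.3.1'; // hypothetical, injected at build time
const stylesheet = document.createElement('link');
stylesheet.rel = 'stylesheet';
stylesheet.href = `/css/main.css?v=${RELEASE}`; // e.g. /css/main.css?v=2.3.1
document.head.appendChild(stylesheet);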
In case you want to update the service worker itself, you should know that an update is triggered if any of the following happens:
A navigation to an in-scope page.
A functional event such as push or sync, unless there has been an update check within the previous 24 hours.
Calling .register(), but only if the service worker URL has changed. However, you should avoid changing the worker URL.
See "Updating the service worker" in the service worker lifecycle documentation for details.
Maybe using the Clear-Site-Data header would be the most thorough solution.
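A minimal sketch of serving that header from the new site, here using Express purely as an example (browser support for the header varies, so treat it as best-effort):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // The directive values must be quoted; "storage" also clears service worker
  // registrations in browsers that support the header.
  res.set('Clear-Site-Data', '"cache", "storage"');
  next();
});

app.get('/', (req, res) => res.send('New site'));
app.listen(3000);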

Raise event after distributed call has refreshed cache in umbraco 7

Is there a built-in way to raise an event on a load-balanced server in Umbraco 7?
I would like to hook into the event pipeline on the load-balanced server once the editor server has finished clearing caches via a distributed call.
The reason for this is that I need to clear some custom caches based on the newly published content and would like to avoid building my own solution for this if there already is an event for this type of functionality.
I have already hooked up to these events:
Umbraco.Core.Services.ContentService.Created
Umbraco.Core.Services.ContentService.Saved
Umbraco.Core.Services.ContentService.Publishing
Umbraco.Core.Services.ContentService.Published
None of these seem to fire on the load-balanced servers, only on the editor server.
Publishing of normal content works as intended with the distributedcalls configuration in my solution.
I've had a very similar need with custom caches, and had success with these events:
umbraco.content.AfterRefreshContent
umbraco.content.AfterUpdateDocumentCache
umbraco.content.AfterClearDocumentCache

How can I cache the data in Meteor?

Thanks, everyone!
Recently I wanted to build a small CMS on Meteor, but I have some questions.
1. Caching: page cache, data cache, etc.
For example, when people search for an article,
on the server side:
Meteor.publish('articles', function (keyword) {
  return Articles.find({ keyword: keyword });
});
and on the client:
Meteor.subscribe('articles', keyword);
That's OK, but...
The question is: every time people do this, it invokes a Mongo query, which reduces performance.
In other frameworks that use plain HTTP or HTTPS, people can rely on something like Squid or Varnish to cache the page or data, so every time you route to a URL you read data from the cache server. But Meteor is built on socket.js/WebSockets, and I don't know how to cache through the socket. I tried Varnish, but saw no effect.
So maybe it ignores the WebSocket? Is there some way to cache the data, in MongoDB or on the server? Can I add some cache server?
2. Chat
I saw the chatroom example at https://github.com/zquestz/simplechat
But unlike an implementation using socket.js, this example saves the chat messages in MongoDB, so the data flow is message -> Mongo -> query -> people, which invokes Mongo queries too!
With socket.js you just keep the socket in the context (or the server-side cache), so the data doesn't go through the DB.
My question is: is there a socket interface in Meteor so I can do message -> socket -> people? And if not, how is the performance in a production environment for something like the chatroom example? (I see it runs slowly...)
With Meteor, you don't have to worry about caching Mongodb queries. Meteor does that for you. Per the docs on data and security:
Every Meteor client includes an in-memory database cache. To manage the client cache, the server publishes sets of JSON documents, and the client subscribes to those sets. As documents in a set change, the server patches each client's cache.
[...]
Once subscribed, the client uses its cache as a fast local database, dramatically simplifying client code. Reads never require a costly round trip to the server. And they're limited to the contents of the cache: a query for every document in a collection on a client will only return documents the server is publishing to that client.
Because Meteor does poll the server every so often to see if the client's cache needs patching, you're probably seeing those polls happening every now and then, but they probably aren't very large requests. Additionally, due to a feature of Meteor called latency compensation, when you update a data source the client immediately reflects the change without first waiting on the server. This reduces the perceived performance hit for the user.
If you have many documents in Mongo, you may also be seeing them all get fetched if you still have the autopublish package enabled. You can fix that by removing it with meteor remove autopublish and writing code to publish only the relevant data instead of the entire database.
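For example, a minimal sketch of a restricted publication, reusing the Articles collection from the question (the field names and limit are assumptions):

// Server side, after `meteor remove autopublish`: publish only the fields
// and documents the client actually needs instead of the whole database.
Meteor.publish('articles', function (keyword) {
  return Articles.find(
    { keyword: keyword },
    { fields: { title: 1, body: 1 }, sort: { createdAt: -1 }, limit: 50 }
  );
});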
If you really need to manage caching manually, the docs also go into that:
Sophisticated clients can turn subscriptions on and off to control how much data is kept in the cache and manage network traffic. When a subscription is turned off, all its documents are removed from the cache unless the same document is also provided by another active subscription.
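In practice that just means keeping the subscription handle around and stopping it when the data is no longer needed, along these lines:

// Client side: keep a handle to the subscription so it can be turned off later.
var articlesSub = Meteor.subscribe('articles', keyword);

// ...later, when this data is no longer needed, stop the subscription; its
// documents are removed from the client cache unless another active
// subscription also provides them.
articlesSub.stop();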
Additional performance improvements to Meteor are currently being worked on, including a DDP-level proxy to support "very large number of clients". You can see more detail on this at the Meteor roadmap.
If you stumbled upon this question not because of a lack of understanding of Meteor's Minimongo, but because you are interested in how to cache subscriptions that are not needed at the moment (but may be in the future, and you don't want to keep their extra DDP overhead between client and server), there are two package options:
https://github.com/ccorcos/meteor-subs-cache
https://github.com/kadirahq/subs-manager
I was creating a mobile app and caching of the database was not working, so I used the GroundDB package for Meteor (https://github.com/raix/Meteor-GroundDB); now the database is always available locally whenever I restart the app.
You should also look into Meteor's appcache package to cache the entire app locally.
