What are the eviction rules for apollo-cache-inmemory? - apollo-server

From what I understand, anything in the cache is ephemeral and is subject to some kind of eviction rule, like LRU. In this case, if we are using the in-memory cache and apollo-link-state to replace Redux or Vuex, how do we guarantee that some state doesn't get evicted in the middle of running the application?

As of Apollo Client v2, there is no eviction whatsoever. Based on the comments it might be on the roadmap for v3.
You can check these Github issues for discussion:
https://github.com/apollographql/apollo-client/issues/3965
https://github.com/apollographql/apollo-feature-requests/issues/4
https://github.com/apollographql/apollo-client/issues/621
As for the more general question - in most cases there is no need for such a guarantee. The reason is that the cache is completely transparent to the app thanks to the Apollo Client and React design. When you use a Query component, your subcomponent receives the data. At that point, you decide what to do depending on whether the data is available or not.
For example, if you decide to render a loading spinner when data is not available, then theoretically each time data is evicted your component will be re-rendered and will show the spinner.
I can imagine a case where you have a long-running, async operation (if it's not async, then again the data cannot be evicted in the middle of it, due to the JavaScript execution model). In such a case (rare, but possible), you could copy the data to local variables first, etc.
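To illustrate the point about the transparent cache, here is a minimal sketch (TypeScript/TSX, assuming Apollo Client v2 with react-apollo and a hypothetical GET_ITEMS query that is not part of the original answer); if the data ever disappeared from the cache, the component would simply fall back into the loading branch while Apollo refetches:

import React from "react";
import gql from "graphql-tag";
import { Query } from "react-apollo";

// Hypothetical query; replace with whatever your schema actually exposes.
const GET_ITEMS = gql`
  query GetItems {
    items {
      id
      name
    }
  }
`;

export const ItemList: React.FC = () => (
  <Query query={GET_ITEMS}>
    {({ loading, error, data }: any) => {
      // If the cached entry were evicted, this branch would render again
      // while the data is being refetched.
      if (loading) return <span>Loading...</span>;
      if (error) return <span>Something went wrong</span>;
      return (
        <ul>
          {data.items.map((item: { id: string; name: string }) => (
            <li key={item.id}>{item.name}</li>
          ))}
        </ul>
      );
    }}
  </Query>
);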

Related

State vs cookie/localstorage read performance

I am developing an app in React + Redux and I have a constant doubt that I can't find documentation about. Is there any performance downside if, let's say in a saga, I read data from a cookie/localStorage instead of from the state? This read would only happen once on each load.
The key thing here is the performance, without taking into consideration whether it's good or bad practice.
Thanks in advance.
First of all - what do you mean by state? In Redux, state is just a plain object (plus some methods, but still). So when you read data from there, you just read properties of an object.
Cookies and localStorage, on the other hand, are DOM APIs, which are slower to begin with; in addition, you not only need to read the data but also parse it (because both cookies and storage work with serialized data). So storage/cookies are definitely slower than state.
You can check http://jsben.ch/nvo5G
BUT! - you can't keep an in-memory object's state between page reloads. For that you can use storage (a pattern named persistent state). And there is probably no other way to implement this functionality: if you need to restore some state on reload, you have just two options - save the state on the client (cookies, storage/client DB) or on the server (and do a fetch request).
These are micro-optimisations; mostly you shouldn't care about them (in the case of reading just once on app start).
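As a rough sketch of the persistent-state pattern (plain TypeScript; the STORAGE_KEY name and the generic store shape are assumptions, not from the original answer) - note that the JSON serialization/parsing is exactly the extra cost compared to reading the in-memory state:

const STORAGE_KEY = "app-state"; // hypothetical key

// Read once on app start; parsing is the extra cost vs. reading a plain object.
export function loadState<T>(): T | undefined {
  try {
    const serialized = localStorage.getItem(STORAGE_KEY);
    return serialized ? (JSON.parse(serialized) as T) : undefined;
  } catch {
    return undefined; // corrupted or unavailable storage: fall back to defaults
  }
}

// Write (ideally throttled) whenever the store changes.
export function saveState<T>(state: T): void {
  try {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
  } catch {
    // storage may be full or disabled; the in-memory state still works
  }
}

In a Redux app, loadState() would typically feed the preloadedState argument of createStore, and saveState() would be called from a throttled store.subscribe listener.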

Is there a "best practice" in microservice development for versioning a database table?

A system is being implemented using microservices. In order to decrease interactions between microservices implemented "at the same level" in the architecture, some microservices will locally cache copies of tables managed by other services. The assumption is that the locally cached table (a) is frequently accessed in "read mode" by the microservice, and (b) has relatively static content (i.e., more of a "lookup table" than transactional content).
The local caches will be kept in sync using inter-service messaging. As the content should be fairly static, this should not be a significant issue/workload. However, on startup of a microservice, there is a possibility that the local cache has gone stale.
I'd like to implement some sort of rolling revision number on the source table, so that microservices with local caches can check this revision number to potentially avoid a re-sync event.
Is there a "best practice" for this approach? Or a "better alternative", given that each microservice is backed by its own database (i.e., no shared database)?
In my opinion you shouldn't be loading the data at startup. It might be a bit complicated to maintain the version.
Cache-Aside Pattern
Generally in a microservices architecture you consider the "cache-aside pattern". You don't build the cache up front but on demand. When you get a request, you check the cache; if the value isn't there, you update the cache with the latest value and return the response; from then on it's always returned from the cache. The benefit is that you don't need to load everything up front. Say you have 200 records while services only use 50 of them frequently; you would be maintaining extra cache entries that may never be needed.
Let the requests build the cache; it's a one-time DB hit. You can set an expiry on the cache, and incoming requests rebuild it after that.
If you have data that is totally static (never ever changes), then this pattern may not be worth discussing, but if you have a lookup table that can change even once a week or month, then you should use this pattern with a longer cache expiration time. Maintaining the version could be costly. But it's really up to you how you want to implement it.
https://learn.microsoft.com/en-us/azure/architecture/patterns/cache-aside
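A minimal cache-aside sketch in TypeScript (the dbLookup callback, the string key type and the one-hour TTL are assumptions for illustration):

interface Entry<T> {
  value: T;
  expiresAt: number;
}

const TTL_MS = 60 * 60 * 1000; // hypothetical one-hour expiry
const cache = new Map<string, Entry<unknown>>();

// Cache-aside: check the cache first, fall back to the DB on a miss,
// then populate the cache so later requests are served from memory.
export async function getLookupRow<T>(
  key: string,
  dbLookup: (key: string) => Promise<T>
): Promise<T> {
  const hit = cache.get(key) as Entry<T> | undefined;
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value;
  }
  const value = await dbLookup(key); // one DB hit per missing/expired key
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}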
We ran into this same issue and have temporarily solved it by using a LastUpdated timestamp comparison (same concept as your VersionNumber). Every night (when our application tends to be slow) each service publishes a ServiceXLastUpdated message that includes the most recent timestamp at which the data it owns was added/edited. Any other service that subscribes to this data processes the message, and if there's a mismatch it requests all rows "touched" since its last local update so that it can get back in sync.
For us, for now, this is okay, as new services don't tend to come online and be in use the same day. But our plan going forward is that any time a service starts up, it can publish a message for each subscribed service indicating its most recent cache update timestamp. If a "source" service sees that the timestamp is not current, it can send updates to re-sync the data. This has the advantage of only sending the needed updates to the specific service(s) that need them, even though (at least for us) all subscribed services have access to the messages.
We started out using persistent queues, so if all instances of a microservice were down, the messages would just build up in its queue. There are two issues with this that led us to build something better:
1) It obviously doesn't solve the "first startup" scenario, as there is no queue for messages to build up in.
2) If ANYTHING goes wrong, either in storing queued messages or in processing them, you end up out of sync. If that happens, you still need a proactive mechanism like the one we have now to bring things back in sync. So it seemed worth going this route.
I wouldn't say our method is a "best practice", and if there is one, I'm not aware of it. But the way we're doing it (including the planned future work) has so far proven simple to build, easy to understand and monitor, and robust, in that it's extremely rare that we get an event caused by out-of-sync local data.
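A rough illustration of that timestamp comparison in TypeScript (the message shape and the fetchRowsTouchedSince/applyRowsToLocalCache helpers are hypothetical stand-ins for the real messaging and persistence plumbing):

interface ServiceXLastUpdated {
  service: string;
  lastUpdated: string; // ISO timestamp of the most recent add/edit in the source data
}

let localCacheUpdatedAt = new Date(0); // when our local copy was last synced

// Handler for the nightly ServiceXLastUpdated message.
export async function onLastUpdatedMessage(msg: ServiceXLastUpdated): Promise<void> {
  const sourceUpdatedAt = new Date(msg.lastUpdated);
  if (sourceUpdatedAt <= localCacheUpdatedAt) {
    return; // timestamps match or we are ahead: no re-sync needed
  }
  // Ask the owning service only for rows touched since our last sync.
  const rows = await fetchRowsTouchedSince(msg.service, localCacheUpdatedAt);
  applyRowsToLocalCache(rows);
  localCacheUpdatedAt = sourceUpdatedAt;
}

// Hypothetical helpers standing in for the real infrastructure.
declare function fetchRowsTouchedSince(service: string, since: Date): Promise<unknown[]>;
declare function applyRowsToLocalCache(rows: unknown[]): void;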

Cache invalidation algorithm

I'm thinking about caching dynamic content in web server. My goal is to bridge the whole processing by returning a cached HTTP response without bothering the DB (or Hibernate). This question is not about choosing between existing caching solutions; my current concern is the invalidation.
I'm sure a time-based invalidation makes no sense at all: whenever a user changes anything, they expect to see the effect immediately rather than in a few seconds or even minutes. And caching for a fraction of a second is useless, as there are no repeated requests for the same data in such a short period (since most of the data is user-specific).
For every data change, I get an event and can use it to invalidate everything depending on the changed data. As requests happen concurrently, there are two time-related problems:
Invalidation may come too late and stale data may be served even to the client who changed them.
After the invalidation has finished, a long running request may finish and its stale data may get put into the cache.
The two problems are sort of opposite to each other. I guess, the former is easily solved by partially serializing requests from the same client using a ReadWriteLock per client. So let's forget it.
The latter is more serious as it basically means a lost invalidation and serving the stale data forever (or too long).
I can imagine a solution like repeating the invalidation after every request having started before the change happened, but this sounds rather complicated and time-consuming. I wonder if any existing caches do support this, but I'm mainly interested in how this gets done in general.
Clarification
The problem is a simple race condition:
Request A executes a query and fetches the result
Request B does some changes
The invalidation due to B happens
Request A (which was delayed for whatever reason) finishes
The obsolete response by request A gets written into the cache
To solve the race condition, add a timestamp (or a counter) and check this timestamp when setting a new cache entry.
This ensures that obsolete response will not be cached.
Here's some pseudocode:
// the cache itself; in a real app this would be your cache store
const cache = {};

// set a new cache entry if resourceId is not cached,
// or if the existing entry is stale
function setCache(resourceId, requestTimestamp, responseData) {
  if (cache[resourceId]) {
    if (cache[resourceId].timestamp > requestTimestamp) {
      // existing entry is newer
      return;
    } else if (cache[resourceId].timestamp === requestTimestamp) {
      // ensure invalidation
      responseData = null;
    }
  }
  cache[resourceId] = {
    timestamp: requestTimestamp,
    response: responseData
  };
}
Let's say we get two requests for the same resource "foo":
Request A (received at 00:00:00.000) executes a query and fetches the result
Request B (received at 00:00:00.001) does some changes
The invalidation due to B happens by calling setCache("foo", "00:00:00.001", null)
Request A finishes
Request A calls setCache("foo", "00:00:00.000", ...) to write the obsolete response to cache but fails because the existing entry is newer
This is just the basic mechanism, so there is room for improvements.
I think you don't realize (or don't want to explicitly call out) that you are asking about a choice between cache synchronization strategies. There are several well-known strategies: "cache aside", "read through", "write through", and "write behind". E.g. read here: A beginner's guide to Cache synchronization strategies. They offer various levels of cache consistency (invalidation, as you call it).
Your choice should depend on your needs and requirements.
It sounds like so far you've chosen the "write behind" strategy (queue or defer cache invalidation). But from your concerns it sounds like you've chosen it incorrectly, because you are worried about inconsistent cache reads.
So you should consider using the "cache aside" or "read/write through" strategies, because those offer better cache consistency. They are all different flavors of the same thing - always keep the cache consistent. If you don't care about cache consistency, then fine, stay with "write behind", but then this question becomes irrelevant.
Architecture-wise, I would never raise events to invalidate the cache, because it seems you've made it part of your business logic, while it's just an infrastructure concern. Invalidate (or queue invalidation of) the cache as part of your read/write operations, and not somewhere else. That allows the cache to be just one aspect of your infrastructure, and not part of everything else.
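To make that last point concrete, here is a small sketch of keeping invalidation inside the write path rather than in separate domain events (the Article, ArticleRepository and SimpleCache types are assumptions used only for illustration):

// Hypothetical types standing in for the real infrastructure.
interface Article { id: string; title: string; body: string }
interface ArticleRepository { update(id: string, changes: Partial<Article>): Promise<Article> }
interface SimpleCache { set(key: string, value: unknown): void; delete(key: string): void }

const cacheKey = (id: string) => `article:${id}`;

// The cache is touched as part of the write operation itself, so invalidation
// stays an infrastructure concern of the repository layer, not a business event.
export async function updateArticle(
  id: string,
  changes: Partial<Article>,
  deps: { db: ArticleRepository; cache: SimpleCache }
): Promise<void> {
  const updated = await deps.db.update(id, changes);
  // Either refresh the entry in place (write-through) ...
  deps.cache.set(cacheKey(id), updated);
  // ... or drop it so the next read repopulates it (cache-aside):
  // deps.cache.delete(cacheKey(id));
}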

Dealing with concurrency issues when caching for high-traffic sites

I was asked this question in an interview:
For a high traffic website, there is a method (say getItems()) that gets called frequently. To prevent going to the DB each time, the result is cached. However, thousands of users may be trying to access the cache at the same time, and so locking the resource would not be a good idea, because if the cache has expired, the call is made to the DB, and all the users would have to wait for the DB to respond. What would be a good strategy to deal with this situation so that users don't have to wait?
I figure this is a pretty common scenario for most high-traffic sites these days, but I don't have the experience dealing with these problems--I have experience working with millions of records, but not millions of users.
How can I go about learning the basics used by high-traffic sites so that I can be more confident in future interviews? Normally I would start a side project to learn some new technology, but it's not possible to build out a high-traffic site on the side :)
The problem you were asked about in the interview is the so-called cache miss-storm - a scenario in which a lot of users trigger regeneration of the cache, hitting the DB in the process.
To prevent this, first you have to set a soft and a hard expiration date. Let's say the hard expiration is 1 day and the soft one 1 hour. The hard one is the one actually set on the cache server; the soft one is stored in the cache value itself (or in another key on the cache server). The application reads from the cache, sees that the soft time has expired, sets the soft time 1 hour ahead and hits the database. This way the next request will see the already-updated time and won't trigger the cache update - it will possibly read stale data, but the data itself will be in the process of regeneration.
Next point: you should have a procedure for cache warm-up, e.g. instead of a user triggering the cache update, a process in your application pre-populates the new data.
The worst-case scenario is e.g. restarting the cache server, when you don't have any data. In this case you should fill the cache as fast as possible, and that's where a warm-up procedure can play a vital role. Even if you don't have a value in the cache, it would be a good strategy to "lock" the cache (mark it as being updated), allow only one query to the database, and handle it in the application by requesting the resource again after a given timeout.
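A rough TypeScript sketch of the soft/hard expiry idea (the cache client interface and key handling are assumptions); the first reader past the soft deadline pushes it forward and regenerates, while concurrent readers keep getting the possibly stale value:

interface SoftEntry<T> {
  value: T;
  softExpiresAt: number; // stored inside the value; the hard TTL lives on the cache server
}

const SOFT_TTL_MS = 60 * 60 * 1000; // the 1-hour soft expiry from the example above

export async function getWithSoftExpiry<T>(
  key: string,
  cache: {
    get(k: string): Promise<SoftEntry<T> | null>;
    set(k: string, v: SoftEntry<T>): Promise<void>;
  },
  regenerate: () => Promise<T>
): Promise<T> {
  const entry = await cache.get(key);
  if (entry && entry.softExpiresAt > Date.now()) {
    return entry.value; // still fresh enough
  }
  if (entry) {
    // Push the soft deadline forward right away so concurrent readers keep
    // getting the stale value instead of also hitting the database.
    await cache.set(key, { ...entry, softExpiresAt: Date.now() + SOFT_TTL_MS });
  }
  const value = await regenerate(); // only this request pays the DB cost
  await cache.set(key, { value, softExpiresAt: Date.now() + SOFT_TTL_MS });
  return value;
}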
You would probably be better off using some distributed cache repository, such as memcached or others, depending on your access pattern.
You could use the Cache implementation of Google's Guava library if you want to store the values inside the application.
From the coding point of view, you would need something like
public V get(K key) {
    V value = map.get(key);
    if (value == null) {
        synchronized (mutex) {
            // re-check after acquiring the lock (double-checked locking)
            value = map.get(key);
            if (value == null) {
                value = db.fetch(key);
                map.put(key, value);
            }
        }
    }
    return value;
}
where the map is a ConcurrentMap and the mutex is just
private static Object mutex = new Object();
In this way, you will have just one request to the db per missing key.
Hope it helps! (And don't store nulls; you could create a tombstone value instead!)
Cache miss-storm, or the Cache Stampede Effect, is the burst of requests to the backend when the cache invalidates.
All high-concurrency websites I've dealt with used some kind of caching front-end. Be it Varnish or Nginx, they all have micro-caching and stampede effect suppression.
Just google for Nginx micro-caching or Varnish stampede effect; you'll find plenty of real-world examples and solutions for this sort of problem.
It all boils down to whether or not you'll allow requests to pass through the cache and reach the backend when the cache is in an Updating or Expired state.
Usually it's possible to actively refresh the cache, holding all requests to the entry being updated, and then serve them from the cache.
But there is ALWAYS the question "What kind of data are you supposed to be caching or not?", because, you see, if it is just a plain-text article that gets an edit/update, delaying the cache update is not as problematic as when your data has to be shown exactly the same on thousands of displays (real-time gaming, financial services, and so on).
So the correct answer is: micro-caching, suppression of the stampede effect/cache miss-storm, and of course knowing which data to cache, when, how and why.
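In application code (outside Nginx/Varnish), stampede suppression can be approximated by coalescing concurrent requests for the same key onto a single in-flight promise; a minimal TypeScript sketch (the loader callback is an assumption):

const inFlight = new Map<string, Promise<unknown>>();

// All concurrent callers for the same key share one backend request;
// the map entry is cleared once the request settles.
export function coalesce<T>(key: string, loader: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key) as Promise<T> | undefined;
  if (existing) return existing;

  const promise = loader().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}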
It is only worth considering a particular data type for caching if its consumers are prepared to get stale data (within reasonable bounds).
In that case you can define an invalidation/eviction/update policy to keep your data up-to-date (in the business sense).
On update you just replace the data item in the cache, and all new requests will be served with the new data.
Example: a stock info system. If you do not need real-time price info, it is reasonable to keep the stock in the cache and update it every X millis/secs with the expensive remote call.
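A sketch of that update-in-place approach for the stock example (the fetchQuote call and the refresh interval are hypothetical):

const quotes = new Map<string, number>();
const REFRESH_MS = 500; // "every X millis/secs" - tune to how stale a price may be

// Readers never wait: they always get whatever is currently in the map.
export const getQuote = (symbol: string): number | undefined => quotes.get(symbol);

// A background loop replaces items in place; there is no expiry, so there is
// never a cache miss once a symbol has been loaded.
export function startRefreshing(
  symbols: string[],
  fetchQuote: (s: string) => Promise<number>
): void {
  setInterval(async () => {
    for (const symbol of symbols) {
      try {
        quotes.set(symbol, await fetchQuote(symbol)); // the expensive remote call
      } catch {
        // keep serving the previous value on failure
      }
    }
  }, REFRESH_MS);
}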
Do you really need to expire the cache? Can you have an incremental update mechanism with which you keep updating the data periodically, so that you never have to expire the data but keep refreshing it instead?
Secondly, if you want to prevent too many users from hitting the DB in one go, you can have a locking mechanism in your stored proc (if your DB supports it) that prevents too many people from hitting the DB at the same time. Also, you can have a caching mechanism in your DB so that if someone asks for the exact same data again, you can return a cached value.
Some applications also use a third service layer between the application and the database to protect the database from this scenario. The service layer ensures that the cache miss storm never reaches the DB.
The answer is to never expire the cache and to have a background process update it periodically. This avoids the wait and the cache-miss storms, but then why use a cache in this scenario?
If your app will crash in a "cache miss" scenario, then you need to rethink your app and what is cache versus needed in-memory data. For me, I would use an in-memory database that gets updated when data is changed or periodically, not a cache at all, and avoid the aforementioned scenario.

When to Use Azure Caching Local Cache

I want to start using the Azure Distributed Caching and came across the concept of LocalCache. But the fact that it can go out of sync with the Distributed Cache, makes me wonder, why I would want to use it and how I could use it safely.
When enabled, items retrieved from the cache cluster are locally stored in memory on the client machine. This improves performance of subsequent get requests, but it can result in inconsistency of data between the locally cached version and the actual item in the cache cluster.
Calling DataCache.GetIfNewer is one option to ensure that I get the latest version, but that requires that I still do a call to the Distributed Cache, passing in the object that I want to check, in order to see if the two versions differ.
I could use Notifications to invalidate the LocalCache object, but that is done on a polling basis, which opens up the opportunity for an update to occur within the poll period leaving me with stale data.
So, why would I ever use LocalCache, and if there is a reason to do so, how do I use it safely?
"There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton
You would use LocalCache when a) performance is critical and b) you don't care that the retrieved object might be stale.
There are many cases where the object is never going to be out of date (e.g. a list of public/bank holidays), or where you are not too worried about being 100% up-to-date (e.g. if an item has > 1000 units in stock, use the local cache, otherwise re-fetch from the database).
Don't try to invalidate the local cache. If you need more up-to-date objects, get them from the cluster. If you cannot tolerate out-of-sync data, get it from the database. Caching is always a performance/consistency compromise - LocalCache more so than the server cache, but the server cache is still a compromise.
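As a generic illustration of that decision (deliberately not the Azure DataCache API; the tier interfaces are assumptions), the caller states how much staleness it can tolerate and the read falls through the tiers accordingly:

// Tiered read: local in-process cache first, then the distributed cache,
// then the source of truth. `allowStale` encodes "I can tolerate staleness".
export async function read<T>(
  key: string,
  allowStale: boolean,
  tiers: {
    local: Map<string, T>;
    distributed: { get(k: string): Promise<T | null>; set(k: string, v: T): Promise<void> };
    database: { load(k: string): Promise<T> };
  }
): Promise<T> {
  if (allowStale && tiers.local.has(key)) {
    return tiers.local.get(key)!; // fastest, possibly out of sync
  }
  const fromCluster = await tiers.distributed.get(key);
  if (fromCluster !== null) {
    tiers.local.set(key, fromCluster);
    return fromCluster;
  }
  const fromDb = await tiers.database.load(key); // authoritative but slowest
  await tiers.distributed.set(key, fromDb);
  tiers.local.set(key, fromDb);
  return fromDb;
}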
