Should I set expiration on caches of "constant" results in Redis? - caching

I have a few queries to a database that return absolutely constant responses, i.e. the corresponding entries in the database are never changed after being written.
I'm wondering: if I implement caching for them with Redis, should I set an expiration time?
Pros and cons of not setting an expiration:
Pros: Users will always benefit from caching (except for the first query)
Cons: The number of these entries keeps growing, so Redis will end up using more and more memory.
Edit
To give more context: the queries run quite slowly; each may take several seconds, so it is worth minimizing the number of users who experience this delay.
Also, each result is on the order of a few kB in size, and the number (not the size) of entries grows by roughly one per minute.

Sorry for answering with questions. Still waiting for enough reputation to comment and clarify.
Answering your direct question:
Is the number of queries you expect unbounded?
No: You could improve the first user's experience by triggering the queries on startup and leaving the results in the cache. For other responses that are expected to change, you could attach a TTL and use one of the volatile-* maxmemory-policy settings in the config (volatile-ttl, volatile-lru, volatile-lfu, or volatile-random) so that only keys with a TTL are eligible for eviction.
Yes: Prioritize these entries by attaching a TTL and refreshing it each time the key is requested, so each entry stays in the cache as long as possible, and use whichever memory-management policy best fits the rest of your use case (see the sketch below).
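For the unbounded case, here is a minimal cache-aside sketch of that TTL-refresh-on-read idea (Java with the Jedis client; the key scheme, the 24-hour TTL, and the runSlowQuery method are assumptions for illustration, not from the question):

```java
import redis.clients.jedis.Jedis;

public class ConstantResultCache {
    private static final int TTL_SECONDS = 24 * 60 * 60; // assumed refresh window
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String fetch(String entryId) {
        String key = "constant:" + entryId;      // hypothetical key scheme
        String cached = jedis.get(key);
        if (cached != null) {
            jedis.expire(key, TTL_SECONDS);      // hit: push the TTL forward
            return cached;
        }
        String value = runSlowQuery(entryId);    // the multi-second database query
        jedis.setex(key, TTL_SECONDS, value);    // miss: cache the result with a TTL
        return value;
    }

    private String runSlowQuery(String entryId) {
        // placeholder for the slow database query described in the question
        return "...";
    }
}
```

Combined with a volatile-* maxmemory policy, keys that stop being requested simply age out, while hot keys stay resident.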
Related concerns:
If these are really static values, why are you querying a database rather than reading from a flat file of constants generated once and read at startup?
Have you attempted to optimize your queries?

Related

Best way to construct a cache key whose uniqueness is defined by 6 properties

Currently I am tasked with fixing the cache for an e-commerce-like system whose prices depend on many factors. The cache backend is Redis. For a given product, the factors that influence the price are:
sku
channel
sub channel
plan
date
Currently the cache is structured like this in redis:
product1_channel1_subchannel1: {sku_1: {plan1: {2019-03-18: 2000}}}
The API caters to requests for multiple products, SKUs, and all the factors above. So they decided to query all the data at the product_channel_subchannel level and filter it in the app, which is very slow. They have also decided that, on a cache miss, they will build the cache for all SKUs with 90 days of data. This way only one request takes the hit while the others benefit from it (the catch is that we now bust the cache more often, which also drags the system down).
The downside of including all these factors in the keys is that there will be too many keys. To ballpark it, there are 400 products, each made up of 20 SKUs, with 20 channels, 200 subchannels, 3 types of plan, and 400 days of pricing. To avoid that many keys, we must group the data at some level.
The system currently receives about 10 rps and has to respond within 100 ms.
Question is:
Is the above cache structure fine? Or how do we go about flattening this structure?
How are caches structured in pricing systems in general? I feel like this is a very trivial task, and yet I find it very hard to justify my approaches.
Is it okay to sacrifice one request to warm the cache for the bulk of the data? Or is it better to have a cache-warming strategy?
Any sort of caching strategy is an exercise in trade-offs, and the precise trade-offs you need to make depend on complex domain logic that you can't predict until you try it out.
What this means is that whatever you implement should be based on data and should be flexible enough to change over time as the business changes. In particular, the answers to these questions:
Is it okay to sacrifice one request to warm the cache for the bulk of the data? Or is it better to have a cache-warming strategy?
depend on how the data will be queried by your users and how long a cache miss will take. If queries tend to be clustered around certain skus, or certain dates in a predictable manner, then you should use that information to help guide cache hits and misses.
There is no way I, or anyone else, can give you a correct answer without doing proper experimentation, but we can give you some guidelines.
Here are some best practices that I would recommend when using redis for caching:
If the bottleneck is sending data from Redis to the API, then consider using Lua scripts to do the simple processing before any data leaves Redis. But be careful not to make the scripts too complex, since a long-running Lua script blocks everything else in Redis (a sketch combining this with the data structures suggested below follows this list).
It looks like you are using simple get/set keys to store your data. Consider using something more complex:
a. use sorted sets (zsets) if you want to have better access to data by date (use the date as the score).
b. use hashes to get more fine-grained access to SKUs
Based on your question, it looks like you will have about 1.6M keys (400 products × 20 channels × 200 subchannels). This is not a huge number, but you need to make sure that Redis has enough memory to store everything in RAM without swapping anything to disk. This is something that we had to learn the hard way. If you are running your Redis instance on Linux, set the system's swappiness to 0 (or disable swap entirely) so the kernel avoids swapping Redis memory out.
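To make the Lua and data-structure suggestions concrete, here is a sketch only (the key layout, field naming, and date encoding are assumptions, not taken from your system): prices live in one hash per product/channel/subchannel/SKU/plan with the date as the field, and a short Lua script filters by date range inside Redis so only the requested rows ever leave the server.

```java
import redis.clients.jedis.Jedis;
import java.util.Arrays;
import java.util.Collections;

public class PriceCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // One hash per pricing "leaf": field = ISO date, value = price.
    public void putPrice(String product, String channel, String subChannel,
                         String sku, String plan, String date, String price) {
        String key = String.join(":", product, channel, subChannel, sku, plan);
        jedis.hset(key, date, price);
    }

    // Lua filter: return only the (date, price) pairs inside the requested range,
    // so the app no longer filters a whole product/channel/subchannel blob itself.
    private static final String RANGE_FILTER =
        "local all = redis.call('HGETALL', KEYS[1]) " +
        "local out = {} " +
        "for i = 1, #all, 2 do " +
        "  if all[i] >= ARGV[1] and all[i] <= ARGV[2] then " +
        "    out[#out + 1] = all[i] " +
        "    out[#out + 1] = all[i + 1] " +
        "  end " +
        "end " +
        "return out";

    public Object pricesBetween(String key, String fromDate, String toDate) {
        // ISO dates (YYYY-MM-DD) compare correctly as plain strings
        return jedis.eval(RANGE_FILTER,
                          Collections.singletonList(key),
                          Arrays.asList(fromDate, toDate));
    }
}
```

Whether the hash should sit at the SKU/plan level or one level higher is exactly the kind of choice to settle against your real query mix.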
But, most importantly, you need to experiment with everything until you find a good solution.

Which caching mechanism to use in my Spring application for the scenarios below

We are using a Spring Boot application with a MariaDB database. We get data from different services and store it in our database. When calling another service, we need to fetch data from the DB (based on a mapping) and then call that service.
So to avoid the database hit, we want to keep all the mapping data in a cache and use it when retrieving the data and calling the service API.
So our ask is: add data to the cache when it is created in the database (this could add up to millions of records) and remove it from the cache when the value of one particular column becomes "xyz" (for example), or based on an eviction policy.
Should we use an in-memory cache such as Hazelcast/Ehcache, or Redis/Couchbase?
Please suggest.
Thanks
I mostly agree with Rick in terms of "don't build it until you need it"; however, it is important these days to think early about where this caching layer would fit later and how to integrate it (for example, behind interfaces). Adding it to an unprepared system is always possible, but much more expensive (in terms of hours) and more complicated.
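As an example of that kind of preparation (a sketch only, using Spring's provider-agnostic cache abstraction; Mapping and MappingRepository are hypothetical names, and you would still need @EnableCaching plus a CacheManager backed by Hazelcast, Redis, or Ehcache), the service hides the cache behind annotations, so the provider can be swapped through configuration rather than code changes:

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class MappingService {

    private final MappingRepository repository; // hypothetical Spring Data repository

    public MappingService(MappingRepository repository) {
        this.repository = repository;
    }

    // Put the entry into the cache whenever a mapping is written to the database
    @CachePut(cacheNames = "mappings", key = "#mapping.id")
    public Mapping save(Mapping mapping) {
        return repository.save(mapping);
    }

    // Served from the cache after the first lookup for a given id
    @Cacheable(cacheNames = "mappings", key = "#id")
    public Mapping findById(long id) {
        return repository.findById(id).orElse(null);
    }

    // Drop the entry when its status column reaches the terminal value (e.g. "xyz")
    @CacheEvict(cacheNames = "mappings", key = "#mapping.id")
    public void markInactive(Mapping mapping) {
        // update the status column in the database here
    }
}
```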
OK, on to the actual question. Disclaimer: I'm a Hazelcast employee.
In general, Hazelcast, Ehcache, Redis, and others are all good candidates for caching. The first question to ask yourself, though, is: "Can I hold all necessary records in the memory of a single machine?" With Ehcache in particular you get replication (all machines hold all the information), which means every single node needs to keep everything in memory; depending on how much you want to cache, that may not be optimal. In that case Hazelcast might be the better option, as it partitions data across the cluster and optimizes access down to a single network hop, with minimal overhead beyond network latency.
The second question is about serialization. Do you want to store the information in a highly optimized serialization format (which needs code to transform it into something human-readable), or do you want to store it as JSON?
The third question is about the number of clients and threads that will access the data store. Obviously a local cache like Ehcache is always the fastest option, at the cost of lots and lots of memory. Apart from that, the most important factor is the threading model the in-memory store uses: either it is multithreaded and scales nicely, or it is a single-threaded design that becomes a bottleneck once that thread is exhausted. The latter can be worked around with more processes, but it remains a workaround rather than utilizing today's systems to the fullest.
In more general terms, each of the systems you mention would do the job. The best tool, however, should be selected through a POC/prototype against your real-world use case. The important bit is "real-world": a single thread behaves great under low pressure (and is obviously faster), but once exhausted it becomes a major bottleneck (and obviously delays responses).
I hope this helps a bit; at least to me, any answer along the lines of "yes, we are the best option" would be an immediate no-go coming from the person who said it.
Build InnoDB with the memcached Plugin
https://dev.mysql.com/doc/refman/5.7/en/innodb-memcached.html

Strategy for "user data" in couchbase

I know that a big part of Couchbase's performance comes from serving documents from memory, and for many of my data types that seems like an entirely reasonable aspiration. But considering how user data scales and is used, I'm wondering whether it's reasonable to plan for only a small percentage of the user documents to be in memory at any one time; I'm thinking maybe only 10-15%. Is this a reasonable assumption considering:
At any given time, only a fraction of the users will be using the system.
In this case, users only access their own data (or predominantly so)
Recently entered data is exponentially more likely to be viewed than historical user documents
UPDATE:
Some additional context:
Let's assume there's a user base of 1 million customers: 20% rarely if ever access the site, 40% access it once a week, and 40% access it every day.
At any given moment, only 5-10% of the user population would be logged in
When a user logs in, they are likely to re-query certain documents within a single session (although the client does do some object caching to minimise this)
For any user, the most recent records are very active, the very old records very inactive
In summary, I would say the majority of user-triggered transactional documents are queried quite infrequently, but there is a core set (records produced in the last 24-48 hours and relevant to the currently "logged in" group) that would benefit significantly from being in memory.
Two sub-questions are:
Is there a way to set a timestamp on a per-document basis to indicate its need to be kept in memory?
How does Couchbase handle the growing list of document IDs in memory? It is my understanding that all IDs must always be in memory; isn't this too memory-intensive for some apps?
First, one of the major benefits of Couchbase is the fact that it is spread across multiple nodes. This also means your queries are spread across multiple nodes, and you get a performance gain as a result (I know several other similar NoSQL stores are also spread across nodes, so maybe this isn't relevant for your comparison?).
Next, I believe this question is a little too broad, because the answer will really depend on your usage. Does a given user query their data only once, at random? If so, then by your own numbers there will only be an in-memory benefit 10-15% of the time. If instead, once a user is on the site, they might query their data multiple times, there is a definite performance benefit.
Regardless, Couchbase has pretty fast disk-access performance, particularly on SSDs, so it probably doesn't make much difference either way; but again, without specifics there is no way to be sure. If the documents are relatively small and a user is waiting for one of them to load, the user will almost certainly not notice whether the document was served from RAM or from disk.
Here is an interesting article on benchmarks for CB against similar nosql platforms.
Edit:
After reading your additional context, I think your scenario lines up pretty much exactly with how Couchbase was designed to operate. From an eviction standpoint, Couchbase keeps the newest and most frequently accessed items in RAM. As RAM fills up, the oldest and least frequently accessed values are "evicted" from RAM (they remain persisted on disk). This link from the Couchbase Manual explains more about how this works.
I think you are on the right track with Couchbase; in any case, its flexibility with scaling will easily allow you to tune the database to your application. I really don't think you can go wrong here.
Regarding your two questions:
Not in Couchbase 2.2
You should use relatively small document IDs. While it is true that they are stored in RAM, your deployment is not "right-sized" if a significant percentage of the available cluster RAM goes to storing keys; keeping the IDs small avoids that. This link talks about keys and gives details relevant to key size (e.g. the 250-byte limit, per-key metadata, etc.).
Basically, the decision point you are facing is how to size the Couchbase cluster's bucket RAM: allow a reduced residency ratio (the percentage of document values held in RAM) and use cache misses to pull values from disk.
However, there are caveats in this scenario as well. You will also have relatively constant "cache eviction", where "not recently used" values are removed from the RAM cache as you pull cache-missed documents from disk into RAM. This is because you will always be floating at the high-water mark for the bucket RAM quota. If you also simultaneously have a high write velocity (new/updated data), that data needs to be persisted as well. These two processes can compete for disk I/O if the write velocity exceeds your capacity to evict/retrieve, and your SDK client will receive a temporary OOM error if you cannot evict fast enough to free up RAM for new writes. As you scale horizontally this becomes less likely, because you have more disk I/O capacity spread across more machines, all doing this work simultaneously.
If by "queried" you mean querying indexes (i.e. views), those are a separate data structure on disk, and getting results back from them is not subject to eviction/NRU; but if you follow the view query with a multi-get, the above still applies. (Don't emit entire documents into your index!)

How to deal with expiring item (due to TTL) in memcached on high-load website?

When you have peaks of 600 requests/second, memcached dropping an item because its TTL expired has some pretty negative effects: at almost the same time, 200 threads/processes find the cache empty and fire off a DB request to fill it up again.
What is the best practice to deal with these situations?
P.S. What is the term for this situation? (It gives me a chance to get better Google results on the topic.)
If you have memcached objects that are needed by a large number of requests (which you imply is the case), then I would look into having a separate process or cron job that regularly recalculates and refreshes these objects. That way they should never hit their TTL. It's a common trade-off: you add a little unnecessary load during low-traffic periods to help reduce the load during peaks (the time you probably care most about).
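For instance (a sketch, assuming the spymemcached client; the key name, the 10-minute TTL, and the 5-minute refresh interval are placeholders), a scheduled task rebuilds the expensive object and rewrites it well before its TTL expires, so normal traffic never sees a miss:

```java
import net.spy.memcached.MemcachedClient;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HotKeyRefresher {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Rebuild every 5 minutes, well inside the 10-minute TTL, so ordinary
        // requests never find the key missing and never stampede the database.
        scheduler.scheduleAtFixedRate(() -> {
            String fresh = rebuildFromDatabase();   // the expensive query/calculation
            client.set("hot:summary", 600, fresh);  // placeholder key, 10-minute TTL
        }, 0, 5, TimeUnit.MINUTES);
    }

    private static String rebuildFromDatabase() {
        return "..."; // placeholder for the real query
    }
}
```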
I found out this is referred to as "stampeding herd" by the memcached folks, and they discuss it here: http://code.google.com/p/memcached/wiki/NewProgrammingTricks#Avoiding_stampeding_herd
My next suggestion was actually going to be using soft cache limits as discussed in the link above.
If your object is expiring because you've set an expiry and it has gone past that date, there is nothing you can do but increase the expiry time.
If you are worried about stale data, a few techniques exist you could consider:
Consider making the cache the authoritative source for whatever data you are looking at, and make a thread whose job is to keep it fresh. This will make the other threads block on refilling the cache, so it may only make sense if you can
Rather than setting a TTL on the data, change whatever process updates the data to update the cache as well. One technique I use for frequently changing data is to do this probabilistically: 10% of the time the data is written, the cache is updated. You can tune this to whatever is sensible, depending on how expensive the DB query is and how severe the impact of stale data is.
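A sketch of that probabilistic write-time refresh (spymemcached again; the 10% rate and the method names are illustrative, not prescriptive):

```java
import net.spy.memcached.MemcachedClient;

import java.util.concurrent.ThreadLocalRandom;

public class ProbabilisticCacheRefresh {
    private final MemcachedClient client;

    public ProbabilisticCacheRefresh(MemcachedClient client) {
        this.client = client;
    }

    // Called on every write to the underlying data.
    public void onDataWritten(String key, String newValue) {
        writeToDatabase(key, newValue);
        // Only ~10% of writes refresh the cache, so frequently written data
        // doesn't hammer memcached while still staying reasonably fresh.
        if (ThreadLocalRandom.current().nextDouble() < 0.10) {
            client.set(key, 0, newValue); // 0 = no TTL; later writes refresh it
        }
    }

    private void writeToDatabase(String key, String newValue) {
        // placeholder for the real persistence call
    }
}
```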

Advantage Database Server: in-memory queries

As far as I know, ADS v10 tries to keep the result of a query in memory until it becomes quite large. The same should be true for the __output table and for temporary tables. When the result becomes large, swapping starts.
The question is: what memory limit is set for a query, a worker, or whatever? Can this limit be configured?
Thanks.
The overall limit is controlled by the MAX_CACHE_MEMORY configuration parameter. I don't think there is currently any configuration parameter or mechanism to control it on a per-user basis.
In general, an LRU scheme is used to remove old pages when the limit is reached. In addition, it uses a scaling algorithm to control how much memory each user can get to avoid allowing one connection to constantly be acquiring overly large portions of the cache.

Resources