My Redis cluster is configured with maxmemory, and the eviction policy is allkeys-lru. I want to know how long keys last on average before they are evicted. Doesn't Redis record, every time a key is evicted, how long it lived?
I was looking at INFO in redis-cli and I see these keyspace metrics:
# Keyspace
db5:keys=13502644,expires=0,avg_ttl=0
However, it seems that "avg_ttl" only covers keys that expire, not keys that are evicted.
The only other relevant thing I can find are total counts:
expired_keys:0
evicted_keys:17026842
Neither of these help me answer the question: How long do keys normally last before they are evicted?
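Redis does not record per-key lifetimes at eviction time, so there is no direct metric for this. One workaround, a rough sketch assuming the cache is at steady state (insertion rate roughly equals eviction rate), is to apply Little's law to counters you can already read from INFO: average lifetime ≈ resident keys / eviction rate. The function below is hypothetical glue around those numbers, not a Redis feature:

```python
def estimate_avg_lifetime(resident_keys, evictions_in_window, window_seconds):
    """At steady state (eviction rate ~= insertion rate), Little's law gives:
    average key lifetime ~= resident keys / eviction rate.

    resident_keys: the keys= figure from the Keyspace section of INFO
    evictions_in_window: delta of evicted_keys between two INFO snapshots
    window_seconds: time between the two snapshots
    """
    if evictions_in_window == 0:
        raise ValueError("no evictions observed in the window")
    eviction_rate = evictions_in_window / window_seconds  # keys per second
    return resident_keys / eviction_rate                  # seconds

# Example: 13.5M resident keys, 27,000 evictions over a 10 s window
# -> eviction rate 2,700 keys/s -> average lifetime 5,000 s (~83 min)
print(estimate_avg_lifetime(13_500_000, 27_000, 10))
```

This only holds while the instance is full and steadily evicting; during warm-up the estimate is meaningless.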
My use case:
I am using Redis to store a large amount of data.
Every second I write around 24k keys into Redis with a TTL of 30 minutes, and I want the keys to be deleted once their TTL expires.
Redis expires keys actively in background cycles: each cycle picks 20 random keys, checks whether their TTL has passed, and deletes the expired ones, and Redis recommends running no more than 100 such cycles per second. So even if I set hz to 100, Redis can clear at most about 2,000 keys/sec, which is far too slow given my very high insertion rate and eventually results in an out-of-memory condition when memory fills up.
Alternative i have is :
1/ Hit random keys, or keys we know have expired; accessing them triggers deletion in Redis.
2/ Set an eviction policy for when maxmemory is reached. This will aggressively delete Redis keys once the memory limit is hit.
3/ Set hz (frequency) to some higher value. This will run more expired-key purge cycles per second.
1/ Doesn't seem feasible.
For 2/ & 3/
Based on the current cache timer of 30 minutes, and given insertion rate, we can use
maxmemory 12gb
maxmemory-samples 10
maxmemory-policy volatile-ttl
hz 100
But using 2/ would mean Redis is constantly deleting keys and inserting new ones, since I assume that in my case memory will always sit at the 12 GB limit.
So is it good to use this strategy, or should we write our own key-eviction service on top of Redis?
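For reference, the arithmetic behind the numbers above can be sketched as follows (the 24k/s insert rate and 30-minute TTL are the question's figures; the 20-keys-per-cycle bound is an assumption about the active expiry cycle):

```python
insert_rate = 24_000              # keys per second (from the question)
ttl_seconds = 30 * 60             # 30-minute TTL

# Keys alive at steady state: everything written in the last 30 minutes.
live_keys = insert_rate * ttl_seconds

# Rough ceiling of active expiry: ~20 keys per cycle, hz cycles per second.
hz = 100
active_expiry_ceiling = hz * 20

print(live_keys)               # 43200000 keys resident at any moment
print(active_expiry_ceiling)   # 2000 keys/s, far below the 24000/s needed
```

The gap between 2,000 deletions/s and 24,000 inserts/s is why maxmemory eviction (option 2/) ends up doing most of the cleanup work in this scenario.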
Are you using Azure Redis Cache? If yes, you can consider using clustering. You can have up to 10 shards in the cluster, which will help you spread the load across your keys and operations.
I think the only way to get a definitive answer to your question is to write a test. A synthetic test similar to your actual workload should not be hard to create, and it will tell you whether Redis can expire keys as quickly as you insert them and what impact changing the hz value has on performance.
Using maxmemory should also be a workable strategy. It will mean the allocated memory is almost always full, but it should work.
Another option is to reduce the number of keys you write to. If the keys you are writing contain string values, you can instead write them into fields of a Redis hash.
For example, if your inserts look something like this:
redis.set "aaa", value_a
redis.set "bbb", value_b
You could instead use a hash:
# current_second is just a timestamp in seconds
redis.hset current_second, "aaa", value_a
redis.hset current_second, "bbb", value_b
By writing to a hash keyed by the current timestamp and setting the TTL on the entire hash, Redis only has to expire one key per second.
Given some of the advantages of using hashes in redis, I expect the hash approach to perform best if your use case is compatible.
Might be worth testing before deciding.
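A minimal sketch of that bucketing scheme, assuming a per-second key naming convention (the `bucket:` prefix and the 30-minute TTL are illustrative choices, not from the answer above):

```python
import time

def bucket_key(now=None):
    # One hash per second: every value written during the same second shares
    # a top-level key, so Redis only has to expire one key per second.
    now = int(now if now is not None else time.time())
    return f"bucket:{now}"

# With a real Redis client the writes would look roughly like:
#   r.hset(bucket_key(), field, value)
#   r.expire(bucket_key(), 30 * 60)   # TTL on the whole bucket
print(bucket_key(1_700_000_000))      # bucket:1700000000
```

The trade-off is that you lose per-field TTLs: every value in a bucket dies when the bucket does, which is fine when all writes share one fixed TTL as in this use case.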
I'm working on an app that will use Redis to store some end-user session state details. There will be many tens to hundreds of millions of key/value pairs (unordered sets) with expiries set (to the second). To gauge whether my existing Redis installation will run into memory exhaustion, I need to create a budget for my app's expected and worst-case Redis memory usage.
The size of the raw data, the number of keys, the key/value pair lifespan, etc. are well known and already in the calculation spreadsheet. For Redis-specific things, like the size of an expiry entry, I only have wild guesses.
For my memory budget calculation, what values should I use for:
per key redis overhead (bytes? percent?)
per key expiry size (seconds, not ms)
per key set size/overhead
per set item size/overhead
(Other per key or data type information might be helpful to others.)
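There are no authoritative one-size-fits-all numbers here, since per-key overhead depends on the Redis version, 32- vs 64-bit build, and value encoding. As a starting point, here is a hedged budget sketch; every overhead constant below is a guess, so calibrate them by loading a representative sample of your real data and comparing against INFO memory or MEMORY USAGE:

```python
def redis_memory_budget(n_keys, avg_key_len, items_per_set, avg_item_len,
                        per_key_overhead=90,     # dict entry + object + string header (guess)
                        per_expire_overhead=48,  # entry in the separate expires dict (guess)
                        per_set_overhead=64,     # the set structure itself (guess)
                        per_item_overhead=48):   # per set member (guess, hashtable encoding)
    """Very rough worst-case budget in bytes. The overhead constants are
    assumptions, not published figures; measure them on your own build."""
    per_key = (per_key_overhead + avg_key_len + per_expire_overhead
               + per_set_overhead
               + items_per_set * (per_item_overhead + avg_item_len))
    return n_keys * per_key

# e.g. 100M keys with 16-byte names, each a 10-member set of 8-byte items
print(redis_memory_budget(100_000_000, 16, 10, 8))   # ~78 GB worst case
```

Note that small sets of integers may be stored in far more compact encodings, so the real figure can come in well under a hashtable-based estimate like this one.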
I am using Redis in a Java application, where I read log files and store/retrieve some info in Redis for each log entry. Keys are IP addresses from my log file, which means new keys keep arriving, even though some of them appear regularly.
At some point, Redis reaches its maxmemory limit (3 GB in my case) and starts evicting keys. I use the "allkeys-lru" setting as I want to keep the youngest keys.
The whole application then slows down dramatically, taking 5 times longer than at the beginning.
So I have three questions:
Is it normal to have such a dramatic slowdown (5 times longer)? Has anybody experienced such a slowdown? If not, I may have another issue in my code (improbable, as the slowdown appears exactly when Redis reaches its limit)
Can I improve my config? I tried changing the maxmemory-samples setting without much success.
Should I consider an alternative for my particular problem? Is there an in-memory DB that could handle key eviction with better performance? I may consider a plain Java structure (HashMap...), even if it doesn't look like good design.
edit 1:
we use 2 DBs in Redis
edit 2:
We use Redis 2.2.12 (Ubuntu 12.04 LTS). Further investigation explained the issue: we use db0 and db1 in Redis. db1 is used much less than db0, and the keys are totally different. When Redis reaches maxmemory (and the LRU algorithm starts evicting keys), it removes almost all db1 keys, which drastically slows down all calls. This is strange behavior, probably unusual and maybe linked to our application. We fixed the issue by moving to another (better) memory mechanism for the keys that were loaded in db1.
Thanks!
I'm not convinced Redis is the best option for your use case.
Redis "LRU" is only a best effort algorithm (i.e. quite far from an exact LRU). Redis tracks memory allocations and knows when it has to free some memory. This is checked before the execution of each command. The mechanism to evict a key in "allkeys-lru" mode consists in choosing maxmemory-samples random keys, comparing their idle time, and select the most idle key. Redis repeats these operations until the used memory is below maxmemory.
The higher maxmemory-samples, the more CPU consumption, but the more accurate result.
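The sampling step described above can be simulated in a few lines. This is a toy model of the idea, not Redis's actual implementation:

```python
import random

def evict_one(idle_times, samples=5):
    # Approximate LRU the way sampled eviction works: pick a few random
    # keys and evict the one that has been idle the longest.
    # idle_times maps key -> seconds since last access.
    candidates = random.sample(list(idle_times), min(samples, len(idle_times)))
    return max(candidates, key=idle_times.__getitem__)

keys = {"a": 5, "b": 120, "c": 30}   # key -> idle seconds
print(evict_one(keys, samples=3))    # samples all 3 keys, so always "b"
```

With a sample size smaller than the keyspace, the most idle key is only found probabilistically, which is exactly why raising maxmemory-samples improves accuracy at the cost of CPU.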
Provided you do not explicitly use the EXPIRE command, there is no other overhead to be associated with key eviction.
Running a quick test with Redis benchmark on my machine results in a throughput of:
145 Kops/s when no eviction occurs
125 Kops/s when 50% eviction occurs (i.e. 1 key out of 2 is evicted).
I cannot reproduce the 5 times factor you experienced.
The obvious recommendation to reduce the overhead of eviction is to decrease maxmemory-samples, but it also means a dramatic decrease of the accuracy.
My suggestion would be to give memcached a try. The LRU mechanism is different. It is still not exact (it applies only on a per-slab basis), but it will likely give better results than Redis for this use case.
Which version of Redis are you using? The 2.8 version (quite recent) improved the expiration algorithm and if you are using 2.6 you might give it a try.
http://download.redis.io/redis-stable/00-RELEASENOTES
I'm storing a bunch of realtime data in redis. I'm setting a TTL of 14400 seconds (4 hours) on all of the keys. I've set maxmemory to 10G, which currently is not enough space to fit 4 hours of data in memory, and I'm not using virtual memory, so redis is evicting data before it expires.
I'm okay with redis evicting the data, but I would like it to evict the oldest data first. So even if I don't have a full 4 hours of data, at least I can have some range of data (3 hours, 2 hours, etc) with no gaps in it. I tried to accomplish this by setting maxmemory-policy=volatile-ttl, thinking that the oldest keys would be evicted first since they all have the same TTL, but it's not working that way. It appears that redis is evicting data somewhat arbitrarily, so I end up with gaps in my data. For example, today the data from 2012-01-25T13:00 was evicted before the data from 2012-01-25T12:00.
Is it possible to configure redis to consistently evict the older data first?
Here are the relevant lines from my redis.conf file. Let me know if you want to see any more of the configuration:
maxmemory 10gb
maxmemory-policy volatile-ttl
vm-enabled no
AFAIK, it is not possible to configure Redis to consistently evict the older data first.
When the *-ttl or *-lru options are chosen for maxmemory-policy, Redis does not use an exact algorithm to pick the keys to be removed. An exact algorithm would require an extra list (for *-lru) or an extra heap (for *-ttl) in memory, cross-referenced with the normal Redis dictionary data structure. That would be expensive in terms of memory consumption.
With the current mechanism, evictions occur in the main event loop (i.e. potential evictions are checked at each loop iteration, before each command is executed). Until memory is back under the maxmemory limit, Redis randomly picks a sample of n keys and selects for eviction the most idle one (for *-lru) or the one closest to its expiration (for *-ttl). By default only 3 samples are considered. The result is non-deterministic.
One way to increase the accuracy of this algorithm and mitigate the problem is to increase the number of considered samples (maxmemory-samples parameter in the configuration file).
Do not set it too high, since it will consume some CPU. It is a tradeoff between eviction accuracy and CPU consumption.
Now if you really require a consistent behavior, one solution is to implement your own eviction mechanism on top of Redis. For instance, you could add a list (for non updatable keys) or a sorted set (for updatable keys) in order to track the keys that should be evicted first. Then, you add a daemon whose purpose is to periodically check (using INFO) the memory consumption and query the items of the list/sorted set to remove the relevant keys.
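A toy version of that idea, for non-updatable keys, where insertion order is the eviction order you want (the class name, capacity, and timestamps below are made up for illustration):

```python
import heapq

class OldestFirstCache:
    """Sketch of tracking eviction order yourself: keep an insertion-ordered
    structure alongside the data and always evict the strictly oldest key.
    In the Redis version this would be a list or sorted set plus a daemon
    watching memory via INFO; here a dict stands in for the keyspace."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = {}
        self.order = []          # min-heap of (insert_time, key)

    def set(self, key, value, insert_time):
        self.data[key] = value
        heapq.heappush(self.order, (insert_time, key))
        while len(self.data) > self.max_keys:
            _, oldest = heapq.heappop(self.order)
            self.data.pop(oldest, None)   # deterministic: oldest goes first

cache = OldestFirstCache(max_keys=2)
cache.set("12:00", "a", 1)
cache.set("13:00", "b", 2)
cache.set("14:00", "c", 3)
print(sorted(cache.data))   # ['13:00', '14:00'] -- 12:00 evicted first
```

Unlike volatile-ttl's sampled eviction, this never leaves gaps: the data always forms a contiguous window of the most recent entries.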
Please note other caching systems have their own way to deal with this problem. For instance with memcached, there is one LRU structure per slab (which depends on the object size), so the eviction order is also not accurate (although more deterministic than with Redis in practice).
Is it possible to persist only certain keys to disk using Redis? Is the best solution for this, as of right now, to run separate Redis servers, where one server holds throw-away caches and the other holds more important data that we need to flush to disk periodically (such as counters of visits to a web page)?
You can set expirations on a subset of your keys. They will be persisted to disk, but only until they expire. This may be sufficient for your use case.
You can then use the redis maxmemory and maxmemory-policy configuration options to cap memory usage and tell redis what to do when it hits the max memory. If you use the volatile-lru or volatile-ttl options Redis will discard only those keys that have an expiration when it runs out of memory, throwing out either the Least Recently Used or the one with the nearest expiration (Time To Live), respectively.
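A minimal redis.conf fragment matching that setup, with illustrative values only (the 2gb limit and snapshot interval are placeholders, not recommendations):

```
maxmemory 2gb
maxmemory-policy volatile-lru
# snapshot to disk every 900 s if at least 1 key changed
save 900 1
```

With volatile-lru, keys without an expiry are never evicted, so the "important" counters stay put while the expiring cache keys absorb the memory pressure.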
However, as stated, these values are still persisted to disk until they expire. If you really need to avoid that, then your assumption is correct, and another server looks to be the only option.