Redis cache throws OOM error with volatile-lru

For debugging we have set Redis to volatile-lru with a maxmemory of 10mb.
We are using Redis for HTTP caching in an e-commerce shop. When there are parallel requests on a page, the error:
OOM command not allowed when used memory > 'maxmemory'
appears. Shouldn't this be avoided by setting the maxmemory-policy to volatile-lru? Is Redis not fast enough to free the memory and store the new entry (each request is about 200-600kb)?

From the docs:
volatile-lru: evict keys by trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
It seems like your keys might not have an expiration set. If that's the case, you might want to consider using allkeys-lru as your eviction policy.
You can also use INFO stats to see if evicted_keys has a value greater than zero.
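To confirm, you can check the eviction counter and look for keys without a TTL from a client. A minimal sketch with redis-py, assuming a local Redis on the default port (switching the policy at the end just illustrates the allkeys-lru suggestion above):

import redis

r = redis.Redis()

# How many keys has Redis evicted so far? Zero suggests eviction never kicks in.
print(r.info('stats')['evicted_keys'])

# Count keys without a TTL; volatile-lru can only evict keys that have an expire set.
no_ttl = sum(1 for key in r.scan_iter(count=1000) if r.ttl(key) == -1)
print('keys without a TTL:', no_ttl)

# If most keys have no TTL, make every key an eviction candidate instead.
r.config_set('maxmemory-policy', 'allkeys-lru')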

Related

Does EX seconds impact performance in Redis?

I tried googling something similar, but wasn't able to find anything on the topic.
I'm just curious: does the size of the number of seconds set on a key impact performance in Redis?
For example:
SET mykey "foobarValue" EX 100 vs. SET mykey "foobarValue" EX 2592000
To answer this question, we need to see how Redis works.
Redis maintains a table of key-value pairs with an expiry time, so each entry can be translated to
<Key: <Value, Expiry> >
There can be other metadata associated with this as well. During GET, SET, DEL, EXPIRE etc. operations, Redis calculates the hash of the given key(s) and tries to perform the operation. Since it's a hash table, it needs to probe during any operation, and while probing it may encounter some expired keys. If you have subscribed to keyspace notifications, a notification is sent and the entry is removed or updated depending on the operation being performed. Redis also rehashes the table, and during rehashing it may find expired keys as well. In addition, Redis runs a background task to clean up expired keys; since that sampling is random, a smaller TTL means more keys expire and more events are generated.
https://github.com/antirez/redis/blob/a92921da135e38eedd89138e15fe9fd1ffdd9b48/src/expire.c#L98
There is a small performance cost when the TTL is small, since Redis has to free the memory and fix up some pointers more often. It can also happen that you run out of memory because expired keys are still present in the database and have not been reclaimed yet. Conversely, if you use a longer expiry time, the key stays in the system longer, which can also create memory pressure.
A smaller TTL also means more cache misses for the client application, so the client will see performance issues as well.
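For reference, the two variants from the question look like this through a client library (a minimal sketch using redis-py; the values and TTLs are just the ones from the question):

import redis

r = redis.Redis()

# Short TTL: the key becomes a candidate for active expiration much sooner.
r.set('mykey', 'foobarValue', ex=100)
print(r.ttl('mykey'))      # roughly 100

# Long TTL: the write itself costs the same, the key just lingers (and uses memory) longer.
r.set('mykey', 'foobarValue', ex=2592000)
print(r.ttl('mykey'))      # roughly 2592000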

How to block all writes and allow only reads in redis server?

I have a use case where I need to put a restriction on used_memory_rss to ensure Redis keeps its RAM usage well within a boundary.
When the RSS of Redis reaches the threshold, Redis should not accept any new DB writes but should still accept DB reads.
How can we achieve this? Could someone please provide some insights here?
You can use the maxmemory configuration to set a memory usage limit, and set the maxmemory-policy configuration to noeviction.
With this configuration, when memory usage reaches the limit, Redis only accepts read operations and returns an error for write operations.
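A minimal sketch of that setup with redis-py (the 100mb limit is an arbitrary example; in practice you would normally put these settings in redis.conf rather than calling CONFIG SET at runtime):

import redis

r = redis.Redis()

# Cap memory and disable eviction so Redis rejects writes instead of evicting keys.
r.config_set('maxmemory', '100mb')
r.config_set('maxmemory-policy', 'noeviction')

try:
    r.set('some-key', 'x' * 1024)
except redis.exceptions.ResponseError as err:
    # Once used memory exceeds the limit, writes fail with:
    # "OOM command not allowed when used memory > 'maxmemory'"
    print('write rejected:', err)

print(r.get('some-key'))   # reads keep working regardless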

Redis cache lru start softlimit

I know Redis can be used as an LRU cache, but is there a soft-limit flag, where we can state that after a specific threshold is reached, Redis will start cleaning up LRU items?
Actually, I'm getting OOM errors on Redis; I've configured Redis as an LRU cache, but it hits the OOM limit and the application stops.
I know of the maxmemory flag, but is there a soft limit where, once only some 10% of space is left, eviction of items can start so that the application doesn't stop?
Did you set a specific eviction policy?
See: Eviction policies http://redis.io/topics/lru-cache
I would then check to make sure that you are not inadvertently calling PERSIST on your Redis keys. Persisted keys (keys without a TTL), I believe, cannot be LRU'd out under the volatile-* policies.
You can use TTL (http://redis.io/commands/ttl) to find out the time limit on your keys, and KEYS (http://redis.io/commands/keys) to get a list of keys (this is dangerous on a production server, as the list could be very long and the call blocking).
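A sketch of that check with redis-py, using SCAN instead of KEYS so a large keyspace does not block the server (the count hint is arbitrary):

import redis

r = redis.Redis()

# A TTL of -1 means the key has no expiry, so volatile-* policies can never evict it.
for key in r.scan_iter(count=500):
    if r.ttl(key) == -1:
        print('no TTL set on', key)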
-daniel

Redis cache updating

EDIT2: Clarification: The code ALREADY has refresh-cache-on-miss logic. What I'm trying to do is reduce the number of cache misses.
I'm using Redis as a cache for an API. The idea is that when the API receives a call it first checks the cache and if the data isn't in cache the API will fetch it and cache it afterwards for next time.
At the moment the configuration is the following:
maxmemory 50mb
maxmemory-policy allkeys-lru
That is, use at most 50mb of memory, keep putting keys in there, and when memory is full start deleting the least recently used keys (LRU).
Now I want to introduce a second category of keys. For this second category I'm going to set a certain expiry time. Now I would like to set up a mechanism such that when these keys expire, the mechanism kicks in and refreshes them (and sets a new expiry).
How do I do this?
EDIT:
Some progress. It turns out that Redis has a pub/sub messaging system which, in particular, can dispatch messages on events. One of these events is keys expiring, which can be enabled as follows:
notify-keyspace-events Ex
I found code that describes a blocking Python process subscribing to Redis' messaging system. It can easily be changed to detect keys expiring and make a call to the API when a key expires, and the API will then refresh the key.
def work(self, item):
    # re-fetch the expired key's data through the API so it gets re-cached
    requests.get('http://apiurl/?q={param}'.format(param=item['data']))
So this does precisely what I was asking about.
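For context, a minimal version of such a subscriber might look like this (a sketch with redis-py; the API URL is the placeholder from the snippet above, and database 0 plus the notify-keyspace-events Ex setting are assumed):

import redis
import requests

r = redis.Redis()
p = r.pubsub()
p.psubscribe('__keyevent@0__:expired')   # one event per key that expires in DB 0

for message in p.listen():               # blocking loop
    if message['type'] != 'pmessage':
        continue                         # skip the subscription confirmation
    expired_key = message['data'].decode()
    # Ask the API to rebuild and re-cache the value for the key that just expired.
    requests.get('http://apiurl/?q={param}'.format(param=expired_key))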
However, this feels way too dangerous and out of control. I can imagine a bunch of different situations under which this will very quickly fail.
So, what's a better solution?
http://redis.io/topics/notifications
Keyspace notifications allows clients to subscribe to Pub/Sub channels
in order to receive events affecting the Redis data set in some way.
Examples of the events that it is possible to receive are the following:
All the keys expiring in the database 0 (for example).
...
EXPIRE generates an expire event when an expire is set to the key, or
a expired event every time setting an expire results into the key
being deleted (see EXPIRE documentation for more info).
To expire keys, just use Redis' built-in expiry mechanism. You don't need to refresh the cache contents on expiry; the simplest approach is to do it when the code experiences a cache miss.
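A sketch of that refresh-on-miss pattern with redis-py (fetch_from_backend, the key prefix, and the 60-second TTL are made-up placeholders):

import json
import redis

r = redis.Redis()

def fetch_from_backend(param):
    # Placeholder for the real API / database call.
    return {'q': param, 'result': '...'}

def get_cached(param, ttl=60):
    key = 'api:{param}'.format(param=param)
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit
    value = fetch_from_backend(param)         # cache miss: fetch fresh data...
    r.set(key, json.dumps(value), ex=ttl)     # ...and re-cache it with a new expiry
    return value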

Redis namespacing basics

I am really new to Redis and have been using it with my Ruby on Rails (Rails 2.3, Ruby 1.8.7) application via the redis gem, as a key-value store for simple tagging functionality. I recently realized that I could use it to maintain a user activity feed as well.
The thing is, I need the tagging data (stored as key => set) to stay in memory, since it is extremely important for determining the results of tagging-related operations, whereas the activity feed data could be deleted on a first-in-first-out basis, assuming I store X activities for every user.
Is it possible to namespace the Redis data sets and have one remain permanently in memory while the other stays there only temporarily? What is the general approach when using unrelated data sets that need different durations of survival in memory?
Would really appreciate any help on this.
You do not need to define a specific namespace for this. With Redis, you can use the EXPIRE command to set a timeout on a key by key basis.
The general policy regarding key expiration is defined in the configuration file:
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
For your purpose, the volatile-lru policy should be set.
You just have to call EXPIRE on the keys you want to be volatile, and let Redis evict them. However, please note that it is difficult to guarantee that the oldest keys will be evicted first once the timeout has been triggered. More explanations here.
For your specific use case however, I would not use key expiration but rather try to simulate capped collections. If the activity feed for a given user is represented as a list of objects, it is easy to LPUSH the activity objects, and use LTRIM to limit the size of the list. You get FIFO behavior and keep memory consumption under control for free.
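A sketch of that capped-list idea with redis-py (the key prefix and the 100-item cap are arbitrary):

import redis

r = redis.Redis()

def push_activity(user_id, activity, max_items=100):
    key = 'feed:{0}'.format(user_id)
    r.lpush(key, activity)           # newest activity goes to the head of the list
    r.ltrim(key, 0, max_items - 1)   # trim the tail: oldest entries drop off first (FIFO)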
UPDATE:
Now, if you really need to isolate data, you have two main possibilities with Redis:
using two distinct databases. Redis databases are identified by an integer, and you can have several of them per instance. Use the SELECT command to switch between databases. Databases can be used to isolate data, but not to assign them different properties (like an expiration policy, for instance).
using two distinct instances. An empty Redis instance is a very light process, so several of them can be started without any problem. It is actually the best and most scalable way to isolate data with Redis. Each instance can have its own policies (including its eviction policy). The clients should open as many connections as there are instances (see the sketch below).
But again, you do not need to isolate data to implement your eviction policy requirements.
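For completeness, a quick sketch of both isolation options with redis-py (the database indexes and the second port are arbitrary examples):

import redis

# Option 1: two logical databases on the same instance (shared maxmemory and eviction policy).
tags_db = redis.Redis(db=0)
feed_db = redis.Redis(db=1)

# Option 2: two separate instances, each with its own maxmemory and eviction policy.
tags_inst = redis.Redis(port=6379)
feed_inst = redis.Redis(port=6380)   # assumes a second Redis started on port 6380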
