I am looking for a way to access the remaining TTL of a Redis key-value pair via Laravel. I don't mind using either the Cache or Redis facades (or anything else, for that matter).
In the API I can only see how to return the default TTL via getDefaultCacheTime().
I want to find the remaining TTL.
For reference, the Redis CLI command is TTL mykey.
Since there's a command method on the Illuminate\Redis\Database class, you can simply run:
Redis::command('TTL', ['yourKey']);
This is documented here.
It turns out (with recent versions of Laravel, anyway) that Redis commands are forwarded via magic methods, so you can simply use:
Redis::ttl('yourKey');
Related
When a lookup for a Redis key fails (using zrangeByScore, for example), it could be either because the key does not exist or because the Redis lookup itself failed. So how does one detect a Redis failure in Java?
Call Redis' EXISTS to distinguish between an empty response and a non-existent key.
If by Redis failure you mean a network or server issue, you can put your code in a try/catch block and look for an exception.
An EXISTS call to Redis will check for a missing key. Also note that for Redis, a missing key and an empty set are the same: if you remove the last member of a sorted set via ZREM, the key is automatically deleted.
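The decision logic above can be sketched in a few lines. Here FakeRedis and RedisDown are stand-ins for a real client and its connection exception (e.g. JedisConnectionException in Jedis); this is a sketch of the pattern, not a particular client's API:

```python
# Sketch: telling "key absent" apart from "Redis unreachable".
# FakeRedis / RedisDown stand in for a real client and its
# connection exception.

class RedisDown(Exception):
    """Simulated network/server failure."""

class FakeRedis:
    def __init__(self, data=None, up=True):
        self.data = data if data is not None else {}
        self.up = up

    def exists(self, key):
        if not self.up:
            raise RedisDown("connection refused")
        return key in self.data

def lookup_status(client, key):
    # EXISTS distinguishes a missing key from an empty result;
    # the exception path catches an actual Redis failure.
    try:
        return "present" if client.exists(key) else "missing"
    except RedisDown:
        return "failure"

print(lookup_status(FakeRedis({"scores": 1}), "scores"))  # present
print(lookup_status(FakeRedis({"scores": 1}), "other"))   # missing
print(lookup_status(FakeRedis(up=False), "scores"))       # failure
```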
I have a problem related to cache maintenance, and although I have looked in several places for the answer, I have found no specific one.
The situation is like this:
We have several MySQL queries which generate menus for our web application. About once a day, we need to update the tables, and those updates affect the menu generation. Naturally, we enclose those updates within a transaction.
So far so good. But to improve speed and responsiveness, and also to reduce database load, we want to use memcached. In all respects, memcached is perfect for this role because the updates happen only once a day.
But what we would like to do is this:
1. Our update script starts, and its first operation is to "suspend" the memcached pool. Once this is done, memcached no longer answers queries and all queries pass through to MySQL. The important thing is that the memcached server still responds quickly with a miss, so that MySQL comes into action quickly. The other important thing is that during this period, memcached will refuse to set any data.
2. Flush all data in the memcached pool.
3. The update script runs.
4. Restore memcached to normal operation.
Steps 1 and 4 are where I am stuck.
Our technology is based around mysql and PHP. I am using the nginx memcached module to directly retrieve data from memcached. But the PHP which sets the cache could run in many different places.
Having said that, I am open to using any language or technology. This is a generic enough problem and we could discuss anything that works best.
Thanks in advance for responses.
The usual method of (atomically) swapping from one set of cached data to another is a namespace: a prefix stored under its own key, queried first before going on to fetch the main cached data.
It works like this:
You have a 'namespace' under a key - it could be date/time based for example - menuNamespace = 'menu:15050414:' (the 2015-05-04, 2pm menu build).
That key is a prefix for all the actual data for the menus, or other data, eg: menu:15050414:top-menu, menu:15050414:l2-menu, etc, etc
The back end system builds a new set of cached data with new keys: menu:15050510:top-menu, menu:15050510:l2-menu
Only when the new data is in place do you change the cached namespace key entry from 'menu:15050414:' to 'menu:15050510:'.
The next time the namespace is fetched, it is used as a prefix to then fetch the new data.
There is more on this in the memcached FAQ/tricks page on namespacing.
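The steps above can be sketched as follows. A plain dict stands in for the memcached pool, and the key names (menuNamespace, the menu:&lt;build&gt;: prefix) follow the example in the answer:

```python
# Namespacing sketch: a dict stands in for memcached.
cache = {}

def get_menu(item):
    ns = cache.get("menuNamespace", "")   # 1. fetch the current prefix
    return cache.get(ns + item)           # 2. fetch the data under it

def publish_menus(build_id, menus):
    prefix = "menu:%s:" % build_id
    for name, data in menus.items():      # build the new keys first
        cache[prefix + name] = data
    cache["menuNamespace"] = prefix       # switch over only once in place

publish_menus("15050414", {"top-menu": "<ul>old</ul>"})
publish_menus("15050510", {"top-menu": "<ul>new</ul>"})
print(get_menu("top-menu"))               # the 2015-05-05 build wins
```

Because readers always resolve the prefix first, the switch-over is a single key write: old keys simply stop being referenced and age out.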
Based on @alister_b's initial answer, there is a simpler way to solve my initial problem.
The key is to signal to the PHP code to stop setting cache values. That can be done through a memcached entry like setCache:false, or through a MySQL column.
Then, a flush command will guarantee nginx cache misses.
Once the tables are updated, setCache is set to true and normal sets by PHP resume.
This will work with my Ajax calls without issues.
It is not mutually exclusive with namespaces.
I am using Redis as a cache and would like to expire data in Redis that is not actively used. Currently, setting an expiry for an object deletes the object after the expiry time has elapsed. However, I would like to retain the object in Redis if it is read at least once before it expires.
One way I can see is to store a separate expiry_key for every object and set the expiry on the expiry_key instead of the original object. Subscribe to the del notification on the expiry_key; when a del notification is received, check whether the object was read at least once (via a separately maintained access log) during the expiry interval. If the object was not read, execute a del command on the original object. If it was read, recreate the expiry_key with the expiry interval.
This implementation requires additional systems to manage expiry, and I would prefer to handle it locally within Redis.
Are there better solutions to solve this?
Resetting the expiry of the object on every read would increase the number of writes to Redis, so that is not an option.
Note that the Redis cache refresh is managed asynchronously via a change-notification system.
You could just set the expiry key again after each read (setting a TTL on a key is O(1)).
It may make sense for your system to do this in a transaction:
MULTI
GET mykey
EXPIRE mykey 10
EXEC
You could also pipeline the commands.
This pattern is also described in the official documentation.
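The read-then-refresh pattern can be sketched without a server. TtlStore below is a toy stand-in for Redis keys with expiry, and get_and_refresh mirrors the MULTI / GET / EXPIRE / EXEC sequence above:

```python
import time

class TtlStore:
    """Toy stand-in for Redis with per-key expiry."""
    def __init__(self):
        self.data = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self.data[key] = (value, time.time() + ttl)

    def get(self, key):
        item = self.data.get(key)
        if item is None or item[1] < time.time():
            self.data.pop(key, None)  # lazy expiration on access
            return None
        return item[0]

    def expire(self, key, ttl):
        if key in self.data:
            self.data[key] = (self.data[key][0], time.time() + ttl)

def get_and_refresh(store, key, ttl=10):
    # Equivalent of MULTI / GET mykey / EXPIRE mykey 10 / EXEC:
    # every successful read pushes the expiry out again.
    value = store.get(key)
    if value is not None:
        store.expire(key, ttl)
    return value
```

With this, any object read at least once during its window stays alive, while untouched objects expire, which is what the question asks for, at the cost of one extra write per read.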
Refer to section "Configuring Redis as a cache" in http://redis.io/topics/config
We can set maxmemory-policy to allkeys-lru to clear inactive content from Redis. This would work for the use case I have stated.
Another way is to define a notification on the key and then reset its expiration; see here.
I want to implement a session store based on Redis. I would like to put session data into Redis, but I don't know how to handle session expiry. I could loop through all the Redis keys (session IDs) and evaluate the last access time and max idle time, but that means loading all the keys into the client, and with perhaps 1000m session keys this may lead to very poor I/O performance.
I want to let Redis manage the expiry, but there is no listener or callback when a key expires, so it is impossible to trigger an HttpSessionListener. Any advice?
So you need your application to be notified when a session expires in Redis.
While Redis does not support this feature, there are a number of tricks you can use to implement it.
Update: as of version 2.8.0, Redis does support this: http://redis.io/topics/notifications
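With 2.8+ keyspace notifications, the setup is roughly the following (the Ex flags enable keyevent notifications for expired events; database 0 is assumed in the channel name):

```
CONFIG SET notify-keyspace-events Ex
PSUBSCRIBE __keyevent@0__:expired
```

A subscriber on that channel receives the name of each expired key, which is enough to fire an HttpSessionListener, with the caveat about expiration timing accuracy discussed below.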
First, people are thinking about it: this is still under discussion, but it might be added to a future version of Redis. See the following issues:
https://github.com/antirez/redis/issues/83
https://github.com/antirez/redis/issues/594
Now, here are some solutions you can use with the current Redis versions.
Solution 1: patching Redis
Actually, adding a simple notification when Redis performs key expiration is not that hard. It can be implemented by adding 10 lines to the db.c file of Redis source code. Here is an example:
https://gist.github.com/3258233
This short patch posts a key to the #expired list if the key has expired and starts with a '#' character (arbitrary choice). It can easily be adapted to your needs.
It is then trivial to use the EXPIRE or SETEX commands to set an expiration time for your session objects, and write a small daemon which loops on BRPOP to dequeue from the "#expired" list, and propagate the notification in your application.
An important point is to understand how the expiration mechanism works in Redis. There are actually two different paths for expiration, both active at the same time:
Lazy (passive) mechanism. The expiration may occur each time a key is accessed.
Active mechanism. An internal job regularly (randomly) samples a number of keys with expiration set, trying to find the ones to expire.
Note that the above patch works fine with both paths.
The consequence is that Redis expiration times are not accurate. If all keys have an expiration but only one is about to expire, and it is not accessed, the active expiration job may take several minutes to find and expire that key. If you need accuracy in the notification, this is not the way to go.
Solution 2: simulating expiration with zsets
The idea here is to not rely on the Redis key expiration mechanism, but simulate it by using an additional index plus a polling daemon. It can work with an unmodified Redis 2.6 version.
Each time a session is added to Redis, you can run:
MULTI
SET <session id> <session content>
ZADD to_be_expired <current timestamp + session timeout> <session id>
EXEC
The to_be_expired sorted set is just an efficient way to access the first keys that should be expired. A daemon can poll on to_be_expired using the following Lua server-side script:
local res = redis.call('ZRANGEBYSCORE', KEYS[1], 0, ARGV[1], 'LIMIT', 0, 10)
if #res > 0 then
    redis.call('ZREMRANGEBYRANK', KEYS[1], 0, #res-1)
    return res
else
    return false
end
The command to launch the script would be:
EVAL <script> 1 to_be_expired <current timestamp>
The daemon will get at most 10 items at a time. For each of them, it has to use the DEL command to remove the session and notify the application. If at least one item was actually processed (i.e. the Lua script returned a non-empty result), the daemon should loop again immediately; otherwise a one-second wait can be introduced.
Thanks to the Lua script, it is possible to launch several polling daemons in parallel (the script guarantees that a given session will only be processed once, since the keys are removed from to_be_expired by the Lua script itself).
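The daemon's core step can be sketched without a server. A sorted Python list stands in for the to_be_expired zset, and pop_expired mirrors the ZRANGEBYSCORE + ZREMRANGEBYRANK pair from the Lua script:

```python
# Sketch of the polling daemon's core step (Solution 2).
import bisect

to_be_expired = []  # sorted list of (expiry_timestamp, session_id)

def schedule(session_id, expires_at):
    # Equivalent of ZADD to_be_expired <expires_at> <session_id>
    bisect.insort(to_be_expired, (expires_at, session_id))

def pop_expired(now, limit=10):
    # Entries are sorted by timestamp, so expired ones form a prefix,
    # just like ZRANGEBYSCORE 0 <now> LIMIT 0 10 + ZREMRANGEBYRANK.
    expired = [sid for ts, sid in to_be_expired[:limit] if ts <= now]
    del to_be_expired[:len(expired)]
    return expired
```

A real daemon would then DEL each returned session key and notify the application, sleeping for a second whenever the batch comes back empty.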
Solution 3: use an external distributed timer
Another solution is to rely on an external distributed timer. The beanstalkd lightweight queuing system is a good candidate for this.
Each time a session is added to the system, the application posts the session ID to a beanstalkd queue with a delay corresponding to the session timeout. A daemon listens on the queue; when it can dequeue an item, it means a session has expired. It then just has to clean up the session in Redis and notify the application.
I am really new to Redis and have been using it with my Ruby on Rails application (Rails 2.3, Ruby 1.8.7) via the redis gem, as a key-value store for simple tagging functionality. I recently realized that I could use it to maintain a user activity feed as well.
The thing is, I need the tagging data (stored as key => set) to stay in memory, as it is extremely important for determining the results of tagging-related operations, whereas the activity-feed data could be deleted on a first-in, first-out basis, assuming I store X activities for every user.
Is it possible to namespace the Redis data sets and have one remain permanently in memory while the other stays only temporarily? What is the general approach when one uses unrelated data sets that need different durations of survival in memory?
Would really appreciate any help on this.
You do not need to define a specific namespace for this. With Redis, you can use the EXPIRE command to set a timeout on a key-by-key basis.
The general policy regarding key expiration is defined in the configuration file:
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
For your purpose, the volatile-lru policy should be set.
You just have to call EXPIRE on the keys you want to be volatile and let Redis evict them. However, please note it is difficult to guarantee that the oldest keys will be evicted first once the timeout has been triggered. More explanations here.
For your specific use case, however, I would not use key expiration but rather simulate capped collections. If the activity feed for a given user is represented as a Redis list, it is easy to LPUSH the activity objects and use LTRIM to limit the size of the list. You get FIFO behavior and keep memory consumption under control for free.
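The capped-collection idea is a two-command pattern; as a sketch, with a Python list standing in for the Redis list and a hypothetical cap of 3:

```python
# LPUSH + LTRIM sketch: keep only the `cap` newest activities.
def push_activity(feed, item, cap=100):
    feed.insert(0, item)   # LPUSH activity:<user-id> <item>
    del feed[cap:]         # LTRIM activity:<user-id> 0 <cap-1>
    return feed

feed = []
for i in range(5):
    push_activity(feed, i, cap=3)
print(feed)  # [4, 3, 2] -- newest first, oldest dropped
```

In Redis both commands are O(1)/O(N-trimmed) and can be sent in a pipeline or MULTI block, so each activity write stays cheap.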
UPDATE:
Now, if you really need to isolate data, you have two main possibilities with Redis:
using two distinct databases. Redis databases are identified by an integer, and you can have several of them per instance. Use the SELECT command to switch between databases. Databases can be used to isolate data, but not to assign them different properties (such as an eviction policy).
using two distinct instances. An empty Redis instance is a very lightweight process, so several of them can be started without any problem. This is actually the best and most scalable way to isolate data with Redis. Each instance can have its own policies (including its eviction policy). The clients should open as many connections as there are instances.
But again, you do not need to isolate data to implement your eviction policy requirements.