Why is there a "Phantom" value attached to the Redis Key value? - spring-boot

"refine-address:3_9b712c0d6213a5c26cd397d330599d442e9974a8417aa92bf448e5121103526b:phantom"
"refine-address:-3_9b712c0d6213a5c26cd397d330599d442e9974a8417aa92bf448e5121103526b"
"refine-address:-12_9b712c0d6213a5c26cd397d330599d442e9974a8417aa92bf448e5121103526b:phantom"
"refine-address:-12_9b712c0d6213a5c26cd397d330599d442e9974a8417aa92bf448e5121103526b"
"refine-address"
If I save data to Redis and then run the "keys *" command, these are the keys that come back.
What does "phantom" mean?
Is Redis performance affected because an extra key containing "phantom" is forcibly generated?

Related

Redis expire a key but don't delete?

In Redis, is there a way to mark a key as expired but not actually drop it from the database? A use case for this would be to let a client know that it needs to pull fresh data from the producer of that data, but, if the producer is unreachable, still have the stale data available in the cache. This would be simpler because one would not have to maintain a separate database / Redis cache just to keep stale data alongside the data that is meant to expire and trigger an update to the cache.
There's no built-in way to do that.
To achieve the goal, you have to do the expiration work yourself. You can save the data into a hash together with the last modification time:
hset key data value time last-modification-time
When you retrieve the data, compare the last-modification-time with the current time to check whether the data is stale.
When you update the value, also update the last-modification time to the current time.
To make this easy to use, and atomic, wrap the logic in a Lua script.
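A rough, non-atomic sketch of that idea in Java with the Jedis client (key and field names are only placeholders; the Lua script mentioned above would be the way to make the read-and-check atomic):

import redis.clients.jedis.Jedis;
import java.util.Map;

public class SoftExpireExample {
    private static final long STALE_AFTER_MS = 60_000;   // treat data older than one minute as "expired"

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Write the value together with its last-modification time.
            jedis.hset("key", "data", "some value");
            jedis.hset("key", "time", String.valueOf(System.currentTimeMillis()));

            // Read it back and compare the stored time with the current time.
            Map<String, String> stored = jedis.hgetAll("key");
            long lastModified = Long.parseLong(stored.get("time"));
            boolean stale = System.currentTimeMillis() - lastModified > STALE_AFTER_MS;

            // The stale value is still available; the flag tells the caller to refresh it.
            System.out.println("value=" + stored.get("data") + " stale=" + stale);
        }
    }
}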

Is it better to use a txt file to get the current counter value instead of a database?

I am working on a website in Laravel where I load the current counter value from the database, and then the user clicks a button to increase the score.
But since the website has around 4000 concurrent users at any given time, the database connections are taking their toll on the server and resulting in timeouts.
If I load the current score from a txt file and then write it back to the same file, will that be better?
Or should I use an application variable to store the score?
I have tried using the cache, but it doesn't pull the latest value. Database optimization is also not working because of the number of users.
I am looking for the best way to show and increment the counter without using the database.
A database would actually do a better job here; a NoSQL store is perfect for your use case. You can use Redis: it keeps the data in memory (RAM), which means reads and writes are much faster than with a database that operates on secondary storage (hard drive).
Redis also has a built-in way to increment values: the INCR command. INCR increments the number stored at a key by one; if the key does not exist, it is set to 0 before the operation is performed.
For example, say the key that holds the value is my_counter. You can play around with it in redis-cli like so:
redis> SET my_counter "10"
"OK"
redis> INCR my_counter
(integer) 11
redis> GET my_counter
"11"
Fortunately, there is a Redis client for Laravel. You can have a read here:
https://laravel.com/docs/5.8/redis
Good luck :)
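The question is about Laravel, but the same call works from any Redis client; here is a minimal Java/Jedis sketch of the increment, purely as an illustration:

import redis.clients.jedis.Jedis;

public class CounterExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INCR is atomic on the server, so concurrent clicks from many users
            // never lose an increment (no read-modify-write race in the application).
            long newScore = jedis.incr("my_counter");
            System.out.println("score is now " + newScore);

            // Reading the current value for display:
            System.out.println("current score: " + jedis.get("my_counter"));
        }
    }
}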
Edit 1:
If a high number of users is causing the server to slow down, you also have server and architectural options that can be used alongside a new database, such as horizontal and vertical scaling.
References:
https://github.com/phpredis/phpredis

How to keep a cache up to date

When Memcached or Redis is used for data-storage caching, how is the cache updated when the value changes?
For example, if I read key1 from the cache the first time and it misses, I pull value1 and put key1=value1 into the cache.
After that, the value of key1 changes to value2.
How is the value in the cache updated or invalidated?
Does that mean that whenever key1's value changes, either the application or the database needs to check whether key1 is in the cache and update it?
Since you are using a cache, you have to tolerate the data-inconsistency problem: at some point in time, the data in the cache differs from the data in the database.
You don't need to update the value in the cache every time the value changes. Otherwise the whole cache system becomes very complicated (e.g. you have to maintain a list of keys that have been cached), and it might also be unnecessary (e.g. the key-value pair might be used only once and never need to be updated again).
How can we update the data in the cache and keep the cache system simple?
Normally, besides setting or updating a key-value pair in the cache, we also set a TIMEOUT for each key. After that, clients can get the key-value pair from the cache. However, once a key reaches its timeout, the cache system removes the key-value pair; the key is said to have EXPIRED. The next time a client tries to get that key from the cache, it gets nothing; this is called a CACHE MISS. In that case, the client has to get the key-value pair from the database and put it back into the cache with a new timeout.
If the data is updated in the database while the key has NOT yet expired in the cache, clients will get inconsistent data. However, once the key has expired, its value will be retrieved from the database and inserted into the cache by some client. After that, the other clients will get the updated data until it changes again.
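A minimal sketch of that read path, assuming a Jedis client and a hypothetical loadFromDatabase stand-in for the real database call:

import redis.clients.jedis.Jedis;

public class CacheAsideExample {
    private static final int TIMEOUT_SECONDS = 300;   // the timeout discussed above; tune it for your trade-off

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            System.out.println(getValue(jedis, "key1"));
        }
    }

    static String getValue(Jedis jedis, String key) {
        // 1. Try the cache first.
        String value = jedis.get(key);
        if (value != null) {
            return value;                              // cache hit
        }
        // 2. Cache miss (the key was never set or has expired): go to the database.
        value = loadFromDatabase(key);
        // 3. Put the fresh value back into the cache with a new timeout.
        jedis.setex(key, TIMEOUT_SECONDS, value);
        return value;
    }

    // Hypothetical stand-in for the real database lookup.
    static String loadFromDatabase(String key) {
        return "value-from-db-for-" + key;
    }
}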
How to set the timeout?
Normally, there are two kinds of expiration policy:
Expire in N seconds/minutes/hours...
Expire at some future point in time, e.g. expire at 2017/7/30 00:00:00
A large timeout greatly reduces the load on the database, but the data might be out of date for a long time. A small timeout keeps the data as up to date as possible, but puts a heavy load on the database. So you have to balance this trade-off when choosing the timeout.
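In Redis these two policies map onto the EXPIRE and EXPIREAT commands; a small Jedis sketch (exact method signatures vary a little between Jedis versions):

import redis.clients.jedis.Jedis;

public class ExpirePolicies {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("key1", "value1");

            // Policy 1: expire N seconds from now (relative timeout).
            jedis.expire("key1", 600);

            // Policy 2: expire at an absolute time, given as a Unix timestamp in seconds
            // (1501372800 is 2017/7/30 00:00:00 UTC).
            jedis.expireAt("key1", 1501372800L);
        }
    }
}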
How does Redis expire keys?
Redis has two ways to expire keys:
When a client tries to operate on a key, Redis checks whether the key has reached its timeout. If it has, Redis removes the key and acts as if the key doesn't exist. In this way, Redis ensures that clients never see expired data.
Redis also has a background expiration cycle that samples keys at a configured frequency; keys that have reached their timeout are removed. In this way, Redis speeds up the key expiration process.
You can simply empty (delete) the cached value in the API function where the insertion or update of that particular value is performed. That way the server will fetch the updated value on the next request, because you have already emptied the cached value.
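A sketch of that approach (Java/Jedis, with a hypothetical updateInDatabase standing in for the real write):

import redis.clients.jedis.Jedis;

public class InvalidateOnWrite {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            updateValue(jedis, "key1", "value2");
        }
    }

    static void updateValue(Jedis jedis, String key, String newValue) {
        updateInDatabase(key, newValue);   // persist the new value in the primary store
        jedis.del(key);                    // empty the cached copy; the next read reloads the fresh value
    }

    // Hypothetical stand-in for the real database update.
    static void updateInDatabase(String key, String newValue) {
        // ... write newValue to the database ...
    }
}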
I had a similar issue related to stale data, especially in two cases:
When I get bulk messages/events
In this use case I write a score to the Redis cache and read it again in a subsequent call. With bulk messages, because of Redis's weak consistency, the data might not yet be replicated to all replicas when I read against the same key again (which is generally only a few milliseconds, 1-2 ms, later).
Remediation:
In this case I was getting stale data. To address that, I used a cache on top of the cache, i.e. a loading TTL cache in front of the Redis cache. The data is checked in the loading cache first; if it is not present there, the Redis cache is checked. Once done, both caches are updated (see the sketch after this answer).
In a distributed system (k8s) where I have multiple pods
(Kafka is being used as the messaging broker)
With the above strategy we hit another problem: what if data for a key previously served by, say, pod1 now reaches pod2? This has a bigger impact, as it leads to data inconsistencies.
Remediation:
Here the Kafka partition key was set to the same "key" that is set in Redis. This way, subsequent messages for a particular key always reach one particular pod. If pods restart, the cache is simply built again.
This solved our problem.
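A rough sketch of the "cache on cache" idea from the first case above, using a tiny in-process TTL map in front of Redis (the TTL value and key names are only placeholders):

import redis.clients.jedis.Jedis;
import java.util.concurrent.ConcurrentHashMap;

public class TwoLevelCache {
    private static final long LOCAL_TTL_MS = 1_000;   // short local TTL, long enough to cover Redis replication lag

    private final ConcurrentHashMap<String, Entry> local = new ConcurrentHashMap<>();
    private final Jedis jedis;

    TwoLevelCache(Jedis jedis) {
        this.jedis = jedis;
    }

    void putScore(String key, String score) {
        jedis.set(key, score);                                            // write to Redis
        local.put(key, new Entry(score, System.currentTimeMillis()));     // and to the local loading cache
    }

    String getScore(String key) {
        Entry e = local.get(key);
        if (e != null && System.currentTimeMillis() - e.writtenAt < LOCAL_TTL_MS) {
            return e.value;                                               // fresh local copy, skip Redis
        }
        String value = jedis.get(key);                                    // fall back to Redis
        if (value != null) {
            local.put(key, new Entry(value, System.currentTimeMillis())); // refresh the local copy
        }
        return value;
    }

    private static final class Entry {
        final String value;
        final long writtenAt;
        Entry(String value, long writtenAt) {
            this.value = value;
            this.writtenAt = writtenAt;
        }
    }
}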

How to get values over n minutes old on redis?

I have a Redis datastore with data stored using alphanumeric, non-date keys. How might I get the values that have been stored longer than a certain time period?
Store the name of every key you add in a Sorted Set, with the score being the creation timestamp. To retrieve ranges, such as keys created before time x, use ZRANGEBYSCORE (or ZRANGE with the BYSCORE option).
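A minimal Java/Jedis sketch of that pattern (the sorted-set name keys-by-creation-time is just an example):

import redis.clients.jedis.Jedis;

public class KeyAgeIndex {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // On every write, also record the key name in a sorted set,
            // scored by its creation time (Unix seconds).
            long now = System.currentTimeMillis() / 1000;
            jedis.set("abc123", "some value");
            jedis.zadd("keys-by-creation-time", now, "abc123");

            // Later: list every key created more than 10 minutes ago.
            long cutoff = System.currentTimeMillis() / 1000 - 600;
            for (String key : jedis.zrangeByScore("keys-by-creation-time", 0, cutoff)) {
                System.out.println(key + " -> " + jedis.get(key));
            }
        }
    }
}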

Cassandra - What really happens once the key cache gets filled?

Consider that I have configured 1 MB of key cache (say it can hold 13000 keys).
Then I write some records into a column family (say 20000).
Then I read them for the first time (all keys sequentially, in the same order they were written), and the keys start being stored in the key cache.
When the reads reach key #13000, the key cache is completely full.
What happens to the key cache when the next keys are read? (Which key is removed to make room for the newly read key?)
Does the key cache follow FIFO, LIFO, or random eviction?
Key cache uses ConcurrentLinkedHashMap underneath and hence its eviction policy is LRU (least recently used).
https://code.google.com/p/concurrentlinkedhashmap/#Features
https://code.google.com/p/concurrentlinkedhashmap/wiki/Design#Beyond_LRU
