What are Redis value operations in Spring Boot?
Does this mean we can store a key-value pair directly in the Redis database, without creating an entity and so on, just by using RedisTemplate<String, Object>?
Also, if we use ValueOperations, how will it impact performance?
When using Redis, you should think about which data format/datatype suits your needs best, just as you would when coding in any general-purpose programming language. The interfaces ValueOperations, ListOperations, SetOperations, HashOperations, and StreamOperations are the support RedisTemplate provides for interacting with the corresponding Redis data types.
When you use ValueOperations, you are more or less treating your whole Redis instance as a giant hash map. For example, you can store entries in Redis like current_user = "John Doe". However, you can also do something silly such as keeping a string representation of a huge hash map under a single key, top_users = <huge_string_representing_a_hash_map>. In that second case, what if you want to get the value for one key inside that hash map? The task becomes more or less impossible without transferring the whole hash map into RAM. Had you used Redis Hashes and HashOperations instead, it would have been trivial.
Going back to your question: storing a simple object using ValueOperations won't degrade performance. In contrast, if you are moving huge maps around, you'll utilise a lot of your network bandwidth and RAM capacity.
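As a rough illustration, here is a minimal Spring Data Redis sketch contrasting the two (the service class, key names, and bean wiring are made up for the example; it assumes a configured RedisTemplate<String, Object> bean):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;
import org.springframework.stereotype.Service;

@Service
public class UserCacheService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // ValueOperations: the whole value is read or written as one unit.
    public void saveCurrentUser(String name) {
        ValueOperations<String, Object> ops = redisTemplate.opsForValue();
        ops.set("current_user", name);      // current_user = "John Doe"
    }

    // HashOperations: individual fields can be read or written without
    // transferring the whole map over the network.
    public void recordTopUserScore(String user, long score) {
        HashOperations<String, String, Object> hash = redisTemplate.opsForHash();
        hash.put("top_users", user, score); // touches only one field
    }

    public Object getTopUserScore(String user) {
        return redisTemplate.opsForHash().get("top_users", user);
    }
}
```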
In summary, choose your Redis data types carefully to suit your needs.
https://redis.io/topics/data-types
While writing some Dart code, I recently found a case where it is simpler to use certain objects as map keys. This code currently runs in web mode (compiled to JavaScript), but eventually it will also be used in Dart VM mode.
Can using objects as map keys have a significant memory/performance impact in Dart?
I couldn't find many resources on benchmarking this (https://dart.dev/articles/benchmarking is defunct), so any directions are also welcome.
There are some considerations:
Memory
Since each object used as a key is referenced by the Map, those objects cannot be garbage collected unless the Map itself becomes collectible or the key is removed from the Map. The same goes for the value associated with each key.
As for the storage the Map uses to keep keys and values, it makes no difference whether you use e.g. a String or a custom object as a key, since both kinds of keys are just stored as references.
Performance
The performance of operations on a Map is very much a question of the performance of the == operator and the hashCode property, since both are used by several of the operations on a Map.
I can recommend reading about them here:
https://api.dart.dev/stable/2.7.2/dart-core/Object/operator_equals.html
https://api.dart.dev/stable/2.7.2/dart-core/Object/hashCode.html
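Dart's contract here mirrors the equals/hashCode contract in Java, so as a rough sketch of the same idea (the Point class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical key class: equals (Dart's ==) and hashCode must agree,
// otherwise lookups miss entries that are "equal" but hash differently.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Cheap and consistent with equals; map operations hash the key first,
    // then compare candidate keys with equals.
    @Override public int hashCode() { return Objects.hash(x, y); }
}

public class MapKeyDemo {
    public static void main(String[] args) {
        Map<Point, String> labels = new HashMap<>();
        labels.put(new Point(1, 2), "origin-ish");
        // Found because equality is value-based, not identity-based:
        System.out.println(labels.get(new Point(1, 2))); // origin-ish
    }
}
```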
Conclusion
A lot of projects use custom objects as keys for Maps and usually there are no problems in doing that. In fact, using a custom object is no different from using e.g. a String as a key, since String is also just a normal class with its own == operator and hashCode property.
I'm learning about Redis/memcached and Redis is clearly the more popular option. My question is about supported data types. At my company we use the memcashier library, which is built on memcached. We store temporary user data in memcache while they're making a purchase, and we can easily update this object as things are added to the cart or more info about the user comes in. This appears to be the same functionality as a hash in Redis. I don't understand how this is only a basic string data type, and why that is less powerful than a hash.
If you are using strings, that's fine - but any change involves loading the data into your application, parsing it, modifying it, and serializing it back to Redis/Memcache.
This has two problems: it's slow and non-atomic. Two servers modifying the same object can arrive at an inconsistent state - such as duplicated or missing items in a shopping cart. And again, it's slow.
With a Redis hash key, you can atomically modify specific fields of the object without loading the entire object into memory. Instead of read, parse, modify, save - you just update.
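For instance, here is a minimal sketch of what "just update" looks like using the Jedis client (the cart key and field names are invented):

```java
import redis.clients.jedis.Jedis;

public class CartHashDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Each cart is a Redis hash; fields are item ids, values are quantities.
            jedis.hset("cart:42", "sku:1001", "2");

            // Atomic, server-side update of a single field: no read-parse-
            // modify-save round trip, so two app servers can't clobber
            // each other's writes.
            jedis.hincrBy("cart:42", "sku:1001", 1);

            // Fetch just one field instead of the whole object.
            System.out.println(jedis.hget("cart:42", "sku:1001")); // "3"
        }
    }
}
```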
Besides, Redis has many many data structures that can create very flexible data stores with different properties, whereas Memcache can only store strings.
BTW Redis has a module that allows you to store JSON objects just as you would a string, and manipulate them directly and atomically without getting them to the client. See Rejson.io for details.
Memcached doesn't support complex data structures.
In Redis you have Lists, Sets, Sorted Sets, Hashes, and more.
Each of these data structures supports mutating one or more of its elements atomically, without replacing the entire data structure/value.
Memcached, on the other hand, is a simple key-value store - every operation that changes an attribute within a complex object is a read-modify-write. If you just go around blindly replacing fields in objects, you risk race conditions and atomicity issues (which you can escape by using CAS - see the sketch below).
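A CAS retry loop with the spymemcached client might look roughly like this (the key, the addItem helper, and the assumption that the key already exists are all illustrative):

```java
import java.net.InetSocketAddress;
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;

public class CasDemo {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));
        // Optimistic locking: retry until no other writer has changed the
        // value between our gets() and cas().
        while (true) {
            CASValue<Object> current = client.gets("cart:42");
            Object updated = addItem(current.getValue()); // app-level merge
            if (client.cas("cart:42", current.getCas(), updated) == CASResponse.OK) {
                break; // our write won; otherwise loop and re-read
            }
        }
        client.shutdown();
    }

    // Hypothetical helper: deserialize, mutate, re-serialize the cart.
    static Object addItem(Object cart) { return cart; }
}
```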
If the library abstracts that complexity away, great - but it's still less efficient than mutating only the relevant field(s).
This answer only relates to your use case. Redis has many other virtues over memcached that are not relevant to this question.
I need your advice on Redis data types for my project. The project is a torrent tracker (Ruby, simple Sinatra-based) with a pure in-memory data store for current information about peers. This feels like exactly what Redis was made for, but I'm stuck choosing the proper data types. For now I lean towards the following setup:
Use a list for seeders. What I actually need is a ring buffer, so I can fetch a sequential range of seeders (of a given size, from a given start position) and save the new start position for next time.
Use a sorted set for leechers. The score for each leecher is downloaded/(downloaded+left), so I can also extract a range for any specific case.
All values in the set and list are bencoded string representations of peer data. (A minimal sketch of this setup follows this list.)
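Here is the sketch, using the Jedis client (key names, the info_hash, and the window size are made up):

```java
import redis.clients.jedis.Jedis;

public class TrackerSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String info = "abc123"; // torrent info_hash (made up)

            // Seeders: a list, read in windows to approximate a ring buffer.
            jedis.rpush("seeders:" + info, "<bencoded peer>");
            long start = 0, size = 50; // offset kept by the application
            System.out.println(jedis.lrange("seeders:" + info, start, start + size - 1));

            // Leechers: a sorted set scored by completion ratio.
            double downloaded = 700, left = 300;
            jedis.zadd("leechers:" + info, downloaded / (downloaded + left), "<bencoded peer>");
            // Range of the most-complete leechers:
            System.out.println(jedis.zrevrangeByScore("leechers:" + info, 1.0, 0.5));
        }
    }
}
```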
What I actually lack in the setup above is:
The need to store an offset for seeders, which means data access needs synchronization.
No obvious way to find a specific seeder in the list. A set would help here, but then I couldn't extract a range of items at once.
(General problem) A TTL for set/list members (in case a client shuts down without sending any data first). One option is to make each peer an ordinary key (string or hash), give it a TTL, subscribe to its expiry, and delete it from the corresponding list or set - see the sketch after this list.
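That expiry idea could look roughly like this with Jedis (an assumption-heavy sketch: it requires notify-keyspace-events set to Ex in redis.conf, key names are illustrative, and expiry notifications are fire-and-forget, so a missed event leaves a stale entry):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Each peer gets its own key with a TTL; on expiry, remove the peer
// from the list/set indexes.
public class PeerExpiryListener {
    public static void main(String[] args) {
        try (Jedis subscriber = new Jedis("localhost", 6379);
             Jedis writer = new Jedis("localhost", 6379)) {
            subscriber.psubscribe(new JedisPubSub() {
                @Override
                public void onPMessage(String pattern, String channel, String expiredKey) {
                    // expiredKey looks like "peer:<id>"; drop it from the indexes.
                    writer.lrem("seeders:abc123", 0, expiredKey);
                    writer.zrem("leechers:abc123", expiredKey);
                }
            }, "__keyevent@0__:expired"); // blocks and dispatches events
        }
    }
}
```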
What could you suggest? Any practical advice?
I need a fast and reliable key-value store for Ruby. Is there anything like that already?
The requirement is for it to run wholly inside the Ruby process, not needing any outside processes.
It might be in-memory with explicit disk flushes.
It needs minimal value-for-key retrieval times; write times matter less.
The amount of data stored won't be terrible: a few hundred thousand keys, each with a ~1 KB text value.
It turns out that the best option for me was to use a plain Hash along with Marshal to serialize it to disk.
YAML is definitely too slow for that number of objects.
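For comparison, the equivalent pattern in Java (an in-memory map flushed to disk explicitly, analogous to Marshal.dump/Marshal.load; the file name and contents are made up) is a rough sketch like this:

```java
import java.io.*;
import java.util.HashMap;

public class FlushableStore {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        HashMap<String, String> store = new HashMap<>();
        store.put("key", "~1kb of text...");

        // Explicit flush to disk, like Marshal.dump(hash, file) in Ruby.
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("store.bin"))) {
            out.writeObject(store);
        }

        // Reload on startup, like Marshal.load(file).
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream("store.bin"))) {
            @SuppressWarnings("unchecked")
            HashMap<String, String> loaded = (HashMap<String, String>) in.readObject();
            System.out.println(loaded.get("key"));
        }
    }
}
```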
Thanks to #ian-armit for reinforcing my trust in the core Ruby libraries.
You could also try Moneta, which allows you to build your own key-value store embedded in a Ruby process.
Like DBM? http://www.ruby-doc.org/stdlib-1.9.3/libdoc/dbm/rdoc/DBM.html
The DBM class provides a wrapper to a Unix-style dbm or Database Manager library.
Dbm databases do not have tables or columns; they are simple key-value data stores, like a Ruby Hash except not resident in RAM. Keys and values must be strings.
You could try Oria: https://github.com/intridea/oria
Oria (oh-rye-uh) is an in-memory, Ruby-based, zero-configuration Key-Value Store. It's designed to handle moderate amounts of data quickly and easily without causing deployment issues or server headaches. It uses EventMachine to provide a networked interface to a semi-persistent store and asynchronously writes the in-memory data to YAML files.
Check out PStore. Not sure if it's fast enough though.
Daybreak is a nice new option. Data is stored in a table in memory, so Ruby niceties are available (each, filter, map, reduce, etc.), and it appears to be faster than PStore or DBM.
See this blog post for more info.
There's LevelDB; here are the Ruby bindings.
Does there exist some sort of persistent key-value-like store that allows quick and easy incrementing, decrementing, and retrieval of integers (and nothing else)? I know I could implement something with a SQL database, but I see two drawbacks to that:
It's heavyweight for the task at hand. All I need is the ability to say "server[key].inc()" or "server[key].dec()"
I need the ability to handle potentially thousands of writes to a single key simultaneously. I don't want to deal with excessive resource contention. Change the value and get out - that's all I need.
I know memcached supports incr/decr, but it's not persistent. My strategy at this point is to put a SQL server behind a queueing system of some sort, so that only one process updates the database. It just seems... harder than it should be.
Is there something someone can recommend?
Redis is a key-value store that supports several data types. Integers are supported, along with the INCR and DECR commands.
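A minimal Jedis sketch (the key name is invented; persistence comes from configuring RDB snapshots or AOF on the Redis server):

```java
import redis.clients.jedis.Jedis;

public class CounterDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INCR/DECR are atomic on the server, so thousands of concurrent
            // writers can hit the same key without a read-modify-write race.
            long hits = jedis.incr("page:hits");  // server[key].inc()
            jedis.decrBy("page:hits", 1);         // server[key].dec()
            jedis.incrBy("page:hits", 10);
            System.out.println(hits);
        }
    }
}
```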