What's the difference between querying a specific key once and querying it many times? Does it cost the same amount of time every time, or will the second query be faster than the first? Hope someone can give a hand :) I've checked the official site and found nothing on this.
Redis is an in-memory database, which means all the data it manages is stored in RAM. That being the case, there is no need (or way) for Redis to further cache the data: every request is served with constant, predictable latency, so the second query for a key costs roughly the same as the first.
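A quick way to see this for yourself, using the redis-py client against a local instance (the client choice and key name are my own, not from the question):

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)
r.set("some:key", "value")

# Both lookups hit the same in-memory hash table, so their latency is comparable.
for attempt in range(1, 3):
    start = time.perf_counter()
    r.get("some:key")
    print(f"GET #{attempt}: {(time.perf_counter() - start) * 1000:.3f} ms")
```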
Related
I have a few queries to a database that return absolutely constant responses, i.e. some entries in this database are never changed after being written.
I'm wondering, if I implement caching for them with Redis, should I set an expiration time?
Pros and cons of not doing that -
Pros: Users will always benefit from caching (except for the first query)
Cons: The number of these entries to be queried is growing. So Redis will end up using more and more memory.
Edit
To give more context, the queries run quite slowly. Each of them may take seconds. It will be beneficial to minimize the number of users that experience this.
Also, each of these results is on the order of a few kB in size; the number (not the size) of entries may increase by roughly one per minute.
Sorry for answering with questions. Still waiting for enough reputation to comment and clarify.
Answering your direct question:
Is the number of queries you expect unbounded?
No: You could improve the first user's experience by triggering the queries on startup and leaving the results in the cache. For other responses that are expected to change, attach a TTL and use one of the following maxmemory-policy settings in the config: volatile-ttl, volatile-lru, volatile-lfu, or volatile-random, so that only keys with TTLs are evicted.
Yes: Prioritize these by attaching a TTL and refreshing it each time the key is requested, so hot entries stay in cache as long as possible, and use whichever memory-management policy best fits the rest of your use case (see the sketch just below).
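For the "Yes" case, a minimal cache-aside sketch in Python with redis-py; the key scheme, TTL value, and run_slow_query are assumptions for illustration, not from the original post:

```python
import redis

r = redis.Redis()
CACHE_TTL = 24 * 60 * 60  # hypothetical: keep an entry for a day unless re-requested

def get_report(report_id, run_slow_query):
    key = f"report:{report_id}"          # hypothetical key scheme
    cached = r.get(key)
    if cached is not None:
        r.expire(key, CACHE_TTL)         # sliding TTL: each hit keeps the key alive
        return cached
    result = run_slow_query(report_id)   # the seconds-long DB query from the question
    r.set(key, result, ex=CACHE_TTL)
    return result
```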
Related concerns:
If these are really static values, why are you querying a database rather than reading from a flat file of constants generated once and read at startup?
Have you attempted to optimize your queries?
I want to use Redis as a cache in my project. As we know, Redis stores data in memory, and there are obviously limits on that. How long will the data persist in memory? Do I need to implement some eviction algorithm myself (least recently used, for example)?
There is no need to implement eviction algorithms yourself. Redis comes with built-in eviction policies; you can configure one of them: http://redis.io/topics/lru-cache
Redis supports expiring keys after a certain amount of time. If you only need an entry cached for 4 hours, for example, you can set an expiry on it: http://redis.io/commands/expire
Redis uses memory-optimized encodings for small aggregate types. You can structure your hashes and sorted sets so that they hold a lot of data in much less memory: http://redis.io/topics/memory-optimization
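A minimal sketch with redis-py tying the first two points together; the maxmemory value and key name are illustrative, and these settings would normally live in redis.conf rather than be set at runtime:

```python
import redis

r = redis.Redis()

# 1. Eviction: cap memory and let Redis evict least-recently-used keys itself.
r.config_set("maxmemory", "256mb")
r.config_set("maxmemory-policy", "allkeys-lru")

# 2. Expiry: keep a cached entry for 4 hours only.
r.set("cache:user:42", "some-value", ex=4 * 60 * 60)
print(r.ttl("cache:user:42"))  # remaining lifetime in seconds
```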
Go through all these docs and you will get a better idea of how to implement it. Hope this helps.
Let's say I have 2 servers that are using Hazelcast's distributed cache. If, on server #1, I store 2 items in a map in that distributed cache, one of those items will be saved in the local backup, and the other will be stored in the backup of the other server's Hazelcast instance (please correct me if that is incorrect).
My question is, if I try to retrieve the second item from the cache(stored in the backup on server #2), a TCP call will be made to retrieve that data. How is this faster than just calling the DB?
First of all, let me correct how data is stored in Hazelcast.
Hazelcast uses a distribution algorithm based on consistent hashing, meaning the hashing algorithm returns the same output for the same input every time. The distribution is not perfectly even, but for a high number of elements it is pretty good and cost effective. That said, it doesn't mean you'll end up with exactly one element on each node in the worst case.
By default Hazelcast also keeps one backup, which means each node will hold both elements (in a 2-node setup), either as owned data or as a backup for the failure case. You can make backups readable (read-backup-data=true), but that introduces a small chance of reading stale data (in the window between the owner being updated and the backup catching up).
In addition, data in Hazelcast is, again by default, stored in serialized form, i.e. as a binary, streamable representation.
Ok so how can all this be faster than a TCP connection to your database?
The answer is twofold:
Hazelcast is a key-value store. Therefore it is optimized for requesting data by key and answering with the value as quickly as possible.
Data is already serialized, therefore the byte stream is just "smashed" into the socket without any real further work to be done.
Your database, on the other hand, has to actually query data from a table. Its internal data structures are optimized for complex queries, not for key-based access. But, and this is important, current database implementations also optimize internally (in RAM) for fast access, so the effect will only show up for databases serving under high load. Caches (local or distributed) are designed to speed up slow operations; in other words, if your database is blazingly fast you won't see a benefit.
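To make the key-value access pattern concrete, here is a minimal sketch using the hazelcast-python-client (my assumption; the question doesn't name a client language), where put/get are routed to whichever member owns the key's partition:

```python
import hazelcast

# Connects to a running cluster; the member address is illustrative.
client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])

cache = client.get_map("my-distributed-map").blocking()
cache.put("item-1", "value-1")   # sent to the member that owns the key's partition
print(cache.get("item-1"))       # single key lookup, answered from memory

client.shutdown()
```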
Anyway, when designing a system you expect to grow exponentially, you should consider caching right from the start. A comprehensive introduction to caching and the ideas behind it is available in a whitepaper and article I wrote some time ago: https://dzone.com/articles/caching-why-you-should-care
I hope this answers your question :-)
We are having a situation in which the values we store on memcache are bigger than 1MB.
It is not possible to make such values smaller, and even if there was a way, we need to persist them to disk.
One solution would be to recompile the memcache server to allow, say, 2MB values, but this is neither clean nor a complete solution (again, we need to persist the values).
Good news is that
We can predict quite accurately how many key/value pairs we are going to have.
We can also predict the total size we will need.
A key feature for us is the speed of memcache.
So the question is: is there any NoSQL replacement for memcache which will allow us to have values larger than 1MB AND store them on disk without loss of speed?
In the past I have used tokyotyrant/cabinet, but it seems to be deprecated now.
Any idea?
I'd use redis.
Redis addresses the issues you've listed; keys and string values can each be up to 512 MB.
You can persist data to disk using AOF (append-only file) with a configurable fsync policy (every write, every second, or left to the OS), although RDB snapshotting provides better performance than AOF in most cases.
We use Redis for caching JSON documents. We've learned that, for maximum performance, you should deploy Redis on physical hardware if you can; virtual machines dramatically impact Redis network performance.
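As a rough illustration of both points (values above memcached's default 1 MB item limit, plus disk persistence) with redis-py; the key name is made up, and enabling AOF at runtime like this is something you would normally do in redis.conf instead:

```python
import redis

r = redis.Redis()

# A 2 MB value: above memcached's default item limit, well within Redis's 512 MB string limit.
big_blob = b"x" * (2 * 1024 * 1024)
r.set("doc:123", big_blob)
print(len(r.get("doc:123")))   # 2097152

# Persistence: append-only file, fsynced every second.
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")
```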
You also have Couchbase, which is compatible with the memcache API and allows you to store your data either in a memcache-only fashion or in a persisted cluster.
Redis is fine if the total amount of your data does not exceed the size of your physical memory. If the total amount of data is too large to fit in memory, you will need to install more Redis instances on different servers.
Or you may try SSDB (https://github.com/ideawu/ssdb), which automatically migrates cold data to disk, so you get more storage capacity.
Any key/value store will do, really. See this list for example: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores
Also take a look at MongoDB: durability doesn't seem to be an issue for you, and that's basically where Mongo sucks, so you can get a fast document database (a key/value store on steroids, basically) with indexes for free. At least until you grow too large.
I would go with Couchbase. It can support up to 20 MB per document, and it's possible to run a bucket with either the memcache or the couchbase protocol, the latter providing persistence.
Take a look at the other limits for keys/metadata here: http://www.couchbase.com/docs/couchbase-manual-2.0/couchbase-server-limits.html
And here is a presentation on how MongoDB, Cassandra and Couchbase stack up on throughput/operations per second: http://www.slideshare.net/renatko/couchbase-performance-benchmarking
I've used both Redis and Couchbase in production; for a persistent drop-in replacement for memcache, it's hard to argue against a NoSQL DB that is built upon that protocol.
When you have peaks of 600 requests/second, memcache flushing an item due to its TTL expiring has some pretty negative effects: at almost the same time, 200 threads/processes find the cache empty and fire off a DB request to fill it up again.
What is the best practice to deal with these situations?
p.s. what is the term for this situation? (gives me chance to get better google results on the topic)
If you have memcached objects that will be needed on a large number of requests (which you imply is the case), then I would look into having a separate process or cron job that regularly recalculates and refreshes these objects. That way they should never hit their TTL. It's a common trade-off: you add a little unnecessary load during low-traffic times to help reduce the load during peaks (the time you probably care most about).
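A sketch of that idea in Python with redis-py (the key list, recompute function, and intervals are all placeholders of mine; the same pattern works against memcached):

```python
import time
import redis

r = redis.Redis()
HOT_KEYS = ["homepage:stats", "top:products"]   # illustrative hot objects

def refresh_loop(recompute, interval=60):
    """Recompute hot objects on a schedule so they never expire under load."""
    while True:
        for key in HOT_KEYS:
            # Keep the TTL comfortably longer than the refresh interval.
            r.set(key, recompute(key), ex=5 * interval)
        time.sleep(interval)
```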
I found out this is referred to as "stampeding herd" by the memcached folks, and they discuss it here: http://code.google.com/p/memcached/wiki/NewProgrammingTricks#Avoiding_stampeding_herd
My next suggestion was actually going to be using soft cache limits as discussed in the link above.
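One way to sketch the soft-limit idea (the names load_from_db, the key scheme, and the TTL values are my own, not from the linked page): keep a logical expiry inside the cached value, and once it passes, let a single caller refresh while everyone else keeps serving the slightly stale copy.

```python
import json
import time
import redis

r = redis.Redis()
HARD_TTL = 3600   # real key expiry
SOFT_TTL = 300    # logical freshness window

def get_item(key, load_from_db):
    raw = r.get(key)
    if raw is not None:
        wrapped = json.loads(raw)
        if time.time() < wrapped["soft_expiry"]:
            return wrapped["value"]
        # Stale: only the caller that wins the NX lock refreshes; others serve the stale copy.
        if not r.set(f"{key}:lock", 1, nx=True, ex=30):
            return wrapped["value"]
    value = load_from_db()
    r.set(key, json.dumps({"value": value, "soft_expiry": time.time() + SOFT_TTL}), ex=HARD_TTL)
    r.delete(f"{key}:lock")
    return value
```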
If your object is expiring because you've set an expiry and it has gone past its date, there is nothing you can do but increase the expiry time.
If you are worried about stale data, a few techniques exist you could consider:
Consider making the cache the authoritative source for whatever data you are looking at, and make a thread whose job is to keep it fresh. This will make the other threads block on refilling the cache, so it may only make sense if you can
Rather than setting a TTL on the data, change whatever process updates the data to also update the cache. One technique I use for frequently changing data is to do this probabilistically: 10% of the time the data is written, the cache is updated. You can tune this to whatever is sensible, depending on how expensive the DB query is and how severe the impact of stale data is.
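A rough sketch of that probabilistic variant in Python with redis-py (write_to_db, the key scheme, and the 10% figure are illustrative):

```python
import random
import redis

r = redis.Redis()
REFRESH_PROBABILITY = 0.1   # roughly 10% of writes also refresh the cache

def write_record(record_id, value, write_to_db):
    write_to_db(record_id, value)                 # hypothetical DB write
    if random.random() < REFRESH_PROBABILITY:
        r.set(f"record:{record_id}", value)       # cache refreshed, no TTL set
```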