I was wondering which cache solutions support Write-Through or Read-Through caching.
I found out that Memcached only supports Cache-Aside caching and that DAX supports Write-Through.
I was also wondering about other caching engines, such as Redis, but couldn't find an answer.
THX
Take a look at this project (https://github.com/RedisGears/rgsync) that uses RedisGears (https://oss.redislabs.com/redisgears/) to achieve Write-Behind and Write-Through on Redis.
Though it does not support Read-Through, you can achieve it using the RedisGears CommandReader (https://oss.redislabs.com/redisgears/readers.html#commandreader): register a command that checks whether the data exists in Redis and, if it's there, returns it. Otherwise, fetch it from wherever you want, save it in Redis, and return it.
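A rough sketch of what that registered command could look like with RedisGears' Python API; the trigger name (READTHROUGH) and the fetch_from_backend() helper are placeholders you would adapt to your own backing store:

    # Sketch of a Read-Through command built on RedisGears' CommandReader.
    # GB() and execute() are provided by the RedisGears runtime; the trigger
    # name and fetch_from_backend() are hypothetical placeholders.

    def fetch_from_backend(key):
        # Placeholder: load the value for `key` from the slow source of truth
        # (SQL database, web service, ...) and return it as a string.
        raise NotImplementedError

    def read_through(args):
        # For CommandReader, args[0] is the trigger name and args[1:] are the
        # arguments passed to RG.TRIGGER.
        key = args[1]
        cached = execute('GET', key)      # check Redis first
        if cached is not None:
            return cached                 # cache hit: return as-is
        value = fetch_from_backend(key)   # cache miss: go to the source of truth
        execute('SET', key, value)        # populate the cache for next time
        return value

    # Load via RG.PYEXECUTE; clients then call: RG.TRIGGER READTHROUGH <key>
    GB('CommandReader').map(read_through).register(trigger='READTHROUGH')

You could also put a TTL on the SET so cached entries expire instead of living forever.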
I'm on Parse Server 2.6.3 and I need to cache the results of queries to speed things up!
I understand that Parse Server offers a Redis adapter. What exactly do I have to do in order to start using Redis? Are there any modules I should install? Anything I should import or configure?
Also, I found this on Parse's documentation:
Those cache adapters can be cleaned at anytime internally, you should not use them to cache data and you should let parse-server manage their data lifecycle.
What do they mean by "you should not use them to cache data and you should let parse-server manage their data lifecycle"? Should I not use the adapter?
What the doc is saying is that Parse caches with its own in-memory structure by default, but it leaves developers the option to use Redis as a substitute. To opt for that, just (1) set up Redis as you typically would, and (2) initialize the Parse Server with a RedisCacheAdapter that's been configured with your Redis URL.
The point you're asking about, "you should not use them to cache data ...", means that Parse will continue to decide when to cache, when to retrieve from the cache, when to clean up, and so on, but it will do so by calling into the Redis instance you configured.
I think the major advantage of this more elaborate setup is Redis's distributed capability. If you're not running on a cluster, you may find the Redis option to be about equivalent performance-wise, and a little messier setup-wise, compared with sticking to the default.
HBase has its own cache system, and for read requests it will check the cache before fetching data from HDFS. But its cache performance can be limited by the JVM memory size, and this is the reason I want to use Redis as HBase's cache.
Please don't do it. Using one database as a cache for another database can easily turn into a nightmare situation. Dealing with cache invalidation scenarios itself can be a difficult task.
If you need an in-memory cache on application level, I would still discourage it, but that's a separate discussion.
On the database level, if the HBase block cache is not good enough for your use case, either HBase is not a good system for your use case or you are not using it correctly. If your only concern is that you have a lot of memory/flash (SSD) but HBase cannot properly utilize it because of JVM restrictions, you can use HBase's BucketCache, which can cache blocks off-heap or on solid-state storage (hbase.bucketcache.ioengine). I would advise you to read up on HBase's caching basics here.
I have a simple question, I think.
I am a web developer and want to cache some AVPs that I receive from "slow" web services.
I also want to cache some AVPs from databases.
Which would be the better in-memory store for caching: Redis or Memcached?
Scalability and performance are important to me.
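The flow I have in mind is plain Cache-Aside, which looks the same on either engine; a rough sketch with redis-py, where fetch_avp_from_webservice() is just a placeholder for the real slow call:

    import json
    import redis  # redis-py; a memcached client would look almost identical

    r = redis.Redis(host='localhost', port=6379)

    def fetch_avp_from_webservice(name):
        # Placeholder for the real "slow" web-service call.
        raise NotImplementedError

    def get_avp(name, ttl=300):
        # Cache-Aside: check the cache first, fall back to the web service,
        # then store the result with a TTL so it eventually refreshes.
        cached = r.get('avp:' + name)
        if cached is not None:
            return json.loads(cached)
        value = fetch_avp_from_webservice(name)
        r.setex('avp:' + name, ttl, json.dumps(value))
        return value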
I want to disable Doctrine's default caching for a specific function (page); on all other pages it should work as usual. I also don't want to clear the existing cache inside that function. I just need caching to be skipped for that particular function call and the workflow inside it.
Is there any easy way to achieve this? Thanks.
Rana, I believe you can use $query->useResultCache(false); to disable the result cache for the page you want. Take a look at the documentation if needed.
Cheers,
You could use another entity manager for that specific page with caching turned off. As you didn't mention what kind of framework you use, I am unable to make any further assumptions.
According to the documentation (and personal experience) you shouldn't use doctrine without a cache:
Do not use Doctrine without a metadata and query cache! Doctrine is optimized for working with caches. The main parts in Doctrine that are optimized for caching are the metadata mapping information with the metadata cache and the DQL to SQL conversions with the query cache. These 2 caches require only an absolute minimum of memory yet they heavily improve the runtime performance of Doctrine. The recommended cache driver to use with Doctrine is APC. APC provides you with an opcode-cache (which is highly recommended anyway) and a very fast in-memory cache storage that you can use for the metadata and query caches as seen in the previous code snippet.
Would you mind sharing that specific need of yours to be able to provide you with a better answer/solution to your problem?
I'm currently storing generated HTML pages in a memcached in-memory cache. This works great; however, I want to increase the storage capacity of the cache beyond available memory. What I would really like is:
memcached semantics (i.e. not reliable, just a cache)
memcached api preferred (but not required)
large in-memory first level cache (MRU)
huge on-disk second level cache (main)
evicted from on-disk cache at maximum storage using LRU or LFU
proven implementation
In searching for a solution I've found the following candidates, but they all miss the mark in some way. Does anyone know of either:
other options that I haven't considered
a way to make memcachedb do evictions
Already considered are:
memcachedb
best fit but doesn't do evictions: explicitly "not a cache"
can't see any way to do evictions (either manual or automatic)
tugela cache
abandoned, no support
don't want to recommend it to customers
nmdb
doesn't use memcache api
new and unproven
don't want to recommend it to customers
Tokyo Cabinet/Tokyo Tyrant?
It seems that later versions of memcachedb can be cleaned up manually, if desired, using the rget command and storing the expiry time in the data record. Of course, this means I pound both the server and the network with requests for the entire data block even though I only want the expiry time. Not the best solution, but seemingly the only one currently available.
I have worked with EhCache and it works very well. It has an in-memory cache and disk storage with different eviction policies. It's a mature library with good support. There is a memcached API that wraps EhCache, developed specifically for GAE support.
Regards,
Jonathan.