Redis or Memcache for caching AVPs? - performance

I have a simple question, I think.
I am a web developer and want to cache some AVPs that I receive from "slow" web services.
I also want to cache some AVPs from a database.
Which would be the better in-memory store for caching: Redis or Memcached?
Scalability and performance are important to me.
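For reference, caching a slow web-service response follows the same cache-aside pattern with either store. A minimal sketch with Redis and the redis-py client (the endpoint URL, key prefix, and TTL are placeholders):

```python
import json
import requests
import redis

r = redis.Redis(host="localhost", port=6379)

def get_avp(avp_id, ttl_seconds=300):
    """Cache-aside: try Redis first, fall back to the slow web service on a miss."""
    cache_key = f"avp:{avp_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    # Hypothetical slow web service -- replace with the real endpoint.
    response = requests.get(f"https://example.com/avps/{avp_id}", timeout=10)
    avp = response.json()

    # Store with a TTL so stale entries expire on their own.
    r.setex(cache_key, ttl_seconds, json.dumps(avp))
    return avp
```

The equivalent with a Memcached client is the same get/miss/set sequence; the choice mostly comes down to data structures, persistence, and clustering needs.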

Related

Write-Through and Read-Through caches

I was wondering which cache solutions support Write-Through or Read-Through caching.
I found out that Memcached only supports Cache-Aside caching and also that DAX supports Write-Through.
I was wondering about more caching engines such as Redis etc. and couldn't find the answer.
THX
Take a look at this project (https://github.com/RedisGears/rgsync) that uses RedisGears (https://oss.redislabs.com/redisgears/) to achieve Write-Behind and Write-Through on Redis.
Though it does not support Read-Through, you can achieve it using the RedisGears command reader (https://oss.redislabs.com/redisgears/readers.html#commandreader): just register a command that checks whether the data exists in Redis; if it's there, return it. Otherwise, fetch it from wherever you want, save it to Redis, and return it.
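A rough sketch of what such a command might look like as a RedisGears function (run via RG.PYEXECUTE); the record layout, the fetch_from_source helper, and the trigger name are assumptions for illustration, so check the RedisGears docs for the exact API:

```python
# Read-through via the RedisGears CommandReader (sketch, not production code).
# GB and execute() are provided by the RedisGears Python environment.

def fetch_from_source(key):
    # Hypothetical: fetch the value from the backing store (DB, web service, ...).
    return "value-from-origin"

def read_through(record):
    # Assumption: the key is the first argument after the trigger name.
    key = record[1]
    val = execute('GET', key)        # check Redis first
    if val is not None:
        return val                   # hit: return the cached value
    val = fetch_from_source(key)     # miss: fetch from the origin,
    execute('SET', key, val)         # cache it in Redis,
    return val                       # and return it

GB('CommandReader').map(read_through).register(trigger='READTHROUGH')
```

A client would then call something like RG.TRIGGER READTHROUGH mykey instead of a plain GET.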

Could I improve HBase read performance by using Redis as a cache?

HBase has its own cache system, and for read requests it will search the cache before fetching data from HDFS. But its cache performance can be hindered by the JVM memory size, and this is the reason why I want to use Redis as HBase's cache.
Please don't do it. Using one database as a cache for another database can easily turn into a nightmare situation. Dealing with cache invalidation scenarios itself can be a difficult task.
If you need an in-memory cache on application level, I would still discourage it, but that's a separate discussion.
On the database level, if the HBase block cache is not good enough for your use case, either HBase is not a good system for your use case or you are not using it correctly. If your only concern is that you have a lot of memory/flash (SSD) but HBase cannot properly utilize it because of JVM restrictions, you can use HBase's BucketCache, which can cache blocks off-heap or on solid-state storage (hbase.bucketcache.ioengine). I would advise you to read up on HBase's caching basics here.
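As a rough illustration of the BucketCache setup (values are placeholders; check the HBase reference guide for your version), an off-heap block cache comes down to a couple of hbase-site.xml properties plus giving the RegionServer off-heap memory (e.g. via HBASE_OFFHEAPSIZE in hbase-env.sh):

```xml
<!-- hbase-site.xml: enable BucketCache (example values only) -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <!-- "offheap" for off-heap memory, or file:/mnt/ssd/bucketcache for an SSD-backed cache -->
  <value>offheap</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- cache size, here in MB -->
  <value>8192</value>
</property>
```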

Does using Elasticsearch as a key-value cache like Redis make sense?

I have recently encountered the question of whether, since Redis is not distributed and doesn't support parallelism (multi-core), Elasticsearch would be a better choice than Redis for caching purposes.
This is all in reference to a simple web app, where we used Redis to cache DB queries.
I have kind of got the idea here, but I'm still not sure whether it has any real benefits. I'm opening up this thread to discuss the advantages/disadvantages of doing so.
It's not really what you asked for but you might want to have a look at Aerospike.
Redis is an in-memory data structure store known for speed and often used as a cache. Both Redis and Aerospike are open source; however, when applications need persistence, or when applications must scale but servers have maxed out their RAM, developers should consider Aerospike, a distributed key-value store that is just as fast as or faster than Redis, but scales more simply and with the economics of flash/SSDs.

With Memcached and Squid, is there any need for ASP.NET caching?

With squid, we can cache webpages. I am not sure if it provides the same number of caching methods as ASP.NET caching (I primarily use ASP.NET), but it's a tool to cache webpages.
Then we have memcached, which can cache database tables. I believe this is correct, and it is like SqlCacheDependency (correct me if I am wrong).
However, is there any situation in a large web application where one would find room to use memcached, Squid, AND ASP.NET (or PHP, JSP - application-framework-level) caching?
Thanks!
You may find that caching entire pages is too coarse-grained, and caching database tables doesn't get you enough of a boost, leaving a big need for caching chunks of stuff.
Say, for example, you had an application that showed the name of the logged-in user on every page. Caching entire pages wouldn't really work, so you need to drop down a level and cache somewhere within the app framework.
Then we have memcached, which can cache database tables. I believe this is correct, and it is like SqlCacheDependency (correct me if I am wrong).
Memcached is a distributed hashtable. The main benefits over the built-in .NET caching are that your cache is scalable (you can add as many memcached boxen as you want) and synchronized (all your web servers have access to the same stuff, and invalidating or updating data from one web server is instantly propagated to all of them).
Performance-wise, it is worse than the .NET cache (you are looking up objects across servers, as opposed to an in-memory lookup on one machine).
However, is there any situation in a large web application where one would find room to use memcached, Squid, AND ASP.NET (or PHP, JSP - application-framework-level) caching?
For the reasons above, I can imagine a 2-level cache, using the .NET cache first, then memcached. (e.g. a Get() looks at memcached, stores the result in the .NET cache set to expire in 10 seconds, then uses the .NET cache for all the get calls with the same cache key during the next 10 seconds, rinse, repeat)
This way, you get the performance of the in-memory cache lookup without the network IO cost of a pure memcached solution, with the synchronization and scalability benefits of memcached.
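A rough Python sketch of that two-level lookup (a per-process dict stands in for the .NET cache as L1, memcached is the shared L2; pymemcache is just one client choice, and key names and TTLs are placeholders):

```python
import time
from pymemcache.client.base import Client

memcached = Client(("localhost", 11211))   # shared L2 cache
local_cache = {}                           # per-process L1: key -> (expires_at, value)
LOCAL_TTL = 10                             # seconds, as in the 10-second example above

def cached_get(key, load_from_db):
    """Two-level lookup: local memory first, then memcached, then the database."""
    now = time.time()

    # L1: in-process lookup, no network IO.
    entry = local_cache.get(key)
    if entry is not None and entry[0] > now:
        return entry[1]

    # L2: shared memcached lookup.
    value = memcached.get(key)
    if value is None:
        # Miss everywhere: load from the backing store and populate memcached.
        value = load_from_db(key)
        memcached.set(key, value, expire=300)

    # Keep a short-lived local copy so repeat calls skip the network hop.
    local_cache[key] = (now + LOCAL_TTL, value)
    return value
```

Invalidation then only has to touch memcached; the local copies age out on their own within the short L1 TTL.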

Caching with multiple server

I'm building an application with multiple servers involved (4 servers, where each one has a database and a web server; 1 master database and 3 slaves, plus one load balancer).
There are several approaches to enable caching. Right now it's fairly simple and not efficient at all.
All the caching is done on an NFS partition shared between all servers. NFS is the bottleneck in the architecture.
I have several ideas to implement caching:
Caching can be done at the server level (local file system), but the problem is invalidating the cache file on all servers when the content has been updated. It can be done by keeping the cache lifetime short (not efficient, because most of the time the cache will be refreshed sooner than it should be).
It can also be done with a messaging system (XMPP for example) where the servers communicate with each other. The server responsible for invalidating the cache sends a request to all the others to let them know that the cache has been invalidated. Latency is probably higher (it takes more time for everybody to learn that the cache has been invalidated), but my application doesn't require atomic cache invalidation.
A third approach is to use a cloud system to store the cache (like CouchDB), but I have no idea of the performance of this one. Is it faster than using a SQL database?
I planned to use Zend Framework, but I don't think it's really relevant (except that packages probably exist in other frameworks to deal with XMPP, CouchDB).
Requirements: a persistent cache (if a server restarts, the cache shouldn't be lost, to avoid bringing down the server while re-creating the cache).
http://www.danga.com/memcached/
Memcached covers most of the requirements you lay out - message-based read, commit and invalidation. High availability and high speed, but very little atomic reliability (sacrificed for performance).
(Also, memcached powers things like YouTube, Wikipedia, Facebook, so I think it can be fairly well-established that organizations with the time, money and talent to seriously evaluate many distributed caching options settle with memcached!)
Edit (in response to comment)
The idea of a cache is for it to be relatively transitory compared to your backing store. If you need to persist the cache data long-term, I recommend looking at either (a) denormalizing your data tier to get more performance, or (b) adding a middle-tier database server that stores high-volume data in straight key-value-pair tables, or something closely approximating that.
In defence of memcached as a cache store, if you want high performance with low impact from a server reboot, why not just have 4 memcached servers? Or 8? Each 'reboot' would have correspondingly less effect on the database server.
I think I found a relatively good solution.
I use Zend_Cache to store each cache file locally.
I've created a small daemon based on nanoserver which manages cache files locally too.
When one server creates/modifies/deletes a cache file locally, it sends the same action to all servers through the daemon, and they perform the same action.
That means I have local cache files and remote actions at the same time.
Probably not perfect, but should work for now.
CouchDB was too slow and NFS is not reliable enough.
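For comparison, the broadcast part of that approach can also be sketched over a pub/sub channel instead of a hand-rolled daemon. Here is a minimal version with Redis pub/sub via redis-py (the channel name and message format are made up), with the persistent cache files themselves still living on each server's local disk:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)
CHANNEL = "cache-actions"   # hypothetical channel name

def publish_action(action, cache_key):
    """Called by the server that created/modified/deleted a local cache entry."""
    r.publish(CHANNEL, json.dumps({"action": action, "key": cache_key}))

def listen_for_actions(apply_locally):
    """Runs on every server: replay remote cache actions against the local cache files."""
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        event = json.loads(message["data"])
        apply_locally(event["action"], event["key"])
```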

Resources