Memcached evicts data per slab, so the LRU runs separately within each slab size class. As a result, keys can be evicted even though free space is still available elsewhere in the cache.
I want to build a monitoring system to check which keys are being evicted prematurely because of this slab allocation behaviour.
I am thinking of creating a job that hits memcached at regular intervals for every key that has been inserted. I already have a logging system that records every key inserted into memcached; this log data is stored in MongoDB.
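A rough sketch of the polling job I have in mind (assuming the spymemcached client and the MongoDB Java driver; the database, collection and field names below are just placeholders for my logging setup):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import net.spy.memcached.MemcachedClient;
import org.bson.Document;

import java.net.InetSocketAddress;

public class EvictionMonitor {
    public static void main(String[] args) throws Exception {
        // Connect to memcached and to the Mongo collection that logs inserted keys.
        MemcachedClient memcached =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));
        try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> insertedKeys =
                    mongo.getDatabase("cachelog").getCollection("inserted_keys");

            // For every key logged as inserted, check whether memcached still has it.
            // A missing key that should still be alive was evicted early.
            for (Document doc : insertedKeys.find()) {
                String key = doc.getString("key");
                if (memcached.get(key) == null) {
                    System.out.println("Possible premature eviction: " + key);
                }
            }
        }
        memcached.shutdown();
    }
}
```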
Please suggest whether my approach is correct, or whether there is a better alternative.
Your approach itself is workable. The problem is that it can hurt the performance of your app, because it continually hits memcached to fetch the keys.
As far as alternatives are concerned, there are three eviction policies to consider:
1) Least Frequently Used
2) Least Recently Used
3) Priority based Eviction
These are the eviction policies offered by NCache, an enterprise-level distributed cache for .NET and Java that also provides fast and reliable storage for ASP.NET and JSP sessions. To learn more about these eviction policies, please check the following link:
http://www.alachisoft.com/resources/docs/ncache/help-4-1/eviction-policy.html?mw=NDE2&st=MQ==&sct=NTAw&ms=CQYAAAABAAAAACIBBAQC
I am a new developer and am trying to implement Laravel's (5.1) caching facility to improve the speed of my app. I started out caching a large DB table that my app constantly references - but it got too large so I have backed away from that and am now 'forever' caching smaller chunks of data - for example, for each page only the portions of that large DB table that are relevant.
I have watched 'Caching Essentials' on Laracasts, done some Googling and had a search in this forum (and Laracasts') but I still have a couple of questions:
I am not totally clear on how the cache size limits work when you are using Laravel's file-based system - is there an overall in-app size limit for the cache or is one limited size-wise only per key and by your server size?
What are the signs you should switch from file-based caching to something like Memcached or Redis - and what are the benefits of using one of those services? Is it the fact that your caching is handled on a different server (thereby lightening the load on your own)? Do you switch over to one of these services when your local, file-based cache gets too big for your server?
My app utilizes several tables that have 3,000-4,000 rows - the data in these tables is constantly referenced and will remain static unless I decide to add new options. I am basically looking for the best way to speed up queries to the data in these tables.
Thanks!
I don't think Laravel imposes any limitations on its file I/O at all - the limitations will be how much PHP can read/write to a file at once, or hold in memory and process at any one time.
It does serialise the data that you cache, and unserialise it when you reload it, so your PHP environment would have to be able to process the entire cache file (which is equivalent to the top level cache key) at once. So, if you are getting cacheduser.firstname, it would have to load the whole cacheduser key from the file, unserialise it, then get the firstname key from that.
I would take the PHP memory limit (classic, I know!) as the first thing to investigate if you want to keep going down this road.
Caching services like Redis or memcached are bespoke, optimised caching solutions. They take some of the logic and responsibility out of your PHP environment.
They can, for example, retrieve sub-keys from items without having to process the whole thing, so can retrieve part of some cached data in a memory efficient way. So, when you request cacheduser.firstname from redis, it just returns you the firstname attribute.
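To make that concrete, here is a rough sketch of that partial retrieval using Redis hashes directly (shown with the Java Jedis client purely for illustration; the same field-level access works from any client, including Laravel's Redis facade):

```java
import redis.clients.jedis.Jedis;

public class RedisHashExample {
    public static void main(String[] args) {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            // Store the user as a hash: each attribute is its own field.
            redis.hset("cacheduser", "firstname", "Jane");
            redis.hset("cacheduser", "lastname", "Doe");

            // Redis returns only the requested field; the rest of the hash
            // is never transferred or unserialised on the client side.
            String firstname = redis.hget("cacheduser", "firstname");
            System.out.println(firstname); // Jane
        }
    }
}
```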
They have other advantages regarding tagging / clearing out subsets of caches (see [the cache tags Laravel docs](https://laravel.com/docs/5.4/cache#cache-tags)).
Another thing to think about is scaling. If your site is large enough, and is load-balanced across multiple servers, the filesystem caching may be different across those servers, as each server can only check its local filesystem for the cache files. A caching service can be on a different server (many hosts will have separate redis / memcached services available), so isn't subject to this issue.
Also - as I understand it (and this might be the most important thing), the file cache driver in Laravel is mainly for local development and testing. Although it can work fine for simple applications with basic caching needs, it's not intended for large scalable production environments.
Personally, I develop locally and test with file caching, as I'm only dealing with small amounts of data then, and use redis to cache on production environments.
It doesn't necessarily need to be on a separate server to get the benefits. If you are never going to scale to multiple application servers, then using a caching service on the same server will already be a large improvement to caching large documents.
Which is better suited for the following environment:
Persistence not a compulsion.
Multiple servers (with Ehcache, some cache synchronization would be required).
Infrequent writes and frequent reads.
Relatively small database (low memory requirement).
I will pour out what's in my head currently. I may be wrong about these.
I know Redis requires a separate server (?), while Ehcache provides a local cache, so it must be faster, but it will replicate the cache across servers (?). Updating all caches after an update on one is possible with Ehcache.
My question is which will suit better for the environment I mentioned?
Whose performance will be better or what are scenarios when one may outperform another?
Thanks in advance.
You can think of Redis as a shared data structure store, while Ehcache is an in-memory block of serialized data objects. This is the main difference.
Redis as a shared data structure store means you can put predefined data structures (such as String, List, Set, etc.) in one language and retrieve them in another. This is useful if your project is multilingual, for example Java on the backend and PHP on the frontend: you can use Redis as a shared cache. But it can only store those predefined structures; you cannot insert arbitrary Java objects.
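As a rough illustration (Java side shown with the Jedis client; the key and values are made up), the backend can push to a Redis list that a PHP frontend reads back with the same commands:

```java
import redis.clients.jedis.Jedis;
import java.util.List;

public class SharedListExample {
    public static void main(String[] args) {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            // The Java backend appends jobs to a shared Redis list...
            redis.rpush("pending-jobs", "resize:42", "email:43");

            // ...and any other client (PHP, Python, ...) can read the same
            // structure with LRANGE, because the stored data is a Redis list,
            // not a serialized Java object.
            List<String> jobs = redis.lrange("pending-jobs", 0, -1);
            System.out.println(jobs);
        }
    }
}
```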
If your project is only Java, i.e. not multilingual, Ehcache is a convenient solution.
You will run into issues scaling EhCache, and you will need resources to manage it during failover, etc.
Redis benefits over EhCache:
It uses the time-proven gossip protocol for node discovery and synchronization.
Availability of fully managed services like AWS ElastiCache and Azure Redis Cache. Such services offer full automation, support and management of Redis, so developers can focus on their applications instead of maintaining their databases.
It handles large amounts of memory correctly (Redis can manage hundreds of gigabytes of RAM on a single machine), and it doesn't have the garbage collection problems the JVM can have.
And finally, the existence of a Java-developer-friendly Redis client: Redisson.
Redisson provides many Java-friendly objects on top of Redis (a short sketch follows the list), like:
Set
ConcurrentMap
List
Queue
Deque
BlockingQueue
BlockingDeque
ReadWriteLock
Semaphore
Lock
AtomicLong
CountDownLatch
Publish / Subscribe
ExecutorService
and many more...
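A minimal sketch of what a few of these look like in code (Redisson 3.x style API; the address and key names are just placeholders):

```java
import org.redisson.Redisson;
import org.redisson.api.RAtomicLong;
import org.redisson.api.RLock;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonExample {
    public static void main(String[] args) {
        // Point Redisson at a single Redis node.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // A distributed map backed by a Redis hash.
        RMap<String, String> users = redisson.getMap("users");
        users.put("42", "Jane Doe");

        // A distributed lock and an atomic counter, shared across JVMs.
        RLock lock = redisson.getLock("report-lock");
        lock.lock();
        try {
            RAtomicLong counter = redisson.getAtomicLong("reports-generated");
            counter.incrementAndGet();
        } finally {
            lock.unlock();
        }

        redisson.shutdown();
    }
}
```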
Redisson supports a local cache for the Map structure, which could give you up to a 45x performance boost for read operations.
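A rough sketch of that local-cache feature (again Redisson 3.x style; names are placeholders):

```java
import org.redisson.Redisson;
import org.redisson.api.LocalCachedMapOptions;
import org.redisson.api.RLocalCachedMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class LocalCachedMapExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // Entries live in Redis *and* in a near cache inside this JVM, so
        // repeated reads of the same key never leave the process.
        RLocalCachedMap<String, String> settings =
                redisson.getLocalCachedMap("settings", LocalCachedMapOptions.defaults());

        settings.put("feature.flag", "on");
        String value = settings.get("feature.flag"); // served from the local cache
        System.out.println(value);

        redisson.shutdown();
    }
}
```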
Here is the article describing a detailed feature comparison of Ehcache and Redis.
I have recently encountered the question: since Redis is not distributed and doesn't support parallelism (multi-core), wouldn't Elasticsearch be a better choice than Redis for caching?
This is all in reference to a simple web app, where we used Redis to cache DB queries.
I have kind of got the idea here,
but still not sure whether it has any real benefits. Opening up this thread to discuss the advantages/disadvantages in doing so.
It's not really what you asked for but you might want to have a look at Aerospike.
Redis is an in-memory data structure store known for speed and often used as a cache. Both Redis and Aerospike are open source; however, when applications need persistence, or when they must scale but servers have maxed out on RAM, developers should consider Aerospike, a distributed key-value store that is just as fast as or faster than Redis but scales more simply and with the economics of flash/SSDs.
I'm currently storing generated HTML pages in a memcached in-memory cache. This works great, however I am wanting to increase the storage capacity of the cache beyond available memory. What I would really like is:
memcached semantics (i.e. not reliable, just a cache)
memcached api preferred (but not required)
large in-memory first level cache (MRU)
huge on-disk second level cache (main)
evicted from on-disk cache at maximum storage using LRU or LFU
proven implementation
In searching for a solution I've found the following candidates, but they all miss the mark in some way. Does anyone know of either:
other options that I haven't considered
a way to make memcachedb do evictions
Already considered are:
memcachedb
best fit but doesn't do evictions: explicitly "not a cache"
can't see any way to do evictions (either manual or automatic)
tugela cache
abandoned, no support
don't want to recommend it to customers
nmdb
doesn't use memcache api
new and unproven
don't want to recommend it to customers
Tokyo Cabinet/Tokyo Tyrant?
It seems that later versions of memcachedb can be cleaned up manually, if desired, using the rget command and storing the expiry time in the data record. Of course, this means I pound both the server and the network with requests for the entire data block even though I only want the expiry time. Not the best solution, but seemingly the only one currently available.
I worked with EhCache and it works very well. It has an in-memory cache and disk storage with different eviction policies. It's a mature library with good support. There is a memcached API that wraps EhCache, developed specially for GAE support.
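As a rough sketch of how the memory-plus-disk tiering is configured (classic Ehcache 2.x API; the cache name, sizes and TTL are just placeholders):

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;

public class PageCache {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();

        // In-memory first level capped at 10,000 pages (LRU), overflowing to a
        // disk store with room for 1,000,000 entries; entries expire after an hour.
        CacheConfiguration config = new CacheConfiguration("pages", 10_000)
                .memoryStoreEvictionPolicy("LRU")
                .overflowToDisk(true)
                .maxElementsOnDisk(1_000_000)
                .timeToLiveSeconds(3600);

        Cache pages = new Cache(config);
        manager.addCache(pages);

        pages.put(new Element("/home", "<html>...rendered page...</html>"));
        Element hit = pages.get("/home");
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown();
    }
}
```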
Regards,
Jonathan.
I'm building an application with multiple servers involved (4 servers, where each one has a database and a web server; 1 master database and 3 slaves, plus one load balancer).
There are several approaches to enable caching. Right now mine is fairly simple and not efficient at all.
All the caching is done on an NFS partition shared between all servers. NFS is the bottleneck in the architecture.
I have several ideas to implement caching:
1) It can be done at the server level (local file system), but the problem is invalidating a cache file on all servers when the content has been updated. This could be handled with a short cache lifetime, which is not efficient because the cache will be refreshed sooner than it should be most of the time.
2) It can also be done with a messaging system (XMPP for example) where the servers communicate with each other: the server responsible for invalidating the cache sends a request to all the others to let them know the cache has been invalidated. Latency is probably higher (it takes more time for everybody to learn that the cache has been invalidated), but my application doesn't require atomic cache invalidation.
3) A third approach is to use a cloud system to store the cache (like CouchDB), but I have no idea of the performance of this one. Is it faster than using a SQL database?
I plan to use Zend Framework, but I don't think it's really relevant (except that packages probably exist in other frameworks to deal with XMPP and CouchDB).
Requirements: a persistent cache (if a server restarts, the cache shouldn't be lost, to avoid bringing down the server while re-creating the cache).
http://www.danga.com/memcached/
Memcached covers most of the requirements you lay out - message-based read, commit and invalidation. High availability and high speed, but very little atomic reliability (sacrificed for performance).
(Also, memcached powers things like YouTube, Wikipedia, Facebook, so I think it can be fairly well-established that organizations with the time, money and talent to seriously evaluate many distributed caching options settle with memcached!)
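A minimal sketch of the usage pattern across the web servers (spymemcached client; the host names, key and TTL are just placeholders):

```java
import net.spy.memcached.MemcachedClient;

import java.net.InetSocketAddress;
import java.util.Arrays;

public class SharedCacheExample {
    public static void main(String[] args) throws Exception {
        // All web servers point at the same memcached pool, so a write or an
        // invalidation issued from one server is immediately visible to the others.
        MemcachedClient cache = new MemcachedClient(
                Arrays.asList(new InetSocketAddress("cache1", 11211),
                              new InetSocketAddress("cache2", 11211)));

        cache.set("page:/home", 300, "<html>...rendered page...</html>"); // 5 min TTL
        Object page = cache.get("page:/home");   // read from any server
        cache.delete("page:/home");              // explicit invalidation

        cache.shutdown();
    }
}
```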
Edit (in response to comment)
The idea of a cache is for it to be relatively transitory compared to your backing store. If you need to persist the cache data long-term, I recommend looking at either (a) denormalizing your data tier to get more performance, or (b) adding a middle-tier database server that stores high-volume data in straight key-value-pair tables, or something closely approximating that.
In defence of memcached as a cache store, if you want high performance with low impact from a server reboot, why not just have 4 memcached servers? Or 8? Each 'reboot' would have correspondingly less effect on the database server.
I think I found a relatively good solution.
I use Zend_Cache to store each cache file locally.
I've created a small daemon based on nanoserver which manages cache files locally too.
When one server creates/modifies/deletes a cache file locally, it sends the same action to all the other servers through the daemon, and they perform the same action.
That means I have local cache files and remote actions at the same time.
Probably not perfect, but should work for now.
CouchDB was too slow and NFS is not reliable enough.