Even though Redis is a single-threaded system, it scales very well compared to the multithreaded memcached. I am not able to understand exactly why; can someone help me understand?
I have recently encountered a question: since Redis is not distributed and doesn't support parallelism (multi-core), wouldn't Elasticsearch be a better choice than Redis for caching purposes?
This is all in reference to a simple web app, where we used Redis to cache DB queries.
I have kind of got the idea here, but I'm still not sure whether it has any real benefits. Opening up this thread to discuss the advantages/disadvantages of doing so.
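For context, our caching layer is roughly the cache-aside pattern sketched below; the key naming, TTL, and `query_db` helper are illustrative, not our actual code:

```python
# A minimal cache-aside sketch with redis-py; key naming, the TTL, and
# the query_db helper are illustrative, not a specific app's code.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def get_user(user_id):
    key = "user:%d" % user_id
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)       # cache hit: skip the database
    row = query_db(user_id)             # hypothetical DB lookup helper
    r.setex(key, 300, json.dumps(row))  # cache the result for 5 minutes
    return row
```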
It's not really what you asked for, but you might want to have a look at Aerospike.
Redis is an in-memory data structure store known for speed and often used as a cache. Both Redis and Aerospike are open source; however, when applications need persistence, or when they must scale but servers have maxed out their RAM, developers should consider Aerospike, a distributed key-value store that is just as fast as or faster than Redis but scales more simply and with the economics of flash/SSDs.
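To make that concrete, here is a minimal sketch with the aerospike Python client; the host, namespace, and set names are illustrative assumptions:

```python
# A minimal sketch with the aerospike Python client; the host, namespace,
# and set names are illustrative assumptions.
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

key = ("test", "cache", "user:42")  # (namespace, set, user key)
client.put(key, {"name": "Widget", "hits": 1})  # persisted per namespace config

(_, meta, record) = client.get(key)
print(record)  # {'name': 'Widget', 'hits': 1}

client.close()
```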
Which cache system is better for the Mondrian SegmentCache?
Memcached, Redis, Hazelcast, or something else?
Not so much because it's better, but because setting it up is simpler: the Community Distributed Cache (CDC) uses Hazelcast to create a cluster of cache nodes which keep results in memory. From previous experience I think it works quite nicely, and it's very easy to set up. You can find the CDC plugin in the Pentaho Marketplace or on GitHub: https://github.com/webdetails/cdc
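For a feel of what a Hazelcast-backed cache looks like from client code, here is a minimal sketch with hazelcast-python-client; the member address and map name are illustrative, and CDC itself wires this up inside Pentaho rather than in your code:

```python
# A minimal sketch with hazelcast-python-client; the member address and
# map name are illustrative, not what CDC actually configures.
import hazelcast

client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])
segments = client.get_map("mondrian-segments").blocking()

# Entries live in the memory of the clustered cache nodes.
segments.put("segment:2019-q1", "serialized segment data", ttl=600)  # seconds
print(segments.get("segment:2019-q1"))

client.shutdown()
```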
This question is directed towards Jeroen and is a follow-up to this answer: https://stackoverflow.com/a/12482918/177984
Jeroen wrote: "the server does caching" ... "so if enough memory is available it will automatically be available from memory."
How can I confirm whether an object is cached in memory or not? From what I can tell (judging by performance), all of my objects are being read from disk. I'd like to have things read from memory to speed up data load times. Is there a way to view what's in the in-memory cache? Is there a way to force caching objects in memory?
Thanks for your help.
The OpenCPU project is rapidly evolving. Things have changed in OpenCPU 1.0. Have a look at the website for the latest information: http://www.opencpu.org.
The answer that you cited is outdated. Currently, all caching is indeed done on disk. In a previous version, OpenCPU used Varnish for caching, which is completely in-memory. However, this turned out to make things more complicated (especially HTTPS), and performance was a bit disappointing (especially in comparison with fast disks these days). So now we're back to nginx, which caches on disk but is much more mature and configurable as a web server, and has other performance benefits.
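For reference, disk caching in nginx looks roughly like the sketch below; the paths, zone name, and upstream port are assumptions, not OpenCPU's shipped configuration. The X-Cache-Status header is one common way to confirm a HIT versus a MISS from the client side, which also speaks to the question above:

```nginx
# Illustrative nginx disk-cache config; paths, zone name, and upstream
# port are assumptions, not OpenCPU's shipped configuration.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=ocpu:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location /ocpu/ {
        proxy_pass http://127.0.0.1:5656;
        proxy_cache ocpu;
        proxy_cache_valid 200 10m;                         # cache OK responses
        add_header X-Cache-Status $upstream_cache_status;  # HIT / MISS
    }
}
```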
I would like to know if any of you have, or can propose, a solution to increase efficiency for an online store based on the Magento platform.
We currently use a multi-front architecture (front == each separate server) with load balancing and two Memcached servers.
We're considering giving each separate front its own Memcached server, but at that point a problem arises with Memcached synchronization, so that each stores the same values.
Any advice appreciated :)
If you are running memcached on its own hardware, there is no benefit to giving each front end its own memcached server.
Configure all front ends to use both memcached instances. This way, all front ends will go to the same memcached instance for a given key. Plus, you get automatic fail-over if one instance croaks, and you can scale up almost linearly as the demand for cache increases.
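In Python terms (Magento's config is PHP, but the hashing principle is the same), that setup looks roughly like this; the two instance addresses are made up:

```python
# A minimal sketch with the python-memcached client; the two instance
# addresses are made up. Every front end lists BOTH instances, and the
# client hashes each key to one of them, so all fronts agree on where
# a given key lives.
import memcache

mc = memcache.Client(["10.0.0.10:11211", "10.0.0.11:11211"])

mc.set("product:42", {"name": "Widget", "price": 9.99}, time=300)
print(mc.get("product:42"))  # any front end resolves the same instance
```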
One of the biggest improvements I've seen for more advanced setups like these is to use a PHP opcode cache like XCache. Since Magento uses many PHP files, the opcode cache will end up saving a lot of compilation between runs. Also make sure that all your caches are turned on and leveraged as much as possible. Enable flat catalog and flat product tables as well.
I was recently looking into memcached as a way to coordinate a group of servers, but came across Apache's ZooKeeper along the way. It looks interesting, and Yahoo uses it, so it shouldn't be bad, but I'd never heard of it before, so I'm kind of skeptical. Has anyone else given it a try? Any comments or ideas?
ZooKeeper and Memcached have different purposes. You can use memcached to do server coordination, but you'll have to do most of this work yourself. Memcached only allows coordination in that it caches common data lookups to be used by multiple clients. From reading ZooKeeper's documentation, it has a much broader focus than this. ZooKeeper seems to provide support for server clustering, which isn't the same as the cache clustering memcached provides.
Have a look at Brad Fitzpatrick's Linux Journal article on memcached to get a better idea of what I mean.
To get an overview of what ZooKeeper is capable of, watch the following presentation by its creators. It's capable of so much more (creating queues, electing master processes among a group of peers, distributed high-performance runtime configuration, rendezvous points for disjoint processes, determining whether processes are still running, etc.).
http://zookeeper.sourceforge.net/index.sf.shtml
To answer your question: if "coordination" is what you are looking for, ZooKeeper is much better targeted at that than memcached.
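As a concrete example of the kind of coordination ZooKeeper handles that memcached doesn't, here is a minimal sketch of a distributed lock using the kazoo Python client; the connection string, lock path, and job function are illustrative:

```python
# A minimal sketch of a distributed lock with the kazoo Python client.
# The connection string, lock path, and job function are illustrative.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Only one server in the group holds the lock at a time -- the kind of
# coordination memcached doesn't give you out of the box.
lock = zk.Lock("/locks/nightly-job", identifier="server-1")
with lock:
    run_nightly_job()  # hypothetical: your critical section goes here

zk.stop()
```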
ZooKeeper is great for coordinating data across servers. It does a good job of ordering every transaction and guaranteeing that transactions happen in order. However, when first breaking into it, the documentation sucks; it's very 'high-level' without enough concrete examples or explanations of how to properly handle certain events. One of the included examples (as of version 3.3.3) had bugs of its own.
Your code will also need to be cognizant of event-driven interactions and polling interactions. In a massively distributed architecture, when acting upon 'events' you can inadvertently create a stampede that is undesirable for your environment (the herd effect).
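To illustrate where that stampede comes from, here is a sketch of the watch pattern with the kazoo client; the znode path is made up. If many clients all watch the same znode, a single change wakes every one of them at once, and each typically re-reads the node:

```python
# A sketch of the one-shot watch pattern behind the herd effect.
# The znode path is made up; kazoo re-registers the watch after each event.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

def on_config_change(data, stat):
    if stat is None:
        return  # znode doesn't exist (yet)
    # Every client watching this znode wakes up here on a single change;
    # if each one re-reads or reconnects at once, that's the stampede.
    print("config version %d: %r" % (stat.version, data))

zk.DataWatch("/app/config", on_config_change)
```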