Keep cached data after server restart - caching

I want to use a distributed caching solution that keeps a backup of the cache so it survives a server restart. Is it possible to achieve this with memcached?

The point of a cache is that it isn't the primary storage. You might have a MySQL database, or you might fetch data through external API calls. Regardless of your setup, you shouldn't rely on the data in the cache: if the cache were flushed, you shouldn't lose anything. The only downside is that refilling the cache initially can be a bit costly. Additionally, memcached will drop records without warning for a handful of reasons (expiration, a full cache, etc.).
So I wouldn't worry about keeping the cache intact after a restart. If you have a MySQL database as primary storage, it obviously keeps its data across a server restart, and that's all you really need to worry about.
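The advice above is the classic cache-aside pattern. Here is a minimal sketch of it in Python; a plain dict stands in for a memcached client, and the names (`db`, `get_user`) are illustrative, not from any real API:

```python
# Cache-aside sketch: the cache is disposable; the database is authoritative.
# A plain dict stands in for a memcached client here.

db = {"user:1": "Alice", "user:2": "Bob"}   # primary storage (e.g. MySQL)
cache = {}                                  # stand-in for memcached

def get_user(key):
    value = cache.get(key)       # 1. try the cache first
    if value is None:
        value = db[key]          # 2. on a miss, read from primary storage
        cache[key] = value       # 3. repopulate the cache for next time
    return value

get_user("user:1")   # miss: loaded from the database, then cached
cache.clear()        # simulate a server restart / cache flush
get_user("user:1")   # still works: the cache is simply rebuilt
```

Because every miss falls through to the database, losing the cache on restart costs you only the initial refill, never any data.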

Related

Laravel multi-servers memcached session

I want Laravel to use all the memcached servers listed in the config file to store sessions, because it currently uses only one server.
Every session should also be written to all servers, and readable even if one of the servers is down.
I know that the session system wraps the cache system, but I still don't know where to start.
Assuming you are using the php-memcached client, there is no way of doing this. When you configure this client with multiple servers, it hashes the key to determine which server the value (in your case, the session) will be sent to. Some clients allow replication by sending the value to multiple servers, but this is not a common feature.
If you use long-lived sessions or want to make sure they do not get deleted, you should not use memcached (or any cache, for that matter) to store sessions. Even if you could use multiple servers, a session might get evicted by the LRU algorithm when the cache is full. Use permanent storage such as a file or a database in that case.
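The key-hashing behaviour described above can be sketched in a few lines. This is a simplification: php-memcached defaults to consistent hashing rather than a plain modulo, but the one-key-one-server principle is the same. The server names are made up for illustration:

```python
import hashlib

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key):
    # Hash the key and map it onto exactly one server. Real clients use
    # consistent hashing, but the effect is the same: no replication.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

# Each session key deterministically lands on a single server,
# so a given session is never stored on more than one node.
for key in ("session:alice", "session:bob", "session:carol"):
    print(key, "->", server_for(key))
```

If the server a key hashes to goes down, that key's sessions are simply gone, which is why the answer recommends permanent storage for anything you can't afford to lose.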

Difference between in-memory-store and managed-store in mule cache

What are the main differences between in-memory-store and managed-store in the Mule cache scope, and which gives the best performance?
What is the best way to configure caching in a global scope?
We are currently using in-memory-store caching, and we keep running into memory outages because our server has limited hardware. We are using Mule 3.7.
Please provide your suggestions for configuring the cache in an optimized way.
We are also facing an issue with cache expiration with in-memory-store: cached data is not expunged after the expiration time. When we use "managed-store", it works as expected.
Below is my configuration:
In-memory:
This stores the data in system memory. Data stored with in-memory-store is non-persistent, which means that in case of an API restart or crash, the cached data will be lost.
Managed-store:
This stores the data in a place defined by a ListableObjectStore. Data stored with managed-store is persistent, which means that in case of an API restart or crash, the cached data will not be lost.
Source (explained in detail with configuration difference):
http://www.tutorialsatoz.com/caching-in-mule-cache-scope/
A friend of mine explained the difference to me as follows:
in-memory cache --> a temporary in-memory storage area for the data. For example, when using a VM component in Mule, the data is stored in the VM in the form of an in-memory queue.
managed-store --> we can store the data and use it in later stages, for example via an object store.
A cache mainly holds frequently used data; it reduces database or HTTP calls by keeping frequently used data or results inside the cache scope.
But both are for temporary storage only, meaning they are valid for that particular session alone.
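The persistence difference between the two stores can be illustrated outside of Mule. In this sketch a plain dict plays the role of in-memory-store and a file-backed `shelve` plays the role of a managed (persistent) store; neither is Mule's actual API, just an analogy:

```python
import os
import shelve
import tempfile

# "In-memory-store" stand-in: a dict, gone when the process restarts.
in_memory = {"payload": "cached-response"}

# "Managed-store" stand-in: a file-backed shelve that survives restarts.
path = os.path.join(tempfile.mkdtemp(), "managed-store")
with shelve.open(path) as store:
    store["payload"] = "cached-response"

# Simulate a restart: the in-memory dict is recreated empty...
in_memory = {}
print(in_memory.get("payload"))      # None -- the entry is lost

# ...but the file-backed store still has the entry.
with shelve.open(path) as store:
    print(store["payload"])          # cached-response -- survived
```

The trade-off is the usual one: the persistent store survives a crash or restart, while the pure in-memory store is faster but disposable.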

WebApi - Redis cache vs Output cache

I have been studying Redis (no hands-on experience at all, just theory), and after doing some research I found that it is also used as a cache, e.g. by StackOverflow itself.
My question is: if I have an ASP.NET WebApi service and I use output caching at the WebApi level to cache responses, I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses.
Now, since Redis is an in-memory database, how will it help me to substitute WebApi's output caching with a Redis cache?
Is there any advantage?
I tried to go through this answer, redis-cache-vs-using-memory-directyly, but I guess I didn't get the key line in it:
"Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required."
I am basically storing kind of key/value (request/response) in server's memory to deliver cached responses.
This means that after a server restart, the server will have to rebuild the cache. That won't be the case with Redis. So one advantage of Redis over a homemade in-memory solution is persistence (only if that matters to you and you did not plan to write persistence yourself).
Then, instead of coding your own expiry mechanism, you can use the Redis EXPIRE or EXPIREAT commands, or even simply specify the expiry when putting the API output string into the cache with SETEX.
if you need your application to scale on several nodes sharing the same data
What it means is that if you have multiple instances of the same API servers, putting the cache into Redis allows those servers to share the same cache, thus reducing, for instance, memory consumption (one cache instead of three in-memory caches), and so on...
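To make the SETEX point concrete without requiring a running Redis server, here is a minimal Python sketch of SETEX-style semantics: store a value together with a time-to-live, and treat it as gone once the TTL elapses. The class and its methods are illustrative, not a Redis client:

```python
import time

class TTLCache:
    """Minimal sketch of SETEX-style expiry. Not a Redis client."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        # Like Redis SETEX: set the value and its expiry in one call.
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expire lazily on access
            return None
        return value

cache = TTLCache()
cache.setex("api:/users", 0.1, '{"users": []}')  # cache a response for 100 ms
print(cache.get("api:/users"))   # fresh: returns the cached body
time.sleep(0.15)
print(cache.get("api:/users"))   # expired: returns None
```

With real Redis you would get this for free, shared across all your API instances, which is exactly the scaling advantage the quoted line is describing.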

Store persistent data in session

This might be super stupid. Shoot me, but I was in a strange mood yesterday and thought about the following:
What if I stored webapp data in a persistent way, just by using sessions? I would store a session cookie with a hash, long enough that it can't be brute-forced, then save all the data in the session. I would also set the session lifetime to unlimited...
Would there be any use for this? :D
Not really. Most session-state implementations keep sessions in memory. On an app restart (or hardware failure, etc.) memory is cleared and the session data is lost.
You could do this if your sessions were stored in a database rather than in-proc, but that could be a fair amount of work depending on your platform, and it's slower as well.
Generally you don't want sessions to get very large: if they are in-proc, they will eat up your server's memory real fast. Even with the database approach, sessions are often kept in in-memory temp tables, so they will eat up the RAM of the database server instead.
Sessions should be lightweight and non-essential to the application's functionality. Anything important that must be persisted belongs in a database.
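The "lightweight session, durable database" split can be sketched in a few lines with sqlite3 from the standard library. The table and names are invented for illustration; the point is that the session carries only an identifier, and everything essential is reloaded from the database:

```python
import sqlite3

# Durable user data lives in a database, not in the session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (user_id TEXT PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO user_data VALUES (?, ?)", ("u42", "important-state"))
conn.commit()

# The session only holds a lightweight pointer (the user id).
# If the session is lost, no data is lost -- only the login.
session = {"user_id": "u42"}

row = conn.execute(
    "SELECT payload FROM user_data WHERE user_id = ?",
    (session["user_id"],),
).fetchone()
print(row[0])  # important-state
```

If the session store is wiped on restart, the worst case is that the user has to log in again; nothing that matters was ever stored only in the session.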

Play framework session via client cookie

In my application I want to keep a large amount of user-specific data in memory, in a per-user session, for the user currently accessing my web application. As far as I know, the Play framework uses a cookie to store session data, which has a 4k limit. How can I have much larger session data? Would Ehcache or Memcached help here? The session should expire some time after the user's last activity.
If the session data is cacheable, it's better to keep it in a cache keyed by the user id and clear it when the user logs off; have it reloaded from the DB on any relevant DB update/delete. Keeping the content in an external cache like Memcached will help you scale and will let you move to a distributed cache in the long run, if required. Check this interesting article on Share Nothing.
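The lifecycle described above (cache keyed by user id, cleared on logoff, invalidated on DB update) can be sketched as follows. A dict stands in for the external cache, and all names are illustrative:

```python
# Per-user cache keyed by user id, cleared on logoff and refilled
# from the database on the next request.

db = {"u7": {"prefs": "dark-mode", "cart": ["book"]}}  # authoritative store
cache = {}                                             # stand-in for Memcached

def get_user_data(user_id):
    if user_id not in cache:
        cache[user_id] = db[user_id]   # reload from the DB on a miss
    return cache[user_id]

def on_logoff(user_id):
    cache.pop(user_id, None)           # clear the entry when the user logs off

def on_db_update(user_id, data):
    db[user_id] = data
    cache.pop(user_id, None)           # invalidate so the next read reloads

get_user_data("u7")    # first request: loaded from the DB, then cached
on_logoff("u7")
print("u7" in cache)   # False -- entry cleared on logoff
```

Because the database remains authoritative, any cache node can serve any request, which is what makes the share-nothing scaling mentioned above possible.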
The idea with Play is to dispel the need for the session and the keeping of lots of information in memory. The problem with the in-memory approach is that it ties the user to the specific server holding their data, whereas Play's share-nothing approach means you can scale horizontally easily, without worrying about sticky sessions and the like.
The options you have are
- store transient data in a temporary database that can be accessed via a userId or another unique identifier of the user's session. This database would be the equivalent of your server-side session.
- use a cache. However, the idea of a cache is that if the information is not in the cache, it can be retrieved from the database (or another source) instead. A cache does not have to guarantee that the data will be available. With an in-memory cache (like Ehcache) behind a load-balanced set of servers, you cannot guarantee that all requests go back to the same server, so data in the cache may not be available on every server for a particular session.
The answer to your question depends on your use case, but I think the database is your best approach based on the information you have supplied.
