WebApi - Redis cache vs Output cache - caching

I have been studying Redis (no hands-on experience at all, just theory), and after doing some research I found out that it is also used as a cache, e.g. by Stack Overflow itself.
My question is: if I have an ASP.NET WebApi service and use output caching at the WebApi level to cache responses, I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses.
Now, since Redis is an in-memory database, how would it help me to substitute WebApi's output caching with a Redis cache?
Is there any advantage?
I tried to go through this answer, redis-cache-vs-using-memory-directyly, but I guess I didn't get the key line in the answer:
"Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required."

"I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses."
This means that after a server restart, the server will have to rebuild the cache. That won't be the case with Redis. So one advantage of Redis over a homemade in-memory solution is persistence (only if that matters to you and you did not plan to write persistence yourself).
Then, instead of coding your own expiration mechanism, you can use the Redis EXPIRE or EXPIREAT commands, or simply pass the time-to-live when putting the API output string into the cache with SETEX.
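As an illustration, here is a minimal get-or-set sketch of that idea. The question is about ASP.NET WebApi, but this sketch uses the Java Lettuce client instead; the cache key, TTL, and the stand-in for the real API work are all assumptions, not anything from the question.

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.api.sync.RedisCommands;

    public class OutputCacheSketch {
        public static void main(String[] args) {
            RedisClient client = RedisClient.create("redis://localhost:6379");
            RedisCommands<String, String> redis = client.connect().sync();

            String cacheKey = "api:GET:/products/42";   // hypothetical request-derived key

            // Try the shared cache first; any server instance can hit this entry.
            String response = redis.get(cacheKey);
            if (response == null) {
                response = "{\"id\":42,\"name\":\"widget\"}"; // stand-in for the real API work
                // SETEX stores the value and lets Redis expire it after 60 seconds,
                // so no home-grown expiration mechanism is needed.
                redis.setex(cacheKey, 60, response);
            }
            System.out.println(response);

            client.shutdown();
        }
    }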
"if you need your application to scale on several nodes sharing the same data"
What it means is that if you have multiple instances of the same API server, putting the cache into Redis allows these servers to share the same cache, thus reducing, for instance, memory consumption (one shared cache instead of, say, three separate in-memory caches), and so on.

Related

Can Digital Ocean Stateless Servers (I have 4 servers running on Digital Ocean) work with the Caching Policy I implemented on Spring Boot?

I have implemented caching in my Spring Boot REST application. My policy includes a time-based cache eviction strategy and an update-based cache eviction strategy. I am worried that, since my servers are stateless, if a method call updates certain data and it is handled by server instance A, the corresponding caches in server instances B, C and D are not updated as well.
Is this an issue I would face / is there a way to overcome this issue?
This is one of the oldest problems in software development: cache invalidation when you have multiple servers.
One way to handle it is to move the cache out of the individual servers and into somewhere shared, such as a separate instance that holds the cache entries every app server refers to, or something like Redis [a centralized cache].
A second way is to broadcast a message so that each server knows to invalidate the entry once the data has been modified or deleted (a sketch of this follows below); here you run the risk of a message not being processed, leaving a stale entry on some server[s].
Another option is to have some sort of write-ahead log [like Kafka or Redis Streams] which is processed by each server, so they all process the events deterministically and end up with the same cache state.
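A rough sketch of the broadcast option, using Redis pub/sub with the Java Lettuce client; the channel name and the local cache map are assumptions for illustration only, not part of the original answer.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.pubsub.RedisPubSubAdapter;
    import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

    public class CacheInvalidationSketch {
        // Each server instance keeps its own local cache...
        private static final Map<String, String> LOCAL_CACHE = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            RedisClient client = RedisClient.create("redis://localhost:6379");

            // ...and listens on a shared channel for invalidation messages.
            StatefulRedisPubSubConnection<String, String> sub = client.connectPubSub();
            sub.addListener(new RedisPubSubAdapter<String, String>() {
                @Override
                public void message(String channel, String key) {
                    LOCAL_CACHE.remove(key);   // evict the stale entry on this node
                }
            });
            sub.sync().subscribe("cache-invalidation");

            // The instance that handles an update broadcasts the affected key,
            // so every other instance drops its copy.
            client.connect().sync().publish("cache-invalidation", "user:42");
        }
    }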
Lmk if you need more help - we can set up some time outside of SO.

What is the difference between a dedicated and colocated cache? How would a colocated cache work?

According to the following video, a dedicated cache is a cache process hosted on a separate server, while a colocated cache is a cache process hosted directly on the service hosts. Is that the standard definition? I cannot find anything more on this topic online.
In the colocated cache scenario, would the service always reference the cache on that specific host, or would it need to query other hosts as well? Is it possible to route requests only to the hosts that hold the colocated cache for that partition of data, in order to avoid the extra network hop to a cache server that would be needed in the dedicated cache scenario?
In a colocated cache, querying only the service instance hosting the data can be accomplished by sharding the data (typically by some key). Given a key, a requestor can resolve which shard owns the data, then resolve which instance owns the shard, and direct the request at that instance. As long as everyone agrees on both levels of ownership, this works really well. It's also possible to have each instance of the service forward requests, so that even if a request gets misdirected it will, with some likelihood, eventually reach the correct instance.
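A minimal sketch of that two-level resolution (key -> shard -> instance) in Java; the instance list, shard count, and hashing scheme are assumptions, and a real system would get membership from a coordination service and typically use consistent hashing so that ownership moves minimally when instances change.

    import java.util.List;

    public class ShardRouterSketch {
        // Hypothetical cluster membership; in practice every requestor must agree on this list.
        private static final List<String> INSTANCES =
                List.of("cache-host-a:8080", "cache-host-b:8080", "cache-host-c:8080");
        private static final int SHARD_COUNT = 64;

        // Level 1: a key deterministically maps to a shard.
        static int shardFor(String key) {
            return Math.floorMod(key.hashCode(), SHARD_COUNT);
        }

        // Level 2: a shard deterministically maps to the instance that owns it.
        static String ownerOf(int shard) {
            return INSTANCES.get(shard % INSTANCES.size());
        }

        public static void main(String[] args) {
            String key = "customer:42";
            int shard = shardFor(key);
            // As long as everyone agrees on both mappings, the request can go
            // straight to the owning instance, avoiding an extra network hop.
            System.out.println(key + " -> shard " + shard + " -> " + ownerOf(shard));
        }
    }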
Adya (2019) describes the broad approach, calling it a LInK store, which can go as far as embedding the cache into the service itself (which doesn't even require local interprocess communication). When this is done, there's basically no line between your service and the cache: the service is the cache, and databases/object stores only exist to allow cold data to be evicted and to provide durability. One benefit of this approach is that in the sunny-day case your database/object store mostly handles writes, and it ends up having a lot of mechanical sympathy with CQRS.
I personally have had a lot of success using Akka to implement services following this approach (Akka Cluster manages cluster membership and failure detection, Akka Cluster Sharding handles shard distribution and resolution, and Akka Persistence provides durability).

Difference between in-memory-store and managed-store in mule cache

What are the main differences between in-memory-store and managed-store in the Mule cache scope, and which gives the best performance?
What is the best way to configure caching in global scope?
We are currently using in-memory-store caching. We keep running into out-of-memory issues because we are using a server with a modest hardware configuration. We are using Mule 3.7.
Please provide your suggestions for configuring the cache in an optimized way.
We are also facing an issue with cache expiration with in-memory-store: cached data is not being expunged even after the expiration time. But when we use "managed-store" it works as expected.
Below is my configuration:
In-memory:
This stores the data in system memory. The data stored with in-memory-store is non-persistent, which means that in case of an API restart or crash the cached data will be lost.
Managed-store:
This stores the data in a place defined by a ListableObjectStore. The data stored with managed-store is persistent, which means that in case of an API restart or crash the cached data will not be lost.
Source (explained in detail with configuration difference):
http://www.tutorialsatoz.com/caching-in-mule-cache-scope/
A friend of mine explained this difference to me as follows:
in-memory cache --> a temporary memory storage area where the data is held. For example, when using a VM component in Mule, the data is stored in the VM in the form of an in-memory queue.
managed store --> here we can store the data and use it in later stages, for example in an object store.
Mainly, a cache stores frequently used data; it reduces DB or HTTP calls by saving frequently used data or results in the cache scope.
But both are for temporary storage only, meaning they are valid for that particular session alone.

Azure Redis Memory Usage (w3wp.exe dump) Insight

Can anyone tell me whether the type of behavior outlined in the memory dump from Visual Studio
is normal? For instance, does StackExchange.Redis.PhysicalConnection normally run that high on inclusive size (bytes), or is that really high?
Basically, we are experiencing slowness with our web head after converting our code from Session to Azure Redis (we now serialize and deserialize as needed and store the results in the Redis cache), and overall performance is horrible.
The requests complete, but they can take a while; is that due to the single-threaded nature of Redis? We are using the configuration outlined as best practice by the Azure Redis team here: https://stackoverflow.com/a/28821220
What else can we look at to help increase performance? The current performance is not acceptable as a viable replacement for the session-based implementation (ASP.NET WebForms / SQL Server / Azure IaaS) we currently have.
PS - serialization and deserialization do cause a hit; we understand that IIS spoiled us with its own special memory pool for non-serialized DataSets and such, but there is no way it should cause a 300-500% increase in page load times like it does for us now.
Thoughts appreciated!
@Tim Wieman
How large are your cached objects?
They can range in size; there are some DataSets stored in Redis.
What type of objects are they?
Most objects are custom objects with a variable number of properties; some even contain collections.
What serializer are you using?
We are using Newtonsoft for anything that doesn't require RowState, and the required binary serializer for the DataSets that do need RowState.
All serialization, and subsequent deserialization, is done in code before calling the Redis database's StringGet or StringSet.
It appears the memory usage was in fact extremely high: we were erroneously creating thousands of connections to Redis instead of a singleton connection to the Redis cache.
The multiple connections were not getting cleaned up by the GC before the CPU would get to 98% and the server would become unresponsive.
We adjusted our code to ensure a single instance of the connection to Azure Redis is used for all Redis calls and have tested thoroughly.
It appears to be resolved, as Azure Redis is no longer eating up memory or CPU resources.
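The fix described above was done in C# with StackExchange.Redis; the sketch below shows the same singleton-connection idea using the Java Lettuce client, purely to illustrate reusing one connection for all calls instead of creating one per request (the class name and URI are assumptions).

    import io.lettuce.core.RedisClient;
    import io.lettuce.core.api.StatefulRedisConnection;
    import io.lettuce.core.api.sync.RedisCommands;

    public final class RedisConnectionHolder {
        // One client and one connection for the whole process, created once and reused.
        private static final RedisClient CLIENT =
                RedisClient.create("redis://localhost:6379");
        private static final StatefulRedisConnection<String, String> CONNECTION =
                CLIENT.connect();

        private RedisConnectionHolder() {}

        public static RedisCommands<String, String> commands() {
            // Lettuce connections are thread-safe, so every caller shares this one
            // instead of opening (and leaking) a new connection per request.
            return CONNECTION.sync();
        }
    }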

How to avoid single point of failure from given distributed architecture

I went through this video - Scalability Harvard Web Development David Malan
This is where I got stuck. Explaining the problem -
Let's assume the LB is using a round-robin kind of approach.
As per the first image, all servers store sessions in their local space, which is not accessible to the other servers. If the same client's next request comes in and the LB redirects it to another server, that server will ask for authentication again. That is very irritating from the user's point of view.
As per the second image, all servers share sessions. In this case, when the next request comes from the same client and the LB redirects it to another server, that server, instead of asking for authentication, fetches the session information from the session host.
This is mentioned in above video link.
Question -
Now the session host becomes a single point of failure. If that host goes down, it will badly impact availability. How can we avoid such a case?
You have these options (assuming the session is something that cannot be lost at any cost):
1) Make the session data store a highly available data store. For example, you can use a MongoDB replica set for such a session store. It consists of three MongoDB nodes (minimum), a master and two slaves, and when the master goes down one of the remaining nodes is promoted to master. This election may take a few seconds, but the session would not be lost.
2) Use an in-memory data-sharing library that does data partitioning as well as replication. An example would be Hazelcast for Java. It gives you object-level sharing across the web tier, and there you can store the shared session (see the sketch after this list). Please note that, AFAIK, there is no on-disk data persistence in this case.
3) The most scalable approach I have used so far is to have a client-side session and no server-side data/session storage. What you can do in this case is keep a very long secret key on each app server and set all the data in the cookie after encrypting it with this secret key. The only problem with this approach is that you need to be very selective about what you store in the session, as there is a limit on cookie size. The encryption is two-way (reversible). Most SaaS-based tools use this approach.
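A minimal sketch of option 2, assuming Hazelcast's default cluster discovery and a made-up map name and session payload; each app server running this joins the same cluster, and the map is partitioned and backed up across members so no single node is a single point of failure.

    import java.util.Map;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class SharedSessionStoreSketch {
        public static void main(String[] args) {
            // Each app server starts (or joins) the Hazelcast cluster.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // The "sessions" map is shared across all members of the cluster.
            Map<String, String> sessions = hz.getMap("sessions");

            // Any node can write a session entry...
            sessions.put("session:abc123", "userId=42;role=admin");

            // ...and any other node can read it back, regardless of which
            // server the load balancer picked for the next request.
            System.out.println(sessions.get("session:abc123"));
        }
    }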
Implementing the session host as a replicated data store helps remove the single point of failure. For example, using a replicated cache like Hazelcast keeps the cache replicated and distributed, thus eliminating the single point of failure. There are others, like Memcached and Mongo. Automatic failover for these can be achieved via virtual IP addresses.
For this exact reason, session hosts (e.g. memcached) are usually fronted with a VIP (virtual IP) and backed by more than one host. In a distributed architecture you generally want N hosts rather than one. Most companies that operate at scale use data stores like Couchbase (memcached buckets) to store session state because they are fast, redundant, and highly scalable.
