Distributed caching with NHibernate ORM - ASP.NET Web API

I am trying to implement caching in my application.
We are using an Oracle database and ASP.NET Web API to serve data to the UI.
API calls are taking too long, so we are considering caching. Our code is deployed on two servers behind a load balancer.
How should caching be implemented?
What I am planning is this:
There should be a cache service API on one of the servers that stores all the data in memory. The UI will call our existing API, the request can hit either node, and that API will then get the data from the new cache API and serve it to the UI.
Is this architecture correct for distributed caching?
Can anyone share their experience or guidance on implementation?
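For illustration, a minimal sketch of the flow described above, with a hypothetical cache-service URL and a placeholder for the Oracle/NHibernate query:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class ProductService
    {
        // Hypothetical cache service holding all data in memory behind its own API.
        private static readonly HttpClient CacheApi =
            new HttpClient { BaseAddress = new Uri("http://cache-service/") };

        public async Task<string> GetProductJsonAsync(string id)
        {
            var response = await CacheApi.GetAsync($"cache/product/{id}");
            if (response.StatusCode == HttpStatusCode.OK)
                return await response.Content.ReadAsStringAsync();   // cache hit

            var json = await LoadFromOracleAsync(id);                // miss: query the DB
            await CacheApi.PutAsync($"cache/product/{id}",
                                    new StringContent(json));        // populate the cache
            return json;
        }

        private static Task<string> LoadFromOracleAsync(string id) =>
            Task.FromResult("{}"); // placeholder for the real NHibernate/Oracle query
    }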

You might want to check out NCache. Being a distributed caching solution, it provides first-class support for sharing cache data between multiple clients, because the cache process runs autonomously outside the address space of any one application.
In your case, every web server in your load-balanced web farm will be a client of NCache and have direct access to the cache servers. All the web servers, being clients of a central caching solution, will see the same cache data through simple-to-use NCache APIs. Any modification through insert, update, or delete cache operations is immediately visible to all the web servers.
NCache transparently handles all the work of storing and distributing the cache data among the multiple cache server nodes across which the cache instance is distributed.
Furthermore, all the caching operations are completely independent of the framework used for database content retrieval and can be applied equally well with NHibernate, EF, EF Core and, of course, ADO.NET.
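As a hedged illustration, a minimal cache-aside sketch assuming the NCache 5.x .NET client (the cache name, key scheme, and Product type are made up; check the NCache documentation for the exact API of your version):

    using Alachisoft.NCache.Client;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class ProductRepository
    {
        // Both web servers connect to the same named clustered cache and
        // therefore see the same data.
        private static readonly ICache Cache = CacheManager.GetCache("demoClusteredCache");

        public static Product GetProduct(int id)
        {
            string key = $"Product:{id}";
            var product = Cache.Get<Product>(key);    // try the cache first
            if (product == null)
            {
                product = LoadFromOracle(id);          // miss: existing NHibernate/ADO.NET path
                Cache.Insert(key, product);            // now visible to every web server
            }
            return product;
        }

        private static Product LoadFromOracle(int id) =>
            new Product { Id = id, Name = "from-db" }; // placeholder for the real query
    }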
You can find more information about how to integrate NCache into your web farm environment and much more by using the following link:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/ncache-architecture.html

Related

What is a well-documented caching strategy pattern for a microservice architecture dealing with legacy systems?

I'm building a microservices architecture that should deal with:
Direct database access
Call to external legacy services
I can think of two caching strategies, but can't figure out which is best, considering that I will have no control over what other people might do across the layers.
Caching at application level (@Cacheable)
I only provide a caching feature that everyone can use, enforcing spring.cache.redis.key-prefix to be the microservice name to limit conflicting keys (see the sketch after this list).
PRO: most flexible way
CONS:
No control over the cache except for maximum space: people would just create new cache entries
No control over cache invalidation: we don't know what kind of data is actually stored, so if, for example, a legacy system needs to be reloaded, we cannot empty the relevant cache keys
Possible redundancy: as caching is at the application layer, microservices could end up storing roughly the same data. While I can have control over the database (one MS should own its own DB, or at least a subset of tables), I can't guarantee the same for the legacy SOAP layer
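A minimal sketch of that prefix convention, assuming Spring Boot's standard Redis cache properties (the service name is illustrative):

    # application.properties of the "orders" microservice
    spring.cache.type=redis
    spring.cache.redis.use-key-prefix=true
    spring.cache.redis.key-prefix=orders::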
Caching at service layer (connectors)
I don't provide a caching feature, but I provide custom SOAP connectors that will or will not cache responses based on a configuration I provide (which could also be a blacklist/whitelist)
PROS:
cache is controlled
easy to invalidate
CONS:
need to update connectors each time a cache policy changes
dependency between development and architecture
Edit: I need suggestions about the theoretical approach, not about a specific technology.
I suggest you build different microservices (APIs) to deal with different sets of responsibilities. For example, you can have one microservice that deals with the legacy systems and another that deals with the database. For these two microservices to communicate, you can use a message-broker architecture such as Apache Kafka (with Hazelcast or RabbitMQ as cost-effective alternatives).
Communication between these two microservices can be event-driven as well.
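For instance, a hedged sketch of such event-driven communication using the Confluent.Kafka .NET client (broker address, topic, and payload are illustrative):

    using Confluent.Kafka;

    var config = new ProducerConfig { BootstrapServers = "broker1.internal:9092" };
    using var producer = new ProducerBuilder<Null, string>(config).Build();

    // The legacy-facing service publishes an event; the database-facing
    // service consumes the same topic and reacts to it.
    await producer.ProduceAsync("legacy-data-refreshed",
        new Message<Null, string> { Value = "{\"entity\":\"customer\",\"id\":42}" });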
Once you decide this, you can finalize where to place your cache.
If there is a UI where you are showing these values, you will need to place the cache at the application level, not the service level.

Setting Up NCache (Distributed/Shared)

I have two servers where I will be deploying the same application. Basically, these two servers will handle work from a common Web API; the work handed out will be transformed, go through some logic, and be loaded into the DB. I want to cache the data that gets loaded, updated, or deleted in the database, so that when the same data is referenced I can get it from the cache (that roughly explains the cache mechanism). I am using NCache and it is working perfectly fine within one application. Now I am trying to set up a kind of shared cache, so that both of my applications have access to it. How do I go about doing that?
NCache is a distributed cache so you can continue to use that.
There is good general documentation available and very good getting started material that walks you through all the steps required.
In essence, you install NCache on both servers and then reference both servers in your client configuration (%NCHOME%\config\client.ncconf), along the lines of the sketch below.
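For illustration, a trimmed client.ncconf entry might look something like this (cache name and server addresses are made up; the exact attributes vary by NCache version):

    <configuration>
      <cache id="demoClusteredCache" load-balance="True">
        <server name="10.0.0.11"/>
        <server name="10.0.0.12"/>
      </cache>
    </configuration>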
In clustered caches, a single logical cache instance is distributed over multiple server nodes. Because the cache process runs outside the application address space, multiple applications can share it and see exactly the same cache data, including every addition, removal, and update of cache content.
Local out-of-process caches are limited to one server node, but since they also live outside the application address space, they too support sharing data between applications.
In fact, besides allowing multiple applications to share data, NCache provides a pub/sub infrastructure that lets applications actually communicate with each other. This allows NCache to play a key part in setting up a fast and reliable microservices environment in which all the participating services send messages to each other through the NCache platform.
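As a hedged illustration of that pub/sub capability, assuming the NCache 5.x MessagingService API (topic name and payload are illustrative; verify the exact signatures against the NCache docs):

    using System;
    using Alachisoft.NCache.Client;
    using Alachisoft.NCache.Runtime.Caching;

    ICache cache = CacheManager.GetCache("demoClusteredCache");

    // One service gets or creates a topic and publishes a message...
    ITopic topic = cache.MessagingService.GetTopic("orderEvents")
                   ?? cache.MessagingService.CreateTopic("orderEvents");
    topic.Publish(new Message("order-42-created"), DeliveryOption.All);

    // ...while another service subscribes to the same topic and reacts.
    topic.CreateSubscription((sender, args) =>
        Console.WriteLine($"Received: {args.Message.Payload}"));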
See the links below for information about NCache topologies and getting started:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html
http://www.alachisoft.com/resources/videos/five-steps-getting-started.html

Caching for both Angular and ASP.NET MVC

Background
In my project app, many small applications have been integrated, and they were built with different technologies such as
Angular JS 1.5
ASP.Net MVC
Angular 5
My app also uses AWS as its cloud partner.
Problem
I need to implement a caching mechanism in my app. I am storing some values in an S3 bucket and using API calls to pull them. I want to keep those values in a cache. Since the app is implemented in multiple technologies (especially Angular and ASP.NET MVC), is there any caching mechanism that can be used in common?
Observations
I have done some work on this and observed that the following caching options are available:
.NET MVC - in-memory caching
Angular - in-memory cache (with ReactJS)
As AWS is my cloud partner, it offers ElastiCache as a web service, which supports Memcached and Redis. I am not clear whether this behaves like a normal in-memory cache (as in ASP.NET Core) or whether it refers to a backing database for caching and retrieves details from there (causing a round trip).
Question
Can anyone let me know the best caching technique for my app, covering both .NET MVC and Angular?
This is a bit tricky (I am assuming that you are using multiple Memcached servers). With Memcached, a lot depends on the client implementation: the client decides which server a particular key is stored on, and the servers are unaware of each other's existence.
As you are using different languages, you will be using different clients, and each client implements its own algorithm for deciding which server a key is placed on. So there will be cases where your Angular client stores key "K1" on server "S1" while the .NET client stores the same key on server "S2".
This adds the further problem of invalidating these keys, since the key would need to be invalidated on both servers.
There is one more problem: if the keys are common, you will have to store all objects in JSON format, so that a value stored on a Memcached server by one client can be read from every programming language.
I think the best approach is to set a short enough expiry for keys on Memcached (if that is feasible) and to store keys with a different prefix or suffix for each client type, so the .NET client stores key K1 as "K1-net" and the Angular one stores it as "K1-ang", as in the sketch below.
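For the .NET side, a minimal sketch of that convention using the classic EnyimMemcached client (server addresses, prefix, and expiry are illustrative):

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Configuration;
    using Enyim.Caching.Memcached;

    var config = new MemcachedClientConfiguration();
    config.AddServer("cache1.internal", 11211);   // "S1"
    config.AddServer("cache2.internal", 11211);   // "S2"

    using var client = new MemcachedClient(config);

    // Store JSON so other languages can read the value, prefix the key per
    // client type, and keep the expiry short to bound staleness.
    client.Store(StoreMode.Set, "K1-net", "{\"id\":1}", TimeSpan.FromMinutes(5));
    var value = client.Get<string>("K1-net");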
I would also explore Redis, which might help (not sure).

Is it a good idea to provide cache as a service?

We have many web services and web applications with caching needs, so we are trying to come up with a caching strategy that can help all teams irrespective of their technical choices. We have used Memcached (not replicated) and Couchbase (multi-master) running locally on each server node, with applications connecting to them locally using the Memcached protocol. Going forward, we are planning to move to a centralized cache cluster exposed via REST APIs, which can be used by all the applications running on different server nodes in a datacenter. The reasons behind this line of thought are:
Easy maintenance of a cluster without worrying about app server nodes.
A single protocol (HTTP) used to access the cache without worrying about the underlying implementation. (We might use a Redis, Couchbase, or Aerospike cluster.)
But we are not sure about this strategy because we are worried about the performance impact of HTTP's network overhead.
Has anyone tried this strategy? Is it a good idea to make the cache a centralized service, or are local caches best?
While it's true that HTTP and the network add latency, you generally need a cache because the actual operation takes significantly longer. So the question is: if you add 1-2 milliseconds to each cache access, does that still shorten the un-cached operation time significantly? If the answer is yes, and you follow some common best practices, having a centralized cache could be a good idea.
You might want to look into low-latency, high-throughput server-side frameworks for the HTTP service, like Node.js or Go. You will also probably benefit from implementing proper ETag support in your cache HTTP API.
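For example, a hedged sketch of ETag handling in an ASP.NET Core minimal-API cache endpoint (the in-memory store and version-based ETag scheme are illustrative):

    using System.Collections.Concurrent;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // (json, version) pairs; the version number drives the ETag.
    var store = new ConcurrentDictionary<string, (string Json, int Version)>();
    store["user:42"] = ("{\"name\":\"demo\"}", 1);

    app.MapGet("/cache/{key}", (string key, HttpContext ctx) =>
    {
        if (!store.TryGetValue(key, out var entry))
            return Results.NotFound();

        var etag = $"\"{entry.Version}\"";
        ctx.Response.Headers.ETag = etag;
        if (ctx.Request.Headers.IfNoneMatch == etag)
            return Results.StatusCode(StatusCodes.Status304NotModified); // client copy is fresh

        return Results.Text(entry.Json, "application/json");
    });

    app.Run();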
Another alternative might be centralizing the cache server(s) without wrapping them in an HTTP layer. There are standard cache provider implementations for all the technologies you mentioned available for most modern web frameworks.
Disclaimer: I work for Redis Labs, a commercial company that makes tools for managing Redis and Memcached clusters, and one that has made a business of the very strategy you want affirmed :)
Cache is a dish best served close, but remote caching has benefits (e.g., offloading the DB) even if the latency penalty suggests differently. In most cases, compared to the time spent in the application, the local area network latency becomes negligible, so using a shared network-attached cache makes a lot of sense.
To get the best performance, have your app interact directly with the shared cache using its own protocol. An HTTP API, unless provided by the caching engine itself, can add latency to the client app's requests. On the other hand, formalizing your apps' access to the cache with a custom layer (such as a REST API) has a lot of nice benefits too, so evaluate the cost against your latency budget.
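For comparison, talking to the cache over its native protocol takes only a few lines with a standard client, here StackExchange.Redis (the connection string is illustrative):

    using System;
    using StackExchange.Redis;

    var mux = ConnectionMultiplexer.Connect("cache1.internal:6379,cache2.internal:6379");
    var db = mux.GetDatabase();

    // Native-protocol round trip: no HTTP layer in between.
    db.StringSet("user:42", "{\"name\":\"demo\"}", TimeSpan.FromMinutes(10));
    string cached = db.StringGet("user:42");
    Console.WriteLine(cached);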
Your strategy is sound; it is used everywhere to build scalable, performant applications. Feel free to hit me up if you need further advice.

What are some caches that are responsible for fetching the data on a miss?

The book 'The Architecture of Open Source Applications' says that the most common type of global cache in a web application is responsible for fetching the data itself in case it is missing, as shown in this figure. This seems different from what I've encountered so far: most applications I have seen make the application server responsible for fetching data from the DB and updating the cache. At first I thought the book might be talking about caching proxies, like Varnish, but those are covered in the next section, so that doesn't seem to be the case.
What cache systems actually fetch the data in case of a miss, and how do they know how to interact with the database?
Caching solutions provide read-through/write-behind features that let users configure a read-through/write-behind provider by implementing an interface and deploying it with the cache server. These providers contain the logic for how the cache server interacts with the database to load/save data.
On a fetch operation, if the data is not present in the cache, the cache server loads it from the database using the configured provider, so the application never observes the miss.
This way, client applications treat the cache as their only data source, and the cache itself is responsible for interacting with the database. You can read further details in this article by Iqbal Khan.
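As a toy illustration of the pattern (not any particular product's provider interface), a read-through wrapper in C# where a loader delegate stands in for the configured provider:

    using System;
    using System.Collections.Concurrent;

    // Callers only ever talk to Get(); the cache itself runs the loader on a
    // miss, so the backing store stays hidden from the application.
    public class ReadThroughCache<TKey, TValue> where TKey : notnull
    {
        private readonly ConcurrentDictionary<TKey, TValue> _store = new();
        private readonly Func<TKey, TValue> _loadFromSource;

        public ReadThroughCache(Func<TKey, TValue> loadFromSource) =>
            _loadFromSource = loadFromSource;

        public TValue Get(TKey key) => _store.GetOrAdd(key, _loadFromSource);
    }

    // Usage: the loader stands in for a real database query.
    // var cache = new ReadThroughCache<int, string>(id => QueryDatabase(id));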
NCache and TayzGrid are enterprise solutions among many others that provide this feature.
