Setting Up NCache (Distributed/Shared) Caching

I have two servers where I will deploy the same application. These two servers will handle work handed out by a common Web API; each work item is transformed, run through some logic, and loaded into the database. I want to cache the data that gets loaded, updated, or deleted in the database, so that when the same data is referenced I can get it from the cache (that roughly explains the caching mechanism). I am using NCache, and it works perfectly fine within one application. Now I want a shared cache of sorts, so that both of my applications have access to it. How do I go about doing that?

NCache is a distributed cache, so you can continue to use it.
There is good general documentation available, as well as very good getting-started material that walks you through all the required steps.
In essence, you install NCache on both servers and then reference both servers in your client configuration (%NCHOME%\config\client.ncconf).
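For illustration, the relevant client.ncconf entry might look something like the sketch below; the cache id demoCache and the server addresses are placeholders, not values from the question, and the exact attribute set varies by NCache version:

```xml
<!-- %NCHOME%\config\client.ncconf (sketch) -->
<configuration>
  <!-- List every cache server node so each application's client
       can reach the same clustered cache instance. -->
  <cache id="demoCache" load-balance="True">
    <server name="192.168.1.10"/>
    <server name="192.168.1.11"/>
  </cache>
</configuration>
```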

In cluster caches, a single logical cache instance is distributed over multiple server nodes, and because the cache process runs outside the application address space, multiple applications can share the cache and see exactly the same data, including every addition, update, and removal of cache content.
Local out-proc caches are limited to one server node, but since they also run outside the application address space, they too support sharing data between applications.
In fact, besides allowing multiple applications to share data, NCache provides a pub/sub infrastructure that lets multiple applications actually communicate with each other. This allows NCache to play a key part in setting up a fast and reliable microservices environment in which all the participating services send messages to each other through the NCache platform.
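As a rough sketch of both ideas, here is what shared access plus pub/sub can look like with the modern NCache client API. The cache id demoCache and topic name are assumptions, and the messaging API has varied across NCache versions, so treat this as illustrative:

```csharp
using System;
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;

class PubSubSketch
{
    static void Main()
    {
        // Connect to the shared clustered cache ("demoCache" is a placeholder id).
        ICache cache = CacheManager.GetCache("demoCache");

        // One service looks up (or creates) a topic and subscribes;
        // in reality this part would run in a different process.
        ITopic topic = cache.MessagingService.GetTopic("orderEvents")
                       ?? cache.MessagingService.CreateTopic("orderEvents");
        topic.CreateSubscription((sender, args) =>
            Console.WriteLine("received: " + args.Message.Payload));

        // Another service publishes; subscribers on all nodes receive it.
        topic.Publish(new Message("order 42 created"), DeliveryOption.All);

        Console.ReadLine(); // delivery is asynchronous
    }
}
```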
See the links below, where they have shared information about NCache topologies and getting started:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html
http://www.alachisoft.com/resources/videos/five-steps-getting-started.html

Related

Distributed caching with NHibernate ORM

I am trying to implement caching in my application.
We are using an Oracle database and an ASP.NET Web API to serve data to the UI.
API calls take too long, so we are thinking of implementing caching. Our code is deployed on two servers behind a load balancer.
How should caching be implemented?
What I am planning is this: there will be a new service API that stores all the data in memory. The UI will call our existing API, the request can hit either node, and that API will then fetch the data from the new (cache) API and serve it to the UI.
Is this architecture correct for distributed caching?
Can anyone share their experience or guidance on implementation?
You might want to check out NCache. Being a distributed caching solution, it provides first-class support for sharing cache data between multiple clients, because the cache process runs autonomously outside the address space of any one application.
In your case, every web server in your load-balanced web farm will be a client of NCache and have direct access to the cache servers. All the web servers, being clients of a central caching solution, will see the same cache data through the simple-to-use NCache APIs. Any modification through insert, update, or delete operations will be immediately visible to all the web servers.
NCache seamlessly handles, behind the scenes, all the work of storing and distributing the cache data among the multiple cache server nodes over which the cache instance is spread.
Furthermore, all caching operations are completely independent of the framework used to retrieve database content and work equally well with NHibernate, EF, EF Core and, of course, ADO.NET.
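A cache-aside sketch of that idea, assuming the modern NCache client API; ProductRepository, LoadFromDatabase, and the cache id demoCache are illustrative names, and the data-access call could be NHibernate, EF, or ADO.NET underneath:

```csharp
using System;
using Alachisoft.NCache.Client;

[Serializable] // NCache requires cached objects to be serializable
public class Product { public int Id; public string Name; }

public class ProductRepository
{
    // "demoCache" is a placeholder; use your configured cache id.
    private readonly ICache _cache = CacheManager.GetCache("demoCache");

    public Product GetProduct(int id)
    {
        string key = "product:" + id;

        // Cache-aside: try the shared cache first...
        Product product = _cache.Get<Product>(key);
        if (product != null)
            return product;

        // ...fall back to the database (NHibernate, EF, or plain ADO.NET;
        // NCache doesn't care how the data is loaded)...
        product = LoadFromDatabase(id);

        // ...and populate the cache so every web server in the farm sees it.
        _cache.Insert(key, product);
        return product;
    }

    // Hypothetical data-access call standing in for your real query.
    private Product LoadFromDatabase(int id)
    {
        return new Product { Id = id, Name = "example" };
    }
}
```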
You can find more information about how to integrate NCache into your web farm environment and much more by using the following link:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/ncache-architecture.html

Hazelcast data isolation ("Memory Regions")

We are building a multi-tenant application which has restrictions on the regions/countries where data may be persisted.
The application is based on a Microsoft .NET microservices architecture, but we have shared domains, although we have separate DBs at a very low level, say a separate DB for each city. We cannot persist the data of one country in another country's data center. Hazelcast will be used as the distributed cache. I could not find any direct way to configure data isolation, for example like "memory regions" in Apache Ignite. Does Hazelcast have "memory regions"?
I need to write the data behind from the cache to the respective database. Can I segregate a part/partition of the cache specific to a database instance?
Any help would be greatly appreciated. Thanks in advance.
I am not directly replying to your question. IMHO, from my understanding, when data is stored across different clusters/nodes there will still be a network call, even if you use key formats that keep related data within the same cluster/node.
Based on my experience, you could easily set up a MemoryCache (part of System.Runtime.Caching) to store the data on every node, and then use Redis Pub/Sub or Azure Service Bus as the backbone for the pub/sub.
In that case:
Any data that is updated in a cache is announced to all the other instances of the application via a Service Bus / Redis message, which is typically just the key.
Upon receipt of the key, each application evicts the entry from its internal cache and re-caches the data on the next DB access.
This method is quite common in multi-tenant applications, and it is also fail-safe and lightweight. The payloads/network transfers are small, and each AppDomain uses its own memory as a cache, which does support different regions via different MemoryCache instances (see the sketch below).
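A minimal sketch of that pattern, assuming StackExchange.Redis as the pub/sub backbone; the class name, channel name, and expiration policy are illustrative:

```csharp
using System;
using System.Runtime.Caching;
using StackExchange.Redis;

public class NodeLocalCache
{
    private readonly MemoryCache _local = MemoryCache.Default;
    private readonly ISubscriber _bus;

    public NodeLocalCache(ConnectionMultiplexer redis)
    {
        _bus = redis.GetSubscriber();
        // Every node subscribes; the message payload is just the cache key.
        _bus.Subscribe("cache-invalidate", (channel, key) => _local.Remove(key));
    }

    // Call after writing to the DB: broadcast the key so every node
    // (including this one) evicts it and re-caches on the next read.
    public void NotifyUpdated(string key)
    {
        _bus.Publish("cache-invalidate", key);
    }

    // Plain cache-aside read against the node-local MemoryCache.
    public object Get(string key, Func<object> loadFromDb)
    {
        object value = _local.Get(key);
        if (value == null)
        {
            value = loadFromDb();
            _local.Set(key, value, new CacheItemPolicy
            {
                SlidingExpiration = TimeSpan.FromMinutes(10)
            });
        }
        return value;
    }
}
```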
Hope this helps if no direct answer is available regarding Hazelcast.
Also, you may refer to this link for some details regarding Hazelcast.

What are the best practices for implementing a caching layer?

I'm going to use Redis as a cache service.
What are the best practices for accessing the caching service?
Through a service/API or in-memory component?
I'm not sure I want to have access to the DB from all the services.
Thanks
All of this depends on the topology and/or architecture of your system. You wouldn't run the cache as a service on a separate computer if your application resided completely on one computer.
But suppose you have a distributed app.
In that case it makes sense to run caching as a separate service on a separate node. Just as in OOP, you can simply encapsulate the data behind the cache service: the other services then depend on your cache service, not directly on Redis, so you can later decide to swap Redis for something else. Another advantage of a caching service is that it can keep hot data in its own memory, depending on throughput, and fetch from Redis only from time to time. Note that you can simply buy a server with a lot of RAM, e.g. 192 GB, because a caching service needs memory more than anything else.
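A sketch of that encapsulation, assuming StackExchange.Redis under the hood; ICacheService and RedisCacheService are illustrative names:

```csharp
using System;
using StackExchange.Redis;

// Other services depend on this abstraction, not on Redis directly,
// so the underlying store can be swapped out later.
public interface ICacheService
{
    string Get(string key);
    void Set(string key, string value, TimeSpan ttl);
}

public class RedisCacheService : ICacheService
{
    private readonly IDatabase _db;

    public RedisCacheService(ConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public string Get(string key)
    {
        return _db.StringGet(key); // RedisValue converts to string; null if missing
    }

    public void Set(string key, string value, TimeSpan ttl)
    {
        _db.StringSet(key, value, ttl);
    }
}
```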

Windows Azure in-role caching vs shared cache

We have a website in Azure and we want to cache the content on the website. The app that will update the content will be outside Azure. We got this scenario working with Shared Caching; however, Shared Caching is considered a legacy feature, so we wanted to look at alternative solutions, including in-role caching. The cached content is very small (it should not exceed 1 MB) and will be consumed by C# code.
We could use co-located cache within the web roles or dedicated cache using a worker role.
The questions we had using in-role cache are:
How can the co-located cache be updated from an external app?
If there were a way to update the co-located cache from an external app, could cache notifications be used to invalidate all co-located cache nodes?
We use extra-small web role instances now; do we need to upgrade to small/medium instances?
Is dedicated caching better for our scenario?
Thanks in advance.
After doing a bunch of research and guided by Simon's responses in the SO thread he already mentioned, here are my responses:
Q: How can the co-located cache be updated from an external app?
A: I would expose a public endpoint on your web role that clears the cache, and call that endpoint from your external apps (the endpoint can be a service, a REST URL, etc.). Alternatively, put a message on a queue and have your web roles monitor that queue and remove the item from the cache when they receive a message. Either way, you are implementing your own notification mechanism.
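A sketch of the first option, assuming the Windows Azure Caching (in-role) DataCache API and an ASP.NET Web API endpoint; the controller, route, and cache name are hypothetical:

```csharp
using System.Web.Http;
using Microsoft.ApplicationServer.Caching;

// Hypothetical controller hosted in the web role; the external app
// calls DELETE /api/cache/{key} after it updates the underlying content.
public class CacheController : ApiController
{
    // "default" is the in-role cache name configured for the role.
    private static readonly DataCache Cache = new DataCache("default");

    public void Delete(string key)
    {
        // The co-located cache is one cache spanning all role instances,
        // so removing via any instance evicts the item for all of them.
        Cache.Remove(key);
    }
}
```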
Q: If there were a way to update the co-located cache from an external app, could cache notifications be used to invalidate all co-located cache nodes?
A: I don't believe so. The endpoints to co-located cache are strictly internal.
Q: We use extra-small web role instances now - do we need to upgrade to small/medium instances?
A: Yes. I believe the co-located cache is supported on Small instances and higher. You will need to try it out to see how much RAM you get, how much is left over, and whether that is of any use to your main application.
Q: Is dedicated caching better for our scenario?
A: Dedicated vs. co-located cache is really about the load. Do you have enough load on your cache and on your app servers to justify moving the cache out into a separate role? Check out this article for Microsoft's recommendation:
http://msdn.microsoft.com/en-us/library/windowsazure/hh914129.aspx

Can AppFabric Cache be used like memcached with independent nodes?

With Memcached, it is my understanding that each of the cache servers doesn't need to know diddly about the other servers. With AppFabric Cache on the other hand, the shared configuration links the servers (and consequently becomes a single point of failure).
Is it possible to use AppFabric cache servers independently? In other words, can the individual clients choose where to store their key/values based on the available cache servers, and would that decision be the same for all clients (the way it is with memcached)?
NOTE: I do realize that more advanced features such as tagging would be broken without all the servers knowing about each other.
Are you viewing the shared configuration as a single point of failure? If you are using SQL Server as your configuration repository, then this shouldn't be an issue with a redundant SQL Server setup.
This approach would obviously lose you all of the benefits of using a distributed cache; however, if you really want to do this, then simply don't use a shared configuration. When you configure a new AppFabric node, create a new configuration file or database; choosing an existing one basically says "add this new node to the existing cache cluster".
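For illustration, a memcached-style client could then pick a node per key itself, treating each independently configured AppFabric host as its own one-node "cluster". The host names, port, and hashing scheme below are assumptions, and every client must use the same host list and hash to agree on placement:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public class IndependentNodeClient
{
    private readonly DataCache[] _nodes;

    public IndependentNodeClient(params string[] hosts)
    {
        _nodes = new DataCache[hosts.Length];
        for (int i = 0; i < hosts.Length; i++)
        {
            // Point each client directly at one standalone host
            // (22233 is the default AppFabric cache port).
            var config = new DataCacheFactoryConfiguration
            {
                Servers = new[] { new DataCacheServerEndpoint(hosts[i], 22233) }
            };
            _nodes[i] = new DataCacheFactory(config).GetDefaultCache();
        }
    }

    // In production use a stable hash (string.GetHashCode can vary across
    // processes/platforms); this is only to illustrate the idea.
    private DataCache NodeFor(string key)
    {
        return _nodes[Math.Abs(key.GetHashCode()) % _nodes.Length];
    }

    public void Put(string key, object value) { NodeFor(key).Put(key, value); }
    public object Get(string key) { return NodeFor(key).Get(key); }
}
```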
