Windows Azure in-role caching vs shared cache - caching

We have a website in Azure and we want to cache the content on the website. The app that updates the content will be outside Azure. We got this scenario working with Shared Cache. Shared caching, however, is considered a legacy feature, so we wanted to look at alternative solutions, including in-role caching. The cached content is very small (it should not exceed 1 MB) and will be consumed by C# code.
We could use co-located cache within the web roles or dedicated cache using a worker role.
The questions we have about in-role cache are:
How can the co-located cache be updated from an external app?
If there was a way to update the co-located cache from an external app, cache notifications could be used to invalidate all co-located cache nodes, correct?
We use extra-small web role instances now - do we need to upgrade to small/medium instances?
Is dedicated caching better for our scenario?
Thanks in advance.

After doing a bunch of research and guided by Simon's responses in the SO thread he already mentioned, here are my responses:
Q: How can the co-located cache be updated from an external app?
A: I would expose a public endpoint on your web role that clears the cache, and call that endpoint from your external apps (the endpoint can be a service, a REST URL, etc.). Alternatively, put a message onto a queue and have your web roles monitor that queue and clear the item from cache when they receive a message. Either way, you're implementing your own notification mechanism.
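For illustration, here is a minimal sketch of the endpoint approach, assuming ASP.NET Web API 2 and the in-role caching client (Microsoft.ApplicationServer.Caching); the controller, route and cache key are hypothetical, and you would want to add authentication before exposing anything like this publicly:

```csharp
using System.Web.Http;
using Microsoft.ApplicationServer.Caching;

// Hypothetical controller hosted in the web role so an external app can
// evict a cached item over HTTP (POST api/cache/{id}).
public class CacheController : ApiController
{
    // DataCacheFactory is expensive to create, so keep one per process.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public IHttpActionResult Post(string id)
    {
        DataCache cache = Factory.GetDefaultCache();
        bool removed = cache.Remove(id);   // clears the item from the co-located cache
        return Ok(removed);
    }
}
```

If each instance also keeps its own local copy of the item (for example a client-side/local cache layer on top of the distributed cache), the queue variant is the more robust choice, since a single HTTP call only reaches whichever instance the load balancer picks.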
Q: If there was a way to update the co-located cache from an external app, cache notifications could be used to invalidate all co-located cache nodes, correct?
A: I don't believe so. The endpoints to co-located cache are strictly internal.
Q: We use extra-small web role instances now - do we need to upgrade to small/medium instances?
A: Yes. I believe co-located cache is supported on Small instances and higher. You will need to try this out to see how much RAM the cache gets vs. how much is left over, and whether that is of any use to your main application.
Q: Is dedicated caching better for our scenario?
A: Dedicated vs. co-located cache is really about the load. Do you have enough load on your cache and on your app servers to justify moving the cache out into a separate role? Check out this article for Microsoft's recommendation:
http://msdn.microsoft.com/en-us/library/windowsazure/hh914129.aspx

Related

Setting Up NCache (Distributed?/Shared)

I have two servers where I will be deploying the same application. Both servers will handle work from a common Web API; the work that is handed out will be transformed, run through some logic, and loaded into the database. I want to cache the data that gets loaded, updated or deleted in the database, so that when the same data is referenced I can get it from the cache. I am using NCache at the moment and it is working perfectly fine within one application. I am trying to have a kind of shared cache that both of my applications have access to. How do I go about doing it?
NCache is a distributed cache so you can continue to use that.
There is good general documentation available and very good getting started material that walks you through all the steps required.
In essence, you install NCache on both servers and then reference both servers in your client configuration (%NCHOME%\config\client.ncconf).
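As a rough illustration, once both servers are listed in client.ncconf, either application can open the same named cache and read what the other wrote. This sketch assumes the NCache 5.x Alachisoft.NCache.Client API and a cache named "demoCache" (both the cache name and key are placeholders; older NCache versions use a slightly different initialization call):

```csharp
using Alachisoft.NCache.Client;

public static class SharedCacheDemo
{
    public static void Run()
    {
        // Connects to the clustered cache defined in client.ncconf.
        // "demoCache" is a placeholder for your configured cache name.
        ICache cache = CacheManager.GetCache("demoCache");

        // Application A writes an item...
        cache.Insert("customer:42", "Contoso");

        // ...and application B, pointing at the same named cache,
        // reads exactly the same value.
        string name = cache.Get<string>("customer:42");
    }
}
```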
In clustered caches, a single logical cache instance is distributed over multiple server nodes, and because the cache process runs outside the application address space, multiple applications can share the cache and see exactly the same data, including every addition, removal and update of cache content.
Local out-proc caches are limited to one server node, but since they also run outside the application address space, they too support sharing data between applications.
In fact, besides allowing multiple applications to share data, NCache provides a pub/sub infrastructure that lets multiple applications actually communicate with each other. This allows NCache to play a key part in setting up a fast and reliable microservices environment in which all the participating services send messages to each other through the NCache platform.
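And a hedged sketch of the pub/sub side, again assuming the NCache 5.x messaging API (the topic name and payload are made up; check the exact namespaces and delegate signature against the NCache documentation):

```csharp
using Alachisoft.NCache.Client;
using Alachisoft.NCache.Runtime.Caching;   // Message, DeliveryOption, MessageEventArgs

public static class PubSubDemo
{
    public static void Run()
    {
        ICache cache = CacheManager.GetCache("demoCache");

        // Get (or create) a topic that both applications agree on.
        ITopic topic = cache.MessagingService.GetTopic("orders")
                       ?? cache.MessagingService.CreateTopic("orders");

        // One application subscribes...
        ITopicSubscription subscription = topic.CreateSubscription(
            (sender, args) => System.Console.WriteLine(args.Message.Payload));

        // ...and another publishes a message that every subscriber receives.
        topic.Publish(new Message("order-created:42"), DeliveryOption.All);
    }
}
```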
See the links below, where they have shared information about NCache topologies and getting started:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/cache-topologies.html
http://www.alachisoft.com/resources/videos/five-steps-getting-started.html

Distributed caching with NHibernate ORM

I am trying to implement caching in my application.
We are using an Oracle database and an ASP.NET Web API that serves data to the UI.
The API calls take too long, so we are thinking of implementing caching. Our code is deployed on two servers behind a load balancer.
How should caching be implemented?
What I am planning to implement is this:
There would be a cache service API that stores all the data in memory. The UI will call our existing API, the hit can go to either node, and that API will then get the data from the new (cache) API and serve it to the UI.
Is this architecture correct for distributed caching?
Can anyone share their experience or guidance on implementation?
You might want to check out NCache. Being a distributed caching solution, it provides first-class support for sharing cache data between multiple clients, because the cache process runs autonomously outside any one application's address space.
In your case, every web server in your load-balanced web farm will be a client of NCache and have direct access to the cache servers. All the web servers, being clients of a central caching solution, will see the same cache data through the simple-to-use NCache APIs. Any modification through insert, update or delete operations will be immediately visible to all the web servers.
The intelligence driving NCache allows for a seamless behind-the-scenes handling of all the tasks of storing and distributing the cache data among multiple cache server nodes on which the cache instance is distributed.
Furthermore, all the caching operations are completely independent of the framework used for database content retrieval and can be applied equally well with NHibernate, EF, EF Core and, of course, ADO.NET.
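For the NHibernate piece specifically, most of the work is configuration: you switch on NHibernate's second-level cache and point it at the cache provider that ships with the NCache NHibernate integration. A rough sketch follows; the provider type string is a placeholder to be taken from the NCache documentation, and cacheable entities still need their usual <cache usage="read-write"/> mapping:

```csharp
using NHibernate.Cfg;

var cfg = new Configuration();
cfg.Configure();   // loads hibernate.cfg.xml / web.config settings as usual

// Turn on the second-level cache and the query cache.
cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");

// Placeholder: substitute the ICacheProvider type from the NCache
// NHibernate integration assembly (see the NCache docs for the exact name).
cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
    "YourNCacheNHibernateProvider, YourNCacheNHibernateAssembly");

var sessionFactory = cfg.BuildSessionFactory();
```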
You can find more information about how to integrate NCache into your web farm environment and much more by using the following link:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/ncache-architecture.html

how to update local memory cache in all server instances

I have a web server cluster that contains many running web server instances. Each instance caches some configurations in its local memory; the original configurations are stored in a database.
These configurations are used for every request, so the cache is necessary for performance reasons.
I want to provide an admin page in which the administrator can change the configurations. How do I update the cache in every server instance?
Now I have two solutions for this:
1. Set an expiry time on the cache.
2. When the administrator updates the configuration, notify each instance via some pub/sub mechanism (e.g. using Redis).
For solution 1, the drawback is that changes do not take effect immediately.
For solution 2, I'm wondering whether the pub/sub will have an impact on the performance of the web server.
Which one is better? Or is there a common solution to this problem?
Another drawback of option 1 is that you'll periodically hit your database unnecessarily.
If you're already using Redis then option 2 is a good solution. I've used it successfully and can't imagine how there could be a performance impact just because you're using pub/sub.
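For what it's worth, here is a minimal sketch of option 2 using StackExchange.Redis; the channel name, connection string and the use of MemoryCache for the per-instance configuration cache are all assumptions for illustration:

```csharp
using System.Runtime.Caching;
using StackExchange.Redis;

public static class ConfigCacheInvalidation
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");   // placeholder connection string

    // Called once per web server instance at startup.
    public static void Subscribe()
    {
        ISubscriber sub = Redis.GetSubscriber();
        sub.Subscribe("config-invalidate", (channel, key) =>
        {
            // Drop only the stale entry from this instance's in-memory cache;
            // the next request re-reads it from the database.
            MemoryCache.Default.Remove((string)key);
        });
    }

    // Called by the admin page after saving the new configuration to the database.
    public static void Publish(string key)
    {
        Redis.GetSubscriber().Publish("config-invalidate", key);
    }
}
```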
Another option is to create a cache invalidation URL on each website, e.g. /admin/cache-reset/, and have your administration tool call the cache-reset URL on each individual server. The drawback of this solution is that you need to maintain a list of servers. If you're not already using Redis it could just be the simple/practical/low-tech solution that you're looking for.

Performance difference between Azure Redis Cache and in-role cache for output caching

We are moving an ASP.NET site to an Azure Web Role and Azure SQL Database. The site uses output caching and the normal Cache[xxx] (i.e. HttpRuntime.Cache). These are currently stored the classic way, in the web role instance's memory.
The low-hanging fruit is to first start using a distributed cache for output caching. I can use in-role cache, either co-located or with a dedicated cache role, or Redis Cache. Both have ready-made output cache providers.
Are there any performance differences between the two (or three, counting co-located/dedicated) cache methods?
One thing to consider: will getting the page from Redis on every page load on every server be faster or slower than composing the page from scratch on every server every 120 seconds, but in between just getting it from local memory?
Which will scale better when we want to start caching our own data (i.e. POCOs) in a distributed cache instead of HttpRuntime.Cache?
-Mathias
Answering each of your questions individually:
Are there any performance differences between the two (or three, counting co-located/dedicated) cache methods?
A co-located caching solution is definitely faster than a dedicated cache server, since in the co-located/in-proc case the request is handled locally within the process, whereas a dedicated cache solution involves getting the data over the network. However, since the data is in memory on the cache server, getting it will still be faster than getting it from the DB.
One thing to consider: will getting the page from Redis on every page load on every server be faster or slower than composing the page from scratch on every server every 120 seconds, but in between just getting it from local memory?
It will depend on the number of objects on the page, i.e. the time taken to compose the page from scratch. Getting it from the cache does involve a network round trip, but that is mostly a fraction of a millisecond.
Which will scale better when we want to start caching our own data (i.e. POCOs) in a distributed cache instead of HttpRuntime.Cache?
Since HttpRuntime.Cache is in-process caching, it is limited to a single process's memory and is therefore not scalable. A distributed cache, on the other hand, is a scalable solution where you can always add more servers to increase cache space and throughput. The out-of-process nature of a distributed cache also makes data cached by one application process available to any other process.
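As a hedged illustration of the POCO case with Azure Redis Cache and StackExchange.Redis: every web role instance reads and writes the same serialized entry, so nothing has to be kept in sync per instance (the type, key format and connection string below are made up):

```csharp
using System;
using Newtonsoft.Json;
using StackExchange.Redis;

public class Product            // example POCO, standing in for your own types
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductCache
{
    private static readonly ConnectionMultiplexer Redis = ConnectionMultiplexer.Connect(
        "yourcache.redis.cache.windows.net:6380,ssl=true,password=...");   // placeholder

    public static void Put(Product p)
    {
        IDatabase db = Redis.GetDatabase();
        // Serialize the POCO and give the entry a 2-minute expiry.
        db.StringSet("product:" + p.Id, JsonConvert.SerializeObject(p),
                     expiry: TimeSpan.FromMinutes(2));
    }

    public static Product Get(int id)
    {
        RedisValue value = Redis.GetDatabase().StringGet("product:" + id);
        return value.HasValue ? JsonConvert.DeserializeObject<Product>(value) : null;
    }
}
```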
You can also look into NCache for Azure, a native .NET distributed caching solution.
The following blog posts by Iqbal Khan will help you better understand the need for a distributed cache in ASP.NET applications:
Improve ASP.NET Performance in Microsoft Azure with Output Cache
How to use a Distributed Cache for ASP.NET Output Cache
I hope this helps :-)
-Sameer

How to limit memory/item size in a named cache in windows app fabric cache?

I am relatively new to .NET / Windows technologies and I need to use AppFabric Cache for a project.
After spending some time with it, I feel that one of the basic features of a caching framework, namely limiting the size of a cache, is absent from the AppFabric caching framework. I know that popular Java caching frameworks like Ehcache and Hazelcast have this functionality through XML configuration elements (the maxElementsInMemory attribute in Ehcache, the max-size attribute in Hazelcast).
I know that this question has been asked previously in a similar form:
How to set Windows Server AppFabric named cache size?
However, I could not find a conclusive proposal for limiting cache size on a per-named-cache basis in AppFabric.
I need to expose caching APIs to several application development groups. Each group is supposed to be assigned its own named cache, but I need a mechanism to prevent cache abuse. Each cache user should live within their own limited named-cache space, i.e. they should not consume more memory than the amount reserved for them.
I do not want to write ugly custom code in my API to enforce this, and I believe this is a basic requirement that a caching framework should support.
Any proposal for how to achieve this in AppFabric would be highly appreciated.
Thanks
