Background
My application integrates several small applications built with different technologies, such as:
Angular JS 1.5
ASP.Net MVC
Angular 5
My app also uses AWS as its cloud provider.
Problem
I need to implement a caching mechanism in my app. I store some values in an S3 bucket and pull them via API calls, and I want to keep those values in a cache. Since the app is implemented in multiple technologies (especially Angular and ASP.NET MVC), is there a caching mechanism that can be used in common?
Observations
I have done some work on this and observed that the following caching options are available:
.NET MVC - in-memory caching
Angular - client-side in-memory caching
As AWS is my cloud provider, it offers ElastiCache as a web service, which supports Memcached and Redis. I am not clear whether this behaves like a normal in-memory cache (as in ASP.NET Core) or whether it refers to a separate cache store and retrieves details from there (causing a network round trip).
Question
Can anyone recommend the best caching technique for my app that works for both .NET MVC and Angular?
This is a bit tricky (I am assuming that you are using multiple Memcached servers). With Memcached, a lot depends on the client implementation: the client decides which server a particular key will be stored on, and the servers are unaware of each other's existence.
Since you are using different languages, you will be using different clients, and each client implements its own algorithm to decide which server a key is placed on. So there will be cases where your Angular client stores key "K1" on server "S1" while the .NET client stores the same key on server "S2".
This adds another problem of invalidating these keys, since a key would need to be invalidated on both servers.
There is one more problem: if keys are shared, you will have to store all objects in a common format such as JSON, so that a value stored on a Memcached server by one language can be read by all the other languages.
I think the best approach is to set a short enough expiry on Memcached keys (if that is feasible) and to store keys with a different prefix or suffix per client type. So the .NET client would store key K1 as "K1-net" and the Angular one as "K1-ang".
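The per-client suffix plus JSON-encoded values can be sketched like this. This is a minimal in-process stand-in, not a real Memcached client: `FakeMemcache` and `client_key` are illustrative names, and a real client (e.g. pymemcache) would provide the set/get/expiry semantics over the network.

```python
import json
import time

# Hypothetical in-process stand-in for a Memcached client, used only to
# illustrate the keying and serialization convention.
class FakeMemcache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl):
        # Values are stored as JSON text so any language's client can read them.
        self._store[key] = (json.dumps(value), time.time() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:   # expired entries behave like a miss
            del self._store[key]
            return None
        return json.loads(value)

def client_key(base_key, client_type):
    """Suffix the key per client type, e.g. 'K1-net' vs 'K1-ang'."""
    return f"{base_key}-{client_type}"

cache = FakeMemcache()
cache.set(client_key("K1", "net"), {"price": 10}, ttl=60)
cache.set(client_key("K1", "ang"), {"price": 10}, ttl=60)

assert cache.get("K1-net") == {"price": 10}
assert cache.get("K1-ang") == {"price": 10}
```

With separate suffixes the two clients never collide on a server, at the cost of caching the same data twice; the short TTL bounds how long the two copies can disagree.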
I would also explore Redis, which might help (not sure).
Related
I'm building a microservices architecture that should deal with:
Direct database access
Call to external legacy services
I can think of 2 caching strategies, but I can't figure out which is best, considering that I will not have control over what other people do across the layers.
Caching at application level (#Cacheable)
I only provide a caching feature that everyone can use, enforcing spring.cache.redis.key-prefix to be the microservice name to limit conflicting keys.
PRO: most flexible way
CONS:
No control over cache except for maximum space: people would just create new cache entries
No control over cache invalidation: we don't know what kind of data is actually stored so if, for example, a legacy system needs to be reloaded we cannot empty some cache keys
Possible redundancy: since caching is at the application layer, it could happen that microservices store roughly the same data. While I can have control over the database (one microservice should own its own DB, or at least a subset of tables), I can't guarantee the same for the legacy SOAP layer
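One way to soften the invalidation drawback of strategy 1 is to make the enforced service prefix part of every key, so that at least a whole service's entries can be dropped together. A minimal sketch, using a plain dict as a stand-in for Redis (with real Redis you would SCAN for the key pattern and DEL the matches):

```python
# A dict stands in for Redis; the names put/invalidate_service are illustrative.
cache = {}

def put(service, key, value):
    # Every entry carries its owning service as a prefix, e.g. "billing:invoice-42".
    cache[f"{service}:{key}"] = value

def invalidate_service(service):
    """Drop every entry owned by one microservice, e.g. after a legacy reload."""
    prefix = f"{service}:"
    for k in [k for k in cache if k.startswith(prefix)]:
        del cache[k]

put("billing", "invoice-42", {"total": 99})
put("orders", "order-7", {"qty": 3})
invalidate_service("billing")

assert "billing:invoice-42" not in cache
assert cache["orders:order-7"] == {"qty": 3}
```

This does not tell you *which* keys hold data from a given legacy system, but it gives you a coarse eviction handle per service without inspecting the cached values.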
Caching at service layer (connectors)
I don't provide a caching feature, but I provide custom SOAP connectors that will or will not cache responses based on a configuration that I provide (it could also be a blacklist/whitelist)
PROS:
cache is controlled
easy to invalidate
CONS:
need to update connectors each time a cache policy changes
dependency between development and architecture
edit: I need suggestion about the theoretical approach, not about a specific technology.
I suppose you should build different microservices (APIs) to deal with different sets of responsibilities. For example, you can have one microservice that deals with the legacy services and another that deals with the database. For these two microservices to communicate, you can use a message broker architecture such as Apache Kafka (Hazelcast being cost effective, or RabbitMQ).
Communication between these two microservices can be event driven as well.
Once you decide this, then you can finalize where to place your cache.
You will need to place the cache at the application level, not the service level, if there is a UI where you are showing these values.
I am trying to implement caching in my application.
We are using an Oracle database and an ASP.NET Web API to serve data to the UI.
API calls take too long, so we are thinking of implementing caching. Our code is deployed on 2 servers behind a load balancer.
How should caching be implemented?
What I am planning to implement is:
A cache service API on one of the servers that stores all data in memory. The UI will call our existing API; the request can hit either node, and that API will then get data from the new (cache) API and serve it to the UI.
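The proposed flow can be sketched as a cache-aside pattern against one shared cache service. All names here are illustrative (`SharedCacheService`, `load_from_oracle` stands in for the real data access); the point is that both load-balanced nodes consult the same cache rather than per-node memory:

```python
class SharedCacheService:
    """Central cache that both web servers talk to, so hits on either node agree."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

def handle_request(key, cache, load_from_oracle):
    value = cache.get(key)
    if value is None:                      # miss: fall back to the database
        value = load_from_oracle(key)
        cache.put(key, value)
    return value

calls = []
def load_from_oracle(key):
    calls.append(key)                      # count real database hits
    return f"row-for-{key}"

cache = SharedCacheService()
# Same request arriving at node 1, then node 2, behind the load balancer:
a = handle_request("customer:1", cache, load_from_oracle)
b = handle_request("customer:1", cache, load_from_oracle)

assert a == b == "row-for-customer:1"
assert calls == ["customer:1"]            # second request served from cache
```

Note that if the cache service lives on only one of the two servers, it becomes a single point of failure; a distributed cache (Redis, Memcached, NCache) spreads the data over nodes instead.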
Is this architecture correct for distributed caching?
Can anyone share their experience or guidance on the implementation?
You might want to check NCache. Being a distributed caching solution, it provides first-class support for sharing cache data between multiple clients, because the cache process runs autonomously outside the address space of any one application.
For your case, every web server in your load-balanced web farm will be a client of NCache and have direct access to the cache servers. All the web servers, being clients of a central caching solution, see the same cache data through simple-to-use NCache APIs. Any modification through insert, update, or delete cache operations is immediately visible to all the web servers.
Behind the scenes, NCache handles all the tasks of storing and distributing the cache data among the multiple cache server nodes across which the cache instance is distributed.
Furthermore, all the caching operations are completely independent of the framework used for database content retrieval and can be applied equally well with NHibernate, EF, EF Core and, of course, ADO.NET.
You can find more information about how to integrate NCache into your web farm environment and much more by using the following link:
http://www.alachisoft.com/resources/docs/ncache/admin-guide/ncache-architecture.html
I'm working on a web app that receives data from an API provider. Now I need a way to cache the data to save from calling the provider again for the same data.
Then I stumbled on Redis which seems to serve my purpose but I'm not 100% clear about the concept of caching using Redis. I've checked their documentation but I don't really follow what they have to say.
Let's suppose I have just deployed my website to production and I have my first visitor, A. Since A is the first person to visit, my website will request a new set of data from the API provider, and after a couple of seconds the page will load with the data that A wanted.
My website caches this data to Redis to serve future visitors that will hit the same page.
Now I have my second visitor B.
B hits the same page url as A did and because my website has this data stored in the cache, B is served from the cache and will experience much faster loading time than what A has experienced.
Is my understanding in line with the concept of web caching?
I always thought of caching on a per-user basis, so that my interaction on a website has no influence whatsoever on other people, but Redis seems to work on a per-application basis.
Yes, your understanding of web caching is spot on, but it can get much more complex, depending on your use case. Redis is essentially a key-value store. So, if you want application-level caching, your theoretical key/value pair would look like this:
key: /path/to/my/page
value: <html><...whatever...></html>
If you want user-level caching, your theoretical key would just change slightly:
key: visitorA|/path/to/my/page
value: <html><...whatever...></html>
Make sense? Essentially, there would be a tag in the key to identify the user (though it would generally be a hash or similar, not a plain-text string).
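The two key shapes above can be sketched like this. The hashing of the visitor id is illustrative (any stable digest works); a dict stands in for Redis SET/GET:

```python
import hashlib

def app_cache_key(path):
    """Application-level key: every visitor shares one entry per page."""
    return path

def user_cache_key(visitor_id, path):
    """User-level key: the visitor id (hashed, not plain text) scopes the entry."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()[:12]
    return f"{digest}|{path}"

cache = {}  # a dict standing in for Redis

# Application-level: visitor B reuses what visitor A's request cached.
cache[app_cache_key("/path/to/my/page")] = "<html>...</html>"
assert cache["/path/to/my/page"] == "<html>...</html>"

# User-level: the same path yields a distinct entry per visitor.
assert user_cache_key("visitorA", "/p") != user_cache_key("visitorB", "/p")
```

So whether caching is "per user" or "per application" is purely a property of how you build the key, not of Redis itself.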
There are Redis client libraries written for different web-development frameworks and content-management systems that define how they handle caching (i.e. user-specific or application-specific). If you are writing a custom web app, then you can choose application-level caching or user-level caching and do whatever else you want with caching.
If I understand correctly, the Play Framework uses cookies to store the whole session, while PHP just stores a session ID in a cookie and keeps the real session itself on the server side.
The Play Framework promotes good horizontal scalability with its approach. But I do not see the advantage, if I use any other framework and save my session into a database, for example with Symfony and Redis.
So how is the Play Framework better (for some use cases)?
The initial idea behind Play's architecture is that the designers wanted it to be stateless, i.e. no data is maintained between requests on the server side; this is why it does not follow the servlet spec. This opens up flexibility with things like scalability, as you have mentioned, which is in itself a big advantage if your application is large enough that it needs to scale across more than a single machine: managing server-side session data across a cluster is a pain.
But of course, anything other than a trivial application will need to maintain some session data, and as you have mentioned yourself, you would usually do this with a cache or DB. The cookie session Play uses is restricted to about 4 KB, so it is only intended for a relatively small amount of data.
The benefits of a stateless architecture are manifold and are what Play's architecture is designed to exploit.
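The cookie-session idea can be sketched as follows: the whole session is serialized, signed with a server-side secret so the client cannot tamper with it, and kept in the cookie, which is why the size cap matters. This is an illustrative sketch, not Play's actual implementation; `SECRET` and the 4096-byte limit are assumptions for the example:

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"     # assumed secret; never sent to the client
MAX_COOKIE_BYTES = 4096            # roughly the browser cookie size limit

def encode_session(data):
    payload = json.dumps(data, separators=(",", ":"))
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    cookie = f"{sig}|{payload}"
    if len(cookie.encode()) > MAX_COOKIE_BYTES:
        raise ValueError("session too large for a cookie")
    return cookie

def decode_session(cookie):
    sig, payload = cookie.split("|", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # reject tampered cookies
        raise ValueError("bad signature")
    return json.loads(payload)

cookie = encode_session({"user": "alice"})
assert decode_session(cookie) == {"user": "alice"}
```

Because any server holding the secret can verify the cookie, no request needs to land on the same machine twice, which is exactly the horizontal-scaling property being discussed.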
A bit dated, but the relevance still applies in this article.
I'm writing a web application using ASP.NET MVC 3. I want to use the MemoryCache object, but I'm worried about it causing issues with load-balanced web servers. When I google it, it looks like that problem is solved on the server, e.g. using AppFabric. If a company has load-balanced servers, is it on them to make sure they have AppFabric or something similar running? Or is there anything I can or should do as a developer about this?
First of all, for ASP.NET you should look at the ASP.NET Cache instead of MemoryCache. MemoryCache is a generic caching API that was introduced in .NET 4.0 to provide an equivalent of the ASP.NET Cache in non-web applications.
You're correct to say that AppFabric resolves the issue of multiple servers having their own instances of cached data, in that it provides a single logical cache accessible from all your web servers. Before you leap on it as the solution to your problem, there's a couple of things to consider:
It does not ship as part of Windows Server - it is, as you say, on you to install it on your servers if you want to use it. When AppFabric was released, there was a suggestion that it would ship as part of the next release of Windows Server, but I haven't seen anything about Windows Server 2012 that confirms that to be the case.
You need extra servers for it, or at least you're advised to have them. Microsoft's recommendation for AppFabric is that you run it on dedicated servers. Which means that whilst AppFabric itself is a free download, you may be incurring additional Windows Server licence costs. Speaking of which...
You might need Enterprise Edition licences. If you want to use the High Availability features of AppFabric, you can only do this with servers running Enterprise Edition, which is a more expensive licence than Standard Edition.
You might not need it after all. Some of this will depend on your application and why you want to use a shared caching layer. If your concern is that caches on multiple servers could get out of sync with the database (or indeed each other), some judicious use of SqlCacheDependency objects might get you past the issue.
This CodeProject article Implementing Local MemoryCache Invalidation with Redis suggests an approach for handling the scenario you describe.
You didn't mention the flavor of load balancing that you are using: "sticky" or "stateless". By far the easiest solution is to use sticky sessions.
If you want to use local memory caches and stateless load balancing, you can end up with race conditions when the cross-server invalidation messages arrive late. This can be particularly problematic if you use the Post-Redirect-Get pattern so common in ASP.NET MVC. It can be overcome by using cookies to supplement the cache invalidation broadcasts. I detail this in a blog post here.
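The cookie-supplement idea can be sketched like this: each cache entry carries a version, a write stamps the new version into a cookie on the redirect, and a read refuses a local entry older than the cookie's version, even if the invalidation broadcast has not yet reached that server. All names and the version scheme here are illustrative, not the blog post's exact design:

```python
# Local in-memory cache on the server that receives the GET after the redirect;
# the invalidation broadcast for this key has not arrived yet.
local_cache = {"profile:1": {"version": 1, "value": "old name"}}
database = {"profile:1": {"version": 2, "value": "new name"}}   # just updated

def read_profile(key, cookie_version):
    entry = local_cache.get(key)
    if entry is None or entry["version"] < cookie_version:
        entry = database[key]              # stale or missing: reload from source
        local_cache[key] = entry
    return entry["value"]

# The POST updated the row to version 2 and set cookie_version=2 on the redirect,
# so the immediately following GET bypasses the stale local entry.
assert read_profile("profile:1", cookie_version=2) == "new name"
assert local_cache["profile:1"]["version"] == 2
```

The cookie only protects the user who performed the write; other users on that server still rely on the broadcast, which is usually acceptable since they never saw the new value anyway.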