I have a web server cluster that contains many running web server instances. Each instance caches some configuration in its local memory; the original configuration is stored in a database.
This configuration is used on every request, so the cache seems necessary for performance reasons.
I want to provide an admin page on which the administrator can change the configuration. How do I update the cache in every server instance?
Right now I have two solutions for this:
1. Set an expiry time on the cache.
2. When the administrator updates the configuration, notify each instance via some pub/sub mechanism (e.g. Redis).
For solution 1, the drawback is that changes do not take effect immediately.
For solution 2, I'm wondering whether the pub/sub subscription will have an impact on the performance of the web server.
Which one is better? Or is there a common solution to this problem?
Another drawback of option 1 is that you'll periodically hit your database unnecessarily.
If you're already using Redis then option 2 is a good solution. I've used it successfully, and I can't see how merely subscribing to a pub/sub channel would have a measurable performance impact.
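For what it's worth, a minimal sketch of option 2 in Python with redis-py might look like this (the channel name "config-updates" and reload_config() are my own placeholders, not anything from the question):

```python
# Each web server instance subscribes on a background thread, so
# request handling is unaffected; the admin page publishes once per change.
import threading
import redis

r = redis.Redis(host="localhost", port=6379)

def reload_config():
    # Placeholder: re-read the configuration rows from the database
    # and swap them into the process-local cache.
    pass

def listen_for_config_updates():
    pubsub = r.pubsub()
    pubsub.subscribe("config-updates")
    for message in pubsub.listen():
        if message["type"] == "message":
            reload_config()

threading.Thread(target=listen_for_config_updates, daemon=True).start()

# The admin page, after saving to the database:
# r.publish("config-updates", "changed")
```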
Another option is to create a cache-invalidation URL on each website, e.g. /admin/cache-reset/, and have your administration tool call the cache-reset URL on each individual server. The drawback of this solution is that you need to maintain a list of servers. If you're not already using Redis, it could be the simple, practical, low-tech solution that you're looking for.
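A rough sketch of that invalidation-URL approach, assuming Flask and the requests library (the server list and reload_config() are illustrative, only the /admin/cache-reset/ path comes from the answer):

```python
import requests
from flask import Flask

app = Flask(__name__)

def reload_config():
    # Placeholder: re-read the configuration from the database
    pass

@app.route("/admin/cache-reset/", methods=["POST"])
def cache_reset():
    reload_config()
    return "ok"

# The administration tool calls every server after saving a change.
SERVERS = ["http://web1:8000", "http://web2:8000"]  # must be kept up to date

def reset_all_caches():
    for base_url in SERVERS:
        requests.post(base_url + "/admin/cache-reset/", timeout=5)
```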
Correct me if I'm wrong, but from my understanding, "database caches" are usually implemented with an in-memory database that is local to the web server (on the same machine as the web server), and these caches store the actual results of queries. I have also read up on the common caching strategies: Cache Aside, Read Through, Write Through, Write Behind, and Write Around.
For some context: in the Write Through strategy the application writes to the cache and the cache writes to the database, while in the Cache Aside strategy the application talks to both the cache and the database directly.
I believe that the "Application" refers to a backend server with a REST API.
My first question is, in the Write Through strategy (application writes to cache, cache then writes to database), how does this work? From my understanding, the most commonly used database caches are Redis or Memcached - which are just key-value stores. Suppose you have a relational database as the main database, how are these key-value stores going to write back to the relational database? Do these strategies only apply if your main database is also a key-value store?
In a Write Through (or Read Through) strategy, the cache sits in between the application and the database. How does that even work? How do you get the cache to talk to the database server? From my understanding, the web server (the application) is always the one facilitating the communication between the cache and the main database - which is basically a Cache Aside strategy. Unless Redis has some kind of functionality that allows it to talk to another database, I don't quite understand how this works.
Isn't it possible to mix and match caching strategies? From how I see it, Cache Aside and Read Through are caching strategies for application reads (user wants to read data), while Write Through and Write Behind are caching strategies for application writes (user wants to write data). Couldn't you have a strategy that uses both Cache Aside and Write Through? Why do most articles always seem to portray them as independent strategies?
What happens if you have a cluster of web servers? Do they each have their own local in-memory database that acts as a cache?
Could you implement a cache using a normal (not in-memory) database? I suppose this would still be somewhat useful since you do not need to make an additional network hop to the database server (since the cache lives on the same machine as the web server)?
Introduction & clarification
I think you have one misunderstanding: the cache is NOT necessarily stored on the same server as the web server. Sometimes not even the database is separated from the web server onto its own machine. If you think of APIs, like HTTP REST APIs, you can use caching to avoid spending too many resources on database connections and queries. Generally, you want to use as few database connections and queries as possible. Now imagine the following setting:
You have a web server that serves your application and a REST API, which the application uses to work with some resources. Those resources come from a database (let's say a relational database) that is also stored on the same server. Now there is one endpoint that serves, for example, a list of posts (like blog posts). Every user can fetch all posts (to keep this example simple). This is a case where one can say the API request could be cached, so that users don't keep hitting the database just to query the same resources (via the REST API) over and over again. Here comes caching. Redis is one of many tools that can be used for caching. Since Redis is a simple in-memory key-value store, you can just put all of your posts (remember the REST API) into the cache after the first DB query. All future requests for the posts list would first check whether the posts are already cached or not. If they are, the API returns the cache content for this specific request.
This is one simple example of what caching can be used for.
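A minimal cache-aside sketch of that posts example, assuming redis-py; the key name, the TTL, and fetch_posts_from_db() are placeholders:

```python
import json
import redis

r = redis.Redis()

def fetch_posts_from_db():
    # Placeholder for the real relational query, e.g. SELECT * FROM posts
    return [{"id": 1, "title": "Hello"}]

def get_posts():
    cached = r.get("posts:all")
    if cached is not None:           # cache hit: skip the database entirely
        return json.loads(cached)
    posts = fetch_posts_from_db()    # cache miss: query the database once
    r.set("posts:all", json.dumps(posts), ex=60)  # keep for 60 seconds
    return posts
```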
Answers to your questions
My first question is, why would you ever write to a cache?
To reduce the amount of database connections and queries.
how is writing to these key-value stores going to help with updating the relational database?
It does not help you with updating; instead it helps you spend fewer resources. As a small side effect it also gives you a temporary backup of some data, but there are more suitable solutions for that (especially since Redis is not persistent by default, although it does support persistence).
Do these cache writing strategies only apply if your main database is also a key-value store?
No, it does not matter which database you use, whether it's a NoSQL or a SQL DB. It strongly depends on what you want to cache and how the database and its tables are set up. Do your resources change frequently? Are resources updated manually or only by user-initiated actions? These are the questions that lead you to the right caching implementation.
Isn't it possible to mix and match caching strategies?
I am not an expert at caching strategies, but let me try:
I guess it is possible, but it highly depends on what you are doing in your DB and what kind of application you have. Once you figure out what kind of application you are building, you will know which strategy to use. I would not recommend mixing those strategies up, because each strategy is coupled to your application type; in other words, it will probably not work out well.
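For illustration only (not a recommendation), the combination the question asks about, Cache Aside on reads plus an application-side Write Through on writes, would look roughly like this; db_read/db_write and the key scheme are assumed placeholders, and the application performs both writes itself since, as the question notes, the application usually mediates between cache and database:

```python
import json
import redis

r = redis.Redis()

def db_read(item_id):
    # Placeholder for a SQL SELECT against the main database
    return {"id": item_id}

def db_write(item_id, value):
    # Placeholder for a SQL INSERT/UPDATE against the main database
    pass

def read_item(item_id):
    key = f"item:{item_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)      # Cache Aside: hit
    value = db_read(item_id)           # Cache Aside: miss, read the DB
    r.set(key, json.dumps(value))
    return value

def write_item(item_id, value):
    db_write(item_id, value)                     # Write Through: update the DB...
    r.set(f"item:{item_id}", json.dumps(value))  # ...and the cache together
```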
What happens if you have a cluster of web servers? Do they each have their own local in-memory database that acts as a cache?
I guess both are possible. Usually you have one database, perhaps clustered or synchronized with replicas, to which your web servers (e.g. REST APIs) make their requests. Each of your API servers could then have its own cache so that it doesn't have to query the database at all (in cloud-based applications the database may also be on another, separate server, so another "hop" in networking terms). Alternatively, you could place another middleware between your clustered APIs and your (possibly also clustered) DB, but I guess nobody would do that because of the network traffic: it would increase response times, which is exactly what you usually want to prevent.
Could you implement a cache using a normal (not in-memory) database?
Yes you could, but it would be much slower. A machine can access in-memory data faster than opening another (local) connection to a database and querying your cached entries. On top of that, the database has to write the entries to files on disk in order to persist the data.
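For completeness, a sketch of what a cache backed by a normal on-disk database could look like, using SQLite as an assumed example; it works, but every get/set pays for disk I/O:

```python
import sqlite3

conn = sqlite3.connect("cache.db")
conn.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)")

def cache_set(key, value):
    # INSERT OR REPLACE overwrites an existing key, like SET in Redis
    conn.execute("INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)",
                 (key, value))
    conn.commit()  # the write must reach the file, unlike an in-memory store

def cache_get(key):
    row = conn.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    return row[0] if row else None
```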
Conclusion
All in all, it is about keeping response times low and avoiding unnecessary network traffic. I hope I could help you out a little bit.
I have a small web and mobile application partly running on a webserver written in PHP (Symfony). I have a few clients using the application, and slowly expanding to more clients.
My back-end architecture looks like this at the moment:
Database is Cloud SQL running on GCP (every client has its own database instance)
Files are stored on Cloud Storage (GCP) or S3 (AWS), depending on the client (every client has its own bucket)
The PHP application runs in a Compute Engine VM (GCP) (every client has its own VM)
Now the thing is, the only client-specific part of the PHP code is a settings file with the database credentials and the Storage/S3 keys in it. All the other code is exactly the same for every client, and the different VMs mostly sit idle all day, waiting for a few hours of usage per client.
I'm trying to find a way to avoid having to create and maintain a VM for every customer. How could I rearchitect my back-end so I can keep separate databases and storage buckets per client, but only scale up my VMs when capacity is needed?
I'm hearing a lot about Docker, was thinking about keeping DB credentials and keys in a Redis DB or Cloud Datastore, and was looking at Heroku, App Engine, Elastic Beanstalk, ...
This is my ideal scenario as I see it now:
1. An incoming request hits a load balancer.
2. From the request, determine which client the request is for.
3. Find the correct settings file, or credentials from a DB.
4. Inject the settings file into an unused "container".
5. Handle the request.
6. Make the container idle again.
And somewhere in there, determine, based on the amount of incoming requests or traffic, whether I need to spin containers up or down to handle the extra or reduced (temporary) load.
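As a very rough sketch of steps 2, 3 and 5 of that scenario, assuming Flask and a hostname-per-client scheme; the in-code settings table is a stand-in for a lookup in Redis or Cloud Datastore, and all names are illustrative:

```python
from flask import Flask, request, g, abort

app = Flask(__name__)

# Stand-in for a settings lookup in Redis / Cloud Datastore; credentials elided.
CLIENT_SETTINGS = {
    "client1.example.com": {"db_dsn": "...", "bucket": "client1-bucket"},
    "client2.example.com": {"db_dsn": "...", "bucket": "client2-bucket"},
}

@app.before_request
def load_client_settings():
    # Steps 2 and 3: derive the client from the request's hostname
    settings = CLIENT_SETTINGS.get(request.host)
    if settings is None:
        abort(404)
    g.settings = settings

@app.route("/")
def handle():
    # Step 5: handle the request with the client-specific settings
    return f"Serving client with bucket {g.settings['bucket']}"
```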
All this information overload has me stuck: I have no idea which direction to choose, and I fail to see how implementing any of the above technologies will actually fix my problem.
There are several ways to do it with minimal effort:
1. Rewrite the loading of the config file so that it depends on the customer.
2. Host several back-end web sites on one VM (the best choice, I think).
I am very new to Redis and have been investigating it for the past few days. I have read the documentation on cache management (LRU cache), commands, etc. I want to know how to implement caching for the data of multiple microservices.
I have a few questions:
Can all microservices' cached data be kept under a single Redis server instance?
Should every microservice have its own cache database in redis?
How to refresh cache data without setting EXPIRE? Since it would consume more memory.
Some more information on best practices for using Redis with microservices would be helpful.
It's possible to use the same Redis for multiple microservices; just make sure to prefix your Redis cache keys to avoid conflicts between the microservices.
You could use multiple logical databases in the same Redis instance (i.e. one for each microservice), but that is discouraged because Redis is single-threaded.
The best way is to use one Redis per microservice; then you can easily flush one of them without touching the others.
From my personal experience with a redis cache in production (with 2 million keys), there is no problem using EXPIRE. I encourage you to use it.
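A small sketch of the key-prefix convention together with EXPIRE, assuming redis-py; the service names and TTL are illustrative:

```python
import redis

r = redis.Redis()

def cache_set(service, key, value, ttl=300):
    # Keys like "orders:user:42" keep each microservice's entries separate
    r.set(f"{service}:{key}", value, ex=ttl)

def flush_service(service):
    # Remove one microservice's keys without touching the others.
    # SCAN avoids blocking the single-threaded server the way KEYS would.
    for k in r.scan_iter(f"{service}:*"):
        r.delete(k)

cache_set("orders", "user:42", "cached-payload")
```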
Please find below the answers to all your questions.
Can all microservices' cached data be kept under a single Redis server instance? Ans: Yes, you can keep all the data under a single Redis instance; all you need to do is store each piece of data under a different key name, since Redis is basically a key-value database.
Should every microservice have its own cache database in Redis? Ans: Not required. Just use different keys for each microservice. Also note that you can use a colon (:) to create folders in Redis, which makes the different microservices easy to identify in Redis Desktop Manager.
Example: for a key named X:Y:Z, Z is placed in folder Y, and Y is in folder X, so you get a folder-like structure. That helps to differentiate the microservices.
How to refresh cache data without setting EXPIRE? Since it would consume more memory. Ans: You can set the data again on the same key whenever the microservice's response changes; the key's value is simply overwritten in that case.
Can all microservices data(cached) be kept under a single instance of redis server?
In a microservice architecture it's preferable to scale elastically, SaaS-style. You can think of your cache service as a microservice in its own right (one that responds on demand), and then you have multiple options here. The recommended practice for data storage is sharding; see https://azure.microsoft.com/en-us/documentation/articles/best-practices-caching/#partitioning-a-redis-cache and the diagram in the book Microservices, IoT and Azure.
Should every microservice have its own cache database in Redis? It's possible to keep thinking in terms of "vertical partitions", but you should also consider "horizontal partitions", so again consider sharding. Additionally, it's not a bad idea to have a local cache, especially to avoid a denial of service on the shared cache:
"Be careful not to introduce critical dependencies on the availability of a shared cache service into your solutions. An application should be able to continue functioning if the service that provides the shared cache is unavailable. The application should not hang or fail while waiting for the cache service to resume."
How to refresh cache data without setting EXPIRE? Since it would consume more memory.
You can define your own sync policies; I think a cache is most suitable for data that changes rarely.
"It might also be appropriate to have a background process that periodically updates reference data in the cache to ensure it is up to date, or that refreshes the cache when reference data changes."
For cache best practices, check Caching Best Practices.
With Memcached, it is my understanding that each of the cache servers doesn't need to know diddly about the other servers. With AppFabric Cache on the other hand, the shared configuration links the servers (and consequently becomes a single point of failure).
Is it possible to use AppFabric cache servers independently? In other words, can the individual clients choose where to store their key/values based on the available cache servers and would that decision be the same for all clients (the way it is with memcached).
NOTE: I do realize that more advanced features such as tagging would be broken without all the servers knowing about each other.
Are you viewing the shared configuration as a single point of failure? If you are using SQL Server as your configuration repository, then this shouldn't be an issue with a redundant SQL Server setup.
This approach would obviously lose you all of the benefits of using a distributed cache; however, if you really want to do this, then simply don't use a shared configuration. When you configure a new AppFabric node, create a new configuration file or database; choosing an existing one basically says "add this new node to the existing cache cluster".
We have 3 front-end servers, each running multiple web applications. Each web application has an in-memory cache.
Recreating the cache is very expensive (>1 min). Therefore we repopulate it using a web service call to each web application on each front-end server every 5 minutes.
The main problem with this setup is maintaining the target list for updating and the cost of creating the cache several times every few minutes.
We are considering using AppFabric or something similar, but I am unsure how time-consuming it would be to get up and running. Also, we really need the easiest solution.
How would you update an expensive in memory cache across multiple front-end servers?
The problem with memory caching is that it's unique to the server. I'm going with the idea that this is why you want to use AppFabric. I'm also assuming that you're re-creating the cache every few minutes to keep the in memory caches in sync across all servers. With all this work, I can well appreciate that caching is expensive for you.
It sounds like you're doing a lot of work that probably isn't necessary. This article has some detail about the caching mechanisms available within SharePoint. You may be interested in the output cache discussed near the top of the article. You may also want to read the linked TechNet article and the linked article called "Custom Caching Overview".
The only SharePoint way to do that is to use the Service Application infrastructure. The only problem is that it takes some time to understand how it works, and it's too complicated to build from scratch. You might consider downloading one of the existing sample applications and renaming the classes/GUIDs to match your naming conventions. I used this one: http://www.parago.de/2011/09/paragoservices-a-sharepoint-2010-service-application-sample/. In this case you can have a single cache per N front-end servers.