I am trying to find details of the following aspects of the Azure Cache service:
What is the maximum length of time something can be kept in the cache? I presume it would be until the cache service is restarted.
Is there a way to detect that the cache service has been restarted?
My intention is to use Azure Cache to store datasets that are frequently accessed, and that would be updated / added to over time as data coming into my system is processed.
How would I know / be notified that the cache has restarted (I guess apart from seeing if it is empty) so I could kick off a process to repopulate it?
I think you'll find answers to your questions here: http://msdn.microsoft.com/en-us/library/dn386128.aspx
Alright, information on High Availability of the cache service is here: http://msdn.microsoft.com/en-us/library/dn386134.aspx
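The linked docs cover expiration and High Availability; for restart detection specifically I don't know of a built-in callback. A common workaround is a sentinel key: write a marker when you populate the cache, and if it is ever missing, assume the cache was restarted or flushed and repopulate. A minimal sketch of the pattern (not an official API; the key name is made up):

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheRestartDetector
{
    const string SentinelKey = "cache-populated-marker"; // hypothetical key name

    // True if the cache looks restarted/flushed since the last population,
    // in which case the caller should kick off the repopulation process.
    public static bool NeedsRepopulation(DataCache cache)
    {
        // Either the service restarted or the marker itself was evicted;
        // both cases are treated as "repopulate".
        return cache.Get(SentinelKey) == null;
    }

    public static void MarkPopulated(DataCache cache)
    {
        // The marker still falls under the service's normal expiration and
        // eviction rules, so pair it with the check above on every read path.
        cache.Put(SentinelKey, DateTime.UtcNow.ToString("o"));
    }
}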
I have implemented caching in my Spring Boot REST application. My policy includes a time-based cache eviction strategy and an update-based cache eviction strategy. I am worried that, since the server is stateless, if a call to update certain data is handled by server instance A, the corresponding caches in server instances B, C, and D are not updated as well.
Is this an issue I would face / is there a way to overcome this issue?
This is one of the oldest problems in software development: cache invalidation when you have multiple servers.
One way to handle it is to move the cache out of the individual servers and into something shared, such as a dedicated instance that holds the cache entries every app server refers to, or a store like Redis [centralized cache].
A second way is to broadcast a message so that each server knows to invalidate the entry once the data has been modified or deleted (see the sketch after this list); here you run the risk of a message not being processed, leaving a stale entry on some server[s].
Another option is to have some sort of write-ahead log [like Kafka or Redis Streams] that is consumed by each server, so they all process the events deterministically and converge on the same cache state.
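To make the broadcast option concrete, here is a minimal sketch of pub/sub invalidation using Redis (shown in C# with StackExchange.Redis purely for illustration, since the question is Spring Boot; Spring has equivalent Redis messaging support, and the channel and key names here are made up):

using System;
using System.Runtime.Caching;
using StackExchange.Redis;

class BroadcastInvalidation
{
    static void Main()
    {
        // Each app server keeps its own local in-memory cache...
        MemoryCache localCache = MemoryCache.Default;

        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        var pubsub = redis.GetSubscriber();

        // ...and listens on a shared channel for invalidation messages.
        pubsub.Subscribe("cache-invalidate", (channel, key) =>
        {
            localCache.Remove((string)key);
        });

        // The instance that handles an update broadcasts the affected key,
        // so every instance drops its stale copy. Note pub/sub is
        // fire-and-forget: a server that is down misses the message,
        // which is exactly the staleness risk mentioned above.
        pubsub.Publish("cache-invalidate", "customer:42");

        Console.ReadLine(); // keep the process alive to receive messages
    }
}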
In my project I use Infinispan to manage my data and improve performance.
The problem is that when we stop the server and restart it, all my data is deleted, which is normal because it's a cache.
So I'd like to ask whether you have a suggestion for saving my application's data even when the server is stopped.
I searched the internet and found a lot of solutions, like using a database with Infinispan or storing the data in a file (FileCacheStore, JdbcCacheStore, CassandraCacheStore), and I don't know which one is the best solution!
Thank you very much in advance for your reply.
There's a multitude of options and none is best for all use-cases; that's why the options exist. You haven't said much about your app's needs.
1) Use a persistent cache store (the single file store is probably the simplest option). This is an out-of-the-box (OOTB) solution.
2) Before shutdown, fetch and persist all data from your app (use the streams API to iterate through it), and load it back after boot (see the sketch after this list). This adds no overhead at runtime but requires you to handle the process yourself.
3) Use a cluster of nodes and always keep some nodes holding the data up. However, backups (via either 1) or 2)) might be advisable anyway.
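Option 2 is essentially a snapshot-and-restore pattern. A minimal sketch of the idea (in C# for illustration only, since the question is about Infinispan/Java; with Infinispan you would stream the cache's entry set out the same way and put the entries back after boot):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public static class CacheSnapshot
{
    // Called just before shutdown: dump every entry to a file.
    public static void Save(Dictionary<string, string> cache, string path)
    {
        File.WriteAllText(path, JsonSerializer.Serialize(cache));
    }

    // Called at boot: reload the entries, or start empty on first run.
    public static Dictionary<string, string> Load(string path)
    {
        if (!File.Exists(path))
            return new Dictionary<string, string>();
        return JsonSerializer.Deserialize<Dictionary<string, string>>(
            File.ReadAllText(path));
    }
}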
I have been studying Redis (no experience at all, just theory), and after doing some research I found out that it is also used as a cache, e.g. by Stack Overflow itself.
My question is: if I have an ASP.NET Web API service and use output caching at the Web API level to cache responses, I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses.
Now, as Redis is an in-memory database, how would it help me to substitute Web API's output caching with a Redis cache?
Is there any advantage?
I tried to go through this answer, redis-cache-vs-using-memory-directyly, but I guess I didn't get the key line in the answer:
"Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required."
I am basically storing key/value pairs (request/response) in the server's memory to deliver cached responses.
This means that after a server restart, the server will have to rebuild the cache. That won't be the case with Redis. So one advantage of Redis over a homemade in-memory solution is persistence (only if that matters to you and you had not planned to write persistence yourself).
Then, instead of coding your own expiration mechanism, you can use the Redis EXPIRE or EXPIREAT commands, or even simply specify the expiration when putting the API output string into the cache with SETEX (see the sketch below).
if you need your application to scale on several nodes sharing the same data
What it means is that if you have multiple instances of the same API servers, putting the cache into Redis allows those servers to share the same cache, thus reducing, for instance, memory consumption (one cache instead of three in-memory caches), and so on...
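Putting both points together, a minimal sketch of Redis-backed output caching with StackExchange.Redis (the key prefix and TTL are arbitrary choices): the expiry passed to StringSet is what SETEX does under the hood, and because the entries live in Redis, every Web API instance sees the same cache:

using System;
using StackExchange.Redis;

public static class RedisOutputCache
{
    static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    // Cache an API response under a request-derived key with a TTL.
    public static void CacheResponse(string requestKey, string responseBody)
    {
        IDatabase db = Redis.GetDatabase();
        db.StringSet("api:" + requestKey, responseBody,
                     expiry: TimeSpan.FromMinutes(10));
    }

    // Returns null when the entry is missing or expired, so the caller
    // regenerates the response and caches it again.
    public static string TryGetResponse(string requestKey)
    {
        IDatabase db = Redis.GetDatabase();
        RedisValue cached = db.StringGet("api:" + requestKey);
        return cached.HasValue ? (string)cached : null;
    }
}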
I'm using AppFabric for Windows Server 1.1 with Entity Framework and the Entity Framework Cache Adapter.
Recently, for one of our customers, we encountered memory pressure on one of the cache nodes. AppFabric Cache started evicting the least recently used objects.
The problem is that the Entity Framework Cache Adapter stores objects in dependent regions. So if a region is cleared or removed by AppFabric, the cache adapter must remove objects in the dependent regions as well.
I've successfully tested cache notifications, but I'm wondering whether I can be notified only of evictions performed at the server level, and not of items programmatically removed by a cache client. If not, it will be very difficult to know how to react properly when an item is removed.
Thanks in advance.
AppFabric records eviction information in Performance Counters and the AppFabric Operational log in the Event Log. The Health Monitoring page shows how to enable the Operational log.
You should be able to use one of these to detect evictions; you'll probably need a monitoring service to do the actual notification.
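For example, a small polling sketch against the eviction performance counters (the category and counter names below are assumptions; verify the exact AppFabric Caching names on your cache host in Performance Monitor):

using System;
using System.Diagnostics;
using System.Threading;

class EvictionMonitor
{
    static void Main()
    {
        // Assumed names -- check perfmon for the real ones on your host.
        var evicted = new PerformanceCounter(
            "AppFabric Caching:Host",   // assumed category
            "Total Evicted Objects",    // assumed counter
            readOnly: true);

        float last = evicted.NextValue();
        while (true)
        {
            Thread.Sleep(TimeSpan.FromSeconds(30));
            float current = evicted.NextValue();
            if (current > last)
            {
                // Server-side evictions happened since the last poll:
                // trigger the dependent-region cleanup here.
                Console.WriteLine($"Evictions detected: {current - last}");
            }
            last = current;
        }
    }
}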
I'm using Windows Azure Shared Caching. I encountered a few problems:
How to know what keys are present in the cache? Is there something like a GetAllKeys() method?
Is it possible to call clearAll()?
Why can't I use regions?
Thanks.
This section applies to Windows Azure Caching
Windows Azure provides two types of cache modes:
Dedicated Role caching - The role instances are used exclusively for caching (there is no other code running in that instance).
Co-located Role caching - The cache shares the VM resources (bandwidth, CPU, and memory) with the application.
How to know what is in the cache? Is there something like a "GetAllKeys()" method?
Do you need that information for your application, or more for reporting / auditing?
I think Microsoft did not provide that method for one good reason: the information it returned could become obsolete shortly afterwards. Cache items may expire at any time (depending on their expiration time and when they were added to the cache), so the information you would receive from a GetAllKeys() method could be invalid seconds or even milliseconds later.
The standard cache usage pattern would be (see the sketch after this list):
Get the item from the cache by key.
If the cache returns null, create the item and put / add it into the cache.
Perform the operation on the item (either taken from the cache or recreated).
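In code, that pattern looks like this (a minimal sketch with the DataCache client; LoadItemFromSource is a made-up placeholder for your real data access, and the 30-minute expiration is arbitrary):

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheAsideExample
{
    public static object GetOrCreate(DataCache cache, string key)
    {
        object item = cache.Get(key);      // 1. get the item by key
        if (item == null)                  // 2. miss: recreate and store it
        {
            item = LoadItemFromSource(key);
            cache.Put(key, item, TimeSpan.FromMinutes(30)); // explicit expiry
        }
        return item;                       // 3. operate on the item
    }

    static object LoadItemFromSource(string key)
    {
        return "expensive result for " + key; // stand-in for a DB/service call
    }
}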
Is it possible to clearAll()?
I do not think you should worry about purging your cache. If you set the cache eviction policy to LRU (Least Recently Used), the least recently used items are discarded first, so you will never hit anything like "no space in the cache".
Why can't I use regions?
You can, but only with a cache located on the same instance. Dedicated Role caching does not support it.
This section applies to Windows Azure Shared Caching
Windows Azure Shared Caching is very similar to Windows Azure Caching (described above) from the client's point of view, and all of the explanations above apply to Shared Caching too.
There is a small difference in item eviction:
In Shared Caching, items without a specific expiration time will expire after 48 hours. However, you can add items to the cache (via various overloads of the Add and Put methods) with an explicit expiration time, such as X minutes or Y days.
When you exceed the size of your cache (the size you chose during creation), the caching service will start evicting items until the memory issue is resolved (i.e., there is enough memory to add new cache items). During eviction an LRU mechanism is used: the least recently used items in the cache are removed first.
The get, check, and recreate approach to dealing with cache items (described above) will work for Shared Caching too.
I hope that will help you to better understand Azure Caching and Shared Caching.
The following method clears all the data in a cache:
public static void InvalidateCache(string cacheName)
{
    // Requires: using Microsoft.ApplicationServer.Caching;
    DataCache desiredCache = new DataCache(cacheName);

    // Items added without an explicit region land in system regions,
    // so clearing every system region empties the whole cache.
    foreach (string regionName in desiredCache.GetSystemRegions())
    {
        desiredCache.ClearRegion(regionName);
    }
}
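For example, InvalidateCache("default") would wipe a cache named "default" (substitute whatever cache name you created).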