Set up Azure Redis Cache as an LRU cache

I'm using the preview of Azure Redis Cache, and it's working great. But I can't figure out how to configure it as an LRU cache, as described in the Redis docs.
The exception is
StackExchange.Redis.RedisServerException: ERR unknown command 'CONFIG'
My code was
server.ConfigSet("maxmemory", "250m");
server.ConfigSet("maxmemory-policy", "allkeys-lru");

The CONFIG command is currently disabled in the initial Azure Redis Cache (Preview).
We will selectively open this up as we refresh the Preview.
By default the maxmemory-policy is set to volatile-lru.
Update - Max Memory Policy is now configurable via the Cache blade.
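For context, allkeys-lru tells Redis to evict the least recently used key (any key, not just keys with a TTL, which is what volatile-lru does) once maxmemory is reached. The policy can be sketched in plain Java with a LinkedHashMap in access order; this is an illustration of the eviction semantics, not Redis itself, and a capacity count stands in for Redis's byte-based maxmemory:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: accessOrder=true moves entries to the tail on each get/put,
// and removeEldestEntry evicts the head (least recently used) once over capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With capacity 2, putting "a" and "b", touching "a", then putting "c" evicts "b", because "b" is the entry that was used least recently.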


How to update Abp Permission Cache for each application

I have multiple services (Administration.Api, Project.Api).
The Administration service manages permissions (create, update).
But I have a problem with caching: when I update permissions through Administration.Api, Project.Api's cached permission grants don't change immediately (they only change after 20 minutes, when the cache entry is removed automatically).
I want to invalidate all permission cache entries, under their different cache prefixes, immediately. How can I fix this?
You really need a true distributed cache service (like Redis) to do this properly. That way a cache dump for one service affects all services.
There are other solutions you could try, but really they are just bandaids, and more work with potential side effects:
use a message bus to notify all services of the permission change so that each dumps its in-memory cache
use a shared db table with a "LastUpdated" column. The permission service writes the updated time whenever permissions change; each service queries this table (on each request) to check for a newer updated time, and dumps its in-memory cache if one is found.
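The second workaround can be sketched in plain Java. The names here (PermissionVersion, PermissionCache) are made up for illustration, and an in-process counter stands in for the shared db table that a real implementation would query:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Stands in for the shared "LastUpdated" row; a real setup would read/write the DB.
class PermissionVersion {
    private final AtomicLong lastUpdated = new AtomicLong(0);
    long get() { return lastUpdated.get(); }
    void touch() { lastUpdated.incrementAndGet(); } // permission service calls this on change
}

// Each service wraps its in-memory cache with a staleness check on every read.
class PermissionCache {
    private final PermissionVersion version;
    private long seenVersion;
    private final Map<String, Boolean> grants = new HashMap<>();

    PermissionCache(PermissionVersion version) { this.version = version; }

    Boolean get(String permission) {
        long current = version.get();
        if (current != seenVersion) { // permissions changed somewhere: dump the local cache
            grants.clear();
            seenVersion = current;
        }
        return grants.get(permission);
    }

    void put(String permission, boolean granted) { grants.put(permission, granted); }
}
```

After the permission service calls touch(), the next read in any service sees the newer version and dumps its stale grants, which is exactly the behaviour described above.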
You can use AbpDistributedCacheOptions to change the default cache settings and add a prefix to your application's cache keys.
Configure<AbpDistributedCacheOptions>(options =>
{
    options.GlobalCacheEntryOptions = new DistributedCacheEntryOptions()
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(20) // 20 mins is the default
    };
    options.KeyPrefix = "MyApp1";
});
You can also extend and override the permission management providers, such as RolePermissionManagementProvider, and handle cache invalidation there.
Docs about permission management providers: https://docs.abp.io/en/abp/latest/Modules/Permission-Management#permission-management-providers
One application has ONE ABP default cache (we are not talking about global caches like Redis now). So to control the caches of different applications from a single place, you can use RabbitMQ: each application gets its own queue, named something like "abp-cache[appName]". When permissions change, you send a message to EACH of these queues, and the RabbitMQ receiver of the specific app handles the received message and invalidates its cache. I've already implemented this mechanism to update the ABP permission cache for all my apps; everything is neatly wrapped inside an extensions NuGet package.
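The fan-out described above can be sketched in plain Java, with in-process queues standing in for the RabbitMQ queues (CacheBus and the queue naming are illustrative, not the RabbitMQ client API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One queue per application, keyed by a name like "abp-cache[Project]".
class CacheBus {
    private final Map<String, BlockingQueue<String>> queues = new HashMap<>();

    // Each app registers its own queue at startup and consumes from it.
    BlockingQueue<String> register(String appName) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        queues.put("abp-cache[" + appName + "]", q);
        return q;
    }

    // Called by the permission-managing app: fan the message out to EVERY app's queue.
    void publish(String message) {
        for (BlockingQueue<String> q : queues.values()) q.offer(message);
    }
}
```

Each application's receiver would react to the message by invalidating its local permission cache, so all apps converge immediately instead of waiting for the expiry.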

How to find out if terracotta server is using disk storage?

I've set up an Ehcache cache that uses a disk store to cache some files. This is working, and I can see the cache file created on disk, but I want to have this behaviour in a Terracotta server as well, so the cache can be accessed by multiple clients.
I've set up the Terracotta server and tweaked the Ehcache configuration. I can see that the cache is working, but I'm not sure whether it is using memory or disk. I only want to use disk for this cache.
I also get some warnings like this one: WARN - Asking for a large amount of memory: 26179740 bytes
Terracotta config:
<servers>
  <mirror-group>
    <server host="localhost" name="localhost">
      <data>/opt/terracotta/data</data>
      <tsa-port>9510</tsa-port>
      <management-port>9540</management-port>
      <tsa-group-port>9530</tsa-group-port>
      <dataStorage size="2g">
        <offheap size="100m"/>
        <hybrid/>
      </dataStorage>
      <logs>stdout:</logs>
    </server>
  </mirror-group>
</servers>
I'm configuring ehcache programmatically, and I'm certain the following config is wrong, but maybe close to what is needed.
TerracottaConfiguration config = new TerracottaConfiguration()
    .clustered(true)
    .compressionEnabled(true);

Cache httpCache = new Cache(new CacheConfiguration()
    .name(HTTP_CACHE)
    .maxEntriesLocalHeap(1)
    .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LRU)
    .diskExpiryThreadIntervalSeconds(Properties.CACHE_HTTP_EXPIRY)
    .persistence(new PersistenceConfiguration().strategy(PersistenceConfiguration.Strategy.DISTRIBUTED))
    .terracotta(config));
Given the configuration and the version information from the comments:
The open source Terracotta server only uses in-memory storage.
<dataStorage size="2g">
<offheap size="2g"/>
</dataStorage>
In this example, you have 2GB of data storage, all of it served from offheap memory.
And of course there will be no on-disk content.
This means that if the server is shut down, all data is lost.
You can of course have two servers in a single mirror group to gain high availability.
With the Enterprise feature set, you can have data persisted on disk to enable restartability.
<dataStorage size="2g">
<offheap size="200m"/>
<hybrid/>
</dataStorage>
The example above declares 2GB of storage, of which 200MB will be served from memory and the rest from disk.
Note that in order to have full server restartability, you need to enable it through: <restartable enabled="true"/> in each server element.
For more details on all this, please refer to the product documentation.
Note also that you should use the same versions for the client and the server. While the 4.3 line supports different client and server versions, that is aimed at rolling upgrades and is not a recommended long-running setup.
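For reference, enabling restartability (an Enterprise feature) per server element looks roughly like the sketch below. The exact element placement may differ by version, so treat this as an assumption and check it against the product documentation:

```xml
<servers>
  <server host="localhost" name="localhost">
    <data>/opt/terracotta/data</data>
    <dataStorage size="2g">
      <offheap size="200m"/>
      <hybrid/>
    </dataStorage>
    <restartable enabled="true"/>
  </server>
</servers>
```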

How to change Infinispan cache settings after it is created?

In my application I'm using Infinispan version 5.3, and I want to change settings after the cache is initialized. Default settings are loaded from an XML file, and some of them (e.g. eviction maxEntries, lifespan, etc.) should be changeable at any time while the application is running (the change is made by a sysadmin). Is there a way to change the settings of an already created cache?
I tried EmbeddedCacheManager.defineConfiguration(String cacheName, Configuration configurationOverride); but this has no effect on an already created cache.
Please take into account that in Infinispan version 5.3 there is no possibility to change the cache configuration "on the fly". You need to restart your service with the new configuration in case of any wanted change.
This is something the community might want to work on in the future. However, such a task is not easy because you need to figure out how to correctly deal with affected data immediately after the configuration change.
Feel free to raise a new feature request: https://issues.jboss.org/browse/ISPN/

FuelPHP Cache visibility between task & app controller

I use memcache as the cache driver, and I noticed that my app does not see \Cache::set('key', $value); saved in the task. When I try to read it from the app controller level, \Cache::get('key') is not set.
There is no memcache cache driver available in the FuelPHP framework.
If you mean "memcached" (which is not the same thing), then assuming your config is OK and both the webserver and the command line use the same config, it should work without problems.

When is OpenAFS cache cleared?

Let's say I have a bunch of users who all access the same set of files, that have permission system:anyuser. User1 logs in and accesses some files, and then logs out. When User2 logs in and tries to access the same files, will the cache serve the files, or will it be cleared between users?
The cache should serve the files (in the example above).
How long a file will persist in the OpenAFS cache manager depends on how the client is configured, variables include the configured size of the cache, whether or not the memcache feature is enabled, and how "busy" the client is.
If OpenAFS memcache (cache chunks stored in RAM) is enabled, then the cache is cleared upon reboot. With the more traditional disk cache, the cache can persist across reboots. Aside from that key difference, files persist in the cache following the same basic rules: the cache is a fixed-size, least-recently-used store; recently accessed files stay in the cache, and older files are purged as needed when newer files are requested.
More details are available in the OpenAFS wiki:
http://wiki.openafs.org/
