I'm working on a microservice using Spring Boot. I have three questions here. Answers to any/all are much appreciated. Thanks in advance.
Background: We need to read a key from Vault during application startup and save it in a variable for later use (to avoid repeated hits on Vault). There will be a TTL for this value, so the application should refresh and pick up whenever a new value is configured in Vault.
Q1: How to load the values and ensure they are loaded only once (i.e. Vault is hit only once)?
Q2: How to get the new values whenever there is a change?
Q3: How to test locally?
Use a Guava cache to store the values (assuming they are strings, but you can change it to any type), like this:
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Entries expire 10 minutes after the last access and are reloaded on the next get()
LoadingCache<String, String> vaultData = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                return actuallyLoadFromVault(key); // your Vault call goes here
            }
        });
This way, the first time your code reads a key from vaultData it will be loaded using actuallyLoadFromVault (which you need to write, of course), and after that any new access to that key via vaultData will hit the cached value stored in memory.
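For example, reading a value then looks like this (the key name "my/secret/key" is just a placeholder):

import java.util.concurrent.ExecutionException;

try {
    // First call for this key hits Vault via the loader; later calls within the
    // TTL are served from memory.
    String secret = vaultData.get("my/secret/key");
} catch (ExecutionException e) {
    // thrown when the Vault call inside the loader fails
    throw new IllegalStateException("Could not load value from Vault", e);
}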
With proper configuration, after 10 minutes the value will be wiped from the cache (please read https://github.com/google/guava/wiki/CachesExplained#when-does-cleanup-happen and the question "How does Guava expire entries in its CacheBuilder?" to configure that correctly).
You might need to set max cache size to limit the memory consumption. See documentation for details.
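For example, a sketch with an arbitrary cap of 100 entries; loader stands for the same CacheLoader as in the snippet above:

LoadingCache<String, String> vaultData = CacheBuilder.newBuilder()
        .maximumSize(100)                        // caps the cache at ~100 entries (Guava may evict slightly earlier)
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(loader);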
Here's my code using Ehcache for multi-threaded reading and writing:
Write code:
try {
    targetCache.acquireWriteLockOnKey(key);
    targetCache.putIfAbsent(new Element(key, value));
}
finally {
    targetCache.releaseWriteLockOnKey(key);
}
Read code:
try {
    cache.acquireReadLockOnKey(key);
    Element ele = cache.get(key);                 // look up the element under the read lock
    cacheCarId = (String) ele.getObjectValue();
}
finally {
    cache.releaseReadLockOnKey(key);
}
key and value are both String.
My config is as follows:
CacheConfiguration config = new CacheConfiguration();
config.name("carCache");
config.maxBytesLocalHeap(128, MemoryUnit.parseUnit("M"));
config.eternal(false);
config.timeToLiveSeconds(60);
config.setTimeToIdleSeconds(60);
SizeOfPolicyConfiguration sizeOfPolicyConfiguration = new SizeOfPolicyConfiguration();
sizeOfPolicyConfiguration.maxDepth(10000);
sizeOfPolicyConfiguration.maxDepthExceededBehavior("abort");
config.addSizeOfPolicy(sizeOfPolicyConfiguration);
Cache memoryOnlyCache = new Cache(config);
CacheManager.getInstance().addCache(memoryOnlyCache);
Values are evicted within 60 s and are written by multiple threads. The total number of keys is less than 25,000.
Reading and writing were fine at the beginning, but after a couple of hours I get inconsistency between reads and writes...
Could anybody help me with this problem? Thanks a lot.
A Cache is already a thread-safe data structure, so you should not need to use explicit locking as you do.
Also the method Cache.putIfAbsent is already an atomic operation that guarantees that only one thread will succeed with the put.
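For illustration, a minimal sketch of the same write and read without explicit locks, using the names from your snippets (Ehcache 2.x API):

// Write: putIfAbsent is atomic, so no write lock is needed
targetCache.putIfAbsent(new Element(key, value));

// Read: get is thread-safe; it returns null if the entry has expired or been evicted
Element ele = cache.get(key);
String cacheCarId = (ele != null) ? (String) ele.getObjectValue() : null;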
Note that eviction and expiry are two different things. With your configuration, eviction happens when the cache size grows beyond 128MB and expiry indeed happens after 60 seconds. However Ehcache does expiry in-line, so it is triggered when you read or write the mapping.
As for your remark on inconsistency, you will need to describe in more detail what you mean by that.
I have just upgraded Tomcat from version 7.0.52 to 8.0.14.
I am getting this for lots of static image files:
org.apache.catalina.webresources.Cache.getResource Unable to add the
resource at [/base/1325/WA6144-150x112.jpg] to the cache because there
was insufficient free space available after evicting expired cache
entries - consider increasing the maximum size of the cache
I haven't specified any particular resource settings, and I didn't get this for 7.0.52.
I have found mention of this happening at startup in a bug report that was supposedly fixed. For me this is happening not at startup but constantly when the resource is requested.
Anybody else having this issue?
Trying to at least just disable the cache, but I cannot find an example of how to specify not to use the cache. The attributes are gone from the Context element in Tomcat version 8. I have tried adding a resource but cannot get the config right.
<Resource name="file"
cachingAllowed="false"
className="org.apache.catalina.webresources.FileResourceSet"
/>
Thanks.
I had the same issue when upgrading from Tomcat 7 to 8: a continuous large flood of log warnings about cache.
1. Short Answer
Add this within the Context xml element of your $CATALINA_BASE/conf/context.xml:
<!-- The default value is 10240 kbytes, even when not added to context.xml.
So increase it high enough, until the problem disappears, for example set it to
a value 5 times as high: 51200. -->
<Resources cacheMaxSize="51200" />
The default is 10240 (10 MB), so set a size higher than this, then tune for optimum settings until the warnings disappear.
Note that the warnings may come back under higher traffic situations.
1.1 The cause (short explanation)
The problem is caused by Tomcat being unable to reach its target cache size because the cache is full of entries that are younger than their TTL. Tomcat didn't have enough cache entries that it could expire, because they were too fresh, so it couldn't free enough cache and thus outputs warnings.
The problem didn't appear in Tomcat 7 because Tomcat 7 simply didn't output warnings in this situation. (Causing you and me to use poor cache settings without being notified.)
The problem appears when a relatively large number of HTTP requests for (usually static) resources arrives in a relatively short time period compared to the size and TTL of the cache. If the cache is at its maximum (10 MB by default) and more than 95% of it consists of fresh cache entries (fresh means less than 5 seconds in the cache), then you will get a warning message for each webResource that Tomcat tries to load into the cache.
1.2 Optional info
Use JMX if you need to tune cacheMaxSize on a running server without rebooting it.
The quickest fix would be to completely disable cache: <Resources cachingAllowed="false" />, but that's suboptimal, so increase cacheMaxSize as I just described.
2. Long Answer
2.1 Background information
A WebResource is a file or directory in a web application. For performance reasons, Tomcat can cache WebResources. The maximum size of the static resource cache (all resources in total) is by default 10240 KB (10 MB). A webResource is loaded into the cache when it is requested (for example when loading a static image); it is then called a cache entry.
Every cache entry has a TTL (time to live), which is the time that the cache entry is allowed to stay in the cache. When the TTL expires, the cache entry is eligible to be removed from the cache. The default value of the cacheTTL is 5000 milliseconds (5 seconds).
There is more to tell about caching, but that is irrelevant for the problem.
2.2 The cause
The following code from the Cache class shows the caching policy in detail:
// Content will not be cached but we still need metadata size
long delta = cacheEntry.getSize();
size.addAndGet(delta);

if (size.get() > maxSize) {
    // Process resources unordered for speed. Trades cache
    // efficiency (younger entries may be evicted before older
    // ones) for speed since this is on the critical path for
    // request processing
    long targetSize =
            maxSize * (100 - TARGET_FREE_PERCENT_GET) / 100;
    long newSize = evict(
            targetSize, resourceCache.values().iterator());
    if (newSize > maxSize) {
        // Unable to create sufficient space for this resource
        // Remove it from the cache
        removeCacheEntry(path);
        log.warn(sm.getString("cache.addFail", path));
    }
}
When loading a webResource, the code calculates the new size of the cache. If the calculated size is larger than the default maximum size, then one or more cached entries have to be removed, otherwise the new size will exceed the maximum. So the code calculates a "targetSize", which is the size the cache wants to stay under (as an optimum), by default 95% of the maximum. In order to reach this targetSize, entries have to be removed/evicted from the cache. This is done using the following code:
private long evict(long targetSize, Iterator<CachedResource> iter) {

    long now = System.currentTimeMillis();

    long newSize = size.get();

    while (newSize > targetSize && iter.hasNext()) {
        CachedResource resource = iter.next();

        // Don't expire anything that has been checked within the TTL
        if (resource.getNextCheck() > now) {
            continue;
        }

        // Remove the entry from the cache
        removeCacheEntry(resource.getWebappPath());

        newSize = size.get();
    }

    return newSize;
}
So a cache entry is removed when its TTL is expired and the targetSize hasn't been reached yet.
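To make the numbers concrete: with the default maxSize of 10240 KB and the 95% target described above (TARGET_FREE_PERCENT_GET is 5 in the code), targetSize = 10240 * 95 / 100 = 9728 KB, so the evict loop has to free roughly 512 KB worth of expired entries before the new resource fits.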
After the attempt to free cache by evicting cache entries, the code will do:
if (newSize > maxSize) {
    // Unable to create sufficient space for this resource
    // Remove it from the cache
    removeCacheEntry(path);
    log.warn(sm.getString("cache.addFail", path));
}
So if after the attempt to free cache, the size still exceeds the maximum, it will show the warning message about being unable to free:
cache.addFail=Unable to add the resource at [{0}] to the cache for web application [{1}] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
2.3 The problem
So as the warning message says, the problem is
insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
If your web application loads a lot of uncached webResources (roughly the maximum size of the cache, 10 MB by default) within a short time (5 seconds), then you'll get the warning.
The confusing part is that Tomcat 7 didn't show the warning. This is simply caused by this Tomcat 7 code:
// Add new entry to cache
synchronized (cache) {
    // Check cache size, and remove elements if too big
    if ((cache.lookup(name) == null) && cache.allocate(entry.size)) {
        cache.load(entry);
    }
}
combined with:
while (toFree > 0) {
    if (attempts == maxAllocateIterations) {
        // Give up, no changes are made to the current cache
        return false;
    }
So Tomcat 7 simply doesn't output any warning at all when it's unable to free cache, whereas Tomcat 8 will output a warning.
So if you are using Tomcat 8 with the same default caching configuration as Tomcat 7 and you get warnings in Tomcat 8, then your (and my) caching settings of Tomcat 7 were performing poorly without warning.
2.4 Solutions
There are multiple solutions:
Increase cache (recommended)
Lower the TTL (not recommended)
Suppress cache log warnings (not recommended)
Disable cache
2.4.1. Increase cache (recommended)
As described here: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
By adding <Resources cacheMaxSize="XXXXX" /> within the Context element in $CATALINA_BASE/conf/context.xml, where "XXXXX" stands for an increased cache size, specified in kbytes. The default is 10240 (10 MB), so set a size higher than this.
You'll have to tune for optimum settings. Note that the problem may come back when you suddenly have an increase in traffic/resource requests.
To avoid having to restart the server every time you want to try a new cache size, you can change it without restarting by using JMX.
To enable JMX, add this to $CATALINA_BASE/conf/server.xml within the Server element:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="6767" rmiServerPortPlatform="6768" />
Then download catalina-jmx-remote.jar from https://tomcat.apache.org/download-80.cgi and put it in $CATALINA_HOME/lib.
Then use jConsole (shipped by default with the Java JDK) to connect over JMX to the server and look through the settings for a way to increase the cache size while the server is running. Changes to these settings should take effect immediately.
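If you prefer to script the change instead of clicking through jConsole, below is a minimal sketch of a standalone JMX client. The object name ("Catalina:type=Cache,host=localhost,context=/myapp") and the attribute name "cacheMaxSize" are assumptions for illustration only; browse the MBean tree in jConsole first and adjust them to what your Tomcat version actually exposes.

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TomcatCacheTuner {
    public static void main(String[] args) throws Exception {
        // Ports match the JmxRemoteLifecycleListener configured above (6767/6768)
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://localhost:6768/jndi/rmi://localhost:6767/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Hypothetical MBean name and attribute: the exact domain, keys and
            // attribute names depend on your Tomcat version, host and context,
            // so verify them in jConsole before running this.
            ObjectName cacheMBean = new ObjectName(
                    "Catalina:type=Cache,host=localhost,context=/myapp");
            mbs.setAttribute(cacheMBean, new Attribute("cacheMaxSize", 51200L));
        } finally {
            connector.close();
        }
    }
}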
2.4.2. Lower the TTL (not recommended)
Lower the cacheTtl value to something lower than 5000 milliseconds and tune for optimal settings.
For example: <Resources cacheTtl="2000" />
This effectively comes down to having and filling a cache in RAM without using it.
2.4.3. Suppress cache log warnings (not recommended)
Configure logging to disable the logger for org.apache.catalina.webresources.Cache.
For more info about logging in Tomcat: http://tomcat.apache.org/tomcat-8.0-doc/logging.html
2.4.4. Disable cache
You can disable the cache by setting cachingAllowed to false.
<Resources cachingAllowed="false" />
Although I can remember that in a beta version of Tomcat 8, I was using JMX to disable the cache. (Not sure why exactly, but there may be a problem with disabling the cache via server.xml.)
In your $CATALINA_BASE/conf/context.xml, add the block below before </Context>:
<Resources cachingAllowed="true" cacheMaxSize="100000" />
For more information: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
You have more static resources than the cache has room for. You can do one of the following:
Increase the size of the cache
Decrease the TTL for the cache
Disable caching
For more details see the documentation for these configuration options.
This isn’t a solution in the sense that it doesn’t resolve the conditions which cause the message to appear in the logs, but the message can be suppressed by appending the following to conf/logging.properties:
org.apache.catalina.webresources.Cache.level = SEVERE
This filters out the “Unable to add the resource” logs, which are at level WARNING.
In my view a WARNING is not necessarily an error that needs to be addressed, but rather can be ignored if desired.
One more tip (an issue I ran into):
$CATALINA_BASE/conf/context.xml may be overridden by IntelliJ.
Just add the following inside the <Context> </Context> block in your Tomcat/apache-tomcat-x.x.x/conf/context.xml:
<Resources cachingAllowed="true" cacheMaxSize="100000" />
Is it possible to change the Hazelcast configuration at runtime, and if so, which parameters are modifiable?
It seems to be possible using Hazelcast Management Center, but I can't find any examples/references in the official docs/forums.
Might be a bit late to answer your question but better late than never :)
You can modify some of the map config properties after the map has been created using the MapService:
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
// create map
IMap<String, Integer> myMap = instance.getMap("myMap");
// create a new map config
MapConfig newMapConfig = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(1);
// submit the new map config to the map service
MapService mapService = (MapService)(((AbstractDistributedObject)instance.getDistributedObject(MapService.SERVICE_NAME, "")).getService());
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(newMapConfig);
Note that this API is not visible/documented so it might not work in future versions.
We are using this in our application when we need to insert several million entries in a distributed map at startup. Disabling the backup cut the insertion time by 30%. After the data are inserted, we enable the backup.
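As an illustration, here is a minimal sketch of that bulk-load pattern, reusing the same (internal, undocumented) MapService calls from the snippet above; bulkLoad is a hypothetical helper standing in for our insertion code:

HazelcastInstance instance = Hazelcast.newHazelcastInstance();
IMap<String, Integer> myMap = instance.getMap("myMap");

MapService mapService = (MapService) (((AbstractDistributedObject) instance
        .getDistributedObject(MapService.SERVICE_NAME, "")).getService());

// Disable async backups while the initial entries are inserted
MapConfig noBackup = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(0);
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(noBackup);

bulkLoad(myMap); // hypothetical helper that inserts the several million entries

// Re-enable one async backup once the load is done
MapConfig oneBackup = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(1);
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(oneBackup);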
The Hazelcast internals are not really designed to be modifiable. What do you want to modify?
I'm using the .NET 4 MemoryCache. I would like to limit the size of the cache, say to 10 MB, because I don't want my application to abuse what goes in there.
I would also like to know how much memory my cache is at any given time. How can I tell at run time?
You can specify the maximum amount of physical memory dedicated to the MemoryCache in the application config file using the namedCaches element, or by passing the setting into the NameValueCollection given to the MemoryCache constructor, with an entry whose key is cacheMemoryLimitMegabytes and whose value is 10.
Here is an example of the namedCaches configuration element:
<configuration>
<system.runtime.caching>
<memoryCache>
<namedCaches>
<add name="Default"
cacheMemoryLimitMegabytes="10"
physicalMemoryLimitPercentage="0"
pollingInterval="00:05:00" />
</namedCaches>
</memoryCache>
</system.runtime.caching>
</configuration>
And here is how you can configure the MemoryCache during creation:
//Create a name / value pair for properties
var config = new NameValueCollection();
config.Add("pollingInterval", "00:05:00");
config.Add("physicalMemoryLimitPercentage", "0");
config.Add("cacheMemoryLimitMegabytes", "10");
//instantiate cache
var cache = new MemoryCache("CustomCache", config);
This blog post details just about all of the ways to configure the MemoryCache object, and some examples were adapted from this source.
You can do this in configuration... for example...
<system.runtime.caching>
<memoryCache>
<namedCaches>
<add name="Default"
cacheMemoryLimitMegabytes="52"
physicalMemoryLimitPercentage="40"
pollingInterval="00:04:01" />
</namedCaches>
</memoryCache>
</system.runtime.caching>
To do this in code, see this MSDN page.
It seems at this time the maximum amount of memory allocated for the cache can not be enforced. See this post for further reference: MemoryCache does not obey memory limits in configuration