Coherence Flush Delay Setting - caching

I want a cache that checks its own items for expiry. My cache config is below:
<?xml version="1.0" encoding="UTF-8"?>
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>subscriberinfo</cache-name>
      <scheme-name>distributed-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>distributed-scheme</scheme-name>
      <lease-granularity>member</lease-granularity>
      <service-name>DistributedCache</service-name>
      <serializer>
        <instance>
          <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          <init-params>
            <init-param>
              <param-type>String</param-type>
              <param-value>rbm-shovel-pof-config.xml</param-value>
            </init-param>
          </init-params>
        </instance>
      </serializer>
      <backing-map-scheme>
        <local-scheme>
          <unit-calculator>BINARY</unit-calculator>
          <expiry-delay>24h</expiry-delay>
          <flush-delay>180</flush-delay>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
But the thing is, flush-delay cannot be set. Any ideas?
Thanks

Which version of Coherence do you use?
In Coherence 3.7, flush-delay has been removed from the DTD; it had been deprecated since version 3.5.
Flushing now happens only when inserting new objects (have a look at eviction-policy) or when accessing expired objects (look at expiry-delay).
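A minimal sketch of the question's backing map with flush-delay simply removed (the rest of the scheme unchanged), which should validate on 3.7:
<backing-map-scheme>
  <local-scheme>
    <unit-calculator>BINARY</unit-calculator>
    <!-- entries expire 24h after insert/update; no flush-delay needed -->
    <expiry-delay>24h</expiry-delay>
  </local-scheme>
</backing-map-scheme>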

Coherence deprecated FlushDelay and the related settings in 3.5. All of that work is done automatically now:
- Expired items are automatically removed, and the eviction/expiry events are raised accordingly.
- You will never see expired data in the cache; even if you try to access it just as it expires, the expiry will occur as an event before the data access occurs.
- Eviction (for memory limits) is now done asynchronously, so that the "sharp edges" of the side effects of eviction (such as event storms) are spread out across natural cycles (with the cycle length calculated based on the estimated rate of eviction).
I hope that helps.
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.

But the thing is, flush-delay cannot be set. Any ideas?
What do you mean by this? Does the system throw errors, or are the expired items not getting removed from the cache? Based on the configuration you have, an entry should be removed from the cache 24 hours (plus up to the 180-second flush delay) after the last update to that entry.
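If it's the latter, a quick way to confirm the expiry behaviour from the Coherence API is a sketch like this (cache name taken from the question; the three-argument put sets a per-entry TTL):
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExpiryProbe {
    public static void main(String[] args) throws Exception {
        NamedCache cache = CacheFactory.getCache("subscriberinfo");
        cache.put("key", "value");          // expires per the scheme's 24h expiry-delay
        cache.put("probe", "value", 5000L); // per-entry TTL override: 5 seconds
        Thread.sleep(6000L);
        System.out.println(cache.get("probe")); // prints null: expired data is never returned
        CacheFactory.shutdown();
    }
}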

Related

Tomcat 8 throwing - org.apache.catalina.webresources.Cache.getResource Unable to add the resource

I have just upgraded Tomcat from version 7.0.52 to 8.0.14.
I am getting this for lots of static image files:
org.apache.catalina.webresources.Cache.getResource Unable to add the
resource at [/base/1325/WA6144-150x112.jpg] to the cache because there
was insufficient free space available after evicting expired cache
entries - consider increasing the maximum size of the cache
I haven't specified any particular resource settings, and I didn't get this for 7.0.52.
I have found mention of this happening at startup in a bug report that was supposedly fixed. For me this is happening not at startup but constantly when the resource is requested.
Anybody else having this issue?
I'm trying to at least disable the cache, but I cannot find an example of how to specify not to use it. The attributes have gone from the Context element in Tomcat 8. I have tried adding a Resource but cannot get the config right:
<Resource name="file"
cachingAllowed="false"
className="org.apache.catalina.webresources.FileResourceSet"
/>
Thanks.
I had the same issue when upgrading from Tomcat 7 to 8: a continuous large flood of log warnings about cache.
1. Short Answer
Add this within the Context xml element of your $CATALINA_BASE/conf/context.xml:
<!-- The default value is 10240 kbytes, even when not added to context.xml.
So increase it high enough, until the problem disappears, for example set it to
a value 5 times as high: 51200. -->
<Resources cacheMaxSize="51200" />
The default is 10240 (10 MB), so set a size higher than that, then tune for optimum settings where the warnings disappear.
Note that the warnings may come back under higher traffic situations.
1.1 The cause (short explanation)
The problem is caused by Tomcat being unable to reach its target cache size because the cached entries are younger than their TTL. Tomcat didn't have enough expired cache entries that it could evict, because they were too fresh, so it couldn't free enough cache and thus output warnings.
The problem didn't appear in Tomcat 7 because Tomcat 7 simply didn't output warnings in this situation. (Causing you and me to use poor cache settings without being notified.)
The problem appears when a relatively large number of HTTP requests for resources (usually static) arrive in a relatively short time period, compared to the size and TTL of the cache. If the cache is at its maximum (10 MB by default) with more than 95% of its size consisting of fresh cache entries (fresh meaning less than 5 seconds in the cache), then you will get a warning message for each webResource that Tomcat tries to load into the cache.
1.2 Optional info
Use JMX if you need to tune cacheMaxSize on a running server without rebooting it.
The quickest fix would be to completely disable the cache with <Resources cachingAllowed="false" />, but that's suboptimal, so increase cacheMaxSize as just described.
2. Long Answer
2.1 Background information
A webResource is a file or directory in a web application. For performance reasons, Tomcat can cache webResources. The maximum size of the static resource cache (all resources in total) is by default 10240 kbytes (10 MB). A webResource is loaded into the cache when it is requested (for example, when loading a static image); it is then called a cache entry.
Every cache entry has a TTL (time to live), which is the time that the cache entry is allowed to stay in the cache. When the TTL expires, the cache entry is eligible to be removed from the cache. The default value of the cacheTTL is 5000 milliseconds (5 seconds).
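For reference, those two defaults written out explicitly (equivalent to not configuring a Resources element at all):
<Resources cacheMaxSize="10240" cacheTtl="5000" />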
There is more to tell about caching, but that is irrelevant for the problem.
2.2 The cause
The following code from the Cache class shows the caching policy in detail:
// Content will not be cached but we still need metadata size
long delta = cacheEntry.getSize();
size.addAndGet(delta);

if (size.get() > maxSize) {
    // Process resources unordered for speed. Trades cache
    // efficiency (younger entries may be evicted before older
    // ones) for speed since this is on the critical path for
    // request processing
    long targetSize =
            maxSize * (100 - TARGET_FREE_PERCENT_GET) / 100;
    long newSize = evict(
            targetSize, resourceCache.values().iterator());
    if (newSize > maxSize) {
        // Unable to create sufficient space for this resource
        // Remove it from the cache
        removeCacheEntry(path);
        log.warn(sm.getString("cache.addFail", path));
    }
}
When loading a webResource, the code calculates the new size of the cache. If the calculated size is larger than the default maximum size, then one or more cached entries have to be removed; otherwise the new size would exceed the maximum. So the code calculates a "targetSize", the size the cache wants to stay under (as an optimum), which is by default 95% of the maximum. In order to reach this targetSize, entries have to be removed/evicted from the cache. This is done using the following code:
private long evict(long targetSize, Iterator<CachedResource> iter) {
    long now = System.currentTimeMillis();
    long newSize = size.get();
    while (newSize > targetSize && iter.hasNext()) {
        CachedResource resource = iter.next();
        // Don't expire anything that has been checked within the TTL
        if (resource.getNextCheck() > now) {
            continue;
        }
        // Remove the entry from the cache
        removeCacheEntry(resource.getWebappPath());
        newSize = size.get();
    }
    return newSize;
}
So a cache entry is removed when its TTL is expired and the targetSize hasn't been reached yet.
After the attempt to free cache by evicting cache entries, the code will do:
if (newSize > maxSize) {
    // Unable to create sufficient space for this resource
    // Remove it from the cache
    removeCacheEntry(path);
    log.warn(sm.getString("cache.addFail", path));
}
So if after the attempt to free cache, the size still exceeds the maximum, it will show the warning message about being unable to free:
cache.addFail=Unable to add the resource at [{0}] to the cache for web application [{1}] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
2.3 The problem
So as the warning message says, the problem is
insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
If your web application loads a lot of uncached webResources (up to roughly the maximum size of the cache, 10 MB by default) within a short time (5 seconds), then you'll get the warning.
The confusing part is that Tomcat 7 didn't show the warning. This is simply caused by this Tomcat 7 code:
// Add new entry to cache
synchronized (cache) {
    // Check cache size, and remove elements if too big
    if ((cache.lookup(name) == null) && cache.allocate(entry.size)) {
        cache.load(entry);
    }
}
combined with:
while (toFree > 0) {
    if (attempts == maxAllocateIterations) {
        // Give up, no changes are made to the current cache
        return false;
    }
So Tomcat 7 simply doesn't output any warning at all when it's unable to free cache, whereas Tomcat 8 will output a warning.
So if you are using Tomcat 8 with the same default caching configuration as Tomcat 7 and you get warnings in Tomcat 8, then your (and my) Tomcat 7 caching settings were performing poorly without warning.
2.4 Solutions
There are multiple solutions:
Increase cache (recommended)
Lower the TTL (not recommended)
Suppress cache log warnings (not recommended)
Disable cache
2.4.1. Increase cache (recommended)
As described here: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
Add <Resources cacheMaxSize="XXXXX" /> within the Context element in $CATALINA_BASE/conf/context.xml, where "XXXXX" stands for an increased cache size, specified in kbytes. The default is 10240 (10 MB), so set a size higher than this.
You'll have to tune for optimum settings. Note that the problem may come back when you suddenly have an increase in traffic/resource requests.
To avoid having to restart the server every time you want to try a new cache size, you can change it without restarting by using JMX.
To enable JMX, add this to $CATALINA_BASE/conf/server.xml within the Server element:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="6767" rmiServerPortPlatform="6768" />
Then download catalina-jmx-remote.jar from https://tomcat.apache.org/download-80.cgi and put it in $CATALINA_HOME/lib.
Then use jConsole (shipped by default with the Java JDK) to connect over JMX to the server, and look through the settings for one that increases the cache size while the server is running. Changes to these settings should take effect immediately.
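If you'd rather script the change than click through jConsole, a plain JMX client can set the attribute. This is only a sketch: the ObjectName pattern and attribute name below are assumptions that vary by Tomcat version, so browse the Catalina domain in jConsole first to find the exact MBean your server exposes.
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TuneResourceCache {
    public static void main(String[] args) throws Exception {
        // Ports match the JmxRemoteLifecycleListener configuration above.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://localhost:6768/jndi/rmi://localhost:6767/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // ASSUMPTION: verify the exact ObjectName and attribute in jConsole.
            ObjectName cache = new ObjectName(
                    "Catalina:type=Cache,host=localhost,context=/myapp");
            mbs.setAttribute(cache, new Attribute("maxSize", 51200L));
        }
    }
}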
2.4.2. Lower the TTL (not recommended)
Lower the cacheTtl value to something below 5000 milliseconds and tune for optimal settings.
For example: <Resources cacheTtl="2000" />
This effectively comes down to having and filling a cache in RAM without using it.
2.4.3. Suppress cache log warnings (not recommended)
Configure logging to disable the logger for org.apache.catalina.webresources.Cache.
For more info about logging in Tomcat: http://tomcat.apache.org/tomcat-8.0-doc/logging.html
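For example, raising the threshold for that logger in $CATALINA_BASE/conf/logging.properties hides the WARNING-level messages:
org.apache.catalina.webresources.Cache.level = SEVERE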
2.4.4. Disable cache
You can disable the cache by setting cachingAllowed to false.
<Resources cachingAllowed="false" />
Although I remember that in a beta version of Tomcat 8, I used JMX to disable the cache. (Not sure why exactly, but there may be a problem with disabling the cache via server.xml.)
In your $CATALINA_BASE/conf/context.xml add block below before </Context>
<Resources cachingAllowed="true" cacheMaxSize="100000" />
For more information: http://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
You have more static resources than the cache has room for. You can do one of the following:
Increase the size of the cache
Decrease the TTL for the cache
Disable caching
For more details see the documentation for these configuration options.
This isn’t a solution in the sense that it doesn’t resolve the conditions which cause the message to appear in the logs, but the message can be suppressed by appending the following to conf/logging.properties:
org.apache.catalina.webresources.Cache.level = SEVERE
This filters out the “Unable to add the resource” logs, which are at level WARNING.
In my view a WARNING is not necessarily an error that needs to be addressed, but rather can be ignored if desired.
One more tip (an issue I ran into): $CATALINA_BASE/conf/context.xml may be overridden by IntelliJ. In that case, add the following inside the <Context> </Context> block in your Tomcat/apache-tomcat-x.x.x/conf/context.xml:
<Resources cachingAllowed="true" cacheMaxSize="100000" />

Oracle Coherence Refresh-Ahead: refresh doesn't work if the cache is queried earlier than soft-expiration period

I'm seeing strange behaviour with the refresh-ahead functionality.
Here is my configuration:
<cache-config>
  <defaults>
    <serializer>pof</serializer>
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>sample</cache-name>
      <scheme-name>extend-near-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near-distributed</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>20000</high-units>
          <expiry-delay>10s</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <service-name>sample</service-name>
      <thread-count>20</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <expiry-delay>10s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.sample.CustomCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
If I request my service with a period of 6s (10s * 0.5), everything is fine: there is no delay in the response (except the first time). But if I change the period to 3 seconds, for example, I start getting delays every 10 seconds. I have no idea why this is happening. It looks like if I request my service before the expected period (between 5 and 10 seconds), the asynchronous loading doesn't happen, even if I request it again after that. Is there any explanation for this, and how can I bypass this behaviour?
Thanks
The problem has been solved. The reason I got into this situation is that the front scheme didn't notify the back scheme because both had the same expiration time. In a few words: to use the refresh-ahead functionality with a near cache, you have to set the expiration time of the front scheme equal to the soft-expiration time (in this case 10s * 0.5 = 5s).
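A sketch of the adjusted front-scheme under that reasoning (only expiry-delay changes relative to the config in the question):
<front-scheme>
  <local-scheme>
    <high-units>20000</high-units>
    <!-- match the back scheme's soft-expiry: 10s * 0.5 -->
    <expiry-delay>5s</expiry-delay>
  </local-scheme>
</front-scheme>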

Web flow Exception: No flow execution snapshot could be found with id '1'

I am getting the exception below whenever I switch from one state to another more than 15 times in web flow.
No flow execution snapshot could be found with id '1'; perhaps the snapshot has been removed? Stacktrace follows:
org.springframework.webflow.execution.repository.FlowExecutionRestorationFailureException: A problem occurred restoring the flow execution with key 'e7s1'
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.springframework.webflow.execution.repository.snapshot.SnapshotNotFoundException: No flow execution snapshot could be found with id '1'; perhaps the snapshot has been removed?
... 3 more
I am using the Grails webflow plug-in.
Does anyone have any idea why this is occurring and how to resolve it?
Web Flow only retains a specified number of Executions ("e7") and Snapshots ("s1") in its repository at one time. This is useful in order to limit how much memory can be consumed. I'm guessing the default is 15, in which case Execution "e7" can have 15 Snapshots as you move from state to state. Once you've hit "e7s16", "s1" will be discarded, thus giving you the result you see.
You can change the defaults with the <webflow:flow-execution-repository> configuration element:
<!-- Executes flows: the entry point into the Spring Web Flow system -->
<webflow:flow-executor id="flowExecutor" flow-registry="flowRegistry">
<webflow:flow-execution-repository max-execution-snapshots="20" max-executions="2"/>
</webflow:flow-executor>
Quoting that link above:
Tune the max-execution-snapshots attribute to place a cap on the number of history
snapshots that can be taken per flow execution. To disable
snapshotting, set this value to 0. To enable an unlimited number of
snapshots, set this value to -1.
I do find the default behavior unacceptable, though, when you happen to visit an expired Snapshot and you just get an Exception. Another recent question asked about how to catch that case, presumably so you can do something more useful when it occurs.
(We implemented custom code to carry along in the HttpSession the last valid Snapshot so that we could send the user there when that exception occurs.)
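For plain Spring Web Flow, one way to catch that case is to override handleException in a FlowHandler. This is only a sketch (the class name and redirect path are hypothetical, and the Grails plugin may need a different hook); rather than restoring a saved snapshot, it simply restarts the flow:
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.webflow.core.FlowException;
import org.springframework.webflow.execution.repository.FlowExecutionRestorationFailureException;
import org.springframework.webflow.mvc.servlet.AbstractFlowHandler;

public class OrderFlowHandler extends AbstractFlowHandler {
    @Override
    public String handleException(FlowException e, HttpServletRequest request,
            HttpServletResponse response) {
        if (e instanceof FlowExecutionRestorationFailureException) {
            // Stale or evicted snapshot: send the user back to the flow's
            // start instead of surfacing the exception.
            return "/order"; // hypothetical flow path
        }
        return super.handleException(e, request, response);
    }
}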
Thanks for your help. But I am using a Grails application; I fixed it using the following code.
DefaultFlowExecutionRepository defaultFlowExecutionRepository=(DefaultFlowExecutionRepository)Holders.applicationContext.getBean('flowExecutionRepository');
defaultFlowExecutionRepository.setMaxSnapshots(100)
For newer versions of Spring Web Flow, as seen here:
https://docs.spring.io/spring-webflow/docs/2.4.4.RELEASE/reference/html/system-setup.html#tuning-flow-execution-repository
You can simply write this in your Java configuration code:
@Bean
public FlowExecutor flowExecutor() {
    return getFlowExecutorBuilder(flowRegistry())
            .setMaxFlowExecutions(5)
            .setMaxFlowExecutionSnapshots(30)
            .build();
}
If you want to keep an unlimited number of snapshot versions, just set maxFlowExecutionSnapshots to -1.

How to implement "Distributed cache clearing" in Ofbiz?

We have multiple instances of OFBiz/Opentaps running, and all the instances talk to the same database. Many tables are rarely updated, hence they are cached, and each instance maintains its own copy of the cache via the standard OFBiz cache mechanism. But in the rare situations when we update some entity through one of the many instances, all the other instances keep showing dirty cache data. So it requires a manual action to go and clear all the cache copies on the other instances as well.
I want this cache-clearing operation on all the instances to happen automatically. On the OFBiz confluence page here there is a very brief mention of "Distributed cache clearing". It seems to rely on JMS: whenever an instance's cache is cleared, it sends a notification to a JMS topic, and the other instances subscribing to the same topic clear their corresponding copies of the cache upon this notification. But I could not find any other reference or documentation on how to do that. What are the files that need to be updated to set it all up in OFBiz? An example page/link is what I'm looking for.
Alright, I believe I've figured it all out. I used ActiveMQ as my JMS broker, so here are the steps to set it up in OFBiz:
1. Copy activemq-all.jar to the framework/base/lib folder inside your OFBiz base directory.
2. Edit file base/config/jndiservers.xml: add the following definition inside the <jndi-config> tag:
<jndi-server name="activemq"
context-provider-url="failover:(tcp://jms.host1:61616,tcp://jms.host2:61616)?jms.useAsyncSend=true&amp;timeout=5000"
initial-context-factory="org.apache.activemq.jndi.ActiveMQInitialContextFactory"
url-pkg-prefixes=""
security-principal=""
security-credentials=""/>
3. Edit file base/config/jndi.properties: add this line at the end:
topic.ofbiz-cache=ofbiz-cache
4. Edit file service/config/serviceengine.xml: add the following definition inside the <service-engine> tag:
<jms-service name="serviceMessenger" send-mode="all">
<server jndi-server-name="activemq"
jndi-name="ConnectionFactory"
topic-queue="ofbiz-cache"
type="topic"
listen="true"/>
</jms-service>
5. Edit file entityengine.xml: change the default delegator to enable distributed caching:
<delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="true">
6. Edit file framework/service/src/org/ofbiz/service/jms/AbstractJmsListener.java (this one is probably a bug in the OFBiz code).
Change the following line from:
this.dispatcher = GenericDispatcher.getLocalDispatcher("JMSDispatcher", null, null, this.getClass().getClassLoader(), serviceDispatcher);
To:
this.dispatcher = GenericDispatcher.getLocalDispatcher("entity-default", null, null, this.getClass().getClassLoader(), serviceDispatcher);
7. Finally, build the service engine code by issuing the following command:
ant -f framework/service/build.xml
With this, entity data changes made on one OFBiz instance are immediately propagated to all the other instances, each clearing its own cache line item without any need for manual cache clearing.
Cheers.
I have added a page on this subject in the OFBiz wiki: https://cwiki.apache.org/OFBIZ/distributed-entity-cache-clear-mechanism.html. Though it's well explained here, the OFBiz wiki page adds other important information.
Note that the bug reported here has since been fixed, but another issue is currently pending; I should fix it soon: https://issues.apache.org/jira/browse/OFBIZ-4296
Jacques
Yes, I fixed this behaviour some time ago at http://svn.apache.org/viewvc?rev=1090961&view=rev. But it still needs another fix related to https://issues.apache.org/jira/browse/OFBIZ-4296.
The patch below fixes this issue locally, but still creates 2 listeners on clusters, not sure why... Still investigating (not a priority)...
Index: framework/entity/src/org/ofbiz/entity/DelegatorFactory.java
===================================================================
--- framework/entity/src/org/ofbiz/entity/DelegatorFactory.java (revision 1879)
+++ framework/entity/src/org/ofbiz/entity/DelegatorFactory.java (revision 2615)
@@ -39,10 +39,10 @@
if (delegator != null) {
+ // setup the distributed CacheClear
+ delegator.initDistributedCacheClear();
+
// setup the Entity ECA Handler
delegator.initEntityEcaHandler();
//Debug.logInfo("got delegator(" + delegatorName + ") from cache", module);
-
- // setup the distributed CacheClear
- delegator.initDistributedCacheClear();
return delegator;
Please notify me by using #JacquesLeRoux in your post if you ever have something new to share.

How long do Drupal caches last?

Using the devel module I can see a lot of calls to cache_get() and cache_set(). After how long does a cached value need to be refreshed? Does the cache get invalidated every few minutes?
The module that calls cache_set sets the expiration in the call. Some things have explicit durations; others have permanent or semi-permanent lifetimes, depending on the situation.
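For illustration, a sketch against the Drupal 7 cache API (the cache IDs and data here are made up); the fourth argument to cache_set is what controls expiry:
// Hypothetical module data; cache IDs and bin are made up for illustration.
$data = array('answer' => 42);
// Absolute expiry: this entry becomes eligible for removal in one hour.
cache_set('mymodule_data', $data, 'cache', REQUEST_TIME + 3600);
// Permanent: kept until an explicit cache_clear_all() or a full cache flush.
cache_set('mymodule_settings', $data, 'cache', CACHE_PERMANENT);
// cache_get() returns FALSE on a miss, or an object whose data property
// holds the stored value.
$cached = cache_get('mymodule_data');
$value = $cached ? $cached->data : NULL;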
Caches get explicitly cleared when you invoke the method through the admin interface (or drush), or otherwise through the use of drupal_flush_all_caches or cache_clear_all.
Lately, I have been using a hook_cron to clear certain cache tables each night.
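A sketch of that nightly approach, assuming a custom module named mymodule and the standard Drupal 7 API:
/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // Wildcard-clear everything in the page cache bin on each cron run.
  cache_clear_all('*', 'cache_page', TRUE);
}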
EDIT to answer comment:
To see which cache, I usually put this in a separate script somewhere:
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
header("Content-Type: text/plain; encoding=utf-8");
$user = user_load(1);
print "Modules implementing hook_cron:\n" . implode("\n", module_implements('cron'));
To see expirations, examine the various cache tables in the database and look at the expire column. Modules can set expirations on each individual call to cache_set, so it can vary entry by entry.
