I am using Infinispan as an L2 cache and I have two application nodes. The L2 caches of the two apps are replicated, but the apps themselves are not identical: one fills the database via web services, while the other runs a GUI for the database.
Both apps read from and write to the database extensively. After running them I saw the following error, and I do not know what causes it.
I wonder why:
- my cache instances do not properly replicate each change to the other,
- the L2 cache gets two responses, and
- the two responses are not equal.
ERROR org.infinispan.interceptors.InvocationContextInterceptor - ISPN000136: Execution error
2013-05-29 06:32:32 ERROR - Exception while processing event, reason: org.infinispan.loaders.CacheLoaderException: Responses contains more than 1 element and these elements are not equal, so can't decide which one to use:
[SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152081} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#57991642}} ,
SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152116} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#6cdaa731}} ]
My Infinispan configuration is:
<global>
   <globalJmxStatistics enabled="true" jmxDomain="org.infinispan" allowDuplicateDomains="true"/>
   <transport
         transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
         clusterName="infinispan-hibernate-cluster"
         distributedSyncTimeout="50000"
         strictPeerToPeer="false">
      <properties>
         <property name="configurationFile" value="jgroups.xml"/>
      </properties>
   </transport>
</global>
<default>
</default>
<namedCache name="my-cache-entity">
   <clustering mode="replication">
      <stateRetrieval fetchInMemoryState="false" timeout="60000"/>
      <sync replTimeout="20000"/>
   </clustering>
   <locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
            lockAcquisitionTimeout="15000" useLockStriping="false"/>
   <eviction maxEntries="10000" strategy="LRU"/>
   <expiration maxIdle="100000" wakeUpInterval="5000"/>
   <lazyDeserialization enabled="true"/>
   <!--<transaction useSynchronization="true"
        transactionMode="TRANSACTIONAL" autoCommit="false"
        lockingMode="OPTIMISTIC"/>-->
   <loaders passivation="false" shared="false" preload="false">
      <loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"
              fetchPersistentState="false"
              ignoreModifications="false" purgeOnStartup="false">
         <properties>
            <property name="remoteCallTimeout" value="20000"/>
         </properties>
      </loader>
   </loaders>
</namedCache>
Replicated entity caches should be configured with state retrieval, as already indicated in the default Infinispan configuration file, and you have already done so. ClusterCacheLoader should only be used in special situations (for query caching). Why not just use the default Infinispan configuration provided? In fact, if you don't configure a config file, it will use the default one.
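For reference, a minimal sketch of how Hibernate can be pointed at Infinispan so that the bundled default configuration is used; the property names are from the Hibernate/Infinispan integration of that era, and omitting hibernate.cache.infinispan.cfg is what makes it fall back to the default file:
import java.util.Properties;

public class HibernateInfinispanDefaults {
    public static Properties secondLevelCacheProperties() {
        Properties props = new Properties();
        props.setProperty("hibernate.cache.use_second_level_cache", "true");
        props.setProperty("hibernate.cache.region.factory_class",
                "org.hibernate.cache.infinispan.InfinispanRegionFactory");
        // To point at a custom file instead, you would set:
        // props.setProperty("hibernate.cache.infinispan.cfg", "my-infinispan.xml");
        return props;
    }
}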
My JMX JConsole always shows numberOfEntries=-1. How can I make it reflect the right number?
Details:
I'm working with Infinispan 14.0.3.Final with the following simple configuration. The JConsole statistics show statisticEnabled=Unavailable and numberOfEntries=-1, even though the hits value increases when I put entries into the cache.
<infinispan>
   <cache-container name="default" statistics="true">
      <jmx enabled="true" />
      <replicated-cache name="invoices" mode="SYNC"
                        statistics="true" statistics-available="true">
      </replicated-cache>
   </cache-container>
</infinispan>
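For comparison, here is a rough programmatic equivalent of the declarative configuration above, sketched against the Infinispan 14 embedded API (treat the exact builder calls as assumptions to verify):
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class StatisticsConfigSketch {
    public static void main(String[] args) {
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.cacheContainer().statistics(true) // container-level statistics
              .jmx().enable();                   // register MBeans with JMX

        ConfigurationBuilder invoices = new ConfigurationBuilder();
        invoices.clustering().cacheMode(CacheMode.REPL_SYNC)
                .statistics().enable();          // per-cache statistics

        try (DefaultCacheManager cm = new DefaultCacheManager(global.build())) {
            cm.defineConfiguration("invoices", invoices.build());
            cm.getCache("invoices").put("k", "v");
        }
    }
}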
I am using Mule 3.5.0 and trying to implement the Cache Strategy. The cache is hit by APIs to grab a Sugar CRM OAuth token, and multiple endpoints hit it.
My requirement is to keep only one active element in the cache, which serves the active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem arises when multiple inbound endpoints hit the cache: old values are also being served. Do I just need to change maxEntries to 1, or is there a better way of achieving this? (See the sketch after the flow below for the behavior I am after.)
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
   <in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
   <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
      ..............................
      ..............................
      ..............................
      <logger message="------------------------ Direct Call for Token----------------------" level="INFO" doc:name="Logger"/>
      <!-- DataMapper sets #[payload.access_token] -->
   </ee:cache>
   <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
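For what it's worth, the requirement amounts to a single-entry token cache with a 5-minute TTL. As a plain-Java sketch of the intended behavior (this is not Mule code; the names are illustrative):
import java.util.function.Supplier;

public class SingleTokenCache {
    private static final long TTL_MILLIS = 5 * 60 * 1000; // the 5-minute TTL
    private final Supplier<String> tokenFetcher;          // e.g. the Sugar CRM OAuth call
    private String token;
    private long fetchedAt;

    public SingleTokenCache(Supplier<String> tokenFetcher) {
        this.tokenFetcher = tokenFetcher;
    }

    public synchronized String getToken() {
        long now = System.currentTimeMillis();
        if (token == null || now - fetchedAt > TTL_MILLIS) {
            token = tokenFetcher.get(); // fetch a fresh token once per TTL window
            fetchedAt = now;
        }
        return token; // every caller inside the window sees the same token
    }
}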
The problem was that the first element inside ee:cache was a Set Payload component; I had to move it outside the Cache Scope. Sorry.
I am using JBoss Cache (tried versions 3.2.7 and 3.1.0) to have a replicated map for caching data between application servers. In the past I checked whether it works, and it did. My test environment always consisted of 2 nodes in the same network segment.
Since IT departments sometimes have a problem with UDP, we use TCP (TCPPING for discovery).
Now a customer has reported problems with our nodes losing their sync and not replicating data.
They have 4 nodes in 2 subnets (2 and 2). They say that when they only use 2 nodes in either subnet it works; when they start the third, the problems begin.
The log files show a lot of "merge" problems, indicating a partitioning problem.
So I did my own tests in my company. My setup is my laptop under Windows and two virtual machines with Ubuntu. The virtual machines use bridged network interfaces. DHCP is used, and our IT department provides my 3 nodes with IPs in different subnets. My host laptop is in a different net than the virtual machines. TCP communication between the nodes works, and there should be no firewall involved.
So much for my setup.
I wrote a little program that just initializes JBoss Cache, gets the cache (a map), changes values in the map at an interval, and afterwards shows the content of the whole map. Pretty simple, 2 classes involved.
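The program itself is not included in the post; a rough reconstruction against the JBoss Cache 3.x API might look like the following sketch (the config file name and the Fqn are placeholders):
import org.jboss.cache.Cache;
import org.jboss.cache.CacheFactory;
import org.jboss.cache.DefaultCacheFactory;
import org.jboss.cache.Fqn;

public class ReplicationTest {
    public static void main(String[] args) throws Exception {
        CacheFactory<String, String> factory = new DefaultCacheFactory<String, String>();
        // "jbosscache.xml" stands in for the configuration shown below
        Cache<String, String> cache = factory.createCache("jbosscache.xml", true);
        Fqn fqn = Fqn.fromString("/test/map");
        String self = System.getProperty("jgroups.bind_addr", "unknown");
        while (true) {
            cache.put(fqn, self, String.valueOf(System.currentTimeMillis()));
            System.out.println(cache.getNode(fqn).getData()); // print the whole map
            Thread.sleep(5000);
        }
    }
}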
My JBoss Cache setup is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:jboss:jbosscache-core:config:3.1">
<clustering mode="replication" clusterName="${jgroups.clustername:DEFAULT}">
<stateRetrieval timeout="20000" fetchInMemoryState="true" />
<sync replTimeout="20000" />
<jgroupsConfig>
<TCP start_port="${jgroups.tcpping.start_port:7800}" loopback="true" recv_buf_size="20000000"
send_buf_size="640000" discard_incompatible_packets="true"
max_bundle_size="64000" max_bundle_timeout="30"
use_incoming_packet_handler="true" enable_bundling="false"
use_send_queues="false" sock_conn_timeout="3000"
skip_suspected_members="true" use_concurrent_stack="true"
thread_pool.enabled="true" thread_pool.min_threads="1"
thread_pool.max_threads="25" thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="false" thread_pool.queue_max_size="100"
thread_pool.rejection_policy="run" oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="run" />
<TCPPING timeout="3000"
initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
port_range="3" num_initial_members="3" />
<MERGE2 max_interval="100000" min_interval="20000" />
<MERGE3 max_interval="100000" min_interval="20000" />
<FD_SOCK />
<FD timeout="10000" max_tries="5" shun="true" />
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false" gc_lag="0"
retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true" />
<UNICAST timeout="300,600,1200" />
<pbcast.STABLE stability_delay="1000"
desired_avg_gossip="50000" max_bytes="400000" />
<VIEW_SYNC avg_send_interval="60000" />
<pbcast.GMS print_local_addr="true" join_timeout="6000"
shun="true" view_bundling="true" />
<FC max_credits="2000000" min_threshold="0.10" />
<FRAG2 frag_size="60000" />
<pbcast.STREAMING_STATE_TRANSFER />
</jgroupsConfig>
</clustering>
</jbosscache>
When starting my test nodes, I provide them with the following system properties:
-Djgroups.bind_addr=NODE1
-Djgroups.tcpping.initial_hosts=NODE1[7900],NODE2[7900],NODE3[7900]
-Djgroups.tcpping.start_port=7900
From the logging messages (the GMS output) I can see that the node address is indeed, as specified, NODE-X[7900] for all nodes.
NODE1-3 are given as IP addresses, and those IPs can be reached from the other nodes.
NODE1 and NODE2 are in the same subnet; NODE3 is in a different subnet.
I did an incredible number of tests, changing the config, the JBoss Cache version, the combinations of nodes running, etc.
Sometimes it works, sometimes it doesn't.
One factor that seems to influence whether the members find each other is the initial-hosts list: the order of the hosts, whether hosts beyond the 3 given are listed, and whether a node's own IP is left out of the list all make the difference between the setup working or not. It also depends on the order in which the nodes are started.
I am sure it has to do with JGroups group membership. Maybe parameters need to be added to make it more robust.
I really would appreciate some hints on what to try in order to get the nodes talking to each other reliably.
In addition to trying to figure out the problems with JBoss Cache (JGroups), I did the same test using Hazelcast (TCP) instead. It works perfectly, without any problem, so the basic networking should work for the nodes.
I am considering switching to Hazelcast, but that would require redeployments in several of our customers' IT departments, and I would like to avoid that.
Sorry in advance for the long post; I wanted to be precise.
I have a Spring MVC 3.1 web application and have just built Ehcache into it, which I am using to cache all my lists of values (drop-downs) built from the database.
Here is an example of my settings...
<!-- Cache -->
<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache-core</artifactId>
<version>2.5.0</version>
</dependency>
...
<ehcache>
<diskStore path="java.io.tmpdir"/>
<cache
name="lovStore"
maxElementsInMemory="512"
eternal="false"
timeToIdleSeconds="60"
timeToLiveSeconds="60"
overflowToDisk="false"
memoryStoreEvictionPolicy="LRU"/>
</ehcache>
...
<cache:annotation-driven />
<bean id="cacheManager"
class="org.springframework.cache.ehcache.EhCacheCacheManager"
p:cache-manager-ref="ehcache"/>
<bean id="ehcache"
class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
p:config-location="classpath:ehcache.xml"/>
...
@Cacheable(value = "lovStore", key = "#code")
public Map<String, String> getLov(String code) throws ReportingManagerException {
return MockLOVHelper.getInstance().getLov(code);
}
A lot of the tutorials on the net talk about evicting the cache using a @CacheEvict method; however, that does not suit me. I think I am better off using the timeToLiveSeconds option to expire the cache.
When I look in the logs, the cache is definitely working, but the eviction of the cache is not. I've read some articles on the net about how timeToLiveSeconds doesn't truly evict the cache, and others, like http://code.google.com/p/ehcache-spring-annotations/wiki/EvictExpiredElements, which say there are special settings you have to create to get the cache to evict.
Can someone please help me understand whether my cache should be evicting, and also how I can evict? What is mentioned in the article is not something I was able to understand how to implement.
Here is what my logs look like; there are no signs of eviction:
2014-01-20 13:32:41,791 DEBUG [AnnotationCacheOperationSource] - Adding cacheable method 'getLov' with attribute: [CacheableOperation[public java.util.Map com.myer.reporting.dao.mock.MockLovStoreDaoImpl.getLov(java.lang.String) throws com.myer.reporting.exception.ReportingManagerException] caches=[lovStore] | condition='' | key='#code']
Thanks.
Ehcache does not evict elements until there is a need to do so. That's reasonable, since there is no point wasting CPU resources on an operation (evicting elements) that wouldn't make much difference.
If someone tries to get an element from the cache and the element's time-to-live/idle has expired, Ehcache evicts that element and returns null. If the maximum number of elements or the memory for the cache/pool is reached, Ehcache evicts expired elements based on the eviction policy.
And if I understand correctly, it is not guaranteed that all expired elements will be evicted, since only a sample of elements is selected for eviction (from the documentation):
@param sampledElements this should be a random subset of the population
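To see this lazy expiration in action, here is a small sketch against the Ehcache 2.x core API, assuming the lovStore cache from the question's ehcache.xml is on the classpath:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class LazyExpirationDemo {
    public static void main(String[] args) throws Exception {
        CacheManager manager = CacheManager.create(); // reads ehcache.xml from the classpath
        Cache cache = manager.getCache("lovStore");
        cache.put(new Element("code1", "value1"));
        Thread.sleep(61000);                    // wait past timeToLiveSeconds=60
        System.out.println(cache.get("code1")); // null: expiry is detected on this access
        manager.shutdown();
    }
}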
I have strange behavior with caching and JPA entities (EclipseLink 2.4.1 + Guice Persist).
I do not want to use caching; nevertheless, I randomly get an old instance that has already been changed in the MySQL database.
I have tried the following:
- Add @Cacheable(false) to the JPA entity.
- Disable cache properties in the persistence.xml file:
<class>MyEntity</class>
<shared-cache-mode>NONE</shared-cache-mode>
<properties>
   <property name="eclipselink.cache.shared.default" value="false"/>
   <property name="eclipselink.cache.size.default" value="0"/>
   <property name="eclipselink.cache.type.default" value="None"/>
   <property name="eclipselink.refresh" value="true"/>
   <property name="eclipselink.query-results-cache" value="false"/>
   <property name="eclipselink.weaving" value="false"/>
</properties>
Even with EclipseLink tracing activated, I can see the query being executed:
ReadObjectQuery Execute query (name="readObject" referenceClass=XX sql="...
(I am just making a "find" call on the EntityManager.) However, it still randomly returns an old value of that class.
Note: perhaps this happens because different EntityManager instances are used and each one has its own cache?
I have seen the following related post: Disable JPA EclipseLink 2.4 cache.
If so, is it possible to clear the caches of ALL EntityManagers without using the following?
em.getEntityManagerFactory().getCache().evictAll();
evictAll is for the shared cache, which you have disabled already anyway. EntityManager instances are required by default to have a first-level cache of their own to keep track of all managed instances they have created. An EntityManager is meant to represent a logical transaction, and so it should not be long-lived. You need to throw away your EntityManagers and re-obtain them, or just clear them at logical points, rather than letting the number of managed entities in their caches grow endlessly. This will also help limit the stale-data issue, though nothing other than pessimistic locking can eliminate it. I recommend using optimistic locking, if you aren't already, to avoid overwriting the database with stale data.
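A minimal sketch of that pattern, assuming a plain JPA bootstrap (with Guice Persist the EntityManager would come from an injected provider instead):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public class UnitOfWorkSketch {
    // Obtain a fresh EntityManager per unit of work and close it when done;
    // a fresh EM starts with an empty first-level cache, so no stale instances.
    public static <T> T loadFresh(EntityManagerFactory emf, Class<T> type, Object id) {
        EntityManager em = emf.createEntityManager();
        try {
            return em.find(type, id);
        } finally {
            em.close(); // discards this EM's first-level cache
        }
    }
    // Alternatively, em.clear() at a logical point detaches all managed
    // entities without discarding the EntityManager itself.
}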