EHCache Automatic peer discovery on different environments

What configuration is required to prevent different environments from sharing the same cache data when using EHCache's automatic multicast peer discovery?
I have a test and a prod environment, and the cache names are identical, so I presume that if I don't modify the ehcache configuration, prod data will end up cached in the test environment too?
Is changing the multicastGroupAddress enough to avoid this?

The short answer is yes.
But I guess (hope?) the test and prod machines are located in different subnets, in which case even keeping the same multicastGroupAddress would work, because the default timeToLive is 1 (multicast is restricted to the same subnet). Please refer to the Ehcache docs on RMI Replicated Caching for more information.
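For illustration, a minimal sketch of the relevant ehcache.xml fragment (placed inside the <ehcache> root element), with placeholder addresses and ports: give each environment its own multicast group so the peers never discover each other.

<!-- test environment -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446, timeToLive=1"/>

<!-- prod environment: a different group address (and/or port) -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.2,
                multicastGroupPort=4446, timeToLive=1"/>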

Related

MyBatis caching strategy in distributed system

I'd like to know how the MyBatis cache (local and second level) handles data in a distributed system. I have 5 instances running against an Oracle DB, and I use MyBatis for data access. All 5 instances are identical but run on different servers. MyBatis is configured with the SESSION-scoped local cache, which means the cache is cleared whenever an insert/delete/update statement is executed.
When one instance runs such a statement, the local cache on that server is cleared. How do the other 4 instances know that their caches need to be flushed/renewed?
If you are using the built-in cache, they don't. Never enable the second-level cache if you are using MyBatis in a distributed environment with the default cache implementation, because the instances don't know what happens on each other and won't clear stale entries when a change happens.
You need to set up an external cache service, such as Ehcache or Redis, to make the MyBatis second-level cache usable.
Please refer to http://mybatis.org/ehcache-cache/ and http://mybatis.org/redis-cache/
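For example, with the mybatis-ehcache module from the first link, pointing a mapper's second-level cache at Ehcache is a one-line change in the mapper XML; the namespace below is just a made-up example:

<mapper namespace="com.example.UserMapper">
  <!-- delegate this namespace's second-level cache to Ehcache -->
  <cache type="org.mybatis.caches.ehcache.EhcacheCache"/>
  <!-- select/insert/update/delete statements as usual -->
</mapper>

Keep in mind this only swaps the cache implementation; the Ehcache (or Redis) store behind it still has to be replicated or shared so that all 5 application instances actually see each other's invalidations.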
To give you a lead:
I guess all instances are behind a load balancer and running against a single Oracle DB.
The instance nodes had better be in a cluster; otherwise, how could they communicate with each other? The cache can then be shared between the cluster's nodes, for example as described in the JBoss documentation for Hibernate.
The question is more about how to configure the server (or the application, in files such as beans.xml) to use the MyBatis cache.
If the SessionFactory is declared @ApplicationScoped, it could be enough.

running tomcat as coherence node

I have a question for someone who is familiar with tomcat and coherence.
I am using Tomcat 8 and Coherence 12.2.1, and I have, maybe not a problem, but an interesting case.
I am trying to start a web application on Tomcat as a Coherence node. I already know that there is ExtendTcpCacheService, and I am currently using it to create an additional node which can communicate with the Coherence cluster.
But my question is: is there a way to make Tomcat start a node which IS NOT an Extend client? I mean, I need Tomcat to start a Coherence node the way a Grizzly REST server does (automatically joining the existing cluster), not like I have it now, where it needs all the IP addresses and configuration to connect to an existing Coherence node.
Thank you for any advice!
I am assuming that the other nodes in the cluster have the ExtendTcpCacheService enabled and you want to disable only this service when running in Tomcat. This is easy to do, and you can continue to use one cache config file for all cluster nodes, but you will need to make a slight change to your Coherence cache configuration file. Go to the <proxy-scheme> section pertaining to your ExtendTcpCacheService service and give the <autostart> tag a system-property attribute as shown below:
<proxy-scheme>
  <scheme-name>some-name</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  ....
  <autostart system-property="ExtendTcpCacheService.enabled">true</autostart>
</proxy-scheme>
In the JVM start-up parameters for Tomcat you will need to pass -DExtendTcpCacheService.enabled=false to turn off starting the service. In the other JVMs you will not need to do anything since this property is on by default.
You can use this feature to override almost any XML tag in the Coherence config using system properties. More details on this feature can be found in the Coherence docs.

Why does everyone recommend avoiding EHCache as a distributed cache in Play 2.x?

I want to cluster EHCache across several nodes of a Play Framework 2.x web application. Why does everyone recommend avoiding EHCache as a distributed cache in a clustered Play 2.x web application?
I use an nginx proxy to spread requests across the Play nodes, and I want the default EHCache on each node to share its content with the others.
Well, according to this EHCache page, using EHCache in distributed mode is a commercial product. So if you want a free distributed cache, you need something different, like Memcached or Redis.
My experience deploying a (Java) Play 2.2.3 application to Amazon EC2 with EHCache was terrible. It required a few workarounds for localhost resolution (going su on each of your nodes - hard work when you have a few dozen servers), and regardless, being free only for the standalone version without making that clear upfront is a big no-no for me. I'm done with EHCache.
Edit: moved to Redis in 2015 (thanks @Traveler)
I am not aware of any Play Framework issues here, but using ehcache 2.x should be fine, as you can set it up with JGroups (faster than RMI) and use invalidation mode (Infinispan slang).
Invalidation is a clustered mode that does not actually share any data at all, but simply aims to remove data that may be stale from remote caches. This cache mode only makes sense if you have another, permanent store for your data.
In ehcache 2.x you can set up invalidation mode with replicatePuts=false in your jgroups config.
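As a rough sketch (not a drop-in config), the ehcache.xml for that combines the JGroups peer provider with a replicator whose puts are not replicated; setting replicateUpdatesViaCopy=false as well is what turns updates into pure invalidations. The JGroups stack string here is only an example and needs tuning for your network:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="connect=UDP(mcast_addr=231.12.21.132;mcast_port=45566):PING:MERGE2:FD_SOCK:VERIFY_SUSPECT:pbcast.NAKACK:UNICAST:pbcast.STABLE:FRAG:pbcast.GMS"/>

<cache name="someCache" maxEntriesLocalHeap="10000" eternal="false">
  <!-- invalidation-style replication: no puts, updates propagated as removals -->
  <cacheEventListenerFactory
      class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
      properties="replicateAsynchronously=true, replicatePuts=false,
                  replicateUpdates=true, replicateUpdatesViaCopy=false,
                  replicateRemovals=true"/>
</cache>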
In ehcache 3.x there is no such mode. You have to set up a commercial Terracotta server, which is a distributed cache, so all data is moved between the nodes and the Terracotta server.
We tried it once and failed terribly.
As ehcache 2.x is no longer actively developed, we simply switched to Infinispan, which has all the features of ehcache 2.x and a lot more.
So my recommendation: use ehcache 2.x or Infinispan. Do not use ehcache 3.x.

replicated ehcache on Glassfish

I am afraid I have some pretty basic questions about ehcache. I would like to use a caching mechanism on a clustered Glassfish without any significant extra infrastructure.
As far as I know, using a distributed cache with ehcache means that I have to use the Terracotta Server Array, doesn't it?
I am not very experienced with caching, so could I use ehcache on a clustered Glassfish by just putting some JARs on the Glassfish classpath, or deploying a WAR or something onto Glassfish, and that's it? Do I have to use an external cache server anyway?
The replicated cache in ehcache doesn't need the Terracotta Server Array, does it?
I would like to store a Java Map object in the cache, and it is going to change quite often. In that case the replicated cache is not the best choice, as far as I know. Does the Hazelcast distributed cache need any external cache server?
Thank you very much for your help in advance!
Have a nice day, experts!
Hazelcast doesn't need any external server if you are running Java.
Basically, add hazelcast.jar to your classpath. Then, from your application, create a Hazelcast instance:
HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance(new Config());
then to get a distributed map:
Map map = hazelcast.getMap("myMap");
That's it. In this example I provided the default config, which uses multicast for discovery of the nodes. You can update and change any parameter.
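To give an idea of where those parameters live, a hedged hazelcast.xml sketch of the multicast join section (the group and port shown are the Hazelcast defaults; the same settings can also be changed programmatically on the Config object passed above):

<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <network>
    <join>
      <!-- multicast member discovery; adjust group/port if several clusters share a network -->
      <multicast enabled="true">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <!-- alternatively, disable multicast and list the members explicitly under <tcp-ip> -->
    </join>
  </network>
</hazelcast>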
For more information see Quick Start Tutorial
The replication feature in Ehcache does not require any server. You simply add the Ehcache jar to your web application and configure Ehcache to replicate to all cluster nodes. You can choose whether to automatically discover all GlassFish nodes using multicast or you can manually tell Ehcache where to find the other nodes. You can find the Ehcache replication configuration instructions here: http://ehcache.org/documentation/replication/rmi-replicated-caching#configuring-the-peer-provider
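As a sketch of what that wiring looks like on each GlassFish node (the multicast address, port, and cache name below are placeholders), the ehcache.xml declares a peer provider, a peer listener, and a replicator on every cache you want shared:

<ehcache>
  <!-- discover the other nodes automatically via multicast -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                  multicastGroupPort=4446, timeToLive=1"/>

  <!-- listen for replication messages coming from the peers -->
  <cacheManagerPeerListenerFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>

  <!-- replicate this cache to all discovered peers -->
  <cache name="myReplicatedCache" maxEntriesLocalHeap="10000" eternal="false">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
  </cache>
</ehcache>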
Hazelcast works similarly. See here for documentation: http://hazelcast.org/docs/3.0/manual/html/ch12s02.html

Spring cache of two Grails applications on the same machine (different Jetty servers)

Hi, I have one Grails application that uses Spring cache. I want to clone it (say APP_A and APP_B) and deploy the copies separately, as each accesses a different DB and has somewhat different configuration.
Currently I have two copies of the Jetty server (JETTY_A and JETTY_B, on different ports). I put APP_A in JETTY_A and APP_B in JETTY_B.
I'm not familiar with Spring cache.
Is this deployment safe? I mean, will there be any mixing of cache entries between the two? Both use the same code base, so the caches will use the same names:
@Cacheable("someCache")
SpringCache uses EHCache under the covers. The caches are in-process caches and they do not affect caches running in other processes on the same machine, unless you have explicitly configured distributed caching.
As @KenLiu said in his answer, Spring Cache is strictly in-process when using EHCache as its cache provider. Since you are working with Grails, however, there are better alternatives that will require only minimal changes.
The Grails Cache Plugin offers a Spring Cache API-compatible cache abstraction over a number of (pluggable) cache providers, including some, like the Redis provider, that allow you to cache across processes (and entire machines) very easily.
