I'd like to enable versioning for a replicated cache in a locally running Infinispan server (8.2.4.Final; two Infinispan servers form a cluster).
This is documented in the user guide.
Quote:
10.2.5. Configuration
By default versioning will be disabled.
and the user guide contains the following snippet:
<versioning scheme="SIMPLE|NONE" />
I am using locally running Infinispan servers; the configuration is in clustered.xml.
A fragment thereof:
<subsystem xmlns="urn:infinispan:server:core:8.2" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
[...]
<replicated-cache name="demoCache" mode="ASYNC" >
<versioning scheme="SIMPLE"/>
</replicated-cache>
When I add the versioning element, server startup fails with:
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[186,6]
Message: WFLYCTL0198: Unexpected element '{urn:infinispan:server:core:8.2}versioning' encountered
The XML element versioning indeed exists in urn:infinispan:config:8.2, but not in urn:infinispan:server:core:8.2 (which is used in clustered.xml).
urn:infinispan:config:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/infinispan-config-8.2.xsd.
urn:infinispan:server:core:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/jboss-infinispan-core_8_2.xsd
How can I enable (cluster aware) versioning when running Infinispan as a separate server?
Versioning does not make sense when using Infinispan remotely, since it is used purely to detect write-skew situations with repeatable-read transactions, and that functionality is not really available to users in server mode.
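For context, versioning belongs to embedded (library) mode, where optimistic transactions with repeatable-read isolation can use it for write-skew detection. A rough sketch using the Infinispan 8 programmatic API, assuming embedded mode (this is not something that can be applied to clustered.xml; the class name here is made up):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.VersioningScheme;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;
import org.infinispan.util.concurrent.IsolationLevel;

public class EmbeddedVersioningConfig {

    // Versioning only has an effect together with optimistic transactions
    // and repeatable-read locking, where it is used to detect write skew.
    public static Configuration versionedCacheConfiguration() {
        return new ConfigurationBuilder()
                .transaction()
                    .transactionMode(TransactionMode.TRANSACTIONAL)
                    .lockingMode(LockingMode.OPTIMISTIC)
                .locking()
                    .isolationLevel(IsolationLevel.REPEATABLE_READ)
                    .writeSkewCheck(true)
                .versioning()
                    .enable()
                    .scheme(VersioningScheme.SIMPLE)
                .build();
    }
}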
Questions
Why does it use localhost?
What does keystone have to do with it?
I can't seem to configure a keystone endpoint
Context
App: Spring Boot (1.5.6) REST API
Hibernate 5.2
Hazelcast 3.9 - as 2nd-level cache only
hazelcast-jclouds 3.7.1
jclouds-compute and jclouds-allcompute 2.0.2
Openstack cloud for VMs running the app
The Setup
I have my hazelcast.xml configured as follows:
<discovery-strategies>
<discovery-strategy class="com.hazelcast.jclouds.JCloudsDiscoveryStrategy" enabled="true">
<properties>
<property name="modules">org.jclouds.logging.slf4j.config.SLF4JLoggingModule</property>
<property name="provider">openstack-nova</property>
<property name="endpoint">http://dev.nova.cloud.youdontknow.net:8774/v2/</property>
<property name="identity">redacted</property>
<property name="credential">cens0red</property>
</properties>
</discovery-strategy>
</discovery-strategies>
The problem
App initialization fails. Here are some log tidbits:
[TRACE] o.j.r.internal.RestAnnotationProcessor : looking up default endpoint for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.r.internal.RestAnnotationProcessor : using default endpoint Optional.of(http://localhost:5000/v2.0/) for org.jclouds.openstack.keystone.v2_0.AuthenticationApi.public abstract org.jclouds.openstack.keystone.v2_0.domain.Access org.jclouds.openstack.keystone.v2_0.AuthenticationApi.authenticateWithTenantNameAndCredentials(java.lang.String,org.jclouds.openstack.keystone.v2_0.domain.PasswordCredentials)[bnet-web, PasswordCredentials{username=redacted, password=*****}]
[TRACE] o.j.rest.internal.InvokeHttpMethod : << converted AuthenticationApi.authenticateWithTenantNameAndCredentials to POST http://localhost:5000/v2.0/tokens HTTP/1.1
And here's bits of the exception stack traces:
Caused by: com.hazelcast.core.HazelcastException: Failed to get registered addresses
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverNodes(JCloudsDiscoveryStrategy.java:93)
at com.hazelcast.jclouds.JCloudsDiscoveryStrategy.discoverLocalMetadata(JCloudsDiscoveryStrategy.java:106)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.discoverLocalMetadata(DefaultDiscoveryService.java:91)
...
Caused by: org.jclouds.http.HttpResponseException: Connection refused: connect connecting to POST http://localhost:5000/v2.0/tokens HTTP/1.1
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:122)
...
at com.sun.proxy.$Proxy147.authenticateWithTenantNameAndCredentials(Unknown Source)
at org.jclouds.openstack.keystone.v2_0.functions.AuthenticatePasswordCredentials.authenticateWithTenantName(AuthenticatePasswordCredentials.java:43)
Other Notes
Looks like it's using the default keystone address configured in org.jclouds.openstack.keystone.v2_0.KeystoneApiMetadata - but I don't know how that's involved.
Looking at the code, I think hazelcast-jclouds is not prepared to handle generic APIs. When connecting to a well-known provider (the AWS endpoints, Google, Azure, etc.) you don't need to specify the endpoint, but when using generic APIs such as OpenStack or CloudStack you need to tell jclouds where to connect. Unfortunately, it looks like hazelcast-jclouds lacks support for configuring custom endpoints for generic APIs.
A quick look at the code suggests that it could be easy to add, though. The properties that are taken into account are defined in the JCloudsDiscoveryStrategyFactory, and then read in the ComputeServiceBuilder to create the jclouds context.
I'm not familiar with Hazelcast, but I'd say that adding the definition for the "endpoint" property and then, if present, configuring it by calling the jclouds contextBuilder.endpoint(endpoint) method should do the trick.
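For illustration, this is roughly what the discovery strategy would have to do internally. The ContextBuilder calls below are the standard jclouds API; wiring them into hazelcast-jclouds (and the class/method names of this sketch) is the hypothetical part:

import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;
import org.jclouds.logging.slf4j.config.SLF4JLoggingModule;

import com.google.common.collect.ImmutableSet;

public class NovaEndpointSketch {

    // Builds a jclouds compute context for the generic "openstack-nova" API,
    // passing the Keystone endpoint explicitly via ContextBuilder.endpoint()
    // instead of relying on the default http://localhost:5000/v2.0/.
    public static ComputeService connect(String endpoint, String identity, String credential) {
        ComputeServiceContext context = ContextBuilder.newBuilder("openstack-nova")
                .endpoint(endpoint)                // e.g. the nova endpoint from hazelcast.xml
                .credentials(identity, credential) // typically "tenant:user" plus password
                .modules(ImmutableSet.of(new SLF4JLoggingModule()))
                .buildView(ComputeServiceContext.class);
        return context.getComputeService();
    }
}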
I'm trying to figure out the difference between these factories, used in the hibernate.cache.region.factory_class property.
Example:
<property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.JndiInfinispanRegionFactory" />
<property name="hibernate.cache.infinispan.cachemanager" value="java:jboss/infinispan/container/hibernate" />
There are 4 possible options.
The 2 options that I know something about are:
org.hibernate.cache.infinispan.InfinispanRegionFactory: for standalone applications (not in a cluster, I think).
org.hibernate.cache.infinispan.JndiInfinispanRegionFactory: this one is bound to a JNDI name via the property hibernate.cache.infinispan.cachemanager.
And I don't have any idea about these 2:
org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory: ?
org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory: ?
We have a cluster configured on Wildfly 10.1.0 using domain mode. We want to share the entity cache among the nodes and we are having some doubts about how to do that.
If you're using Wildfly, you don't have to worry about setting the region factory class because Wildfly uses Infinispan as second-level cache provider by default. It's all explained here.
All you have to do is enable hibernate.cache.use_second_level_cache and you're good to go. See the examples in the documentation.
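As a small illustration (the entity name is made up): once the second-level cache is enabled, you typically just opt entities into it with the standard JPA annotation, for example:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity, shown only to illustrate opting an entity into the
// second-level cache once hibernate.cache.use_second_level_cache is enabled.
@Entity
@Cacheable
public class Customer {

    @Id
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}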
I agree with Galder, +1!
Regarding the purpose of [org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory][1] and [org.jboss.as.jpa.hibernate5.infinispan.InfinispanRegionFactory][2]: these classes extend the Hibernate ORM [hibernate-infinispan][3] implementation classes, the purpose being to start the internal WildFly Infinispan cache services used for JPA second-level caching. They also deal with configuration. The links below may become outdated over time, as I think the [3] code might move to the Infinispan project (eventually).
A little more of the related code is at [HibernateSecondLevelCache.java][4], which backs up what Galder said. You can see that the WildFly JPA container automatically sets the region factory class for you (if caching is enabled) via [HibernatePersistenceProviderAdaptor.java][5].
I'm not sure if the code links are helpful to you, I thought they might be. :)
As a Stack Overflow newbie, I am not allowed to post more than 2 links, which is why [3] - [5] are plain text rather than links.
Scott
[1] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/SharedInfinispanRegionFactory.java
[2] https://github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/infinispan/InfinispanRegionFactory.java
[3] github.com/hibernate/hibernate-orm/tree/master/hibernate-infinispan
[4] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernateSecondLevelCache.java
[5] github.com/wildfly/wildfly/blob/master/jpa/hibernate5/src/main/java/org/jboss/as/jpa/hibernate5/HibernatePersistenceProviderAdaptor.java#L91
The configuration involves Infinispan, Hibernate and JGroups.
Using domain mode on WildFly 10, you'll need this configuration in your application EAR:
<property name="hibernate.cache.use_second_level_cache" value="true"/>
Your server group needs to use a profile that has the high-availability (HA) resources, such as the full-ha or ha profiles. These profiles contain the default Infinispan and JGroups configuration.
Then you need the 'private' network interface configured on ALL hosts that share the cache, because JGroups uses the private interface. Edit domain/configuration/host.xml or use the WildFly admin console to add this configuration (replace 200.0.0.171 with the server's IP):
<interfaces>
...
<interface name="private">
<inet-address value="${jboss.bind.address.private:200.0.0.171}"/>
</interface>
<!-- .... -->
</interfaces>
For example, suppose you have host controller HC1 (with server-1 and server-2) and HC2 (with server-3 and server-4).
After starting all the servers and host controllers, you'll see in your server.log:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000078: Starting JGroups channel hibernate
....
....
Received new cluster view for channel hibernate: [HC1:server-1, HC1:server-2, HC2:server-3, HC2:server-4]
I have configured Tomcat 6 with in-memory session replication. I am also using IIS 7 (I know, I know) and the AJP connector via isapi_redirector. The cluster is working properly and I am able to replicate session attributes using the SessionExample in the examples war. The problem is that I am unable to do the same in my custom application. I have added the distributable tag to the web.xml file on both servers in my test cluster. However, I don't see any message in the logs mentioning the attributes getting sent to the cluster (I do see them for SessionExample). The main differences I can see between my app and the examples:
The examples war uses servlet 2.5. I am still required to use 2.4.
My application uses SSO and requires the user to login.
The application is a portal application.
Also, in the code of the application, I am setting a simple string in the attribute, so nothing fancy.
So, I was wondering if anyone has some tips to get this working?
Thanks
Here is the cluster section of my server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="6">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.104"
port="45564"
frequency="500"
dropTime="10000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="7000"
maxThreads="6"
timeout="15000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
timeout="70000"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/apache-tomcat-6.0.37/war-deploy/war-temp/"
deployDir="/apache-tomcat-6.0.37/webapps/"
watchDir="/apache-tomcat-6.0.37/war-deploy/war-listen/"
watchEnabled="true"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Sorry, I found the issue. I was expecting to see messages in the log about the creation of the session attributes. I didn't realize that the examples project had a session listener that was writing those messages to the log; I had assumed they simply came from the log level I had set.
Thanks to anyone who read this post.
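For anyone who hits the same confusion: a listener along these lines (a sketch, not the exact class from the examples war) is what produces those attribute messages in the log; register it via a <listener> element in web.xml.

import javax.servlet.http.HttpSessionAttributeListener;
import javax.servlet.http.HttpSessionBindingEvent;

// Minimal session attribute listener that logs attribute changes, similar in
// spirit to the listener shipped with the Tomcat examples webapp.
public class SessionAttributeLogger implements HttpSessionAttributeListener {

    public void attributeAdded(HttpSessionBindingEvent event) {
        System.out.println("attributeAdded: session=" + event.getSession().getId()
                + " " + event.getName() + "=" + event.getValue());
    }

    public void attributeRemoved(HttpSessionBindingEvent event) {
        System.out.println("attributeRemoved: session=" + event.getSession().getId()
                + " " + event.getName());
    }

    public void attributeReplaced(HttpSessionBindingEvent event) {
        System.out.println("attributeReplaced: session=" + event.getSession().getId()
                + " " + event.getName() + "=" + event.getValue());
    }
}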
I'm developing a module in which I was planning to use Spring's declarative caching. I wrote a number of methods using the cache annotations, for example:
@Override
@Cacheable("businessUnitCache")
public BusinessUnit getBusinessUnit(String businessUnitId) {
I was planning to provide a classpath beans file and a classpath ehcache configuration that supply the functionality without requiring consuming projects to know the internals of my implementation or which methods need to be cached (many of which they'd never access directly).
However, reading the question Using Spring cache annotation in multiple modules and its answers, this is obviously going to cause a problem if any of the consuming projects use Spring cache annotations as well. I was hopeful that Spring would fail silently if there was no declared cache matching an annotation, but it fails with the error:
java.lang.IllegalArgumentException: Cannot find cache named [businessUnitCache] for CacheableOperation[public
This leads me to the conclusion that I can't use the cache annotations (which conflicts with my original conclusion from the question Is it possible to use multiple ehcache.xml (in different projects, same war)?). My testing backs this up.
So: is it possible to declare the caching separately from the implementation classes, preferably in XML? This would allow me to prepare an additional file with the caching rules and replace the cache manager name using standard Spring property replacement (I'm already doing something similar with the datasource). Unfortunately, the reference documentation only describes the annotation-based configuration.
You can configure the cache using XML; see the Spring reference manual:
http://static.springsource.org/spring/docs/current/spring-framework-reference/html/cache.html#cache-declarative-xml
<!-- the service we want to make cacheable -->
<bean id="bookService" class="x.y.service.DefaultBookService"/>
<!-- cache definitions -->
<cache:advice id="cacheAdvice" cache-manager="cacheManager">
<cache:caching cache="books">
<cache:cacheable method="findBook" key="#isbn"/>
<cache:cache-evict method="loadBooks" all-entries="true"/>
</cache:caching>
</cache:advice>
<!-- apply the cacheable behaviour to all BookService interfaces -->
<aop:config>
<aop:advisor advice-ref="cacheAdvice" pointcut="execution(* x.y.BookService.*(..))"/>
</aop:config>
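To make the wiring concrete, here is a hypothetical service interface matching the names referenced by the XML above (x.y.BookService, findBook, loadBooks); the Book class and DefaultBookService implementation are assumed. The point is that the service type itself stays completely free of caching annotations:

package x.y;

import java.util.Collection;

// Hypothetical interface matching the pointcut execution(* x.y.BookService.*(..));
// the cache advice is applied externally, so no caching annotations are needed here.
public interface BookService {

    // cached in the "books" cache, keyed by #isbn
    Book findBook(String isbn);

    // evicts all entries from the "books" cache
    Collection<Book> loadBooks();
}

// Minimal placeholder so the sketch compiles; the real domain class is assumed.
class Book {
    String isbn;
    String title;
}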
I tested Tomcat clustering for session replication on Ubuntu servers with Apache as the front-end load balancer. From my testing experience, I'd say it's better not to use Tomcat clustering but to run each node standalone, without session replication and without the nodes knowing one another: clustering felt slow, took much longer to start the Tomcat service, and consumed more memory. Also, the FarmWarDeployer is not always reliable in deploying, and the whole configuration has to be placed under the <Host></Host> element for the farm deployer to work, repeated for each virtual host, which leads to a huge server.xml file. Below is the Tomcat virtual hosting with cluster configuration from one of the nodes I used.
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger"
directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.8"
port="4001"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
<!-- <Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/> -->
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
Is Tomcat clustering good to use in production, or is there an alternative way to do session replication? Or am I missing anything in the above configuration that could be fine-tuned?
Any ideas are welcome. Thanks!
One session-failover / session-replication solution for tomcat is memcached-session-manager (msm), supporting both sticky and non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as backend for session backup/storage.
In sticky mode sessions are still kept in tomcat, and memcached is only used as an additional backup - for session failover.
In non-sticky mode sessions are only stored in memcached and no longer in tomcat, as with non-sticky sessions the session-store must be external (to avoid stale data).
There's also special support for membase / membase buckets, which is useful for hosted solutions where you get access to a certain bucket with the appropriate authentication.
Session serialization is pluggable, so you're not tied to Java serialization (and to classes implementing Serializable). E.g. there's a Kryo-based serializer, which is one of the fastest serialization strategies available.
The msm home page mainly describes the sticky session approach, for details regarding non-sticky sessions you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).