Tomcat session replication through mod_jk: session expires when one node is down

I have set up session replication between two Tomcats running on different machines using mod_jk. When one Tomcat goes down, my session expires, and when I refresh, the request goes to the other Tomcat. Ideally, the session should not expire. Any help will be highly appreciated. Here are my config files.
Server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Clustering configuration start -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<!--<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>-->
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564" frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto" port="4000" autoBind="100"
selectorTimeout="5000" maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" />
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
</Cluster>

workers.properties
# First we define the virtual workers list
worker.list=jkstatus,LoadBalancer
# Enable the virtual workers defined earlier
worker.jkstatus.type=status
worker.LoadBalancer.type=lb
# Add Tomcat instances as workers, two workers in our case
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
worker.worker2.type=ajp13
worker.worker2.host=10.57.79.232
worker.worker2.port=8019
# Provide workers list to the load balancer
worker.LoadBalancer.balance_workers=worker1,worker2
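For mod_jk failover it is also usual to enable sticky sessions on the load-balancer worker and to make sure each Tomcat's jvmRoute matches its worker name (worker1, worker2). A minimal sketch of the extra directives, using standard mod_jk settings (adjust values to your setup):
# Route requests of an existing session back to the worker named in its jvmRoute
worker.LoadBalancer.sticky_session=true
# Allow failover to the other worker instead of returning an error
# when the preferred worker is down (false is the default)
worker.LoadBalancer.sticky_session_force=false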

Related

How to connect to a Kerberos-secured Apache Phoenix data source with WildFly?

I have recently spent several weeks trying to get WildFly to successfully connect to a Kerberized Apache Phoenix data source. There is a surprisingly limited amount of documentation on how to do this, but now that I have cracked it, I'm sharing.
Environment:
WildFly 9+. An equivalent JBoss version should also work (though untested). WildFly 8 does not contain the required org.jboss.security.negotiation.KerberosLoginModule class (but you can hack it; see Kerberos sql server datasource in Wildfly 8.2). I used WildFly 10.1.0.Final with a standalone deployment.
Apache Phoenix 4.2.0.2.2.4.10. I have not tested any other version.
Kerberos v5. My KDC is running on Windows Active Directory, but this should not make a noticeable difference.
My Hadoop environment is a Hortonworks distribution, maintained by Ambari. Ambari ensures that all of the configuration files and Kerberos implementation settings are correct.
Firstly, you'll want to add a system property to WildFly's standalone.xml to specify the location of the Kerberos configuration file:
...
</extensions>
<system-properties>
<property name="java.security.krb5.conf" value="/path/to/krb5.conf"/>
</system-properties>
...
I'm not going to go into the format of the krb5.conf file here, as it is dependent on your own implementation of Kerberos. What is important is that it contains the default realm and network location of the KDC. On Linux you can normally find it at /etc/krb5.conf or /etc/security/krb5.conf. If you're running WildFly on Windows, then make sure you use forward-slashes in your path, e.g. "C:/Source/krb5.conf"
Secondly, add two new security domains to standalone.xml: one called "Client", which is used by ZooKeeper, and another called "host", which is used by WildFly. Do not ask me why (it caused me so much pain), but the name of the "Client" security domain must match the one defined in ZooKeeper's JAAS client configuration file on the server. If you've set up with Ambari, "Client" is the default name. Also note that you cannot simply provide a jaas.config file as a system property; you must define it here:
<security-domain name="Client" cache-type="default">
<login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="required">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
</login-module>
</security-domain>
<security-domain name="host" cache-type="default">
<login-module code="org.jboss.security.negotiation.KerberosLoginModule" flag="required" module="org.jboss.security.negotiation">
<module-option name="useTicketCache" value="true"/>
<module-option name="debug" value="true"/>
<module-option name="refreshKrb5Config" value="true"/>
<module-option name="addGSSCredential" value="true"/>
</login-module>
</security-domain>
The module options will vary depending on your implementation. I'm getting my tickets from the default Java ticket cache, which is defined in the java.security file of your JRE, but you can supply a keytab here if you want. Note that setting storeKey to true broke my implementation. Check the Java documentation for all of the options. Also note that each security domain uses a different login module: this is not by accident, as Phoenix does not know how to use the org.jboss... version.
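For reference, if you prefer a keytab over the ticket cache, the "host" domain could look roughly like the sketch below. The options are standard Kerberos login module options (useKeyTab, keyTab, principal, doNotPrompt), which this module is expected to accept; the principal and keytab path are placeholders, and storeKey is omitted since it caused problems for me as noted above:
<security-domain name="host" cache-type="default">
    <login-module code="org.jboss.security.negotiation.KerberosLoginModule" flag="required" module="org.jboss.security.negotiation">
        <!-- read credentials from a keytab instead of the default ticket cache -->
        <module-option name="useKeyTab" value="true"/>
        <module-option name="keyTab" value="/path/to/wildfly.keytab"/>
        <module-option name="principal" value="wildfly/host.example.com@EXAMPLE.COM"/>
        <module-option name="doNotPrompt" value="true"/>
        <module-option name="refreshKrb5Config" value="true"/>
        <module-option name="addGSSCredential" value="true"/>
        <module-option name="debug" value="true"/>
    </login-module>
</security-domain>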
Now you need to provide WildFly with the org.apache.phoenix.jdbc.PhoenixDriver class in phoenix-<version>-client.jar. Create the following directory tree under the WildFly directory:
/modules/system/layers/base/org/apache/phoenix/main/
In the main directory, paste the phoenix-<version>-client.jar, which you can find on the server (e.g. /usr/hdp/<version>/phoenix/client/bin), and create a module.xml file:
<?xml version="1.0" ?>
<module xmlns="urn:jboss:module:1.1" name="org.apache.phoenix">
<resources>
<resource-root path="phoenix-<version>-client.jar">
<filter>
<exclude-set>
<path name="javax" />
<path name="org/xml" />
<path name="org/w3c/dom" />
<path name="org/w3c/sax" />
<path name="javax/xml/parsers" />
<path name="com/sun/org/apache/xerces/internal/jaxp" />
<path name="org/apache/xerces/jaxp" />
<path name="com/sun/jersey/core/impl/provider/xml" />
</exclude-set>
</filter>
</resource-root>
<resource-root path=".">
</resource-root>
</resources>
<dependencies>
<module name="javax.api"/>
<module name="sun.jdk"/>
<module name="org.apache.log4j"/>
<module name="javax.transaction.api"/>
<module name="org.apache.commons.logging"/>
</dependencies>
</module>
You also need to paste the hbase-site.xml and core-site.xml from the server into the main directory. These are typically located in /usr/hdp/<version>/hbase/conf and /usr/hdp/<version>/hadoop/conf. If you don't add these, you will get a lot of unhelpful ZooKeeper getMaster errors! If you want the driver to log to the same place as WildFly, then you should also create a log4j.xml file in the main directory. You can find an example elsewhere on the web. The <resource-root path="."></resource-root> element is what adds those xml files to the classpath when deployed by WildFly.
Finally, add a new datasource and driver in the <subsystem xmlns="urn:jboss:domain:datasources:2.0"> section. You can do this with the CLI or by directly editing standalone.xml; I did the latter:
<datasource jndi-name="java:jboss/datasources/PhoenixDS" pool-name="PhoenixDS" enabled="true" use-java-context="true">
<connection-url>jdbc:phoenix:first.quorumserver.fqdn,second.quorumserver.fqdn:2181/hbase-secure</connection-url>
<connection-property name="phoenix.connection.autoCommit">true</connection-property>
<driver>phoenix</driver>
<validation>
<check-valid-connection-sql>SELECT 1 FROM SYSTEM.CATALOG LIMIT 1</check-valid-connection-sql>
</validation>
<security>
<security-domain>host</security-domain>
</security>
</datasource>
<drivers>
<driver name="phoenix" module="org.apache.phoenix">
<xa-datasource-class>org.apache.phoenix.jdbc.PhoenixDriver</xa-datasource-class>
</driver>
</drivers>
It's important that you replace first.quorumserver.fqdn,second.quorumserver.fqdn with the correct ZooKeeper quorum string for your environment. You can find this in hbase-site.xml in the HBase configuration directory: hbase.zookeeper.quorum. You don't need to add Kerberos information to the connection URL string!
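For reference, the relevant entry in hbase-site.xml looks roughly like this (the hostnames are the same placeholders used above):
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>first.quorumserver.fqdn,second.quorumserver.fqdn</value>
</property>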
tl;dr
Make sure that hbase-site.xml and core-site.xml are in your classpath.
Make sure that you have a <security-domain> with a name that ZooKeeper expects (probably "Client"), that uses the com.sun.security.auth.module.Krb5LoginModule.
The Phoenix connection URL must contain the entire ZooKeeper quorum. You can't miss one server out! Make sure it matches the value in hbase-site.xml.
References:
Using Kerberos for Datasource Authentication
Phoenix data source configuration by Mark S

JGroups ec2 cluster fails to connect (times out) with Hibernate Search / Infinispan setup

I'm trying to set up a distributed Hibernate Search (5.5.4) cluster on my Elastic Beanstalk (Tomcat8) environment, using Infinispan (8.2.4) and JGroups.
I'm currently stuck on an issue where a node can't connect to an existing cluster, and it times out trying to connect.
Starting JGroups channel ISPN
variable "${jgroups.s3.pre_signed_delete_url}" in S3_PING could not be substituted; pre_signed_delete_url is removed from properties
variable "${jgroups.s3.prefix}" in S3_PING could not be substituted; prefix is removed from properties
variable "${jgroups.s3.pre_signed_put_url}" in S3_PING could not be substituted; pre_signed_put_url is removed from properties
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 1
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 2
...
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 10
ip-172-31-24-216-1799: too many JOIN attempts (10): becoming singleton
ISPN000094: Received new cluster view for channel ISPN: [ip-172-31-24-216-
Channel ISPN local address is ip-172-31-24-216-1799, physical addresses are [127.0.0.1:7800]
I have enabled all types of inbound traffic within the Elastic Beanstalk security group, and I can successfully ping the other nodes in the group using their internal IP addresses.
This is my infinispan.xml file:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:8.2 http://infinispan.org/schemas/infinispan-config-8.2.xsd"
xmlns="urn:infinispan:config:8.2">
<jgroups>
<stack-file name="default-jgroups-ec2" path="default-configs/default-jgroups-ec2.xml"/>
</jgroups>
<cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER">
<transport stack="default-jgroups-ec2"/>
<!-- Duplicate domains are allowed so that multiple deployments with default configuration
of Hibernate Search applications work - if possible it would be better to use JNDI to share
the CacheManager across applications -->
<jmx duplicate-domains="true"/>
<!-- *************************************** -->
<!-- Cache to store Lucene's file metadata -->
<!-- *************************************** -->
<replicated-cache name="LuceneIndexesMetadata" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Metadata" preload="true" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
<!-- **************************** -->
<!-- Cache to store Lucene data -->
<!-- **************************** -->
<distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Data" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</distributed-cache>
<!-- ***************************** -->
<!-- Cache to store Lucene locks -->
<!-- ***************************** -->
<replicated-cache name="LuceneIndexesLocking" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Locking" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
</cache-container>
</infinispan>
And the JGroups config file is the default EC2 config packaged with Infinispan, default-jgroups-ec2.xml.
Does anyone have any idea of where I may have gone wrong, or what exactly I need to do to get this working?
Your local address is 127.0.0.1:7800, which is the default. This will definitely not work if you need to talk to other nodes.
You can also see these messages in the logs:
variable "${jgroups.s3.pre_signed_delete_url}" in S3_PING could not be substituted; pre_signed_delete_url is removed from properties
variable "${jgroups.s3.prefix}" in S3_PING could not be substituted; prefix is removed from properties
variable "${jgroups.s3.pre_signed_put_url}" in S3_PING could not be substituted; pre_signed_put_url is removed from properties
You should probably define those variables.
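As a sketch, one option is to copy default-jgroups-ec2.xml to your own stack file, reference it from infinispan.xml, and fill in the bind address and S3_PING bucket details explicitly instead of relying on the ${...} substitutions. The attribute names below are the ones TCP and S3_PING already use; the IP, bucket and keys are placeholders, and the remaining attributes from the default file should be kept as-is:
<!-- bind to the node's private EC2 address instead of 127.0.0.1 -->
<TCP bind_addr="172.31.24.216" bind_port="7800"/>
<!-- point S3_PING at a bucket reachable by every node -->
<S3_PING location="my-jgroups-bucket"
         access_key="YOUR_ACCESS_KEY"
         secret_access_key="YOUR_SECRET_KEY"
         num_initial_members="2"/>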

Tomcat clustering - configuring 2 tomcats on different machines

I'm trying to create a Tomcat cluster in order to replicate sessions.
My two Tomcats are on two different machines, and all the examples I have found show clustering of two Tomcats on the same machine.
Where do I configure the IPs of the Tomcats in the configuration files?
(Currently, when I use the default configuration, I get this error:
Oct 30, 2013 10:21:03 AM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
INFO: Manager [localhost#/HATest]: skipping state transfer. No members active in cluster group.)
Thanks!
EDIT: my (default and not so interesting) server.xml:
<?xml version='1.0' encoding='utf-8'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8005" shutdown="SHUTDOWN">
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>
-->
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443
This connector uses the JSSE configuration, when using APR, the
connector should be using the OpenSSL style configuration
described in the APR documentation -->
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
</Engine>
For the default cluster configuration to work between multiple machines, multicast needs to be working. If the nodes can't see each other, then the chances are that multicast is not enabled on one or more of the machines, is blocked at the network level, or the machines are on different subnets.
In this case, you can use static membership to explicitly define the other machines in the cluster. See the Tomcat clustering docs for details.
You need to configure clustering inside <Host></Host>.
Use the following example to modify your server.xml:
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="10.99.60.170"
port="8082"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="8083"
securePort="-1"
host="10.99.60.172"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
Try using a configuration like the server.xml below:
<?xml version='1.0' encoding='utf-8'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8006" shutdown="SHUTDOWN">
<!-- TomEE plugin for Tomcat -->
<Listener className="org.apache.tomee.catalina.ServerListener" />
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!--APR library loader. Documentation at /docs/apr.html -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
<Listener className="org.apache.catalina.core.JasperListener" />
<!-- Prevent memory leaks due to use of particular java/javax APIs-->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" Note: A "Service" is not itself a "Container",
so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>
-->
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<Connector port="8082" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443
This connector uses the JSSE configuration, when using APR, the
connector should be using the OpenSSL style configuration
described in the APR documentation -->
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8011" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45566"
frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.16.17.244"
port="4002"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<!-- <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
-->
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4001"
securePort="-1"
host="192.168.80.128"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
</Engine>
</Service>
</Server>
In this XML, refer to the following tag:
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
**port="4001"**
securePort="-1"
**host="192.168.80.128"**
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
In this tag:
host = the IP address of the Tomcat running on the other machine
port = the Receiver port of the Tomcat running on the other machine
In the Receiver tag below:
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.16.17.244"
port="4002"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
address = the current machine's IP address
Make the above changes in the server.xml of the Tomcat running on the current machine.
Similar changes need to be made in the server.xml of the Tomcat running on the other machine; only the IP addresses and ports will change, as in the sketch below.
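For example, with the values above, the mirrored fragment on the other machine would look roughly like this (only a sketch; the rest of the Cluster element stays the same, and each member's uniqueId should be different):
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
          address="192.168.80.128"
          port="4001"
          autoBind="100"
          selectorTimeout="5000"
          maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
    <!-- points back at the first machine's Receiver -->
    <Member className="org.apache.catalina.tribes.membership.StaticMember"
            port="4002"
            securePort="-1"
            host="172.16.17.244"
            domain="staging-cluster"
            uniqueId="{1,2,3,4,5,6,7,8,9,0}"/>
</Interceptor>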
Hope this helps.

How to cluster an Infinispan cache in JBoss 7.1.1 Final?

Bean declaration:
<bean id="cacheManager" class="org.infinispan.spring.provider.SpringEmbeddedCacheManagerFactoryBean"
      p:configurationFileLocation="classpath:infinispan.xml" ... />
infinispan.xml
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<global>
<transport clusterName="CASCluster"/>
<globalJmxStatistics enabled="true"/>
</global>
<default>
<jmxStatistics enabled="true"/>
<clustering mode="distribution">
<hash numOwners="2" rehashRpcTimeout="120000"/>
<sync/>
</clustering>
</default>
<namedCache name="mtx.infinispan.global">
<eviction strategy="LIRS" maxEntries="50000" />
</namedCache>
<namedCache name="books">
<eviction strategy="LIRS" maxEntries="50000" />
</namedCache>
<namedCache name="scheduleprofiletemplates">
<eviction maxEntries="1000000" strategy="LIRS" />
<loaders passivation="false" shared="false" preload="true">
<!-- We can have multiple cache loaders, which get chained -->
<loader class="org.infinispan.loaders.file.FileCacheStore"
fetchPersistentState="true" purgerThreads="3" purgeSynchronously="true"
ignoreModifications="false" purgeOnStartup="false">
<!-- See the documentation for more configuration examples and flags. -->
<properties>
<property name="location" value="/home/cas/infinispanCache" />
</properties>
</loader>
</loaders>
</namedCache>
I want to deploy the application to a JBoss cluster so that the cache created on one node is accessible/replicated to the other nodes as well.
I am using JBoss domain mode with the full-ha profile for the deployment. I have HornetQ and mod_cluster working properly on the same cluster.
From googling, I learned that this can be achieved through JNDI. Can you please tell me how to modify the XML files to achieve this? I have to create four named caches (where should I create them: in the Spring config file or in the JBoss domain.xml?).
Thanks in advance
1. Create a JBoss cluster with the full-ha profile.
2. Create queues on the JBoss side.
3. Override the SpringEmbeddedCacheManagerFactoryBean cacheManager.
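One way to make the Spring-created cache manager form a cluster is to keep pointing SpringEmbeddedCacheManagerFactoryBean at an infinispan.xml whose global transport references a JGroups configuration that works in your environment, much like the configuration in the next question below. A rough sketch for the global section (the jgroups-tcp.xml path is a placeholder):
<global>
    <transport clusterName="CASCluster" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
        <properties>
            <!-- JGroups stack (e.g. TCP-based) reachable by all nodes -->
            <property name="configurationFile" value="jgroups-tcp.xml"/>
        </properties>
    </transport>
    <globalJmxStatistics enabled="true"/>
</global>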

Infinispan and JGroups discovery on EC2

I'm trying to use my application on AWS EC2 on some Linux boxes with Tomcat servers. Previously I used my application with Infinispan on a LAN, where I used UDP multicast for JGroups member discovery. EC2 does not support UDP multicast, which is the default node discovery approach used by Infinispan to detect nodes running in a cluster. I looked into using the S3_PING protocol, but I have not figured out why it doesn't work.
Does anyone have any ideas what the problem might be here?
Here are my configuration files:
1. applicationContext-cache.xml
<!-- Infinispan cache -->
<cache:annotation-driven/>
<import resource="classpath:/applicationContext-dao.xml"/>
<bean id="cacheManager" class="org.infinispan.spring.provider.SpringEmbeddedCacheManagerFactoryBean">
<property name="configurationFileLocation" value="classpath:/infinispan-replication.xml"/>
</bean>
<context:component-scan base-package="com.alex.cache"/>
2. infinispan-replication.xml
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<global>
<transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
<properties>
<property name="configurationFile" value="/home/akasiyanik/dev/projects/myapp/myapp-configs/jgroups.xml"/>
</properties>
</transport>
</global>
<default>
<!-- Configure a synchronous replication cache -->
<clustering mode="replication">
<sync/>
<hash numOwners="2"/>
</clustering>
</default>
</infinispan>
3. jgroups.xml
<config>
<TCP bind_port="${jgroups.tcp.port:7800}"
loopback="true"
port_range="30"
recv_buf_size="20000000"
send_buf_size="640000"
discard_incompatible_packets="true"
max_bundle_size="64000"
max_bundle_timeout="30"
enable_bundling="true"
use_send_queues="true"
sock_conn_timeout="300"
enable_diagnostics="false"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="false"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
/>
<S3_PING location="r********s" access_key="AK***************SIA"
secret_access_key="y*************************************BJ" timeout="2000" num_initial_members="2"/>
<MERGE2 max_interval="30000"
min_interval="10000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="3"/>
<VERIFY_SUSPECT timeout="1500"/>
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false"
exponential_backoff="500"
discard_delivered_msgs="true"/>
<UNICAST />
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="4M"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000"
view_bundling="true"/>
<UFC max_credits="2M"
min_threshold="0.4"/>
<MFC max_credits="2M"
min_threshold="0.4"/>
<FRAG2 frag_size="60K" />
<pbcast.STATE_TRANSFER/>
</config>
Use this: https://github.com/meltmedia/jgroups-aws
It is an implementation of a JGroups discovery protocol for AWS using the AWS API (a multicast replacement).
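For what it's worth, that project provides its own discovery protocol that replaces S3_PING in the stack. The fragment below is only a rough sketch from memory of the project's README; verify the element and attribute names against the repository, and note the tag name is a placeholder:
<!-- replaces <S3_PING .../> in the stack above; instances sharing the
     same value of the "Cluster" EC2 tag discover each other -->
<com.meltmedia.jgroups.aws.AWS_PING
        port_number="7800"
        tags="Cluster"/>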
