Tomcat clustering - configuring 2 Tomcats on different machines - session replication

I'm trying to create a Tomcat cluster in order to replicate sessions.
My two Tomcats run on two different machines, but all the examples I could find show clustering of two Tomcats on the same machine.
Where do I configure the IPs of the Tomcats in the configuration files?
Currently, when I use the default configuration, I get this error:
Oct 30, 2013 10:21:03 AM org.apache.catalina.ha.session.DeltaManager getAllClusterSessions
INFO: Manager [localhost#/HATest]: skipping state transfer. No members active in cluster group.
Thanks!
EDIT: my (default and not so interesting) server.xml:
<?xml version='1.0' encoding='utf-8'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8005" shutdown="SHUTDOWN">
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>
-->
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443
This connector uses the JSSE configuration, when using APR, the
connector should be using the OpenSSL style configuration
described in the APR documentation -->
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
</Engine>
</Service>
</Server>

For the default cluster configuration to work between multiple machines, multicast needs to be working. If the nodes can't see each other, the chances are that multicast is not enabled on one or more of the machines, is blocked at the network level, or the machines are on different subnets.
In that case, you can use static membership to explicitly define the other machines in the cluster. See the Tomcat clustering docs for details.
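Independent of membership, the Tomcat cluster-howto also requires the web application itself to be marked distributable, and everything placed in the session must be Serializable; otherwise the DeltaManager has nothing it can replicate. A minimal web.xml fragment (the schema version shown is just an example):
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
<!-- Tells Tomcat this webapp's sessions may be replicated across
the cluster; all session attributes must be java.io.Serializable -->
<distributable/>
</web-app>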

The clustering configuration goes inside <Host></Host>.
Use the following example as a basis for modifying your server.xml:
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="10.99.60.170"
port="8082"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="8083"
securePort="-1"
host="10.99.60.172"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
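For static membership to form, the two nodes must mirror each other: each node's Receiver address/port appears as a StaticMember entry on the other node. Given the addresses above, a sketch of the counterpart Channel on the second machine (10.99.60.172) would look roughly like this:
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="10.99.60.172"
port="8083"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="8082"
securePort="-1"
host="10.99.60.170"
domain="staging-cluster"
uniqueId="{1,2,3,4,5,6,7,8,9,0}"/>
</Interceptor>
Note that the uniqueId of each StaticMember must be unique within the cluster, so don't reuse the same byte array on both nodes.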

Try using a configuration like the server.xml below:
<?xml version='1.0' encoding='utf-8'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8006" shutdown="SHUTDOWN">
<!-- TomEE plugin for Tomcat -->
<Listener className="org.apache.tomee.catalina.ServerListener" />
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!--APR library loader. Documentation at /docs/apr.html -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
<Listener className="org.apache.catalina.core.JasperListener" />
<!-- Prevent memory leaks due to use of particular java/javax APIs-->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" Note: A "Service" is not itself a "Container",
so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="150" minSpareThreads="4"/>
-->
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<Connector port="8082" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443
This connector uses the JSSE configuration, when using APR, the
connector should be using the OpenSSL style configuration
described in the APR documentation -->
<!--
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8011" protocol="AJP/1.3" redirectPort="8443" />
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45566"
frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.16.17.244"
port="4002"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<!-- <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
-->
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4001"
securePort="-1"
host="192.168.80.128"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
</Engine>
</Service>
</Server>
In this XML, pay particular attention to the port and host attributes in the tag below:
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4001"
securePort="-1"
host="192.168.80.128"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
</Interceptor>
Here:
host = the IP of the Tomcat running on the other machine
port = the Receiver port of the Tomcat running on the other machine
And in the Receiver tag below:
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.16.17.244"
port="4002"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
address = the current machine's own IP address
Make these changes in the server.xml of the Tomcat running on the current machine.
The same changes need to be made in the server.xml of the Tomcat running on the other machine; only the IPs and ports change.
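To make the mirroring concrete: assuming the current machine is 172.16.17.244 (Receiver port 4002) and the other machine is 192.168.80.128 (Receiver port 4001), the other machine's Channel would roughly contain the two elements below, with its own address in the Receiver and this machine's address in the StaticMember (a sketch; also give each node its own jvmRoute and a distinct uniqueId):
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.80.128"
port="4001"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="172.16.17.244"
domain="staging-cluster"
uniqueId="{1,2,3,4,5,6,7,8,9,0}"/>
</Interceptor>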
Hope this will help you.

Related

WebSphere Liberty AdminCenter login does not work when custom JAAS context is defined

I am new to the WebSphere Liberty profile. I am working on version 17.0.0.4. What I am trying to achieve is a custom JAAS login setup for the application. The application works fine on WebSphere 8.5.
I have reviewed many links from the IBM Knowledge Center, but got either Admin Center login or custom JAAS login working, never both together. With WebSphere 8.5 we have a hierarchy that decides which authentication mechanism goes where, but with Liberty, if I set up the custom JAAS authentication mechanism then I can't log in to the WebSphere Liberty Admin Center, and if I configure server.xml for Admin Center, then it does not authenticate my application's users (because the application is designed to authenticate users via JAAS).
Here is my server.xml file.
<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<!-- Enable features -->
<featureManager>
<feature>adminCenter-1.0</feature>
<feature>ssl-1.0</feature>
<feature>webProfile-7.0</feature>
<feature>appSecurity-2.0</feature>
</featureManager>
<keyStore id="defaultKeyStore" password="WASLiberty" />
<!-- Admin Center username/password -->
<!--<quickStartSecurity userName="admin" userPassword="admin123" />-->
<!-- Define an Administrator and non-Administrator -->
<basicRegistry id="basic">
<user name="admin" password="{xor}PjsyNjFubWw=" /> <!-- Encoded version of "admin123" -->
<user name="nonadmin" password="nonadmin123" />
</basicRegistry>
<!-- Assign 'admin' to Administrator -->
<administrator-role>
<user>admin</user>
</administrator-role>
<!-- JNDI Connection configuration -->
<dataSource id="MY_CUSTOM_DS" jndiName="jdbc/MY_CUSTOM_DS">
<jdbcDriver libraryRef="sqlserverjdbc"/>
<properties.microsoft.sqlserver databaseName="mydb"
serverName="localhost" portNumber="1433"
user="sa" password="root" />
</dataSource>
<!-- JDBC Driver file location -->
<library id="sqlserverjdbc">
<file name="${wlp.install.dir}/lib/sqljdbc4.jar"/>
</library>
<library id="MyLoginModuleLib">
<fileset dir="${wlp.install.dir}/lib" includes="custom_auth.jar"/>
</library>
<!-- JAAS Login Module for web application -->
<jaasLoginModule id="myCustom"
className="com.kana.auth.websphere.MyLoginModule"
controlFlag="REQUIRED" libraryRef="MyLoginModuleLib">
<options myOption1="value1" myOption2="value2"/>
</jaasLoginModule>
<!-- JAAS Login Context -->
<jaasLoginContextEntry id="system.WEB_INBOUND" name="system.WEB_INBOUND"
loginModuleRef="myCustom, hashtable, userNameAndPassword, certificate, token" />
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<httpEndpoint id="defaultHttpEndpoint"
host="*"
httpPort="9080"
httpsPort="9443" />
<webApplication contextRoot="mywebapp" location="mywebapp.war" />
<!-- Automatically expand WAR files and EAR files -->
<applicationManager autoExpand="true"/>
<!-- Enable remote file access -->
<remoteFileAccess>
<writeDir>${server.config.dir}</writeDir>
</remoteFileAccess>
</server>
Can anyone please point out where I am making a mistake?
If you are using a custom JAAS login module, then your custom JAAS login module needs to ignore the Admin Center authentication and let the default login modules handle it.
The better option is to use a custom TAI (Trust Association Interceptor) to handle application authentication and let the default login modules handle the Admin Center authentication.
Regards,
Ut Le
Many thanks Ut Le for providing the solution.
From the log file, I found that the root cause was a ClassNotFoundException: the class referenced from <jaasLoginModule> could not be found. It turned out I was pointing to a jar file that was not at that location, so it was a silly configuration mistake on my part.

Tomcat session replication through mod_jk: session expires when one node is down

I have set up session replication between two Tomcats running on different machines using mod_jk. When one Tomcat goes down, my session expires, and when I refresh, the request goes to the other Tomcat. Ideally, the session shouldn't expire. Any help will be highly appreciated. Here are my config files.
Server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Clustering configuration start -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<!--<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>-->
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564" frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto" port="4000" autoBind="100"
selectorTimeout="5000" maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" />
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener" />
</Cluster>
workers.properties
# First we define virtual worker's list
worker.list=jkstatus,LoadBalancer
# Enable virtual workers earlier
worker.jkstatus.type=status
worker.LoadBalancer.type=lb
# Add Tomcat instances as workers, two workers in our case
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
worker.worker2.type=ajp13
worker.worker2.host=10.57.79.232
worker.worker2.port=8019
# Provide workers list to the load balancer
worker.LoadBalancer.balance_workers=worker1,worker2
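One thing worth checking in a setup like this: failover behaviour is governed by the balancer's sticky-session settings, which the workers.properties above leaves at their defaults. A sketch making them explicit (standard mod_jk load-balancer properties; these values are the usual failover-friendly choice):
# Pin each session to the worker that created it (default behaviour)
worker.LoadBalancer.sticky_session=true
# When the pinned worker is down, fail over to another worker
# instead of returning an error
worker.LoadBalancer.sticky_session_force=false
For the session to survive the switch, the webapp must also be marked <distributable/> in web.xml and replication between the nodes must actually be working.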

JGroups ec2 cluster fails to connect (times out) with Hibernate Search / Infinispan setup

I'm trying to set up a distributed Hibernate Search (5.5.4) cluster on my Elastic Beanstalk (Tomcat8) environment, using Infinispan (8.2.4) and JGroups.
I'm currently stuck on an issue where a node can't connect to an existing cluster, and it times out trying to connect.
Starting JGroups channel ISPN
variable "${jgroups.s3.pre_signed_delete_url}" in S3_PING could not be substituted; pre_signed_delete_url is removed from properties
variable "${jgroups.s3.prefix}" in S3_PING could not be substituted; prefix is removed from properties
variable "${jgroups.s3.pre_signed_put_url}" in S3_PING could not be substituted; pre_signed_put_url is removed from properties
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 1
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 2
...
ip-172-31-24-216-1799: JOIN(ip-172-31-24-216-1799) sent to ip-172-31-14-33-238 timed out (after 5000 ms), on try 10
ip-172-31-24-216-1799: too many JOIN attempts (10): becoming singleton
ISPN000094: Received new cluster view for channel ISPN: [ip-172-31-24-216-1799]
Channel ISPN local address is ip-172-31-24-216-1799, physical addresses are [127.0.0.1:7800]
I have enabled all types of inbound traffic within the elastic beanstalk security group, and can successfully ping the other nodes in the group using the internal IP addresses.
This is my infinispan.xml file
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:8.2 http://infinispan.org/schemas/infinispan-config-8.2.xsd"
xmlns="urn:infinispan:config:8.2">
<jgroups>
<stack-file name="default-jgroups-ec2" path="default-configs/default-jgroups-ec2.xml"/>
</jgroups>
<cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER">
<transport stack="default-jgroups-ec2"/>
<!-- Duplicate domains are allowed so that multiple deployments with default configuration
of Hibernate Search applications work - if possible it would be better to use JNDI to share
the CacheManager across applications -->
<jmx duplicate-domains="true"/>
<!-- *************************************** -->
<!-- Cache to store Lucene's file metadata -->
<!-- *************************************** -->
<replicated-cache name="LuceneIndexesMetadata" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Metadata" preload="true" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
<!-- **************************** -->
<!-- Cache to store Lucene data -->
<!-- **************************** -->
<distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Data" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</distributed-cache>
<!-- ***************************** -->
<!-- Cache to store Lucene locks -->
<!-- ***************************** -->
<replicated-cache name="LuceneIndexesLocking" mode="SYNC" remote-timeout="25000">
<locking striping="false" acquire-timeout="10000" concurrency-level="500" write-skew="false"/>
<transaction mode="NONE"/>
<eviction max-entries="-1" strategy="NONE"/>
<expiration max-idle="-1"/>
<persistence>
<file-store path="LuceneIndexes/Locking" />
</persistence>
<indexing index="NONE"/>
<state-transfer enabled="true" timeout="480000" await-initial-transfer="true"/>
</replicated-cache>
</cache-container>
</infinispan>
And the JGroups config file is the default EC2 config packaged with Infinispan, default-jgroups-ec2.xml.
Does anyone have any idea where I may have gone wrong, or what exactly I need to do to get this working?
Your local address is 127.0.0.1:7800, which is the default. This will definitely not work if you need to talk to other nodes.
Also, you can see these messages in the logs:
variable "${jgroups.s3.pre_signed_delete_url}" in S3_PING could not be substituted; pre_signed_delete_url is removed from properties
variable "${jgroups.s3.prefix}" in S3_PING could not be substituted; prefix is removed from properties
variable "${jgroups.s3.pre_signed_put_url}" in S3_PING could not be substituted; pre_signed_put_url is removed from properties
You should probably define those variables.
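As a sketch of what defining them looks like in practice, assuming the stock default-jgroups-ec2.xml shipped with Infinispan (which binds via ${jgroups.tcp.address} and configures S3_PING from ${jgroups.s3.bucket}, ${jgroups.s3.access_key} and ${jgroups.s3.secret_access_key}; check your copy for the exact variable names), you would pass them as JVM system properties, e.g. in Tomcat's setenv.sh:
CATALINA_OPTS="$CATALINA_OPTS \
-Djgroups.tcp.address=172.31.24.216 \
-Djgroups.s3.bucket=my-jgroups-bucket \
-Djgroups.s3.access_key=... \
-Djgroups.s3.secret_access_key=..."
Here 172.31.24.216 stands for the node's own internal EC2 address (each node sets its own) and my-jgroups-bucket is a hypothetical S3 bucket the instances can reach.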

How to cluster an Infinispan cache in JBoss 7.1.1 Final?

Bean declaration:
<bean id="cacheManager" class="org.infinispan.spring.provider.SpringEmbeddedCacheManagerFactoryBean"
p:configurationFileLocation="classpath:infinispan.xml" />
infinispan.xml
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<global>
<transport clusterName="CASCluster"/>
<globalJmxStatistics enabled="true"/>
</global>
<default>
<jmxStatistics enabled="true"/>
<clustering mode="distribution">
<hash numOwners="2" rehashRpcTimeout="120000"/>
<sync/>
</clustering>
</default>
<namedCache name="mtx.infinispan.global">
<eviction strategy="LIRS" maxEntries="50000" />
</namedCache>
<namedCache name="books">
<eviction strategy="LIRS" maxEntries="50000" />
</namedCache>
<namedCache name="scheduleprofiletemplates">
<eviction maxEntries="1000000" strategy="LIRS" />
<loaders passivation="false" shared="false" preload="true">
<!-- We can have multiple cache loaders, which get chained -->
<loader class="org.infinispan.loaders.file.FileCacheStore"
fetchPersistentState="true" purgerThreads="3" purgeSynchronously="true"
ignoreModifications="false" purgeOnStartup="false">
<!-- See the documentation for more configuration examples and flags. -->
<properties>
<property name="location" value="/home/cas/infinispanCache" />
</properties>
</loader>
</loaders>
</namedCache>
I want to deploy the application on a JBoss cluster so that a cache created on one node is accessible/replicated to the other node as well.
I am using JBoss domain mode with the full-ha profile for the deployment. I have HornetQ and mod_cluster working properly on the same cluster.
By googling, I came to know that this is achieved through JNDI. Can you please tell me how to modify the XML files to achieve this? I have to create 4 named caches (where do I create these? In the Spring config file or in the JBoss domain.xml?).
Thanks in advance
1. Create a JBoss cluster full-ha profile.
2. Create the queues on the JBoss side.
3. Override the SpringEmbeddedCacheManagerFactoryBean cacheManager.

How do I set the number of "web container worker threads" in JBoss AS 7?

Most app servers provide a way of adjusting the number of web container worker threads when it comes to tuning. Is it possible to do that in JBoss AS 7.x?
Thanks.
You can tune the HTTP Connector of the AS7 web subsystem. The attributes available for tuning the HTTP Connector are described in "The HTTP Connector" documentation. To define the max-connections for this connector, you need to change it in $JBOSS_HOME/standalone/configuration/standalone.xml or $JBOSS_HOME/domain/configuration/domain.xml.
See this piece of configuration:
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
<connector name="http"
protocol="HTTP/1.1"
scheme="http"
socket-binding="http"
max-connections="250"/>
...
</subsystem>
To define a thread pool specific for the HTTP Connector you need to use the AS7 threads subsystem like this one:
<subsystem xmlns="urn:jboss:domain:threads:1.0">
<bounded-queue-thread-pool name="http-executor" blocking="true">
<core-threads count="10" per-cpu="20" />
<queue-length count="10" per-cpu="20" />
<max-threads count="10" per-cpu="20" />
<keepalive-time time="10" unit="seconds" />
</bounded-queue-thread-pool>
</subsystem>
and then you need to reference it in the executor attribute of the HTTP Connector. See this piece of config:
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false">
<connector name="http"
protocol="HTTP/1.1"
scheme="http"
socket-binding="http"
max-connections="250"
executor="http-executor"/>
...
</subsystem>
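As an alternative to hand-editing the XML, the same attributes can be set through the jboss-cli management tool (a sketch; the resource addresses assume the standard AS7 web subsystem layout shown above):
/subsystem=web/connector=http:write-attribute(name=max-connections,value=250)
/subsystem=web/connector=http:write-attribute(name=executor,value=http-executor)
:reload
The reload is needed because connector attributes are generally only picked up when the connector is restarted.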
For more details about tuning AS7, see the post "JBoss AS 7 Performance tuning - Tuning Web server thread pool" on masterjboss.com.
