I'm facing a connection-refused problem on the cluster node protocol port.
I'm using the following configs to create the two-node cluster:
For the first node (manager):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=true
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=10.129.140.22
nifi.web.http.port=3000
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10000
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=
nifi.cluster.load.balance.port=6342
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=localhost:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
For the second node (slave):
####################
# State Management #
####################
nifi.state.management.configuration.file=./conf/state-management.xml
# The ID of the local state provider
nifi.state.management.provider.local=local-provider
# The ID of the cluster-wide state provider. This will be ignored if NiFi is not clustered but must be populated if running in a cluster.
nifi.state.management.provider.cluster=zk-provider
# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false
# Properties file that provides the ZooKeeper properties to use if <nifi.state.management.embedded.zookeeper.start> is set to true
nifi.state.management.embedded.zookeeper.properties=./conf/zookeeper.properties
# web properties #
nifi.web.war.directory=./lib
nifi.web.http.host=
nifi.web.http.port=9021
nifi.web.http.network.interface.default=
nifi.web.https.host=
nifi.web.https.port=
nifi.web.https.network.interface.default=
nifi.web.jetty.working.directory=./work/jetty
nifi.web.jetty.threads=200
nifi.web.max.header.size=16 KB
nifi.web.proxy.context.path=
nifi.web.proxy.host=
# cluster node properties (only configure for cluster nodes) #
nifi.cluster.is.node=true
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=10001
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.protocol.max.threads=50
nifi.cluster.node.event.history.size=25
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.node.max.concurrent.requests=100
nifi.cluster.firewall.file=
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.flow.election.max.candidates=
# cluster load balancing properties #
nifi.cluster.load.balance.host=10.129.140.22
nifi.cluster.load.balance.port=6343
nifi.cluster.load.balance.connections.per.node=4
nifi.cluster.load.balance.max.thread.count=8
nifi.cluster.load.balance.comms.timeout=30 sec
# zookeeper properties, used for cluster management #
nifi.zookeeper.connect.string=10.129.140.22:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.session.timeout=3 secs
nifi.zookeeper.root.node=/nifi
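For reference, node 1 also starts the embedded ZooKeeper driven by the conf/zookeeper.properties file referenced above. The stock NiFi 1.9.x template, with the server line filled in for the manager's address, looks roughly like this (shown only for context; these are the usual defaults, not copied verbatim from my file):
clientPort=2181
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=./state/zookeeper
autopurge.snapRetainCount=30
server.1=10.129.140.22:2888:3888
(server.1 also needs a matching ./state/zookeeper/myid file containing 1, and clientPort 2181 must be reachable from the second node.)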
The log files show the following:
For the slave
2019-05-23 10:37:07,384 INFO [main] o.a.n.c.repository.FileSystemRepository Initializing FileSystemRepository with 'Always Sync' set to false
2019-05-23 10:37:07,541 INFO [main] o.apache.nifi.controller.FlowController Not enabling RAW Socket Site-to-Site functionality because nifi.remote.input.socket.port is not set
2019-05-23 10:37:07,546 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:37:07,591 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,658 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:07,693 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:07,697 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10000). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:07,699 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,699 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:07,703 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:07,706 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:09,587 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a6a4595{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:09,850 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=77ms
2019-05-23 10:37:09,852 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,873 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4b1b2255{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:09,895 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:09,896 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:09,915 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,917 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#4965454c{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:09,936 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=8ms
2019-05-23 10:37:09,955 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:09,957 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1e4a4ed5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:09,967 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#4518bffd{HTTP/1.1,[http/1.1]}{0.0.0.0:9021}
2019-05-23 10:37:09,967 INFO [main] org.eclipse.jetty.server.Server Started #28769ms
2019-05-23 10:37:09,978 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:09,982 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10001
2019-05-23 10:37:10,026 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:10,071 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: localhost:9021
2019-05-23 10:37:10,073 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:10,074 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,715 WARN [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Failed to determine which node is elected active Cluster Coordinator: ZooKeeper reports the address as localhost:10000, but there is no node with this address. Attempted to determine the node's information but failed to retrieve its information due to org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:12,720 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for localhost:9021 -- Requesting that node connect to cluster
2019-05-23 10:37:12,721 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of localhost:9021 changed from NodeConnectionStatus[nodeId=localhost:9021, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=localhost:9021, state=CONNECTING, updateId=3]
2019-05-23 10:37:15,075 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:15,076 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10000 due to: java.net.ConnectException: Connection refused (Connection refused)
For the manager
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.io.tmpdir=/tmp
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:java.compiler=<NA>
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.name=Linux
2019-05-23 10:36:59,752 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.arch=amd64
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:os.version=4.15.0-20-generic
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.name=root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.home=/root
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer Server environment:user.dir=/home/superman/nifi-1.9.2
2019-05-23 10:36:59,753 INFO [main] o.a.zookeeper.server.ZooKeeperServer tickTime set to 2000
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer minSessionTimeout set to -1
2019-05-23 10:36:59,754 INFO [main] o.a.zookeeper.server.ZooKeeperServer maxSessionTimeout set to -1
2019-05-23 10:36:59,855 INFO [main] o.apache.nifi.controller.FlowController Checking if there is already a Cluster Coordinator Elected...
2019-05-23 10:36:59,903 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:36:59,950 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40388
2019-05-23 10:36:59,950 INFO [SyncThread:0] o.a.z.server.persistence.FileTxnLog Creating new log file: log.3c
2019-05-23 10:36:59,963 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130000 with negotiated timeout 4000 for client /127.0.0.1:40388
2019-05-23 10:36:59,975 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:36:59,998 INFO [Curator-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl backgroundOperationsLoop exiting
2019-05-23 10:37:00,003 INFO [main] o.apache.nifi.controller.FlowController The Election for Cluster Coordinator has already begun (Leader is localhost:10001). Will not register to be elected for this role until after connecting to the cluster and inheriting the cluster's flow.
2019-05-23 10:37:00,005 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=true] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,005 INFO [main] o.a.c.f.imps.CuratorFrameworkImpl Starting
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is a silent observer in the election.
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] started
2019-05-23 10:37:00,017 INFO [main] o.a.n.c.c.h.AbstractHeartbeatMonitor Heartbeat Monitor started
2019-05-23 10:37:00,019 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] o.a.zookeeper.server.ZooKeeperServer Client attempting to establish new session at /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [SyncThread:0] o.a.zookeeper.server.ZooKeeperServer Established session 0x16ae443f4130001 with negotiated timeout 4000 for client /127.0.0.1:40390
2019-05-23 10:37:00,020 INFO [main-EventThread] o.a.c.f.state.ConnectionStateManager State change: CONNECTED
2019-05-23 10:37:02,022 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1a05ff8e{nifi-api,/nifi-api,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-api-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-api-1.9.2.war}
2019-05-23 10:37:02,373 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=165ms
2019-05-23 10:37:02,375 INFO [main] o.e.j.s.h.C._nifi_content_viewer No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,401 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#251e2f4a{nifi-content-viewer,/nifi-content-viewer,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-content-viewer-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-content-viewer-1.9.2.war}
2019-05-23 10:37:02,419 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,420 WARN [main] o.e.j.webapp.StandardDescriptorProcessor Duplicate mapping from / to default
2019-05-23 10:37:02,421 INFO [main] o.e.j.s.h.ContextHandler._nifi_docs No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,441 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#1abea1ed{nifi-docs,/nifi-docs,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-docs-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-docs-1.9.2.war}
2019-05-23 10:37:02,457 INFO [main] o.e.j.a.AnnotationConfiguration Scanning elapsed time=6ms
2019-05-23 10:37:02,475 INFO [main] o.e.j.server.handler.ContextHandler._ No Spring WebApplicationInitializer types detected on classpath
2019-05-23 10:37:02,478 INFO [main] o.e.jetty.server.handler.ContextHandler Started o.e.j.w.WebAppContext#6f5288c5{nifi-error,/,file:///home/superman/nifi-1.9.2/work/jetty/nifi-web-error-1.9.2.war/webapp/,AVAILABLE}{./work/nar/framework/nifi-framework-nar-1.9.2.nar-unpacked/NAR-INF/bundled-dependencies/nifi-web-error-1.9.2.war}
2019-05-23 10:37:02,488 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector#167ed1cf{HTTP/1.1,[http/1.1]}{10.129.140.22:3000}
2019-05-23 10:37:02,488 INFO [main] org.eclipse.jetty.server.Server Started #26145ms
2019-05-23 10:37:02,500 INFO [main] org.apache.nifi.web.server.JettyServer Loading Flow...
2019-05-23 10:37:02,503 INFO [main] org.apache.nifi.io.socket.SocketListener Now listening for connections from nodes on port 10000
2019-05-23 10:37:02,545 INFO [main] o.apache.nifi.controller.FlowController Successfully synchronized controller with proposed flow
2019-05-23 10:37:02,587 INFO [main] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:02,589 INFO [main] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10001; will use this address for sending heartbeat messages
2019-05-23 10:37:02,590 WARN [main] o.a.nifi.controller.StandardFlowService Failed to connect to cluster due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed to create socket to localhost:10001 due to: java.net.ConnectException: Connection refused (Connection refused)
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0003, timeout of 4000ms exceeded
2019-05-23 10:37:04,001 INFO [SessionTracker] o.a.zookeeper.server.ZooKeeperServer Expiring session 0x16ae42f180d0002, timeout of 4000ms exceeded
2019-05-23 10:37:05,026 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:05,028 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=0] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,591 WARN [main] o.a.nifi.controller.StandardFlowService There is currently no Cluster Coordinator. This often happens upon restart of NiFi when running an embedded ZooKeeper. Will register this node to become the active Cluster Coordinator and will attempt to connect to cluster again
2019-05-23 10:37:07,594 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager CuratorLeaderElectionManager[stopped=false] Registered new Leader Selector for role Cluster Coordinator; this node is an active participant in the election.
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.a.n.c.l.e.CuratorLeaderElectionManager org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener#1d6dcdcb This node has been elected Leader for Role 'Cluster Coordinator'
2019-05-23 10:37:07,612 INFO [Leader Election Notification Thread-1] o.apache.nifi.controller.FlowController This node elected Active Cluster Coordinator
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,668 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,669 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=1] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,675 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=2] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,694 INFO [Process Cluster Protocol Request-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=5]
2019-05-23 10:37:07,695 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Received heartbeat from node previously disconnected due to Has Not Yet Connected to Cluster. Issuing reconnection request.
2019-05-23 10:37:07,699 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for 10.129.140.22:3000 -- Requesting that node connect to cluster
2019-05-23 10:37:07,700 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=DISCONNECTED, Disconnect Code=Has Not Yet Connected to Cluster, Disconnect Reason=Has Not Yet Connected to Cluster, updateId=3] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,701 INFO [Process Cluster Protocol Request-5] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=7]
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 19834836-9bda-41b3-8fef-4a288d90c7bf (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 33 millis
2019-05-23 10:37:07,702 INFO [Process Cluster Protocol Request-5] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 85b0bb3f-c2a6-4dfd-abd6-e9df14710c4d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 10 millis
2019-05-23 10:37:07,703 INFO [Process Cluster Protocol Request-3] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=6]
2019-05-23 10:37:07,705 INFO [Process Cluster Protocol Request-3] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 80447901-4ad3-44e3-91ad-d9f075624eae (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 31 millis
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,706 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,707 INFO [Process Cluster Protocol Request-2] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 22cdceee-c01f-445f-a091-38812e878d10 (type=RECONNECTION_REQUEST, length=3095 bytes) from 10.129.140.22:3000 in 34 millis
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,708 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,709 INFO [Process Cluster Protocol Request-4] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 8605cf39-2034-4ee2-92c4-0fbe54e97fb2 (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 27 millis
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,712 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,725 INFO [Process Cluster Protocol Request-6] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 0ca55348-44eb-416b-91dd-3d80da4c5ebe (type=RECONNECTION_REQUEST, length=3013 bytes) from 10.129.140.22:3000 in 29 millis
2019-05-23 10:37:07,725 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.c.node.NodeClusterCoordinator Status of 10.129.140.22:3000 changed from NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8] to NodeConnectionStatus[nodeId=10.129.140.22:3000, state=CONNECTING, updateId=8]
2019-05-23 10:37:07,728 INFO [Process Cluster Protocol Request-7] o.a.n.c.p.impl.SocketProtocolListener Finished processing request c9b647d7-67ac-4d0a-833b-8a0a8cc0ba6d (type=NODE_STATUS_CHANGE, length=1103 bytes) from localhost.localdomain in 3 millis
2019-05-23 10:37:07,728 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,725 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Processing reconnection request from cluster coordinator.
2019-05-23 10:37:07,732 INFO [Heartbeat Monitor Thread-1] o.a.n.c.c.h.AbstractHeartbeatMonitor Finished processing 4 heartbeats in 2 seconds, 708 millis
2019-05-23 10:37:07,732 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Received a Reconnection Request that contained no DataFlow. Will attempt to connect to cluster using local flow.
2019-05-23 10:37:07,733 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,734 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,735 INFO [Reconnect to Cluster] o.a.nifi.controller.StandardFlowService Connecting Node: 10.129.140.22:3000
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,736 INFO [Reconnect to Cluster] o.a.n.c.c.n.LeaderElectionNodeProtocolSender Determined that Cluster Coordinator is located at localhost:10000; will use this address for sending heartbeat messages
2019-05-23 10:37:07,748 INFO [Process Cluster Protocol Request-8] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 434daf63-1beb-4b82-9290-bb0da4e89b7f (type=RECONNECTION_REQUEST, length=2972 bytes) from 10.129.140.22:3000 in 16 millis
2019-05-23 10:37:07,749 INFO [Reconnect 10.129.140.22:3000] o.a.n.c.c.node.NodeClusterCoordinator Successfully requested that 10.129.140.22:3000 join the cluster
To fix this, set these properties on both nodes:
nifi.web.http.host=<host>
nifi.cluster.node.address=<host>
Also beware of how this value resolves across the network:
nifi.zookeeper.connect.string=localhost:2181
Here the first node uses 'localhost' while the second node uses the real IP address; 'localhost' only resolves to the node itself. The nodes share these addresses during replication, primary/coordinator node election and flow election, so every advertised address must be reachable from both nodes.
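For example, with the layout above the relevant lines could end up looking roughly like this (a sketch only; 10.129.140.23 is a stand-in for the second node's real address, which isn't shown in the question):
# Node 1 (runs the embedded ZooKeeper)
nifi.web.http.host=10.129.140.22
nifi.cluster.node.address=10.129.140.22
nifi.cluster.node.protocol.port=10000
nifi.zookeeper.connect.string=10.129.140.22:2181
# Node 2
nifi.web.http.host=10.129.140.23
nifi.cluster.node.address=10.129.140.23
nifi.cluster.node.protocol.port=10001
nifi.zookeeper.connect.string=10.129.140.22:2181
With nifi.cluster.node.address left blank, each node falls back to advertising itself as localhost, which is why ZooKeeper reports the coordinator as localhost:10000 and the other node's connection attempt is refused.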
I have a Java EE project that ran on WildFly 8 with HornetQ:
--- The server side was deployed as an .ear package.
--- The client side is a Java GUI that communicated with the server via JMS messaging.
It all worked when I put it away.
I'm trying to resurrect it on WildFly 10 with ActiveMQ Artemis, and it's killing me.
After clearing all the exceptions from the HornetQ-to-Artemis migration, here's where I am:
--- The WF console confirms gotest.ear deploys with no exceptions. (Console output on startup is pasted below.)
--- The client's Eclipse console output confirms it has a connection and that it sent an ObjectMessage. (My formatted output is pasted below.)
--- standalone-full.xml shows the queue my MDB listens to is configured, AND the WF browser console confirms it. (Info on WF's configuration is also pasted below.)
But my MDB will not write or log to the WF console at all, not even from its constructor. And there is no response at all from its onMessage() to messages sent from the client.
I'm stuck, because I don't get exceptions anywhere to hint at what's wrong.
Can anyone point me in the right direction?
Help!!
MDB CODE
(I’ve cut it to the bone until I can get it to write or log to the WF console acknowledging it is hearing messages.)
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.enterprise.context.ApplicationScoped;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.persistence.Transient;
import org.apache.log4j.Logger;

@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/sendToServerQueue"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")
    })
public class GoMsgBean implements MessageListener {

    final Logger logger = Logger.getLogger(GoMsgBean.class.getName());

    public GoMsgBean() {
        System.out.println("System.out message FROM GoMsgBean Constructor");
        logger.info("Logger message FROM GoMsgBean Constructor");
    }

    @PostConstruct
    public void myInit() {
        System.out.println("System.out message FROM GoMsgBean PostConstruct");
        logger.info("Logger message FROM GoMsgBean PostConstruct");
    }

    @Override
    public void onMessage(Message msg) {
        System.out.println("System.out message FROM GoMsgBean onMessage()");
        logger.info("Logger message FROM GoMsgBean onMessage()");
    }
}
STANDALONE-FULL.XML
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
. . .
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="SendToServerQueue" entries="java:jboss/exported/jms/queue/sendToServerQueue"/>
<jms-queue name="SendToClientQueue2" entries="java:jboss/exported/jms/queue/sendToClientQueue2"/>
. . .
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</subsystem>
WFLY localhost:9990 Console Configuration Details
Queues/Topics
Name: SendToServerQueue
JNDINames: java:jboss/exported/jms/queue/sendToServerQueue
Durable?: true
Selector: <blank>
Connection Factories
Name InVmConnectionFactory
JNDI java:/ConnectionFactory
Name: RemoteConnectionFactory
JNDI java:jboss/exported/jms/RemoteConnectionFactory
Security Settings
Pattern #
Role guest
Address Settings
Pattern #
Diverts No Items!
ECLIPSE CONSOLE OUTPUT (when client GUI opens)
(I’ve added an insane number of System.out.printlns to track what’s happening line by line. Each output string is prefaced by the Class.method () that’s writing the line.)
MsgCtrSnd.run () beg
MsgCtrSnd.run () Requesting InitialContext
CONNECTION VARIABLES
key: java.naming.provider.url value: http-remoting://localhost:8080
key: java.naming.factory.initial value: org.jboss.naming.remote.client.InitialContextFactory
key: java.naming.security.principal value: jmsUser
key: java.naming.security.credentials value: jmsUser123!
MsgCtrSnd.run () InitialContext OK: javax.naming.InitialContext#4135c3b
MsgCtrSnd.run () Look up ConnectionFactory with: "jms/RemoteConnectionFactory"
MsgCtrSnd.run () ConnectionFactory: Ok:
org.apache.activemq.artemis.jms.client.ActiveMQJMSConnectionFactory
MsgCtrSnd.run () Instantiating Connection
MsgCtrSnd.run () JMS Connection OK:
org.apache.activemq.artemis.jms.client.ActiveMQConnection#4d5d943d
Instantiating Session
MsgCtrSnd.run () JMS Session OK:
ActiveMQSession->ClientSessionImpl
[name=212aa734-90f5-11e7-aa7a-a3fb7876c1f2, username=appUser, closed=false, factory = org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl#2a4fb17b, metaData=(jms-session=,)]#368f2016
MsgCtrSnd.run () Lookup Queue w/ JNDI name: ["jms/queue/sendToServerQueue"]
MsgCtrSnd.run () Queue secured:
ActiveMQQueue[SendToServerQueue]IS NOT NULL
MsgCtrSnd.run () Instantiating Message Producer
MsgCtrSnd.run () Message Producer [IS NOT NULL]
ActiveMQMessageProducer->org.apache.activemq.artemis.core.client.impl.ClientProducerImpl#59474f18
MsgCtrSnd.run () Starting jmsConnection
MsgCtrSnd.run () JMS Send Connection : Ok.
MsgCtrSnd.run () end
ECLIPSE CONSOLE OUTPUT (when client GUI is used to send a message to log in)
Cntrl.executeMenuAction () beg
Cntrl.executeMenuAction () Switching to: Log In
Cntrl.loginSend () beg
Cntrl.loginSend () calling DataDialog.collectData()
Cntrl.loginSend () EntityFieldsCollector ok
Cntrl.loginSend () returned from DataDialog.collectData()
Cntrl.loginSend () EntityFieldsCollector contains 2 entries.
Cntrl.loginSend () key: Member ID: 308486 value: ID
Cntrl.loginSend () key: Member PW 308487 value: PW
Cntrl.loginSend () message center connection ok
MsgCtrSnd.sendMsg () beg
MsgCtrSnd.sendMsg () Action: Log In
MsgCtrSnd.sendMsg () clientHash: aaaaaa
MsgCtrSnd.sendMsg () memberId : ID
MsgCtrSnd.sendMsg () memberPw : PW
MsgCtrSnd.sendMsg () clientUserId : null
MsgCtrSnd.sendMsg () clientUserPw : null
MsgCtrSnd.sendMsg () calling createObjectMessage ()
MsgCtrSnd.sendMsg () ObjectMessage instantiated.
MsgCtrSnd.sendMsg () Object Message object: java.util.ArrayList
MsgCtrSnd.sendMsg () message sent.
MsgCtrSnd.sendMsg () end
Cntrl.loginSend () end
Cntrl.executeMenuAction () end
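For readability, here is a condensed sketch of what that client code does. The class name, JNDI properties and lookup strings come from the traces above; the connection credentials and the payload contents are placeholders:
import java.util.ArrayList;
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class MsgCtrSndSketch {
    public static void main(String[] args) throws Exception {
        // Same connection variables the trace prints
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
        env.put(Context.SECURITY_PRINCIPAL, "jmsUser");
        env.put(Context.SECURITY_CREDENTIALS, "jmsUser123!");
        InitialContext ctx = new InitialContext(env);

        // Same lookups the trace shows
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/sendToServerQueue");

        Connection connection = cf.createConnection("appUser", "appUserPassword"); // password is a placeholder
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            connection.start();

            // The trace sends an ObjectMessage whose payload is a java.util.ArrayList
            ArrayList<String> payload = new ArrayList<>();
            payload.add("Log In"); // illustrative contents only
            ObjectMessage msg = session.createObjectMessage(payload);
            producer.send(msg);
        } finally {
            connection.close();
        }
    }
}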
WILDFLY CONSOLE OUTPUT WHEN GOTEST.EAR IS DEPLOYED ON STARTUP
Calling "C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\bin\standalone.conf.bat"
Setting JAVA property to "C:\Program Files\Java\jdk1.8.0_121\bin\java"
===============================================================================
JBoss Bootstrap Environment
JBOSS_HOME: "C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final"
JAVA: "C:\Program Files\Java\jdk1.8.0_121\bin\java"
JAVA_OPTS: "-Dprogram.name=standalone.bat -Xms64M -Xmx512M -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman"
===============================================================================
INFO [org.jboss.modules] (main) JBoss Modules version 1.5.2.Final
INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
INFO [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) starting
INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
INFO [org.xnio] (MSC service thread 1-3) XNIO version 3.4.0.Final
INFO [org.xnio.nio] (MSC service thread 1-3) XNIO NIO Implementation Version 3.4.0.Final
INFO [org.wildfly.iiop.openjdk] (ServerService Thread Pool -- 42) WFLYIIOP0001: Activating IIOP Subsystem
INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 41) WFLYCLINF0001: Activating Infinispan subsystem.
INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 48) WFLYJSF0007: Activated the following JSF Implementations: [main]
INFO [org.jboss.as.connector] (MSC service thread 1-4) WFLYJCA0009: Starting JCA Subsystem (WildFly/IronJacamar 1.3.4.Final)
INFO [org.jboss.as.naming] (ServerService Thread Pool -- 52) WFLYNAM0001: Activating Naming Subsystem
INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 62) WFLYWS0002: Activating WebServices Extension
INFO [org.jboss.as.security] (ServerService Thread Pool -- 59) WFLYSEC0002: Activating Security Subsystem
INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 40) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
INFO [org.jboss.as.security] (MSC service thread 1-7) WFLYSEC0001: Current PicketBox version=4.9.6.Final
INFO [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0003: Undertow 1.4.0.Final starting
INFO [org.jboss.remoting] (MSC service thread 1-5) JBoss Remoting version 4.0.21.Final
INFO [org.jboss.as.naming] (MSC service thread 1-8) WFLYNAM0003: Starting Naming Service
INFO [org.jboss.as.mail.extension] (MSC service thread 1-8) WFLYMAIL0001: Bound mail session [java:jboss/mail/Default]
INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 36) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-7) WFLYJCA0018: Started Driver service with driver-name = h2
INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 36) WFLYJCA0005: Deploying non-JDBC-compliant driver class org.mariadb.jdbc.Driver (version 1.5)
INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-2) WFLYJCA0018: Started Driver service with driver-name = mysql
INFO [org.jboss.as.ejb3] (MSC service thread 1-8) WFLYEJB0481: Strict pool slsb-strict-max-pool is using a max instance size of 128 (per class), which is derived from thread worker pool sizing.
INFO [org.jboss.as.ejb3] (MSC service thread 1-5) WFLYEJB0482: Strict pool mdb-strict-max-pool is using a max instance size of 32 (per class), which is derived from the number of CPUs on this host.
INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 61) WFLYUT0014: Creating file handler for path 'C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final/welcome-content' with options [directory-listing: 'false', follow-symlink: 'false', case-sensitive: 'true', safe-symlink-paths: '[]']
INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0012: Started server default-server.
INFO [org.wildfly.extension.undertow] (MSC service thread 1-5) WFLYUT0018: Host default-host starting
INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0006: Undertow HTTP listener default listening on 127.0.0.1:8080
INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-8) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS]
INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) WFLYJCA0001: Bound data source [java:jboss/jdbc/gotestdb]
INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-5) WFLYJCA0001: Bound data source [java:jboss/jdbc/tappdb]
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-4) WFLYMSGAMQ0001: AIO wasn't located on this platform, it will fall back to using pure Java NIO.
WARN [org.jboss.as.domain.management.security] (MSC service thread 1-2) WFLYDM0111: Keystore C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\configuration\application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost
INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0027: Starting deployment of "GoTest.ear" (runtime-name: "GoTest.ear")
INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-1) WFLYDS0013: Started FileSystemDeploymentService for directory C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\deployments
INFO [org.wildfly.iiop.openjdk] (MSC service thread 1-3) WFLYIIOP0009: CORBA ORB Service started
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\data\activemq\journal,bindingsDirectory=C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\data\activemq\bindings,largeMessagesDirectory=C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\data\activemq\largemessages,pagingDirectory=C:\ProgramFilesGeo\Wildfly\wildfly-10.1.0.Final\standalone\data\activemq\paging)
INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-6) ISPN000128: Infinispan version: Infinispan 'Chakra' 8.2.4.Final
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221013: Using NIO Journal
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 66) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 72) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 66) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 71) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 72) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 71) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0207: Starting subdeployment (runtime-name: "GoTest.jar")
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221043: Protocol module found: [artemis-server]. Adding protocol support for: CORE
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221043: Protocol module found: [artemis-amqp-protocol]. Adding protocol support for: AMQP
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221043: Protocol module found: [artemis-hornetq-protocol]. Adding protocol support for: HORNETQ
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221043: Protocol module found: [artemis-stomp-protocol]. Adding protocol support for: STOMP
INFO [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0006: Undertow HTTPS listener https listening on 127.0.0.1:8443
INFO [org.jboss.ws.common.management] (MSC service thread 1-3) JBWS022052: Starting JBossWS 5.1.5.Final (Apache CXF 3.1.6)
INFO [org.jboss.as.jpa] (MSC service thread 1-8) WFLYJPA0002: Read persistence.xml for GoTestDataBase
INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 71) WFLYJPA0010: Starting Persistence Unit (phase 1 of 2) Service 'GoTest.ear/GoTest.jar#GoTestDataBase'
INFO [org.jboss.weld.deployer] (MSC service thread 1-8) WFLYWELD0003: Processing weld deployment GoTest.ear
INFO [org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool -- 71) HHH000204: Processing PersistenceUnitInfo [
name: GoTestDataBase
...]
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-5) WFLYMSGAMQ0016: Registered HTTP upgrade for activemq-remoting protocol handled by http-acceptor acceptor
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-1) WFLYMSGAMQ0016: Registered HTTP upgrade for activemq-remoting protocol handled by http-acceptor acceptor
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-7) WFLYMSGAMQ0016: Registered HTTP upgrade for activemq-remoting protocol handled by http-acceptor-throughput acceptor
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-2) WFLYMSGAMQ0016: Registered HTTP upgrade for activemq-remoting protocol handled by http-acceptor-throughput acceptor
INFO [org.hibernate.validator.internal.util.Version] (MSC service thread 1-8) HV000001: Hibernate Validator 5.2.4.Final
INFO [org.hibernate.Version] (ServerService Thread Pool -- 71) HHH000412: Hibernate Core {5.0.10.Final}
INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 71) HHH000206: hibernate.properties not found
INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 71) HHH000021: Bytecode provider name : javassist
INFO [org.hibernate.annotations.common.Version] (ServerService Thread Pool -- 71) HCANN000001: Hibernate Commons Annotations {5.0.1.Final}
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221007: Server is now live
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221001: Apache ActiveMQ Artemis Message Broker version 1.1.0.wildfly-017 [nodeID=e2b89808-fdf2-11e6-9f54-3956fe24eb2d]
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 64) AMQ221003: trying to deploy queue jms.queue.SendToServerQueue
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 67) AMQ221003: trying to deploy queue jms.queue.DLQ
INFO [org.wildfly.extension.messaging-activemq] (ServerService Thread Pool -- 72) WFLYMSGAMQ0002: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 70) AMQ221003: trying to deploy queue jms.queue.SendToClientQueue2
INFO [org.wildfly.extension.messaging-activemq] (ServerService Thread Pool -- 65) WFLYMSGAMQ0002: Bound messaging object to jndi name java:/ConnectionFactory
INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 66) AMQ221003: trying to deploy queue jms.queue.ExpiryQueue
INFO [org.jboss.weld.deployer] (MSC service thread 1-8) WFLYWELD0003: Processing weld deployment GoTest.jar
INFO [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'EnrollerBean' in deployment unit 'subdeployment "GoTest.jar" of deployment "GoTest.ear"' are as follows:
java:global/GoTest/GoTest/EnrollerBean!org.america3.gotest.server.sessionbeans.EnrollerBean
java:app/GoTest/EnrollerBean!org.america3.gotest.server.sessionbeans.EnrollerBean
java:module/EnrollerBean!org.america3.gotest.server.sessionbeans.EnrollerBean
java:global/GoTest/GoTest/EnrollerBean
java:app/GoTest/EnrollerBean
java:module/EnrollerBean
INFO [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'ExiterBean' in deployment unit 'subdeployment "GoTest.jar" of deployment "GoTest.ear"' are as follows:
java:global/GoTest/GoTest/ExiterBean!org.america3.gotest.server.sessionbeans.ExiterBean
java:app/GoTest/ExiterBean!org.america3.gotest.server.sessionbeans.ExiterBean
java:module/ExiterBean!org.america3.gotest.server.sessionbeans.ExiterBean
java:global/GoTest/GoTest/ExiterBean
java:app/GoTest/ExiterBean
java:module/ExiterBean
INFO [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'LoginerBean' in deployment unit 'subdeployment "GoTest.jar" of deployment "GoTest.ear"' are as follows:
java:global/GoTest/GoTest/LoginerBean!org.america3.gotest.server.sessionbeans.LoginerBean
java:app/GoTest/LoginerBean!org.america3.gotest.server.sessionbeans.LoginerBean
java:module/LoginerBean!org.america3.gotest.server.sessionbeans.LoginerBean
java:global/GoTest/GoTest/LoginerBean
java:app/GoTest/LoginerBean
java:module/LoginerBean
INFO [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'LogouterBean' in deployment unit 'subdeployment "GoTest.jar" of deployment "GoTest.ear"' are as follows:
java:global/GoTest/GoTest/LogouterBean!org.america3.gotest.server.sessionbeans.LogouterBean
java:app/GoTest/LogouterBean!org.america3.gotest.server.sessionbeans.LogouterBean
java:module/LogouterBean!org.america3.gotest.server.sessionbeans.LogouterBean
java:global/GoTest/GoTest/LogouterBean
java:app/GoTest/LogouterBean
java:module/LogouterBean
INFO [org.jboss.as.ejb3.deployment] (MSC service thread 1-8) WFLYEJB0473: JNDI bindings for session bean named 'ReplierBean' in deployment unit 'subdeployment "GoTest.jar" of deployment "GoTest.ear"' are as follows:
java:global/GoTest/GoTest/ReplierBean!org.america3.gotest.server.sessionbeans.ReplierBean
java:app/GoTest/ReplierBean!org.america3.gotest.server.sessionbeans.ReplierBean
java:module/ReplierBean!org.america3.gotest.server.sessionbeans.ReplierBean
java:global/GoTest/GoTest/ReplierBean
java:app/GoTest/ReplierBean
java:module/ReplierBean
INFO [org.jboss.as.connector.deployment] (MSC service thread 1-5) WFLYJCA0007: Registered connection factory java:/JmsXA
INFO [org.apache.activemq.artemis.ra] (MSC service thread 1-5) Resource adaptor started
INFO [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-5) IJ020002: Deployed: file://RaActivatoractivemq-ra
INFO [org.jboss.as.connector.deployment] (MSC service thread 1-4) WFLYJCA0002: Bound JCA ConnectionFactory [java:/JmsXA]
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-2) WFLYMSGAMQ0002: Bound messaging object to jndi name java:jboss/DefaultJMSConnectionFactory
INFO [org.jboss.weld.Version] (MSC service thread 1-8) WELD-000900: 2.3.5 (Final)
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 71) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 71) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 66) WFLYJPA0010: Starting Persistence Unit (phase 2 of 2) Service 'GoTest.ear/GoTest.jar#GoTestDataBase'
INFO [org.jboss.as.ejb3] (MSC service thread 1-8) WFLYEJB0042: Started message driven bean 'GoMsgBean' with 'activemq-ra.rar' resource adapter
INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 66) HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
INFO [org.hibernate.envers.boot.internal.EnversServiceImpl] (ServerService Thread Pool -- 66) Envers integration enabled? : true
INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 71) WFLYCLINF0002: Started client-mappings cache from ejb container
Member.<init>................................beg
Member.<init>................................Hello World. See! I can Log messages again.
Member.<init>................................end
INFO [org.hibernate.hql.internal.QueryTranslatorFactoryInitiator] (ServerService Thread Pool -- 66) HHH000397: Using ASTQueryTranslatorFactory
INFO [org.jboss.as.server] (ServerService Thread Pool -- 37) WFLYSRV0010: Deployed "GoTest.ear" (runtime-name : "GoTest.ear")
INFO [org.apache.activemq.artemis.ra] (default-threads - 1) AMQ151000: awaiting topic/queue creation jms/queue/sendToServerQueue
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) started in 5392ms - Started 691 of 931 services (430 services are lazy, passive or on-demand)
WILDFLY CONSOLE OUTPUT AFTER DEPLOYMENT THAT SEEMS TO BE THE PROBLEM
(Note: the next INFO line says jms/queue/sendToServerQueue is NOT durable, while the WF console says it's configured to be durable.)
INFO [org.apache.activemq.artemis.ra] (default-threads - 1) AMQ151001: Attempting to reconnect org.apache.activemq.artemis.ra.inflow.ActiveMQActivationSpec(ra=org.apache.activemq.artemis.ra.ActiveMQResourceAdapter#78712571 destination=jms/queue/sendToServerQueue destinationType=javax.jms.Queue ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15)
This is in reference to J. R. Perkins' comment below concerning whether there is a log4j.xml in the deployment. This is another excerpt from WF's standalone-full.xml that I use for logging, if I can ever get my MDB to even write to System.out:
<profile>
<subsystem xmlns="urn:jboss:domain:logging:3.0">
<console-handler name="CONSOLE">
<level name="INFO"/>
<formatter><named-formatter name="COLOR-PATTERN"/></formatter>
</console-handler>
<console-handler name="MY-CONSOLE" autoflush="true">
<formatter><named-formatter name="MY-PATTERN"/></formatter>
<target name="System.out"/>
</console-handler>
<console-handler name="GOTEST-HANDLER">
<level name="INFO"/>
<formatter><named-formatter name="GOTEST-PATTERN"/></formatter>
</console-handler>
. . .
<logger category="org.america3.gotest" use-parent-handlers="false">
<level name="ALL"/>
<handlers><handler name="GOTEST-HANDLER"/></handlers>
</logger>
<logger category="com.arjuna">
<level name="WARN"/>
</logger>
<logger category="org.jboss.as.config">
<level name="DEBUG"/>
</logger>
<logger category="sun.rmi">
<level name="WARN"/>
</logger>
<root-logger>
<level name="INFO"/>
<handlers><handler name="CONSOLE"/><handler name="FILE"/></handlers>
</root-logger>
<formatter name="PATTERN">
<pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
</formatter>
<formatter name="COLOR-PATTERN">
<pattern-formatter pattern="%K{level}%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n"/>
</formatter>
<formatter name="MY-PATTERN">
<pattern-formatter pattern="MeMeMe%s%n"/>
</formatter>
<formatter name="GOTEST-PATTERN">
<pattern-formatter pattern="%s%n"/>
</formatter>
</subsystem>
. . .
</profile>
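So if the MDB ever does log, any category under org.america3.gotest should route through GOTEST-HANDLER. A minimal sketch of what I mean, assuming the bean sits in an org.america3.gotest.* package like the session beans listed earlier (the class below is only an illustration):
import org.apache.log4j.Logger;

public class GoTestLogSketch {
    // WildFly bridges org.apache.log4j to its logging subsystem, and the logger
    // category is the class name, so anything under org.america3.gotest is picked
    // up by the <logger> element above and written through GOTEST-HANDLER.
    private static final Logger LOG = Logger.getLogger("org.america3.gotest.sketch.GoTestLogSketch");

    public void demo() {
        LOG.info("rendered by GOTEST-PATTERN, i.e. just the message text and a newline");
    }
}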
I believe the problem is logged here:
INFO [org.apache.activemq.artemis.ra] (default-threads - 1) AMQ151000: awaiting topic/queue creation jms/queue/sendToServerQueue
In other words, the MDB is not fully activated because it can't find the destination "jms/queue/sendToServerQueue". I believe this is because you haven't defined the JNDI bindings properly (since you've only defined an "exported" entry). Try using this:
<jms-queue name="SendToServerQueue" entries="java:jboss/exported/jms/queue/sendToServerQueue java:/jms/queue/sendToServerQueue"/>
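For context, a minimal sketch of the consuming MDB (the bean class and package names are placeholders, not taken from the actual deployment). The container resolves destinationLookup against the server-internal JNDI tree, which is why the extra java:/jms/queue/sendToServerQueue entry above matters; the java:jboss/exported namespace is only visible to remote clients:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical bean; only the activation config values mirror the queue discussed above.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/queue/sendToServerQueue"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class SendToServerListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        System.out.println("Received: " + message);
    }
}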
Things to do to diagnose issues like this:
Read the log carefully for helpful information.
Check the consumerCount of the queue from which the MDB is supposed to receive messages; if it's 0, there's a problem with the MDB (one way to check it is shown below).
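One way to check the consumer count on WildFly 10 is the jboss-cli command below; this is a sketch that assumes the default Artemis server name ("default") and the queue name from the <jms-queue> entry suggested above:

/subsystem=messaging-activemq/server=default/jms-queue=SendToServerQueue:read-attribute(name=consumer-count)

If the result is 0, the MDB never attached to the queue, which matches the "awaiting topic/queue creation" message in the log.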
I have an Auto Scaling group in Amazon. I've configured JBoss on each instance as follows:
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcps3">
    <stack name="tcps3">
        <transport type="TCP" socket-binding="jgroups-tcp" diagnostics-socket-binding="jgroups-diagnostics"/>
        <protocol type="S3_PING">
            <property name="access_key">xxxxxxxxxxxxx</property>
            <property name="secret_access_key">/xxxxxxxxxxxx</property>
            <property name="location">mybucket</property>
        </protocol>
        <protocol type="MERGE2"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
        <protocol type="FD"/>
        <protocol type="VERIFY_SUSPECT"/>
        <protocol type="BARRIER"/>
        <protocol type="pbcast.NAKACK"/>
        <protocol type="UNICAST2"/>
        <protocol type="pbcast.STABLE"/>
        <protocol type="pbcast.GMS"/>
        <protocol type="UFC"/>
        <protocol type="MFC"/>
        <protocol type="FRAG2"/>
    </stack>
</subsystem>
When I look inside mybucket I see files for each node, but the sessions are not being replicated.
This is part of my JBoss log file from the initialization of node-2:
17:27:19,674 INFO [stdout] (pool-25-thread-1) -------------------------------------------------------------------
17:27:19,675 INFO [stdout] (pool-25-thread-1) GMS: address=ip-172-31-20-76/hibernate, cluster=hibernate, physical address=127.0.0.1:7600
17:27:19,677 INFO [stdout] (pool-25-thread-1) -------------------------------------------------------------------
17:27:19,830 INFO [stdout] (pool-15-thread-1)
17:27:19,831 INFO [stdout] (pool-15-thread-1) -------------------------------------------------------------------
17:27:19,835 INFO [stdout] (pool-15-thread-1) GMS: address=ip-172-31-20-76/web, cluster=web, physical address=127.0.0.1:7600
17:27:19,837 INFO [stdout] (pool-15-thread-1) -------------------------------------------------------------------
17:27:23,569 INFO [org.jboss.as.jpa] (MSC service thread 1-2) JBAS011402: Starting Persistence Unit Service 'QuestoesEAR.ear/QuestoesEJB.jar#CrudPU'
17:27:23,642 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-3) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated.
17:27:23,652 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-4) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated.
17:27:23,820 INFO [org.hibernate.annotations.common.Version] (MSC service thread 1-2) HCANN000001: Hibernate Commons Annotations {4.0.1.Final}
17:27:23,826 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-16-thread-1) ISPN000078: Starting JGroups Channel
17:27:23,831 INFO [org.hibernate.Version] (MSC service thread 1-2) HHH000412: Hibernate Core {4.0.1.Final}
17:27:23,842 INFO [org.hibernate.cfg.Environment] (MSC service thread 1-2) HHH000206: hibernate.properties not found
17:27:23,845 INFO [org.hibernate.cfg.Environment] (MSC service thread 1-2) HHH000021: Bytecode provider name : javassist
17:27:23,850 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-16-thread-1) ISPN000094: Received new cluster view: [ip-172-31-20-76/web|0] [ip-172-31-20-76/web]
17:27:23,853 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (pool-16-thread-1) ISPN000079: Cache local address is ip-172-31-20-76/web, physical addresses are [127.0.0.1:7600]
17:27:23,872 INFO [org.infinispan.factories.GlobalComponentRegistry] (pool-16-thread-1) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL
17:27:23,875 INFO [org.infinispan.config.ConfigurationValidatingVisitor] (pool-16-thread-1) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated.
17:27:23,909 INFO [org.hibernate.ejb.Ejb3Configuration] (MSC service thread 1-2) HHH000204: Processing PersistenceUnitInfo [
name: CrudPU
...]
17:27:24,126 INFO [org.infinispan.jmx.CacheJmxRegistration] (pool-16-thread-1) ISPN000031: MBeans were successfully registered to the platform mbean server.
17:27:24,184 INFO [org.jboss.as.clustering.infinispan] (pool-16-thread-1) JBAS010281: Started repl cache from web container
17:27:24,212 INFO [org.jboss.as.clustering.impl.CoreGroupCommunicationService.web] (MSC service thread 1-4) JBAS010206: Number of cluster members: 1
First, check that you have added <distributable/> to the application's web.xml (see the snippet after the links below).
Then take a look at these links; there are some known problems with session replication on EC2:
Problem with packet sizes on EC2
Possible workaround, limiting packet size with FRAG2: FRAG and FRAG2
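As a reference for the first point, a minimal web.xml sketch (Servlet 3.0 descriptor; the schema version here is illustrative) with the <distributable/> marker that tells the container to replicate HTTP sessions:

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <distributable/>
</web-app>

If the packet-size problem described in those links applies, a commonly suggested workaround is to cap the JGroups fragment size on the FRAG2 protocol that is already in your stack; the value below is only an example, not a tested recommendation for your environment:

<protocol type="FRAG2">
    <property name="frag_size">1400</property>
</protocol>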