I'm using Artemis 2.14 set up as a 4-node cluster, and message redistribution is not behaving as I expected. I'm looking for some help to clarify how it should behave, i.e. whether my config is wrong or whether I'm just expecting the system to do something it doesn't!
The Artemis cluster acts as a central messaging hub serving multiple applications. All nodes in the cluster are configured identically. The various client apps are consumers, producers or both, and are normally also clustered and scaled as appropriate.
An example of the problem situation is with a consumer app which has only two nodes and operates one consumer thread per node, so there are only ever 2 consumers on the Artemis queue it uses - i.e. at most 2 of the 4 Artemis nodes will have consumers. Producer apps send messages to the queue, and for various reasons messages can end up on nodes that don't have a consumer, e.g. because the producers' client-side load balancing doesn't seem to "prefer" nodes with consumers (I might ask a separate question on this!), or because the consumer application is down for maintenance while the producer apps are still up and sending messages. We have redistribution-delay configured for the queue (with a value of 600000) and expected that these messages would automatically be moved after that time to one of the other nodes that does have consumers, but that doesn't seem to be happening.
Having looked back at the documentation I realise it says "...delay in milliseconds after the last consumer is closed on a queue before redistributing messages..". Does this mean that if there were never any consumers on a particular node (since the last restart, I guess?) then messages arriving on that node will never get redistributed? If so, any advice on how to deal with this situation?
My broker.xml (simplified and anonymized) is below.
Thanks!
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
      <name>${ARTEMIS_HOSTNAME}</name>
      <metrics-plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>NIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>644000</journal-buffer-timeout>
      <journal-max-io>1</journal-max-io>
      <connectors>
         <!-- Connector used to be announced through cluster connections and notifications -->
         <connector name="artemis1-${ENV}-connector">tcp://artemis1-${ENV}:61616</connector>
         <connector name="artemis2-${ENV}-connector">tcp://artemis2-${ENV}:61616</connector>
         <connector name="artemis3-${ENV}-connector">tcp://artemis3-${ENV}:61616</connector>
         <connector name="artemis4-${ENV}-connector">tcp://artemis4-${ENV}:61616</connector>
      </connectors>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <page-sync-timeout>644000</page-sync-timeout>
      <acceptors>
         <acceptor name="artemis-clients">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
      </acceptors>
      <cluster-user>cluster-user</cluster-user>
      <cluster-password>XXXXXXXXXXXX</cluster-password>
      <cluster-connections>
         <cluster-connection name="artemis-cluster-${ENV}">
            <address></address>
            <connector-ref>${ARTEMIS_HOSTNAME}-connector</connector-ref>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <static-connectors allow-direct-connections-only="true">
               <connector-ref>artemis1-${ENV}-connector</connector-ref>
               <connector-ref>artemis2-${ENV}-connector</connector-ref>
               <connector-ref>artemis3-${ENV}-connector</connector-ref>
               <connector-ref>artemis4-${ENV}-connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>
      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <config-delete-addresses>FORCE</config-delete-addresses>
            <config-delete-queues>FORCE</config-delete-queues>
         </address-setting>
         <address-setting match="my.organisation.#"> <!-- standard settings for all queues -->
            <!-- error queues automatically created based on these params -->
            <dead-letter-address>ERROR_MESSAGES</dead-letter-address>
            <auto-create-expiry-resources>true</auto-create-expiry-resources>
            <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
            <dead-letter-queue-prefix></dead-letter-queue-prefix> <!-- override the default -->
            <dead-letter-queue-suffix>_error</dead-letter-queue-suffix>
            <!-- redelivery & redistribution settings -->
            <redelivery-delay>600000</redelivery-delay>
            <max-delivery-attempts>9</max-delivery-attempts>
            <redistribution-delay>600000</redistribution-delay>
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>false</auto-create-addresses>
            <auto-create-jms-queues>false</auto-create-jms-queues>
            <auto-create-jms-topics>false</auto-create-jms-topics>
            <config-delete-addresses>FORCE</config-delete-addresses>
            <config-delete-queues>FORCE</config-delete-queues>
         </address-setting>
      </address-settings>
      <addresses>
         <address name="my.organisation.app1.jms.queue"><anycast><queue name="my.organisation.app1.jms.queue" /></anycast></address>
         <address name="my.organisation.app2.jms.queue.input"><anycast><queue name="my.organisation.app2.jms.queue.input" /></anycast></address>
         <address name="my.organisation.app3.jms.queue.input"><anycast><queue name="my.organisation.app3.jms.queue.input" /></anycast></address>
      </addresses>
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
         <security-setting match="my.organisation.app1.#">
            <permission type="consume" roles="app1_role"/>
            <permission type="browse" roles="app1_role"/>
            <permission type="send" roles="app1_role"/>
         </security-setting>
         <security-setting match="my.organisation.app2.#">
            <permission type="consume" roles="app2_role"/>
            <permission type="browse" roles="app2_role"/>
            <permission type="send" roles="app2_role"/>
         </security-setting>
         <security-setting match="my.organisation.app3.#">
            <permission type="consume" roles="app3_role"/>
            <permission type="browse" roles="app3_role"/>
            <permission type="send" roles="app3_role"/>
         </security-setting>
      </security-settings>
   </core>
</configuration>
With your configuration if a message is sent to a node without a consumer then it should be automatically forwarded to a node that does have a consumer. The documentation calls this "initial distribution." What's termed "redistribution" only comes into play for messages which arrived on a broker while consumers were present and subsequently disconnected.
If you think you're hitting a bug then work up a test-case that reproduces the issue and open a JIRA.
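To make the two mechanisms concrete, here is a minimal sketch using only elements already present in the broker.xml above (the values are the question's own), annotated with which behaviour each setting governs:
<!-- ON_DEMAND load balancing performs "initial distribution": new messages are
     routed at send time to cluster nodes that have matching consumers -->
<cluster-connection name="artemis-cluster-${ENV}">
   <message-load-balancing>ON_DEMAND</message-load-balancing>
   <max-hops>1</max-hops>
</cluster-connection>

<!-- redistribution-delay only affects messages already sitting on a queue when
     its last local consumer closes; it is not involved in routing messages
     that arrive on a node whose queue never had a consumer -->
<address-setting match="my.organisation.#">
   <redistribution-delay>600000</redistribution-delay>
</address-setting>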
Related
When I connect to an ActiveMQ Artemis 2.x broker with JMSToolBox, it tries to create an address with a non-durable temporary queue whose name is generated from a UUID. The security settings on this broker instance do not allow creation of addresses and queues with arbitrary names, and the client gets a security exception.
To make JMSToolBox work with this server, I need to allow permissions for all addresses (match="#"): createAddress, createNonDurableQueue, send, consume. I can also grant the manage permission, but it seems it is not used. These permissions are too wide, and I do not wish to grant them to every user who just needs to list queues and read from / write to a particular queue.
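For reference, a minimal sketch of the wide grant described above (the role name "view" is taken from the steps to reproduce below). It works, but it is exactly the over-broad permission set I want to avoid:
<security-setting match="#">
   <permission type="createAddress" roles="view"/>
   <permission type="createNonDurableQueue" roles="view"/>
   <permission type="send" roles="view"/>
   <permission type="consume" roles="view"/>
</security-setting>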
When I connect, I get an error:
org.apache.activemq.artemis.api.core.ActiveMQSecurityException: AMQ229213: User: amq_user does not have permission='CREATE_NON_DURABLE_QUEUE' for queue 2a092c7c-c335-4f16-867e-c0253d34a3e6 on address 2a092c7c-c335-4f16-867e-c0253d34a3e6
Are there any possible workarounds, either on the client side or on the server?
For example, can I specify separate security settings for all temporary queues? There is a temporary-queue-namespace setting in broker.xml, but it seems that it works only with address settings.
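To illustrate what I mean (a sketch; the namespace value "temp" is just an example): the namespace only changes how address-settings are matched for temporary queues, and there appears to be no equivalent for security-settings:
<temporary-queue-namespace>temp</temporary-queue-namespace>

<address-settings>
   <!-- matched for temporary queues because of the namespace above -->
   <address-setting match="temp.#">
      <max-size-bytes>-1</max-size-bytes>
   </address-setting>
</address-settings>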
The security settings I specify for match="activemq.management#" also have no effect.
Can I specify an exact temporary queue name or an address prefix on the application side? Can I change something in the application to ensure it uses an address with a predefined or prefixed name?
There is some information in the JMSToolBox help about connection requirements, and it looks strange because it uses an address name with the jms.queue prefix (it also does not work):
The following configuration is required in broker.xml for JMSToolBox :
<security-setting match="jms.queue.activemq.management">
<permission type="manage" roles="<admin role>" />
</security-setting>
Here is the code in JMSToolBox that creates the management session:
Session sessionJMS = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue managementQueue = ((ActiveMQSession) sessionJMS).createQueue("activemq.management");
// QueueRequestor's constructor creates a TemporaryQueue for replies (see
// ActiveMQSession.createTemporaryQueue in the stack trace below), which is what
// triggers the CREATE_NON_DURABLE_QUEUE check on the UUID-named address
QueueRequestor requestorJMS = new QueueRequestor((QueueSession) sessionJMS, managementQueue);
Steps to reproduce:
create ActiveMQ Artemis instance with user admin: artemis.cmd create --user admin --password admin --require-login /path/to/instance
start instance: artemis run
create a read-only user - for example "view" with role "view"
add security settings for address match activemq.management# for role "view"
try to connect from JMSToolBox with user admin (success)
try to connect from JMSToolBox with user view (get an error)
Example security settings:
<security-settings>
   <security-setting match="#">
      <permission type="createNonDurableQueue" roles="amq"/>
      <permission type="deleteNonDurableQueue" roles="amq"/>
      <permission type="createDurableQueue" roles="amq"/>
      <permission type="deleteDurableQueue" roles="amq"/>
      <permission type="createAddress" roles="amq"/>
      <permission type="deleteAddress" roles="amq"/>
      <permission type="consume" roles="amq"/>
      <permission type="browse" roles="amq"/>
      <permission type="send" roles="amq"/>
      <!-- we need this otherwise ./artemis data imp wouldn't work -->
      <permission type="manage" roles="amq"/>
   </security-setting>
   <security-setting match="activemq.management#">
      <permission type="createNonDurableQueue" roles="amq,view"/>
      <permission type="deleteNonDurableQueue" roles="amq,view"/>
      <permission type="createDurableQueue" roles="amq,view"/>
      <permission type="deleteDurableQueue" roles="amq,view"/>
      <permission type="createAddress" roles="amq,view"/>
      <permission type="deleteAddress" roles="amq,view"/>
      <permission type="consume" roles="amq,view"/>
      <permission type="browse" roles="amq,view"/>
      <permission type="send" roles="amq,view"/>
      <permission type="manage" roles="amq,view"/>
   </security-setting>
</security-settings>
Full stacktrace:
org.apache.activemq.artemis.api.core.ActiveMQSecurityException: AMQ229213: User: amq_user does not have permission='CREATE_NON_DURABLE_QUEUE' for queue 2a092c7c-c335-4f16-867e-c0253d34a3e6 on address 2a092c7c-c335-4f16-867e-c0253d34a3e6
    at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:558)
    at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:450)
    at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.createQueue(ActiveMQSessionContext.java:829)
    at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.internalCreateQueue(ClientSessionImpl.java:2054)
    at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.createQueue(ClientSessionImpl.java:309)
    at org.apache.activemq.artemis.jms.client.ActiveMQSession.createTemporaryQueue(ActiveMQSession.java:1007)
    at javax.jms.QueueRequestor.<init>(QueueRequestor.java:93)
    at org.titou10.jtb.qm.artemis2.ActiveMQArtemis2QManager.connect(ActiveMQArtemis2QManager.java:221)
    at org.titou10.jtb.jms.model.JTBConnection.connect(JTBConnection.java:267)
    at org.titou10.jtb.handler.SessionConnectHandler$1.run(SessionConnectHandler.java:111)
    at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:74)
    at org.titou10.jtb.handler.SessionConnectHandler.execute(SessionConnectHandler.java:106)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:58)
    at org.eclipse.e4.core.internal.di.InjectorImpl.invokeUsingClass(InjectorImpl.java:317)
    at org.eclipse.e4.core.internal.di.InjectorImpl.invoke(InjectorImpl.java:251)
    at org.eclipse.e4.core.contexts.ContextInjectionFactory.invoke(ContextInjectionFactory.java:173)
    at org.eclipse.e4.core.commands.internal.HandlerServiceHandler.execute(HandlerServiceHandler.java:156)
    at org.eclipse.core.commands.Command.executeWithChecks(Command.java:488)
    at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:485)
    at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:213)
    at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.executeItem(HandledContributionItem.java:438)
    at org.eclipse.e4.ui.workbench.renderers.swt.AbstractContributionItem.handleWidgetSelection(AbstractContributionItem.java:449)
    at org.eclipse.e4.ui.workbench.renderers.swt.AbstractContributionItem.lambda$2(AbstractContributionItem.java:475)
    at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:89)
    at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4256)
    at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1066)
    at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4054)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3642)
    at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$5.run(PartRenderingEngine.java:1155)
    at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:338)
    at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1046)
    at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:155)
    at org.eclipse.e4.ui.internal.workbench.swt.E4Application.start(E4Application.java:168)
    at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:203)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:136)
    at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:402)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:255)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:659)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:596)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1467)
My JMX JConsole always shows numberOfEntries=-1. How can I make it reflect the right number?
Detail:
I'm working with Infinispan 14.0.3.Final with the following simple configuration. The JConsole statistics show statisticEnabled=Unavailable and numberOfEntries=-1, even though the hits value increases when I put entries into the cache.
<infinispan>
   <cache-container name="default" statistics="true">
      <jmx enabled="true" />
      <replicated-cache name="invoices" mode="SYNC"
                        statistics="true" statistics-available="true">
      </replicated-cache>
   </cache-container>
</infinispan>
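Not an authoritative fix, but one thing worth checking, assuming the behaviour Infinispan introduced around version 13: numberOfEntries deliberately reports -1 unless the container's metrics are configured to compute an accurate size. A sketch of that configuration:
<infinispan>
   <cache-container name="default" statistics="true">
      <!-- accurate-size must be enabled for numberOfEntries to be computed;
           without it the attribute reports -1 by design -->
      <metrics accurate-size="true"/>
      <jmx enabled="true"/>
      <replicated-cache name="invoices" mode="SYNC" statistics="true"/>
   </cache-container>
</infinispan>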
I am using XML-based Spring Integration and use s3-inbound-streaming-channel-adapter to stream from a single S3 bucket.
We now have a requirement to stream from two S3 buckets.
So, is it possible for s3-inbound-streaming-channel-adapter to stream from multiple buckets?
Or would I need to create a separate s3-inbound-streaming-channel-adapter for each S3 bucket?
This is my current setup for a single S3 bucket, and it does work.
<int-aws:s3-inbound-streaming-channel-adapter
        channel="s3Channel"
        session-factory="s3SessionFactory"
        filter="acceptOnceFilter"
        remote-directory-expression="'bucket-1'">
    <int:poller fixed-rate="1000"/>
</int-aws:s3-inbound-streaming-channel-adapter>
Thanks in advance.
UPDATE:
I ended up having two s3-inbound-streaming-channel-adapters, as mentioned by Artem Bilan below.
However, for each inbound adapter I had to declare separate instances of acceptOnceFilter and metadataStore.
This is because when a single acceptOnceFilter and metadataStore were shared by the two inbound adapters, some weird looping started happening.
E.g. when file_1.csv arrived on bucket-1 and got processed, and the same file_1.csv was then put on bucket-2, the looping began. I don't know why (perhaps because the shared filter keys entries by file name, so the same name in two buckets collides in the one metadata store), so I ended up creating an acceptOnceFilter and metadataStore per inbound adapter.
<!-- ===================================================== -->
<!-- Region 1 s3-inbound-streaming-channel-adapter setting -->
<!-- ===================================================== -->
<bean id="metadataStore" class="org.springframework.integration.metadata.SimpleMetadataStore"/>
<bean id="acceptOnceFilter"
      class="org.springframework.integration.aws.support.filters.S3PersistentAcceptOnceFileListFilter">
    <constructor-arg index="0" ref="metadataStore"/>
    <constructor-arg index="1" value="streaming"/>
</bean>
<int-aws:s3-inbound-streaming-channel-adapter id="s3Region1"
        channel="s3Channel"
        session-factory="s3SessionFactory"
        filter="acceptOnceFilter"
        remote-directory-expression="'${s3.bucketOne.name}'">
    <int:poller fixed-rate="1000"/>
</int-aws:s3-inbound-streaming-channel-adapter>
<int:channel id="s3Channel">
    <int:queue capacity="50"/>
</int:channel>

<!-- ===================================================== -->
<!-- Region 2 s3-inbound-streaming-channel-adapter setting -->
<!-- ===================================================== -->
<bean id="metadataStoreRegion2" class="org.springframework.integration.metadata.SimpleMetadataStore"/>
<bean id="acceptOnceFilterRegion2"
      class="org.springframework.integration.aws.support.filters.S3PersistentAcceptOnceFileListFilter">
    <constructor-arg index="0" ref="metadataStoreRegion2"/>
    <constructor-arg index="1" value="streaming"/>
</bean>
<int-aws:s3-inbound-streaming-channel-adapter id="s3Region2"
        channel="s3ChannelRegion2"
        session-factory="s3SessionFactoryRegion2"
        filter="acceptOnceFilterRegion2"
        remote-directory-expression="'${s3.bucketTwo.name}'">
    <int:poller fixed-rate="1000"/>
</int-aws:s3-inbound-streaming-channel-adapter>
<int:channel id="s3ChannelRegion2">
    <int:queue capacity="50"/>
</int:channel>
That's correct: the current implementation supports only a single remote directory to poll periodically. We are actually working at this very moment on formalizing such a solution as an out-of-the-box feature. A similar request has been reported for the (S)FTP support, especially when the target directory is not known in advance at configuration time.
If it is not a big deal for you to configure a separate channel adapter for each directory, that would be great. You can always send messages from all of them to the same channel for processing.
Otherwise you can consider looping over the list of buckets via:
<xsd:attribute name="remote-directory-expression" type="xsd:string">
    <xsd:annotation>
        <xsd:documentation>
            Specify a SpEL expression which will be used to evaluate the directory
            path to where the files will be transferred
            (e.g., "headers.['remote_dir'] + '/myTransfers'" for outbound endpoints)
            There is no root object (message) for inbound endpoints
            (e.g., "#someBean.fetchDirectory");
        </xsd:documentation>
    </xsd:annotation>
</xsd:attribute>
in some bean.
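For example, a sketch of that approach (the BucketRotator class and bean names are hypothetical, not part of Spring Integration): a bean that alternates bucket names is called from the SpEL expression on each poll:
<!-- Hypothetical helper bean; e.g. a class with a method like
     public String next() { return buckets.get(counter.getAndIncrement() % buckets.size()); } -->
<bean id="bucketRotator" class="com.example.BucketRotator"/>

<int-aws:s3-inbound-streaming-channel-adapter id="s3Rotating"
        channel="s3Channel"
        session-factory="s3SessionFactory"
        filter="acceptOnceFilter"
        remote-directory-expression="@bucketRotator.next()">
    <int:poller fixed-rate="1000"/>
</int-aws:s3-inbound-streaming-channel-adapter>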
I have a network of brokers in a complete-graph topology with 3 nodes on different servers: A, B and C. Every broker has a producer attached, and, for testing purposes, there is only one non-broker consumer, on broker C. Since I'm using the complete-graph topology, every broker also has a broker-consumer for each of the other nodes.
The problem is: A receives a few messages. I expect it to forward those messages to broker C, which has a "real" consumer attached. This is not happening; broker A stores those messages until a "real" consumer connects to it directly.
What's wrong with my configuration (or understanding)?
I'm using ActiveMQ 5.9.0.
Here's my activemq.xml for broker A. It's the same for B and C, only changing names:
<beans
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                        http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-A" dataDirectory="${activemq.data}">
        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry topic="tokio.>">
                        <subscriptionRecoveryPolicy>
                            <noSubscriptionRecoveryPolicy/>
                        </subscriptionRecoveryPolicy>
                        <pendingMessageLimitStrategy>
                            <constantPendingMessageLimitStrategy limit="1000"/>
                        </pendingMessageLimitStrategy>
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>
        <managementContext>
            <managementContext createConnector="true"/>
        </managementContext>
        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="40 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="10 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>
        <networkConnectors>
            <networkConnector name="linkTo-broker-B"
                              uri="static:(tcp://SRVMSG01:61616)"
                              duplex="true"
                              />
            <networkConnector name="linkTo-broker-C"
                              uri="static:(tcp://SRVMSG03:61616)"
                              duplex="true"
                              />
        </networkConnectors>
        <transportConnectors>
            <transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default"/>
            <transportConnector name="nio" uri="nio://0.0.0.0:61616" />
        </transportConnectors>
    </broker>
</beans>
By default, networkTTL is 1 (see documentation), so when a producer on B publishes a message that takes the path to A, the message is not allowed to be forwarded on to C. In your configuration it will take that path 50% of the time, because the brokers are set up to round-robin between consumers (more on that in a second). You could fix the problem by increasing the value of networkTTL, but the better solution is to set decreaseNetworkConsumerPriority=true (see documentation at the same link as above) to ensure that messages always go as directly as possible to the consumer for which they're destined.
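Applied to your broker A config, that would look something like the following sketch (the networkTTL value of 20 is the "completely safe" figure discussed below):
<networkConnector name="linkTo-broker-C"
                  uri="static:(tcp://SRVMSG03:61616)"
                  duplex="true"
                  decreaseNetworkConsumerPriority="true"
                  networkTTL="20"/>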
Note, however, that if your consumers move around the mesh, this can strand messages, both because the networkTTL value won't allow additional forwards and because messages aren't allowed to be resent to a broker through which they've already passed. You can address those by setting networkTTL to a larger value (like 20, to be completely safe) and by applying the replayWhenNoConsumers=true policy setting described in the "Stuck Messages" section of that same documentation page. Neither of those settings is strictly necessary, as long as you're sure your consumers will never move to another broker, or you're OK with losing a few messages when it does happen.
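replayWhenNoConsumers is set per destination via a policy entry; a sketch matching the topic pattern already in your config:
<policyEntry topic="tokio.>">
    <networkBridgeFilterFactory>
        <!-- allow messages stranded on a broker with no local consumers to be
             replayed back across the network bridge -->
        <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
    </networkBridgeFilterFactory>
</policyEntry>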
I am using Infinispan as an L2 cache and I have two application nodes. The L2 caches in the two apps are replicated. The two apps are not identical.
One of my apps fills the database using web services, while the other runs a GUI for the database.
Both apps read from and write to the database extensively. After running the apps I have seen the following error, and I do not know what causes it.
I wonder why:
- my cache instances are not properly replicating each change to the other
- the L2 cache got two responses
- the L2 responses are not equal
ERROR org.infinispan.interceptors.InvocationContextInterceptor - ISPN000136: Execution error
2013-05-29 06:32:32 ERROR - Exception while processing event, reason: org.infinispan.loaders.CacheLoaderException: Responses contains more than 1 element and these elements are not equal, so can't decide which one to use:
[SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152081} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#57991642}} ,
SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152116} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#6cdaa731}} ]
My Infinispan configuration is:
<global>
    <globalJmxStatistics enabled="true" jmxDomain="org.infinispan" allowDuplicateDomains="true"/>
    <transport
        transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
        clusterName="infinispan-hibernate-cluster"
        distributedSyncTimeout="50000"
        strictPeerToPeer="false">
        <properties>
            <property name="configurationFile" value="jgroups.xml"/>
        </properties>
    </transport>
</global>
<default>
</default>
<namedCache name="my-cache-entity">
    <clustering mode="replication">
        <stateRetrieval fetchInMemoryState="false" timeout="60000"/>
        <sync replTimeout="20000"/>
    </clustering>
    <locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
             lockAcquisitionTimeout="15000" useLockStriping="false"/>
    <eviction maxEntries="10000" strategy="LRU"/>
    <expiration maxIdle="100000" wakeUpInterval="5000"/>
    <lazyDeserialization enabled="true"/>
    <!--<transaction useSynchronization="true"
        transactionMode="TRANSACTIONAL" autoCommit="false"
        lockingMode="OPTIMISTIC"/>-->
    <loaders passivation="false" shared="false" preload="false">
        <loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"
                fetchPersistentState="false"
                ignoreModifications="false" purgeOnStartup="false">
            <properties>
                <property name="remoteCallTimeout" value="20000"/>
            </properties>
        </loader>
    </loaders>
</namedCache>
Replicated entity caches should be configured with state retrieval, as already indicated in the default Infinispan configuration file, and you've already done so. ClusterCacheLoader should only be used in special situations (for query caching). Why not just use the default Infinispan configuration provided? In fact, if you don't configure a config file, it'll use the default one.
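In other words (a sketch that keeps only the question's own elements): drop the <loaders> section entirely and let replication plus state retrieval keep the two nodes in sync:
<namedCache name="my-cache-entity">
    <clustering mode="replication">
        <!-- replication plus state retrieval keeps both nodes in sync;
             no ClusterCacheLoader is needed for an entity cache -->
        <stateRetrieval fetchInMemoryState="false" timeout="60000"/>
        <sync replTimeout="20000"/>
    </clustering>
    <locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
             lockAcquisitionTimeout="15000" useLockStriping="false"/>
    <eviction maxEntries="10000" strategy="LRU"/>
    <expiration maxIdle="100000" wakeUpInterval="5000"/>
    <lazyDeserialization enabled="true"/>
</namedCache>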