How is ordering preserved in ActiveMQ? - jms

I have set up an application to listen to an ActiveMQ topic. Here's the way I have configured it:
<jms:listener-container connection-factory="jmsFactory"
        container-type="default" destination-type="durableTopic"
        client-id="CMY-LISTENER" acknowledge="transacted">
    <jms:listener destination="CMY.UPDATES"
            ref="continuingStudiesCourseUpdateListener" subscription="CMY-LISTENER" />
</jms:listener-container>

<bean id="jmsFactoryDelegate" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="${jmsFactory.brokerURL}" />
    <property name="redeliveryPolicy">
        <bean class="org.apache.activemq.RedeliveryPolicy">
            <property name="maximumRedeliveries" value="10" />
            <property name="initialRedeliveryDelay" value="60000" />
            <property name="redeliveryDelay" value="60000" />
            <property name="useExponentialBackOff" value="true" />
            <property name="backOffMultiplier" value="2" />
        </bean>
    </property>
</bean>
The problem I have is this:
I put 10 messages into the topic.
If the first message is read and the application fails to process it, the message rolls back.
One minute later, the first message is redelivered, processing fails again, and it rolls back.
Two minutes later it retries and rolls back; four minutes later it retries again, and so on.
It gets stuck on the first message, and the next 9 messages don't get read until the first one is dealt with.
Is this the way a topic is supposed to work? Is there a way to have my 9 other messages read while the first one is waiting to be retried?

It's working just as it's supposed to; that's the nature of transactional message processing. You can't process the other messages until the first one completes or is discarded according to the rules in your redelivery policy.
You might want to read through the JMS tutorial.
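That said, you can control how long a failing message blocks the others: once maximumRedeliveries is exhausted, the broker gives up on the message and routes it to the dead-letter queue (ActiveMQ.DLQ by default), and delivery of the remaining messages resumes. As a sketch, assuming the stock ActiveMQ API, a policy that gives up after 3 attempts instead of 10:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {

    public static ActiveMQConnectionFactory createFactory(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(3);         // give up after 3 attempts instead of 10
        policy.setInitialRedeliveryDelay(60000);  // first retry after 1 minute
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);
        // After the third failed redelivery the broker moves the message to
        // ActiveMQ.DLQ and the subscriber receives the messages queued behind it.
        return factory;
    }
}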

Related

Apache Ignite cache write is visible to other client after a delay

We have an 8-node Ignite cluster in production. Below is the cache configuration for one of the caches.
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.prod.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="7"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
We are seeing a strange behaviour. It is as follows:
1. Application A writes a record to the cache.
2. Application B tries to read that record.
3. Application B is unable to find the record in the cache, so it inserts a new one, thereby wiping the data entered by Application A.
Step 3 happens very rarely: there are about 1,000 such cache misses for the roughly 50M events we receive daily. The gap between steps 1 and 2 is at least 20ms.
We tried adding code to Application B that waits about 20ms on the first cache miss. That reduced the misses by a great margin, but some remained. The fact that Application B could later read the same record it initially could not find means that Application A is not failing to insert the record, that no network factor is impacting inserts, and that this is not eviction or expiry. We also ensured that the key used for the put in step 1 and the get in step 2 is the same.
What could be going on here? Please help.
I think it's more likely that you have a race condition:
1. Application B tries to read the record.
2. Application A writes the record to the cache.
3. Application B, unable to find the record, inserts a new one, thereby wiping the data entered by Application A.
Clients generally go to the primary partition to retrieve data, so it's incredibly unlikely that applications A and B are seeing different data.
The traditional way of dealing with this is with transactions, which would also work in Ignite.
Better might be using different APIs. For example, there are IgniteCache#getAndPutIfAbsent() and IgniteCache#putIfAbsent(), both of which do the check and the write atomically without needing transactions.
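For illustration, a minimal sketch of the atomic approach (the cache name and key are made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutIfAbsentExample {

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, String> cache = ignite.getOrCreateCache("inputDataCache");

        // Insert only if no entry exists; atomically returns the existing value
        // (possibly written by Application A) or null if this call inserted.
        String existing = cache.getAndPutIfAbsent("event-123", "value-from-B");
        if (existing != null) {
            // Application A won the race: use its record instead of overwriting it.
            System.out.println("Keeping existing record: " + existing);
        }
    }
}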

ActiveMQ consumers count decreasing to 0

We are facing a weird problem in which the ActiveMQ consumers for some random queues decrease until they reach 0, after which they are unable to recover.
Once this happens we have to redeploy the consumer application to start processing again.
We have been struggling with this issue for some time but could not figure out the root cause.
ActiveMQ broker version: 5.14.5
Following is the connection configuration:
<bean id="activeMQIconnectConnectionFactory" class="test.ActiveMQIconnectConnectionFactory">
<property name="brokerURL" value="failover:(tcp://localhost:61616)"/>
<property name="prefetchPolicy" ref="prefetchPolicy"/>
<property name="redeliveryPolicy" ref="redeliveryPolicy"/>
<property name="trustAllPackages" value="true"/>
<!-- http://activemq.apache.org/consumer-dispatch-async.html
The default setting is dispatchAsync=true
If you want better thoughput and the chances of having a slow consumer are low, you may want to change this to false.
-->
<property name="dispatchAsync" value="true"/>
<!--
whether or not timestamps on messages should be disabled or not. If you disable them it adds a small performance boost.
Default is false
-->
<property name="disableTimeStampsByDefault" value="true"/>
<!-- http://activemq.apache.org/optimized-acknowledgement.html
This option is disabled by default but can be used to improve throughput in some circumstances as it decreases load on the broker.
-->
<property name="optimizeAcknowledge" value="true"/>
<!-- Default 300ms
For us, 5 sec.
-->
<property name="optimizeAcknowledgeTimeOut" value="5000"/>
<property name="useAsyncSend" value="true"/>
<property name="exceptionListener" ref="jmsExceptionListener"/>
</bean>
<bean id="testQueue" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test.queue"/>
</bean>
<bean id="jmsProducerFactoryPool" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop"
init-method="start">
<property name="connectionFactory" ref="activeMQIconnectConnectionFactory"/>
<property name="maxConnections" value="10"/>
<property name="maximumActiveSessionPerConnection"
value="1000"/>
<property name="createConnectionOnStartup" value="true"/>
</bean>
<bean id="jmsConsumerFactoryPool" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop"
init-method="start">
<property name="connectionFactory" ref="activeMQIconnectConnectionFactory"/>
<property name="maxConnections" value="10"/>
<property name="maximumActiveSessionPerConnection" value="1000"/>
<property name="createConnectionOnStartup" value="true"/>
<property name="reconnectOnException" value="true"/>
<property name="idleTimeout" value="86400000"/>
</bean>
<bean id="redeliveryPolicy" class="org.apache.activemq.RedeliveryPolicy">
<property name="maximumRedeliveries" value="1"/>
<property name="queue" value="*"/>
</bean>
<bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy">
<property name="queuePrefetch" value="500"/>
</bean>
<bean id="jmsTemplate" class="com.minda.iconnect.jms.impl.TimedJmsTemplate">
<property name="connectionFactory" ref="jmsProducerFactoryPool"/>
<property name="defaultDestinationName" value="iconnect.queue"/>
<property name="deliveryPersistent" value="true"/>
<!-- I think this is ingored if explicitQosEnabled is not set -->
</bean>
<bean id="simpleMessageConverter" class="org.springframework.jms.support.converter.SimpleMessageConverter"/>
<bean id="testProducer"
class="com.test.TestProducer">
<property name="consumerDestination" ref="testQueu"/>
<property name="jmsTemplate" ref="jmsTemplate"/>
<property name="messageConverter" ref="simpleMessageConverter"/>
</bean>
<bean id="testContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsConsumerFactoryPool"/>
<property name="destination" ref="testS"/>
<property name="messageListener">
<bean class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
<property name="delegate" ref="testConsumer"/>
<property name="defaultListenerMethod" value="process"/>
<property name="messageConverter" ref="simpleMessageConverter"/>
</bean>
</property>
<property name="concurrentConsumers" value="50"/>
<property name="maxConcurrentConsumers" value="100"/>
<property name="sessionTransacted" value="false"/>
<property name="autoStartup" value="true"/>
</bean>
</beans>
The class for the connectionFactory:
public class ActiveMQIconnectConnectionFactory extends org.apache.activemq.ActiveMQConnectionFactory {

    private static final Logger LOG = LoggerFactory.getLogger(ActiveMQIconnectConnectionFactory.class);

    @Override
    public void setBrokerURL(String brokerURL) {
        // Log when connecting to ActiveMQ.
        // Using this setter to be sure it's only logged once. See DJ-5780.
        LOG.info("ActiveMQ configured is: " + (DEFAULT_BROKER_URL.equals(brokerURL) ? "(default init setting) " : "") + brokerURL);
        LOG.info("Connecting to ActiveMQ");
        super.setBrokerURL(brokerURL);
    }
}
So far we have been playing with the timeout parameters and the like, but no luck.
We suspect that the issue is due to some connection problem or to how connections are handled via the DMLC, but we could not identify it. Help is highly appreciated!
I think your problem is a mix of Spring DMLC and ActiveMQ behaviors, with your configuration settings affecting each other.
Try changing:
<property name="optimizeAcknowledgeTimeOut" value="500"/>
AND
org.springframework.jms.listener.DefaultMessageListenerContainer.setReceiveTimeout(0);
Or
org.springframework.jms.listener.DefaultMessageListenerContainer.setReceiveTimeout(10000);
org.springframework.jms.listener.DefaultMessageListenerContainer.setIdleTaskExecutionLimit(100);
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/AbstractPollingMessageListenerContainer.html#setReceiveTimeout-long-
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/DefaultMessageListenerContainer.html#setIdleTaskExecutionLimit-int-
public void setIdleTaskExecutionLimit(int idleTaskExecutionLimit)
Specify the limit for idle executions of a consumer task, not having received any message within its execution.
If this limit is reached, the task will shut down and leave receiving
to other executing tasks. The default is 1, closing idle resources
early once a task didn't receive a message. This applies to dynamic
scheduling only; see the "maxConcurrentConsumers" setting. The minimum
number of consumers (see "concurrentConsumers") will be kept around
until shutdown in any case.
Within each task execution, a number of message reception attempts
(according to the "maxMessagesPerTask" setting) will each wait for an
incoming message (according to the "receiveTimeout" setting). If all
of those receive attempts in a given task return without a message,
the task is considered idle with respect to received messages. Such a
task may still be rescheduled; however, once it reached the specified
"idleTaskExecutionLimit", it will shut down (in case of dynamic
scaling).
Raise this limit if you encounter too frequent scaling up and down.
With this limit being higher, an idle consumer will be kept around
longer, avoiding the restart of a consumer once a new load of messages
comes in. Alternatively, specify a higher "maxMessagesPerTask" and/or
"receiveTimeout" value, which will also lead to idle consumers being
kept around for a longer time (while also increasing the average
execution time of each scheduled task).
http://activemq.apache.org/performance-tuning.html
Optimized Acknowledge When consuming messages in auto acknowledge mode
(set when creating the consumers' session), ActiveMQ can acknowledge
receipt of messages back to the broker in batches (to improve
performance). The batch size is 65% of the prefetch limit for the
Consumer. Also if message consumption is slow the batch will be sent
every 300ms. You switch batch acknowledgment on by setting
optimizeAcknowledge=true on the ActiveMQ ConnectionFactory.
http://activemq.apache.org/what-is-the-prefetch-limit-for.html
Once the broker has dispatched a prefetch limit number of messages to
a consumer it will not dispatch any more messages to that consumer
until the consumer has acknowledged at least 50% of the prefetched
messages, e.g., prefetch/2, that it received. When the broker has
received said acknowledgements it will dispatch a further prefetch/2
number of messages to the consumer to 'top-up', as it were, its
prefetch buffer. Note that it's possible to specify a prefetch limit
on a per consumer basis (see below).
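To make that concrete, a minimal sketch of applying the two suggested settings to the container programmatically (the values are the ones proposed above; the connection factory is whatever you already use):

import javax.jms.ConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ContainerConfig {

    public static DefaultMessageListenerContainer create(ConnectionFactory cf) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setConcurrentConsumers(50);
        container.setMaxConcurrentConsumers(100);
        // Wait up to 10 s per receive attempt (default is 1 s), so consumer
        // tasks are less likely to be counted as idle during quiet periods.
        container.setReceiveTimeout(10000);
        // Tolerate 100 idle task executions before shutting a dynamically
        // scaled consumer down, avoiding aggressive scale-down and restart.
        container.setIdleTaskExecutionLimit(100);
        return container;
    }
}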

Prevent duplicates across restarts in spring integration

I have to poll a directory and write entries to rdbms.
I wired up a Redis metadata store for the duplicates check. I see that the framework updates the Redis store with entries for all files in the folder [~140 files], well before the RDBMS entries get written. At the time of application termination, the RDBMS has logged only 90 files. On application restart, no more files are picked up from the folder.
Properties: msgs.per.poll=10, polling.interval=2000
How can I ensure the entries to Redis are made after writing to the DB, so that both are in sync and I don't miss any files?
<task:executor id="executor" pool-size="5" />

<int-file:inbound-channel-adapter channel="filesIn" directory="${input.Dir}"
        scanner="dirScanner" filter="compositeFileFilter" prevent-duplicates="true">
    <int:poller fixed-delay="${polling.interval}" max-messages-per-poll="${msgs.per.poll}"
            task-executor="executor"/>
</int-file:inbound-channel-adapter>

<int:channel id="filesIn" />

<bean id="dirScanner" class="org.springframework.integration.file.RecursiveLeafOnlyDirectoryScanner" />

<bean id="compositeFileFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg ref="persistentFilter" />
</bean>

<bean id="persistentFilter" class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore" />
</bean>

<bean name="metadataStore" class="org.springframework.integration.redis.metadata.RedisMetadataStore">
    <constructor-arg name="connectionFactory" ref="redisConnectionFactory"/>
</bean>

<bean id="redisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory"
        p:hostName="localhost" p:port="6379" />

<int-jdbc:outbound-channel-adapter channel="filesIn" data-source="dataSource"
        query="insert into files values (:path,:name,:size,:crDT,:mdDT,:id)"
        sql-parameter-source-factory="spelSource"/>
....
Artem is correct; you might as well extend the RedisMetadataStore and, at initialization time, flush the entries that are not in your database. This way you could use Redis and stay in sync with the DB, but it couples things a little.
How can I ensure entries to redis are made after writing to db
It isn't possible, because FileSystemPersistentAcceptOnceFileListFilter works before any message is sent, and only once, when FileReadingMessageSource.toBeReceived is empty. Of course, it tries to refetch the files on the next application restart, but it can't, because your RedisMetadataStore already contains entries for those files.
I think in your case you have no choice but to use some custom JdbcFileListFilter based on your files table. Fortunately, your logic ends up with a file entry anyway.
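For illustration, a rough sketch of such a filter; JdbcFileListFilter is a made-up class name, and the files table and path column are taken from your insert query:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.springframework.integration.file.filters.FileListFilter;
import org.springframework.jdbc.core.JdbcTemplate;

// Accepts only files whose path is not yet in the "files" table, so the
// database itself is the source of truth for duplicate detection.
public class JdbcFileListFilter implements FileListFilter<File> {

    private final JdbcTemplate jdbcTemplate;

    public JdbcFileListFilter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public List<File> filterFiles(File[] files) {
        List<File> accepted = new ArrayList<>();
        for (File file : files) {
            Integer count = jdbcTemplate.queryForObject(
                    "select count(*) from files where path = ?",
                    Integer.class, file.getAbsolutePath());
            if (count == null || count == 0) {
                accepted.add(file);
            }
        }
        return accepted;
    }
}

Wired in place of the persistentFilter inside the compositeFileFilter above, this makes the DB write itself the deduplication record, so Redis and the database can no longer drift apart.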

c3p0 pool is not shrinking

I am using the c3p0 connection pool with Spring (plain JDBC, no Hibernate). Here is my configuration for the pool:
<bean id="myDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass" value="${jdbc.driver}"/>
<property name="jdbcUrl" value="${jdbc.url}"/>
<property name="user" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="acquireIncrement" value="3"/>
<property name="minPoolSize" value="3"/>
<property name="maxPoolSize" value="25"/>
<property name="maxStatementsPerConnection" value="0"/>
<property name="numHelperThreads" value="6"/>
<property name="testConnectionOnCheckout" value="false" />
<property name="testConnectionOnCheckin" value="false" />
<property name="idleConnectionTestPeriod" value="10"/>
<property name="preferredTestQuery" value="select curdate()"/>
<property name="maxIdleTime" value="5" />
<property name="unreturnedConnectionTimeout" value="5" />
<property name="debugUnreturnedConnectionStackTraces" value="true" />
</bean>
I am using JMX to monitor my connection pool. I see that my pool grows to 25 under load but never shrinks back. Am I missing some config here?
The default is to not shrink the pool. You need to set maxIdleTimeExcessConnections. From the manual (emphasis added):
maxIdleTimeExcessConnections
Default: 0
Number of seconds that Connections in excess of minPoolSize should be
permitted to remain idle in the pool before being culled. Intended for
applications that wish to aggressively minimize the number of open
Connections, shrinking the pool back towards minPoolSize if, following
a spike, the load level diminishes and Connections acquired are no
longer needed. If maxIdleTime is set, maxIdleTimeExcessConnections
should be smaller if the parameter is to have any effect. Zero means
no enforcement, excess Connections are not idled out.
For your pool to shrink after a period of high load:
Define reasonable values for minPoolSize and maxPoolSize. The defaults are 3 and 15, respectively, which should work OK for many applications.
Set a value for maxIdleTime. I picked 2700 seconds (45 mins) because the Cisco firewall between my app and db servers times out TCP connections after an hour. C3P0 will discard connections that have idled for longer than this period of time.
Set a value for maxIdleTimeExcessConnections. I picked 600 seconds (10 mins). C3P0 will close connections that have idled for this period of time until there are only minPoolSize connections open.
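As a sketch, the same settings applied programmatically to ComboPooledDataSource (the values are the ones discussed above; tune them to your environment):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfig {

    public static ComboPooledDataSource createDataSource() {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setMinPoolSize(3);
        ds.setMaxPoolSize(25);
        // Discard any connection idle for more than 45 minutes.
        ds.setMaxIdleTime(2700);
        // Cull connections above minPoolSize after 10 idle minutes,
        // shrinking the pool back toward minPoolSize after a spike.
        ds.setMaxIdleTimeExcessConnections(600);
        return ds;
    }
}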

Purpose of taskExecutor property in Spring's DefaultMessageListenerContainer

Spring's DefaultMessageListenerContainer (DMLC) has concurrentConsumers and taskExecutor properties. The taskExecutor bean can be given a corePoolSize property. What, then, is the difference between specifying concurrentConsumers and corePoolSize? When the concurrentConsumers property is defined, it means Spring will create the specified number of consumers/message listeners to process messages. Where does corePoolSize come into the picture?
Code snippet
<bean id="myMessageListener"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myListener" />
<property name="cacheLevelName" value="CACHE_CONSUMER"/>
<property name="maxConcurrentConsumers" value="10"/>
<property name="concurrentConsumers" value="3"/>
<property name="taskExecutor" ref="myTaskExecutor"/>
</bean>
<bean id="myTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" >
<property name="corePoolSize" value="100"/>
<property name="maxPoolSize" value="100"/>
<property name="keepAliveSeconds" value="30"/>
<property name="threadNamePrefix" value="myTaskExecutor"/>
</bean>
As of version 4.3.6, the taskExecutor runs instances of AsyncMessageListenerInvoker, which are responsible for message processing. corePoolSize is the number of physical threads in the defined pool, while concurrentConsumers is the number of tasks in that pool. I guess this abstraction was designed for more flexible control.
The purpose of the taskExecutor property:
Set the Spring TaskExecutor to use for running the listener threads.
Default is a SimpleAsyncTaskExecutor, starting up a number of new threads, according to the specified number of concurrent consumers.
Specify an alternative TaskExecutor for integration with an existing thread pool.
The above is from the Spring official documentation.
When you specify an alternative task executor, the listener threads use it instead of the default SimpleAsyncTaskExecutor.
This is easy to illustrate with two JMS listeners defined against the same containerFactory: when you specify the concurrency, the taskExecutor's corePoolSize and maxPoolSize must support it.
If you set the concurrency to 5-20 and you have two listeners, you should set the corePoolSize to more than 10 and the maxPoolSize to more than 40; the listeners can then get threads according to their concurrency limits.
If in this case you set maxPoolSize to less than 10, the listener containers will never reach 10 consumers, and Spring will log the warning below:
The number of scheduled consumers has dropped below concurrent consumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers.
Basically, the listener threads act according to the taskExecutor property.
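For illustration, a minimal sketch of sizing the executor for the two-listener example above (concurrency 5-20 each; the numbers just follow the arithmetic, and the thread name prefix is made up):

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class ExecutorConfig {

    public static ThreadPoolTaskExecutor createTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // Two listeners, concurrency 5-20 each:
        // core pool > 2 * 5 = 10, max pool > 2 * 20 = 40.
        executor.setCorePoolSize(12);
        executor.setMaxPoolSize(48);
        executor.setKeepAliveSeconds(30);
        executor.setThreadNamePrefix("listenerExecutor-");
        executor.initialize();
        return executor;
    }
}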
