How to use the spring-amqp StatefulRetryOperationsInterceptor

I am trying to use StatefulRetryOperationsInterceptor in my code like this:
@Bean
public StatefulRetryOperationsInterceptor statefulRetryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateful()
            .maxAttempts(5)
            .backOffOptions(1000, 2.0, 10000)
            .build();
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    logger.info("==> custom rabbitmq Listener factory:" + connectionFactory);
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(3);
    factory.setMaxConcurrentConsumers(10);
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    factory.setPrefetchCount(200);
    factory.setAdviceChain(statefulRetryOperationsInterceptor()); // add retry
    return factory;
}
My code runs fine, but when an exception is thrown in the consumer there is no retry at all.
So how should StatefulRetryOperationsInterceptor be used? Is that class meant to catch the exception and redeliver the message?
When an exception happens I want to requeue the message and redeliver it to the consumer up to 5 times, then send it to a dead-letter queue. How can I do this more elegantly with amqp?
Following @Gary Russell's answer, I now use Redis to record the retry count per message. Is there a way, similar to StatefulRetryOperationsInterceptor, to do this more elegantly?
try {
    receiveMessage(message);
    channel.basicAck(deliveryTag, false);
    redisTemplate.opsForHash().delete(MQConstants.MQ_CONSUMER_RETRY_COUNT_KEY, messageProperties.getMessageId());
}
catch (Exception e) {
    if (consumerCount >= MQConstants.MAX_CONSUMER_COUNT) {
        channel.basicReject(deliveryTag, false);
    } else {
        redisTemplate.opsForHash().increment(MQConstants.MQ_CONSUMER_RETRY_COUNT_KEY,
                messageProperties.getMessageId(), 1);
        Thread.sleep((long) (Math.pow(MQConstants.BASE_NUM, consumerCount) * 1000));
        channel.basicNack(deliveryTag, false, true);
    }
}

factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
Since you are using manual acks, you are on your own; the container can't help you. You need to use AUTO ack mode with stateful retry, or use channel.basicReject() yourself to requeue.
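A minimal sketch of that suggestion, assuming a RabbitTemplate bean is available (Boot auto-configures one); the dead-letter exchange "deadExchange" and routing key "dead.key" are placeholders, not names from the original post:
@Bean
public StatefulRetryOperationsInterceptor statefulRetryOperationsInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateful()
            .maxAttempts(5)                    // original delivery plus retries
            .backOffOptions(1000, 2.0, 10000)  // initial interval, multiplier, max interval (ms)
            // after the last attempt, republish the failed message to a dead-letter exchange
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "deadExchange", "dead.key"))
            .build();
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory, StatefulRetryOperationsInterceptor retry) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // AUTO ack: the container nacks and requeues when the listener throws, which is what stateful retry relies on
    factory.setAcknowledgeMode(AcknowledgeMode.AUTO);
    factory.setAdviceChain(retry);
    return factory;
}
With AUTO acks the container requeues the failed delivery, the stateful interceptor counts attempts per messageId (so published messages need a message id set), and after the fifth failure the recoverer publishes the message to the dead-letter exchange instead of requeueing it again.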

Related

Configuring sessionAcknowledgeMode in DefaultMessageListenerContainer

I have a setup where I have to read a message from a queue in an ActiveMQ broker. Once the message is read I have to do a long-running operation on the message.
Due to this long-running operation on the message I want to acknowledge the message as soon as possible so the resources on the broker are released. The plan would be to execute the following steps once a message is received:
Get message from ActiveMQ
Insert message into DB
Acknowledge message
Do some long-running operation with the message
I've read about JMS and the different acknowledge modes, so before even trying to do that I decided to set up an application where I could try the different modes to understand how they are processed; unfortunately, I cannot seem to get my desired output.
Following the information in this answer https://stackoverflow.com/a/10188078 as well as https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/DefaultMessageListenerContainer.html I thought that by using AUTO_ACKNOWLEDGE the message would be acknowledged before my listener is even called, but if I throw an exception in the listener the message is redelivered.
I've tried both with and without setting the setSessionTransacted to true, but in both cases I get the same output. The message is redelivered when an exception is thrown in the JmsListener.
Configuration of JMS
@Bean
public ConnectionFactory connectionFactory() {
    ConnectionFactory connectionFactory =
            new ActiveMQConnectionFactory(jmsConfig.getBrokerUrl());
    return connectionFactory;
}

@Bean
public JmsTemplate jmstemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setConnectionFactory(connectionFactory());
    //jmsTemplate.setSessionTransacted(true);
    jmsTemplate.setDefaultDestinationName(jmsConfig.getQueueIn());
    return jmsTemplate;
}

@Bean
public JmsListenerContainerFactory jmsListenerContainerFactoryxxxx(
        ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
    //factory.setConcurrency("1");
    factory.setSessionTransacted(true);
    configurer.configure(factory, connectionFactory);
    return factory;
}
JmsListener
@JmsListener(destination = "B1Q1", containerFactory = "jmsListenerContainerFactoryxxxx")
public void receiveMessage(Message message) {
    try {
        TextMessage m = (TextMessage) message;
        String messageText = m.getText();
        int retryNum = message.getIntProperty("JMSXDeliveryCount");
        long s = message.getLongProperty("JMSTimestamp");
        Date d = new Date(s);
        String dbText = String.format("Retry %d. Message: %s", retryNum, messageText);
        if (messageText.toLowerCase().contains("exception")) {
            logger.info("Creating exception for retry: {}", retryNum);
            throw new RuntimeException();
        }
    } catch (JMSException e) {
        logger.error("Exception!!", e);
    }
}
How should I change the code so that the message is not redelivered when an exception is thrown?
Going back to my application where I would be inserting the message into the DB: how could I acknowledge the message in my JmsListener after the message is inserted into the DB but before executing the long-running task?
In order to be able to use AUTO_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE I had to call factory.setSessionTransacted(false) after configuring the factory.
Calling configurer.configure(factory, connectionFactory) overrides the value of sessionTransacted; in my case it was setting it to true, which rendered AUTO_ACKNOWLEDGE and CLIENT_ACKNOWLEDGE ineffective. Here's the relevant code of DefaultJmsListenerContainerFactoryConfigurer.java:
public void configure(DefaultJmsListenerContainerFactory factory, ConnectionFactory connectionFactory) {
    ...
    if (this.transactionManager != null) {
        factory.setTransactionManager(this.transactionManager);
    } else {
        factory.setSessionTransacted(true);
    }
    ...
}
And the fix in my factory bean:
factory.setSessionAcknowledgeMode(Tibjms.EXPLICIT_CLIENT_ACKNOWLEDGE);
//factory.setSessionTransacted(false); // here it is not working
factory.setTaskExecutor(new SimpleAsyncTaskExecutor("KDBMessageListener-"));
configurer.configure(factory, connectionFactory);
factory.setSessionTransacted(false); // post configure, sessionTransacted takes effect
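For reference, a minimal sketch of a full factory bean with that fix applied (the bean name and CLIENT_ACKNOWLEDGE are only illustrative; the point is the ordering around configurer.configure()):
@Bean
public JmsListenerContainerFactory<?> jmsListenerContainerFactory(
        ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    // Boot's configurer sets sessionTransacted(true) when no transaction manager is present,
    // which would override the acknowledge mode above...
    configurer.configure(factory, connectionFactory);
    // ...so reset it after configure() has run
    factory.setSessionTransacted(false);
    return factory;
}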

Using a Connection Name List in a JMS Load Balanced Environment

Our JMS infrastructure is load balanced, so I am attempting to use a connectionNameList when configuring the connection factory. The idea is that any JMS message arriving on either the primary or the secondary queue manager will get picked up and processed. However, it appears that messages are only being picked up from the primary.
Here is my listener annotation:
@JmsListener(destination = "${request-queue}", containerFactory = "DefaultJmsListenerContainerFactory")
public void onMessage(Message msg) {
    System.out.println(msg.toString());
}
Here is the JMS listener container factory:
#Bean(name = "DefaultJmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory createJmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(buildConnectionFactory());
factory.setConcurrency(numberOfListeners);
factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
factory.setSessionTransacted(false);
factory.setErrorHandler(queueErrorHandler);
factory.setBackOff(getBackOffStrategy());
return factory;
}
And here is the connection factory:
#Bean(name = "MQConnectionFactory")
public ConnectionFactory buildConnectionFactory() {
try {
MQConnectionFactory mqcf = new MQConnectionFactory();
mqcf.setConnectionNameList(mq1.daluga.com(2171),mq2.daluga.com(2171));
mqcf.setChannel(channel);
mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
return mqcf;
} catch (Exception e) {
throw new RuntimeException(message, e);
}
}
I suspect something in my configuration is just not right. Is there anything obvious that folks see that might cause messages not to be picked up from the secondary queue manager?
Thanks!

Spring JMS - activemq - individualDLQ not used

I'm trying to set up Spring JMS for ActiveMQ, and I'd like individual DLQs for easier monitoring rather than everything being lumped onto one DLQ.
However, my bean for this doesn't seem to be picked up. Could anyone point out what I'm doing wrong? The documentation is pretty vague on how to do this programmatically.
My Queue config:
@Bean
public MessageConverter jacksonJmsMessageConverter() {
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("_type");
    return converter;
}

@Bean
public DeadLetterStrategy deadLetterStrategy() {
    IndividualDeadLetterStrategy deadLetterStrategy = new IndividualDeadLetterStrategy();
    deadLetterStrategy.setQueueSuffix(".dlq");
    return deadLetterStrategy;
}

@Bean
public RedeliveryPolicy redeliveryPolicy() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setInitialRedeliveryDelay(5000);
    redeliveryPolicy.setBackOffMultiplier(2);
    redeliveryPolicy.setUseExponentialBackOff(true);
    redeliveryPolicy.setMaximumRedeliveries(5);
    return redeliveryPolicy;
}

@Bean
public Queue myQueue() {
    ActiveMQQueue queue = new ActiveMQQueue("myQueue");
    return queue;
}
You can apply the IndividualDeadLetterStrategy with a configuration something like this:
@Bean
DeadLetterStrategy deadLetterStrategy() {
    // Messages of each queue go to their own dead letter queue: if the original queue is 'x', its DLQ is 'x' + suffix
    IndividualDeadLetterStrategy dlq = new IndividualDeadLetterStrategy();
    dlq.setQueueSuffix(".dlq");
    dlq.setUseQueueForQueueMessages(true);
    return dlq;
}

@Bean
public BrokerService brokerService(@Autowired DeadLetterStrategy strategy) throws Exception {
    BrokerService broker = new BrokerService();
    TransportConnector connector = new TransportConnector();
    connector.setUri(new URI("your broker url")); // default/embedded broker url: vm://localhost?broker.persistent=true
    broker.addConnector(connector);
    PolicyEntry entry = new PolicyEntry();
    entry.setDestination(new ActiveMQQueue("*")); // the given DeadLetterStrategy is applied to all queues; ',' can also be used
    entry.setDeadLetterStrategy(strategy);
    PolicyMap map = new PolicyMap();
    map.setPolicyEntries(Arrays.asList(entry));
    broker.setDestinationPolicy(map);
    return broker;
}
And finally your DLQ listener should look like this:
@JmsListener(destination = "main_queue_name" + ".dlq")
protected void processFailedItem(YourCustomPojo data) {
    // do whatever you want
}
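Side note, an assumption rather than part of the original answer: the RedeliveryPolicy bean from the question is not applied anywhere by itself. One way to make it take effect is to set it on the ActiveMQConnectionFactory, so the broker retries a few times before dead-lettering the message to its individual DLQ (the broker URL below is just an example):
@Bean
public ConnectionFactory connectionFactory(RedeliveryPolicy redeliveryPolicy) {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost"); // example broker url
    cf.setRedeliveryPolicy(redeliveryPolicy); // 5 redeliveries with exponential backoff, then the message is dead-lettered
    return cf;
}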

Change the ActiveMQ RedeliveryPolicy for the embedded ActiveMQ in Spring Boot

How do I change the redelivery policy for the embedded ActiveMQ when using it with Spring Boot? I tried specifying a FixedBackOff on the DefaultJmsListenerContainerFactory but it didn't help. Below is the code I am using to initialize the JMS factory bean. I have a message consumer processing incoming messages on a queue. During processing, because of an unavailable resource, I throw a checked exception. I am hoping to have the message redelivered for processing after a fixed interval.
Spring Boot : 1.5.7.Release
Java : 1.7
@Bean
public JmsListenerContainerFactory<?> publishFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory =
            new DefaultJmsListenerContainerFactory();
    factory.setBackOff(new FixedBackOff(5000, 5));
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    // You could still override some of Boot's defaults if necessary.
    factory.setErrorHandler(new ErrorHandler() {
        @Override
        public void handleError(Throwable t) {
            LOG.error("Error occurred in JMS transaction.", t);
        }
    });
    return factory;
}
Consumer Code:
@JmsListener(destination = "PublishQueue", containerFactory = "publishFactory")
@Transactional
public void receiveMessage(PublishData publishData) {
    LOG.info("Processing incoming message on publish queue with transaction id: " + publishData.getTransactionId());
    PublishUser user = new PublishUser();
    user.setPriority(1);
    user.setUserId(publishData.getUserId());
    LOG.trace("Trying to enroll in the publish lock queue for user: " + user);
    PublishLockQueue lockQueue = publishLockQueueService.createLock(user);
    if (lockQueue == null)
        throw new RuntimeException("Unable to create lock for publish");
    LOG.trace("Publish lock queue obtained with lock queue id: " + lockQueue.getId());
    try {
        publishLockQueueService.acquireLock(lockQueue.getId());
        LOG.trace("Acquired publish lock.");
    }
    catch (PublishLockQueueServiceException pex) {
        throw new RuntimeException(pex);
    }
    try {
        publishService.publish(publishData, lockQueue.getId());
        LOG.trace("Completed publish of changes.");
        sendPublishSuccessNotification(publishData);
    }
    finally {
        LOG.trace("Trying to release lock to publish.");
        publishLockQueueService.releaseLock(lockQueue.getId());
    }
    LOG.info("Publish has been completed for transaction id: " + publishData.getTransactionId());
}
@claus answered (I tested it and it works):
It's the consumer: you need to use transacted acknowledge mode to let the consumer roll back on exception, so that ActiveMQ can re-deliver the message to the same consumer, or to another consumer if you have multiple consumers running. You can, however, configure redelivery options on ActiveMQ, such as backoff, etc. The error handler above is just a no-op listener which cannot do much more than logging.
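A minimal sketch of what that could look like (the broker URL and values are illustrative): make the listener sessions transacted and put the redelivery settings on the ActiveMQConnectionFactory instead of the container's BackOff.
@Bean
public JmsListenerContainerFactory<?> publishFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setSessionTransacted(true); // roll back the session on exception so the broker redelivers
    return factory;
}

@Bean
public ActiveMQConnectionFactory jmsConnectionFactory() {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost"); // embedded broker, example url
    RedeliveryPolicy policy = new RedeliveryPolicy();
    policy.setInitialRedeliveryDelay(5000); // fixed 5 second delay between redeliveries
    policy.setMaximumRedeliveries(5);       // after this the broker moves the message to the DLQ
    cf.setRedeliveryPolicy(policy);
    return cf;
}
The listener must also let the exception propagate out of receiveMessage(); if it is caught and swallowed, the session commits and the message is never redelivered.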

Why introduce AsyncRabbitTemplate in spring-amqp

When processing the reply message with the AsyncRabbitTemplate.sendAndReceive() or AsyncRabbitTemplate.convertSendAndReceive() methods, the reply is returned asynchronously to the calling method, so we could just use a message listener on the reply queue to receive and process it. Why does the spring-amqp framework introduce AsyncRabbitTemplate and RabbitMessageFuture to process the reply message? With a message listener we can control the related consumer thread,
but with RabbitMessageFuture the background thread cannot be managed.
-------------------Added on 2017/01/06----------------------------
It's simply your choice.
Replies can come back in a different order to sends.
With the async template, the framework takes care of the correlation for you - the reply will appear in the future returned by the send method.
When you use your own listener, you will have to take care of the correlation yourself.
Thank you. I know this difference. But there is still a problem. If I use a message listener, I can ack the reply message manually (if my message listener implements the ChannelAwareMessageListener interface I can get the channel instance). But when I use AsyncRabbitTemplate, can I ack the reply message manually? It seems that the sendAndReceive method acks the reply message automatically.
I don't understand what you mean; since you can inject the listener container into the template, you have the same "control" either way.
It seems there is some problem with this approach.
I created a rabbitTemplate instance and a simple message listener container. But when I use them to construct an asyncRabbitTemplate instance with the following code:
#Bean(name="rabbitTemplate")
public RabbitTemplate getRabbitTemplate()
{
RabbitTemplate rabbitTemplate = new RabbitTemplate(getConnectionFactory());
rabbitTemplate.setUseTemporaryReplyQueues(false);
rabbitTemplate.setReplyAddress("replyQueue");
rabbitTemplate.setReceiveTimeout(60000);
rabbitTemplate.setReplyTimeout(60000);
return rabbitTemplate;
}
#Bean(name="asyncRabbitTemplate")
public AsyncRabbitTemplate getAsyncRabbitTemplate()
{
AsyncRabbitTemplate asyncRabbitTemplate =
new AsyncRabbitTemplate(getRabbitTemplate(), createReplyListenerContainer());
asyncRabbitTemplate.setAutoStartup(true);
asyncRabbitTemplate.setReceiveTimeout(60000);
return asyncRabbitTemplate;
}
#Bean(name="replyMessageListenerContainer")
public SimpleMessageListenerContainer createReplyListenerContainer() {
SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
listenerContainer.setConnectionFactory(getConnectionFactory());
listenerContainer.setQueueNames("replyQueue");
listenerContainer.setMessageListener(getRabbitTemplate());
listenerContainer.setRabbitAdmin(getRabbitAdmin());
listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
return listenerContainer;
}
I found I cannot send the message successfully; the consumer server does not receive it.
But when I create the asyncRabbitTemplate instance with the following code, the message is sent and received successfully:
#Bean(name="asyncRabbitTemplate")
public AsyncRabbitTemplate getAsyncRabbitTemplate()
{
AsyncRabbitTemplate asyncRabbitTemplate =
new AsyncRabbitTemplate(getConnectionFactory(),
"sendMessageExchange",
"sendMessageKey",
"replyQueue");
asyncRabbitTemplate.setReceiveTimeout(60000);
asyncRabbitTemplate.setAutoStartup(true);
return asyncRabbitTemplate;
}
Is there something wrong with my source code?
I am using spring-boot-amqp 1.4.3.RELEASE.
It's simply your choice.
Replies can come back in a different order to sends.
With the async template, the framework takes care of the correlation for you - the reply will appear in the future returned by the send method.
When you use your own listener, you will have to take care of the correlation yourself.
For message listener, we can control the related consumer thread, but for RabbitMessageFuture, the background thread can not be managed.
I don't understand what you mean; since you can inject the listener container into the template, you have the same "control" either way.
EDIT
@SpringBootApplication
public class So41481046Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So41481046Application.class, args);
        AsyncRabbitTemplate asyncTemplate = context.getBean(AsyncRabbitTemplate.class);
        RabbitConverterFuture<String> future = asyncTemplate.convertSendAndReceive("foo");
        try {
            String out = future.get(10, TimeUnit.SECONDS);
            System.out.println(out);
        }
        finally {
            context.close();
        }
        System.exit(0);
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate rabbitTemplate, ConnectionFactory connectionFactory) {
        rabbitTemplate.setRoutingKey(queue().getName());
        rabbitTemplate.setReplyAddress(replyQueue().getName());
        return new AsyncRabbitTemplate(rabbitTemplate, replyContainer(connectionFactory));
    }

    @Bean
    public Queue queue() {
        return new AnonymousQueue();
    }

    @Bean
    public Queue replyQueue() {
        return new AnonymousQueue();
    }

    @Bean
    public SimpleMessageListenerContainer replyContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(replyQueue().getName());
        return container;
    }

    @Bean
    public SimpleMessageListenerContainer remoteContainer(ConnectionFactory connectionFactory) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames(queue().getName());
        container.setMessageListener(new MessageListenerAdapter(new Object() {

            @SuppressWarnings("unused")
            public String handleMessage(String in) {
                return in.toUpperCase();
            }

        }));
        return container;
    }

}
