Hi, I have a problem with my Rabbit listener, which causes an infinite loop on exception (the message keeps being requeued). My configuration looks like this:
#Bean(name = "defContainer")
public RabbitListenerContainerFactory containerFactory(ConnectionFactory connectionFactory, PlatformTransactionManager transactionManager){
SimpleRabbitListenerContainerFactory containerFactory = new SimpleRabbitListenerContainerFactory();
containerFactory.setConnectionFactory(connectionFactory);
containerFactory.setConcurrentConsumers(5);
containerFactory.setAcknowledgeMode(AcknowledgeMode.AUTO);
containerFactory.setTransactionManager(transactionManager);
containerFactory.setMessageConverter(messageConverterAmqp());
containerFactory.setDefaultRequeueRejected(false);
return new TxRabbitListenerContainerFactory(containerFactory);
}
where transactionManager is a JpaTransactionManager for transactions on a Postgres DB.
TxRabbitListenerContainerFactory is my own factory, which sets alwaysRequeueWithTxManagerRollback to false:
public class TxRabbitListenerContainerFactory implements RabbitListenerContainerFactory {

    private SimpleRabbitListenerContainerFactory factory;

    public TxRabbitListenerContainerFactory(SimpleRabbitListenerContainerFactory factory) {
        this.factory = factory;
    }

    @Override
    public MessageListenerContainer createListenerContainer(RabbitListenerEndpoint endpoint) {
        SimpleMessageListenerContainer container = factory.createListenerContainer(endpoint);
        container.setAlwaysRequeueWithTxManagerRollback(false);
        return container;
    }
}
Now I have a listener like:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "topic.two", durable = "true"),
        exchange = @Exchange(value = "topic.def", type = "topic", durable = "true"),
        key = "letter.*"
), errorHandler = "rabErrorHandler", containerFactory = "defContainer")
@Transactional
public Motorcycle topicLetters(Motorcycle motorcycle) throws Exception {
    motorcycle.setId(UUID.randomUUID().toString());
    Testing testing = new Testing();
    testingRepository.save(testing);
    throwEx();
    return motorcycle;
}
where the method throwEx() throws an unchecked exception.
The DB changes are properly rolled back (not committed), but the message is constantly requeued. Here is the error handler referenced by the listener:
@Bean
public RabbitListenerErrorHandler rabErrorHandler() {
    return new RabbitListenerErrorHandler() {
        @Override
        public Object handleError(Message message, org.springframework.messaging.Message<?> message1, ListenerExecutionFailedException e) throws Exception {
            System.out.println("FFFFFFFFFFF");
            return null;
        }
    };
}
How can I prevent the infinite loop, and why does it happen?
EDIT:
Logs: pasted logs
Set defaultRequeueRejected to false on the container factory.
To programmatically decide when to requeue or not, leave that at true and throw an AmqpRejectAndDontRequeueException when you don't want the message requeued (sketched below).
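For the second option, a minimal sketch (the listener signature and the business exception type are illustrative, not taken from the question):
@RabbitListener(queues = "topic.two", containerFactory = "defContainer")
public void handle(Motorcycle motorcycle) {
    try {
        process(motorcycle); // hypothetical business logic
    }
    catch (SomeBusinessException e) {
        // permanent failure: reject without requeue, so the message is dropped or dead-lettered
        throw new AmqpRejectAndDontRequeueException("permanent failure", e);
    }
    // any other exception propagates and the delivery is requeued (defaultRequeueRejected = true)
}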
EDIT
There's something not adding up...
protected void prepareHolderForRollback(RabbitResourceHolder resourceHolder, RuntimeException exception) {
    if (resourceHolder != null) {
        resourceHolder.setRequeueOnRollback(isAlwaysRequeueWithTxManagerRollback() ||
                RabbitUtils.shouldRequeue(isDefaultRequeueRejected(), exception, logger));
    }
}
If both booleans are false, we don't requeue.
Found the issue:
Reason: it was caused by the errorHandler specified at the listener level. In some cases the error handler returned null, which caused the infinite loop (instead of rethrowing the exception and letting the transaction roll back). A corrected sketch follows.
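For reference, a sketch of the corrected handler (same bean name as above), rethrowing instead of returning null so the failure propagates back to the container:
@Bean
public RabbitListenerErrorHandler rabErrorHandler() {
    return new RabbitListenerErrorHandler() {
        @Override
        public Object handleError(Message amqpMessage, org.springframework.messaging.Message<?> message,
                ListenerExecutionFailedException e) throws Exception {
            // log / translate the failure here if needed, then propagate it;
            // returning null tells the container the error was handled and leads to the requeue loop
            throw e;
        }
    };
}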
Related
I am currently using Spring Kafka to consume messages from a topic along with Spring Retry's @Retryable. So basically, I am retrying processing of the consumed message in case of an error. While doing so, I want to avoid the exception message thrown by the KafkaMessageListenerContainer; instead I want to display a custom message. I tried adding an error handler in the ConcurrentKafkaListenerContainerFactory, but when I do that my retry does not get invoked.
Can someone guide me on how to display a custom exception message along with the @Retryable scenario as well? Below are my code snippets:
ConcurrentKafkaListenerContainerFactory Bean Config
@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(ConcurrentKafkaListenerContainerFactoryConfigurer configurer, ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory =
            new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(kafkaListenerContainerFactory, kafkaConsumerFactory);
    kafkaListenerContainerFactory.setConcurrency(1);
    // Set Error Handler
    /*kafkaListenerContainerFactory.setErrorHandler((thrownException, data) -> {
        log.info("Retries exhausted");
    });*/
    return kafkaListenerContainerFactory;
}
Kafka Consumer
@KafkaListener(
        topics = "${spring.kafka.reprocess-topic}",
        groupId = "${spring.kafka.consumer.group-id}",
        containerFactory = "concurrentKafkaListenerContainerFactory"
)
@Retryable(
        include = RestClientException.class,
        maxAttemptsExpression = "${spring.kafka.consumer.max-attempts}",
        backoff = @Backoff(delayExpression = "${spring.kafka.consumer.backoff-delay}")
)
public void onMessage(ConsumerRecord<String, String> consumerRecord) throws Exception {
    // Consume the record
    log.info("Consumed Record from topic : {} ", consumerRecord.topic());
    // process the record
    messageHandler.handleMessage(consumerRecord.value());
}
Below is the exception that I am getting:
You should not use @Retryable as well as the SeekToCurrentErrorHandler (which is now the default, since 2.5; so I presume you are using that version).
Instead, configure a custom SeekToCurrentErrorHandler with max attempts, back-off, and retryable exceptions.
That error message is normal; it's logged by the container. Its logging level can be reduced from ERROR to INFO or DEBUG by setting the logLevel property on the SeekToCurrentErrorHandler. You can also add a custom recoverer to it, to log your custom message after the retries are exhausted (a sketch follows).
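A hedged sketch of that configuration (the back-off values, log level, and log message are illustrative, not taken from the question):
@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.setConcurrency(1);
    // 3 delivery attempts in total (1 initial + 2 retries), 1 second apart
    SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(
            (record, exception) -> log.info("Retries exhausted for offset {}: {}",
                    record.offset(), exception.getMessage()),   // custom recoverer, runs after retries
            new FixedBackOff(1000L, 2L));
    errorHandler.setLogLevel(KafkaException.Level.INFO);        // quieter container logging
    factory.setErrorHandler(errorHandler);
    return factory;
}
Remove @Retryable from the listener so the two retry mechanisms don't stack.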
my event retry template,
#Bean(name = "eventRetryTemplate")
public RetryTemplate eventRetryTemplate() {
RetryTemplate template = new RetryTemplate();
ExceptionClassifierRetryPolicy retryPolicy = new ExceptionClassifierRetryPolicy();
Map<Class<? extends Throwable>, RetryPolicy> policyMap = new HashMap<>();
policyMap.put(NonRecoverableException.class, new NeverRetryPolicy());
policyMap.put(RecoverableException.class, new AlwaysRetryPolicy());
retryPolicy.setPolicyMap(policyMap);
ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
backOffPolicy.setInitialInterval(backoffInitialInterval);
backOffPolicy.setMaxInterval(backoffMaxInterval);
backOffPolicy.setMultiplier(backoffMultiplier);
template.setRetryPolicy(retryPolicy);
template.setBackOffPolicy(backOffPolicy);
return template;
}
my kafka listener using the retry template,
@KafkaListener(
        groupId = "${kafka.consumer.group.id}",
        topics = "${kafka.consumer.topic}",
        containerFactory = "eventContainerFactory")
public void eventListener(ConsumerRecord<String, String> events, Acknowledgment acknowledgment) {
    eventRetryTemplate.execute(retryContext -> {
        retryContext.setAttribute(EVENT, "my-event");
        eventConsumer.consume(events, acknowledgment);
        return null;
    });
}
my kafka listener container factory,
private ConcurrentKafkaListenerContainerFactory<String, String> getConcurrentKafkaListenerContainerFactory(
        KafkaProperties kafkaProperties) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setAckOnError(Boolean.TRUE);
    kafkaErrorEventHandler.setCommitRecovered(Boolean.TRUE);
    factory.setErrorHandler(kafkaErrorEventHandler);
    factory.setConcurrency(1);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setConsumerFactory(getEventConsumerFactory(kafkaProperties));
    return factory;
}
kafkaErrorEventHandler is my custom error handler, which extends SeekToCurrentErrorHandler and overrides the handle method, somewhat like this:
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    log.info("Non recoverable exception. Publishing event to Database");
    super.handle(thrownException, records, consumer, container);
    ConsumerRecord<String, String> consumerRecord = (ConsumerRecord<String, String>) records.get(0);
    FailedEvent event = createFailedEvent(thrownException, consumerRecord);
    failedEventService.insertFailedEvent(event);
    log.info("Successfully Published eventId {} to Database...", event.getEventId());
}
Here failedEventService is again my custom class, which puts these failed events into a queryable relational DB (I chose this to be my DLQ).
Our JMS infrastructure is load balanced. As a result, I am attempting to use a connectionNameList when configuring the connection factory. The idea is that any JMS message arriving on either the primary or the secondary queue manager gets picked up and processed. However, messages only appear to be picked up from the primary.
Here is my listener annotation:
@JmsListener(destination = "${request-queue}", containerFactory = "DefaultJmsListenerContainerFactory")
public void onMessage(Message msg) {
    System.out.println(msg.toString());
}
Here is the JMS listener container factory:
#Bean(name = "DefaultJmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory createJmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(buildConnectionFactory());
factory.setConcurrency(numberOfListeners);
factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
factory.setSessionTransacted(false);
factory.setErrorHandler(queueErrorHandler);
factory.setBackOff(getBackOffStrategy());
return factory;
}
And here is the connection factory:
#Bean(name = "MQConnectionFactory")
public ConnectionFactory buildConnectionFactory() {
try {
MQConnectionFactory mqcf = new MQConnectionFactory();
mqcf.setConnectionNameList(mq1.daluga.com(2171),mq2.daluga.com(2171));
mqcf.setChannel(channel);
mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
return mqcf;
} catch (Exception e) {
throw new RuntimeException(message, e);
}
}
I suspect something in my configuration is just not right. Is there anything obvious that folks see that might cause messages not to be picked up from the secondary queue manager?
Thanks!
I am struggling to find a way to schedule/delay messages in Spring AMQP/RabbitMQ and found a solution here. But I still have a problem: Spring AMQP/RabbitMQ does not receive any message.
My source is the following:
@Configuration
public class AmqpConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setAddresses("172.16.101.14:5672");
        connectionFactory.setUsername("admin");
        connectionFactory.setPassword("admin");
        connectionFactory.setPublisherConfirms(true);
        return connectionFactory;
    }

    @Bean
    @Scope("prototype")
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        return template;
    }

    @Bean
    CustomExchange delayExchange() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-delayed-type", "direct");
        return new CustomExchange("my-exchange", "x-delayed-message", true, false, args);
    }

    @Bean
    public Queue queue() {
        return new Queue("spring-boot-queue", true);
    }

    @Bean
    Binding binding(Queue queue, Exchange delayExchange) {
        return BindingBuilder.bind(queue).to(delayExchange).with("spring-boot-queue").noargs();
    }

    @Bean
    public SimpleMessageListenerContainer messageContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
        container.setQueues(queue());
        container.setExposeListenerChannel(true);
        container.setMaxConcurrentConsumers(1);
        container.setConcurrentConsumers(1);
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        container.setMessageListener(new ChannelAwareMessageListener() {
            public void onMessage(Message message, Channel channel) throws Exception {
                byte[] body = message.getBody();
                System.err.println("receive msg : " + new String(body));
                channel.basicAck(message.getMessageProperties().getDeliveryTag(), false); // acknowledge successful consumption
            }
        });
        return container;
    }
}
@Component
public class Send implements RabbitTemplate.ConfirmCallback {

    private RabbitTemplate rabbitTemplate;

    @Autowired
    public Send(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
        this.rabbitTemplate.setConfirmCallback(this);
        rabbitTemplate.setMandatory(true);
    }

    public void sendMsg(String content) {
        CorrelationData correlationId = new CorrelationData(UUID.randomUUID().toString());
        rabbitTemplate.convertAndSend("my-exchange", "", content, new MessagePostProcessor() {
            @Override
            public Message postProcessMessage(Message message) throws AmqpException {
                message.getMessageProperties().setHeader("x-delay", 6000);
                return message;
            }
        }, correlationId);
        System.err.println("delay message send ................");
    }

    /**
     * Publisher confirm callback
     */
    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        System.err.println(" callback id :" + correlationData);
        if (ack) {
            System.err.println("ok");
        } else {
            System.err.println("fail:" + cause);
        }
    }
}
Could someone give me some help?
Thanks all.
Delayed messaging has nothing to do with Spring AMQP; it is a library that resides with your code, so the library can't hold any message as such. There are two approaches you can try:
Old Approach:
Set the TTL (time to live) header on each message/queue (policy) and then introduce a DLQ to handle it. Once the TTL has expired, your messages will move from the DLQ to the main queue so that your listener can process them (a rough sketch follows).
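A Spring AMQP sketch of that approach (all queue, exchange, and routing-key names below are made up for illustration):
@Bean
public DirectExchange mainExchange() {
    return new DirectExchange("main.exchange");
}

@Bean
public Queue delayQueue() {
    // messages sit here until the per-queue TTL expires ...
    return QueueBuilder.durable("delay.queue")
            .withArgument("x-message-ttl", 60000)
            .withArgument("x-dead-letter-exchange", "main.exchange")
            .withArgument("x-dead-letter-routing-key", "main.routing.key")
            .build();
}

@Bean
public Queue mainQueue() {
    // ... and are then dead-lettered here, where the real listener consumes them
    return QueueBuilder.durable("main.queue").build();
}

@Bean
public Binding mainBinding() {
    return BindingBuilder.bind(mainQueue()).to(mainExchange()).with("main.routing.key");
}
Producers publish to delay.queue; after the 60-second TTL the broker dead-letters each message to main.exchange, and the listener picks it up from main.queue.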
Latest Approach:
Recently RabbitMQ came up with the RabbitMQ Delayed Message Plugin, with which you can achieve the same thing; the plugin has been available since RabbitMQ 3.5.8.
You can declare an exchange with the type x-delayed-message and then publish messages with the custom header x-delay expressing, in milliseconds, a delay time for the message. The message will be delivered to the respective queues after x-delay milliseconds.
Details:
To use the delayed-messaging feature, declare an exchange with the type x-delayed-message:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "direct");
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, args);
Note that we pass an extra header called x-delayed-type, more on it under the Routing section.
Once we have the exchange declared we can publish messages providing a header telling the plugin for how long to delay our messages:
byte[] messageBodyBytes = "delayed payload".getBytes("UTF-8");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
byte[] messageBodyBytes2 = "more delayed payload".getBytes("UTF-8");
Map<String, Object> headers2 = new HashMap<String, Object>();
headers2.put("x-delay", 1000);
AMQP.BasicProperties.Builder props2 = new AMQP.BasicProperties.Builder().headers(headers2);
channel.basicPublish("my-exchange", "", props2.build(), messageBodyBytes2);
In the above example we publish two messages, specifying the delay time with the x-delay header. For this example, the plugin will deliver to our queues first the message with the body "more delayed payload" and then the one with the body "delayed payload".
If the x-delay header is not present, then the plugin will proceed to route the message without delay.
More here: git
I have the following configuration
spring.rabbitmq.listener.prefetch=1
spring.rabbitmq.listener.concurrency=1
spring.rabbitmq.listener.retry.enabled=true
spring.rabbitmq.listener.retry.max-attempts=3
spring.rabbitmq.listener.retry.max-interval=1000
spring.rabbitmq.listener.default-requeue-rejected=false //I have also changed it to true but the same behavior still happens
and in my listener I throw AmqpRejectAndDontRequeueException to reject the message and force Rabbit not to try to redeliver it... but Rabbit redelivers it 3 times and then finally routes it to the dead-letter queue.
Is that the standard behavior given my configuration, or am I missing something?
You have to configure the retry policy to not retry for that exception.
You can't do that with properties, you have to configure the retry advice yourself.
I'll post an example later if you need help with that.
requeue-rejected is at the container level (below retry on the stack).
EDIT
@SpringBootApplication
public class So39853762Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So39853762Application.class, args);
        Thread.sleep(60000);
        context.close();
    }

    @RabbitListener(queues = "foo")
    public void foo(String foo) {
        System.out.println(foo);
        if ("foo".equals(foo)) {
            throw new AmqpRejectAndDontRequeueException("foo"); // won't be retried.
        }
        else {
            throw new IllegalStateException("bar"); // will be retried
        }
    }

    @Bean
    public ListenerRetryAdviceCustomizer retryCustomizer(SimpleRabbitListenerContainerFactory containerFactory,
            RabbitProperties rabbitPropeties) {
        return new ListenerRetryAdviceCustomizer(containerFactory, rabbitPropeties);
    }

    public static class ListenerRetryAdviceCustomizer implements InitializingBean {

        private final SimpleRabbitListenerContainerFactory containerFactory;

        private final RabbitProperties rabbitPropeties;

        public ListenerRetryAdviceCustomizer(SimpleRabbitListenerContainerFactory containerFactory,
                RabbitProperties rabbitPropeties) {
            this.containerFactory = containerFactory;
            this.rabbitPropeties = rabbitPropeties;
        }

        @Override
        public void afterPropertiesSet() throws Exception {
            ListenerRetry retryConfig = this.rabbitPropeties.getListener().getRetry();
            if (retryConfig.isEnabled()) {
                RetryInterceptorBuilder<?> builder = (retryConfig.isStateless()
                        ? RetryInterceptorBuilder.stateless()
                        : RetryInterceptorBuilder.stateful());
                Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
                retryableExceptions.put(AmqpRejectAndDontRequeueException.class, false);
                retryableExceptions.put(IllegalStateException.class, true);
                SimpleRetryPolicy policy =
                        new SimpleRetryPolicy(retryConfig.getMaxAttempts(), retryableExceptions, true);
                ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
                backOff.setInitialInterval(retryConfig.getInitialInterval());
                backOff.setMultiplier(retryConfig.getMultiplier());
                backOff.setMaxInterval(retryConfig.getMaxInterval());
                builder.retryPolicy(policy)
                        .backOffPolicy(backOff)
                        .recoverer(new RejectAndDontRequeueRecoverer());
                this.containerFactory.setAdviceChain(builder.build());
            }
        }
    }
}
NOTE: You cannot currently configure the policy to retry all exceptions, "except" this one - you have to classify all exceptions you want retried (and they can't be a superclass of AmqpRejectAndDontRequeueException). I have opened an issue to support this.
The other answers posted here didn't work for me when using Spring Boot 2.3.5 and Spring AMQP Starter 2.2.12, but for these versions I was able to customize the retry policy so that it does not retry AmqpRejectAndDontRequeueException:
@Configuration
public class RabbitConfiguration {

    @Bean
    public RabbitRetryTemplateCustomizer customizeRetryPolicy(
            @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int maxAttempts) {
        SimpleRetryPolicy policy =
                new SimpleRetryPolicy(maxAttempts, Map.of(AmqpRejectAndDontRequeueException.class, false), true, true);
        return (target, retryTemplate) -> retryTemplate.setRetryPolicy(policy);
    }
}
This lets the retry policy skip retries for AmqpRejectAndDontRequeueExceptions but retries all other exceptions as usual.
Configured this way, it traverses the causes of an exception, and skips retries if it finds an AmqpRejectAndDontRequeueException.
Traversing the causes is needed because org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter#invokeHandler wraps all exceptions in a ListenerExecutionFailedException.
I need to be able to receive a notification when an ActiveMQ client consumes an MQTT message.
activemq.xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry topic=">" advisoryForConsumed="true" />
        </policyEntries>
    </policyMap>
</destinationPolicy>
In the code below, I get MQTT messages on mytopic fine. I do not get advisory messages in processAdvisoryMessage / processAdvisoryBytesMessage.
@Component
public class MqttMessageListener {

    @JmsListener(destination = "mytopic")
    public void processMessage(BytesMessage message) {
    }

    @JmsListener(destination = "ActiveMQ.Advisory.MessageConsumed.Topic.>")
    public void processAdvisoryMessage(Message message) {
        System.out.println("processAdvisoryMessage Got a message");
    }

    @JmsListener(destination = "ActiveMQ.Advisory.MessageConsumed.Topic.>")
    public void processAdvisoryBytesMessage(BytesMessage message) {
        System.out.println("processAdvisoryBytesMessage Got a message");
    }
}
What am I doing wrong?
I have also attempted doing this with an ActiveMQ BrokerFilter:
public class AMQMessageBrokerFilter extends GenericBrokerFilter {

    @Override
    public void acknowledge(ConsumerBrokerExchange consumerExchange, MessageAck ack) throws Exception {
        super.acknowledge(consumerExchange, ack);
    }

    @Override
    public void postProcessDispatch(MessageDispatch messageDispatch) {
        Message message = messageDispatch.getMessage();
    }

    @Override
    public void messageDelivered(ConnectionContext context, MessageReference messageReference) {
        log.debug("messageDelivered called.");
        super.messageDelivered(context, messageReference);
    }

    @Override
    public void messageConsumed(ConnectionContext context, MessageReference messageReference) {
        log.debug("messageConsumed called.");
        super.messageConsumed(context, messageReference);
    }
}
In this second scenario I was unable to have both the message and a context with which to send the consumed notification. acknowledge/messageDelivered/messageConsumed all have a connection context, but only postProcessDispatch has the message, part of which (the payload is JSON) I need in order to send my outgoing message. I could be eager and use send, which has both, but it is safer to wait until the message has at least been acknowledged.
I have tried:
@Override
public void postProcessDispatch(MessageDispatch messageDispatch) {
    super.postProcessDispatch(messageDispatch);
    String topic = messageDispatch.getDestination().getPhysicalName();
    if (topic == null || topic.equals("delivered"))
        return;
    try {
        ActiveMQTopic responseTopic = new ActiveMQTopic("delivered");
        ActiveMQTextMessage responseMsg = new ActiveMQTextMessage();
        responseMsg.setPersistent(false);
        responseMsg.setResponseRequired(false);
        responseMsg.setProducerId(new ProducerId());
        responseMsg.setText("Delivered msg: " + messageDispatch.getMessage());
        responseMsg.setDestination(responseTopic);
        String messageKey = ":" + rand.nextLong();
        MessageId msgId = new MessageId(messageKey);
        responseMsg.setMessageId(msgId);
        ProducerBrokerExchange producerExchange = new ProducerBrokerExchange();
        ConnectionContext context = getAdminConnectionContext();
        producerExchange.setConnectionContext(context);
        producerExchange.setMutable(true);
        producerExchange.setProducerState(new ProducerState(new ProducerInfo()));
        next.send(producerExchange, responseMsg);
    }
    catch (Exception e) {
        log.debug("Exception: " + e);
    }
}
However, the above seems to lead to an unstable server. I'm thinking this is related to using getAdminConnectionContext, which seems wrong.
My factory was setting pubSubDomain to false by default. This disables connections for advisory messages on topics. I set it to true and things started working. Note that queues will not work with this set, so to get around that I created two factories and named their beans: the queue factory below, and a topic-side factory sketched after it.
#Bean(name="main")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory);
// factory.setDestinationResolver(destinationResolver);
// factory.setPubSubDomain(true);
factory.setConcurrency("3-10");
return factory;
}
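A sketch of the second factory, for topics including the advisory topics (the bean name and concurrency are illustrative):
@Bean(name = "topics")
public DefaultJmsListenerContainerFactory topicJmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true); // required for topic (and advisory) destinations
    factory.setConcurrency("3-10");
    return factory;
}
Listeners on the advisory topics then reference containerFactory = "topics", while plain queue listeners keep using "main".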