Can't retrieve x-death header in rabbit listener (Spring Boot)

I am using RabbitMQ and I want to access the message retry count, but reading the x-death header always returns null, as shown in this example. The message itself is read correctly.
@Component
@RabbitListener(queues = "myQueue")
public class AdyenNotificationMessageListener {

    @RabbitHandler
    public void processMessage(byte[] messageByte,
            @Header(name = "x-death", required = false) Map<?, ?> death) {
        // death is always null
    }

}
Using Spring Boot version 2.4.1 with spring-boot-starter-amqp.
Any hint about what I may be doing wrong would be highly appreciated.

It works fine for me; are you sure the message has the header?
@SpringBootApplication
public class So68231711Application {

    public static void main(String[] args) {
        SpringApplication.run(So68231711Application.class, args);
    }

    @Bean
    Queue queue() {
        return QueueBuilder.durable("so68231711")
                .deadLetterExchange("")
                .deadLetterRoutingKey("so68231711.dlq")
                .build();
    }

    @Bean
    Queue dlq() {
        return new Queue("so68231711.dlq");
    }

    @RabbitListener(queues = "so68231711")
    public void listen(String in) {
        System.out.println(in);
        throw new AmqpRejectAndDontRequeueException("toDLQ");
    }

    @RabbitListener(queues = "so68231711.dlq")
    public void listenDlq(String in, @Header(name = "x-death", required = false) Map<?, ?> death) {
        System.out.println(in);
        System.out.println(death);
    }

}
foo
... Execution of Rabbit message listener failed.
...
foo
{reason=rejected, count=1, exchange=, time=Tue Jul 06 11:09:30 EDT 2021, routing-keys=[so68231711], queue=so68231711}
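If death still comes out as null for you, a quick way to see everything the broker actually delivered is to bind the complete header map. This is just a debugging sketch (it uses Spring AMQP's @Headers annotation, i.e. org.springframework.messaging.handler.annotation.Headers, with the DLQ name from the example above):

@RabbitListener(queues = "so68231711.dlq")
public void dumpHeaders(String in, @Headers Map<String, Object> headers) {
    // prints every delivered header, so you can check whether x-death is present at all
    headers.forEach((name, value) -> System.out.println(name + " = " + value));
}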

Related

Kafka is not assigning a partition after consumer.poll(Duration.ZERO)

I started a project where I implement Apache Kafka.
I already have a working producer that writes data into the queue. So far so good. Now I want to program a consumer that reads out all the data in the queue.
This is the corresponding code:
try {
    consumer.subscribe(Collections.singletonList("names"));
    if (startingPoint != null) {
        consumer.poll(Duration.ofMillis(0));
        consumer.seekToBeginning(consumer.assignment());
    }
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        keyValuePairs.add(new String[] { record.key(), record.value() });
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    consumer.close();
}
That code doesn't work as it is supposed to right now; only new records are consumed.
I was able to find out that
seekToBeginning() isn't working because no partition is assigned to the consumer at that moment.
If I increase the duration of the poll it works; if I just pause the thread instead, it doesn't.
Could someone please explain to me why that is the case? I tried to find out by myself and already read something about the Kafka heartbeat, but I still haven't fully understood what exactly happens.
The assignment takes time; polling for 0 will generally mean the poll will exit before it occurs.
You should add a ConsumerRebalanceListener callback to the subscribe() method and perform the seek in onPartitionsAssigned().
EDIT
@SpringBootApplication
public class So69121558Application {

    public static void main(String[] args) {
        SpringApplication.run(So69121558Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(ConsumerFactory<String, String> cf, KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so69121558", "test");
            Consumer<String, String> consumer = cf.createConsumer("group", "");
            consumer.subscribe(Collections.singletonList("so69121558"), new ConsumerRebalanceListener() {

                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    consumer.seekToBeginning(partitions);
                }

            });
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(System.out::println);
            Thread.sleep(5000);
            consumer.close();
        };
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
    }

}
Here are a couple of examples of doing it the Spring way - just add one of these (or both) to the above class.
@KafkaListener(id = "so69121558", topics = "so69121558")
void listen(ConsumerRecord<?, ?> rec) {
    System.out.println(rec);
}

@KafkaListener(id = "so69121558-1", topics = "so69121558")
void pojoListen(String in) {
    System.out.println(in);
}
The seeks are done a bit differently too; here's the complete example:
@SpringBootApplication
public class So69121558Application extends AbstractConsumerSeekAware {

    public static void main(String[] args) {
        SpringApplication.run(So69121558Application.class, args);
    }

    @KafkaListener(id = "so69121558", topics = "so69121558")
    void listen(ConsumerRecord<?, ?> rec) {
        System.out.println(rec);
    }

    @KafkaListener(id = "so69121558-1", topics = "so69121558")
    void pojoListen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so69121558").partitions(1).replicas(1).build();
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        callback.seekToBeginning(assignments.keySet());
    }

}

Kafka Consumer Invalid Payload Error Handler

I have the configuration below. When a message is invalid I want to send an email, and for other errors I want to save the record in a database. How can I handle this in errorHandler()?
@Configuration
@EnableKafka
public class KafkaConsumerConfig implements KafkaListenerConfigurer {

    @Bean
    ErrorHandler errorHandler() {
        return new SeekToCurrentErrorHandler((rec, ex) -> {
            dbService.saveErrorMsg(rec);
        }, new FixedBackOff(5000, 3));
    }

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        registrar.setValidator(this.validator);
    }

    @KafkaListener(topics = "mytopic", concurrency = "3", groupId = "mytopic-1-groupid")
    public void consumeFromTopic1(@Payload @Valid ValidatedClass val, ConsumerRecordMetadata meta) throws Exception {
        dbService.callDB(val, "t");
    }

}
I presume your email code is in dbService.saveErrorMsg.
Spring Boot should automatically detect the ErrorHandler #Bean and wire it into the container factory.
See Boot's KafkaAnnotationDrivenConfiguration class and ConcurrentKafkaListenerContainerFactoryConfigurer.
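As a sketch of the branching itself, you could inspect the failure cause inside the recoverer; emailService and its sendInvalidPayloadAlert method are hypothetical here, and the instanceof check assumes validation failures surface with MethodArgumentNotValidException (org.springframework.messaging.handler.annotation.support) as the cause:

@Bean
ErrorHandler errorHandler(DbService dbService, EmailService emailService) {
    return new SeekToCurrentErrorHandler((rec, ex) -> {
        // unwrap the listener exception to see what actually failed
        Throwable cause = ex.getCause() != null ? ex.getCause() : ex;
        if (cause instanceof MethodArgumentNotValidException) {
            emailService.sendInvalidPayloadAlert(rec, cause); // hypothetical helper
        }
        else {
            dbService.saveErrorMsg(rec); // persist all other errors
        }
    }, new FixedBackOff(5000, 3));
}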

Setting authorizationExceptionRetryInterval for Spring Kafka

Does anyone know how to set the new authorizationExceptionRetryInterval property without creating the ConcurrentKafkaListenerContainerFactory manually?
I was going to say...
@Component
class ContainerFactoryCustomizer {

    ContainerFactoryCustomizer(AbstractKafkaListenerContainerFactory<?, ?, ?> factory) {
        factory.setContainerCustomizer(
                container -> container.getContainerProperties()
                        .setAuthorizationExceptionRetryInterval(Duration.ofSeconds(10L)));
    }

}
But that doesn't work, due to a bug (the container customizer is not set up).
Here is a work-around:
@SpringBootApplication
public class So60054097Application {

    public static void main(String[] args) {
        SpringApplication.run(So60054097Application.class, args);
    }

    @KafkaListener(id = "so60054097", topics = "so60054097", autoStartup = "false")
    public void listen(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so60054097").partitions(1).replicas(1).build();
    }

    @Bean
    public ApplicationRunner runner(KafkaListenerEndpointRegistry registry) {
        return args -> {
            MessageListenerContainer container = registry.getListenerContainer("so60054097");
            container.getContainerProperties()
                    .setAuthorizationExceptionRetryInterval(Duration.ofSeconds(10L));
            container.start();
        };
    }

}
(Set autoStartup to false, fix the property, and start the container.)

How to build a nonblocking Consumer when using AsyncRabbitTemplate with Request/Reply Pattern

I'm new to RabbitMQ and currently trying to implement a non-blocking producer with a non-blocking consumer. I've built a test producer where I played around with type references:
@Service
public class Producer {

    @Autowired
    private AsyncRabbitTemplate asyncRabbitTemplate;

    public <T extends RequestEvent<S>, S> RabbitConverterFuture<S> asyncSendEventAndReceive(final T event) {
        return asyncRabbitTemplate.convertSendAndReceiveAsType(QueueConfig.EXCHANGE_NAME, event.getRoutingKey(),
                event, event.getResponseTypeReference());
    }

}
And in some other place, the test function that gets called in a RestController:
@Autowired
Producer producer;

public void test() throws InterruptedException, ExecutionException {
    TestEvent requestEvent = new TestEvent("SOMEDATA");
    RabbitConverterFuture<TestResponse> reply = producer.asyncSendEventAndReceive(requestEvent);
    log.info("Hello! The Reply is: {}", reply.get());
}
This was pretty straightforward so far; where I'm stuck now is how to create a consumer that is non-blocking too. My current listener:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public TestResponse onReceive(TestEvent event) throws Exception {
    Future<TestResponse> replyLater = processDataLater(event.getSomeData());
    return replyLater.get();
}
As far as I'm aware, when using @RabbitListener the listener runs in its own thread, and I could configure the MessageListener to use more than one thread for the active listeners. Because of that, blocking the listener thread with future.get() does not block the application itself. Still, there might be cases where all threads are blocked and new events are stuck in the queue when they don't need to be. What I would like to do is just receive the event without having to instantly return the result, which is probably not possible with @RabbitListener. Something like:
@RabbitListener(queues = QueueConfig.QUEUENAME)
public void onReceive(TestEvent event) {
    /*
     * Some fictional RabbitMQ API call where I get a ReplyContainer which contains
     * the correlation ID for the event. I can call replyContainer.reply(testResponse)
     * later in the code without blocking the listener thread.
     */
    ReplyContainer replyContainer = AsyncRabbitTemplate.getReplyContainer();
    // processDataLater calls reply on the container when done with its action
    processDataLater(event.getSomeData(), replyContainer);
}
What is the best way to implement such behaviour with RabbitMQ in Spring?
EDIT: config class:
@Configuration
@EnableRabbit
public class RabbitMQConfig implements RabbitListenerConfigurer {

    public static final String topicExchangeName = "exchange";

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(topicExchangeName);
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("localhost");
        return connectionFactory;
    }

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public RabbitTemplate rabbitTemplate() {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory());
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
        return rabbitTemplate;
    }

    @Bean
    public AsyncRabbitTemplate asyncRabbitTemplate() {
        return new AsyncRabbitTemplate(rabbitTemplate());
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    Queue queue() {
        return new Queue("test", false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue()).to(exchange()).with("foo.#");
    }

    @Bean
    public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(rabbitConnectionFactory());
        factory.setMaxConcurrentConsumers(5);
        factory.setMessageConverter(producerJackson2MessageConverter());
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        return factory;
    }

    @Override
    public void configureRabbitListeners(final RabbitListenerEndpointRegistrar registrar) {
        registrar.setContainerFactory(myRabbitListenerContainerFactory());
    }

}
I don't have time to test it right now, but something like this should work; presumably you don't want to lose messages so you need to set the ackMode to MANUAL and do the acks yourself (as shown).
UPDATE
@SpringBootApplication
public class So52173111Application {

    private final ExecutorService exec = Executors.newCachedThreadPool();

    @Autowired
    private RabbitTemplate template;

    @Bean
    public ApplicationRunner runner(AsyncRabbitTemplate asyncTemplate) {
        return args -> {
            RabbitConverterFuture<Object> future = asyncTemplate.convertSendAndReceive("foo", "test");
            future.addCallback(r -> {
                System.out.println("Reply: " + r);
            }, t -> {
                t.printStackTrace();
            });
        };
    }

    @Bean
    public AsyncRabbitTemplate asyncTemplate(RabbitTemplate template) {
        return new AsyncRabbitTemplate(template);
    }

    @RabbitListener(queues = "foo")
    public void listen(String in, Channel channel, @Header(AmqpHeaders.DELIVERY_TAG) long tag,
            @Header(AmqpHeaders.CORRELATION_ID) String correlationId,
            @Header(AmqpHeaders.REPLY_TO) String replyTo) {

        ListenableFuture<String> future = handleInput(in);
        future.addCallback(result -> {
            Address address = new Address(replyTo);
            this.template.convertAndSend(address.getExchangeName(), address.getRoutingKey(), result, m -> {
                m.getMessageProperties().setCorrelationId(correlationId);
                return m;
            });
            try {
                channel.basicAck(tag, false);
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }, t -> {
            t.printStackTrace();
        });
    }

    private ListenableFuture<String> handleInput(String in) {
        SettableListenableFuture<String> future = new SettableListenableFuture<String>();
        exec.execute(() -> {
            try {
                Thread.sleep(2000);
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            future.set(in.toUpperCase());
        });
        return future;
    }

    public static void main(String[] args) {
        SpringApplication.run(So52173111Application.class, args);
    }

}

Spring Boot RabbitMQ dead letter queue config not working

I configured Spring Boot RabbitMQ's dead letter queue, but the ErrorHandler never receives any message. I searched all the questions about dead letter queues, but could not figure it out. Can anyone help me?
RabbitConfig.java to configure the dead letter queue/exchange:
@Configuration
public class RabbitConfig {

    public final static String MAIL_QUEUE = "mail_queue";
    public final static String DEAD_LETTER_EXCHANGE = "dead_letter_exchange";
    public final static String DEAD_LETTER_QUEUE = "dead_letter_queue";

    public static Map<String, Object> args = new HashMap<String, Object>();

    static {
        args.put("x-dead-letter-exchange", DEAD_LETTER_EXCHANGE);
        //args.put("x-dead-letter-routing-key", DEAD_LETTER_QUEUE);
        args.put("x-message-ttl", 5000);
    }

    @Bean
    public Queue mailQueue() {
        return new Queue(MAIL_QUEUE, true, false, false, args);
    }

    @Bean
    public Queue deadLetterQueue() {
        return new Queue(DEAD_LETTER_QUEUE, true);
    }

    @Bean
    public FanoutExchange deadLetterExchange() {
        return new FanoutExchange(DEAD_LETTER_EXCHANGE);
    }

    @Bean
    public Binding deadLetterBinding() {
        return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange());
    }

}
ErrorHandler.java to process the DEAD_LETTER_QUEUE:
@Component
@RabbitListener(queues = RabbitConfig.DEAD_LETTER_QUEUE)
public class ErrorHandler {

    @RabbitHandler
    public void handleError(Object message) {
        System.out.println("xxxxxxxxxxxxxxxxxx" + message);
    }

}
MailServiceImpl.java to process the MAIL_QUEUE:
@Service
@RabbitListener(queues = RabbitConfig.MAIL_QUEUE)
@ConditionalOnProperty("spring.mail.host")
public class MailServiceImpl implements MailService {

    @Autowired
    private JavaMailSender mailSender;

    @RabbitHandler
    @Override
    public void sendMail(TMessageMail form) {
        //......
        try {
            mailSender.save(form);
        }
        catch (Exception e) {
            logger.error("error in sending mail: {}", e.getMessage());
            throw new AmqpRejectAndDontRequeueException(e.getMessage());
        }
    }

}
Thank god, I finally found the answer!
All the configuration is correct; the problem is that all the queues, such as mail_queue, were created before I configured the dead letter queue. Setting x-dead-letter-exchange on a queue after the queue has already been created does not take effect.
In short: after changing a queue's arguments, you have to delete and re-create the queue!!! Such a simple tip cost me several hours......
To delete the queue, I followed this answer:
Deleting queues in RabbitMQ
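For reference, a minimal sketch of deleting and re-declaring the queue from Spring AMQP itself, assuming a RabbitAdmin bean and the names from RabbitConfig above (note that deleting a queue discards any messages still in it):

@Bean
public ApplicationRunner recreateMailQueue(RabbitAdmin admin) {
    return args -> {
        // delete the queue that was declared with the old arguments...
        admin.deleteQueue(RabbitConfig.MAIL_QUEUE);
        // ...and re-declare it so the new x-dead-letter-exchange argument takes effect
        admin.declareQueue(new Queue(RabbitConfig.MAIL_QUEUE, true, false, false, RabbitConfig.args));
    };
}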
