spring-kafka: DefaultErrorHandler with DeadLetterPublishingRecoverer(BiFunction) not considered. No DL topic created - spring-boot

In my Spring Boot application using spring-kafka, I am trying to configure an error handler that does two things:
Retry failed message consumption a certain number of times (FixedBackOff) before publishing to a dead letter topic
Create a dead letter topic with a name of my choice
Using
// Version highlights
id 'org.springframework.boot' version '2.7.2'
...
implementation 'org.springframework.kafka:spring-kafka' // 2.8.8
Here is the code I am using based on what I read in Spring docs and reiterated in several articles online:
@Bean
public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) {
    var recoverer =
        new DeadLetterPublishingRecoverer(
            template,
            (record, e) -> new TopicPartition("%s.deadLetter".formatted(record.topic()), 0)
        );
    return new DefaultErrorHandler(recoverer, new FixedBackOff(3000, 3));
}
But the above bean is not considered/used. When consumption encounters a failure (currently simulated by throwing an exception),
my FixedBackOff is ignored and the default one (10 attempts, back to back) is used instead.
No dead letter topic is created.
Currently, the consumer config class has minimal stuff:
@Bean public ConsumerFactory<String, byte[]> byteArrayConsumerFactory() { ... }
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { ... }
@Bean public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) { ...code pasted above... }
And the listener is as follows:
@KafkaListener(
        topics = "${app.config.kafka.topic}",
        containerFactory = "byteArrayListenerContainerFactory"
)
public void consumeMessage(ConsumerRecord<String, byte[]> record) { ... }
I am at a loss figuring out what I have missed, or whether I have added something that conflicts with the wiring. Any help is highly appreciated.

The error handler bean will only be wired in by Boot if you are using Boot's auto-configured container factory.
Since you are creating your own container factory bean...
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { ... }
...you must add the error handler yourself - see setCommonErrorHandler().
The framework does not automatically provision the dead letter topic; add a @Bean NewTopic dlt() { ... }.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#configuring-topics
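Putting both points together, a sketch of the wiring (bean and method names are taken from the question; the single-partition topic matches the recoverer's fixed partition 0):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory(
        ConsumerFactory<String, byte[]> byteArrayConsumerFactory,
        DefaultErrorHandler byteArrayDefaultErrorHandler) {
    var factory = new ConcurrentKafkaListenerContainerFactory<String, byte[]>();
    factory.setConsumerFactory(byteArrayConsumerFactory);
    // Boot only wires the error handler into its own auto-configured factory,
    // so a custom factory must set it explicitly.
    factory.setCommonErrorHandler(byteArrayDefaultErrorHandler);
    return factory;
}

// The recoverer targets "<topic>.deadLetter" partition 0, but nothing creates
// that topic automatically; declare it (name derived from the listener topic).
@Bean
public NewTopic deadLetterTopic(@Value("${app.config.kafka.topic}") String topic) {
    return TopicBuilder.name("%s.deadLetter".formatted(topic))
            .partitions(1)
            .replicas(1)
            .build();
}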

Related

Spring Kafka Requirements for Supporting Multiple Consumers

As one would expect, it's common to want different consumers deserializing in different ways off different topics in Kafka. There is a known problem with Spring Boot auto-configuration: as soon as other factories are defined, Spring Kafka or the auto-configuration complains about not being able to find a suitable consumer factory anymore. Some have pointed out that one solution is to include a ConsumerFactory of type (Object, Object) in the config, but no one has shown the source code for this, or clarified whether it needs to be named in any particular way, or whether simply adding this factory to the config removes the need to turn off auto-configuration. All of that remains very unclear.
If you are not familiar with this issue please read https://github.com/spring-projects/spring-boot/issues/19221
Where it was just stated: OK, define the ConsumerFactory and add it somewhere in your config. Can someone be a bit more precise about this, please?
Show exactly how to define the ConsumerFactory so that Spring Boot auto-configuration will not complain.
Explain whether turning off auto-configuration is or is not needed.
Explain whether the ConsumerFactory needs to be named in any special way or not.
The simplest solution is to stick with Boot's auto-configuration and override the deserializer on the @KafkaListener itself...
@SpringBootApplication
public class So63108344Application {

    public static void main(String[] args) {
        SpringApplication.run(So63108344Application.class, args);
    }

    @KafkaListener(id = "so63108344-1", topics = "so63108344-1")
    public void listen1(String in) {
        System.out.println(in);
    }

    @KafkaListener(id = "so63108344-2", topics = "so63108344-2", properties =
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG +
                    "=org.apache.kafka.common.serialization.ByteArrayDeserializer")
    public void listen2(byte[] in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so63108344-1").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so63108344-2").partitions(1).replicas(1).build();
    }
}
For more advanced container customization (or if you don't want to pollute the @KafkaListener), you can use a ContainerCustomizer...
@Component
class Customizer {

    public Customizer(ConcurrentKafkaListenerContainerFactory<?, ?> factory) {
        factory.setContainerCustomizer(container -> {
            if (container.getGroupId().equals("so63108344-2")) {
                container.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
                container.getContainerProperties().getKafkaConsumerProperties()
                        .setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            }
        });
    }
}
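Regarding the explicit asks: turning off auto-configuration is not needed, and no special bean name is required. What the linked Boot issue boils down to is that Boot's auto-configured kafkaListenerContainerFactory injects a ConsumerFactory<Object, Object>, so if you define your own factories you can satisfy it by also declaring one with exactly those generics. A minimal sketch, assuming Boot 2.x:

@Bean
public ConsumerFactory<Object, Object> kafkaConsumerFactory(KafkaProperties properties) {
    // Mirrors what Boot would auto-configure; the Object/Object generics are
    // what the auto-configured container factory looks for.
    return new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
}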

Spring Integration: connection to multiple MQ servers by config

I have a Spring Boot application and I have it running against one IBM MQ server.
Now we want it to connect to three or more MQ servers. My intention is now to just add XY connection infos to the environment and then get XY MQConnectionFactory beans and all the other beans that are needed for processing.
At the moment this is what I have:
@Bean
@Qualifier(value = "MQConnection")
public MQConnectionFactory getIbmConnectionFactory() throws JMSException {
    MQConnectionFactory factory = new MQConnectionFactory();
    // setting all the parameters here
    return factory;
}
But this is quite static. Is there an elegant way of doing this?
I stumbled upon IntegrationFlow. Is this a possible working solution?
Thanks for all your tips!
KR
Solution
Based on Artem Bilan's response I built this class.
@Configuration
public class ConnectionWithIntegrationFlowMulti {

    protected static final Logger LOG = Logger.create();

    @Value("${mq.queue.jms.sources.queue.queue-manager}")
    private String queueManager;

    // queueName is referenced below but was not declared in the original post;
    // presumably it is injected like queueManager (property name assumed).
    @Value("${mq.queue.jms.sources.queue.queue-name}")
    private String queueName;

    @Autowired
    private ConnectionConfig connectionConfig;

    @Autowired
    private SSLSocketFactory sslSocketFactory;

    @Autowired
    private IntegrationFlowContext flowContext;

    @Bean
    public MessageChannel queureader() {
        return new DirectChannel();
    }

    @PostConstruct
    public void processBeanDefinitionRegistry() throws BeansException {
        Assert.notEmpty(connectionConfig.getTab().getLocations(),
                "At least one CCDT file location must be provided.");
        for (String tabLocation : connectionConfig.getTab().getLocations()) {
            try {
                IntegrationFlowRegistration theFlow =
                        this.flowContext.registration(createFlow(tabLocation)).register();
                LOG.info("Registered bean flow for %s with id = %s", queueManager, theFlow.getId());
            } catch (JMSException e) {
                LOG.error(e);
            }
        }
    }

    public IntegrationFlow createFlow(String tabLocation) throws JMSException {
        LOG.info("creating ibmInbound");
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(getConnection(tabLocation))
                        .destination(createDestinationBean()))
                .handle(m -> LOG.info("received payload: " + m.getPayload().toString()))
                .get();
    }

    public MQConnectionFactory getConnection(String tabLocation) throws JMSException {
        MQConnectionFactory factory = new MQConnectionFactory();
        // doing stuff
        return factory;
    }

    @Bean
    public MQQueue createDestinationBean() {
        LOG.info("creating destination bean");
        MQQueue queue = new MQQueue();
        try {
            queue.setBaseQueueManagerName(queueManager);
            queue.setBaseQueueName(queueName);
        } catch (Exception e) {
            LOG.error(e, "destination bean: Error for integration flow");
        }
        return queue;
    }
}
With Spring Integration you can create IntegrationFlow instances dynamically at runtime. For that purpose there is an IntegrationFlowContext with its registration() API. The returned IntegrationFlowRegistrationBuilder has a callback like:
/**
 * Add an object which will be registered as an {@link IntegrationFlow} dependant bean in the
 * application context. Usually it is some support component, which needs an application context.
 * For example dynamically created connection factories or header mappers for AMQP, JMS, TCP etc.
 * @param bean an additional arbitrary bean to register into the application context.
 * @return the current builder instance
 */
IntegrationFlowRegistrationBuilder addBean(Object bean);
So, your MQConnectionFactory instances can be populated alongside the other flow beans, used as references in the particular JMS components, and registered as beans, too.
See more info in the docs: https://docs.spring.io/spring-integration/docs/5.2.3.RELEASE/reference/html/dsl.html#java-dsl-runtime-flows
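To make that concrete for this case, a sketch of a registration that also hands the dynamically created factory to the context (reusing the helper methods from the solution class above; registerFlowFor is an illustrative name):

public void registerFlowFor(String tabLocation) throws JMSException {
    MQConnectionFactory connectionFactory = getConnection(tabLocation);
    IntegrationFlow flow = IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                    .destination(createDestinationBean()))
            .handle(m -> LOG.info("received payload: " + m.getPayload()))
            .get();
    this.flowContext.registration(flow)
            // the factory becomes a managed bean tied to this flow's lifecycle
            .addBean(connectionFactory)
            .register();
}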
If you are fine with creating them statically, you can create the beans as you are now (each with a unique qualifier), but you can access them all dynamically in your services/components by having an @Autowired List<MQConnectionFactory> field or an @Autowired Map<String, MQConnectionFactory> field. Spring will automatically populate the fields with all of the beans of type MQConnectionFactory.
In the Map variant, the String key will be the bean name; a sketch follows below.
If you also want to create the beans dynamically based on some properties, etc., it gets a little more complicated. You will need to look into something along the lines of instantiating beans at runtime.
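A minimal sketch of the Map-injection approach (class and method names are illustrative):

@Service
public class QueueManagerRegistry {

    // Spring fills the map with every MQConnectionFactory bean,
    // keyed by bean name.
    private final Map<String, MQConnectionFactory> factories;

    public QueueManagerRegistry(Map<String, MQConnectionFactory> factories) {
        this.factories = factories;
    }

    public MQConnectionFactory forServer(String beanName) {
        return factories.get(beanName);
    }
}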

Spring 3 and Rabbit MQ integration (not Spring Boot)

I'm having difficulty getting a Spring 3 application to integrate with RabbitMQ, in order to receive messages from a queue (I do not need to send messages).
Part of the challenge is that much of the documentation now relates to Spring Boot. The related Spring guide is helpful, but following the steps does not seem to work in my case. For instance, the guide includes the text:
The message listener container and receiver beans are all you need to listen for messages.
So I have setup the listener container and receiver beans with the following code.
Setting up message handler
@Component
public class CustomMessageHandler {

    public void handleMessage(String text) {
        System.out.println("Received: " + text);
    }
}
Setting up configuration
@Configuration
public class RabbitConfig {

    @Bean
    public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setRoutingKey("queue-name");
        return rabbitTemplate;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setHost("...host...");
        connectionFactory.setPort(5671);
        connectionFactory.setVirtualHost("...virtual host...");
        connectionFactory.setUsername("...username...");
        connectionFactory.setPassword("...password...");
        return connectionFactory;
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter(CustomMessageHandler messageHandler) {
        return new MessageListenerAdapter(messageHandler, "handleMessage");
    }

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
            MessageListenerAdapter messageListenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setQueueNames("queue-name");
        container.setConnectionFactory(connectionFactory);
        container.setMessageListener(messageListenerAdapter);
        return container;
    }
}
Unfortunately with this setup, the application will start up, but it never triggers the message handler. The queue it is trying to read from also has one message sitting in it, waiting to be consumed.
Any ideas on something that is missing, or appears misconfigured?
Thanks to some dependency management assistance from @GaryRussell, I was able to see that the versions of spring-rabbit and spring-amqp were too recent. Using the older 1.3.9.RELEASE unfortunately proved to add additional challenges.
Some other assistance came in the form of using an actual RabbitMQ Java client. This option was much simpler to implement, and avoided the dependency problems. Ultimately I needed to include the following dependency:
<dependency>
    <groupId>com.rabbitmq</groupId>
    <artifactId>amqp-client</artifactId>
    <version>5.7.3</version>
</dependency>
And then I simply followed their documentation on creating a connection, and consuming messages.
Voila, it works!
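For reference, the plain-client consumption described in their documentation boils down to roughly the following (connection details and queue name are placeholders; amqp-client 5.x API):

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("...host...");
factory.setUsername("...username...");
factory.setPassword("...password...");

Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

// DeliverCallback is invoked for every message the broker pushes to us.
DeliverCallback deliverCallback = (consumerTag, delivery) ->
        System.out.println("Received: " + new String(delivery.getBody(), StandardCharsets.UTF_8));

// autoAck = true: messages are acknowledged as soon as they are delivered.
channel.basicConsume("queue-name", true, deliverCallback, consumerTag -> { });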

Bean injection for spring integration message handler

I am fairly new to spring and spring integration. What I'm trying to do: publish mqtt messages using spring integration.
Here is the code:
@Configuration
@IntegrationComponentScan
@Service
public class MQTTPublishAdapter {

    private MqttConfiguration mqttConfiguration;

    public MQTTPublishAdapter(MqttConfiguration mqttConfiguration) {
        this.mqttConfiguration = mqttConfiguration;
    }

    @Bean
    public MessageChannel mqttOutboundChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public MqttPahoClientFactory mqttClientFactory() {
        DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
        // ... set factory details
        return factory;
    }

    @Bean
    @ServiceActivator(inputChannel = "mqttOutboundChannel")
    public MQTTCustomMessageHandler mqttOutbound() {
        String clientId = UUID.randomUUID().toString();
        MQTTCustomMessageHandler messageHandler =
                new MQTTCustomMessageHandler(clientId, mqttClientFactory());
        // ... set message handler details
        return messageHandler;
    }

    // I extend this only because the publish method is protected and I want to
    // send messages to different topics
    public class MQTTCustomMessageHandler extends MqttPahoMessageHandler {

        // default constructors

        public void sendMessage(String topic, String message) {
            MqttMessage mqttMessage = new MqttMessage();
            mqttMessage.setPayload(message.getBytes());
            try {
                super.publish(topic, mqttMessage, null);
            } catch (Exception e) {
                log.error("Failure to publish message on topic " + topic, e.getMessage());
            }
        }
    }
}
This is the class where I am trying to inject the handler:
@Service
public class MQTTMessagePublisher {

    private MQTTCustomMessageHandler mqttCustomMessageHandler;

    public MQTTMessagePublisher(@Lazy MQTTCustomMessageHandler mqttCustomMessageHandler) {
        this.mqttCustomMessageHandler = mqttCustomMessageHandler;
    }

    public void publishMessage(String topic, String message) {
        mqttCustomMessageHandler.sendMessage(topic, message);
    }
}
So my question is about how I should inject the bean I am trying to use, because if I remove the @Lazy annotation it says "Requested bean is currently in creation: Is there an unresolvable circular reference?". I do not have any circular dependencies, as in the bean I only set some strings, so I'm guessing that I don't really understand how this should work.
Very sorry about the formatting; it's one of my first questions around here.
Edit:
If I remove
@ServiceActivator(inputChannel = "mqttOutboundChannel")
and add
messageHandler.setChannelResolver((name) -> mqttOutboundChannel());
it works. I'm still unclear why the code crashes.
You show a lot of custom code, but not all of it.
It's really hard to answer questions that consist only of custom code. It would be great to share as much info as possible; for example, an external project on GitHub to let us play and reproduce would be fully helpful and would save some time.
Nevertheless, I wonder what your MQTTCustomMessageHandler is. I guess it is not a MessageHandler implementation, so the @ServiceActivator annotation is not going to work properly, since it is really applied to mqttOutbound(), not whatever you expect. Either move this annotation to your sendMessage() method in the MQTTCustomMessageHandler, or make it a MessageHandler.
On the other hand, it is not clear why you need that @ServiceActivator annotation at all, since you call that method manually from the MQTTMessagePublisher.
Also, it is not clear why you have so much custom code when the Framework provides out-of-the-box channel adapter implementations for you.
Too many questions about your code for a single answer...
See more info in the reference manual:
https://docs.spring.io/spring-integration/docs/current/reference/html/#annotations
https://docs.spring.io/spring-integration/docs/current/reference/html/#mqtt
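For illustration, the out-of-the-box variant hinted at above could look like the following sketch: a stock MqttPahoMessageHandler as the @ServiceActivator bean plus a @MessagingGateway, which makes the per-topic sendMessage() subclass unnecessary (channel, gateway, and topic names are illustrative):

@Bean
@ServiceActivator(inputChannel = "mqttOutboundChannel")
public MessageHandler mqttOutbound(MqttPahoClientFactory mqttClientFactory) {
    MqttPahoMessageHandler messageHandler =
            new MqttPahoMessageHandler(UUID.randomUUID().toString(), mqttClientFactory);
    messageHandler.setDefaultTopic("default-topic");
    return messageHandler;
}

// The MqttHeaders.TOPIC header overrides the default topic per message,
// which is what the custom sendMessage(topic, message) was reimplementing.
@MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
public interface MqttGateway {

    void publish(@Header(MqttHeaders.TOPIC) String topic, String payload);
}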

Spring boot stream bind queue with multiple routing keys

I need to bind a single queue with multiple routing keys.
I have this configuration in application.properties:
spring.cloud.stream.bindings.some-channel1.destination=exch
spring.cloud.stream.bindings.some-channel1.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel1.consumer.binding-routing-key=event.domain1
spring.cloud.stream.bindings.some-channel2.destination=exch
spring.cloud.stream.bindings.some-channel2.group=a-queue
spring.cloud.stream.rabbit.bindings.some-channel2.consumer.binding-routing-key=event.domain2
This creates the queue and bindings properly in Rabbit, but after running the application I get:
org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
The above configuration is still not right for me, because I need a single channel, with the queue bound to a list of routing keys.
Any ideas how to configure this?
You can't do it with stream properties, but you can always add extra bindings with normal Spring AMQP declarations...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So50526298Application {

    public static void main(String[] args) {
        SpringApplication.run(So50526298Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
    }

    // extra bindings...

    @Bean
    public TopicExchange exch() {
        return new TopicExchange("exch");
    }

    @Bean
    public Queue queue() {
        return new Queue("exch.a-queue");
    }

    @Bean
    public Binding extraBinding1() {
        // binds the group queue to the second routing key from the question
        return BindingBuilder.bind(queue()).to(exch()).with("event.domain2");
    }
}
There is also a third party "advanced" boot starter that allows you to add declarations in a yaml file. I haven't tried it, but it looks interesting.
