ConcurrentKafkaListenerContainerFactory message converter is ignored when configuring listeners automatically - spring

I need to create Kafka listeners at runtime, and everything seems to work, except that the message converter property appears to be ignored (or maybe this is by design, or I've made a mistake).
When using @KafkaListener it works correctly, but when creating listeners manually my message isn't converted to the desired object and I'm getting an error:
Caused by: java.lang.ClassCastException: class java.lang.String cannot be cast to class com.my.company.model.MyPojo (java.lang.String is in module java.base of loader 'bootstrap'; com.my.company.model.MyPojo is in unnamed module of loader 'app')
at com.my.company.config.MyPojo.kafka.KafkaConfig.lambda$createListenerContainers$2(KafkaConfig.java:142)
My configuration:
@Bean
ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<String, Object>();
    factory.setConsumerFactory(consumerFactory());
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}

@Bean
MessageListenerContainer createListenerContainer1() {
    ContainerProperties containerProperties = new ContainerProperties(topicConfig("my_topic"));
    var container = new KafkaMessageListenerContainer<>(consumerFactory(), containerProperties);
    // tried this too...
    // var container = kafkaListenerContainerFactory().createContainer(topicConfig("my_topic"));
    container.setupMessageListener((MessageListener<String, MyPojo>) data -> getDataService.process(data.value()));
    container.start();
    return container;
}
The WORKING Kafka listener:
@KafkaListener(id = "1", topics = "my_topic")
public void listenGetDataTopic(@Payload MyPojo message) {
    log.info(message);
}
I've tried a lot of different configurations and debugged it deeply. I can see the difference between how messages are handled with @KafkaListener versus manually created listeners, but I couldn't figure out how to apply message conversion to a manually created listener. Is there a way to achieve this?

The message converter is not a property of the container; it is a property of the listener adapter used to invoke the POJO method for the @KafkaListener.
When using a container directly, your listener must implement MessageListener or one of its sub-interfaces.
You can either invoke the converter yourself in your listener (e.g. create a lightweight adapter; a sketch follows the links below) or use another technique for dynamically creating @KafkaListeners.
See
Kafka Spring: How to create Listeners dynamically or in a loop?
Kafka Consumer in spring can I re-assign partitions programmatically?
Can i add topics to my #kafkalistener at runtime
for some examples of those techniques.
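As an illustration of the first option (doing the conversion in the listener itself), here is a minimal sketch, assuming a plain String consumer factory and Jackson on the classpath; the MyPojo type and getDataService call are carried over from the question, not a confirmed API:
@Bean
MessageListenerContainer createListenerContainer1(ConsumerFactory<String, String> consumerFactory) {
    var containerProperties = new ContainerProperties("my_topic");
    var container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
    var mapper = new ObjectMapper(); // com.fasterxml.jackson.databind.ObjectMapper
    container.setupMessageListener((MessageListener<String, String>) data -> {
        try {
            // Convert the raw JSON value ourselves before handing it to the service.
            MyPojo pojo = mapper.readValue(data.value(), MyPojo.class);
            getDataService.process(pojo);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Could not convert record value", e);
        }
    });
    // No explicit start() needed; the application context starts container beans.
    return container;
}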

Related

Transactional kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional kafka listeners with Spring-Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
            new DefaultAfterRollbackProcessor<Object, Object>(
                (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
            container -> {
                var groupId = container.getContainerProperties().getGroupId();
                if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                    container.setAfterRollbackProcessor(afterRollbackProcessor);
                    // If I set commonErrorHandler to null, it is defaulted instead.
                }
            });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...); // pass your KafkaTransactionManager bean
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
        container -> {
            ... // do whatever
        });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container and the offsets are sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
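For reference, a minimal sketch of such a KafkaTransactionManager bean, assuming Spring Boot's auto-configured ProducerFactory is injected (Boot also auto-configures this bean itself when spring.kafka.producer.transaction-id-prefix is set, as in the application.yaml above):
@Bean
public KafkaTransactionManager<Object, Object> kafkaTransactionManager(
        ProducerFactory<Object, Object> producerFactory) {
    // The producer factory must be transaction-capable, i.e. have a
    // transaction-id-prefix configured.
    return new KafkaTransactionManager<>(producerFactory);
}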

Issues migrating a Spring AMQP consumer/producer service to a Spring Stream source

I am migrating a Spring Boot microservice that consumes data from 3 RabbitMQ queues on server A, saves it into Redis, and finally produces messages into an exchange on a different RabbitMQ on server B so these messages can be consumed by another microservice. This flow works fine, but I would like to migrate it to Spring Cloud Stream using the RabbitMQ binder. All Spring AMQP configuration is customised in the properties file, and no Spring Boot property is used to create connections, queues, bindings, etc.
My first idea was setting up two bindings in Spring Cloud Stream, one connected to server A (consumer) and the other connected to server B (producer), and migrating the existing code to a Processor, but I discarded it because it seems connection names cannot be set yet if multiple binders are used, I need to add several bindings to consume from server A's queues, and the bindingRoutingKey property does not support a list of values (I know it can be done programmatically, as explained here).
So I decided to only refactor the part of the code related to the producer to use Spring Cloud Stream over RabbitMQ, so the same microservice consumes from server A via Spring AMQP (original code) and produces into server B via Spring Cloud Stream.
The first issue I found was a NoUniqueBeanDefinitionException in Spring Cloud Stream, because an org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory bean was defined twice, under the names handlerMethodFactory and integrationMessageHandlerMethodFactory.
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory' available: expected single matching bean but found 2: handlerMethodFactory,integrationMessageHandlerMethodFactory
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1144)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:411)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:344)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1123)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.injectAndPostProcessDependencies(StreamListenerAnnotationBeanPostProcessor.java:317)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.afterSingletonsInstantiated(StreamListenerAnnotationBeanPostProcessor.java:113)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:862)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:743)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:390)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1214)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1203)
It seems the former bean is created by Spring AMQP and the latter by Spring Cloud Stream so I created my own primary bean:
@Bean
@Primary
public MessageHandlerMethodFactory messageHandlerMethodFactory() {
    return new DefaultMessageHandlerMethodFactory();
}
Now the application starts, but the output channel is created by Spring Cloud Stream on server A instead of server B. It seems that the Spring Cloud Stream configuration is using the connection created by Spring AMQP instead of its own configuration.
The configuration of Spring AMQP is this:
@Bean
public SimpleRabbitListenerContainerFactory priceRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return
        getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_A));
}

@Bean
public SimpleRabbitListenerContainerFactory maxbetRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return
        getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_B));
}

@Bean
public ConnectionFactory consumerConnectionFactory() throws Exception {
    return
        new CachingConnectionFactory(
            getRabbitConnectionFactoryBean(
                rabbitProperties.getConsumer()
            ).getObject()
        );
}

private SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        RabbitProperties.ListenerProperties listenerProperties) {
    // return a SimpleRabbitListenerContainerFactory set up from external properties
}

/**
 * Create the AMQ Admin.
 */
@Bean
public AmqpAdmin consumerAmqpAdmin(ConnectionFactory consumerConnectionFactory) {
    return new RabbitAdmin(consumerConnectionFactory);
}

/**
 * Create the map of available queues and declare them in the admin.
 */
@Bean
public Map<String, Queue> queues(AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                Queue queue =
                    QueueBuilder
                        .nonDurable(listenerEntry.getValue().getQueueName())
                        .autoDelete()
                        .build();
                consumerAmqpAdmin.declareQueue(queue);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), queue);
            }).collect(
                Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue
                )
            );
}

/**
 * Create the map of available exchanges and declare them in the admin.
 */
@Bean
public Map<String, TopicExchange> exchanges(AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                TopicExchange exchange =
                    new TopicExchange(listenerEntry.getValue().getExchangeName());
                consumerAmqpAdmin.declareExchange(exchange);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), exchange);
            }).collect(
                Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue
                )
            );
}

/**
 * Create the list of bindings and declare them in the admin.
 */
@Bean
public List<Binding> bindings(Map<String, Queue> queues, Map<String, TopicExchange> exchanges, AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().keySet().stream()
            .map(listenerName -> {
                Queue queue = queues.get(listenerName);
                TopicExchange exchange = exchanges.get(listenerName);
                return
                    rabbitProperties.getConsumer().getListeners().get(listenerName).getKeys().stream()
                        .map(bindingKey -> {
                            Binding binding = BindingBuilder.bind(queue).to(exchange).with(bindingKey);
                            consumerAmqpAdmin.declareBinding(binding);
                            return binding;
                        }).collect(Collectors.toList());
            }).flatMap(Collection::stream)
            .collect(Collectors.toList());
}
Message listeners are:
@RabbitListener(
    queues = "${consumer.listeners.LISTENER_A.queue-name}",
    containerFactory = "priceRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}

@RabbitListener(
    queues = "${consumer.listeners.LISTENER_B.queue-name}",
    containerFactory = "maxbetRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}
Properties:
#
# Server A config (Spring AMQP)
#
consumer.host=server-a
consumer.username=
consumer.password=
consumer.port=5671
consumer.ssl.enabled=true
consumer.ssl.algorithm=TLSv1.2
consumer.ssl.validate-server-certificate=false
consumer.connection-name=local:microservice-1
consumer.thread-factory.thread-group-name=server-a-consumer
consumer.thread-factory.thread-name-prefix=server-a-consumer-
# LISTENER_A configuration
consumer.listeners.LISTENER_A.queue-name=local.listenerA
consumer.listeners.LISTENER_A.exchange-name=exchangeA
consumer.listeners.LISTENER_A.keys[0]=*.1.*.*
consumer.listeners.LISTENER_A.keys[1]=*.3.*.*
consumer.listeners.LISTENER_A.keys[2]=*.6.*.*
consumer.listeners.LISTENER_A.keys[3]=*.8.*.*
consumer.listeners.LISTENER_A.keys[4]=*.9.*.*
consumer.listeners.LISTENER_A.initial-concurrency=5
consumer.listeners.LISTENER_A.maximum-concurrency=20
consumer.listeners.LISTENER_A.thread-name-prefix=listenerA-consumer-
# LISTENER_B configuration
consumer.listeners.LISTENER_B.queue-name=local.listenerB
consumer.listeners.LISTENER_B.exchange-name=exchangeB
consumer.listeners.LISTENER_B.keys[0]=*.1.*
consumer.listeners.LISTENER_B.keys[1]=*.3.*
consumer.listeners.LISTENER_B.keys[2]=*.6.*
consumer.listeners.LISTENER_B.initial-concurrency=5
consumer.listeners.LISTENER_B.maximum-concurrency=20
consumer.listeners.LISTENER_B.thread-name-prefix=listenerB-consumer-
#
# Server B config (Spring Cloud Stream)
#
spring.rabbitmq.host=server-b
spring.rabbitmq.port=5672
spring.rabbitmq.username=
spring.rabbitmq.password=
spring.cloud.stream.bindings.outbound.destination=microservice-out
spring.cloud.stream.bindings.outbound.group=default
spring.cloud.stream.rabbit.binder.connection-name-prefix=local:microservice
So my question is: is it possible to use in the same Spring Boot application code that consumes data from RabbitMQ via Spring AMQP and produces messages into a different server via Spring Cloud Stream RabbitMQ? If it is, could somebody tell me what I am doing wrong, please?
Spring AMQP version is the one provided by Boot version 2.1.7 (2.1.8-RELEASE) and Spring Cloud Stream version is the one provided by Spring Cloud train Greenwich.SR2 (2.1.3.RELEASE).
EDIT
I was able to make it work by configuring the binder via the multiple-binders configuration properties instead of the default single-binder ones. So with this configuration it works:
#
# Server B config (Spring Cloud Stream)
#
spring.cloud.stream.binders.transport-layer.type=rabbit
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.host=server-b
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.username=
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.password=
spring.cloud.stream.bindings.stream-output.destination=microservice-out
spring.cloud.stream.bindings.stream-output.group=default
Unfortunately it is not yet possible to set the connection-name in a multiple-binders configuration: a custom ConnectionNameStrategy is ignored if there is a custom binder configuration.
Anyway, I still do not understand why the contexts seem to be "mixed" when using Spring AMQP and Spring Cloud Stream RabbitMQ together. It is still necessary to set a primary MessageHandlerMethodFactory bean in order for the implementation to work.
EDIT
I found out that the NoUniqueBeanDefinitionException was caused by the microservice itself creating a ConditionalGenericConverter, used by the Spring AMQP part to deserialize messages from Server A.
I removed it and added some MessageConverters instead. Now the problem is solved and the @Primary bean is no longer necessary.
Unrelated, but
consumerAmqpAdmin.declareQueue(queue);
You should never communicate with the broker within a @Bean definition; it is too early in the application context lifecycle. It might work, but YMMV; also, if the broker is not available, it will prevent your app from starting.
It's better to define beans of type Declarables containing the lists of queues, exchanges, and bindings; the admin will automatically declare them when the connection is first opened successfully. See the reference manual.
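As an illustration only, here is a minimal sketch of that approach, reusing LISTENER_A's queue name, exchange name, and first binding key from the properties above as assumptions:
@Bean
public Declarables topology() {
    // Declared by the RabbitAdmin when the connection is first opened,
    // not during bean creation.
    Queue queue = QueueBuilder.nonDurable("local.listenerA").autoDelete().build();
    TopicExchange exchange = new TopicExchange("exchangeA");
    return new Declarables(
        queue,
        exchange,
        BindingBuilder.bind(queue).to(exchange).with("*.1.*.*"));
}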
I have never seen the MessageHandlerMethodFactory problem; Spring AMQP declares no such bean. If you can provide a small sample app that exhibits the behavior, that would be useful.
I'll see if I can find a work around to the connection name issue.
EDIT
I found a work around to the connection name issue; it involves a bit of reflection but it works. I suggest you open a new feature request against the binder to request a mechanism to set the connection name strategy when using multiple binders.
Anyway; here's the work around...
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57725710Application {

    public static void main(String[] args) {
        SpringApplication.run(So57725710Application.class, args);
    }

    @Bean
    public Object connectionNameConfigurer(BinderFactory binderFactory) throws Exception {
        setConnectionName(binderFactory, "rabbit1", "myAppProducerSide");
        setConnectionName(binderFactory, "rabbit2", "myAppConsumerSide");
        return null;
    }

    private void setConnectionName(BinderFactory binderFactory, String binderName,
            String conName) throws Exception {

        binderFactory.getBinder(binderName, MessageChannel.class); // force creation
        @SuppressWarnings("unchecked")
        Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>> binders =
                (Map<String, Entry<Binder<?, ?, ?>, ApplicationContext>>) new DirectFieldAccessor(binderFactory)
                        .getPropertyValue("binderInstanceCache");
        binders.get(binderName)
                .getValue()
                .getBean(CachingConnectionFactory.class).setConnectionNameStrategy(queue -> conName);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String listen(String in) {
        System.out.println(in);
        return in.toUpperCase();
    }

}
and
spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.output.destination=outDest
spring.cloud.stream.bindings.output.producer.required-groups=outQueue
spring.cloud.stream.bindings.output.binder=rabbit1
spring.cloud.stream.binders.rabbit2.type=rabbit
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.input.destination=inDest
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.bindings.input.binder=rabbit2

Kafka listener receiving List<ConsumerRecord<String, String>>, is it possible to consume?

I am super new to Kafka and frankly have no idea about this type of consumer (as far as I understand, it is like this because it is batch-ready), so I am struggling to figure out how to consume the list of these events.
I have something like this:
@KafkaListener(topics = "#{'${kafka.listener.list-of-topics}'.split(',')}")
public void readMessage(List<ConsumerRecord<String, String>> records,
        final Acknowledgment acknowledgment) {
    try {
        ....
I know that when I receive an event (at least a single one) it is of type "MyObject", so I can handle it fine when I get a single message.
I believe there must be a way to read/cast this List<ConsumerRecord<String, String>>, but I cannot figure out how.
Any ideas?
See the reference manual: Batch Listeners.
Starting with version 1.1, @KafkaListener methods can be configured to receive the entire batch of consumer records received from the consumer poll. To configure the listener container factory to create batch listeners, set the batchListener property:
@Bean
public KafkaListenerContainerFactory<?> batchFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setBatchListener(true); // <<<<<<<<<<<<<<<<<<<<<<<<<
    return factory;
}
...
You can also receive a list of ConsumerRecord<?, ?> objects but it must be the only parameter (aside from optional Acknowledgment, when using manual commits, and/or Consumer<?, ?> parameters) defined on the method:
...
When using Spring Boot, set the property spring.kafka.listener.type=batch.
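Applied to the question's listener, a minimal sketch might look like the following; the containerFactory name, manual-ack setup (the Acknowledgment parameter assumes a MANUAL ack mode), and the process helper are illustrative assumptions, not confirmed parts of the original code:
@KafkaListener(topics = "#{'${kafka.listener.list-of-topics}'.split(',')}",
        containerFactory = "batchFactory")
public void readMessage(List<ConsumerRecord<String, String>> records,
        final Acknowledgment acknowledgment) {
    for (ConsumerRecord<String, String> record : records) {
        // Each value is still a String here; convert it yourself (e.g. with Jackson)
        // or configure a JSON deserializer to receive typed payloads directly.
        process(record.value());
    }
    acknowledgment.acknowledge(); // manual ack after the whole batch
}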

Dynamically Change Concurrent Consumers - rabbitMQ

I am using SimpleRabbitListenerContainerFactory and am trying to dynamically decrease/increase the concurrentConsumers, which I am not able to achieve. Can anybody help me with this? Thank you.
Code is as follows:
SimpleRabbitListenerContainerFactory factory;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(this.connectionFactory);
    changeConsumers(2);
    return factory;
}

public void changeConsumers(int minConsumers) {
    factory.setConcurrentConsumers(minConsumers);
}
***In another package***
if (messageCount > 6000)
    changeConsumers(5);
When the message count is more than 6000 it goes into the changeConsumers method, but the consumers do not change to 5.
The factory is used during application initialization to create listener container objects; changing its properties later won't change the properties on containers it has already created.
You have to change the property on the containers themselves.
You can access the containers using the RabbitListenerEndpointRegistry bean; call getListenerContainers and iterate over them (or get an individual container using its id).
Cast the MessageListenerContainer to SimpleMessageListenerContainer to change its properties.
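A minimal sketch of that approach (the autowired registry field and the blanket cast over all containers are assumptions for illustration):
@Autowired
private RabbitListenerEndpointRegistry registry;

public void changeConsumers(int consumers) {
    // Adjust concurrency on the live containers, not on the factory.
    registry.getListenerContainers().forEach(container ->
            ((SimpleMessageListenerContainer) container).setConcurrentConsumers(consumers));
}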

Springboot JMS Listener ActiveMQ is very slow

I have a Spring Boot application which consumes my custom serializable messages from an ActiveMQ queue. So far it has worked; however, the consume rate is very poor, only 1 - 20 msg/sec.
@JmsListener(destination = "${channel.consumer.destination}", concurrency = "${channel.consumer.maxConcurrency}")
public void receive(IMessage message) {
    processor.process(message);
}
The above is a snippet of my channel consumer class. It has a processor instance (injected/autowired; inside it I have an @Async service, so I can assume the main thread is released as soon as a message enters the @Async method), and it uses Spring Boot's default ActiveMQ connection factory, which I configured from the application properties:
# ACTIVEMQ (ActiveMQProperties)
spring.activemq.broker-url= tcp://localhost:61616?keepAlive=true
spring.activemq.in-memory=true
spring.activemq.pool.enabled=true
spring.activemq.pool.expiry-timeout=1
spring.activemq.pool.idle-timeout=30000
spring.activemq.pool.max-connections=50
A few things worth mentioning:
1. I run everything (Eclipse, ActiveMQ, MySQL) on my local laptop.
2. Before this, I also tried a custom connection factory (default AMQ, pooling, and caching) equipped with a custom thread-pool task executor, but I still got the same result (measured with a performance capture updating every second).
3. I also noticed in JVM Monitor that the used heap keeps incrementing.
I want to know:
1. Is there something wrong/missing in my setup? I can't even reach hundreds in my message rate.
2. Does an annotated @JmsListener method execute the processing asynchronously or synchronously?
3. If possible and supported, how can I use the traditional synchronous receive() with Spring Boot properly and elegantly?
Thank you.
I'm just checking something similar. I have defined a DefaultJmsListenerContainerFactory in my JMSConfiguration class (Spring configuration) like this:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(CachingConnectionFactory connectionFactory) {
    // settings made based on https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
        @Override
        protected void initializeContainer(DefaultMessageListenerContainer container) {
            super.initializeContainer(container);
            container.setIdleConsumerLimit(5);
            container.setIdleTaskExecutionLimit(10);
        }
    };
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("10-50");
    factory.setCacheLevel(CACHE_CONSUMER);
    factory.setReceiveTimeout(5000L);
    factory.setDestinationResolver(new BeanFactoryDestinationResolver(beanFactory));
    return factory;
}
As you can see, I took those values from https://bsnyderblog.blogspot.sk/2010/05/tuning-jms-message-consumption-in.html. It's from 2010 but I could not find anything newer / better so far.
I have also defined Spring's CachingConnectionFactory Bean as a ConnectionFactory:
@Bean
public CachingConnectionFactory buildCachingConnectionFactory(@Value("${activemq.url}") String brokerUrl) {
    // settings based on https://bsnyderblog.blogspot.sk/2010/02/using-spring-jmstemplate-to-send-jms.html
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(activeMQConnectionFactory);
    cachingConnectionFactory.setSessionCacheSize(10);
    return cachingConnectionFactory;
}
This setting will help JmsTemplate with sending.
So my answer to you is: set the values of your connection pool as described in the link. Also, I guess you can delete spring.activemq.in-memory=true because (based on the documentation) the "in-memory" property is ignored when you specify a custom broker URL.
Let me know if this helped.
G.