Spring Kafka unit test listener not subscribing to topic

I have a sample project to explore Spring with Kafka (find it here). I have a listener subscribed to topic my-test-topic-upstream which will just log the message and key and publish the same to another topic my-test-topic-downstream. I tried this with local Kafka (a docker-compose file is included) and it works.
Now I'm trying to write a test for this using an embedded Kafka server. Under test I have an embedded server starting up (TestContext.java) which should start before the test (overridden JUnit beforeAll).
private static EmbeddedKafkaBroker kafka() {
    EmbeddedKafkaBroker kafkaEmbedded =
        new EmbeddedKafkaBroker(
            3,
            false,
            1,
            "my-test-topic-upstream", "my-test-topic-downstream");
    Map<String, String> brokerProperties = new HashMap<>();
    brokerProperties.put("default.replication.factor", "1");
    brokerProperties.put("offsets.topic.replication.factor", "1");
    brokerProperties.put("group.initial.rebalance.delay.ms", "3000");
    kafkaEmbedded.brokerProperties(brokerProperties);
    try {
        kafkaEmbedded.afterPropertiesSet();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    return kafkaEmbedded;
}
Then I create a producer (TickProducer) and publish a message to the topic, which I expect my listener to be able to consume.
public TickProducer(String brokers) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producer = new KafkaProducer<>(props);
}

public RecordMetadata publishTick(String brand)
        throws ExecutionException, InterruptedException {
    return publish(TOPIC, brand, Instant.now().toString());
}

private RecordMetadata publish(String topic, String key, String value)
        throws ExecutionException, InterruptedException {
    final RecordMetadata recordMetadata;
    recordMetadata = producer.send(new ProducerRecord<>(topic, key, value)).get();
    producer.flush();
    return recordMetadata;
}
I see the following log message being logged repeatedly:
11:32:35.745 [main] WARN o.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=my-test-group] Connection to node -1 could not be established. Broker may not be available.
It finally fails with:
11:36:52.774 [main] ERROR o.s.boot.SpringApplication - Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
Any tips here?

Look at the INFO ConsumerConfig log to see where it is trying to connect (compare it to the ProducerConfig). I suspect you haven't updated the Spring Boot bootstrap-servers property to point to the embedded broker.
See
/**
 * Set the system property with this name to the list of broker addresses.
 * @param brokerListProperty the brokerListProperty to set
 * @return this broker.
 * @since 2.3
 */
public EmbeddedKafkaBroker brokerListProperty(String brokerListProperty) {
    this.brokerListProperty = brokerListProperty;
    return this;
}
Set it to spring.kafka.bootstrap-servers, which will then be used instead of SPRING_EMBEDDED_KAFKA_BROKERS.
BTW, it's generally easier to use the @EmbeddedKafka annotation instead of instantiating the server yourself.
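For example, a minimal sketch (reusing the broker setup from the question) that publishes the embedded broker's address list under the Spring Boot property:

// Sketch: expose the embedded broker addresses via the Spring Boot property
// so the auto-configured consumers and producers connect to it.
private static EmbeddedKafkaBroker kafka() {
    EmbeddedKafkaBroker kafkaEmbedded =
        new EmbeddedKafkaBroker(3, false, 1,
                "my-test-topic-upstream", "my-test-topic-downstream")
            .brokerListProperty("spring.kafka.bootstrap-servers"); // instead of SPRING_EMBEDDED_KAFKA_BROKERS
    try {
        kafkaEmbedded.afterPropertiesSet();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    return kafkaEmbedded;
}

With the annotation route, @EmbeddedKafka has an equivalent bootstrapServersProperty attribute that can be set to "spring.kafka.bootstrap-servers".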

Related

Transactional Kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that:
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I see no evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional Kafka listeners with Spring Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after-rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional Kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start a retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
            new DefaultAfterRollbackProcessor<Object, Object>(
                (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
            container -> {
                var groupId = container.getContainerProperties().getGroupId();
                if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                    container.setAfterRollbackProcessor(afterRollbackProcessor);
                    // If I set commonErrorHandler to null, it is defaulted instead.
                }
            });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As he says, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
        container -> {
            ... // do whatever
        });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container, and the offsets sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
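For illustration, a minimal sketch of that wiring (the injected producerFactory is assumed to be the Boot-configured one; Boot requires spring.kafka.producer.transaction-id-prefix to be set for transactions):

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        ProducerFactory<Object, Object> producerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    factory.setConsumerFactory(consumerFactory);
    // The container, not @Transactional, starts the Kafka transaction and
    // sends the consumed offsets to it; the AfterRollbackProcessor then
    // handles errors after the rollback.
    factory.getContainerProperties().setTransactionManager(
        new KafkaTransactionManager<>(producerFactory));
    return factory;
}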

Spring Kafka - Display Custom Error Message with @Retry

I am currently using Spring Kafka to consume messages from a topic along with Spring's @Retryable. So basically, I am retrying the processing of the consumed message in case of an error. But while doing so, I want to avoid the exception message thrown by KafkaMessageListenerContainer; instead I want to display a custom message. I tried adding an error handler in the ConcurrentKafkaListenerContainerFactory, but on doing so, my retry does not get invoked.
Can someone guide me on how to display a custom exception message along with #Retry scenario as well? Below are my code snippets:
ConcurrentKafkaListenerContainerFactory Bean Config
@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory =
        new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(kafkaListenerContainerFactory, kafkaConsumerFactory);
    kafkaListenerContainerFactory.setConcurrency(1);
    // Set Error Handler
    /*kafkaListenerContainerFactory.setErrorHandler((thrownException, data) -> {
        log.info("Retries exhausted");
    });*/
    return kafkaListenerContainerFactory;
}
Kafka Consumer
@KafkaListener(
    topics = "${spring.kafka.reprocess-topic}",
    groupId = "${spring.kafka.consumer.group-id}",
    containerFactory = "concurrentKafkaListenerContainerFactory"
)
@Retryable(
    include = RestClientException.class,
    maxAttemptsExpression = "${spring.kafka.consumer.max-attempts}",
    backoff = @Backoff(delayExpression = "${spring.kafka.consumer.backoff-delay}")
)
public void onMessage(ConsumerRecord<String, String> consumerRecord) throws Exception {
    // Consume the record
    log.info("Consumed Record from topic : {} ", consumerRecord.topic());
    // process the record
    messageHandler.handleMessage(consumerRecord.value());
}
Below is the exception that I am getting:
You should not use @Retryable as well as the SeekToCurrentErrorHandler (which is now the default, since 2.5; so I presume you are using that version).
Instead, configure a custom SeekToCurrentErrorHandler with max attempts, back off, and retryable exceptions.
That error message is normal; it's logged by the container. Its logging level can be reduced from ERROR to INFO or DEBUG by setting the logLevel property on the SeekToCurrentErrorHandler. You can also add a custom recoverer to it, to log your custom message after the retries are exhausted.
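For example, a minimal sketch of such a handler (the bean wiring, log message, and retry counts are illustrative; log is the class's logger as in the snippets above):

// Sketch: retry twice with a 1s back off, then log a custom message via the
// recoverer; reduce the container's log noise from ERROR to INFO.
@Bean
public SeekToCurrentErrorHandler errorHandler() {
    SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(
        (record, exception) ->
            log.info("Retries exhausted for offset {}: {}", record.offset(), exception.getMessage()),
        new FixedBackOff(1000L, 2L));
    handler.setLogLevel(KafkaException.Level.INFO);
    return handler;
}

Then set this handler on the container factory (instead of the commented-out lambda) and remove @Retryable from the listener.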
My event retry template:
@Bean(name = "eventRetryTemplate")
public RetryTemplate eventRetryTemplate() {
    RetryTemplate template = new RetryTemplate();
    ExceptionClassifierRetryPolicy retryPolicy = new ExceptionClassifierRetryPolicy();
    Map<Class<? extends Throwable>, RetryPolicy> policyMap = new HashMap<>();
    policyMap.put(NonRecoverableException.class, new NeverRetryPolicy());
    policyMap.put(RecoverableException.class, new AlwaysRetryPolicy());
    retryPolicy.setPolicyMap(policyMap);
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(backoffInitialInterval);
    backOffPolicy.setMaxInterval(backoffMaxInterval);
    backOffPolicy.setMultiplier(backoffMultiplier);
    template.setRetryPolicy(retryPolicy);
    template.setBackOffPolicy(backOffPolicy);
    return template;
}
My Kafka listener using the retry template:
@KafkaListener(
    groupId = "${kafka.consumer.group.id}",
    topics = "${kafka.consumer.topic}",
    containerFactory = "eventContainerFactory")
public void eventListener(ConsumerRecord<String, String> events, Acknowledgment acknowledgment) {
    eventRetryTemplate.execute(retryContext -> {
        retryContext.setAttribute(EVENT, "my-event");
        eventConsumer.consume(events, acknowledgment);
        return null;
    });
}
My Kafka container factory configuration:
private ConcurrentKafkaListenerContainerFactory<String, String> getConcurrentKafkaListenerContainerFactory(
        KafkaProperties kafkaProperties) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setAckOnError(Boolean.TRUE);
    kafkaErrorEventHandler.setCommitRecovered(Boolean.TRUE);
    factory.setErrorHandler(kafkaErrorEventHandler);
    factory.setConcurrency(1);
    factory.setConsumerFactory(getEventConsumerFactory(kafkaProperties));
    return factory;
}
kafkaErrorEventHandler is my custom error handler that extends SeekToCurrentErrorHandler and implements the handle method somewhat like this:
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    log.info("Non recoverable exception. Publishing event to Database");
    super.handle(thrownException, records, consumer, container);
    ConsumerRecord<String, String> consumerRecord = (ConsumerRecord<String, String>) records.get(0);
    FailedEvent event = createFailedEvent(thrownException, consumerRecord);
    failedEventService.insertFailedEvent(event);
    log.info("Successfully Published eventId {} to Database...", event.getEventId());
}
Here the failedEventService is my custom class which puts these failed events into a queryable relational DB (I chose this to be my DLQ).

How to configure JmsListener on ActiveMQ for autoscaling using Qpid Sender

I have a Kubernetes cluster with an ActiveMQ Artemis queue, and I am using HPA for autoscaling of microservices. The messages are sent via QpidSender and received via a JmsListener.
Messaging works, but I am not able to configure the queue/listener in a way that autoscaling works as expected.
This is my Qpid sender
public static void send(String avroMessage, String task) throws JMSException, NamingException {
    Connection connection = createConnection();
    connection.start();
    Session session = createSession(connection);
    MessageProducer messageProducer = createProducer(session);
    TextMessage message = session.createTextMessage(avroMessage);
    message.setStringProperty("task", task);
    messageProducer.send(
        message,
        DeliveryMode.NON_PERSISTENT,
        Message.DEFAULT_PRIORITY,
        Message.DEFAULT_TIME_TO_LIVE);
    connection.close();
}

private static MessageProducer createProducer(Session session) throws JMSException {
    Destination producerDestination =
        session.createQueue("queue?consumer.prefetchSize=1&heartbeat='10000'");
    return session.createProducer(producerDestination);
}

private static Session createSession(Connection connection) throws JMSException {
    return connection.createSession(Session.AUTO_ACKNOWLEDGE);
}

private static Connection createConnection() throws NamingException, JMSException {
    Hashtable<Object, Object> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.qpid.jms.jndi.JmsInitialContextFactory");
    env.put("connectionfactory.factoryLookup", amqUrl);
    Context context = new javax.naming.InitialContext(env);
    ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("factoryLookup");
    PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
    pooledConnectionFactory.setConnectionFactory(connectionFactory);
    pooledConnectionFactory.setMaxConnections(10);
    return connectionFactory.createConnection(amqUsername, amqPassword);
}
This is my Listener config
@Bean
public JmsConnectionFactory jmsConnection() {
    JmsConnectionFactory jmsConnection = new JmsConnectionFactory();
    jmsConnection.setRemoteURI(this.amqUrl);
    jmsConnection.setUsername(this.amqUsername);
    jmsConnection.setPassword(this.amqPassword);
    return jmsConnection;
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(jmsConnection());
    return factory;
}
And here is my Listener
@JmsListener(
    destination = "queue?consumer.prefetchSize=1&heartbeat='10000'",
    selector = "task = 'myTask'"
)
public void receiveMsg(Message message) throws IOException, JMSException {
    message.acknowledge();
    doStuff();
}
I send the message like this
QpidSender.send(avroMessage, "myTask");
This setup works. I can send different messages, and as soon as there are more than 2, the second instance of my service starts and consumes messages. If later the message count drops below 2, the second service is terminated.
The problem is: I don't want the message to be acknowledged before doStuff(). Because if something goes wrong, or if the service is terminated before finishing doStuff(), the message is lost (right?).
But if I reorder it to
doStuff();
message.acknowledge();
the second instance cannot receive a message from the broker as long as the first service is still in doStuff() and hasn't acknowledged the message.
How do I configure this in a way that more than one instance can consume messages from the queue, but a message isn't lost if the service gets terminated or something else fails in doStuff()?
Use factory.setSessionTransacted(true).
See the javadocs for DefaultMessageListenerContainer:
 * <p><b>It is strongly recommended to either set {@link #setSessionTransacted
 * "sessionTransacted"} to "true" or specify an external {@link #setTransactionManager
 * "transactionManager"}.</b> See the {@link AbstractMessageListenerContainer}
 * javadoc for details on acknowledge modes and native transaction options, as
 * well as the {@link AbstractPollingMessageListenerContainer} javadoc for details
 * on configuring an external transaction manager. Note that for the default
 * "AUTO_ACKNOWLEDGE" mode, this container applies automatic message acknowledgment
 * before listener execution, with no redelivery in case of an exception.
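A sketch of how the setup from the question could then look (with a transacted session there is no manual acknowledge(); if doStuff() throws, the session rolls back and the broker redelivers the message):

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(jmsConnection());
    // Commit the JMS session only after the listener returns successfully.
    factory.setSessionTransacted(true);
    return factory;
}

@JmsListener(
    destination = "queue?consumer.prefetchSize=1&heartbeat='10000'",
    selector = "task = 'myTask'")
public void receiveMsg(Message message) throws IOException, JMSException {
    doStuff(); // no message.acknowledge(); the transacted session handles it
}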

Can we batch up groups of 10 messages in Mosquitto using Spring Integration?

This is how I have defined my MQTT connection using Spring Integration. I am not sure whether this is possible, but can we set up an MQTT subscriber that only fires after receiving a batch of 10 messages? Right now the subscriber fires after each published message, as it should.
@Autowired
ConnectorConfig config;

@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs(config.getUrl());
    factory.setUserName(config.getUser());
    factory.setPassword(config.getPass());
    return factory;
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
        new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    adapter.setOutputChannel(mqttRouterChannel());
    return adapter;
}
/** this is the router **/
@MessageEndpoint
public class MessageRouter {

    private final Logger logger = LoggerFactory.getLogger(MessageRouter.class);

    static final String ALERT = "ALERT";
    static final String READING = "READING";

    @Router(inputChannel = "mqttRouterChannel")
    public String route(@Header("mqtt_topic") String topic) {
        String route = null;
        switch (topic) {
            case ALERT:
                logger.info("alert message received");
                route = "alertTransformerChannel";
                break;
            case READING:
                logger.info("reading message received");
                route = "readingTransformerChannel";
                break;
        }
        return route;
    }
}
I need to batch up groups of 10 messages at a time.
That is not a MqttPahoMessageDrivenChannelAdapter responsibility.
We use MqttCallback there, with this semantic:
/**
 * @param topic name of the topic on which the message was published
 * @param message the actual message.
 * @throws Exception if a terminal error has occurred, and the client should be
 * shut down.
 */
public void messageArrived(String topic, MqttMessage message) throws Exception;
So, we can't batch them there on this Channel Adapter by nature of the Paho client.
What we can suggest from the Spring Integration perspective is an Aggregator EIP implementation.
In your case you should add a @ServiceActivator for the AggregatorFactoryBean @Bean before that mqttRouterChannel, before sending to the router.
That may be as simple as:
@Bean
@ServiceActivator(inputChannel = "mqttAggregatorChannel")
AggregatorFactoryBean mqttAggregator() {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
    aggregator.setCorrelationStrategy(m -> 1);
    aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(10));
    aggregator.setExpireGroupsUponCompletion(true);
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(1000));
    aggregator.setOutputChannelName("mqttRouterChannel");
    return aggregator;
}
See more information in the Reference Manual.
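For that to take effect, the inbound adapter presumably needs to send to the aggregator's input channel instead of the router (the channel bean below is an assumption matching the inputChannel name above):

// Sketch: point the inbound adapter at the aggregator; the aggregator's
// outputChannelName ("mqttRouterChannel") then feeds the existing router.
adapter.setOutputChannel(mqttAggregatorChannel());

@Bean
public MessageChannel mqttAggregatorChannel() {
    return new DirectChannel();
}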

No managed connection when using JMS

I am using a WildFly JMS queue (WildFly 9.0.2.Final).
I create the producer like this:
@Inject
private JMSContext jmsContext;

private JMSProducer jmsProducer;

@Resource(mappedName = "java:/jboss/exported/jms/queue/TosDownloadReport")
private Queue queueDownloadReport;

public void downloadReport(String adminId, DownloadReportFilter filter) {
    try {
        jmsProducer = jmsContext.createProducer();
        String requestParam = Json.getInstance().getObjectMapper().writeValueAsString(filter);
        LOG.info("requestParam {}", requestParam);
        String id = UUID.randomUUID().toString().replace("-", "");
        RoutingRequest request = new RoutingRequest();
        request.putProperty("id", id);
        request.putProperty("adminId", adminId);
        request.putProperty("parameterRequest", requestParam);
        QueueMsgDownloadReport message = new QueueMsgDownloadReport();
        message.setId(id);
        message.setAdminId(adminId);
        String jsonMsg = Json.getInstance().getObjectMapper().writeValueAsString(message);
        gatewayService.send(RESOURCE, METHOD, request);
        jmsProducer.send(queueDownloadReport, jsonMsg);
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
    }
}
But sometimes I get an exception like this and I must restart WildFly:
2017-05-05 11:08:20,004 ERROR [com.daksa.tos.infrastructure.api.TosTimer] (EJB default - 7) Could not create a session: IJ000453: Unable to get managed connection for java:/JmsXA: javax.jms.JMSRuntimeException: Could not create a session: IJ000453: Unable to get managed connection for java:/JmsXA
Caused by: javax.resource.ResourceException: IJ000655: No managed connections available within configured blocking timeout (30000 [ms])
From what I read, I don't need to call jmsContext.close() if I'm using injection, right?
Please tell me what I am doing wrong. Thanks!
