How can I unit test a KStream with a Kafka Binder?

I want to unit test a Kafka Streams aggregate and I am unsure which method to use.
I read about the TestSupportBinder, but I do not think it works in my case, so I use the KafkaEmbedded approach instead. This is how I initialize the embedded Kafka:
@Before
public void setUp() throws Exception {
    Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("group-id", "false", embeddedKafka);
    consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    DefaultKafkaConsumerFactory<Object, LoggerMessage> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
    consumer = cf.createConsumer();
    embeddedKafka.consumeFromAnEmbeddedTopic(consumer, OUTPUT_TOPIC);
}
What I want to test is the following:
public interface Channels {

    String LOGGER_IN_STREAM = "logger-topic-in-stream";
    String LOGGER_IN = "logger-topic-in";
    String LOGGERDATAVALIDATED_OUT = "loggerDataValidated-topic-out";

    @Input(Channels.LOGGER_IN)
    SubscribableChannel processMessage();

    @Input(Channels.LOGGER_IN_STREAM)
    KStream<Object, LoggerMessage> loggerKstreamIn();

    @Output(Channels.LOGGERDATAVALIDATED_OUT)
    MessageChannel validateLoggerData();
}
And I get the following error message:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'some.domain.Channels': Invocation of init method failed; nested exception is java.lang.IllegalStateException: No factory found for binding target type: org.apache.kafka.streams.kstream.KStream among registered factories: channelFactory,messageSourceFactory
Caused by: java.lang.IllegalStateException: No factory found for binding target type: org.apache.kafka.streams.kstream.KStream among registered factories: channelFactory,messageSourceFactory
What am I doing wrong?

I had missed injecting my Channels interface as a MockBean. After I did that, everything worked as expected.
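For illustration, a minimal sketch of what that might look like in the test class (the class name, runner, and embedded-broker field are assumptions based on the snippets above; the relevant part is the @MockBean on the Channels interface):

@RunWith(SpringRunner.class)
@SpringBootTest
public class LoggerAggregateTest {

    // Embedded broker that setUp() consumes from (see the question).
    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, OUTPUT_TOPIC);

    // Mocking the binding interface was the missing piece: without it, the binder
    // tries to create a KStream binding target and fails during context startup.
    @MockBean
    private Channels channels;

    private Consumer<Object, LoggerMessage> consumer;

    // setUp() and the aggregate test itself go here...
}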

Related

ConcurrentKafkaListenerContainerFactory message converter is ignored when configuring listeners automatically

I need to create Kafka listeners at runtime, and everything seems to be working, except that the message converter property seems to be ignored (or maybe this is by design, or I've done something wrong).
When using @KafkaListener, it works correctly, but when creating listeners manually my message isn't converted to the desired object and I'm getting an error:
Caused by: java.lang.ClassCastException: class java.lang.String cannot be cast to class com.my.company.model.MyPojo (java.lang.String is in module java.base of loader 'bootstrap'; com.my.company.model.MyPojo is in unnamed module of loader 'app')
at com.my.company.config.MyPojo.kafka.KafkaConfig.lambda$createListenerContainers$2(KafkaConfig.java:142)
My configuration:
@Bean
ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<String, Object>();
    factory.setConsumerFactory(consumerFactory());
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}
@Bean
MessageListenerContainer createListenerContainer1() {
    ContainerProperties containerProperties = new ContainerProperties(topicConfig("my_topic"));
    var container = new KafkaMessageListenerContainer<>(consumerFactory(), containerProperties);
    // tried this too...
    // var container = kafkaListenerContainerFactory().createContainer(topicConfig("my_topic"));
    container.setupMessageListener((MessageListener<String, MyPojo>) data -> getDataService.process(data.value()));
    container.start();
    return container;
}
The WORKING Kafka listener:
@KafkaListener(id = "1", topics = "my_topic")
public void listenGetDataTopic(@Payload MyPojo message) {
    log.info(message);
}
I've tried a lot of different configs and debugged it deeply, and of course I see the difference between how messages are handled with @KafkaListener versus manually created listeners, but I couldn't figure out how to apply message conversion to a manually created listener. Is there a way to achieve this?
The message converter is not a property of the container; it is a property of the listener adapter used to invoke the POJO method for a @KafkaListener.
When using a container directly, your listener must implement MessageListener or one of its sub-interfaces.
You can either invoke the converter yourself in your listener (e.g. create a lightweight adapter), or use another technique for dynamically creating @KafkaListener endpoints.
See
Kafka Spring: How to create Listeners dynamically or in a loop?
Kafka Consumer in spring can I re-assign partitions programmatically?
Can I add topics to my @KafkaListener at runtime
for some examples of those techniques.
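As a rough illustration of the "invoke the converter yourself" option, here is a sketch that does the JSON conversion inside the manually created listener using Jackson directly (it assumes the record value arrives as a String, as the ClassCastException above suggests, and reuses consumerFactory() and getDataService from the question):

@Bean
MessageListenerContainer createListenerContainer1(ObjectMapper objectMapper) {
    ContainerProperties containerProperties = new ContainerProperties("my_topic");
    var container = new KafkaMessageListenerContainer<>(consumerFactory(), containerProperties);
    container.setupMessageListener((MessageListener<String, String>) record -> {
        try {
            // Perform the conversion that the listener adapter would normally do for @KafkaListener.
            MyPojo pojo = objectMapper.readValue(record.value(), MyPojo.class);
            getDataService.process(pojo);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Could not deserialize record from my_topic", e);
        }
    });
    return container;
}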

Transactional kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional Kafka listeners with Spring Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after-rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional Kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start a retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
            new DefaultAfterRollbackProcessor<Object, Object>(
                (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
            container -> {
                var groupId = container.getContainerProperties().getGroupId();
                if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                    container.setAfterRollbackProcessor(afterRollbackProcessor);
                    // If I set commonErrorHandler to null, it is defaulted instead.
                }
            });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default Spring Boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
        container -> {
            ... // do whatever
        });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container and the offsets are sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
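Putting that together, a minimal sketch of a factory that hands a KafkaTransactionManager to the container (generics and bean wiring are illustrative; the transaction manager is typically built from the same producer factory the KafkaTemplate uses):

@Bean
public KafkaTransactionManager<Integer, Object> kafkaTransactionManager(
        ProducerFactory<Integer, Object> producerFactory) {
    return new KafkaTransactionManager<>(producerFactory);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Integer, Object> consumerFactory,
        KafkaTransactionManager<Integer, Object> kafkaTransactionManager) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
    factory.setConsumerFactory(consumerFactory);
    // The container starts the Kafka transaction and sends the offsets to it,
    // so failures are handled by the AfterRollbackProcessor rather than a CommonErrorHandler.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}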

Spring Batch execution: how to stop FlatFileItemWriter from logging an exception when the file name is empty

I am using a Spring Batch application with a reader, writer, and processor. The file name is passed from the batch job to the writer, which is in step scope. When the bean is initialized, I see the following exception in the BATCH_STEP_EXECUTION table:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'scopedTarget.resWriter' defined in class path resource : Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.batch.item.file.FlatFileItemWriter]: Factory method 'resWriter' threw exception; nested exception is java.lang.IllegalArgumentException: Path must not be null
Spring Batch code:
@StepScope
@Bean
public FlatFileItemWriter<EntityObject> regulatedEntityWriter(@Value("#{jobParameters['fileName']}") String fileName) {
    /*
     * While the bean is initialized, fileName is empty; FlatFileItemWriter requires a file name,
     * so it throws the "Path must not be null" exception.
     */
    pretaFileName = fileName;
    FlatFileItemWriter<EntityObject> csvFileWriter = new FlatFileItemWriter<>();
    String exportFileHeader = "column1,column2,column3";
    StringHeaderWriter headerWriter = new StringHeaderWriter(exportFileHeader);
    csvFileWriter.setHeaderCallback(headerWriter);
    csvFileWriter.setShouldDeleteIfEmpty(true);
    CustomDelimitedLineAggregator<EntityObject> lineAggregator = new CustomDelimitedLineAggregator<>();
    BeanWrapperFieldExtractor<EntityObject> fieldExtractor = new BeanWrapperFieldExtractor<>();
    fieldExtractor.setNames(new String[]{"column1", "column2", "column3"});
    lineAggregator.setFieldExtractor(fieldExtractor);
    csvFileWriter.setLineAggregator(lineAggregator);
    csvFileWriter.setEncoding(encodingType);
    csvFileWriter.setResource(new FileSystemResource(fileName));
    return csvFileWriter;
}
The above method is called using the JobLauncher:
JobParameters params = new JobParametersBuilder()
        .addString("JobID", String.valueOf(System.currentTimeMillis()))
        .addString("fileName", "sample_file.txt")
        .toJobParameters();
JobExecution jobExecution = jobLauncher.run(job, params);
I have tried the @Lazy annotation, but the exception is still thrown when the server comes up.
I am using a multi-node cluster, and it adds entries to the BATCH_STEP_EXECUTION table for each node while the server is coming up. How can I avoid this exception at first server startup?
I have used the property below in the Spring Boot application.properties to disable Spring Batch job execution at startup, since I trigger the job and pass its parameters through a cron trigger.
spring.batch.job.enabled=false

Consider defining a bean of type 'net.corda.core.messaging.CordaRPCOps' in your configuration

I am unable to use the CordaRPCOps implementation methods in my CustomController:
@RequestMapping(value = "/peers", produces = MediaType.APPLICATION_JSON)
public Map<String, List<String>> peers() throws Exception {
    CordaRPCOps proxy = rpc.getParameterValue("proxy");
    Party myIdentity = proxy.nodeInfo().getLegalIdentities().get(0);
    return ImmutableMap.of("peers", rpcOpsImpl.networkMapSnapshot()
            .stream()
            .filter(nodeInfo -> nodeInfo.getLegalIdentities().get(0) != myIdentity)
            .map(it -> it.getLegalIdentities().get(0).getName().getOrganisation())
            .collect(toList()));
}
I am getting the following error when running runPartyAServer:
APPLICATION FAILED TO START
Description:
Field services in net.corda.server.controllers.CustomController required a bean of type 'net.corda.core.messaging.CordaRPCOps' that could not be found.
Action:
Consider defining a bean of type 'net.corda.core.messaging.CordaRPCOps' in your configuration.
As the error message says, you must define a bean of type CordaRPCConnection/CordaRPCOps.
Something similar to:
@Bean
fun connect(): CordaRPCConnection {
    val hostAndPort = NetworkHostAndPort(configuration.host, configuration.port)
    val client = CordaRPCClient(hostAndPort)
    val connection = client.start(configuration.user, configuration.password)
    return connection
}
We do not provide any DI container integration by default.
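If the controller needs CordaRPCOps specifically, as the error message indicates, one option (sketched in Java to match the controller, assuming the connect() bean above) is to expose the proxy from that connection as its own bean:

@Bean
public CordaRPCOps cordaRPCOps(CordaRPCConnection connection) {
    // The proxy obtained from the RPC connection implements CordaRPCOps,
    // so it can be injected into CustomController directly.
    return connection.getProxy();
}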

Error accessing Spring session data stored in Redis

In my Spring REST controllers project, I want to store session information in Redis.
In my application.properties I have defined the following:
spring.session.store-type=redis
spring.session.redis.namespace=rdrestcore
com.xyz.redis.host=192.168.201.46
com.xyz.redis.db=0
com.xyz.redis.port=6379
com.xyz.redis.pool.min-idle=5
I have also enabled the Redis HTTP session with:
@Configuration
@EnableRedisHttpSession
public class SessionConfig extends AbstractHttpSessionApplicationInitializer {
}
Finally, I have a Redis connection factory like this:
@Configuration
@EnableRedisRepositories
public class RdRedisConnectionFactory {

    @Autowired
    private Environment env;

    @Value("${com.xyz.redis.host}")
    private String redisHost;

    @Value("${com.xyz.redis.db}")
    private Integer redisDb;

    @Value("${com.xyz.redis.port}")
    private Integer redisPort;

    @Value("${com.xyz.redis.pool.min-idle}")
    private Integer redisPoolMinIdle;

    @Bean
    JedisPoolConfig jedisPoolConfig() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        if (redisPoolMinIdle != null) poolConfig.setMinIdle(redisPoolMinIdle);
        return poolConfig;
    }

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
        if (redisHost != null) jedisConFactory.setHostName(redisHost);
        if (redisPort != null) jedisConFactory.setPort(redisPort);
        if (redisDb != null) jedisConFactory.setDatabase(redisDb);
        jedisConFactory.setPoolConfig(jedisPoolConfig());
        return jedisConFactory;
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate() {
        final RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(jedisConnectionFactory());
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}
With this configuration, the session information gets stored in Redis, but it is serialized very strangely. The keys are readable, but the stored values are not (I inspect them with a program called "Redis Desktop Manager"). For example, for a new session I get a hash with the key:
spring:session:sessions:c1110241-0aed-4d40-9861-43553b3526cb
and the keys this hash contains are: maxInactiveInterval, lastAccessedTime, creationTime, sessionAttr:SPRING_SECURITY_CONTEXT
but their values are all encoded as something similar to this:
\xAC\xED\x00\x05sr\x00\x0Ejava.lang.Long;\x8B\xE4\x90\xCC\x8F#\xDF\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xAC\x95\x1D\x0B\x94\xE0\x8B\x02\x00\x00xp\x00\x00\x01b$G\x88*
(for the creationTime key)
and if I try to access this information from code, with the redisTemplate, it raises an exception like this one:
Exception occurred in target VM:
org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is
org.springframework.core.serializer.support.SerializationFailedException:
Failed to deserialize payload. Is the byte array a result of
corresponding serialization for DefaultDeserializer?; nested exception
is java.io.StreamCorruptedException: invalid stream header: 73657373
    at org.springframework.data.redis.serializer.JdkSerializationRedisSerializer.deserialize(JdkSerializationRedisSerializer.java:82)
I think it is some kind of problem with the serialization/deserialization of the Spring Session information, but I don't know what else to do to control it.
Does anyone know what I'm doing wrong?
Thank you
You're on the right track; your problem is indeed serialization. Try this configuration (configure your template with these serializers only):
template.setHashValueSerializer(new JdkSerializationRedisSerializer());
template.setHashKeySerializer(new StringRedisSerializer());
template.setKeySerializer(new StringRedisSerializer());
template.setDefaultSerializer(new JdkSerializationRedisSerializer());
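For context, a minimal sketch of how the question's redisTemplate bean might look with only those serializers applied (reusing jedisConnectionFactory() from the question; Spring Session writes its values with JDK serialization by default, which is why the JDK serializer is needed to read them back):

@Bean
public RedisTemplate<String, Object> redisTemplate() {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory());
    // Keys stay readable strings; values match Spring Session's default JDK serialization.
    template.setKeySerializer(new StringRedisSerializer());
    template.setHashKeySerializer(new StringRedisSerializer());
    template.setDefaultSerializer(new JdkSerializationRedisSerializer());
    template.setHashValueSerializer(new JdkSerializationRedisSerializer());
    return template;
}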
