Kafka topic not created automatically on a remote Kafka broker after Spring Boot starts (but created on the local Kafka server) - spring

1) I start Kafka on my machine.
2) I start my Spring Boot server with this config:
@Bean
public NewTopic myTopic() {
    return new NewTopic("my-topic", 5, (short) 1);
}

@Bean
public ProducerFactory<String, byte[]> greetingProducerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate<String, byte[]> unpMessageKafkaTemplate() {
    return new KafkaTemplate<>(greetingProducerFactory());
}
Result: the server starts successfully and creates my-topic in Kafka.
But if I try the same thing with Kafka on a remote server, the topic is not created, and Spring writes this in the log:
12:35:09.880 [ main] [INFO ] o.a.k.clients.admin.AdminClientConfig: [] AdminClientConfig values:
bootstrap.servers = [localhost:9092]
If I add this bean to the config:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "remote_host:9092");
    return new KafkaAdmin(configs);
}
the topic is created successfully.
1) Why does this happen?
2) Do I have to create a KafkaAdmin? Why is it not required for the local Kafka?
EDIT
My current config:
spring:
  kafka:
    bootstrap-servers: remote:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.ByteArraySerializer
and
@Configuration
public class KafkaTopicConfig {

    @Value("${response.topics.topicName}")
    private String topicName;

    @Bean
    public NewTopic responseTopic() {
        return new NewTopic(topicName, 5, (short) 1);
    }
}
After startup I see:
bootstrap.servers = [remote:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
...
But the topic is not created.

KafkaAdmin is the Spring Kafka object that looks for NewTopic beans in your Spring context and creates the corresponding topics. If you do not have a KafkaAdmin, no topic creation will take place. You can create a KafkaAdmin explicitly (as you show in your code snippet) or have it created indirectly via the Spring Kafka configuration properties.
KafkaAdmin is a nice-to-have; it is not involved in producing to or consuming from topics in your application code.
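For illustration, the two pieces can be combined in one configuration class; a minimal sketch (the class name and broker address are placeholders):
// Sketch: an explicit KafkaAdmin plus a NewTopic bean. At startup the
// KafkaAdmin scans the context for NewTopic beans and creates any topics
// that do not yet exist on the configured broker.
@Configuration
public class TopicCreationConfig {

    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "remote_host:9092"); // placeholder address
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic myTopic() {
        return new NewTopic("my-topic", 5, (short) 1);
    }
}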
EDIT
You must have something wrong; I just tested it...
spring:
  kafka:
    bootstrap-servers: remote:9092
and
2019-03-21 09:18:18.354 INFO 58301 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [remote:9092]
...

Spring Boot will automatically configure a KafkaAdmin for you, but it uses the application.yml (or application.properties). See Boot properties. Scroll down to spring.kafka.bootstrap-servers=. That's why it works with localhost (it's the default).
You also don't need a ProducerFactory or KafkaTemplate bean; Boot will create them for you from the properties.
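For example, with only the spring.kafka.* properties set, a producer can simply inject the auto-configured KafkaTemplate; a minimal sketch (the Sender class and topic name are made up for illustration):
// Sketch relying on Boot auto-configuration: no ProducerFactory, KafkaTemplate
// or KafkaAdmin beans are declared; Boot builds them from spring.kafka.* properties.
@Component
public class Sender {

    private final KafkaTemplate<String, byte[]> kafkaTemplate; // auto-configured by Boot

    public Sender(KafkaTemplate<String, byte[]> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(byte[] payload) {
        kafkaTemplate.send("my-topic", payload); // topic name is illustrative
    }
}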

Related

Producer Kafka throws deserialization exception

I have one topic and a Producer/Consumer:
Dependencies (Spring Initializr):
Producer (Apache Kafka)
Consumer (Apache Kafka Streams, Spring Cloud Stream)
Producer:
KafkaProducerConfig
@Configuration
public class KafkaProducerConfig {

    @Bean
    public KafkaTemplate<String, Person> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, Person> producerFactory() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configs);
    }
}
Controller:
@RestController
public class KafkaProducerApplication {

    private final KafkaTemplate<String, Person> kafkaTemplate;

    public KafkaProducerApplication(KafkaTemplate<String, Person> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @GetMapping("/persons")
    public Mono<List<Person>> findAll() {
        var personList = Mono.just(Arrays.asList(new Person("Name1", 15),
                new Person("Name2", 10)));
        personList.subscribe(dataList -> kafkaTemplate.send("topic_test_spring", dataList.get(0)));
        return personList;
    }
}
It works correctly when accessing the endpoint and does not throw any exception in the IntelliJ console.
Consumer:
spring:
  cloud:
    stream:
      function:
        definition: personService
      bindings:
        personService-in-0:
          destination: topic_test_spring
      kafka:
        bindings:
          personService-in-0:
            consumer:
              configuration:
                value:
                  deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
        binders:
          brokers:
            - localhost:9091
            - localhost:9092
  kafka:
    consumer:
      properties:
        spring:
          json:
            trusted:
              packages: "*"
PersonKafkaConsumer
@Configuration
public class PersonKafkaConsumer {

    @Bean
    public Consumer<KStream<String, Person>> personService() {
        return kstream -> kstream.foreach((key, person) -> {
            System.out.println(person.getName());
        });
    }
}
Here I get the exception when running the project.
org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately. Caused by: java.lang.IllegalArgumentException: The class 'com.example.producer.model.Person' is not in the trusted packages: [java.util, java.lang, com.nttdata.bootcamp.yanki.model, com.nttdata.bootcamp.yanki.model.*]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
The package indicated in the exception refers to the entity's package, but in the producer project. The producer's properties file has no configuration at all.
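For reference, trusted packages can also be configured programmatically on the spring-kafka JsonDeserializer; a minimal sketch (the package name is illustrative, and whether this fits the asker's Cloud Stream binder setup is an assumption):
// Sketch: a JsonDeserializer for Person with the producer's package trusted.
JsonDeserializer<Person> deserializer = new JsonDeserializer<>(Person.class);
deserializer.addTrustedPackages("com.example.producer.model"); // or "*" to trust all packages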

Java JobRunr when using Spring Boot Redis Starter

How do I create and use the Redis connection that spring-boot-starter-data-redis creates? It doesn't seem like a RedisClient bean is created by the default auto-configuration, so I'm not sure of the best way to do this.
The documentation does state that in this case you need to create the StorageProvider yourself, which is fine, but can you reuse what Spring Boot has already created? I believe this would need to be a pooled connection, which you would also need to enable through Spring Boot.
RedisTemplate offers a high-level abstraction for Redis interactions:
https://docs.spring.io/spring-data/data-redis/docs/current/reference/html/#redis:template
Redis auto-configuration:
@AutoConfiguration
@ConditionalOnClass({RedisOperations.class})
@EnableConfigurationProperties({RedisProperties.class})
@Import({LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class})
public class RedisAutoConfiguration {

    public RedisAutoConfiguration() {
    }

    @Bean
    @ConditionalOnMissingBean(name = {"redisTemplate"})
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        return template;
    }

    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnSingleCandidate(RedisConnectionFactory.class)
    public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
        return new StringRedisTemplate(redisConnectionFactory);
    }
}
Here you can find the corresponding configuration properties (including the connection pool default configuration).
Simple implementation example:
https://www.baeldung.com/spring-data-redis-tutorial
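For illustration, a minimal sketch of reusing what Boot has already created, by injecting the auto-configured StringRedisTemplate (the class name and keys are made up; the underlying RedisConnectionFactory can be injected the same way if you need to build a StorageProvider on top of it):
// Sketch: reusing Boot's auto-configured StringRedisTemplate, which is backed
// by the same RedisConnectionFactory that Boot creates.
@Component
public class RedisExample {

    private final StringRedisTemplate redisTemplate;

    public RedisExample(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void storeAndRead() {
        redisTemplate.opsForValue().set("greeting", "hello"); // illustrative key/value
        System.out.println(redisTemplate.opsForValue().get("greeting"));
    }
}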

Spring Kafka 2.6.x ErrorHandler and DeadLetterPublishingRecoverer with ConcurrentKafkaListenerContainerFactory

We are trying to use the DLT feature in Spring Kafka 2.6.x. This is the config yml:
kafka:
  bootstrap-servers: localhost:9092
  auto-offset-reset: earliest
  consumer:
    key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    enable-auto-commit: false
    properties:
      isolation.level: read_committed
      fetch.max.wait.ms: 100
      spring.json.value.default.type: 'com.sample.entity.Event'
      spring.json.trusted.packages: 'com.sample.entity.*'
      spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
  producer:
    bootstrap-servers: localhost:9092
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
And here is the KafkaConfig class:
@EnableKafka
@Configuration
@Log4j2
public class KafkaConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, Event> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
        handler.addNotRetryableExceptions(UnprocessableException.class);
        return handler;
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaOperations kafkaOperations) {
        return new DeadLetterPublishingRecoverer(kafkaOperations);
    }
}
It is fine without the ConcurrentKafkaListenerContainerFactory, but since we want to scale the number of instances up and down, we want to use the ConcurrentKafkaListenerContainer.
What is the proper way to do this?
Also, I found that if it is a deserialization exception, the message in .DLT is not sent properly (not valid JSON), while if it is "UnprocessableException" (our custom exception thrown within the listener) it is valid JSON in .DLT.
Since you are wiring up your own container factory, you have to set the error handler on it.
but since we want to scale the number of instances up and down, we want to use the ConcurrentKafkaListenerContainer.
Boot's auto-configuration wires up a concurrent container with concurrency=1 (if there is no ....listener.concurrency property), so you can use Boot's factory.
For deserialization exceptions (all exceptions), the record.value() is whatever came in originally. If that's not what you are seeing, please provide an example of what's in the original record and the DLT.
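A sketch of setting the error handler (and concurrency) on the custom container factory from the question, reusing the SeekToCurrentErrorHandler bean defined there (the concurrency value is only an example):
// Sketch: wiring the error handler into the custom container factory.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        SeekToCurrentErrorHandler errorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, Event> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setErrorHandler(errorHandler); // needed when not using Boot's auto-configured factory
    factory.setConcurrency(3);             // example value; Boot's factory defaults to 1
    return factory;
}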

Kafka Producer SSL properties for YAML file

These are the Kafka producer properties in the YAML file.
When I enable SSL my Kafka producer doesn't work. It's not able to identify the topic on the broker. But when I use PLAINTEXT my Kafka producer works properly.
Am I missing something in the SSL config?
PS: The bootstrap servers are different for SSL and PLAINTEXT.
spring:
  kafka:
    producer:
      bootstrap-servers: <server name>
      properties:
        acks: all
        retries: 3
        retry.backoff.ms: 200000
        ssl.protocol: SSL
        ssl.endpoint.identification.algorithm: https
      ssl:
        keystore-location: keystore.jks
        keystore-password: password
This is my Kafka Producer config
@Bean
public ProducerFactory<String, JsonMessage> producerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    config.put(ProducerConfig.ACKS_CONFIG, acks);
    config.put(ProducerConfig.RETRIES_CONFIG, retries);
    config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackoffMs);
    return new DefaultKafkaProducerFactory<>(config);
}

@Bean
public KafkaTemplate<String, JsonMessage> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
These are the values logged for the Kafka producer on the Spring Boot console:
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
You are creating your own ProducerFactory bean, so the properties in application.yml are not being used; those properties are used by Boot when auto-configuring its bean.
You need to set the SSL properties yourself in your producerFactory() bean.
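For example, a sketch of adding the SSL settings directly to the config map in producerFactory() (file paths and password are placeholders):
// Sketch: SSL settings added to the producer config map.
config.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
config.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/keystore.jks");
config.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "password");
config.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
config.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password");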

Spring Boot Auto Configuration Failed Loading Spring Kafka Properties

Spring Boot failed to load the properties. Here are the properties that I am using in the YAML file.
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-commit-interval: 100
      enable-auto-commit: true
      group-id: ********************
      auto-offset-reset: earliest
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      batch-size: 16384
      buffer-memory: 33554432
      retries: 0
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    listener:
      poll-timeout: 20000
The exception I am getting is this:
Caused by: java.lang.IllegalAccessException: Class org.apache.kafka.common.utils.Utils can not access a member of class org.springframework.kafka.support.serializer.JsonDeserializer with modifiers "protected"
I think the constructor is protected. Please provide a way to instantiate this.
That's correct. See:
protected JsonDeserializer() {
    this((Class<T>) null);
}

protected JsonDeserializer(ObjectMapper objectMapper) {
    this(null, objectMapper);
}

public JsonDeserializer(Class<T> targetType) {
    this(targetType, new ObjectMapper());
    this.objectMapper.configure(MapperFeature.DEFAULT_VIEW_INCLUSION, false);
    this.objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
}
The JsonDeserializer isn't designed to be instantiated by the default constructor because it needs to know the targetType to deserialize.
You can extend this class to your particular type:
public class FooJsonDeserializer extends JsonDeserializer<Foo> { }
and use this class as the value for the value-deserializer property.
Or you can consider customizing the DefaultKafkaConsumerFactory:
@Bean
public ConsumerFactory<?, ?> kafkaConsumerFactory(KafkaProperties properties) {
    Map<String, Object> consumerProperties = properties.buildConsumerProperties();
    consumerProperties.put(CommonClientConfigs.METRIC_REPORTER_CLASSES_CONFIG,
            MyConsumerMetricsReporter.class);
    DefaultKafkaConsumerFactory<Object, Foo> consumerFactory =
            new DefaultKafkaConsumerFactory<>(consumerProperties);
    consumerFactory.setValueDeserializer(new JsonDeserializer<>(Foo.class));
    return consumerFactory;
}
