Kafka Producer SSL properties for YAML file - spring-boot

This is the Kafka producer properties section in my YAML file.
When I enable SSL, my Kafka producer doesn't work: it is not able to identify the topic on the broker. But when I use PLAINTEXT, the producer works properly.
Am I missing something in the SSL config?
PS: The bootstrap servers are different for SSL and PLAINTEXT.
spring:
  kafka:
    producer:
      bootstrap-servers: <server name>
      properties:
        acks: all
        retries: 3
        retry.backoff.ms: 200000
        ssl.protocol: SSL
        ssl.endpoint.identification.algorithm: https
      ssl:
        keystore-location: keystore.jks
        keystore-password: password
This is my Kafka Producer config
@Bean
public ProducerFactory<String, JsonMessage> producerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    config.put(ProducerConfig.ACKS_CONFIG, acks);
    config.put(ProducerConfig.RETRIES_CONFIG, retries);
    config.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, retryBackoffMs);
    return new DefaultKafkaProducerFactory<>(config);
}

@Bean
public KafkaTemplate<String, JsonMessage> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
These are the values logged for the Kafka producer on the Spring Boot console:
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null

You are creating your own ProducerFactory bean, so the properties in application.yml are not being used; Boot only applies those properties when auto-configuring its own bean.
You need to set the SSL properties yourself in your producerFactory() bean.
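For example, a minimal sketch of the extra entries for the config map in producerFactory(), assuming the keystore/truststore locations and passwords are injected (e.g. via @Value) rather than hard-coded; truststore entries are included because SSL-enabled brokers usually require one:
// Constants come from org.apache.kafka.clients.CommonClientConfigs
// and org.apache.kafka.common.config.SslConfigs.
config.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
config.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, keystoreLocation);     // assumed field, e.g. /path/to/keystore.jks
config.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, keystorePassword);     // assumed field
config.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, truststoreLocation); // assumed field
config.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, truststorePassword); // assumed field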

Related

Adding Java environment variables in application.properties in springboot

I am running a Spring Boot application which needs to trust a certificate that I added to my local truststore.
For now I am setting it under Run Configurations in IntelliJ and it works, e.g.:
-Djavax.net.ssl.trustStore=location\cacerts -Djavax.net.ssl.trustStorePassword=changeit
Is there any way to set it from the application.properties file in Spring Boot, the way we set other Spring properties?
If you want to make REST calls, you can configure the RestTemplate bean like this:
@Configuration
public class SslConfiguration {

    @Value("${http.client.ssl.trust-store}")
    private Resource keyStore;

    @Value("${http.client.ssl.trust-store-password}")
    private String keyStorePassword;

    @Bean
    RestTemplate restTemplate() throws Exception {
        SSLContext sslContext = new SSLContextBuilder()
                .loadTrustMaterial(keyStore.getURL(), keyStorePassword.toCharArray())
                .build();
        SSLConnectionSocketFactory socketFactory =
                new SSLConnectionSocketFactory(sslContext);
        HttpClient httpClient = HttpClients.custom()
                .setSSLSocketFactory(socketFactory).build();
        HttpComponentsClientHttpRequestFactory factory =
                new HttpComponentsClientHttpRequestFactory(httpClient);
        return new RestTemplate(factory);
    }
}
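The two placeholders referenced by @Value would then live in application.properties; a sketch with assumed values:
http.client.ssl.trust-store=classpath:cacerts
http.client.ssl.trust-store-password=changeit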

Spring Kafka 2.6.x ErrorHandler and DeadLetterPublishingRecoverer with ConcurrentKafkaListenerContainerFactory

We are trying to use the DLT feature in Spring Kafka 2.6.x. This is the config yml:
kafka:
  bootstrap-servers: localhost:9092
  auto-offset-reset: earliest
  consumer:
    key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    enable-auto-commit: false
    properties:
      isolation.level: read_committed
      fetch.max.wait.ms: 100
      spring.json.value.default.type: 'com.sample.entity.Event'
      spring.json.trusted.packages: 'com.sample.entity.*'
      spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
  producer:
    bootstrap-servers: localhost:9092
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonDeserializer
And here is the KafkaConfig class:
@EnableKafka
@Configuration
@Log4j2
public class KafkaConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, Event> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
        handler.addNotRetryableExceptions(UnprocessableException.class);
        return handler;
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaOperations kafkaOperations) {
        return new DeadLetterPublishingRecoverer(kafkaOperations);
    }
}
It works fine without the ConcurrentKafkaListenerContainerFactory, but since we want to scale the number of instances up and down, we want to use the ConcurrentKafkaListenerContainer.
What is the proper way to do this?
Also, I found that if it is a deserialization exception, the message in .DLT is not sent properly (not proper JSON), while if it is "UnprocessableException" (our custom exception thrown within the listener) it is proper JSON in .DLT.
Since you are wiring up your own container factory, you have to set the error handler on it, as shown in the sketch below.
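For example, a sketch of the factory bean above with the handler wired in (the unused configurer parameter is dropped, and the SeekToCurrentErrorHandler bean already defined in the same class is injected):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        SeekToCurrentErrorHandler errorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, Event> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setErrorHandler(errorHandler); // without this, failed records never reach the DeadLetterPublishingRecoverer
    return factory;
}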
but since we want to scale up scale down the number of instances, we want to use the ConcurrentKafkaListenerContainer.
Boot's auto-configuration wires up a concurrent container factory with concurrency=1 (if there is no ...listener.concurrency property), so you can simply use Boot's factory.
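For example, to raise the concurrency of Boot's auto-configured factory via properties:
spring:
  kafka:
    listener:
      concurrency: 3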
For deserialization exceptions (all exceptions), the record.value() is whatever came in originally. If that's not what you are seeing, please provide an example of what's in the original record and the DLT.

SpringBoot Kafka: Bean method 'kafkaTemplate' in 'KafkaAutoConfiguration' is not loaded

I am using Spring Boot and trying to write a KafkaProducer to push messages into a Kafka queue.
I have created these methods in a @Configuration class.
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress); // bootstrapAddress holds the address of the Kafka server
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}
And I have autowired this KafkaTemplate bean in my KafkaMessageProducer class, which handles calling KafkaTemplate's send function.
@Autowired
KafkaTemplate<String, String> kafkaTemplate;
But I am facing this error when I try to compile my code:
Field kafkaTemplate in <pathoffile>.KafkaMessageProducer required a bean of type 'org.springframework.kafka.core.KafkaTemplate' that could not be found.
- Bean method 'kafkaTemplate' in 'KafkaAutoConfiguration' not loaded because @ConditionalOnMissingBean (types: org.springframework.kafka.core.KafkaTemplate; SearchStrategy: all) found bean 'avroKafkaTemplate'
Action: Consider revisiting the conditions above or defining a bean of type 'org.springframework.kafka.core.KafkaTemplate' in your configuration.
Also, if I try to exclude KafkaAutoConfiguration from my Spring project, I get an error like "Bean cannot be loaded as KafkaAutoConfiguration is disabled".
Any idea why I am getting this bean error and what the solution may be?
EDIT: I found the following bean in a jar file used by my project:
@Bean
@Conditional({EnableQueueCondition.class})
public KafkaTemplate<String, String> kafkaTemplate() {
    KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(this.producerFactory());
    kafkaTemplate.setProducerListener(new ProducerListenerImpl());
    return kafkaTemplate;
}
So this is where the error is coming from, but I don't know how to tell Spring not to use this bean and to use the bean I defined instead. I have tried using the @Primary and @Qualifier annotations on my bean, and it still gives the same error. Might it be that my defined bean is not created or not found, and KafkaAutoConfiguration then looks for the default bean, which is overridden by the avroKafkaTemplate bean? What may be the solution to this problem?
By default, Spring Boot provides a KafkaTemplate bean if you add the Kafka dependency to your POM.
You just need to define the properties in your application.yml file, for example:
server:
  port: 9000
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
Enable auto-configuration:
@Configuration
@EnableKafka
and autowire the kafkaTemplate:
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
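Then you can publish with, for example (topic name assumed):
kafkaTemplate.send("my-topic", "hello, Kafka"); // asynchronous fire-and-forget send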
In your case, the auto-config factory is looking for a ProducerFactory<String, String>, which does not match your configuration:
@Bean
@ConditionalOnMissingBean(ProducerFactory.class)
public ProducerFactory<String, String> kafkaProducerFactory()
So rename your producerFactory() to kafkaProducerFactory(); it should solve your issue.
From the stack trace, there is another KafkaTemplate bean, avroKafkaTemplate. So I guess there is another configuration duplicating the KafkaTemplate definition.

Kafka Fails to Process all the messages - Java Spring Boot

I have a Spring Boot application (Spring Boot version 2.2.2.RELEASE) where I have configured a Kafka consumer that processes data from Kafka and serves it to multiple WebSockets. The subscription to Kafka succeeds, but not all messages from the selected topic are processed by the consumer: a few messages are delayed and a few are missed entirely, even though the producer is verifiably sending the data. Below are the configuration properties I have used.
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    final String BOOTSTRAP_SERVERS = kafkaBootstrapServer;
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    return new DefaultKafkaConsumerFactory<>(props);
}
Is there any configuration I am missing?
For a new consumer (one that has never committed offsets for its group.id), you must set AUTO_OFFSET_RESET_CONFIG to earliest to avoid missing existing records in the topic (the default is latest), as shown below.
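For example, in the consumerFactory() shown above:
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // read from the beginning when the group has no committed offset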

Kafka topic not created automatically on remote Kafka after Spring Boot start (but is created on local Kafka server)

1) I start Kafka on my machine.
2) I start my Spring Boot server with this config:
@Bean
public NewTopic myTopic() {
    return new NewTopic("my-topic", 5, (short) 1);
}

@Bean
public ProducerFactory<String, byte[]> greetingProducerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate<String, byte[]> unpMessageKafkaTemplate() {
    return new KafkaTemplate<>(greetingProducerFactory());
}
Result: the server starts successfully and creates my-topic in Kafka.
But if I try to do the same with a remote Kafka on a remote server, the topic is not created,
and in the log Spring writes:
12:35:09.880 [ main] [INFO ] o.a.k.clients.admin.AdminClientConfig: [] AdminClientConfig values:
bootstrap.servers = [localhost:9092]
If I add this bean to config:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "remote_host:9092");
    return new KafkaAdmin(configs);
}
the topic is created successfully.
1) Why does this happen?
2) Do I have to create a KafkaAdmin? Why is it not required for the local Kafka?
EDIT
My current config:
spring:
  kafka:
    bootstrap-servers: remote:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringDeserializer
      value-serializer: org.apache.kafka.common.serialization.ByteArraySerializer
and
@Configuration
public class KafkaTopicConfig {

    @Value("${response.topics.topicName}")
    private String topicName;

    @Bean
    public NewTopic responseTopic() {
        return new NewTopic(topicName, 5, (short) 1);
    }
}
After start I see:
bootstrap.servers = [remote:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
...
But the topic is not created.
KafkaAdmin is the Spring Kafka object that looks for NewTopic beans in your application context and creates the corresponding topics. If you do not have a KafkaAdmin, no creation will take place. You can explicitly create a KafkaAdmin (as you show in your code snippet) or indirectly order its creation via the Spring Kafka configuration properties.
KafkaAdmin is a nice-to-have; it is not involved in producing to or consuming from topics in your application code.
EDIT
You must have something wrong; I just tested it...
spring:
  kafka:
    bootstrap-servers: remote:9092
and
2019-03-21 09:18:18.354 INFO 58301 --- [ main] o.a.k.clients.admin.AdminClientConfig: AdminClientConfig values:
bootstrap.servers = [remote:9092]
...
Spring Boot will automatically configure a KafkaAdmin for you, but it gets its settings from application.yml (or application.properties); see the Boot properties documentation and scroll down to spring.kafka.bootstrap-servers=. That's why it works with localhost (it's the default).
You also don't need a ProducerFactory or template; Boot will create them for you from the properties.
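So a minimal setup would be just the properties plus the topic declaration; a sketch (host name assumed):
# application.yml
spring:
  kafka:
    bootstrap-servers: remote_host:9092
and
@Configuration
public class KafkaTopicConfig {

    // With spring.kafka.bootstrap-servers set, Boot auto-configures KafkaAdmin,
    // ProducerFactory and KafkaTemplate; only the NewTopic bean is declared here.
    @Bean
    public NewTopic myTopic() {
        return new NewTopic("my-topic", 5, (short) 1);
    }
}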
