Spring Cloud Stream (Kafka) autoCreateTopics not working

I am using Spring Cloud Stream with the Kafka binder. To disable auto-creation of topics I referred to this: How can I configure a Spring Cloud Stream (Kafka) application to autocreate the topics in Confluent Cloud?. But setting this property does not seem to work, and the framework still creates the topics automatically.
Here is the configuration in application.properties:
spring.cloud.stream.kafka.binder.auto-create-topics=false
Here is the startup log
2021-06-25 09:22:46.522 INFO 38879 --- [pool-2-thread-1] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
Other details:
Spring boot version: 2.3.12.RELEASE
Spring Cloud Stream version: Hoxton.SR11
Am I missing anything in this configuration?

spring.cloud.stream.kafka.binder.auto-create-topics=false
That property configures the binder so that it does not provision topics itself; it does not set the consumer's allow.auto.create.topics property shown in the log.
To explicitly set that property, also set
spring.cloud.stream.kafka.binder.consumer-properties.allow.auto.create.topics=false
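Putting the two together, application.properties would contain both the binder-level setting and the consumer-level override:

```properties
# Stop the binder from provisioning topics on startup
spring.cloud.stream.kafka.binder.auto-create-topics=false
# Stop the Kafka consumer itself from triggering broker-side auto-creation
spring.cloud.stream.kafka.binder.consumer-properties.allow.auto.create.topics=false
```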

Related

Do not auto-connect to a Kafka topic or broker (as either consumer or producer) in Spring Boot using a simple flag, without profiles

I am trying to disable connecting to Kafka in Spring Boot with a simple flag and could not find any working examples. I tried the auto-start flags provided in the Spring Boot and Kafka API documentation, but none of them worked.
The reason is that I do not want to connect in local or dev environments. How can I disable this? I want to use flags rather than the @Profile annotation.
2022-10-05 INFO o.s.c.s.binder.DefaultBinderFactory : Creating binder: kafka
2022-10-05 INFO o.s.c.s.binder.DefaultBinderFactory : Caching the binder: kafka
2022-10-05 INFO o.s.c.s.binder.DefaultBinderFactory : Retrieving cached binder: kafka
2022-10-05 INFO o.s.c.s.b.k.p.KafkaTopicProvisioner : Using kafka topic for outbound: abc.def.ghi
2022-10-05 INFO o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [my.confluent.cloud:1234]
client.dns.lookup = use_all_dns_ips
client.id =
and a lot of admin client config values
I want to disable creating binders

Setting partition count in Kafka in Spring Boot using application.yml

How can I set the partition count for Kafka in Spring Boot using application.yml?
kafka:
  zookeeper:
    host: localhost:2181
  groupId: group1
  topic: test_kafkaw
  bootstrap:
    servers: localhost:9092
If you are using Spring Cloud Stream, you can specify partition count per Kafka topic in application.yml/application.properties:
spring.cloud.stream.bindings.<binding-name>.producer.partition-count
The Kafka binder uses the partitionCount setting of the producer to create a topic with the given partition count.
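For example, assuming a binding named output (substitute your own binding name), this would provision the topic with three partitions:

```properties
spring.cloud.stream.bindings.output.producer.partition-count=3
```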
If you are using Spring for Apache Kafka, you can use TopicBuilder to configure that — something like:
@Bean
public NewTopic topic() {
    return TopicBuilder.name("topic1")
            .partitions(10)
            .replicas(1)
            .build();
}
TopicBuilder reference: https://docs.spring.io/spring-kafka/docs/current/api/org/springframework/kafka/config/TopicBuilder.html

Configuring consumerWindowSize in Spring Boot application

ActiveMQ Artemis configuration file in Spring Boot below:
spring:
  artemis:
    host: localhost
    port: 61616
    user: admin
    password: admin123
There is no property for the broker URL, so I cannot set consumerWindowSize like
tcp://localhost:61616?consumerWindowSize=0
How can I configure consumerWindowSize in a Spring Boot application?
Based on the Spring Boot documentation (which references ArtemisProperties), I don't believe you can set the broker's actual URL or any of the properties associated with it. This is a serious shortcoming of the Artemis Spring Boot integration, as it really limits the configuration. There is already an issue open to (hopefully) address this.
I added the configuration below to solve this issue (the AppProperties getters stand in for the $brokerUrl, $user, and $password placeholders):
@Bean("connectionFactory")
public ConnectionFactory connectionFactory(AppProperties appProperties) {
    // The broker URL can carry Artemis options, e.g. tcp://localhost:61616?consumerWindowSize=0
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(appProperties.getBrokerUrl());
    cf.setUser(appProperties.getUser());
    cf.setPassword(appProperties.getPassword());
    return cf;
}
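For completeness, the AppProperties bean above would bind to custom entries in application.yml; the property names here are assumptions, and the URL reuses the consumerWindowSize setting from the question:

```yaml
app:
  broker-url: tcp://localhost:61616?consumerWindowSize=0
  user: admin
  password: admin123
```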

Spring Boot Java Kafka configuration, overwrite port

I use Spring Boot + Kafka. This is my current, pretty simple configuration for Kafka:
@Configuration
@EnableKafka
public class KafkaConfig {
}
This configuration works fine and connects to a Kafka instance on the default port, 9092.
Now I need to change the port to, say, 9093.
How do I update this Kafka configuration to connect on 9093?
I think something like this in your properties file will do the trick:
spring.kafka.bootstrap-servers=localhost:9093
You can specify a comma-separated list of host:port entries.
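For example, with a second (hypothetical) broker added to the list:

```properties
spring.kafka.bootstrap-servers=localhost:9093,broker2.example.com:9093
```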

Integrating Spring Cloud Sleuth with Spring boot amqp

Looking for an example that shows integrating spring cloud sleuth with spring boot amqp (rabbit) publisher and subscriber.
I do see the following messages in the log
2016-10-21 08:35:15.708 INFO [producer,9148f56490e5742f,943ed050691842ab,false] 30928 --- [nio-8080-exec-1] a.b.c.controllers.MessagingController : Received Request to pulish with Activity OrderShipped
2016-10-21 08:35:15.730 INFO [producer,9148f56490e5742f,943ed050691842ab,false] 30928 --- [nio-8080-exec-1] a.b.c.service.ProducerService : Message published
When I look at messages on Queue, I don't see traceId or any other details added to the header. Should I use MessagePostProcessor to add these to the header?
Also what should be done on the receiving service?
We don't instrument Spring AMQP out of the box. You can, however, use Spring Integration or Spring Cloud Stream, which we do support, and then everything will work out of the box. If you need to use Spring AMQP for some reason, you'll have to instrument the code yourself (and send us a PR ;) ).
Using Spring AMQP you can set a MessagePostProcessor on the RabbitTemplate using the setBeforePublishPostProcessors method.
We implemented org.springframework.amqp.core.MessagePostProcessor and overrode the postProcessMessage method this way:
@Override
public org.springframework.amqp.core.Message postProcessMessage(org.springframework.amqp.core.Message message)
        throws AmqpException {
    MessagingMessageConverter converter = new MessagingMessageConverter();
    MessageBuilder<?> mb = MessageBuilder.fromMessage((Message<?>) converter.fromMessage(message));
    inject(tracer.getCurrentSpan(), mb);
    return converter.toMessage(mb.build(), message.getMessageProperties());
}
The inject method can then set all the required headers on the message, and it will be passed to RabbitMQ with the changes.
There is a good example of how to implement such an inject method in org.springframework.cloud.sleuth.instrument.messaging.MessagingSpanInjector.
We are using v1.1.1 of spring-cloud-sleuth-stream, so my example is based on this version; in the next release (1.2) it will be easier.
