Setting partition count in Kafka in Spring Boot using application.yml

How do I set the partition count for a Kafka topic in Spring Boot using application.yml? My current configuration:
kafka:
  zookeeper:
    host: localhost:2181
  groupId: group1
  topic: test_kafkaw
  bootstrap:
    servers: localhost:9092

If you are using Spring Cloud Stream, you can specify the partition count per Kafka topic in application.yml/application.properties:
spring.cloud.stream.bindings.<binding-name>.producer.partition-count
The Kafka binder uses the producer's partitionCount setting to create the topic with the given number of partitions.
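For example, a minimal sketch in application.yml (the binding name "output" here is a placeholder for your own binding):
spring:
  cloud:
    stream:
      bindings:
        output:
          producer:
            partition-count: 10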
If you are using Spring for Apache Kafka, you can use TopicBuilder to configure the topic, something like:
@Bean
public NewTopic topic() {
    return TopicBuilder.name("topic1")
            .partitions(10)
            .replicas(1)
            .build();
}
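Spring Boot auto-configures a KafkaAdmin bean, so a NewTopic bean like this is created on the broker at application startup if the topic does not already exist.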
TopicBuilder reference: https://docs.spring.io/spring-kafka/docs/current/api/org/springframework/kafka/config/TopicBuilder.html

Related

How to configure Kafka consumer retry property from application.properties in spring boot?

In Spring Boot, application.yml:
kafka:
  bootstrap-servers: localhost:9092
  listener:
    concurrency: 10
    ack-mode: MANUAL
  producer:
    topic: test-record
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    retries: 3
    orn-record:
      timeout: 3
    #acks: 1
  consumer:
    groupId: test-record
    topic: test
    enable-auto-commit: false
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
Using the above configuration, we can avoid Java (bean-based) configuration in Spring Boot, which is a real advantage.
Q: Can we set a Kafka error handler and the Kafka consumer retry count from application.properties / application.yml?
I could not find any reference or documentation about this. Because of this issue I now have to fall back to Java-based configuration in Spring Boot and remove the property-file configuration, which feels like going back to the old way of doing things in Spring. I believe there should be a workaround so that this can be achieved through property-file configuration.
Consumers don't have a retry property. If the offsets were not committed, the next poll will try again from the same offsets.
There is also no configurable error-handling class outside of deserialization, as there is in Kafka Streams.
If you want to handle deserialization errors (as opposed to processing errors), you can set that up like so:
spring:
  kafka:
    bootstrap-servers: ...
    consumer:
      # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      # Delegate deserializers
      spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
      spring.deserializer.value.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
Beyond that, you can leverage DeadLetterPublishingRecoverer and SeekToCurrentErrorHandler in code to publish the failing records to a new topic for inspection and further processing.
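A minimal sketch of that combination, assuming the Spring for Apache Kafka 2.x API (Spring Boot 2.2+ applies an ErrorHandler bean like this to the auto-configured listener container factory; in spring-kafka 2.8+ SeekToCurrentErrorHandler was superseded by DefaultErrorHandler):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaErrorHandlingConfig {

    @Bean
    public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        // Publish failed records to "<originalTopic>.DLT", the recoverer's default naming convention
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
        // Retry each failed record twice, 1 second apart, before handing it to the recoverer
        return new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }
}
The retry count and back-off interval above are illustrative; tune them to your workload.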

Spring cloud stream (Kafka) autoCreateTopics not working

I am using Spring Cloud Stream with the Kafka binder. To disable auto-creation of topics I referred to this: How can I configure a Spring Cloud Stream (Kafka) application to autocreate the topics in Confluent Cloud?. But setting this property does not seem to work, and the framework creates the topics automatically.
Here is the configuration in application.properties
spring.cloud.stream.kafka.binder.auto-create-topics=false
Here is the startup log
2021-06-25 09:22:46.522 INFO 38879 --- [pool-2-thread-1] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
Other details:
Spring boot version: 2.3.12.RELEASE
Spring Cloud Stream version: Hoxton.SR11
Am I missing anything in this configuration?
spring.cloud.stream.kafka.binder.auto-create-topics=false
That property configures the binder so that it will not create the topics; it does not set that consumer property.
To explicitly set that property, also set
spring.cloud.stream.kafka.binder.consumer-properties.allow.auto.create.topics=false
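Putting the two together, a minimal application.properties sketch:
# Stop the binder from provisioning topics on startup
spring.cloud.stream.kafka.binder.auto-create-topics=false
# Also stop the Kafka consumer client from implicitly creating topics
spring.cloud.stream.kafka.binder.consumer-properties.allow.auto.create.topics=false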

Configuring consumerWindowSize in Spring Boot application

My ActiveMQ Artemis configuration in Spring Boot is below:
spring:
  artemis:
    host: localhost
    port: 61616
    user: admin
    password: admin123
There is no property for the broker URL, so I cannot set consumerWindowSize as in
tcp://localhost:61616?consumerWindowSize=0
How can I configure consumerWindowSize in a Spring Boot application?
Based on the Spring Boot documentation (which references ArtemisProperties) I don't believe you can set the broker's actual URL or any of the properties associated with it. This is a pretty serious short-coming of the Artemis Spring Boot integration as it really limits the configuration. There is already an issue open to (hopefully) address this.
I added the configuration below to solve this issue:
@Bean("connectionFactory")
public ConnectionFactory connectionFactory(AppProperties appProperties) {
    // The broker URL can carry Artemis client options, e.g. tcp://localhost:61616?consumerWindowSize=0
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(appProperties.getBrokerUrl());
    cf.setUser(appProperties.getUser());
    cf.setPassword(appProperties.getPassword());
    return cf;
}
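For completeness, a minimal sketch of the AppProperties holder referenced above (the class shape and the app.* property names are assumptions, not part of the original answer):
import org.springframework.boot.context.properties.ConfigurationProperties;

// Register via @EnableConfigurationProperties(AppProperties.class) or @ConfigurationPropertiesScan
@ConfigurationProperties(prefix = "app")
public class AppProperties {

    // e.g. app.broker-url=tcp://localhost:61616?consumerWindowSize=0
    private String brokerUrl;
    private String user;
    private String password;

    public String getBrokerUrl() { return brokerUrl; }
    public void setBrokerUrl(String brokerUrl) { this.brokerUrl = brokerUrl; }
    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}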

Spring Boot Java Kafka configuration, overwrite port

I use Spring Boot + Kafka. This is my current, pretty simple configuration for Kafka:
@Configuration
@EnableKafka
public class KafkaConfig {
}
This configuration works fine and is able to connect to a Kafka instance on the default Kafka port, 9092.
Right now I need to change the port, say to 9093.
How do I update this Kafka configuration in order to connect on 9093?
I think something like this in your properties file will do the trick
spring.kafka.bootstrap-servers=localhost:9093
You can specify a comma-separated list of host:port entries.
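For example, with two hypothetical brokers:
spring.kafka.bootstrap-servers=localhost:9093,localhost:9094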

Spring Cloud Netflix Hystrix Turbine not getting info from services on the same host

I have followed Spring Cloud Netflix's guide to configure Turbine. After enabling Hystrix in two microservices I have verified that /hystrix.stream endpoints generate the correct output.
Now in a hystrix dashboard project I have configured Turbine to get the aggregated results of all the services. However all I get is a succession of:
: ping
data: {"reportingHostsLast10Seconds":0,"name":"meta","type":"meta","timestamp":1448552456486}
This is my config:
HystrixDashboard + Turbine Application:
@EnableHystrixDashboard
@EnableTurbine
@SpringBootApplication
public class HystrixDashboardApplication {

    public static void main(String[] args) {
        SpringApplication.run(HystrixDashboardApplication.class, args);
    }
}
HystrixDashboard + Turbine application.yml:
spring:
  application:
    name: hystrix-dashboard
server:
  port: 10000
turbine:
  appConfig: random-story-microservice,storyteller-api
  instanceUrlSuffix: /hystrix.stream
logging:
  level:
    com.netflix.turbine: 'TRACE'
UPDATE
Following kreel's directions I have configured Turbine this way:
turbine:
  appConfig: random-story-microservice,storyteller-api
  instanceUrlSuffix: /hystrix.stream
  clusterNameExpression: new String("default")
It doesn't fail with an exception anymore and in the logs I see that Turbine finds the two candidate hosts/microservices:
[ Timer-0] c.n.t.discovery.InstanceObservable : Retrieved hosts from InstanceDiscovery: 2
However, only one of them is finally registered. In InstanceObservable.run() only one of the hosts is added, because both have the same hash code and are therefore considered equal when added to newState.hostsUp. The com.netflix.turbine.discovery.Instance hash code is calculated from the hostname ("myhost" in both cases) and the cluster ("default"):
// set the current state
for (Instance host : newList) {
    if (host.isUp()) {
        newState.hostsUp.add(host);
    } else {
        newState.hostsDown.add(host);
    }
}
What do we have to do when the same host offers two different microservices? Only the first instance is registered in this case.
I think I have an answer, but first, to be sure: why are you expecting "default"?
In fact, I think you are misunderstanding the docs:
The configuration key turbine.appConfig is a list of Eureka serviceIds that Turbine will use to look up instances. The Turbine stream is then used in the Hystrix dashboard with a URL that looks like: http://my.turbine.server:8080/turbine.stream?cluster=<CLUSTERNAME> (the cluster parameter can be omitted if the name is "default"). The cluster parameter must match an entry in turbine.aggregator.clusterConfig. Values returned from Eureka are upper-case, thus we expect this example to work if there is an app registered with Eureka called "customers":
turbine:
  aggregator:
    clusterConfig: CUSTOMERS
  appConfig: customers
In your case:
turbine:
  aggregator:
    clusterConfig: MY_CLUSTER
  appConfig: random-story-microservice,storyteller-api
So it will return "random-story-microservice,storyteller-api" in uppercase.
So, I think you need to apply this part:
The clusterName can be customized by a SPEL expression in turbine.clusterNameExpression with root an instance of InstanceInfo. The default value is appName, which means that the Eureka serviceId ends up as the cluster key (i.e. the InstanceInfo for customers has an appName of "CUSTOMERS"). A different example would be turbine.clusterNameExpression=aSGName, which would get the cluster name from the AWS ASG name. Another example:
turbine:
  aggregator:
    clusterConfig: SYSTEM,USER
  appConfig: customers,stores,ui,admin
  clusterNameExpression: metadata['cluster']
In this case, the cluster name from 4 services is pulled from their metadata map, and is expected to have values that include "SYSTEM" and "USER".
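For reference, each service could expose that metadata in its own application.yml along these lines (a sketch; the SYSTEM value is illustrative):
eureka:
  instance:
    metadata-map:
      cluster: SYSTEM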
To use the "default" cluster for all apps you need a string literal expression (with single quotes):
turbine:
  appConfig: customers,stores
  clusterNameExpression: 'default'
Spring Cloud provides a spring-cloud-starter-turbine that has all the dependencies you need to get a Turbine server running. Just create a Spring Boot application and annotate it with @EnableTurbine.
Add this to your config:
clusterNameExpression: 'default'
