Spring Boot Kafka allows some properties to be configured in several places in application.yml. For example, the SSL configuration:
# application.yml
spring:
  kafka:
    ssl:
      trust-store-location: ...
    admin:
      ssl:
        trust-store-location: ...
    producer:
      ssl:
        trust-store-location: ...
    consumer:
      ssl:
        trust-store-location: ...
All of them are configured through the same class, KafkaProperties.
But might KafkaProperties be different for producer/consumer/admin beans here?
Am I right that if producer/consumer/admin are missing their own properties, they use the ones from the base spring.kafka.ssl? And if producer/consumer/admin have their own properties, they ignore the ones from the base spring.kafka.ssl?
The ssl.truststore.location default value is null, so I am using another property, min.insync.replicas, as an example, but the concept is the same.
But might KafkaProperties be different for producer/consumer/admin beans here?
You can configure properties at a granular level if required. For example, for a KafkaAdmin, this is how you do it:
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    // other configs here
    // when configuring the bean directly, use the raw Kafka config name
    configs.put("min.insync.replicas", 1);
    return new KafkaAdmin(configs);
}
Am I right that if producer/consumer/admin are missing their own properties, they use the ones from the base spring.kafka.ssl?
If the bean-specific properties are not configured (as per my example above), Spring Kafka will look in the application.yaml (common properties can be found here), for example:
spring:
  kafka:
    admin:
      properties:
        "[min.insync.replicas]": 3
And if it is not there, the default will be used. One way to find the default is by issuing this command (min.insync.replicas is a broker/topic-level config):
kafka-configs.bat --bootstrap-server localhost:9092 --entity-type topics --entity-name <topic-name> --describe --all
//Result:
min.insync.replicas=1 sensitive=false synonyms={DEFAULT_CONFIG:min.insync.replicas=1}
The GitHub page of Spring Kafka's TopicBuilder also specifies this:
Since 2.6 partitions and replicas default to {@link Optional#empty()} indicating the broker defaults will be applied.
which is 1, as per Kafka's broker default (see here).
And if producer/consumer/admin have their own properties, they ignore the ones from the base spring.kafka.ssl?
Right.
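If you want to double-check which value actually wins, one option (just a sketch, assuming Spring Boot's auto-configured KafkaProperties bean and a simple runner bean) is to print the merged maps that Boot hands to the clients; where a client has no value of its own, the base spring.kafka.ssl value shows up instead:
@Bean
ApplicationRunner logMergedKafkaSslConfig(KafkaProperties kafkaProperties) {
    return args -> {
        // each map contains the per-client value, falling back to spring.kafka.ssl
        // when no producer/consumer/admin specific value is configured
        System.out.println(kafkaProperties.buildProducerProperties().get("ssl.truststore.location"));
        System.out.println(kafkaProperties.buildConsumerProperties().get("ssl.truststore.location"));
        System.out.println(kafkaProperties.buildAdminProperties().get("ssl.truststore.location"));
    };
}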
Related
The autoStartup property of a listener is currently exposed at the ContainerProperties level and also on the KafkaListener annotation.
In some cases, it may be useful to set this property at configuration level for all listeners in all factories.
So, wouldn't it make sense to expose this property at the KafkaProperties#Listener level?
Proposal: spring.kafka.listener.auto-startup
From a general point of view, it is not always clear why some ContainerProperties are exposed under spring.kafka.listener.* and others are not. Wouldn't it make more sense to expose them all (at least the ones that can be set from a property file, like syncCommits, syncCommitTimeout, deliveryAttemptHeader, pauseImmediate, etc.)?
I can contribute this feature. The idea would be to unify the way properties are set on a container.
Any feedback is more than welcome.
Auto-configuration is performed by Spring Boot, not by individual projects like Spring Kafka. You would need to submit your proposed changes there.
You can easily configure it for all listeners by injecting the factory into some other bean definition factory method:
@Bean
SomeBean someBean(ConcurrentKafkaListenerContainerFactory<?, ?> factory) {
    factory.setAutoStartup(false);
    ...
}
I have got a Spring Boot project with two data sources, one DB2 and one Postgres. I configured that, but have a problem:
The auto-detection for the database type does not work on the DB2 (in any project) unless I specify the database dialect using spring.jpa.database-platform = org.hibernate.dialect.DB2390Dialect.
But how do I specify that for only one of the database connections? Or how do I specify the other one independently?
Some additional info on my project structure: I separated the databases roughly according to this tutorial, although I do not use the ChainedTransactionManager: https://medium.com/preplaced/distributed-transaction-management-for-multiple-databases-with-springboot-jpa-and-hibernate-cde4e1b298e4
I use the same basic project structure and almost unchanged configuration files.
OK, I found the answer myself and want to post it in case anyone else has the same question.
The answer lies in the config file for each database, i.e. the DB2Config.java file from the tutorial mentioned in the question.
While I'm at it, I'll incidentally also answer the question "how do I manipulate any of the spring.jpa properties for several databases independently".
In the example, the following method gets called:
@Bean
public LocalContainerEntityManagerFactoryBean db2EntityManagerFactory(
        @Qualifier("db2DataSource") DataSource db2DataSource,
        EntityManagerFactoryBuilder builder
) {
    return builder
            .dataSource(db2DataSource)
            .packages("com.preplaced.multidbconnectionconfig.model.db2")
            .persistenceUnit("db2")
            .build();
}
While configuring the datasource and the package our model lies in, we can also inject additional configuration.
After calling .packages(...), we can set a propertiesMap that can contain everything we would normally set via spring.jpa in the application.properties file.
If we want to set the DB2390Dialect, the method could now look like this (with the possibility to easily add further configuration):
@Bean
public LocalContainerEntityManagerFactoryBean db2EntityManagerFactory(
        @Qualifier("db2DataSource") DataSource db2DataSource,
        EntityManagerFactoryBuilder builder
) {
    HashMap<String, String> propertiesMap = new HashMap<String, String>();
    propertiesMap.put("hibernate.dialect", "org.hibernate.dialect.DB2390Dialect");
    return builder
            .dataSource(db2DataSource)
            .packages("com.preplaced.multidbconnectionconfig.model.db2")
            .properties(propertiesMap)
            .persistenceUnit("db2")
            .build();
}
Note that "hibernate.dialect" seems to work (instead of "database-platform").
I am using Spring Boot 2 + Graphite reporting and have some metrics that are receiving common tags, and some that are not.
I have two beans that are identical except for the name, both creating a timer with the same code. One of them is created first and does not have the common tags applied; the other does, e.g.:
@Autowired
MeterRegistry meterRegistry;

@Bean
Foo doesNotGetCommonTags() {
    meterRegistry.timer(...);
    ...
}

@Bean
MeterRegistryCustomizer<GraphiteMeterRegistry> graphiteCustomizer() {
    return r -> r.config().commonTags(...);
}

@Bean
Foo getsCommonTags() {
    meterRegistry.timer(...);
    ...
}
It seems to have to do with bean creation order, but I cannot figure out how to ensure my registry is fully customised before any meters are created. I'm using the common tags as a prefix, so this results in some metrics being sent to Graphite unprefixed while others are prefixed.
I don’t understand what could cause this. My graphiteCustomizer should be applied before the metrics registry is injected, right?
My workaround (which I don't consider a full solution) was to not use MeterRegistryCustomizer and instead create the Graphite registry myself with my desired customisations.
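For reference, a minimal sketch of that workaround (assuming micrometer-registry-graphite is on the classpath; the tag values are placeholders). Building the registry yourself means the common tags are in place before any meter is registered:
@Bean
public GraphiteMeterRegistry graphiteMeterRegistry() {
    GraphiteMeterRegistry registry = new GraphiteMeterRegistry(GraphiteConfig.DEFAULT, Clock.SYSTEM);
    // apply common tags before any other bean can create meters against this registry
    registry.config().commonTags("app", "my-app");
    return registry;
}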
I am just starting to use Kafka with Spring Boot & want to send & consume JSON objects.
I am getting the following error when I attempt to consume a message from the Kafka topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition dev.orders-0 at offset 9903. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'co.orders.feedme.feed.domain.OrderItem' is not in the trusted packages: [java.util, java.lang]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:139) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:113) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:218) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:923) ~[kafka-clients-1.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93) ~[kafka-clients-1.0.1.jar:na]
I have attempted to add my package to the list of trusted packages by defining the following property in application.properties:
spring.kafka.consumer.properties.spring.json.trusted.packages = co.orders.feedme.feed.domain
This doesn't appear to make any difference. What is the correct way to add my package to the list of trusted packages for Spring's Kafka JsonDeserializer?
Since you have the trusted package issue solved, for your next problem you could take advantage of the overloaded constructor
DefaultKafkaConsumerFactory(Map<String, Object> configs,
Deserializer<K> keyDeserializer,
Deserializer<V> valueDeserializer)
and the JsonDeserializer "wrapper" of Spring Kafka:
JsonDeserializer(Class<T> targetType, ObjectMapper objectMapper)
Combining the above, for Java I have:
new DefaultKafkaConsumerFactory<>(properties,
        new IntegerDeserializer(),
        new JsonDeserializer<>(Foo.class,
                new ObjectMapper()
                        .registerModules(new KotlinModule(), new JavaTimeModule())
                        .setSerializationInclusion(JsonInclude.Include.NON_NULL)
                        .setDateFormat(new ISO8601DateFormat())
                        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)));
Essentially, you can tell the factory to use your own Deserializers and for the Json one, provide your own ObjectMapper. There you can register the Kotlin Module as well as customize date formats and other stuff.
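If you build the consumer factory yourself rather than relying on Boot's auto-configuration, a sketch of wiring it into a listener container factory (the generic types and bean name are illustrative) looks like this:
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, Foo> kafkaListenerContainerFactory(
        ConsumerFactory<Integer, Foo> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Integer, Foo> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    // containers created for @KafkaListener methods will use the custom deserializers above
    factory.setConsumerFactory(consumerFactory);
    return factory;
}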
OK, I have read the documentation in a bit more detail and have found an answer to my question. I am using Kotlin, so the creation of my consumer looks like this:
@Bean
fun consumerFactory(): ConsumerFactory<String, FeedItem> {
    val configProps = HashMap<String, Any>()
    configProps[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootstrapServers
    configProps[ConsumerConfig.GROUP_ID_CONFIG] = "feedme"
    configProps[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    configProps[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = JsonDeserializer::class.java
    configProps[JsonDeserializer.TRUSTED_PACKAGES] = "co.orders.feedme.feed.domain"
    return DefaultKafkaConsumerFactory(configProps)
}
Now I just need a way to override the creation of the Jackson ObjectMapper in the JsonDeserializer so that it can work with my Kotlin data classes that don't have a zero-argument constructor :)
UPDATE: I have also published this question here; I might have done a better job phrasing it there.
How can I explicitly define an order so that Spring's out-of-the-box process of reading properties from the application.yml available on the classpath takes place BEFORE my @Configuration-annotated class, which reads configuration data from Zookeeper and places it as system properties that are later easily read and injected into members using @Value?
I have a @Configuration class, which defines the creation of a @Bean in which configuration data from Zookeeper is read and placed as system properties, so that they can easily be read and injected into members using @Value.
#Profile("prod")
#Configuration
public class ZookeeperConfigurationReader {
#Value("${zookeeper.url}")
static String zkUrl;
#Bean
public static PropertySourcesPlaceholderConfigurer zkPropertySourcesPlaceHolderConfigurer() {
PropertySourcesConfigurerAdapter propertiesAdapter = new PropertySourcesConfigurerAdapter();
new ConfigurationBuilder().populateAdapterWithDataFromZk(propertiesAdapter);
return propertiesAdapter.getConfigurer();
}
public void populateAdapterWithDataFromZk(ConfigurerAdapter ca) {
...
}
}
Right now I pass the zookeeper.url into the program using a -Dzookeeper.url argument added to the command line, and I read it by calling System.getProperty("zookeeper.url") directly.
Since this is a Spring Boot application, I also have an application.yml configuration file.
I would like to be able to set the zookeeper.url in the application.yml and keep my command line as clean as possible of explicit properties.
The mission turns out to be harder than I thought.
As you can see in the code snippet of ZookeeperConfigurationReader above, I'm trying to inject that value using @Value("${zookeeper.url}") into a member of the class which performs the actual read of data from Zookeeper, but at the time the code that needs that value accesses it, it is still null. The reason is that, in terms of the Spring lifecycle, I'm still in the "configuration" phase since I'm a @Configuration-annotated class myself, and the Spring code which reads the application.yml data and places it as system properties hasn't been executed yet.
So, bottom line: what I'm looking for is a way to control the order and tell Spring to first read application.yml into system properties, and only then load the ZookeeperConfigurationReader class.
You can try Spring Cloud Zookeeper. I posted a brief example of its use here.
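A rough sketch of that approach (assuming the spring-cloud-starter-zookeeper-config dependency and a reachable Zookeeper; the property name below is illustrative): the connect string goes into bootstrap.properties, and Zookeeper-backed values are then injected with @Value like any other property, before your own @Configuration classes are processed.
// bootstrap.properties (loaded before application.yml):
//   spring.cloud.zookeeper.connect-string=localhost:2181

@Component
public class ZookeeperBackedSettings {

    // resolved from Zookeeper's config tree by Spring Cloud Zookeeper Config
    @Value("${some.property.stored.in.zookeeper}")
    private String someProperty;
}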