Spring Cloud Stream Kafka add headers to only one binding - spring

The reference manual for Spring Cloud Stream says that you can use the property "spring.cloud.stream.kafka.binder.headers" to set the headers for all bindings. Is there a way to set the headers for a specific binding? For example, something like this:
spring.cloud.stream:
  bindings:
    input:
      destination: input-topic
      headers: header-for-input-only
    output:
      destination: input-topic
      headers: header-for-output-only

Unfortunately not; it's a binder-wide setting. More granular, per-binding settings could be a new feature request.
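For reference, the binder-wide setting the question mentions applies to every Kafka binding at once; a minimal sketch (the header names here are just the illustrative ones from the question):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          # custom headers transported by the binder, for ALL bindings
          headers: header-for-input-only,header-for-output-only
```

Any custom header needed on either binding has to be listed here, since there is no per-binding equivalent.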

Related

Spring Cloud Stream RabbitMQ Binder Consumer Properties republishToDlq not working for me

Using the configuration below in application.yml, for a consumer application using Spring Cloud Stream 3.0.12.RELEASE and Spring Boot 2.5.6:
spring:
  main:
    web-application-type: none
  cloud:
    stream:
      bindings:
        consumeSomething-in-0:
          destination: some-topic-vhost2
          group: some-events
          binder: local_rabbit
          consumer:
            max-attempts: 1
            auto-bind-dlq: true
            dlq-ttl: 50000
            republishToDlq: true
I am trying to manage error handling with republishToDlq so that messages which throw an exception are routed to the DLQ together with the exception stack trace. The documentation says:
2. RabbitMQ Binder Overview
..." In addition, republishToDlq causes the binder to publish a failed message to the DLQ (instead of rejecting it). This feature lets additional information (such as the stack trace in the x-exception-stacktrace header) be added to the message in headers."
When I check the queue some-topic-vhost2.some-events.dlq, I can see that the message which threw the exception was routed to that queue. That is OK. But the exception stack trace is not added as a header, and the application also logs the warning below:
o.s.c.s.b.r.RabbitMessageChannelBinder : 'republishToDlq' is true, but the 'DLX' dead letter exchange is not present; disabling 'republishToDlq'
What is wrong with this config? What should I do to enable republishToDlq?
You are probably missing some configuration.
Here is a working example I provided for a different question - https://github.com/olegz/stream-function-samples/tree/main/stream-rabbit-dlq
Although this example is specific to RoutingFunction, it is just like any other function, so you can easily retrofit it to your case by changing all properties that use functionRouter-in-0 to your consumeSomething-in-0.
In addition to what Oleg mentioned above, I think there is a slight problem with the part of your configuration where you set the Rabbit binding properties. A few Rabbit consumer-specific properties must be set on the rabbit binding itself. See below, and notice the rabbit level of indirection in the configuration. max-attempts and the other generic binding properties can stay where they are in your configuration.
spring:
  cloud:
    stream:
      bindings:
        consumeSomething-in-0:
          destination: some-topic-vhost2
          group: some-events
          binder: local_rabbit
          consumer:
            max-attempts: 1
      rabbit:
        bindings:
          consumeSomething-in-0:
            consumer:
              auto-bind-dlq: true
              dlq-ttl: 50000
              republishToDlq: true

Spring Kafka SSL implementation without using trust/keystores

Hi, I want to implement SSL in my Spring Kafka connection. To achieve this I have to set SslTruststoreLocation and SslKeystoreLocation. But is it possible not to set a file location? Can I give the content of the cert file as a string and run the SSL implementation like that? Is there any chance to do that?
The normal way is to set these properties:
spring:
  kafka:
    ssl:
      key-password: pass
      keystore-location: /tmp/kafka.client.keystore.jks
      keystore-password: pass
      truststore-location: /tmp/kafka.client.truststore.jks
      truststore-password: pass
What I want is to set the cert file content directly and run the Spring Kafka implementation with that.
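One possible direction (not from the original thread, and assuming clients on Kafka 2.7+, which added PEM support via KIP-651): Kafka clients can accept certificate material inline as PEM strings instead of JKS file locations, and Spring Boot can pass those through as raw client properties. A sketch, with placeholder certificate values:

```yaml
spring:
  kafka:
    properties:
      security.protocol: SSL
      # PEM keystore: certificate chain and private key given inline
      ssl.keystore.type: PEM
      ssl.keystore.certificate.chain: |
        -----BEGIN CERTIFICATE-----
        ...placeholder...
        -----END CERTIFICATE-----
      ssl.keystore.key: |
        -----BEGIN PRIVATE KEY-----
        ...placeholder...
        -----END PRIVATE KEY-----
      # PEM truststore: trusted certificates given inline
      ssl.truststore.type: PEM
      ssl.truststore.certificates: |
        -----BEGIN CERTIFICATE-----
        ...placeholder...
        -----END CERTIFICATE-----
```

The `spring.kafka.properties.*` keys are forwarded verbatim to the Kafka client, so this bypasses Spring Boot's own keystore-location properties entirely.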

Referencing configuration section from other place in spring-boot application.yaml

I'm configuring spring boot kafka streams in application.yaml. I need to configure properties of the output topics:
producer:
  topic.properties:
    cleanup.policy: compact
    retention.ms: 604800000
Because I have the same configuration across the whole file I want to make single point where to define values:
my:
  policy: compact
  retention: 604800000
producer:
  topic.properties:
    cleanup.policy: ${my.policy}
    retention.ms: ${my.retention}
But topic.properties is just a generic map passed to the underlying Kafka library. To make the configuration more flexible I would like to reference the whole my section from producer.topic.properties, so that when a new Kafka property is added only the my section needs to be updated.
I tried:
producer:
  topic.properties: ${my}
But this doesn't work - ${my} is replaced by my.toString() and the configuration fails on getting a String where a Map is expected.
I'm looking for some section placeholder. For example in OpenAPI Spec you can do something similar to:
my:
  policy: compact
  retention: 604800000
producer:
  topic.properties:
    $ref: '/my'
I know basic YAML doesn't support references. But is there something in spring-boot allowing to reference other config sections?
You can reference other properties, but one at a time:
my:
  policy: compact
  retention: 604800000
producer:
  topic.properties:
    cleanup.policy: ${my.policy}
    retention.ms: ${my.retention}
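If some keys should fall back to a sensible value when the my section does not define them, Spring's placeholder syntax also supports a default after a colon, ${property:default} (standard Spring Boot placeholder behavior, not specific to this question):

```yaml
producer:
  topic.properties:
    # falls back to the literal default if my.policy / my.retention are unset
    cleanup.policy: ${my.policy:compact}
    retention.ms: ${my.retention:604800000}
```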

Spring boot 2.4.x cannot handle multi document yml files from config server

Java version: 8
Spring Boot version: 2.4.1
Spring Cloud version: 2020.0.0, specifically I use a Spring Cloud Config Server connected to GIT and our services are Spring Cloud Config Clients.
I have migrated away from using bootstrap.yml and started using spring.config.import and spring.config.activate.on-profile as mentioned in the documentation here and here
My configuration in my service, who is a client to the config server looks like this:
server.port: 9001
spring:
  application.name: my-rest-service
  config.import: configserver:http://localhost:8888
  cloud.config.profile: ${spring.profiles.active}
My configuration in the config server looks like this:
application.yml (has two documents separated by the ---)
logging:
  file.name: <omitted>
  level:
    root: INFO
---
spring:
  config.activate.on-profile: dev
logging.level.root: DEBUG
my-rest-service.yml (has two documents separated by the ---)
spring:
  datasource:
    driver-class-name: <omitted>
    username: <omitted>
    password: <omitted>
---
spring:
  config.activate.on-profile: dev
  datasource.url: <omitted>
Because the profile "dev" is active, I successfully get the following 4 configurations from the config server:
application.yml: general logging level
application.yml: specific logging for dev
my-rest-service.yml: general datasource properties
my-rest-service.yml: specific datasource url for dev
I can see these 4 sources successfully being fetched when I use my browser, when I debug, or in the logs when I lower the log level to trace:
o.s.b.c.config.ConfigDataEnvironment : Adding imported property source 'configserver:https://git.company.com/path.git/file:C:\configservergit\config\my-rest-service.yml'
o.s.b.c.config.ConfigDataEnvironment : Adding imported property source 'configserver:https://git.company.com/path.git/file:C:\configservergit\config\my-rest-service.yml'
o.s.b.c.config.ConfigDataEnvironment : Adding imported property source 'configserver:https://git.company.com/path.git/file:C:\configservergit\config\application.yml'
o.s.b.c.config.ConfigDataEnvironment : Adding imported property source 'configserver:https://git.company.com/path.git/file:C:\configservergit\config\application.yml'
However, notice that because I use multi document yml files, out of these 4 property sources only TWO unique names are used.
In a later step, when Spring creates the data source bean, it complains that it cannot find the data source URL. If I debug the Spring bean factory I can indeed see that out of the 4 property sources returned by the config server, only two have remained (the ones that don't contain the dev-profile-specific configuration). I assume this is because they have an identical name and overwrite each other. This is an effect of this piece of code in the MutablePropertySources class:
public void addLast(PropertySource<?> propertySource) {
    synchronized (this.propertySourceList) {
        this.removeIfPresent(propertySource); // <-- this is the culprit!
        this.propertySourceList.add(propertySource);
    }
}
This is a breaking change from Spring Boot 2.3/Spring Cloud Hoxton, where all properties were collected correctly. I think Spring Cloud needs to change the config server so that every document within a yml file has a unique name when returned to Spring. This is exactly how Spring Boot itself handles multi-document yml files: by appending a suffix such as (document #1) to the property source name.
I found an interesting note about profiles and multi-document yml, basically saying the combination is not supported, but this doesn't apply to my use case because my yml files are not profile-based (there is no -{profileName} suffix in the file name).
This is a known issue with the new release, which can be tracked on the Spring Cloud Config Server GitHub page.
The workaround seems to be stop using multi document yml files and use multiple distinct files with the profile name in the filename.
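As a sketch of that workaround (file names assumed), the multi-document application.yml above could be split so that the dev-specific document moves into a profile-named file, leaving each file with a single document:

application.yml:

```yaml
logging:
  file.name: <omitted>
  level:
    root: INFO
```

application-dev.yml:

```yaml
# applied only when the "dev" profile is active, so no
# spring.config.activate.on-profile document is needed
logging.level.root: DEBUG
```

The profile activation then comes from the -dev suffix in the filename rather than from a spring.config.activate.on-profile document inside the file.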

Spring Boot YAML Config

I am trying to deploy a Spring Boot application for which I am not able to change the code. I'd prefer to configure it using YML, however I am hitting this YAML limitation:
Elements have a value or children, never both.
The config scheme for the application includes multiple instances of:
com.company.feature=default
com.company.feature.config=value
Is there any way to configure these as YAML, as the following would be invalid:
com:
  company:
    feature: default
      config: value
You can't put it all in one hierarchy. Any property whose key has both children and a value must have its value listed outside the hierarchy.
GOOD EXAMPLE:
com:
  company:
    feature:
      config: value
com.company.feature: default
BAD EXAMPLE:
com:
  company:
    feature:
      config: value
com:
  company:
    feature: default
If you try the previous code, then according to YAML you overwrite com:company:feature with default instead of config: value.