How to define multiple packages as trusted packages in YAML - spring-boot

I have a Spring Boot application where I consume data from Kafka topics.
For the objects I consume, I need to provide their package names as trusted packages, e.g.:
spring:
  kafka:
    consumer:
      bootstrap-servers: localhost:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.sample.cassandra
Now I have multiple packages that I need to define as trusted packages.
For example, I also have com.sample.demo, but when I try something like this:
spring:
  json:
    trusted:
      packages: com.sample.cassandra, com.sample.demo
It doesn't work.
What is the correct syntax to define multiple packages in the YAML file?

The YAML flow syntax for sequences is:
packages: [com.sample.cassandra, com.sample.demo]
You can alternatively use the block syntax:
packages:
  - com.sample.cassandra
  - com.sample.demo
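For example, applied back to the consumer configuration from the question, the properties section might look like this (a sketch reusing the package names above):
spring:
  kafka:
    consumer:
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages:
                - com.sample.cassandra
                - com.sample.demo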

Related

Spring application properties (map) from docker environment

I would like to configure my dockerized Spring Boot application using Docker environment variables. The property is a map, which I configure in application.yml like below:
spring:
  kafka:
    producer:
      properties:
        "schema.registry.url": http://schema-registry.com:8081
I tried the following, but it didn't work:
environment:
  - SPRING_KAFKA_PRODUCER_PROPERTIES_SCHEMA.REGISTRY.URL=http://schema-registry.com:8081
How can I configure this schema.registry.url parameter from the Docker environment?
Well, first of all, I would need a bit more information about how you use that containerized application: Do you deploy it with docker-compose? Is it part of a Docker Swarm?
Depending on this, the possible solutions can vary.
Docker Swarm
For example, if you are using Docker Swarm, you can define your application.yml as a template:
application.yml.template
spring:
  kafka:
    producer:
      properties:
        "schema.registry.url": {{ env "schema_registry" }}
Then you will have to render that template. For that, I will assume you have placed your Spring Boot executable JAR under /usr/app in the container and that your image is named springboot-app.
docker-compose.yml
version: "3.8"
services:
  springboot-app:
    image: springboot-app:latest
    environment:
      SPRING_KAFKA_PRODUCER_PROPERTIES_SCHEMA.REGISTRY.URL: 'http://schema-registry.com:8081'
    configs:
      - source: springboot-app.application.yml
        target: /usr/app/config/application.yaml
        mode: 0440
configs:
  springboot-app.application.yml:
    template_driver: golang
    file: ./application.yml.template
You can now deploy your Swarm with docker stack deploy -c docker-compose.yml springboot-app.
Or even better, if you are working in a production environment, you can separate the environment variables from the common configuration:
docker-compose.yml
version: "3.8"
services:
  springboot-app:
    image: springboot-app:latest
    configs:
      - source: springboot-app.application.yml
        target: /usr/app/config/application.yaml
        mode: 0440
configs:
  springboot-app.application.yml:
    template_driver: golang
    file: ./application.yml.template
docker-compose.dev.yml
version: "3.8"
services:
  springboot-app:
    environment:
      SPRING_KAFKA_PRODUCER_PROPERTIES_SCHEMA.REGISTRY.URL: 'http://schema-registry.com:8081'
And deploy it with docker stack deploy -c docker-compose.yml -c docker-compose.dev.yml springboot-app.
Docker Compose
Since you mentioned in a later comment that you are using docker-compose, the way of working isn't the same.
First of all, not all Spring properties can be overridden in the Docker Compose file, only the ones you can pass to Maven when building or starting the application.
Also, it seems you have defined the environment variable incorrectly: normally you should replace the dots '.' with underscores '_'. But anyway, since configuring a Kafka producer usually involves more than just a URL, I would use the profiles feature of Spring.
You can create several profiles with the configuration combinations you want, and tell Spring via Compose which one to use. Let's see an example.
application.yml
spring:
  config:
    activate:
      on-profile: "development"
  kafka:
    producer:
      properties:
        "schema.registry.url": https://kafka-dev-endpoint.com
---
spring:
  config:
    activate:
      on-profile: "production"
  kafka:
    producer:
      properties:
        "schema.registry.url": https://kafka-prod-endpoint.com
and finally:
docker-compose.yml
environment:
  - SPRING_PROFILES_ACTIVE=development
If you want to dig deeper, there is more information here: https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto-set-active-spring-profiles
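For completeness, a minimal docker-compose.yml sketch that sets the profile on a service might look like this (the service and image names are placeholders):
version: "3.8"
services:
  springboot-app:
    image: springboot-app:latest
    environment:
      - SPRING_PROFILES_ACTIVE=development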

Spring Cloud Streams kafka binder - topic serialization configuration

So I think I've run myself into confusion, as I understand there are two different Kafka binders for Spring Cloud Stream:
Spring Cloud Streams Kafka Binder
Spring Cloud Streams Kafka Streams Binder
I'm looking for the correct YAML settings to define the serializer and deserializer in the normal Kafka binder for Spring Cloud Stream.
I can tweak the defaults using this logic:
spring:
  main:
    web-application-type: NONE
  application:
    name: tbfm-translator
  kafka:
    consumer:
      group-id: ${consumer_id}
      bootstrap-servers: ${kafka_servers}
  cloud:
    schemaRegistryClient:
      endpoint: ${schema_registry}
    stream:
      # default:
      #   producer.useNativeEncoding: true
      #   consumer.useNativeEncoding: true
      defaultBinder: kafka
      kafka:
        binder:
          auto-add-partitions: true # I wonder if it's because this is set
          auto-create-topics: true  # Disabling this seems to override the server settings and will auto-create
          producer-properties:
            # For additional properties you can check here:
            # https://docs.confluent.io/current/installation/configuration/producer-configs.html
            schema.registry.url: ${schema_registry}
            # Disable for auto schema registration
            auto.register.schemas: false
            # Use only the latest schema version
            use.latest.version: true
            # This will use reflection to generate schemas from classes - used to validate the current data set
            # against the schema registry for valid production
            schema.reflection: true
            # To use an Avro key, enable the following line
            #key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            # This will use a string-based key - aka not in the registry - don't need a name strategy with a string serializer
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            # This will control the serializer setup
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
which is:
spring.cloud.stream.kafka.binder.producer-properties.value.serializer
spring.cloud.stream.kafka.binder.producer-properties.key.serializer
I figure I should be able to do this on a per-topic basis:
spring:
  cloud:
    stream:
      bindings:
        my-topic:
          destination: a-topic
          xxxxxxxx??
I've come across this setting:
producer:
  use-native-encoding: false
  keySerde: <CLASS>
But this doesn't seem to be working. Is there an easy property I can set to do this on a per-topic basis? I think keySerde is for the Kafka Streams implementation, not the normal Kafka binder.
use-native-encoding must be true to use your own serializers.
spring.cloud.stream.kafka.bindings.my-topic.producer.configuration.value.serializer: ...
See the documentation for kafka-specific producer properties.
configuration
Map with a key/value pair containing generic Kafka producer properties.
Default: Empty map.
stream:
  bindings: # Define output topics here and then again in the kafka.bindings section
    test:
      destination: multi-output
      producer:
        useNativeDecoding: true
  kafka:
    bindings:
      test:
        destination: multi-output
        producer:
          configuration:
            value.serializer: org.apache.kafka.common.serialization.StringSerializer
This seems to work, but it's very annoying that I have to duplicate the binding definition in two places.
It makes me want to shy away from the YAML-style definition.

Spring Cloud Stream: Cannot connect to 2 rabbitmq clusters without allowOverride

I have a Spring Boot application (let's call it example-service) with the following configuration to connect to 2 different RabbitMQ clusters.
spring:
  cloud:
    stream:
      defaultBinder: rabbitA
      binders:
        rabbitA:
          inheritEnvironment: false
          defaultCandidate: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: rabbitmq-a:5672
                username: user-a
                password: password-a
        rabbitB:
          inheritEnvironment: false
          defaultCandidate: false
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: rabbitmq-b:5672
                username: user-b
                password: password-b
      bindings:
        dataFromA:
          destination: exchange-1
          group: queue-1
          binder: rabbitA
        dataFromB:
          destination: exchange-2
          group: queue-2
          binder: rabbitB
That by itself works fine; it connects to both clusters. The problem is that this service is deployed in an environment where there is a Spring Config Server with the following files:
application.yml
spring.rabbitmq:
  addresses: rabbitmq-a:5672
  username: user-a
  password: password-a
That seems to override the configuration set for each binder under the "environment" property, so I needed to add this extra config:
example-service.yml
spring.cloud:
  config:
    overrideSystemProperties: false
    allowOverride: true
    overrideNone: false
Now example-service connects to both RabbitMQ clusters again. But I have observed certain side effects, mainly not being able to override other properties in the config server's example-service.yml anymore, which is a real need for me. So I have discarded allowOverride and its related properties.
The question is... is it possible to make it work without using allowOverride, while keeping the spring.rabbitmq.addresses/username/password in the remote config server application.yml?
Thank you very much in advance.
Kind regards.
Which version are you using? I just tested it with 3.0.6 and it works fine:
spring:
  cloud:
    stream:
      binders:
        rabbitA:
          type: rabbit
          inherit-environment: false
          environment:
            spring:
              rabbitmq:
                virtual-host: A
        rabbitB:
          type: rabbit
          inherit-environment: false
          environment:
            spring:
              rabbitmq:
                virtual-host: B
      bindings:
        input1:
          binder: rabbitA
          destination: input1
          group: foo
        input2:
          binder: rabbitB
          destination: input2
          group: bar
  rabbitmq:
    virtual-host: /
Probably not related, but your group indentation is wrong.

SCDF Kubernetes custom source is writing data to "output" channel

I have a custom source application which reads data from an external Kafka server and passes the information to the next processor in the stream. Locally everything works perfectly. I have created a Docker image of the code, and when I deploy the stream in a Kubernetes environment, I do see that a topic with the name stream.source-app gets created, but messages produced by the source actually go to the "output" topic. I don't see this issue in the local environment.
application.yaml
spring:
  cloud:
    stream:
      bindings:
        Workitemconnector_subscribe:
          destination: Workitemconnector
          contentType: application/json
          group: SCDFMessagingSourceTestTool1
          consumer:
            partitioned: true
            concurrency: 1
            headerMode: embeddedHeaders
        output:
          # destination: dataOut
          binder: kafka2
      binders:
        kafka1:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: xx.xxx.xx.xxx:9092
                      zkNodes: xx.xxx.xx.xxx:2181
        kafka2:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: server1:9092
                      zkNodes: server1:2181
spring.cloud.stream.defaultBinder: kafka1
Locally, without defining any parameters during stream deployment, I notice the source is consuming messages from the xxxx server and producing data to server1 and to the topic named "stream.sourceapp", but in the Kubernetes environment it is acting strangely. It is always sending data to the "output" topic even though the "stream.sourceapp" topic exists.

Multiple bindingRoutingKeys for a consumer with Spring Cloud Stream using RabbitMQ

I'd like to configure an input channel in Spring Cloud Stream to be bound to the same exchange (destination) with multiple routing keys. I've managed to get this working with a single routing key like this:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input1:
            consumer:
              bindingRoutingKey: key1.#
      bindings:
        input1:
          binder: rabbit
          group: group1
          destination: dest-group1
But I cannot seem to get it working for multiple keys. I've tried this:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input1:
            consumer:
              bindingRoutingKey: key1.#,key2.#
      bindings:
        input1:
          binder: rabbit
          group: group1
          destination: dest-group1
But this doesn't seem to work.
I'm using Spring Boot 2.0.1, and the Spring Cloud dependencies are imported from:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-dependencies</artifactId>
    <version>Finchley.RC1</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
Does anyone know how to achieve this?
This can be done now by adding a property:
spring.cloud.stream.rabbit.bindings.<channel-name>.consumer.binding-routing-key-delimiter=,
Then you can comma-separate the routing keys:
spring.cloud.stream.rabbit.bindings.<channel-name>.consumer.binding-routing-key=key1,key2,key3
Thanks Gary
Reference documentation
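For reference, the same two properties in YAML form might look like this (a sketch assuming a channel named input1 and the routing keys from the question):
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input1:
            consumer:
              binding-routing-key-delimiter: ','
              binding-routing-key: 'key1.#,key2.#'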
It can't be done with properties, but you can declare the additional bindings as beans; see this answer.
There is also a third-party "advanced" Boot starter that allows you to add declarations in a YAML file. I haven't tried it, but it looks interesting.
