Spring Cloud Stream mutually exclusive property issue

I have the following application.yml configuration:
cloud:
  stream:
    poller:
      # Cron for polling data.
      cron: 0 0/30 * * * *
........
I am getting the following error:
Description:
The following configuration properties are mutually exclusive:
spring.integration.poller.cron
spring.integration.poller.fixed-delay
spring.integration.poller.fixed-rate
However, more than one of those properties has been configured at the same time:
spring.integration.poller.cron
spring.integration.poller.fixed-delay
Action:
Update your configuration so that only one of the mutually exclusive properties is configured.
Even though I haven't added fixed-delay, the error says that I have. I saw that the PollerConfigEnvironmentPostProcessor class adds fixed-delay if the property is absent, so how can I use the cron expression?
//TODO Must remain after removal of deprecated code above in the future
streamPollerProperties.putIfAbsent(INTEGRATION_PROPERTY_PREFIX + "fixed-delay", "1s");
streamPollerProperties.putIfAbsent(INTEGRATION_PROPERTY_PREFIX + "max-messages-per-poll", "1");
I have also tried the Spring Integration poller properties instead of the Spring Cloud Stream poller (since the latter is deprecated), but I get the same error:
integration:
  poller:
    cron: 0 0/30 * * * *
Earlier, with Spring Cloud version 2020.0.2, it was working fine. As soon as I update the Spring Cloud version to 2021.0.1, the error starts.

This is a bug. The line:
streamPollerProperties.putIfAbsent(INTEGRATION_PROPERTY_PREFIX + "fixed-delay", "1s");
has to be conditional: it should only be applied if neither spring.integration.poller.fixed-delay nor spring.integration.poller.cron is present yet.
As a workaround, I suggest implementing an EnvironmentPostProcessor like this:
public class RemoveStreamPollerEnvironmentPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        environment.getPropertySources().remove("spring.integration.poller");
    }

}
This way all the spring.integration.poller.* properties registered by Spring Cloud Stream will be removed from the environment, but those you configure manually under spring.integration.poller.* will still be present.
Therefore your:
spring:
  integration:
    poller:
      cron: 0 0/30 * * * *
will be good.
NOTE: the EnvironmentPostProcessor has to be registered in META-INF/spring.factories.
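For reference, assuming the post-processor above lives in a package such as com.example.config (the package name is just an illustration), the spring.factories entry would look like this:
org.springframework.boot.env.EnvironmentPostProcessor=com.example.config.RemoveStreamPollerEnvironmentPostProcessor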

Related

StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory

This is about upgrading an existing production code base that uses windowing from kafka-clients, kafka-streams, and spring-kafka 2.4.0 to 2.6.x, and also upgrading spring-boot-starter-parent from 2.2.2.RELEASE to 2.3.x, as 2.2 is incompatible with kafka-streams 2.6.
The existing code had the beans below with the old versions (Kafka 2.4.0, Spring Boot 2.2):
#Bean("DataCompressionCustomTopology")
public Topology customTopology(#Qualifier("CustomFactoryBean") StreamsBuilder streamsBuilder) {
//Your topology code
return streamsBuilder.build();
}
#Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
//Your kafka streams code
return kafkaStreams;
}
After upgrading kafka-streams and kafka-clients to 2.6.2 and spring-kafka to 2.6.x, the following exception was observed:
2021-05-13 12:33:51.954 [Persistence-Realtime-Transformation] [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'CustomFactoryBean'; nested exception is org.springframework.kafka.KafkaException: Could not start stream: ; nested exception is org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory
A similar error can happen when you are running multiple instances of the same application (name/id) on the same machine.
Please look at the state.dir configuration to get the idea. You can add it to the Kafka configuration and make it unique per instance.
In case you are using Spring Cloud Stream (you can't have the same port on the same machine):
spring.cloud.stream.kafka.streams.binder.configuration.state.dir: ${spring.application.name}${server.port}
UPDATE:
In the case of Spring Kafka:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(APPLICATION_ID_CONFIG, springApplicationName);
    props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(StreamsConfig.STATE_DIR_CONFIG, String.format("%s%s", springApplicationName, serverPort));
    return new KafkaStreamsConfiguration(props);
}
or:
spring.kafka:
  bootstrap-servers: ....
  streams:
    properties:
      application.server: localhost:${server.port}
      state.dir: ${spring.application.name}${server.port}
The problem here is that newer versions of spring-kafka automatically initialize one more Kafka Streams instance based on the topology bean, while another instance is initialized by the GenericKafkaStreams bean from the existing code base. This results in multiple threads trying to lock the same state directory, hence the error.
Even disabling KafkaAutoConfiguration at the Spring Boot level does not disable this behavior. This was such a pain to identify and cost a lot of time.
The fix is to get rid of the topology bean and have our own custom Kafka Streams bean, as in the code below:
protected Topology customTopology() {
    //topology code
    return streamsBuilder.build();
}

/**
 * This starts the Kafka Streams application and sets the state listener and state
 * store listener.
 *
 * @return KafkaStreams
 */
@Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
    KafkaStreams kafkaStreams = new KafkaStreams(customTopology(), kstreamsconfigs);
    return kafkaStreams;
}
If you have a sophisticated Kafka Streams topology in your Spring Cloud Stream Kafka Streams binder 3.0 style application, you might need to specify different application ids for different functions, like the following:
spring.cloud.stream.function.definition: myFirstStream;mySecondStream
...
spring.cloud.stream.kafka.streams:
  binder:
    functions:
      myFirstStream:
        applicationId: app-id-1
      mySecondStream:
        applicationId: app-id-2
I handled this problem on these versions:
org.springframework.boot version 2.5.3
org.springframework.kafka:spring-kafka:2.7.5
org.apache.kafka:kafka-clients:2.8.0
org.apache.kafka:kafka-streams:2.8.0
Check this: State directory.
By default it is created in the temp folder with the Kafka Streams app id, for example:
/var/folders/xw/xgslnvzj1zj6wp86wpd8hqjr0000gn/T/kafka-streams/${spring.kafka.streams.application-id}/.lock
If two or more Kafka Streams apps use the same spring.kafka.streams.application-id, then you get this exception.
So just change your Kafka Streams app ids.
Or set the state directory manually via StreamsConfig.STATE_DIR_CONFIG in the streams config.
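A minimal sketch of that second option (the application id, bootstrap servers, and state directory path are placeholder values, and an existing Topology named topology is assumed):
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// give each instance its own state directory so they don't compete for the same lock
props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams/my-streams-app-instance-1");
KafkaStreams streams = new KafkaStreams(topology, props);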
The above answers about setting the state dir work perfectly for me. Thanks.
One observation that might be helpful for someone working with Spring Boot: when you bring up multiple Kafka Streams application instances on the same machine and have spring.devtools.restart.enabled set (which is usually the case in a dev profile), you might want to disable it, because when the same application instance restarts automatically it might not get the store lock. This is what I was facing and was able to resolve by disabling the restart behavior.
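If that is your situation, the standard devtools property should do it, for example in your dev profile properties (the profile-specific file is just a suggestion):
spring.devtools.restart.enabled=false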
In my case, what works perfectly is specifying a separate @TestConfiguration class in which I use a counter to change the application id for each Spring Boot test context.
@TestConfiguration
public class TestKafkaStreamsConfig {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kStreamsConfig() {
        final var props = new HashMap<String, Object>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-application-id-" + COUNTER.getAndIncrement());
        // rest of configuration
        return new KafkaStreamsConfiguration(props);
    }
}
Of course I had to enable Spring bean overriding to replace the primary configuration.
Edit: I'm using Spring Boot 2.5.10, so in my case, to make use of the @TestConfiguration class I have to pass it to the @SpringBootTest(classes = ...) annotation.
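For completeness, a rough sketch of what that looks like on the test class (MyApplication and the test class name are hypothetical; the bean-overriding property is the standard Spring Boot switch mentioned above):
@SpringBootTest(
        classes = { MyApplication.class, TestKafkaStreamsConfig.class },
        properties = "spring.main.allow-bean-definition-overriding=true")
class KafkaStreamsIntegrationTest {
    // tests go here
}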
I was facing the same problem: a single topology in Spring Boot, and I was trying to access the state store for interactive queries. In order to do so I needed a KafkaStreams object, as shown below.
GlobalKTable<String, String> configTable = builder.globalTable("config",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("config-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));

KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
streams.start();

ReadOnlyKeyValueStore<String, String> configView = streams.store(
        StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
The problem is that the Spring Kafka factory bean already starts a topology, so calling streams.start() here is a second start and causes the lock on the state store.
This can be fixed by setting the auto start property to false.
spring.kafka.streams.auto-startup=false
That's all you need.

Spring Integration - FtpInboundFileSynchronizer Comparator configuration with DSL

Spring Integration's FtpInboundFileSynchronizer allows for the setting of a Comparator<FTPFile> to allow ordering of the downloads. The documentation says:
Starting with version 5.1, the synchronizer can be provided with a Comparator. This is useful when restricting the number of files fetched with maxFetchSize.
This is fine for @Bean configuration:
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer(...) {
    FtpInboundFileSynchronizer synchronizer = new FtpInboundFileSynchronizer(sessionFactory);
    ...
    synchronizer.setComparator(comparator);
    return synchronizer;
}
But if I want to programmatically assemble flows, the Java DSL is encouraged:
StandardIntegrationFlow flow = IntegrationFlows
        .from(Ftp.inboundAdapter(ftpFileSessionFactory, comparator)
                .maxFetchSize(1)
        ...
The comparator in the Ftp.inboundAdapter(...) factory method is only for comparing files locally, after they have been downloaded. Some configuration settings do get passed through to the synchronizer here (like remote directory, timestamp, etc.), but there is no setting for the synchronizer's comparator equivalent to the one above.
Solution attempt:
The alternative is to create the synchronizer as a non-bean, create the FtpInboundFileSynchronizingMessageSource in a similar way, and use IntegrationFlows.from(source). But assembling the synchronizer this way results in a runtime exception when the flow is registered with the flow context:
Creating EvaluationContext with no beanFactory
java.lang.RuntimeException: No beanFactory
at org.springframework.integration.expression.ExpressionUtils.createStandardEvaluationContext(ExpressionUtils.java:90) ~[spring-integration-core-5.3.2.RELEASE.jar:5.3.2.RELEASE]
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.afterPropertiesSet(AbstractInboundFileSynchronizer.java:299) ~[spring-integration-file-5.3.2.RELEASE.jar:5.3.2.RELEASE]
That makes sense; the FtpInboundFileSynchronizer is not supposed to be constructed outside of a context. (Though this does appear to work.) But how, in that case, can I dynamically assemble ftp integration flows with a synchronizer configured with a Comparator<FTPFile>?
It looks like we missed exposing that remoteComparator option in the DSL.
Feel free to raise a GH issue or even contribute a fix: https://github.com/spring-projects/spring-integration/issues
As a workaround for dynamic flows, I really would suggest going with a separate FtpInboundFileSynchronizer and FtpInboundFileSynchronizingMessageSource and then using the mentioned IntegrationFlows.from(source). What you are probably missing in your configuration is this API:
/**
 * Add an object which will be registered as an {@link IntegrationFlow} dependant bean in the
 * application context. Usually it is some support component, which needs an application context.
 * For example dynamically created connection factories or header mappers for AMQP, JMS, TCP etc.
 * @param bean an additional arbitrary bean to register into the application context.
 * @return the current builder instance
 */
IntegrationFlowRegistrationBuilder addBean(Object bean);
I mean that the FtpInboundFileSynchronizingMessageSource is OK to pass into from() as is, but the synchronizer has to be added as an extra bean for registration.
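A rough sketch of that workaround (the session factory, comparator, directories, and the injected IntegrationFlowContext are assumed to exist in your own configuration; this is an illustration, not framework documentation):
FtpInboundFileSynchronizer synchronizer = new FtpInboundFileSynchronizer(sessionFactory);
synchronizer.setRemoteDirectory("remote/dir");
synchronizer.setComparator(comparator);

FtpInboundFileSynchronizingMessageSource source =
        new FtpInboundFileSynchronizingMessageSource(synchronizer);
source.setLocalDirectory(new File("local/dir"));

IntegrationFlow flow = IntegrationFlows
        .from(source, e -> e.poller(Pollers.fixedDelay(5000)))
        .handle(m -> System.out.println(m.getPayload()))
        .get();

integrationFlowContext.registration(flow)
        .addBean(synchronizer) // gives the synchronizer a BeanFactory, avoiding the "No beanFactory" error
        .register();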
Another, more fancy, way is to consider using a new feature called DSL extensions: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-extensions
So you can extend that FtpInboundChannelAdapterSpec to provide the missing option for configuring the internal synchronizer.

SpringBoot 2.2.4 Actuator - path for custom management endpoints

After moving from spring-boot v1.3 to the newest spring-boot v2.2.4, we've lost the ability to have custom endpoints under the management port.
Before we had our custom endpoints declared as:
@Component
public class CacheEndpoint implements MvcEndpoint {
    ...
    @Override
    public String getPath() {
        return "/v1/cache";
    }
    ...
    // mappings go here
Since MvcEndpoint has been removed from the Spring Boot Actuator, we now need to do the following:
@Component
@RestControllerEndpoint(id = "cache")
public class CacheEndpoint {
    ...
    // mappings go here
Unfortunately, we've lost the option to have a custom root path for our custom management endpoints (before it was /v1/).
For backward compatibility, we still want the default actuator endpoints such as health, metrics, and env to be under the / base path, e.g. host:<management_port>/health, but at the same time we still want to support our custom endpoints under the /v1/ path, e.g. host:<management_port>/v1/cache.
I tried a lot of things, googled even more, but no success yet.
Is there a way to achieve this?
This is what I use for Spring Boot 2:
application.yml:
management:
  endpoints:
    enabled-by-default: true
    web:
      exposure:
        include: "*"
      base-path: "/management" # <-- note, here is the context path
All in all, consider reading the Actuator migration guide from Spring Boot 1.x to 2.x.

Refresh springboot configuration dynamically

Is there any way to refresh the Spring Boot configuration as soon as we change the .properties file?
I came across spring-cloud-config, and many articles/blogs suggested using it for a distributed environment. I have many deployments of my Spring Boot application, but they are not related to or dependent on one another. I also looked at a few solutions where they suggested providing REST endpoints to refresh configs manually without restarting the application. But I want to refresh the configuration dynamically whenever I change the .properties file, without manual intervention.
Any guide/suggestion is much appreciated.
Can you just use the Spring Cloud Config Server and have it signal to your Spring Cloud client that the properties file changed? See this example:
https://spring.io/guides/gs/centralized-configuration/
Under the covers, it polls the underlying resource and then broadcasts the change to your client:
@Scheduled(fixedRateString = "${spring.cloud.config.server.monitor.fixedDelay:5000}")
public void poll() {
    for (File file : filesFromEvents()) {
        this.endpoint.notifyByPath(new HttpHeaders(), Collections
                .<String, Object>singletonMap("path", file.getAbsolutePath()));
    }
}
If you don't want to use the Config Server, in your own code you could use a similar scheduled annotation and monitor your properties file:
@Component
public class MyRefresher {

    @Autowired
    private ContextRefresher contextRefresher;

    @Scheduled(fixedDelay = 5000)
    public void myRefresher() {
        // Code here could potentially look at the properties file
        // to see if it changed, and conditionally call the next line...
        contextRefresher.refresh();
    }
}
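Note that ContextRefresher.refresh() re-binds @ConfigurationProperties beans and re-creates beans marked with @RefreshScope, so consumers of the changed values need to be refresh-aware. A small sketch (the property name my.feature.enabled is made up for illustration):
@Component
@RefreshScope
public class FeatureToggle {

    // re-resolved after ContextRefresher.refresh() because of @RefreshScope
    @Value("${my.feature.enabled:false}")
    private boolean enabled;

    public boolean isEnabled() {
        return enabled;
    }
}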

Spring Boot 2 integrate Brave MySQL-Integration into Zipkin

I am trying to integrate the Brave MySQL instrumentation into my Spring Boot 2.x service to automatically let its interceptor enrich my traces with spans for MySQL queries.
The current Gradle dependencies are the following:
compile 'io.zipkin.zipkin2:zipkin:2.4.5'
compile('io.zipkin.reporter2:zipkin-sender-okhttp3:2.3.1')
compile('io.zipkin.brave:brave-instrumentation-mysql:4.14.3')
compile('org.springframework.cloud:spring-cloud-starter-zipkin:2.0.0.M5')
I have already configured Sleuth successfully to send traces for HTTP requests to my Zipkin server, and now I want to add spans for each MySQL query the service performs.
The TracingConfiguration is this:
@Configuration
public class TracingConfiguration {

    /** Configuration for how to send spans to Zipkin */
    @Bean
    Sender sender() {
        return OkHttpSender.create("https://myzipkinserver.com/api/v2/spans");
    }

    /** Configuration for how to buffer spans into messages for Zipkin */
    @Bean
    AsyncReporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    @Bean
    Tracing tracing(Reporter<Span> spanListener) {
        return Tracing.newBuilder()
                .spanReporter(spanReporter())
                .build();
    }
}
The query interceptor works properly, but my problem now is that the spans are not added to the existing trace; instead, each one is added to a new trace.
I guess it's because of the creation of a new sender/reporter in the configuration, but I have not been able to reuse the existing one created by the Spring Boot auto-configuration.
That would moreover remove the necessity to redundantly define the Zipkin URL (because it is already defined for Zipkin in my application.yml).
I already tried autowiring the Zipkin reporter into my bean, but all I got is a SpanReporter, while the Brave Tracing builder requires a Reporter<Span>.
Do you have any advice for me on how to properly wire things up?
Please use the latest snapshots. Sleuth in the latest snapshots uses Brave internally, so the integration will be extremely simple.
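On the redundant Zipkin URL: with the Sleuth Zipkin starter on the classpath, the sender and reporter are normally auto-configured from the standard property below, so the hand-rolled Sender and AsyncReporter beans shouldn't be necessary (shown as a sketch of the usual setup, not verified against your exact versions):
spring:
  zipkin:
    base-url: https://myzipkinserver.com/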
