NoSuchMethodError when trying to map Kafka binder to input method - spring-boot

The following appears in the console when trying to launch a Spring Cloud Stream project with the Kafka binder active:
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle';
Caused by: java.lang.NoSuchMethodError: java.util.List.of(Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;)Ljava/util/List;
My input method, using Spring Cloud Function, goes as follows:
@Bean
public Function<Message<String>, byte[]> exec() {
    return input -> ...
}
Now, having Kafka in place, my .properties file looks as follows:
spring.cloud.stream.function.bindings.exec-in-0=in
spring.cloud.stream.bindings.in.destination=topic-0
spring.cloud.stream.function.bindings.exec-out-0=out
spring.cloud.stream.bindings.out.destination=topic-1
spring.cloud.stream.bindings.in.binder=kafka
spring.cloud.stream.kafka.bindings.in.consumer.configuration.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.bindings.out.binder=kafka
spring.cloud.stream.kafka.bindings.out.producer.configuration.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
Am I missing any configs for the input method? Should that method be different for Kafka (I already tested this with PubSub and it works)?

The error in your stack trace gives us a clue that you are facing this problem: https://github.com/spring-projects/spring-integration/issues/3761.
So either upgrade to the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn
or to the latest Spring Integration: https://spring.io/projects/spring-integration#learn
or just run on Java 9 or later, where the java.util.List.of(...) overloads exist!
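For the Java route, if you build with Maven and the spring-boot-starter-parent, bumping the compiler level is a one-property change (a minimal sketch; 11 is just one example of a JDK that ships these overloads):

<properties>
    <!-- spring-boot-starter-parent wires this into the compiler plugin;
         any JDK 9+ provides the java.util.List.of(...) overloads -->
    <java.version>11</java.version>
</properties>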

Related

StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory

This is regarding upgrading an existing code base in production, which uses windowing, from kafka-clients, kafka-streams and spring-kafka 2.4.0 to 2.6.x, and also upgrading spring-boot-starter-parent from 2.2.2.RELEASE to 2.3.x, as 2.2 is incompatible with kafka-streams 2.6.
The existing code had these beans, shown below, with the old versions (2.4.0, 2.2 Spring release):
#Bean("DataCompressionCustomTopology")
public Topology customTopology(#Qualifier("CustomFactoryBean") StreamsBuilder streamsBuilder) {
//Your topology code
return streamsBuilder.build();
}
#Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
//Your kafka streams code
return kafkaStreams;
}
Now, after upgrading kafka-streams and kafka-clients to 2.6.2 and spring-kafka to 2.6.x, the following exception was observed:
2021-05-13 12:33:51.954 [Persistence-Realtime-Transformation] [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'CustomFactoryBean'; nested exception is org.springframework.kafka.KafkaException: Could not start stream: ; nested exception is org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory
A similar error can happen when you are running multiple instances of the same application (name/id) on the same machine.
Please visit State.dir to get the idea.
You can add it to the Kafka configuration and make it unique per instance.
In case you are using Spring Cloud Stream (you can't have the same port on the same machine):
spring.cloud.stream.kafka.streams.binder.configuration.state.dir: ${spring.application.name}${server.port}
UPDATE:
In the case of Spring Kafka's streams support:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(APPLICATION_ID_CONFIG, springApplicationName);
    props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(StreamsConfig.STATE_DIR_CONFIG, String.format("%s%s", springApplicationName, serverPort));
    return new KafkaStreamsConfiguration(props);
}
or:
spring.kafka:
  bootstrap-servers: ....
  streams:
    properties:
      application.server: localhost:${server.port}
      state.dir: ${spring.application.name}${server.port}
The problem here is that newer versions of spring-kafka automatically initialize one more Kafka Streams instance based on the Topology bean, while another instance is initialized from the GenericKafkaStreams bean in the existing code base. This results in multiple threads trying to lock the same state directory, and thus the error.
Even disabling KafkaAutoConfiguration at the Spring Boot level does not disable this behavior. This was such a pain to identify and cost a lot of time.
The fix is to get rid of the Topology bean and have our own custom Kafka Streams bean, as in the code below:
protected Topology customTopology() {
    //topology code
    return streamsBuilder.build();
}

/**
 * This starts the kafka streams application and sets the state listener and state
 * store listener.
 *
 * @return KafkaStreams
 */
@Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
    KafkaStreams kafkaStreams = new KafkaStreams(customTopology(), kstreamsconfigs);
    return kafkaStreams;
}
If you have a sophisticated Kafka Streams topology in your Spring Cloud Stream Kafka Streams Binder 3.0 style application, you might need to specify different application ids for different functions, like the following:
spring.cloud.stream.function.definition: myFirstStream;mySecondStream
...
spring.cloud.stream.kafka.streams:
  binder:
    functions:
      myFirstStream:
        applicationId: app-id-1
      mySecondStream:
        applicationId: app-id-2
I've handled the problem on these versions:
org.springframework.boot version 2.5.3
org.springframework.kafka:spring-kafka:2.7.5
org.apache.kafka:kafka-clients:2.8.0
org.apache.kafka:kafka-streams:2.8.0
Check this: State directory
By default it is created in the temp folder with the Kafka Streams app id, like:
/var/folders/xw/xgslnvzj1zj6wp86wpd8hqjr0000gn/T/kafka-streams/${spring.kafka.streams.application-id}/.lock
If two or more Kafka Streams apps use the same spring.kafka.streams.application-id, then you get this exception.
So just change your Kafka Streams app ids.
Or set the directory option manually via StreamsConfig.STATE_DIR_CONFIG in the streams config.
The above answers about setting the state dir worked perfectly for me. Thanks.
Adding one observation that might be helpful for someone working with Spring Boot: when working on the same machine and trying to bring up multiple Kafka Streams application instances, if you have enabled the property spring.devtools.restart.enabled (which is mostly the case in the dev profile), you might want to disable it, because when the same application instance restarts automatically it might not get the store lock. This is what I was facing and I was able to resolve it by disabling the restart behavior.
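For reference, the flag can be switched off explicitly (a standard Spring Boot DevTools property):
spring.devtools.restart.enabled=false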
In my case, specifying a separate @TestConfiguration class in which I keep a counter that changes the application name for each Spring Boot test context works perfectly:
@TestConfiguration
public class TestKafkaStreamsConfig {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kStreamsConfig() {
        final var props = new HashMap<String, Object>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-application-id-" + COUNTER.getAndIncrement());
        // rest of configuration
        return new KafkaStreamsConfiguration(props);
    }
}
Of course I had to enable Spring bean overriding to replace the primary configuration.
Edit: I'm using Spring Boot v2.5.10, so in my case, to make use of the @TestConfiguration I have to pass it to the @SpringBootTest(classes = ...) annotation.
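For illustration, a minimal sketch of such a test (MyApplication and the test class name are placeholders, not from the original post); the properties entry enables the bean overriding mentioned above:

@SpringBootTest(
        classes = { MyApplication.class, TestKafkaStreamsConfig.class },
        properties = "spring.main.allow-bean-definition-overriding=true")
class MyKafkaStreamsIT {
    // tests go here
}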
I was facing the same problem: a single topology in Spring Boot, and I was trying to access the state store for interactive queries. In order to do so I needed a KafkaStreams object, as shown below.
GlobalKTable<String, String> configTable = builder.globalTable("config",
Materialized.<String, String, KeyValueStore<Bytes, byte[]>> as("config-store")
.withKeySerde(Serdes.String())
.withValueSerde(Serdes.String()));
KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
streams.start();
ReadOnlyKeyValueStore<String, String> configView = streams.store(StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
The problem is that the Spring Kafka factory bean already starts the topology, so calling streams.start() issues a second start and causes the lock contention on the state store.
This can be fixed by setting the auto-startup property to false:
spring.kafka.streams.auto-startup=false
That's all you need.
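If you still need a KafkaStreams handle for the interactive query while auto-startup is disabled, a minimal sketch (the service class and method names are illustrative, not from the question) is to reuse the instance managed by Spring Kafka's StreamsBuilderFactoryBean instead of constructing a second one:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.stereotype.Service;

@Service
public class ConfigStoreQueryService {

    private final StreamsBuilderFactoryBean factoryBean;

    public ConfigStoreQueryService(StreamsBuilderFactoryBean factoryBean) {
        this.factoryBean = factoryBean;
    }

    public ReadOnlyKeyValueStore<String, String> configView() {
        // with auto-startup=false this is the only start, so only one
        // KafkaStreams instance ever locks the state directory
        factoryBean.start();
        KafkaStreams streams = factoryBean.getKafkaStreams();
        return streams.store(StoreQueryParameters.fromNameAndType(
                "config-store", QueryableStoreTypes.keyValueStore()));
    }
}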

Starting Spring Boot application without check for Kafka Server

I have got an application that uses Spring Boot 2.10.0.Release and Kafka in version 2.10.0. The application has a simple producer and consumer: the sender works with KafkaTemplate and the consumer with @KafkaListener.
What I am trying to achieve is to be able to start the Spring Boot application even if the Kafka server is not running.
Currently, without a running Kafka broker the application cannot be started and fails with this error message:
org.springframework.context.ApplicationContextException:
Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry';
nested exception is org.apache.kafka.common.errors.TimeoutException
Is there a way to achieve this, and if yes, could anybody give me a hint or a keyword on how to manage this?
When running the Spring Boot application with a KafkaListener, the listener will by default try to listen to Kafka. If the Kafka broker is invalid or missing, you will get an org.apache.kafka.common.KafkaException.
You can change the default behaviour of the container factory by setting the autoStartup property to false. One way to do this is by adding the autoStartup = "false" element to your @KafkaListener annotation:
@KafkaListener(topics = "some_topic", autoStartup = "false")
public void fooEventListener() {
    // handle records here
}
Now your Spring Boot application will start. You will still get an error when trying to use the KafkaListener if the broker is down or invalid, but you will now be able to handle the error within your Java code instead of crashing the Spring Boot server.
Documentation about the KafkaListener autoStartup element.
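If you want to bring the listener up later, once the broker is reachable, a minimal sketch (the listener id and class name are illustrative and assume the annotation above also sets id = "fooListener"):

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Service;

@Service
public class ListenerStarter {

    private final KafkaListenerEndpointRegistry registry;

    public ListenerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // call this from your own health check or retry logic once Kafka is up
    public void startFooListener() {
        registry.getListenerContainer("fooListener").start();
    }
}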
It has to be mentioned that the error you are receiving (TimeoutException) is not because the broker is down; it is what Kafka will throw if the buffer is full.
The batched records will then be removed from the queue and will not be delivered to the broker. This error will not be the reason for your application using Kafka failing to start.

About mapping properties to a Java class in Spring Boot 2

I want to convert properties to a map, see below:
field2ZhNameMap.platform=平台
==>
private Map<String,String> field2ZhNameMap;
In Spring Boot 1.5.6, starting the app in Tomcat is OK, but using Spring Boot 2.0.0.M7 and starting the app in Tomcat I got the error below:
Caused by: org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under '' to com.foo.bar.util.Field2ZhNameProperties
at org.springframework.boot.context.properties.bind.Binder.handleBindError(Binder.java:227)
Caused by: java.lang.IllegalArgumentException: PropertyName must not be empty
at org.springframework.util.Assert.hasLength(Assert.java:233)
at org.springframework.boot.origin.PropertySourceOrigin.<init>(PropertySourceOrigin.java:41)
After debugging the source code I found that when starting the app in Tomcat there is a JndiPropertySource, which causes the above problem. So I had to explicitly disable the JndiPropertySource by specifying spring.jndi.ignore=true in a spring.properties file to solve this problem.
In addition, I found that classes like Binder do not exist in 1.5.6; it seems there was a big change from 1.5.6 to 2.0.0. So I want to know whether there are any documents recording these changes, and what the correct way is to map properties to a Java class in Spring Boot 2.
For me, upgrading to Spring Boot 2.0.1.RELEASE on Tomcat 8.5.30 resolved "PropertyName must not be empty".

Spring cloud data flow unable to find file

I have created a microservice in Spring Boot; there is a folder under the resources folder and then a file under that folder, i.e.:
resources
  mycustomfolder
    myfile.txt
I am creating a bean with a field populated from myfile:
#Value("${file-path}")
private String filePath;
#Bean
public MyBean byBean() throws IOException {
//read file path
String path = ResourceUtils.getURL(filePath).getPath();
//populated by bean
MyBean myBean = myservice.populatedMyBean(path);
return myBean;
}
The filePath value is set in application.properties:
dataload-config-file=src/main/resources/mycustomfolder/myfile.txt
When I execute this Spring Boot app, it works fine.
But when I create a jar of it and deploy it with Spring Cloud Data Flow, it gives me an error when creating myBean,
showing this exception cause:
Caused by: java.io.FileNotFoundException: /tmp/spring-cloud-dataflow-4865534318197521357/test-1506882530191/test.process/src/main/resources/mycustomfolder/myfile.txt (No such file or directory)
Why is this happening? It normally works fine, but throws an error with spring-cloud-dataflow.
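For what it's worth, a common cause is that a src/main/resources/... path only exists in the source tree, not on disk once the app is packaged as a jar. A minimal sketch of reading the file from the classpath instead (MyBean and myservice mirror the question; it assumes the service can be adapted to read from an InputStream):

import java.io.IOException;
import java.io.InputStream;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;

@Configuration
public class MyBeanConfig {

    private final MyService myservice; // MyService is assumed, mirroring the question

    public MyBeanConfig(MyService myservice) {
        this.myservice = myservice;
    }

    // application.properties: dataload-config-file=classpath:mycustomfolder/myfile.txt
    @Value("${dataload-config-file}")
    private Resource configFile;

    @Bean
    public MyBean byBean() throws IOException {
        // a Resource resolves both from the IDE and from inside the packaged jar
        try (InputStream in = configFile.getInputStream()) {
            return myservice.populatedMyBean(in);
        }
    }
}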
Spring Cloud Data Flow can only orchestrate Spring Cloud Stream (SCSt) or Spring Cloud Task (SCT) based microservice applications. It is unclear whether your Spring Boot application complies with the previously mentioned frameworks.
Please use the SCSt and SCT samples for reference. If your application does comply with the SCSt/SCT programming model, it would be better if you shared the source code for review.

disable RabbitAutoConfiguration programmatically

Is there a programmatic (properties-based) way of disabling RabbitAutoConfiguration in Spring Boot (1.2.2)?
It looks like spring.rabbitmq.dynamic=false disables just the AmqpAdmin, but not the connection factory etc.
We want a model where app properties might be sourced from Spring Cloud Config (including the control bus) or via -D JVM args. This decision is made at deployment time.
When properties are sourced from -D JVM args, we disable the Spring Cloud Config client, but Rabbit keeps throwing exceptions such as:
[org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer] - [Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect]
First you need to exclude RabbitAutoConfiguration from your app:
@EnableAutoConfiguration(exclude = RabbitAutoConfiguration.class)
Then you can import it back conditionally, based on some property, like this:
@Configuration
@ConditionalOnProperty(name = "myproperty", havingValue = "valuetocheck", matchIfMissing = false)
@Import(RabbitAutoConfiguration.class)
class RabbitOnConditionalConfiguration {
}
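Alternatively, on later Spring Boot versions (1.3 and up; the property does not exist yet in 1.2.2), the exclusion itself can be driven purely by properties:

spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration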
