How can I write a unit test for the below method:
public KStream<String, Object> process(KStream<String, Object> kstream) {
    return kstream
            .filter((key, value) -> isFilterRequired(value))
            .mapValues(this::mapValues);
}
I am using the Spring Cloud Stream framework. How could I mock KStream in order to invoke the process method, or how else could I test this piece of code in the Spring Kafka world?
Create a method that returns the Kafka Streams topology; then you will be able to test it using TopologyTestDriver. No need for mocking.
For more details, see the Testing Streams Code documentation from Confluent.
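For illustration, a minimal sketch of that approach (the topic names "input-topic"/"output-topic", the valueSerde used for the Object values, and someTestValue are assumptions for this example, not part of the original code; kafka-streams-test-utils needs to be on the test classpath):
@Test
void processFiltersAndMapsRecords() {
    // Wire the method under test into a stand-alone topology.
    StreamsBuilder builder = new StreamsBuilder();
    process(builder.stream("input-topic", Consumed.with(Serdes.String(), valueSerde)))
            .to("output-topic", Produced.with(Serdes.String(), valueSerde));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

    try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
        TestInputTopic<String, Object> input = driver.createInputTopic(
                "input-topic", Serdes.String().serializer(), valueSerde.serializer());
        TestOutputTopic<String, Object> output = driver.createOutputTopic(
                "output-topic", Serdes.String().deserializer(), valueSerde.deserializer());

        input.pipeInput("key-1", someTestValue);

        // Assert on whatever isFilterRequired()/mapValues() are expected to produce.
        Assertions.assertFalse(output.isEmpty());
    }
}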
I need to call a SOAP web service from my REST service. I'm using Spring Integration in my project. Currently I'm using XML-based configuration to achieve this, but I want to write the code in the Java DSL. Kindly help me with how to call a SOAP service from a REST service using the Spring Integration DSL.
One example would be really helpful.
See the documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/ws.html#webservices-dsl
@Bean
IntegrationFlow outboundMarshalled() {
    return f -> f.handle(Ws.marshallingOutboundGateway()
                    .id("marshallingGateway")
                    .marshaller(someMarshaller())
                    .unmarshaller(someUnmarshaller()))
            ...
}
or
.handle(Ws.simpleOutboundGateway(template)
        .uri(uri)
        .sourceExtractor(sourceExtractor)
        .encodingMode(DefaultUriBuilderFactory.EncodingMode.NONE)
        .headerMapper(headerMapper)
        .ignoreEmptyResponses(true)
        .requestCallback(requestCallback)
        .uriVariableExpressions(uriVariableExpressions)
        .extractPayload(false))
TLDR; How do you test a Reactive Function composition using the Test Binder?
I have a Spring Cloud Stream application that uses reactive functions, and I don't know how to test it. I don't see any official docs on how to do an integration test from the input source to the output destination binder.
In my specific case, I am connecting a Spring Integration flow using a reactive Supplier and the IntegrationReactiveUtils.messageChannelToFlux() pattern. This works in a development environment - I can pull messages from RabbitMQ using the Spring Integration flow and they enter the SCSt.
My SCSt has several functions chained together, each one reactive. They are composed like func1|func2|func3. I verified this works with a dev Rabbit (source) and Kafka (destination).
I can't seem to figure out how to test this, and there doesn't seem to be any official documentation on testing a complete reactive stream. Right now I have code that roughly looks like this:
@Autowired
MessageChannel inputChannel;

@Autowired
private OutputDestination output;

@Test
void myTest() {
    // omitted prep of var 'messageToSend'
    this.inputChannel.send(messageToSend);
    var outputMessage = output.receive(5000);
    Assertions.assertNotNull(outputMessage.getPayload());
}
The error I receive is that output.receive(5000) returns null. I suspect a threading issue because I am not subscribing to the Flux and waiting for completion.
I have run a debugger in the Flux functions and see the message going all the way to the end with no errors or weirdness.
I figured this out, actually. I had to specify the binding name. I had a test property spring.cloud.stream.bindings.processingStream set, which I thought made two new bindings (processingStream-in-0 and processingStream-out-0).
It turns out I had to set the binding name in the test code like output.receive(5000, "processingStream"), without the -out-0 suffix. I can now receive messages from the stream.
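So the working test looks roughly like this (everything else stays as before; only the receive call changes):
@Test
void myTest() {
    // omitted prep of var 'messageToSend'
    this.inputChannel.send(messageToSend);

    // Read from the named binding rather than the default destination.
    var outputMessage = output.receive(5000, "processingStream");
    Assertions.assertNotNull(outputMessage.getPayload());
}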
This is regarding upgrading an existing code base in production which uses windowing from kafka-clients, kafka-streams, and spring-kafka 2.4.0 to 2.6.x, and also upgrading spring-boot-starter-parent from 2.2.2.RELEASE to 2.3.x, as 2.2 is incompatible with kafka-streams 2.6.
The existing code had the beans mentioned below with the old versions (kafka 2.4.0, Spring 2.2 release):
#Bean("DataCompressionCustomTopology")
public Topology customTopology(#Qualifier("CustomFactoryBean") StreamsBuilder streamsBuilder) {
//Your topology code
return streamsBuilder.build();
}
#Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
//Your kafka streams code
return kafkaStreams;
}
Now, after upgrading kafka-streams and kafka-clients to 2.6.2 and spring-kafka to 2.6.x, the following exception was observed:
2021-05-13 12:33:51.954 [Persistence-Realtime-Transformation] [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'CustomFactoryBean'; nested exception is org.springframework.kafka.KafkaException: Could not start stream: ; nested exception is org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory
A similar error can happen when you are running multiple instances of the same application (same name/id) on the same machine.
Please see the state.dir configuration to get the idea.
You can add it to the Kafka configuration and make it unique per instance.
In case you are using Spring Cloud Stream (two instances can't share the same port on the same machine):
spring.cloud.stream.kafka.streams.binder.configuration.state.dir: ${spring.application.name}${server.port}
UPDATE:
In the case of Spring for Apache Kafka (spring-kafka):
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(APPLICATION_ID_CONFIG, springApplicationName);
    props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(StreamsConfig.STATE_DIR_CONFIG, String.format("%s%s", springApplicationName, serverPort));
    return new KafkaStreamsConfiguration(props);
}
or:
spring.kafka:
  bootstrap-servers: ....
  streams:
    properties:
      application.server: localhost:${server.port}
      state.dir: ${spring.application.name}${server.port}
The problem here is that newer versions of spring-kafka automatically initialize one more Kafka Streams instance based on the Topology bean, while another instance is initialized from the GenericKafkaStreams bean in the existing code base. This results in multiple threads trying to lock the same state directory, and thus the error.
Even disabling KafkaAutoConfiguration at the Spring Boot level does not disable this behavior. This was such a pain to identify, and it cost a lot of time.
The fix is to get rid of the Topology bean and keep only our own custom Kafka Streams bean, as in the code below:
protected Topology customTopology() {
    // topology code
    return streamsBuilder.build();
}

/**
 * This starts the kafka streams application and sets the state listener and state
 * store listener.
 *
 * @return KafkaStreams
 */
@Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
    KafkaStreams kafkaStreams = new KafkaStreams(customTopology(), kstreamsconfigs);
    return kafkaStreams;
}
If you have a sophisticated Kafka Streams topology in your Spring Cloud Stream Kafka Streams binder 3.0 style application, you might need to specify different application ids for different functions, like the following:
spring.cloud.stream.function.definition: myFirstStream;mySecondStream
...
spring.cloud.stream.kafka.streams:
  binder:
    functions:
      myFirstStream:
        applicationId: app-id-1
      mySecondStream:
        applicationId: app-id-2
I handled this problem on these versions:
org.springframework.boot version 2.5.3
org.springframework.kafka:spring-kafka:2.7.5
org.apache.kafka:kafka-clients:2.8.0
org.apache.kafka:kafka-streams:2.8.0
Check this: State directory.
By default it is created in the temp folder with the Kafka Streams app id, like:
/var/folders/xw/xgslnvzj1zj6wp86wpd8hqjr0000gn/T/kafka-streams/${spring.kafka.streams.application-id}/.lock
If two or more Kafka Streams apps use the same spring.kafka.streams.application-id, then you get this exception.
So just change your Kafka Streams app ids.
Or set the state directory manually via StreamsConfig.STATE_DIR_CONFIG in the streams config.
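For example, a small sketch in plain Java configuration (the application id and the instanceId suffix are hypothetical per-instance values, not from the original post):
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
// Give each instance its own state directory to avoid the lock conflict.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams/" + instanceId);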
The answers above about setting the state dir work perfectly for me, thanks.
Adding one observation that might be helpful for someone working with Spring Boot: when working on the same machine and trying to bring up multiple Kafka Streams application instances, if you have enabled the property spring.devtools.restart.enabled (which is mostly the case in a dev profile), you might want to disable it, because when the same application instance restarts automatically it might not get the store lock. This is what I was facing, and I was able to resolve it by disabling the restart behavior.
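For reference, a minimal way to switch that off in a dev or test profile:
spring.devtools.restart.enabled=false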
In my case, what works perfectly is specifying a separate @TestConfiguration class in which I keep a counter that changes the application name for each Spring Boot test context.
@TestConfiguration
public class TestKafkaStreamsConfig {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kStreamsConfig() {
        final var props = new HashMap<String, Object>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-application-id-" + COUNTER.getAndIncrement());
        // rest of configuration
        return new KafkaStreamsConfiguration(props);
    }
}
Of course I had to enable Spring bean overriding to replace the primary configuration.
Edit: I'm using Spring Boot v2.5.10, so in my case, to make use of @TestConfiguration I have to pass it to the @SpringBootTest(classes = ...) annotation.
I was facing the same problem: a single topology in Spring Boot, and I was trying to access the state store for interactive queries. In order to do so I needed a KafkaStreams object, as shown below.
GlobalKTable<String, String> configTable = builder.globalTable("config",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("config-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));

KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
streams.start();

ReadOnlyKeyValueStore<String, String> configView = streams.store(
        StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
The problem is that the Spring Kafka factory bean already starts a topology, so calling streams.start() here acquires the state store lock a second time. This can be fixed by setting the auto-startup property to false.
spring.kafka.streams.auto-startup=false
That's all you need.
I have two Java processes which get spawned from the same JAR using different run configurations:
Process A - a client UI component, developed using the Spring XML bean configuration approach. No Spring Boot is involved.
Process B - a new Spring Boot based component that hosts REST endpoints.
Now, from Process A, on various button clicks, how can I call the REST endpoints on Process B using a Feign client?
Note - since Process A is Spring XML based, right at the moment we cannot convert it to Spring Boot. Hence @EnableFeignClients cannot be used to initialise the Feign clients.
So, two questions:
1) If the above is possible, how do I do it?
2) Until Process A is moved to Spring Boot, is Feign still an easier option than Spring RestTemplate?
Feign is a Java to HTTP client binder inspired by Retrofit, JAXRS-2.0, and WebSockets, and you can easily use Feign without Spring Boot. And yes, Feign is still the better option, because it simplifies HTTP API clients in a declarative way, much as Spring's REST support does.
1) Define the HTTP methods and endpoints in an interface.
#Headers({"Content-Type: application/json"})
public interface NotificationClient {
#RequestLine("POST")
String notify(URI uri, #HeaderMap Map<String, Object> headers, NotificationBody body);
}
2) Create the Feign client using the Feign.builder() method.
Feign.builder()
        .encoder(new JacksonEncoder())
        .decoder(customDecoder())
        .target(Target.EmptyTarget.create(NotificationClient.class));
There are various decoders available in feign to simplify your tasks.
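A rough usage sketch of the client above (the URI, the header value, the JacksonDecoder, and the no-arg NotificationBody constructor are placeholder assumptions, not from the original post):
NotificationClient client = Feign.builder()
        .encoder(new JacksonEncoder())
        .decoder(new JacksonDecoder())
        .target(Target.EmptyTarget.create(NotificationClient.class));

Map<String, Object> headers = Map.of("X-Api-Key", "demo-key");
// With EmptyTarget, the URI passed as the first argument determines the endpoint.
String response = client.notify(URI.create("https://example.org/notifications"),
        headers, new NotificationBody());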
You can just initialize Feign in any code (without Spring), just like in the README example:
public static void main(String... args) {
    GitHub github = Feign.builder()
            .decoder(new GsonDecoder())
            .target(GitHub.class, "https://api.github.com");
    ...
}
Please take a look at the getting-started guide: Feign on GitHub.
I am trying to configure a Spring Batch listener to send a message to a Spring Integration Gateway for StepExecution events.
The following link explains how to configure this with XML
http://docs.spring.io/spring-batch/trunk/reference/html/springBatchIntegration.html#providing-feedback-with-informational-messages
How can this be set up using the Spring Integration DSL? I've found no way to configure a gateway with a service interface using the DSL.
At the moment I worked around this by implementing an actual StepExecutionListener and having it call an interface annotated with @MessagingGateway (calling the corresponding @Gateway method) in order to get a message to a channel. I then set up an Integration DSL flow for this channel.
Is there a simpler way using the DSL that avoids this workaround? Is there some way to connect a Batch listener directly to a gateway, as one can with XML config?
Cheers,
Menno
First of all, the SI Java DSL is just an extension of the existing SI Java and annotation configuration, so it can be used together with any other Java config. Of course an XML @Import is also possible.
There is no gateway configuration in the DSL, because its methods can't be wired with a linear IntegrationFlow; you would need to provide a downstream flow for each method.
So, @MessagingGateway is the right way to go:
@MessagingGateway(name = "notificationExecutionsListener", defaultRequestChannel = "stepExecutionsChannel")
public interface MyStepExecutionListener extends StepExecutionListener {}
On the other hand, @MessagingGateway parsing, as well as <gateway> tag parsing, ends up with a GatewayProxyFactoryBean definition. So you can just declare that bean directly, if you don't want to introduce a new class:
@Bean
public GatewayProxyFactoryBean notificationExecutionsListener(MessageChannel stepExecutionsChannel) {
    GatewayProxyFactoryBean gateway = new GatewayProxyFactoryBean(StepExecutionListener.class);
    gateway.setDefaultRequestChannel(stepExecutionsChannel);
    return gateway;
}
After the latest Milestone 3, I have an idea to introduce nested flows, at which point we may be able to add gateway support for flows. Something like this:
@Bean
public IntegrationFlow gatewayFlow() {
    return IntegrationFlows
            .from(MyGateway.class, g ->
                    g.method("save", f -> f.transform(...)
                                           .filter(...))
                     .method("delete", f -> f.handle(...)))
            .handle(...)
            .get();
}
However, I'm not sure it would simplify life, since any nested lambda just adds more noise and might break the loose-coupling principle.