EnableBinding, Output, Input deprecated since version 3.1 of Spring Cloud Stream

Since version 3.1, the main annotation-based API for working with queues is deprecated.
In the class comment it says:
Deprecated
as of 3.1 in favor of functional programming model
I searched a lot in the web for a solution but didn't find a solid E2E explanation on how I should migrate.
Looking for examples for:
read from queue
write to queue
If there are a few ways to do that (as I saw in web) I'd be glad for an explanation and the typical use case for each option as well.

I'm assuming you are already familiar with the main concepts, and will focus on the migration.
I'm using kotlin for the demo code, to reduce verbosity
First, some references which may help:
Here is the initial relevant doc: link
This is an explanation for the naming scheme in the new functional format: link
This is a more detailed explanation with some more advanced scenarios: link
TL;DR
Instead of working with annotation-based configuration, spring now uses detected beans of Consumer/Function/Supplier to define your streams for you.
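Stripped of Spring, these three shapes are just the standard java.util.function interfaces. Here is a minimal, framework-free Java sketch of what each one maps to (the bean names and payloads are illustrative, not part of any Spring API):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class FunctionalShapes {
    public static void main(String[] args) {
        // Consumer<T>: an input-only binding (replaces @Input + @StreamListener)
        Consumer<String> gradesChannel = grade -> System.out.println("Received " + grade);

        // Function<T, R>: an input plus an output binding (read, transform, write)
        Function<String, String> uppercase = String::toUpperCase;

        // Supplier<T>: an output-only binding (replaces @Output)
        Supplier<String> studentsChannel = () -> "Adam";

        gradesChannel.accept("A+");                    // prints "Received A+"
        System.out.println(uppercase.apply("grade"));  // prints "GRADE"
        System.out.println(studentsChannel.get());     // prints "Adam"
    }
}
```

Spring Cloud Stream detects beans of exactly these types and wires each one to a message binding.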
Input/Consumer
Whereas before you had code looking like this:
interface BindableGradesChannel {
    @Input
    fun gradesChannel(): SubscribableChannel

    companion object {
        const val INPUT = "gradesChannel"
    }
}
and the usage was similar to:
@Service
@EnableBinding(BindableGradesChannel::class)
class GradesListener {
    private val log = LoggerFactory.getLogger(GradesListener::class.java)

    @StreamListener(BindableGradesChannel.INPUT)
    fun listen(grade: Grade) {
        log.info("Received $grade")
        // do something
    }
}
now the entire interface definition is unnecessary, and the listener can be written like so:
@Service
class GradesListener {
    private val log = LoggerFactory.getLogger(GradesListener::class.java)

    @Bean
    fun gradesChannel(): Consumer<Grade> {
        return Consumer { listen(grade = it) }
    }

    fun listen(grade: Grade) {
        log.info("Received $grade")
        // do something
    }
}
Notice how the Consumer bean replaced the @StreamListener and the @Input.
Regarding the configuration: if before you had an application.yml looking like so:
spring:
  cloud:
    stream:
      bindings:
        gradesChannel:
          destination: GradesExchange
          group: grades-updates
          consumer:
            concurrency: 10
            max-attempts: 3
now it should be like so:
spring:
  cloud:
    stream:
      bindings:
        gradesChannel-in-0:
          destination: GradesExchange
          group: grades-updates
          consumer:
            concurrency: 10
            max-attempts: 3
Notice how gradesChannel was replaced by gradesChannel-in-0 - to understand the full naming convention, please see the naming-convention link at the top.
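As a quick mnemonic, the derived names can be built mechanically from the bean name and argument index; a tiny Java sketch of the `<beanName>-in-<index>` / `<beanName>-out-<index>` scheme (the helper names here are my own, not Spring's):

```java
public class BindingNames {
    // Spring Cloud Stream derives binding names from the functional bean's name:
    // inputs become <beanName>-in-<index>, outputs become <beanName>-out-<index>.
    static String inputBinding(String beanName, int index) {
        return beanName + "-in-" + index;
    }

    static String outputBinding(String beanName, int index) {
        return beanName + "-out-" + index;
    }

    public static void main(String[] args) {
        System.out.println(inputBinding("gradesChannel", 0));    // gradesChannel-in-0
        System.out.println(outputBinding("studentsChannel", 0)); // studentsChannel-out-0
    }
}
```

The index only matters for functions with multiple inputs or outputs; for a plain Consumer or Supplier it is always 0.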
Some details:
If you have more than one such bean in your application, you need to define the spring.cloud.function.definition property.
You have the option to give your channels custom names, so if you'd like to continue using gradesChannel you can set spring.cloud.stream.function.bindings.gradesChannel-in-0=gradesChannel and use everywhere in the configuration gradesChannel.
Output/Supplier
The concept here is similar, you replace config and code looking like this:
interface BindableStudentsChannel {
    @Output
    fun studentsChannel(): MessageChannel
}
and
@Service
@EnableBinding(BindableStudentsChannel::class)
class StudentsQueueWriter(private val studentsChannel: BindableStudentsChannel) {
    fun publish(message: Message<Student>) {
        studentsChannel.studentsChannel().send(message)
    }
}
can now be replaced by:
@Service
class StudentsQueueWriter {
    @Bean
    fun studentsChannel(): Supplier<Student> {
        return Supplier { Student("Adam") }
    }
}
As you can see, we have a major difference - when is it called, and by whom?
Before, we could trigger it manually; now it is triggered by spring, every second (by default). This is fine for use cases such as publishing sensor data every second, but it is not good when you want to send a message in response to an event. Besides using a Function (if that fits your flow), spring offers 2 alternatives:
StreamBridge - link
Using StreamBridge you can define the target explicitly like so:
@Service
class StudentsQueueWriter(private val streamBridge: StreamBridge) {
    fun publish(message: Message<Student>) {
        streamBridge.send("studentsChannel-out-0", message)
    }
}
This way you don't define the target channel as a bean, but you can still send the message. The downside is that you have some explicit configuration in your class.
Reactor API - link
The other way is to use some kind of reactive mechanism such as Sinks.Many, and to return it. Using this your code will look similar to:
@Service
class StudentsQueueWriter {
    val students: Sinks.Many<Student> = Sinks.many().multicast().onBackpressureBuffer()

    @Bean
    fun studentsChannel(): Supplier<Flux<Student>> {
        return Supplier { students.asFlux() }
    }
}
and the usage may be similar to:
class MyClass(val studentsQueueWriter: StudentsQueueWriter) {
    fun newStudent() {
        studentsQueueWriter.students.tryEmitNext(Student("Adam"))
    }
}

Related

Spring Cloud Stream 3 RabbitMQ consumer not working

I'm able to make Spring+Rabbit work the non-functional way (prior to 2.0?), but I'm trying to use the functional pattern, as the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
The queue (consumer) is not being created in Rabbit with the new method. I can see the connection being created but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class
@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but use Function instead
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing
// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {
    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {
    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to use this property to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could be there for purposes other than handling messages, yet being the only one it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional with links to repos with more in depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x

Spring Cloud Stream - route to multiple dynamic destinations at runtime

I have a use-case where I need to produce to multiple Kafka topics/destinations determined at runtime. I tried returning a Flux<Message<T>> from a functional bean of type Function, setting the header spring.cloud.stream.sendto.destination on each Message as described here. I came up with the following implementation:
@Bean
public Function<Person, Flux<Message<Person>>> route() {
    return person -> Flux.fromIterable(Stream.of(person.getEvents())
            .map(e -> MessageBuilder.withPayload(person)
                    .setHeader("spring.cloud.stream.sendto.destination", e).build())
            .collect(Collectors.toList()));
}
and I have also this in my config:
spring.cloud.stream.dynamic-destinations=
This is my Person:
@AllArgsConstructor
@NoArgsConstructor
@Data
public class Person {
    private String[] events;
    private String name;
}
events contains the list of Kafka topic names.
However, it doesn't work. What am I missing?
spring.cloud.stream.sendto.destination uses BinderAwareChannelResolver internally which is deprecated in favor of StreamBridge. I think you can rewrite your code as below. I haven't tested it, but here is the template:
@Autowired
StreamBridge streamBridge;

@Bean
public Consumer<Person> route() {
    return person -> streamBridge.send(person.getName(), person);
}
Behind the scenes, Spring Cloud Stream will create a binding for Person dynamically.
If you know your destinations in advance at deployment time, you can also set them through configuration, e.g. spring.cloud.stream.source as foo;bar..;.... The framework then creates output bindings in the form foo-out-0, bar-out-0, etc. Then you need to set the destinations - spring.cloud.stream.bindings.foo-out-0.destination=foo. But since your use case is strictly about dynamic destinations, you can't go with this approach; rather, try using what I suggested above.
One solution that works uses BinderAwareChannelResolver. However, it's deprecated in favor of providing the spring.cloud.stream.sendto.destination property in 3.0.
@Autowired
private BinderAwareChannelResolver binderAwareChannelResolver;

@Bean
public Consumer<Person> route() {
    return person ->
            Stream.of(person.getEvents())
                    .forEach(e -> binderAwareChannelResolver.resolveDestination(e)
                            .send(MessageBuilder.withPayload(person).build()));
}
I don't like this solution because it combines the function-based programming model with the "legacy-style" programming model. If anyone has a better solution, please feel free to comment/answer.

Convert Spring Cloud Stream to use reactive cloud function

Currently I have a Spring Boot application which is something like this.
@Component
@EnableBinding(Source::class)
class EmailMessageProducer(private val source: Source) {
    suspend fun send(textMessage: TextMessage) {
        source.output().send(
            MessageBuilder.withPayload(textMessage).setHeader("service", "test").build()
        )
    }
}
I would like to use Spring Cloud Function here using reactive pattern.
Furthermore, is my current solution non-blocking? I am asking because this is my first time using Kotlin coroutines in this context.
Java solution works for me as well since I am just trying to understand the concept here.
What you're looking for is a reactive Supplier (e.g., Supplier<Flux>).
In your case it would look something like this:
@SpringBootApplication
public class SomeApplication {
    @Bean
    public Supplier<Flux<Message<String>>> messageProducer() {
        return () -> Flux.just(MessageBuilder.withPayload(textMessage).setHeader("service", "test").build());
    }
}
Provide spring.cloud.function.definition=messageProducer property and this is pretty much it.
Obviously the above example produces a finite stream with a single item, but feel free to modify the returned flux. In fact, we discuss this in more detail here.
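The finite-versus-continuing distinction can be illustrated without any reactive library; a framework-free Java sketch using java.util.stream (the payloads are made up, and the unbounded generator is limited so the demo terminates):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FiniteVsContinuing {
    public static void main(String[] args) {
        // A finite stream with a single item, like Flux.just(...) above
        List<String> single = Stream.of("message").collect(Collectors.toList());
        System.out.println(single); // [message]

        // An unbounded generator, capped at 3 here so the demo ends;
        // a reactive Supplier<Flux<...>> can keep emitting the same way
        List<String> many = Stream.generate(() -> "tick").limit(3).collect(Collectors.toList());
        System.out.println(many); // [tick, tick, tick]
    }
}
```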

Ordering multiple RouterFunctions

I have multiple RouterFunctions which I register as beans (one per section of code).
One of them is /** for dynamic routing for React. Basically, if no other route matches, I want it to go to that one.
The problem is sometimes, depending on the whims of what order they are used, the /** will block another endpoint.
Is there a way to order the separate RouterFunctions or a better way to deal with having everything that doesn't match something else go to a specific route?
Spring WebFlux gathers all RouterFunction beans and reduces them into one using RouterFunction::andOther (see RouterFunctionMapping).
So you can just order your RouterFunction beans as regular beans and Spring WebFlux will do the rest.
@Bean
@Order(1)
public RouterFunction first() {
    // ...
}

@Bean
@Order(2)
public RouterFunction second() {
    // ...
}

@Bean
@Order(3)
public RouterFunction third() {
    // ...
}
I've worked out a solution that takes advantage of the fact that RouterFunction has an add() function to combine them together.
First, I had RouterFunction beans that looked something like this:
@Configuration
class MyRouter {
    @Bean
    fun myRoutes(myHandler: MyHandler): RouterFunction<ServerResponse> =
        router {
            GET("/path", myHandler::handlePath)
        }
}
I had multiple of these, and if there were some conflicting paths (like /**), which one ran was kind of a question mark.
I decided to merge these into one RouterFunction where I could control the order, since I didn't want to manually manage them somewhere (i.e., if I made a new Router class, I just wanted it picked up automatically).
First, I had to make my normal routes no longer beans. I also needed an easy way to let Spring find them all, so I decided to create an abstract class for them to extend.
That looked like this:
abstract class RouterConfig {
    open val priority: Int = 0 // higher number, later in list

    open val routes: RouterFunction<ServerResponse>
        get() = TODO()
}
The priority lets me override the order they are added (with the default being fine if it doesn't matter). A larger number means it'll be loaded later.
After that, I changed my Router classes to Components extending RouterConfig, and made them no longer produce a bean. Something like this:
@Component
class MyRouter(
    private val myHandler: MyHandler
) : RouterConfig() {
    override val routes: RouterFunction<ServerResponse>
        get() = router {
            GET("/path", myHandler::handlePath)
        }
}
Finally, I created a new class and bean which gathers them all up:
@Configuration
class AppRouter {
    @Bean
    fun appRoutes(routerConfigs: List<RouterConfig>): RouterFunction<ServerResponse> =
        routerConfigs.sortedBy { it.priority }
            .map { it.routes }
            .reduce { r, c -> r.add(c) }
}
And that seemed to do the trick. Now the routes will be added in priority order, so for the one that might be a bit greedy (/**) I simply set the priority of that class to 100 to make it come last.
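The gather-sort-reduce step itself is plain collection work. Here is a framework-free Java sketch of the same merge (RouterConfig here is a stand-in record, not the Spring-related class above):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class PriorityMerge {
    // Stand-in for the abstract RouterConfig: a priority plus some routes.
    record RouterConfig(int priority, List<String> routes) {}

    static List<String> merge(List<RouterConfig> configs) {
        // Sort by priority (higher = later), then concatenate the route lists,
        // mirroring sortedBy { it.priority }.map { it.routes }.reduce { r, c -> r.add(c) }
        return configs.stream()
                .sorted(Comparator.comparingInt(RouterConfig::priority))
                .flatMap(c -> c.routes().stream())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> merged = merge(List.of(
                new RouterConfig(100, List.of("/**")),  // greedy catch-all, forced last
                new RouterConfig(0, List.of("/path"))));
        System.out.println(merged); // [/path, /**]
    }
}
```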

How do I set up a MongoDB message store in an aggregator using annotations

I am trying to add an aggregator to my code.
A couple of problems I am facing:
1. How do I set up a message store using annotations only?
2. Is there any design documentation for how the aggregator works? Basically some picture explaining the same.
@MessageEndpoint
public class Aggregator {

    @Aggregator(inputChannel = "abcCH", outputChannel = "reply", sendPartialResultsOnExpiry = "true")
    public APayload aggregatingMethod(List<APayload> items) {
        return items.get(0);
    }

    @ReleaseStrategy
    public boolean canRelease(List<Message<?>> messages) {
        return messages.size() > 2;
    }

    @CorrelationStrategy
    public String correlateBy(Message<AbcPayload> message) {
        return (String) message.getHeaders().get(RECEIVED_MESSAGE_KEY);
    }
}
In the Reference Manual we have a note:
Annotation configuration (@Aggregator and others) for the Aggregator component covers only simple use cases, where most default options are sufficient. If you need more control over those options using Annotation configuration, consider using a @Bean definition for the AggregatingMessageHandler and mark its @Bean method with @ServiceActivator:
And a bit below:
Starting with the version 4.2 the AggregatorFactoryBean is available, to simplify Java configuration for the AggregatingMessageHandler.
So, actually you should configure an AggregatorFactoryBean as a @Bean and with @ServiceActivator(inputChannel = "abcCH", outputChannel = "reply").
Also consider using the Spring Integration Java DSL to simplify your life with the Java configuration.
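To see what the correlation and release strategies do without any Spring Integration on the classpath, here is a framework-free sketch of the aggregator's core loop (the class, method, and key names are mine; in Spring Integration the correlation key would come from a message header and the store could be backed by MongoDB):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class AggregatorSketch {
    // The "message store": groups of pending payloads keyed by correlation key.
    private final Map<String, List<String>> groups = new HashMap<>();

    // Correlation strategy: group messages by a key.
    // Release strategy: release a group once it holds more than 2 messages,
    // mirroring messages.size() > 2 in the question's @ReleaseStrategy.
    public Optional<List<String>> accumulate(String correlationKey, String payload) {
        List<String> group = groups.computeIfAbsent(correlationKey, k -> new ArrayList<>());
        group.add(payload);
        if (group.size() > 2) {
            groups.remove(correlationKey);
            return Optional.of(group); // the aggregated result, released downstream
        }
        return Optional.empty(); // still waiting for more messages
    }

    public static void main(String[] args) {
        AggregatorSketch agg = new AggregatorSketch();
        System.out.println(agg.accumulate("order-1", "a")); // Optional.empty
        System.out.println(agg.accumulate("order-1", "b")); // Optional.empty
        System.out.println(agg.accumulate("order-1", "c")); // Optional[[a, b, c]]
    }
}
```

Swapping the HashMap for a persistent MessageStore implementation is exactly what the MongoDB message store configuration provides.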