Spring 6: Spring Cloud Stream Kafka - Replacement for @EnableBinding

I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on microservices.
Now with Spring Boot 3 a few things changed. In the book, an easy example of how to push messages to a topic and how to consume messages from a topic was presented.
The problem is: the examples presented simply do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source) {
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3? Config-wise and code-wise?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    private static final Logger log = LoggerFactory.getLogger(LicenseServiceApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
The intent is that whenever a message arrives on orgChangeTopic, the method loggerSink fires.
How do I do this in Spring Boot 3?

In Spring Cloud Stream 4.0.0 (the version used if you are using Boot 3), a few things are removed - such as EnableBinding, StreamListener, etc. We deprecated them before in 3.x and finally removed them in the 4.0.0 version. The annotation-based programming model is removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as java.util.function.Function|Consumer|Supplier - for a processor, sink, and source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
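For reference, a processor in this model is just a java.util.function.Function bean plus matching bindings. A minimal sketch (the uppercase function and topic names here are illustrative, not from the book's project):
@SpringBootApplication
public class ProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProcessorApplication.class, args);
    }

    // Bound automatically as uppercase-in-0 (input) and uppercase-out-0 (output)
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}
spring.cloud.function.definition=uppercase
spring.cloud.stream.bindings.uppercase-in-0.destination=inputTopic
spring.cloud.stream.bindings.uppercase-out-0.destination=outputTopic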
Your example #1 can be rewritten like this:
@Component
public class SimpleSourceBean {

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Just so you know, you no longer need that zkNodes property. Nor the content type, since the framework auto-converts it for you.
StreamBridge#send takes a binding name and the payload. The binding name can be anything - but for consistency reasons, we used output-out-0 here. Please read the reference docs for more context on the reasoning behind this binding name.
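As a side note (this is my understanding of the resolution behavior, so treat it as an assumption): if the name you pass to StreamBridge does not match a configured binding, the framework resolves it as a destination and creates a binding on the fly, so you could also send straight to the topic:
// Sketch: "orgChangeTopic" is treated as the destination (the Kafka topic)
// when no binding with that name is configured
streamBridge.send("orgChangeTopic", MessageBuilder.withPayload(change).build());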
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using a StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // build and return the payload here
        return null; // placeholder
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
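One thing to keep in mind with the Supplier approach: by default the framework polls the supplier roughly once per second; if I recall correctly, that interval can be tuned via spring.cloud.stream.poller.fixed-delay.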
Your example #2 can be rewritten like this:
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return model -> {
        log.info("Received an {} event for organization id {}",
                model.getAction(), model.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than loggerSink-in-0, output-out-0, etc., there are ways to make that happen; a sketch follows. Details are in the reference docs.
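If I read the reference docs correctly, one such way is the function-binding alias property; a sketch for the consumer above (treat the exact property names as an assumption and verify against the docs):
spring.cloud.stream.function.bindings.loggerSink-in-0=input
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup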

Related

Spring Cloud Stream Resolving Input Channel dynamically based on Message

I need a way of resolving an inbound channel dynamically based on the type of the incoming message.
I am not looking for any header based solution which is already mentioned in this link
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.0.M1/spring-cloud-stream.html#_using_streamlistener_for_content_based_routing
The resolution has to happen based on the type of the message. If there is a custom binding that can be set up at application startup to achieve this, that should be OK; please give me some samples of how this can be achieved.
There is no such support in Spring Cloud Stream.
The underlying Spring for Apache Kafka project does have support for such scenarios.
See @KafkaListener on a Class.
It requires the payload to have been deserialized by the Kafka deserializer; then the method called depends on the payload type.
It also supports a fallback "default" method.
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

    @KafkaHandler(isDefault = true)
    public void listenDefault(Object object) {
        ...
    }
}

Dynamic to() in Apache Camel Route

I am writing a demo program using Apache Camel. Our Camel route is called from a Spring Boot scheduler and transfers a file from the source directory C:\CamelDemo\inputFolder to the destination directory C:\CamelDemo\outputFolder.
The Spring Boot scheduler is as follows:
@Component
public class Scheduler {

    @Autowired
    private ProducerTemplate producerTemplate;

    @Scheduled(cron = "#{@getCronValue}")
    public void scheduleJob() {
        System.out.println("Scheduler executing");
        String inputEndpoint = "file:C:\\CamelDemo\\inputFolder?noop=true&sendEmptyMessageWhenIdle=true";
        String outputEndpoint = "file:C:\\CamelDemo\\outputFolder?autoCreate=false";
        Map<String, Object> headerMap = new HashMap<String, Object>();
        headerMap.put("inputEndpoint", inputEndpoint);
        headerMap.put("outputEndpoint", outputEndpoint);
        producerTemplate.sendBodyAndHeaders("direct:transferFile", null, headerMap);
        System.out.println("Scheduler complete");
    }
}
The Apache Camel route is as follows:
@Component
public class FileTransferRoute extends RouteBuilder {

    @Override
    public void configure() {
        errorHandler(defaultErrorHandler()
                .maximumRedeliveries(3)
                .redeliverDelay(1000)
                .retryAttemptedLogLevel(LoggingLevel.WARN));

        from("direct:transferFile")
                .log("Route reached")
                .log("Input Endpoint: ${in.headers.inputEndpoint}")
                .log("Output Endpoint: ${in.headers.outputEndpoint}")
                .pollEnrich().simple("${in.headers.inputEndpoint}")
                .recipientList(header("outputEndpoint"));
                //.to("file:C:\\CamelDemo\\outputFolder?autoCreate=false")
    }
}
When I comment out the line for recipientList() and uncomment the to(), i.e. give a static endpoint in to(), the flow works. But when I comment out to() and uncomment recipientList(), it does not work. Please help me route the message to the dynamic endpoint (outputEndpoint).
You are using pollEnrich without specifying an AggregationStrategy: in this case, Camel creates a new OUT message from the retrieved resource without combining it with the original IN message. This means you lose the headers previously set on the IN message.
See the documentation: https://camel.apache.org/manual/latest/enrich-eip.html#_a_little_enrich_example_using_java
strategyRef Refers to an AggregationStrategy to be used to merge the reply from the external service, into a single outgoing message. By default Camel will use the reply from the external service as outgoing message.
A simple solution is to define a small AggregationStrategy on your pollEnrich, which simply copies the headers from the IN message onto the new OUT message (note that the message body will then be the newly retrieved resource rather than the original IN body, but in your case I guess that's not a problem):
from("direct:transferFile")
.log("Route reached")
.log("Input Endpoint: ${in.headers.inputEndpoint}")
.log("Output Endpoint: ${in.headers.outputEndpoint}")
.pollEnrich().simple("${in.headers.inputEndpoint}")
.aggregationStrategy((oldExchange, newExchange) -> {
// Copy all headers from IN message to the new OUT Message
newExchange.getIn().getHeaders().putAll(oldExchange.getIn().getHeaders());
return newExchange;
})
.log("Output Endpoint (after pollEnrich): ${in.headers.outputEndpoint}")
.recipientList(header("outputEndpoint"));
//.to("file:C:\\var\\CamelDemo\\outputFolder?autoCreate=false");

Spring Cloud Stream @SendTo Annotation not working

I'm using Spring Cloud Stream with Spring Boot. My application is very simple:
ExampleService.class:
@EnableBinding(Processor1.class)
@Service
public class ExampleService {

    @StreamListener(Processor1.INPUT)
    @SendTo(Processor1.OUTPUT)
    public String dequeue(String message) {
        System.out.println("New message: " + message);
        return message;
    }

    @SendTo(Processor1.OUTPUT)
    public String queue(String message) {
        return message;
    }
}
Processor1.class:
public interface Processor1 {

    String INPUT = "input1";
    String OUTPUT = "output1";

    @Input(Processor1.INPUT)
    SubscribableChannel input1();

    @Output(Processor1.OUTPUT)
    MessageChannel output1();
}
application.properties:
spring.cloud.stream.bindings.input1.destination=test_input
spring.cloud.stream.bindings.input1.group=test_group
spring.cloud.stream.bindings.input1.binder=binder1
spring.cloud.stream.bindings.output1.destination=test_output
spring.cloud.stream.bindings.output1.binder=binder1
spring.cloud.stream.binders.binder1.type=rabbit
spring.cloud.stream.binders.binder1.environment.spring.rabbitmq.host=localhost
Scenarios:
1) When I push a message into the 'test_input.test_group' queue, the message is correctly printed and correctly sent to the 'test_output' exchange. So ExampleService::dequeue works well.
2) When I invoke the ExampleService::queue method (from outside the class, in a test), the message is never sent to the 'test_output' exchange.
I'm working with Spring Boot 2.0.6.RELEASE and Spring Cloud Stream 2.0.2.RELEASE.
Does anybody know why scenario 2) is not working? Thanks in advance.
What leads you to believe that @SendTo on its own is supported? @SendTo is a secondary annotation used by many projects, not just Spring Cloud Stream; as far as I know, nothing looks for it on its own.
Try Spring Integration's @Publisher annotation instead (with @EnablePublisher).
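For illustration only (a sketch of that suggestion, not tested against your setup), the queue method could look something like this, with @EnablePublisher on a configuration class:
@Configuration
@EnablePublisher
public class PublisherConfig {
}

// In ExampleService: the method's return value is published to the channel
@Publisher(channel = Processor1.OUTPUT)
public String queue(String message) {
    return message;
}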
EDIT
To force proxying with CGLIB instead of a JDK proxy, you can do this...
@Bean
public static BeanFactoryPostProcessor bfpp() {
    return bf -> {
        bf.getBean(IntegrationContextUtils.PUBLISHER_ANNOTATION_POSTPROCESSOR_NAME,
                PublisherAnnotationBeanPostProcessor.class).setProxyTargetClass(true);
    };
}

Kinesis as producer in Spring Boot Reactive Stream API

I'm trying to build a small Spring Boot Reactive API. The API should let the users subscribe to some data, returned as SSE.
The data is located on a Kinesis Topic.
Creating the Reactive API and the StreamListener for Kinesis is fairly easy - but can I combine these, so the Kinesis topic is used as a producer for the event stream used by my data service?
The code looks more or less like this
// Kinesis binding, with listenerMode: rawRecords
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    @StreamListener(value = Sink.INPUT)
    public void logger(List<Record> payload) throws Exception {
    }
}
@RestController
@RequestMapping("/data")
public class DataResource {

    @Autowired
    DataService service;

    @GetMapping(produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE})
    public Flux<EventObject> getData() {
        return service.getData();
    }
}
@Component
public class DataService {

    Flux<EventObject> getData() {
        Flux<Long> interval = Flux.interval(Duration.ofMillis(1000));
        Flux<EventObject> dataFlux = Flux.fromStream(Stream.generate(() -> ???
        ));
        return dataFlux.zip(interval, dataFlux).map(Tuple2::getT2);
    }
}
Here is a sample of how I would do that: https://github.com/artembilan/sandbox/tree/master/cloud-stream-kinesis-to-webflux.
Once we agree on the details and some improvements, it can go to the official Spring Cloud Stream Samples repository: https://github.com/spring-cloud/spring-cloud-stream-samples
The main idea is to reuse the same Flux provided by the @StreamListener via Spring Cloud Stream Reactive Support. This is already a FluxPublish, so any new SSE connections will work as plain Reactive subscribers.
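As a rough illustration of sharing one hot stream across SSE subscribers (a manual bridge rather than the reactive support used in the sample; EventObject and the convert() helper are assumed names):
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    // Hot source shared by all SSE subscribers
    private final EmitterProcessor<EventObject> processor = EmitterProcessor.create();

    @StreamListener(Sink.INPUT)
    public void logger(List<Record> payload) {
        payload.forEach(record -> processor.onNext(convert(record))); // convert() is a hypothetical mapper
    }

    public Flux<EventObject> asFlux() {
        return processor;
    }
}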
There are a couple of tricks to keep in mind:
For listenerMode: rawRecords, we also need to configure contentType: application/octet-stream to avoid any conversion attempts when the Binder sends a message to the Sink.INPUT channel.
Since listenerMode: rawRecords returns a List<Record>, our Flux in the @StreamListener method should expect exactly that type, not a plain Record.
Both concerns are being considered as framework improvements.
So, let us know how it looks and works for you.

Not able to filter messages received using condition attribute in Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my Receiver class methods as below,
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this Receiver application from the Sending/Source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the Sink/Receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream version Chelsea.SR2.
Can someone help me resolve this issue?
It seems like the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers: http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties
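For example, something along these lines on the producer side should do it (assuming eventType is the only custom header you need):
spring.cloud.stream.kafka.binder.headers=eventType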
