I'm trying to build a small Spring Boot reactive API. The API should let users subscribe to some data, returned as server-sent events (SSE).
The data is located on a Kinesis stream.
Creating the reactive API and the StreamListener for Kinesis is fairly easy - but can I combine these, so that the Kinesis stream is used as the producer for the event stream used by my data service?
The code looks more or less like this:
// Kinesis binding, with listenerMode: rawRecords
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    @StreamListener(value = Sink.INPUT)
    public void logger(List<Record> payload) throws Exception {
    }
}
@RestController
@RequestMapping("/data")
public class DataResource {

    @Autowired
    DataService service;

    @GetMapping(produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE})
    public Flux<EventObject> getData() {
        return service.getData();
    }
}
@Component
public class DataService {

    Flux<EventObject> getData() {
        Flux<Long> interval = Flux.interval(Duration.ofMillis(1000));
        Flux<EventObject> dataFlux = Flux.fromStream(Stream.generate(() -> ???
        ));
        // zip is a static factory on Flux (the instance variant is zipWith)
        return Flux.zip(interval, dataFlux).map(Tuple2::getT2);
    }
}
Here is a sample of how I would do that: https://github.com/artembilan/sandbox/tree/master/cloud-stream-kinesis-to-webflux.
Once we agree about the details and some improvements, it can go to the official Spring Cloud Stream Samples repository: https://github.com/spring-cloud/spring-cloud-stream-samples
The main idea is to reuse the same Flux provided by the @StreamListener via Spring Cloud Stream reactive support. This is already a FluxPublish, so any new SSE connections will work as plain reactive subscribers.
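In code, that idea looks roughly like the following (a minimal sketch, not the linked sample verbatim; EventObject and the toEventObject mapping are assumptions here): the @StreamListener receives the whole input as a Flux<List<Record>>, flattens the batches, and shares the stream via publish().autoConnect(), so every new SSE connection attaches to the same upstream:

import java.nio.charset.StandardCharsets;
import java.util.List;

import com.amazonaws.services.kinesis.model.Record;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import reactor.core.publisher.Flux;

@EnableBinding(Sink.class)
public class KinesisStreamListener {

    private Flux<EventObject> events;

    @StreamListener
    public void bind(@Input(Sink.INPUT) Flux<List<Record>> records) {
        this.events = records
                .flatMapIterable(batch -> batch)   // unwrap the List<Record> batches
                .map(this::toEventObject)          // hypothetical mapping to your DTO
                .publish()                         // multicast: one Kinesis consumer...
                .autoConnect();                    // ...shared by all SSE subscribers
    }

    public Flux<EventObject> events() {
        return events;
    }

    private EventObject toEventObject(Record record) {
        // assumption: EventObject wraps the raw record data as text
        return new EventObject(StandardCharsets.UTF_8.decode(record.getData()).toString());
    }
}

The DataService can then simply return the shared events() Flux instead of generating data on an interval.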
There are a couple of tricks to be aware of:
For listenerMode: rawRecords, we also need to configure contentType: application/octet-stream to avoid any conversion attempts when the Binder sends a message to the Sink.INPUT channel.
Since listenerMode: rawRecords returns a List<Record>, our Flux in the @StreamListener method should expect exactly this type, not a plain Record.
Both concerns are being considered as framework improvements.
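In property form, those two settings would look roughly like this (the binding name input and the Kinesis binder property prefix are assumptions; check the binder docs for your version):

spring.cloud.stream.bindings.input.content-type=application/octet-stream
spring.cloud.stream.kinesis.bindings.input.consumer.listenerMode=rawRecords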
So, let us know how it looks and works for you.
Related
I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on Microservices.
Now, with Spring Boot 3, a few things have changed. The book presents an easy example of how to push messages to a topic and how to consume messages from a topic.
The problem is: the examples presented just do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source){
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3, config-wise and code-wise?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
What this is supposed to do: whenever a message is fired on orgChangeTopic, the loggerSink method should fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used if you are using Boot 3), a few things were removed - such as EnableBinding, StreamListener, etc. They were deprecated in 3.x and finally removed in 4.0.0. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as a java.util.function.Function|Consumer|Supplier for a processor, sink, or source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Note that you no longer need the zkNodes property, nor the content type, since the framework auto-converts it for you.
StreamBridge#send takes a binding name and the payload. The binding name can be anything, but for consistency reasons we used output-out-0 here. Please read the reference docs for more context on the reasoning behind this binding name.
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
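A detail worth knowing here (stated as an assumption; verify against your versions): the framework invokes such a Supplier through a poller, by default once per second, and the default poller can be tuned with Boot's Spring Integration properties, e.g.:

# hypothetical tuning: poll the supplier every 5 seconds instead of the default 1 second
spring.integration.poller.fixed-delay=5000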
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return orgChange -> {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output, rather than suffixed with in-0, out-0, etc., there are ways to make that happen; details are in the reference docs, and one known approach is sketched below.
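For example (a sketch using the function-binding alias property; double-check the exact property name against your version), you can alias the generated binding name and then configure the alias:

spring.cloud.stream.function.bindings.loggerSink-in-0=input
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup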
I'm creating a Spring Reactor application to consume messages from a WebSocket server, transform them, and later save them to Redis and some SQL database; the Redis and SQL writes are also reactive. Also, before being written to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished this is properly reactive, meaning I'm not losing the reactive benefits (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                    .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                        WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");

                        Flux<String> flux = session.send(Mono.just(initialMessage))
                                .thenMany(session.receive())
                                .map(WebSocketMessage::getPayloadAsText)
                                .doOnNext(emitter::next);

                        Flux<String> sessionStatus = session.closeStatus()
                                .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                                .map(CloseStatus::toString)
                                .doOnNext(emitter::next)
                                .flatMapMany(Flux::just);

                        return flux
                                .mergeWith(sessionStatus)
                                .then();
                    })
                    .subscribe(); //1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking context"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
                HttpClient.create(),
                () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;

    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
1) Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background, and they will also aggregate messages over different time windows. Also note comment 1 in the code above, where IDEA lints my code; the code works, but I wonder what that lint may result in. Maybe I should use doOnNext not to call emitter::next but to invoke some dispatcher of messages with a function like doOnNext(dispatcher::dispatchMessage)? (A sketch of that dispatcher idea follows these questions.)
2) I want the WebSocket client to start immediately after the application is ready and to stop consuming messages when the application shuts down. Are the @EventListener(ApplicationReadyEvent.class) and @PreDestroy annotations and the code shown above a proper way to handle this scenario in the reactive world?
3) As I said, saving to Redis and the SQL database is also reactive, i.e., those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the WebSocket flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
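For reference, here is a minimal sketch of the dispatcher idea from question 1) (the Sinks-based approach is an assumption, not from the original code): the WebSocket handler calls dispatcher::dispatchMessage instead of capturing an emitter, and downstream consumers subscribe to the sink's Flux.

import org.springframework.stereotype.Component;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

@Component
class MessageDispatcher {

    // multicast sink: a thread-safe alternative to the Flux.create/emitter pattern
    private final Sinks.Many<String> sink = Sinks.many().multicast().onBackpressureBuffer();

    // called from doOnNext(dispatcher::dispatchMessage) in the WebSocket handler
    void dispatchMessage(String message) {
        sink.tryEmitNext(message);
    }

    Flux<String> messages() {
        return sink.asFlux();
    }
}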
A simple @RestController is connected with a @MessagingGateway to an IntegrationFlow.
After a load test we saw within the tracing that we lose "a lot of time" before the processing within the flow even starts:
Tracing result
In this example we can see that over 90 ms are spent before the message is sent to the flow.
Does anyone have an idea what leads to this behavior?
As far as I understand the documentation, everything is handled in the sender thread, and therefore no special worker threads are created.
We use the RestController since we need to create the documentation with springdoc-openapi-ui.
Example code:
RestController:
@RestController
public class DescriptionEndpoint {

    HttpMessageGateway httpMessageGateway;

    public Result findData(@Valid dataRequest dataRequest) {
        final Map<String, Object> headerParams = new HashMap<>();
        return httpMessageGateway.basicDataDescriptionFlow(dataRequest, headerParams);
    }
}
Gateway
@MessagingGateway
public interface HttpMessageGateway {

    @Gateway(requestChannel = "startDataFlow.input")
    Result basicDataDescriptionFlow(@Payload dataRequest prDataRequest, @Headers Map<String, Object> map);
}
IntegrationFlow
public class ExampleFlow {

    @Bean
    public IntegrationFlow startDataFlow() {
        return new FlowExtension()
                .handle(someHandler1)
                .handle(someHandler2)
                .handle(someHandler3)
                .get();
    }
}
After adding some more traces, I realized that this timing issue was caused by my Spring Security configuration.
Unfortunately, I thought the span only represented the time after the start of findData(..), but it seems the tracing already starts in the proxy methods and the security chain.
After improving some implementation in our JWT token filter, the times spent for these endpoints are OK.
I'm using Spring Cloud Stream with Spring Boot. My application is very simple:
ExampleService.class:
@EnableBinding(Processor1.class)
@Service
public class ExampleService {

    @StreamListener(Processor1.INPUT)
    @SendTo(Processor1.OUTPUT)
    public String dequeue(String message){
        System.out.println("New message: " + message);
        return message;
    }

    @SendTo(Processor1.OUTPUT)
    public String queue(String message){
        return message;
    }
}
Processor1.class:
public interface Processor1 {

    String INPUT = "input1";
    String OUTPUT = "output1";

    @Input(Processor1.INPUT)
    SubscribableChannel input1();

    @Output(Processor1.OUTPUT)
    MessageChannel output1();
}
application.properties:
spring.cloud.stream.bindings.input1.destination=test_input
spring.cloud.stream.bindings.input1.group=test_group
spring.cloud.stream.bindings.input1.binder=binder1
spring.cloud.stream.bindings.output1.destination=test_output
spring.cloud.stream.bindings.output1.binder=binder1
spring.cloud.stream.binders.binder1.type=rabbit
spring.cloud.stream.binders.binder1.environment.spring.rabbitmq.host=localhost
Scenarios:
1) When I push a message onto the 'test_input.test_group' queue, the message is correctly printed and correctly sent to the 'test_output' exchange. So ExampleService::dequeue works well.
2) When I invoke the ExampleService::queue method (from outside the class, in a test), the message is never sent to the 'test_output' exchange.
I'm working with Spring Boot 2.0.6.RELEASE and Spring Cloud Stream 2.0.2.RELEASE.
Does anybody know why scenario 2) is not working? Thanks in advance.
What leads you to believe that @SendTo on its own is supported? @SendTo is a secondary annotation used by many projects, not just Spring Cloud Stream; as far as I know, there is nothing that will look for it on its own.
Try Spring Integration's @Publisher annotation instead (with @EnablePublisher).
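As a rough sketch of that suggestion (untested; it assumes the existing Processor1 binding from the question), the queue method would look like this:

@EnableBinding(Processor1.class)
@EnablePublisher // usually placed on a @Configuration class; inlined here for brevity
@Service
public class ExampleService {

    @Publisher(channel = Processor1.OUTPUT)
    public String queue(String message) {
        // the return value is published to the "output1" channel by the @Publisher advice
        return message;
    }
}

Note that @Publisher works through an AOP proxy, so queue() must be invoked on the injected bean, not via this - which is also why the proxying detail below matters.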
EDIT
To force proxying with CGLIB instead of a JDK proxy, you can do this...
@Bean
public static BeanFactoryPostProcessor bfpp() {
    return bf -> {
        bf.getBean(IntegrationContextUtils.PUBLISHER_ANNOTATION_POSTPROCESSOR_NAME,
                PublisherAnnotationBeanPostProcessor.class).setProxyTargetClass(true);
    };
}
Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler(){
    @Override
    public void handle(String message){
        //... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" a functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I was facing the same issue.
I went an unusual way: I set up an AWS client bean at startup, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I made an example that iterates over queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);

                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());

                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one for each tenant, with the queue URL for each one coming from a property file. Of course, in your case you could get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation); a rough sketch follows.
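For illustration, a minimal sketch with the Amazon SQS Java Messaging Library (the library, its use here, and the error handling are assumptions; MessageHandler is the interface from the question):

import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnection;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class DynamicSqsJmsReceiver {

    public void register(String queueName, MessageHandler handler) throws Exception {
        SQSConnectionFactory factory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());
        SQSConnection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

        // one consumer per dynamically chosen queue; the listener delegates to the user's handler
        MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
        consumer.setMessageListener(message -> {
            try {
                handler.handle(((TextMessage) message).getText());
                message.acknowledge();
            } catch (Exception e) {
                // handle/log failures as appropriate (sketch only)
            }
        });
        connection.start();
    }
}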
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}