Difference between DirectChannel and FluxMessageChannel - spring

I was reading about Spring Integration's FluxMessageChannel here and here, but I still don't understand exactly what the differences are between using a DirectChannel and a FluxMessageChannel with Project Reactor. Since the DirectChannel is stateless and controlled by its pollers, I'd expect the FluxMessageChannel to not be needed. I'm trying to understand when exactly I should use each and why, when speaking of Reactive Streams applications implemented with Spring Integration.
I currently have a reactive project that uses DirectChannel, and it seems to work fine, even though the documentation says:
the flow behavior is changed from an imperative push model to a reactive pull model
I'd like to understand when to use each of the channels and what is the exact difference when working with Reactive Streams.

The DirectChannel does not have any poller and its implementation is very simple: as soon as a message is sent to it, the handler is called, in the caller's thread:
public class DirectChannel extends AbstractSubscribableChannel {

    private final UnicastingDispatcher dispatcher = new UnicastingDispatcher();

    private volatile Integer maxSubscribers;

    /**
     * Create a channel with default {@link RoundRobinLoadBalancingStrategy}.
     */
    public DirectChannel() {
        this(new RoundRobinLoadBalancingStrategy());
    }
Where that UnicastingDispatcher is:
public final boolean dispatch(final Message<?> message) {
    if (this.executor != null) {
        Runnable task = createMessageHandlingTask(message);
        this.executor.execute(task);
        return true;
    }
    return this.doDispatch(message);
}
(There is no executor option for the DirectChannel)
private boolean doDispatch(Message<?> message) {
    if (tryOptimizedDispatch(message)) {
        return true;
    }
    ...

protected boolean tryOptimizedDispatch(Message<?> message) {
    MessageHandler handler = this.theOneHandler;
    if (handler != null) {
        try {
            handler.handleMessage(message);
            return true;
        }
        catch (Exception e) {
            throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                    () -> "Dispatcher failed to deliver Message", e);
        }
    }
    return false;
}
That's why I call it an "imperative push model". The caller in this case is going to wait until the handler finishes its job. And if you have a big flow, everything is going to be executed on the sender's thread until a sent message has reached the end of the chain of direct channels. In two simple words: the publisher is in charge of the whole execution, and it is blocked in this case. You haven't faced any problems with your DirectChannel-based solution just because you haven't used reactive non-blocking threads yet, such as Netty in WebFlux or the MongoDB reactive driver.
The FluxMessageChannel was really designed for Reactive Streams purposes, where the subscriber is in charge of handling messages, which it pulls from the Flux on demand. This way, just after sending, the publisher is free to do anything else, because it is already the subscriber's responsibility to handle the message.
I would say it is definitely OK to use a DirectChannel as long as your handlers are not blocking. If they are blocking, you should go with a FluxMessageChannel. Also don't forget that there are other channel types for different tasks: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
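As a minimal sketch of the difference (assuming the Java DSL with IntegrationFlows and FluxMessageChannel; the channel names and println handlers are placeholders, not code from the question):
@Bean
public IntegrationFlow directFlow() {
    // "directInput" is created as a DirectChannel by default: the handler below
    // runs on the sender's thread, so a slow handler blocks the publisher.
    return IntegrationFlows.from("directInput")
            .handle(message -> System.out.println("handled: " + message.getPayload()))
            .get();
}

@Bean
public IntegrationFlow reactiveFlow() {
    // With a FluxMessageChannel in the middle, the sender only emits into the Flux;
    // the handler is driven by the subscriber's demand (the reactive pull model).
    return IntegrationFlows.from("reactiveInput")
            .channel(new FluxMessageChannel())
            .handle(message -> System.out.println("handled: " + message.getPayload()))
            .get();
}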

Related

Spring Reactor and consuming websocket messages

I'm creating a Spring Reactor application to consume messages from a websocket server, transform them, and later save them to Redis and some SQL database; saving to Redis and the SQL database is also reactive. Also, before writing to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure if the way I've accomplished this is properly reactive, meaning that I'm not losing the benefits of reactive (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                    WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");
                    Flux<String> flux = session.send(Mono.just(initialMessage))
                        .thenMany(session.receive())
                        .map(WebSocketMessage::getPayloadAsText)
                        .doOnNext(emitter::next);
                    Flux<String> sessionStatus = session.closeStatus()
                        .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                        .map(CloseStatus::toString)
                        .doOnNext(emitter::next)
                        .flatMapMany(Flux::just);
                    return flux
                        .mergeWith(sessionStatus)
                        .then();
                })
                .subscribe(); // 1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking context"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
            HttpClient.create(),
            () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;
    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background, and they will also aggregate messages with different time windows (a sketch of the flatMap variant follows these questions). Also note comment 1 in the code above, where IDEA flags my code; the code works, but I wonder what this lint may result in. Maybe I should use doOnNext not to call emitter::next but to invoke some message dispatcher there, with some function like doOnNext(dispatcher::dispatchMessage)?
I want the websockets client to start immediately after the application is ready and stop consuming messages when the application shuts down. Are @EventListener(ApplicationReadyEvent.class) and @PreDestroy, with the code shown above, a proper way to handle this scenario in the reactive world?
As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the websockets flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
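For the first question, here is a minimal sketch of what keeping the writes inside the pipeline could look like (names such as reactiveRedisTemplate are hypothetical; this is one possible shape under those assumptions, not a verified answer):
ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();

messages
    .window(Duration.ofSeconds(5))              // time-windowed aggregation
    .flatMap(Flux::collectList)                 // one batch per window
    .flatMap(batch -> reactiveRedisTemplate     // the reactive write stays in the chain,
            .opsForList()                       // so errors and completion remain
            .rightPushAll("messages", batch))   // visible to the pipeline
    .subscribe();

messages.connect();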

Spring cloud function Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that listens to a certain topic and executes the following in sequence:
Consume messages from a topic
Store consumed message in the DB
Call an external service for some information
Process the data
Record the results in DB
Send the message to another topic
Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure it was sent successfully to the sink queue; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented within Spring Cloud Function, especially since the send is fully dependent on the Spring Framework (it happens as the result of evaluating the Function)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outbound = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outbound);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior from your StreamListener code, here is something you can try. Instead of using a function, you can switch to a consumer and then use KafkaTemplate to send on the outbound as you did previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outbound = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outbound); // here, you make sure that the data was sent successfully by using some callback.
            // only ack if the data was sent successfully.
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        }
        catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
Another thing worth looking into is Kafka transactions, in which case, if the processing doesn't succeed end-to-end, no acknowledgment happens. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
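A minimal sketch of enabling that at the binder level, expressed as application properties (the prefix value itself is arbitrary; see the binder documentation for producer-side tuning):
# Setting a transaction id prefix switches the binder's producers to transactional mode
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-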
Spring Cloud Stream has no knowledge of the function. It is just the same message handler as it was before, so the same approach with a callback as you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependent on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While it is a feasible solution, it would mean that my Function would be split into a Consumer and some sort of improvised Supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I have isolated the database-update behaviour in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then, with each successful send, the DB is updated with the processing result and the updated version number.

Spring Integration: Manual channel handling

What I want: Build a configurable library that
uses another library that has internal routing and a subscribe method like clientInstance.subscribe(endpoint, (endpoint, message) -> <handler>), e.g. the Paho MQTT library
later in my code I want to access the messages in a Flux.
My idea:
create MessageChannels like so:
integrationFlowContext
    .registration(IntegrationFlows.from("message-channel:" + endpoint).bridge().get())
    .register();
forward to reactive publishers:
applicationContext.registerBean(
    "publisher:" + endpoint,
    Publisher.class,
    () -> IntegrationFlows.from("message-channel:" + endpoint).toReactivePublisher()
);
keep the message channels in a set or similar and implement the above handler: (endpoint, message) -> messageChannels.get(endpoint).send(<converter>(message))
later use (in a @PostConstruct method):
Flux
    .from((Publisher<Message<?>>) applicationContext.getBean("publisher:" + endpoint))
    .map(...)
    .subscribe();
I doubt this is the best way to do what I want. It feels like abusing Spring Integration. Any suggestions are welcome at this point.
In general, however (at least in my tests), this seemed to work. But when I run my application, I get errors like: "Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available".
This is especially bad, since after this exception the publishers claim to not have a subscriber anymore. Thus, in a real application, no messages are processed anymore.
I am not sure what this message means, but I can kind of reproduce it (but don't understand why):
@Test
public void channelTest() {
    integrationFlowContext
        .registration(
            IntegrationFlows.from("any-channel").bridge().get()
        )
        .register();
    registryUtil.registerBean(
        "any-publisher",
        Publisher.class,
        () -> IntegrationFlows.from("any-channel").toReactivePublisher()
    );
    Flux
        .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
        .subscribe(System.out::println);
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
I have, of course, read parts of the Spring Integration documentation, but I don't quite get what happens behind the scenes. Thus, I feel like I'm guessing at possible error causes.
EDIT:
This, however, works:
@TestConfiguration
static class Config {

    GenericApplicationContext applicationContext;

    Config(
            GenericApplicationContext applicationContext,
            IntegrationFlowContext integrationFlowContext
    ) {
        this.applicationContext = applicationContext;
        // optional here, but needed for some reason in my library,
        // since I can't find the channel beans like I will do here,
        // if I didn't register them like so:
        //integrationFlowContext
        //        .registration(
        //                IntegrationFlows.from("any-channel").bridge().get())
        //        .register();
        applicationContext.registerBean(
            "any-publisher",
            Publisher.class,
            () -> IntegrationFlows.from("any-channel").toReactivePublisher()
        );
    }

    @PostConstruct
    void connect() {
        Flux
            .from((Publisher<Message<?>>) applicationContext.getBean("any-publisher"))
            .subscribe(System.out::println);
    }
}

@Autowired
ApplicationContext applicationContext;

@Autowired
IntegrationFlowContext integrationFlowContext;

@Test
@SneakyThrows
public void channel2Test() {
    MessageChannel messageChannel = applicationContext.getBean("any-channel", MessageChannel.class);
    try {
        messageChannel.send(MessageBuilder.withPayload("test").build());
    } catch (Throwable t) {
        log.error("Error: ", t);
    }
}
Thus, apparently, my issue above is related to messages arriving "too early"... I guess?!
No, your issue is related to the round-robin dispatching on the DirectChannel for the any-channel bean name.
You define two IntegrationFlow instances starting with that channel, and then you declare their own subscribers, but at runtime both of them are subscribed to the same any-channel instance. And that one comes with the round-robin balancer by default. So, one message goes to your Flux.from() subscriber, but the next goes to that bridge(), which doesn't know what to do with your message, so it tries to resolve a replyChannel header.
Therefore your solution with just the single IntegrationFlows.from("any-channel").toReactivePublisher() is correct. Although you could instead register a FluxMessageChannel and use it from one side for regular message sending and from the other side as a reactive source for Flux.from(), as sketched below.
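For example (a minimal sketch; the bean and variable names are made up for illustration):
@Bean
public FluxMessageChannel anyChannel() {
    return new FluxMessageChannel();
}
and then:
// FluxMessageChannel implements Publisher<Message<?>>, so it can back a Flux directly.
// Subscribe before sending: a FluxMessageChannel rejects messages while it has no subscribers.
Flux.from(anyChannel)
    .map(Message::getPayload)
    .subscribe(System.out::println);

anyChannel.send(MessageBuilder.withPayload("test").build());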

Exponential backoff for business exceptions when using reactive spring-amqp?

I'm using Spring AMQP 2.1.6 and Spring Boot 2.1.5 and I'm looking for the recommended way to configure spring-amqp to retry business exceptions for reactive components (Mono) with exponential backoff. For example:
@RabbitListener
public Mono<Void> myListener(MyMessage myMessage) {
    Mono<Void> mono = myService.doSomething(myMessage);
    return mono;
}
I'd like spring-amqp to retry automatically if doSomething returns an error. Usually one can configure this for blocking RabbitListeners when setting up the container factory:
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
...
factory.setAdviceChain(retryInterceptor(..));
Where retryInterceptor might be defined like this:
private static RetryOperationsInterceptor retryInterceptor(long backoffInitialInterval, double backoffMultiplier, long backoffMaxInterval, int maxAttempts) {
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(backoffInitialInterval);
    backOffPolicy.setMultiplier(backoffMultiplier);
    backOffPolicy.setMaxInterval(backoffMaxInterval);

    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(maxAttempts));
    retryTemplate.setBackOffPolicy(backOffPolicy);

    StatelessRetryOperationsInterceptorFactoryBean bean = new StatelessRetryOperationsInterceptorFactoryBean();
    bean.setRetryOperations(retryTemplate);
    return bean.getObject();
}
But the advice chain doesn't seem to be used for reactive RabbitListeners. This is probably because, if I understand it correctly, the RetryTemplate/ExponentialBackOffPolicy actually blocks the thread.
As a workaround I could of course do something like (switching to Kotlin because it's a bit easier):
@RabbitListener
fun myListener(myMessage: MyMessage): Mono<Void> {
    return myService.doSomething(myMessage)
        .retryExponentialBackoff(10, Duration.ofMillis(100), Duration.ofSeconds(5)) { ctx ->
            log.info("Caught exception ${ctx.exception()}")
        }
}
But I'd like this retry logic to be applied to all instances of Mono returned from RabbitListeners. Is something like this possible, or should this be configured another way when using reactive sequences from Project Reactor with spring-amqp?
It is really better to apply the retry logic within your reactive sequence, the way you already do with retryExponentialBackoff(). Since the Reactive Streams execution doesn't happen on the same thread, we cannot apply that RetryTemplate to the myListener().
Right now the logic internally is like this:
private static class MonoHandler {

    static boolean isMono(Object result) {
        return result instanceof Mono;
    }

    @SuppressWarnings("unchecked")
    static void subscribe(Object returnValue, Consumer<? super Object> success,
            Consumer<? super Throwable> failure) {
        ((Mono<? super Object>) returnValue).subscribe(success, failure);
    }
}
That Consumer<? super Throwable> failure does this:
private void asyncFailure(Message request, Channel channel, Throwable t) {
    this.logger.error("Future or Mono was completed with an exception for " + request, t);
    try {
        channel.basicNack(request.getMessageProperties().getDeliveryTag(), false, true);
    }
    catch (IOException e) {
        this.logger.error("Failed to nack message", e);
    }
}
So, we don't have any way to initiate that RetryTemplate at all, but at the same time, with the explicit basicNack() we get a natural retry: the same message is re-fetched from RabbitMQ.
We could probably apply a Reactor retry to that Mono internally, but it doesn't look like a RetryOperationsInterceptor can simply be converted to a Mono.retry().
So, in other words, the RetryOperationsInterceptor is the wrong way for reactive processing. Use Mono.retry() explicitly in your own code.
You may expose some common utility method and apply it as a Mono.transform(Function<? super Mono<T>, ? extends Publisher<V>> transformer) whenever you have a reactive return type on a @RabbitListener method.
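A minimal sketch of such a utility (assuming Reactor 3.3.4+ for reactor.util.retry.Retry; on the Reactor version matching Boot 2.1.x you would use Mono#retryBackoff or the reactor-extra operator from the Kotlin snippet instead; RetrySupport and the queue name are made-up names):
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

public final class RetrySupport {

    // Reusable transformer: up to 10 attempts, 100 ms initial backoff, capped at 5 s.
    public static <T> Mono<T> withBackoff(Mono<T> mono) {
        return mono.retryWhen(Retry.backoff(10, Duration.ofMillis(100))
                .maxBackoff(Duration.ofSeconds(5)));
    }
}

@RabbitListener(queues = "my-queue")
public Mono<Void> myListener(MyMessage myMessage) {
    // transform() applies the shared retry policy to this listener's sequence
    return myService.doSomething(myMessage).transform(RetrySupport::withBackoff);
}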

Stomp over websocket using Spring and sockJS message lost

On the client side, in JavaScript, I have:
stomp.subscribe("/topic/path", function (message) {
    console.info("message received");
});
And on the server side
public class Controller {

    private final MessageSendingOperations<String> messagingTemplate;

    @Autowired
    public Controller(MessageSendingOperations<String> messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @SubscribeMapping("/topic/path")
    public void subscribe() {
        LOGGER.info("before send");
        messagingTemplate.convertAndSend("/topic/path", "msg");
    }
}
From this setup, I am occasionally (around once in 30 page refreshes) experiencing message loss: I can see neither the "message received" log on the client side nor the websocket traffic in the Chrome debugging tool.
"before send" is always logged on the server side.
This looks like the MessageSendingOperations is not ready when I call it in the subscribe() method. (If I put Thread.sleep(50); before calling messagingTemplate.convertAndSend, the problem disappears, or is at least much less likely to be reproduced.)
I wonder if anyone has experienced the same before, and whether there is an event that can tell me that MessageSendingOperations is ready.
The issue you are facing lies in the nature of clientInboundChannel, which is an ExecutorSubscribableChannel by default.
It has 3 subscribers:
0 = {SimpleBrokerMessageHandler#5276} "SimpleBroker[DefaultSubscriptionRegistry[cache[0 destination(s)], registry[0 sessions]]]"
1 = {UserDestinationMessageHandler#5277} "UserDestinationMessageHandler[DefaultUserDestinationResolver[prefix=/user/]]"
2 = {SimpAnnotationMethodMessageHandler#5278} "SimpAnnotationMethodMessageHandler[prefixes=[/app/]]"
which are invoked within the taskExecutor, hence asynchronously.
The first one here (SimpleBrokerMessageHandler, or StompBrokerRelayMessageHandler if you use a broker relay) is responsible for registering the subscription for the topic.
Your messagingTemplate.convertAndSend("/topic/path", "msg") operation may be performed before the subscription registration for that WebSocket session, because they are performed on separate threads. Hence the broker handler doesn't know it should send the message to your session.
@SubscribeMapping can be configured on a method with a return value, in which case the result of the method is sent as a reply to that subscription on the client.
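For example, a sketch of that reply variant (the payload is illustrative):
@SubscribeMapping("/topic/path")
public String subscribe() {
    LOGGER.info("before send");
    // The return value is sent directly back to this subscriber (not through the broker)
    // only after the subscription frame has been processed, so nothing is lost.
    return "msg";
}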
HTH
Here is my solution. It is along the same lines. I added an ExecutorChannelInterceptor and published a custom SubscriptionSubscribedEvent. The key is to publish the event after the message has been handled by the AbstractBrokerMessageHandler, which means the subscription has been registered with the broker.
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.interceptors(new ExecutorChannelInterceptorAdapter() {

        @Override
        public void afterMessageHandled(Message<?> message, MessageChannel channel, MessageHandler handler, Exception ex) {
            SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(message);
            if (accessor.getMessageType() == SimpMessageType.SUBSCRIBE && handler instanceof AbstractBrokerMessageHandler) {
                /*
                 * Publish a new session subscribed event AFTER the client
                 * has been subscribed to the broker. Before, Spring was
                 * publishing the event after receiving the message but not
                 * necessarily after the subscription occurred. There was a
                 * race condition because the subscription was being done on
                 * a separate thread.
                 */
                applicationEventPublisher.publishEvent(new SessionSubscribedEvent(this, message));
            }
        }
    });
}
A little late, but I thought I'd add my solution. I was having the same problem with the subscription not being registered before I sent data through the messaging template. This issue happened rarely and unpredictably because of the race with the DefaultSubscriptionRegistry.
Unfortunately, I could not just use the return value of the @SubscribeMapping method because we were using a custom object mapper that changed dynamically based on the type of user (attribute filtering, essentially).
I searched through the Spring code and found that SubscriptionMethodReturnValueHandler is responsible for sending the return value of subscription mappings, and it has a different messagingTemplate than the autowired SimpMessagingTemplate of my async controller!
So the solution was autowiring MessageChannel clientOutboundChannel into my async controller and using that to create a SimpMessagingTemplate. (You can't directly wire the template in, because you'll just get the one that goes to the broker.)
In subscription methods I then used the direct template, while in other methods I used the template that went through the broker.
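A sketch of that wiring (field and qualifier names are illustrative):
@Controller
public class AsyncController {

    private final SimpMessagingTemplate brokerTemplate; // the autowired default: goes through the broker
    private final SimpMessagingTemplate directTemplate; // writes straight to clientOutboundChannel

    public AsyncController(SimpMessagingTemplate brokerTemplate,
            @Qualifier("clientOutboundChannel") MessageChannel clientOutboundChannel) {
        this.brokerTemplate = brokerTemplate;
        this.directTemplate = new SimpMessagingTemplate(clientOutboundChannel);
    }
}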
