Is it possible to create a queue listener using WebFlux and Spring Integration? - spring-boot

@Component
@RequiredArgsConstructor
public class EventListener {

    private final EventProcessingService eventProcessingService;

    @JmsListener(destination = "inputQueue", containerFactory = "myContainerFactory")
    public void receiveMessage(Message message) {
        eventProcessingService.doSome(message).subscribe(); // doSome(...) returns Mono<Void>
    }
}
@Service
public class EventProcessingService {

    public Mono<Void> doSome(Message message) {
        //...
    }
}
@Configuration
@RequiredArgsConstructor
public class MqIntegration {

    private final ConnectionFactory connectionFactory;

    @Bean
    public Publisher<Message<String>> mqReactiveFlow() {
        return IntegrationFlows
                .from(Jms.messageDrivenChannelAdapter(this.connectionFactory)
                        .destination("testQueue"))
                .channel(MessageChannels.queue())
                .toReactivePublisher();
    }
}
I have a WebFlux application which interacts with IBM MQ, and a JmsListener which listens for messages from the queue. When a message is received, EventProcessingService makes requests to other services depending on the message.
I would like to know how I can create a JmsListener that works with reactive threads using Spring Integration. In other words, I want to know whether it is possible to create an integration flow which receives messages from the queue and calls the EventProcessingService when messages arrive, so that it does not have a negative effect on the threads inside the WebFlux application.

I think we need to clean up some points in your question.
WebFlux is not a project by itself. It is a Spring Framework module for the web on top of a reactive server: https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#spring-webflux
The @JmsListener is part of another Spring Framework module - spring-jms. It has nothing to do with the threads used by the reactive server for the WebFlux layer. https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#jms
Spring Integration is a separate project which implements EIP on top of the Spring Framework dependency injection container. It does have its own WebFlux module with channel adapters on top of the WebFlux API in Spring Framework: https://docs.spring.io/spring-integration/docs/current/reference/html/webflux.html#webflux. And it also has a JMS module on top of the JMS module from Spring Framework: https://docs.spring.io/spring-integration/docs/current/reference/html/jms.html#jms. However, there is nothing related to @JmsListener, since its Jms.messageDrivenChannelAdapter() fully covers that functionality and, at a high level, does it the same way - via a MessageListenerContainer.
All of this might not be relevant to the question, but it is better to have a clear context of what you are asking, so we feel we are on the same page with you.
Now, trying to answer your concern.
As long as you don't deal with JMS from the WebFlux layer (@RequestMapping or WebFlux.inboundGateway()), you don't affect those non-blocking threads. The JMS MessageListenerContainer spawns its own threads and performs polling from the queue and message processing.
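For illustration, a container factory like the myContainerFactory referenced in the question would typically give the listener container its own thread pool, completely separate from the reactive event loop (a minimal sketch; the concurrency value is just an example):
@Bean
public DefaultJmsListenerContainerFactory myContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Listener threads are managed by the container itself, not by the reactive server's event loop.
    factory.setConcurrency("1-5");
    return factory;
}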
What you are explaining with your JMS configuration and service looks more like this:
@Bean
public IntegrationFlow mqReactiveFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(this.connectionFactory)
                    .destination("testQueue"))
            .handle(this.eventProcessingService)
            .nullChannel();
}
There is really no reason to shift messages into a QueueChannel right after JMS, since JMS listening is already an async operation.
We need that nullChannel at the end of your flow just because your service method returns a Mono and the framework otherwise knows nothing about what to do with it. Starting with version 5.4.3, the NullChannel is able to subscribe to a Publisher payload of the message produced to it.
You could put a FluxMessageChannel in between to really simulate back-pressure for the JMS listener, but that won't make much difference for your downstream service.
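For completeness, a sketch of that FluxMessageChannel variant could look like this (only the channel line differs from the flow above):
@Bean
public IntegrationFlow mqReactiveFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(this.connectionFactory)
                    .destination("testQueue"))
            .channel(MessageChannels.flux())
            .handle(this.eventProcessingService)
            .nullChannel();
}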

I think you are going to have to bypass @JmsListener, as that registers an onMessage callback which, although asynchronous, isn't going to be reactive. JMS is essentially blocking, so patching a reactive layer on top is going to be just that: a patch.
You will need to use the Publisher that you have created to generate the back pressure. I think you are going to have to define and instantiate your own listener bean which does something along the lines of:
public Flux<String> mqReactiveListener() {
    return Flux.from(mqReactiveFlow())
            .map(Message::getPayload);
}
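As a rough sketch of how that Flux could then drive the processing (assuming EventProcessingService.doSome() is adapted to accept the String payload, which is not shown in the original code):
@Bean
public ApplicationRunner mqReactiveSubscriber(EventProcessingService eventProcessingService) {
    // Subscribe once at startup; each payload is handed off to the reactive service.
    return args -> mqReactiveListener()
            .flatMap(eventProcessingService::doSome)
            .subscribe();
}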

Related

Best way to handle Content Provider failures in Java Spring Application

I have a Spring web application which invokes a content provider for some data. It's becoming a common issue that the content provider service fails and the app becomes unresponsive. What would be the best approach or design to check, at the application level, whether the content provider service is up or down, and handle it appropriately?
You can use the circuit breaker pattern and the Hystrix library.
It will monitor the method annotated with @HystrixCommand and, if the failures reach a certain threshold, it will start redirecting calls to a fallbackMethod, allowing the service to continue and leaving you time to recover from the failure.
A simple snippet that shows it in action is as follows:
@Service
public class BookService {
    ...
    @HystrixCommand(fallbackMethod = "reliable")
    public String readingList() {
        ...
    }

    public String reliable() {
        ...
    }
}
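Note that @HystrixCommand is only processed when circuit breaker support is enabled, typically on the application class (shown here with Spring Cloud Netflix; adjust to your setup):
@SpringBootApplication
@EnableCircuitBreaker
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}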

SockJS receive stomp messages from spring websocket out of order

I am trying to stream time-series data using the Spring Framework SimpMessagingTemplate (default STOMP implementation) to broadcast messages to a topic that the SockJS client has subscribed to. However, the messages are received out of order. The server is single-threaded and messages are sent in ascending order by their timestamps, yet the client somehow receives them out of order.
I am using the latest release version of both stompjs and Spring Framework (4.1.6 release).
It looks like there is a built-in striped executor, so just enable it:
@Override
protected void configureMessageBroker(MessageBrokerRegistry registry) {
    // ...
    registry.setPreservePublishOrder(true);
}
https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#websocket-stomp-ordered-messages
Found the root cause of this issue. The messages were sent in the "correct" order from the application implementation perspective (i.e., convertAndSend() is called on one thread, or at least in a thread-safe fashion). However, Spring's WebSocket support uses a reactor-tcp implementation which processes the messages on the clientOutboundChannel using a thread pool. Thus the messages can be written to the TCP socket in a different order than they arrived. When I configured the WebSocket support to limit the clientOutboundChannel to 1 thread, the order is preserved.
This problem is not in SockJS but a limitation of the current Spring WebSocket design.
It's a Spring WebSocket design problem. To receive messages in the correct order you have to set the corePoolSize of the WebSocket client channels to 1.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketMessageBrokerConfiguration extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }
}
UPDATE
Please see @Jason's answer. Spring 5.1 has a setPreservePublishOrder() to order the messages based on their client ID.
I experienced this issue as well. I don't like to limit my thread pool size to 1, as this adds overhead to my application. Instead, I used a StripedExecutorService to process messages coming in and out of my application. This type of executor service guarantees ordered processing of messages for tasks that have the same stripe. For me, I use the WebSocket session ID as the stripe. Register this executor via ChannelRegistration.taskExecutor() on your inbound, broker, and outbound channels and this will guarantee ordered messages. Choose your stripe wisely.
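For illustration, the striping idea boils down to something like the sketch below (a simplified stand-in for the actual StripedExecutorService library, not its API): tasks that share a stripe key run sequentially, while different stripes still run in parallel.
// java.util.concurrent
public class SimpleStripedExecutor {

    private final ExecutorService delegate = Executors.newFixedThreadPool(8);
    private final ConcurrentMap<Object, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    // Tasks submitted with the same stripe (e.g. a WebSocket session id) are chained
    // one after another; tasks with different stripes may run concurrently.
    public void execute(Object stripe, Runnable task) {
        tails.compute(stripe, (key, tail) -> {
            CompletableFuture<Void> previous = (tail != null) ? tail : CompletableFuture.<Void>completedFuture(null);
            return previous.thenRunAsync(task, delegate);
        });
    }
}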

Send a message with SockJS to Spring Websocket handler over RabbitMQ

I'm developing message-broker communication between two applications: a Grails client and a Spring Boot microservice.
To keep my client side updated in a long-polling manner I use WebSockets.
I've successfully configured Grails and Spring Boot to use WebSockets over a RabbitMQ broker.
The Grails client gets all publications from Spring Boot as expected.
But I faced a problem sending a message from my JS code on the Grails side to the Spring Boot handler on the server side.
I follow all the default configuration from: https://github.com/zyro23/grails-spring-websocket/blob/010ea1fb3557a63b6ce0d87a0b055f6cbc7df319/README.md
I wrote the same config on the Spring Boot side:
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/topic", "/queue")
            .setRelayHost(brokerRelayHost)
            .setSystemLogin(brokerRelayUsername)
            .setSystemPasscode(brokerRelayPassword);
    config.setApplicationDestinationPrefixes("/app");
}

@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
    registry.addEndpoint("/stomp").withSockJS();
}
My client code calls:
client.send("/app/hello", {}, JSON.stringify("world"));
But the @MessageMapping("/hello") annotation doesn't trigger my Spring Boot handler methods.
Another strange thing is that when I enable Grails handlers with the same annotation, they work fine and receive all messages.
I've monitored the RabbitMQ admin console, and it seems that in the Spring Boot case the client never sends the message to the broker.
Has anybody seen the same issue with cross-application WebSocket message sending?
Thanks in advance!
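For context, a server-side handler for this destination would normally look like the sketch below (class and method names are illustrative): with the "/app" application prefix configured above, client.send("/app/hello", ...) should be routed to a @MessageMapping("/hello") method.
@Controller
public class HelloController {

    // The "/app" prefix is stripped by the broker configuration, so the mapping is just "/hello".
    @MessageMapping("/hello")
    @SendTo("/topic/greetings")
    public String greet(String world) {
        return "hello, " + world;
    }
}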

How to configure StepExecutionListener with Spring Integration DSL

I am trying to configure a Spring Batch listener to send a message to a Spring Integration Gateway for StepExecution events.
The following link explains how to configure this with XML:
http://docs.spring.io/spring-batch/trunk/reference/html/springBatchIntegration.html#providing-feedback-with-informational-messages
How can this be set up using the Spring Integration DSL? I've found no way to configure a gateway with a service interface using the DSL.
At the moment I worked around this by implementing an actual StepExecutionListener and having it call an interface annotated with @MessagingGateway (calling the corresponding @Gateway method) in order to get a message onto a channel. I then set up an Integration DSL flow for this channel.
Is there a simpler way using the DSL, avoiding that workaround? Is there some way to connect a Batch listener directly to a gateway, like one can with the XML config?
Cheers,
Menno
First of all, the SI DSL is just an extension of the existing SI Java and annotation configuration, so it can be used together with any other Java config. Of course, an XML @Import is also possible.
There is no gateway configuration in the DSL, because its methods can't be wired to a linear IntegrationFlow. You need to provide a downstream flow for each method.
So, @MessagingGateway is the right way to go:
@MessagingGateway(name = "notificationExecutionsListener", defaultRequestChannel = "stepExecutionsChannel")
public interface MyStepExecutionListener extends StepExecutionListener {}
On the other hand, @MessagingGateway parsing, as well as <gateway> tag parsing, ends up with a GatewayProxyFactoryBean definition. So you can just declare that bean if you don't want to introduce a new class:
@Bean
public GatewayProxyFactoryBean notificationExecutionsListener(MessageChannel stepExecutionsChannel) {
    GatewayProxyFactoryBean gateway = new GatewayProxyFactoryBean(StepExecutionListener.class);
    gateway.setDefaultRequestChannel(stepExecutionsChannel);
    return gateway;
}
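Either way, the gateway only produces messages; as noted above, you still have to provide a downstream flow for stepExecutionsChannel. A minimal sketch (the logging handler is just an example; for afterStep() you would need a flow that returns an ExitStatus reply):
@Bean
public IntegrationFlow stepExecutionsFlow() {
    return IntegrationFlows
            .from("stepExecutionsChannel")
            .handle(message -> System.out.println("StepExecution event: " + message.getPayload()))
            .get();
}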
After the latest Milestone 3, I have an idea to introduce nested flows, where we might be able to introduce gateway support for flows. Something like this:
@Bean
public IntegrationFlow gatewayFlow() {
    return IntegrationFlows
            .from(MyGateway.class, g ->
                    g.method("save", f -> f.transform(...)
                                           .filter(...))
                     .method("delete", f -> f.handle(...)))
            .handle(...)
            .get();
}
However, I'm not sure that it would simplify life, since any nested lambda just adds more noise and might break the loose coupling principle.

Spring Integration: Obtaining logs and handling callbacks from the default MQTT Paho client

Below is an interesting example of sending messages over MQTT with the standard outbound-channel-adapter (not the MQTT outbound adapter):
https://github.com/joshlong/spring-integration-mqtt
The authors implement their own message handler and pass it to the adapter.
Now my question is: is it possible to implement a custom message handler using the MQTT outbound adapter? Or is it only possible with the general outbound-channel-adapter of Spring Integration?
My objective is to obtain logs and handle callbacks from the Paho client, so I can, for example, handle connection errors, timeouts, etc.
Spring Integration 4.0 provides the MQTT module with MqttPahoMessageHandler as the default implementation of AbstractMqttMessageHandler.
I'd say that you can extend MqttPahoMessageHandler to achieve your MqttCallback wishes, but yes, you can use that custom MessageHandler implementation only from <int:outbound-channel-adapter ref="">.
The out-of-the-box <int-mqtt:outbound-channel-adapter> just populates a bean for MqttPahoMessageHandler, and you can't change that behaviour.
On the other hand, once you start configuring Spring Integration with Java config, you deal only with classes, so there are no custom tags to restrict you:
@ServiceActivator(inputChannel = "sendToMqttChannel")
@Bean
public MessageHandler mqttHandler() {
    return new MyMqttPahoMessageHandler();
}
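As a rough sketch of what such a MyMqttPahoMessageHandler could look like, here is a plain MessageHandler built directly around the Paho client, so you can register your own MqttCallback for connection-lost and delivery events (the class name, broker URL, and topic below are illustrative, not part of the answer above):
// org.eclipse.paho.client.mqttv3.* and org.springframework.messaging.*
public class MyMqttPahoMessageHandler implements MessageHandler, MqttCallback {

    private final MqttClient client;

    public MyMqttPahoMessageHandler() {
        try {
            this.client = new MqttClient("tcp://localhost:1883", "myPublisher");
            this.client.setCallback(this); // receive connectionLost/deliveryComplete callbacks
            this.client.connect();
        }
        catch (MqttException e) {
            throw new IllegalStateException("Unable to connect to the MQTT broker", e);
        }
    }

    @Override
    public void handleMessage(Message<?> message) throws MessagingException {
        try {
            this.client.publish("myTopic", message.getPayload().toString().getBytes(), 1, false);
        }
        catch (MqttException e) {
            throw new MessageHandlingException(message, "Failed to publish to MQTT", e);
        }
    }

    @Override
    public void connectionLost(Throwable cause) {
        // log and/or schedule a reconnect here
    }

    @Override
    public void messageArrived(String topic, MqttMessage mqttMessage) {
        // not used by an outbound-only handler
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) {
        // delivery confirmation callback
    }
}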
