Sending JMS messages in a Spring WebFlux reactive handler: is it blocking?

Is this the correct way to handle a request reactively? I see two threads: a reactive NIO thread that runs up to and including flatMap(fareRepo::save), and a computation thread that starts at the message send and runs through ServerResponse.build(). Is this the right way to handle the request reactively? Note that fareRepo is a reactive Couchbase repository.
Thanks.
return request.bodyToMono(Fare.class)
        .flatMap(fareRepo::save)
        .flatMap(fs -> {
            logger.info("sending message: {}, to queue", fs.getId());
            jmsTemplate.send("fare-request-queue", session -> session.createTextMessage(fs.getId()));
            return Mono.just(fs);
        })
        .flatMap(fi -> ServerResponse.created(URI.create("/fare/" + fi.getId())).build());

I'm assuming you're using Spring Framework's JmsTemplate implementation, which is blocking.
Without more context, we can only assume that you have a blocking operation in the middle of a reactive operator and that this will cause issues in your application.

Spring's JmsTemplate will block your request thread, which is not good in a reactive design. You can try .publishOn(Schedulers.elastic()), which switches execution to another thread so the blocking send does not tie up the request thread. Since this is an I/O-bound operation, use Schedulers.elastic() (in newer Reactor versions, Schedulers.boundedElastic() replaces the deprecated elastic()).
return request.bodyToMono(Fare.class)
        .flatMap(fareRepo::save)
        .publishOn(Schedulers.elastic())
        .flatMap(fs -> {
            logger.info("sending message: {}, to queue", fs.getId());
            jmsTemplate.send("fare-request-queue", session -> session.createTextMessage(fs.getId()));
            return Mono.just(fs);
        })
        .flatMap(fi -> ServerResponse.created(URI.create("/fare/" + fi.getId())).build());
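As a hedged alternative sketch for newer Reactor versions (names such as jmsTemplate and fareRepo are taken from the question; this is not the answer's original code), the blocking send can be wrapped in Mono.fromCallable and subscribed on Schedulers.boundedElastic():
return request.bodyToMono(Fare.class)
        .flatMap(fareRepo::save)
        .flatMap(fs -> Mono.fromCallable(() -> {
                    // the blocking JMS send runs on a boundedElastic thread,
                    // so the event-loop thread is never blocked
                    jmsTemplate.send("fare-request-queue", session -> session.createTextMessage(fs.getId()));
                    return fs;
                })
                .subscribeOn(Schedulers.boundedElastic()))
        .flatMap(fi -> ServerResponse.created(URI.create("/fare/" + fi.getId())).build());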

Related

Facing io.netty.handler.timeout.ReadTimeoutException: null while consuming server sent events

I am new to Spring WebFlux. I have a client application that consumes server-sent events. The events are published by the server at random times with no fixed delay, but the consumer throws io.netty.handler.timeout.ReadTimeoutException: null after 60 seconds if there is no event.
Server-sent events consumer code:
webClient.get()
        .uri("http://localhost:8080/events")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(type)
        .subscribe(event -> process(event));
I need the client to be connected even if there is no event for a long time.
Full exception:
r.netty.http.client.HttpClientConnect [...] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
reactor.Flux.MonoFlatMapMany.1 onError(org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException)
reactor.Flux.MonoFlatMapMany.1
org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141) ~[spring-webflux-5.3.5.jar:5.3.5]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
In the Mozilla documentation for server-sent events there is a note:
A colon as the first character of a line is in essence a comment, and is ignored. Note: The comment line can be used to prevent connections from timing out; a server can send a comment periodically to keep the connection alive.
So periodically sending comments can keep the connection alive. How do we send a comment? Spring has the class ServerSentEvent, which has the method ServerSentEvent#comment. If we use this class in combination with, for instance, Flux#interval, we can merge in events containing only keep-alive comments.
Here is an example from a project I built a while back:
@Bean
public RouterFunction<ServerResponse> foobars() {
    return route()
            .path("/api", builder -> builder
                    .GET("/foobar/{id}", accept(TEXT_EVENT_STREAM), request -> ok()
                            .contentType(MediaType.TEXT_EVENT_STREAM)
                            .header("Cache-Control", "no-transform")
                            .body(Flux.merge(
                                            foobarHandler.stream(request.pathVariable("id")),
                                            Flux.interval(Duration.ofSeconds(15))
                                                    .map(aLong -> ServerSentEvent.<List<FoobarResponse>>builder()
                                                            .comment("keep alive")
                                                            .build())),
                                    new ParameterizedTypeReference<ServerSentEvent<List<FoobarResponse>>>() {})))
            .build();
}
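On the wire each heartbeat is just an SSE comment line (something like ":keep alive"); per the note quoted above, clients ignore comment-only lines, so consumers get the keep-alive effect without receiving any spurious data.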
WebFlux uses a default timeout that eventually surfaces as io.netty.handler.timeout.ReadTimeoutException: null.
You can prevent this error by passing a custom fallback to one of the timeout method overloads:
public final Flux<T> timeout(Duration timeout, @Nullable Publisher<? extends T> fallback);
Additionally, you can use operators such as onErrorContinue, onErrorReturn, etc. to handle exceptions in the Flux properly, for example:
return webClient.get().uri(url).retrieve().bodyToFlux(String.class)
        .timeout(timeout, Mono.error(new ReadTimeoutException("Timeout")))
        .onErrorContinue((e, i) -> {
            // Log the error here.
        });
If you want to disable all of these logs, you can add this line to application.properties:
logging.level.reactor.netty.http.client.HttpClient=OFF

Can Reactive Kafka Receiver work with non-reactive Elasticsearch client?

Below is sample code that uses reactor-kafka to read data from a topic (with retry logic) whose records are published by a non-reactive producer. Inside my doOnNext() consumer I am using the non-reactive Elasticsearch client, which indexes the record. There are a few things I am still unclear about:
I know that consumers and producers are independent, decoupled systems, but is it recommended that reactive consumers also have a reactive producer?
If I am using something non-reactive, in this case the Elasticsearch client org.elasticsearch.client.RestClient, does the "reactiveness" of the code still work? If it does or does not, how do I test it? (By "reactiveness" I mean the non-blocking I/O part, i.e. if I spawn three reactive consumers and one is latent for some reason, its thread should be freed up and used for another reactive consumer.)
In general: if I wrap some API with reactive clients, should that API be reactive as well?
public Disposable consumeRecords() {
    long maxAttempts = 3, duration = 10;
    RetryBackoffSpec retrySpec = Retry.backoff(maxAttempts, Duration.ofSeconds(duration)).transientErrors(true);
    Consumer<ReceiverRecord<K, V>> doOnNextConsumer = x -> {
        // use non-reactive Elasticsearch client and index record x
    };
    return KafkaReceiver.create(receiverOptions)
            .receive()
            .doOnNext(record -> {
                try {
                    // calling the non-reactive consumer
                    doOnNextConsumer.accept(record);
                } catch (Exception e) {
                    throw new ReceiverRecordException(record, e);
                }
                record.receiverOffset().acknowledge();
            })
            .doOnError(t -> log.error("Error occurred: ", t))
            .retryWhen(retrySpec)
            .onErrorContinue((e, record) -> {
                ReceiverRecordException receiverRecordException = (ReceiverRecordException) e;
                log.error("Retries exhausted for: " + receiverRecordException);
                receiverRecordException.getRecord().receiverOffset().acknowledge();
            })
            .repeat()
            .subscribe();
}
I got some understanding of this.
A reactive KafkaReceiver will internally call some API; if that API is blocking, then even though KafkaReceiver is "reactive", the I/O will not be non-blocking and the receiver thread will be blocked, because you are calling a blocking, non-reactive API.
You can test this by creating a simple server that blocks calls for some time (e.g. sleeps) and calling that server from this receiver.
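As a hedged sketch of one common workaround (my own, not from the original answer): wrap the blocking index call in Mono.fromCallable and offload it to Schedulers.boundedElastic(), so the receive loop itself stays unblocked. Here indexRecord is a hypothetical stand-in for the blocking Elasticsearch call:
return KafkaReceiver.create(receiverOptions)
        .receive()
        .flatMap(record -> Mono.fromCallable(() -> {
                    indexRecord(record); // hypothetical blocking Elasticsearch call
                    return record;
                })
                .subscribeOn(Schedulers.boundedElastic()) // blocking work off the receive thread
                .doOnNext(r -> r.receiverOffset().acknowledge()))
        .subscribe();
Note that flatMap processes records concurrently, so offsets may be acknowledged out of order; use concatMap instead if ordering matters.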

How to tell RSocket to read data stream by Java 8 Stream which backed by Blocking queue

I have the following scenario: my program uses a blocking queue to process messages asynchronously. Multiple RSocket clients wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream bound to the Flux should emit it. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the Stream supplier being triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(
            () -> Stream.generate(() -> streamValue(listenerQ))).map(q -> {
        System.out.println("I got an event : " + q.getResult());
        return q;
    });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API; I think that is why there aren't simple bridge APIs to help you implement it. You should first come up with a clean solution for turning the BlockingQueue into a Flux; the Spring Boot part then becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take()) // blocks until a message arrives
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid side effects before the subscribe, so don't register your listener until inside the body of the method. You will still need to improve this example to handle cancellation, or however you cancel the listener and interrupt the thread doing the take.
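For reference, a minimal Java sketch of the same idea (my own, not from the linked posts), assuming listenerQ is the queue registered as in the question and that cancellation interrupts the worker thread so the take() can unblock:
Flux<QueryResult> bridge = Flux.<QueryResult>generate(sink -> {
    try {
        sink.next(listenerQ.take()); // blocks until the queue yields an element
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
        sink.complete();
    }
}).subscribeOn(Schedulers.boundedElastic());
Flux.generate invokes the generator once per requested element, and subscribeOn pins that generation, including the blocking take(), to a boundedElastic thread instead of an event-loop thread.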

runOn followed by subscribeOn FLUX not working

My flow goes like this: I'm polling SQS on a separate thread using Flux.generate, and I'm sending the flux to a class which handles it in parallel, which is not working.
My poller goes like this:
return Flux.generate(synchronousSink -> {
            log.info(queueName + " queue Polling ...");
            List<Message> messages = sqs.receiveMessage(receive_request).getMessages();
            synchronousSink.next(messages);
        })
        .subscribeOn(Schedulers.parallel());
and my operations on the flux go like this:
events.parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(t -> log.info("Not printing anything"))
        .subscribe();
No events come through after runOn; if I remove runOn, everything works fine. Can anyone help me here?
Note: I'm using subscribeOn in the poller and runOn in the other class; does this cause any issue?

Webflux parallel connections somehow limited to 256

I have a simple setup of a server and a client. Here is the client:
Flux.range(1, 5000)
        .subscribeOn(Schedulers.parallel())
        .flatMap(i -> WebClient.create()
                .method(HttpMethod.POST)
                .uri("http://localhost:8080/test")
                .body(Mono.just(String.valueOf(i)), String.class)
                .exchange())
        .publishOn(Schedulers.parallel())
        .subscribe(response ->
                response.bodyToMono(String.class)
                        .publishOn(Schedulers.elastic())
                        .subscribe(body -> log.info("{}", body)));
and here is the server:
@PostMapping
public Mono<String> test(@RequestBody Mono<String> body) {
    return body.delayElement(Duration.ofSeconds(5));
}
Both run on Netty. With 5000 requests, only 256 ever seem to be in flight at a time; maybe someone has an idea what is causing this behavior?
This is not due to a WebClient limitation on connection pools; it actually comes from a Reactor implementation detail that you can change.
By default, Reactor operators such as flatMap use prefetch=32 (the number of elements requested from each inner publisher up front) and maxConcurrency=256 (the maximum number of inner publishers subscribed to concurrently).
You can use the variants of Flux.flatMap(Function mapper, int concurrency, int prefetch) to change that behavior.
Your code snippet is using a mix of subscribeOn and publishOn; given that you're doing reactive I/O work here, you shouldn't try to schedule work on an elastic/parallel scheduler. Removing those operators is best.
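As a hedged illustration of that advice (the webClient variable and the concurrency value are assumptions, not from the question), raising flatMap's concurrency lifts the 256-in-flight cap:
Flux.range(1, 5000)
        .flatMap(i -> webClient.post()
                .uri("http://localhost:8080/test")
                .bodyValue(String.valueOf(i))
                .retrieve()
                .bodyToMono(String.class),
                1000) // concurrency: up to 1000 requests in flight at once
        .subscribe(body -> log.info("{}", body));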
