I have a simple setup of server and client:
Flux.range(1, 5000)
    .subscribeOn(Schedulers.parallel())
    .flatMap(i -> WebClient.create()
        .method(HttpMethod.POST)
        .uri("http://localhost:8080/test")
        .body(Mono.just(String.valueOf(i)), String.class)
        .exchange())
    .publishOn(Schedulers.parallel())
    .subscribe(response ->
        response.bodyToMono(String.class)
            .publishOn(Schedulers.elastic())
            .subscribe(body -> log.info("{}", body)));
and here is the server:
@PostMapping
public Mono<String> test(@RequestBody Mono<String> body) {
    return body.delayElement(Duration.ofSeconds(5));
}
Both run on Netty. Does anyone have an idea what is causing this behavior?
This is not due to a WebClient limitation about connection pools; it actually comes from a Reactor implementation detail that you can change.
By default, Reactor operators such as flatMap have prefetch=32 (the number of elements we request before the end subscriber asks for those) and maxConcurrency=256 (the maximum number of elements processed concurrently by the operator).
You can use variants of Flux.flatMap(Function mapper, int concurrency, int prefetch) to change that behavior.
Your code snippet is using a mix of subscribeOn and publishOn; I'd say that given you're doing reactive I/O work with this code snippet, you shouldn't try to schedule work on an elastic/parallel scheduler. Removing those operators is best here.
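For illustration, here is a rough sketch of the snippet with an explicit concurrency (the value 500 is arbitrary and should be tuned to your server and connection pool) and without the scheduler hops:

Flux.range(1, 5000)
    .flatMap(i -> WebClient.create()
            .method(HttpMethod.POST)
            .uri("http://localhost:8080/test")
            .body(Mono.just(String.valueOf(i)), String.class)
            .exchange(), 500)                                  // raise flatMap concurrency from the default 256
    .flatMap(response -> response.bodyToMono(String.class))   // no nested subscribe, no scheduler hopping
    .subscribe(body -> log.info("{}", body));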
I'm having some trouble wrapping my head around a supposedly simple RESTful WS response handling scenario when using Spring WebFlux in combination with Kotlin coroutines. Suppose we have a simple WS method in our REST controller that is supposed to return a possibly huge number (millions) of response "things":
@GetMapping
suspend fun findAllThings(): Flow<Thing> {
    // Reactive DB query, return a flow of things
}
This works as one would expect: the result is streamed to the client as long as a streaming media type (e.g. "application/x-ndjson") is used. In more complex service calls that also account for the possibility of errors/warnings, I would like to return a response object of the following form:
class Response<T>(
    val errors: Flow<String>,
    val things: Flow<T>
)
The idea here being that a response either is successful (returning an empty error Flow and a Flow of things), or failed (errors contained in the corresponding Flow while the things Flow being empty). In blocking programming this is a quite common response idiom. My question now is how can I adapt this idiom to the reactive approach in Kotlin/Spring WebFlux?
I know it's possible to just return the Response as described (or Mono<Response> for Java users), but this somewhat defeats the purpose of being reactive, as the entire Mono has to exist in memory at serialization time. Is there any way to solve this? The only possible solution I can think of right now is a custom Spring Encoder that is smart enough to stream either errors or things (whatever is present).
How about returning Success/Error per Thing?
class Result<T> private constructor(val result: T?, val error: String?) {
    constructor(data: T) : this(data, null)
    constructor(error: String) : this(null, error)

    val isError = error != null
}
@GetMapping
suspend fun findAllThings(): Flow<Result<Thing>> {
    // Reactive DB query, return a flow of results
}
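For the Java/Reactor side, here is a minimal sketch of the same per-element idea; Result is a hypothetical Java counterpart of the Kotlin class above, and thingRepository stands in for your reactive DB query. A failure simply becomes the last element of the stream instead of a separate errors collection:

// Hypothetical Java counterpart of the Result class above
record Result<T>(T result, String error) {
    static <U> Result<U> ok(U value) { return new Result<>(value, null); }
    static <U> Result<U> failure(String message) { return new Result<>(null, message); }
}

@GetMapping(produces = "application/x-ndjson")
public Flux<Result<Thing>> findAllThings() {
    return thingRepository.findAll()                                  // placeholder reactive DB query
            .map(Result::ok)                                          // wrap each element as a success
            .onErrorResume(e ->                                       // map a failure to a terminal error element
                    Flux.just(Result.<Thing>failure(e.getMessage())));
}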
Is it possible to allow emitting values from a Flux conditionally based on a global boolean variable?
I'm working with Flux.delayUntil(...) but am not able to fully grasp the functionality, or my assumptions are wrong.
I have a global AtomicBoolean that represents the availability of a downstream connection and only want the upstream Flux to emit if the downstream is ready to process.
To represent the scenario, I created a (non-working) test sample:
// Randomly generates a boolean value every 5 seconds
private Flux<Boolean> signalGenerator() {
    return Flux.range(1, Integer.MAX_VALUE)
            .delayElements(Duration.ofMillis(5000))
            .map(integer -> new Random().nextBoolean());
}
and
Flux.range(1, Integer.MAX_VALUE)
    .delayElements(Duration.ofMillis(1000))
    .delayUntil(evt -> signalGenerator()) // ?? Only proceed when signalGenerator returns true
    .subscribe(System.out::println);
I have another scenario where a downstream process can accept only x messages a second. In the current non-reactive implementation we have a Semaphore of x permits and the thread is blocked if no more permits are available, with Semaphore permits resetting every second.
In both scenarios I want the upstream Flux to emit only when there is demand from the downstream process, and I do not want to buffer.
You might consider using Mono.fromRunnable() as an input to delayUntil(), like below.
Helper class:
public class FluxCondition {
    CountDownLatch latch = new CountDownLatch(10); // it depends, might be managed somehow

    // Blocks the subscribing thread until the latch has been fully counted down
    Runnable r = () -> { try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } };

    public Mono<Void> lock() { return Mono.fromRunnable(r); }

    public void release() { latch.countDown(); }
}
Usage:
FluxCondition delayCondition = new FluxCondition();
Flux.range(1, 10).delayUntil(o -> delayCondition.lock()).subscribe();

// ...
delayCondition.release(); // call this for each element
I guess there might be a better solution using Sinks and emitNext (a rough sketch follows below), but that might also require a condition variable for controlling the Flux flow.
According to my understanding, in reactive programming your data should be considered at every operator step, so it might be better to design your consumer as a reactive processor. In my case I had no choice and followed the approach described above.
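For completeness, here is a rough sketch of that Sinks-based idea (all names are illustrative): the downstream publishes its readiness into a sink, and delayUntil holds each element until a true signal shows up.

// Latest readiness signal published by the downstream component
Sinks.Many<Boolean> readySink = Sinks.many().replay().latest();

Flux.range(1, Integer.MAX_VALUE)
    .delayElements(Duration.ofSeconds(1))
    .delayUntil(evt -> readySink.asFlux()
            .filter(ready -> ready)   // only a true signal releases the element
            .next())                  // completes on the first true, letting the element through
    .subscribe(System.out::println);

// Elsewhere, whenever the downstream changes state:
readySink.tryEmitNext(true);   // downstream ready, elements may flow
readySink.tryEmitNext(false);  // downstream busy, hold subsequent elements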
I am using the functional endpoints of WebFlux. I translate exceptions sent by the service layer to an HTTP error code using onErrorResume:
public Mono<String> serviceReturningMonoError() {
    return Mono.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    return serviceReturningMonoError().flatMap(e -> ok().syncBody(e))
            .onErrorResume(e -> badRequest().syncBody(e.getMessage()));
}
It works well as long as the service returns a Mono. In case of a service returning a Flux, what should I do?
public Flux<String> serviceReturningFluxError() {
    return Flux.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    ???
}
Edit
I tried the approach below, but unfortunately it doesn't work. The Flux.error is not handled by the onErrorResume and is propagated to the framework. When the exception is unwrapped during the serialization of the HTTP response, Spring Boot's exception management catches it and converts it into a 500.
public Mono<ServerResponse> myHandler(ServerRequest request) {
    return ok().contentType(APPLICATION_JSON).body(serviceReturningFluxError(), String.class)
            .onErrorResume(exception -> badRequest().build());
}
I am actually surprised by the behaviour; is that a bug?
I found another way to solve this problem: catching the exception within the body method and mapping it to a ResponseStatusException.
public Mono<ServerResponse> myHandler(ServerRequest request) {
    return ok().contentType(MediaType.APPLICATION_JSON)
            .body(serviceReturningFluxError()
                    .onErrorMap(RuntimeException.class, e -> new ResponseStatusException(BAD_REQUEST, e.getMessage())), String.class);
}
With this approach Spring properly handles the response and returns the expected HTTP error code.
Your first sample is using Mono (i.e. at most one value), so it plays well with Mono<ServerResponse> - the value will be asynchronously resolved in memory and depending on the result we will return a different response or handle business exceptions manually.
In case of a Flux (i.e. 0..N values), an error can happen at any given time.
In this case you could use the collectList operator to turn your Flux<String> into a Mono<List<String>>, with a big warning: all elements will be buffered in memory. If the stream of data is large, or if your controller/client relies on streaming data, this is not the best choice.
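For reference, here is a minimal sketch of that collectList variant, reusing the handler and service names from the question:

public Mono<ServerResponse> handler(ServerRequest request) {
    return serviceReturningFluxError()
            .collectList()                                        // Flux<String> -> Mono<List<String>>, buffered in memory
            .flatMap(list -> ok().contentType(APPLICATION_JSON).syncBody(list))
            .onErrorResume(e -> badRequest().syncBody(e.getMessage()));
}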
I'm afraid I don't have a better solution for this issue and here's why: since an error can happen at any time during the Flux, there's no guarantee we can change the HTTP status and response: things might have been flushed already on the network. This is already the case when using Spring MVC and returning an InputStream or a Resource.
The Spring Boot error handling feature tries to write an error page and change the HTTP status (see ErrorWebExceptionHandler and implementing classes), but if the response is already committed, it will log error information and let you know that the HTTP status was probably wrong.
Though this is an old question, I'd like to answer it for anyone who may stumble upon this Stack Overflow post.
There is another way to address this particular issue (discussed below), without the need to cache / buffer all the elements in memory as detailed in one of the other answers. However, the approach shown below does have a limitation. First, I'll discuss the approach, then the limitation.
The approach
You need to first convert your cold Flux into a hot Flux. Then call .next() on the hot Flux to obtain a Mono<YourObject>. On this Mono, call .flatMap().switchIfEmpty().onErrorResume(). In the flatMap(), concatenate the returned object with the hot Flux stream.
Here's the original code snippet posted in the question, modified to achieve what is needed:
public Flux<String> serviceReturningFluxError() {
    return Flux.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    Flux<String> coldStrFlux = serviceReturningFluxError();

    // The following step is a very important step. It converts the cold flux
    // into a hot flux.
    Flux<String> hotStrFlux = coldStrFlux.publish().refCount(1, Duration.ofSeconds(2));

    return hotStrFlux.next()
            .flatMap(firstStr -> {
                Flux<String> reCombinedFlux = Mono.just(firstStr)
                        .concatWith(hotStrFlux);

                return ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .body(reCombinedFlux, String.class);
            })
            .switchIfEmpty(ServerResponse.notFound().build())
            .onErrorResume(throwable -> ServerResponse.badRequest().build());
}
The reason for converting from cold to hot Flux is that by doing so, a second redundant HTTP request is not made.
For a more detailed answer, please refer to the following Stack Overflow post, where I've commented on this in greater detail:
Return relevant ServerResponse in case of Flux.error
Limitation
While the above approach will work for exceptions / Flux.error() streams returned from the service, it will not work for any exceptions that may arise while emitting the individual elements from the flux after the first element is successfully emitted.
The assumption in the above code is simple: if the service throws an exception, the very first element returned from the service will be a Flux.error() element. This approach does not account for exceptions thrown in the returned Flux stream after the first element, for example due to a network connection issue that occurs after the first few elements have already been emitted.
Given an exchange using WebClient, filtered by a custom ExchangeFilterFunction:
@Override
public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next) {
    return next.exchange(request)
            .doOnSuccess(response -> {
                // ...
            });
}
Trying to access the response body more than once using response.bodyToMono() will cause the underlying HTTP client connector to complain that only one receiver is allowed. AFAIK, there's no way to access the body's Publisher in order to cache() its signals (and I'm not sure it would be a good idea, resource-wise), and no way to mutate or decorate the response object in a manner that allows access to its body (as is possible with ServerWebExchange on the server side).
That makes sense, but I am wondering if there are any ways I could subscribe to the response body's publisher from a form of filter such as this one. My goal is to log the request/response being sent/received by a given WebClient instance.
I am new to reactive programming, so if there are any obvious no-nos here, please do explain :)
Only for logging, you could add a wiretap to the HttpClient as described in this answer.
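As a rough sketch, wiring up the wiretap looks something like this, assuming Reactor Netty as the underlying client; the traffic is then logged at DEBUG level by the reactor.netty loggers:

// Enable Reactor Netty's wiretap logging for every request/response of this WebClient
HttpClient httpClient = HttpClient.create().wiretap(true);

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();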
However, your question is also interesting in a more general sense outside of logging.
One possible way is to create a duplicate of the ClientResponse instance with a copy of the original response body. This might go against reactive principles, but it got the job done for me, and I don't see big downsides given the small size of the response bodies in my client.
In my case, I needed to do so because the server answering the request (outside of my control) uses the HTTP status 200 OK even if requests fail. Therefore, I need to peek into the response body in order to find out whether anything went wrong and what the cause was. In my case, I evict the cached session cookie used in the request headers if the error message indicates that the session expired.
These are the steps:
1. Get the response body as a Mono of a String (cf. (1)).
2. Return a Mono.error in case an error is detected (cf. (2)).
3. Use the String of the response body to build a copy of the original response (cf. (3)).
You could also use a dependency on the ObjectMapper to parse the String into an object for analysis.
Note that I wrote this in Kotlin but it should be easy enough to adapt to Java.
@Component
class PeekIntoResponseBodyExchangeFilterFunction : ExchangeFilterFunction {

    override fun filter(request: ClientRequest, next: ExchangeFunction): Mono<ClientResponse> {
        return next.exchange(request)
            .flatMap { response ->
                // (1)
                response.bodyToMono<String>()
                    .flatMap { responseBody ->
                        if (responseBody.contains("Error message")) {
                            // (2)
                            Mono.error(RuntimeException("Response contains an error"))
                        } else {
                            // (3)
                            val clonedResponse = response.mutate().body(responseBody).build()
                            Mono.just(clonedResponse)
                        }
                    }
            }
    }
}
I have a Spring Boot controller which makes two service calls. The second call should occur only 10 seconds after getting the response from the first call.
public SomeResponse myAction() {
    res = serviceCallA();
    waitFor(10) {
        serviceCallB();
    }
    return res;
}
The action doesn't have to wait for the response from serviceCallB() in order to return its own response. The call to serviceCallB() just has to be triggered in a separate thread.
What's the best way to implement this? I need something like a ThreadPoolTaskExecutor, but with a delay.
Sample code would be awesome.
Use a promise, not the horrible Thread.sleep from 1999 that wastes precious system resources. Your options are CompletableFuture, RxJava Publisher constructs, or Spring's own Project Reactor.
Let serviceCallA return Mono<Something> (Project Reactor). Then:
res.delayElement(Duration.ofSeconds(10))
        .doOnNext(unused -> serviceCallB())
        .block();
There are probably six ways to do this in each library; the above is one.
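Since the question says the response does not have to wait for serviceCallB(), you could also avoid blocking entirely and fire and forget the delayed call; just one more sketch among the many possibilities:

SomeResponse res = serviceCallA();

// Schedule serviceCallB() 10 seconds from now without blocking the current thread
Mono.delay(Duration.ofSeconds(10))
        .doOnNext(tick -> serviceCallB())
        .subscribe();

return res;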
A very straightforward answer:
SomeResponse myAction() {
    res = serviceCallA();
    serviceCallB();
    return res;
}

@Async
void serviceCallB() {
    try {
        Thread.sleep(10000); // 10 secs
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    // do service B call stuff
}
More on @Async with Spring can be found in the Spring reference documentation.
Beware though: these calls will run the serviceCallB() logic on new threads, and if used without proper control they might cause memory issues and kill your server. Also note that @Async only takes effect when the annotated method is called through the Spring proxy (i.e. from another bean) and @EnableAsync is present in your configuration.
With the java.util.concurrent package you have the Executors:
ScheduledExecutorService ex = Executors.newSingleThreadScheduledExecutor();
ex.schedule(() -> serviceCallB(), 10, TimeUnit.SECONDS);
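If you would rather stay within Spring's abstractions (the question mentions ThreadPoolTaskExecutor), a TaskScheduler can do the same thing; a rough sketch:

// Typically registered as a bean rather than created inline
ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
scheduler.initialize();

scheduler.schedule(() -> serviceCallB(), Instant.now().plusSeconds(10));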