Handling exceptions and returning a proper HTTP code with WebFlux - Spring

I am using the functional endpoints of WebFlux. I translate exceptions sent by the service layer to an HTTP error code using onErrorResume:
public Mono<String> serviceReturningMonoError() {
    return Mono.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    return serviceReturningMonoError()
            .flatMap(e -> ok().syncBody(e))
            .onErrorResume(e -> badRequest().syncBody(e.getMessage()));
}
It works fine as long as the service returns a Mono. But what should I do when the service returns a Flux?
public Flux<String> serviceReturningFluxError() {
    return Flux.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    ???
}
Edit
I tried the approach below, but unfortunately it doesn't work. The Flux.error is not handled by onErrorResume and is propagated to the framework. When the exception is unwrapped during serialization of the HTTP response, Spring Boot's exception management catches it and converts it into a 500.
public Mono<ServerResponse> myHandler(ServerRequest request) {
    return ok().contentType(APPLICATION_JSON)
            .body(serviceReturningFluxError(), String.class)
            .onErrorResume(exception -> badRequest().build());
}
I am actually surprised by this behaviour; is it a bug?

I found another way to solve this problem: catching the exception within the body method and mapping it to a ResponseStatusException.
public Mono<ServerResponse> myHandler(ServerRequest request) {
    return ok().contentType(MediaType.APPLICATION_JSON)
            .body(serviceReturningFluxError()
                    .onErrorMap(RuntimeException.class,
                            e -> new ResponseStatusException(BAD_REQUEST, e.getMessage())),
                    String.class);
}
With this approach Spring properly handles the response and returns the expected HTTP error code.

Your first sample is using Mono (i.e. at most one value), so it plays well with Mono<ServerResponse>: the value will be resolved asynchronously in memory and, depending on the result, we will return a different response or handle business exceptions manually.
In case of a Flux (i.e. 0..N values), an error can happen at any given time.
In this case you could use the collectList operator to turn your Flux<String> into a Mono<List<String>>, with a big warning: all elements will be buffered in memory. If the stream of data is large or if your controller/client relies on streaming data, this is not the best choice.
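A minimal sketch of that collectList approach (not part of the original answer; it reuses serviceReturningFluxError() from the question, and syncBody matches the Spring version used there, bodyValue in newer releases):
public Mono<ServerResponse> handler(ServerRequest request) {
    // collectList() buffers every element in memory before the response is built,
    // so onErrorResume still sees the error before anything is written to the network.
    return serviceReturningFluxError()
            .collectList()
            .flatMap(list -> ServerResponse.ok()
                    .contentType(MediaType.APPLICATION_JSON)
                    .syncBody(list))
            .onErrorResume(e -> ServerResponse.badRequest().build());
}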
I'm afraid I don't have a better solution for this issue and here's why: since an error can happen at any time during the Flux, there's no guarantee we can change the HTTP status and response: things might have been flushed already on the network. This is already the case when using Spring MVC and returning an InputStream or a Resource.
The Spring Boot error handling feature tries to write an error page and change the HTTP status (see ErrorWebExceptionHandler and implementing classes), but if the response is already committed, it will log error information and let you know that the HTTP status was probably wrong.

Though this is an old question, I'd like to answer it for anyone who may stumble upon this Stack Overflow post.
There is another way to address this particular issue (discussed below), without the need to cache / buffer all the elements in memory as detailed in one of the other answers. However, the approach shown below does have a limitation. First, I'll discuss the approach, then the limitation.
The approach
First convert your cold Flux into a hot Flux. Then call .next() on the hot Flux to get a Mono<YourObject>. On this Mono, call .flatMap().switchIfEmpty().onErrorResume(). Inside the flatMap(), concatenate the returned YourObject with the hot Flux stream.
Here's the original code snippet posted in the question, modified to achieve what is needed:
public Flux<String> serviceReturningFluxError() {
    return Flux.error(new RuntimeException("error"));
}

public Mono<ServerResponse> handler(ServerRequest request) {
    Flux<String> coldStrFlux = serviceReturningFluxError();

    // The following step is a very important step. It converts the cold flux
    // into a hot flux.
    Flux<String> hotStrFlux = coldStrFlux.publish().refCount(1, Duration.ofSeconds(2));

    return hotStrFlux.next()
            .flatMap(firstStr -> {
                Flux<String> reCombinedFlux = Mono.just(firstStr)
                        .concatWith(hotStrFlux);
                return ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .body(reCombinedFlux, String.class);
            })
            .switchIfEmpty(ServerResponse.notFound().build())
            .onErrorResume(throwable -> ServerResponse.badRequest().build());
}
The reason for converting the cold Flux into a hot Flux is that the stream is subscribed to twice (once by next() and once when the response body is written); with a hot Flux the underlying source is only triggered once, so a second, redundant HTTP request is not made.
For a more detailed answer, please refer to the following Stack Overflow post, where I've commented on this in greater detail:
Return relevant ServerResponse in case of Flux.error
Limitation
While the above approach works for exceptions / Flux.error() streams returned from the service, it will not work for exceptions that arise while emitting individual elements of the Flux after the first element has been emitted successfully.
The assumption in the above code is simple: if the service throws an exception, the very first signal returned from the service will be the Flux.error(). The approach does not account for exceptions thrown in the returned Flux stream after the first element, for example due to a network connection issue that occurs after the first few elements have already been emitted.

Related

Spring WebFlux + Kotlin Response Handling

I'm having some trouble wrapping my head around a supposedly simple RESTful WS response handling scenario when using Spring WebFlux in combination with Kotlin coroutines. Suppose we have a simple WS method in our REST controller that is supposed to return a possibly huge number (millions) of response "things":
@GetMapping
suspend fun findAllThings(): Flow<Thing> {
    // Reactive DB query, return a flow of things
}
This works as one would expect: the result is streamed to the client as long as a streaming media type (e.g. "application/x-ndjson") is used. In more complex service calls that also account for the possibility of errors/warnings, I would like to return a response object of the following form:
class Response<T>(
    val errors: Flow<String>,
    val things: Flow<T>
)
The idea here is that a response is either successful (returning an empty errors Flow and a Flow of things) or failed (errors contained in the corresponding Flow while the things Flow is empty). In blocking programming this is quite a common response idiom. My question now is: how can I adapt this idiom to the reactive approach in Kotlin/Spring WebFlux?
I know it's possible to just return the Response as described (or Mono<Response> for Java users), but this somewhat defeats the purpose of being reactive, as the entire Mono has to exist in memory at serialization time. Is there any way to solve this? The only possible solution I can think of right now is a custom Spring Encoder that is smart enough to stream both errors and things (whatever is present).
How about returning Success/Error per Thing?
class Result<T> private constructor(val result: T?, val error: String?) {
    constructor(data: T) : this(data, null)
    constructor(error: String) : this(null, error)

    val isError = error != null
}
@GetMapping
suspend fun findAllThings(): Flow<Result<Thing>> {
    // Reactive DB query, return a flow of things
}

Mono returned by ServerRequest.bodyToMono() method not extracting the body if I return ServerResponse immediately

I am using web reactive in Spring WebFlux. I have implemented a handler function for a POST request. I want the server to return immediately, so I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {

    @Override
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called. It means the body is not getting extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();

Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});

processFlux.subscribeOn(Schedulers.elastic()).subscribe();

return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function gets called and sometimes it doesn't.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono, it'll throw an exception since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, that is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'ĂȘtre is instead to be non-blocking, so a single thread is never idly blocked, waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a single thread by default, so your "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
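A rough sketch of that idea (not part of the original answer; process() is a hypothetical method, and Schedulers.elastic() matches the Reactor version used in the question, boundedElastic() in newer ones):
public Mono<ServerResponse> handle(ServerRequest request) {
    return request.bodyToMono(String.class)                            // read and buffer the whole body first
            .doOnNext(body -> Mono.fromRunnable(() -> process(body))   // hypothetical processing method
                    .subscribeOn(Schedulers.elastic())                 // run it on a separate scheduler
                    .subscribe())                                      // fire-and-forget background task
            .then(ServerResponse.status(HttpStatus.CREATED).build());
}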

Webflux parallel connections somehow limited to 256

I have a simple setup of a server and a client. Here is the client:
Flux.range(1, 5000)
    .subscribeOn(Schedulers.parallel())
    .flatMap(i -> WebClient.create()
            .method(HttpMethod.POST)
            .uri("http://localhost:8080/test")
            .body(Mono.just(String.valueOf(i)), String.class)
            .exchange())
    .publishOn(Schedulers.parallel())
    .subscribe(response ->
            response.bodyToMono(String.class)
                    .publishOn(Schedulers.elastic())
                    .subscribe(body -> log.info("{}", body)));
and here is the server:
@PostMapping
public Mono<String> test(@RequestBody Mono<String> body) {
    return body.delayElement(Duration.ofSeconds(5));
}
Both run on Netty. The client never seems to have more than 256 requests in flight at a time. Maybe someone has an idea what is causing this behaviour?
This is not due to a WebClient limitation on connection pools; it actually comes from a Reactor implementation detail that you can change.
By default, Reactor operators such as flatMap have prefetch=32 (the number of elements we request before the end subscriber asks for those) and maxConcurrency=256 (the maximum number of elements processed concurrently by the operator).
You can use variants of Flux.flatMap(Function mapper, int concurrency, int prefetch) to change that behavior.
Your code snippet is using a mix of subscribeOn and publishOn; I'd say that given you're doing reactive I/O work with this code snippet, you shouldn't try to schedule work on an elastic/parallel scheduler. Removing those operators is best here.
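Putting both suggestions together, a sketch of the client side (not from the original answer; 1024 is just an arbitrary illustrative value) could look like this:
Flux.range(1, 5000)
    .flatMap(i -> WebClient.create()
            .method(HttpMethod.POST)
            .uri("http://localhost:8080/test")
            .body(Mono.just(String.valueOf(i)), String.class)
            .exchange(),
        1024)   // concurrency: allow up to 1024 in-flight requests instead of the default 256
    .subscribe(response -> log.info("{}", response.statusCode()));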

How to handle empty event in Spring reactor

Well, this sounds counter-intuitive to what reactive programming is, but I am unable to comprehend a way to handle nulls/exceptions.
private static class Data {

    public Mono<String> first() {
        return Mono.just("first");
    }

    public Mono<String> second() {
        return Mono.just("second");
    }

    public Mono<String> empty() {
        return Mono.empty();
    }
}
I understand that fundamentally unless a publisher publishes an event, a subscriber will not act. So a code like this would work.
Data data = new Data();
data.first()
    .subscribe(string -> Assertions.assertThat(string).isEqualTo("first"));
And if the first call returns empty, I can do this.
Data data = new Data();
data.empty()
    .switchIfEmpty(data.second())
    .subscribe(string -> Assertions.assertThat(string).isEqualTo("second"));
But how do I handle a case when both the calls return empty (typically this is an exception scenario that would need to be propagated to the user).
Data data = new Data();
data.empty()
    .switchIfEmpty(data.empty())
    .handle((string, sink) -> Objects.requireNonNull(string))
    .block();
The handle is not called in the above example since no event was published.
As JB Nizet pointed out, you can chain in a second switchIfEmpty with a Mono.error.
Or, if you're fine with a NoSuchElementException, you could chain in single(). It enforces a strong contract of exactly one element, otherwise propagating that standard exception.
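A minimal sketch of both options, reusing the Data class from the question (each block() below will throw):

Data data = new Data();

// Option 1: explicit error when everything is empty
data.empty()
    .switchIfEmpty(data.empty())
    .switchIfEmpty(Mono.error(new IllegalStateException("no value produced")))
    .block();

// Option 2: single() enforces exactly one element and propagates
// a NoSuchElementException if the chain completes empty
data.empty()
    .switchIfEmpty(data.empty())
    .single()
    .block();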

Can I access the request/response body on an ExchangeFilterFunction?

Given an exchange using WebClient, filtered by a custom ExchangeFilterFunction:
@Override
public Mono<ClientResponse> filter(ClientRequest request, ExchangeFunction next) {
    return next.exchange(request)
            .doOnSuccess(response -> {
                // ...
            });
}
Trying to access the response body more than once using response.bodyToMono() will cause the underlying HTTP client connector to complain that only one receiver is allowed. AFAIK, there's no way to access the body's Publisher in order to cache() its signals (and I'm not sure it'd be a good idea, resource-wise), as well as no way to mutate or decorate the response object in a manner that allows access to its body (like it's possible with ServerWebExchange on the server side).
That makes sense, but I am wondering if there are any ways I could subscribe to the response body's publisher from a form of filter such as this one. My goal is to log the request/response being sent/received by a given WebClient instance.
I am new to reactive programming, so if there are any obvious no-nos here, please do explain :)
For logging only, you could add a wiretap to the HttpClient, as described in this answer.
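For completeness, a sketch of that wiretap setup (the builder calls below are the standard Reactor Netty / Spring APIs, but exact names can vary slightly between versions):
HttpClient httpClient = HttpClient.create()
        .wiretap(true);   // logs request/response traffic; enable DEBUG for the reactor.netty HTTP client logger

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();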
However, your question is also interesting in a more general sense outside of logging.
One possible way is to create a duplicate of the ClientResponse instance with a copy of the previous response body. This might go against reactive principles, but it got the job done for me and I don't see big downsides given the small size of the response bodies in my client.
In my case, I needed to do so because the server answering the request (outside of my control) uses the HTTP status 200 OK even if the request fails. Therefore, I need to peek into the response body in order to find out whether anything went wrong and what the cause was. In my case I evict a session cookie in the request headers from the cache if the error message indicates that the session expired.
These are the steps:
Get the response body as a Mono of a String (cf. (1)).
Return a Mono.error in case an error is detected (cf. (2)).
Use the String of the response body to build a copy of the original response (cf. (3)).
You could also use a dependency on the ObjectMapper to parse the String into an object for analysis.
Note that I wrote this in Kotlin but it should be easy enough to adapt to Java.
@Component
class PeekIntoResponseBodyExchangeFilterFunction : ExchangeFilterFunction {

    override fun filter(request: ClientRequest, next: ExchangeFunction): Mono<ClientResponse> {
        return next.exchange(request)
            .flatMap { response ->
                // (1)
                response.bodyToMono<String>()
                    .flatMap { responseBody ->
                        if (responseBody.contains("Error message")) {
                            // (2)
                            Mono.error(RuntimeException("Response contains an error"))
                        } else {
                            // (3)
                            val clonedResponse = response.mutate().body(responseBody).build()
                            Mono.just(clonedResponse)
                        }
                    }
            }
    }
}
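Registering the filter on a WebClient is then a one-liner; a minimal Java sketch (the injected peekFilter refers to the component defined above, and the base URL is a placeholder):
WebClient webClient = WebClient.builder()
        .baseUrl("https://example.org")   // placeholder base URL
        .filter(peekFilter)               // the PeekIntoResponseBodyExchangeFilterFunction bean
        .build();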
