Handle onFailure from all streams - Quarkus

In the following scenario, if an error happens in doSomething, the onFailure handler works correctly:
this.restClient.getSomething()
    .flatMap((something) -> {
        return this.restClient.doSomething();
    })
    .flatMap((res) -> {
        return Uni.createFrom().item(Response.ok().build());
    })
    .onFailure().transform((e) -> {
        LOG.error("Failed with", e);
        throw new InternalServerErrorException("Failed!");
    });
The problem is that if the error happens in getSomething, onFailure isn't called and I receive:
Unhandled asynchronous exception, sending back 500
I've tried onFailure().call() and onFailure().retry(), and I've placed the onFailure at the start and in the middle of the chain, but nothing seems to work.

If I understand it correctly, getSomething creates the stream, but something goes wrong during that creation, i.e. the stream is never created.
Since the stream doesn't exist, no onFailure (or any other on* handler) can ever fire on it.
To avoid this, I usually create the stream from voidItem and then map it to the real value. That way, an error during creation is handled inside the stream, not in the creation code.
Example for a Uni:
Uni.createFrom().voidItem()
    .map(voidItem -> functionForRealValue())          // or .flatMap(...) if it returns a Uni
    .onFailure().transform(e -> handleFailure(e));
This way you should be able to handle all errors in the stream. The important point is that the stream has to be created first; otherwise there is nothing to call onFailure on.
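Applied to the code from the question, the pattern might look like this (a sketch, not from the original answer; it assumes the same restClient and LOG as above and standard Mutiny operators):
Uni.createFrom().voidItem()
    .flatMap(ignored -> this.restClient.getSomething())   // creation now happens inside the stream
    .flatMap(something -> this.restClient.doSomething())
    .flatMap(res -> Uni.createFrom().item(Response.ok().build()))
    .onFailure().transform(e -> {
        LOG.error("Failed with", e);
        // reached even when getSomething itself fails, because the
        // pipeline already exists when the failing call is made
        return new InternalServerErrorException("Failed!");
    });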

Related

How do I trigger the doOnError() in a Reactive Kafka consumer?

private Flux<Record> consumeRecord() {
    return reactiveKafkaConsumerTemplate
            .receive()
            .doOnNext(consumerRecord -> {
                Record record = consumerRecord.value();
                recordWorkflowService.handleRecord(record);
            })
            .map(ConsumerRecord::value)
            .doOnError(throwable -> {
                log.error("something bad happened while consuming : {}", throwable.getMessage());
            });
}
Currently this is the code I have in my consumer. When a record comes in, I do see that my recordWorkflowService.handleRecord is called and the record is processed successfully; however, I cannot get the error case to trigger.
I have a use case where I am consuming records from a Kafka topic and doing some processing on them. However, if any part of that processing fails, I do not want the Kafka record to be committed, so that it can get reprocessed. So if any error occurs in the recordWorkflowService, I want .doOnError() to be triggered and the offset not to be committed (so the record can be reprocessed).
Am I on the right path here? I have tried manually throwing an exception within handleRecord(), but .doOnError() never seems to get triggered.
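One common way to get this behavior, sketched here under the assumption that receive() emits reactor-kafka ReceiverRecord instances (whose offsets can be acknowledged manually) and reusing the names from the question: acknowledge the offset only after processing succeeds, so a failure leaves it uncommitted.
private Flux<Record> consumeRecord() {
    return reactiveKafkaConsumerTemplate
            .receive()
            .concatMap(receiverRecord ->
                    Mono.fromRunnable(() -> recordWorkflowService.handleRecord(receiverRecord.value()))
                            // acknowledge the offset only after handleRecord returns without throwing
                            .doOnSuccess(ignored -> receiverRecord.receiverOffset().acknowledge())
                            .thenReturn(receiverRecord.value()))
            .doOnError(throwable ->
                    log.error("processing failed, offset not acknowledged", throwable));
}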

How to tell RSocket to read a data stream via a Java 8 Stream backed by a BlockingQueue

I have the following scenario, whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream bound to the Flux will emit it. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the Stream supplier getting triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener: " + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(() -> Stream.generate(() -> streamValue(listenerQ)))
            .map(q -> {
                System.out.println("I got an event : " + q.getResult());
                return q;
            });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first; then the Spring Boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in "How can I create reactor Flux from a blocking queue?"
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until inside the body of the method. But you will need to improve this example to handle cancellation (however you cancel the listener) and to interrupt the thread doing the take.
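For comparison, here is a Java sketch of the same bridge using Flux.create, where cancellation interrupts the thread doing the take. Datalistener.unregister is a hypothetical cleanup method; the other names come from the question.
private Flux<QueryResult> addListenerHook(String clientName) {
    return Flux.create(sink -> {
        BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
        Datalistener.register(clientName, listenerQ);
        Thread drainer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    sink.next(listenerQ.take());  // blocks on its own thread, not a reactor one
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // subscriber cancelled; stop draining
            }
        });
        // runs on cancellation, error, or completion
        sink.onDispose(() -> {
            Datalistener.unregister(clientName, listenerQ);  // hypothetical cleanup
            drainer.interrupt();
        });
        drainer.start();
    });
}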

Mono returned by ServerRequest.bodyToMono() not extracting the body if I return a ServerResponse immediately

I am using the reactive web stack in Spring WebFlux. I have implemented a handler function for a POST request, and I want the server to return immediately. So I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called, which means the body is not being extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() like below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function is getting called and sometimes not.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception, since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, that is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'ĂȘtre is instead to be non-blocking, so a single thread is never left idly blocked, waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a single thread by default, so your "additional processing" in your map() method would still block that thread.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc.) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc. are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
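A compact sketch of that recipe (not from the original answers; process() is a hypothetical method doing the long-running work):
public Mono<ServerResponse> handle(ServerRequest request) {
    return request.bodyToMono(String.class)                           // read and buffer the whole body
            .doOnNext(body -> Mono.fromRunnable(() -> process(body))  // process() is hypothetical
                    .subscribeOn(Schedulers.boundedElastic())         // separate execution
                    .subscribe())
            .then(ServerResponse.status(HttpStatus.CREATED).build()); // respond without waiting
}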

PushStreamContent and exceptions in the middle of streaming/serializing

We're using PushStreamContent to stream some large lumps, with Content-Disposition headers set and the like. As a number of people have discovered, the drawback is the question of what happens when something goes wrong during streaming.
At the very least, we were trying to get the error logged on our side so someone could follow up.
Recently, I ran into a weird situation. Putting a try/catch around the streaming function worked well enough for errors encountered before we actually started streaming (i.e. errors in the SQL queries and the like), but if the error occurred later (such as during serialization), the catch block didn't fire.
Would anyone have any idea why that is?
e.g.
HttpResponseMessage response = new HttpResponseMessage(HttpStatusCode.OK);
try
{
    response.Content = new PushStreamContent((stream, content, context) =>
    {
        using (XmlWriter rWriter = PrepForXmlOutput(stream))
        {
            rpt.GenerateXmlReport(rWriter, reportParams, true);
        }
    }, "EventReport", extension);
}
catch (Exception e)
{
    // The first step of GenerateXmlReport() is to run the SQL;
    // if the error happens there, this fires and will log the exception.
    // If the error happens later, during the result serialization, this does NOT fire.
    Log.Error(e);
}
return response;
Hate it when I see the answer just after I hit Post.
A try/catch around the outside only covers up to the point where the HttpResponseMessage is returned; when and where the exception surfaces depends on how far the inner method gets before that return happens.
The try/catch needed to be around the inner call (the delegate where all the work happens) to cover the whole streaming lifecycle.

How can I handle an HttpHostConnectException when an OpenDolphin client sends a message?

Is there a way to handle the situation when a message is not delivered to the server? The Dolphin log reports the situation clearly, but I'd like to catch it from code. I was looking for some method like onError to override, analogous to onFinished:
clientDolphin.send(message, new OnFinishedHandlerAdapter() {
    @Override
    public void onFinished(List<ClientPresentationModel> presentationModels) {
        // Do something useful
    }
});
, but there is nothing like that. Also, wrapping the send call in a try/catch does not work (not surprising, since send does not block its caller).
I think there must be some easy way to get informed about an undelivered message, but I can't see it.
Thanks in advance for any answers!
You can assign an onException handler to the ClientConnector; in fact, you are supposed to do so. The handler is passed the exception object for whatever went wrong in the asynchronous send action.
Below is the default handler, which even tells you what you should do ;-)
Closure onException = { Throwable up ->
    def out = new StringWriter()
    up.printStackTrace(new PrintWriter(out))
    log.severe("onException reached, rethrowing in UI Thread, consider setting ClientConnector.onException\n${out.buffer}")
    uiThreadHandler.executeInsideUiThread { throw up } // not sure whether this is a good default
}
