How to use Spring WebClient to make multiple calls sequentially?

I read the topic
How to use Spring WebClient to make multiple calls simultaneously?, but my case is a bit different. I'm calling 2 different external services using WebClient, let's say from a method Mono<Void> A(), followed by Mono<Void> B(). My goal is to extract data from A(), then pass it to B(). Is there a correct way to avoid:
an asynchronous call (which leads to an IllegalArgumentException, since B() requests its arguments BEFORE A() completes);
a blocking call, because the system is reactive.
Is there a standard way to achieve this?

First scenario:
Mono<> a = getFromAByWebClient();
and you want to send this data to service B via a POST or PUT request.
Here, since a Mono carries a single object and you want to send it to the other API in the request body, you must have that data available before you call B. You should wait until the data has actually arrived from the first service; otherwise you will hit the API with empty data or get an exception.
Second scenario:
Since B depends on A, why not call service A from inside service B and get the data there?
In Spring reactive everything is a stream, so you can operate on one piece of data while others are still on their way, but the operation being performed must already have its data.
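In practice, the usual way to express "extract data from A, then pass it to B" without blocking is to chain the two calls with flatMap. A minimal sketch, assuming a shared WebClient and hypothetical endpoint paths and payload type:
// Hypothetical endpoints and types; the point is the flatMap chaining.
Mono<Void> result = webClient.get()
        .uri("/service-a/data")                      // call A
        .retrieve()
        .bodyToMono(DataFromA.class)                 // A's payload
        .flatMap(data -> webClient.post()            // B starts only after A has emitted its value
                .uri("/service-b/process")
                .bodyValue(data)                     // pass A's data to B
                .retrieve()
                .bodyToMono(Void.class));
// Nothing blocks; subscribing to 'result' runs A, then B, sequentially.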

Well, I was told how to refactor the code. The problem was fixed, and for the record, here is the solution:
The original code returned:
Mono.fromRunnable(() -> apply(param));
The apply method subscribed to the call of the external resource on its own:
void apply(Param param) {
    service.callRemote(x).subscribe();
    // ... some business logic ...
}
So it seems that when beanA.process() was followed by beanB.process(), the reactive pipeline fell apart, and the lambda passed to fromRunnable() branched off into a separate thread.
What was changed: the beanA and beanB methods now return the pipeline logic:
Mono.just(x).flatMap(v -> service.callRemote(v)).then();
apply() has been removed; the remote call is wrapped in flatMap() and integrated into the pipeline. Now it works as expected, calling the remote resources sequentially.
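Put together, the refactored shape looks roughly like this (a sketch based on the description above; it assumes service.callRemote returns a Mono and that the parameter types are as named here):
// Each bean returns a Mono that wraps the remote call instead of subscribing to it itself.
public Mono<Void> process(Param param) {
    return Mono.just(param)
            .flatMap(p -> service.callRemote(p))   // the remote call is part of the pipeline
            .then();
}

// A caller can then compose the beans so that B's remote call starts only after A's has completed:
Mono<Void> pipeline = beanA.process(param).then(beanB.process(param));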

Related

FireAndForget call to WebApi from Azure Function

I want to be able to call an HTTP endpoint (that I own) from an Azure Function at the end of the Azure Function request.
I do not need to know the result of the request
If there is a problem in the HTTP endpoint that is called I will log it there
I do not want to hold up the return to the client calling the initial Azure Function
Offloading the call of the secondary WebApi onto a background job queue is considered overkill for this requirement
Do I simply call HttpClient.PutAsync without an await?
I realise that the dependencies I have used up until the point that the call is made may well not be available when the call returns. Is there a safe way to check if they are?
My answer may cause some controversy, but you can always start a background task and execute it that way.
For anyone reading this answer, this is far from recommended. The OP has been very clear that they don't care about exceptions or understanding what sort of result the request is returning ...
Task.Run(async () =>
{
    using (var httpClient = new HttpClient())
    {
        await httpClient.PutAsync(...);
    }
});
If you want to ensure that the call has fired, it may be worth waiting for a second or two after the call is made to ensure it's actually on its way.
await Task.Delay(1000);
If you're worried about dependencies in the call, be sure to construct your payload (i.e. serialise it, etc.) outside the Task.Run; basically, minimise any work the background task does.

Start processing Flux response from server before completion: is it possible?

I have 2 Spring-Boot-Reactive apps, one server and one client; the client calls the server like so:
Flux<Thing> things = thingsApi.listThings(5);
And I want to have this as a list for later use:
// "extractContent" operation takes 1.5s per "thing"
List<String> thingsContent = things.map(ThingConverter::extractContent)
        .collect(Collectors.toList())
        .block()
On the server side, the endpoint definition looks like this:
@Override
public Mono<ResponseEntity<Flux<Thing>>> listThings(
        @NotNull @Valid @RequestParam(value = "nbThings") Integer nbThings,
        ServerWebExchange exchange
) {
    // "getThings" operation takes 1.5s per "thing"
    Flux<Thing> things = thingsService.getThings(nbThings);
    return Mono.just(new ResponseEntity<>(things, HttpStatus.OK));
}
The signature comes from the Open-API generated code (Spring-Boot server, reactive mode).
What I observe: the client jumps to things.map immediately but only starts processing the Flux after the server has finished sending all the "things".
What I would like: the server should send the "things" as they are generated so that the client can start processing them as they arrive, effectively halving the processing time.
Is there a way to achieve this? I've found many tutorials online for the server part, but none with a java client. I've heard of server-sent events, but can my goal be achieved using a "classic" Open-API endpoint definition that returns a Flux?
The problem seemed too complex to fit a minimal viable example in the question body; full code available for reference on Github.
EDIT: redirect link to main branch after merge of the proposed solution
I've got it running by changing 2 points:
First: I've changed the content type of the response of your /things endpoint to:
content:
  text/event-stream
Don't forget to also change the default response; otherwise the client will expect application/json and will wait for the whole response.
Second point: I've changed the return of ThingsService.getThings to this.getThingsFromExistingStream (the method you commented out).
I pushed my changes to a new branch fix-flux-response on your Github, so you can test them directly.
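For reference, here is roughly what the streaming setup boils down to once the content type is an event stream; this is a hedged sketch, not code from the repository (the controller mapping and WebClient wiring are assumed):
// Server side: declaring the endpoint as an event stream lets elements be flushed as they are emitted.
@GetMapping(value = "/things", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Thing> listThings(@RequestParam("nbThings") Integer nbThings) {
    return thingsService.getThings(nbThings);
}

// Client side: request the event stream and process each element as it arrives.
Flux<String> thingsContent = webClient.get()
        .uri("/things?nbThings=5")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(Thing.class)
        .map(ThingConverter::extractContent);   // runs per element, not after the full response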

Running a Mono in background while returning a response when using Spring Webflux

This question is related to Return immediately in spring web flux but I don't think it's the same (at least the answer there is not satisfactory for me).
I have a function returning a Mono that when invoked starts a long-running job. This function is invoked when a call is made to a Spring Webflux HTTP API. Here's an example:
@PutMapping("/{jobId}")
fun startNewJob(@PathVariable("jobId") jobId: String,
                request: ServerHttpRequest): Mono<ResponseEntity<Unit>> {
    val longRunningJob : Mono<Job> = startNewJob(jobId)
    return longRunningJob.map { job ->
        val jobUri = generateJobUri(request, job.id)
        ResponseEntity.created(jobUri).build<Unit>()
    }
}
The problem with the code above is that the "201 Created" response is only returned after the long-running job has completed. I want to kick off the longRunningJob in the background and return "201 Created" immediately.
I could perhaps do something like this:
@PutMapping("/{jobId}")
fun startNewJob(@PathVariable("jobId") jobId: String,
                request: ServerHttpRequest): Mono<ResponseEntity<Unit>> {
    startNewJob(jobId)
        .subscribeOn(Schedulers.newSingle("thread"))
        .subscribe()
    val jobUri = generateJobUri(request, jobId)
    val response = ResponseEntity.created(jobUri).build<Unit>()
    return Mono.just(response)
}
But it doesn't seem very idiomatic to me to have to call subscribe() manually (e.g. IntelliJ complains that I call subscribe() in a non-blocking scope). Isn't there a better way to compose the two "streams" without using an explicit subscribe? If so, how do I modify the startNewJob function above to achieve this?
AFAIK, using one of the subscribe methods is the only way to really start a job in the background with its own lifecycle (not tied to the returned publisher).
If you were to use one of the operators to combine the job publisher and the response publisher (e.g. zip or merge), then the lifecycle of the job publisher would be tied to the response publisher, which is not what you want for a background job.
One thing you might want to consider is kicking off the background job within the response publisher stream, rather than directly in the method body, e.g. via doOnSubscribe or from an operator upstream of the response.
This would tie the start of the background job to the onSubscribe events of the response publisher, but still allow it to complete in the background.
Also note, that if you want to be able to cancel the background job (e.g. maybe during application shutdown), you'll need to save the Disposable returned from subscribe so you can later call dispose on it. This might be better done from some type of BackgroundJobManager that could keep track of all the jobs running.
For example, the work could also be scheduled directly on a dedicated Scheduler:
private static final Scheduler backgroundTaskScheduler = Schedulers.newParallel("backgroundTaskScheduler", 2);
backgroundTaskScheduler.schedule(() -> doBackgroundJob());
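For illustration, a hedged Java sketch of the doOnSubscribe variant mentioned above (it mirrors the question's controller; the jobService collaborator returning the Mono<Job>, the URI built from the path variable, and the boundedElastic scheduler are assumptions):
@PutMapping("/{jobId}")
public Mono<ResponseEntity<Void>> startNewJob(@PathVariable("jobId") String jobId,
                                              ServerHttpRequest request) {
    URI jobUri = generateJobUri(request, jobId);             // assumed helper from the question
    return Mono.just(ResponseEntity.created(jobUri).<Void>build())
            // the job is kicked off when the response publisher is subscribed to,
            // but its lifecycle is not tied to the response completing
            .doOnSubscribe(subscription ->
                    jobService.startNewJob(jobId)            // assumed collaborator returning Mono<Job>
                            .subscribeOn(Schedulers.boundedElastic())
                            .subscribe());
}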

Heavy REST Application

I have an Enterprise Service Bus (ESB) that posts data to microservices (MCS) via REST. I use Spring to do this. The main problem is that I have 6 microservices that run one after another, so it looks like this: MCS1 -> ESB -> MCS2 -> ESB -> ... -> MCS6
So my problem looks like this (on the ESB side):
#RequestMapping(value = "/rawdataservice/container", method = RequestMethod.POST)
#Produces(MediaType.APPLICATION_JSON)
public void rawContainer(#RequestBody Container c)
{
// Here i want to do something to directly send a response and afterwards execute the
// heavy code
// In the heavy code is a postForObject to the next Microservice
}
And the Service does something like this:
#RequestMapping(value = "/container", method = RequestMethod.POST)
public void addDomain(#RequestBody Container container)
{
heavyCode();
RestTemplate rt = new RestTemplate();
rt.postForObject("http://134.61.64.201:8080/rest/rawdataservice/container",container, Container.class);
}
But I don't know how to do this. I looked up the postForLocation method, but I don't think it would solve the problem.
EDIT:
I have a chain of microservices. The first microservice waits for a response from the ESB. Before responding, the ESB posts to the next microservice and waits for its response, and that one does the same as the first. So the problem is that the first microservice is blocked until the complete microservice route has finished.
[Image: ESB route]
Maybe a picture could help. The chain is: 1. rawdataService, 2. metadataservice, 3. syntaxservice, 4. semantik.
// Here I want to directly send a response and afterwards execute the heavy code.
The usual spelling of that is to use the data from the HTTP request to create a Runnable that knows how to do the work, and dispatch that Runnable to an executor service for later processing. Much the same thing works with a queue: you copy the data you need into a queue, which is polled by other threads ready to complete the work.
The HTTP request handler then returns as soon as the executor service/queue has accepted the pending work. The most common implementation is to return a "202 Accepted" response, including in the Location header the URL of a resource that will allow the client to monitor the work in progress, if desired.
In Spring, it might be ResponseEntity that manages the codes for you. For instance
ResponseEntity.accepted()....
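A hedged sketch of that pattern applied to the rawContainer endpoint from the question (the executor wiring, the next-service URL, and the Location value are illustrative assumptions, not part of the original code):
import java.net.URI;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@RestController
public class RawDataController {

    // a simple executor; in a real application this would be a managed, configurable TaskExecutor
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    @RequestMapping(value = "/rawdataservice/container", method = RequestMethod.POST)
    public ResponseEntity<Void> rawContainer(@RequestBody Container c) {
        // hand the heavy work (which ends with a postForObject to the next service) to the executor
        worker.submit(() -> {
            heavyCode(c);
            new RestTemplate().postForObject(
                    "http://next-microservice.example/rest/container",   // hypothetical next-service URL
                    c, Container.class);
        });
        // respond immediately; the Location header points at a hypothetical status resource
        return ResponseEntity.accepted()
                .location(URI.create("/rawdataservice/container/status"))
                .build();
    }

    private void heavyCode(Container c) { /* the long-running part */ }
}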
See also:
How to respond with HTTP 400 error in a Spring MVC @ResponseBody method returning String?
REST - Returning Created Object with Spring MVC
From the caller's point of view, it would invoke RestTemplate.postForLocation, receive a URI, and throw away that URI, because the microservice only needs to know that the work has been accepted.
Side note: in the long term, you are probably going to want to be able to correlate the activities of the different micro services, especially when you are troubleshooting. So make sure you understand what Gregor Hohpe has to say about correlation identifiers.

handling http calls from within an EJB transaction

This is the code I have:
// EJB
public class Bean1 {
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public String method1() {
        method2();
        DBupdates();
        return "";
    }
}

// plain Java class
public class Class2 {
    public void method2() {
        // call which may take a long time (but we don't want to wait for it to complete);
        // makes HTTP calls to an external URL
        callExternalUrlMethod();   // placeholder name for that call
    }
}
The issue is that the HTTP call may take a long time. However, the response of the call decides the next steps in method1 (the DB updates and the response). The response needs to go back to the end user, and I cannot make the end user wait forever.
I can handle this situation in two ways:
Move method2 into the EJB and annotate it with TransactionAttributeType.NEVER, so that the HTTP call is not part of the transaction and method1's transaction is not waiting on it. In this case, the container manages method1's transaction, does no DB updates, and returns null if it didn't hear back from method2. How long does method1's transaction wait before "returning"?
Use the JBoss annotation and put a TransactionTimeout of 2 minutes on method1(): in this case, if the HTTP call does not complete within 2 minutes, method1 can return null and do no DB updates.
Which of these two approaches is advisable and fault-proof?
Thanks for your insights.
When you use TransactionAttributeType.NEVER, the transaction isn't propagated further.
You can use the @Asynchronous annotation on a method which returns a Future<V> object. Then you can invoke get(timeout, unit) on that object, which waits up to the given time for the result of type V, but this is EJB 3.1 specific.
You can also try the JBoss-specific annotation @TransactionTimeout at method or class level, or configure it in jboss.xml or jboss-service.xml depending on your server version. This works with EJB 3.0, but you will lose portability of the application.
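A minimal sketch of the @Asynchronous approach under those assumptions (bean and method names here are illustrative, not from the question):
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class RemoteCaller {

    @Asynchronous
    public Future<String> callExternalUrl() {
        String response = doSlowHttpCall();      // the long-running HTTP call
        return new AsyncResult<>(response);      // wraps the value for the container
    }

    private String doSlowHttpCall() { /* ... */ return ""; }
}

// Inside method1: wait at most 2 minutes for the result, then decide what to do.
Future<String> pending = remoteCaller.callExternalUrl();
try {
    String result = pending.get(2, TimeUnit.MINUTES);
    // got an answer in time: proceed with the DB updates based on it
} catch (TimeoutException e) {
    // no answer within 2 minutes: skip the DB updates and return
} catch (InterruptedException | ExecutionException e) {
    // the call itself failed or was interrupted
}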
