Spring Integration concurrency - detecting completion

I have a Spring Integration workflow that embeds task executors in its channels to enable concurrent processing. I manually fire off processing via a gateway and need to block the main thread until all asynchronous processing has completed. Is there a way to accomplish this? I have tried thinking along the lines of barriers, latches, and channel interceptors, but no solution is forthcoming. Any ideas?

Have a look at the Aggregator section from the reference manual:
http://static.springsource.org/spring-integration/docs/latest-ga/reference/htmlsingle/#aggregator
If an aggregator is downstream of the gateway, the gateway caller can block (or use a Future, if that is defined as the return type on the gateway interface) until the aggregator has received and released the correlated group of messages, even if those messages were processed asynchronously on different threads.
In essence, the Aggregator is itself a barrier, and its default release strategy amounts to a countdown latch based on the sequence size of the message group.
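For illustration, here is a minimal sketch of such a gateway; the interface and payload types are assumptions for the example, not taken from the original workflow:

import java.util.Collection;
import java.util.List;
import java.util.concurrent.Future;

// Hypothetical gateway interface. Declaring a Future return type makes the
// gateway asynchronous, so the caller decides when to block; get() returns
// once the downstream aggregator has released the correlated group.
public interface WorkflowGateway {
    Future<List<Result>> process(Collection<WorkItem> items);
}

The caller invokes process(items) to fan out the work and blocks on get() only when it actually needs the aggregated result.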
Hope that helps.
-Mark

To answer my own question, here's what I ended up doing:
Create a customized ExecutorService that knows when to shut down - in my case this was simply when releasing the last active thread, i.e. after executing the last piece of the workflow:
public class WorkflowThreadPoolExecutor extends ScheduledThreadPoolExecutor {

    public WorkflowThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // The thread finishing the last piece of work is the only one still
        // active, so it shuts the pool down on its way out.
        if (getActiveCount() == 1) {
            shutdown();
        }
    }
}
Await executor termination in the main thread as follows:
try {
    executorService.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
} catch (InterruptedException ex) {
    LOG.error("message=Error awaiting termination of executor", ex);
}
Hope this helps someone else facing a similar issue.

Related

Pick next message after previous fully processed

I'm stuck with this kind of problem. I use Kafka as the transport between services. I tried to draw a sequence diagram.
First of all, the planning service gets the main task, handles it, and then passes it on to a few other services. My main problem is: I must not pick up another main task until, for example, the second service has sent its result to Kafka and the planning service has processed that result.
My main listener has this structure:
@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send the message to the first service; in that listener a new task is sent to the second
    sendTaskToQueue(task);
    acknowledgment.acknowledge();
    log.info("All done in method");
}
As I understand it, I need to call acknowledge() only after all my logic with the result from the second service is done. So I tried to add a boolean flag in a CompletableFuture, setting it to true when my planning service gets the response from the second service, and doing a blocking get() in the main listener to continue afterwards.
private CompletableFuture<Boolean> isMessageProcessed = new CompletableFuture<>();

@KafkaListener(topics = "${report}")
public void listenReport(ReportMessage reportMessage) {
    isMessageProcessed = CompletableFuture.completedFuture(true);
}

@KafkaListener(
        containerFactory = "genFactory",
        topics = "${main}")
public void listenStartGeneratorTopic(GeneratorMessage message, Acknowledgment acknowledgment) {
    // do some logic
    // THEN send the message to the first service; in that listener a new task is sent to the second
    sendTaskToQueue(task);
    isMessageProcessed.join();
    log.info("message is ready for commit");
    acknowledgment.acknowledge();
}
That looks strange enough, and the idea doesn't get me any results.
So, can you give me advice on what to do in this situation?
Why not use 6 topics? I believe this is a better separation of duties and might allow you to scale better.
I guess I would also look at KStream in your case...
My idea goes like this:
PLANNING SERVICE reads from topic1.start, does its work, and sends to topic2
FIRST SERVICE reads from topic2, does its work, and sends to topic3
PLANNING SERVICE (another instance) reads from topic3, does its work, and writes to topic4
SECOND SERVICE reads from topic4, does its work, and sends to topic5
PLANNING SERVICE (another instance) reads from topic5 and writes to topic6.done
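As a rough sketch of one stage under this design (the message types, finishTask(), and the kafkaTemplate field are assumptions for illustration, using spring-kafka as in the question):

// One stage from the list above: the planning service consumes the second
// service's result from topic5 and forwards the finished task to topic6.done.
// Each stage acknowledges its own message, so no listener ever blocks waiting
// for another service to respond.
@KafkaListener(topics = "topic5", containerFactory = "genFactory")
public void onSecondServiceResult(ResultMessage result, Acknowledgment ack) {
    DoneMessage done = finishTask(result); // hypothetical planning-service work
    kafkaTemplate.send("topic6.done", done);
    ack.acknowledge();
}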

Stop consumption of message if it cannot be completed

I'm new to MassTransit and have a question regarding how I should handle a failure to consume a message. Given the code below, I am consuming INotificationRequestContract messages. As you can see, the code will break and not complete.
public class NotificationConsumerWorker : IConsumer<INotificationRequestContract>
{
    private readonly ILogger<NotificationConsumerWorker> _logger;
    private readonly INotificationCreator _notificationCreator;

    public NotificationConsumerWorker(ILogger<NotificationConsumerWorker> logger, INotificationCreator notificationCreator)
    {
        _logger = logger;
        _notificationCreator = notificationCreator;
    }

    public Task Consume(ConsumeContext<INotificationRequestContract> context)
    {
        try
        {
            throw new Exception("Horrible error");
        }
        catch (Exception e)
        {
            // >>>>> insert code here to put message back for later consumption. <<<<<
            _logger.LogError(e, "Failed to consume message");
            throw;
        }
    }
}
How do I best handle a scenario such as this, where consumption fails? In my specific case this is likely to occur when a required external service is unavailable.
I can see two solutions:
If there is a way to put the message back, or to cancel the consumption, it can be tried again later.
I could store the message locally in a database and create my own retry method to wrap this (but I would prefer not to, for simplicity's sake).
The exceptions section of the documentation provides sufficient guidance for dealing with consumer exceptions.
There are two retry approaches, which can be used in combination:
Message Retry, which waits while the message is locked, in-process, for the next retry. Therefore, these should be short, to deal with transient issues.
Message Redelivery, which delays the message using either the broker delayed delivery, or a message scheduler, so that it is redelivered to the receive endpoint at some point in the future.
Once all retry/redelivery attempts are exhausted, the message is moved to the _error queue.

How to tell RSocket to read a data stream from a Java 8 Stream backed by a BlockingQueue

I have the following scenario whereby my program uses a blocking queue to process messages asynchronously. There are multiple RSocket clients who wish to receive these messages. My design is such that when a message arrives in the blocking queue, the stream bound to the Flux emits it. I have tried to implement this requirement as below, but the client doesn't receive any response, even though I can see the Stream supplier being triggered correctly.
Can someone please help?
@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    System.out.println("Adding Listener:" + clientName);
    BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
    Datalistener.register(clientName, listenerQ);
    return Flux.fromStream(
            () -> Stream.generate(() -> streamValue(listenerQ))).map(q -> {
        System.out.println("I got an event : " + q.getResult());
        return q;
    });
}

private QueryResult streamValue(BlockingQueue<QueryResult> inStream) {
    try {
        return inStream.take();
    } catch (Exception e) {
        return null;
    }
}
This is tough to solve simply and cleanly because of the blocking API. I think this is why there aren't simple bridge APIs here to help you implement this. You should come up with a clean solution to turn the BlockingQueue into a Flux first. Then the spring-boot part becomes a non-event.
This is why the correct solution probably involves a custom BlockingQueue implementation like the ObservableQueue in https://www.nurkiewicz.com/2015/07/consuming-javautilconcurrentblockingque.html
An alternative approach is in How can I create reactor Flux from a blocking queue?
If you need to retain the LinkedBlockingQueue, a starting solution might be something like the following.
val f = flux<QueryResult> {
    val listenerQ = LinkedBlockingQueue<QueryResult>()
    Datalistener.register(clientName, listenerQ)
    while (true) {
        send(listenerQ.take())
    }
}.subscribeOn(Schedulers.elastic())
With an API like flux you should definitely avoid any side effects before the subscribe, so don't register your listener until you are inside the body of the method. You will also need to improve this example to handle cancellation (however you end up cancelling the listener) and to interrupt the thread doing the take.
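For a plain Java take on the same idea, here is a sketch using Flux.create; only Datalistener, QueryResult, and the registration come from the question, the rest is an assumption and still needs proper listener deregistration:

@MessageMapping("addListenerHook")
public Flux<QueryResult> addListenerHook(String clientName) {
    return Flux.<QueryResult>create(sink -> {
        // Side effect deferred until subscribe time
        BlockingQueue<QueryResult> listenerQ = new LinkedBlockingQueue<>();
        Datalistener.register(clientName, listenerQ);
        try {
            while (!sink.isCancelled()) {
                sink.next(listenerQ.take()); // blocking take, hence the scheduler below
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            sink.complete(); // TODO: deregister the listener here as well
        }
    }).subscribeOn(Schedulers.boundedElastic()); // Schedulers.elastic() on older Reactor
}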

Mono returned by ServerRequest.bodyToMono() method not extracting the body if I return ServerResponse immediately

I am using web reactive in Spring WebFlux. I have implemented a handler function for a POST request, and I want the server to return immediately, so I have implemented the handler as below:
public class Sample implements HandlerFunction<ServerResponse> {
    public Mono<ServerResponse> handle(ServerRequest request) {
        Mono<String> bodyMono = request.bodyToMono(String.class);
        bodyMono.map(str -> {
            System.out.println("body got is " + str);
            return str;
        }).subscribe();
        return ServerResponse.status(HttpStatus.CREATED).build();
    }
}
But the print statement inside the map function is not getting called. It means the body is not getting extracted.
If I do not return the response immediately and use
return bodyMono.then(ServerResponse.status(HttpStatus.CREATED).build())
then the map function is getting called.
So, how can I do processing on my request body in the background?
Please help.
EDIT
I tried using flux.share() as below:
Flux<String> bodyFlux = request.bodyToMono(String.class).flux().share();
Flux<String> processFlux = bodyFlux.map(str -> {
    System.out.println("body got is");
    try {
        Thread.sleep(1000);
    } catch (Exception ex) {
    }
    return str;
});
processFlux.subscribeOn(Schedulers.elastic()).subscribe();
return bodyFlux.then(ServerResponse.status(HttpStatus.CREATED).build());
In the above code, sometimes the map function is getting called and sometimes not.
As you've found, you can't just arbitrarily subscribe() to the Mono returned by bodyToMono(), since in that case the body simply doesn't get passed into the Mono for processing. (You can verify this by putting a single() call in that Mono; it'll throw an exception, since no element will be emitted.)
So, how can I do processing on my request body in the background?
If you really still want to just use reactor to do a long task in the background while returning immediately, you can do something like:
return request.bodyToMono(String.class).doOnNext(str -> {
    Mono.just(str).publishOn(Schedulers.elastic()).subscribe(s -> {
        System.out.println("proc start!");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("proc end!");
    });
}).then(ServerResponse.status(HttpStatus.CREATED).build());
This approach immediately publishes the emitted element to a new Mono, set to publish on an elastic scheduler, that is then subscribed in the background. However, it's kind of ugly, and it's not really what reactor is designed to do. You may be misunderstanding the idea behind reactor / reactive programming here:
It's not written with the idea of "returning a quick result and then doing stuff in the background" - that's generally the purpose of a work queue, often implemented with something like RabbitMQ or Kafka. Its raison d'être is instead to be non-blocking, so a single thread is never idly blocked waiting for something else to complete.
The map() method isn't designed for side effects; it's designed to transform each object into another. For side effects, you want doOnNext() instead.
Reactor uses a small number of threads by default, so the "additional processing" in your map() method would still block one of them.
If your application is for anything more than quick demo purposes, and/or you need to make heavy use of this pattern, then I'd seriously consider setting up a proper work queue instead.
This is not possible.
Web servers (including Reactor Netty, Tomcat, etc) clean up and recycle resources when request processing is done. This means that when your controller handler is done, the HTTP resources, the request itself, reusable buffers, etc are recycled or closed. At that point, you cannot read from the request body anymore.
In your case, you need to read and buffer the whole request body first, then return a response and kick off a task for processing that request in a separate execution.
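A sketch of that shape follows; process() is a placeholder for whatever the background work actually is:

public Mono<ServerResponse> handle(ServerRequest request) {
    return request.bodyToMono(String.class) // body fully read and buffered here
            .doOnNext(body -> Schedulers.boundedElastic()
                    .schedule(() -> process(body))) // hand off to a separate execution
            .then(ServerResponse.status(HttpStatus.CREATED).build());
}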

Spring execute a block of code after a delay

I have a Spring Boot controller which makes two service calls. The second call should occur only 10 seconds after getting the response from the first call.
public SomeResponse myAction() {
    res = serviceCallA();
    waitFor(10) {
        serviceCallB();
    }
    return res;
}
The action doesn't have to wait for the response from serviceCallB() before returning its own response; the call to serviceCallB() just has to be triggered on a separate thread.
What's the best way to implement this? I need something like a ThreadPoolTaskExecutor, but with a delay.
Sample code would be awesome.
Use a promise, not the horrible Thread.sleep from 1999 that wastes precious system resources. Your options are CompletableFuture, RxJava's Publisher constructs, or Spring's own Project Reactor.
Let serviceCallA return Mono<Something> (Project Reactor). Then:
res.delayElement(Duration.ofSeconds(10))
   .doOnNext(unused -> serviceCallB())
   .subscribe(); // fire-and-forget; block() here would make the caller wait out the delay
There's probably 6 ways to do this in each library, the above being one.
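For completeness, here is a sketch of the CompletableFuture option mentioned above (Java 9+; the service-call names come from the question):

res = serviceCallA();
CompletableFuture.runAsync(
        () -> serviceCallB(),
        CompletableFuture.delayedExecutor(10, TimeUnit.SECONDS)); // fires 10s later on a pooled thread
return res;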
A very straightforward answer:

SomeResponse myAction() {
    res = serviceCallA();
    serviceCallB();
    return res;
}

// note: @Async only takes effect when the method is invoked through the
// Spring proxy, i.e. called on the bean from outside that bean
@Async
void serviceCallB() throws InterruptedException {
    Thread.sleep(10000); // 10 secs
    // do service B call stuff
}
More on @Async with Spring can be found in the Spring reference documentation.
Beware though: these calls will run the serviceCallB() logic on new threads, and if used without proper control they might cause memory issues and kill your server.
With the java.util.concurrent package you have the Executors:

ScheduledExecutorService ex = Executors.newSingleThreadScheduledExecutor();
ex.schedule(() -> serviceCallB(), 10, TimeUnit.SECONDS);
