How can I cancel a retry in Spring Retry?

Using Spring Retry, I have a retry going on in 1 thread.
In some circumstances, I would like to be able to cancel the retry before it has reached the configured number of retries.
What is the best way to achieve this with this library?
I have tried wrapping the lambda passed into execute so I can check whether or not the retry should be cancelled like this:
public <V, E extends Throwable> V execute(ThrowingSupplier<V, E> action) throws E {
    return retryTemplate.execute(context -> {
        if (shouldCancel()) {
            // Do something to cancel the retry here
            return null; // placeholder so the lambda compiles
        }
        return action.get();
    });
}
However, I feel like I am working against, rather than with, the framework by wrapping the retry template in this way.
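For completeness, Spring Retry does expose a first-class hook for this: RetryContext has a setExhaustedOnly() method, which marks the current attempt as the last one, so the template stops retrying and either rethrows the exception or invokes a recovery callback. A minimal sketch in the shape of the wrapper above (shouldCancel() and ThrowingSupplier remain the question's own placeholders, and the choice of CancellationException is an assumption, not something Spring Retry requires):

```java
public <V, E extends Throwable> V execute(ThrowingSupplier<V, E> action) throws E {
    return retryTemplate.execute(context -> {
        if (shouldCancel()) {
            // Tell Spring Retry this attempt is final: no further
            // retries will be scheduled after the exception below.
            context.setExhaustedOnly();
            // java.util.concurrent.CancellationException; any exception works here.
            throw new CancellationException("retry cancelled by caller");
        }
        return action.get();
    });
}
```

With this, the exception still surfaces once (or reaches a RecoveryCallback if one is configured), but no further attempts are made.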


Immediately return first emitted value from two Monos while continuing to process the other asynchronously

I have two data sources, each returning a Mono:
class CacheCustomerClient {
Mono<Entity> createCustomer(Customer customer)
}
class MasterCustomerClient {
Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to return a success response to the caller immediately. However, I still want my service to keep processing the result of the other Mono stream, so that any error it encounters can be logged.
The problem seems to be that as soon as a value is returned to the controller, a cancel signal is propagated back through the stream by Spring WebFlux and, thus, no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
        .createCustomer(customer)
        .doOnError(WebClientResponseException.class,
            err -> log.error("Customer creation failed in cache"));
    var masterCreate = masterClient
        .createCustomer(customer)
        .doOnError(WebClientResponseException.class,
            err -> log.error("Customer creation failed in master"));
    return Flux.firstWithValue(cacheCreate, masterCreate)
        .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but then whichever source is lagging behind is cancelled, meaning its error is never logged. I've also tried scheduling these two sources on their own Schedulers, but that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?
You can achieve that by transforming your publishers into "hot" publishers using the share() operator.
The first subscriber triggers the upstream, and additional subscribers receive the result cached from that first subscription:
Further Subscriber will share [...] the same result.
Once a second subscription has been made, the publisher can no longer be cancelled:
It's worth noting this is an un-cancellable Subscription.
So, to achieve your requirement:
Apply share() on each of your operators
Launch a subscription on shared publishers to trigger processing
Use shared operators in your pipeline (here firstWithValue).
Example:
import java.time.Duration;
import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono succeeding quickly
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error.
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                .<String>then(Mono.error(new Exception("ERROR !")))
                .doOnCancel(() -> System.out.println("CANCELLED"))
                .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result
        Mono.firstWithValue(sharedQuick, sharedLong)
            .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for subscriptions to end.
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message has been propagated properly, and that the upstream publisher has not been cancelled.

Is it safe to unsubscribe when consuming reaches some condition?

I want to stop subscribing to a queue while consuming.
But my ack mode is AcknowledgeMode.AUTO, so the container issues the ack/nack based on whether the listener returns normally or throws an exception.
So, if I unsubscribe in the consume method, the method then returns and the container tries to ack, but the consumer was already unsubscribed before that. What would happen? Is it safe to do it as follows?
Unsubscribe way 1:
DirectMessageListenerContainer container = getContainer();
container.setMessageListener(message -> {
    // do something with message
    // if some condition is reached, unsubscribe
    if (reachEnd()) {
        container.removeQueueNames(message.getMessageProperties().getConsumerQueue());
    }
});
Unsubscribe way 2:
container.setMessageListener(new ChannelAwareMessageListener() {
    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        // do something with message
        // if some condition is reached, unsubscribe
        if (reachEnd()) {
            channel.basicCancel(message.getMessageProperties().getConsumerTag());
        }
    }
});
I would do neither; stop the container instead. Either way causes the consumer to be cancelled.
You should call stop() on a new thread, not the listener thread, as calling it there would cause a deadlock.
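For illustration, the first variant could hand the shutdown off to a separate thread instead of cancelling the consumer directly (a sketch reusing the question's getContainer() and reachEnd() placeholders, not a drop-in implementation):

```java
DirectMessageListenerContainer container = getContainer();
container.setMessageListener(message -> {
    // do something with message
    if (reachEnd()) {
        // stop() waits for listener threads to finish, so calling it from
        // the listener thread itself would deadlock; use another thread.
        new Thread(container::stop, "container-stopper").start();
    }
});
```

Because the listener returns normally before the container finishes stopping, the in-flight message can still be acked under AcknowledgeMode.AUTO.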

MassTransit: How to disconnect from RabbitMQ

I am using MassTransit with RabbitMQ. As part of a deployment procedure, at some point in time I need my service to disconnect and stop receiving any messages.
Assuming that I won't need the bus until the next restart of the service, will it be Ok to use bus.StopAsync()?
Is there a way to get the list of endpoints and then stop them from listening?
You should StopAsync the bus, and then when ready, call StartAsync to bring it back up (or start it at the next service restart).
To stop receiving messages without stopping the bus, I needed a solution that prevents the consume message pipeline from consuming any type of message. I tried observers, unsuccessfully. I ended up with a custom consume message filter.
The filter part looks like this
public class ConsumersBlockingFilter<T> :
    IFilter<ConsumeContext<T>>
    where T : class
{
    public void Probe(ProbeContext context)
    {
        var scope = context.CreateFilterScope("messageFilter");
    }

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Check if the service is degraded (true for this demo)
        var isServiceDegraded = true;

        if (isServiceDegraded)
        {
            // Suspend the message for 5 seconds
            await Task.Delay(TimeSpan.FromMilliseconds(5000), context.CancellationToken);

            if (!context.CancellationToken.IsCancellationRequested)
            {
                // Republish the message
                await context.Publish(context.Message);
                Console.WriteLine($"Message {context.MessageId} has been republished");
            }

            // NotifyConsumed to avoid a skipped message
            await context.NotifyConsumed(TimeSpan.Zero, "messageFilter");
        }
        else
        {
            // The next filter in the pipe is called
            await next.Send(context);
        }
    }
}
The main idea is to delay with the cancellation token and then republish the message. After that, call context.NotifyConsumed to skip the remaining pipeline filters and return normally.

In Kotlin, how do I integrate a Kovenant promise with Elasticsearch async responses?

I use Kovenant in my Kotlin application, and I'm calling Elasticsearch which has its own async API. I would rather use promises but the best I can come up with is something like:
task {
    esClient.prepareSearch("index123")
        .setQuery(QueryBuilders.matchAllQuery())
        .execute().actionGet()
} then {
    ...
} success {
    ...
} fail {
    ...
}
This spawns a Kovenant async task thread, then Elasticsearch uses a thread from its own pool, and then actionGet() synchronously blocks the task thread to get back a result. It seems silly to spawn new threads while blocking others. Is there an approach that integrates the thread dispatching more closely?
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that solutions for interesting problems are shared in SO.
You can use the Kovenant Deferred class to create a promise without dispatching via an async task as you did in your sample. The model is basically:
create a deferred instance
hook up to the async handlers and resolve or reject the deferred based on async callbacks
return the deferred.promise to the caller
In code, this would look like:
fun doSearch(): Promise<SearchResponse, Throwable> {
    val deferred = deferred<SearchResponse, Throwable>()
    esClient.prepareSearch("index")
        .setQuery(QueryBuilders.matchAllQuery())
        .execute(object : ActionListener<SearchResponse> {
            override fun onResponse(response: SearchResponse) {
                deferred.resolve(response)
            }

            override fun onFailure(e: Throwable) {
                deferred.reject(e)
            }
        })
    return deferred.promise
}
A reusable way to do this is to first create an adapter that adapts Elasticsearch's desire for an ActionListener to work generically with a promise:
fun <T : Any> promiseResult(deferred: Deferred<T, Exception>): ActionListener<T> {
    return object : ActionListener<T> {
        override fun onResponse(response: T) {
            deferred.resolve(response)
        }

        override fun onFailure(e: Throwable) {
            deferred.reject(wrapThrowable(e))
        }
    }
}

class WrappedThrowableException(cause: Throwable) : Exception(cause.message, cause)

fun wrapThrowable(rawEx: Throwable): Exception = if (rawEx is Exception) rawEx else WrappedThrowableException(rawEx)
Note: the wrapThrowable() function is there to change a Throwable into an Exception, because current versions (3.3.0) of Kovenant have some methods that expect the rejection type of the promise to descend from Exception (for example bind()); you can stay with Throwable if you use unwrap() instead for nested promises.
Now use this adapter function to generically extend Elasticsearch ActionRequestBuilder which is pretty much the only thing you ever will call execute() on; creating a new promise() extension function:
fun <Request : ActionRequest<*>, Response : ActionResponse, RequestBuilder : ActionRequestBuilder<*, *, *, *>, Client : ElasticsearchClient<*>>
        ActionRequestBuilder<Request, Response, RequestBuilder, Client>.promise(): Promise<Response, Exception> {
    val deferred = deferred<Response, Exception>()
    this.execute(promiseResult(deferred))
    return deferred.promise
}
Now you can call promise() instead of execute():
esClient.prepareSearch("index")
    .setQuery(QueryBuilders.matchAllQuery())
    .promise()
And start chaining your promises...
esClient.admin().indices().prepareCreate("index1").setSettings("...").promise()
    .bind {
        esClient.admin().cluster().prepareHealth()
            .setWaitForGreenStatus()
            .promise()
    }.bind {
        esClient.prepareIndex("index1", "type1")
            .setSource(...)
            .promise()
    }.bind {
        esClient.prepareSearch("index1")
            .setQuery(QueryBuilders.matchAllQuery())
            .promise()
    }.then { searchResults ->
        // ... use searchResults
    }.success {
        // ...
    }.fail {
        // ...
    }
You should be familiar with bind() and unwrap() when you have nested promises you want to chain without nesting deeper. You can use unwrap().then in place of bind in the above cases if you did not want to include kovenant-functional.
Every call you have in Elasticsearch will be able to use promise() instead of execute() due to the consistent nature of all request objects in the Elasticsearch client.

How to implement a kind of global try..finally in TPL?

I have an async method that returns a Task. From time to time my process is recycled/restarted, and work is interrupted in the middle of the Task. Is there a more or less general approach in TPL by which I can at least log that the Task was interrupted?
I am hosting in ASP.NET, so I can use IRegisteredObject to cancel tasks with a CancellationToken. I do not like this, however; I would need to pass the CancellationToken into all my methods, and I have many of them.
try..finally in each method does not even seem to fire, and ContinueWith does not work either.
Any advice?
I have single place I start my async tasks, however each task can have any number of child tasks. To get an idea:
class CommandRunner
{
    public Task Execute(object cmd, Func<object, Task> handler)
    {
        return handler(cmd).ContinueWith(t =>
        {
            if (t.Status == TaskStatus.Faulted)
            {
                // Handle faults, log them
            }
            else if (t.Status == TaskStatus.RanToCompletion)
            {
                // Audit
            }
        });
    }
}
Tasks don't just get "interrupted" somehow. They always end up completed, faulted, or cancelled, and there is no global hook to find out about those completions. So the only way to do your logging is to either instrument the bodies of your tasks or hook up continuations for everything.
