How to lock an async operation / limit the number of executions in a Web API controller - async-await

I have a Web API endpoint whose execution I would like to control based on an internal async operation. The objective of this endpoint is to limit the number of reconnection attempts to an external resource.
In the controller I have a singleton (a service with a property) counter which is incremented every time a reconnect attempt is made.
The problem is that the interval between HTTP requests (let's say 1 second) is shorter than the duration of the inner async operation (let's say 10 seconds), so different threads start the async operation while the counter has not yet been incremented.
The outcome of this code is lots of:
"Trying to connect, number of attempts: 0" entries in the logs.
I thought of using lock on the code block - but an await expression cannot appear inside the body of a lock statement (see the sketch after the code for an async-friendly alternative).
See my code:
public class SomeController : ControllerBase
{
    private static readonly object LockObject = new object();
    private readonly ISomeSingletonService _someSingletonService;

    public SomeController(ISomeSingletonService someSingletonService)
    {
        _someSingletonService = someSingletonService;
    }

    [HttpPost("Connect")] // This is reached every second
    public async Task Connect()
    {
        /*lock (LockObject) // can't do that because await cannot be in the body of lock
        {*/
        logger.LogInformation($"Trying to connect, number of attempts: {_someSingletonService.ReconnectionsAttempts}");
        if (_someSingletonService.ReconnectionsAttempts < maxReconnectionsAttempts)
        {
            await someAsyncOeration(); // This operation can last a few seconds
            _someSingletonService.ReconnectionsAttempts++;
        }
        /*}*/
    }
}
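A common async-compatible alternative to lock is SemaphoreSlim, which can be awaited via WaitAsync. A minimal sketch reusing the post's names (logger, maxReconnectionsAttempts and someAsyncOeration are assumed to exist as in the snippet above), so it is illustrative rather than drop-in:

private static readonly SemaphoreSlim Semaphore = new SemaphoreSlim(1, 1);

[HttpPost("Connect")]
public async Task Connect()
{
    // Only one request at a time gets past this point; the others wait
    // asynchronously instead of blocking a thread pool thread.
    await Semaphore.WaitAsync();
    try
    {
        logger.LogInformation($"Trying to connect, number of attempts: {_someSingletonService.ReconnectionsAttempts}");
        if (_someSingletonService.ReconnectionsAttempts < maxReconnectionsAttempts)
        {
            await someAsyncOeration();
            _someSingletonService.ReconnectionsAttempts++;
        }
    }
    finally
    {
        Semaphore.Release(); // always release, even if the operation throws
    }
}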

Related

Immediately return first emitted value from two Monos while continuing to process the other asynchronously

I have two data sources, each returning a Mono:
class CacheCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}

class MasterCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to immediately return a success response to the caller. However, I still want my service to continue processing the result of the other Mono stream so that, if an error was encountered, it can be logged.
The problem seems to be that as soon as a value is returned to the controller, a cancel signal is propagated back through the stream by Spring WebFlux and, thus, no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
        .createCustomer(customer)
        .doOnError(WebClientResponseException.class,
            err -> log.error("Customer creation failed in cache"));

    var masterCreate = masterClient
        .createCustomer(customer)
        .doOnError(WebClientResponseException.class,
            err -> log.error("Customer creation failed in master"));

    return Flux.firstWithValue(cacheCreate, masterCreate)
        .onErrorMap((err) -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but then whichever source is lagging behind is cancelled, meaning that any error is never logged. I've also tried scheduling these two sources on their own Schedulers, and that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?
You can achieve that by transforming your operators into "hot" publishers using the share() operator:
The first subscriber triggers the upstream operator, and additional subscribers get back the result cached from the first subscriber:
Further Subscriber will share [...] the same result.
Once a second subscription has been made, the publisher is no longer cancellable:
It's worth noting this is an un-cancellable Subscription.
So, to achieve your requirement:
Apply share() to each of your operators
Launch a subscription on the shared publishers to trigger processing
Use the shared operators in your pipeline (here, firstWithValue).
A full example:
import java.time.Duration;
import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono that succeeds quickly
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error.
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                   .<String>then(Mono.error(new Exception("ERROR !")))
                   .doOnCancel(() -> System.out.println("CANCELLED"))
                   .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result
        Mono.firstWithValue(sharedQuick, sharedLong)
            .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for the subscriptions to end.
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message was propagated properly and that the upstream publisher was not cancelled.

MassTransit Mediator MessageNotConsumedException

I noticed a weird issue in one of our applications: from time to time, we get MessageNotConsumedException errors on API requests that we route via MT's Mediator.
As you will notice below, we have configured a custom LogFilter<T>, implementing IFilter<ConsumeContext<T>>, which ensures that we log each mediator message before and after consuming, or a 'ConsumeFailed' log in case an exception is thrown in any consumer.
When the error manifests itself, in the logs we see the following sequence of events:
T+0ms: PreConsume logged
T+5ms: PostConsume logged
T+6ms: R-FAULT logged (I believe this logging is made by MT's internals?)
T+9ms: API request 500 response logged, with MessageNotConsumedException as the internal error
In the production environment we see these errors with various timings: it happens in requests taking as 'little' as 9ms, through several seconds, up to 30+ seconds.
I've been trying to reproduce this problem in my local development environment and did manage to produce the same sequence of events, but only by adding a delay of 35 seconds inside the consumer (see the GetSomethingByIdHandler class below for the consumer body).
If I reduce the delay to 30s or less, the response is fine.
Since the production errors are happening with very low handling times in the consumer, I suspect what I'm able to reproduce is not exactly the same.
However, I'd still like to understand why I'm getting the MessageNotConsumedException, since while debugging I can easily step through my entire consumer (after the delay has elapsed) and happily reach the context.RespondAsync() call without any problems. Also, while stepping through the consumer, the context.CancellationToken has not been cancelled.
I also came across this question, which sounds exactly like what I'm experiencing; however, I did add the HttpContext scope as documented. To be fair, I haven't tried this change in production yet, but my local issue with the 35s delay remains unchanged.
I have the MassTransit mediator configured as follows:
services.AddHttpContextAccessor();
services.AddMediator(x =>
{
    x.AddConsumer<GetSomethingByIdHandler>();
    x.ConfigureMediator((context, cfg) =>
    {
        // The order of the middleware matters, so don't change this
        cfg.UseHttpContextScopeFilter(context); // Extension method & friends copy/pasted from https://masstransit-project.com/usage/mediator.html#http-context-scope
        cfg.UseConsumeFilter(typeof(LogFilter<>), context);
    });
});
The configured LogFilter is the following class:
public class LogFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
    private readonly ILogger<LogFilter<T>> _logger;

    public LogFilter(ILogger<LogFilter<T>> logger)
    {
        _logger = logger;
    }

    public void Probe(ProbeContext context) => context.CreateScope("log-filter");

    public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        LogPreConsume(context);
        try
        {
            await next.Send(context);
        }
        catch (Exception exception)
        {
            LogConsumeException(context, exception);
            throw;
        }
        LogPostConsume(context);
    }

    private void LogPreConsume(ConsumeContext context) => _logger.LogInformation(
        "{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
        + " with send time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}",
        typeof(T).Name,
        "PreConsume",
        context.CorrelationId,
        context.ReceiveContext.InputAddress,
        context.SentTime?.ToUniversalTime());

    private void LogPostConsume(ConsumeContext context) => _logger.LogInformation(
        "{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
        + " with send time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}"
        + " and elapsed time {ElapsedTime}",
        typeof(T).Name,
        "PostConsume",
        context.CorrelationId,
        context.ReceiveContext.InputAddress,
        context.SentTime?.ToUniversalTime(),
        context.ReceiveContext.ElapsedTime);

    private void LogConsumeException(ConsumeContext<T> context, Exception exception) => _logger.LogError(exception,
        "{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
        + " with sent time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}"
        + " and elapsed time {ElapsedTime}"
        + " and message {@message}",
        typeof(T).Name,
        "ConsumeFailure",
        context.CorrelationId,
        context.ReceiveContext.InputAddress,
        context.SentTime?.ToUniversalTime(),
        context.ReceiveContext.ElapsedTime,
        context.Message);
}
I then have a controller method which looks like this:
[Route("[controller]")]
[ApiController]
public class SomethingController : ControllerBase
{
private readonly IMediator _mediator;
public SomethingController(IMediator mediator)
{
_mediator = mediator;
}
[HttpGet("{somethingId}")]
public async Task<IActionResult> GetSomething([FromRoute] int somethingId, CancellationToken ct)
{
var query = new GetSomethingByIdQuery(somethingId);
var response = await _mediator
.CreateRequestClient<GetSomethingByIdQuery>()
.GetResponse<Something>(query, ct);
return Ok(response.Message);
}
}
The consumer which handles this request is as follows:
public record GetSomethingByIdQuery(int SomethingId);

public class GetSomethingByIdHandler : IConsumer<GetSomethingByIdQuery>
{
    public async Task Consume(ConsumeContext<GetSomethingByIdQuery> context)
    {
        await Task.Delay(35000, context.CancellationToken);
        await context.RespondAsync(new Something { Name = "Something cool" });
    }
}
MessageNotConsumedException is thrown when a message is sent using mediator and that message is not consumed by a consumer. That wouldn't typically be a transient error since one would expect that the consumer remains configured/connected to the mediator for the lifetime of the application.
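Worth checking alongside that: the MassTransit request client defaults to a 30-second timeout when none is supplied, which would line up with the 35-second repro failing while 30s succeeds. Whether or not it is the root cause in production, an explicit timeout can be passed when creating the request client; a sketch with a made-up one-minute value:

// Sketch: give the request client an explicit timeout so a slow consumer
// does not outlive the request. The one-minute value is illustrative only.
var response = await _mediator
    .CreateRequestClient<GetSomethingByIdQuery>(RequestTimeout.After(m: 1))
    .GetResponse<Something>(query, ct);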

How to handle SSE connection closed?

I have an endpoint streamed as in the sample code block. When streaming, I call an async method through streamHelper.getStreamSuspendCount(). I stop this async method from session scope when the state changes. But I cannot access this async method when the browser is closed and the session is terminated. How can I access this scope when the session is closed?
@RequestMapping(value = "/stream/{columnId}/suspendCount", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
@ResponseBody
public Flux<Integer> suspendCount(@PathVariable String columnId) {
    ColumnObject columnObject = streamHelper.findColumnObjectInListById(columnId);
    return streamHelper.getStreamSuspendCount(columnObject);
}
getStreamSuspendCount(ColumnObject columnObject) {
    ...
    // async flux
    Flux<?> newFlux = beSubscribeFlow.get(i);
    Disposable disposable = newFlux.subscribe();
    beDisposeFlow.add(disposable); // my session scope variable. if the state changes, I will kill the disposable (dispose()).
    ...
    return Flux.fromStream(Stream.generate(() -> columnObject.getPendingObject().size()))
        .distinctUntilChanged()
        .doOnNext(i -> {
            System.out.println(i);
        });
}
I think part of the problem is that you are attempting to get a Disposable that you want to call at the end of the session. But in doing so, you are subscribing to the sequence yourself. Spring Framework will also subscribe to the Flux returned by getStreamSuspendCount, and it is THAT subscription that needs to be cancelled for the SSE client to get notified.
Now how to achieve this? What you need is a sort of "valve" that will cancel its source upon receiving an external signal. This is what takeUntilOther(Publisher<?>) does.
So now you need a Publisher<?> that you can tie to the session lifecycle (more specifically the session close event): as soon as it emits, takeUntilOther will cancel its source.
Two options there:
if the session close event is exposed in a listener-like API: use Mono.create
if you really need to manually trigger the cancel: use MonoProcessor.create() and, when the time comes, push any value through it
Here are simplified examples with made-up APIs to clarify:
Mono.create:
return theFluxForSSE.takeUntilOther(Mono.create(sink ->
    sessionEvent.registerListenerForClose(closeEvent -> sink.success(closeEvent))
));
MonoProcessor:
MonoProcessor<String> processor = MonoProcessor.create();
beDisposeFlow.add(processor); // make it available to your session scope?

return theFluxForSSE.takeUntilOther(processor); // Spring will subscribe to this
Let's simulate the session close with a scheduled task:
Executors.newSingleThreadScheduledExecutor().schedule(() ->
    processor.onNext("STOP") // that's the key part: manually sending data through the processor to signal takeUntilOther
, 2, TimeUnit.SECONDS);
Here is a simulated unit test example that you can run to better understand what happens:
@Test
public void simulation() {
    Flux<Long> theFluxForSSE = Flux.interval(Duration.ofMillis(100));
    MonoProcessor<String> processor = MonoProcessor.create();
    Executors.newSingleThreadScheduledExecutor().schedule(() -> processor.onNext("STOP"), 2, TimeUnit.SECONDS);

    theFluxForSSE.takeUntilOther(processor.log())
        .log()
        .blockLast();
}

await async code seems to still be running sync

I'm new to async/await and have been trying to implement it in my .NET 4.6 Web API 2 project.
public class MyController : ApiController
{
    public async Task<Thing> Search(String searchTerms)
    {
        var myThing = new Thing();
        myThing.FirstProperty = await doFirstPropertyAsync(searchTerms);
        myThing.SecondProperty = await doSecondPropertyAsync(searchTerms);
        return myThing;
    }
}
Basically I'm returning a class (Thing) that has two properties that take a few seconds each to populate. I'm actually loading maybe ~10 properties, but it's the same logic for all of them.
public async Task<MyCoolSubObject> doFirstPropertyAsync(string searchTerms)
{
    SomeController sController = new SomeController();
    Debug.WriteLine("first - starting.");
    var x = await Task.Run(() => sController.Lookup(searchTerms));
    Debug.WriteLine("first - finishing.");
    return x;
}

public async Task<MyCoolSubObject> doSecondPropertyAsync(string searchTerms)
{
    SomeOtherController sController = new SomeOtherController();
    Debug.WriteLine("second - starting.");
    var x = await Task.Run(() => sController.Lookup(searchTerms));
    Debug.WriteLine("second - finishing.");
    return x;
}
What's got me scratching my head:
When I look at the debug outputs, the first property assignment method call starts and finishes before the second one starts. Again, I actually have like ten of these, and no matter what order I put the property assignments in, they complete in a serial fashion (i.e. nothing starts until another one finishes).
Under the hood, these property assignments are basically doing database calls that take a while, hence I wanted them running in parallel if possible. The methods themselves (SomeController.Lookup(string)) contain no await/async/task stuff.
Again, I actually have like ten of these and no matter what order I put the property assignments in they complete in a serial fashion (ie: nothing starts until another one finishes).
This happens because in your code you use the await keyword as soon as you kick off each task; by doing that, you prevent the method from continuing to the next statement until the task is done.
If you want to run your tasks in parallel, you should kick off all of them first and only then await them all using Task.WhenAll:
public async Task<Thing> Search(String searchTerms)
{
    var myThing = new Thing();

    var firstTask = doFirstPropertyAsync(searchTerms);
    var secondTask = doSecondPropertyAsync(searchTerms);

    await Task.WhenAll(firstTask, secondTask);

    myThing.FirstProperty = await firstTask;
    myThing.SecondProperty = await secondTask;

    return myThing;
}
Note that by the time we await each task individually, after await Task.WhenAll, the tasks have already completed; we do that only to get the result out of each task. Although we could use the Result property instead (it would not block, since we know the task has already completed), I prefer to use await for consistency.
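Since the question mentions roughly ten such properties, the same pattern scales by collecting the tasks first. A minimal sketch, assuming all the lookups return the same MyCoolSubObject type (if they don't, the await-each-task form above still applies):

// Start all lookups up front, then await them together.
// Task.WhenAll over Task<T>[] yields a T[] in the same order as the input.
var tasks = new[]
{
    doFirstPropertyAsync(searchTerms),
    doSecondPropertyAsync(searchTerms),
    // ...the remaining lookups
};
var results = await Task.WhenAll(tasks);
myThing.FirstProperty = results[0];
myThing.SecondProperty = results[1];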

In-Memory Caching with auto-regeneration on ASP.Net Core

I guess there is no built-in way to achieve this:
I have some cached data that needs to always be up to date (refreshed every few tens of minutes). Its generation takes around 1-2 minutes, which sometimes leads to request timeouts.
For performance optimisation, I put it into the memory cache using Cache.GetOrCreateAsync, so I am sure to have fast access to the data for 40 minutes. However, it still takes time when the cache expires.
I would like to have a mechanism that auto-refreshes the data before its expiration, so that users are not impacted by the refresh and can still access the "old data" while it runs.
It would in effect add a "pre-expiration" process that prevents the data from ever actually expiring.
I feel this is not how the default IMemoryCache works, but I might be wrong?
Does it exist? If not, how would you develop this feature?
I am thinking of using PostEvictionCallbacks, with an entry set to be removed after 35 minutes that would trigger the update method (it involves a DbContext).
This is how I solved it:
The part called by the web request (the "create" callback should only run the first time):
var allPlaces = await Cache.GetOrCreateAsync(CACHE_KEY_PLACES,
    (k) =>
    {
        k.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(40);
        UpdateReset();
        return GetAllPlacesFromDb();
    });
And then the magic (this could have been implemented with a timer, but I didn't want to handle timers there):
// This method adds a trigger to refresh the data in the background
private void UpdateReset()
{
    var mo = new MemoryCacheEntryOptions();
    mo.RegisterPostEvictionCallback(RefreshAllPlacesCache_PostEvictionCallback);
    mo.AddExpirationToken(new CancellationChangeToken(new CancellationTokenSource(TimeSpan.FromMinutes(35)).Token));
    Cache.Set(CACHE_KEY_PLACES_RESET, DateTime.Now, mo);
}

// Method triggered by the cancellation token that fires the PostEvictionCallback
private async void RefreshAllPlacesCache_PostEvictionCallback(object key, object value, EvictionReason reason, object state)
{
    // Regenerate a set of updated data
    var places = await GetLongGeneratingData();
    Cache.Set(CACHE_KEY_PLACES, places, TimeSpan.FromMinutes(40));

    // Re-set the cache entry to trigger a reload in 35 min
    UpdateReset();
}
So the cache holds two entries: the first one with the data, expiring after 40 minutes; the second one expiring after 35 minutes via a cancellation token that triggers the post-eviction method.
This callback refreshes the data before it expires.
Keep in mind that this will keep the website awake and using memory even if not used.
*** UPDATE USING TIMERS ***
The following class is registered as a singleton. DbContextOptions is passed instead of DbContext to create a DbContext with the right scope.
public class SearchService
{
    const string CACHE_KEY_ALLPLACES = "ALL_PLACES";

    protected readonly IMemoryCache Cache;
    private readonly DbContextOptions<AppDbContext> AppDbOptions;

    public SearchService(
        DbContextOptions<AppDbContext> appDbOptions,
        IMemoryCache cache)
    {
        this.AppDbOptions = appDbOptions;
        this.Cache = cache;
        InitTimer();
    }

    private void InitTimer()
    {
        Cache.Set<AllPlacesResult>(CACHE_KEY_ALLPLACES, new AllPlacesResult() { Result = new List<SearchPlacesResultItem>(), IsBusy = true });
        Timer = new Timer(TimerTickAsync, null, 1000, RefreshIntervalMinutes * 60 * 1000);
    }

    public Task LoadingTask = Task.CompletedTask;
    public Timer Timer { get; set; }
    public long RefreshIntervalMinutes = 10;
    public bool LoadingBusy = false;

    private async void TimerTickAsync(object state)
    {
        if (LoadingBusy) return;
        try
        {
            LoadingBusy = true;
            LoadingTask = LoadCaches();
            await LoadingTask;
        }
        catch
        {
            // do not crash the app
        }
        finally
        {
            LoadingBusy = false;
        }
    }

    private async Task LoadCaches()
    {
        try
        {
            var places = await GetAllPlacesFromDb();
            Cache.Set<AllPlacesResult>(CACHE_KEY_ALLPLACES, new AllPlacesResult() { Result = places, IsBusy = false });
        }
        catch {}
    }

    private async Task<List<SearchPlacesResultItem>> GetAllPlacesFromDb()
    {
        // blablabla
    }
}
Note:
DbContextOptions must be registered as a singleton; the default options lifetime is now Scoped (I believe to allow simpler multi-tenancy configurations):
services.AddDbContext<AppDbContext>(o =>
    {
        o.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
        o.UseSqlServer(connectionString);
    },
    contextLifetime: ServiceLifetime.Scoped,
    optionsLifetime: ServiceLifetime.Singleton);
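For completeness, a minimal sketch (not from the original answer) of the pattern this enables: a singleton service cannot depend on a scoped DbContext, but it can build short-lived contexts from the singleton options. The Places DbSet is made up for illustration:

// Hypothetical sketch of using the injected singleton DbContextOptions
// inside the singleton service to create a short-lived context per call.
using (var db = new AppDbContext(AppDbOptions))
{
    // Places is a hypothetical DbSet<SearchPlacesResultItem>
    return await db.Places.AsNoTracking().ToListAsync();
}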
