How to use HttpContext inside Task.Run - async-await

There are some posts explaining how to tackle this, but they couldn't help me much.
I am logging the request/response in middleware. It works when I use 'await' with Task.Run(), but since the current operation then waits for the logging to complete, there is a performance issue.
When I remove the await, as below, it runs fast but nothing is logged, because the HttpContext instance is not available inside the parallel thread:
public class LoggingHandlerMiddleware
{
private readonly RequestDelegate next;
private readonly ILoggerManager _loggerManager;
public LoggingHandlerMiddleware(RequestDelegate next, ILoggerManager loggerManager)
{
this.next = next;
_loggerManager = loggerManager;
}
public async Task Invoke(HttpContext context, ILoggerManager loggerManager, IWebHostEnvironment environment)
{
_ = Task.Run(() =>
{
AdvanceLoggingAsync(context, _loggerManager, environment);
});
...
}
private async Task AdvanceLoggingAsync(HttpContext context, ILoggerManager loggerManager, IWebHostEnvironment environment, bool IsResponse = false)
{
string result = string.Empty;
context.Request.EnableBuffering(); // Throws ExecutionContext.cs not found
result += $"ContentType:{context.Request.ContentType},";
using (StreamReader reader = new StreamReader(context.Request.Body, Encoding.UTF8, true, 1024, true))
{
result += $"Body:{await reader.ReadToEndAsync()}";
context.Request.Body.Position = 0;
}
loggerManager.LogInfo($"Advance Logging Content(Request)-> {result}");
}
}
How can I get the performance benefit of Task.Run() while still accessing HttpContext?

Well, you can extract what you need from the context, build the string you want to log, and then pass that string to the task you run, for example:
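Here is a sketch of the Invoke method along those lines, based on the middleware from the question (the exact log format is illustrative):
public async Task Invoke(HttpContext context)
{
    // Read everything needed from HttpContext *before* leaving the request thread.
    context.Request.EnableBuffering();
    string body;
    using (var reader = new StreamReader(context.Request.Body, Encoding.UTF8, true, 1024, true))
    {
        body = await reader.ReadToEndAsync();
        context.Request.Body.Position = 0;
    }
    var logLine = $"ContentType:{context.Request.ContentType}, Body:{body}";

    // The background work captures only the string, never the HttpContext.
    _ = Task.Run(() => _loggerManager.LogInfo($"Advance Logging Content(Request)-> {logLine}"));

    await next(context);
}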
However, firing and forgetting a task is not good. If it throws an exception, you risk bringing down the server, or at the very least you will have a very hard time getting information about the error.
If you are concerned about logging performance, it is better to add what you need to log to a message queue and have a separate process respond to new messages in the queue and write them to the log file; a sketch of that idea follows.
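One in-process way to do that is a channel drained by a hosted service. A minimal sketch using System.Threading.Channels and BackgroundService (LogQueue and QueuedLogWriter are made-up names; ILoggerManager is the interface from the question):
public class LogQueue
{
    private readonly Channel<string> _channel = Channel.CreateUnbounded<string>();
    public ChannelReader<string> Reader => _channel.Reader;
    public void Enqueue(string message) => _channel.Writer.TryWrite(message);
}

public class QueuedLogWriter : BackgroundService
{
    private readonly LogQueue _queue;
    private readonly ILoggerManager _loggerManager;

    public QueuedLogWriter(LogQueue queue, ILoggerManager loggerManager)
    {
        _queue = queue;
        _loggerManager = loggerManager;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Drain the queue in the background; the request pipeline only pays for TryWrite.
        await foreach (var message in _queue.Reader.ReadAllAsync(stoppingToken))
        {
            _loggerManager.LogInfo(message);
        }
    }
}

// Registration: services.AddSingleton<LogQueue>(); services.AddHostedService<QueuedLogWriter>();
// The middleware then calls logQueue.Enqueue(logLine) instead of Task.Run.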

MassTransit Mediator MessageNotConsumedException

I noticed a weird issue in one of our applications: from time to time, we get MessageNotConsumedException errors on API requests which we route via MT's Mediator.
As you will notice below, we have configured a custom LogFilter<T> which implements IFilter<ConsumeContext<T>> and ensures that we log each mediator message before and after consuming, or a 'ConsumeFailed' log in case an exception is thrown in any consumer.
When the error manifests itself, in the logs we see the following sequence of events:
T 0 : PreConsume logged
T +5ms: PostConsume logged
T +6ms: R-FAULT logged (I believe this logging is made by MT's internals?)
T +9ms: API Request 500 response logged, with `MessageNotConsumedException` as internal error
In the production environment, we see these errors with various timings; it happens in requests taking as 'little' as 9ms, up to several seconds and even 30+ seconds.
I've been trying to reproduce this problem in my local development environment, and did manage to produce the same sequence of events, but only by adding a delay of 35 seconds inside the consumer (see the GetSomethingByIdHandler class below for the consumer body).
If I reduce the delay to 30s or less, the response will be fine.
Since the production errors are happening with very low handling times in the consumer, I suspect what I'm able to reproduce is not exactly the same issue.
However, I'd still like to understand why I'm getting the MessageNotConsumedException, since while debugging I can easily step through my entire consumer (after the delay has elapsed) and happily reach the context.RespondAsync() call without any problems. Also, while stepping through the consumer, the context.CancellationToken has not been cancelled.
I also came across this question, which sounds exactly like what I'm having, but I did add the HttpContext scope as documented. To be fair, I haven't tried this change in production yet, but my local issue with the 35s delay remains unchanged.
I have the MassTransit mediator configured as follows:
services.AddHttpContextAccessor();
services.AddMediator(x =>
{
x.AddConsumer<GetSomethingByIdHandler>();
x.ConfigureMediator((context, cfg) =>
{
//The order of using the middleware matters, so don't change this
cfg.UseHttpContextScopeFilter(context); // Extension method & friends copy/pasted from https://masstransit-project.com/usage/mediator.html#http-context-scope
cfg.UseConsumeFilter(typeof(LogFilter<>), context);
});
});
The LogFilter which is configured is the following class:
public class LogFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
private readonly ILogger<LogFilter<T>> _logger;
public LogFilter(ILogger<LogFilter<T>> logger)
{
_logger = logger;
}
public void Probe(ProbeContext context) => context.CreateScope("log-filter");
public async Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
{
LogPreConsume(context);
try
{
await next.Send(context);
}
catch (Exception exception)
{
LogConsumeException(context, exception);
throw;
}
LogPostConsume(context);
}
private void LogPreConsume(ConsumeContext context) => _logger.LogInformation(
"{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
+ " with send time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}",
typeof(T).Name,
"PreConsume",
context.CorrelationId,
context.ReceiveContext.InputAddress,
context.SentTime?.ToUniversalTime());
private void LogPostConsume(ConsumeContext context) => _logger.LogInformation(
"{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
+ " with send time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}"
+ " and elapsed time {ElapsedTime}",
typeof(T).Name,
"PostConsume",
context.CorrelationId,
context.ReceiveContext.InputAddress,
context.SentTime?.ToUniversalTime(),
context.ReceiveContext.ElapsedTime);
private void LogConsumeException(ConsumeContext<T> context, Exception exception) => _logger.LogError(exception,
"{MessageType}:{EventType} correlated by {CorrelationId} on {Address}"
+ " with sent time {SentTime:dd/MM/yyyy HH:mm:ss:ffff}"
+ " and elapsed time {ElapsedTime}"
+ " and message {#message}",
typeof(T).Name,
"ConsumeFailure",
context.CorrelationId,
context.ReceiveContext.InputAddress,
context.SentTime?.ToUniversalTime(),
context.ReceiveContext.ElapsedTime,
context.Message);
}
I then have a controller method which looks like this:
[Route("[controller]")]
[ApiController]
public class SomethingController : ControllerBase
{
private readonly IMediator _mediator;
public SomethingController(IMediator mediator)
{
_mediator = mediator;
}
[HttpGet("{somethingId}")]
public async Task<IActionResult> GetSomething([FromRoute] int somethingId, CancellationToken ct)
{
var query = new GetSomethingByIdQuery(somethingId);
var response = await _mediator
.CreateRequestClient<GetSomethingByIdQuery>()
.GetResponse<Something>(query, ct);
return Ok(response.Message);
}
}
The consumer which handles this request is as follows:
public record GetSomethingByIdQuery(int SomethingId);
public class GetSomethingByIdHandler : IConsumer<GetSomethingByIdQuery>
{
public async Task Consume(ConsumeContext<GetSomethingByIdQuery> context)
{
await Task.Delay(35000, context.CancellationToken);
await context.RespondAsync(new Something{Name = "Something cool"});
}
}
MessageNotConsumedException is thrown when a message is sent using mediator and that message is not consumed by a consumer. That wouldn't typically be a transient error since one would expect that the consumer remains configured/connected to the mediator for the lifetime of the application.
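If the 30-second boundary observed locally is the request client's default timeout (that is an assumption on my part, not something the exception itself proves), you can at least make the timeout explicit when creating the client. A sketch using MassTransit's RequestTimeout:
// Two minutes is arbitrary here; pick a value above your longest expected handling time.
var response = await _mediator
    .CreateRequestClient<GetSomethingByIdQuery>(RequestTimeout.After(m: 2))
    .GetResponse<Something>(query, ct);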

In-Memory Caching with auto-regeneration on ASP.Net Core

I guess there is no built-in way to achieve this:
I have some cached data that needs to be always up to date (refreshed every few tens of minutes). Generating it takes around 1-2 minutes, so generating it on demand sometimes leads to request timeouts.
For performance optimisation, I put it into the memory cache using Cache.GetOrCreateAsync, so I am sure to have fast access to the data for 40 minutes. However, it still takes time when the cache expires.
I would like a mechanism that auto-refreshes the data before its expiration, so users are not impacted by the refresh and can still access the "old" data while it happens.
It would effectively be a "pre-expiration" process that prevents the data from ever expiring on its own.
I feel that this is not how the default IMemoryCache works, but I might be wrong.
Does it exist? If not, how would you develop this feature?
I am thinking of using PostEvictionCallbacks, with an entry set to be removed after 35 minutes that would trigger the update method (it involves a DbContext).
This is how I solved it:
The part called by the web request (the "Create" method should be called only the first time):
var allPlaces = await Cache.GetOrCreateAsync(CACHE_KEY_PLACES
, (k) =>
{
k.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(40);
UpdateReset();
return GetAllPlacesFromDb();
});
And then the magic (this could have been implemented with a timer, but I didn't want to handle timers there):
// This method adds a trigger to refresh the data from background
private void UpdateReset()
{
var mo = new MemoryCacheEntryOptions();
mo.RegisterPostEvictionCallback(RefreshAllPlacesCache_PostEvictionCallback);
mo.AddExpirationToken(new CancellationChangeToken(new CancellationTokenSource(TimeSpan.FromMinutes(35)).Token));
Cache.Set(CACHE_KEY_PLACES_RESET, DateTime.Now, mo);
}
// Method triggered by the cancellation token that triggers the PostEvictionCallBack
private async void RefreshAllPlacesCache_PostEvictionCallback(object key, object value, EvictionReason reason, object state)
{
// Regenerate a set of updated data
var places = await GetLongGeneratingData();
Cache.Set(CACHE_KEY_PLACES, places, TimeSpan.FromMinutes(40));
// Re-set the cache to be reloaded in 35min
UpdateReset();
}
So the cache gets two entries: the first one holds the data and expires after 40 minutes; the second one expires after 35 minutes via a cancellation token that triggers the post-eviction method.
This callback refreshes the data before it expires.
Keep in mind that this will keep the website awake and using memory even when it is not being used.
*** UPDATE USING TIMERS ***
The following class is registered as a singleton. DbContextOptions is passed instead of DbContext in order to create a DbContext with the right scope.
public class SearchService
{
const string CACHE_KEY_ALLPLACES = "ALL_PLACES";
protected readonly IMemoryCache Cache;
private readonly DbContextOptions<AppDbContext> AppDbOptions;
public SearchService(
DbContextOptions<AppDbContext> appDbOptions,
IMemoryCache cache)
{
this.AppDbOptions = appDbOptions;
this.Cache = cache;
InitTimer();
}
private void InitTimer()
{
Cache.Set<AllPlacesResult>(CACHE_KEY_ALLPLACES, new AllPlacesResult() { Result = new List<SearchPlacesResultItem>(), IsBusy = true });
Timer = new Timer(TimerTickAsync, null, 1000, RefreshIntervalMinutes * 60 * 1000);
}
public Task LoadingTask = Task.CompletedTask;
public Timer Timer { get; set; }
public long RefreshIntervalMinutes = 10;
public bool LoadingBusy = false;
private async void TimerTickAsync(object state)
{
if (LoadingBusy) return;
try
{
LoadingBusy = true;
LoadingTask = LoadCaches();
await LoadingTask;
}
catch
{
// do not crash the app
}
finally
{
LoadingBusy = false;
}
}
private async Task LoadCaches()
{
try
{
var places = await GetAllPlacesFromDb();
Cache.Set<AllPlacesResult>(CACHE_KEY_ALLPLACES, new AllPlacesResult() { Result = places, IsBusy = false });
}
catch{}
}
private async Task<List<SearchPlacesResultItem>> GetAllPlacesFromDb()
{
// blablabla
}
}
Note:
DbContextOptions needs to be registered as a singleton; the default options lifetime is now Scoped (I believe to allow simpler multi-tenancy configurations).
services.AddDbContext<AppDbContext>(o =>
{
o.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);
o.UseSqlServer(connectionString);
},
contextLifetime: ServiceLifetime.Scoped,
optionsLifetime: ServiceLifetime.Singleton);

How to use async method in DelegateCommand

I want to bind an async method to a DelegateCommand in the Prism framework in Xamarin.Forms, and my question is how to do it.
Is the solution below correct? Are there any pitfalls (deadlock, UI slowness or freezing, bad practices, ...)?
{ // My view model constructor
...
MyCommand = new DelegateCommand(async () => await MyJobAsync());
...
}
private async Task MyJobAsync()
{
... // Some await calls
... // Some UI elements changed, such as bound ObservableCollections
}
You can use async void directly. However, a few notes from my experience...
The structure of your code is: start asynchronous operation and then update UI with the results. This implies to me that you would be better served with a NotifyTask<T> kind of approach to asynchronous data binding, not commands. See my async MVVM data binding article for more about the design behind NotifyTask<T> (but note that the latest code has a bugfix and other enhancements).
If you really do need an asynchronous command (which is much more rare), you can use async void directly or build an async command type as I describe in my article on async MVVM commands. I also have types to support this, but the APIs for these are more in flux.
If you do choose to use async void directly:
Consider making your async Task logic public, or at least accessible to your unit tests.
Don't forget to handle exceptions properly. Just as with a plain DelegateCommand, any exceptions from your delegate must be properly handled.
Just have a look at this link if you're using Prism Library: https://prismlibrary.com/docs/commands/commanding.html#implementing-a-task-based-delegatecommand
In case you want to pass a CommandParameter to the DelegateCommand, use this syntax in the DelegateCommand variable declaration:
public DelegateCommand<object> MyCommand { get; set; }
In the constructor of the ViewModel initialize it this way:
MyCommand = new DelegateCommand<object>(HandleTap);
where HandleTap is declared as
private async void HandleTap(object param)
Hope it helps.
As has already been mentioned, the way to handle async code with DelegateCommand is to use async void. There has been a lot of discussion about this, far beyond just Prism or Xamarin.Forms. The bottom line is that both the Xamarin.Forms Command and the Prism DelegateCommand are limited by ICommand's void Execute(object obj). If you'd like more background, I encourage you to read the blog post by Brian Lagunas explaining why DelegateCommand.FromAsyncHandler is obsolete.
Generally, most concerns are handled very easily by updating the code. For example, I often hear complaints about exceptions as "the reason" why FromAsync was necessary, only to see that the code in question never had a try/catch. Because async void is fire and forget, another complaint I've heard is that a command could execute twice. That is also easily fixed with DelegateCommand's ObservesProperty and ObservesCanExecute, as shown in the sketch below.
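As a concrete illustration of that last point, here is a minimal, made-up view model sketch (not from the question) assuming Prism's BindableBase and DelegateCommand; the command stays disabled while the previous run is in flight, and exception handling lives inside the async Task method:
public class MyViewModel : BindableBase
{
    private bool _isIdle = true;
    public bool IsIdle
    {
        get => _isIdle;
        set => SetProperty(ref _isIdle, value);
    }

    public DelegateCommand MyCommand { get; }

    public MyViewModel()
    {
        // The command is disabled while IsIdle is false, so it cannot run twice in parallel.
        MyCommand = new DelegateCommand(async () => await MyJobAsync())
            .ObservesCanExecute(() => IsIdle);
    }

    private async Task MyJobAsync()
    {
        IsIdle = false;
        try
        {
            await Task.Delay(1000); // placeholder for the real work
        }
        catch (Exception ex)
        {
            // async void swallows exceptions at the call site, so handle/log them here
            Debug.WriteLine(ex);
        }
        finally
        {
            IsIdle = true;
        }
    }
}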
I think the two main problems when calling an asynchronous method from one that executes synchronously (ICommand.Execute) are 1) preventing it from executing again while a previous call is still running and 2) handling exceptions. Both can be tackled with an implementation like the following (a prototype). This would be an async replacement for the DelegateCommand.
public sealed class AsyncDelegateCommand : ICommand
{
private readonly Func<object, Task> func;
private readonly Action<Exception> faultHandlerAction;
private int callRunning = 0;
// Pass in the async delegate (which takes an object parameter and returns a Task)
// and a delegate which handles exceptions
public AsyncDelegateCommand(Func<object, Task> func, Action<Exception> faultHandlerAction)
{
this.func = func;
this.faultHandlerAction = faultHandlerAction;
}
public bool CanExecute(object parameter)
{
return callRunning == 0;
}
public void Execute(object parameter)
{
// Replace value of callRunning with 1 if 0, otherwise return - (if already 1).
// This ensures that there is only one running call at a time.
if (Interlocked.CompareExchange(ref callRunning, 1, 0) == 1)
{
return;
}
OnCanExecuteChanged();
func(parameter).ContinueWith((task, _) => ExecuteFinished(task), null, TaskContinuationOptions.ExecuteSynchronously);
}
private void ExecuteFinished(Task task)
{
// Replace value of callRunning with 0
Interlocked.Exchange(ref callRunning, 0);
// Call error handling if task has faulted
if (task.IsFaulted)
{
faultHandlerAction(task.Exception);
}
OnCanExecuteChanged();
}
public event EventHandler CanExecuteChanged;
private void OnCanExecuteChanged()
{
// Raising this event tells for example a button to display itself as "grayed out" while async operation is still running
var handler = CanExecuteChanged;
if (handler != null) handler(this, EventArgs.Empty);
}
}
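Usage in a view model constructor could then look like the following sketch (MyJobAsync and the Debug.WriteLine fault handler are placeholders):
// The first delegate is the actual async work; the second gives exceptions a central place to go.
MyCommand = new AsyncDelegateCommand(
    async _ => await MyJobAsync(),
    ex => Debug.WriteLine(ex));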
async void
I personally would avoid "async void" at all costs. It is impossible to know from the outside when the operation has finished, and error handling becomes tricky. Regarding the latter: someone writing an "async Task" method that is called from an "async void" method almost needs to be aware of how its failing Task is propagated:
public async Task SomeLogic()
{
var success = await SomeFurtherLogic();
if (!success)
{
throw new DomainException(..); // Normal thing to do
}
}
And then someone writing on a different day:
public async void CommandHandler()
{
await SomeLogic(); // Calling a method. Normal thing to do but can lead to an unobserved Task exception
}
Is UI thread running DelegateCommand and background threads running await expression?
Yes, the UI thread runs the DelegateCommand. In the case of an async one, it runs until the first await statement and then goes back to its regular UI thread work. If the awaiter is configured to capture the synchronization context (that is, you do not use .ConfigureAwait(false)), the UI thread will continue to run the DelegateCommand after the await.
Is UI thread running DelegateCommand and background threads running await expression?
Whether the "await expression" runs on a background thread, foreground thread, a threadpool thread or whatever depends on the api you call. For example, you can push cpu-bound work to the threadpool using Task.Run or you can wait for an i/o-operation without using any thread at all with methods like Stream.ReadAsync
public ICommand MyCommand{get;set;}
//constructor
public ctor()
{
MyCommand = new Xamarin.Forms.Command(DoTheJob);
}
public async void DoTheJob()
{
await TheMethod();
}
public DelegateCommand MyCommand => new DelegateCommand(MyMethod);
private async void MyMethod()
{
}
There are no pitfalls. The void return type on async methods exists specifically so that they can be used as event and delegate handlers. If you want to change something that is reflected in the UI, wrap the relevant code in this block:
Device.BeginInvokeOnMainThread(() =>
{
your code;
});
Actually, ICommand and DelegateCommand are pretty similar, so the answer above is quite right.

How to cancel a running async function in xamarin forms

I call an async function in my code which calls a REST service and populates a data structure, but in some cases I need to cancel that function before it completes. How can I achieve this?
getAdDetails(ad.id,ad.campaign_type);
private async void getAdDetails(int campaign_id, string campaign_type) {
// some code here
}
There is something called "CancelationToken" which is supposed to be for such stuff.
Another way to do so is by throwing an exception when you want to cancel the process .
Another way is by having a flag which can be named "ShouldExecute" , and in the method you keep monitoring it.
I also tend to ignore the results which come from the method when they are not needed and let the thread executes in peace but yet ignored when it comes back.
Assuming you have some background logic in your function:
CancellationTokenSource _cancellation;
public async Task SomeFunctionToStartDataRefresh(){
_cancellation = new CancellationTokenSource();
try{
await getAdDetails(id, type, _cancellation.Token);
}catch(OperationCanceledException ex){
//Operation is cancelled
}
}
private async Task getAdDetails(int campaign_id, string campaign_type, CancellationToken token) {
var data = await fetchDatafromServer();
token.ThrowIfCancellationRequested();
await DosomethingWithData();
token.ThrowIfCancellationRequested();
await DoSomethingElseWithData();
}
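To actually trigger the cancellation from somewhere else (for example when the page disappears), call Cancel() on the stored source; a minimal sketch:
public void CancelAdDetails()
{
    // Signals the token passed to getAdDetails; the next ThrowIfCancellationRequested() will throw.
    _cancellation?.Cancel();
}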

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started programming in Spring Boot and I stumbled upon a question where I would like to get your opinion.
What I am trying to achieve:
I created a controller that exposes a GET endpoint named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource heavy and can run between 20 and 40 seconds (in the attached code it is mocked by a Thread.sleep()).
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of the operation on another endpoint, queryOpStatus. At the beginning it will be started, and once the long-running operation has finished it will be set to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not tagged with @Async).
It uses an APIOperationsManager to register that a new operation was started.
I use the CompletableFuture Java construct to run the long-running code as a new async process by using CompletableFuture.supplyAsync(() -> {}).
I immediately return a response to the caller, telling them that the operation is in progress.
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager.
Here is the code:
@GetMapping(path="/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {
// Register a new operation
APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);
ResponseOperation response = new ResponseOperation();
response.setMessage("Triggered non-blocking call, use the operation id to check status");
response.setOperationID(operationID);
response.setOpRes(Constants.OpStatus.PROCESSING);
CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
try {
// Here we will
Thread.sleep(10000L);
} catch (InterruptedException e) {}
// whatever the return value was
return true;
});
cf.thenRun(() ->{
// We are done with the super long process, so update our Operations Manager
APIOperationsManager a = APIOperationsManager.getInstance();
boolean asyncSuccess = false;
try {asyncSuccess = cf.get();}
catch (Exception e) {}
if(true == asyncSuccess) {
a.updateOperationStatus(operationID, Constants.OpStatus.OK);
a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT" );
}
else {
a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
a.updateOperationMessage(operationID, "error: The long running process has failed.");
}
});
return response;
}
Here is also the APIOperationsManager.java for completness:
public class APIOperationsManager {
private static APIOperationsManager instance = null;
private Vector<Operation> operations;
private int currentOperationId;
private static final Logger log = LoggerFactory.getLogger(Application.class);
protected APIOperationsManager() {}
public static APIOperationsManager getInstance() {
if(instance == null) {
synchronized(APIOperationsManager.class) {
if(instance == null) {
instance = new APIOperationsManager();
instance.operations = new Vector<Operation>();
instance.currentOperationId = 1;
}
}
}
return instance;
}
public synchronized int registerNewOperation(OpStatus status) {
cleanOperationsList();
currentOperationId = currentOperationId + 1;
Operation newOperation = new Operation(currentOperationId, status);
operations.add(newOperation);
log.info("Registered new Operation to watch: " + newOperation.toString());
return newOperation.getId();
}
public synchronized Operation getOperation(int id) {
for(Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
Operation op = iterator.next();
if(op.getId() == id) {
return op;
}
}
Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
notFound.setCrated(null);
return notFound;
}
public synchronized void updateOperationStatus (int id, OpStatus newStatus) {
iteration : for(Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
Operation op = iterator.next();
if(op.getId() == id) {
op.setStatus(newStatus);
log.info("Updated Operation status: " + op.toString());
break iteration;
}
}
}
public synchronized void updateOperationMessage (int id, String message) {
iteration : for(Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
Operation op = iterator.next();
if(op.getId() == id) {
op.setMessage(message);
log.info("Updated Operation status: " + op.toString());
break iteration;
}
}
}
private synchronized void cleanOperationsList() {
Date now = new Date();
for(Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
Operation op = iterator.next();
if((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS ) {
log.info("Removed operation from watchlist: " + op.toString());
iterator.remove();
}
}
}
}
The questions that I have
Is this concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same thing in Spring Boot that I just haven't found yet? (maybe with the @Async annotation?)
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest reconsidering:
Do not use an integer as the id, as it allows an attacker to guess ids and get the results for those ids. Instead use a random UUID.
If you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or a Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that simply does not work. Instead put all asynchronous operations into a simple service and annotate that with @Async. This has some advantages:
You can use this service synchronously as well, which makes testing a lot easier.
You can configure the thread pool with Spring.
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then use this link directly without any additional information.
Additionally, there are some low-level implementation issues which you may also want to change:
Do not use Vector; it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier: you can use for-loops or streams.
If you need to look up a value, do not iterate over a Vector or List; use a Map instead.
APIOperationsManager is a singleton. That makes no sense in a Spring application. Make it a normal POJO, create a bean for it, and get it autowired into the controller. Spring beans are singletons by default.
You should avoid doing complicated operations in a controller method. Instead move everything into a service (which may be annotated with @Async). This makes testing easier, as you can test the service without a web context.
Hope this helps.
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional, as it is indeed 'atomic'.
If you write/update many rows at once, you should make it transactional to guarantee that either all rows are updated or none.
However, if two operations (maybe from two clients) update the same row, the last one will always win.
