Asynchronous results (Promise) vs AJAX

So I'm using Play as my MVC framework. My web application simply makes calls to a Postgres database to feed data to the views. I'm currently using AJAX in the view to get the data. I figured there are times when the database will lag a bit in sending the data, so I would use AJAX to allow other elements in the view to load.
Now my question is: since I'm already using AJAX in the view, should I be using Promises in the controllers? Would it make a difference if I used Promises? I haven't had enough experience to figure out how having asynchronous actions both in the view and in the controller can affect my web application. My intuition says that the AJAX action is enough for a web application with about 100-150 hits a day.
What are your insights on this?

The point of non-blocking I/O and Promises is not about speed or latency, but about better use of resources. By default, Play gives you one thread per CPU core. This works very well when all of the work you're doing on those threads is extremely fast - that is, you avoid expensive computations and use only non-blocking I/O. However, all JDBC calls are synchronous, so they will block the few available threads for a long time, and if you have enough traffic, any new requests will have to queue up, increasing load time for your pages.
Therefore, whether you should use Promises - or, more accurately, a separate thread pool - for DB calls has nothing to do with whether you make those DB calls on the initial page load or via AJAX calls. It's simply about how much traffic you expect, how many threads you have, and how long you'll be using each thread. For most applications, the best practice is to run DB calls on a separate thread pool so they don't block the main worker threads.
For example, configure a thread pool (i.e., an ExecutionContext) in application.conf as follows:
akka {
  actor {
    db-context {
      fork-join-executor {
        parallelism-factor = 20.0
        parallelism-max = 200
      }
    }
  }
}
Then use that thread pool in your DB code:
import play.api.Play.current
import play.api.libs.concurrent.Akka
import scala.concurrent.Future

def dbLookup(someId: Int): Future[SomeDbValue] = {
  // look up the dispatcher configured above by its full config path
  val dbExecutionContext = Akka.system.dispatchers.lookup("akka.actor.db-context")
  Future {
    // DB code to fetch SomeDbValue
  }(dbExecutionContext)
}
See "Play Framework Thread Pools" for full instructions and "Play Framework: async I/O without the thread pool and callback hell" for more background info.


How to implement a cache in a Vert.x application?

I have an application that at some point has to perform REST requests towards another (non-reactive) system. It happens that a high number of requests are performed towards exactly the same remote resource (the resulting HTTP request is the same).
I was thinking of avoiding flooding the other system by using a simple cache in my app.
I am in full control of the cache and have proper moments at which to invalidate it, so this is not an issue. Without this cache, I run into other issues, like connection timeouts or read timeouts, with the other system struggling under high load.
Map<String, Future<Element>> cache = new ConcurrentHashMap<>();

Future<Element> lookupElement(String id) {
    String key = createKey(id);
    return cache.computeIfAbsent(key, k -> performRESTRequest(id)
        .onSuccess(element -> {
            // some further processing
        }));
}
As I mentioned, lookupElement() is invoked from different worker threads with the same id.
The first thread will enter computeIfAbsent and perform the remote request, while the other threads are blocked by the ConcurrentHashMap.
However, when the first thread finishes, the waiting threads will all receive the same Future object. Imagine 30 "clients" reacting to the same Future instance.
In my case this works quite fine and fast up to a particular load, but when the processing input of the app increases, resulting in even more invocations of lookupElement(), my app becomes slower and slower (although it reports 300% CPU usage, it logs slowly) until it starts to throw OutOfMemoryError.
My questions are:
Do you see any Vert.x-specific issue with this approach?
Is there a more Vert.x-friendly caching approach I could use when there is high concurrency on the same cache key?
Is it a good practice to cache the Future?
So, it's a bit unusual to respond to my own question, but I managed to solve the problem.
I had two dilemmas:
Is ConcurrentHashMap with computeIfAbsent() appropriate for Vert.x?
Is it safe to cache a Future object?
I am using this caching approach in two places in my app: one to protect the REST endpoint, and one for a more complex database query.
What was happening is that for the database query there were up to 1300 "clients" waiting for a response, that is, 1300 listeners waiting for an onSuccess() of the same Future. When that Future completed, strange things were happening: some kind of thread strangulation.
I did a bit of refactoring to eliminate this concurrency on the same resource/key, but I kept both caches, and things went back to normal.
In conclusion, I think my caching approach is safe as long as we have enough spreading, or in other words, as long as we don't have such high concurrency on the same resource. Having 20-30 listeners on the same Future works just fine.
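For comparison, here is the same future-caching idiom sketched in C# terms (the concern is not Vert.x-specific; Element, CreateKey, and the REST call below are placeholders). Wrapping the task in Lazy guarantees that only one request is started per key even when many callers race on GetOrAdd:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ElementCache
{
    // Cache the task itself, not the value: concurrent callers for the
    // same key all await the single in-flight request.
    private readonly ConcurrentDictionary<string, Lazy<Task<Element>>> cache = new();

    public Task<Element> LookupAsync(string id) =>
        cache.GetOrAdd(CreateKey(id),
            key => new Lazy<Task<Element>>(() => PerformRestRequestAsync(id))).Value;

    private static string CreateKey(string id) => id;            // placeholder
    private static Task<Element> PerformRestRequestAsync(string id) =>
        Task.FromResult(new Element());                          // placeholder
}

class Element { }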

Task.WhenAll with Select is a footgun - but why?

Consider: you have a collection of user ids and want to load the details of each user represented by their id from an API. You want to bag up all of those users into some kind of collection and send it back to the calling code. And you want to use LINQ.
Something like this:
var userTasks = userIds.Select(userId => GetUserDetailsAsync(userId));
var users = await Task.WhenAll(userTasks); // users is User[]
This was fine for my app when I had relatively few users. But there came a point where it didn't scale. When it got to thousands of users, this resulted in thousands of HTTP requests being fired concurrently, and bad things started to happen. Not only did we realise we were effectively launching a denial-of-service attack on the API we were consuming as we did this, we were also bringing our own application to the point of collapse through thread starvation.
Not a proud day.
Once we realised that the cause of our woes was a Task.WhenAll / Select combo, we were able to move away from that pattern. But my question is this:
What is going wrong here?
As I read around on the topic, this scenario seems well described by #6 on Mark Heath's list of Async antipatterns: "Excessive parallelization":
Now, this does "work", but what if there were 10,000 orders? We've flooded the thread pool with thousands of tasks, potentially preventing other useful work from completing. If ProcessOrderAsync makes downstream calls to another service like a database or a microservice, we'll potentially overload that with too high a volume of calls.
Is this actually the reason? I ask as my understanding of async / await becomes less clear the more I read about the topic. It's very clear from many pieces that "threads are not tasks". Which is cool, but my code appears to be exhausting the number of threads that ASP.NET Core can handle.
So is that what it is? Is my Task.WhenAll and Select combo exhausting the thread pool or similar? Or is there another explanation for this that I'm not aware of?
Update:
I turned this question into a blog post with a little more detail / waffle. You can find it here: https://blog.johnnyreilly.com/2020/06/taskwhenall-select-is-footgun.html
N+1 Problem
Putting threads, tasks, async, and parallelism to one side, what you describe is an N+1 problem, which is something to avoid for exactly the reason it bit you. It's all well and good when N (your user count) is small, but it grinds to a halt as the user count grows.
You may want to find a different solution. Do you have to do this operation for all users? If so, then maybe switch to a background process and fan-out for each user.
Back to the footgun (I had to look that up, BTW 🙂).
Tasks are a promise, similar to JavaScript's. In .NET they may complete on a separate thread, usually a thread from the thread pool.
In .NET Core, they usually do complete on a separate thread if they are not already complete at the point of awaiting; for an HTTP request, that is almost certain to be the case.
You may have exhausted the thread pool, but since you're making HTTP requests, I suspect you've exhausted the number of concurrent outbound HTTP requests instead. "The default connection limit is 10 for ASP.NET hosted applications and 2 for all others." See the documentation here.
Is there a way to achieve some parallelism and not exhaust a resource (threads or HTTP connections)? Yes.
Here's a pattern I often implement for just this reason, using Batch() from morelinq.
IEnumerable<User> users = Enumerable.Empty<User>();
IEnumerable<IEnumerable<string>> batches = userIds.Batch(10);
foreach (IEnumerable<string> batch in batches)
{
    IEnumerable<Task<User>> batchTasks = batch.Select(userId => GetUserDetailsAsync(userId));
    User[] batchUsers = await Task.WhenAll(batchTasks);
    users = users.Concat(batchUsers);
}
You still get up to ten concurrent asynchronous HTTP requests to GetUserDetailsAsync(), and you don't exhaust threads or concurrent HTTP requests (or at worst you max out at the limit of 10).
Now, if this is a heavily used operation, or the server behind GetUserDetailsAsync() is heavily used elsewhere in the app, you may still hit the same limits when your system is under load, so this batching is not always a good idea. YMMV.
You already have an excellent answer here, but just to chime in:
There's no problem with creating thousands of tasks. They're not threads.
The core problem is that you're hitting the API way too much. So the best solutions are going to change how you call that API:
Do you really need user details for thousands of users, all at once? If this is for a dashboard display, then change your API to enforce paging; if this is for a batch process, then see if you can access the data directly from the batch process.
Use a batch route for that API if it supports one.
Use caching if possible.
Finally, if none of the above are possible, look into throttling the API calls.
The standard pattern for asynchronous throttling is to use SemaphoreSlim, which looks like this:
using var throttler = new SemaphoreSlim(10);
var userTasks = userIds.Select(async userId =>
{
    await throttler.WaitAsync();
    try { return await GetUserDetailsAsync(userId); }
    finally { throttler.Release(); }
});
var users = await Task.WhenAll(userTasks); // users is User[]
Again, this kind of throttling is best only if you can't make the design changes to avoid thousands of API calls in the first place.
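As an alternative sketch, assuming .NET 6 or later: Parallel.ForEachAsync offers the same kind of throttling built in, without managing the semaphore by hand (same GetUserDetailsAsync and User as above):

using System.Collections.Concurrent;

var users = new ConcurrentBag<User>();
await Parallel.ForEachAsync(
    userIds,
    new ParallelOptions { MaxDegreeOfParallelism = 10 }, // at most 10 in flight
    async (userId, cancellationToken) =>
        users.Add(await GetUserDetailsAsync(userId)));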
While there is no thread waiting for an async operation if the operation is pure, there is still a thread needed for the continuation. Assuming your GetUserDetailsAsync awaits some IO-bound operation, the continuation (parsing the output, returning the result, ...) has to run on some thread so that the Task created by GetUserDetailsAsync can be completed, so every one of them will wait for a thread from the thread pool to finish.

How to use Pomelo.EntityFrameworkCore.MySql provider for ef core 3 in async mode properly?

We are building an ASP.NET Core 3 application which uses EF Core 3.0 with the Pomelo.EntityFrameworkCore.MySql provider 3.0.
Right now we are trying to convert all database calls from sync to async, like:
//from
dbContext.SaveChanges();
//to
await dbContext.SaveChangesAsync();
Unfortunately, when we do it we experience two issues:
The number of connections to the server grows significantly compared to the same tests with sync calls
The average processing speed of our application drops significantly
What is the recommended way to use EF Core with MySQL asynchronously? Any working example or evidence of using EF Core 3 with MySQL asynchronously would be appreciated.
It's hard to say what the issue here is without seeing more code. Can you provide us with a small sample app that reproduces the issue?
Any DbContext instance uses exactly one database connection for normal operations, independent of whether you call sync or async methods.
The number of connections to the server grows significantly compared to the same tests with sync calls
What kind of tests are we talking about? Are they automated? If so, how many tests are being run? Because of the nature of async calls, if you run 1000 tests in parallel, each test with its own DbContext, you will end up with 1000 parallel connections.
Though with Pomelo, you will not additionally end up with 1000 threads, as you would when using Oracle's provider.
Update:
We test an ASP.NET Core (MVC) call which goes to the db and reads and writes something. 50 threads, using DbContextPool with a limit of 500. If I use dbContext.SaveChanges(), Add(), and all other context methods synchronously, I end up with around 50 connections to MySQL. Using dbContext.SaveChangesAsync(), and also AddAsync(), ReadAsync(), etc., I end up seeing up to 250 connections to MySQL, and the average response time of the page degrades by a factor of 2 to 3.
(I am talking about ASP.NET and requests below. The same is true for test cases run in parallel.)
If you use Async methods all the way, nothing will block, so your 50 threads are free to handle the next 50 requests while the database is still executing the queries for the first 50 requests.
This will happen again and again because ASP.NET might process your requests faster than your database can return its results. So you will end up with a lot of parallel database queries.
This does not happen when executing the Sync methods, because every thread blocks and you end up with a maximum of 50 parallel queries (one per thread).
So this is expected behavior and just a consequence of async method calls.
You can always modify your code or web server configuration to limit the amount of concurrent ASP.NET requests.
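For instance, one knob for that (a sketch for ASP.NET Core 3.x; the right place to set it depends on your host setup, and the value is illustrative) is Kestrel's connection limit:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder => webBuilder
            .ConfigureKestrel(options =>
                // example value; tune to what your database can absorb
                options.Limits.MaxConcurrentConnections = 100)
            .UseStartup<Startup>());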
50 threads, using DbContextPool with a limit of 500.
Also be aware that DbContextPool does not limit how many DbContext objects can concurrently exist, but only how many will be kept in the pool. So if you set DbContextPool to 500, you can create more than 500 contexts, but only 500 will be kept alive after using them.
Update:
There is a very interesting low-level talk about lock-free database pool programming from @roji that addresses this behavior, takes your position that there should be an upper limit on the connection pool that results in blocking when exceeded, and makes a great case for this behavior.
According to @bgrainger from MySqlConnector, that is how it is already implemented (the docs did not explicitly state this, but they do now). The MaxPoolSize connection string option has a default value of 100, so if you use connection pooling, don't override this value, and don't use multiple connection pools, you should not have more than 100 connections active at a given time.
From GitHub:
This is a documentation error, if you are interpreting the docs to mean that you can create an unlimited number of connections.
When pooling is true, each connection pool (there is one per unique connection string) only allows MaximumPoolSize connections to be open simultaneously. Each additional call to MySqlConnection.Open will block until a connection is returned to the pool.
When pooling is false, there is no limit to the number of connections that can be opened simultaneously; it's up to the user to manage the concurrency.
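As a sketch of what that means in practice (option names per MySqlConnector; MyDbContext and the values are illustrative, not recommendations):

// With Pooling=true (the default) and Maximum Pool Size=100, the 101st
// concurrent Open() waits for a free connection instead of opening a new one.
var connectionString =
    "Server=localhost;Database=mydb;User=app;Password=secret;" +
    "Pooling=true;Maximum Pool Size=100";

services.AddDbContextPool<MyDbContext>(
    options => options.UseMySql(connectionString)); // Pomelo 3.x overload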
Check to see whether you have Pooling=false in your connection string, as mentioned by Bradley Grainger in comments.
After I removed pooling=false from my connection string, my app ran literally 3x faster.

Difference between event loop and thread per request model

According to my research, I found out that in the thread-per-request model, every request that comes in spawns a new thread. Say I had 100 requests; I'd have 100 threads running at once. Coming to the event loop model (similar to Spring WebFlux), we have a main thread that listens for requests and delegates tasks to other threads.
Now let's say we have 100 requests on the event loop model. Here the main thread will be free to listen, but it will also have threads waiting for responses from the DB or network, just like the thread-per-request model. How does that make the event loop model more scalable?
The key difference between Tomcat with Servlet API < 3.1 and Netty-powered servers such as those behind Spring WebFlux is the way I/O and requests are processed: blocking or non-blocking.
Spring WebFlux favors the second approach:
Part of the answer is the need for a non-blocking web stack to handle
concurrency with a small number of threads and scale with fewer
hardware resources.
To sum up: with the Spring WebFlux API, far fewer threads are created for the same number of client HTTP requests, because a thread is not dedicated to a single client HTTP request in this model.
The non-blocking approach means that, however long a request takes to process, the thread handling it does not sit blocked waiting for a long time; it can process other requests during that time.
Take this example: your REST or MVC controller receives a request, and the bulk of the work is querying the database. With a blocking approach you dedicate one thread per HTTP request. With a non-blocking approach, the thread hands the query off to the database, may serve other requests in the meantime, and that thread or another from the pool picks up the processing when the interaction with the database is finished.
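The same contrast, sketched in C# with async/await (the principle is not specific to Spring or Java; Task.Delay and Thread.Sleep stand in for the database round trip):

using System.Threading;
using System.Threading.Tasks;

public class UserService
{
    // Blocking: the calling thread is parked for the whole DB round trip.
    public string GetUserBlocking(int id)
    {
        Thread.Sleep(1000);           // stand-in for a synchronous DB call
        return $"user-{id}";
    }

    // Non-blocking: the thread is released at the await and can serve other
    // requests; a pool thread resumes the method when the "DB" replies.
    public async Task<string> GetUserNonBlockingAsync(int id)
    {
        await Task.Delay(1000);       // stand-in for an asynchronous DB call
        return $"user-{id}";
    }
}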

Asynchronous methods of ApiController -- what's the profit? When to use?

(This probably duplicates the question ASP.NET MVC4 Async controller - Why to use?, but about Web API, and I do not agree with the answers there.)
Suppose I have a long-running SQL request. Its data should then be serialized to JSON and sent to the browser (as a response to an xhr request). Sample code:
public class DataController : ApiController
{
    public Task<Data> Get()
    {
        return LoadDataAsync(); // Load data asynchronously?
    }
}
What actually happens when I do $.getJSON('api/data', ...) (see this poster http://www.asp.net/posters/web-api/ASP.NET-Web-API-Poster.pdf):
[IIS] Request is accepted by IIS.
[IIS] IIS waits for one thread [THREAD] from the managed pool (http://msdn.microsoft.com/en-us/library/0ka9477y(v=vs.110).aspx) and starts work in it.
[THREAD] WebApi creates a new DataController object in that thread, and other classes.
[THREAD] uses the task-parallel lib to start an SQL query in [THREAD2].
[THREAD] goes back to the managed pool, ready for other processing.
[THREAD2] works with the SQL driver, reads data as it becomes ready, and invokes [THREAD3] to reply to the xhr request.
[THREAD3] sends the response.
Please feel free to correct me if there's something wrong.
In the question above, they say the point and the profit is that [THREAD2] is not from the managed pool; however, the MSDN article (link above) says that
By default, parallel library types like Task and Task<TResult> use thread pool threads to run tasks.
So I conclude that all THREE threads are from the managed pool.
Furthermore, if I used a synchronous method, I would still keep my server responsive, using only one thread (from the precious thread pool).
So, what's the actual point of swapping from 1 thread to 3 threads? Why not just maximize the number of threads in the thread pool?
Are there any clearly useful ways of using async controllers?
I think the key misunderstanding is around how async tasks work. I have an async intro on my blog that may help.
In particular, a Task returned by an async method does not run any code. Rather, it is just a convenient way to notify callers of the result of that method. The MSDN docs you quoted only apply to tasks that actually run code, e.g., Task.Run.
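A tiny sketch of that distinction (ExpensiveComputation is a placeholder; LoadDataAsync is the method from the question):

// A "delegate task" runs code, on a thread pool thread:
Task<int> running = Task.Run(() => ExpensiveComputation());

// A "promise task" returned by an async method runs no code of its own;
// it merely signals when the underlying operation (e.g., I/O) completes:
Task<Data> promise = LoadDataAsync();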
BTW, the poster you referenced has nothing to do with threads. Here's what happens in an async database request (slightly simplified):
Request is accepted by IIS and passed to ASP.NET.
ASP.NET takes one of its thread pool threads and assigns it to that request.
WebApi creates DataController etc.
The controller action starts an asynchronous SQL query.
The request thread returns to the thread pool. There are now no threads processing the request.
When the result arrives from the SQL server, a thread pool thread reads the response.
That thread pool thread notifies the request that it is ready to continue processing.
Since ASP.NET knows that no other threads are handling that request, it just assigns that same thread the request so it can finish it off directly.
If you want some proof-of-concept code, I have an old Gist that artificially restricts the ASP.NET thread pool to the number of cores (which is its minimum setting) and then does N+1 synchronous and asynchronous requests. That code just does a delay for a second instead of contacting a SQL server, but the general principle is the same.
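A minimal console sketch of the same principle (not that Gist, just an illustration; Thread.Sleep and Task.Delay stand in for synchronous and asynchronous SQL calls):

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Restrict the pool to the core count, mimicking the Gist's setup.
        ThreadPool.SetMinThreads(Environment.ProcessorCount, Environment.ProcessorCount);
        ThreadPool.SetMaxThreads(Environment.ProcessorCount, Environment.ProcessorCount);
        int n = Environment.ProcessorCount + 1;

        var sw = Stopwatch.StartNew();
        // Synchronous style: each "request" blocks a pool thread for a second,
        // so the (N+1)th has to queue behind the others.
        await Task.WhenAll(Enumerable.Range(0, n)
            .Select(_ => Task.Run(() => Thread.Sleep(1000))));
        Console.WriteLine($"sync-style:  {sw.Elapsed}"); // ~2s: queuing visible

        sw.Restart();
        // Asynchronous style: no thread is held while the delay is pending.
        await Task.WhenAll(Enumerable.Range(0, n)
            .Select(_ => Task.Delay(1000)));
        Console.WriteLine($"async-style: {sw.Elapsed}"); // ~1s: no queuing
    }
}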
The profit of asynchronous actions is that while the controller is waiting for the SQL query to finish, no threads are allocated to this request, whereas with a synchronous method a thread would be locked up in the execution of that method from start to end. While SQL Server is doing its job, the thread isn't doing anything but waiting.
If you use asynchronous methods, this same thread can respond to other requests while SQL Server is doing its thing.
I believe your steps are wrong at step 4: I don't think it will create a new thread to do the SQL query. At step 6, no new thread is created; it is just one of the available threads that is used to continue from where the first thread left off. The thread at step 6 could even be the same one that started the async operation.
The point of async is not to make an application multi-threaded but rather to let a single-threaded application carry on with something else instead of waiting for a response from an external call that is executing on a different thread or process.
Consider a desktop application that shows stock prices from different exchanges. The application needs to make a couple of REST/HTTP calls to get some data from each of the remote stock exchange servers.
A single-threaded application would make the first call, wait doing nothing until it got the first set of prices, update its window, then make a call to the next external stock price server, again wait doing nothing until it got the prices, update its window... and so on.
We could go all multi-threaded, kick off the requests in parallel and update the screen in parallel, but since most of the time is spent waiting for a response from the remote server, this seems overkill.
It might be better for the thread to:
Make a request to the first server, but instead of waiting for the answer, leave a marker (a place to come back to when the prices arrive) and move on to issuing the second request, again leaving a marker of a place to come back to... and so on.
When all the requests have been issued, the application's execution thread can go on to dealing with user input or whatever is required.
Now, when a response from one of the servers is received, the thread can be directed to continue from the marker it laid down previously and update the window.
All of the above could have been coded longhand, single-threaded, but it was so horrendous that going multi-threaded was often easier. Now the process of leaving the marker and coming back is done by the compiler when we write async/await. All single-threaded.
There are two key points here:
1) Multi-threading does still happen! The processing of our request for stock prices happens on a different thread (on a different machine). The same would be true if we were doing DB access. In examples where the wait is for a timer, the timer runs on a different thread. Our application, though, is single-threaded; the execution point just jumps around (in a controlled manner) while external threads execute.
2) We lose the benefit of async execution as soon as the application requires an async operation to complete. Consider an application showing the price of coffee from two exchanges: it could initiate the requests and update its windows asynchronously on a single thread, but if the application also calculated the difference in price between the two exchanges, it would have to wait for both async calls to complete. This is forced on us because an async method (such as one we might write to call an exchange for a stock price) does not return the stock price but a Task, which can be thought of as a way to get back to the marker that was laid down so the function can complete and return the stock price.
This means that every function that calls an async function needs to be async itself, or to wait for the "other thread/process/machine" call at the bottom of the call stack to complete; and if we are waiting for the bottom call to complete, why bother with async at all?
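For instance, the coffee-price example above might look like this (GetPriceAsync is a hypothetical method calling an exchange):

// "Async all the way": start both requests, then await both results;
// the single thread is free while the two calls are in flight.
async Task<decimal> GetPriceDifferenceAsync()
{
    Task<decimal> first  = GetPriceAsync("ExchangeA", "COFFEE");
    Task<decimal> second = GetPriceAsync("ExchangeB", "COFFEE");
    return await first - await second;
}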
When writing a web API, IIS or another host is the "desktop application": we write our controller methods async so that the host can execute other methods on our thread to service other requests while our code is waiting for a response from work on a different thread/process/machine.
In my opinion, the following describes a clear advantage of async controllers over synchronous ones.
A web application using synchronous methods to service high latency calls, where the thread pool grows to the .NET 4.5 default maximum of 5,000 threads, would consume approximately 5 GB more memory than an application able to service the same requests using asynchronous methods and only 50 threads. When you're doing asynchronous work, you're not always using a thread. For example, when you make an asynchronous web service request, ASP.NET will not be using any threads between the async method call and the await. Using the thread pool to service requests with high latency can lead to a large memory footprint and poor utilization of the server hardware.
from Using Asynchronous Methods in ASP.NET MVC 4
