Scenario:
In my site I display books.
The user can add any book to a "Read Later" list.
Behavior:
When the user enters the site, they are presented with a list of books, some of which are already in their "Read Later" list and some of which aren't.
The user has an indication next to each book telling them whether the book has been added to the list or not.
My issue:
I am debating which option is ideal for my situation.
Option 1:
For every book, query the server whether it already exists in the user's list.
Update the indicator for each book.
Pro: A very small request to the server, and a very simple response (true or false).
Con: On a page with 30 books, I will send 30 separate HTTP requests, which can block sockets and is rather slow, considering the browser and the server have to perform the entire handshake for each transaction.
Option 2:
I query the server once, and get a response with the full list of books in the "Read Later" list as an array.
In the browser, I go over the array and update the indicator for each book based on whether it exists in the array or not.
Pro: I only make one request, and update the indicators for all the books at once.
Con: The "Read Later" list might have hundreds of books, and passing a big array might prove slow and excessive, especially in scenarios where not 30 books appear on the screen but only 2-3. (That is, to check whether a few books are in the list, I have the server send the client the entire list.)
So,
Which way would you go to maximize performance: 1 or 2?
Is there any alternative I am missing?
I think in 2017, and beyond, the solution is much less about overall performance and much more about user experience and user expectations.
Nowadays users do not tolerate delays. In that sense, sophisticated user interfaces try to be responsive as quickly as possible. Thus: if you can use those small requests to enable the user to do something quickly (instead of waiting 2 seconds for that one big request to return), you should prefer that solution.
To my knowledge, there are many "high fidelity" sites out there where a single page might send 50 or 100 requests. Therefore I consider that to be common practice!
And maybe it is helpful here: se-radio.net podcast episode 277 discusses this topic in depth, in the context of tail latency.
Option 1 sounds good but has a big problem in terms of scalability.
Option 2 mitigates this scalability problem and we can improve its design:
Client side, via JavaScript, collect the IDs of only the displayed books and query the server once, via AJAX, for an array of read-later info covering just those 30 books. This way you still serve the page fast and fetch the small set of additional info with a single HTTP request.
Server side, you can improve things further by caching an in-memory array of read-later IDs for each user.
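A minimal sketch of that client-side idea; the /read-later/status endpoint, its response shape, and the data-book-id markup are assumptions:

async function markReadLaterIndicators() {
  // Collect the IDs of only the books currently rendered on the page
  const bookEls = document.querySelectorAll('[data-book-id]');
  const ids = Array.from(bookEls, el => el.dataset.bookId);

  // One request for the read-later status of just those books
  // (endpoint name and response shape are assumptions)
  const response = await fetch(`/read-later/status?ids=${ids.join(',')}`);
  const inList = new Set(await response.json()); // e.g. ["3", "17", "42"]

  // Flip the indicator for each displayed book
  for (const el of bookEls) {
    el.classList.toggle('in-read-later', inList.has(el.dataset.bookId));
  }
}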
Live Testing, Solution & Real-World Data
This answer is written in JavaScript and includes easy-to-understand code examples.
Introduction
The OP asked for the most efficient way to make requests to a "Read Later" API, where each request has to wait some time while the backend saves the book.
For this answer, I have created a demo "Read Later" API endpoint; every request waits a random 70-130 milliseconds to simulate saving a book.
Every scenario is tested with the same 30 books.
Finally, we will compare the methods by measuring the real runtime of every action we take.
Synchronous Requests (OP's Option 1)
Here, we run every call via JS one after the other, sequentially: each request starts only after the previous one has finished.
The code:
async function saveBooksSync() {
  console.time('save-books-sync');
  // create 30 book IDs
  const booksIds = Array.from({length: 30}, (_, i) => i + 1);
  // create 30 API links, one for each request
  const urls = booksIds.map(bookId => `http://localhost:7777/books/read-later?bookId=${bookId}`);
  for (const url of urls) {
    const response = await fetch(url);
    const json = await response.json();
    console.log(json);
  }
  console.timeEnd('save-books-sync');
}
Runtime: 3712.40087890625 ms
One Big Request
Although we do not create many request connections to the server, the runtime speaks for itself.
The code:
async function saveAllBooksAtOnce() {
  console.time('save-all-books');
  const booksIds = Array.from({length: 30}, (_, i) => i + 1);
  const url = `http://localhost:7777/books/read-later?all=1`;
  const response = await fetch(url);
  const json = await response.json();
  console.timeEnd('save-all-books');
}
Runtime: 3486.71484375 ms
Parallel Asynchronous Requests (solution)
Here the magic happens; this is the solution to the question of the most efficient request method.
Here we make 30 small requests in parallel, with amazing results.
The code:
async function saveBooksParallel() {
  console.time('save-books');
  const booksIds = Array.from({length: 30}, (_, i) => i + 1);
  const urls = booksIds.map(bookId => `http://localhost:7777/books/read-later?bookId=${bookId}`);
  const promises = urls.map((url) =>
    fetch(url).then((response) => response.json())
  );
  const data = await Promise.all(promises);
  console.log(data);
  console.timeEnd('save-books');
}
Here in this asynchronous parallel example, I used the Promise.all method.
The Promise.all() method takes an iterable of promises as an input, and returns a single Promise that resolves to an array of the results of the input promises.
Runtime: 668.47705078125 ms
Conclusion
The results are clear: the most efficient way to make these multiple requests is to run them asynchronously, in parallel.
Update: I followed @Iglesias Leonardo's request to remove the console.log() of the data output, because (presumably) it consumes significant resources.
These are the runtime results:
Synchronous Requests: 3371.695 ms
One Big Request: 3358.269 ms
Parallel Asynchronous Requests: 613.506 ms
Update Conclusion:
The runtimes stayed almost the same, which reflects the reality that Parallel Asynchronous Requests are unmatched in speed.
In my view, it depends on how the data is stored. If a relational database is being used, you could easily get the boolean flag into the list of books by simply doing a join on the corresponding tables.
This will most likely give you the best results, and you wouldn't have to write any algorithms on the front end.
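For illustration, a sketch of such a join issued from a Node backend with the node-postgres client; the books and read_later tables and all column names are hypothetical:

const { Pool } = require('pg'); // node-postgres, assuming a PostgreSQL backend
const pool = new Pool();

// One LEFT JOIN returns every book together with a boolean flag,
// so the client never needs to compute membership itself
// (table and column names are hypothetical)
async function getBooksWithFlag(userId) {
  const { rows } = await pool.query(
    `SELECT b.id, b.title,
            (rl.book_id IS NOT NULL) AS in_read_later
       FROM books b
       LEFT JOIN read_later rl
         ON rl.book_id = b.id AND rl.user_id = $1`,
    [userId]
  );
  return rows;
}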
Related
I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done, so I started looking into more advanced things such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically runs different SQL queries against the database, hence the number of calls). The data in the DB is not updated very often, so we have a local cache for the responses, but it obviously gets lost when a user refreshes the page.
This is where I wanted to use a service worker: to keep the responses either in the cache store or in IndexedDB (I went with the second option). And, of course, the cache-first approach does not fit too well here, as there is still a chance that the data may become stale. So I tried to implement the stale-while-revalidate strategy: if the response for a given request is already in the cache, return it, but also make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app is still waiting for the real requests to resolve even when there is a cache entry to return (I see those responses in the Network tab).
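For reference, the plain cache-based version of that recipe looks roughly like this; my IndexedDB variant, where the problem shows up, is below:

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('api-cache').then((cache) =>
      cache.match(event.request).then((cached) => {
        // kick off the network request either way and refresh the cache
        const networkFetch = fetch(event.request).then((networkResponse) => {
          cache.put(event.request, networkResponse.clone());
          return networkResponse;
        });
        // serve the cached copy immediately if we have one
        return cached || networkFetch;
      })
    )
  );
});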
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Doing the update immediately is unnecessary in my case, so I was wondering if there is any way to delay it? Or, alternatively, not to wait for the "real" response to resolve?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I have to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  const request = event.request;
  event.respondWith(
    serializeRequest(request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(request).then((networkResponse) => {
          // cache the fresh network response, not the (possibly undefined) cached one
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // serve the cached response if present, otherwise wait for the network
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Could this be due to the fact that I put stuff into IndexedDB instead of the cache? I am sort of forced to use IndexedDB instead of the cache because those "magic endpoints" are POSTs rather than GETs (they require a body), and POST requests cannot be inserted into the cache...
I'm trying to synchronize my POSTs to an endpoint in Angular. I did see some examples of doing a synchronized GET but had trouble understanding the examples well enough to apply them to POSTs.
The POSTs are pretty simple, at least from my perspective as the front-end developer. I send an object with a parent group ID and a sub-group ID to a /parentgroups endpoint. On the backend, however, async calls cause the data to get overwritten.
Apologies for the lack of an example, but I am pretty far from having one that's close to working the way I need. My code is still async, with a loop over calls to $http.post().
You actually cannot do real synchronous (as in blocking) HTTP calls in Angular; it forces you to use async. If you can't do it with callbacks, then you have a problem with your architecture that the entire team should focus on solving ASAP. If your current architecture requires the frontend to make blocking calls, then your architecture is quite simply broken and needs to be fixed.
Anyway, while I recommend against it, you could always register your requests in a list, and then in each callback pop the next request from the list and run it. That way you can keep pushing requests into the list without knowing how many there will be. Something like this (untested, but the general principle should work):
var requestList = [];

requestList.push(function() {
  $http.post('/someUrl', {})
    .success(function(data, status, headers, config) {
      // Remove the next request from the list and call it (if any are left)
      var next = requestList.shift();
      if (next) next();
    });
});

requestList.push(function() {
  $http.post('/someOtherUrl', {})
    .success(function(data, status, headers, config) {
      // Remove the next request from the list and call it (if any are left)
      var next = requestList.shift();
      if (next) next();
    });
});

// Start the first request
requestList.shift()();
This is fairly clean, but still a bit of a hack. It would probably work fine, but I would take a good long look at why the API forces you to do something like this.
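For what it's worth, the same sequencing can be written a bit more declaratively by chaining the promises with reduce. This is a sketch assuming $q is injected alongside $http, with placeholder URLs and payloads:

var posts = [
  { url: '/someUrl', data: {} },
  { url: '/someOtherUrl', data: {} }
];

// Each POST starts only after the previous one has resolved
posts.reduce(function (chain, post) {
  return chain.then(function () {
    return $http.post(post.url, post.data);
  });
}, $q.when());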
I need to design a rate limiter service for throttling requests.
For every incoming request, a method will check whether the requests per second have exceeded the limit. If they have, it will return the amount of time the request needs to wait before being handled.
Looking for a simple solution that just uses the system tick count and RPS (requests per second). It should not use a queue or complex rate-limiting algorithms and data structures.
Edit: I will be implementing this in C++. Also, note that I don't want to use any data structures to store the requests currently being executed.
API would be like:
if (!RateLimiter.Limit())
{
    do work
    RateLimiter.Done();
}
else
    reject request
The most common algorithm used for this is the token bucket. There is no need to invent a new thing; just look for an implementation in your technology/language.
If your app is highly available / load balanced, you might want to keep the bucket information in some sort of persistent storage. Redis is a good candidate for this.
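For illustration, a minimal in-memory token bucket sketch, written here in JavaScript purely for readability (a C++ version would have the same shape). It uses only the clock and the configured rate, matching the constraints in the question:

// Minimal token bucket: holds up to `capacity` tokens, refilled at `ratePerSec`.
// limit() returns 0 if the request may run now, otherwise the wait time in ms.
class TokenBucket {
  constructor(capacity, ratePerSec) {
    this.capacity = capacity;
    this.ratePerSec = ratePerSec;
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  limit() {
    const now = Date.now();
    // Refill proportionally to the time elapsed since the last call
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return 0; // allowed to run now
    }
    // Not enough tokens: report how long until one becomes available
    return ((1 - this.tokens) / this.ratePerSec) * 1000;
  }
}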
I wrote Limitd, which is a different approach: a daemon for limits. The application asks the daemon, using a limitd client, whether the traffic is conformant. The limit is configured on the limitd server, and the app is agnostic to the algorithm.
Since you give no hint of language or platform, I'll just give some pseudocode.
Things you are gonna need:
a list of currently executing requests
a way to get notified when a request is finished
And the code can be as simple as:
var ListOfCurrentRequests; // a list of the start times of current requests
var MaxAmountOfRequests;   // just a limit
var AverageExecutionTime;  // if the execution time is non-deterministic, the best we can do is keep an average

// for each request, either execute it or return the PROBABLE amount of time to wait
function OnNewRequest(Identifier)
{
    if (count(ListOfCurrentRequests) < MaxAmountOfRequests) // if we have room
    {
        Struct Tracker;
        Tracker.Request = Identifier;
        Tracker.StartTime = Now; // save the start time
        AddToList(Tracker);      // add to the list
    }
    else
    {
        return CalculateWaitTime(); // return the PROBABLE time until a 'slot' is available
    }
}

// when a request has ended, release a 'slot' and update the average execution time
function OnRequestEnd(Identifier)
{
    Tracker = RemoveFromList(Identifier);
    UpdateAverageExecutionTime(Now - Tracker.StartTime);
}

function CalculateWaitTime()
{
    // the one that started first is PROBABLY the first to finish
    Tracker = GetTheOneThatIsRunningTheLongest(ListOfCurrentRequests);
    // assume it will finish in the average time, minus how long it has already been running
    ProbableTimeToFinish = AverageExecutionTime - (Now - Tracker.StartTime);
    return ProbableTimeToFinish;
}
But keep in mind that there are several problems with this:
It assumes that, by returning the wait time, the client will issue a new request after that time has passed. Since the time is an estimate, you cannot use it to delay execution, or you can still overflow the system.
Since you are not keeping a queue and delaying the requests, a client can end up waiting longer than it needs to.
And lastly, since you do not want to keep a queue to prioritize and delay the requests, you can get a livelock: you tell a client to return later, but when it returns someone else has already taken its spot, and it has to return again.
So the ideal solution would be an actual execution queue, but since you don't want one... I guess this is the next best thing.
According to your comments, you just want a simple (not very precise) requests-per-second flag. In that case the code can be something like this:
var CurrentRequestCount;
var MaxAmountOfRequests;
var CurrentTimestampWithPrecisionToSeconds;

function CanRun()
{
    if (Now.AsSeconds > CurrentTimestampWithPrecisionToSeconds) // a second has passed, reset the counter
    {
        CurrentRequestCount = 0;
        CurrentTimestampWithPrecisionToSeconds = Now.AsSeconds; // remember which second we are counting
    }

    if (CurrentRequestCount >= MaxAmountOfRequests)
        return false;

    CurrentRequestCount++;
    return true;
}
It doesn't seem like a very reliable way to control anything... but... I believe it's what you asked for.
I want to call 1 million different URLs with AJAX.
What I did is (Javascript, jQuery used):
var numbers = [1, 2, 3...1000000]; // numbers.length = 1000000

$(function () {
  $.each(numbers, function(key, val) {
    $.ajax({
      url: '/getter.php',
      data: { id: val },
      success: function () {
        console.info(val);
      }
    });
  });
});
I loop over 1 million integers and pass each one to my getter.php (which does something cool with those numbers).
The problem is that after ~1.5k requests, Google Chrome dies.
I know I am doing this in an ineffective way, which is why I am asking for help: how do I actually do it right? How can I send a GET request to a PHP script 1 million times (not necessarily with JavaScript!)?
You could use a persistent connection between your php script and the client requesting the data.
I think you are bumping into a limitation of the time-to-live of the individual requests you are making. Also, HTTP functions as request-response, which is why, when you put it in your foreach statement, each single call is processed on its own, like:
1. iteration: GET /getter.php with value 1 .... wait... oh, there's a response
2. iteration: GET /getter.php with value 2 .... wait... oh, another response
This is a seemingly long and wrong process, as you might already have figured out.
Another approach would be to set up a persistent socket, which functions like the TCP protocol:
1. open the connection
2. send all the data
3. close the connection
Have you considered trying with a websocket?
Here are a few tutorials:
HTML5 WebSocket:
http://www.tutorialspoint.com/html5/html5_websocket.htm
PHP socket:
http://www.phpbuilder.com/articles/application-architecture/optimization/creating-real-time-applications-with-php-and-websockets.html
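To make the persistent-connection idea concrete, a minimal browser-side sketch might look like this (the ws:// URL and the message format are assumptions):

// Open one connection, stream all the IDs over it, then close.
var socket = new WebSocket('ws://example.com/getter');

socket.onopen = function () {
  for (var id = 1; id <= 1000000; id++) {
    // In practice you would batch these and watch socket.bufferedAmount
    socket.send(JSON.stringify({ id: id }));
  }
};

socket.onmessage = function (event) {
  // Results arrive as pushed messages over the single connection
  console.info(JSON.parse(event.data));
};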
EDIT:
Also see this article on the difference between AJAX and WebSockets.
"AJAX is great if you aren’t in a hurry, but if you’re moving a high volume of data then the overhead of creating an HTTP connection every time is going to be a bottleneck. You need a persistent connection instead. In addition, AJAX always has to poll the server for data rather than receive it via push from the server. If you want speed and efficiency you need WebSockets."
http://blog.safe.com/2014/08/websockets-ajax-webhooks-comparison/
We recently developed a site based on SOA, but this site ended up having terrible performance issues when it went under load. I posted a question related to this issue here:
ASP.NET website becomes unresponsive under load
The site is made of an API (Web API) site which is hosted on a 4-node cluster and a web site which is hosted on another 4-node cluster and makes calls to the API. Both are developed using ASP.NET MVC 5, and all actions/methods are based on async-await.
After running the site under some monitoring tools such as NewRelic, investigating several dump files, and profiling the worker process, it turned out that under a very light load (e.g. 16 concurrent users) we ended up with around 900 threads, which utilized 100% of the CPU and filled up the IIS thread queue!
Even though we managed to deploy the site to the production environment by introducing heaps of caching and performance amendments, many developers on our team believe that we have to remove all async methods and convert both the API and the web site to normal Web API and action methods which simply return an ActionResult.
I personally am not happy with that approach, because my gut feeling is that we have not used the async methods properly; otherwise it would mean that Microsoft has introduced a feature that is basically destructive and unusable!
Do you know of any reference that makes it clear where and how async methods should/can be used? How should we use them to avoid such dramas? E.g., based on what I read on MSDN, I believe the API layer should be async, but the web site could be a normal non-async ASP.NET MVC site.
Update:
Here is the async method that makes all the communications with the API.
public static async Task<T> GetApiResponse<T>(object parameters, string action, CancellationToken ctk)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(BaseApiAddress);
        var formatter = new JsonMediaTypeFormatter();
        return
            await
                httpClient.PostAsJsonAsync(action, parameters, ctk)
                    .ContinueWith(x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }).Result, ctk);
    }
}
Is there anything silly with this method? Note that when we converted all methods to non-async methods, we got heaps better performance.
Here is a sample usage. (I've cut the other bits of the code related to validation, logging, etc. This code is the body of an MVC action method.)
In our service wrapper:
public static async Task<IList<DownloadType>> GetSupportedContentTypes()
{
    string userAgent = Request.UserAgent;
    var parameters = new { Util.AppKey, Util.StoreId, QueryParameters = new { UserAgent = userAgent } };
    var taskResponse = await Util.GetApiResponse<ApiResponse<SearchResponse<ProductItem>>>(
        parameters,
        "api/Content/ContentTypeSummary",
        default(CancellationToken));
    return taskResponse.Data.Groups.Select(x => x.DownloadType()).ToList();
}
And in the Action:
public async Task<ActionResult> DownloadTypes()
{
    IList<DownloadType> supportedTypes = await ContentService.GetSupportedContentTypes();
    // ... (rest of the action elided)
}
Is there anything silly with this method? Note that when we converted all methods to non-async methods, we got heaps better performance.
I can see at least two things going wrong here:
public static async Task<T> GetApiResponse<T>(object parameters, string action, CancellationToken ctk)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(BaseApiAddress);
        var formatter = new JsonMediaTypeFormatter();
        return
            await
                httpClient.PostAsJsonAsync(action, parameters, ctk)
                    .ContinueWith(x => x.Result.Content
                        .ReadAsAsync<T>(new[] { formatter }).Result, ctk);
    }
}
Firstly, the lambda you're passing to ContinueWith is blocking:
x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }).Result
This is equivalent to:
x => {
    var task = x.Result.Content.ReadAsAsync<T>(new[] { formatter });
    task.Wait();
    return task.Result;
};
Thus, you're blocking a pool thread on which the lambda happens to be executed. This effectively kills the advantage of the naturally asynchronous ReadAsAsync API and reduces the scalability of your web app. Watch out for other places like this in your code.
Secondly, an ASP.NET request is handled by a server thread with a special synchronization context installed on it, AspNetSynchronizationContext. When you use await for continuation, the continuation callback will be posted to the same synchronization context, the compiler-generated code will take care of this. OTOH, when you use ContinueWith, this doesn't happen automatically.
Thus, you need to explicitly provide the correct task scheduler, remove the blocking .Result (this will return a task) and Unwrap the nested task:
return
    await
        httpClient.PostAsJsonAsync(action, parameters, ctk).ContinueWith(
            x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }),
            ctk,
            TaskContinuationOptions.None,
            TaskScheduler.FromCurrentSynchronizationContext()).Unwrap();
That said, you really don't need such added complexity of ContinueWith here:
var x = await httpClient.PostAsJsonAsync(action, parameters, ctk);
return await x.Content.ReadAsAsync<T>(new[] { formatter });
The following article by Stephen Toub is highly relevant:
"Async Performance: Understanding the Costs of Async and Await".
If I have to call an async method in a sync context, where using await
is not possible, what is the best way of doing it?
You should almost never need to mix await and ContinueWith; stick with await. Basically, if you use async, it's got to be async "all the way".
For the server-side ASP.NET MVC / Web API execution environment, it simply means the controller method should be async and return a Task or Task<>, check this. ASP.NET keeps track of pending tasks for a given HTTP request. The request is not getting completed until all tasks have been completed.
If you really need to call an async method from a synchronous method in ASP.NET, you can use AsyncManager like this to register a pending task. For classic ASP.NET, you can use PageAsyncTask.
At worst case, you'd call task.Wait() and block, because otherwise your task might continue outside the boundaries of that particular HTTP request.
For client-side UI apps, some different scenarios are possible for calling an async method from a synchronous method. For example, you can use ContinueWith(action, TaskScheduler.FromCurrentSynchronizationContext()) and fire a completion event from action (like this).
async and await should not create a large number of threads, particularly not with just 16 users. In fact, they should help you make better use of threads. The purpose of async and await in MVC is to give up the thread-pool thread while it is waiting on IO-bound tasks. This suggests to me that you are doing something silly somewhere, such as spawning threads and then waiting indefinitely.
Still, 900 threads is not really a lot, and if they're using 100% CPU, then they're not waiting; they're chewing on something. It's this something that you should be looking into. You said you have used tools like NewRelic; well, what did they point to as the source of this CPU usage? Which methods?
If I were you, I would first prove that merely using async and await is not the cause of your problems. Simply create a simple site that mimics the behavior, and then run the same tests on it.
Second, take a copy of your app, start stripping stuff out, and then run the tests against it. See if you can track down exactly where the problem is.
There is a lot of stuff to discuss.
First of all, async/await helps you naturally when your application has almost no business logic. The point of async/await is to avoid having many threads sleeping while waiting for something, mostly IO, e.g. database queries (and fetching). If your application runs heavy business logic that uses the CPU at 100%, async/await does not help you.
The problem with 900 threads is that they are inefficient if they run concurrently. The point is that it's better to have about as many "business" threads as your server has cores/processors. The reason is thread context switching, lock contention, and so on. There are a lot of systems, like the LMAX Disruptor pattern or Redis, which process data in one thread (or one thread per core). It's just better, as you do not have to handle locking.
How do you reach the described approach? Look at the Disruptor: queue incoming requests and process them one by one instead of in parallel.
The opposite situation, when there is almost no business logic and many threads just wait for IO, is a good place to put async/await to work.
How it mostly works: there is a thread, usually only one, which reads bytes from the network. Once a request arrives, this thread reads the data. There is also a limited thread pool of workers which process the requests. The point of async is that once a processing thread is waiting for something, mostly IO or the database, the thread is returned to the pool and can be used for another request. Once the IO response is ready, some thread from the pool is used to finish the processing. This is how you can use a few threads to serve thousands of requests per second.
I would suggest that you draw a picture of how your site works, what each thread does, and how concurrently it all works. Note that you need to decide whether throughput or latency is important to you.