Ajax: how to get multiple responses while the response is being generated

I want to know whether it's possible to print something while Ajax is processing the request. If so, please let me know, because I'm facing a problem: I want to print something between the moment the Ajax request is sent and the moment its response arrives.
Specifically, I want to read a CSV of 3000+ rows, and during this process I want to display the number of rows read and copied into another CSV.
So I want to show something like a progress bar: out of 3000 rows, 50 have been copied completely, and so on until all 3000 rows are done.
If there is a way, please let me know!

You can use XHR progress events (if your browser supports them):
https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress
https://dvcs.w3.org/hg/progress/raw-file/tip/Overview.html#interface-progressevent
How to check in JavaScript if XMLHttpRequest object supports W3C Progress Events?
But your question asks for more than simply "I have 1000 bytes out of a possible 9999": you want to know exactly how many rows you have read up to that point.
I think you can read xhr.responseText on each progress event, but only if the response is parseable when incomplete (such as plain text) - https://stackoverflow.com/a/5319647/319878.
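For instance, a minimal sketch of counting completed rows during progress events (the URL and the 'status'/'bar' element IDs are placeholders):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/export.csv');   // placeholder URL

xhr.onprogress = function (e) {
    // Counting newlines gives the number of complete rows received so far.
    var rows = xhr.responseText.split('\n').length - 1;
    document.getElementById('status').textContent = rows + ' rows received';
    if (e.lengthComputable) {
        document.getElementById('bar').value = e.loaded / e.total;   // byte-based progress
    }
};

xhr.onload = function () {
    document.getElementById('status').textContent = 'done';
};

xhr.send();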
But for libraries like zip.js that request non-text binary data into ArrayBuffers when fetching zip content, this will not work, as those requests must be async (http://www.w3.org/TR/XMLHttpRequest/#the-responsetype-attribute). If you try to read xhr.responseText in this case, you will get an error like this (example from Firefox):
[Exception... "An attempt was made to use an object that is not, or is no longer, usable"
code: "11"
nsresult: "0x8053000b (InvalidStateError)"
location: "http://localhost/xhr-progress-test/test-xhr-on-progress.html Line: 1075"]
So for zipped content (using zip.js) I think you would have to wait for the whole response to arrive before using it, which is not what you want.
Much simpler would be to modify your server code to return a selection of rows at a time, e.g. rows 1-100, then 101-200, and so on.
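On the client that could look something like this (an untested sketch; '/copyrows.php' and its start/count parameters are invented names for your server-side script):

var BATCH = 100, TOTAL = 3000;

function fetchBatch(start) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/copyrows.php?start=' + start + '&count=' + BATCH);
    xhr.onload = function () {
        // The server has copied this batch; report row-based progress.
        var done = Math.min(start + BATCH, TOTAL);
        document.getElementById('status').textContent = done + ' of ' + TOTAL + ' rows copied';
        if (done < TOTAL) {
            fetchBatch(start + BATCH);   // request the next slice
        }
    };
    xhr.send();
}

fetchBatch(0);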

You may try one of several approaches to solve this problem:
JSONP Callback:
This fits your problem well. You can batch your response into 50 rows of CSV data per call. Each JSONP call passes a start parameter and gets 50 rows back, i.e. rows start through start+49 (and when start == 1 you can include the header too). So to fetch 3000 rows of data you'll have 60 callbacks, and on each successful callback you increase the progress bar by a proportionate value (see the sketch after the pros and cons).
Pros: Easy to show progress on no. of rows.
Cons: The data returned in the response has to be in JSON format; you need to implement a callback function.
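A rough sketch of this batching idea (the '/csv.php' endpoint and its parameters are invented; handleBatch must be global so the injected script can call it):

var BATCH = 50, TOTAL = 3000, received = 0;

// JSONP callback: the server wraps 50 rows (as JSON) in a call to handleBatch().
function handleBatch(rows) {
    received += rows.length;
    document.getElementById('bar').value = received / TOTAL;
    if (received < TOTAL) {
        requestBatch(received + 1);
    }
}

function requestBatch(start) {
    var script = document.createElement('script');
    script.src = '/csv.php?start=' + start + '&callback=handleBatch';
    document.head.appendChild(script);
}

requestBatch(1);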
Calculating bytes:
Show progress based on the number of bytes transferred. An example is available on Stack Overflow. Not sure it applies in your case, because you want to show progress based on the number of rows.
Pros: Easy to show progress on no. of bytes.
Cons: May not apply in your case.
Loading icon or bar:
This is the easiest approach of all: just show a loading image, or an indeterminate progress bar that keeps repeating.
Pros: Easy to show continuous progress.
Cons: Just shows the user that something is happening, nothing on percentage basis.
There could be other solutions too based on your situation, so please take this as a starting point and explore further.

Related

Should I make multiple Ajax requests or combine into one

I am building some html reports. The user can choose to view additional data for individual elements of the report, or choose to view all additional data.
To view a single line of additional data, an Ajax request is made.
My question is: if a user clicks "View all additional data", should I make 20 or so asynchronous Ajax calls, or just a single Ajax call that might take a little longer?
Aside from usability, are there any best practices as far as making lots of smaller Ajax requests vs one larger one?
I would say that normally you want to make one call. You're sending a request to the server anyway; while you are there, get all the data you need before coming back. Depending on the situation, you can also cache some of the data (by storing it in a variable) to limit the amount of information you are retrieving.
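A minimal sketch of that caching idea (jQuery assumed; the '/details/' endpoint and the render callback are invented for illustration):

var detailCache = {};   // element id -> previously fetched data

function getDetails(id, render) {
    if (detailCache[id]) {
        render(detailCache[id]);   // already fetched once: no request needed
        return;
    }
    $.getJSON('/details/' + id, function (data) {
        detailCache[id] = data;
        render(data);
    });
}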

Mailchimp members activity

I've got a script. The goal is:
Get Mailchimp Lists
For each list get members
For each member get activity
Store it
Does anyone know if there is any way to avoid using one API call per member to get their activity?
I've got around 28,000 members.
28,000 API calls seems about as bad as it gets.
I've tried the Lists Activity endpoint, but it's always empty, so I really do have to fetch each member's activity.
I'm currently attempting something very similar, and there is a workaround, although I am not sure how feasible it is. Basically, you can do it through reports, via email activity:
http://developer.mailchimp.com/documentation/mailchimp/reference/reports/email-activity/
The challenge is that you will be pulling 28,000+ records at a time, so it will take a long time. From my brief calculations it can take up to 1 minute per 1000 records (you will need to loop through 1000 records at a time, otherwise the request will most likely time out).
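Such a loop might look something like this (a sketch only, using Node 18+'s built-in fetch; the count/offset parameters and the emails/total_items response fields follow the usual MailChimp API v3 conventions, but double-check them against the docs):

// Page through a campaign's email-activity report 1000 records at a time.
async function fetchAllActivity(dc, campaignId, apiKey) {
    const base = 'https://' + dc + '.api.mailchimp.com/3.0/reports/' + campaignId + '/email-activity';
    const auth = 'Basic ' + Buffer.from('anystring:' + apiKey).toString('base64');
    const all = [];
    for (let offset = 0; ; offset += 1000) {
        const res = await fetch(base + '?count=1000&offset=' + offset,
                                { headers: { Authorization: auth } });
        const page = await res.json();
        all.push(...page.emails);
        if (page.emails.length === 0 || all.length >= page.total_items) break;   // no more pages
    }
    return all;
}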
The larger problem is maintaining this 'database'. If activity is happening constantly (opens/clicks/bounces), you will need to pull the whole campaign activity again and update wherever you store it. I've been trying to find a workaround for that, with no success. You could use the 'since=2017-10-07T00:00:00+00:00' parameter, but unfortunately it still returns a blank entry when there is no activity, so if only 1000 members are actually active it will return 27,000 rows of no activity. It would be great if there were another parameter we could apply to return only emails where there was an action.
Please let me know if you find a better solution.
P.S. - it might be worth reaching out to mailchimp support for this
Update - you can use the Mailchimp Export API: https://developer.mailchimp.com/documentation/mailchimp/guides/how-to-use-the-export-api/ and extract the email activity. I had huge issues unpacking it; please follow the links below: "Decode text response from API in Python 3.6" and "Separate pd DataFrame Rows that are dictionaries into columns". Let me know if you have any other questions.

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior in some of my JavaScript (sometimes requests are simply not sent or are retried, even though they were actually successful). But I couldn't find further details...
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any way good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes, as far as I am aware there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you deal with this is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions: a queue structure that takes XHR requests as members and enforces a delay between them is an XHR poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling precisely because of browser limitations; otherwise I get zero response back from the server.
I actually do a console log for every request that is supposed to be sent, and not all of them get logged. So the browser limits them.
I'll be even more specific with helping you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page, and I can't exactly load thousands of articles on load without severe performance implications, I load just the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events, etc., the actual change takes milliseconds. The code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check whether the content is already there.
If it is, there is no point in making another request, so I go ahead and build the view from the already-created DOM elements in that array.
If it isn't, then I need to send the request I mentioned, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function (response) {
    // now I just parse the response quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can move that slider. Since every change supposedly triggers a request (the same would happen with plain XHR requests), you quickly cross the limits of every browser, so without throttling there would be no 'callback' for most of the requests ('callback' being the JS code returned by the JSONP request, which is more of a remote script inclusion than anything else).
So what I do is push each request onto a priority queue, not poll, as I don't need to send multiple simultaneous requests. If the queue is empty, the newly added member is executed and everyone is happy. If it's not, all uncompleted requests in progress are cancelled and only the last one is executed.
In my particular case, I do a binary search (O(log n)) to see whether the storage engine has the data for the previous requests yet, which tells me whether the previous request has completed. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations and for underperforming browsers such as Internet Explorer, I run the procedure described above about 3-4 steps ahead: I pre-load 20 pages ahead until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far back as Internet Explorer 5).
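A stripped-down sketch of that "only the latest request matters" queue (the names are illustrative; JSONP.send is the same helper used in the snippet above):

var pending = null;    // the one request we still intend to send
var inFlight = false;  // is a request currently active?

function schedule(params, onDone) {
    pending = { params: params, onDone: onDone };   // newer calls overwrite older ones
    drain();
}

function drain() {
    if (inFlight || !pending) return;
    var job = pending;
    pending = null;
    inFlight = true;
    JSONP.send(job.params, function (response) {
        inFlight = false;
        job.onDone(response);
        drain();   // fire whatever accumulated while we were busy
    });
}

The slider handler would then call schedule({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, renderPage) instead of firing a request directly.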
The reason I wrote all this is to give you an example showing that you cannot always enforce the delay directly from the FIFO structure, because your calls may feed what the user sees, and you don't want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize polling and the need to poll (simultaneously fired Ajax events; not all browsers handle them well). For instance, instead of sending one request to fetch content and another to mark that content as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly; the XHR object of your library of choice implements error handling for Ajax, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An Ajax call has finished and is supposed to return JSON, but the call somehow failed. You nevertheless try to parse the JSON and do whatever you need to do with it.
So:
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // I've seen this on a number of occasions: to log that an error
        // occurred, a lot of developers will attempt to send yet another
        // Ajax request to report the failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should throttle and poll the Ajax requests
        // that log the specific error.
    }
}
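For illustration, such a worker could look something like this (a sketch only; the '/log' endpoint is invented). The page would create it once with new Worker('logger-worker.js'), and the catch block above would post error codes to it:

// logger-worker.js -- batches error codes and flushes them every 5 seconds,
// so the page never fires a burst of logging requests.
var buffer = [];

onmessage = function (e) {
    buffer.push(e.data);   // e.data is the pre-determined error code
};

setInterval(function () {
    if (buffer.length === 0) return;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/log');   // invented logging endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(buffer.splice(0)));   // send and clear the batch
}, 5000);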
I have seen various implementations that fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown'. What I advise you instead is to think about the following:
How important is speed for your app?
Just how scalable and how I/O-intensive will it be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can, so that you never need to send 10 simultaneous XHR requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how did they not think about this?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read such settings across many browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps?
No, the browser does not affect polling in any way. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on the connections themselves is that you can usually only have two to four simultaneous connections to the same host.
I've written some apps with long poll, some with C++ backend with my own webserver, and one with PHP backend with Apache2.
My long-poll timeout is 4-10 s. When something occurs, or the 4-10 s pass, my server returns an empty response; then the client immediately starts another Ajax request. I found that some browsers hang when I start the next Ajax call from the previous Ajax handler, so I use setTimeout() with a small value to start the next request.
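A minimal version of that loop looks like this (a sketch; '/events' is a placeholder endpoint and handleEvent() stands in for your own dispatch function):

function poll() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/events');   // placeholder long-poll endpoint
    xhr.timeout = 15000;          // a bit above the server's 4-10 s window
    xhr.onload = function () {
        if (xhr.responseText) {
            handleEvent(JSON.parse(xhr.responseText));   // empty body = nothing happened
        }
        setTimeout(poll, 50);     // small delay instead of recursing from the handler
    };
    xhr.ontimeout = xhr.onerror = function () {
        setTimeout(poll, 2000);   // back off a little on failure
    };
    xhr.send();
}

poll();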
When something happens on the client side that should be sent to the server, I use another Ajax request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) arrives on the long poll. This requires at most two connections to the server, which all browsers support.
Keep in mind that 500 clients mean 500 server-side webserver threads, which will move together and cause load peaks: when something happens, the server has to report it to every client at the same time, the clients take roughly the same time to process it, they start their next long request at the same time, and from then on the timeouts also expire at the same time, and so do the subsequent ones. You can play tricks with a randomized timeout, say 4 + rnd(0..4) s, but it's mostly worthless: as soon as anything reportable happens, all the requests have to be served at the same moment and the clients 'sync' up again.
I've tested it through a router, and it works. I assume routers tolerate the 4-10 s lag; it's around the load time of a slow webpage (far, far away), which no router would consider worth cancelling.
My PHP project is a collaborative spreadsheet; it looks amazing when you hit Enter and the change shows up simultaneously in several browsers. Have fun!
There is no limit on the number of Ajax requests; however, they must go to the same host and port.
The server can limit the number of requests from a machine based on its settings.
For example, a server can be configured so that if more than a few requests arrive from the same machine within a specified time, it rejects them.
After a small mistake in my JavaScript code, a never-ending loop was created, with each iteration making 2 Ajax requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)

Ajax and Performance/Speed

I'm currently creating a small todo site, and I have several questions related to Ajax and performance. Here they are:
In order to reduce the number of requests, I want to get all data in one request, so I will pass, for example, these attributes:
1.1. to get 1 task:
entity=task&id=2&type=single&extra=subtasks%%contexts
1.2. to get list of tasks and events in one listing:
entity=task%%event&user_id=2&type=multiple&order=date&limit=10
Do you think this will reduce the number of requests and somehow improve performance?
If all requests go to one file, that .php file might get quite big. Is that bad, or does it not really matter?
For the listing: I will be able to change the order of the listing and maybe filter it. Do you think it would be better to load all tasks and events once and sort/filter them on the client, or to request a fresh list from the server each time?
To keep things fast there are two concerns:
Reduce HTTP requests – if you need two separate bits of data, send them in one file.
Keep the content delivered in each AJAX request small – gzip and caching works wonders here.
So, yes, bundle things together. A large PHP file doesn't make any difference; DB queries are the only real bottleneck in a normally trafficked webpage.
For filtering and sorting, a good approach is to return JSON in the AJAX response, then sort/filter it on the client side if you are dealing with a smallish number of items (probably up to 1000). If you have hundreds of thousands of items, returning a subset from the server will be better.
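A small sketch of the client-side variant (jQuery assumed; the endpoint, field names, and render() function are invented for illustration):

var tasks = [];   // filled once from a single Ajax response

function show(filter, orderBy) {
    var view = tasks
        .filter(function (t) { return !filter || t.context === filter; })
        .sort(function (a, b) { return a[orderBy] < b[orderBy] ? -1 : 1; });
    render(view);   // your own rendering function
}

// One request, many views: re-sorting or filtering never touches the server.
$.getJSON('/api/tasks?user_id=2&limit=1000', function (data) {
    tasks = data;
    show(null, 'date');
});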

Large number of concurrent ajax calls and ways to deal with it

I have a web page which, upon loading, needs to do a lot of JSON fetches from the server to populate various things dynamically. In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data.
It works great in Chrome; however, Safari and Firefox suffer noticeably: while the numerous JSON requests are in flight, those browsers become sluggish and unusable. I assume this is due to the rather expensive repeated iteration of said data structure. Is that a valid assumption?
How can I mitigate this without changing the query language so that it's a single fetch?
I was thinking of applying a queue that could limit the number of concurrent Ajax queries (and hence also limit the number of concurrent updates to the data structure)... Any thoughts? Useful pointers? Other suggestions?
In browser-side JS, create a wrapper around jQuery.post() (or whichever method you are using) that appends requests to a queue. Also create a function 'queue_send' that actually calls jQuery.post(), passing the entire queue structure.
On the server, create a proxy function 'queue_receive' that replays the JSON against your server interfaces as though each request came from the browser, collects the results into a single response, and sends it back.
The browser-side success handler for queue_send ('queue_send_success') must then decode this response and populate your data structure.
With this, you should be able to reduce your initialization traffic to one actual request, and maybe consolidate some other requests on your site as well.
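A rough sketch of the browser side of this idea (the '/queue_receive' URL matches the hypothetical proxy described above):

var queue = [];

// Wrapper: collect requests instead of sending each one immediately.
function queued_post(url, data, success) {
    queue.push({ url: url, data: data, success: success });
}

// Send the whole queue as one POST; the server-side 'queue_receive' proxy
// replays each entry and returns an array of results in the same order.
function queue_send() {
    var batch = queue.splice(0);
    jQuery.post('/queue_receive', {
        requests: JSON.stringify(batch.map(function (q) {
            return { url: q.url, data: q.data };
        }))
    }, function (results) {
        results.forEach(function (result, i) {
            batch[i].success(result);   // route each result to its handler
        });
    }, 'json');
}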
"In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data."
I'd try:
Queuing the responses as they come in, then updating the structure once (see the sketch below)
Keeping the graphical representation hidden until all the responses are in
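A compact sketch of the first idea using jQuery deferreds (mergeIntoStructure() and redraw() stand in for your own code; the URLs are placeholders):

var urls = ['/part/1', '/part/2', '/part/3'];   // placeholder endpoints

// $.when resolves once every request is done, so the expensive structure
// update and redraw happen exactly once instead of once per response.
$.when.apply($, urls.map(function (u) { return $.getJSON(u); }))
    .done(function () {
        [].slice.call(arguments).forEach(function (r) {
            mergeIntoStructure(r[0]);   // r is [data, statusText, jqXHR]
        });
        redraw();   // your single redraw of the graphical representation
    });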
Magicianeer's answer is also good - though I'm not sure it fits your definition of "without changing the query language so that it's a single fetch" - but it would avoid re-engineering existing logic.