Prototype.js, AJAX form submission occasionally returns status 0, XHR stays in readyState 1

I've got an odd problem here with Prototype 1.7.0 and an AJAX form submission using form.request().
The response status is either 202 or 200 depending on whether the server expects to be polled again with the same form submission after a timeout. 200 indicates that the response contents are done and are to be displayed to the user (backend uses WebWork's execAndWait-interceptor to execute a long-running job).
The problem is that most of the time everything works just fine. Occasionally, however, the response comes back with status code 0 and XMLHttpRequest readyState 1. Firebug indicates that correct response codes are coming from the backend and that the actual response contents are fine; it's just that Prototype's on200 and on202 handlers do not fire (on0 does).
It appears that similar issues have been reported around the Internet, but there is no conclusive solution. Is this some well-known problem?

A response code 0 from prototype means that it can't communicate with the server. You can remedy this by adding an "on0: function() {}" event handler in your request.
How you handle it is up to you: either alert the user that something went wrong and redisplay their form, or silently retry the request against the backend in a loop. If you choose the second option, set a wait timeout and multiply it by some factor each time you can't reach the server, so you don't trap their browser in an infinite loop.
You might also want to look into queuing these requests on the client-side so you're only firing one at a time, in order.
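For instance, a minimal sketch of that backoff-and-retry loop in Prototype (the form id "jobForm", the result element id, and the delays are assumptions for illustration):
var retryDelay = 1000; // initial wait before retrying a failed submission

function submitJob() {
    $('jobForm').request({
        on200: function(response) {
            retryDelay = 1000;                          // reset the backoff on success
            $('result').update(response.responseText);  // job finished: show the contents
        },
        on202: function() {
            setTimeout(submitJob, 2000);                // server asked to be polled again
        },
        on0: function() {
            setTimeout(submitJob, retryDelay);          // could not reach the server
            retryDelay *= 2;                            // back off so the browser isn't hammered
        }
    });
}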
Hope that helps.

Related

application ajax time out

Controller:
The code that handles the post back has a DB call in the business layer that is long-running. After 30 minutes, the browser showed the error message from the Ajax call; however, my DB logs showed that the DB method was still running - it ran way past the 30 min mark.
My question is: how does the Ajax error get raised when it wasn't raised in the controller (log4j did not show any errors trapped), as the controller was still waiting for feedback from the business layer?
Does adjusting the timeout in Tomcat help? I assume not, as the app was still processing.
My View:
function runAjax() {
    $.ajax({
        type: "POST",
        url: "Test.html",
        // the two values must be sent as named parameters
        data: { param1: testparam1, param2: testparam2 },
        success: function(data) {
            document.getElementById("processdata").innerHTML = "Success";
        },
        error: function(request) {
            document.getElementById("processdata").innerHTML = "Error";
        }
    });
}
The browser, aka JavaScript, is not going to wait forever for a response. After some time (read 30min in this case), it will give up on the request, thinking the server is not responding. HTTP is not really set up to handle requests that take > 30min, at least not through the normal means.
There are some options. First, you could optimize whatever you are doing in the backend to make it faster. I mean, if it's taking 30+ min for a query to run, then you may need to rethink your SQL approach a bit.
Second, you could simply break up the steps, so that you can run each step, verify, then run the next. Depending on what you are doing, this may open up the possibility to do some work in parallel, speeding up the process. The challenge here would be in rolling back in the case when something failed.
Third, you could implement a Pub/Sub model. There are several frameworks out there to do Pub/Sub for JavaScript. I have personally used Atmosphere, which has a nice jQuery plugin and falls back to more HTTP conventional methods when the more advanced methods are not available in the browser. In this model, your View would submit the request, the backend would queue up the work and tell the View to subscribe to a particular queue/channel to get the result. The View can simply wait for the response.
Fourth, you could post the request, then have an interim page that simply keeps refreshing via JavaScript until the backend says "Ok, we are done!". Lots of older web applications used to do this for long-running transactions.
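A rough sketch of that fourth option, polling a status endpoint until the backend reports completion (the /jobStatus URL and the {done, result} response shape are assumptions):
function pollJobStatus(jobId) {
    $.getJSON("/jobStatus", { id: jobId }, function(status) {
        if (status.done) {
            document.getElementById("processdata").innerHTML = status.result;
        } else {
            // not finished yet: ask again in five seconds
            setTimeout(function() { pollJobStatus(jobId); }, 5000);
        }
    });
}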
I am sure there are probably 1000 more options you could use, but if your request will always take that long to run, I think AJAX is not the tool for you.

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScripts (sometimes requests are just not sent, or are retried even though they were actually successful). But I couldn't find further information on the details.
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any way good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes, as far as I am aware there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you deal with this is not by changing the browser limitations but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions: a queue struct that takes XHR requests as members and enforces a delay between them is an XHR poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and of course I am throttling due to browser limitations. Otherwise I get zero response back from the server, and that is only because of browser limitations.
I am actually doing a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
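A bare-bones version of such a throttled queue might look like this (the 250 ms delay and the example URL are arbitrary assumptions; tune them to the limits you observe):
var xhrQueue = {
    pending: [],   // each member is a function of the form function(done) { ... }
    busy: false,
    delay: 250,    // minimum gap between requests, in milliseconds

    push: function(fireRequest) {
        this.pending.push(fireRequest);
        this.next();
    },

    next: function() {
        if (this.busy || this.pending.length === 0) return;
        this.busy = true;
        var self = this;
        // run the oldest request; it calls done() once its response arrives
        this.pending.shift()(function done() {
            setTimeout(function() {
                self.busy = false;
                self.next(); // fire the next queued request after the delay
            }, self.delay);
        });
    }
};

// usage: queue a request instead of firing it immediately
xhrQueue.push(function(done) {
    var xhr = new XMLHttpRequest();
    xhr.onload = done; // release the queue when the response arrives
    xhr.open("GET", "/articles?start=0&length=5", true);
    xhr.send();
});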
I'll be even more specific with helping you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page and I can't exactly load thousands of articles 'onload' without severe performance implications, I load the articles for the current page. I get them from a MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events, etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
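In code, assuming zero-based page numbers:
// with articlesPerPage = 5, page 0 covers articles 0..4, page 1 covers 5..9, and so on
var start = currentPage * articlesPerPage;          // first article on the page
var end = (currentPage + 1) * articlesPerPage - 1;  // last article on the page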
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check if the content is not already there.
If it is, there is no point in making another request, so I go forward with getting the DOM elements from the array with the already created DOM elements in place.
If it isn't, then I need to get it, so I send that request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function(callback) {
    // now I just parse the callback quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal XHR requests), you are basically crossing the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests. The 'callback' is the JS code returned by the JSONP request (which is more of a remote script inclusion than anything else).
So what I do is push a request to a priority queue, not poll, as now I don't need to send multiple simultaneous requests. If the queue is empty, the recently added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed.
Now in my particular case, I do a binary search (O(log n)) to see if the storage engine doesn't have data for the previous requests yet, which tells me if the previous request has been completed or not. If it has, then it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations and shit browser wanna-bes such as Internet Explorer, I do the above-described procedure about 3-4 steps ahead. So I pre-load 20 pages ahead until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far as Internet Explorer 5).
The reason why I wrote all this is to give you an example trying to say that you cannot always enforce delay directly from the FIFO structure, as your calls may need to turn into what a user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired Ajax events, as not all browsers actually handle them well). For instance, instead of sending one request to get content and sending another for that content to be tracked as viewed in your app metrics, do as many tasks at server level as you possibly can!
Of course, you probably want to track your errors properly, so the XHR object from your library of choice implements error handling for Ajax, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An Ajax call has finished and it's supposed to return a JSON, but the call somehow failed. However, you try to parse the JSON and do whatever you need to do with it.
so
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an error occurred,
        // a lot of developers will attempt to send yet another ajax request to log the
        // failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should again throttle and poll the ajax requests that log the
        // specific error.
    }
}
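To make the worker idea concrete, here is a minimal sketch (the 'error-logger.js' file name, the message shape, and the /logError endpoint are all assumptions):
// main thread: hand error codes to a dedicated worker instead of firing
// another ajax request straight from the failure path
var errorLogger = new Worker('error-logger.js');
errorLogger.postMessage({ code: 'JSON_PARSE_FAILED' });

// error-logger.js: batch incoming codes and flush them at a throttled rate
var queue = [];
self.onmessage = function(e) {
    queue.push(e.data.code);
};
setInterval(function() {
    if (queue.length === 0) return;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/logError', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(queue.splice(0))); // send and clear the current batch
}, 5000);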
While I have seen various implementations that try to fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
How important is speed for your app?
Just how scalable and how I/O-intensive will it be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous Xhr requests. Also, for large scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS:)(how dafuq did they not think about this?!?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read such settings in a consistent, browser-independent manner. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps?
No, the browser does not in any way limit polling. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long poll, some with C++ backend with my own webserver, and one with PHP backend with Apache2.
My long-poll timeout is 4..10 s. When something occurs, or 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang when I start an AJAX call from the previous AJAX handler, so I am using setTimeout() with a small value to start the next AJAX request.
When something happens on the client side which should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. This requires at most two connections to the server, which all browsers support.
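A minimal sketch of that client loop (the /poll URL and the handleServerEvent() function are placeholders):
function longPoll() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 && xhr.responseText) {
                handleServerEvent(xhr.responseText); // something happened server-side
            }
            // never start the next request directly from this handler;
            // a tiny setTimeout avoids the hang mentioned above
            setTimeout(longPoll, 10);
        }
    };
    xhr.open("GET", "/poll", true);
    xhr.send();
}
longPoll();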
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will move together, causing load peaks: when something happens, the server has to report it at the same time to each client, the clients will take roughly the same time to process it, they will start the next long request at the same time, and from then on the timeouts will also expire at the same time, and so on. You can try to trick this with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything happens, they will "sync" again, since all the requests have to be served at the same time whenever something reportable happens.
I've tested it through a router, and it works. I assume routers respect the 4..10 s lag; it's around the speed of a slow webpage (far, far away), which no router thinks should be cancelled.
My PHP work is a collaborative spreadsheet, it looks amazing when you hit enter and the stuff is updating simultaneously in several browsers. Have fun!
There is no limit on the number of Ajax requests; however, they are bound to the same host and port.
The server can limit the number of requests from a machine based on its settings.
For example, a server can be set up so that if more than a few requests arrive from the same machine within a specified time, it will reject them.
After a small mistake in my JavaScript code, a never-ending loop was created, with each step firing two Ajax requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)

ajax request per div

I have 10 divs on my page, and each div will render its own Ajax request when the page loads. I know I can make at most 2 Ajax requests and then have to wait (based on the browser) before the next request gets fired. I was wondering what the best way to design such a page would be.
Should I create the Ajax request inside the divs so that I can pass the div as a context to the Ajax response? Something like this:
<div id="request1">
make an ajax request
</div>
<div id="request2">
make an ajax request
</div>
and so on......
Is there any chance that the results may get mixed up and the wrong div will render the result from a different request?
--Edit--
I cannot make a single call, as they all call separate services and each service may or may not be available.
AJAX is asynchronous: if you fire 10 AJAX requests using $.get, $.post or $.ajax, those requests will run independently without waiting for the previous ones. So unless you have a special requirement that needs to avoid that, just go ahead.
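One sketch of that pattern, assuming jQuery and placeholder /service/<id> URLs: because each callback closes over its own div, even out-of-order responses cannot end up in the wrong element.
$('div[id^="request"]').each(function() {
    var $div = $(this); // captured per div, so each response targets its requester
    $.get('/service/' + $div.attr('id'), function(data) {
        $div.html(data);
    });
});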
Why don't you send only one Ajax request when the page loads, and let your server-side script return the data needed for the 10 divs in the form of JSON? That would reduce the number of requests sent to the server, and the work would be a lot cleaner as well.
Edit: OK, since this is no longer an option: you can queue the requests one after another. If the requests you are sending depend on each other (e.g. you might set a flag in the first request, which then gets checked in a later request), you can queue them. I have been using this plugin for quite a while now, and it has come in handy.
You might be able to use it in your case, so check it out:
http://www.protofunc.com/scripts/jquery/ajaxManager/

ExtJS 4 - How to check if all current ajax requests are completed and then perform an action?

I have a page which fires Ajax requests for validations at server side. I need to perform an action when all the ajax requests have finished loading or are completed.
For this, I am using Ext.Ajax.isLoading() in a recursive function in the following way:
function checkValid() {
    if (Ext.Ajax.isLoading()) {
        checkValid();
    } else {
        // Code for Action 1
    }
} // EOF
checkValid();
// Code for Action 2
The problem is that when I do this, browsers give the following errors:
Mozilla Firefox - too much recursion
IE - Stack overflow at line:18134
If this recursion is too heavy for the browsers, then how do I perform a task once all the Ajax requests have finished loading?
Using a delay is not what I want, because with a delay the browser starts executing the other code (like 'Code for Action 2' shared above), which is not what is expected.
The main aim is that the browser shouldn't execute anything unless all the Ajax requests are complete and once completed then it should perform a particular action.
Any suggestions/help on this one?
Thanks in Advance.
PS: Using ExtJs 4.0.7
(Updated) More detail about the actual situation:
Here is a brief description of the situation being faced. There is a form in which I need to perform server-side validations on various fields. I am doing so by firing an Ajax request on the blur event. Depending upon the server's response to the validation Ajax fired on blur, fields are marked invalid and form submission is not allowed. (I'm avoiding the 'change' event, as that causes a lot of overhead on the server due to the high number of Ajax requests, and it also leads to fluctuating effects on a field when responses from several such Ajax requests are received.)
Things are working fine except in one case: when the user modifies the value of a field and, instead of 'tab'bing out of the field, directly clicks the save button. In that case the blur event gets fired, but the processing of 'Save' doesn't wait for the Ajax validation response and submits the form. Thus, I somehow need to check whether the Ajax requests have finished loading and then process the saving of the form. requestcomplete would unfortunately not serve the purpose here. And if I try using recursion, then of course the browser hangs due to high resource usage. The same happens if I try using a pause-script workaround (as shared here: Javascript Sleep).
Any possible workaround for this one?
TIA
Your method will lead to infinite recursion.
A better way is to register a callback function in Ext.Ajax.requestcomplete, something like this (not tested):
Ext.Ajax.on('requestcomplete', function(conn, response, options) {
    if (!Ext.Ajax.isLoading()) {
        // your action...
    }
});
Unless I am misunderstanding the issue, couldn't you create a couple of globals? I know globals are bad, but in this case they will save you quite a bit of headache. One global would be "formReady", initially set to false; the other would be "ajaxActive", also set to false. You would also add an onSubmit method that validates that "formReady" is true, and if not, alerts the user that validation is occurring (or you could set a timeout for form submission again and have a second validation that checks whether "ajaxActive" is true). When the AJAX call is made, it would set "ajaxActive" to true, and once complete it would set "formReady" to true. You could also potentially resubmit the form automatically if the response from the AJAX was that the form was good.
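A sketch of those two globals in play (the /validate URL and the alert wording are illustrative):
var formReady = false;  // becomes true once validation has come back clean
var ajaxActive = false; // true while a validation request is in flight

function validateField(value) {
    ajaxActive = true;
    formReady = false;
    Ext.Ajax.request({
        url: '/validate',
        params: { value: value },
        callback: function(options, success, response) {
            ajaxActive = false;
            formReady = success; // only submittable after a clean response
        }
    });
}

function onSubmit() {
    if (ajaxActive || !formReady) {
        alert('Validation is still running, please wait a moment.');
        return false; // block the submit until validation has returned
    }
    return true;
}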
Ext.Ajax.request() returns a transaction object when you call it, which is unique and allows you to recognise and abort specific Ajax requests.
By just calling Ext.Ajax.isLoading() without a specified transaction object, it defaults to the last request, which is why you have to call it recursively at the moment.
If it were me, I'd create an array of these transaction objects as you fire them off, and pass each of those in as optional parameters to the Ext.Ajax.isLoading() function to check if a particular request has finished. If it has, you can remove that transaction object from the array, and only progress with the save when your array is empty.
This would get round your recursion problem, since you've always got a finite number of requests that you're waiting on.
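A rough sketch of that bookkeeping (saveForm() is a stand-in for your save logic):
var pendingValidations = [];

function fireValidation(requestOptions) {
    var txn = Ext.Ajax.request(Ext.apply(requestOptions, {
        callback: function() {
            Ext.Array.remove(pendingValidations, txn); // this one has finished
            if (pendingValidations.length === 0) {
                saveForm(); // nothing outstanding: safe to proceed with the save
            }
        }
    }));
    pendingValidations.push(txn);
}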
if (Object.keys(Ext.Ajax.requests).length === 0) console.log("No active requests");

how to clear the cache data when using ajax?

I am using Ajax to retrieve data from the server, as below, based on some ID, to perform an auto-suggest function. However, when I submit the form and update the database, the auto-suggest field should no longer contain anything for this ID, but it still retrieves data from its cache. Does anyone know how to clear the cache and make Ajax send the request to get the latest data from the server every time I press the button? Please help, I've been stuck on this for weeks and couldn't find the solution.
For example: when the ID field is 00001, the auto-suggest field will be 1,2,3. After I submit the form and update the database, when I search for 00001 again, it should not contain anything, but it does - it still shows the cached data 1,2,3 in the suggest field...
if (window.XMLHttpRequest) {
    // code for IE7+, Firefox, Chrome, Opera, Safari
    xmlhttp = new XMLHttpRequest();
} else {
    // code for IE6, IE5
    xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        var data = xmlhttp.responseText;
        alert(data);
    }
}
xmlhttp.open("GET", "gethint.php?q=" + str, true);
xmlhttp.send();
I had this problem once before. This is probably something you can fix in your server settings. What the server does is get a server request, build the answer, and when the same request is done again it sends the same response it built before.
To easily avoid this problem, I added an extra request parameter (a UID).
so:
xmlhttp.open("GET","gethint.php?q="+str+**"?something"=RANDOMGUID**,true);
this way you always ha a unique request.
This works with IE8:
xmlHttp.open("GET", URL, true);
xmlHttp.setRequestHeader("Cache-Control", "no-cache");
xmlHttp.setRequestHeader("Pragma", "no-cache");
xmlHttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");
You could use HTTP headers to prevent the response from being cached:
Cache-Control: no-cache
Expires: Mon, 24 Oct 2005 16:13:22 GMT
Another option is to add another parameter to the URL that varies every time (for example the current time in millis); that way, as far as the browser is concerned you are asking for another URL and the cache won't be used.
The easiest thing to do is use jQuery's $.ajax and disable caching.
jQuery will append a timestamp parameter (_=...) to your Ajax call, which is sufficient to persuade the browser that it cannot use cached data.
I came across this once. Here's the answer I got: Why does jQuery.ajax() add a parameter to the url?
You could do the same thing manually too, but you would have to check if the addition of the parameter is all there is to it.
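A short sketch of that, reusing the gethint.php endpoint from the question:
$.ajax({
    url: "gethint.php",
    data: { q: str },
    cache: false, // jQuery appends a _=<timestamp> parameter to defeat the cache
    success: function(data) {
        alert(data);
    }
});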
No code provided but some guidance that can help you manage account state consistently across many potential tabs, history, devices, etc:
First, the more condensed version in one long paragraph:
If you want a consistent view for an account, regardless of history back/forward buttons, extra tabs or extra windows (even with different IPs/devices), you can use an increment counter and a heartbeat (the heartbeat can be implemented as an XMLHttpRequest send() during a setInterval call configured to, say, 2 seconds). The increment is managed at the server. The client provides the counter value on each call to the server. On each request, the server checks the counter value provided by the client against its own saved value. The server produces the next counter value, persists it, and returns that counter value in the reply so the client can use it on its next call. If the client provided the expected counter value, it was a "good" request. If the value provided was different from what the server had stored, the client's call was a "bad" request. Honor good requests. The server may only partly honor bad requests. The client's "next" call could be the next heartbeat or any other request on that account. Multiple client views into that account can overlap, but essentially only one client would get the next good call. All other clients would get bad next calls because their counter values will no longer match what the server has stored. If you use one account view, every call to the server should be a good call once the session is initiated. [A session lasts as long as the browser's JavaScript maintains the counter value; unless you use cookies or the like, you cannot extend a session across a page refresh, since the JavaScript would be reinitialized. This means the first call to a page would always be a "bad" call.] If you use history back, some other tab, or some other device, you should be able to use it, but you will get a bad call at a minimum whenever you switch from one to the other. To limit these bad-call cases, turn off the heartbeat when that browser view is inactive. Note: don't even implement a heartbeat if you don't mind showing the user a possibly stale page for a prolonged time, or if the particular page is unlikely to be stale (but this question assumes you can get stale data on the user's browser view).
Let's add more detail:
Every request to a server from an existing opened browser page provides the counter value. This can be, for example, during a form submit or during a JavaScript XMLHttpRequest .send().
Requests typed from the URL bar by the user may not have a counter value sent. These, and logon, can just be treated as having an incorrect count value. They would be examples of "bad" calls and should be handled as gracefully as possible, but should generally not be allowed to update the account if you want a consistent view.
Every request seeking to modify the account (a "writer") must have provided the anticipated counter value (the server can update it by something other than +1 if you have more elaborate needs, but the value must be anticipated/unique for the next request). At the server end, if the counter value is the expected one, then process the request variables normally and allow write access. Then include in the reply to the client the next legit value the server will expect for that variable (e.g., cnt++) and persist that value on the server end (e.g., update the counter value in a database or in some other server file) so that the server will know the next legit counter value to expect whenever the next request comes in for that account.
A request that is a simple "read" is processed the same way as a write request except that if it is a bad request (if the counter doesn't match), a read is more likely to be able to be safely processed.
All requests that provide a different counter value than expected ("bad" requests) still result in the updating of the counter at the server and still result in the client's reply getting the good next expected counter value; however, those bad requests should be ignored to the extent they ask to update the account. Bad requests could even result in more drastic action (such as logging user out).
Client JavaScript will update the value of the counter upon every server reply to whatever the server returns, so that this updated counter value is sent back on the next call (e.g., on a heartbeat or on any talk to the server). Every client request will always get a legit next value sent back, but only the client that uses it first will be treated as OK by the server.
The other clients (ie, any client request that doesn't provide the expected counter value) will instead be given a safe state, eg, the current state as per the database while ignoring any write/update requests. The server can treat the "bad" client calls in other more drastic ways, eg, by logging the user out or whatever, but primarily just make sure to honor at most the bad client's safe read requests, not updating the account in any way.
The heartbeat is necessary only if you want a clean view in short order. To keep it light on the server, you can have the heartbeat be a simple ping (sending the counter value along). If acknowledged as the good client, you can be done for that heartbeat. If you were a bad client, however, then the server can return fresh info, which the JavaScript in the heartbeat code can use to update the GUI. The heartbeat can go to a different PHP server page or the main one, but if different, make sure that page gets a consistent view of the server-saved counter variable (e.g., use a database).
Another feature you may want to implement for an account is "active/inactive" status. The client would be inactive if the mouse position has not changed for a number of seconds or minutes (and no keys were typed or other user input received during that time). The heartbeat can deactivate itself (clearInterval) when the client is inactive. On every user input, check if the heartbeat is stopped and, if so, restart it. A heartbeat restart also means the user is changing from inactive to active. Stopping the heartbeat conserves client/server resources when the user is browsing in another tab or is afk. When the user becomes active again, you can do things like log them out if they were inactive for a long time, etc., or just do nothing new except restart the heartbeat. [Remember, the reply to a heartbeat could indicate that the heartbeat request was "bad"... which might possibly be a "drastic" reason to log the user out, as mentioned above.]
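A client-side sketch of that counter-plus-heartbeat loop (the /heartbeat endpoint, the reply shape, and refreshView() are assumptions):
var counter = null; // adopted from the server's reply on every call

function heartbeat() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var reply = JSON.parse(xhr.responseText);
            counter = reply.nextCounter;       // always take the server's next value
            if (!reply.good) {
                refreshView(reply.freshState); // stale view: repaint from fresh data
            }
        }
    };
    xhr.open("POST", "/heartbeat", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("counter=" + encodeURIComponent(counter));
}

var heartbeatTimer = setInterval(heartbeat, 2000); // the ~2 s interval suggested above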
I know that an answer has been accepted, but it didn't work in my case. I already had the no-cache headers added. In my case, this was the solution that really worked, because code placed after the request might run before the response has arrived; the dependent code has to run from the callback, only when the page is really loaded:
x = new XMLHttpRequest();
x.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
        // execute the other code or function only when the page is really loaded!
    }
};
x.open("GET", "add your url here", true);
x.send();
