Auth::check() fails on an ajax call (sometimes)

I have a logged-in user that accesses my JavaScript app.
During initialization, the app sends a couple of Ajax calls to gather some information.
Sometimes, I would say about one time out of ten, one of the calls aborts in one of my route filters.
What I observed about it:
- it doesn't occur every time
- it's not always the same route (call) that fails
- there can be more than one failure at the same time
- a simple page refresh re-triggers the calls, and as it's not a constant failure, everything goes back to normal... until the next glitch.
Here's the faulty filter.
I know it's this one because I replaced the 403 with 418, and it turned the "forbidden" glitch into a "teapot" glitch:
Route::filter('auth-api', function() {
    if (!Auth::check()) { App::abort(403, "Auth-api filter denied"); }
});
And here's the strange bug in action:
All the /api/[whatever] calls go through the same filters; in this case, /api/assurances died while the others went through fine.

It sounds like your sessions are failing for some reason. It is possibly due to the file session driver, which can lead to race conditions when the session is accessed multiple times in short succession.
The best option is to change the session driver and test whether the problem persists. I recommend trying Redis or Memcached, as these are designed to be fast and reliable.
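If you're on Laravel 4 (which Route::filter suggests), that's a one-line config change. A minimal sketch, assuming the stock config layout:
<?php
// app/config/session.php (Laravel 4) -- a minimal sketch.
// The file-based driver can race when two overlapping Ajax requests
// hit the same session file at once; redis avoids that.
return array(
    'driver' => 'redis', // was the file-based default
    // ... leave the rest of the stock session config unchanged
);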

Related

Simple question about waiting on an AJAX call

I need a Javascript function that serves the purpose shown below. I simply want to wait on the response from the server.
console.log('Before getting the city name.');
zip_code = '60601';
city_name = function_that_slowly_gets_city_name_from_server(zip_code);
console.log(city_name);
console.log('After getting the city name.');
Output in console:
Before getting the city name.
Chicago
After getting the city name.
I do not want the answer ('Chicago') sent to the console in a callback function. I understand that async:false is now taboo with $.ajax(), but I still need it to work as shown above. I cannot find posts that provide a consistent, straightforward answer.
FOLLOW UP:
I've found many answers on StackOverflow that say synchronous calls are evil. Yet, is there a way to do it anyway?
Based on your comment, your use-case is a quick method of disabling user-interaction while the AJAX call is occurring to ensure the user can't do anything bad (e.g. start a duplicate request/race condition or navigate to a different part of the app, etc.). So maybe locking the thread ain't such a bad idea then, especially for an internal app that doesn't need a ton of frills?
But Here's the Problem:
The user can continue to queue events even during a locked thread. That means that any actions the user takes while a synchronous request is occurring (such as submitting a form) will continue to line up in the background, and will then begin firing as soon as the initial request is finished. So the threat of your user double or triple clicking out of impatience (or even just accidentally) -- and as a result causing duplicate calls to the database -- is very real and likely (for reference, I can double click in ~120ms pretty easily).
The same thread is also responsible for things that might surprise you, such as certain browser-level hotkeys or even exiting the tab at all, meaning yes, you could actually significantly delay the user from closing the application, though that's not likely for a low-traffic database. However, it's certainly not impossible, and it's definitely not desirable, even for an application that doesn't need all the frills of a commercial product.
So What Should I Do as a Quick Solution Instead?:
Well if you still need a quick solution that can effectively freeze the entirety of your application in one go, then depending on your existing code, this shouldn't be too bad either.
Make the request async, as is the default and standard. But before that request fires, select all elements typically in charge of event handling, disable them with the "disabled" attribute, and then re-enable them in the callback. Something like this:
var userStuff = $("input, button, submit, form");
userStuff.prop("disabled", true);
$.ajax({
    // other ajax request settings ...
    // ...
    // ...
    complete: (data) => {
        userStuff.prop("disabled", false);
    }
});
The elements contained within userStuff are just common elements that typically have some event-handling to them. It's up to you to determine if those elements are sufficient for your application, or if your application is so large that such a query could itself have a performance impact. But assuming that checks out, this will prevent the user from interacting with/queueing anything until the request has finished.
I Don't Care. Give me the Sync:
Well in that case, why not just use async: false as mentioned in your OP? I'm somewhat speculating here, but I believe it's not just async: false that's deprecated, but all means of synchronous XMLHttpRequest (which I believe $.ajax still uses under the hood), and I don't think there's any other synchronous alternative to that. So anything you do with synchronous network in mind is going to be evil, but at least in Chrome 89.0, $.ajax({async: false}) still works for me.
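For what it's worth, if the goal is just code that reads top-to-bottom like your example, modern browsers let you have that without blocking the thread via async/await. A minimal sketch, assuming a hypothetical /city endpoint that returns the city name for a ZIP code:
// Sketch only: '/city' and its response format are made-up assumptions.
async function run() {
    console.log('Before getting the city name.');
    var zipCode = '60601';
    // await suspends this function (not the browser) until the reply arrives
    var response = await fetch('/city?zip=' + zipCode);
    var cityName = await response.text();
    console.log(cityName); // e.g. "Chicago"
    console.log('After getting the city name.');
}
run();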

application ajax time out

Controller:
The code that handles the post back has a db call in the biz layer that is long running. After 30 minutes, the browser showed the error message from the ajax call; however, my db logs showed that the db method was still running - it ran way past the 30 min mark.
My question is: how does the ajax error get raised when it wasn't raised in the controller (log4j did not show any errors trapped), as the controller was still waiting for feedback from the biz layer?
Does adjusting the timeout in Tomcat help? I assume not, as the app was still processing.
My View:
function runAjax() {
    $.ajax({
        type: "POST",
        url: "Test.html",
        // key/value pairs; the original shorthand was invalid in pre-ES6 JavaScript
        data: { testparam1: testparam1, testparam2: testparam2 },
        success: function(data) {
            document.getElementById("processdata").innerHTML = "Success";
        },
        error: function(request) {
            document.getElementById("processdata").innerHTML = "Error";
        }
    });
}
The browser, aka JavaScript, is not going to wait forever for a response. After some time (30 minutes in this case), it will give up on the request, thinking the server is not responding. HTTP is not really set up to handle requests that take more than 30 minutes, at least not through the normal means.
There are some options. First, you could optimize whatever you are doing in the backend to make it faster. I mean, if it's taking 30+ min for a query to run, then you may need to rethink your SQL approach a bit.
Second, you could simply break up the steps, so that you can run each step, verify, then run the next. Depending on what you are doing, this may open up the possibility to do some work in parallel, speeding up the process. The challenge here would be in rolling back in the case when something failed.
Third, you could implement a Pub/Sub model. There are several frameworks out there to do Pub/Sub for JavaScript. I have personally used Atmosphere, which has a nice jQuery plugin and falls back to more HTTP conventional methods when the more advanced methods are not available in the browser. In this model, your View would submit the request, the backend would queue up the work and tell the View to subscribe to a particular queue/channel to get the result. The View can simply wait for the response.
Fourth, you could post the request, then have an interim page that simply keeps polling via JavaScript until the backend says "Ok, we are done!". Lots of older web applications used to do this for long-running transactions.
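As a rough sketch of that fourth option (both endpoints here are made-up placeholders), the page kicks off the job and then polls for completion:
// Sketch only: 'job/start' and 'job/status' are hypothetical endpoints.
$.post("job/start", function(job) {
    var poll = setInterval(function() {
        $.get("job/status", { id: job.id }, function(status) {
            if (status.done) {
                clearInterval(poll);
                document.getElementById("processdata").innerHTML = "Success";
            }
        });
    }, 5000); // check every 5 seconds
});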
I am sure there are probably 1000 more options you could use, but if your request will always take that long to run, I think AJAX is not the tool for you.

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScripts (sometimes requests are just not sent or retried, even if they were actually successful). But I couldn't find further information on the details.
Questions
- If it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
- Is there any good resource on this?
- Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes. As far as I am aware, there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
And this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you control this is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions. A queue struct that takes xhr requests as members and enforces a delay between them is an Xhr poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling, of course, due to browser limitations - otherwise I get zero response back from the server, and that is only because of the browser limitations.
I am actually doing a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
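A minimal sketch of such a delay-enforcing queue (all names and the 250 ms spacing are assumptions, not part of any library):
// Sketch: a FIFO queue that spaces out request-firing functions.
var queue = [];
var draining = false;
var DELAY_MS = 250; // assumed spacing between requests

function enqueue(fireRequest) {
    queue.push(fireRequest);
    if (!draining) {
        draining = true;
        drain();
    }
}

function drain() {
    if (queue.length === 0) {
        draining = false;
        return;
    }
    queue.shift()(); // fire the oldest queued request
    setTimeout(drain, DELAY_MS); // wait before firing the next one
}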
I'll be even more specific in helping you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page, and I can't exactly load thousands of articles 'onload' without severe performance implications, I load only the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events, etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
- I check if the content is not already there.
- If it is, there is no point in making another request, so I go forward with getting the DOM elements from the array, with the already created DOM elements in place.
- If it isn't, then I need to get it, so I send the request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function(callback) {
    // now I just parse the callback quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal Xhr requests), you are basically crossing the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests ('callback' being the JS code returned by the JSONP request, which is more of a remote script inclusion than anything else).
So what I do is push a request to a priority queue, not POLL, as now I don't need to send multiple simultaneous requests. If the queue is empty, the newly added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed.
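A minimal sketch of that cancel-the-stale-ones idea (the names are made up; the only assumption is that the send call returns something with an abort() method, as jQuery's jqXHR does):
// Sketch: keep at most one request in flight; newer requests win.
var inFlight = null;

function throttledSend(params, onDone) {
    if (inFlight) {
        inFlight.abort(); // cancel the stale request; the user has moved on
    }
    inFlight = $.getJSON('/articles', params, function(data) {
        inFlight = null;
        onDone(data);
    });
}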
Now in my particular case, I do a binary search (O(log n)) to see whether the storage engine already has data for the previous requests, which tells me whether the previous request has been completed or not. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations, and for browser wanna-bes such as Internet Explorer, I do the above-described procedure about 3-4 steps ahead. So I pre-load 20 pages ahead, until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far as Internet Explorer 5).
The reason why I wrote all this is to give you an example, trying to say that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into what a user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired Ajax events - not all browsers actually do good things with them). For instance, instead of doing something like sending one request to get content and sending another for that content to be tracked as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the Xhr objects from your library of choice implement error handling for ajax, and because you are an awesome developer you want to make use of it.
So say you have a try/catch block in place.
The scenario is this: an Ajax call has finished and it's supposed to return JSON, but the call somehow failed. However, you try to parse the JSON and do whatever you need to do with it.
So:
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an error
        // occurred, a lot of developers will attempt to send yet another ajax request
        // to log the failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should again throttle and poll the ajax requests that
        // log the specific error.
    }
}
While I have seen various implementations that try to fire as many Xhr requests at the same time as they possibly can until they encounter browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
- How important is speed for your app?
- Just how scalable and how intensive will the I/O be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous Xhr requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how dafuq did they not think about this?!?!)
Stefan, quick answers below:
- "If it's 'max. number of requests n per x seconds', what are the usual/default settings for x and n?"
This sounds more like a server restriction. The browser ones usually sound like:
"the maximum number of requests for the same hostname is x"
"the maximum number of connections for ANY hostname is y"
- "Is there any good resource on this?"
http://www.browserscope.org/?category=network (also hover over the table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
- "Any way to detect if a request has been 'delayed' or 'rejected' because of a rate limit?"
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of a way in JavaScript to read such settings across browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps!
No, the browser does not in any way limit polling. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long polling, some with a C++ backend with my own webserver, and one with a PHP backend with Apache2.
My long-poll timeout is 4..10 s. When something occurs, or 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang when I start the next AJAX call from the previous AJAX handler, so I am using setTimeout() with a small value to start the next AJAX request.
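A minimal sketch of such a loop (the '/poll' endpoint and the handler name are assumptions):
// Sketch: long-poll loop. The server holds '/poll' open for 4..10 s and
// then answers, possibly with an empty body; we immediately re-poll.
// setTimeout() avoids the hangs some browsers show when a new request
// is started directly inside the previous request's handler.
function longPoll() {
    $.get('/poll', function(data) {
        if (data) {
            handleServerEvent(data); // hypothetical handler for pushed events
        }
        setTimeout(longPoll, 10);
    });
}
longPoll();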
When something happens on the client side which should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. This requires at most 2 connections to the server, which all browsers support.
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will move together, causing load peaks: when something happens, the server has to report it at the same time to each client, the clients will all take about the same time to process it, and they will start their next long request at about the same time; from then on, the timeouts will also expire at the same time, and so will the following ones. You can play tricks with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything happens, they will "sync" again, since all the requests have to be served at the same time whenever something reportable happens.
I've tested it through a router, and it works. I assume routers tolerate the 4..10 s lag; it's around the load time of a slow webpage (far, far away), so no router decides the connection should be cancelled.
My PHP work is a collaborative spreadsheet; it looks amazing when you hit Enter and the data updates simultaneously in several browsers. Have fun!
There is no limit on the number of Ajax requests as such; the connection limits apply per host & port.
A server can, however, limit the number of requests from a machine based on its settings.
For example, a server can be set up so that if there are more than a few requests from the same machine within a specified time, it will reject them.
After a small mistake in my JavaScript code, a never-ending loop was created, with each step firing 2 Ajax requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding and finally crashed.
So, yes, there is a "limit" ;)

SQLAlchemy session: how to keep it alive?

I have a session object that gets passed around a whole lot and at some point the following lines of code are called (this is unavoidable):
import transaction
transaction.commit()
This renders the session unusable (by closing it I think).
My question is two part:
How do I check if a session is still alive and well?
Is there a quick way to revitalize a dead session?
For 2: The only way I currently know of is to use sqlalchemy.orm.scoped_session, then call query(...).get(id) many times to recreate the necessary model instances, but this seems pretty darn inefficient.
EDIT
Here's an example of the sequence of events that causes the error:
modelInstance = DBSession.query(ModelClass).first()
import transaction
transaction.commit()
modelInstance.some_relationship
And here is the error:
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <CategoryNode at 0x7fdc4c4b3110> is not bound to a Session; lazy load operation of attribute 'children' cannot proceed
I don't really want to turn off lazy loading.
EDIT
DBSession.is_active seems to be no indication of whether or not the session is in fact alive and well in this case:
transaction.commit()
print(DBSession.is_active)
this prints True...
EDIT
This seemed too big for a comment so I'm putting it here.
zzzeek said:
"An expired object will automatically load new state from the database, via the Session, as soon as you access anything on it, so there's no need to tell the Session to do anything here."
So how do I get stuff committed in such a way that this will happen? calling transaction.commit is wrong, what's the correct way?
So the first thing to observe here is that "import transaction" pulls in a package called zope.transaction. This is a generic transaction manager that takes hold of any number of sub-tasks, of which the SQLAlchemy Session is one, via the zope.sqlalchemy extension.
What zope.sqlalchemy is going to do here is call the begin()/rollback()/commit() methods of the Session itself, in response to its own management of the "transaction".
The Session itself works in such a way that it is almost always ready for use, even if its internal transaction has been committed. When this happens, the Session upon next use just keeps going, either starting a new transaction if it's in autocommit=False, or if autocommit=True it continues in "autocommit" mode. Basically it is auto-revitalizing.
The one time the Session is not able to proceed is if a flush has failed and the rollback() method has not been called - which, when in autocommit=False mode, the Session would like you to do explicitly when flush() fails. To see if the Session is in this specific state, the session.is_active property will return False in that case.
I'm not 100% sure what the implications are of continuing to use the Session when zope.transaction is in use. I think it depends on how you're using zope.transaction in the bigger scheme.
Which leads us to where lots of these questions do, which is: what are you really trying to do? Like, "recreate the necessary model instances" is not something the Session does, unless you are referring to existing instances which have been expired (their guts emptied out). An expired object will automatically load new state from the database, via the Session, as soon as you access anything on it, so there's no need to tell the Session to do anything here.
It's of course an option to turn off auto-expiration entirely, but the fact that you are even arriving at a problem here implies something is not working as it should - like there's some error message you're getting. More detail would be needed to understand exactly what issue you're having.
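For what it's worth, in the setup the question describes, the DetachedInstanceError usually comes from zope.sqlalchemy closing the session on commit and detaching its instances. A hedged sketch of one way around that, assuming the usual scoped-session setup (keep_session is a real option of ZopeTransactionExtension, but check the version you have):
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

# With the default keep_session=False, transaction.commit() closes the
# session and detaches loaded instances, which is what produces the
# DetachedInstanceError above. keep_session=True leaves instances
# attached, so expired state simply reloads on the next attribute access.
DBSession = scoped_session(sessionmaker(
    extension=ZopeTransactionExtension(keep_session=True)))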

ExtJS 4 - How to check if all current ajax requests are completed and then perform an action?

I have a page which fires Ajax requests for server-side validations. I need to perform an action when all the Ajax requests have finished loading or are completed.
For this, I am using Ext.Ajax.isLoading() in a recursive function in the following way:
function checkValid() {
    if (Ext.Ajax.isLoading()) {
        checkValid();
    } else {
        // Code for Action 1
    }
} // EOF
checkValid();
// Code for Action 2
The problem is that when I do this, browsers give the following errors:
Mozilla Firefox - too much recursion
IE - Stack overflow at line: 18134
If this recursion is a heavy thing for the browsers, then how to perform a task when all the Ajax requests have finished loading?
Using a delay is not what I want because, if a delay is used, the browser begins executing the other code (like 'Code for Action 2' shared above), which is not what is expected.
The main aim is that the browser shouldn't execute anything unless all the Ajax requests are complete and once completed then it should perform a particular action.
Any suggestions/help on this one?
Thanks in Advance.
PS: Using ExtJs 4.0.7
(Updated) More detail about the actual situation:
Here is a brief description of the situation being faced. There is a form in which I need to perform server-side validations on various fields. I am doing so by firing an Ajax request on the blur event. Depending upon the server's response to the validation Ajax fired on blur, fields are marked invalid and form submission is not allowed. (I'm avoiding the 'change' event as that causes a lot of overhead on the server due to the high number of Ajax requests, and it also leads to fluctuating effects on a field when responses from several such Ajax requests are received.)
Things are working fine except in one case: when the user modifies the value of a field and, instead of 'tab'bing out from the field, directly clicks the save button. In such a case, though the blur event gets fired, the processing of 'Save' doesn't wait for the Ajax validation response and submits the form. Thus, I somehow need to check whether the Ajax requests have finished loading before processing the saving of the form. requestcomplete would unfortunately not serve the purpose here. And if I try using recursion, then of course the browser hangs due to high resource usage. The same happens if I try using a pause-script workaround (as shared here - Javascript Sleep).
Any possible workaround for this one?
TIA
Your method will lead to infinite recursion.
A better way is to register a callback on Ext.Ajax's requestcomplete event, something like this (not tested):
Ext.Ajax.on('requestcomplete', function(conn, response, options) {
    if (!Ext.Ajax.isLoading()) {
        // your action...
    }
});
Unless I am misunderstanding the issue, couldn't you create a couple of globals? I know globals are bad, but in this case they will save you quite a bit of headache. One global would be "formReady", initially set to false; the other would be "ajaxActive", also set to false. You would then add an onSubmit method that checks that "formReady" is true and, if not, alerts the user that validation is occurring (or you could set a timeout for form submission and have a second check that looks at whether "ajaxActive" is true). When the AJAX call is made it would set "ajaxActive" to true, and once complete it would set "formReady" to true. You could also potentially resubmit the form automatically if the response from the AJAX was that the form was good. A rough sketch follows.
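All names in this sketch are hypothetical, and a real implementation would want one flag pair per validated field:
var formReady = false;   // set once validation has come back clean
var ajaxActive = false;  // true while a validation request is in flight

function onFieldBlur(field) {
    ajaxActive = true;
    Ext.Ajax.request({
        url: 'validate', // hypothetical endpoint
        params: { name: field.getName(), value: field.getValue() },
        callback: function(options, success, response) {
            ajaxActive = false;
            formReady = success; // assume the server reports field validity
        }
    });
}

function onSubmit() {
    if (!formReady || ajaxActive) {
        alert('Validation still running, please wait.');
        return false;
    }
    return true; // proceed with the save
}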
Ext.Ajax.request() returns a transaction object when you call it, which is unique and allows you to recognise and abort specific Ajax requests.
By just calling Ext.Ajax.isLoading() without a specified transaction object, it defaults to the last request, which is why you have to call it recursively at the moment.
If it were me, I'd create an array of these transaction objects as you fire them off, and pass each of those in as optional parameters to the Ext.Ajax.isLoading() function to check if a particular request has finished. If it has, you can remove that transaction object from the array, and only progress with the save when your array is empty.
This would get round your recursion problem, since you've always got a finite number of requests that you're waiting on.
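A sketch of that bookkeeping (Ext.Ajax.isLoading(transaction) and Ext.Array.filter are real Ext 4 API; the endpoint and saveForm are made-up placeholders):
var pending = [];

function fireValidation(params) {
    // Ext.Ajax.request() returns a transaction object we can keep
    pending.push(Ext.Ajax.request({
        url: 'validate', // hypothetical endpoint
        params: params
    }));
}

function trySave() {
    // keep only the transactions that are still running
    pending = Ext.Array.filter(pending, function(t) {
        return Ext.Ajax.isLoading(t);
    });
    if (pending.length === 0) {
        saveForm(); // hypothetical: all validations answered, safe to submit
    }
}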
Alternatively, a quick check of the connection's active request map:
if (Object.keys(Ext.Ajax.requests).length === 0) console.log("No active requests");
