I have an action as follows:
SomeActions.doAction1 = function() {
    // ...dispatch event "started"...
    // ...do some processing...
    FewActions.doAnotherAction(); // CAN WE DO THIS?
    // ...do something more...
    // ...dispatch event "completed"...
};
While the above works with no problems, I'm just wondering whether it is valid according to the Flux pattern/standard, or whether there is a better way.
Also, I guess calling Actions from Stores is a bad idea. Correct me if I am wrong.
Yes, calling an Action within another Action is a bad practice. Actions should be atomic; all changes in the Stores should be in response to a single action. They should describe one thing that happened in the real world: the user clicked on a button, the server responded with data, the screen refreshed, etc.
Most people get confused by Actions when they are thinking about them as imperative instructions (first do A, then do B) instead of descriptions of what happened and the starting point for reactive processes.
This is why I recommend to people that they name their Action types in the past tense: BUTTON_CLICKED. This reminds the programmer of the fundamentally externally-driven, descriptive nature of Actions.
Actions are like a newspaper that gets delivered to all the stores, describing what happened.
Calling Actions from Stores is almost always the wrong thing to do. I can only think of one exception: when the Store responds to the first Action by starting up an asynchronous process, and you want to fire off a second Action when that process completes. This is the case with an XHR call to the server. But the better way is to put the XHR handling code into a Utils module: the Store responds to the first Action by calling a method in the Utils module, and the Utils module contains the code that calls the second Action when the server response comes back.
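For illustration, here is a minimal sketch of that Utils-module pattern. The names (ItemApiUtils, ServerActions, itemsReceived, the /api/items endpoint) are made up for the example and not taken from any particular Flux codebase:

// ItemApiUtils.js - a hypothetical Utils module that owns the XHR handling.
// The Store (or an Action creator) calls fetchItems(); the module fires a
// new, past-tense Action when the server responds.
var ServerActions = require('./ServerActions'); // assumed Action creators

var ItemApiUtils = {
    fetchItems: function(query) {
        fetch('/api/items?q=' + encodeURIComponent(query)) // placeholder endpoint
            .then(function(response) { return response.json(); })
            .then(function(items) {
                ServerActions.itemsReceived(items);       // dispatches ITEMS_RECEIVED
            })
            .catch(function(error) {
                ServerActions.itemsRequestFailed(error);  // dispatches ITEMS_REQUEST_FAILED
            });
    }
};

module.exports = ItemApiUtils;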
I have a logged-in user who accesses my JavaScript app.
During initialization, the app sends a couple of Ajax calls to gather some information.
Sometimes, I would say about one time out of ten, one of the calls gets aborted in one of my route filters.
What I observed about it:
it doesn't occur every time
it's not always the same route (call)
more than one call can fail at the same time
a simple page refresh re-triggers the calls, and
as it's not a constant failure, everything goes back to normal...
until the next glitch.
Here's the filter that is faulty:
I know it's this one because I replaced the 403 with a 418, and that turned the "forbidden" glitch into a "teapot" glitch.
Route::filter('auth-api', function() {
    if (!Auth::check()) {
        App::abort(403, "Auth-api filter denied");
    }
});
And here's the strange bug in action:
All of the /api/[whatever] calls go through the same filters; in this case, /api/assurances died while the others went through fine.
It sounds like your sessions are failing for some reason, possibly due to the file session driver, which can run into race conditions when it is accessed multiple times in quick succession.
The best option is to change the session driver and test whether the problem persists. I recommend trying Redis or Memcached, as these are designed to be fast and reliable.
I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior in some of my JavaScript (sometimes requests are simply not sent, or are retried even though they were actually successful). But I couldn't find any further details.
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any way good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes, as far as I am aware there is a default pool limit of 10 and a default request timeout of 30 seconds per request. However, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you handle this is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions: a queue structure that takes XHR requests as members and enforces a delay between them is, in effect, an XHR poll. For instance, I am using
JSONP to get data from a node.js server located on another domain, and I am polling, of course, because of browser limitations. Otherwise I get zero response back from the server, and that is purely because of browser limitations.
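As a rough sketch of what such a throttled queue might look like (RequestQueue and sendJsonp are invented names, and the 200 ms cooldown is arbitrary, not a browser constant):

// Minimal throttled FIFO queue: at most one request in flight, with a
// fixed delay between consecutive requests.
var RequestQueue = {
    pending: [],
    busy: false,
    delayMs: 200, // arbitrary cooldown between requests

    push: function(params, callback) {
        this.pending.push({ params: params, callback: callback });
        this.drain();
    },

    drain: function() {
        if (this.busy || this.pending.length === 0) return;
        this.busy = true;
        var job = this.pending.shift();
        var self = this;
        sendJsonp(job.params, function(response) { // sendJsonp: your JSONP helper
            job.callback(response);
            setTimeout(function() {                // wait out the cooldown, then continue
                self.busy = false;
                self.drain();
            }, self.delayMs);
        });
    }
};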
I am actually doing a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
I'll be even more specific to help you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page, and I can't exactly load thousands of articles 'onload' without severe performance implications, I only load the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, keydown events, etc., the actual change takes milliseconds. However, the code for the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to ((currentPage + 1) * articlesPerPage) - 1
(for example, page 3 with 5 articles per page covers items 15 through 19).
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check whether the content is already there.
If it is, there is no point in making another request, so I go ahead and get the DOM elements from the array, with the already created DOM elements in place.
If it isn't, then I need to fetch it, so I send the request I mentioned, which would look something like this (without accounting for browser limitations):
JSONP.send({ 'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage }, function(callback) {
    // now I just parse the callback quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal XHR requests), you quickly cross the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests. ('callback' is the JS code returned by the JSONP request, which is more of a remote script inclusion than anything else.)
So what I do is push a request onto a priority queue rather than polling, since now I don't need to send multiple simultaneous requests. If the queue is empty, the newly added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed.
Now, in my particular case, I do a binary search (O(log n)) to see whether the storage engine already has data for the previous requests, which tells me whether the previous request has completed or not. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
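A stripped-down sketch of that "only the latest request wins" idea might look like the following; cache, render, sendJsonp, itemsPerPage and the cancel() hook are all placeholders that depend on your own storage engine and transport:

// Keep only the most recent request; older in-flight ones are dropped.
var latestRequest = null;

function loadPage(page) {
    if (cache[page]) {               // the storage engine already has this page
        render(cache[page]);
        return;
    }
    if (latestRequest && latestRequest.cancel) {
        latestRequest.cancel();      // cancel/ignore the previous in-flight request
    }
    latestRequest = sendJsonp(
        { action: 'getMeSomeArticles', start: page * itemsPerPage, length: itemsPerPage },
        function(articles) {
            cache[page] = articles;  // populate the client-side storage
            render(articles);
            latestRequest = null;
        }
    );
}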
Again, for speed considerations, and for shit browser wanna-bes such as Internet Explorer, I run the procedure described above about 3-4 steps ahead. So I pre-load 20 pages ahead, until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far back as Internet Explorer 5).
The reason I wrote all this is to give you an example and to say that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into something the user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired Ajax events, as not all browsers actually handle them well). For instance, instead of sending one request to get content and another for that content to be tracked as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the XHR object from your library of choice implements error handling for Ajax, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An Ajax call has finished and it's supposed to return JSON, but the call somehow failed. However, you try to parse the JSON anyway and do whatever you need to do with it.
So:
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an error
        // occurred, a lot of developers will attempt to send yet another ajax request
        // to log the failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should throttle and poll the ajax requests that log the
        // specific error.
    }
}
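A bare-bones version of that worker idea might look like this; the file name error-logger.js, the message format, the /log/errors endpoint and the 5-second batch interval are all invented for the example:

// --- main thread: hand error codes to a dedicated worker instead of firing
// --- yet another ajax request from inside the catch block
var errorWorker = new Worker('error-logger.js'); // hypothetical worker file

function reportError(code) {
    errorWorker.postMessage({ code: code, at: Date.now() });
}

// --- error-logger.js: batch the error reports and send them at a gentle rate
var queue = [];
self.onmessage = function(event) {
    queue.push(event.data);
};
setInterval(function() {
    if (queue.length === 0) return;
    var xhr = new XMLHttpRequest();          // one XHR for the whole batch
    xhr.open('POST', '/log/errors');         // placeholder endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(queue.splice(0, queue.length)));
}, 5000);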
While I have seen various implementations that try to fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
How important is speed for your app?
Just how scalable and how I/O-intensive will it be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous XHR requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how dafuq did they not think about this?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource on this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read these settings across browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps?
No, browsers do not in any way prevent polling. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long polling, some with a C++ backend and my own webserver, and one with a PHP backend on Apache2.
My long-poll timeout is 4..10 s. When something occurs, or the 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang when I start the next AJAX call from inside the previous AJAX handler, so I use setTimeout() with a small value to start the next AJAX request.
When something happens on the client side that should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will arrive over the long poll. This requires at most 2 connections to the server, which all browsers support.
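A minimal sketch of that client-side loop, assuming a placeholder /poll endpoint and handler name, with 100 ms standing in for the "small value":

// Long-poll loop: the server holds the request for 4..10 s and then answers
// (possibly with an empty body); the client re-arms from a timer, not from
// inside the AJAX handler itself, to avoid the hangs mentioned above.
function longPoll() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/poll');                    // placeholder endpoint
    xhr.onload = function() {
        if (xhr.responseText) {
            handleServerEvent(xhr.responseText); // your event handler
        }
        scheduleNext();
    };
    xhr.onerror = scheduleNext;
    xhr.send();
}

function scheduleNext() {
    setTimeout(longPoll, 100);                   // small delay before the next request
}

longPoll();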
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will move together, causing load peaks: when something happens, the server has to report it to every client at the same time, the clients take roughly the same time to process it, they start their next long request at about the same time, and from then on the timeouts also expire at the same time, and so do the following ones. You can trick it with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything happens, they will "sync" again, since all the requests have to be served at the same time whenever something reportable happens.
I've tested it through a router, and it works. I assume routers respect the 4..10 s lag; it's around the speed of a slow webpage (far, far away), which no router thinks should be canceled.
My PHP project is a collaborative spreadsheet; it looks amazing when you hit Enter and the content updates simultaneously in several browsers. Have fun!
There is no limit on the number of Ajax requests. However, they must go to the same host & port.
A server can limit the number of requests from a machine based on its settings.
For example, a server can be configured so that if there are more than a few requests from the same machine within a specified time, it will reject them.
After a small mistake in my JavaScript code, a never-ending loop was created, with each step making 2 Ajax requests. In Firebug I could see more and more requests, until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)
Background
I'm working on a web application utilizing AJAX to fetch content/data and what have you - nothing out of the ordinary.
On the server side, certain events can happen that the client-side JavaScript framework needs to be notified about, and vice versa. These events are not always related to the user's immediate actions. It is not an option to wait for the next page refresh to include them in the document, or to stick them in some hidden fields, because the user might never submit a form.
Right now it is designed in such a way that events to and from the server ride along with the user's requests. For instance, if the user clicks a 'view details' link, this fires a request to the server to fetch some HTML or JSON with details about the clicked item. Along with this request, or rather the response, a server-side (invoked) event will be returned with the content.
Question/issue 1:
I'm unsure how to control the queue of events going to the server. They can ride along with user-invoked requests, but if those do not occur, the events will get lost. I imagine setting up a timer to send these events to the server in case the user does not perform any action. What do you think?
Question/issue 2:
With regard to the responses, some being requested as HTML and some as JSON, it is a bit tricky, as I would have to somehow wrap all this data to allow both formalized (and unrelated) events and perhaps HTML content, depending on the request, to be returned to the client. Any suggestions? Anything I should be aware of, for instance when returning HTML content wrapped in a JSON bundle?
Update:
Do you know of any framework that uses an approach like this that I can look at for inspiration (that is, a framework that wraps events/requests in a package along with data)?
I am tackling a similar problem to yours at the moment. On your first question, I was thinking of implementing some sort of timer on the client side that makes an asynchronous call for the content when it expires.
On your second question, I normally just return JSON representing the data I need, and then present it by manipulating the document model. I prefer to keep things consistent.
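One way to combine the two, sketched here with purely illustrative field names (events, payload) and a placeholder eventBus, is a small envelope that always carries an events array plus whatever payload the request asked for:

// Example response envelope (the shape is illustrative, not a standard):
// {
//   "events":  [ { "type": "ORDER_SHIPPED", "orderId": 42 } ],
//   "payload": { "html": "<div>...</div>" }   // or plain data instead of html
// }
function handleResponse(json) {
    // dispatch any server-side events that rode along with this response
    (json.events || []).forEach(function(evt) {
        eventBus.publish(evt.type, evt);       // eventBus is a placeholder
    });
    // then use the payload however the caller requested it
    if (json.payload && json.payload.html) {
        document.getElementById('details').innerHTML = json.payload.html; // assumed target
    }
}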
As for best practices, I can't say for sure that what I am doing complies with any best practice, but it works for our present requirements.
You might also want to consider the performance impact of having multiple clients making asynchronous calls to your web server at regular intervals.
I am just wondering what the general best practice is for saving data in Ajax forms. In Spree ECommerce, for example, every time you change a value in a list of objects (say you change the quantity of a certain item in an order), it updates the database with an Ajax call.
Is it better to have the user manually press "Save" or "Update" when they're done editing a form, or (if you have set up an Ajax alternative) to just automatically save the data every time something changes?
It seems like Stack Overflow Careers saves a "Draft" of your profile every few seconds using some Ajax mechanism.
As such, it seems like there are three ways to save data in a form if you have Ajax going:
User presses button, saves all data at once, not good if data is important
Save every time interval
Save every change
What do you recommend?
Good question. I don't think there's a one-size-fits-all best practice that covers all situations. Generally, the more user-friendly your solution is, the greater the complexity of implementation, and the less likely you are to end up with a properly gracefully degrading solution (unless you have been very, very careful).
Also, there are implications to whichever approach you have opted to go with. For example, autosaving periodically might not be a good idea where substantial data validation is involved. A user might type some stuff in, and get an error message after a few seconds. Instant feedback would be much more beneficial to the user in such a situation, as it is possible that the input which led to the failed validation was, say, a few actions ago, so it might be somewhat confusing to the user.
Saving whenever the user changes something (a keypress, a checkbox selection, etc.) would seem to be the way to go from a usability perspective, but again, it depends on what you are doing and could have negative side-effects. For example, if the user is on a slow connection, he/she might feel that your site is slow or buggy. It would also yield a lot more database queries than the old-school 'click save' method.
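If you do go for save-on-change, a simple debounce keeps the query volume down; the 750 ms window, the form selector, and the /profile/draft URL below are arbitrary examples:

// Debounced autosave: wait until the user pauses before persisting.
var saveTimer = null;

$('#profile-form :input').on('change keyup', function() {
    clearTimeout(saveTimer);
    saveTimer = setTimeout(function() {
        $.post('/profile/draft', $('#profile-form').serialize()); // placeholder URL
    }, 750);
});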
I guess an obvious way to get around some of the above caveats would be to incorporate on-the-spot client side validation, but what works in the end might well be down to what your hallway testers say.
Final recommendation: create the old-style 'click save to save' forms and enhance from there, making sure things don't break without javascript (unless you have express permission from a higher authority). Hope that wasn't all nonsense.
It all depends on the situation. If the form is going to change due to user input, then you may be better served by saving/updating the form on every change. Otherwise, wait for an explicit user action.
I can only see trouble on the horizon if you adopt an autosave strategy for a form..
I know this post is old, but I like this simple solution: if the user changes some data on your form and tries to leave the page without saving it, I prompt a reminder message.
In a global .js:
// Global flag: set to true when the form has unsaved changes.
var validate = false;

window.onbeforeunload = function() {
    if (validate) return "You made some changes, are you sure you want to leave?";
};
In the form page (I did it in jQuery):
// Any edit marks the form as dirty; a normal submit clears the flag.
$('input,textarea,select').change(function() { validate = true; });
$('form').submit(function() { validate = false; });
BR!