Calling AJAX every 10 seconds using setInterval

I'm using setInterval to make an AJAX call every 10 seconds. My question is: is this bad for the server? Does using setInterval make the AJAX calls affect the server badly, and if so, what is the best way to do this without hurting the server side? Thanks.

It means an XHR request to the server every 10 seconds, which is not bad practice in itself, since it is a core requirement. However, you can reduce the cost by caching data on the server side, so that direct hits to the database only happen when a CRUD operation has actually changed the data.
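For instance, here is a minimal sketch of that caching idea, assuming a Node/Express backend with hypothetical fetchFromDatabase / saveToDatabase helpers (the route names are made up too):

var express = require('express');
var app = express();

var cache = null; // invalidated whenever a CRUD operation touches the data

app.get('/data', function (req, res) {
    if (cache) {
        return res.json(cache); // served from memory, no database hit
    }
    fetchFromDatabase(function (rows) { // hypothetical read helper
        cache = rows;
        res.json(cache);
    });
});

app.post('/data', express.json(), function (req, res) {
    saveToDatabase(req.body, function () { // hypothetical write helper
        cache = null; // next poll re-reads the database
        res.sendStatus(204);
    });
});

app.listen(3000);

This way the 10-second polls mostly hit memory, and the database is only consulted after a write.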

Related

RxJS - Concurrent paging

I'm facing a bit of a tricky problem and feel like my limited knowledge of RxJS is preventing me from reaching a solution.
Essentially what I'm trying to do is page an API endpoint in page sizes of 100, then for each page of data I receive, perform an AJAX request on each item. However, I'm running into some performance issues when retrieving the pages of data. I assumed forkJoin would be exactly what I needed, but it doesn't seem to be running the AJAX requests in parallel as the operator suggests, and this leads to rather long wait times before the data is ready to process.
So my question is, how can I retrieve pages of data without having to rely on the previous page being fetched?
Sounds like this might be the GitHub users project.
If, say, you fetch each user's avatar_url after fetching a list of 100 users, forkJoin is going to wait for all 100 requests to complete before it emits anything.
flatMap will feel faster in the UI because it emits each response as it arrives, but it does not change the overall time to completion, nor the browser's limit on concurrent connections.
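A minimal sketch of that flatMap/mergeMap approach with pipeable RxJS operators, assuming a hypothetical fetchPage(n) helper that returns an Observable of one page of users and a known totalPages:

const { range, from } = require('rxjs');
const { mergeMap } = require('rxjs/operators');
const { ajax } = require('rxjs/ajax');

range(0, totalPages).pipe(
    // fetch up to 4 pages concurrently instead of waiting on each in turn
    mergeMap(page => fetchPage(page), 4),
    // flatten each page (an array of users) into individual users
    mergeMap(users => from(users)),
    // per-item request; cap concurrency to respect browser connection limits
    mergeMap(user => ajax.getJSON(user.avatar_url), 6)
).subscribe(avatar => {
    // each response is emitted as soon as it arrives,
    // unlike forkJoin, which waits for all of them
});

The second argument to mergeMap is the concurrency cap, which is what keeps the requests parallel but bounded.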

Performance bottleneck with JSONP request (long evaluating time)

I have to use a JSONP request to get a large JSON payload (around 80 kB, an array of 100 objects with several big properties each). While the browser is evaluating the response I can't do anything with the page; the browser is frozen. I suppose the JSON is being evaluated on the UI thread.
What can I do to speed this up? Sometimes the "Evaluate script" event (I use the Timeline in DevTools) takes 3.5 s (sic!).
I can't use a regular AJAX request because of CORS, and I can't enable CORS on the server. I don't want to make 10 smaller requests either, because then I would have to re-render the layout 10 times or more...

What's the skinny on long polling with AJAX and Web API... is it going to kill my server? And string comparisons

I have a very simple long polling ajax call like this:
(function poll() {
    $.ajax({
        url: "myserver",
        dataType: "json",
        timeout: 30000,
        success: function (data) { /* do my stuff here */ },
        complete: poll
    });
})();
I just picked this example up this afternoon and it seems to work great. I'm using it to build out some HTML on my page and it's nearly instantaneous as best I can tell. I'm a little worried, though, that this is going to keep worker threads open on my server, and that under too big a load the server will stop entirely. Can someone shed some light on this theory? On the back end I have a Web API service (.NET MVC 4) that calls a database, builds the object, then passes the object back down. It also seems to me that for this to work, the server would have to be calling the database constantly... and that can't be good, right?
My next question is: what is the best way on the client to determine whether I need to update the HTML on my page? Currently I'm using JSON.stringify() to turn my object into a string, comparing the string that comes down to the old string, and re-writing the HTML on the page if there's a delta. Right now there's not a whole lot in the object coming down, but it could potentially get very large, and I think doing that string comparison could be pretty resource-intensive on the client... especially if it's happening nearly constantly.
Bottom line for me is this: I'm not exactly sure how long polling works. I just googled it, found the above sample code, implemented it and, on the surface, it's awesome. I just fear that it's going to bog things down on the server, and that my way of comparing old results to new is going to bog things down on the client.
any and all information you can provide is greatly appreciated.
TIA.
OK, my two cents:
As others said, SignalR is tried and tested code, so I would really consider using that instead of writing my own.
SignalR changes some IIS settings to optimise IIS for this sort of work. So if you are looking to implement your own, have a look at the IIS setting changes done in SignalR.
I suppose you are doing long polling so that your server can implement some form of server push. Just bear in mind that this turns your stateless HTTP machine into a stateful one, which is not good if you want to scale. Long polling behind a load balancer is not nice :) For me this is the worst thing about server push.
ASP.NET uses the ThreadPool for serving requests, and a long poll hogs a ThreadPool thread. If you have too many of these threads you might end up in thread starvation (and tears). As a ballpark figure, 100 is not too many but 1000+ is.
Even the SignalR team say that an IIS box optimised for SignalR is probably not optimised for normal ASP.NET, and they recommend separating those boxes. So this means cost and overhead.
At the end of the day, I recommend using long polling only if you are solving a business problem (and not just because it is cool), because then it will pay for its costs, overheads and headaches.
I agree with SLaks - i.e. use SignalR if you need realtime web with Web API: http://www.asp.net/signalr. Long polling is difficult to implement well; let someone else handle that complexity, i.e. use SignalR (the natural choice for Web API) or Comet.
SignalR attempts three other forms of communication before resorting to long polling: WebSockets, server-sent events and forever frame (here).
In some circumstances you may be better off with simple polling, i.e. a hit every second or so to update... take a look at this article. But here is a quote:
"When you have a high message volume, long-polling does not provide any substantial performance improvements over traditional polling. In fact, it could be worse, because the long-polling might spin out of control into an unthrottled, continuous loop of immediate polls."
The fear is that with any significant load on your web page your 30 second ajax query could end up being your own denial of service attack.
Even Bayeux (CometD) will resort to simple polling if the load gets too high:
"Increased server load and resource starvation are addressed by using the reconnect and interval advice fields to throttle clients, which in the worst case degenerate to traditional polling behaviour."
As for the second part of your question:
If you are using long polling, then your server should ideally only return an update when something has actually changed, so your UI should probably "trust" the response and assume that a response means new data. The same goes for any of the server push approaches.
If you did move back down towards a simple polling (pull) method, then you can use the built-in HTTP mechanism for detecting an update: the If-Modified-Since header. The server checks the timestamp of the object and returns a 304 Not Modified if nothing changed, or a 200 with the object if it has been modified since the last request.
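Here is a minimal sketch of that conditional-polling idea with jQuery, whose ifModified option sends the If-Modified-Since header for you (the URL and interval are placeholders):

(function poll() {
    $.ajax({
        url: "myserver",
        dataType: "json",
        ifModified: true, // jQuery tracks Last-Modified and sends If-Modified-Since
        success: function (data, textStatus) {
            if (textStatus !== "notmodified") {
                // a real 200 with a body: something changed, so re-render the HTML
            }
        },
        complete: function () {
            setTimeout(poll, 1000); // plain polling: roughly one hit per second
        }
    });
})();

This also sidesteps the JSON.stringify comparison from the question: on a 304 the server sends no body at all, so the client never has to diff anything.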

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScripts (sometimes requests are just not sent or retried, even if they were actually successful). But I couldn't find further information on the details...
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes. As far as I am aware, there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you control it is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions. A queue structure that takes XHR requests as members and enforces a delay between them is an XHR poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling precisely because of browser limitations; otherwise I get zero response back from the server, and that is only because of browser limitations.
I actually do a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
I'll be even more specific to help you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page and I can't exactly load thousands of articles on load without severe performance implications, I load only the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, keydown events etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function () {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load articles from index:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check whether the content is already there.
If it is, there is no point in making another request, so I go ahead and get the DOM elements from the array, with the already created DOM elements in place.
If it isn't, then I need to send the request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({
    'action': 'getMeSomeArticles',
    'start': start,
    'length': itemsPerPage
}, function (callback) {
    // now I just parse the callback quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage
    // and update the view for the user
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal XHR requests), you quickly cross the limits of every browser, so without throttling there would be no 'callback' for most of the requests. 'callback' here is the JS code returned by the JSONP request (which is more of a remote script inclusion than anything else).
So what I do is push a request onto a priority queue, not poll, as now I don't need to send multiple simultaneous requests. If the queue is empty, the recently added member is executed and everyone is happy. If it's not, then all uncompleted requests in progress are cancelled and only the last one is executed.
Now, in my particular case, I do a binary search (O(log n)) to see whether the storage engine has data for the previous request yet, which tells me whether the previous request has completed. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations and for browser wanna-bes such as Internet Explorer, I run the procedure described above about 3-4 steps ahead. So I pre-load 20 pages ahead, until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far back as Internet Explorer 5).
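A rough sketch of that "only the latest request wins" idea, shown here with abortable XHR via jQuery since a plain JSONP script tag cannot be cancelled (the endpoint and all names are hypothetical):

var articlesPerPage = 5;
var pending = null; // the one in-flight request, if any

function requestPage(page) {
    if (pending) {
        pending.abort(); // cancel the older request; only the newest survives
        pending = null;
    }
    pending = $.ajax({
        url: "/articles", // hypothetical endpoint
        data: { start: page * articlesPerPage, length: articlesPerPage },
        dataType: "json",
        success: function (articles) {
            pending = null;
            // build the DOM elements and put them in the client-side storage
        }
    });
}

However fast the slider fires, at most one request is ever active, so the browser's connection limit is never crossed.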
The reason I wrote all this is to give you an example and to point out that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into something a user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired AJAX events, as not all browsers handle them well). For instance, instead of sending one request to get content and another to track that content as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the XHR object from your library of choice implements error handling for AJAX, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An AJAX call has finished and it's supposed to return JSON, but the call somehow failed. You nevertheless try to parse the JSON and do whatever you need to do with it.
so
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an
        // error occurred, a lot of developers will attempt to send yet another
        // AJAX request to log the failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should throttle and poll the AJAX requests that
        // log the specific error.
    }
}
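A minimal sketch of what that worker might look like, with a hypothetical error-logger.js that batches error codes and reports them in one throttled request (XMLHttpRequest is available inside workers):

// main thread
var logWorker = new Worker('error-logger.js');
logWorker.postMessage('PARSE_FAILED'); // what myProject.worker.message might do

// error-logger.js
var queue = [];
onmessage = function (e) {
    queue.push(e.data); // just collect codes; no network traffic yet
};
setInterval(function () {
    if (queue.length === 0) return;
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/log'); // hypothetical logging endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(queue.splice(0))); // one batched request per interval
}, 5000);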
I have seen various implementations that fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown'. Still, what I advise you to think about is the following:
How important is speed for your app?
Just how scalable will it be, and how intensive will the I/O be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous XHR requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how did they not think about this?!?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read such settings across so many browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps!
No, the browser does not affect polling in any way. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long polling: some with a C++ backend and my own webserver, and one with a PHP backend on Apache2.
My long-poll timeout is 4..10 s. When something occurs, or the 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang when I start the next AJAX call from inside the previous AJAX handler, so I use setTimeout() with a small value to start the next AJAX request.
When something happens on the client side which should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. This requires at most 2 connections to the server, which all browsers support.
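A minimal sketch of that loop with jQuery (the URL and timings are placeholders):

function longPoll() {
    $.ajax({
        url: "/events", // hypothetical endpoint held open by the server for 4..10 s
        dataType: "json",
        timeout: 15000, // a bit longer than the server-side hold time
        success: function (data) {
            if (data) {
                // process whatever the server reported
            }
        },
        complete: function () {
            // restart from a fresh stack instead of from inside the AJAX
            // handler, which hangs some browsers
            setTimeout(longPoll, 10);
        }
    });
}
longPoll();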
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, and they will move together, causing load peaks: when something happens, the server has to report it to every client at the same time, the clients all take about the same time to process it, they start their next long request at the same time, and from then on the timeouts expire at the same time too, and so do the forthcoming ones. You can play tricks with a random timeout, say 4 + rnd(0..4) s, but it's worthless: as soon as anything happens they "sync" up again, since all the requests have to be served at the same time whenever something reportable occurs.
I've tested it through a router, and it works. I assume routers respect the 4..10 s lag; it's around the speed of a slow webpage (far, far away), so no router thinks the connection should be cancelled.
My PHP work is a collaborative spreadsheet; it looks amazing when you hit Enter and the content updates simultaneously in several browsers. Have fun!
There is no browser limit on the number of AJAX requests; however, they will go to the same host and port.
A server can limit the number of requests from a machine based on its settings.
For example, a server can be configured so that if there are more than a few requests from the same machine within a specified time, it rejects them (see the sketch below).
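A minimal sketch of such a server-side limit, assuming a Node/Express server (the thresholds are made up):

var express = require('express');
var app = express();

var hits = {}; // ip -> timestamps of recent requests

app.use(function (req, res, next) {
    var now = Date.now();
    var recent = (hits[req.ip] || []).filter(function (t) {
        return now - t < 10000; // keep only hits from the last 10 s
    });
    recent.push(now);
    hits[req.ip] = recent;
    if (recent.length > 20) { // more than 20 hits in 10 s: reject
        return res.status(429).send('Too Many Requests');
    }
    next();
});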
After a small mistake in JavaScript code, a never-ending loop was created in which each step fired 2 AJAX requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding and finally crashed.
So yes, there is a "limit" ;)

Difference between setInterval & Polling?

I want to know the difference between setInterval() (or setTimeout()) in the DOM and polling in AJAX. What is the main difference? If both are the same, why are they identified by two different names?
What is mean by polling in AJAX?
Any links or resources about this question would be much appreciated!
setInterval sets a repeating timer, setTimeout sets a timer that fires only once. Polling is when you repeatedly ask for something instead of waiting to be notified. Sometimes polling is necessary, for example if there's no way to be notified -- and this is often the case in Ajax applications. Both setInterval and setTimeout can be used to implement polling, depending on what you want to do.
In the case of periodically making a request to a server it's advisable to use setTimeout instead of setInterval. In the callback you do the request, wait for the response then set a new timer using setTimeout. If you use setInterval and the request latency is comparable to the interval then you risk that the responses will come out of order. For example, the timer fires and you make a request, it takes a little longer than usual so before it has returned the timer fires again, so you make a new request. Now you are waiting for two requests. It would have been better to wait for the first request to come back before doing the second.
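A minimal sketch of the two approaches with jQuery (the endpoint and handler are placeholders):

// Risky: setInterval fires every 5 s regardless of how slow the server is,
// so slow responses can pile up and arrive out of order.
setInterval(function () {
    $.getJSON("/status", handleStatus);
}, 5000);

// Safer: schedule the next request only after the current one completes.
(function poll() {
    $.getJSON("/status", handleStatus).always(function () {
        setTimeout(poll, 5000);
    });
})();

With the second version there is never more than one request in flight, so responses cannot overtake each other.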
Polling is when you periodically ping the server to see whether something is ready. A user might have made a request that takes some unspecified amount of time - too long to wait for - so you poll the server every x seconds to see if the result is ready.
setTimeout executes a function once, after the specified interval.
setInterval repeatedly executes a function, once every specified interval.
check out http://www.w3schools.com/js/js_timing.asp
You can use these two functions to implement a polling scheme, but they are definitely not the same as polling.