I'm using Backbone on a project of mine, integrated with an external API. I want real-time updating of records. Since I don't have access to the backend of this external application, and it provides neither a WebSocket server nor a long-polling endpoint, I'm basically left with regular polling via setInterval and a period of 50 seconds. It has been working quite well. My problem is the edge case: if for some reason an API request hangs for more than 50 seconds, I'll trigger a new request right away. That means two hanging requests, and they will add up eventually.

Is there a way to set a timeout for the request? I know all requests go through Backbone.sync, but I was checking the source code and I don't see any feasible way of setting a timeout on the XmlHttpRequest. Is there a way to do this cleanly, without overriding behaviour? Or are there other solutions/workarounds?
Just pass a timeout: milliseconds entry in the options argument to fetch. The options are passed directly to jQuery.ajax, which handles the low-level XHR call:
collection.fetch({timeout:50000});
Alternatively, you can set a global timeout for all requests made by your application by calling jQuery.ajaxSetup at application startup:
$.ajaxSetup({timeout:50000});
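If you also want to avoid the stacking problem from the question, one option is to drop setInterval and only schedule the next poll once the previous request has settled. A minimal sketch, assuming collection is your Backbone collection and reusing the 50-second figures from the question:

// Poll by scheduling the next fetch only after the current one settles,
// so a slow request can never stack on top of the previous one.
function poll(collection) {
    collection.fetch({
        timeout: 50000,              // abort the XHR if it hangs longer than 50s
        complete: function () {      // fires on success *and* on error/timeout
            setTimeout(function () { poll(collection); }, 50000);
        }
    });
}

poll(myCollection);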
I would like to research a technique for building Web API methods, but I am having trouble looking up information on it, partly because I do not know what the technique is called. The idea is that instead of the client polling the Web API in a tight loop and taking action when the response data changes, the server holds the call open until it sees the data change and then completes the request. This is more efficient because less time is spent making web connections, and each call from the client is used to its full extent: if new data is available before the Web API call's timeout is reached, the call can immediately return that new data.
What is this technique called?
Long polling. Not tried it myself yet.
http://en.wikipedia.org/wiki/Push_technology#Long_polling
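The client side of the idea is just a loop that reconnects as soon as each call returns. A minimal jQuery sketch, where the /api/updates endpoint, the handleNewData function, and the timeout figure are assumptions for illustration:

// Long polling: the server holds each request open until new data exists
// (or its own hold timeout expires); the client immediately re-issues the call.
function longPoll() {
    $.ajax({
        url: '/api/updates',     // hypothetical endpoint that blocks until data changes
        timeout: 35000,          // a little longer than the assumed server-side hold
        success: function (data) {
            handleNewData(data); // hypothetical handler for the pushed data
        },
        complete: function () {
            longPoll();          // reconnect whether the call returned data or timed out
        }
    });
}

longPoll();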
I currently have a problem where I send an asynchronous ajax request to a .NET controller in order to start a database search. This request makes it to the server, which kicks off the search and immediately (in less than a second) replies to the callback with a search ID; at that point I begin sending ajax requests every 10 seconds to check whether the search has finished. This method works fine and has been tested successfully with multiple users sending simultaneous requests.
However, if I send a second search request from the same user before the first search is finished, that call will not reach the controller endpoint until after the first search has completed, which can take up to a minute. I can see the request leave Chrome (or FF/IE) in the dev tools, and using Fiddler as a proxy I can see the request hit the machine the application is running on, yet it will not hit the breakpoint on the first line of the endpoint until after the first call returns.
At the point where this call is blocked, there are typically up to 3 pending requests from the browser. Does IIS or the .NET architecture have some mechanism that is queuing my requests? If not, what else sits between the request leaving the proxy and entering the controller? I'm at a bit of a loss for how to debug this.
I was able to find the issue. It turns out that despite my endpoint being defined asynchronously, ASP.NET controllers by default serialize requests per session. So while my endpoints could execute simultaneously across sessions, within the same session only one call at a time was allowed. I was able to fix the issue by setting the controller's SessionState attribute to read-only, allowing my calls to come through without blocking.
[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
There are some servlets which I call from ajax to check user availability and do some other work. An ajax request can be sent only if the user is logged in.
The problem is that a user can hit the server with a massive number of ajax calls per second using JavaScript injection, which can bring the server down.
There is one possibility I see for controlling it:
If the number of hits from the same IP in a second (or some other period) crosses a maximum limit, then I can invalidate the session.
But some of my colleagues are not in favor of limiting the user. Is there any other way to protect my server from an ajax bomb?
You have a few options:
you can throttle ajax requests, slowing them down a little when they hit a limit, as some sites do (see the sketch after this list).
rate limit (completely block) the ajax request after X requests, as Twitter does with its API
choose another option, like WebSockets.
These can be enforced either server-side or client-side; server-side is the obvious choice for security.
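As a rough illustration of the first option, a client-side throttle can be as simple as dropping calls that arrive too quickly. Keep in mind this only reduces accidental hammering; injected script can bypass anything in the page, so the real limit belongs on the server (per session or per IP). The /checkUser URL and the one-second window below are assumptions:

// Wrap an ajax-issuing function so it runs at most once per waitMs.
function throttle(fn, waitMs) {
    var last = 0;
    return function () {
        var now = Date.now();
        if (now - last >= waitMs) {
            last = now;
            fn.apply(this, arguments);
        }
    };
}

var checkAvailability = throttle(function (name) {
    $.post('/checkUser', { username: name });  // hypothetical servlet URL
}, 1000);  // at most one request per second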
The problem is that a user can hit the server with a massive number of ajax calls per second using JavaScript injection, which can bring the server down.
JavaScript injection is the least of your problems if you're concerned with keeping your server alive. Raw HTTP DDoS attacks are a much bigger problem than a few ajax requests. The main thing JavaScript injection should bring to mind is security, not server uptime.
I had the same problem with a web app I developed. The solution I came up with was to check the referer on the server side and only allow calls coming from the server itself (127.0.0.1).
I want to suggest results using auto-complete, so I need to send AJAX requests on each keystroke. For this I want to keep the HTTP connection open for a few seconds and, if something is typed within that period, send the AJAX request over that same connection. If nothing is typed in that period, I want to close the HTTP connection.
Background:
I already use the typewatch plugin, but with it a new HTTP connection is made each time I send a request, and I still want to improve the speed. I read in this article http://www.philwhln.com/quoras-technology-examined#the-search-box that:
Quora uses persistent connections. A HTTP connection is established with the server when you start typing the search query.
How can I do this with cross-browser support? Is it just keep-alive?
You can't. Each request/response round trip is independent: once a request is sent, the browser waits for that particular request's response to come back and then handles it.
What I think you want to do is prevent your script from hammering the server. There are a variety of ways to do this, but the most common is a keystroke timer: the timer waits a specified number of milliseconds after the user stops typing before sending the request, containing the textbox's value, to the server.
If you're using jQuery you can use the TypeWatch plugin to do this; jQuery will also satisfy your cross-browser requirement.
However, since you also want auto-complete, you may as well use the jQuery Autocomplete plugin, which has a keystroke timer built in; by default it's set to 400 milliseconds. See the Options tab on the plugin's page for all the configuration options you can pass in.
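For reference, the keystroke-timer idea itself is only a few lines. A sketch assuming a hypothetical /suggest endpoint, a #search input, and the 400 ms delay mentioned above:

// Debounce keystrokes: restart the timer on every keyup and only send
// the request once the user has stopped typing for 400 ms.
var keyTimer;
$('#search').on('keyup', function () {
    var query = this.value;
    clearTimeout(keyTimer);
    keyTimer = setTimeout(function () {
        $.getJSON('/suggest', { q: query }, function (results) {
            renderSuggestions(results);   // hypothetical rendering function
        });
    }, 400);
});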
I have made a webpage that uses Ajax to update some values without reloading the page. I am using an XMLHttpRequest object to send a POST request, and I assign a callback function that gets called when the response arrives; it works just fine.
But... how in the world does the browser know that data coming from some ip:port should be handed to this particular callback function? I mean, in a worst-case scenario, if I have Firefox and IE making POST requests at roughly the same time to the same server, and even making subsequent POST requests before the responses to the previous ones arrive, how does the incoming data get routed to the right callback functions?
Each HTTP request is made on a separate TCP connection. The browser simply waits for data to come back on that connection and then invokes your callback function.
At a lower level, the TCP implementation on your OS will keep track of which packets belong to each socket (i.e. connection) by using a different "source port" for each one. There will be some lookup table mapping source ports to open sockets.
It is worth noting that the number of simultaneous connections a browser will make to any one server is limited (typically to 2). This was sensible back in the old days when pages reloaded to send and receive data, but in these enlightened days of AJAX it is a real nuisance. See the page for an interesting discussion of the problem.
Each request has its own connection. That means each connection carries a single response, and that response is delivered to the callback you registered for it.
The general idea is that your browser opens a new connection entirely, makes a request to the server, and waits for a response. This all happens over one connection, managed by the browser via a JavaScript API. The connection is not severed and then picked up again when the server pushes something down, so the browser, having originated the request, knows what to do when the request finishes.
What truly makes things asynchronous is that these connections happen separately in the background, which allows multiple requests to go out and return while waiting for responses. This gives you the nice AJAX effect where the server appears to return something at a later time.
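To make the routing concrete, here is a small sketch: each XMLHttpRequest object carries its own callback, so two overlapping requests can never get their responses mixed up. The URLs and payloads are placeholders:

// Each request lives on its own XHR object; the browser invokes the
// handler attached to that object when *its* response arrives.
function post(url, body, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(xhr.responseText);   // this closure is tied to this xhr only
        }
    };
    xhr.send(body);
}

// Two in-flight requests; responses are routed by object, not by arrival order.
post('/updateA', 'x=1', function (res) { console.log('A finished:', res); });
post('/updateB', 'y=2', function (res) { console.log('B finished:', res); });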