I am making a request to a CGI program using AJAX. The response sends me a Content-Length header. My goal is to display the response progress dynamically. For that I need to run a function when the XHR object's readyState becomes 3. But the request never seems to reach that state; instead it goes directly from state 1 to state 4.
What am I missing?
The response could be arriving so quickly that you just don't notice state 3. Especially if you are running on localhost, the response can be transmitted very quickly. You could try setting an alert when it reaches state 3 to test whether it actually gets there. Also, I believe Internet Explorer treats accessing the response during state 3 as an error, so there could be compatibility issues.
If you're running on localhost, then the browser probably never gets a chance to run between the time it sends the request and the time it gets the response:
browser opens connection, sets readyState to 1
browser sends packet to server process
server process receives packet, gets priority from scheduler
server returns data to browser, and yields control of the CPU. Browser continues execution.
browser sees all data has been received, sets readyState to 4.
Long story short: don't count on going into the "receiving" state.
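If you want to see for yourself which states actually fire, a minimal sketch that logs every transition looks like this (the URL in the usage comment is a placeholder); on a fast or local server the log may well contain only 1 and 4:

```javascript
// Log every readyState transition so you can see which states actually fire.
// On a fast (e.g. localhost) response the log may contain only 1 and 4.
function watchStates(xhr, log) {
  xhr.onreadystatechange = function () {
    log.push(xhr.readyState);
  };
}

// Browser usage (URL is a placeholder):
// const xhr = new XMLHttpRequest();
// const seen = [];
// watchStates(xhr, seen);
// xhr.open("GET", "/cgi-bin/search.cgi");
// xhr.send();
// Later: console.log(seen);
```

In modern browsers the `progress` event on the XHR object is a more reliable way to track download progress than polling for readyState 3.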
I am trying to implement a video chat on my website. The user handling is done by my backend, which creates a "signedUserTicket"; that ticket is then used to start the sinchClient. However, when I try to start a call, just after the message
Successfully initiated call, waiting for MXP signalling.
I get an error saying
Call PROGRESSING timeout. Will hangup call.
Also I get a Chrome warning saying
Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience.
(But I don't think this is the reason for the timeout.)
So, after checking the network tab to find out whether the requests were being sent correctly, I realized that the problem is always with the requests going to https://rebtelsdk.pubnub.com. For example, the request to https://rebtelsdk.pubnub.com/subscribe/sub-c-56224c36-d446-11e3-a531-02ee2ddab7fe/f590205a-c82d-4ec1-bd72-f2997097cbedS/0/14932956939626496?uuid=3b47f938-609d-4940-bb0e-bd7030cb3697&pnsdk=PubNub-JS-Web%2F3.7.2 takes about 20 seconds, and the call seems to be cancelled after about 10 seconds, giving me the timeout error.
Any ideas on how to fix this?
I have a web application that communicates with the server over AJAX. I am using HTTPS with SSL, and keep-alive is set to 10 seconds.
Sometimes the user sends a request to the server exactly when the keep-alive time expires. When this happens, a new SSL session is created. This is all fine.
After this happens, the browser (Internet Explorer 11) resends the AJAX request.
But now strange things happen. The browser only sends the headers of the request, with no body. The server first waits for a body that never arrives. Finally the server aborts the request, and the client gets an exception with HTTP status 0 and the message: Network error.
Some say this is normal behavior when using SSL and keep-alive, and that it must be handled in the web application. Others say this is incorrect behavior in Internet Explorer 11.
All I can see is that the server can't reuse the body of a request sent on a previous SSL session. The browser needs to resend the entire request, but this is not happening.
If I catch the exception in the application and resend the request to the server, everything works again. But it feels very strange to catch every HTTP status 0 and resend; it could also be dangerous from an application point of view.
The application only works in IE, so I can't compare with Chrome or Firefox.
My question is: is this normal behavior, or do I perhaps have some incorrect configuration in the browser or on the web server?
When the server closes a connection at exactly the same time as a new request on a keep-alive connection is sent, it is not clear whether the server received the request and acted on it. Therefore, only requests that are considered idempotent (i.e. that do not change anything on the server) should be automatically retried by the client. GET and HEAD are considered idempotent, while POST is not.
Thus, if the client automatically retries a GET or HEAD, this is correct behavior. Retrying a POST would be bad. Resubmitting a request without a body when the original had one (as you claim but don't prove) would be a bug.
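A client-side workaround along those lines, retrying only idempotent methods after a network error, might look like this sketch (`doRequest` is a placeholder for whatever issues the actual AJAX call and rejects on status 0):

```javascript
// Retry only idempotent methods (GET/HEAD) after a network error.
// Never auto-retry POST: the server may already have acted on it.
async function requestWithRetry(method, doRequest, maxRetries = 1) {
  const idempotent = ["GET", "HEAD"].includes(method.toUpperCase());
  let attempts = 0;
  for (;;) {
    try {
      return await doRequest();
    } catch (err) {
      attempts++;
      if (!idempotent || attempts > maxRetries) throw err;
      // idempotent and retries remaining: loop and try again
    }
  }
}
```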
I currently have a problem where I send an asynchronous ajax request to a .NET controller in order to start a database search. This request makes it to the server which kicks off the search and immediately (less than a second) replies to the callback with a search ID, at which point I begin sending ajax requests every 10 seconds to check if the search has finished. This method works fine, and has been tested successfully with multiple users sending simultaneous requests.
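The start-then-poll flow described above can be sketched as follows (the check function, interval, and response shape are assumptions, not the actual application code):

```javascript
// Poll loop: ask the server whether the search is done, sleep `intervalMs`,
// and repeat until it finishes or `maxAttempts` is exhausted.
// `checkFn` stands in for the AJAX status call, e.g. GET /search/status?id=...
async function pollUntilDone(checkFn, intervalMs, maxAttempts) {
  for (let i = 0; i < maxAttempts; i++) {
    const result = await checkFn();
    if (result.done) return result; // search finished; result is available
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("search timed out");
}
```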
If I send a second search request from the same user before the first search is finished, this call will not make it to the controller endpoint until after the first search has completed, which can take up to a minute. I can see the request leave chrome (or FF/IE) in the dev tools, and using Fiddler as a proxy I can see the request hit the machine that the application is running on, however it will not hit the breakpoint on the first line of the endpoint until after the first call returns.
At the point this call is blocking, there are typically up to 3 pending requests from the browser. Does IIS or the .NET architecture have some mechanism that is queuing my request? Or if not, what else would be between the request leaving the proxy and entering the controller? I'm at a bit of a loss for how to debug this.
I was able to find the issue. It turns out that despite my endpoint being defined asynchronously, ASP.NET controllers by default serialize requests within a session. So while my endpoints could execute simultaneously across sessions, within the same session only one call was allowed at a time. I was able to fix the issue by setting the controller's SessionState attribute to ReadOnly, allowing my calls to come through without blocking:
[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]
I have written search on my site and now I am trying to make it search as I type. So I am sending many AJAX requests, each containing different text to search for, one after another, and every new request has to wait until the previous one is finished. I don't actually need the old requests to be answered; I only need the response to the last request.
How can I kill the queue of not actual requests in Django?
Does anybody know the answer?
On the server side, it's probably too late to cancel requests, but you can ignore the responses on the client side. I would suggest aborting a pending AJAX request before sending a new one.
Here is how:
Abort Ajax requests using jQuery
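The linked answer boils down to keeping a handle to the in-flight request and calling abort() on it before starting a new one. A sketch of the same pattern with the modern fetch/AbortController API (the URL is a placeholder, and `fetchFn` is injectable only so the sketch can be exercised without a network):

```javascript
// Keep a handle to the in-flight request and abort it before starting
// a new one, so only the latest search can ever deliver a response.
let currentController = null;

function searchFor(term, fetchFn = fetch) {
  if (currentController) currentController.abort(); // cancel the stale request
  currentController = new AbortController();
  return fetchFn("/search?q=" + encodeURIComponent(term), {
    signal: currentController.signal,
  });
}
```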
An easier way to do this is to wait a bit before sending the request to the server. After each keystroke, clear the previous timer (clearTimeout) and set a new one (setTimeout); only send the request when the timeout actually fires.
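A minimal debounce along those lines (the 300 ms delay and the `sendSearch`/`input` names in the usage comment are placeholders):

```javascript
// Delay the call until the user has paused typing for `delayMs`;
// every new keystroke cancels the previously scheduled request.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // drop the pending call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Browser usage (names are placeholders):
// const debouncedSearch = debounce(sendSearch, 300);
// input.addEventListener("input", (e) => debouncedSearch(e.target.value));
```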
If a request was already performed and has not returned you can still kill it as suggested in another answer.
I'm not aware of any way to stop other requests in Django, and I hope it isn't even possible: it would be a security threat if requests could be killed by others.
I have made a webpage that uses Ajax to update some values without reloading the page. I am using an XMLHttpRequest object to send a POST request, and I assign a callback function that gets called when the response arrives, and it works just fine.
But... how in the world does the browser know that some data coming from some ip:port should be sent to this particular callback function? I mean, in a worst-case scenario, if I have Firefox and IE making some POST requests at roughly the same time to the same server, and even making subsequent POST requests before the responses to the previous ones arrive, how does the incoming data get routed to the right callback functions?
Each HTTP request is made on a separate TCP connection (or its own request/response slot on a reused keep-alive connection). The browser simply waits for data to come back on that connection, then invokes your callback function.
At a lower level, the TCP implementation on your OS will keep track of which packets belong to each socket (i.e. connection) by using a different "source port" for each one. There will be some lookup table mapping source ports to open sockets.
It is worth noting that the number of simultaneous connections a browser makes to any one server is limited (typically to 2). This was sensible back in the old days, when pages reloaded to send and receive data, but in these enlightened days of AJAX it is a real nuisance. See the page for an interesting discussion of the problem.
Each request has its own connection. This means that if you have a single connection, you will get a single response, and that response will be delivered to your callback.
The general idea is that your browser opens a new connection entirely, makes a request to the server, and waits for a response. This all happens on one connection, which is managed by the browser via a JavaScript API. The connection is not severed and then picked up again later when the server pushes something down; the browser, having originated the request, knows what to do when the request finishes.
What truly makes things asynchronous is that these connections can run separately in the background, which allows multiple requests to go out and return while waiting for responses. This gives you the AJAX effect of the server appearing to return something at a later time.
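The key point is that each request object carries its own callback, so two in-flight requests can never receive each other's responses. A sketch (the XHR constructor is injectable here only so the example can be exercised outside a browser):

```javascript
// Each call creates its own XHR object with its own callback; responses
// are delivered per-object, never to some shared global handler.
function get(url, onDone, XHR = XMLHttpRequest) {
  const xhr = new XHR();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) onDone(xhr.responseText);
  };
  xhr.open("GET", url);
  xhr.send();
  return xhr;
}
```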