How browsers communicate with the server - Ajax

I've noticed interesting behavior in the Chrome browser when multiple Ajax requests are sent from a page. This is what I see in the network debug panel:
The grey part of the last validate requests is "stalled". According to the Chrome documentation:
Queueing. The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled. The request could be stalled for any of the reasons described in Queueing.
So that makes sense to me: several Ajax requests run first (about six; some finish before the next ones start), and the subsequent ones get queued.
But the same page sometimes looks different:
It's the same instance of the Chrome browser, just a different tab, and here I see no queueing at all. Because I can have over 200 such Ajax requests in flight at the same time, this causes problems with handling the requests on the server side. It's clear that my task is to reduce the number of Ajax requests in general, and I'm working on that, but the unpredictability of the browser's behavior doesn't make the task easier.
What can be the reason for such different behavior?
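For what it's worth, the scheduling is easy to observe from the page itself by firing a burst of same-origin requests and logging when each one completes; completions arriving in waves of about six suggest the connection limit is being applied. A minimal sketch (the /api/validate endpoint is a hypothetical stand-in for the real validate calls):

    // Fire a burst of same-origin requests and log completion order.
    // If the six-connection limit applies, completions tend to arrive
    // in waves of ~6 rather than all at once.
    const t0 = performance.now();
    for (let i = 0; i < 20; i++) {
      fetch(`/api/validate?i=${i}`) // hypothetical endpoint
        .then(() =>
          console.log(`request ${i} finished after ${(performance.now() - t0).toFixed(0)} ms`));
    }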

Related

How to fix HTTP/2.0 504 Gateway Timeout for multiple simultaneous XHR connections when using HTTP/2

I activated HTTP/2 support on my server. Now I have a problem with AJAX/jQuery scripts, such as uploads or API handling.
After PHP's max_input_time of 60 seconds I get: [HTTP/2.0 504 Gateway Timeout 60034ms]
With HTTP/1, only a few connections were started simultaneously, and when one finished, another started.
With HTTP/2, they all start at once.
When, for example, 100 images are uploaded, it takes too long for all of them to finish.
I don't want to change max_input_time. I hope to limit the number of simultaneous connections in the scripts.
Thank you.
HTTP/2 intentionally allows multiple requests in parallel. This differs from HTTP/1.1 which only allowed one request at a time (but which browsers compensated for by opening 6 parallel connections). The downside to drastically increasing that limit is you can have more requests on the go at once, contending for bandwidth.
You basically have two choices to resolve this (a client-side sketch of the first follows below):
Change your application to throttle uploads rather than expecting the browser or the protocol to do this for you.
Limit the maximum number of concurrent streams allowed by your web server. In Apache, for example, this is controlled by the H2MaxSessionStreams directive, while in Nginx it is similarly controlled by the http2_max_concurrent_streams setting. Streams over the limit will have to wait.
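For the first option, here is a minimal client-side sketch of throttled uploads, assuming plain fetch and FormData; the uploadUrl parameter and the default limit of 6 are illustrative, not taken from the question:

    // Upload files at most `limit` at a time instead of all at once,
    // so 100 uploads don't sit open until max_input_time expires.
    async function uploadThrottled(files: File[], uploadUrl: string, limit = 6): Promise<void> {
      const queue = [...files];
      const worker = async (): Promise<void> => {
        while (queue.length > 0) {
          const file = queue.shift()!; // take the next pending file
          const body = new FormData();
          body.append("file", file);
          await fetch(uploadUrl, { method: "POST", body });
        }
      };
      // Start `limit` workers; each picks up the next file as it finishes.
      await Promise.all(Array.from({ length: limit }, worker));
    }

For the second option, the directives take a plain stream count, e.g. H2MaxSessionStreams 32 in Apache or http2_max_concurrent_streams 32; in nginx (the value 32 is only an example).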

IE11 is very slow on HTTPS sites

We have an internal web application, and recently it has become very slow in IE11. Chrome and Firefox do not have this problem.
We have an HTTP version of the same website, and it is fast, with up to 6 concurrent HTTP sessions to the server and session persistence.
However, when we change to HTTPS, all of this changes.
We still have session persistence, but only 2 simultaneous sessions to the servers, and each request seems to take ages. It is not the server response that is slow, but the "start" time before IE11 sends the request (and the next one). The connection diagram changes to a staircase, issuing one request at a time, one after the other, each request taking 100-200 ms even if the request itself returns only a couple of bytes.
The workstation has Symantec Endpoint Protection (12.1) installed, and also Digital Guardian software (4.7).

What do the colored bars in the Firefox net panel represent?

In the Firefox developer tools, under the "Net" panel, resources that are loaded have their load time split into different colors/categories. These are:
DNS Lookup
Connecting
Blocking
Sending
Waiting
Receiving
What does each of these represent, and more specifically, does any of them accurately represent the amount of time that the server is thinking (accessing the database, running algorithms, etc.)?
Thanks.
You couldn't accurately determine what the server is doing as such, I'm afraid.
You can discount most of them except Waiting, however, as the rest occur before and after the server handles your request. What it actually does while you wait will be a 'black box'.
There may be some asynchronous operations taking place during Sending and Receiving, so again it's hard to be accurate but you can get a ballpark figure of the time the server is working and the time the request spends travelling back and forth.
EDIT
Rough Definitions:
DNS Lookup: Translating the web address into a destination IP address by using a DNS server
Connecting: Establishing a connection with the web server
Blocking: Previously known as 'queueing', this is explained in more detail here
Sending: Sending your HTTP Request to the server
Waiting: Waiting for a response from the server - this is where it's probably doing all the work
Receiving: Getting the HTTP response back from the server
The Firebug wiki also explains these (see the Timeline section):
Blocking: Time spent in a browser queue waiting for a network connection (formerly called Queueing). For SSL connections this includes the SSL handshake and the OCSP validation step.
DNS Lookup: DNS resolution time.
Connection: Elapsed time required to create a TCP connection.
Waiting: Waiting for a response from the server.
Receiving: Time required to read the entire response from the server (and/or the time required to read it from cache).
'DOMContentLoaded' (event): Point in time when the DOMContentLoaded event was fired, measured from the beginning of the request (can be negative if the request was started after the event).
'load' (event): Point in time when the page load event was fired, measured from the beginning of the request (can be negative if the request was started after the event).
There's a pretty good article with time charts and a protocol-level explanation of what's happening at each stage here. I found it pretty helpful, as it also visually demonstrates the impact of using persistent and parallel connections versus serial connections.
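If you want the same phases programmatically rather than from the net panel, the standard Resource Timing API exposes comparable timestamps. A rough sketch, run from the page's console (the mapping onto the panel's categories is approximate):

    // Approximate the net panel's categories from Resource Timing data.
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
    for (const e of entries) {
      console.log(e.name, {
        blocking: e.domainLookupStart - e.startTime,  // queueing before DNS starts
        dns: e.domainLookupEnd - e.domainLookupStart, // DNS Lookup
        connecting: e.connectEnd - e.connectStart,    // TCP (and TLS) handshake
        waiting: e.responseStart - e.requestStart,    // server "think time" plus latency
        receiving: e.responseEnd - e.responseStart,   // reading the response body
      });
    }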

2 connections per server?

I've read somewhere that you can only have 2 connections (e.g. Ajax requests) to the same server. Is this correct?
So you can't run 3 Ajax requests simultaneously? What will happen to the 3rd one?
And if I've got one iframe, can I only run 1 Ajax request at a time?
What is the easiest way to get around this?
What keywords could I use to search for more information about this on Google?
The 2-connection maximum per server is recommended in HTTP RFC 2616, section 8.1 (http://www.ietf.org/rfc/rfc2616.txt):
Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A
single-user client SHOULD NOT maintain more than 2 connections with
any server or proxy. A proxy SHOULD use up to 2*N connections to
another server or proxy, where N is the number of simultaneously
active users. These guidelines are intended to improve HTTP response
times and avoid congestion.
Q: What will happen to the 3rd one?
The third one will be queued until one of the other HTTP calls returns.
Q: And if I've got one iframe, can I only run 1 Ajax request at a time?
The iframe will be loaded through an HTTP connection, but once the HTML content has been returned, that HTTP call is complete and you again have 2 available HTTP connections.
Q: What is the easiest way to get around this?
The most important thing is not to have long-running HTTP requests, i.e. speed up processing on the server side. As long as HTTP requests are answered in under 100 ms, this is not a problem for normal apps.
You read it right: browsers limit simultaneous connections to the exact same domain to 2 for any type of request (script src, image src, Ajax, etc.) originating from a given document. This can be changed in the registry for IE and in about:config in Firefox.
One way to get around this is to add additional CNAMEs for your host.
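To illustrate the CNAME approach: with extra hostnames that all resolve to the same server (the static1/static2 names below are hypothetical), requests can be spread across them so each hostname gets its own connection allowance:

    // Hypothetical shard hostnames, all CNAMEs pointing at the same server.
    const shards = ["static1.example.com", "static2.example.com"];

    // Hash the path so a given resource always maps to the same hostname
    // (keeps browser caching effective across page loads).
    function shardUrl(path: string): string {
      let hash = 0;
      for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) | 0;
      return `https://${shards[Math.abs(hash) % shards.length]}${path}`;
    }

    // Usage: fetch(shardUrl("/images/logo.png"))

Note that for Ajax specifically the shard hostnames count as a different origin, so cross-origin restrictions apply; the trick is therefore most useful for images, scripts, and other static assets.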

Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app-server logic finishes in 5 ms. I am looking for an HTTP server, load balancer, or proxy server that supports the following (a toy sketch of the full flow follows the list):
1. A request arrives at the proxy. The proxy starts buffering the request, in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
2. The proxy server stops buffering the request when:
a size limit has been reached (say, 4 KB), or
the request has been received completely, headers and body.
3. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
4. The backend sends back the response. Again, the proxy server starts buffering it immediately (up to a more generous size limit, say 64 KB).
5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is freed to process more requests. The backend connection is closed immediately.
6. The proxy sends the response back to the mobile client, as fast or as slow as the client is capable of receiving it, without a connection to the backend tying up resources.
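To make the desired behavior concrete, here is a toy sketch of steps 1-6 as a buffering proxy in Node.js-style TypeScript. It is not a production proxy: for brevity it buffers the entire request rather than stopping at a 4 KB limit, and the backend address is an assumption:

    import * as http from "node:http";

    const BACKEND = { host: "127.0.0.1", port: 8080 }; // hypothetical backend

    http.createServer((clientReq, clientRes) => {
      // Steps 1-2: buffer the whole request; no backend connection yet.
      const reqChunks: Buffer[] = [];
      clientReq.on("data", (c: Buffer) => reqChunks.push(c));
      clientReq.on("end", () => {
        // Step 3: only now open the backend connection and relay the request.
        const backendReq = http.request(
          { ...BACKEND, method: clientReq.method, path: clientReq.url, headers: clientReq.headers },
          (backendRes) => {
            // Steps 4-5: buffer the whole response so the backend is freed quickly.
            const resChunks: Buffer[] = [];
            backendRes.on("data", (c: Buffer) => resChunks.push(c));
            backendRes.on("end", () => {
              // Step 6: send the buffered response to the (possibly slow) client.
              clientRes.writeHead(backendRes.statusCode ?? 502, backendRes.headers);
              clientRes.end(Buffer.concat(resChunks));
            });
          }
        );
        backendReq.end(Buffer.concat(reqChunks));
      });
    }).listen(3128); // the proxy's own port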
I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behavior trivial? And is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?
(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from a backend, Squid converts it from chunked transfer encoding (C-T-E: chunked) to a regular stream with Content-Length set, so maybe it can normalize POSTs as well.
Nginx can do everything you want. The configuration parameters you are looking for are client_body_buffer_size (http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size) and proxy_buffer_size (http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size).
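For example, in the relevant location block (the values and backend address are illustrative, not from the answer):

    # Buffer the client request body before contacting the backend,
    # and buffer the backend response so the backend is freed quickly.
    location / {
        client_body_buffer_size 16k;  # request-body buffer
        proxy_buffer_size 64k;        # first part of the backend response
        proxy_pass http://127.0.0.1:8080;
    }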
Fiddler, a free tool from Telerik, does at least some of the things you're looking for.
Specifically, go to Rules | Custom Rules... and you can add arbitrary JavaScript code at all points during the connection. You could simulate some of the behavior you need with sleep() calls.
I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?
Squid 2.7 can support 1-3 with a patch:
http://www.squid-cache.org/Versions/v2/HEAD/changesets/12402.patch
I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload.
Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need support? Usually clients should retry the request when they get a 411.
Unfortunately, I'm not aware of a ready-made solution for this. In the worst case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
