IE11 is very slow on HTTPS sites

We have an internal web application that has recently become very slow in IE11. Chrome and Firefox do not have this problem.
We have an HTTP version of the same website, and it is fast, with up to 6 concurrent HTTP connections to the server and session persistence.
However, when we switch to HTTPS, all of this changes.
We still have session persistence, but only 2 simultaneous connections to the servers, and each request seems to take ages. It is not the server response that is slow, but the "start" time before IE11 sends the request (and the next one). The connection diagram turns into a staircase, with one request issued at a time, one after the other, each taking 100-200 ms even when the response is only a couple of bytes.
The workstation has Symantec Endpoint Protection (12.1) installed, as well as Digital Guardian software (4.7).
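To confirm where the time goes, a minimal diagnostic sketch using the Resource Timing API (which IE11 supports) can split each request into a pre-request ("stalled") phase, server wait, and download; the snippet below is generic and not tied to this particular application:

```typescript
// Minimal sketch: break each resource request into "time before the request is sent"
// (queueing, DNS, connect, TLS), server wait, and download, to confirm where the delay is.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

for (const e of entries) {
  const preRequest = e.requestStart - e.fetchStart;  // queueing / connect / TLS
  const waiting = e.responseStart - e.requestStart;  // server think time (TTFB)
  const download = e.responseEnd - e.responseStart;  // body transfer
  console.log(
    `${e.name}: pre-request=${preRequest.toFixed(0)}ms ` +
    `server=${waiting.toFixed(0)}ms download=${download.toFixed(0)}ms`
  );
}
```

If the pre-request column dominates while the server column stays small, the slowdown is in connection handling on the client side rather than in the server response.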

Related

How browsers communicate with the server

I've noticed interesting behavior in the Chrome browser when multiple Ajax requests are sent from the page. This is what I see in the network debug panel:
The grey part of the last validate requests is "Stalled". According to the Chrome documentation:
Queueing. The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled. The request could be stalled for any of the reasons described in Queueing.
So that makes sense to me: about six Ajax requests go out first (some finish before the next ones start), and the rest get queued.
But the same page sometimes looks different:
It's the same instance of the Chrome browser, just a different tab, and here I see no queueing at all. Because I can have over 200 such Ajax requests at the same time, this causes problems with handling the requests on the server side. It's clear that my task is to reduce the number of Ajax requests in general, and I'm working on that, but the unpredictability of the browser's behavior doesn't make the task easier.
What can be the reason for such different behavior?
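Since the browser's queueing is per origin and applies only to HTTP/1.0 and HTTP/1.1, one way to make the server-side load predictable, regardless of whether Chrome queues or not, is to cap concurrency in the page itself. A minimal sketch, assuming the Ajax calls can be wrapped in a small helper (the helper name and the limit of 6 are illustrative, not from the original question):

```typescript
// Minimal sketch: run at most `limit` fetches at a time, queueing the rest ourselves
// so the server never sees an uncontrolled burst of Ajax requests.
function createLimiter(limit: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  const next = () => {
    if (active < limit && waiting.length > 0) {
      active++;
      waiting.shift()!();
    }
  };

  return function run<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      waiting.push(() => {
        task().then(resolve, reject).finally(() => {
          active--;
          next();
        });
      });
      next();
    });
  };
}

// Usage: wrap each Ajax call so only 6 are in flight at once.
const limited = createLimiter(6);
const urls = ["/validate?id=1", "/validate?id=2" /* ... */];
Promise.all(urls.map(u => limited(() => fetch(u).then(r => r.json()))));
```

With a cap like this, the server sees at most 6 concurrent requests per page, whether or not the browser chooses to stall the rest.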

How to send data to client browsers when a server-side change occurs

I have an intranet-based CRM application developed in CodeIgniter 2.1. The application runs on a local Apache server, and around 20 clients access it over the LAN. It is to be connected to a call center setup where the call center application (running on a separate server) will do an HTTP POST, with the caller's number as well as the terminal number of the agent receiving the call, to a URL of my CodeIgniter application. I am using this data to populate a database table of call records.
Now, from the terminal number (each terminal has a static IP, and a session in CodeIgniter is linked to an IP as well) I can find out which user (login session) of my application is about to receive the call. I want to find out how I can send data from the server side (information about the call, such as the calling number, past call records, etc.) to that specific user's browser via AJAX or something similar. The agent's browser needs to display this information sent from the server.
Periodic polling from the browser with jQuery etc. is not possible, as the data needs to be updated almost instantaneously, and polling that rapidly would lead to high CPU usage at the client end as well as extra load on the network.
P.S.: I only want to know how to update the browser data from the server end.
In AJAX, an asynchronous request/response doesn't involve polling; there is just an open TCP connection and non-blocking I/O. The client makes a request and returns immediately; when the server sends the response, the client is notified. So you can achieve what you want with AJAX's XMLHttpRequest without polling[1]. All you need is a URL from which to serve your notifications. You could have one request thread and a general dispatch method, or different URLs and different threads for each, depending on how you need to scale.
[1] Well, to be honest, with very little polling. You'd really need to establish what the session/global timeout is and reissue requests within that time limit.
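A minimal long-polling sketch of this approach, using the fetch API for brevity (the original answer mentions XMLHttpRequest); the notification URL, payload shape, status codes and timeout values are placeholders, not part of the original answer:

```typescript
// Minimal long-polling sketch: the browser keeps one request open per agent; the server
// answers only when there is a call event (or when its own timeout expires), and the
// client immediately reissues the request. URL, payload shape and delays are placeholders.
interface CallEvent {
  callerNumber: string;
  pastCalls: string[];
}

async function pollNotifications(onEvent: (e: CallEvent) => void): Promise<void> {
  while (true) {
    try {
      // Keep each request well below the server/session timeout mentioned above.
      const res = await fetch("/index.php/calls/notifications", { credentials: "same-origin" });
      if (res.status === 200) {
        onEvent((await res.json()) as CallEvent);
      }
      // A 204 (no event before the server-side timeout) just falls through and re-polls.
    } catch {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise(r => setTimeout(r, 2000));
    }
  }
}

pollNotifications(ev => {
  // Display the caller's number and past call records to the agent.
  console.log("Incoming call from", ev.callerNumber, ev.pastCalls);
});
```

The server side holds each request open until a matching call record arrives for that agent's session, which gives near-instant updates without rapid polling.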

Understanding HTTPS connection setup overhead

I'm building a web-based chat app which will need to make an AJAX request for every message sent or received. I'd like the data to be encrypted and am leaning towards running AJAX (with long-polling) over HTTPS.
However, since the frequency of requests here is a lot higher than with basic web browsing, I'd like to get a better understanding of the overhead (network usage, time, server CPU, client CPU) in setting up the encrypted connection for each HTTPS request.
Aside from any general info/advice, I'm curious about:
As a very rough approximation, how much extra time does an HTTPS request take compared to HTTP? Assume a content length of 1 byte and an average PC.
Will every AJAX request after the first have anything significant cached, allowing it to establish the connection quicker? If so, how much quicker?
Thank you in advance :-)
Everything in HTTPS is slower: responses carrying personal information shouldn't be cached, you have encryption on both ends, and the SSL handshake is relatively expensive.
Long-polling will help. Long keep-alives are good. Enabling SSL sessions on your server will avoid a lot of the overhead as well.
The real trick is going to be load balancing or any sort of legitimate caching. I'm not sure how much that will come into play in your system, it being a chat server, but it's something to consider.
You'll get more information from this article.
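As for the "how much extra time" question, a rough way to measure it on your own setup, rather than relying on general numbers, is sketched below; the URLs are placeholders, and it should be run from a context allowed to request both schemes (a plain-HTTP test page, or Node 18+ where fetch and performance are global):

```typescript
// Minimal sketch: compare average request time for the same tiny resource over
// HTTP and HTTPS from the client's point of view. URLs are placeholders.
async function timeRequest(url: string, runs = 20): Promise<number> {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await fetch(url, { cache: "no-store" }); // avoid the cache skewing the result
    total += performance.now() - start;
  }
  return total / runs;
}

(async () => {
  const plain = await timeRequest("http://chat.example.com/ping");
  const secure = await timeRequest("https://chat.example.com/ping");
  console.log(`HTTP avg: ${plain.toFixed(1)}ms, HTTPS avg: ${secure.toFixed(1)}ms`);
})();
```

With keep-alive connections, only the first HTTPS request in a batch pays the full handshake cost, so the averaged difference is usually much smaller than the single-handshake worst case.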
Most of the overhead is in the handshake (exchanging certificates, checking for their revocation, ...). Session resumption and the recent False Start extension help in that respect.
In my experience, the worst-case scenario happens when using client-certificate authentication and advertising too many CAs (the CertificateRequest message sent by the server can even become too big); this is quite rare, since in practice, when you use client-certificate authentication, you only accept client certificates from a limited number of CAs.
If you configure your server properly (for resources for which it's appropriate), you can also enable browser caching for resources served over HTTPS, using Cache-Control: public.
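A minimal server-side sketch (Node.js in TypeScript) combining the two suggestions above, long keep-alives and explicit caching over HTTPS; the certificate paths, port, file name, max-age and timeout values are all placeholders, not part of the original answers:

```typescript
// Minimal sketch: an HTTPS server that (a) keeps idle connections open so the TLS
// handshake is paid once per connection rather than once per message, and
// (b) marks a public resource as cacheable even though it is served over HTTPS.
import { readFileSync } from "fs";
import { createServer } from "https";

const server = createServer(
  {
    key: readFileSync("server.key"),
    cert: readFileSync("server.crt"),
  },
  (req, res) => {
    if (req.url === "/app.js") {
      // Public, non-personal resource: allow the browser to cache it.
      res.setHeader("Content-Type", "application/javascript");
      res.setHeader("Cache-Control", "public, max-age=86400");
      res.end(readFileSync("app.js"));
    } else {
      // Chat messages: dynamic, never cached.
      res.setHeader("Cache-Control", "no-store");
      res.end(JSON.stringify({ ok: true }));
    }
  }
);

// Keep idle keep-alive connections open long enough to span gaps between messages.
server.keepAliveTimeout = 60_000; // ms
server.listen(8443);
```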

2 connections per server?

I've read somewhere that you can have only 2 connections (e.g. Ajax requests) to the same server. Is this correct?
So you can't run 3 Ajax requests simultaneously? What will happen to the 3rd one?
And if I've got one iframe, can I then run only 1 Ajax request at a time?
What is the easiest way to get around this?
What keywords could I use to search for more information about this on Google?
The 2-connection-per-server maximum is specified in the HTTP RFC 2616, section 8.1: http://www.ietf.org/rfc/rfc2616.txt
Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A
single-user client SHOULD NOT maintain more than 2 connections with
any server or proxy. A proxy SHOULD use up to 2*N connections to
another server or proxy, where N is the number of simultaneously
active users. These guidelines are intended to improve HTTP response
times and avoid congestion.
Q: What will happen to the 3rd one?
The third one will be queued until one of the other HTTP calls returns.
Q: And if I've got one iframe, can I then run only 1 Ajax request at a time?
The iframe will be loaded through an HTTP connection, but once the HTML content has been returned the HTTP call is complete, and you again have 2 available HTTP connections.
Q: What is the easiest way to get around this?
The most important thing is not to have long-running HTTP requests, i.e. to speed up processing on the server side. As long as HTTP requests are answered in less than 100 ms, this is not a problem for normal apps.
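To see the queueing concretely, a small sketch is given below; the slow endpoint URL and its roughly 1-second response time are assumptions, and the effect is only visible on a browser or configuration that enforces a 2-connection-per-host limit:

```typescript
// Minimal sketch: fire 4 requests at once against a slow endpoint and log when each
// finishes. With a 2-connection-per-host limit, the 3rd and 4th requests only start
// once earlier ones complete, so they finish roughly one response-time later.
const started = performance.now();

for (let i = 1; i <= 4; i++) {
  fetch(`/slow-endpoint?id=${i}`).then(() => {
    console.log(`request ${i} finished after ${(performance.now() - started).toFixed(0)}ms`);
  });
}
// Expected with a 2-connection limit and ~1s responses: requests 1-2 finish around
// 1000 ms, requests 3-4 around 2000 ms, because the browser queued them.
```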
You read it right: browsers limit simultaneous connections to the exact same domain to 2 for any type of request (script src, image src, Ajax, etc.) originating from a given document. This can be changed in the registry for IE and in about:config for Firefox.
One way to get around this is to add additional CNAMEs for your host.
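A minimal sketch of that CNAME (domain sharding) idea; the shard host names are placeholders, each must resolve to the same server, and for Ajax the server has to allow cross-origin requests from the page's origin:

```typescript
// Minimal sketch of domain sharding: spread requests across several CNAMEs of the
// same server so each hostname gets its own per-host connection limit.
const shards = [
  "https://www1.example.com",
  "https://www2.example.com",
  "https://www3.example.com",
];

function shardedUrl(path: string, i: number): string {
  // Round-robin across shards; a stable hash of `path` also works and keeps a
  // given resource cacheable under a single hostname.
  return shards[i % shards.length] + path;
}

// Usage: six requests now spread over three hostnames instead of one.
const paths = ["/a", "/b", "/c", "/d", "/e", "/f"];
Promise.all(paths.map((p, i) => fetch(shardedUrl(p, i))));
```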

WinHTTP error 12175 after days and a huge number of queries

I am using the Windows WinHTTP library to perform HTTP and HTTPS queries, and I sometimes get a WinHTTP 12175 error, usually after several days of operation (sometimes weeks) and hundreds of thousands to millions of queries.
When that happens, the only way to "fix" the error is to restart the service, and occasionally the errors will not go away until Windows (the server) is restarted.
Interestingly enough, this error appears "gradually": first for some HTTPS queries, then after some time the service gets it for 100% of HTTPS queries, and later on it pops up even for regular HTTP queries (the service makes lots of HTTP and HTTPS queries, on the order of dozens per second, to many servers).
The HTTP connections are made directly, without any proxies. The OS is Windows 2012 R2.
AFAICT there is no memory leak or corruption; apart from the 12175 errors, the rest of the service operates just fine: no access violations or suspicious exceptions, the service responds to queries normally, etc.
I suspect this could be related to Windows Update doing OS certificate updates, renewing credentials or something similar, but I have never been able to positively confirm it.
Has anyone else observed this behavior? Any workarounds?