Chrome shows: HTTP hostname and TLS SNI hostname mismatch - Windows

I have a problem: when I start Chrome and try to visit an HTTPS page, I see this error:
Error 1013 Ray ID: 2cb7c5989ac13dd8 • 2016-08-01 08:03:08 UTC
HTTP hostname and TLS SNI hostname mismatch
You've requested an IP address that is part of the CloudFlare network. The request's Host header does not match the request's TLS SNI Host header.
What is it? Where does CloudFlare come into this, and how can I remove it from my browser?

SNI stands for Server Name Indication; essentially it allows a server to present more than one SSL certificate from a single IP address. Most modern browsers send this as part of a web request. CloudFlare Free Universal SSL makes use of SNI.
Similarly, the Host header lets a server know which hostname it should serve. The Host header is what fundamentally allows virtual hosting: serving more than one website per IP address.
So these two clearly need to match: the SNI hostname and the hostname in the Host header must be identical. When they don't, CloudFlare presents an Error 1013, "HTTP hostname and TLS SNI hostname mismatch".
There may be a proxy in your local network that is stripping out or mangling the SNI value, thus causing the mismatch. This can happen as a result of a firewall or an intermediary web server.
There is a tool here to check whether your SNI and Host headers match. Simply visit the linked site and check that the "SNI information:" field states "cc.dcsec.uni-hannover.de".
If you require support for browsers which do not support SNI, CloudFlare's Pro, Business and Enterprise plans offer legacy support for non-SNI browsers; but it would make sense to contact your network administrator to find out why SNI requests are being malformed.
The tags on this question suggest you are using Firefox on Windows 10, which supports SNI; therefore you should reach out to your network team to see why they are malforming SNI requests.
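To see the two fields concretely, here is a minimal sketch using Python's standard ssl module that deliberately sends one name in the TLS SNI field and a different one in the HTTP Host header (both hostnames are placeholders; against a CloudFlare-proxied site the response would be the Error 1013 page):

import socket, ssl

sni_name = "www.example.com"       # goes into the TLS ClientHello (SNI)
host_header = "other.example.net"  # goes into the HTTP Host header

ctx = ssl.create_default_context()
with socket.create_connection((sni_name, 443)) as sock:
    # server_hostname sets the SNI value for the handshake
    with ctx.wrap_socket(sock, server_hostname=sni_name) as tls:
        tls.sendall(("GET / HTTP/1.1\r\n"
                     "Host: " + host_header + "\r\n"
                     "Connection: close\r\n\r\n").encode())
        print(tls.recv(4096).decode(errors="replace"))

A well-behaved client always derives both values from the same URL, which is why this error normally points at an intermediary rewriting one of them.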

Related

Using winhttp for client with non default SSL server name indication (sni)

I'm using WinHTTP in order to establish an HTTPS connection on port 443 with my remote server. However, the server running this service also hosts more services on the same HTTPS port (443), so it uses SNI in order to resolve the requested session.
However, the server doesn't expect to get the hostname as the SNI value, since it uses a single URL for all services. Instead, the SNI value is chosen not according to the URL but according to some other string notation (i.e. service_api or service_web_if ...).
In my client connection flow, I set the URL in the WinHttpConnect method, which also sets the SNI accordingly, and the actual SSL/TLS handshake is made when calling WinHttpSendRequest.
I wonder how I can change the SNI value from the default URL value after calling WinHttpConnect.
So far, while investigating possible solutions, I've learned about the HTTP_SERVICE_CONFIG_SSL_SNI_KEY structure, which is set via HttpSetServiceConfiguration along with the matching certificate for that SNI, but this seems to be related to server-side configuration. Beyond that, I unfortunately haven't found any references for such an action.
Has anybody ever used a non-default SNI with the WinHTTP API who can tell me how to do so? Is the only option to do the SSL handshake using some lower-level API such as Schannel, and then switch back to WinHTTP?
If it's not possible, perhaps there's an option to use an extended hostname with a directory tree in order to get multiple SNI values on a single URL...
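For background, the decoupling being asked about is simple at the TLS layer itself, where the SNI field is just an opaque string independent of both the URL and the Host header. The sketch below illustrates the idea using Python's ssl module purely as a stand-in for a lower-level handshake API such as Schannel (server.example.com and service_api are placeholders taken from the question):

import socket, ssl

server = "server.example.com"

ctx = ssl.create_default_context()
ctx.check_hostname = False       # "service_api" will not match the certificate
ctx.verify_mode = ssl.CERT_NONE  # demo only; never disable verification in production

with socket.create_connection((server, 443)) as sock:
    # server_hostname becomes the SNI value in the ClientHello, and it need
    # not have anything to do with the address we actually connected to
    with ctx.wrap_socket(sock, server_hostname="service_api") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\n"
                    b"Host: server.example.com\r\n"
                    b"Connection: close\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))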

http2: The 421 Misdirected Request Status Code example

I'm reading the spec and trying to understand exactly when 421 might be returned. An example is given but I don't completely understand it.
Background
The spec establishes two conditions that allow for connection reuse:
For TCP connections without TLS, this depends on the host having resolved to the same IP address.
and
For https resources, connection reuse additionally depends on having a
certificate that is valid for the host in the URI.
If the certificate used on the connection has multiple subjectAltName entries, or any subjectAltName is a wildcard, then the connection can be reused for any request whose hostname is in the list of subjectAltNames or matches one of the wildcards.
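Put as code, the two reuse conditions look roughly like this (a sketch with made-up inputs; note that real certificate matching is stricter than fnmatch, since a wildcard covers only a single DNS label):

from fnmatch import fnmatch

def can_reuse(conn_ip, new_host_ip, cert_sans, new_host):
    # 1. the new host must resolve to the IP the connection already targets
    if new_host_ip != conn_ip:
        return False
    # 2. for https, the connection's certificate must also cover the new
    #    host, via an exact subjectAltName or a wildcard entry
    return any(fnmatch(new_host, san) for san in cert_sans)

print(can_reuse("1.2.3.4", "1.2.3.4", ["*.example.com"], "www.example.com"))      # True
print(can_reuse("1.2.3.4", "1.2.3.4", ["siteA.example.com"], "www.example.com"))  # False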
Specific example in the spec
In some deployments, reusing a connection for multiple origins can
result in requests being directed to the wrong origin server. For
example, TLS termination might be performed by a middlebox that uses
the TLS Server Name Indication (SNI) [TLS-EXT] extension to select an
origin server. This means that it is possible for clients to send
confidential information to servers that might not be the intended
target for the request, even though the server is otherwise
authoritative.
Please explain where my understanding of this example is wrong:
An HTTPS connection is established to a middlebox with a request for domain x.com. The middlebox has IP address 1.2.3.4, and x.com resolves to that address. Using SNI, the TLS handshake carries x.com, and the middlebox returns a certificate valid for that domain. All messages on this connection go from the client to the middlebox or from the middlebox to the client. Application-level messages from the client to the middlebox are forwarded by the middlebox to an origin on a different connection, and messages from the origin to the middlebox are forwarded to the client.
If the connection is to be reused, meeting the two conditions discussed above is not enough. Specifically, for a request with domain y.com: if y.com resolves to 1.2.3.4 and the middlebox has a certificate valid for y.com, there can still be a problem. Because the original connection did its TLS handshake using x.com, and because handshakes are only done at the beginning of new connections, there is no way of establishing an HTTPS connection that would get the certificate for y.com. So the client incorrectly sends a request for y.com on the same connection. The middlebox rejects the request because the certificate associated with the connection is valid for x.com, not y.com. (The x.com certificate is only valid for x.com, and the y.com certificate is only valid for y.com.)
None of your examples will trigger a 421 as far as I can see.
Yes you are correct that a connection needs both the IP address and the SAN field in the certificate to be valid - without those a connection should not be reused.
So what would trigger a 421? As far as I can tell it will be mostly due to different SSL/TLS setups.
For example:
Assume website A (siteA.example.com) and website B (www.example.com) are both on the same IP address. Assume website A has a wildcard cert for *.example.com and website B has its own specific cert. There could be a few reasons for this: for example, the main website serves an EV cert, which can't be a wildcard cert.
So cert A covers website A and website B, as does the IP address. So if you are connected to siteA.example.com and then try to connect to www.example.com, then technically, by HTTP/2 standards, you could reuse the connection. But we wouldn't want that to happen, as we want to use our EV cert, so the server should reject the request with a 421. Now in this example the web server is able to distinguish the correct host and has a valid cert for that host, so it could, in theory, serve the correct content under the wildcard cert instead of sending a 421 - but since that wildcard cert is not defined for that virtualhost, it should not do this.
Other examples include having different cipher configurations on different hosts. For example, site A has a very lax HTTPS config because it's not really secure content and they want to reach even legacy browsers, while site B has a very strict config and only accepts the latest TLS version and strong ciphers. Here you obviously wouldn't want them to reuse the same connection. See here for a real-world example of this.
Also, this is only an issue for certain browsers, depending on how they decide to share connections. This page shows how differently each of them does this (at least at the time of that blog post; I'm not aware of anything having changed since then): https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/
Also note that some bugs exist around this (for example: https://bugs.chromium.org/p/chromium/issues/detail?id=546991). The best advice is: if you do not want connection sharing to happen, use a different IP address and/or ensure there is no overlap in certificates.
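The server-side decision in the wildcard example above can be sketched as follows (a hypothetical vhost table, not any particular server's code; real web servers apply the equivalent rule internally when a coalesced HTTP/2 connection carries a request for a host the selected virtualhost is not configured to serve):

VHOSTS = {
    # SNI name -> hosts this virtualhost is configured to serve
    "siteA.example.com": {"siteA.example.com"},
    "www.example.com": {"www.example.com"},
}

def status_for(sni_name, request_host):
    if request_host in VHOSTS.get(sni_name, set()):
        return 200
    return 421  # Misdirected Request: client should retry on a new connection

# connection negotiated for siteA.example.com (the wildcard cert), then a
# coalesced request for www.example.com arrives on it:
print(status_for("siteA.example.com", "www.example.com"))  # 421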

Unable to redirect https traffic from external IP to loopback interface in Fiddler

I'm trying to use Fiddler to capture traffic that comes to my machine on its external IP address, and redirect it to the loopback interface without affecting the Host header.
I have added the following to the OnBeforeRequest method:
if (oSession.HostnameIs("MyMachineName")) {
    oSession.bypassGateway = true;
    oSession["x-overrideHost"] = "localhost";
}
This works fine for HTTP traffic: I do indeed see a request to http://MyMachineName hit the loopback adapter with its Host header intact.
However, when intercepting https traffic I get the following in the response raw view:
fiddler.network.https> HTTPS handshake to auth.time-wise.net failed. System.IO.IOException The handshake failed due to an unexpected packet format.
I have Fiddler configured to capture and decrypt https traffic.
Does anyone know why this problem occurs and how it can be remedied?
Edit: in response to Eric's request for more information
Fiddler is running as a proxy (i.e. as standard), listening on port 8888.
The clients are (currently) web browsers on the same machine, and so are automatically using the Fiddler proxy, as they've picked up the change in default proxy.
You've left out some important details (e.g. what port is Fiddler running on, and how did you configure the remote client to send its traffic to Fiddler?)
Having said that, you will probably want to change your use of x-overrideHost to x-overrideHostname such that the port number of the traffic being retargeted is preserved.

Countering IPFuck IP Anonymity Firefox Addon

http://ipfuck.paulds.fr/
We've recently been getting hammered by this Firefox plug-in. It sends a fake IP in the request headers, so when our nginx web server picks up the client IP, it gets a fake one.
Is there any way to get a real IP address or block out requests that have this plug-in installed?
There are actually no client IP entries in any standard HTTP headers. There are only some unofficial proxy headers which are added to a request so that a proxy server can tell you the real IP of the connecting client (since the TCP socket will only reveal the IP address of the proxy server).
The plugin you linked to adds those proxy headers to "fake" a proxy request, by adding an X-Real-IP: 1.2.3.4 or X-Forwarded-For: 1.2.3.4 header to the request. But no one forces you to use that IP address (which can be fake, like the 1.2.3.4 example here); you can always use the IP address of the socket that initiated the connection, which will be the client's real IP address if they use the mentioned plugin.
Within the location section of your nginx configuration, you get the socket IP address through the $remote_addr variable. To retrieve the "fake" IP address, you can use the $http_x_forwarded_for or $http_x_real_ip variables.
If you are using an application/CGI backend, you can usually examine the full headers as well as the socket IP address (e.g. in PHP, compare $_SERVER['REMOTE_ADDR'], which comes from the socket, with $_SERVER['HTTP_X_FORWARDED_FOR'], which is the spoofable header).
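The same point in runnable form: a minimal Python/WSGI sketch that prints the socket address next to the client-supplied header, so you can see which one the plugin can actually fake (the port and variable names here are just for the demo):

from wsgiref.simple_server import make_server

def app(environ, start_response):
    socket_ip = environ.get("REMOTE_ADDR", "-")         # from the TCP socket
    claimed = environ.get("HTTP_X_FORWARDED_FOR", "-")  # spoofable request header
    body = ("socket ip: %s\nclaimed ip: %s\n" % (socket_ip, claimed)).encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()

Whatever value the add-on injects shows up only in the "claimed" line; the socket address is set by TCP and is not under the client's control (absent an actual proxy).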

How to work with HTTPS for multiple domains and sub-domains on localhost?

I am using
Apache
Ruby and Ruby on Rails 3
Mac OS X "Snow Leopard"
and I would like to use HTTPS on localhost for my domains and sub-domains.
I have already set everything up (I think correctly):
I generated a wildcard certificate for my domains and sub-domains (example: *.sitename.com)
I have set up name-based virtualhosts in the httpd.conf file listening on ports :443 and :80
My browser accepts the certificates (though it warns me that they aren't trusted) and I can access pages using HTTPS
From the official Apache guide I read that it is not possible to do this using name-based virtualhosts, but I have also read of people who made it work somehow (what?! I don't understand...).
So, is it possible or not to use HTTPS on localhost for multiple domains and sub-domains? If so, what must I do or check to make it work?
UPDATE for #sarnold
typhoeus appears to use libcurl, and libcurl appears to support SNI -- is your version of libcurl new enough to support SNI? Does typhoeus know how to enable it? (Do clients of libcurl need to "enable" SNI themselves?)
I think so because I can access all sub_domains over HTTPS and libcurl should be updated:
curl -V
curl 7.21.2 (x86_64-apple-darwin10.5.0) libcurl/7.21.2 OpenSSL/1.0.0c zlib/1.2.5 libidn/1.19
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: IDN IPv6 Largefile NTLM SSL libz
# Typhoeus request
Typhoeus::Request.get("https://<sub_domain_name>.<domain_name>.com/")
How can I check whether clients of libcurl need to "enable" SNI themselves?
The techniques for doing name-based virtual servers with SSL/TLS aren't great choices, but the Server Name Indication extension allows browsers to request a specific site by name, allowing different certificates to be used for different sites. Not all browsers support SNI yet.
Though one might ask what value there is in having multiple certificates if they are all served out of the same process with the same privileges, anything that improves the user's TLS experience has to be worth the hassle. :)
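If you want to verify that your Apache setup is actually selecting certificates by SNI, one rough check (a sketch, not a library call; sub.sitename.com stands in for one of the question's virtual hosts) is to fetch the peer certificate once with the SNI name and once without, and compare:

import socket, ssl

host = "sub.sitename.com"

def peer_cert(server_hostname):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the question uses untrusted dev certs
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            return tls.getpeercert(binary_form=True)  # raw DER bytes

# If the two certificates differ, the server is honouring SNI.
print(peer_cert(host) != peer_cert(None))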
