https with ECDHE-ECDSA-AES256-GCM-SHA384 in Windows 2012

I have been a long-time reader, but this is my first real post on a topic that I couldn't find a solution to.
I am currently hosting a website on Windows Server 2012 that I would like to get the latest TLS 1.2 cipher suites running on.
I am aware of how to enable TLS 1.1 and TLS 1.2 in Windows and have done so (via registry edits). I have also changed the cipher order to what I would like it to be.
My question is: how do I actually go through and set up the ECDHE/ECDSA portion of the cipher suite after this step?
When I view the site in the latest Chrome beta (which supports ECDHE and ECDSA in TLS 1.2, provided you use the supported curves), it seems to skip all of the ECDHE cipher suites.
Is there something else I need to do to get ECDHE/ECDSA properly enabled?
I have read around on the net trying to solve this myself, and the posts I found mention making copies of your root cert and then modifying them to somehow support ECDHE. Am I barking up the wrong tree?
Thank you in advance for any and all support with this issue.
Edit: adding clarification/progress
After more research, I have found that in order to get ECDSA to work, you need an ECDSA certificate. The only way to get one at this time is to self-sign, as the cert-cartel has not yet come up with proper cross-licensing agreements and fee structures for Elliptic Curve certificates.
Since self-signing is not an option for this site, I have removed all ECDSA suites from the cipher-order.
Unfortunately, because all of the AES Galois Counter Mode suites were also ECDSA, this rules those out for the time being.
This leaves me with a strongest cipher suite of ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521, which I BELIEVE is supported by the latest version of the Chrome beta, correct? I can't seem to get Chrome to pick up anything beyond SHA-1. Is there no SHA-2 support, even in the latest beta?

AES-GCM is about how you encrypt the data in your connection; ECDSA or RSA is about how the server identifies itself to the client. There is therefore no reason why you couldn't do AES-GCM encryption with RSA authentication.
RFC 5289 does define the needed suites for that:
https://www.rfc-editor.org/rfc/rfc5289#section-3.2
CipherSuite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = {0xC0,0x2F};
CipherSuite TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = {0xC0,0x30};
CipherSuite TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 = {0xC0,0x31};
CipherSuite TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 = {0xC0,0x32};
It's not necessarily easy, however, to find both a client and a server that support them.
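If you want to check whether a particular server will actually negotiate one of those ECDHE_RSA GCM suites, one option is a small test client that restricts its cipher list to just those suites and reports what gets negotiated. This is only a sketch against the OpenSSL C API (1.0.1 or later, which added TLS 1.2 support); the host name is a placeholder.

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/bio.h>

int main(void)
{
    SSL *ssl = NULL;
    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    /* OpenSSL names for the two ECDHE_RSA GCM suites from RFC 5289 */
    SSL_CTX_set_cipher_list(ctx,
        "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384");

    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "example.com:443");    /* placeholder host */
    BIO_get_ssl(bio, &ssl);

    if (BIO_do_connect(bio) == 1 && BIO_do_handshake(bio) == 1)
        printf("negotiated: %s\n", SSL_get_cipher_name(ssl));
    else
        printf("handshake failed; the server may not offer these suites\n");

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}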

I had similar experiences with Win2008 R2.
Depending on the certificate, the server either offers GCM ciphers or it doesn't.
With a self-signed ECDSA certificate I got GCM to work, but older browsers and Windows XP can't connect to such an HTTPS site.
Windows doesn't support any TLS_ECDHE_RSA...GCM... ciphers:
http://msdn.microsoft.com/en-us/library/aa374757(v=vs.85).aspx
Thus normal RSA certificates don't work with GCM under Windows.
Browser compatibility:
http://www.g-sec.lu/sslharden/SSL_comp_report2011.pdf

Related

DTLS using Schannel

I am trying to create a DTLS "connection" using Schannel under Windows (I am testing under a recent Windows 10 version, so all DTLS versions supported by Schannel should be available).
I tried starting from working code to establish a regular TLS connection by following the documentation:
InitializeSecurityContext with null input on the first pass, SECBUFFER_TOKEN & SECBUFFER_ALERT on output
AcceptSecurityContext with SECBUFFER_TOKEN & SECBUFFER_EMPTY on input, SECBUFFER_TOKEN & SECBUFFER_ALERT on output.
Repeat the two steps until they succeed, and then move on to using Encrypt/DecryptMessage
This works perfectly fine in stream mode (ISC_REQ_SEQUENCE_DETECT | ISC_REQ_REPLAY_DETECT | ISC_REQ_CONFIDENTIALITY | ISC_RET_EXTENDED_ERROR | ISC_REQ_ALLOCATE_MEMORY | ISC_REQ_STREAM).
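For reference, the stream-mode client loop described above might look something like this stripped-down sketch. It is only an outline: SEC_E_INCOMPLETE_MESSAGE handling, leftover-data (SECBUFFER_EXTRA) handling and certificate validation are all omitted, and the credential handle is assumed to have been acquired with AcquireCredentialsHandle beforehand.

#define SECURITY_WIN32
#include <winsock2.h>
#include <windows.h>
#include <security.h>
#include <schannel.h>
#pragma comment(lib, "secur32.lib")
#pragma comment(lib, "ws2_32.lib")

SECURITY_STATUS run_client_handshake(SOCKET s, const char *host,
                                     CredHandle *cred, CtxtHandle *ctx)
{
    DWORD flags = ISC_REQ_SEQUENCE_DETECT | ISC_REQ_REPLAY_DETECT |
                  ISC_REQ_CONFIDENTIALITY | ISC_RET_EXTENDED_ERROR |
                  ISC_REQ_ALLOCATE_MEMORY | ISC_REQ_STREAM;
    BYTE inbuf[16384];
    DWORD inlen = 0, attrs = 0;
    BOOL have_ctx = FALSE;
    SECURITY_STATUS ss = SEC_I_CONTINUE_NEEDED;

    while (ss == SEC_I_CONTINUE_NEEDED) {
        SecBuffer in[2]  = { { inlen, SECBUFFER_TOKEN, inbuf },
                             { 0, SECBUFFER_EMPTY, NULL } };
        SecBuffer out[2] = { { 0, SECBUFFER_TOKEN, NULL },    /* token to send   */
                             { 0, SECBUFFER_ALERT, NULL } };  /* alert, if any   */
        SecBufferDesc indesc  = { SECBUFFER_VERSION, 2, in };
        SecBufferDesc outdesc = { SECBUFFER_VERSION, 2, out };
        TimeStamp expiry;

        ss = InitializeSecurityContextA(cred, have_ctx ? ctx : NULL,
                                        (SEC_CHAR *)host, flags, 0,
                                        SECURITY_NATIVE_DREP,
                                        have_ctx ? &indesc : NULL, 0,
                                        ctx, &outdesc, &attrs, &expiry);
        have_ctx = TRUE;

        if (out[0].pvBuffer && out[0].cbBuffer) {             /* ship the token  */
            send(s, (const char *)out[0].pvBuffer, (int)out[0].cbBuffer, 0);
            FreeContextBuffer(out[0].pvBuffer);
        }
        if (ss == SEC_I_CONTINUE_NEEDED) {                    /* read the reply  */
            int got = recv(s, (char *)inbuf, sizeof inbuf, 0);
            if (got <= 0)
                break;
            inlen = (DWORD)got;
        }
    }
    return ss;   /* SEC_E_OK: ready for EncryptMessage / DecryptMessage */
}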
If I try to substitute STREAM with ISC/ASC_REQ_DATAGRAM, my InitializeSecurityContext succeeds with SEC_I_CONTINUE_NEEDED as expected, but my very first AcceptSecurityContext then fails with SEC_E_INVALID_PARAMETER.
I have tried setting grbitEnabledProtocols of my SCHANNEL_CRED to 0 to use the defaults as documented on both sides, I also tried setting it to SP_PROT_DTLS1_X, and I still get the Invalid Parameter return from my first ASC. I have also tried the DTLS_1_0 constants just in case.
Also, all security protocols are enabled by default in my registry settings.
From my understanding of the DTLS RFC, my code is failing at the HelloVerifyRequest step (I think? :)), and, again from my understanding of the RFC, this part requires that the security provider generate a cookie from a few parts of the ClientHello message as well as the source's IP address. However, I could not find any documented way to pass this information to the ASC function.
I have searched the entire internet for any working example usage of DTLS under Schannel without any luck. On Stack Overflow, I found this question that simply mentions that it is supported:
Is DTLS supported by Schannel on Windows 7?, and the linked MSDN article is just a high-level overview.
I searched for any usage of the constants that are related to this (ISC_REQ_DATAGRAM, SP_PROT_DTLS*, SECBUFFER_DTLS_MTU, ...), and the only things I could find on all the search engines I could think of were either copies of sspi.h or sites that index the constants in the Windows API...
I know DTLS is well supported by other implementations (OpenSSL etc), but I would really prefer to stay with Schannel, as other parts of my code currently work just fine with Schannel in TLS mode.
From Microsoft:
There is no documentation for implementing DTLS using Schannel. Known and persistent doc gap.
There are a few differences, but a TLS client or server can be adapted to DTLS fairly easily (a number of customers have done this successfully).
Set SCHANNEL_CRED.grbitEnabledProtocols to SP_PROT_DTLS1_X.
When calling AcceptSecurityContext, pass the client’s SOCKADDR via SECBUFFER_EXTRA.
MTU can be set via SetContextAttributes using the constant SECPKG_ATTR_DTLS_MTU, where the buffer is just a pointer to a ULONG. [Default is 1096 bytes.]
When ISC/ASC return SEC_I_MESSAGE_FRAGMENT, send this fragment and call ISC/ASC again, in a loop, to get the next fragment (without trying to read data from the network).
Implement timeout and retransmit logic in your application (since Schannel does not own the socket).
When receiving fragments, Schannel will attempt to eliminate duplicates, re-order and re-assemble, if possible.
SCHANNEL_SHUTDOWN does not apply to DTLS.
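Putting the points above together, a rough sketch of the DTLS-specific changes on the server side might look like the following. This is only an outline based on that guidance, not tested code: the UDP plumbing, retransmit timers and fragment-loop bookkeeping are left out, and the credential is assumed to have been acquired with SCHANNEL_CRED.grbitEnabledProtocols set to SP_PROT_DTLS1_X.

#define SECURITY_WIN32
#include <winsock2.h>
#include <windows.h>
#include <security.h>
#include <schannel.h>
#pragma comment(lib, "secur32.lib")
#pragma comment(lib, "ws2_32.lib")

SECURITY_STATUS dtls_accept_step(CredHandle *cred, CtxtHandle *ctx,
                                 BOOL first_call, SOCKET s,
                                 BYTE *dgram, DWORD dgram_len,
                                 SOCKADDR_IN *peer)
{
    /* Hand the client's SOCKADDR to ASC via SECBUFFER_EXTRA, as above. */
    SecBuffer in[2]  = { { dgram_len, SECBUFFER_TOKEN, dgram },
                         { sizeof(*peer), SECBUFFER_EXTRA, peer } };
    SecBuffer out[2] = { { 0, SECBUFFER_TOKEN, NULL },
                         { 0, SECBUFFER_ALERT, NULL } };
    SecBufferDesc indesc  = { SECBUFFER_VERSION, 2, in };
    SecBufferDesc outdesc = { SECBUFFER_VERSION, 2, out };
    DWORD attrs = 0;
    TimeStamp expiry;

    SECURITY_STATUS ss = AcceptSecurityContext(
        cred, first_call ? NULL : ctx, &indesc,
        ASC_REQ_SEQUENCE_DETECT | ASC_REQ_REPLAY_DETECT |
        ASC_REQ_CONFIDENTIALITY | ASC_REQ_ALLOCATE_MEMORY |
        ASC_REQ_DATAGRAM,
        SECURITY_NATIVE_DREP, ctx, &outdesc, &attrs, &expiry);

    if (first_call && (ss == SEC_E_OK || ss == SEC_I_CONTINUE_NEEDED ||
                       ss == SEC_I_MESSAGE_FRAGMENT)) {
        /* Optionally lower the path MTU; per the note above the buffer
           is simply a ULONG (the default is said to be 1096 bytes).    */
        ULONG mtu = 1200;
        SetContextAttributes(ctx, SECPKG_ATTR_DTLS_MTU, &mtu, sizeof(mtu));
    }

    if (out[0].pvBuffer && out[0].cbBuffer) {
        /* On SEC_I_MESSAGE_FRAGMENT: send this fragment, then call this
           step again without new input to obtain the next fragment.    */
        sendto(s, (const char *)out[0].pvBuffer, (int)out[0].cbBuffer, 0,
               (const SOCKADDR *)peer, sizeof(*peer));
        FreeContextBuffer(out[0].pvBuffer);
    }
    return ss;   /* SEC_E_OK, SEC_I_CONTINUE_NEEDED or SEC_I_MESSAGE_FRAGMENT */
}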
You can use https://github.com/mobius-software-ltd/iotbroker.cloud-windows-client as a sample for implementing DTLS on Windows.
It does not use SChannel but the netty library.
MQTT-SN and CoAP both support DTLS under this project.
BR
Yulian Oifa

Pound SSL Ciphers and Firefox Issue

I am fairly new to Pound configuration and SSL in general and am working on learning. I tried a few things I found on Google related to setting Ciphers, but they failed.
We are having an issue with Firefox after setting Ciphers in Pound to not allow SSLv3. Firefox tells customers that the system is not set up properly, so it is blocking them. Here is what I am trying to do.
Disallow SSLv3 and SSLv2 via the Pound cfg file. Here is what I have tried:
Ciphers "All:!SSLv2:!SSLv3"
We are using SHA-2 through GoDaddy for the cert and SHA-256 for the key. When I test via https://dev.ssllabs.com/ssltest/ we get a giant F. Any ideas?
Any and all help is greatly appreciated. Thanks!
"Ciphers" is used to configure the cipher suites, not the SSL/TLS protocols. According to the man page, you want to do this:
Disable SSLv3
Note that Disable works by disabling that protocol and all lesser protocols, so disabling SSLv3 also disables SSLv2 along with it.
You will probably want to configure Ciphers as well. Exactly how you configure it depends on what browsers and user agents you want to support, but you can get started with:
Ciphers "EECDH+AESGCM:AES128+EECDH"
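Putting the two together, a ListenHTTPS block might look roughly like the following. This is just a sketch: the address, port and certificate path are placeholders, and the exact directives available depend on your Pound version, so check the pound(8) man page.

ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/your-cert.pem"
    Disable SSLv3
    Ciphers "EECDH+AESGCM:AES128+EECDH"
End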

clojure https connect using TLS 1.2 protocol

Trying to connect to an HTTPS server (https://3dsecure.kkb.kz) using TLS 1.2.
(defn- http-request-clojure [xml req-type]
  (let [url-info (url-map req-type)]
    (prepare-response
      (.toString
        (:body (client/get
                 (str (:url url-info) "?"
                      (and (:name url-info)
                           (str (:name url-info) "="))
                      (URLEncoder/encode xml))
                 {:insecure?      true
                  :socket-timeout 10000
                  :conn-timeout   10000}))))))
Got error "javax.net.ssl.SSLException: Received fatal alert: protocol_version"
OpenSSL 1.0.1g, Java 7.
Any ideas what is going wrong?
It's not you, it's them: from their Qualys SSL Labs report:
Java 6u45 (No SNI): Protocol or cipher suite mismatch (Fail)
Java 7u25: Protocol or cipher suite mismatch (Fail)
Java 8u31: TLS 1.2, TLS_RSA_WITH_3DES_EDE_CBC_SHA (0xa), No FS, 112-bit
at least as of today. They could fix this at any time, so hopefully you have a close enough relationship to politely encourage them to follow that link, and perhaps update OpenSSL so they aren't vulnerable to this protocol downgrade attack.
This is almost always a simple matter of changing the nginx or Apache config, though it can take a little fiddling to ensure all devices can still connect. SSL Labs is an amazing resource for figuring this out.
From your perspective, is using Java 8 an option? It will be the easiest way past this.

How to get list of SSL/TLS ciphers supported by internet explorer

We are going to develop an SSL server which supports all the ciphers supported by IE 10 and IE 11. So I started searching Google for the list of ciphers supported by IE, but I am not able to find a single document which clearly lists all the SSL ciphers supported by IE.
Is there any document available on the internet, or is there any way to directly check the IE browser settings to get the list of supported ciphers?
The cipher suites depend less on the version of Internet Explorer and more on the underlying OS, because IE uses the SChannel implementation from Windows. And with some help from Google it is easy to find the following information:
cipher suites in Schannel: http://msdn.microsoft.com/en-us/library/windows/desktop/aa374757(v=vs.85).aspx
cipher suites in Schannel on Vista: http://msdn.microsoft.com/en-us/library/windows/desktop/ff468651(v=vs.85).aspx
ciphers in IE7..10 on various Windows versions: https://github.com/client9/sslassert/wiki/IE-Supported-Cipher-Suites
Apart from that, why would you want to implement all the cipher suites supported by IE? Some of them exist only to connect to legacy SSL implementations. The usual way is to support a number of secure ciphers, enough that you find a shared cipher with the common client implementations.
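Since IE defers to the OS as described above, another option is simply to ask Schannel on the machine in question which suites it offers. A rough sketch using the CNG call BCryptEnumContextFunctions (link with bcrypt.lib); treat it as a starting point rather than a complete tool:

#include <stdio.h>
#include <windows.h>
#include <bcrypt.h>
#pragma comment(lib, "bcrypt.lib")

int main(void)
{
    ULONG cb = 0;
    PCRYPT_CONTEXT_FUNCTIONS funcs = NULL;

    /* Enumerate the cipher suites enabled for Schannel on this machine. */
    NTSTATUS status = BCryptEnumContextFunctions(
        CRYPT_LOCAL, L"SSL", NCRYPT_SCHANNEL_INTERFACE, &cb, &funcs);

    if (BCRYPT_SUCCESS(status) && funcs != NULL) {
        for (ULONG i = 0; i < funcs->cFunctions; i++)
            wprintf(L"%ls\n", funcs->rgpszFunctions[i]);  /* e.g. TLS_ECDHE_... */
        BCryptFreeBuffer(funcs);
    }
    return 0;
}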
Qualys SSL Labs publishes a more graphical view. Select your desired version of IE and OS from the list for more details.
https://www.ssllabs.com/ssltest/clients.html

How can I implement custom verification of an SSL certificate in Ruby's SSLServer?

I'm using SSL to form a trusted connection between two peers. Each peer knows who it expects to be connecting to (or accepting a connection from) at a given time. It should only accept valid certificates, and further, it should only accept certificates with certain attributes (probably by checking the canonical name).
So far, I can get the two sides to talk, based on the example in this question, and its answer. Each side can print out the certificate presented by the other peer.
I'm not sure what the correct way to verify these certificates is, though. The obvious way would be to just look at the certificates after the connection is made and drop the connection if it doesn't meet our expectations.
Is there a more correct way to do this? Is there a callback which is given the peer's presented certificate and can give it a thumbs-up or thumbs-down? Or is the right thing to handle it after SSL is done with its work?
In this case, I am the CA, so trusting the CA isn't an issue. I'm signing these certificates on behalf of my users. The canonical names aren't even domain names. Users connect peer-to-peer. I want the client software I distribute to verify that the connecting user has a certificate I signed and is the right user.
It sounds like you are running a private PKI. Just load the root of the trust chain into OpenSSL with SSL_CTX_load_verify_locations or SSL_load_verify_locations.
Be sure to use SSL_VERIFY_PEER to ensure OpenSSL performs the verification. The call would probably look like SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);. If peer validation fails, then the connect will fail.
There are ways to ensure the connect succeeds and then catch the error later. The trick is to set the verify callback, have the verify callback always return 1, and then call SSL_get_verify_result after the connection is set up. See SSL/TLS Client for an example.
Note: in all cases, you still have to perform name checking manually. OpenSSL currently does not do it (it's in HEAD for OpenSSL 1.1.0). See libcurl or PostgreSQL for some code you can rip.
An example of an SSL/TLS client is provided by OpenSSL at its wiki. See SSL/TLS Client. There's no server code or example at the moment.
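In OpenSSL C terms (Ruby's OpenSSL binding wraps the same calls), the "let the handshake complete, then check" variant might look roughly like the sketch below; the CA file name is a placeholder, and the same pattern works on the accepting side.

#include <openssl/ssl.h>
#include <openssl/x509.h>

static int accept_all_cb(int preverify_ok, X509_STORE_CTX *x509_ctx)
{
    (void)preverify_ok; (void)x509_ctx;
    return 1;                         /* never abort the handshake here */
}

SSL_CTX *make_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_method());
    /* trust only your private root */
    SSL_CTX_load_verify_locations(ctx, "my-root-ca.pem", NULL);
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, accept_all_cb);
    return ctx;
}

/* call after SSL_connect()/SSL_accept() has succeeded */
int check_peer(SSL *ssl)
{
    long rc = SSL_get_verify_result(ssl);
    X509 *peer = SSL_get_peer_certificate(ssl);
    if (rc != X509_V_OK || peer == NULL)
        return 0;                     /* did not chain to our root */
    /* ...inspect the subject / canonical name here and decide... */
    X509_free(peer);
    return 1;
}

In Ruby, the corresponding knobs are ca_file, verify_mode and verify_callback on OpenSSL::SSL::SSLContext.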
I'm not sure what the correct way to verify these certificates is, though. The obvious way would be to just look at the certificates after the connection is made and drop the connection if it doesn't meet our expectations.
There's a lot to this, and some of it is not obvious. I'm going to break the answer up into parts, but all the parts try to answer your question.
First, you can verify the certificates are well formed. The group responsible in the context of the Web is the CA/Browser Forum. They have baseline and extended requirements for creating certificates:
Baseline Certificate Requirements, https://www.cabforum.org/Baseline_Requirements_V1_1_6.pdf
Extended Validation Certificate Requirements, https://www.cabforum.org/Guidelines_v1_4_3.pdf
In the baseline docs, you will find, for example, that an IP listed as the Common Name (CN) must also be listed in the Subject Alternative Name (SAN). In the extended docs, you will find that private IPs (reserved per RFC 1918) cannot be present in an extended validation (EV) certificate, and EV certificates cannot contain wildcards.
Second, you can perform customary validation according to RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, http://www.ietf.org/rfc/rfc5280.txt.
The customary checks are the ones like hostname matching, time period validity checks, and verifying an end-entity or leaf certificate (client or server certificate) chains back to a root. In browsers using CAs, that's any number of hundreds of trusted roots or intermediates.
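To make the "chains back to a root" check concrete, here is a minimal sketch using the OpenSSL C API; it assumes you already have the leaf, any intermediates, and your trusted root as X509 objects, and it leaves hostname matching aside (that is a separate, manual step).

#include <openssl/x509.h>
#include <openssl/x509_vfy.h>

int chains_to_root(X509 *leaf, STACK_OF(X509) *intermediates, X509 *root)
{
    int ok = 0;
    X509_STORE *store = X509_STORE_new();
    X509_STORE_CTX *csc = X509_STORE_CTX_new();

    if (store && csc) {
        X509_STORE_add_cert(store, root);          /* the only trust anchor   */
        if (X509_STORE_CTX_init(csc, store, leaf, intermediates) == 1)
            ok = (X509_verify_cert(csc) == 1);     /* chain + validity dates  */
    }
    if (csc)   X509_STORE_CTX_free(csc);
    if (store) X509_STORE_free(store);
    return ok;
}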
If you choose to perform revocation checking, then you will probably DoS your application (how is that for obvious!). A mobile client on a 3G network cannot download and process a 30 MB CRL - it will surely hang the application. And an application cannot perform an OCSP query when the URL is wrong - that will surely fail.
Also, if you are performing hostname matching that includes wildcards, then care must be taken to handle ccTLDs properly. ccTLDs are like *.eu, *.us, or இலங்கை (nic.lk). There's some 5000 or so of them and Mozilla offers a list at http://publicsuffix.org/ (alternately, https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1).
Third, CAs don't warrant anything, so the answers you get from a CA are worthless. If you don't believe me, then check their Certification Practice Statement (CPS). For example, here is an excerpt from Apple's Certification Authority Certification Practice Statement (18 Sept 2013, page 6):
2.4.1. Warranties to Subscribers
The AAI Sub-CA does not warrant the use of any Certificate to any Subscriber.
2.4.2. CA disclaimers of warranties
To the extent permitted by applicable law, Subscriber agreements, if applicable, disclaim warranties from Apple, including any warranty of merchantability or fitness for a particular purpose
That means that they don't warrant the binding of the public key to the organization through the issuer's signature. And that's the whole purpose of X.509!
Fourth, DNS does not provide authentic answers. So you might get a bad answer from DNS and happily march over to a server controlled by your adversary. Or, 10 of the 13 root DNS servers under US control may collude to give you a wrong answer in the name of US national security.
Trying to get an authentic response from a non-US server is near impossible. The "secure DNS" pieces (sans DNSSEC) are still evolving, and I'm not aware of any mainstream implementations.
In the case of colluding US servers, a quorum won't work because the US holds an overwhelming majority.
The problem here is that you are making security decisions based on input from external services (CA and DNS). Essentially, you are conferring too much trust on untrustworthy actors.
A great treatment of the problems with PKI and PKIX is Dr. Peter Gutmann's Engineering Security at www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf. Be sure to read Chapters 1 and 6. Dr. Gutmann has a witty sense of humor, so it's not dry reading. Another great book is Ross Anderson's Security Engineering at http://www.cl.cam.ac.uk/~rja14/book.html.
You have a couple of defenses with all the problems caused by PKI, PKIX, and CAs. First, you can run a private PKI where you are your own certificate authority. In this case, you are not trusting an outsider. Bad DNS answers and rogue servers should be caught because the server's certificate will not form a valid chain.
Second, you can employ a security diversification strategy. Gutmann writes about it in his Engineering Security book, and you should visit "Security through Diversity" starting on page 292 and the "Risk Diversification for Internet Applications" section on page 296.
Third, you can employ a Trust-On-First-Use (TOFU) or key continuity strategy. This is similar to Wendlandt, Andersen and Perrig's Perspectives: Improving SSH-style Host Authentication with Multi-Path Probing, or SSH's StrictHostKeyChecking option. In this strategy, you do the customary checks and then pin the certificate or public key. You can also ask for others' views of the certificate or public key. Unexpected certificate or key changes should set off alarm bells.
OWASP has a treatment of Certificate and Public Key Pinning at https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning. Note: some places rotate their certificates every 30 days or so, so you should probably pin public keys if possible. The list of frequent rotators includes Google, and it's one of the reasons tools like Certificate Patrol make so much noise.
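As a rough illustration of public-key pinning with the OpenSSL C API: hash the peer's DER-encoded SubjectPublicKeyInfo and compare it against a value baked into your client. The pin below is a placeholder.

#include <string.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <openssl/evp.h>
#include <openssl/sha.h>

/* placeholder: replace with the SHA-256 of your server key's SPKI */
static const unsigned char EXPECTED_SPKI_SHA256[SHA256_DIGEST_LENGTH] = { 0 };

int peer_key_is_pinned(SSL *ssl)
{
    int ok = 0;
    X509 *peer = SSL_get_peer_certificate(ssl);
    if (peer) {
        EVP_PKEY *key = X509_get_pubkey(peer);
        unsigned char *spki = NULL;
        int len = key ? i2d_PUBKEY(key, &spki) : 0;    /* DER SubjectPublicKeyInfo */
        if (len > 0) {
            unsigned char digest[SHA256_DIGEST_LENGTH];
            SHA256(spki, (size_t)len, digest);
            ok = (memcmp(digest, EXPECTED_SPKI_SHA256, sizeof digest) == 0);
            OPENSSL_free(spki);
        }
        if (key) EVP_PKEY_free(key);
        X509_free(peer);
    }
    return ok;   /* 1 = the presented key matches the pin */
}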
