What steps should I take to validate an SSL certificate manually, as browsers do?

How do browsers like Chrome check an SSL certificate?
Are there any online databases or websites that they use?
What steps are taken by browsers to validate an SSL certificate?
Am I able to do it manually, without using a browser?

How do browsers like Chrome check an SSL certificate?
The certificate and chain are sent by the server during the SSL handshake. The browser builds the trust chain from the certificate's issuer, the provided chain certificates and the local root certificates. It checks the expiration and purpose of the certificate and also checks the subject alternative names (and maybe the subject too) to make sure that the certificate is actually issued for the domain in the URL. It might also do some checks for certificate revocation.
For details see SSL Certificate framework 101: How does the browser actually verify the validity of a given server certificate? and How Do Browsers Handle Revoked SSL/TLS Certificates?.
Are there any online databases or websites that they use?
Not really. The necessary trust store is local. They might check revocation against some online resource, though. See Is Certificate validation done completely local?.
Am I able to do it manually, without using a browser?
Sure, what the browser does could in theory be replicated manually. You can for example access the site and get the leaf and intermediate certificates with openssl s_client -showcerts .... You can then use openssl verify to verify the certificate chain, see also Verify a certificate chain using openssl verify. Then you need to look at the leaf certificate with openssl x509 -text ... to check purpose, expiration and the subject. Revocation is tricky but could be done with the help of openssl crl and openssl ocsp, although this does not really reflect what browsers do.
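Put together, a rough manual walkthrough with those commands might look like this (hypothetical host; the CA bundle path varies by distribution, and the -ext option needs a reasonably recent OpenSSL, otherwise run -text and look for the extension):
# Fetch the leaf and intermediate certificates the server presents
openssl s_client -connect www.example.com:443 -servername www.example.com -showcerts </dev/null
# Save the leaf as leaf.pem and any intermediates as chain.pem, then check the chain
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt -untrusted chain.pem leaf.pem
# Inspect validity period, purpose, subject and SAN of the leaf
openssl x509 -in leaf.pem -noout -dates -purpose -subject -ext subjectAltName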

The official algorithm for validating any SSL/TLS certificate is defined by PKIX as modified by OCSP. For TLS nowadays the OCSP token is often transported by 'stapling' in the TLS handshake instead of by a separate connection, which requires several other RFCs, but that only affects transport, not the actual validation by the relier. For HTTPS specifically, the client must also check server identity aka 'hostname' as defined by rfc2818.
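For example, whether a server staples an OCSP response into the handshake can be checked with openssl s_client (hypothetical host):
openssl s_client -connect www.example.com:443 -status </dev/null
# Look for an "OCSP Response Status: successful" block near the top of the output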
In practice, browsers vary somewhat. Chrome mostly uses a Google-determined scheme to 'push' revocation data they select, but this has changed from time to time. Firefox, last I heard, used its own 'OneCRL' scheme. Also, although the standard and traditional practice was to check the hostname against the SAN if present and otherwise fall back to Subject.CN, Chrome for some years now requires a SAN and never uses the CN; you can find dozens of Qs on several stacks along the lines of "my self-signed or otherwise DIY cert not from a real CA stopped working on Chrome".
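A quick way to see which names a server's leaf certificate actually covers (hypothetical host):
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"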
If by 'do it manually' you really mean manually, that will be a lot of work. If you mean with tools other than a browser offline, somewhat easier; OpenSSL (if installed) can do most of this, although you need more options than shown in Steffen's link to get it right.
If you mean with tools other than a browser online, absolutely. The WWW has become extremely popular in recent decades, and there are zillions of programs and libraries for accessing it, nearly all of them including HTTPS (although two decades ago that was less common), which includes validating the certificate -- at least by default; many have options to disable, bypass, or override validation. There are standalone tools like curl and wget -- or openssl s_client can do the SSL/TLS and certificate part without doing HTTP on top. There are innumerable libraries and middlewares like ssl in python (or requests using it), tls in nodejs, (older) HttpsURLConnection and (newer) java.net.http in Java as well as third-parties like Apache HttpComponents; I'm sure there are counterparts for perl and dotnet although I'm not familiar with them. As well as powershell, which is fuzzy on the program/library distinction.
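As one concrete example, curl validates the server certificate by default and has the usual switches to point it at a different trust store or (insecurely) skip validation (hypothetical host and CA file name):
curl -v https://www.example.com/ -o /dev/null
curl --cacert my-ca.pem https://www.example.com/
curl -k https://www.example.com/    # disables validation entirely; avoid outside testing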

Related

Guzzle disabling certificate verification to false, how insecure is it?

Recently I found myself working with Guzzle while making requests to another server to post and fetch some data, in some cases tokens. But I was getting a certificate invalid error, and even after getting a new .pem certificate, Guzzle still wasn't accepting it and kept throwing that error. So finally, I did what the "Internet" said:
$guzzleClient = new Client([
    'verify' => false
]);
Now although this solution works, I am not sure how insecure it can get. Do I need to worry? If yes, in what scenarios?
Well, this is a big problem if, for example, you are:
having a login system on the request you are sending using Guzzle,
having payment/checkout on the request,
passing basically any sensitive data to the other server.
When you pass data without verifying the SSL certificate, your requests can be intercepted with tools like Burp Suite, Wireshark, Cain & Abel or Ettercap. These are sniffing/interception programs, anyone can get a copy from the internet as they are freely available, and an attacker using them can read the entire request in plaintext. So it is highly recommended to keep SSL verification on when passing sensitive data.
Worth mentioning: nowadays even SSL isn't a guarantee, because attackers can try to strip it with a tool like SSLstrip. But SSL will still make it much harder for them to get at your requests: if they try, your website will sometimes make incomplete requests and the browser will notify the user that the connection isn't secure, which makes it very hard for the attacker to get the user's data.
TLS/SSL in common configurations is meant to give you three things:
confidentiality - no third party is able to read the messages sent and received,
integrity - no third party is able to modify the messages sent and received,
server authentication - you know who you are talking to.
What you do by setting verify to false is disable certificate verification. This immediately disables the server-authentication feature, and it means losing confidentiality and integrity too when facing an active attacker who has access to your data stream.
How is that?
First of all, TLS/SSL relies on a Public Key Infrastructure. Without going into too much detail: you hold on your machine a set of certificates of so-called Certification Authorities (CAs) whom you trust. When you open a new connection to a service, you get the service's certificate, and in the process of verification you check, among other things, whether the certificate was issued by a CA you trust. If yes, the communication may proceed. If no, the communication channel is closed.
Attack patterns
Disabling certificate verification allows Man-in-the-Middle (MitM) attacks that can easily be performed in your local network (e.g. via ARP poisoning), in the local network of the service you are calling, or in the networks in between. As we usually do not trust the network completely, we verify.
Imagine me performing an attack on you. I have performed ARP poisoning; now I can see all your traffic. It's encrypted, isn't it? Well, not usefully. The TCP handshake and TLS handshake you believe you have performed with the target service, you have actually performed with me. I have presented you not with the certificate of the target service, as I am unable to fake it, but with my own. But you did not validate it, so you did not reject it. I have also opened a connection between me and the target service on your behalf, so I can look into the decrypted traffic, modify it if necessary, and reply to you to make you believe everything is OK.
First of all - all your secrets are belong to me. Second of all - I am able to perform attacks on both you and the target service (which might have been secured by authentication mechanisms, but now is not).
How to fix this?
In the 21st century there should be little reason to disable TLS verification anywhere. Configuring it to work properly can be a pain, though, even more so when you are doing it for the first time. In my experience the most common issues in the microservice world are:
the target certificate is self-signed,
you are missing a CA root certificate in your trust store,
the microservice does provide its certificate, but does not provide an intermediate CA certificate.
It's hard to guess what your issue is. We would need to dig deeper.
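A quick way to narrow it down is to look at what the server actually sends and which verify error OpenSSL reports (hypothetical host; the exact wording varies slightly between OpenSSL versions):
openssl s_client -connect api.example.com:443 -servername api.example.com -showcerts </dev/null
# "self signed certificate"                      -> a self-signed leaf certificate
# "self signed certificate in certificate chain" -> the (private) root CA is not in your trust store
# "unable to verify the first certificate"       -> the intermediate CA certificate is not being sent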
While the other answers point out some really good points about how important SSL/TLS is, your connection is still encrypted and the remote endpoint you're using has https:// in it as well. So you're not entirely disabling SSL when you set verify to false, if I'm not mistaken. It's just less secure, since you're not verifying that the certificate of the remote server is signed by a Certificate Authority (CA) in the CA bundle.
Do you need to worry?
If this is something on your production, ideally you'd want things to be secure and configured correctly, so yes.
By not verifying the certificate, as Marek Puchalski mentioned, there's a possibility that the server is not the one you think it is, and it allows man-in-the-middle (MITM) attacks as well. More about MITM here, and peer verification here.
Why is it happening & how do you fix it?
The most common issue is a misconfigured server, especially the PHP configuration. You can fix your PHP configuration following this guide, which comes down to adding the CA root certificate bundle to your configuration. Alternatively you can pass the bundle to Guzzle directly.
Another common issue is that the remote server uses a self-signed certificate. Even if you have configured your CA bundle in your trust store, that certificate can't be trusted, since it's not signed by a trusted CA. So the server should be configured with an SSL certificate signed by a CA. If that's not possible, you can manually trust its CA root, though this comes with some security concerns as well.
Hope this helped :)

Is a world-known CA certificate compulsory for an HTTPS site?

I want my site to be secure using HTTPS. I managed to get a self-signed key listed as a trustedCertEntry, as I made my own CA certificate (with a different CN), which I then used to sign my own private certificate.
It works smooth testing it with openssl with something like:
openssl s_client -connect www.mydomain.com:80 -tls1 -state
Thus, the browser doesn't report a self-signed-certificate error, as it sees a different CA.
But I get a SEC_ERROR_UNKNOWN_ISSUER error. That still seems logical to me, as nobody knows me as a CA. It is supposed to work if the user adds an exception for me.
I thought this trick was acceptable and that this is how many HTTPS sites work, since you may visit an unknown site, trust that page, and still want to encrypt communications against third-party watchers.
After trying to get a clear response for it, beyond coding that I will find resources, my question is:
If I want to have a site for which users don't have to add an exception on the first visit, do I have to get a certificate from a "world-known" CA? Or am I missing a solution for self-signing my certificate with my own CA certificate?
Technically speaking, the answer is: yes, you will have to get a certificate from a CA that is trusted by your users' browsers via a chain of intermediate CAs that ends at an inherently trusted root CA. The accepted answer to this question explains how it works: SSL Certificate framework 101: How does the browser actually verify the validity of a given server certificate?
Having said that, if your "only" concern is to provide encrypted connections, you might be able to leverage the Let's Encrypt CA, which provides free certificates for that purpose. Those certificates will be only domain-validated, which provides a weaker kind of assurance of identity than, for example, an Extended Validation Certificate.
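As an illustration, obtaining such a domain-validated certificate with Let's Encrypt's certbot client typically looks like this (hypothetical domain and webroot path):
sudo certbot certonly --webroot -w /var/www/example -d www.example.com
# the key and certificate chain end up under /etc/letsencrypt/live/www.example.com/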
Depending on the browser used, there will be a minimal difference in user experience between DV and EV certificates. In Safari, for example, the user will see a grey padlock in the address bar for the lower-assurance DV-backed sites, and a green padlock when higher identity assurance is provided.
Whether the former is good enough for you (or your customers) depends on your situation.
In case you want to understand what "inherently trusted" actually means for web browsers, see this blog post: Who your browser trusts, and how to control it.

Why is Firefox saying that my website is using an "invalid security certificate"?

I have been using a wildcard SSL certificate for several of my company's B2B websites for some time. Recently, we noticed that Google Chrome started displaying a red unlocked lock with HTTPS crossed out for all of these websites. The solution I found was to reissue the certificate from the provider (Network Solutions). So, I did this, and updated the certificate for each of the websites, and the Google Chrome issue went away (HOORAY!). However, when visiting any of these websites in Firefox, it displays a security message stating the website is using an invalid security certificate.
How can I resolve this so that our users are not confused when visiting these websites?
P.S. These websites are running on IIS6.
It looks as if the certificate chain is incomplete and, thus, Firefox (and likely other browsers) cannot verify the site certificate. Normally browsers store intermediate certificates they have seen in the past - that might be a reason why it works in Chrome.
You can test using https://www.ssllabs.com/ssltest/analyze.html.
Depending on the server software (here, for Apache httpd and other servers which read the certificate in PEM/DER format), you can just paste the intermediate certificates together with the certificate in one .pem file (which is used as Certificate file).
The chain (intermediate certificates) is normally provided by your CA. In your case you could also use Chrome to review the certificate and then store/extract all intermediate certificates from the certificate view.
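Something along these lines (hypothetical file names) rebuilds the served chain and lets you check what the server now sends:
cat www_example_com.crt intermediate.crt > fullchain.pem    # leaf first, then intermediate(s)
openssl s_client -connect www.example.com:443 -showcerts </dev/null    # every chain element should be listed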
You can get this "certificate is not trusted" error if the server doesn't send a required intermediate certificate.
Firefox automatically stores intermediate certificates that servers send in the Certificate Manager for future usage.
If a server doesn't send a full certificate chain, then you won't get an untrusted error when Firefox has stored the missing intermediate certificates from visiting a server in the past that did send them, but you do get an untrusted error if those intermediate certificates aren't stored yet.
You can inspect the certificate chain via a site like this:
http://www.networking4all.com/en/support/tools/site+check/
I followed the instructions in the linked guide to import the intermediate certificates.
In IIS, there is an option under Directory Security to "Enable certificate trust list". I enabled it and added the "AddTrust External CA Root" to the CTL certificates list and this appears to have fixed the issue.

Standalone DartVM: Self-Signed Certificates and SSL

I've been struggling recently with using the standalone DartVM and SSL as a client. I'm of the understanding that Dart uses Mozilla NSS to manage the certificates. What I'm having a problem with is that on Windows, for example, there exist no binaries that I can find (other than third parties compiling the Mozilla source and uploading to Mega or similar, which is pretty alarming if you ask me) released for the Windows platform. Compiling this C++ code is not a trivial task, and I've not the resources to do so on my own under the Windows platform. This is why I write Dart (or other high-level languages) in the first place.
Despite that, the error message I get when attempting to connect securely and being presented with a self-signed (or, more technically correct, untrusted-authority) certificate is that the OS itself doesn't trust the certificate. On Windows, this is not the case. The certificate in question is a CA root certificate of my own generation, with a proper authority/signing chain, installed into the Windows trusted roots manually. Both Chrome and Internet Explorer (which use the underlying Windows certificate store) trust my certificate(s) without any warnings after having done this. So if the DartVM is not using the "OS" to validate a certificate upon handshake, then that message is very uninformative/misleading.
What can be done to overcome this outside of compiling NSS and trying to figure out just how to import my certificates by way of over-complicated and under-documented steps? Is there not a parameter that one could specify when initiating a secure connection to ignore SSL errors of this nature?
My web server forces the use of HTTPS, so dropping back to plain HTTP is not an option for me. I also don't want to trust, and much less want to pay, a third party for certificates that are pretty much only used internally, which is why I generated a wildcard certificate under my own root CA in the first place. Paying for a wildcard certificate, for multiple domains that aren't necessarily exposed to the public or meant for public use, is astronomically priced and completely out of the question.
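For reference, generating a private root CA and a wildcard certificate signed by it looks roughly like this (hypothetical names and paths; note that modern clients also expect a subjectAltName extension, which these bare commands do not add):
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 -keyout my-ca.key -out my-ca.crt -subj "/CN=My Internal Root CA"
openssl req -newkey rsa:2048 -nodes -keyout wildcard.key -out wildcard.csr -subj "/CN=*.internal.example.com"
openssl x509 -req -in wildcard.csr -CA my-ca.crt -CAkey my-ca.key -CAcreateserial -days 825 -out wildcard.crt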

Why is Ruby unable to verify an SSL certificate?

This is my first time trying to use the XMLRPC::Client library to interact with a remote API and I keep receiving this error:
warning: peer certificate won't be verified in this SSL session
Searching around I've found loads of people that have gotten that error. Usually it's with self-signed certificates and they just want it to go away, so they do something dirty like monkey-patch the way XMLRPC::Client opens its HTTP session.
I first assumed this was simply the client not caring whether the certificate was valid or not, so I continued my search and came across this gem. It simply forces verification of all SSL certificates and throws a hard error if it's not able to. This was exactly what I wanted. I included it, ran the code again, and now I'm getting this:
OpenSSL::SSL::SSLError:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B:
certificate verify failed
Of course! The certificate is bad! But I double check just to make sure with openssl's builtin s_client like so:
openssl s_client -connect sub.example.com:443
and what do I get:
CONNECTED(00000003)
---
Certificate chain
<snip>
Verify return code: 0 (ok)
So now we get to my question. OpenSSL (the command line version) says the certificate is good. OpenSSL (the Ruby library) disagrees. All of my web browsers say the certificate is good.
A few additional details that might be of use. The certificate is a wildcard but is valid for the domain. The openssl s_client was run on the same machine seconds apart from the Ruby code. This is Ruby 1.8.7 p357 which is installed with RVM.
Does Ruby use something other than the CA bundle provided by the host OS? Is there a way to tell Ruby to use a specific CA bundle or the system one?
If you are only interested in how to make Ruby behave the same way as OpenSSL s_client or your browser does, you may skip to the very last section; I'll cover the fine print in what follows.
By default, the OpenSSL::X509::Store used for making the connection doesn't use any trusted certificates at all. Based on your knowledge of the application domain, you would typically feed an instance of X509::Store with the trusted certificate(s) that are relevant for your application. There are several options for this:
Store#add_file takes a path to a PEM/DER-encoded certificate
Store#add_cert takes an instance of X509::Certificate
Store#add_path takes a path to a directory where trusted certificates can be found
The "Browser" Approach
This is in contrast to the approach taken by browsers, by Java (cacerts), or by Windows with its own internal store of trusted certificates. There the software comes pre-equipped with a set of trusted certificates that is considered "good" in the opinion of the software vendor. Generally this is not a bad idea, but if you actually look into these sets, you will soon notice that there are just too many certificates. An individual can't really tell whether all of them should be trusted blindly or not.
The Ruby Approach
The requirements of your typical Ruby application, on the other hand, are a lot different from those of a browser. A browser must be able to let you navigate to any "legitimate" web site that comes with a TLS certificate and is served over https. But in a typical Ruby application you will only have to deal with a few services that use TLS or otherwise require certificate validation.
And there is the benefit of the Ruby approach - although it requires more manual work, you will end up with a hand-tailored solution that exactly trusts the certificates it should trust in your given application context. This is tedious, but security is much higher this way because you expose a lot less attack surface. Take recent events: if you never had to include DigiNotar or any other compromised root in your trust set, then there's no way such breaches can affect you.
The downside of this, however, as you have already noticed, is that by default, if you don't actively add trusted certificates, the OpenSSL extension will not be able to validate any peer certificate at all. In order to make things work, you have to set up the configuration manually.
This inconvenience has led to a lot of dubious measures to circumvent it, the worst of all being to globally set OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE. Please don't do this. We have even made jokes about adding code that lets your application crash randomly if we encounter that hack :)
If manual trust setup seems too complicated, I'll offer an easy alternative now that makes the OpenSSL extension behave exactly the same as OpenSSL CLI commands like s_client.
Why s_client can verify the certificate
OpenSSL uses a similar approach to browsers and Windows. A typical installation will put a bundle of trusted certificates somewhere on your hard disk (something like /etc/ssl/certs/ca-bundle.crt) and this will serve as the default set of trusted certificates. That's where s_client looks when it needs to verify peer certificates and that's why your experiment succeeded.
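You can see where that default bundle lives on a given machine and verify a saved certificate against it yourself (file names are hypothetical and the bundle path varies by distribution):
openssl version -d    # prints OPENSSLDIR, the base of the default trust store
openssl verify -CAfile /etc/ssl/certs/ca-bundle.crt -untrusted intermediate.pem leaf.pem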
Making Ruby act like s_client
If you'd still like to have the same comfort when validating certificates with Ruby, you can tell it to use the OpenSSL bundle of trusted certificates if available on your system by calling OpenSSL::X509::Store#set_default_paths. Additional information can be found here. To use this with XMLRPC::Client, simply ensure that set_default_paths gets called on the X509::Store it uses.
Thanks to emboss's answer, which helped me figure things out. Here is my solution: it monkey-patches Net::HTTP so I can enable the system trust certificates globally without changing my client code. It works for gems that rely on net/http, e.g. rest-client.
require 'net/http'
require 'openssl'

module SetDefaultOpenSSLTrustStore
  def initialize(*args, **kwargs)
    super
    # Build a store backed by the system's default CA paths and hand it to Net::HTTP
    cert_store = OpenSSL::X509::Store.new
    cert_store.set_default_paths
    @cert_store = cert_store
  end
end

Net::HTTP.prepend SetDefaultOpenSSLTrustStore
If you have a ca-certificates file, just do this:
http.ca_file = <YOUR CA-CERT FILE PATH>
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.verify_depth = 5
