How to get https certificate working on local Laravel Homestead site - laravel

I'm getting this problem:
The error that I'm seeing in Windows 10 Chrome Version 65.0.3325.181 (Official Build) (64-bit) is:
Your connection is not private
Attackers might be trying to steal your information from ((mysite)) (for example, passwords, messages, or credit cards). Learn more NET::ERR_CERT_AUTHORITY_INVALID
This page is not secure (broken HTTPS).
Certificate - missing
This site is missing a valid, trusted certificate (net::ERR_CERT_AUTHORITY_INVALID).
Firefox Quantum 59.0.2 (64-bit) says:
Your connection is not secure
The owner of ((mysite)) has configured their website improperly. To protect your information from being stolen, Firefox has not connected to this website.
Connection is Not Secure
Could not verify this certificate because the issuer is unknown.
I have already tried: https://stackoverflow.com/a/47755133/470749
vboxmanage --version
5.2.6r120293
vagrant -v
Vagrant 2.0.2
git branch
* (HEAD detached at v7.3.0)
vagrant box list
laravel/homestead (virtualbox, 5.2.0)
vagrant box update
==> vboxHomestead: Checking for updates to 'laravel/homestead'
vboxHomestead: Latest installed version: 5.2.0
vboxHomestead: Version constraints: >= 5.2.0
vboxHomestead: Provider: virtualbox
==> vboxHomestead: Box 'laravel/homestead' (v5.2.0) is running the latest version.
I wonder if this means that I'm not yet using release 7.1.0 (whose changelog includes "sign SSL certificates with a custom root certificate"), and whether that's why I have this HTTPS problem.
What are the next steps I should try now to get the certificate working?

Unfortunately, I don't have an easy way of checking it on Windows, so I'm going to use VirtualBox running on Linux here. Install Vagrant, then:
$ vagrant box add laravel/homestead
$ git clone https://github.com/laravel/homestead.git
$ cd homestead
$ git checkout v7.3.0
$ bash init.sh
I've simplified Homestead.yaml a bit (you might prefer to stick with the defaults):
---
ip: "192.168.10.10"
provider: virtualbox
folders:
    - map: /home/yuri/_/la1
      to: /home/vagrant/code
sites:
    - map: homestead.test
      to: /home/vagrant/code/public
Then:
$ mkdir -p ~/_/la1/public
$ echo '<?php echo "it works";' > ~/_/la1/public/index.php
$ vagrant up
$ vagrant ssh -c 'ls /etc/nginx/sites-enabled'
homestead.test
$ vagrant ssh -c 'cat /etc/nginx/sites-enabled/homestead.test'
server {
    listen 80;
    listen 443 ssl http2;
    server_name .homestead.test;
    root "/home/vagrant/code/public";
    ...
    ssl_certificate /etc/nginx/ssl/homestead.test.crt;
    ssl_certificate_key /etc/nginx/ssl/homestead.test.key;
}
As we can see, the certificates are in /etc/nginx/ssl:
$ vagrant ssh -c 'ls -1 /etc/nginx/ssl'
ca.homestead.homestead.cnf
ca.homestead.homestead.crt
ca.homestead.homestead.key
ca.srl
homestead.test.cnf
homestead.test.crt
homestead.test.csr
homestead.test.key
I tried trusting the server certificate systemwide, but it didn't work out. It appeared on the Servers tab in Firefox's Certificate Manager, but that didn't make Firefox trust it. I could probably have added an exception, but trusting the CA certificate looks like a better option: it makes the browser trust any certificate the CA issues (e.g. for new sites running under Homestead). So we're going to go with the CA certificate here:
$ vagrant ssh -c 'cat /etc/nginx/ssl/ca.homestead.homestead.crt' > ca.homestead.homestead.crt
$ sudo trust anchor ca.homestead.homestead.crt
$ trust list | head -n 5
pkcs11:id=%4c%f9%25%11%e5%8d%ad%5c%2a%f3%63%b6%9e%53%c4%70%fa%90%4d%77;type=cert
    type: certificate
    label: Homestead homestead Root CA
    trust: anchor
    category: authority
Then I added 192.168.10.10 homestead.test to /etc/hosts, restarted Chromium, and it worked.
P.S. I'm running Chromium 65.0.3325.162, and Firefox 59.0.
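If your distribution lacks the trust utility (e.g. Debian/Ubuntu), a sketch of the equivalent steps (the file must keep a .crt extension):
# copy the CA cert where update-ca-certificates picks it up, then rebuild the bundle
sudo cp ca.homestead.homestead.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates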
Windows
Apparently, Windows doesn't have the trust utility. Under Windows there are two stores: the Local Machine and Current User Certificate stores. There's no point in using the Local Machine Certificate Store, since we only need this to work for our current user. Then there are substores, two predefined ones being of most interest: the Trusted Root Certification Authorities and Intermediate Certification Authorities stores, commonly referred to on the command line as root and CA.
You can access Chrome's Certificate Manager by following chrome://settings/?search=Manage%20certificates, then clicking Manage certificates. Of most interest are Trusted Root Certification Authorities and Intermediate Certification Authorities tabs.
One way to manage certificates is via the command line:
>rem list Current User > Trusted Root Certification Authorities store
>certutil.exe -store -user root
>rem list Local Machine > Intermediate Certification Authorities store
>certutil.exe -store -enterprise CA
>rem GUI version of -store command
>certutil.exe -viewstore -user CA
>rem add certificate to Current User > Trusted Root Certification Authorities store
>certutil.exe -addstore -user root path\to\file.crt
>rem delete certificate from Current User > Trusted Root Certification Authorities store by serial number
>certutil.exe -delstore -user root 03259fa1
>rem GUI version of -delstore command
>certutil.exe -viewdelstore -user CA
The results are as follows (for both Local Machine and Current User Certificate stores):
root
    homestead.test.crt: error
    ca.homestead.homestead.crt: appears in the Trusted Root Certification Authorities tab
CA
    homestead.test.crt: doesn't work, appears in the Other People tab
    ca.homestead.homestead.crt: doesn't work, appears in the Intermediate Certification Authorities tab
Other options would be double-clicking on a certificate in Explorer, importing certificates from Chrome's Certificate Manager, using Certificates MMC Snap-in (run certmgr.msc), or using CertMgr.exe.
For those who have grep installed, here's how to quickly check where the certificate is:
>certutil.exe -store -user root | grep "homestead\|^root\|^CA" ^
& certutil.exe -store -user CA | grep "homestead\|^root\|^CA" ^
& certutil.exe -store -enterprise root | grep "homestead\|^root\|^CA" ^
& certutil.exe -store -enterprise CA | grep "homestead\|^root\|^CA"
So, installing the CA certificate into the Current User > Trusted Root Certification Authorities store seems like the best option. And don't forget to restart your browser.
more in-depth explanation of how it works
In the Vagrantfile, Homestead requires scripts/homestead.rb, then runs Homestead.configure. That's the method that configures Vagrant to make all the needed preparations.
There we can see:
if settings.include? 'sites'
  settings["sites"].each do |site|
    # Create SSL certificate
    config.vm.provision "shell" do |s|
      s.name = "Creating Certificate: " + site["map"]
      s.path = scriptDir + "/create-certificate.sh"
      s.args = [site["map"]]
    end
    ...
    config.vm.provision "shell" do |s|
      ...
      s.path = scriptDir + "/serve-#{type}.sh"
      ...
    end
    ...
  end
end
So these two scripts create the certificate and the nginx config, respectively.
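For the curious, a minimal sketch of what such signing looks like, assuming the CA key and certificate already exist (the openssl flags are illustrative, not Homestead's exact invocation):
# generate a key and a signing request for the site
openssl req -new -newkey rsa:2048 -nodes \
    -keyout homestead.test.key -out homestead.test.csr \
    -subj "/CN=homestead.test"
# sign the request with the local root CA
openssl x509 -req -in homestead.test.csr \
    -CA ca.homestead.homestead.crt -CAkey ca.homestead.homestead.key \
    -CAcreateserial -out homestead.test.crt -days 365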
further reading
How to make browser trust localhost SSL certificate?

Apparently you have to add your cert to the Trusted Root CA store. I let Windows decide the store automatically, and that did not work; I also added it to my personal store, which did not work either.
So the steps (if you are on Windows) are: hit your Windows key, type "Internet Options", and open Internet Options. Then click the "Content" tab. From there click "Certificates", which is the middle button.
Then click Import and Next. Browse to where you saved the cert.
Then click "Place all certificates in the following store" and click browse and select the "Trusted Root Certificate Authorities".
And you should get a popup asking you to confirm and warning you and all that jazz.
And then make sure you restart your browser. In Chrome you can type this into the URL bar: chrome://restart. I hope this helped you!

Your issue is that the issuer is unknown. As you mentioned in the errors:
"This site is missing a valid, trusted certificate"
or
"This site is missing a valid, trusted certificate (net::ERR_CERT_AUTHORITY_INVALID)"
Let's first understand why this error occurs. Browsers have a list of trusted certificate authorities; you can see this list in the settings/preferences section of each browser. If your certificate is not issued by one of these authorities, you will get the above error.
FIXING IT ON LOCALHOST
I can think of two possible solutions:
Add the certificate manually to the browser, and the site will start opening over https.
OR
Sign the certificate with an already trusted authority, install the certificates on the local server, and configure a host entry in /etc/hosts with the same domain name for which you signed the certificate (see the sketch below).
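A sketch of such an /etc/hosts entry (the IP and domain are illustrative):
# map the certificate's domain to your local server
192.168.10.10   mysite.test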
I hope it will fix the issue.

Related

Self signed certificate in certificate chain issue using Azure CLI on Windows

I have some trouble configuring my Windows machine to work with the az command line tools. I have tested multiple configurations: one on a locally installed system and one in a Windows-based Docker container. I get the same error on both systems.
In case I issue the following command:
az login --tenant my-domain.org
I get the following error:
HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /my-domain.org/.well-known/openid-configuration (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1125)')))
The container has the following az and openssl version:
PS C:\azp> az version
{
  "azure-cli": "2.28.0",
  "azure-cli-core": "2.28.0",
  "azure-cli-telemetry": "1.0.6",
  "extensions": {}
}
PS C:\azp> openssl version
OpenSSL 1.1.1k 25 Mar 2021
The local system has the following az and openssl version:
(base) PS C:\01_Dev\dockerdevimage> az version
{
  "azure-cli": "2.26.1",
  "azure-cli-core": "2.26.1",
  "azure-cli-telemetry": "1.0.6",
  "extensions": {}
}
(base) PS C:\01_Dev\dockerdevimage> openssl version
OpenSSL 1.1.1c 28 May 2019
I tried to understand why I get the error, so I tested the connection with openssl as follows:
PS C:\azp> openssl s_client -proxy 10.76.209.147:3128 -connect login.microsoftonline.com:443 -showcerts
CONNECTED(00000180)
Can't use SSL_get_servername
depth=2 DC = org, DC = my-domain, CN = PKI, CN = BB-CA-DD <-- edited manually
verify error:num=19:self signed certificate in certificate chain
verify return:1
I have also tested with the same proxy server and a Linux container, where the az command works as expected:
$ az version
{
  "azure-cli": "2.25.0",
  "azure-cli-core": "2.25.0",
  "azure-cli-telemetry": "1.0.6",
  "extensions": {}
}
$ openssl version
OpenSSL 1.1.1f 31 Mar 2020
$ az login --tenant my-domain.org
The default web browser has been opened at https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/oauth2/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`.
You have logged in. Now let us find all the subscriptions to which you have access...
[
  {
    "cloudName": "AzureCloud",
    ...
On Linux container the openssl command returns the following output:
$ openssl s_client -proxy 10.76.209.147:3128 -connect login.microsoftonline.com:443 -showcerts
Can't use SSL_get_servername
depth=2 DC = org, DC = my-domain, CN = PKI, CN = BB-CA-DD
verify return:1
I have also imported the certificate with the following command based on this link:
PS C:\azp> Import-Certificate -FilePath .\BB-CA-DD.crt -CertStoreLocation Cert:\LocalMachine\Root\
No changes. I'm not sure how to proceed.
Maybe this issue is related to the following posts and articles:
Can OpenSSL on Windows use the system certificate store?
How to Use OpenSSL with a Windows Certificate Authority to Generate TLS Certificates
Installing TLS / SSL ROOT Certificates to non-standard environments
Edit:
I've moved the solution from here to an Answer block to highlight that the issue for me was resolved. Based on the reactions, I've concluded that it is indeed useful for others too.
Finally I was able to resolve the issue as follows:
I've found the following documentation:
Setting up certificates for Azure CLI on Azure Stack Development Kit
The basic idea is to find the python installation used for Azure CLI and update the related certificate file.
In my case the Azure CLI was installed with python on the following location:
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\python.exe
Running the suggested command returned the following:
PS > & "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\python.exe" -c "import certifi; print(certifi.where())"
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem
Updating the file mentioned above solved the az login issue for me. One of the Python installations provided by my-domain.org contained a properly configured cacert.pem file.
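For illustration, a sketch of that update from an elevated cmd prompt, assuming your corporate root CA is available as a PEM file (the file name is an assumption):
rem append the corporate root CA to the bundle used by Azure CLI's bundled Python
type my-root-ca.pem >> "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem"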
You can use the following method.
Your Azure CLI looks for the cert at this location (on Windows):
Default certificate authority bundle:
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem
Download the certificate of your Azure portal (portal.azure.com), append it to the cacert.pem file above, and try az login again after restarting PowerShell.
Alternatively
If you're using Azure CLI over a proxy server, it may cause the following error: SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",). To address this error, set the environment variable REQUESTS_CA_BUNDLE to the path of a certificate authority bundle file in PEM format.
Append the proxy server's certificate to this file, or copy the contents to another certificate file and set REQUESTS_CA_BUNDLE to it. You might also need to set the HTTP_PROXY or HTTPS_PROXY environment variables.
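A sketch from cmd (the bundle path is an assumption):
rem point Azure CLI at a bundle that includes the proxy's CA, then retry
set REQUESTS_CA_BUNDLE=C:\certs\proxy-ca-bundle.pem
az login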
Link to Ms Docs Solution
I solved this problem by changing the DNS servers for IPv4; maybe it will work for you too. I ran the az upgrade command after the DNS change. When az upgrade produced this error before the change, it said "check internet connection". After the change it upgraded successfully, and the related error was resolved.
I used Google DNS:
8.8.8.8
8.8.4.4
Then I set DNS back to the automatic option. I can continue to use it without any problems and can now log in with the az login command.
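If you prefer to change DNS from an elevated cmd prompt, a sketch (the interface name "Ethernet" is an assumption; list yours with netsh interface show interface):
rem set the primary DNS server, then add the secondary
netsh interface ip set dns name="Ethernet" static 8.8.8.8
netsh interface ip add dns name="Ethernet" 8.8.4.4 index=2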

curl of url stored as bash variable in MacOS [duplicate]

root@sclrdev:/home/sclr/certs/FreshCerts# curl --ftp-ssl --verbose ftp://{abc}/ -u trup:trup --cacert /etc/ssl/certs/ca-certificates.crt
* About to connect() to {abc} port 21 (#0)
* Trying {abc}...
* Connected to {abc} ({abc}) port 21 (#0)
< 220-Cerberus FTP Server - Home Edition
< 220-This is the UNLICENSED Home Edition and may be used for home, personal use only
< 220-Welcome to Cerberus FTP Server
< 220 Created by Cerberus, LLC
> AUTH SSL
< 234 Authentication method accepted
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Relating to the 'SSL certificate problem: unable to get local issuer certificate' error: it is important to note that this applies to the system sending the cURL request, NOT the server receiving the request.
Download the latest cacert.pem from https://curl.se/ca/cacert.pem
Add the '--cacert /path/to/cacert.pem' option to the curl command to tell curl where the local Certificate Authority file is.
(or) Create or add to a '.curlrc' file the line:
cacert = /path/to/cacert.pem
See 'man curl', the section about the '-K, --config <file>' option, for information about where curl looks for this file.
(or if using php) Add the following line to php.ini: (if this is shared hosting and you don't have access to php.ini then you could add this to .user.ini in public_html).
curl.cainfo="/path/to/downloaded/cacert.pem"
Make sure you enclose the path within double quotation marks!!!
(perhaps also for php) By default, the FastCGI process will parse new files every 300 seconds (if required you can change the frequency by adding a couple of files as suggested here https://ss88.uk/blog/fast-cgi-and-user-ini-files-the-new-htaccess/).
It is failing as cURL is unable to verify the certificate provided by the server.
There are two options to get this to work:
Use cURL with -k option which allows curl to make insecure connections, that is cURL does not verify the certificate.
Add the root CA (the CA signing the server certificate) to /etc/ssl/certs/ca-certificates.crt
You should use option 2 as it's the option that ensures that you are connecting to secure FTP server.
I have solved this problem by adding one line of code to my cURL script:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
Warning: This makes the request absolutely insecure (see the answer by @YSU)!
For me, simply installing certificates helped:
sudo apt-get install ca-certificates
In my case it turned out to be a problem with the installation of my certificate on the service I was trying to consume with cURL. I failed to bundle/concatenate the intermediate and root certificates into my domain certificate. It wasn't obvious at first that this was the problem because Chrome worked it out and accepted the certificate in spite of leaving out the intermediate and root certificates.
After bundling the certificate, everything worked as expected. I bundled like this
$ cat intermediate.crt >> domain.crt
And repeated for all intermediate and the root certificate.
Had this problem after installing Git Extensions v3.48. Tried reinstalling msysgit, but had the same problem. In the end, I had to disable (please consider the security implications!) Git SSL verification with:
git config --global http.sslVerify false
But if you have a domain certificate, better to add it to (Win7):
C:\Program Files (x86)\Git\bin\curl-ca-bundle.crt
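A safer alternative, assuming your domain/corporate CA is available as a PEM file (the path is illustrative), is to point Git at it instead of disabling verification:
git config --global http.sslCAInfo C:/certs/corporate-ca.pem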
It is most likely a missing cert from the server.
Root->Intermediate->Server
A server should send the Server & Intermediate as a minimum.
Use openssl s_client -showcerts -starttls ftp -crlf -connect abc:21 to debug the issue.
If only one cert is returned (either self signed, or issued), then you must choose to either:
have the server fixed
trust that cert and add it to your CA cert store (not the best idea)
disable trust, e.g. curl -k (very bad idea)
If the server returned, more than one, but not including a self signed (root) cert:
install the CA (root) cert in your CA store for this chain, e.g. google the issuer (ONLY if you trust that CA)
have the server fixed to send the CA as part of the chain
trust a cert in the chain
disable trust
If the server returned a root CA certificate that is not in your CA store, your options are:
Add (trust) it
disable trust
I have ignored expired / revoked certs because there were no messages indicating it. But you can examine the certs with openssl x509 -text
Given you are connecting to a home edition (https://www.cerberusftp.com/support/help/installing-a-certificate/) ftp server, I am going to say it is self signed.
Please post more details, like the output from openssl.
We ran into this error recently. Turns out it was related to the root cert not being installed in the CA store directory properly. I was using a curl command where I was specifying the CA dir directly. curl --cacert /etc/test/server.pem --capath /etc/test ... This command was failing every time with curl: (60) SSL certificate problem: unable to get local issuer certificate.
After using strace curl ..., it was determined that curl was looking for the root cert file with a name of 60ff2731.0, which is based on an openssl hash naming convention. So I found this command to effectively import the root cert properly:
ln -s rootcert.pem `openssl x509 -hash -noout -in rootcert.pem`.0
which creates a softlink
60ff2731.0 -> rootcert.pem
curl, under the covers, read the server.pem cert, determined the name of the root cert file (rootcert.pem), converted it to its hash name, then did an OS file lookup, but could not find it.
So, the takeaway is, use strace when running curl when the curl error is obscure (was a tremendous help), and then be sure to properly install the root cert using the openssl naming convention.
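For reference, a sketch of that strace invocation (URL and paths are illustrative, mirroring the failing command above):
# show file-open attempts so you can spot the hash-named cert file curl looks for
strace -f -e trace=open,openat curl --cacert /etc/test/server.pem --capath /etc/test https://example.com 2>&1 | grep '/etc/test'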
It might be sufficient to just update the list of certificates
sudo update-ca-certificates -f
update-ca-certificates is a program that updates the directory /etc/ssl/certs to hold SSL certificates and generates ca-certificates.crt, a concatenated single-file list of certificates.
I have encountered this problem as well. I've read this thread and most of the answers are informative but overly complex to me. I'm not experienced in networking topics so this answer is for people like me.
In my case, this error was happening because I didn't include the intermediate and root certificates next to the certificate I was using in my application.
Here's what I got from the SSL certificate supplier:
- abc.crt
- abc.pem
- abc-bundle.crt
In the abc.crt file, there was only one certificate:
-----BEGIN CERTIFICATE-----
/*certificate content here*/
-----END CERTIFICATE-----
If I supplied it in this format, the browser (Firefox) would not show any errors, but I would get the curl: (60) SSL certificate: unable to get local issuer certificate error when I made the curl request.
To fix this error, check your abc-bundle.crt file. You will most likely see something like this:
-----BEGIN CERTIFICATE-----
/*additional certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*other certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*different certificate content here*/
-----END CERTIFICATE-----
These are your Intermediate and root certificates. Error is happening because they are missing in the SSL certificate you're supplying to your application.
To fix the error, combine the contents of both of these files in this format:
-----BEGIN CERTIFICATE-----
/*certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*additional certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*other certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*different certificate content here*/
-----END CERTIFICATE-----
Note that there are no spaces between certificates, at the end or at the start of the file. Once you supply this combined certificate to your application, your problem should be fixed.
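If you'd rather not paste by hand, a one-liner producing that combined file (the output name is illustrative):
# leaf certificate first, then the intermediates and root from the bundle
cat abc.crt abc-bundle.crt > abc-combined.crt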
According to the cURL docs you can also pass the certificate to the curl command:
Get a CA certificate that can verify the remote server and use the
proper option to point out this CA cert for verification when
connecting. For libcurl hackers: curl_easy_setopt(curl,
CURLOPT_CAPATH, capath);
With the curl command line tool: --cacert [file]
For example:
curl --cacert mycertificate.cer -v https://www.stackoverflow.com
Download https://curl.haxx.se/ca/cacert.pem
After downloading, move this file to your WAMP server, for example D:\wamp\bin\php\.
Then add the following line at the bottom of the php.ini file:
curl.cainfo="D:\wamp\bin\php\cacert.pem"
Now restart your wamp server.
Try reinstalling curl in Ubuntu and updating your CA certs with sudo update-ca-certificates --fresh, which updates the certs.
Mine worked by just adding -k to my curl.
No need to complicate things.
curl -LOk https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
Yes, you need to add a CA certificate as well. Here's a Node.js snippet for a clearer view.
var fs = require('fs')
var path = require('path')
var https = require('https')
var express = require('express') // needed for app below; missing from the original snippet

var port = process.env.PORT || 8080;
var app = express();

https.createServer({
  key: fs.readFileSync(path.join(__dirname, './path to your private key/privkey.pem')),
  cert: fs.readFileSync(path.join(__dirname, './path to your certificate/cert.pem')),
  ca: fs.readFileSync(path.join(__dirname, './path to your CA file/chain.pem'))
}, app).listen(port)
You have to change the server cert from cert.pem to fullchain.pem.
I had the same issue with a Perl HTTPS daemon:
I have changed:
SSL_cert_file => '/etc/letsencrypt/live/mydomain/cert.pem'
to:
SSL_cert_file => '/etc/letsencrypt/live/mydomain/fullchain.pem'
Enter these two lines to disable SSL certificate verification in cURL; it worked for me after a lot of research (note the warnings above: this makes the request insecure):
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
On Windows I was having this problem. Curl was installed by msysgit, so downloading and installing the newest version fixed my issue.
Otherwise these are decent instructions on how to update your CA cert that you could try.
My case was different. I'm hosting a site behind a firewall. The error was caused by pfSense.
Network layout: |Web Server 10.x.x.x| <-> |pfSense 49.x.x.x| <-> |Open Internet|
I accidentally found the cause, thanks to this answer.
All is well when I accessed my site from WAN.
However, when the site was accessed from inside LAN (e.g. when Wordpress made a curl request to its own server, despite using the WAN IP 49.x.x.x), it was served the pfSense login page.
I identified the certificate as pfSense webConfigurator Self-Signed Certificate. No wonder curl threw an error.
Cause: What happened was that curl was using the site's WAN IP address 49.x.x.x. But, in the context of the web server, the WAN IP was the firewall.
Debug: I found that I was getting the pfSense certificate.
Solution: On the server hosting the site, point its own domain name to 127.0.0.1
By applying the solution, curl's request was properly handled by the web server, and not forwarded to the firewall which responded by sending the login page.
I intended to comment on Yuvik's answer, but I lack enough reputation points.
When you import a .crt file to /usr/local/share/ca-certificates, it needs to be in the correct format. Some of these requirements have been mentioned earlier, but no one has mentioned the need for a plain newline character, and no one has collected a checklist, so I thought I would provide one while I'm at it.
The certificate needs to end in .crt. From Ubuntu's man page:
Certificates must have a .crt extension in order to be included by
update-ca-certificates
Certificate files in /usr/local/share/ca-certificates can only contain one certificate
Certificate files must end in a newline. update-ca-certificates will appear to work if each row ends with, for example, a carriage return + a newline (as is standard in Windows), but once the certificate is appended to /etc/ssl/certs/ca-certificates.crt, it still will not work. This specific requirement bit me, as we're loading certificates from an external source.
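A sketch of preparing such a file, assuming it arrived with Windows line endings (the file name is illustrative):
# strip carriage returns, install the cert, and rebuild the bundle
sed -i 's/\r$//' mycert.crt
sudo cp mycert.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates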
On Windows - if you want to run from cmd
> curl -X GET "https://some.place"
Download cacert.pem from
https://curl.haxx.se/docs/caextract.html
Set the environment variable permanently:
CURL_CA_BUNDLE = C:\somefolder\cacert.pem
Then reload the environment by reopening any cmd window in which you want to use curl; if Chocolatey is installed, you can use:
refreshenv
Now try again.
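A sketch of the permanent setting from cmd (setx writes the user environment; new cmd windows pick it up):
rem persist CURL_CA_BUNDLE for the current user
setx CURL_CA_BUNDLE "C:\somefolder\cacert.pem"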
Reason for the trouble:
https://laracasts.com/discuss/channels/general-discussion/curl-error-60-ssl-certificate-problem-unable-to-get-local-issuer-certificate/replies/95548
So far, I've seen this issue happen within corporate networks for two reasons, one or both of which may apply in your case:
Because of the way network proxies work, they have their own SSL certificates, thereby altering the certificates that curl sees. Many or most enterprise networks force you to use these proxies.
Some antivirus programs running on client PCs also act similarly to an HTTPS proxy, so that they can scan your network traffic. Your antivirus program may have an option to disable this function (assuming your administrators will allow it).
As a side note, No. 2 above may make you feel uneasy about your supposedly secure TLS traffic being scanned. That's the corporate world for you.
Had that problem, and it was not solved with a newer version. /etc/certs had the root cert and the browser said everything was fine. After some testing, I got a warning from ssllabs.com that my chain was not complete (indeed, it was the chain for the old certificate and not the new one). After correcting the cert chain, everything was fine, even with curl.
This is an SSL certificate store issue. You need to download the valid certificate PEM file from the target CA's website, and then build a soft link so OpenSSL can find the trusted certificate:
openssl x509 -hash -noout -in DigiCert_Global_Root_G3.pem
You will get dd8e9d41.
Build the soft link named with the hash number, suffixed with .0 (dot-zero):
dd8e9d41.0
Then try again.
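Combined into one command, assuming you run it in the certificate directory OpenSSL searches (e.g. /etc/ssl/certs; the location is an assumption):
# name the link after the subject hash so OpenSSL's lookup finds it
ln -s DigiCert_Global_Root_G3.pem "$(openssl x509 -hash -noout -in DigiCert_Global_Root_G3.pem).0"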
Some systems may have this problem due to a conda environment. If you have conda installed, deactivating it may solve your problem. In my case, when I deactivated conda, this curl SSL error was resolved. On Ubuntu or macOS, try this command:
conda deactivate
On Amazon Linux (CentOS / Red Hat etc.) I did the following to fix this issue. First copy cacert.pem, downloaded from http://curl.haxx.se/ca/cacert.pem, into the /etc/pki/ca-trust/source/anchors/ directory. Then run the update-ca-trust command.
Here is a one-liner taken from https://serverfault.com/questions/394815/how-to-update-curl-ca-bundle-on-redhat
curl https://curl.se/ca/cacert.pem -o /etc/pki/ca-trust/source/anchors/curl-cacert-updated.pem && update-ca-trust
However, since curl was broken, I actually used this command to download the cacert.pem file:
wget --no-check-certificate http://curl.haxx.se/ca/cacert.pem
Also, if you are having trouble with PHP, you may need to restart your web server: service httpd restart for Apache or service nginx restart for nginx.
I've been pulling my hair out over this issue for days on a Wordpress installation attempting to communicate with an internal ElasticSearch service via ElasticPress and a self-signed Root CA managed by AWS ACM PCA.
In my particular case, I was receiving a 200 OK response from the default cURL Transport as well as the expected body, but Wordpress was coming back with a WP_Error object as well that ElasticPress was picking up due to this certificate issue but never logging.
When it comes to Wordpress, there are two things worth noting:
The default cURL Transport for all wp_remote_* calls will look to a CA Bundle located in wp-includes/certificates/ca-bundle.crt. This bundle serves largely the same purpose as what's found under https://curl.haxx.se/docs/caextract.html, and will cover most use-cases that don't typically involve more exotic setups.
Action/Filter order matters in Wordpress, and in ElasticPress' case, many of its own internal functions leverage these remote calls. The problem is that these remote calls were being executed during the plugins_loaded lifecycle, which is too early for theme logic to be able to override them. If you're using any plugins that make external calls out to other services and you need to be able to modify the requests, you should take careful note of WHEN these plugins perform these requests.
What this means is that even with the right server setup, hooks, callbacks, and logic defined in your theme, you can still end up with a broken setup because the underlying plugin calls execute well before your theme loads and will never be able to tell Wordpress about the new certificates.
In the context of Wordpress applications, there are only two ways I know of that can circumvent this problem without updating core or third-party code logic:
(Recommended) Add a "Must Use" Plugin to your installation that adjusts the settings you need. MU Plugins load the earliest in the Wordpress lifecycle and give you the ability to override your plugins and your core without directly altering them. In my case, I set up a simple MU Plugin with the following logic:
// ep_pre_request_args is an ElasticPress-specific call that we need to adjust for all outbound HTTP requests
add_filter('ep_pre_request_args', function($args){
    if($_ENV['ELASTICSEARCH_SSL_PATH'] ?? false) {
        $args['sslcertificates'] = $_ENV['ELASTICSEARCH_SSL_PATH'];
    }
    return $args;
});
(Not Recommended) If you have absolutely no other options, you can also append your Root CA to wp-includes/certificates/ca-bundle.crt. This will seemingly "correct" the underlying issue and you will get proper verification of your SSL Certificates, but this method will fail each time you update Wordpress unless you bake in additional automation.
I'm adding this answer because I had thought that I was doing something wrong or wonky in my setup for days before I ever even bothered to delve deeper into the plugin source code. Hopefully this might save somebody some time if they're doing anything similar.
None of the answers mentioned that a VPN might play a role. I had this issue before, and the fix was to be connected to the internal/private network.
In my case, while setting up an SSL web server using Node.js, the problem was that I did not attach the bundle file certificate. Finally, I solved the problem by adding that file as follows.
Note: code from aboutssl.org
var https = require('https');
var fs = require('fs');

var https_options = {
  key: fs.readFileSync("/path/to/private.key"),
  cert: fs.readFileSync("/path/to/your_domain_name.crt"),
  ca: [
    fs.readFileSync('path/to/CA_root.crt'),
    fs.readFileSync('path/to/ca_bundle_certificate.crt') // this is the bundle file
  ]
};

https.createServer(https_options, function (req, res) {
  res.writeHead(200);
  res.end("Welcome to Node.js HTTPS Server\n");
}).listen(8443)
In the above, replace the placeholder paths with the following.
path/to/private.key – This is your private key file’s path.
path/to/your_domain_name.crt – Enter your SSL certificate file’s path.
path/to/CA_root.crt – Provide the CA root certificate file’s full path.
path/to/ca_bundle_certificate – This is the full path of your uploaded CA bundle file.
reference: https://aboutssl.org/how-to-install-ssl-certificate-on-node-js/
I had this problem with DigiCert, of all CAs. I created a DigiCertCA.pem file that was just the intermediate and root pasted together into one file:
curl https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem > DigiCertCA.pem
curl https://cacerts.digicert.com/DigiCertSHA2SecureServerCA.crt.pem >> DigiCertCA.pem
curl -v https://mydigisite.com/sign_on --cacert DigiCertCA.pem
...
* subjectAltName: host "mydigisite.com" matched cert's "mydigisite.com"
* issuer: C=US; O=DigiCert Inc; CN=DigiCert SHA2 Secure Server CA
* SSL certificate verify ok.
> GET /users/sign_in HTTP/1.1
> Host: mydigisite.com
> User-Agent: curl/7.65.1
> Accept: */*
...
Eorekan had the answer, but only got myself and one other person to upvote it.

certificate signed by unknown authority with self-signed certificates

I'm trying to set up a development environment where TLS is enabled for RabbitMQ. So here is what I did:
Use the tls-gen script to generate certificates with the basic profile.
Configure RabbitMQ to use ca-certificate.pem, server-certificate.pem, and server-key.pem.
As I'm using macOS, I ran sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain testca/ca_certificate.pem to add the CA certificate to the trusted roots.
Within a Go program, I load client_certificate.pem and client-key.pem into a tls.Config.
Call amqp.DialTLS().
I got the following error message:
err: x509: certificate signed by unknown authority
which is unexpected. In step 4 above, if I add the ca-certificate.pem into the root CAs of the tls.Config, it works fine. So I suspect that the addition of the root CA to macOS is not right.
Can somebody review the above and point out my mistake?

Configuring Vagrant CA Certificates

I am experiencing problems executing Vagrant commands behind a corporate proxy server with self-signed CA certificates. I have configured the HTTP_PROXY, HTTPS_PROXY, and HTTP_NO_PROXY environment variables.
I have a Java key store containing all of the corporate certificates. I have used the -exportcert option of the keytool command with numerous options. I have also used the openssl command with numerous options and placed the resulting files in multiple locations within the embedded Ruby directories of the Vagrant installation, without any success.
I have read a lot of sites about configuring Ruby and curl but have not had any success in getting Vagrant commands to work. All of the posts I have found focus on Ruby and curl options that I do not understand how to apply to Vagrant, which includes Ruby as an embedded component.
Please provide instructions on how to correctly export certificates from the Java key store, optionally convert them, and place the resulting files so that Vagrant will successfully be able to communicate through the corporate proxy to the internet.
Vagrant 1.9.5 on Windows 7
Vagrant installation directory C:\Apps\Vagrant\
C:\WorkArea> vagrant plugin install vagrant.proxyconf
ERROR: SSL verification error at depth 3: self signed certificate in certificate chain (19)
ERROR: Root certificate is not trusted (/C=US/O=xxx xxx/OU=xxx xxx Certification Authority/CN=xxx xxx Root Certification Authority 01 G2)
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://api.rubygems.org/specs.4.8.gz)
C:\WorkArea> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'puppetlabs/ubuntu-16.04-64-puppet' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
The box 'puppetlabs/ubuntu-16.04-64-puppet' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:
URL: ["https://atlas.hashicorp.com/puppetlabs/ubuntu-16.04-64-puppet"]
Error: SSL certificate problem: self signed certificate in certificate chain
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
You don't explain what steps you have taken to try to fix the issue, but it would appear that you are not placing your root certificates in the correct location.
Go to the directory where you installed Vagrant, find the file embedded\cacert.pem, and append the contents of your corporate certificates to it. Save it and retry the operation. If you properly exported your root CA certificates, they should be read by Vagrant and allow the connection.
If you are still unable to make it work by combining the files, try running Vagrant with SSL_CERT_FILE=/path/to/your/certs.pem in the environment. This lets you validate that you have properly exported your corporate certificates.
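A sketch of both approaches from cmd, assuming your exported certificates are in corporate-root.pem (file names are assumptions; the install path comes from the question):
rem 1) append the corporate root CA to Vagrant's bundled CA file
type corporate-root.pem >> C:\Apps\Vagrant\embedded\cacert.pem
rem 2) or point Vagrant at your own bundle for one session
set SSL_CERT_FILE=C:\certs\corporate-root.pem
vagrant up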

How to import a pfx using certutil without prompt?

I want to import a pfx using cmd. I am using certutils for that. But I am getting a prompt asking to trust the certificate. I want to automatize import so I want to skip the warning prompt. How can I accomplish that?
I am using command
certutil -f -user -p PASSWORD -importpfx c:\cert.pfx
The reason you get a prompt dialog is that you are trying to add a "CA certificate" into the "Trusted Root Certification Authorities" store. In fact, when you use certutil -f -user -p PASSWORD -importpfx c:\cert.pfx to import a PFX certificate, two actions happen:
Add a personal certificate(which includes the private key) into the "Personal" store.
Add a CA certificate into the "Trusted Root Certification Authorities" store.
It is the second action that causes the UAC to prompt a warning dialog, since you are trying to add a CA certificate into the "Trusted Root Certification Authorities" store. This means that any web host that holds this certificate will be trusted in the future; that is a very important action and should be treated very carefully by the user, shouldn't it? So the UAC warns the user to confirm this action.
There is only one way to suppress the warning dialog: don't add the CA certificate into the "Trusted Root Certification Authorities" store, by appending NoRoot:
certutil -f -user -p PASSWORD -importpfx c:\cert.pfx NoRoot
Adding the personal certificate into the "Personal" store will not prompt any warning dialog. However, this way the web host that holds the CA certificate will no longer be trusted, which can be very frustrating if you use HTTPS to access it.
