Superset Keycloak integration on HTTPS

We have Superset Docker containers using Keycloak as an identity broker. This whole setup works fine over HTTP. Further, we have installed an SSL certificate on Keycloak, and that also works fine. Our Superset and Keycloak integration code changes look exactly like the ones mentioned in the answer here.
Now, when we changed the auth URIs from http to https in superset/docker/pythonpath_dev/client_secret.json, we get the error below after the login flow is redirected from Keycloak back to Superset.
Forbidden
'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)'
We also tried installing the root certificates on Superset by mounting them at /usr/local/share/ca-certificates and then executing update-ca-certificates in the container, but that did not help. Any idea how this can be resolved?

Thanks @sventorben for the tip. Indeed, it was Python that was not able to read my CA files. Since I am new to this, I will detail all the steps I followed. However, some of these steps might be redundant.
After I received my root as well as intermediate CA files, I first converted them from DER to PEM format using openssl.
openssl x509 -inform DER -in myintermediary.cer -out myintermediary.crt
openssl x509 -inform DER -in myroot.cer -out myroot.crt
Then, I mounted these files into my Superset container at /usr/local/share/ca-certificates/.
Then, I logged into the container, executed the update-ca-certificates command, and verified that 2 new PEM files got added at /etc/ssl/certs/, i.e. myroot.pem and intermediary.pem.
Then, I added these CA files to Python's certifi bundle inside the container. To find out the path of cacert.pem, I executed the commands below in a Python shell.
import certifi
certifi.where()
exit()
Here, the second command gave me the path of cacert.pem, which was /usr/local/lib/python3.7/site-packages/certifi/cacert.pem.
After this, I appended my CA files to the end of cacert.pem:
cat /etc/ssl/certs/myroot.pem /etc/ssl/certs/intermediary.pem >> /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
In the end, I logged out of the container and restarted it.
docker-compose stop
docker-compose up -d
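To sanity-check the fix from outside the container, you can issue a test HTTPS request against Keycloak with Python; a minimal sketch, assuming the container is named superset_app, the requests library is available in the image, and https://keycloak.example.com stands in for your Keycloak URL (all three are assumptions about your setup):
# container name, URL, and the presence of requests are placeholders/assumptions
docker exec -it superset_app python -c "import requests; print(requests.get('https://keycloak.example.com').status_code)"
If the CA bundle was updated correctly, this prints an HTTP status code instead of raising CERTIFICATE_VERIFY_FAILED.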
Note:
I feel step 3 is redundant, as Python does not read CA files from there. However, I still did it, and I am not inclined to revert and test it out again.
Also, this was only a temporary fix, as commands executed inside the container do not persist (containers are ephemeral).
Update:
Below are the steps followed for production deployment.
Convert the root certificates to PEM format using openssl.
Concatenate both PEM files into a new PEM file, which will be installed as a bundle. Let's say the new PEM file is mycacert.pem and it is mounted at /app/docker/.
Create a shell script called start.sh with the 2 commands below.
cat /app/docker/mycacert.pem >> /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
gunicorn --bind 0.0.0.0:8088 --access-logfile - --error-logfile - --workers 5 --worker-class gthread --threads 4 --timeout 200 --limit-request-line 4094 --limit-request-field_size 8190 'superset.app:create_app()'
Modify docker-compose.yml and change the command as below.
command: ["/app/docker/start.sh"]
Restart superset container.
docker-compose stop
docker-compose up -d
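One assumption worth double-checking in this setup: docker-compose executes start.sh directly, so the script needs a shebang line such as #!/bin/bash and the executable bit set:
# run on the host, from the directory containing docker-compose.yml
chmod +x docker/start.sh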

Related

Why can't the hostname of the NiFi Certificate Authority be updated using the NiFi Toolkit?

I used the toolkit to generate a keystore and truststore with -c 'hhh', but the CN is still 'localhost'. Can you give me a suggestion? Thanks in advance!
-c,--certificateAuthorityHostname Hostname of NiFi Certificate Authority (default: localhost)
If you're running the TLS Toolkit in standalone mode, and this is not the first invocation, you likely already have a nifi-cert.pem and nifi-key.key file in the working directory. These files are the CA public certificate and private key respectively. They will be reused to continue signing newly generated node certificates because this allows for trust across the different nodes of the cluster (which is the intended use case for the toolkit).
If you want to create a new CA certificate and use that to sign the node certificates, you have a few options:
Copy the toolkit build directory to a new location and invoke it there. Ensure that the nifi-cert.pem and nifi-key.key files are not present. On the first invocation, a new CA certificate and key will be generated, with the specified certificate authority hostname.
Delete the nifi-cert.pem and nifi-key.key files from your existing toolkit build directory. Warning: You will no longer be able to sign certificates with the same CA key. For example, if you generated node1 and node2 certificates signed by CA_1, then delete CA_1 and want to add node3, you will not be able to sign node3 with CA_1. You will have to import multiple public CA certificates (CA_1 and the new CA_2, for example) into each node's truststore to allow for cross-trust.
Try adding -D to your command to specify the CN and so on:
sudo bash tls-toolkit.sh client -c #YOUR_CA_HOST_FQDN -t #YOUR_CA_KEY_IF_ANY -p #CA_SERVER_PORT_IF_ANY -D "CN=#WHAT_EVER_YOU_WANT, OU=NIFI" -T PKCS12
I experienced the same problem here. My solution was to remove the target file, as suggested in the previous post, and run the script with an additional parameter; in your case it should be something like this:
tls-toolkit.sh standalone .... --certificateAuthorityHostname hhh
It worked for me

Apache Maven download using curl times out

curl -O "https://www.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz"
This is the command I am using from the terminal to download Maven, but it either times out or fails with curl: (7) Failed to connect to www.apache.org port 443: Operation timed out.
If I use a browser to download, there is no issue.
My assumption is that this is an SSL connection or certificate issue. Any idea how I can resolve the curl issue?
Please note that I am using this in a Dockerfile to create a Docker image; here it is:
FROM ******/mule-42x:v2.2.1
ENV MAVEN_VERSION 3.6.3
RUN mkdir -p /opt/maven \
&& cd /opt/maven \
&& curl -O "https://www.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" \
&& tar xzvf "apache-maven-$MAVEN_VERSION-bin.tar.gz" \
&& rm "apache-maven-$MAVEN_VERSION-bin.tar.gz"
ENV MAVEN_HOME "/opt/maven/apache-maven-$MAVEN_VERSION"
ENV PATH=$MAVEN_HOME/bin:$PATH
I've tried to run the command presented in the question and encountered the following errors:
curl -O "https://www.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz"
curl tries to verify the SSL certificate but fails with the following message:
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
So I added the -k flag, as the message suggests.
Now this works; however, the HTTP call returns an HTML page with a 302 (redirect) to https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
So the command that worked for me is:
curl -O -k https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
An important side note:
I'm assuming that you've configured the network correctly, with all the proper proxy definitions if you're running behind a proxy in your organization; otherwise, you should define the proxy first.
All in all, I suggest running this command manually first (from the command line, not as part of the build), outside Docker, on the machine where you run docker build, and only once you've made sure it works, run it in the Dockerfile.
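If you'd rather keep certificate verification on instead of using -k, a possible alternative is to refresh the CA bundle in the image before the curl step; a minimal sketch, assuming a Debian/Ubuntu-based base image (the mule image's base is unknown here), placed before the RUN that calls curl:
# install an up-to-date CA bundle so curl can verify the server certificate
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*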

curl of url stored as bash variable in MacOS [duplicate]

root@sclrdev:/home/sclr/certs/FreshCerts# curl --ftp-ssl --verbose ftp://{abc}/ -u trup:trup --cacert /etc/ssl/certs/ca-certificates.crt
* About to connect() to {abc} port 21 (#0)
* Trying {abc}...
* Connected to {abc} ({abc}) port 21 (#0)
< 220-Cerberus FTP Server - Home Edition
< 220-This is the UNLICENSED Home Edition and may be used for home, personal use only
< 220-Welcome to Cerberus FTP Server
< 220 Created by Cerberus, LLC
> AUTH SSL
< 234 Authentication method accepted
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Relating to the 'SSL certificate problem: unable to get local issuer certificate' error: it is important to note that this applies to the system sending the cURL request, and NOT the server receiving the request.
Download the latest cacert.pem from https://curl.se/ca/cacert.pem
Add the '--cacert /path/to/cacert.pem' option to the curl command to tell curl where the local Certificate Authority file is.
(or) Create or add to a '.curlrc' file the line:
cacert = /path/to/cacert.pem
See 'man curl', the section about the '-K, --config <file>' option, for information about where curl looks for this file.
(or, if using PHP) Add the following line to php.ini (if this is shared hosting and you don't have access to php.ini, you can add it to .user.ini in public_html):
curl.cainfo="/path/to/downloaded/cacert.pem"
Make sure you enclose the path within double quotation marks!
(perhaps also for PHP) By default, the FastCGI process will pick up new files every 300 seconds; if required, you can change the frequency by adding a couple of files, as suggested here: https://ss88.uk/blog/fast-cgi-and-user-ini-files-the-new-htaccess/
It is failing as cURL is unable to verify the certificate provided by the server.
There are two options to get this to work:
Use cURL with the -k option, which allows curl to make insecure connections; that is, cURL does not verify the certificate.
Add the root CA (the CA signing the server certificate) to /etc/ssl/certs/ca-certificates.crt
You should use option 2, as it's the option that ensures you are connecting to a secure FTP server.
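For option 2 on Debian/Ubuntu, the usual way is not to edit ca-certificates.crt by hand but to drop the root CA into the local store and regenerate the bundle; a sketch, where rootca.crt is a placeholder for your CA file (PEM content, .crt extension required):
# copy the root CA into the local trust store and rebuild ca-certificates.crt
sudo cp rootca.crt /usr/local/share/ca-certificates/rootca.crt
sudo update-ca-certificates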
I solved this problem by adding one line of code to my cURL script:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
Warning: This makes the request absolutely insecure (see the answer by @YSU)!
For me, simple install of certificates helped:
sudo apt-get install ca-certificates
In my case it turned out to be a problem with the installation of my certificate on the service I was trying to consume with cURL. I failed to bundle/concatenate the intermediate and root certificates into my domain certificate. It wasn't obvious at first that this was the problem, because Chrome worked around it and accepted the certificate in spite of the missing intermediate and root certificates.
After bundling the certificates, everything worked as expected. I bundled them like this:
$ cat intermediate.crt >> domain.crt
And repeated this for each intermediate certificate and the root certificate.
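Before deploying, you can confirm the chain actually validates with openssl; a quick check, using root.crt, intermediate.crt, and domain.crt as hypothetical file names:
# verifies domain.crt against the root, treating intermediate.crt as an untrusted chain cert
openssl verify -CAfile root.crt -untrusted intermediate.crt domain.crt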
Had this problem after installing Git Extensions v3.48. Tried reinstalling msysgit but had the same problem. In the end, I had to disable (please consider the security implications!) Git SSL verification with:
git config --global http.sslVerify false
but if you have a domain certificate, it is better to add it to (Win7):
C:\Program Files (x86)\Git\bin\curl-ca-bundle.crt
It is most likely a missing cert from the server.
Root->Intermediate->Server
A server should send the Server & Intermediate as a minimum.
Use openssl s_client -showcerts -starttls ftp -crlf -connect abc:21 to debug the issue.
If only one cert is returned (either self signed, or issued), then you must choose to either:
have the server fixed
trust that cert and add it to your CA cert store (not the best idea)
disable trust, e.g. curl -k (very bad idea)
If the server returned more than one cert, but not including a self-signed (root) cert:
install the CA (root) cert in your CA store for this chain, e.g. google the issuer. (ONLY if you trust that CA)
have the server fixed to send the CA as part of the chain
trust a cert in the chain
disable trust
If the server returned a root CA certificate and you still get the error, then it is not in your CA store; your options are:
Add (trust) it
disable trust
I have ignored expired/revoked certs because there were no messages indicating them, but you can examine the certs with openssl x509 -text -noout -in cert.pem.
Given that you are connecting to a Home Edition (https://www.cerberusftp.com/support/help/installing-a-certificate/) FTP server, I am going to say it is self-signed.
Please post more details, like the output from openssl.
We ran into this error recently. Turns out it was related to the root cert not being installed in the CA store directory properly. I was using a curl command where I was specifying the CA dir directly. curl --cacert /etc/test/server.pem --capath /etc/test ... This command was failing every time with curl: (60) SSL certificate problem: unable to get local issuer certificate.
After using strace curl ..., it was determined that curl was looking for the root cert file with a name of 60ff2731.0, which is based on an openssl hash naming convention. So I found this command to effectively import the root cert properly:
ln -s rootcert.pem `openssl x509 -hash -noout -in rootcert.pem`.0
which creates a softlink
60ff2731.0 -> rootcert.pem
curl, under the covers, read the server.pem cert, determined the name of the root cert file (rootcert.pem), converted it to its hash name, and then did an OS file lookup, but could not find it.
So the takeaway is: use strace when the curl error is obscure (it was a tremendous help), and then be sure to properly install the root cert using the openssl naming convention.
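As an aside, OpenSSL 1.1.0 and later can generate these hash-named symlinks for an entire directory in one step, which avoids hand-crafting the ln -s call:
# creates <hash>.0 symlinks for every certificate in the directory
openssl rehash /etc/test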
It might be sufficient to just update the list of certificates
sudo update-ca-certificates -f
update-ca-certificates is a program that updates the directory /etc/ssl/certs to hold SSL certificates and generates ca-certificates.crt, a concatenated single-file list of certificates.
I have encountered this problem as well. I've read this thread, and most of the answers are informative but overly complex for me. I'm not experienced in networking topics, so this answer is for people like me.
In my case, this error was happening because I didn't include the intermediate and root certificates next to the certificate I was using in my application.
Here's what I got from the SSL certificate supplier:
- abc.crt
- abc.pem
- abc-bunde.crt
In the abc.crt file, there was only one certificate:
-----BEGIN CERTIFICATE-----
/*certificate content here*/
-----END CERTIFICATE-----
If I supplied it in this format, the browser (Firefox) would not show any errors, but I would get the curl: (60) SSL certificate problem: unable to get local issuer certificate error when I made the curl request.
To fix this error, check your abc-bunde.crt file. You will most likely see something like this:
-----BEGIN CERTIFICATE-----
/*additional certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*other certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*different certificate content here*/
-----END CERTIFICATE-----
These are your intermediate and root certificates. The error is happening because they are missing from the SSL certificate you're supplying to your application.
To fix the error, combine the contents of both of these files in this format:
-----BEGIN CERTIFICATE-----
/*certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*additional certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*other certificate content here*/
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
/*different certificate content here*/
-----END CERTIFICATE-----
Note that there are no spaces between certificates, at the end or at the start of the file. Once you supply this combined certificate to your application, your problem should be fixed.
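To double-check the combined file, you can list every certificate it contains; a quick inspection, where abc-combined.crt is a hypothetical name for the merged file:
# prints subject and issuer for each certificate in the bundle, in file order
openssl crl2pkcs7 -nocrl -certfile abc-combined.crt | openssl pkcs7 -print_certs -noout
The leaf certificate should come first, followed by the intermediates, then the root.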
According to cURL docs you can also pass the certificate to the curl command:
Get a CA certificate that can verify the remote server and use the
proper option to point out this CA cert for verification when
connecting. For libcurl hackers: curl_easy_setopt(curl,
CURLOPT_CAPATH, capath);
With the curl command line tool: --cacert [file]
For example:
curl --cacert mycertificate.cer -v https://www.stackoverflow.com
Download https://curl.haxx.se/ca/cacert.pem
After downloading, move this file to your WAMP server.
For example: D:\wamp\bin\php\
Then add the following line to the bottom of the php.ini file:
curl.cainfo="D:\wamp\bin\php\cacert.pem"
Now restart your wamp server.
I reinstalled curl in Ubuntu and updated my CA certs with sudo update-ca-certificates --fresh, which fixed the issue.
Mine worked by just adding -k to my curl.
No need to complicate things.
curl -LOk https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl
Yes, you need to add a CA certificate as well. Here is a code snippet in Node.js for a clearer view.
var fs = require('fs')
var path = require('path')
var https = require('https')
var express = require('express')
var port = process.env.PORT || 8080;
var app = express();
https.createServer({
key: fs.readFileSync(path.join(__dirname, './path to your private key/privkey.pem')),
cert: fs.readFileSync(path.join(__dirname, './path to your certificate/cert.pem')),
ca: fs.readFileSync(path.join(__dirname, './path to your CA file/chain.pem'))}, app).listen(port)
You have to change server cert from cert.pem to fullchain.pem
I had the same issue with Perl HTTPS Daemon:
I have changed:
SSL_cert_file => '/etc/letsencrypt/live/mydomain/cert.pem'
to:
SSL_cert_file => '/etc/letsencrypt/live/mydomain/fullchain.pem'
After a lot of research, I found this: enter these two lines to disable SSL certificate verification. It worked for me, but note that it is insecure.
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
On Windows I was having this problem. Curl was installed by msysgit, so downloading and installing the newest version fixed my issue.
Otherwise these are decent instructions on how to update your CA cert that you could try.
My case was different. I'm hosting a site behind a firewall. The error was caused by pfSense.
Network layout: |Web Server 10.x.x.x| <-> |pfSense 49.x.x.x| <-> |Open Internet|
I accidentally found the cause, thanks to this answer.
All is well when I accessed my site from WAN.
However, when the site was accessed from inside LAN (e.g. when Wordpress made a curl request to its own server, despite using the WAN IP 49.x.x.x), it was served the pfSense login page.
I identified the certificate as pfSense webConfigurator Self-Signed Certificate. No wonder curl threw an error.
Cause: What happened was that curl was using the site's WAN IP address 49.x.x.x. But, in the context of the web server, the WAN IP was the firewall.
Debug: I found that I was getting the pfSense certificate.
Solution: On the server hosting the site, point its own domain name to 127.0.0.1.
With this fix, curl's requests were properly handled by the web server rather than being forwarded to the firewall, which had been responding with its login page.
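For reference, on a Linux web server this is typically an /etc/hosts entry; a sketch, with mydomain.example.com standing in for the real domain:
# /etc/hosts on the web server: resolve the site's own domain locally
127.0.0.1   mydomain.example.com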
I intended to comment on Yuvik's answer, but I lack enough reputation points.
When you import a .crt file to /usr/local/share/ca-certificates, it needs to be in the correct format. Some of these requirements have been mentioned earlier, but no one has mentioned the need for plain Unix newlines, and no one has collected a checklist, so I thought I would provide one while I'm at it.
The certificate needs to end in .crt. From Ubuntu's man page:
Certificates must have a .crt extension in order to be included by
update-ca-certificates
Certificate files in /usr/local/share/ca-certificates can only contain one certificate
Certificate files must end in a newline, and the line endings must be Unix-style (LF only). update-ca-certificates will appear to work if each row ends with, for example, a carriage return + a newline (as is standard on Windows), but once the certificate is appended to /etc/ssl/certs/ca-certificates.crt, it still will not work. This specific requirement bit me, as we load certificates from an external source.
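If your certificates arrive with Windows line endings, one way to normalize them before running update-ca-certificates (mycert.crt is a placeholder name):
# strip carriage returns so the file has Unix (LF) line endings
sed -i 's/\r$//' /usr/local/share/ca-certificates/mycert.crt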
On Windows, if you want to run curl from cmd:
> curl -X GET "https://some.place"
Download cacert.pem from
https://curl.haxx.se/docs/caextract.html
Set the environment variable permanently:
CURL_CA_BUNDLE = C:\somefolder\cacert.pem
Then reload the environment by reopening any cmd window in which you want to use curl; if Chocolatey is installed, you can use:
refreshenv
Now try again
Reason for the trouble:
https://laracasts.com/discuss/channels/general-discussion/curl-error-60-ssl-certificate-problem-unable-to-get-local-issuer-certificate/replies/95548
So far, I've seen this issue happen within corporate networks because of two reasons, one or both of which may be happening in your case:
Because of the way network proxies work, they have their own SSL certificates, thereby altering the certificates that curl sees. Many or most enterprise networks force you to use these proxies.
Some antivirus programs running on client PCs also act similarly to an HTTPS proxy, so that they can scan your network traffic. Your antivirus program may have an option to disable this function (assuming your administrators will allow it).
As a side note, No. 2 above may make you feel uneasy about your supposedly secure TLS traffic being scanned. That's the corporate world for you.
Had that problem, and it was not solved with a newer version. /etc/certs had the root cert and the browser said everything was fine. After some testing I got a warning from ssllabs.com that my chain was incomplete (indeed, it was the chain for the old certificate, not the new one). After correcting the cert chain, everything was fine, even with curl.
This is an SSL certificate store issue. You need to download the valid certificate PEM file from the target CA's website, and then build a soft link file to point SSL at the trusted certificate.
openssl x509 -hash -noout -in DigiCert_Global_Root_G3.pem
You will get a hash such as dd8e9d41.
Build the soft link named with the hash number and a .0 (dot-zero) suffix:
ln -s DigiCert_Global_Root_G3.pem dd8e9d41.0
Then try again.
Some systems may have this problem due to a conda environment. If you have conda installed, deactivating it may solve your problem. In my case, when I deactivated conda, this curl SSL error was resolved. On Ubuntu or macOS, try this command:
conda deactivate
On Amazon Linux (CentOS / Red Hat etc) I did the following to fix this issue. First copy the cacert.pem downloaded from http://curl.haxx.se/ca/cacert.pem and put it in the /etc/pki/ca-trust/source/anchors/ directory. Then run the update-ca-trust command.
Here is a one liner taken from https://serverfault.com/questions/394815/how-to-update-curl-ca-bundle-on-redhat
curl https://curl.se/ca/cacert.pem -o /etc/pki/ca-trust/source/anchors/curl-cacert-updated.pem && update-ca-trust
However since curl was broken I actually used this command to download the cacert.pem file.
wget --no-check-certificate http://curl.haxx.se/ca/cacert.pem
Also, if you were having trouble with PHP, you may need to restart your web server: service httpd restart for Apache or service nginx restart for nginx.
I've been pulling my hair out over this issue for days on a Wordpress installation attempting to communicate with an internal ElasticSearch service via ElasticPress and a self-signed Root CA managed by AWS ACM PCA.
In my particular case, I was receiving a 200 OK response from the default cURL Transport as well as the expected body, but Wordpress was coming back with a WP_Error object as well that ElasticPress was picking up due to this certificate issue but never logging.
When it comes to Wordpress, there are two things worth noting:
The default cURL Transport for all wp_remote_* calls will look to a CA Bundle located in wp-includes/certificates/ca-bundle.crt. This bundle serves largely the same purpose as what's found under https://curl.haxx.se/docs/caextract.html, and will cover most use-cases that don't typically involve more exotic setups.
Action/Filter order matters in Wordpress, and in ElasticPress' case, many of its own internal functions leverage these remote calls. The problem is, these remote calls were being executed during the plugins_loaded lifecycle, which is too early for Theme logic to be able to override. If you're using any plugins that make external calls out to other services and you need to be able to modify the requests, you should take careful note as to WHEN these plugins are performing these requests.
What this means is that even with the right server setup, hooks, callbacks, and logic defined in your theme, you can still end up with a broken setup because the underlying plugin calls execute well before your theme loads and will never be able to tell Wordpress about the new certificates.
In the context of Wordpress applications, there are only two ways I know of that can circumvent this problem without updating core or third-party code logic:
(Recommended) Add a "Must Use" Plugin to your installation that adjusts the settings you need. MU Plugins load the earliest in the Wordpress lifecycle and will be able to give you the ability to override your plugins and your core without directly altering them. In my case, I set up a simple MU Plugin with the following logic:
// ep_pre_request_args is an ElasticPress-specific call that we need to adjust for all outbound HTTP requests
add_filter('ep_pre_request_args', function($args){
if($_ENV['ELASTICSEARCH_SSL_PATH'] ?? false) {
$args['sslcertificates'] = $_ENV['ELASTICSEARCH_SSL_PATH'];
}
return $args;
});
(Not Recommended) If you have absolutely no other options, you can also append your Root CA to wp-includes/certificates/ca-bundle.crt. This will seemingly "correct" the underlying issue and you will get proper verification of your SSL Certificates, but this method will fail each time you update Wordpress unless you bake in additional automation.
I'm adding this answer because I had thought that I was doing something wrong or wonky in my setup for days before I ever even bothered to delve deeper into the plugin source code. Hopefully this might save somebody some time if they're doing anything similar.
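For completeness on the recommended route above: MU plugins are loaded automatically from wp-content/mu-plugins, so wiring in the ep_pre_request_args filter snippet from option 1 is just a matter of dropping it there (elasticsearch-ssl.php is a hypothetical file name):
# from the Wordpress root; the directory may not exist yet
mkdir -p wp-content/mu-plugins
# save the ep_pre_request_args filter snippet above as:
#   wp-content/mu-plugins/elasticsearch-ssl.php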
None of the answers mentioned that a VPN might play a role. I had this issue before with a resource that required being on a private network; connecting to the internal VPN resolved it.
In my case, while setting up an SSL web server using Node.js, the problem was that I did not attach the bundle file certificate. I finally solved the problem by adding that file as follows.
Note: code from aboutssl.org
var https = require('https');
var fs = require('fs');
var https_options = {
key: fs.readFileSync("/path/to/private.key"),
cert: fs.readFileSync("/path/to/your_domain_name.crt"),
ca: [
fs.readFileSync('path/to/CA_root.crt'),
fs.readFileSync('path/to/ca_bundle_certificate.crt') // this is the bundle file
]
};
https.createServer(https_options, function (req, res) {
res.writeHead(200);
res.end("Welcome to Node.js HTTPS Server\n");
}).listen(8443)
In the above, replace the placeholder paths as follows.
path/to/private.key – This is your private key file’s path.
path/to/your_domain_name.crt – Enter your SSL certificate file’s path.
path/to/CA_root.crt – Provide the CA root certificate file’s full path.
path/to/ca_bundle_certificate – This is the full path of your uploaded CA bundle file.
reference: https://aboutssl.org/how-to-install-ssl-certificate-on-node-js/
I had this problem with DigiCert, of all CAs. I created a DigiCertCA.pem file that was just both the intermediate and root certificates pasted together into one file.
curl -o DigiCertGlobalRootCA.pem https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
curl -o DigiCertSHA2SecureServerCA.pem https://cacerts.digicert.com/DigiCertSHA2SecureServerCA.crt.pem
cat DigiCertGlobalRootCA.pem DigiCertSHA2SecureServerCA.pem > DigiCertCA.pem
curl -v https://mydigisite.com/sign_on --cacert DigiCertCA.pem
...
* subjectAltName: host "mydigisite.com" matched cert's "mydigisite.com"
* issuer: C=US; O=DigiCert Inc; CN=DigiCert SHA2 Secure Server CA
* SSL certificate verify ok.
> GET /users/sign_in HTTP/1.1
> Host: mydigisite.com
> User-Agent: curl/7.65.1
> Accept: */*
...
Eorekan had the answer, but only got myself and one other person to upvote it.

Spring Boot - SSL setup (./well-known/pki-validation)

I am new to SSL setup, so please excuse me if my question is wrong.
I have deployed a Spring Boot application with a bunch of RESTful services on an AWS EC2 (Windows) instance, exposed through a public IP address (AWS), and I am able to access them publicly (http). I want to SSL (https) them now. I am in the process of purchasing a certificate, and in one of the setup steps the vendor asks me to validate a text file. Is anyone aware of this? Can you please suggest where I need to create the /.well-known/pki-validation folder in my Spring Boot application (Tomcat)?
The issuing vendor will provide you with a simple text-based file to place in sub-folders /.well-known/pki-validation/ in your site’s "home directory". If done properly, the vendor can view this file via HTTP:// and then issue the certificate upon confirmation.
1. Install certbot on the server.
git clone https://github.com/certbot/certbot
cd certbot
./certbot-auto --help
2. Obtain the certificate
In order to obtain the certificate, you need to expose certain files through the server. I do that using the target folder of the Spring Boot Tomcat.
./certbot-auto certonly --webroot -w {SpringBootProjectDir}/target/classes/static/ -d {yourDomain.com}
This command obtains the certificates and leaves them in:
/etc/letsencrypt/live/{yourDomain.com}/
Tomcat can't read the certificate provided, since it's not in PKCS12 (p12) format. We have to generate the cert in this format; use this command:
sudo openssl pkcs12 -export -in /etc/letsencrypt/live/{yourDomain.com}/fullchain.pem -inkey /etc/letsencrypt/live/{yourDomain.com}/privkey.pem -out /etc/letsencrypt/live/{yourDomain.com}/keystore.p12 -name tomcat -CAfile /etc/letsencrypt/live/{yourDomain.com}/chain.pem -caname root
It will ask you for a password. Keep that password.
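To confirm the keystore was exported correctly, you can list its contents with keytool (shipped with the JDK); it should show a single entry with the alias tomcat:
keytool -list -keystore /etc/letsencrypt/live/{yourDomain.com}/keystore.p12 -storetype PKCS12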
3. Configure the server
server.port=443
server.ssl.enabled=true
server.ssl.key-store=/etc/letsencrypt/live/{yourDomain.com}/keystore.p12
server.ssl.key-store-password={password}
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Restart the server, and that's it!
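One caveat to keep in mind: Let's Encrypt certificates expire after 90 days, so the certificate needs to be renewed and the keystore re-exported periodically; a sketch using the same tooling as above:
./certbot-auto renew
# then re-run the openssl pkcs12 -export command from step 2 and restart the server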

Gitlab-CI runner: ignore self-signed certificate

gitlab-ci-multi-runner register
gave me
couldn't execute POST against https://xxxx/ci/api/v1/runners/register.json:
Post https://xxxx/ci/api/v1/runners/register.json:
x509: cannot validate certificate for xxxx because it doesn't contain any IP SANs
Is there a way to disable certification validation?
I'm using Gitlab 8.13.1 and gitlab-ci-multi-runner 1.11.2.
Based on Wassim's answer, and gitlab documentation about tls-self-signed and custom CA-signed certificates, here's to save some time if you're not the admin of the gitlab server but just of the server with the runners (and if the runner is run as root):
SERVER=gitlab.example.com
PORT=443
CERTIFICATE=/etc/gitlab-runner/certs/${SERVER}.crt
# Create the certificates hierarchy expected by gitlab
sudo mkdir -p $(dirname "$CERTIFICATE")
# Get the certificate in PEM format and store it
openssl s_client -connect ${SERVER}:${PORT} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | sudo tee "$CERTIFICATE" >/dev/null
# Register your runner
gitlab-runner register --tls-ca-file="$CERTIFICATE" [your other options]
Update 1: CERTIFICATE must be an absolute path to the certificate file.
Update 2: it might still fail with custom CA-signed because of gitlab-runner bug #2675
In my case I got it working by adding the path to the .pem file as follows:
sudo gitlab-runner register --tls-ca-file /my/path/gitlab/gitlab.myserver.com.pem
Often, gitlab-runners are hosted in a docker container. In that case, one needs to make sure that the tls-ca-file is available in the container.
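A sketch of one way to do that, assuming the commonly used volume layout for a dockerized runner (paths are illustrative): mount the host's certs directory to /etc/gitlab-runner/certs, which is where the runner looks for hostname-named .crt files:
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /etc/gitlab-runner/certs:/etc/gitlab-runner/certs:ro \
  gitlab/gitlab-runner:latest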
OK, I followed this post step by step: http://moonlightbox.logdown.com/posts/2016/09/12/gitlab-ci-runner-register-x509-error and then it worked like a charm.
To prevent a dead link, I copy the steps below:
First, edit the SSL configuration on the GitLab server (not the runner):
vim /etc/pki/tls/openssl.cnf
[ v3_ca ]
subjectAltName=IP:192.168.1.1 <---- Add this line. 192.168.1.1 is your GitLab server IP.
Re-generate self-signed certificate
cd /etc/gitlab/ssl
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/gitlab/ssl/192.168.1.1.key -out /etc/gitlab/ssl/192.168.1.1.crt
sudo openssl dhparam -out /etc/gitlab/ssl/dhparam.pem 2048
sudo gitlab-ctl restart
Copy the new CA to the GitLab CI runner
scp /etc/gitlab/ssl/192.168.1.1.crt root@192.168.1.2:/etc/gitlab-runner/certs
Thanks @Moon Light @Wassim Dhif
The following steps worked in my environment. (Ubuntu)
Download certificate
I did not have access to the gitlab server. Therefore,
Open https://some-host-gitlab.com in a browser (I use Chrome).
View the site information, usually a green lock in the URL bar.
Download/export the certificate by navigating to the certificate information (Chrome and Firefox have this option).
In gitlab-runner host
Rename the downloaded certificate so it has a .crt extension:
$ mv some-host-gitlab.com some-host-gitlab.com.crt
Register the runner now with this file
$ sudo gitlab-runner register --tls-ca-file /path/to/some-host-gitlab.com.crt
I was able to register runner to a project.
Currently there is no way to run the multi-runner with an insecure SSL option.
There is an open issue about this at GitLab.
Still you should be able to get your certificate, make it a PEM file and give it to the runner command using --tls-ca-file
To craft the PEM file use openssl.
openssl x509 -in mycert.crt -out mycert.pem -outform PEM
In my setup the following worked as well. It's just important that the IP/name used for creating the certificate matches the IP/name used for registering the runner.
gitlab-runner register --tls-ca-file /my/path/gitlab/gitlab.myserver.com.pem
Furthermore, it may be necessary to add a line for hostname lookup to the runner's config.toml file (section [runners.docker]):
extra_hosts = ["git.domain.com:192.168.99.100"]
see also https://gitlab.com/gitlab-org/gitlab-runner/issues/2209
In addition, there can be network trouble if network-mode host is used for gitlab/gitlab-runner. It has to be added to config.toml as well (section [runners.docker]), since the runner starts additional containers which otherwise could have a problem connecting to the gitlab host:
network_mode="host"
Finally, there might be an issue with the self-signed SSL-Cert (https://gitlab.com/gitlab-org/gitlab-runner/issues/2659).
A dirty workaround is to add
environment = ["GIT_SSL_NO_VERIFY=true"]
to the [[runners]] section.
