gitlab-ci-multi-runner register
gave me
couldn't execute POST against https://xxxx/ci/api/v1/runners/register.json:
Post https://xxxx/ci/api/v1/runners/register.json:
x509: cannot validate certificate for xxxx because it doesn't contain any IP SANs
Is there a way to disable certificate validation?
I'm using Gitlab 8.13.1 and gitlab-ci-multi-runner 1.11.2.
Based on Wassim's answer and the gitlab documentation about tls-self-signed and custom CA-signed certificates, here's how to save some time if you're not the admin of the gitlab server but just of the server with the runners (and if the runner is run as root):
SERVER=gitlab.example.com
PORT=443
CERTIFICATE=/etc/gitlab-runner/certs/${SERVER}.crt
# Create the certificates hierarchy expected by gitlab
sudo mkdir -p $(dirname "$CERTIFICATE")
# Get the certificate in PEM format and store it
openssl s_client -connect ${SERVER}:${PORT} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | sudo tee "$CERTIFICATE" >/dev/null
# Register your runner
gitlab-runner register --tls-ca-file="$CERTIFICATE" [your other options]
Update 1: CERTIFICATE must be an absolute path to the certificate file.
Update 2: it might still fail with a custom CA-signed certificate because of gitlab-runner bug #2675
In my case I got it working by adding the path to the .pem file as follows:
sudo gitlab-runner register --tls-ca-file /my/path/gitlab/gitlab.myserver.com.pem
Often, gitlab-runners are hosted in a docker container. In that case, one needs to make sure that the tls-ca-file is available in the container.
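For example, with the official docker-based installation, mounting the runner's config directory (which contains certs/) into the container is enough; a sketch, assuming the /srv/gitlab-runner/config host path from the gitlab-runner docs:
# Mount the config dir so /etc/gitlab-runner/certs/<server>.crt is visible inside the container
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest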
OK, I followed this post step by step: http://moonlightbox.logdown.com/posts/2016/09/12/gitlab-ci-runner-register-x509-error and then it worked like a charm.
To prevent a dead link, I copy the steps below:
First, edit the SSL configuration on the GitLab server (not the runner):
vim /etc/pki/tls/openssl.cnf
[ v3_ca ]
subjectAltName=IP:192.168.1.1 <---- Add this line. 192.168.1.1 is your GitLab server IP.
Re-generate self-signed certificate
cd /etc/gitlab/ssl
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/gitlab/ssl/192.168.1.1.key -out /etc/gitlab/ssl/192.168.1.1.crt
sudo openssl dhparam -out /etc/gitlab/ssl/dhparam.pem 2048
sudo gitlab-ctl restart
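Before copying the certificate over, it's worth checking that it now actually contains the IP SAN (a quick sanity check, using the paths from above):
# Print the SAN section of the new certificate; it should list IP Address:192.168.1.1
openssl x509 -in /etc/gitlab/ssl/192.168.1.1.crt -noout -text | grep -A1 'Subject Alternative Name'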
Copy the new CA to the GitLab CI runner
scp /etc/gitlab/ssl/192.168.1.1.crt root@192.168.1.2:/etc/gitlab-runner/certs
Thanks @Moon Light, @Wassim Dhif
The following steps worked in my environment (Ubuntu).
Download certificate
I did not have access to the gitlab server. Therefore,
Open https://some-host-gitlab.com in a browser (I use Chrome).
View the site information, usually a green lock in the URL bar.
Download/export the certificate by navigating to the certificate information (Chrome and Firefox have this option).
On the gitlab-runner host
Rename the downloaded certificate with a .crt extension:
$ mv some-host-gitlab.com some-host-gitlab.com.crt
Register the runner now with this file
$ sudo gitlab-runner register --tls-ca-file /path/to/some-host-gitlab.com.crt
I was able to register the runner to a project.
Currently there is no way to run the multi runner with an insecure SSL option.
There is currently an open issue at GitLab about that.
Still, you should be able to get your certificate, make it a PEM file, and give it to the runner command using --tls-ca-file.
To craft the PEM file, use openssl:
openssl x509 -in mycert.crt -out mycert.pem -outform PEM
In my setup, the following worked as well. It's just important that the IP/name used for creating the certificate matches the IP/name used for registering the runner.
gitlab-runner register --tls-ca-file /my/path/gitlab/gitlab.myserver.com.pem
Furthermore, it could be necessary to add a line for hostname lookup to the runner's config.toml file as well (section [runners.docker]):
extra_hosts = ["git.domain.com:192.168.99.100"]
see also https://gitlab.com/gitlab-org/gitlab-runner/issues/2209
In addition, there could be some network trouble if network-mode host is used for gitlab/gitlab-runner. It has to be added to config.toml as well, since the runner starts additional containers which otherwise could have a problem connecting to the gitlab host (section [runners.docker]):
network_mode="host"
Finally, there might be an issue with the self-signed SSL-Cert (https://gitlab.com/gitlab-org/gitlab-runner/issues/2659).
A dirty workaround is to add
environment = ["GIT_SSL_NO_VERIFY=true"]
to the [[runners]] section.
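Putting these pieces together, the relevant part of config.toml might look like the sketch below (host names and addresses are placeholders; only add the lines your setup actually needs):
[[runners]]
  # dirty workaround: disables git's TLS verification in jobs
  environment = ["GIT_SSL_NO_VERIFY=true"]
  [runners.docker]
    # hostname lookup for the job containers
    extra_hosts = ["git.domain.com:192.168.99.100"]
    # only if gitlab/gitlab-runner run with host networking
    network_mode = "host"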
I have some trouble configuring my Windows machine to work with the az command line tools. I have tested multiple configurations: one on a locally installed system and one with a Windows-based docker container. I get the same error on both systems.
When I issue the following command:
az login --tenant my-domain.org
I get the following error:
HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /my-domain.org/.well-known/openid-configuration (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1125)')))
The container has the following az and openssl version:
PS C:\azp> az version
{
"azure-cli": "2.28.0",
"azure-cli-core": "2.28.0",
"azure-cli-telemetry": "1.0.6",
"extensions": {}
}
PS C:\azp> openssl version
OpenSSL 1.1.1k 25 Mar 2021
The local system has the following az and openssl version:
(base) PS C:\01_Dev\dockerdevimage> az version
{
"azure-cli": "2.26.1",
"azure-cli-core": "2.26.1",
"azure-cli-telemetry": "1.0.6",
"extensions": {}
}
(base) PS C:\01_Dev\dockerdevimage> openssl version
OpenSSL 1.1.1c 28 May 2019
I tried to understand why I get the error, so I tested the connection with openssl as follows:
PS C:\azp> openssl s_client -proxy 10.76.209.147:3128 -connect login.microsoftonline.com:443 -showcerts
CONNECTED(00000180)
Can't use SSL_get_servername
depth=2 DC = org, DC = my-domain, CN = PKI, CN = BB-CA-DD <-- edited manually
verify error:num=19:self signed certificate in certificate chain
verify return:1
I have also tested with the same proxy server and a Linux container, and the az command works as expected:
$ az version
{
"azure-cli": "2.25.0",
"azure-cli-core": "2.25.0",
"azure-cli-telemetry": "1.0.6",
"extensions": {}
}
$ openssl version
OpenSSL 1.1.1f 31 Mar 2020
$ az login --tenant my-domain.org
The default web browser has been opened at https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/oauth2/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`.
You have logged in. Now let us find all the subscriptions to which you have access...
[
{
"cloudName": "AzureCloud",
...
On Linux container the openssl command returns the following output:
$ openssl s_client -proxy 10.76.209.147:3128 -connect login.microsoftonline.com:443 -showcerts
Can't use SSL_get_servername
depth=2 DC = org, DC = my-domain, CN = PKI, CN = BB-CA-DD
verify return:1
I have also imported the certificate with the following command based on this link:
PS C:\azp> Import-Certificate -FilePath .\BB-CA-DD.crt -CertStoreLocation Cert:\LocalMachine\Root\
No changes. I'm not sure how to proceed.
Maybe this issue is related to the following posts and articles:
Can OpenSSL on Windows use the system certificate store?
How to Use OpenSSL with a Windows Certificate Authority to Generate TLS Certificates
Installing TLS / SSL ROOT Certificates to non-standard environments
Edit:
I've moved the solution from here to an Answer block to highlight that the issue was resolved for me. Based on the reactions, I've concluded that it is indeed useful for others too.
Finally I was able to resolve the issue as follows:
I've found the following documentation:
Setting up certificates for Azure CLI on Azure Stack Development Kit
The basic idea is to find the python installation used for Azure CLI and update the related certificate file.
In my case, the Azure CLI was installed with python at the following location:
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\python.exe
And the command that was suggested returned the following:
PS > & "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\python.exe" -c "import certifi; print(certifi.where())"
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\certifi\cacert.pem
Updating the file mentioned above solved the az login issue for me. One of the python installations provided by my-domain.org contained a properly configured cacert.pem file.
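In other words, the fix boils down to appending your root CA (in PEM format) to the cacert.pem used by the CLI's bundled python. A sketch from a bash-style shell such as Git Bash, assuming the cert was exported to corporate-root-ca.pem (a hypothetical file name) and the install path printed above:
# Back up the bundled CA file, then append the corporate root CA to it
CACERT='/c/Program Files (x86)/Microsoft SDKs/Azure/CLI2/lib/site-packages/certifi/cacert.pem'
cp "$CACERT" "$CACERT.bak"
cat corporate-root-ca.pem >> "$CACERT"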
You can use the following method.
Your Azure CLI is looking for the cert at this location (if using Windows):
Default certificate authority bundle (Windows):
C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem
Download the certificate of your Azure portal (portal.azure.com).
Append the certificate to the above cacert.pem file.
Then try az login again after restarting PowerShell.
Alternatively
If you're using Azure CLI over a proxy server, it may cause the following error: SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",). To address this error, set the environment variable REQUESTS_CA_BUNDLE to the path of the certificate authority bundle certificate file in PEM format.
Append the proxy server's certificate to this file or copy the contents to another certificate file, then set REQUESTS_CA_BUNDLE to it. You might also need to set the HTTP_PROXY or HTTPS_PROXY environment variables.
Link to the MS docs solution
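For example (a sketch; the bundle path is a placeholder, and in PowerShell the equivalent assignment is $env:REQUESTS_CA_BUNDLE = "C:\path\to\cacert.pem"):
# Point the Azure CLI (and other requests-based tools) at a custom CA bundle in PEM format
export REQUESTS_CA_BUNDLE=/path/to/cacert.pem
az login --tenant my-domain.org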
I solved this problem by changing the DNS servers for IPv4. Maybe it can work for you too. I ran the az upgrade command after the DNS change. When I ran az upgrade while it was still giving this error, it said "check internet connection". The upgrade succeeded and the related error was resolved.
I used Google DNS:
8.8.8.8
8.8.4.4
Then I set DNS back to the automatic option. I can continue to use it without any problems, and can now log in with the az login command.
We have a superset docker container which uses keycloak as an identity broker. All of this setup is working fine on http. Further, we have installed an SSL certificate on keycloak and it is also working fine. Our superset and keycloak integration code changes look exactly like those mentioned in the answer here.
Now, when we changed the auth uris from http to https in superset/docker/pythonpath_dev/client_secret.json, we get the below error after the login flow is redirected from keycloak to superset.
Forbidden
'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)'
We also tried installing root certificates on superset by mounting them on /usr/local/share/ca-certificates and then executing update-ca-certificates in the container, but still it did not help. Any idea how this can be resolved?
Thanks @sventorben for the tip. Indeed, it was python that was not able to read my CA files. Since I am new to this, I will detail all the steps I followed. However, some of these steps might be redundant.
After I received my root as well as intermediary CA files, I first converted them from DER to PEM format using openssl:
openssl x509 -inform DER -in myintermediary.cer -out myintermediary.crt
openssl x509 -inform DER -in myroot.cer -out myroot.crt
Then, I mounted these files to my superset container at the path /usr/local/share/ca-certificates/.
Then, I logged into my container, executed the update-ca-certificates command, and verified that 2 new pem files were added at /etc/ssl/certs/, i.e. myroot.pem and intermediary.pem.
Then, I added these CA files to python certifi inside my container. To find out the path of cacert.pem, I executed the below commands in a python terminal:
import certifi
certifi.where()
exit()
Here, the second command gave me the path of cacert.pem, which was /usr/local/lib/python3.7/site-packages/certifi/cacert.pem.
After this, I appended my CA files at the end of cacert.pem:
cat /etc/ssl/certs/myroot.pem /etc/ssl/certs/intermediary.pem >> /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
In the end, I logged out of my container and restarted it:
docker-compose stop
docker-compose up -d
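To check that python inside the container now trusts the chain, a quick probe like this can help (a sketch; keycloak.example.com is a placeholder for your keycloak host):
# Run inside the superset container; should print 200 instead of raising CERTIFICATE_VERIFY_FAILED
python -c "import requests; print(requests.get('https://keycloak.example.com').status_code)"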
Note:
I feel step 3 is redundant, as python does not read CA files from there. However, I still did it and am in no mood to revert and test it out again.
Also, this was my temporary fix, as executing the commands inside the container is not durable since containers are ephemeral.
Update:
Below are the steps followed for production deployment.
Convert the root certificates to PEM format using openssl.
Concatenate both PEM files into a new PEM file which will be installed as a bundle. Let's say the new PEM file is mycacert.pem and it is mounted at /app/docker/.
Create a sh file called start.sh with the 2 commands below (the complete script is sketched after them).
cat /app/docker/mycacert.pem >> /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
gunicorn --bind 0.0.0.0:8088 --access-logfile - --error-logfile - --workers 5 --worker-class gthread --threads 4 --timeout 200 --limit-request-line 4094 --limit-request-field_size 8190 'superset.app:create_app()'
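As a complete file, start.sh might look like the sketch below; note the shebang, and that the file must be executable (chmod +x start.sh) for docker-compose to run it. The gunicorn options simply mirror the commands above:
#!/bin/bash
# Append the mounted CA bundle to python's certifi store, then start superset
cat /app/docker/mycacert.pem >> /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
gunicorn --bind 0.0.0.0:8088 --access-logfile - --error-logfile - --workers 5 --worker-class gthread --threads 4 --timeout 200 --limit-request-line 4094 --limit-request-field_size 8190 'superset.app:create_app()'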
Modify docker-compose.yml and change the command as below.
command: ["/app/docker/start.sh"]
Restart the superset container.
docker-compose stop
docker-compose up -d
curl -O "https://www.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz"
This is the command I am using from the terminal to download maven, but it either times out or fails with curl: (7) Failed to connect to www.apache.org port 443: Operation timed out.
If I use a browser to download, there is no issue.
My assumption is that it is an SSL connection or certificate issue. Any idea how I can resolve the curl issue?
Please note, I am using this in a Dockerfile to create a docker image; here it is:
FROM ******/mule-42x:v2.2.1
ENV MAVEN_VERSION 3.6.3
RUN mkdir -p /opt/maven \
&& cd /opt/maven \
&& curl -O "https://www.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" \
&& tar xzvf "apache-maven-$MAVEN_VERSION-bin.tar.gz" \
&& rm "apache-maven-$MAVEN_VERSION-bin.tar.gz"
ENV MAVEN_HOME "/opt/maven/apache-maven-$MAVEN_VERSION"
ENV PATH=$MAVEN_HOME/bin:$PATH
I've tried to run the command presented in the question and encountered the following errors:
curl -O "https://www.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz"
curl tries to verify the SSL certificate but fails with the following message:
curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
So I've added the -k flag as the message suggests.
Now this works; however, the http call returns an HTML page with a 302 (redirect) to https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
So the command that has worked for me is:
curl -O -k https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz
An important side note:
I'm assuming that you've configured the network correctly and it has all the proper proxy definitions if you're running behind a proxy in your organization; otherwise, you should define the proxy first.
All in all, I suggest running this command 'manually' first (from the command line, not as part of the build) outside docker, on the machine where you run the docker build, and only once you've made sure it works, run it in the Dockerfile.
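Applied to the Dockerfile from the question, the download step might then look like the sketch below. Keep in mind that -k disables certificate verification; if you'd rather keep verification, point curl's --cacert at your proxy's CA bundle instead:
# Sketch: fetch directly from downloads.apache.org and skip TLS verification (-k)
RUN mkdir -p /opt/maven \
    && cd /opt/maven \
    && curl -O -k "https://downloads.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz" \
    && tar xzvf "apache-maven-$MAVEN_VERSION-bin.tar.gz" \
    && rm "apache-maven-$MAVEN_VERSION-bin.tar.gz"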
I am trying to log in to a private docker registry using docker community edition 18.06 for Mac, but I am getting this error during docker login from the CLI:
Error response from daemon: Missing client certificate domain.cert for
key domain.key
First, I installed the CA certificate in ~/.docker/certs.d/myprivaterepo:port using the below commands:
$ openssl genrsa -out client.key 4096
$ openssl req -new -x509 -text -key client.key -out client.cert
And it gave me this error:
Error response from daemon: Get
https://myprivaterepo:port/v2/: Service Unavailable
Then I generated the certificate with the '.crt' extension using the above command, and it started giving me this error:
Error response from daemon: Missing client certificate client.cert for
key client.key
I am assuming it requires a key and both .crt and .cert certificates to be present. In fact, I tried creating another .cert certificate with another key, but it gave me the below error:
Error response from daemon: tls: private key does not match public key
I referred to the docker documentation (https://docs.docker.com/engine/security/https/), but could not resolve the issue.
Can you please let me know how to generate the combination of these 2 certificates?
Thanks in advance!
For me, renaming and putting the crt file in /usr/local/share/ca-certificates/ worked:
# copy and rename the file
cp client.crt /usr/local/share/ca-certificates/<my-private-repo>:<port>.crt
# update certificates
sudo update-ca-certificates
# restart docker daemon
sudo service docker restart
I am new to SSL setup, so please excuse me if my question is wrong.
I have deployed a Spring Boot application on an AWS EC2 (Windows) instance with a bunch of RESTful services, exposed through a public IP address (AWS); I am able to access them publicly (http). I want to SSL (https) them now. I am in the process of purchasing a certificate. In one of the setup steps, they have given these lines about validating a text file; is anyone aware of this? Can you please suggest where I need to create the /.well-known/pki-validation folder in my Spring Boot application (Tomcat)?
The issuing vendor will provide you with a simple text-based file to place in sub-folders /.well-known/pki-validation/ in your site’s "home directory". If done properly, the vendor can view this file via HTTP:// and then issue the certificate upon confirmation.
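For a Spring Boot application with default static resource handling, the folder can live under src/main/resources/static, which is served from the web root; a sketch, where validation-file.txt stands in for the file your vendor gives you:
# Create the validation path inside the packaged static resources, then rebuild and redeploy
mkdir -p src/main/resources/static/.well-known/pki-validation
cp validation-file.txt src/main/resources/static/.well-known/pki-validation/
# The vendor should then be able to fetch it via:
#   http://<your-host>/.well-known/pki-validation/validation-file.txt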
1. Install certbot on the server.
git clone https://github.com/certbot/certbot
cd certbot
./certbot-auto --help
2. Obtain the certificate
In order to obtain the certificate, you need to expose certain files through the server. I do that using the target folder of the spring boot tomcat:
./certbot-auto certonly --webroot -w {SpringBootProjectDir}/target/classes/static/ -d {yourDomain.com}
This command obtains the certificates and leaves them in:
/etc/letsencrypt/live/{yourDomain.com}/
Tomcat can't read the certificate provided since it's not in PKCS12 format. We have to generate the cert in this format. Use this command:
sudo openssl pkcs12 -export -in /etc/letsencrypt/live/{yourDomain.com}/fullchain.pem -inkey /etc/letsencrypt/live/{yourDomain.com}/privkey.pem -out /etc/letsencrypt/live/{yourDomain.com}/keystore.p12 -name tomcat -CAfile /etc/letsencrypt/live/{yourDomain.com}/chain.pem -caname root
It will ask you for a password. Keep the password.
3. Configure the server
server.port=443
server.ssl.enabled=true
server.ssl.key-store=/etc/letsencrypt/live/{yourDomain.com}/keystore.p12
server.ssl.key-store-password={password}
server.ssl.keyStoreType=PKCS12
server.ssl.keyAlias=tomcat
Restart the server and that's it!
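To confirm the setup, a quick handshake test from another machine can help (a sketch; replace yourDomain.com as above):
# Should complete the TLS handshake and print the response headers over https
curl -Iv https://yourDomain.com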