Our existing SSL certificate is about to expire, and so we're trying to install a new one. However, the instructions on Heroku are lacking...
Creating the bundle
To create the bundle, you're supposed to concatenate a bunch of intermediate cert files together in the correct order. Example on Heroku:
$ cat EssentialSSLCA_2.crt ComodoUTNSGCCA.crt UTNAddTrustSGCCA.crt AddTrustExternalCARoot.crt > bundle.pem
(https://devcenter.heroku.com/articles/ssl-certificate-dnsimple)
We received a different set of files:
AddTrustExternalCARoot.crt
COMODORSAAddTrustCA.crt
COMODORSADomainValidationSecureServerCA.crt
(www_our_domain).crt
How should they be concatenated? Is this correct?:
$ cat (www_our_domain).crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > bundle.pem
Adding the certs
I'm assuming we don't need to provision another SSL endpoint; we just update the one we have...
$ heroku certs:add server.crt server.key bundle.pem
(https://devcenter.heroku.com/articles/ssl-endpoint#provision-the-add-on)
But it's unclear to me what happens to the old certs the add-on was originally provisioned with. Are they overwritten? Do they need to be removed?
How should they be concatenated? Is this correct?:
If you supply the three files server.crt, server.key, and bundle.pem, you can skip (www_our_domain).crt in the bundle. Otherwise, simply supply a server.crt and a server.key:
$ cat (www_our_domain).crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > server.crt
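Before uploading, a quick sanity check of the chain order is to verify the domain cert against the root and intermediates with openssl (a hedged sketch; intermediates.crt is just a scratch file name):
$ cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt > intermediates.crt
$ openssl verify -CAfile AddTrustExternalCARoot.crt -untrusted intermediates.crt "(www_our_domain).crt"
If the files and order are right, the verify command should end with OK.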
I'm assuming we don't need to provision another SSL endpoint; we just update the one we have...
To update a certificate use heroku certs:update, not heroku certs:add. See the official docs.
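A minimal sketch of the update flow (the app name is a placeholder; certs:update will prompt for confirmation):
$ heroku certs:update server.crt server.key --app your-app-name
$ heroku certs:info --app your-app-name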
Heroku's web interface has since been updated to allow you to update the SSL certificate.
From Heroku, go to Settings, copy and paste the text of your .crt file, paste in your private key, and you are done.
Related
(I'm on Windows using Cygwin for development and trying to set up my dev environment, where the other devs are on Unix. I've tried installing the Ubuntu terminal, but the organisation's rules disallow using the Windows Store, and installing it separately fails for seemingly the same reason.)
I'm trying to add two different .pem certificates provided by my organization to npm's cafile config in .npmrc, but it will only accept a single file. I've tried cat-ing them together, but it seemingly only accepts the first certificate in the file. Is there a way of adding more certificates? I have tried using the NODE_EXTRA_CA_CERTS variable, but it gives the error library:fopen:No such process, which is apparently a bug on their end.
So I'm at a loss, how can I add more than one cert?
Append its content to the first .pem file, the one referenced in the .npmrc file.
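A minimal sketch of that approach (file names are hypothetical):
$ cat org-root-ca.pem org-intermediate-ca.pem > combined-ca.pem
$ npm config set cafile /path/to/combined-ca.pem
One common reason only the first certificate gets picked up is a missing newline between the -----END CERTIFICATE----- and -----BEGIN CERTIFICATE----- markers, so check that each marker sits on its own line in the combined file.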
If you are getting this error: SSL certificate problem: unable to get local issuer certificate, run this command: git config --global http.sslCAInfo <PATH TO .pem FILE>
I have a Puppet Enterprise master server (2018.1.3) which should fetch its code with Code Manager from a Git repository via HTTPS, where the server certificate of the Git server is signed by a third-party CA.
After getting everything (as far as I know) correctly configured, I get the following:
> puppet-code deploy --dry-run
Dry-run deploying all environments.
Errors while collecting a list of environments to deploy (exit code: 1).
ERROR -> Unable to determine current branches for Git source 'puppet'
(/etc/puppetlabs/code-staging/environments)
Original exception:
The SSL certificate is invalid
Executing r10k directly produces a similar error, which makes sense, since I have not installed the third-party CA certificate anywhere yet.
So I thought r10k most likely runs on JRuby, which runs on Java (I have no idea about Ruby), so I would install the certificate in the JVM:
keytool -import -file gitCA.cer -alias gitCA -keystore /opt/puppetlabs/server/apps/java/lib/jvm/java/jre/lib/security/cacerts -storepass changeit
But I am still getting the same error, even after a system restart. So apparently r10k does not use JRuby but plain Ruby, and I installed the certificate in the OS as well: I put it under /etc/pki/trust/anchors and called update-ca-certificates (on SLES12). After that, I can access the Git repo URL with wget without any certificate error, so the certificate is installed in the OS correctly. But still, even after a system restart, I am getting the same error with r10k.
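For what it's worth, PE ships its own Ruby and OpenSSL, so the CA locations it consults are not necessarily the ones the OS tools use; a hedged way to see which defaults the bundled Ruby would look at (the interpreter path is the usual one for PE installs):
$ /opt/puppetlabs/puppet/bin/ruby -ropenssl -e 'puts OpenSSL::X509::DEFAULT_CERT_FILE; puts OpenSSL::X509::DEFAULT_CERT_DIR'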
After a lot of googling about certificate stores and Ruby, I found out that
export SSL_CERT_FILE=<path_to_cert>
fixes the direct call of r10k:
> r10k deploy display --fetch
---
:sources:
- :name: :puppet
:basedir: "/etc/puppetlabs/code/environments"
:remote: https://xxx@git.xxx/git/puppet
:environments:
- develop
- master
- production
- puppet_test
But puppet-code is still not working, with the same error message. Then I thought: obviously I am currently root, while puppet-code is executed by the user pe-puppet, so I put the export command into /etc/profile.local so that it is available to all users.
Still not working, even after a system restart and after deleting /opt/puppetlabs/server/data/puppetserver/r10k/, which had been created as root while calling r10k directly.
First question: why does r10k work, but puppet-code not?
Second question: where is the correct place for that certificate?
Many thanks,
Michael
UPDATE: 27.AUG.2018
I tried this:
sudo -H -u pe-puppet bash -c '/opt/puppetlabs/puppet/bin/r10k deploy display --fetch'
which did not work, despite setting the SSL_CERT_FILE variable in the /etc/profile.local file.
But I got it working by setting the variable in the /etc/environment file.
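For the record, /etc/environment takes plain KEY=value lines without the export keyword; the path here is just an example:
SSL_CERT_FILE=/etc/pki/trust/anchors/gitCA.pem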
But puppet-code is still not working. Why?
For those looking for a solution to this problem, check out this post on the Puppet Support Base.
Simply put, you have two options:
Use a Git source instead of an HTTPS source to refer to your repository in your Puppetfile. This option requires adding SSH keys to your Puppet master and your repository.
Add a certificate authority (CA) cert for the repository to the list of trusted CAs in /opt/puppetlabs/puppet/ssl/cert.pem.
Option one: Use a Git source instead of an HTTPS source
To deploy code from your repository using a Git source, configure a private SSH key on your Puppet master and a public SSH key on your repository:
In your Puppetfile, change references to your Git repository from an HTTPS source to a Git source:
For example, change:
mod 'site_data', :git => 'https://example.com/user/site_data.git'
to:
mod 'site_data', :git => 'ssh://user@example.com:22/user/site_data.git'
Configure your SSH keys. Configure the private key using our documentation on how to Declare module or data content with SSH private key authentication for PE 2018.1.
Note: Use the version selector to choose the right version of our documentation for your deployment.
The details of configuring your public key depend on how your Git repository is configured. Talk to your Git repository administrator.
Option two: Add a trusted CA cert
If you are unable to specify a Git source, add your repository to the list of CAs trusted by Code Manager by adding a CA cert to the file /opt/puppetlabs/puppet/ssl/cert.pem.
Transfer the cert (ca.pem) file to your CA node.
On the CA node, add the cert to the list of CAs trusted by Code Manager: cat ca.pem >> /opt/puppetlabs/puppet/ssl/cert.pem
Agent runs won't revert changes made to cert.pem because the file isn't managed by PE, but upgrades to PE will overwrite the file. After you upgrade PE, you must add the CA cert to cert.pem again.
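After appending the cert, a hedged way to confirm that the Git server now validates against that file (the hostname is a placeholder):
$ openssl s_client -connect git.example.com:443 -CAfile /opt/puppetlabs/puppet/ssl/cert.pem </dev/null 2>/dev/null | grep 'Verify return code'
It should report: Verify return code: 0 (ok).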
So, I got it working, but I'm not happy with the solution.
I turned on debug logging in /etc/puppetlabs/puppetserver/logback.xml, confirming that puppet-code is indeed calling r10k:
2018-08-27T14:54:24.149+02:00 DEBUG [qtp462609859-78] [p.c.core] Invoking shell:
/opt/puppetlabs/bin/r10k deploy --config /opt/puppetlabs/server/data/code-manager/r10k.yaml --verbose warn display --format=json --fetch
2018-08-27T14:54:24.913+02:00 ERROR [qtp462609859-78] [p.c.app] Errors while collecting a list of environments to deploy (exit code: 1).
ERROR -> Unable to determine current branches for Git source 'puppet' (/etc/puppetlabs/code-staging/environments)
Original exception:
The SSL certificate is invalid
So I did it the very quick and dirty way:
cd /opt/puppetlabs/puppet/bin/
mv r10k r10k-bin
touch r10k
chmod +x r10k
vi r10k
and
#!/bin/bash
# Wrapper: point Ruby's OpenSSL at the third-party CA, then call the real r10k with all arguments
export SSL_CERT_FILE=<new_cert_path>
/opt/puppetlabs/puppet/bin/r10k-bin "$@"
Now it is working:
puppet:~ # puppet-code deploy --dry-run
Dry-run deploying all environments.
Found 5 environments.
But I'm not happy with it. Any better idea?
I'm trying to deploy my open-source project to the Nexus Repository (https://oss.sonatype.org) using Travis CI, but unfortunately Travis doesn't find the secret key for the GPG signing step.
I followed all the steps at https://github.com/making/travis-ci-maven-deploy-skelton, but the release deploy still doesn't work. On my workstation everything works correctly and I can deploy releases to the Nexus Repository.
I'm using a script to deploy the project:
#!/usr/bin/env bash
echo "Checking the current branch..."
if [ "$TRAVIS_BRANCH" = 'master' ] && [ "$TRAVIS_PULL_REQUEST" == 'false' ]; then
echo "The current branch is: master"
echo "Run maven deploy parameter using sign and build-extras profiles..."
mvn deploy -P sign,build-extras --settings setting-maven.xml
fi
Such issues commonly occur if the service is running under a different user than the developer's account. GnuPG has per-user "GnuPG home directories" in ~/.gnupg. Make sure to import the keys under the service's user (run this command from your developer account):
gpg --export-secret-keys [key-id] | sudo -u [service user] gpg --import
Alternatively, you could use gpg's --homedir option to change the GnuPG home directory location, but be aware that GnuPG is very picky about properly set, tight permissions by default (which is a good thing).
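A small sketch of the --homedir approach (the directory and keyring file names are examples):
$ mkdir -p /srv/service-gnupg && chmod 700 /srv/service-gnupg
$ gpg --homedir /srv/service-gnupg --import secring.gpg
$ gpg --homedir /srv/service-gnupg --list-secret-keys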
The solution in https://github.com/making/travis-ci-maven-deploy-skelton relies on symmetrically encrypted keyrings in your $GPG_DIR. In the sample that would be the folder deploy.
To create these keyrings, you do this (copied):
$ export ENCRYPTION_PASSWORD=<password to encrypt>
$ openssl aes-256-cbc -pass pass:$ENCRYPTION_PASSWORD -in ~/.gnupg/secring.gpg -out deploy/secring.gpg.enc
$ openssl aes-256-cbc -pass pass:$ENCRYPTION_PASSWORD -in ~/.gnupg/pubring.gpg -out deploy/pubring.gpg.enc
This creates the encrypted keyrings in the folder deploy. You probably need to create the folder before you run the openssl commands.
Both encrypted keyrings need to be checked in, so that they are available as part of the project at build-time.
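For example, after running the openssl commands above (folder name as in the skeleton project):
$ git add deploy/secring.gpg.enc deploy/pubring.gpg.enc
$ git commit -m "Add encrypted GPG keyrings for CI"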
At build-time you need to decrypt the keyrings. You do this by adding something like this to your .travis.yml file:
before_install:
- openssl aes-256-cbc -pass pass:$ENCRYPTION_PASSWORD -in $GPG_DIR/pubring.gpg.enc -out $GPG_DIR/pubring.gpg -d
- openssl aes-256-cbc -pass pass:$ENCRYPTION_PASSWORD -in $GPG_DIR/secring.gpg.enc -out $GPG_DIR/secring.gpg -d
Notice how the openssl commands use $GPG_DIR? That's basically your deploy directory. To make sure Travis knows about $GPG_DIR, define it, e.g. like this:
env:
global:
- GPG_DIR="`pwd`/deploy"
So basically Travis now knows how to decrypt your GPG keyrings and puts them into a defined location. Now you still have to tell GPG how to pick it up. For that you have two options:
Properties in your pom.xml or
properties in your settings.xml
The project https://github.com/making/travis-ci-maven-deploy-skelton uses the first option (pom.xml). The essential part is:
<profiles>
<profile>
<id>ossrh</id>
<properties>
<gpg.executable>gpg</gpg.executable>
<gpg.keyname>${env.GPG_KEYNAME}</gpg.keyname>
<gpg.passphrase>${env.GPG_PASSPHRASE}</gpg.passphrase>
<!-- tell gpg to NOT use the default keyring from the current user's home -->
<gpg.defaultKeyring>false</gpg.defaultKeyring>
<!-- instead tell gpg to use the keyrings from your GPG_DIR -->
<gpg.publicKeyring>${env.GPG_DIR}/pubring.gpg</gpg.publicKeyring>
<gpg.secretKeyring>${env.GPG_DIR}/secring.gpg</gpg.secretKeyring>
</properties>
[...]
With these properties you define parameters for the gpg executable. This works, because you have set up $GPG_DIR in your .travis.yml file.
In essence you tell gpg to not use the standard keyrings in the current user's home directory, but instead the keyrings you've just decrypted and stuck into your $GPG_DIR.
You may ask yourself where the other <properties>/env variables come from. They are appended to your .travis.yml when running the following commands:
$ travis encrypt --add -r <username>/<repository> SONATYPE_USERNAME=<sonatype username>
$ travis encrypt --add -r <username>/<repository> SONATYPE_PASSWORD=<sonatype password>
$ travis encrypt --add -r <username>/<repository> ENCRYPTION_PASSWORD=<password to encrypt>
$ travis encrypt --add -r <username>/<repository> GPG_KEYNAME=<gpg keyname (ex. 1C06698F)>
$ travis encrypt --add -r <username>/<repository> GPG_PASSPHRASE=<gpg passphrase>
<username> is your Github username and <repository> your Github repo.
After running the travis encrypt --add commands, your .travis.yml file will have appended entries like this:
env:
global:
- GPG_DIR="`pwd`/deploy"
- secure: VYxU+0zMoKExcopJ8z74Pd5KE6TnoP72hZchnpy+gxLVrt4d5lBJ042xT2D/4qebG8stHpq5DtYO8EQaZVMKVQl48fXwQk4aWiY0OWNY2Pz63Y9IFDGX0n/B1NPxbPToCoXHsddGvAVOlRXbDTfkF+yc3nheaLLnjhxFAM9X0/e1/bqnFyrwqrJmenG7RaGclsscjLPLExTAy+jIbj59loZkclVfKpMS98Ol605Xpmn6VTxr8Z7k3FEQ4mt3VI350QKBbmMsiEWpVAbPVPWsUkEYpM5VuU7Pi1W5fbaJBxIlOAdKjDtYfUvyY63iQK79787dBrGM5T2FDUV05UXpi4NvKnrcdkhuOFlXB4Io3qroen1lrh5igBdIlYc4kBpvDMpnewIuM7F+5fPS9XgBZIlOkiAEPqInr0sonlj4c+vkd92PeujDYiKUCA2uVLEzLYAAu7oC6O5K18JBzLurKNAda+9f+XuQrc1t140u8jic9YF7oM7fUdiu1MJ6j9WMiu3Syh9yjAOC+5RaBxy/ZcDUmYazH/oQNe3d55AMKYOdsryF51W/WfSrHoHtKUGsy9RsDvY690GU6XZ+Zev79nRKs9uVSqqlcGv+YPoB3zlDjmks51fm0HovgWWsCDbDgP4/FXPFKzr0Ht6qnYjJ=
- secure: Wl1/oERtbz739uq+cfHQdpXGC/ZIX1l9HBihyTSt0qta7HlqQeCHtCQfpbq92BYj518CZjNl3ijXlYaXOoW4Z1L2VGzJwxNVdiG2XVkUrXfTO6i711Q/f6ezINlDhRhH+Sn1GhFPB8x7i5vnlqSvMqG19x2mfPsD50yi/58elU7t3zUg5HnBpHfyCdrlaa1pI/sHYIog4Y/Nm3H6/9WDu5ErnhmSKT9LCHdXDXn8AO8UfQXP/eMHUAMdnn8LP/+HtGXmI1Jij9UFaB1PTMyKRCMiVizMDgMqtjXhzBqg8Wqy6pp2yicSEn4JVgBM26vsNQwHXgz3kut4FwlY7Aph8Mx61jU8OvVh/vD6y1gm7r7PW4lcy1PT3pTtfL2XLH3p0/cl+WqHJIfupyOZg/z0dEd0JKJAxJ7XR3y6Z0QVTKe0QTSOO8O5g+EfuyoJFC4d28G8gM+Zc1OctpXOMrU0l4x3PDrb8xoxugsUMpYfUIQl7L9Dxr6PqHbDIgNM/5N5L3ZwWiI12fKtIqfurVJ2jsA9ahzCskzRSK745lwIPrpw6NVPjN8CzbTWZjyR9aMuxpHO+ptMmXxjo3asA7tJQDBtfbAWWz0FGro429UK3IWa5dgtVQpP2GG4/VWtUM1CUhG9x74FpojIHa4EzpLji=
[...]
These secure entries are translated to environment variables by Travis at build-time.
A couple more remarks:
By default Travis uses gpg version 1. Because of issues with entering passphrases via loopback in gpg 2.0, version 1 is preferred. Once Travis supports gpg 2.2, this may change.
If you haven't done so yet, you need to publish your key to one of the following keyservers:
http://keys.gnupg.net:11371
http://keyserver.ubuntu.com:11371
http://pool.sks-keyservers.net:11371
Good luck.
I'm new to generating certificates and using letsencrypt, so I'm not sure if this is a dumb question or even possible.
I want to create a small example web application using Node.js, and I want to test how to implement HTTPS and how to get a proper certificate.
So I tried to use letsencrypt. But it doesn't seem to work.
I'm using my local machine (Windows 10) and I'm cloning the Git repository. Afterwards I try to execute the command ./letsencrypt-auto, but Windows won't recognize the script as a command.
How is it possible to use Let's Encrypt locally on my Windows 10 machine, where no web server is (usually) running?
letsencrypt-auto only works with Apache on Debian-based OSes (for now). There's no way to use it on Windows, yet.
That said, people are trying. You might find this project interesting. (Disclaimer: I have no affiliation with that and haven't tried it myself.)
Alternatively, you can look at https://github.com/minio/concert, built using Go; you can get a Windows binary quite easily.
Install
You need to have golang installed to compile concert.
$ go get -u github.com/minio/concert
How to run?
Generates certs in certs directory by default.
$ sudo concert gen <EMAIL> <DOMAIN>
Generate certificates in custom directory.
$ sudo concert gen --dir my-certs-dir <EMAIL> <DOMAIN>
Renew certificates in certs directory by default.
$ sudo concert renew <EMAIL>
Renew certificates in a custom directory.
$ sudo concert renew --dir my-certs-dir <EMAIL>
Run a server with automatic renewal.
$ sudo concert server <EMAIL> <DOMAIN>
Alternatively, you can use ngrok to expose your local port 80 and make it available to the world via a secure tunnel on subdomain.ngrok.io. You can also point a CNAME from your own domain name at that tunnel domain.
All you have to do is:
Create a free account with https://ngrok.com/. It works on all operating systems.
Run ngrok http 80 and note your subdomain.ngrok.io
Add the above subdomain to your /etc/hosts as 127.0.0.1 subdomain.ngrok.io. This way you will be able to access that domain locally with SSL, while ngrok will make sure Let's Encrypt is able to access it via the Internet.
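A hedged sketch of steps 2 and 3 (the subdomain is whatever ngrok assigns you):
$ ngrok http 80
# in a second terminal, using the subdomain ngrok printed:
$ echo "127.0.0.1 abc123.ngrok.io" | sudo tee -a /etc/hosts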
Edit: Note that this method might not work reliably. Let's Encrypt has a rate limit of 20 certificates per registered domain per week, which means at most 20 certificates in total can be generated for all ngrok users per week.
Disclaimer: I have no affiliation with ngrok.io.
I see that people have had issues in the past with Heroku and SSL and matching .pem certs (like this: Heroku SSL error: key doesn't match PEM certificate).
However, our site had SSL running fine until it expired. We renewed with GoDaddy and have been following the instructions here: http://blog.matthodan.com/how-to-setup-heroku-hostname-ssl-with-godaddy
Now the weird thing is that we're continually getting this readout:
"Pem is invalid / Key doesn't match the PEM certificate"
I recently renewed my SSL cert for a Heroku-hosted domain registered at GoDaddy. I did the following (do not remove the old cert at Heroku):
Apply the renewal credit at GoDaddy, use the previous certificate signing request (CSR), select the option for a third-party hosted domain, and submit.
After the certificate has been issued by GoDaddy, download the certs for the 'Nginx' server type.
Unzip the zipped file, cd into the directory, and combine the certificates:
cat 48bcdx31xxxx.crt sf_bundle-g2-g1.crt > combined.crt
Run the certificate update command: heroku certs:update combined.crt server.key
Confirm the changes by typing your app name when prompted.
Check your certs: heroku certs:info --app=app_name
Done!
After a few seconds the app is running on the previous SSL endpoint URL, so there is no need to update DNS.
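Incidentally, if Heroku still complains that the key doesn't match the PEM certificate after a renewal, it's worth confirming locally that the combined cert and the key belong together; a hedged check using the file names from the steps above:
$ openssl x509 -noout -modulus -in combined.crt | openssl md5
$ openssl rsa -noout -modulus -in server.key | openssl md5
The two hashes should be identical.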
For those of you having a hard time getting your app running on Heroku again after renewing the SSL certificate on GoDaddy, here is how I fixed the problem:
Remove the old certificate from Heroku using the following command:
$ heroku certs:remove
After renewing the certificate on GoDaddy, download the certificate (choose "nginx" as the web server).
Unzip the downloaded file and then go into the folder in your terminal. (You should see two files; in my case, one with a weird name, 82321234a.crt, and gd_bundle.crt.)
Run the following command there to create a new CRT file:
$ cat 82321234a.crt gd_bundle.crt > combined.crt
Go into the certificate folder for your application in your terminal.
Add the new certificate to Heroku using the following command:
$ heroku certs:add combined.crt server.key
And, that's it!
I hope this helps.