Difference between certbot and certbot-auto - lets-encrypt

I am using Let's Encrypt on my server to support HTTPS. When looking around, I find commands using certbot and others using certbot-auto with similar functionality. Do you need to consistently use one or the other? Can someone explain the difference and in which case you would use each one?

If you use the certbot or letsencrypt command, you are using packages provided by your operating system vendor, which are often slow to update. If this is the case, you should probably switch to certbot-auto, which provides the latest version of Certbot on a variety of operating systems.
From here: https://community.letsencrypt.org/t/important-what-you-need-to-know-about-tls-sni-validation-issues/50811

There is an important difference (at least, in two of my production setups).
This info is current as of 2020-04-05.
Certbot is the OS's "official" release, while certbot-auto is the cutting-edge version that has to be downloaded manually.
Having said this, there seems to be an unintended key difference when working with wildcard certificates with NO automation script (e.g. DigitalOcean HAS an automation script, so in your case this will not be an issue):
certbot-auto (v. 1.3.0) will NOT renew its own certificates when they near their expiration date.
certbot (v. 0.31.0) WILL renew your near-expiring, certbot-auto-generated wildcard certificates.
Of course, this seems to be a bug that needs fixing, but in the meantime it's valid to use "certbot" to MANUALLY renew "certbot-auto"-generated certificates. The instructions don't point you in this direction.
certbot certonly --manual --manual-public-ip-logging-ok --preferred-challenges dns-01 --server https://acme-v02.api.letsencrypt.org/directory -d "*.example.com" -d example.com
NOTE: This only seems to affect Wildcard (*.example.com), NON-automatic scripted certificates. It's your responsibility to check viability on your particular setup.
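If you want to check which certificates are nearing expiry before deciding which tool to renew with, the certificates subcommand (available in both certbot and certbot-auto) lists issued certificates together with their expiration dates:
certbot certificates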

Related

How to run jenkins with HTTPS on MacOS

I have a macOS-based machine, and I am running a Jenkins instance on it. It runs over HTTP (http://127.0.0.1:8080). I would like to run it with SSL/HTTPS (https://127.0.0.1:8080).
How can I achieve this? Any help would be appreciated.
Thanks.
I tried running it on port 8443 (127.0.0.1:8443). It didn't work.
If you want your instance to just be available over HTTPS, you can configure that with the startup parameters when launching Jenkins, e.g.:
java -jar jenkins.war \
  --httpPort=-1 \
  --httpsPort=443 \
  --httpsKeyStore=path/to/keystore \
  --httpsKeyStorePassword=keystorePassword
The keystore is a Java keystore containing your certificate. If you need one, you can use Let's Encrypt or a self-signed certificate.
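For a quick test, a self-signed keystore can be generated with keytool (the alias, password, and file name below are just placeholders):
keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 \
  -alias jenkins -dname "CN=127.0.0.1" \
  -keystore jenkins.jks -storepass changeit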
For a bigger instance, I would recommend a reverse proxy in front of Jenkins. The documentation on how to do this can be found here: https://www.jenkins.io/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/
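As a rough sketch (the server name and certificate paths are placeholders, not taken from the Jenkins docs), an nginx reverse proxy terminating TLS in front of Jenkins could look like:
server {
    listen 443 ssl;
    server_name jenkins.example.com;
    ssl_certificate     /etc/letsencrypt/live/jenkins.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jenkins.example.com/privkey.pem;
    location / {
        # Jenkins keeps listening on plain HTTP locally; nginx handles TLS
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}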

set up conda for caching proxy (MacOs + Squidman)

How can I configure conda to use a caching proxy?
So far I have:
installed SquidMan and set its host and port to 127.0.0.1:8080.
set the network settings to proxy both http and https at this address
edited .condarc to use a proxy
I think that SquidMan is set up correctly. If I switch it off and try to browse the internet, I get an error message "The proxy server is refusing connections". This happens for both http and https websites and also if I enter an IP directly (no DNS in between).
The edited .condarc is this:
proxy_servers:
  http: http://127.0.0.1:8080
  https: http://127.0.0.1:8080
Those are the same addresses as in the system proxy settings - which seem to work fine for browsing.
As a test I'm cycling through
conda install python=3.6
conda install python=3.7
conda clean --all
and hoping to see very fast download speeds for those python packages.
But they are always painfully slow.
I checked the SquidMan settings. There is a "maximum object size" setting; maybe that prevents the conda downloads from being cached. Are they too big?
So I dialled those settings up to the maximum (well bigger than the conda downloads) and tried again. Same results.
How do I configure SquidMan to work with conda?
It sounds like you have likely configured everything properly for proxying traffic through squid. However, conda uses HTTPS to download its packages. In a basic configuration, squid can only pass SSL connections through from the client to the server. This traffic is already encrypted, so it cannot be cached. The options available to you are:
Use squid's ssl bump feature to have squid decrypt and re-encrypt the data passing through it. Getting this set up is somewhat tricky because you have to generate a self-signed certificate and get it trusted by conda (using conda install --insecure might be all that is needed for conda).
Use a conda-specific proxy server. Anaconda, Inc. offers such a server as a product, so you are unlikely to find much built-in support for this in the open source conda tools. Sonatype's Nexus repository manager also claims to proxy conda repositories in its documentation.
Use conda's built-in support for local caching. Since you referenced conda clean in your question, you are aware of this cache and must have some reason for not using it. For a single machine, the conda pkgs_dir should work pretty well. For multiple machines, maybe you could get by with a network share pkgs_dir or copying all the .tar.bz2 and .conda files from a local machine into the pkgs_dir for each machine.
Add a second layer of proxying. conda allows you to specify channels with an http protocol. You could set up a proxy server that accepts http requests and passes them on as https requests. You could put your squid caching proxy in front of this http-https proxy. This way, squid will see plain http traffic that it can inspect and cache, and you can still access https only conda repositories. As an example, with nginx, you could do this with a simple conf file like:
server {
    listen 80;
    server_name localhost 127.0.0.1;
    location / {
        proxy_pass https://repo.anaconda.com/;
    }
}
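With that in place, conda can be pointed at the plain-http endpoint while still going through squid. As an illustrative sketch (assuming the nginx server above is reachable at localhost and proxies repo.anaconda.com), the .condarc might become:
# channel path below assumes nginx proxies https://repo.anaconda.com/
channels:
  - http://localhost/pkgs/main
proxy_servers:
  http: http://127.0.0.1:8080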

Should I remove the --no-bootstrap and --no-self-upgrade options when running certbot-auto automatically?

I'm developing a program to automatically renew my SSL certificates for my server.
Currently, the command to renew SSL certificates is /opt/certbot-auto certonly --keep --no-bootstrap --no-self-upgrade --non-interactive --webroot -w /usr/share/nginx/html -d myDomain.
I'm wondering if I should remove the --no-self-upgrade option from the command, because if I don't upgrade certbot-auto, I'll get a warning saying Attempting to parse the version <new_version> renewal configuration file found at XXX with version <old_version> of Certbot. This might not work.. I'm afraid that some day in the future, the program will not be able to renew the SSL certificates on my machine if I don't upgrade certbot-auto.
If I remove --no-self-upgrade, should I also remove --no-bootstrap? Because new versions of certbot-auto might have different OS dependencies.
Yes, I would recommend removing those flags.
Also, since you're using Nginx, you probably want to use Certbot's built-in Nginx support. To do this, run once with just /opt/certbot-auto --nginx and follow the prompts, including saying yes to renewing a certificate even though you already have one. That will save the renewal settings using the Nginx support.
You can then change your cron job to simply /opt/certbot-auto -q renew, which will automatically renew all previously issued certificates, and reload Nginx.
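For reference, a typical crontab entry for this (the schedule below is arbitrary) would be:
# renew quietly twice a day; certbot only acts when a certificate is near expiry
0 3,15 * * * /opt/certbot-auto -q renew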

Where to add client certificates for Docker for Mac?

I have a docker registry that I'm accessing behind an nginx proxy that does authentication using client-side ssl certificates.
When I attempt to push to this registry, I need the docker daemon to send the client certificate to nginx.
According to:
https://docs.docker.com/engine/security/certificates/
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
So I thought I'd try putting the certificates inside the virtual machine itself by doing:
docker-machine ssh default
This resulted in docker complaining:
Error response from daemon: crypto/tls: private key does not match public key
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
4 yrs later Google still brought me here.
I found the answer in the official docs:
https://docs.docker.com/desktop/mac/#add-client-certificates
Citing from source:
You can put your client certificates in
~/.docker/certs.d/<MyRegistry>:<Port>/client.cert and
~/.docker/certs.d/<MyRegistry>:<Port>/client.key.
When the Docker for Mac application starts up, it copies the
~/.docker/certs.d folder on your Mac to the /etc/docker/certs.d
directory on Moby (the Docker for Mac xhyve virtual machine).
You need to restart Docker for Mac after making any changes to the keychain or to the ~/.docker/certs.d directory in order for the
changes to take effect.
The registry cannot be listed as an insecure registry (see Docker Engine). Docker for Mac will ignore certificates listed under
insecure registries, and will not send client certificates. Commands
like docker run that attempt to pull from the registry will produce
error messages on the command line, as well as on the registry.
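In other words, the expected layout on the Mac side looks roughly like this (the registry host and port are placeholders):
~/.docker/certs.d/myregistry.example.com:5000/
    ca.crt        # CA certificate for the registry, if it is self-signed
    client.cert   # client certificate
    client.key    # client private key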
A self-signed TLS CA can be installed like this; your certs might reside in the same directory.
sudo mkdir -p /Applications/Docker.app/Contents/Resources/etc/ssl/certs
sudo cp my_ca.pem /Applications/Docker.app/Contents/Resources/etc/ssl/certs/ca-certificates.crt
https://docs.docker.com/desktop/mac/#add-tls-certificates works for me. Here is a short description of how to do it for users of Docker Desktop on macOS.
Add the cert into the macOS keychain:
# Add the cert for all users
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
# Add the cert for yourself
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt
Then restart Docker Desktop.
This is the current (Oct. 2022) documentation for Docker for Mac (full URL spelled out for clarity):
How do I add TLS certificates? (https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates)
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
In my case, I also don't have /etc/docker by default. If you use ~/.docker, Docker Desktop maps it into /etc/docker inside the VM.
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
You can try putting your key pairs under ~/.docker/certs.d/<Hostname>:<port> and restarting Docker Desktop for Mac. That should achieve what you want.

Why would git-upload-pack (during git clone) hang?

I've read several other 'git hangs on clone' questions, but none match my environment and details. I'm using git built under cygwin (msys git is not an option) to clone a repo from a Linux host over SSH.
git clone user@host:repo
I've tested against the same host on other platforms, and it works fine, but on this Windows machine the clone hangs indefinitely. I set GIT_TRACE=1 and it looks like the problem is with this command:
'ssh' 'user@host' 'git-upload-pack '\''repo'\'''
My SSH keys are set up correctly: ssh user@host works fine. When I run the command, I get a bunch of output that ends like this:
...
003dbbd3db63763922ad75bbeefa3811dce001576851 refs/tags/start
0000
Then it hangs for 20+ minutes, which is the longest I've waited before killing it.
The server has Git 1.7.11.7 with OpenSSH 5.9p1, while the client has Git 1.7.9 with OpenSSH 6.1p1.
Is that supposed to be the end of the git-upload-pack output? Is this a bug in Git or my configuration?
The upcoming Git 1.8.5 (Q4 2013) will document the smart HTTP protocol in more detail.
See commit 4c6fffe2ae3642fa4576c704e2eb443de1d0f8a1 by Shawn O. Pearce.
With that detailed documentation, the idea would be to monitor the web requests exchanged between your git client and the server, and see if they conform to what is documented below.
That could help in pinpointing where the service "hangs".
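One way to capture that from the client side (the URL here is just a placeholder) is to turn on git's transport tracing while cloning over HTTP:
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git clone https://example.com/repo.git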
The file Documentation/technical/http-protocol.txt insists on:
the "Smart Service git-upload-pack"
Clients MUST first perform ref discovery with '$GIT_URL/info/refs?service=git-upload-pack'.
C: POST $GIT_URL/git-upload-pack HTTP/1.0
S: 200 OK
S: Content-Type: application/x-git-upload-pack-result
S: Cache-Control: no-cache
S:
S: ....ACK %s, continue
S: ....NAK
Clients MUST NOT reuse or revalidate a cached response.
Servers MUST include sufficient Cache-Control headers
to prevent caching of the response.
Servers SHOULD support all capabilities defined here.
Clients MUST send at least one 'want' command in the request body.
Clients MUST NOT reference an id in a 'want' command which did not appear in the response obtained through ref discovery unless the server advertises capability "allow-tip-sha1-in-want".
The "negociation" algorithm
(c) Send one $GIT_URL/git-upload-pack request:
C: 0032want <WANT #1>...............................
This worked for me, in case it helps someone else.
Check your git remote URL. It might hang at git-upload-pack in a trace if you're using the wrong URL type. Change the remote URL from the SSH form (git@github.com:) to HTTPS (https://github.com/).
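For example (the repository path is a placeholder), you can inspect and switch the remote like this:
git remote -v
git remote set-url origin https://github.com/user/repo.git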
We have faced a similar issue, and we attributed it to the following: our git repo has a LOT of binary files checked in (multiple versions, over the past 1.5 years of this project), so we assumed that this was the cause.
Supporting this theory, our more recent code bases (which do not have as many binary files and versions) do not exhibit this behavior.
Our setup: Git on Linux, with a site-to-site VPN between London and India over a T1 line.
I was having this same problem after I added some jazz like this to my ssh config in order to set window titles in tmux:
Host *
    PermitLocalCommand yes
    LocalCommand if [[ $TERM == screen* ]]; then printf "\033k%h\033\\"; fi
Getting rid of that fixed my git clone.
An outdated PuTTY can also cause this. Your system might be using plink.exe as GIT_SSH.
You can install the latest development build from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html to make sure this is not the problem.
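To see whether that applies to you, check which SSH client git has been told to use (in a Cygwin or Git Bash shell):
echo "$GIT_SSH"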
My problem was simple. I updated the VPN client and git started hanging. I quit the VPN client and restarted it.
