I have a docker registry that I'm accessing behind an nginx proxy that does authentication using client-side SSL certificates.
When I attempt to push to this registry, I need the docker daemon to send the client certificate to nginx.
According to:
https://docs.docker.com/engine/security/certificates/
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
So I thought I'd try putting the certificates inside the virtual machine itself by doing:
docker-machine ssh default
This resulted in docker complaining:
Error response from daemon: crypto/tls: private key does not match public key
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
Four years later, Google still brought me here.
I found the answer in the official docs:
https://docs.docker.com/desktop/mac/#add-client-certificates
Citing from source:
You can put your client certificates in
~/.docker/certs.d/<MyRegistry>:<Port>/client.cert and
~/.docker/certs.d/<MyRegistry>:<Port>/client.key.
When the Docker for Mac application starts up, it copies the
~/.docker/certs.d folder on your Mac to the /etc/docker/certs.d
directory on Moby (the Docker for Mac xhyve virtual machine).
You need to restart Docker for Mac after making any changes to the keychain or to the ~/.docker/certs.d directory in order for the
changes to take effect.
The registry cannot be listed as an insecure registry (see Docker Engine). Docker for Mac will ignore certificates listed under
insecure registries, and will not send client certificates. Commands
like docker run that attempt to pull from the registry will produce
error messages on the command line, as well as on the registry.
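Applied concretely, a minimal sketch looks like this (myregistry.example.com:5443 is a placeholder for your own registry host and port):
# place the client key pair where Docker for Mac will pick it up
mkdir -p ~/.docker/certs.d/myregistry.example.com:5443
cp client.cert ~/.docker/certs.d/myregistry.example.com:5443/client.cert
cp client.key ~/.docker/certs.d/myregistry.example.com:5443/client.key
# then restart Docker for Mac so the folder is copied into the VM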
A self-signed TLS CA can be installed like this; your certs might reside in the same directory.
sudo mkdir -p /Applications/Docker.app/Contents/Resources/etc/ssl/certs
sudo cp my_ca.pem /Applications/Docker.app/Contents/Resources/etc/ssl/certs/ca-certificates.crt
https://docs.docker.com/desktop/mac/#add-tls-certificates works for me. Here is a short description of how to do it for users of Docker Desktop on macOS.
Add the cert to the macOS keychain:
# Add the cert for all users
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
# Add the cert for yourself
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt
Restart Docker Desktop.
This is from the current (Oct. 2022) Docker for Mac docs. (I have included the full URL so it's easy to find!)
How do I add TLS certificates?( https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates)
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
In my case, /etc/docker doesn't exist by default either. If you use ~/.docker, Docker Desktop maps it to /etc/docker inside the VM.
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
You can try putting your key pair under ~/.docker/certs.d/<Hostname>:<Port> and restarting Docker Desktop for Mac. That should achieve what you want.
I'm trying to make GitPitch load presentations from a GitLab Omnibus installation on a local network (that is, not gitlab.com).
What I've done:
Pulled Docker image from https://hub.docker.com/r/knsit/gitpitch/dockerfile
Imported Gitpitch sample repository https://github.com/gitpitch/in-60-seconds from Github to our Gitlab
Ran docker run -d --rm --name gitpitch -e GP_GITLAB_BASE=https://gitlab.local.corp/ -e GP_GITLAB_API=https://gitlab.local.corp/api/v4/ -e GP_GITLAB_AS_DEFAULT=true -e GP_GITLAB_ACCESS_TOKEN=token -e GP_HOST=host -p 9000:9000 knsit/gitpitch
Please note the s in https: our GitLab uses HTTPS, but with a self-signed certificate.
I can connect to port 9000 of the container, but the browser shows me Error 404, saying that no Pitchme.md file exists in the repository.
I suspect that it is due to self-signed certificate of the Gitlab installation.
Is it possible to turn off checking of certificate validity for GitPitch?
I do not maintain the Docker image you are using so I can't speak to it specifically. The official GitPitch image available for deployment on-premises is GitPitch Enterprise.
That said, if you can customize the configuration for your local instance you might get the behavior you want by activating the following property:
play.ws.ssl.loose.acceptAnyCertificate=true
You can learn more about customizing the configuration for GitPitch Enterprise here. It might help you to understand a little more about custom configuration for the GitPitch server.
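As a rough sketch only: GitPitch is a Play application, so the property could be passed as a JVM system property, assuming the image gives you a hook such as a JAVA_OPTS environment variable (that variable is an assumption on my part, not a documented option of the knsit/gitpitch image):
# hypothetical: only works if the image honors JAVA_OPTS
docker run -d --rm -p 9000:9000 -e JAVA_OPTS="-Dplay.ws.ssl.loose.acceptAnyCertificate=true" knsit/gitpitch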
Of course, if you want to unlock the full GitPitch feature set on-premises, get in touch about a GitPitch Enterprise license. Details on the gitpitch.com website.
I ran into a couple of problems when installing Kubernetes with kubeadm. I am working behind a corporate network, so I declared the proxy settings in the session environment.
$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
After installing all the necessary components and dependencies, I began to initialize the cluster. In order to use the current environment variables, I used sudo -E bash.
$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"
The output then hung forever at the last message shown below.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
I then found that none of the kube components were up, while kubelet kept trying to reach kube-apiserver. sudo docker ps -a returned nothing.
What is the possible root cause of it?
Thanks in advance.
I would strongly suspect it is trying to pull down the Docker images for gcr.io/google_containers/hyperkube:v1.7.3 (or similar), which requires teaching the Docker daemon about the proxies via its systemd configuration.
That would certainly explain why docker ps -a shows nothing, but I would expect the dockerd logs (journalctl -u docker.service, or its equivalent on your system) to complain about the inability to pull from gcr.io.
Based on what I read in the kubeadm reference guide, they expect you to patch the systemd config on the target machine to expose those environment variables, rather than just setting them in the shell that launched kubeadm (although that could certainly be a feature request).
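For reference, a minimal sketch of that systemd approach (the proxy address and no-proxy list are placeholders taken from the question):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.1.102"
Then reload and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker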
I am running the latest Docker Toolbox, using latest Oracle VirtualBox, with Windows 7 as a host OS.
I am trying to enable non-TLS access to Docker remote API, so I could use Postman REST client running on Windows and hit docker API running on docker-machine in the VirtualBox. I found that if Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine insecure.
This solution disables most docker-machine and all docker CLI functionality. docker-machine ssh remains operational, forcing you to SSH into the docker machine to run docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts a Bash shell, which is where all the following commands will be run. Run the docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile This starts the "vi" editor with the elevated privileges required to edit the "profile" file, where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course. When the file is open in vi, vi is not in editing mode. Press "i" to start edit mode; now you can make changes. After you have made all the changes, hit Esc and then ZZ to save the changes and exit vi. If you need to exit vi without saving changes, after Esc type :q! and hit Enter. ":" turns on vi's command mode, and "q!" means exit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to be DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save changes as described above.
exit to leave SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back (see the profile sketch just below).
This is the end of steps required to achieve the main goal.
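For reference, the relevant lines in /var/lib/boot2docker/profile should end up looking roughly like this (the rest of the file stays as it was):
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no
A quick check from the Windows side (Git Bash), with your machine's IP substituted:
curl http://<dockerMachineIp>:2375/version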
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is still trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use docker-machine ssh command and run your docker commands after that.
I encountered the same problem and, thanks to @VladH, made it work without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the Maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
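If it helps, in a plain Windows cmd session those variables can be set for the current session like this (values copied from above; use setx instead of set to persist them):
set DOCKER_HOST=tcp://192.168.99.100:2376
set DOCKER_TLS_VERIFY=0
set DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default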
I'm trying to get a working Docker installation following this tutorial:
http://docs.docker.io/en/latest/installation/windows/
So far, I got the VM running with a manually downloaded repository (I followed the GitHub link and downloaded it as a ZIP file, because "git clone" didn't work behind my corporate proxy, even after setting up the proxy with "git config --global http.proxy ...": it kept failing with a 407 authentication error, although I entered my user name and password).
Now I am in the state in which I should use "docker run busybox echo hello world" (Section "Running Docker").
When I do this, I first get told that Docker is not installed (as shown at the bottom of the tutorial), and then, after I got it with apt-get install docker, I get "Segmentation Fault or critical error encountered. Dumping core and aborting."
What can I do now? Is this because I didn't use git clone, or is something wrong with the Docker installation? I read somewhere that apt-get install docker doesn't install the Docker I want, but some GNOME tool. Can I adjust my apt request to get the right tool?
Windows Boot2Docker behind corporate proxy
(Context: March 2015, Windows 7, behind corporate proxy)
TLDR; see GitHub project VonC/b2d:
Clone it and:
configure ..\env.bat following the env.bat.template,
add the alias you want in the 'profile' file,
execute senv.bat then b2d.bat.
You then are in a properly customized boot2docker environment with:
an ssh session able to access the internet behind the corporate proxy when you type docker search/pull.
Dockerfiles able to access the internet behind the corporate proxy when they do an apt-get update/install and you type docker build.
Installation and first steps
If you are an admin on your workstation, you can run the boot2docker installer on your Windows machine.
It currently comes with:
Boot2Docker 1.5.0 (Docker v1.5.0, Linux v3.18.5)
Boot2Docker Management Tool v1.5.0
VirtualBox v4.3.20-r96997
msysGit v1.9.5-preview20141217
Then, once installed:
add c:\path\to\Boot2Docker For Windows\ in your %PATH%
(one time): boot2docker init
boot2docker start
boot2docker ssh
type exit to exit the ssh session, and boot2docker ssh to go back in: the history of commands you just typed is preserved.
if you want to close the VM, boot2docker stop
You can actually see the VM start or stop if you open the VirtualBox GUI and type boot2docker start or boot2docker stop in a DOS cmd session.
Hosts & Proxy: Windows => Boot2Docker => Docker Containers
The main point to understand is that you will need to manage 2 HOSTS:
your Windows workstation is the host to the Linux Tiny Core run by VirtualBox in order for you to define and run containers
(%HOME%\.boot2docker\boot2docker.iso =>
%USERPROFILE%\VirtualBox VMs\boot2docker-vm\boot2docker-vm.vmdk),
Your boot2docker Linux Tiny Core is host to your containers that you will run.
In terms of proxy, that means:
Your Windows host must have its HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables set (you probably have them already, and they can be used, for instance, by VirtualBox to detect new versions of VirtualBox)
Your Tiny Core Host must have set http_proxy, https_proxy and no_proxy (note the case, lowercase in the Linux environment) for:
the docker service to be able to query/load images (for example: docker search nginx).
If not set, the next docker pull will get you a dial tcp: lookup index.docker.io: no such host.
This is set in a new file /var/lib/boot2docker/profile: it is profile, not .profile.
the docker account (to be set in /home/docker/.ashrc), if you need to execute any command other than docker that requires internet access.
any Dockerfile that you create (or the next RUN apt-get update will get you, for example, Could not resolve 'http.debian.net').
That means you must add the ENV http_proxy http://... lines first, before any RUN command requiring internet access (see the Dockerfile sketch after this list).
A good no_proxy to set is:
.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
(with '.company' the domain name of your company, for the internal sites)
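A minimal Dockerfile sketch of that ordering (the base image and proxy values are placeholders; the point is that the ENV lines come before the first RUN needing internet access):
# proxy must be known to the build before any RUN that reaches the internet
FROM debian
ENV http_proxy http://<user>:<pwd>@proxy.company:80
ENV https_proxy http://<user>:<pwd>@proxy.company:80
ENV no_proxy .company,localhost,127.0.0.1
RUN apt-get update && apt-get install -y curl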
Data persistence? Use folder sharing
The other point to understand is that boot2docker uses Tiny Core, a... tiny Linux distribution (the .iso file is only 26 MB).
And Tiny Core offers no persistence (except for a few technical folders): if you modify your ~/.ashrc with all your preferred settings and aliases... the next boot2docker stop / boot2docker start will restore a pristine Linux environment, with your modifications gone.
You need to make sure VirtualBox has the Oracle_VM_VirtualBox_Extension_Pack downloaded and added (in VirtualBox: File / Settings / Extensions, add the Oracle_VM_VirtualBox_Extension_Pack-4.x.yy-zzzzz.vbox-extpack file).
As documented in boot2docker, you will have access (from your Tiny Core ssh session) to /c/Users/<yourLogin> (i.e. %USERPROFILE% is shared by VirtualBox).
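For example, from the Tiny Core ssh session, that shared folder can then be mounted into a container (a sketch; adjust the login and paths to your setup):
docker run --rm -v /c/Users/<yourLogin>/prog/b2d:/data busybox ls /data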
Port redirection? For container and for VirtualBox VM
The final point to understand is that no port is exported by default:
your container ports are not visible from your Tiny Core host (you must use -p 80:80 for example in order to expose the 80 port of the container to the 80 port of the Linux session)
your Tiny Core ports are not exported from your VirtualBox VM by default: even if your container is visible from within Tiny Core, your Windows browser won't see it (http://127.0.0.1 won't work: "The connection was reset").
For the first point, docker run -it --rm --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4 won't work without a -p 80:80 in it.
For the second point, define an alias doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*, and then:
- if the VirtualBox 'boot2docker-vm' is not yet started, use vbm modifyvm (a sketch follows the controlvm example below)
- if the VirtualBox 'boot2docker-vm' is already started, use vbm controlvm
Typically, if I realize, during a boot2docker session, that the port 80 is not accessible from Windows:
vbm controlvm "boot2docker-vm" natpf1 "tcp-port80,tcp,,80,,80";
vbm controlvm "boot2docker-vm" natpf1 "udp-port80,udp,,80,,80";
Then, and only then, I can access http://127.0.0.1
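For completeness, the modifyvm form mentioned above (usable only while the VM is stopped; note that it takes a --natpf1 flag rather than a natpf1 subcommand) would look like:
vbm modifyvm "boot2docker-vm" --natpf1 "tcp-port80,tcp,,80,,80"
vbm modifyvm "boot2docker-vm" --natpf1 "udp-port80,udp,,80,,80"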
Persistent settings: copied to docker service and docker account
In order to use boot2docker easily:
create on Windows a folder %USERPROFILE%\prog\b2d
add a .profile in it (directly in Windows, in %USERPROFILE%\prog\b2d), with your settings and aliases.
For example (I modified the original /home/docker/.ashrc):
# ~/.ashrc: Executed by SHells.
#
. /etc/init.d/tc-functions
if [ -n "$DISPLAY" ]
then
`which editor >/dev/null` && EDITOR=editor || EDITOR=vi
else
EDITOR=vi
fi
export EDITOR
# Alias definitions.
#
alias df='df -h'
alias du='du -h'
alias ls='ls -p'
alias ll='ls -l'
alias la='ls -la'
alias d='dmenu_run &'
alias ce='cd /etc/sysconfig/tcedir'
export HTTP_PROXY=http://<user>:<pwd>@proxy.company:80
export HTTPS_PROXY=http://<user>:<pwd>@proxy.company:80
export NO_PROXY=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
export http_proxy=http://<user>:<password>@proxy.company:80
export https_proxy=http://<user>:<password>@proxy.company:80
export no_proxy=.company,.sock,localhost,127.0.0.1,::1,192.168.59.103
alias l='ls -alrt'
alias h=history
alias cdd='cd /c/Users/<user>/prog/b2d'
ln -fs /c/Users/<user>/prog/b2d /home/docker
(192.168.59.103 is usually the ip returned by boot2docker ip)
Putting everything together to start a boot2docker session: b2d.bat
create and add a b2d.bat script in your %PATH% which will:
start boot2docker
copy the right profile, both for the docker service (which is restarted) and for the /home/docker user account.
initiate an interactive ssh session
That is:
doskey vbm="c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" $*
boot2docker start
boot2docker ssh sudo cp -f /c/Users/<user>/prog/b2d/.profile /var/lib/boot2docker/profile
boot2docker ssh sudo /etc/init.d/docker restart
boot2docker ssh cp -f /c/Users/<user>/prog/b2d/.profile .ashrc
boot2docker ssh
In order to enter a new boot2docker session, with your settings defined exactly as you want, simply type:
b2d
And you are good to go:
End result:
a docker search xxx will work (it will access the internet)
any docker build will work (it will access the internet if the ENV http_proxy directives are there)
any Windows file from %USERPROFILE%\prog\b2d can be modified right from ~/b2d.
Or you actually can write and modify those same files (like some Dockerfile) right from your Windows session, using your favorite editor (instead of vi)
And all this, behind a corporate firewall.
Bonus: http only
Tuan adds in the comments:
Maybe my company's proxy doesn't allow https. Here's my workaround:
boot2docker ssh,
kill the docker process and
set the proxy export http_proxy=http://proxy.com, then
start docker with docker -d --insecure-registry docker.io
I had previously set up an EC2 instance on Ubuntu 10.04 and installed the necessary binaries to allow SSH and, more importantly, FreeNX (NoMachine) to work from my Mac OS 10.6 machine.
As this was done on a micro instance, I was keen to try it on a small instance today, so I created an AMI image from the AWS management console (browser) and launched a new small instance from the image with the exact same keypair and security settings.
Expecting the instance to work exactly the same (except much faster), I tried to connect to it using SSH and FreeNX again.
Result:
SSH is working fine and my env looks exactly the same.
NX is unable to connect; it complains that the username/password is incorrect.
I wonder why this happens, since I did an exact clone of the EC2 instance and I can connect fine using NX to the previous instance?
I had the same issue, and after a lot of searching I fixed it. It seems FreeNX lost the usernames and passwords. I fixed it by doing the following:
Log in with PuTTY as the ubuntu user, then
cd /etc/nxserver
sudo vim node.conf
set ENABLE_PASSDB_AUTHENTICATION="1" and save the file
then
sudo nxserver --adduser xxxxxx
sudo nxserver --passwd yyyyyy
sudo nxserver --restart
after that I was able to log in using nomachine with the username and password I just set.