Connecting to Docker-Machine via docker-py on OSX

Context
I am trying to use docker-py to connect to docker-machine on OSX.
I can't simply use the standard Client(base_url='unix://var/run/docker.sock') since docker is running on a docker-machine Virtual Machine, not my local OS.
Instead, I am trying to connect securely to the VM using docker.tls:
from docker import Client
import docker.tls as tls
from os import path

CERTS = path.join(path.expanduser('~'), '.docker', 'machine', 'certs')
tls_config = tls.TLSConfig(
    client_cert=(path.join(CERTS, 'cert.pem'), path.join(CERTS, 'key.pem')),
    ca_cert=path.join(CERTS, 'ca.pem'),
    verify=True
    # verify=False
)
client = Client(base_url='https://192.168.99.100:2376', tls=tls_config)
Problem
When I try to run this code (running something like print client.containers() on the next line), I get this error:
requests.exceptions.SSLError: hostname '192.168.99.100' doesn't match 'localhost'
I've been trying to follow the GitHub issue on a similar problem with boot2docker (i.e. the old version of docker-machine), but I don't know much about how SSL certificates are implemented. I tried adding 192.168.99.100 localhost to the end of my /etc/hosts file as suggested in the GitHub issue, but that did not fix the problem (even after export DOCKER_HOST=tcp://localhost:2376).
Maybe connecting via the certificates is not the way to go for docker-machine, so any answers with alternative methods of connecting to a particular docker-machine via docker-py are acceptable too.
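For completeness: later docker-py releases add an assert_hostname option to TLSConfig, which keeps certificate verification but skips the hostname check that raises the SSLError above. A sketch of how the connection settings could be assembled (the helper function is mine, not part of docker-py):

```python
from os import path

def machine_tls_kwargs(certs_dir, host, port=2376, assert_hostname=False):
    """Build the settings for docker.tls.TLSConfig / docker.Client.

    assert_hostname=False skips the hostname check that produces
    "hostname '192.168.99.100' doesn't match 'localhost'" -- the machine
    cert is issued for 'localhost', not the VM's IP.
    """
    return {
        'base_url': 'https://%s:%d' % (host, port),
        'client_cert': (path.join(certs_dir, 'cert.pem'),
                        path.join(certs_dir, 'key.pem')),
        'ca_cert': path.join(certs_dir, 'ca.pem'),
        'verify': True,
        'assert_hostname': assert_hostname,
    }
```

The base_url, client_cert, ca_cert, and verify entries go to TLSConfig/Client exactly as in the snippet above; assert_hostname is the only addition.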
UPDATE
It seems v0.5.2 of docker-machine addresses this via the --tls-san flag on the create command. I still need to verify that, but installation via brew is still giving v0.5.1, so I'll have to install it manually.

It looks like with docker-py v1.8.0 you can connect to docker-machine as below:
import docker
client = docker.from_env(assert_hostname=False)
print client.version()
See the docker-py documentation for details.

I installed docker-machine v0.5.2 as detailed in the release on github. Then I just had to create a new machine as follows:
$ docker-machine create -d virtualbox --tls-san <hostname> <machine-name>
Then I added <hostname> <machine-ip> to /etc/hosts. The code worked after that:
from docker import Client
import docker.tls as tls
from os import path

CERTS = path.join(path.expanduser('~'), '.docker', 'machine', 'machines', <machine-name>)
tls_config = tls.TLSConfig(
    client_cert=(path.join(CERTS, 'cert.pem'), path.join(CERTS, 'key.pem')),
    ca_cert=path.join(CERTS, 'ca.pem'),
    verify=True
)
client = Client(base_url='https://<machine-ip>:2376', tls=tls_config)
where I replaced <machine-name> in the CERTS path and replaced <machine-ip> in the base_url.
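The same settings can also be derived from the variables that docker-machine env exports (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY), which avoids hard-coding the machine name and IP. A sketch; the helper name is my own:

```python
import os
from os import path

def client_kwargs_from_machine_env(environ=None):
    """Derive docker.Client settings from the environment variables that
    `eval $(docker-machine env <machine-name>)` sets in the shell."""
    env = os.environ if environ is None else environ
    # DOCKER_HOST looks like tcp://192.168.99.100:2376; the TLS endpoint
    # is the same address over https.
    host = env['DOCKER_HOST'].replace('tcp://', 'https://')
    certs = env['DOCKER_CERT_PATH']
    return {
        'base_url': host,
        'client_cert': (path.join(certs, 'cert.pem'),
                        path.join(certs, 'key.pem')),
        'ca_cert': path.join(certs, 'ca.pem'),
        'verify': env.get('DOCKER_TLS_VERIFY') == '1',
    }
```

With this, pointing the script at a different machine only requires re-running docker-machine env, not editing the code.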

Related

Unable to create new docker instances with docker-machine

I am using AWS with docker-machine to create and provision my instances. I would use this command to create a new instance:
docker-machine create --driver amazonec2 --amazonec2-instance-type "t2.micro" --amazonec2-security-group zhxw-production-sg zhxw-production-3
About a month ago, that worked fine. I just went to create a fresh machine, and I can no longer connect to it. When I run the above command, it gets stuck on "waiting for SSH to be available..."
Running pre-create checks...
Creating machine...
(zhxw-production-3) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
It just hangs at that point. If I cancel the command and check the AWS EC2 console, it suggests that the instance is running.
When I run docker-machine ls, it also suggests that it's running, but with errors:
$-> docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
zhxw-production-2 - amazonec2 Running tcp://3.86.xxx.xxx:2376 v19.03.12
zhxw-production-3 - amazonec2 Running tcp://54.167.xxx.xxx:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
I'm able to connect to the zhxw-production-2 machine (which has been running for a month). Just not the new one zhxw-production-3 one I just launched.
$-> docker-machine env zhxw-production-3
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "54.167.123.108:2376": dial tcp 54.167.123.108:2376: connect: connection refused
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
The regenerate-certs command doesn't help either. I'm not really sure where to start debugging, because as far as I can tell, the docker-machine create command is the very beginning.
Turned out to be a problem with SSH to my AWS environment. I had my public IP address whitelisted, but it had changed.
I came across a problem like this and found out that the AWS EC2 AMI did not have SSH installed, so I had to use a different AMI, e.g. Ubuntu.
I went through the same problem recently and found that the cause was the public IP changing when I enabled an Elastic IP on the machine. I don't know if this is your case; maybe my solution will help you or others. It goes as follows:
Usually the file path is /Users/<your_user_name>/.docker/machine/machines/<problem_machine_name>/config.json.
Edit the value of the "IPAddress" parameter.
After making the change, run: docker-machine regenerate-certs <ec2_instance_name>
With these procedures my problem was solved. I hope it helps!
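The manual edit above can be scripted. A sketch, assuming the standard docker-machine layout where each machine keeps a config.json whose Driver section holds an IPAddress field:

```python
import json

def update_machine_ip(config_path, new_ip):
    """Point a docker-machine config at the machine's new public IP.

    config_path is the machine's config.json (assumed to live under
    ~/.docker/machine/machines/<machine-name>/).  Run
    `docker-machine regenerate-certs <machine-name>` afterwards so the
    certs match the new address.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    cfg['Driver']['IPAddress'] = new_ip
    with open(config_path, 'w') as f:
        json.dump(cfg, f, indent=4)
```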

Where to add client certificates for Docker for Mac?

I have a docker registry that I'm accessing behind an nginx proxy that does authentication using client-side ssl certificates.
When I attempt to push to this registry, I need the docker daemon to send the client certificate to nginx.
According to:
https://docs.docker.com/engine/security/certificates/
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
So I thought I'd try putting the certificates inside the virtual machine itself by doing:
docker-machine ssh default
This resulted in docker complaining:
Error response from daemon: crypto/tls: private key does not match public key
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
4 yrs later Google still brought me here.
I found the answer in the official docs:
https://docs.docker.com/desktop/mac/#add-client-certificates
Citing from source:
You can put your client certificates in
~/.docker/certs.d/<MyRegistry>:<Port>/client.cert and
~/.docker/certs.d/<MyRegistry>:<Port>/client.key.
When the Docker for Mac application starts up, it copies the
~/.docker/certs.d folder on your Mac to the /etc/docker/certs.d
directory on Moby (the Docker for Mac xhyve virtual machine).
You need to restart Docker for Mac after making any changes to the keychain or to the ~/.docker/certs.d directory in order for the
changes to take effect.
The registry cannot be listed as an insecure registry (see Docker Engine). Docker for Mac will ignore certificates listed under
insecure registries, and will not send client certificates. Commands
like docker run that attempt to pull from the registry will produce
error messages on the command line, as well as on the registry.
A self-signed TLS CA can be installed like this; your certs might reside in the same directory.
sudo mkdir -p /Applications/Docker.app/Contents/Resources/etc/ssl/certs
sudo cp my_ca.pem /Applications/Docker.app/Contents/Resources/etc/ssl/certs/ca-certificates.crt
https://docs.docker.com/desktop/mac/#add-tls-certificates works for me. Here is a short description of the steps for users running Docker Desktop on macOS.
First, add the cert into the macOS keychain:
# Add the cert for all users
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt
# Add the cert for yourself
security add-trusted-cert -d -r trustRoot -k ~/Library/Keychains/login.keychain ca.crt
Then restart Docker Desktop.
This is the current (Oct. 2022) documentation for Docker for Mac (full URL shown for clarity):
How do I add TLS certificates?( https://docs.docker.com/desktop/faqs/macfaqs/#how-do-i-add-tls-certificates)
There should be a directory called /etc/docker where these certificates can go. This directory doesn't exist on Docker for Mac.
In my case, I also don't have /etc/docker by default. If you use ~/.docker, Docker Desktop will mirror it into /etc/docker inside the VM.
I don't believe there is anything wrong with my key pair, and I've done this same setup on linux (much easier) without problems.
You can try putting your key pairs under ~/.docker/certs.d/<hostname>:<port> and restarting Docker Desktop for Mac. That should achieve what you want.
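The copy into ~/.docker/certs.d can be scripted as well. A sketch following the client.cert/client.key naming convention from the docs quoted above (the helper function is hypothetical):

```python
import os
import shutil

def install_client_cert(cert, key, registry, port, home=None):
    """Copy a client certificate pair into the directory that Docker for
    Mac mirrors to /etc/docker/certs.d inside the VM:
    ~/.docker/certs.d/<registry>:<port>/client.{cert,key}.

    Restart Docker for Mac afterwards for the change to take effect.
    """
    home = home or os.path.expanduser('~')
    dest = os.path.join(home, '.docker', 'certs.d',
                        '%s:%d' % (registry, port))
    if not os.path.isdir(dest):
        os.makedirs(dest)
    shutil.copy(cert, os.path.join(dest, 'client.cert'))
    shutil.copy(key, os.path.join(dest, 'client.key'))
    return dest
```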

Vagrant server content won't show in web browser on windows

I am trying to use Vagrant on a Windows system. I have already gone through the steps of adding a vagrant box, running vagrant init, and vagrant up. I also used PuTTYgen and PuTTY to SSH into the VM as introduced here: http://blog.osteel.me/posts/2015/01/25/how-to-use-vagrant-on-windows.html
Now after installing all necessary packages, I try to run this code on the VM:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'SUCCESSFULLY running flask inside centos68 via apache!\n\n'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
I also went into my Vagrantfile on the local machine, uncommented config.vm.network "private_network", ip: "192.168.33.10" by removing the # sign, and saved it. The app reports it is running on http://0.0.0.0:5000/
But when I type the IP address and port number into the browser, it shows:
The site can't be reached. It seems as if the VM cannot communicate with the local machine.
This kind of problem never occurred on Mac OS, so I am wondering if it is because of PuTTY. Does anyone know how to solve it? Thanks a lot!
OK, so I used Git Bash instead of PuTTY for the installation and everything works fine.
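Whatever the tooling, a quick way to tell whether the VM is reachable from the host at all is a plain TCP connectivity check against the private-network address (192.168.33.10:5000 in the question above). A sketch:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    Run this on the host machine; if it returns False for the Vagrant
    private-network IP, the problem is networking (Vagrantfile, firewall),
    not the browser or the SSH client.
    """
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except socket.error:
        return False
    sock.close()
    return True
```

For example, port_open('192.168.33.10', 5000) should be True once the Flask app is up and the private network is configured.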

How to setup heroku app locally using docker?

I am trying to set up a Heroku app locally using Docker. The app is developed in Java (Dropwizard framework) with PostgreSQL.
Following this guidelines : https://devcenter.heroku.com/articles/docker
Getting the docker-machine IP (I changed the port to 2204 in the docker-compose.yml file):
$ docker-compose up
$ open "http://$(docker-machine ip default):2204"
Issue: unable to access the local server ping API - http://docker-machine-ip:port/ping
Other details:
OS X El Capitan 10.11.1 (15B42)
Docker version 1.9.0, build 76d6bc9
heroku-toolbelt/3.42.25 (x86_64-darwin10.8.0) ruby/1.9.3
heroku-cli/4.27.9-cce0260 (amd64-darwin) go1.5.2
=== Installed Plugins
heroku-apps#1.0.0
heroku-cli-addons#0.1.1
heroku-docker#1.1.2
heroku-fork#4.0.0
heroku-git#2.4.4
heroku-local#4.1.5
heroku-run#2.9.2
heroku-status#1.2.4
Thanks!
Is docker-machine installed and on your PATH?
From what you've written (http://:port/ping) it looks like you aren't getting the IP address, which implies that docker-machine ip default is returning nothing.
Like so:
$ echo "http://$(docker-machine ip default):2204"
-bash: docker-machine: command not found
http://:2204
See https://docs.docker.com/machine/install-machine/ for installation of docker-machine.
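The empty-IP symptom can also be detected from a script by checking what docker-machine ip actually returns. A sketch (the helper name is mine):

```python
import subprocess

def machine_ip(name='default'):
    """Return the IP that `docker-machine ip <name>` prints, or None when
    docker-machine is not on PATH or the machine cannot be queried --
    either of which reproduces the empty host in http://:2204 above."""
    try:
        out = subprocess.check_output(['docker-machine', 'ip', name],
                                      stderr=subprocess.STDOUT)
    except (OSError, subprocess.CalledProcessError):
        return None
    ip = out.strip().decode('utf-8')
    return ip or None
```

Checking for None before building the URL gives a clear error instead of a silently malformed address.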

FreeNX(nomachine) unable to connect after cloning of a working ubuntu EC2 instance

I previously set up an EC2 instance on Ubuntu 10.04 and installed the necessary binaries to allow SSH and, more importantly, FreeNX (NoMachine) to work from my Mac OS 10.6 machine.
As this was done on a micro instance, I was keen to try it on a small instance today, so I created an AMI image from the AWS management console (browser) and launched a new small instance using the image with the exact same keypair and security settings.
Expecting the instance to work exactly the same (except much faster), I tried to connect to it using SSH and FreeNX again.
Result:
SSH is working fine and my environment looks exactly the same.
NX is unable to connect; it complains that the username/password is incorrect.
I wonder why this happens, since I did an exact clone of the EC2 instance and I can connect fine using NX with the previous instance.
I had the same issue, and after a lot of searching I fixed it. It seems FreeNX lost the usernames and passwords. I fixed it by doing the following:
Log in with PuTTY as the ubuntu user, then:
cd /etc/nxserver
sudo vim node.conf
set ENABLE_PASSDB_AUTHENTICATION="1" and save the file
then
sudo nxserver --adduser xxxxxx
sudo nxserver --passwd yyyyyy
sudo nxserver --restart
after that I was able to log in using nomachine with the username and password I just set.
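The node.conf edit above can be automated; a sketch that flips the setting in the file's text (the regex is mine, assuming the usual KEY="value" format of nxserver's node.conf):

```python
import re

def enable_passdb_auth(conf_text):
    """Set ENABLE_PASSDB_AUTHENTICATION to "1" in nxserver node.conf text,
    uncommenting the line if it is commented out (the manual vim edit
    described above).  Returns the text unchanged if the key is absent."""
    pattern = r'^#?\s*ENABLE_PASSDB_AUTHENTICATION=.*$'
    return re.sub(pattern, 'ENABLE_PASSDB_AUTHENTICATION="1"',
                  conf_text, count=1, flags=re.MULTILINE)
```

Read /etc/nxserver/node.conf, pass it through this, write it back (with sudo), then re-add users and restart nxserver as shown above.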
