How to access docker-machine instances that were created on AWS from new machine - docker-machine

I set up some AWS EC2 instances running Docker using docker-machine on my previous laptop, using commands like this:
docker-machine create --driver amazonec2 --amazonec2-instance-type "t2.micro" --amazonec2-security-group MY_SECURITY_GROUP container-1
On the old laptop, I can still view and control them:
$ docker-machine ls
NAME          ACTIVE   DRIVER      STATE     URL                       SWARM   DOCKER     ERRORS
container-1   -        amazonec2   Stopped                                     Unknown
container-2   -        amazonec2   Running   tcp://xx.xx.xx.xxx:yyyy           v20.10.7
container-3   -        amazonec2   Stopped                                     Unknown
But on my new laptop, I'm not able to see them:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
I have the AWS environment variables (key/secret) on the new laptop. I've looked at the hidden files on the old laptop to see if there's something docker-machine uses to store a list of the created machines, but I don't see anything.
Is there a command to add these to the new laptop, so I can see and start/stop them?

I found the solution to this. You need to manually copy each machine's hidden directory (under ~/.docker/machine/machines/) from the old laptop to the new one. In the example above, that would be ~/.docker/machine/machines/container-1, ~/.docker/machine/machines/container-2, etc.
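If the old laptop is reachable over SSH, a minimal sketch of that copy (the hostname old-laptop is a placeholder) would be:
rsync -av old-laptop:~/.docker/machine/machines/ ~/.docker/machine/machines/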
In addition, each machine has a config.json that contains absolute paths to the certificates. That config file looks something like this:
{
    "ConfigVersion": 3,
    "Driver": {
        "IPAddress": "4.94.173.4",
        "MachineName": "container-1",
        "SSHUser": "ubuntu",
        "SSHPort": 22,
        "SSHKeyPath": "/Users/USERNAME/.docker/machine/machines/container-1/id_rsa",
        "StorePath": "/Users/USERNAME/.docker/machine",
        ...
... where USERNAME is your system username. If this username differs between the old and new laptops, you'll need to update all of these path references to match the new machine.
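For example, a hypothetical one-liner to rewrite the username in a copied config (BSD/macOS sed syntax, since the paths above suggest a Mac; OLD_USERNAME and NEW_USERNAME are placeholders):
sed -i '' 's|/Users/OLD_USERNAME/|/Users/NEW_USERNAME/|g' ~/.docker/machine/machines/container-1/config.json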

Related

Unable to create new docker instances with docker-machine

I am using AWS with docker-machine to create and provision my instances. I would use this command to create a new instance:
docker-machine create --driver amazonec2 --amazonec2-instance-type "t2.micro" --amazonec2-security-group zhxw-production-sg zhxw-production-3
About a month ago, that worked fine. I just went to create a fresh machine, and I can no longer connect to it. When I run the above command, it gets stuck on "waiting for SSH to be available..."
Running pre-create checks...
Creating machine...
(zhxw-production-3) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
It just hangs at that point. If I cancel the command and check the AWS EC2 console, the console shows the instance as running.
When I run docker-machine ls, it also suggests that it's running, but with errors:
$-> docker-machine ls
NAME                ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER      ERRORS
zhxw-production-2   -        amazonec2   Running   tcp://3.86.xxx.xxx:2376             v19.03.12
zhxw-production-3   -        amazonec2   Running   tcp://54.167.xxx.xxx:2376           Unknown     Unable to query docker version: Cannot connect to the docker engine endpoint
I'm able to connect to the zhxw-production-2 machine (which has been running for a month), just not the new zhxw-production-3 one I just launched.
$-> docker-machine env zhxw-production-3
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "54.167.123.108:2376": dial tcp 54.167.123.108:2376: connect: connection refused
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
The regenerate-certs command doesn't help either. I'm not really sure where to start debugging, because as far as I can tell, the docker-machine create command is the very beginning.
Turned out to be a problem with SSH to my AWS environment. I had my public IP address whitelisted, but it had changed.
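If that is your cause too, a sketch of re-whitelisting your current public IP with the AWS CLI might look like this (assuming the CLI is configured, and using the zhxw-production-sg group from the create command above):
aws ec2 authorize-security-group-ingress --group-name zhxw-production-sg \
  --protocol tcp --port 22 --cidr "$(curl -s https://checkip.amazonaws.com)/32"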
I came across a problem like this and found out that the AWS EC2 AMI did not have SSH installed, so I had to use a different AMI, e.g. Ubuntu.
I went through the same problem recently and found that the cause was the public IP changing when I enabled an elastic IP on the machine. I don't know if this is your case; maybe my solution will help you or others. Here it is:
Usually the config file path is /Users/<your_user>/.docker/machine/machines/<problem_machine>/config.json.
Edit the value of the "IPAddress" parameter.
After making the change, run the command: docker-machine regenerate-certs <name_instance_ec2>
With these steps, my problem was solved. I hope it helps! Hugs to everyone.
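Put together, a sketch of the procedure above (the machine name zhxw-production-3 is assumed from the question):
vi ~/.docker/machine/machines/zhxw-production-3/config.json   # update the "IPAddress" value to the new elastic IP
docker-machine regenerate-certs zhxw-production-3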

Docker Postgres with windows share

I migrated from Linux to Windows and tried to setup a postgres container with a mounted directory (copied from my Linux install) containing the database.
This does not work.
Windows mounts are always owned by root
Postgres does not run under root
How to get this unholy combination to work?
You don't provide many details, so it is difficult to tell what actually went wrong. However, there is a known issue when setting up Postgres on Docker for Windows with a Windows mount for the database data files. In that case, running docker logs will show something along the following lines:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Unfortunately there is no way to overcome this issue, so you cannot use a Windows mount; see Postgres Data has wrong ownership. You may use Docker volumes instead, to make the database data independent from the Postgres container, using the following commands:
docker create -v /var/lib/postgresql/data --name PostgresData alpine
docker run -p 5432:5432 --name yourPostgres -e POSTGRES_PASSWORD=yourPassword -d --volumes-from PostgresData postgres
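As an optional sanity check (container name yourPostgres from the command above), you can confirm the volume is attached before loading real data:
docker inspect -f '{{ .Mounts }}' yourPostgres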
You may find a more thorough explanation at Setup Postgresql on Windows with Docker

How to enable Docker API access from Windows running Docker Toolbox (docker machine)

I am running the latest Docker Toolbox, using latest Oracle VirtualBox, with Windows 7 as a host OS.
I am trying to enable non-TLS access to the Docker remote API, so I can use the Postman REST client running on Windows to hit the Docker API running on the docker-machine in VirtualBox. I found that if the Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick, exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI and then poked around the /etc/init.d/docker file, but no changes to the file survive docker-machine restart.
I was able to find an answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide more detailed, step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production, as it makes the Docker machine insecure.
This solution disables most docker-machine functionality and all docker CLI functionality. docker-machine ssh remains operational, forcing you to SSH into the docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to a non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts a Bash shell and is the place where all of the following commands will be run. Run the docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile This starts the "vi" editor with the elevated privileges required for editing the "profile" file, where the Docker host settings are. (If, as a Windows user, you are not familiar with vi, here is a super-basic crash course. When the file is open in vi, vi is not in editing mode; press "i" to start edit mode. Now you can make changes. After you have made all the changes, hit Esc and then type ZZ to save the changes and exit vi. If you need to exit vi without saving changes, after Esc type :q! and hit Enter. ":" turns on vi's command mode, and "q!" means quit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save the changes as described above.
exit to leave the SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
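For example, a quick check from the Docker Quickstart Terminal (machine name "default" assumed):
curl "http://$(docker-machine ip default):2375/containers/json?all=1"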
This is the end of steps required to achieve the main goal.
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed the environment variables to point the Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands there.
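For example, docker-machine ssh can run a single docker command directly (machine name "default" assumed):
docker-machine ssh default docker ps -a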
I encountered the same problem, and thanks to @VladH I got it working without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and port 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
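In the Docker Quickstart Terminal (bash), a sketch of setting the two variables that matter here, using the values from the listing above:
export DOCKER_HOST=tcp://192.168.99.100:2376
export DOCKER_TLS_VERIFY=0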

Where exactly, are files in docker container stored on the host machine

I am using Docker on Windows. Using Kitematic, I have created an Ubuntu container. This Ubuntu image has PostgreSQL installed on it.
Wondering if there is any possibility to access the postgres configuration files available in the container from the host (windows machine)?
Where exactly does the container store its file system on the host machine?
I assume it would be part of an image file in VMDK format.
Please correct me if I'm wrong.
Wondering if there is any possibility to access the postgres configuration files available in the container from the host (windows machine)
That is not how Docker would allow you to modify a file in a container.
For that, you should mount a host (Windows) folder when starting (docker run -v) your container.
See "Mount a host directory as a data volume"
docker run -d -P --name web -v /c/Users/<myACcount>/src/webapp:/opt/webapp training/webapp python app.py
Issue 247 mentions ~/Library/Application Support/Kitematic for App data, and ~/Kitematic "for easy access to volume data".

Run postgres container with data volumes through docker-machine

I have an issue running a postgres container with volumes set up for the data folder on my Mac OS machine.
I tried to run it like this:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v $PG_LOCAL_DATA:/var/lib/postgresql/data \
-d postgres:9.5.1
Every time I got the following result in the logs:
* Starting PostgreSQL
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are enabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: could not create directory "/var/lib/postgresql/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
Versions of docker, docker-machine, virtualbox and boot2docker are:
docker-machine version 0.6.0, build e27fb87
Docker version 1.10.2, build c3959b1
VirtualBox Version 5.0.16 r105871
boot2docker 1.10.3
I have seen many posts about this topic, but most of them are outdated. I tried a solution similar to the one for mysql, but it did not help.
Maybe somebody can update me: does a solution exist to run a postgres container with data volumes through docker-machine?
Thanks!
If you are running docker-machine on a Mac, at this time, you cannot mount to a directory that is not part of your local user space (/Users/<user>/) without extra configuration.
This is because on the Mac, Docker makes a bind mount automatically with the home ~ directory. Remember that since Docker is being hosted in a VM that isn't your local Mac OS, any volume mounts are relative to the host VM - not your machine. That means by default, Docker cannot see your Mac's directories since it is being hosted on a separate VM from your Mac OS.
Mac OS => Linux Virtual Machine => Docker
          ^----------------------------^
                Docker Can See VM
^----------------X----------------------^
          Docker Can't See Here
If you open VirtualBox, you can create other mounts (i.e. shared folders) from your local machine to the host VM and then access them that way.
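As a sketch, such a shared folder can also be created from the command line with VBoxManage (the VM name "default", the folder name "pgdata", and the host path are assumptions; the VM typically needs to be stopped first):
VBoxManage sharedfolder add default --name pgdata --hostpath /Users/<user>/pgdata --automount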
See this issue for specifics on the topic: https://github.com/docker/machine/issues/1826
I believe the Docker team is adding these capabilities in upcoming releases (especially since a native Mac version is in the works).
You should use docker named volumes instead of folders on your local file system.
Try creating a volume:
docker volume create my_vol
Then mount the data directory in your above command:
docker run \
--name my-postgres \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=some_db_dev \
-v my_vol:/var/lib/postgresql/data \
-d postgres:9.5.1
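As a quick check, you can confirm the named volume exists and see its mountpoint (inside the docker-machine VM, per the answer above):
docker volume inspect my_vol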
Check out my blog post for a whole postgres + node + ts docker setup for both dev and prod: https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html
For more on docker volumes: https://docs.docker.com/storage/volumes/
