Creating Docker containers on Windows - Vagrant

Getting boot2docker up and running and pulling images from Docker Hub are non-issues in a Windows environment. But if I wish to create a container and run it, how do I go about doing that? I've read about using fig, but is fig installed on Windows or from inside the container? I've attempted to do it from the container, but it often results in a permissions error, and even chowning the folder doesn't solve the issue of not being able to call fig in the container.
Is it even possible to just run Docker via boot2docker on Windows as a development environment? Or should I just use Vagrant as the host VM and play with a bunch of Docker containers in it?
Just some clarification and direction would be appreciated.

Fig is a tool for working with Docker. It runs on the host (which could mean your Windows host communicating with Docker via the TCP socket, or could mean your boot2docker VM, which is a guest of your Windows machine and a host of your Docker containers).
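In practice that just means the client (docker or fig) needs DOCKER_HOST pointing at the boot2docker VM's TCP socket. A minimal sketch, assuming boot2docker's historical default IP and port (yours may differ; boot2docker ip will tell you):
# on the Windows side, e.g. in a Git Bash shell, tell the client where the daemon lives
export DOCKER_HOST=tcp://192.168.59.103:2375
# should now report the daemon running inside the boot2docker VM
docker info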
All that Fig's doing is streamlining the process of pulling, building and starting Docker images. For example, this fig.yml
db:
  image: postgres
app:
  build: .
  links:
    - "db:db"
  environment:
    - FOO=bar
is (roughly) the same as this series of Docker commands in Bash:
docker run -d --name db postgres
docker build -t app .
docker run -d --name app --link=db:db --env=FOO=bar app
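With that fig.yml in place, a short sketch of the day-to-day workflow (assuming fig is installed wherever your Docker client runs):
# build/pull the images and start both services in the background
fig up -d
# tail the combined logs, then stop everything
fig logs
fig stop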

Related

Linux container running on Windows host -- access to host Docker

I have an Ubuntu container running on Docker for Windows. How can I give it access to the Docker daemon that is running on my Windows host?
Note: I am not trying to run docker-in-docker, i.e., inside my container, docker ps should show the containers running on the host machine's Docker daemon. If my host machine were running Linux, this would be achieved by mounting /var/run/docker.sock inside the container -- is there a similar technique when the host machine runs Windows?
This is my Docker Desktop version: Docker Desktop 4.12.0 (85629) is currently the newest version available.
You can expose the daemon via a TCP socket; it's a setting under General in Docker Desktop.
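For example, with "Expose daemon on tcp://localhost:2375 without TLS" ticked under Settings > General, a container can point its own Docker CLI at the host daemon via Docker Desktop's host.docker.internal name. A sketch, assuming a stock ubuntu image with a docker client installed inside it (and noting that reachability of a localhost-bound daemon can vary between Docker Desktop versions):
docker run -it -e DOCKER_HOST=tcp://host.docker.internal:2375 ubuntu bash
# inside the container, after installing a docker client (e.g. apt-get update && apt-get install -y docker.io):
docker ps    # should list the containers running on the Windows host's daemon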

Is there any way to run a Docker image on the host from another Docker image? [duplicate]

I am using a docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a Docker image. The image is pushed to my private registry, pulled by my production EC2 instances and then run. So essentially I need to run Docker within a Docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the docker container, so I can't start Docker. I'm not sure what I should be doing now to start Docker, so I'm a bit stuck here; any help is appreciated.
A bit more information:
Host machine is running Fedora 20 (will eventually be running Amazon Linux on an EC2)
Docker container is running CentOS 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running 'docker inside docker' at all, and instead driving the Docker daemon on your host from within your docker container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
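From inside a container started that way, the client talks straight to the host daemon through the mounted socket. A sketch (the -H flag is only needed because the socket was mounted at /run rather than /var/run):
# inside the container
docker -H unix:///run/docker.sock ps    # lists the host's containers, not nested ones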
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers, take a look at this image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
UPDATE (July 2016)
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
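Any other client command works the same way against the dind daemon; for example (a sketch, noting that newer docker:dind releases enable TLS by default and may additionally need DOCKER_TLS_CERTDIR handling):
$ docker run --rm --link some-docker:docker docker pull alpine
$ docker run --rm --link some-docker:docker docker run --rm alpine echo hello-from-dind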
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple Docker versions), use the following to create additional loopback devices:
#!/bin/bash
for i in {0..6}
do
  mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run Docker inside the Docker container using dind. Try this image from Jérôme Petazzoni, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind
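Inside the shell that jpetazzo/dind drops you into, a separate daemon is already running, so nested containers work as usual (a sketch; the busybox image is just an example):
# at the shell prompt inside the dind container
docker run --rm busybox echo "hello from a nested container"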

Docker localhost process not working on Windows

I am using Docker Quickstart Terminal to run a docker container. The container should work on port 8088 of localhost:
docker run -it --name myContainer -p 8088:8088
However, when I go to localhost:8088 or 127.0.0.1:8088 I can't find any process running.
This works on OSX.
Why is this not working on Windows?
I'm assuming you're using VirtualBox, since that's what is integrated with the Quickstart Terminal.
The reason it doesn't work is that Windows isn't running your (Linux) containers natively; it's running them in a separate Linux-based VM. This VM is reachable at a different IP address than your "physical" machine, which is usually printed when you start the Quickstart Terminal.
That is the IP address you need to use in order to connect to published container ports.
One possibility is the kind of VM you are using: Hyper-V (Docker for Windows) or VirtualBox (Docker Toolbox).
If it is the latter (which seems probable since you are using the Docker Quickstart Terminal), you need to forward port 8088 in order for your PC (localhost) to see it.
See "How do I configure docker compose to expose ports correctly?" as an example when using VirtualBox.
If localhost does not work, docker-machine ip will show you the IP of the VM being executed.
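Concretely (a sketch; "default" is the usual machine name created by the Quickstart Terminal):
# print the VirtualBox VM's address
docker-machine ip default
# test the published port against that address instead of localhost
curl http://$(docker-machine ip default):8088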

Create a Docker Container using VirtualBox on Windows for Kubernetes

I'm starting to experiment with Kubernetes on my Windows 10 dev machine. I've got minikube running on my machine, with some "canned" test services, so it looks like Kubernetes is working properly.
Now I'm trying to create my first service by following this: http://kubernetes.io/docs/hellonode/
The problem is I can't build the Docker image. I get an error that basically says Docker isn't running. I've installed the Docker Toolbox, and I've looked at Docker for Windows, but it needs Hyper-V, which doesn't work with Kubernetes (it requires VirtualBox). So is there any way I can get Docker running on Windows using VirtualBox?
Once you have the Docker client on your host Windows machine, you can run
minikube docker-env --shell powershell
That will point the docker client on your host to the docker daemon inside the minikube VM.
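Note that the command only prints the environment variables; you still have to apply them to the current PowerShell session. A sketch, assuming the hello-node:v1 tag from the linked tutorial:
# apply the variables, then build directly against minikube's daemon
& minikube docker-env --shell powershell | Invoke-Expression
docker build -t hello-node:v1 .
docker images    # the new image should appear in minikube's image list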

Access Docker daemon remote API on Docker for Mac

I'm running Docker for Mac (OS X), and having trouble getting the Docker remote API to work.
My situation is this:
Docker daemon running natively on OSX (https://www.docker.com/products/docker#/mac, so not the boot2docker variant)
Jenkins running as docker image
Now I want to use the Jenkins docker-build-step plugin to build a Docker image, but I want it to use the Docker daemon on the host machine, so in the Jenkins settings DOCKER_URL should be something like :2375. (The reason for this is that I don't want to install Docker in the Jenkins container if I already have it on my host machine.)
Is there a way to do this, or does Docker for Mac currently not support it? I tried fiddling with export DOCKER_OPTS or DOCKER_HOST options but still get a Connection refused when calling http://localhost:2375/images/json, for example.
Basically the question is more about enabling the Docker for OS X remote API, with the use case of calling it from a Jenkins Docker container.
You could consider using socat. It solved my problem, which seems to be similar.
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock &
This allows you to access your macOS host Docker API from a Docker container using: tcp://[host IP address]:2375
On macOS socat can be installed like this:
brew install socat
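You can then sanity-check the forwarded API from the host, or from a container using the host's IP. A sketch:
curl http://localhost:2375/images/json
# or, equivalently, through the Docker client
docker -H tcp://localhost:2375 info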
See here for a long discussion on this topic: Plugin: Docker fails to connect via unix:// on Mac OS X
If you have already added an SSH public key to your remote server, then you can use those SSH credentials for your Docker connection, too. You don't need to configure the remote API on the server for this approach.
When connecting to macOS Docker Desktop, you can use SSH (after you have enabled it on the Mac):
docker -H ssh://user@192.168.64.1 images
or
export DOCKER_HOST=ssh://user@192.168.64.1
docker images
I had the same issue, but with MySQL. I needed to expose port 43306 on my Docker host and forward it to port 3306 in the MySQL container.
Solution:
Run your container with the -p parameter.
Example:
#> docker run -p 0.0.0.0:43306:3306 --name mysql-5.7.23xx -e MYSQL_ROOT_PASSWORD=myrootdba -d mysql/mysql-server:5.7.23 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Now I can connect from my Docker host to the MySQL container on port 43306.
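You can verify the mapping from the host side; a sketch, reusing the root password set above (or just check the port binding without a MySQL client):
mysql -h 127.0.0.1 -P 43306 -u root -p
# or
docker port mysql-5.7.23xx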
