Can I build a Docker container from the CLI against a remote daemon? - windows

Currently I have the following setup:
A Hyper-V VM running Windows 10, which is my dev machine. My CPU doesn't support nested virtualization.
Docker for Windows is installed on the host machine, which also runs Windows 10.
Is it possible to run docker build from the VM against Docker on the host machine?

Yes, you can. According to the documentation, there are three ways to do this:
# with Git repo
docker -H xxx build https://github.com/docker/rootfs.git#container:docker
# Tarball contexts
docker -H xxx build http://server/context.tar.gz
# Text files
docker -H xxx build - < Dockerfile
When doing this, you need to make sure that:
the client machine has the Docker CLI installed;
all files referenced by the build context are accessible to the host.
At the end, the Docker image will be created on your host.
Update
The docker -H option is now documented here.

export DOCKER_HOST=ssh://sammy@your_server_ip
then you can run docker build locally and it will build on the remote host machine
(reference)
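A quick way to sanity-check the SSH transport before building (a sketch, assuming your key is already authorized on the server and the remote user can talk to the Docker daemon):
export DOCKER_HOST=ssh://sammy@your_server_ip
# Prints both the local client and the remote server version if the transport works
docker version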

There are (from my understanding) three different ways of building a Docker image using a remote Docker host/daemon:
Using the DOCKER_HOST variable
Using contexts
Using -H cli option
as in:
DOCKER_HOST="ssh://user@docker-build.dev" docker build -t toto .
docker context use remote-build-host && docker build -t toto .
docker -H ssh://user@docker-build.dev:22 build -t toto .
Please note that the port is required in the last form (-H).
See this page and this one too for more info.
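For the second form, the remote-build-host context has to exist first; a minimal sketch, reusing the hypothetical user@docker-build.dev host from above:
# Create a named context pointing at the remote daemon over SSH
docker context create remote-build-host --docker "host=ssh://user@docker-build.dev"
# Confirm the remote daemon answers before switching builds to it
docker --context remote-build-host version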

Related

Is there any way to run a docker image on the host from another docker image? [duplicate]

I am using a Docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a Docker image. The image is pushed to my private registry, pulled by my production EC2 instances, and then run. So essentially I need to run Docker within a Docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the Docker container, so I can't start Docker. I'm not sure what I should do now to start Docker, so I'm a bit stuck here; any help is appreciated.
A bit more information
Host machine is running Fedora 20 (will eventually be running Amazon Linux on an EC2)
Docker container is running CentOS 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running "docker inside docker" at all, and instead running Docker on your host but controlling it from within your Docker container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up Docker containers; take a look at that image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
UPDATE (July 2016)
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
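Any client command can target the inner daemon the same way; for example, an interactive shell with a working docker CLI (a sketch; the inner prompt is illustrative):
$ docker run --rm -it --link some-docker:docker docker sh
/ # docker version   # talks to the some-docker daemon, not the outer one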
While in almost all cases I would suggest following @cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple Docker versions), use the following to create additional loopback devices:
#!/bin/bash
# Create block devices /dev/loop0 through /dev/loop6 (the loop driver has major number 7)
for i in {0..6}
do
  mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run Docker inside a Docker container using dind. Try this image from Jérôme Petazzoni (jpetazzo), as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind

Testcontainers: Running @Testcontainers tests inside Docker [Running Docker inside Docker]

How to run @Testcontainers-based test cases inside a Docker container?
I have a simple Spring Boot app with integration tests (component level) that interact with containers using Testcontainers. The test cases run fine from outside the container (on my local machine).
We are running everything in containers, and the build runs on a Docker Jenkins image.
The Dockerfile creates the jar and then the image. @Testcontainers is not able to find a Docker installation.
Below is my Dockerfile:
FROM maven:3.6-jdk-11-openj9
VOLUME ["/var/run/docker.sock"]
RUN apt-get update
RUN apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I am getting the below error:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using the Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but: you should enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated and therefore not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting environment variables (DOCKER_HOST).
How Docker exposes its control API:
By default via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the daemon start command (where this command lives is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one -H option: -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
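Equivalently, the listeners can be configured in the daemon's configuration file instead of on the command line; a sketch, assuming the common default path /etc/docker/daemon.json (on systemd hosts, defining hosts both here and via -H in the unit file conflicts, so pick one place):
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2376"]
}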
How to make Docker available inside your container ("Docker in Docker"): if you enabled network access, no additional config is needed beyond pointing Testcontainers at it as mentioned above. If you want to use the default Unix socket instead, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the docker.sock mounted inside the container will still be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will only work if the user of the running process can connect to that socket; that is, the user is root, or happens to have exactly the same group id inside your container as the docker group has on your host system.
I do not yet know a clean solution to this, so for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id; one run-time workaround is sketched below.
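A minimal sketch of that workaround, assuming a Linux host with GNU stat: resolve the gid that owns the socket and grant it to the container user at run time:
# Look up the gid of the group owning the Docker socket on the host
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
# Add that gid to the container user so it may use the mounted socket
docker run --group-add "$DOCKER_GID" --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here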

Access Docker daemon Remote api on Docker for Mac

I'm running Docker for OSX, and having trouble getting the Docker remote API to work.
My situation is this:
Docker daemon running natively on OSX (https://www.docker.com/products/docker#/mac, so not the boot2docker variant)
Jenkins running as a Docker container
Now I want to use the Jenkins docker-build-step plugin to build a Docker image, but I want it to use the Docker daemon on the host machine, so in the Jenkins settings DOCKER_URL should be something like :2375. (The reason for this is that I don't want to install Docker in the Jenkins container when I already have it on my host machine.)
Is there a way to do this, or does Docker for Mac currently not support it? I tried fiddling with export DOCKER_OPTS and DOCKER_HOST options but still get a Connection refused when calling http://localhost:2375/images/json, for example.
Basically the question is about enabling the remote API in Docker for OSX, with the use case of calling it from a Jenkins Docker container.
You could consider using socat. It solved my problem, which seems to be similar.
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock &
This allows you to access your macOS host Docker API from a Docker container using: tcp://[host IP address]:2375
On macOS socat can be installed like this:
brew install socat
See here for a long discussion on this topic: Plugin: Docker fails to connect via unix:// on Mac OS X
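From inside the Jenkins container, the relayed API can then be used like this (a sketch; host.docker.internal resolves to the host on recent Docker for Mac versions, otherwise substitute the host's IP address):
# Point the Docker CLI (or the plugin's DOCKER_URL) at the socat relay
export DOCKER_HOST=tcp://host.docker.internal:2375
docker images   # now lists images from the macOS host daemon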
If you have already added an SSH public key to your remote server, you can reuse those SSH credentials for your Docker connection, too. You don't need to configure the remote API on the server for this approach.
When connecting to macOS Docker Desktop, you can use ssh (after you have enabled it on the Mac):
docker -H ssh://user@192.168.64.1 images
or
export DOCKER_HOST=ssh://user@192.168.64.1
docker images
I had the same issue, but with MySQL: I needed to expose port 43306 on my Docker host and forward it to port 3306 of the MySQL container.
Solution:
Run your container with the -p parameter.
Example:
#> docker run -p 0.0.0.0:43306:3306 --name mysql-5.7.23xx -e MYSQL_ROOT_PASSWORD=myrootdba -d mysql/mysql-server:5.7.23 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Now I can connect from my Docker host on port 43306 to MySQL running in the container.
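For example, to verify the published port from the host (a sketch, assuming the mysql client is installed there):
# Connect through the published host port rather than the container's internal 3306
mysql -h 127.0.0.1 -P 43306 -u root -p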

Connecting to rethinkdb (or any other app running on an http port) from the Docker OS X beta

I've installed the Docker for Mac beta which allows you to use docker commands directly. I want to try to run rethinkdb through docker, so I've followed the instructions of the rethinkdb docker container docs and done the following:
docker run --name some-rethink -v "$PWD:/data" -d rethinkdb
This works, and I can see the container with docker ps and start a shell with docker exec -it some-rethink /bin/bash.
However, I can't connect to the admin panel on my Mac directly with their suggestion
$BROWSER "http://$(docker inspect --format \
'{{ .NetworkSettings.IPAddress }}' some-rethink):8080"
This essentially amounts to google-chrome http://172.17.0.2:8080/, but it doesn't work. I asked around and was told:
You can't use the docker private ip address space to access the ports
You have to forward them to the mac
However, I'm not sure how to do this, as I don't have any port forwarding tools I'm familiar with (such as ssh) on the container itself. Using the suggested port forwarding command from the rethinkdb container docs (ssh -fNTL ...) with localhost instead of the remote host does not work.
How can I connect to the rethinkdb admin panel through http with the docker beta on a Mac?
Try forwarding the container port using the -p flag in the docker run command, e.g.:
docker run -p 8080:8080 --name some-rethink -v "$PWD:/data" -d rethinkdb
and then it should be accessible on localhost:
google-chrome http://127.0.0.1:8080/
Relevant docker run docs: https://docs.docker.com/engine/reference/run/#/expose-incoming-ports

Creating docker containers on Windows

So getting boot2docker up and running and pulling containers from the Docker Hub are non-issues in a Windows environment. But if I wish to create a container and run it, how do I go about doing this? I've read about using fig, but is fig installed via Windows or from the container? I've attempted to do it from the container, but it often results in a permissions error, and even chowning the folder doesn't solve the issue of not being able to call fig in the container.
Is it even possible to just run Docker via boot2docker on Windows as a development environment? Or should I just use Vagrant as the host VM and play with a bunch of Docker containers in it?
Just some clarification and direction would be appreciated.
Fig is a tool for working with Docker. It runs on the host (which could mean your Windows host communicating with Docker via the TCP socket, or your boot2docker VM, which is a guest of your Windows machine and a host to your Docker containers).
All that Fig's doing is streamlining the process of pulling, building and starting Docker images. For example, this fig.yml
db:
  image: postgres
app:
  build: .
  links:
    - "db:db"
  environment:
    - FOO=bar
is (roughly) the same as this series of Docker commands in Bash:
docker run -d --name db postgres
docker build -t app .
docker run -d --name app --link=db:db --env=FOO=bar app
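With that fig.yml in the current directory, the whole stack comes up with one command (note that fig has since been renamed Docker Compose, where the equivalent is docker-compose up -d):
# Build/pull as needed and start both services in the background
fig up -d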
