Docker container time skew - Docker Desktop (Linux mode) on Windows

Question: How can I sync the time inside a Docker container with my local PC time?
On my Windows 10 PC, I am running Docker Desktop 2.2.0.4 (43472) in Linux containers mode, with Docker Engine 19.03.8.
Every container I create shows a massive time skew relative to the host.
From a CentOS 8 container:
[root# /]# date
Thu May 7 01:18:16 UTC 2020
From the Docker host (Docker Desktop on the Windows 10 PC):
PS> date
14 May 2020 14:42:17
I tried to create a new container with the -v option as below:
docker container run -it -v c:\docker_volumes\docker1:/storage -v /etc/localtime:/etc/localtime:ro --name centos7-squid centos:7.7.1908 /bin/bash
I get the error below
Unable to find image 'centos:7.7.1908' locally
7.7.1908: Pulling from library/centos
f34b00c7da20: Pull complete
Digest: sha256:50752af5182c6cd5518e3e91d48f7ff0cba93d5d760a67ac140e2d63c4dd9efc
Status: Downloaded newer image for centos:7.7.1908
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\"/etc/localtime\\" to rootfs \\"/var/lib/docker/overlay2/c7e86cffdc46c354f19b25fa97146ce8f2caee653793219719b043c97040d1b7/merged\\" at \\"/var/lib/docker/overlay2/c7e86cffdc46c354f19b25fa97146ce8f2caee653793219719b043c97040d1b7/merged/usr/share/zoneinfo/UTC\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

I fixed it by setting the hardware clock of the virtual machine running Docker (containers get their clock from Docker Desktop's Linux VM, which had drifted away from the Windows host):
docker run --rm --privileged alpine hwclock -s
credit:
https://blog.jverkamp.com/2017/11/15/clock-drift-in-docker-containers/
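The skew tends to come back whenever the VM's clock drifts again (typically after the Windows host sleeps or hibernates), so the same command can simply be re-run. A quick way to check that the fix took effect (any small image will do; alpine is just an example):
docker run --rm --privileged alpine hwclock -s
docker run --rm alpine date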

Related

Cannot run docker container in windows server 2019

I cannot run a Docker container in Windows Server 2019 on VMware. I get an error response from the daemon:
docker container run mcr.microsoft.com/windows/nanoserver:1809 hostname
docker: Error response from daemon: container bb81979fe2974f59031e56e062f1b08f1ad6fdaa57ec57965c316563f384da59 encountered an error during hcsshim::System::Start: context deadline exceeded.
Have you tried running it with Hyper-V isolation?
docker run -it --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809
If that works, you can create or edit the daemon config file:
C:\ProgramData\docker\config\daemon.json
Add:
{
"exec-opts": ["isolation=hyperv"]
}
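After editing daemon.json, the Docker service has to be restarted so the daemon picks up the new default. A minimal sketch, assuming the engine runs as the Windows service named docker (run from an elevated PowerShell prompt):
Restart-Service docker
docker info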
Sure. If you are running something that does not require Windows containers, you can run the container on Linux instead. I would install an Ubuntu or CentOS instance on VMware, install Docker there, and work with it.
Download CentOS:
https://docs.centos.org/en-US/centos/install-guide/downloading/
Install Docker:
https://docs.docker.com/engine/install/centos/
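For reference, the steps in that Docker guide boil down to roughly the following (check the linked page for the current repo URL and package names, as they may change):
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
sudo docker run hello-world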
A side note, though: this is not really production-grade orchestration. You may want to look at Docker Swarm / Kubernetes / OpenShift for production workloads.

is there any way to run a docker image on host from other docker image? [duplicate]

I am using a Docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a Docker image. The image is pushed to my private registry, pulled by my production EC2 instances and then run. So essentially I need to run Docker within a Docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the Docker container, so I can't start Docker that way. I'm not sure what I should be doing now to start it, so I'm a bit stuck here; any help is appreciated.
A bit more information
Host machine is running Fedora 20 (it will eventually run Amazon Linux on an EC2 instance)
Docker container is running CentOS 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running 'Docker inside Docker', but instead running Docker on your host and driving it from within your container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers, take a look at this image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
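With the socket mounted as above, docker commands issued inside the container are executed by the host's daemon, so anything you start becomes a sibling of your build container rather than a child. A quick sanity check from a shell inside the container (illustrative only):
docker ps
docker run --rm alpine echo "started as a sibling on the host"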
UPDATE (July 2016)
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
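Note that --link is a legacy flag. On more recent Docker versions a rough equivalent uses a user-defined network; this is only a sketch, and the TLS behaviour of the docker:dind image (controlled by DOCKER_TLS_CERTDIR) should be checked against the current image documentation:
docker network create dind-net
docker run --privileged --name some-docker --network dind-net -e DOCKER_TLS_CERTDIR="" -d docker:dind
docker run --rm --network dind-net -e DOCKER_HOST=tcp://some-docker:2375 docker info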
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple Docker versions), use the following to create additional loopback devices:
#!/bin/bash
# Create the block device nodes /dev/loop0 .. /dev/loop6 (major number 7)
for i in {0..6}
do
  mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run docker inside the docker container using dind. Try this image from Jerome, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind
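That privileged container runs its own inner Docker daemon, so nested containers work directly. A minimal check from the shell that jpetazzo/dind drops you into (assuming the inner daemon started cleanly):
docker run --rm busybox echo "hello from the inner daemon"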

Error while running postgres container "Error response from daemon: invalid mode: /var/lib/postgresql/data."

I'm trying to run a postgres docker container on Windows 10.
I've installed Postgres using a Linux container, as I couldn't do so with a Windows container.
While running the below in powershell
docker run -d --name pg-flowthru --env-file ./database/env.list -p 5432:5432 --rm -v ${PWD}:/docker/volumes/postgres:/var/lib/postgresql/data postgres
(env.list contains database credentials), I'm getting the below error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: invalid mode: /var/lib/postgresql/data.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
The C drive is already listed under "Shared Drives" in Docker Desktop.
I think this may be a path issue, but I'm new to Docker and can't figure it out.
In the command above, the -v value contains two colons (${PWD}:/docker/volumes/postgres:/var/lib/postgresql/data), so Docker parses the trailing /var/lib/postgresql/data as a mount mode, which produces the "invalid mode" error. Using a named volume avoids the problem. A volume has to be created first:
docker volume create postgresql-volume
The postgres container can now be run using the previously created volume:
docker run -d --name pg-flowthru --env-file ./database/env.list -p 5432:5432 -v 'postgresql-volume:/var/lib/postgresql/data' postgres
Listing the running containers now shows the above container running.
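To confirm everything is wired up, a short check can be run afterwards (this assumes the credentials in env.list define a postgres superuser; adjust -U to whatever POSTGRES_USER you set):
docker volume ls
docker exec -it pg-flowthru psql -U postgres -c '\l'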

connected host has failed to respond when I run `docker run hello-world` in docker

The latest version of Docker (version 0.6) has been installed on my laptop (Windows 10 LTSB) through the Docker Toolbox installation package. It seems to be installed correctly, because I see the Docker logo when I start the Docker Quickstart Terminal. But when I run docker run hello-world, it returns
$ docker run hello-world
D:\Program Files\Docker Toolbox\docker.exe: An error occurred trying to connect: Post https://192.168.99.100:2376/v1.24/containers/create: dial tcp 192.168.99.100:2376: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond..
See 'D:\Program Files\Docker Toolbox\docker.exe run --help'.
By the way, I connect to a VPN through Cisco AnyConnect. However, when I disconnect the VPN and run hello-world, it just seems to freeze at
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pulling fs layer
What happened? Is the problem with Docker or with the VPN?
You should first create a default machine:
docker-machine create default
(or start it with docker-machine start default if it already exists)
Then get the environment commands for your new VM:
docker-machine env
And then connect your shell to the new machine:
eval "$(docker-machine env default)"
After that, docker run hello-world should work fine. I tested this on Windows 10 Pro with Docker Toolbox.
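If the connection still times out (for example while the Cisco AnyConnect VPN is up), it can help to check the machine's state and refresh its certificates first; a troubleshooting sketch, assuming the machine is named default:
docker-machine ls
docker-machine regenerate-certs default
eval "$(docker-machine env default)"
docker run hello-world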

Creating docker containers on Windows

So getting boot2docker up and running and pulling containers from Docker Hub are non-issues in a Windows environment. But if I wish to create a container and run it, how do I go about doing this? I've read about using Fig, but is Fig installed on Windows or inside the container? I've attempted to do it from the container, but it often results in a permissions error, and even chowning the folder doesn't let me call fig in the container.
Is it even possible to just run Docker via boot2docker on Windows as a development environment? Or should I just use Vagrant as the host VM and play with a bunch of Docker containers inside it?
Just some clarification and direction would be appreciated.
Fig is a tool for working with Docker. It runs on the host (which could mean your Windows host communicating with Docker via the TCP socket, or your boot2docker VM, which is a guest of your Windows machine and a host to your Docker containers).
All that Fig's doing is streamlining the process of pulling, building and starting Docker images. For example, this fig.yml
db:
  image: postgres
app:
  build: .
  links:
    - "db:db"
  environment:
    - FOO=bar
is (roughly) the same as this series of Docker commands in Bash:
docker run -d --name db postgres
docker build -t app .
docker run -d --name app --link=db:db --env=FOO=bar app
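For completeness, with that fig.yml in the current directory, Fig starts, inspects and stops the whole stack with single commands (a sketch of the Fig CLI of that era; Fig later became Docker Compose with the same verbs):
fig up -d
fig logs
fig stop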
