Unable to reach web server in Docker swarm from the host - macos

I'm starting out with Docker on macOS and got stuck trying to complete part 4 of the Get Started guide. I have created two extra virtual machines (myvm1 and myvm2), set myvm1 as the swarm manager, and myvm2 as a worker.
I then deployed a stack with 5 Flask web servers using the docker-compose.yml from part 3 of the tutorial. The processes seem to start fine and are distributed between the two machines, but I am not able to reach them from the host with a browser.
How should I configure port forwarding/networking so that I can reach the web servers in the swarm from the host of the virtual machines running the Docker containers?
The following is a list of commands that I have run, some with resulting output.
$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2
$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
myvm1   -        virtualbox   Running   tcp://192.168.99.100:2376           v18.09.0
myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.09.0
$ docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
$ docker-machine ssh myvm2 "docker swarm join --token <my-token-inserted-here> 192.168.99.100:2377"
$ eval $(docker-machine env myvm1)
$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
myvm1   *        virtualbox   Running   tcp://192.168.99.100:2376           v18.09.0
myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.09.0
$ docker stack deploy -c docker-compose.yml getstartedlab
$ docker stack ps getstartedlab
ID             NAME                  IMAGE                            NODE    DESIRED STATE   CURRENT STATE              ERROR   PORTS
it9asz4zpdmi   getstartedlab_web.1   mochr/test_repo:friendly_hello   myvm2   Running         Preparing 18 seconds ago
645gvtnde7zz   getstartedlab_web.2   mochr/test_repo:friendly_hello   myvm1   Running         Preparing 18 seconds ago
fpq6cvcf3e0e   getstartedlab_web.3   mochr/test_repo:friendly_hello   myvm2   Running         Preparing 18 seconds ago
plkpximnpobf   getstartedlab_web.4   mochr/test_repo:friendly_hello   myvm1   Running         Preparing 18 seconds ago
gr2p8a0asatb   getstartedlab_web.5   mochr/test_repo:friendly_hello   myvm2   Running         Preparing 18 seconds ago
The docker-compose.yml:
version: "3"
services:
  web:
    image: mochr/test_repo:friendly_hello
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:

It looks like this is a known problem with the current version of boot2docker: https://github.com/docker/machine/issues/4608
The workaround is to use a swarm based on machines that do not require boot2docker (e.g. AWS, DigitalOcean), to wait until a newer version of boot2docker is released, or to fall back to an earlier version of boot2docker, as described in that link. To use an earlier version:
export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
before creating your virtual machines with docker-machine. (Remove your existing virtual machines first, set that variable, and then recreate the machines, as sketched below.)
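Putting it together, the full sequence would look like this (a sketch reusing the VM names from the question):
$ docker-machine rm myvm1 myvm2
$ export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2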
Then you should be able to bring up your stack and access your containers at either 192.168.99.100:4000 or 192.168.99.101:4000 (or whatever IP addresses are shown by docker-machine ls). Swarm's ingress routing mesh publishes port 4000 on every node, which is why either node's IP should work.
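Once the stack is deployed, a quick check from the macOS host confirms the routing mesh is answering (IPs as shown by docker-machine ls):
$ curl http://192.168.99.100:4000
$ curl http://192.168.99.101:4000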

Related

not able to start aerokube moon on linux (ubuntu)

I am trying to set up Aerokube Moon on a Linux machine (Ubuntu 16.04).
Steps followed:
minikube is installed and ingress is enabled.
Moon is installed using https://aerokube.com/moon/latest/#install-helm.
minikube is started using the docker driver.
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ kubectl get pods -n moon
NAME                    READY   STATUS    RESTARTS   AGE
moon-5f6fd5f9fd-7b945   3/3     Running   0          10m
moon-5f6fd5f9fd-fcct6   3/3     Running   0          10m
$ minikube tunnel
Status:
        machine: minikube
        pid: 148130
        route: 10.96.0.0/12 -> xxx.xxx.xx.x
        minikube: Running
        services: []
    errors:
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors
$ cat /etc/hosts
127.0.0.1 localhost moon.aerokube.local
xxx.xxx.xxx.xxx moon.aerokube.local   # this IP is the output of `minikube ip`
Issue 1: when I try to access http://moon.aerokube.local/, I get:
Issue 2: how do I change the default ports for Selenium?
I would like to change the default ports for Selenium in Moon, since ports 8080 and 4444 are already occupied on my machine. I would like to use other ports for the Moon UI and /wd/hub.
I assume Moon will be accessible from that Linux machine itself (I can't check directly on that machine), since it is pointed to localhost in /etc/hosts, but I don't know how to make it accessible from other places, such as the laptops of the people working on this project (this is issue 2 mentioned above).
Please help.
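Not a full answer, but for issue 2 a kubectl port-forward can at least remap the ports; a minimal sketch, assuming the Helm chart created a service named moon in the moon namespace exposing 8080 (UI) and 4444 (/wd/hub):
$ kubectl port-forward -n moon --address 0.0.0.0 svc/moon 9080:8080 9444:4444
The --address 0.0.0.0 flag makes the forwarded ports reachable from other machines on the network, e.g. http://<linux-machine-ip>:9080 for the UI.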

Can't run kubectl in docker container from a host machine installed Minikube "The connection to the server 127.0.0.1:32768 was refused"

I want to have a container that can access and run kubectl command on my host machine. Here is what I have:
I have installed Kubernetes and Minikube on my host machine.
I used this docker container: dtzar/helm-kubectl.
This is the command I use to run the container:
docker run -it -v ~/.kube:/root/.kube -v ~/.minikube:/Users/xxxx/.minikube dtzar/helm-kubectl
Inside the container, when I check the cluster, I can see the context has loaded my minikube. However, I can't run any other kubectl command; everything fails with "The connection to the server 127.0.0.1:32768 was refused - did you specify the right host or port?".
bash-5.0# kubectl config get-contexts
CURRENT   NAME                 CLUSTER          AUTHINFO         NAMESPACE
          docker-desktop       docker-desktop   docker-desktop
          docker-for-desktop   docker-desktop   docker-desktop
*         minikube             minikube         minikube
bash-5.0# kubectl get all
The connection to the server 127.0.0.1:32768 was refused - did you specify the right host or port?
I have checked my Kubernetes config at ~/.kube, and the port is 32768:
- cluster:
    certificate-authority: /Users/xxx/.minikube/ca.crt
    server: https://127.0.0.1:32768
  name: minikube
I have tried -p 32768 and --expose 32768, but no luck. Can anyone help with this?
Thanks zerkms! It works with --network host.
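For reference, the working invocation would then look like this (the same mounts as above, plus host networking, so that 127.0.0.1:32768 inside the container reaches the API server published on the host rather than the container's own loopback):
$ docker run -it --network host -v ~/.kube:/root/.kube -v ~/.minikube:/Users/xxxx/.minikube dtzar/helm-kubectl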

Windows Docker Desktop Linux mode - docker container time skew

Question: How can I map the docker container time to my local PC time, i.e. sync the time inside the docker container?
On my Windows 10 PC, I am running Linux-mode Docker Desktop version 2.2.0.4 (43472), Docker Engine 19.03.8.
All the docker containers I create show a massive time skew from the host:
From a CentOS 8 docker container:
[root# /]# date
Thu May 7 01:18:16 UTC 2020
From the docker host running Windows Docker Desktop on the Windows 10 PC:
PS> date
14 May 2020 14:42:17
I tried to create a new container with the -v option as below:
docker container run -it -v c:\docker_volumes\docker1:/storage -v /etc/localtime:/etc/localtime:ro --name centos7-squid centos:7.7.1908 /bin/bash
I get the error below:
Unable to find image 'centos:7.7.1908' locally
7.7.1908: Pulling from library/centos
f34b00c7da20: Pull complete
Digest: sha256:50752af5182c6cd5518e3e91d48f7ff0cba93d5d760a67ac140e2d63c4dd9efc
Status: Downloaded newer image for centos:7.7.1908
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\"/etc/localtime\\" to rootfs \\"/var/lib/docker/overlay2/c7e86cffdc46c354f19b25fa97146ce8f2caee653793219719b043c97040d1b7/merged\\" at \\"/var/lib/docker/overlay2/c7e86cffdc46c354f19b25fa97146ce8f2caee653793219719b043c97040d1b7/merged/usr/share/zoneinfo/UTC\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
I fixed it by re-syncing the system clock of the virtual machine running docker from its hardware clock:
docker run --rm --privileged alpine hwclock -s
Credit: https://blog.jverkamp.com/2017/11/15/clock-drift-in-docker-containers/
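To verify the fix, compare the clocks before and after the resync (a quick sketch; every container shares the Docker Desktop VM's clock, so a throwaway alpine container is enough):
$ docker run --rm alpine date                       # VM/container time before
$ docker run --rm --privileged alpine hwclock -s    # sync system clock from the hardware clock
$ docker run --rm alpine date                       # should now match the host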

is there any way to run a docker image on host from other docker image? [duplicate]

I am using a docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a docker image. The image is pushed to my private registry, pulled by my production EC2 instances, and then run. So essentially I need to run docker within a docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the docker container, so I can't start docker. I'm not sure what I should be doing now to start docker, so I'm a bit stuck here; any help is appreciated.
A bit more information:
Host machine is running Fedora 20 (will eventually be running Amazon Linux on an EC2 instance)
Docker container is running CentOS 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running "docker inside docker", and instead running docker on your host, but from within your docker container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers; take a look at that image.
You can also take a look at https://registry.hub.docker.com/u/mattgruter/doubledocker/.
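A quick sanity check for the socket-mount approach is to drive the host daemon from a throwaway client container (a sketch using the official docker image; any image with the docker CLI installed would do):
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps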
UPDATE (July 2016)
The most current approach is to use docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple docker versions), use the following to create additional loopback devices:
#!/bin/bash
# create block device nodes /dev/loop0 .. /dev/loop6
# (major number 7 is the loopback driver; -m0660 sets the permissions)
for i in {0..6}
do
    mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run docker inside a docker container using dind. Try this image from Jérôme Petazzoni, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind

Creating docker containers on Windows

So getting boot2docker up and running and pulling containers from the Docker Hub are non-issues in a Windows environment. But if I wish to create a container and run it, how do I go about doing this? I've read about using fig, but is fig installed via Windows or from the container? I've attempted to do it from the container, but it often results in a permissions error, and even chowning the folder doesn't solve the issue of not being able to call fig in the container.
Is it even possible to just run docker via Boot2Docker on Windows as a development environment? Or should I just use Vagrant as the host VM and play with a bunch of docker containers in it?
Just some clarification and direction would be appreciated.
Fig is a tool for working with Docker. It runs on the host (which could mean your Windows host communicating with Docker via the TCP socket, or your boot2docker VM, which is a guest of your Windows machine and a host of your Docker containers).
All that Fig does is streamline the process of pulling, building and starting Docker images. For example, this fig.yml
db:
  image: postgres
app:
  build: .
  links:
    - "db:db"
  environment:
    - FOO=bar
is (roughly) the same as this series of Docker commands in Bash:
docker run -d --name db postgres
docker build -t app .
docker run -d --name app --link=db:db --env=FOO=bar app
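With that fig.yml in place, the whole stack comes up in one step (fig was later renamed Docker Compose, but at the time the commands looked like this):
$ fig up -d    # build/pull and start db and app in the background
$ fig ps       # list the running services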
