Windows unable to connect to exposed Docker port

So I have a container running with port forwarding set up. The port appears to be listening on the local Windows host, but for some reason the connection won't go through.
The command to run the docker container:
docker run -p 4400:4400 storybook:latest
Inside the container itself, I can verify the service is running on port 4400:
netstat -ltnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:4400 0.0.0.0:* LISTEN 33/node
wget http://0.0.0.0:4400
--2022-08-23 19:57:12-- http://0.0.0.0:4400/
Connecting to 0.0.0.0:4400... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5866 (5.7K) [text/html]
Saving to: 'index.html'
100%[==============================================================================>] 5,866 --.-K/s in 0s
2022-08-23 19:57:12 (363 MB/s) - 'index.html' saved [5866/5866]
And on the Windows host, I can verify Docker is listening on port 4400:
netstat -aon | find /i "listening"
TCP [::]:4400 [::]:0 LISTENING 20412
tasklist
Image Name PID Session Name Session# Mem Usage
========================= ======== ================ =========== ============
com.docker.backend.exe 20412 Console 1 26,540 K
But I can't access the service via the Windows host.
wget http://localhost:4400
wget : The underlying connection was closed: The connection was closed unexpectedly.
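A raw TCP probe is a useful extra data point here, since it separates "no listener at all" from "a listener that accepts the connection but whose backend drops the request"; a minimal PowerShell sketch:
Test-NetConnection -ComputerName localhost -Port 4400
# If TcpTestSucceeded is True while wget still fails, Docker's port proxy is
# accepting connections but the service inside the container is unreachable.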
I even tried getting the IP address of the docker container:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
898079f335be storybook:latest "npx nx serve storyb…" 34 minutes ago Up 34 minutes 0.0.0.0:4400->4400/tcp, :::4400->4400/tcp relaxed_mayer
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 898079f335be
172.17.0.2
And tried accessing the service via that IP:
wget http://172.17.0.2:4400
wget : Unable to connect to the remote server
The version of Windows:
Edition: Windows 10 Enterprise
Version: 21H2
OS Build: 19044.1766
Docker information:
Client:
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
compose: Docker Compose (Docker Inc., 2.0.0-beta.4)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 7
Running: 1
Paused: 0
Stopped: 6
Images: 22
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.72-microsoft-standard-WSL2
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 25.06GiB
Name: docker-desktop
ID: VFD3:RX76:D4JD:5Z6P:R2IQ:7JD4:FFQS:YDLJ:BDNW:J4UX:4U5A:GF4S
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
EDIT: I am using WSL2 as the backend.

Related

Docker Windows getting Client.Timeout exceeded while awaiting headers for any pull image command or login

When trying to run docker run hello-world on Windows 10, I am getting this error (a common error I saw on many threads):
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
I already had some images pulled earlier (a couple of months back) which still work, but I am not able to pull any new image (e.g. mongo), not even the hello-world image. I have searched throughout, tried setting the DNS to 8.8.8.8/8.8.4.4 and experimental=true inside the Docker configs (Docker Desktop GUI), yet I am unable to resolve it. One thing I wasn't expecting was an HTTP proxy inside docker info, as I had removed the proxy settings from Docker's GUI and even from the environment variables.
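One quick way to see exactly which proxy values the daemon is using is the Go-template output of docker info (HTTPProxy, HTTPSProxy, and NoProxy are standard fields of the info structure):
docker info --format 'HTTP: {{.HTTPProxy}} HTTPS: {{.HTTPSProxy}} NoProxy: {{.NoProxy}}'
# Empty output here means the daemon itself no longer has a proxy configured.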
docker info:
Client:
Debug Mode: false
Plugins:
scan: Docker Scan (Docker Inc., v0.3.4)
Server:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 4
Server Version: 19.03.13
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.39-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.941GiB
Name: docker-desktop
ID: <Docker Id>
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http://<ip>:port/
HTTPS Proxy: http://<ip>:port/
No Proxy: localhost,127.0.0.2,firm.com,firm.org
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
Can someone please help me remove these proxies from docker info so that I can pull images?
I have not ticked "Expose daemon on tcp://localhost:2375 without TLS" or "Use the WSL 2 based engine" in the Docker Desktop GUI. Inside Proxies, the manual setting is turned off, and the network uses a manual DNS configuration of 8.8.8.8. I am also unable to ping hub.docker.com (request timeout), and docker login from cmd returns the same timeout error, although Docker Desktop (GUI) shows my user as logged in. I feel that if we can remove the proxy from docker info, it might solve the problem.
I apparently solved it with this change:
(Screenshots of the Docker Desktop proxy settings before and after the change appeared here; per the explanation below, the "after" version has all proxy URLs removed.)
This is weird to me: since manual proxy configuration is disabled, Docker shouldn't be picking those values up at all, yet even restarting and stopping the services didn't change anything. So eventually I removed all the proxy URLs (despite manual proxy being off) and then it worked. Just Windows stuff, I guess.
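If anyone hits the same thing, it may also be worth confirming that no proxy survives at the Windows level itself; a small sketch using standard tools:
# Proxy variables persisted at user and machine scope (PowerShell):
[Environment]::GetEnvironmentVariable("HTTP_PROXY", "User")
[Environment]::GetEnvironmentVariable("HTTP_PROXY", "Machine")
# System-wide WinHTTP proxy:
netsh winhttp show proxy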

Issue: Error while running Ubuntu bash shell in Docker

I am running Docker on my ARM-based 32-bit device.
However, when I try to run an Ubuntu bash shell as a Docker container via the command docker run -it ubuntu bash, I keep getting the following error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused
"process_linux.go:402: container init caused \"open /dev/ptmx: no such file or directory\"": unknown.
Here's what docker info gives:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 3
Server Version: 18.06.1-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.65-00273-gfa38327-dirty
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 923MiB
ID: 2PDV:3KHU:VZZM:DM6F:4MVR:TXBN:35YJ:VWP5:TMHD:GMKW:TPMI:MALC
Docker Root Dir: /opt/usr/media/docker_workdir
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
It would be great if someone could tell me what's wrong and how I can fix it.
It could be that, for one reason or another, your docker container can't find its own /dev/ptmx or even perhaps your /dev/ altogether.
One quick solution is to do:
docker run -it -v /dev:/dev ubuntu bash
This binds your /dev/ directory to the container's, meaning that they will use the same files.
Notice that, although in and of itself this operation is harmless, in production environments it means that the isolation between the host's and the container's devices is gone.
For that reason, make sure to only ever use this trick in test environments.
It looks like your OS is missing pseudo-terminal (PTY) support: a device that has the functions of a physical terminal without actually being one.
The file /dev/ptmx is a character file with major number 5 and minor number 2, usually of mode 0666 and owner.group of root.root. It is used to create a pseudo-terminal master and slave pair.
FILES
/dev/ptmx - UNIX 98 master clone device
/dev/pts/* - UNIX 98 slave devices
/dev/pty[p-za-e][0-9a-f] - BSD master devices
/dev/tty[p-za-e][0-9a-f] - BSD slave devices
Reference: http://man7.org/linux/man-pages/man7/pty.7.html
Support for this is included in the Linux kernel by default, so its absence may be related to your OS build or architecture. I'm not sure how you can fix it; maybe try updating and upgrading the OS.
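To confirm this diagnosis on the host, you could check whether the clone device and the devpts filesystem are present at all; a quick sketch (the last command only works if the kernel was built with CONFIG_IKCONFIG_PROC):
ls -l /dev/ptmx                        # should be a character device with major 5, minor 2
mount | grep devpts                    # is devpts mounted?
zcat /proc/config.gz | grep -i devpts  # inspect the kernel config, if exposed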
A quick workaround, if you don't need a TTY, is to skip the -t flag:
docker run -i ubuntu bash
In docker run -it, -i/--interactive means "keep stdin open" and -t/--tty means "allocate a pseudo-TTY for the container". The key here is the word "interactive". If you omit -i, the container still executes /bin/bash but exits immediately because stdin is closed. With the flag, the container executes /bin/bash and then patiently waits for your input, so you get a bash session inside the container where you can ls, mkdir, or run any other bash command.
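A minimal illustration of the difference:
docker run ubuntu bash       # stdin closed: bash reads EOF and exits immediately
docker run -i ubuntu bash    # stdin open: usable session, but without TTY features
docker run -it ubuntu bash   # interactive session on a proper pseudo-terminal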
One workable fix:
docker exec -i hello-world rm /dev/ptmx
docker exec -i hello-world mknod /dev/ptmx c 5 2
or enable kernel config: CONFIG_DEVPTS_MULTIPLE_INSTANCES=y

Error response from daemon: rpc error: code = 2 desc = The swarm does not have a leader [closed]

I have created a 3-node swarm cluster in VirtualBox using docker-machine. All three are running, and I'm able to use docker-machine ssh to connect to every one. The problem is that after I restarted the physical machine, the cluster no longer seems to work. Why? The details follow. Thanks for your guidance and advice.
san#san-System-Product-Name:~$ docker-machine ls
NAME     ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
first    -        virtualbox   Running   tcp://192.168.99.100:2376           v17.06.0-ce
second   -        virtualbox   Running   tcp://192.168.99.101:2376           v17.06.0-ce
third    -        virtualbox   Running   tcp://192.168.99.102:2376           v17.06.0-ce
The first is the leader and the second is a manager, while the third is a worker. I tried to use docker-machine ssh first docker node ls:
Error response from daemon:
`rpc error: code = 2 desc = The swarm does not have a leader`.
It's possible that too few managers are online. Make sure more than
half of the managers are online.
exit status 1
san#san-System-Product-Name:~$ docker-machine ssh first docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 3
Server Version: 17.06.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 17
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: pending
NodeID: dowdk4pzfzm85zijbo23e6xs3
Error: rpc error: code = 2 desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
Is Manager: true
Node Address: 192.168.99.100
Manager Addresses:
192.168.99.100:2375
192.168.99.102:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.4.74-boot2docker
Operating System: Boot2Docker 17.06.0-ce (TCL 7.2); HEAD : 0672754 - Thu Jun 29 00:06:31 UTC 2017
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.8MiB
Name: first
ID: ACGX:Z6QQ:5KOX:7W2O:OMMM:43PB:4QES:KKGJ:IXUC:J2SW:F4SJ:QMQ4
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 24
Goroutines: 76
System Time: 2017-07-28T01:57:37.410536525Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels: provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
san#san-System-Product-Name:~$ docker-machine ssh first docker network ls
NETWORK ID NAME DRIVER SCOPE
22e85840407d bridge bridge local
fc3c6786739c docker_gwbridge bridge local
e294dde63753 host host local
55f8e340b794 none null local
How can I fix this problem and use docker node ls on the manager node? Many thanks for your advice.
I had the same problem but I'm not sure what caused it. I was able to fix it by entering:
docker swarm init --force-new-cluster
Everything got restored.
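After forcing a new cluster, it may be worth verifying that the node is a leader again and, if any node dropped out, re-joining it (a sketch; the actual join commands come from the join-token output):
docker node ls                     # this node should now show as Leader
docker swarm join-token manager    # prints the command for re-joining a manager
docker swarm join-token worker     # prints the command for re-joining a worker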

How to reach a service running in a Docker container (overlay network) externally from different hosts

I have a Docker container running on an overlay network. My requirement is to reach the service running in this container externally from different hosts. The service is bound to the container's internal IP address, and binding the port to the host is not a solution in this case.
Actual Scenario:
The service running inside the container is a Spark driver configured with yarn-client. The Spark driver binds to the container's internal IP (10.x.x.x). When the Spark driver communicates with Hadoop YARN running on a different cluster, the application master on YARN tries to communicate back to the Spark driver on the container's internal IP, but it can't connect to the driver on that internal IP, for obvious reasons.
Please let me know if there is a way to achieve successful communication from the application master (YARN) to the Spark driver (Docker container).
Swarm Version: 1.2.5
docker info:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 42
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 1
ip-172-30-0-175: 172.30.0.175:2375
└ ID: YQ4O:WGSA:TGQL:3U5F:ONL6:YTJ2:TCZJ:UJBN:T5XA:LSGL:BNGA:UGZW
└ Status: Healthy
└ Containers: 3 (2 Running, 0 Paused, 1 Stopped)
└ Reserved CPUs: 0 / 16
└ Reserved Memory: 0 B / 66.06 GiB
└ Labels: kernelversion=3.13.0-91-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
└ UpdatedAt: 2016-09-10T05:01:32Z
└ ServerVersion: 1.12.1
Plugins:
Volume:
Network:
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Kernel Version: 3.13.0-91-generic
Operating System: linux
Architecture: amd64
CPUs: 16
Total Memory: 66.06 GiB
Name: 945b4af662a4
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Command to run the container (I am running it using docker-compose):
zeppelin:
  container_name: "${DATARPM_ZEPPELIN_CONTAINER_NAME}"
  image: "${DOCKER_REGISTRY}/zeppelin:${DATARPM_ZEPPELIN_TAG}"
  network_mode: "${CONTAINER_NETWORK}"
  mem_limit: "${DATARPM_ZEPPELIN_MEM_LIMIT}"
  env_file: datarpm-etc.env
  links:
    - "xyz"
    - "abc"
  environment:
    - "VOL1=${VOL1}"
    - "constraint:node==${DATARPM_ZEPPELIN_HOST}"
  volumes:
    - "${VOL1}:${VOL1}:rw"
  entrypoint: ["/bin/bash", "-c", '<some command here>']
It seems YARN and Spark need to be able to see each other directly on the network. If you could put them on the same overlay network, everything would be able to communicate directly; if not...
Overlay
It is possible to route data directly into the overlay network on a Docker node via the docker_gwbridge that all overlay containers are connected to but, and it's a big but, that only works if you are on the Docker node where the container is running.
So, running two containers on a two-node, non-swarm-mode overlay network (10.0.9.0/24)...
I can ping the local container on demo0, but not the remote one on demo1:
docker#mhs-demo0:~$ sudo ip ro add 10.0.9.0/24 dev docker_gwbridge
docker#mhs-demo0:~$ ping -c 1 10.0.9.2
PING 10.0.9.2 (10.0.9.2): 56 data bytes
64 bytes from 10.0.9.2: seq=0 ttl=64 time=0.086 ms
docker#mhs-demo0:~$ ping -c 1 10.0.9.3
PING 10.0.9.3 (10.0.9.3): 56 data bytes
^C
--- 10.0.9.3 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
Then, on the other host, the roles are reversed, but it's still only the local container that is accessible:
docker#mhs-demo1:~$ sudo ip ro add 10.0.9.0/24 dev docker_gwbridge
docker#mhs-demo1:~$ ping 10.0.9.2
PING 10.0.9.2 (10.0.9.2): 56 data bytes
^C
--- 10.0.9.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
docker#mhs-demo1:~$ ping 10.0.9.3
PING 10.0.9.3 (10.0.9.3): 56 data bytes
64 bytes from 10.0.9.3: seq=0 ttl=64 time=0.094 ms
64 bytes from 10.0.9.3: seq=1 ttl=64 time=0.068 ms
So the big issue is the network would need to know where containers are running and route packets accordingly. If the network were capable of achieving routing like that, you probably wouldn't need an overlay network in the first place.
Bridge networks
Another possibility is using a plain bridge network on each Docker node with routable IPs. Each bridge has an IP range assigned that your network is aware of and can route to from anywhere.
   Yarn (192.168.9.0/24)        DockerC (10.10.2.0/24)
               \                   /
                     router
               /                   \
   DockerA (10.10.0.0/24)       DockerB (10.10.1.0/24)
Then you would attach a network to each node:
DockerA:$ docker network create --subnet 10.10.0.0/24 sparknet
DockerB:$ docker network create --subnet 10.10.1.0/24 sparknet
DockerC:$ docker network create --subnet 10.10.2.0/24 sparknet
Then the router configures routes for 10.10.0.0/24 via DockerA etc.
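On the router, that would look roughly like this (a sketch; the Docker hosts' own addresses are placeholders to substitute):
ip route add 10.10.0.0/24 via <DockerA-host-IP>
ip route add 10.10.1.0/24 via <DockerB-host-IP>
ip route add 10.10.2.0/24 via <DockerC-host-IP>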
This is a similar approach to the way Kubernetes does its networking.
Weave Net
Weave is similar to overlay in that it creates a virtual network that transmits data over UDP. It's a bit more of a generalised networking solution though and can integrate with a host network.
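A minimal sketch of that approach, assuming the weave CLI is installed on each host and the peer hostname is filled in:
weave launch <peer-host>   # start Weave and connect it to a peer
eval $(weave env)          # point the docker CLI at the Weave proxy
docker run ...             # containers started now get IPs on the Weave network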

Docker-Machine and Swarm behind proxy

I'm trying to set up Docker Swarm over my virtual cluster. First, I try to install the swarm-master on localhost with docker-machine.
The problem is that the machine needs to use a proxy to access the discovery token.
First I request a token with swarm create. To do that, I created this file:
$cat /etc/systemd/system/docker.service.d/http_proxy.conf
[Service]
Environment="HTTP_PROXY=http://**.**.**.**:3128/" "HTTPS_PROXY=http://**.**.**.**:3128/" "NO_PROXY=localhost,127.0.0.1,192.168.2.100,192.168.2.101,192.168.2.102,192.168.2.103,192.168.2.104,192.168.2.105,192.168.2.106,192.168.2.107,192.168.2.108,192.168.2.194,192.168.2.110"
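For reference, a drop-in like this is typically picked up by reloading systemd and restarting Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker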
I restarted the daemon, and I can pull the swarm image:
$docker run -e "http_proxy=http://**.**.**.**:3128/" -e "https_proxy=http://**.**.**.**:3128/" swarm create
b54d8665e72939d2c611d8f9e99521b4
Then I want to create the swarm master:
$docker-machine create -d generic --generic-ip-address localhost \
--engine-env HTTP_PROXY=http://192.168.254.10:3128/ \
--engine-env HTTPS_PROXY=http://192.168.254.10:3128/ \
--engine-env NO_PROXY=localhost,192.168.2.102,192.168.2.100 \
--swarm --swarm-master --swarm-discovery \
token://b54d8665e72939d2c611d8f9e99521b4 swarm-master
Result:
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-master
And I get errors in the logs of the join and manage containers (I think the errors come about because the containers don't take the proxy into account):
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6fbf967cdb60 swarm:latest "/swarm join --advert" 53 seconds ago Up 52 seconds 2375/tcp swarm-agent
8b176116989e swarm:latest "/swarm manage --tlsv" 54 seconds ago Up 53 seconds 2375/tcp, 0.0.0.0:3376->3376/tcp swarm-agent-master
$docker logs 6fbf967cdb60
time="2015-11-17T19:37:21Z" level=info msg="Registering on the discovery service every 20s..." addr="localhost:2376" discovery="token://b54d8665e72939d2c611d8f9e99521b4"
time="2015-11-17T19:37:41Z" level=error msg="Post https://discovery.hub.docker.com/v1/clusters/b54d8665e72939d2c611d8f9e99521b4?ttl=60: dial tcp: lookup discovery.hub.docker.com on 8.8.4.4:53: read udp 172.17.0.3:46576->8.8.4.4:53: i/o timeout"
$docker logs 8b176116989e
time="2015-11-17T19:37:20Z" level=info msg="Listening for HTTP" addr="0.0.0.0:3376" proto=tcp
time="2015-11-17T19:37:40Z" level=error msg="Discovery error: Get https://discovery.hub.docker.com/v1/clusters/b54d8665e72939d2c611d8f9e99521b4: dial tcp: lookup discovery.hub.docker.com on 8.8.4.4:53: read udp 172.17.0.2:44241->8.8.4.4:53: i/o timeout"
Is this a bug in the generic driver?
Some other information:
# docker version
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:29:38 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:29:38 UTC 2015
OS/Arch: linux/amd64
# docker info
Containers: 2
Images: 8
Server Version: 1.9.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 12
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 2
Total Memory: 1000 MiB
Name: swarm-master
ID: 6SDE:CQRA:NM6W:TY2H:4DPB:O4YO:IGRT:33AA:OKQP:M6UK:EMSR:H4WR
WARNING: No memory limit support
WARNING: No swap limit support
Labels:
provider=generic
Thank you :)
The problem was that it's not possible to use docker-machine to create the swarm-master on the same machine. So I created two VMs: one with docker-machine (and the mh-keystore), and another for the swarm-master.
Creating the mh-keystore on localhost:
$docker-machine create -d generic --generic-ip-address localhost mh-keystore
$docker $(docker-machine config mh-keystore) run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
$docker ps
Installing the swarm-master on the other machine:
$ docker-machine create \
-d generic --generic-ip-address 192.168.2.100 \
--swarm --swarm-image="swarm" --swarm-master \
--swarm-discovery="consul://192.168.2.103:8500" \
swarm-master
Creating the agent:
$ docker-machine create \
-d generic --generic-ip-address 192.168.2.102 \
--swarm \
--swarm-discovery="consul://192.168.2.103:8500" \
swarm-agent-00
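To sanity-check the result, the client can be pointed at the swarm and the cluster inspected (a sketch using standard docker-machine flags):
eval $(docker-machine env --swarm swarm-master)
docker info   # the Nodes section should list swarm-master and swarm-agent-00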
