-bash: mongo: command not found (MongoDB on AWS)

I have installed a MongoDB server on AWS. The mongod server is up and running, but I'm unable to connect with the mongo shell; the mongo command is not recognized. Normally all the client utilities live in the same directory as the mongod binary, but I could not find a mongo utility in the /usr/bin/ directory.
I am unable to figure out the issue. Where am I going wrong?
$ ps -ef | grep mongo
mongod 21149 1 0 09:35 ? 00:00:01 /usr/bin/mongod -f /etc/mongod.conf
ec2-user 21226 21086 0 09:48 pts/0 00:00:00 grep mongo
$
$ mongo
-bash: mongo: command not found
$
MONGOD LOG
2014-05-18T09:35:18.239+0000 ***** SERVER RESTARTED *****
2014-05-18T09:35:18.262+0000 initandlisten MongoDB starting : pid=21149 port=27017 dbpath=/data/db 64-bit host=ip-172-31-1-234
2014-05-18T09:35:18.262+0000 initandlisten db version v2.6.1
2014-05-18T09:35:18.262+0000 initandlisten git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
2014-05-18T09:35:18.262+0000 initandlisten build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2014-05-18T09:35:18.262+0000 initandlisten allocator: tcmalloc
2014-05-18T09:35:18.262+0000 initandlisten options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod.pid" }, storage: { dbPath: "/data/db", journal: { enabled: false } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2014-05-18T09:35:18.332+0000 initandlisten waiting for connections on port 27017
2014-05-18T09:36:18.335+0000 clientcursormon mem (MB) res:45 virt:330
2014-05-18T09:36:18.335+0000 clientcursormon mapped:80
2014-05-18T09:36:18.335+0000 clientcursormon connections:0
Steps followed to install MongoDB on AWS:
echo "[MongoDB]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1" | sudo tee -a /etc/yum.repos.d/mongodb.repo
sudo yum install -y mongo-10gen-server

You have to install the shell package as well:
$ sudo yum install -y mongo-10gen-server mongodb-org-shell
and then
$ mongo
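Once the shell package is installed, a quick sanity check is to confirm the binary landed next to mongod (a sketch; it assumes the default yum layout shown in the question):
$ ls -l /usr/bin/mongo*
$ mongo --version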

The mongo-10gen-server package doesn't include the mongo shell, the client component needed to connect to the database running as mongod.
Use:
sudo yum install -y mongodb-org
This will install all the required components.
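Because the mongod.conf shown in the log binds only to 127.0.0.1 on port 27017, a local connection should then work; a minimal sketch:
$ mongo --host 127.0.0.1 --port 27017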

Related

Processes are running slow in Docker container on Ubuntu 18.04

Processes take noticeably longer when run in a Docker container on an Ubuntu 18.04 machine, while the same process with the same Docker version runs fine on an Ubuntu 16.04 machine.
I have a Node application listening on a port. It accepts GET requests on the paths "/" and "/docker", which simply run the command "whoami" on the host machine and in a Docker container respectively and return the result. The same Node application with the same Docker container runs on both machines (Ubuntu 16 and Ubuntu 18).
First, I sent 20 concurrent GET requests with path "/" to both machines. Both executed the command in an average of 35-40 ms.
Second, I sent 20 concurrent GET requests with path "/docker" to both machines. Here, the Ubuntu 16 machine took a maximum of 4.3 seconds and an average of 3 seconds, but the Ubuntu 18 machine took a maximum of 10 seconds and an average of 9 seconds.
I repeated this test multiple times and concluded that when the process runs inside Docker, execution time on the Ubuntu 18 machine is roughly double that on Ubuntu 16.
I checked the following:
I monitored with top and htop while sending the 20 requests, but everything looks the same there.
I also tried monitoring with the perf command, but could not find any unusual difference. I am not very familiar with perf, though, so I may not be reading it correctly.
While those 20 requests were being processed, I ran the same Docker command manually under strace and got inconsistent results: on Ubuntu 18 the time was sometimes spent in clock_gettime or futex (FUTEX_WAIT), and sometimes just before the "+++ exited with 0 +++" message, while on Ubuntu 16 it took less time.
Below are the different configurations and code snippets I am using and running:
Machine1: Giving better performance.
node v10.16.0
npm 6.9.0
docker 18.09.8
ubuntu 16.04.3 LTS, xenial
Machine2: Giving poor performance.
node v10.16.0
npm 6.9.0
docker 18.09.8
ubuntu 18.04.2 LTS, bionic
Node application code snippet:
// for path "/docker"
var excuteInDocker = function() {
var cmd = "docker";
var args = ["exec", "ubuntu", "whoami"];
return executeCmd(cmd, args);
}
// for path "/"
var execute = function(){
var cmd = 'whoami';
var args = [];
return executeCmd(cmd, args);
}
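To take Node out of the picture, one rough way to compare the bare docker exec overhead on both machines is to fire the same 20 execs concurrently from the shell (a sketch; it assumes the same running container named "ubuntu" as in the snippet above):
$ time (for i in $(seq 1 20); do docker exec ubuntu whoami & done; wait)
Comparing the wall-clock time of this loop on the Ubuntu 16 and Ubuntu 18 machines shows whether the slowdown is in Docker itself rather than in the Node layer.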
Output of docker info that is common to both Ubuntu 16 and 18:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 2
Server Version: 18.09.8
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.296GiB
Name: myhostname
ID: LLLO:OMTS:PNNM:T3MP:AD2F:UMDG:IIZK:OGBO:3ZLL:YDBX:ONAO:AY5G
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 27
Goroutines: 42
System Time: 2019-07-25T15:25:54.991694211+05:30
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support
docker info specific to Ubuntu 16:
Kernel Version: 4.4.0-112-generic
Operating System: Ubuntu 16.04.3 LTS
Total Memory: 7.303GiB
ID: FOFI:RW7N:RZSP:HHKH:BKS3:LMWL:TC2J:W7V2:222Y:Q2AU:XMU3:KLU7
docker info specific to Ubuntu 18:
Kernel Version: 4.15.0-1040-aws
Operating System: Ubuntu 18.04.2 LTS
Total Memory: 7.296GiB
ID: LLLO:OMTS:PNNM:T3MP:AD2F:UMDG:IIZK:OGBO:3ZLL:YDBX:ONAO:AY5G
Ubuntu 16 machine Data:
1. Data of time taken in execution
2019-07-25 14:06:42.851 INFO uid: 540ae880-aeb7-11e9-919d-dd32b3cf84d5 time: 475 result: {"success":true,"data":"root"}
2019-07-25 14:06:43.183 INFO uid: 54145e60-aeb7-11e9-919d-dd32b3cf84d5 time: 745 result: {"success":true,"data":"root"}
2019-07-25 14:06:45.711 INFO uid: 540c4810-aeb7-11e9-919d-dd32b3cf84d5 time: 3326 result: {"success":true,"data":"root"}
.
.
.
2019-07-25 14:06:46.835 INFO uid: 541d5f10-aeb7-11e9-919d-dd32b3cf84d5 time: 4338 result: {"success":true,"data":"root"}
Logs of command strace -t docker exec ubuntu whoami:
Result of perf top --sort comm,dso:
Ubuntu 18 machine Data:
1. Data of time taken in execution:
2019-07-25 14:07:32.559 INFO uid: 715a6af0-aeb7-11e9-a5a9-2fffd4e800d1 time: 1008 result: {"success":true,"data":"root"}
2019-07-25 14:07:32.941 INFO uid: 7178c860-aeb7-11e9-a5a9-2fffd4e800d1 time: 1191 result: {"success":true,"data":"root"}
2019-07-25 14:07:40.363 INFO uid: 71767e70-aeb7-11e9-a5a9-2fffd4e800d1 time: 8628 result: {"success":true,"data":"root"}
.
.
.
2019-07-25 14:07:41.970 INFO uid: 718af0d0-aeb7-11e9-a5a9-2fffd4e800d1 time: 10101 result: {"success":true,"data":"root"}
Logs of command strace -t docker exec ubuntu whoami:
result of perf top --sort comm,dso:
So, I need help debugging what is wrong with Docker on the Ubuntu 18 machine, or finding out whether there is some limitation with Docker on Ubuntu 18, or whether this is a machine or Docker configuration issue.
I did not encounter this problem on my desktop, but a MySQL container was running inexplicably slowly on my Ubuntu laptop, and this solution fixed it.

Can't send requests to a sinatra app running with docker and docker-compose

Could you tell me what's wrong with my Dockerfile or docker-compose?
# Dockerfile
FROM ruby:2.5.0
RUN apt-get update -qq && apt-get install -y build-essential
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle install
ADD . $APP_HOME
# docker-compose.yml
version: '3'
services:
  db:
    image: mongo
    volumes:
      - $HOME/data/mongodb:/data/db
    ports:
      - "27017:27017"
  web:
    build: .
    command: rackup config.ru --port 4567
    volumes:
      - .:/app
    ports:
      - "4567:4567"
    depends_on:
      - db
If I run the app directly with the 'rackup' command it works well. The issues start when I run the containers through 'docker-compose up': the app stops accepting requests on 'localhost:4567'.
I couldn't spot the issue, that's why I'm asking for help. 🤓
Here are the logs from 'docker-compose up', just in case.
Starting tracker_db_1 ... done
Recreating tracker_api_1 ... done
Attaching to tracker_db_1, tracker_api_1
db_1 | 2018-07-30T09:43:17.580+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=fe12227a1143
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] db version v4.0.0
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] git version: 3b07af3d4f471ae89e8186d33bbb1d5259597d51
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] modules: none
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] build environment:
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] distmod: ubuntu1604
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] distarch: x86_64
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1 | 2018-07-30T09:43:17.669+0000 I CONTROL [initandlisten] options: { net: { bindIpAll: true } }
db_1 | 2018-07-30T09:43:17.686+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2018-07-30T09:43:17.700+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=487M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
web_1 | [2018-07-30 09:43:19] INFO WEBrick 1.4.2
web_1 | [2018-07-30 09:43:19] INFO ruby 2.5.0 (2017-12-25) [x86_64-linux]
web_1 | [2018-07-30 09:43:19] INFO WEBrick::HTTPServer#start: pid=1 port=4567
db_1 | 2018-07-30T09:43:29.033+0000 I STORAGE [initandlisten] WiredTiger message [1532943809:33683][1:0x7f429bbb8a00], txn-recover: Main recovery loop: starting at 7/8576
db_1 | 2018-07-30T09:43:29.787+0000 I STORAGE [initandlisten] WiredTiger message [1532943809:787811][1:0x7f429bbb8a00], txn-recover: Recovering log 7 through 8
db_1 | 2018-07-30T09:43:30.362+0000 I STORAGE [initandlisten] WiredTiger message [1532943810:362900][1:0x7f429bbb8a00], txn-recover: Recovering log 8 through 8
db_1 | 2018-07-30T09:43:30.433+0000 I STORAGE [initandlisten] WiredTiger message [1532943810:433423][1:0x7f429bbb8a00], txn-recover: Set global recovery timestamp: 0
db_1 | 2018-07-30T09:43:30.463+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2018-07-30T09:43:30.487+0000 I CONTROL [initandlisten]
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
db_1 | 2018-07-30T09:43:30.488+0000 I CONTROL [initandlisten]
db_1 | 2018-07-30T09:43:30.635+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2018-07-30T09:43:30.642+0000 I NETWORK [initandlisten] waiting for connections on port 27017
Your app is not available outside localhost, and its localhost is the Docker container, not your system (you can even try to curl localhost from inside the container, and I bet it will work just fine). I can't give you more details, as I'm not strong on the topic, but all you need to do is bind rack to 0.0.0.0.
For example, I run my Rails app this way:
bundle exec rails s -p 3000 -b 0.0.0.0
in your case it will be:
rackup --host 0.0.0.0 --port ...
Now it will be available even from your network (at least it should be).
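A quick way to confirm this explanation (a sketch; it assumes curl is available in the ruby image and uses the compose service name above):
$ docker-compose exec web curl -s localhost:4567    # gets a response: the app is reachable inside the container
$ curl -s localhost:4567                            # connection fails from the host until rackup binds to 0.0.0.0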
I ran into this problem when trying to set up a simple Sinatra app in Docker using the built-in WEBrick server. In other words, I don't have a config.ru file. In case anyone else runs into the same issue, I thought I'd post my solution.
My app is a "Hello World" app I used just to get Sinatra running inside a Docker container.
Here's the app.rb before my changes.
require 'sinatra'

get '/' do
  'Hello world!'
end
When I ran the Docker container I could see WEBrick was running and ready to accept requests. But when I tried to access it from my laptop the request never made it to the container.
To fix the issue I added set :bind, '0.0.0.0' to app.rb, like this.
require 'sinatra'
set :bind, '0.0.0.0'

get '/' do
  'Hello world!'
end
For anything requiring more complex configuration settings it would be better to add the bind setting to a configuration file.

Can't seem to expose docker container port to host

I could be missing something ridiculous, but every Docker container I have tried to expose to my host machine (a Mac) doesn't seem to work. I can tell that the containers are running and appear to be properly exposed on the port I chose. Am I missing something obvious? Any help would be greatly appreciated.
I pulled down the latest Elasticsearch image: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
Run Docker:
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.4.0
Checked the running containers:
docker ps
The running container:
5e8ae3b13f7c docker.elastic.co/elasticsearch/elasticsearch:5.4.0 "/bin/bash bin/es-..." 4 seconds ago Up 4 seconds 0.0.0.0:9200->9200/tcp, 9300/tcp eloquent_almeida
Ran lsof to see if anything is listening on port 9200:
lsof -i tcp:9200
Nothing returned
Mac OS: 10.12.4
Updated Docker version:
docker version
Client:
Version: 17.04.0-ce
API version: 1.27 (downgraded from 1.28)
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 23:33:17 2017
OS/Arch: darwin/amd64
Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Mon Mar 27 16:58:30 2017
OS/Arch: linux/amd64
Experimental: false
Downloaded nmap and ran it against port 9200 on localhost. Also made sure 9200 is now open in /etc/pf.conf.
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00016s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
9200/tcp closed wap-wsp
Also tried using the docker-machine IP on the Mac:
docker-machine ip default
192.168.99.100
Tried 192.168.99.100:9200 and still no luck
You know, it looks like something is wrong with the downloaded image or the Docker installation. I repeated your steps and everything is OK:
[06:40 PM] borlaze#mac: /tmp $ docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.4.0
[06:41 PM] borlaze#mac: /tmp $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd05a1fe9b5a docker.elastic.co/elasticsearch/elasticsearch:5.4.0 "/bin/bash bin/es-..." 9 seconds ago Up 7 seconds 0.0.0.0:9200->9200/tcp, 9300/tcp practical_bell
[06:41 PM] borlaze#mac: /tmp $ lsof -i tcp:9200
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 32108 borlaze 21u IPv4 0x601aa3189a6fc3e3 0t0 TCP *:wap-wsp (LISTEN)
com.docke 32108 borlaze 22u IPv6 0x601aa318a167e6cb 0t0 TCP localhost:wap-wsp (LISTEN)
Checked on macOS 10.12.4 with this Docker version:
[06:45 PM] borlaze#mac: /tmp $ docker version
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Tue Mar 28 00:40:02 2017
OS/Arch: darwin/amd64
Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Fri Mar 24 00:00:50 2017
OS/Arch: linux/amd64
Experimental: true
Try removing the image and repeating the steps.
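One way to do that and then confirm the port actually answers (a sketch; the container ID is the one from the question, and the elastic/changeme credentials are an assumption about the default X-Pack setup in the 5.x image):
$ docker rm -f 5e8ae3b13f7c
$ docker rmi docker.elastic.co/elasticsearch/elasticsearch:5.4.0
$ docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.4.0
$ curl -u elastic:changeme http://localhost:9200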

How to connect to SSHD inside a Docker container from Windows?

I have a Ruby on Rails environment, and I'm converting it to run in Docker. This is largely because the development machine is a Windows laptop and the server is not. I have the Docker container mostly up and running, and now I want to connect the RubyMine debugger. To accomplish this, the recommendation is to set up an SSH server in the container.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207649545-Use-RubyMine-and-Docker-for-development-run-and-debug-before-deployment-for-testing-
I successfully added SSHD to the container using the Dockerfile lines from https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image minus the EXPOSE 22 (because it wasn't working with the port mapping in the docker-compose.yml). But the port is not accessible on the local machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6652389d248c civilservice_web "bundle exec rails..." 16 minutes ago Up 16 minutes 0.0.0.0:3000->3000/tcp, 0.0.0.0:3022->22/tcp civilservice_web_1
When I try to point PUTTY at localhost and 3022, it says that the server unexpectedly closed the connection.
What am I missing here?
This is my dockerfile
FROM ruby:2.2
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
nodejs \
openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
RUN mkdir /MyApp
WORKDIR /MyApp
ADD Gemfile /MyApp/Gemfile
ADD Gemfile.lock /MyApp/Gemfile.lock
RUN bundle install
ADD . /MyApp
and this is my docker-compose.yml
version: '2'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/CivilService
    ports:
      - "3000:3000"
      - "3022:22"
DOCKER_HOST doesn't appear to be set as an environment variable.
docker version outputs the following:
Client:
Version: 17.03.0-ce
API version: 1.26
Go version: go1.7.5
Git commit: 60ccb22
Built: Thu Feb 23 10:40:59 2017
OS/Arch: windows/amd64
Server:
Version: 17.03.0-ce
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 3a232c8
Built: Tue Feb 28 07:52:04 2017
OS/Arch: linux/amd64
Experimental: true
docker run -it --rm --net container:civilservice_web_1 busybox netstat -lnt outputs
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
SSHD is now running alongside the Rails app, but the recipe I was working from for setting up the service (https://docs.docker.com/engine/examples/running_ssh_service/#build-an-egsshd-image) is not correct for the flavor of Linux that came with my base image.
The image I'm using is based on Debian 8. Could someone point out where the example breaks down?
Your sshd process isn't running. That's visible in the netstat output:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35455 0.0.0.0:* LISTEN
But as user2105103 points out, I should have realized this by comparing your docker-compose.yml with the Dockerfile. You define the sshd command in the image with a Dockerfile line:
CMD ["/usr/sbin/sshd", "-D"]
But then you override your image setting when running the container with the docker-compose command:
command: bundle exec rails s -p 3000 -b '0.0.0.0'
So the only thing running, as you can see in the netstat output, is the Rails app listening on 3000. If you need multiple processes, you can docker exec to kick off the second command (not recommended for a second service like this), use a command that launches sshd in the background and Rails in the foreground (fairly ugly), or consider something like supervisord.
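A rough sketch of the "sshd in the background, Rails in the foreground" variant, expressed as the compose command for the web service (an illustration only, not a recommendation):
command: sh -c "/usr/sbin/sshd && bundle exec rails s -p 3000 -b '0.0.0.0'"
Compose still overrides the image CMD, but now both processes start, so the 3022->22 mapping has a listener behind it.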
Personally, I'd skip sshd and just use docker exec -it civilservice_web_1 /bin/bash to get a prompt inside the container when you need it.

Docker-Machine and Swarm behind proxy

I'm trying to set up Docker Swarm over my virtual cluster. First, I try to install the swarm-master on localhost with docker-machine.
The problem is that the machine needs to use a proxy to reach the discovery token.
First I request a token with swarm create. To do that, I created this file:
$cat /etc/systemd/system/docker.service.d/http_proxy.conf
[Service]
Environment="HTTP_PROXY=http://**.**.**.**:3128/" "HTTPS_PROXY=http://**.**.**.**:3128/" "NO_PROXY=localhost,127.0.0.1,192.168.2.100,192.168.2.101,192.168.2.102,192.168.2.103,192.168.2.104,192.168.2.105,192.168.2.106,192.168.2.107,192.168.2.108,192.168.2.194,192.168.2.110"
I restarted the daemon, and I can pull the swarm image:
$docker run -e "http_proxy=http://**.**.**.**:3128/" -e "https_proxy=http://**.**.**.**:3128/" swarm create
b54d8665e72939d2c611d8f9e99521b4
Then I want to create the swarm master:
$docker-machine create -d generic --generic-ip-address localhost \
--engine-env HTTP_PROXY=http://192.168.254.10:3128/ \
--engine-env HTTPS_PROXY=http://192.168.254.10:3128/ \
--engine-env NO_PROXY=localhost,192.168.2.102,192.168.2.100 \
--swarm --swarm-master --swarm-discovery \
token://b54d8665e72939d2c611d8f9e99521b4 swarm-master
Result:
Running pre-create checks...
Creating machine...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Provisioning created instance...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
To see how to connect Docker to this machine, run: docker-machine env swarm-master
And I see errors in the logs of the join and manage containers (I think the errors occur because the containers don't pick up the proxy settings):
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6fbf967cdb60 swarm:latest "/swarm join --advert" 53 seconds ago Up 52 seconds 2375/tcp swarm-agent
8b176116989e swarm:latest "/swarm manage --tlsv" 54 seconds ago Up 53 seconds 2375/tcp, 0.0.0.0:3376->3376/tcp swarm-agent-master
$docker logs 6fbf967cdb60
time="2015-11-17T19:37:21Z" level=info msg="Registering on the discovery service every 20s..." addr="localhost:2376" discovery="token://b54d8665e72939d2c611d8f9e99521b4"
time="2015-11-17T19:37:41Z" level=error msg="Post https://discovery.hub.docker.com/v1/clusters/b54d8665e72939d2c611d8f9e99521b4?ttl=60: dial tcp: lookup discovery.hub.docker.com on 8.8.4.4:53: read udp 172.17.0.3:46576->8.8.4.4:53: i/o timeout"
$docker logs 8b176116989e
time="2015-11-17T19:37:20Z" level=info msg="Listening for HTTP" addr="0.0.0.0:3376" proto=tcp
time="2015-11-17T19:37:40Z" level=error msg="Discovery error: Get https://discovery.hub.docker.com/v1/clusters/b54d8665e72939d2c611d8f9e99521b4: dial tcp: lookup discovery.hub.docker.com on 8.8.4.4:53: read udp 172.17.0.2:44241->8.8.4.4:53: i/o timeout"
Is this a bug in the generic driver?
Some other information:
# docker version
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:29:38 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:29:38 UTC 2015
OS/Arch: linux/amd64
# docker info
Containers: 2
Images: 8
Server Version: 1.9.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 12
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 2
Total Memory: 1000 MiB
Name: swarm-master
ID: 6SDE:CQRA:NM6W:TY2H:4DPB:O4YO:IGRT:33AA:OKQP:M6UK:EMSR:H4WR
WARNING: No memory limit support
WARNING: No swap limit support
Labels:
provider=generic
Thank you :)
The problem was that it's not possible to use docker-machine to create the swarm-master on the same machine. So I created two VMs: one with docker-machine (and the mh-keystore) and another for the swarm-master.
Creating the mh-keystore on localhost:
$docker-machine create -d generic --generic-ip-address localhost mh-keystore
$docker $(docker-machine config mh-keystore) run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
$docker ps
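Consul's HTTP API gives a quick sanity check that the keystore is reachable from the other machines (a sketch; it uses the keystore address that the swarm-discovery flags below point at):
$ curl http://192.168.2.103:8500/v1/status/leader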
Installing the swarm-master on the other machine:
$ docker-machine create \
-d generic --generic-ip-address 192.168.2.100 \
--swarm --swarm-image="swarm" --swarm-master \
--swarm-discovery="consul://192.168.2.103:8500" \
swarm-master
Creating an agent:
$ docker-machine create \
-d generic --generic-ip-address 192.168.2.102 \
--swarm \
--swarm-discovery="consul://192.168.2.103:8500" \
swarm-agent-00
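Once both machines are provisioned, pointing the client at the swarm master should list the nodes (a sketch, assuming the classic swarm setup created above):
$ eval "$(docker-machine env --swarm swarm-master)"
$ docker info
The Nodes section of the output should show swarm-master and swarm-agent-00.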
