docker-compose Error Cannot start service mongo: driver failed programming external connectivity on endpoint - windows

I'm setting up GrandNode with MongoDB in Docker using Docker Compose.
docker-compose.yml
version: "3.6"
services:
mongo:
image: mongo:3.6
volumes:
- mongo_data_db:/data/db
- mongo_data_configdb:/data/configdb
ports:
- 27017:27017
grandnode:
image: grandnode/grandnode:4.10
ports:
- 8080:8080
depends_on:
- mongo
volumes:
mongo_data_db:
external: true
mongo_data_configdb:
external: true
I get the error below when running docker-compose:
E:\docker\grandnode>docker-compose up
Creating network "grandnode_default" with the default driver
Creating grandnode_mongo_1 ... error
ERROR: for grandnode_mongo_1 Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: for mongo Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: Encountered errors while bringing up the project.

This happened to me on Xubuntu 20.04.
The problem was that I had mongod running on my machine.
Stopping mongod was the solution for me.
I did this:
sudo systemctl stop mongod
Check that mongod was stopped with:
systemctl status mongod | grep Active
The output of this command should be:
Active: inactive (dead)
Then I ran this again:
docker-compose up -d
Everything worked as expected.

Unless you want to connect to your MongoDB instance from your local host, you don't need the "27017:27017" port mapping.
Both services are on the same network and will see each other anyway; GrandNode can connect to MongoDB at mongo:27017.
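For illustration, a minimal sketch of the services from the question with the host mapping removed from mongo (GrandNode still reaches it at mongo:27017 over the default Compose network; the top-level volumes section is unchanged and omitted here):
services:
  mongo:
    image: mongo:3.6
    volumes:
      - mongo_data_db:/data/db
      - mongo_data_configdb:/data/configdb
    # no ports: entry -- only containers on the Compose network need access
  grandnode:
    image: grandnode/grandnode:4.10
    ports:
      - 8080:8080
    depends_on:
      - mongo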

The problem was that the Shared Drives were unchecked in Docker Desktop's settings.
Check the drives required
Click Apply
Restart Docker
This will fix the issue.

Stop your MongoDB server at the OS level.
For Linux:
sudo systemctl stop mongod
If this still doesn't work, uninstall MongoDB from the local machine and run docker-compose again.
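If you're not sure what is holding the port, you can check before stopping or uninstalling anything (standard Linux tools, nothing specific to this setup):
sudo lsof -i :27017
sudo ss -ltnp | grep 27017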

For Linux users (note the systemd unit is named mongod, not MongoDB):
sudo systemctl stop mongod
sudo docker-compose up -d

Related

Connecting to a Mongo container from Spring container

I have a problem here that I really cannot understand. I've already seen a few topics here with the same problem, and those were successfully solved. I basically did the same thing and cannot understand what I'm doing wrong.
I have a Spring application container that tries to connect to a Mongo container through the following Docker Compose file:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
In my application.properties:
spring.data.mongodb.uri=mongodb://db:27017/app
Finally, my Dockerfile:
FROM eclipse-temurin:11-jre-alpine
WORKDIR /home/java
RUN mkdir /home/java/bar
COPY ./build/libs/foo.jar /home/java/bar/foo.jar
CMD ["java","-jar", "/home/java/bar/foo.jar"]
When I run docker compose up --build I got:
2022-11-17 12:08:53.452 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server db:27017
Caused by: java.net.UnknownHostException: db
Running docker compose ps I can see the Mongo container running fine, and I am able to connect to it through MongoDB Compass and with this same Spring application when it runs outside a container. The only difference when running outside a container is the host: spring.data.mongodb.uri=mongodb://localhost:27017/app instead of spring.data.mongodb.uri=mongodb://db:27017/app.
I also tried changing the host to localhost inside the Spring container, and that didn't work either.
You need to specify the MongoDB host, port, and database as separate parameters, as mentioned here.
spring.data.mongodb.host=db
spring.data.mongodb.port=27017
spring.data.mongodb.authentication-database=admin
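If the application also needs the database name (the question's URI ends in /app), you would presumably set it too; this property is an addition inferred from the question's URI, not part of the properties quoted above:
spring.data.mongodb.database=app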
As per the official docker-compose documentation, the above docker-compose file should have worked, since both db and app are on the same network (you can check whether they ended up on different networks, just in case; see the check below).
If the networking is not working, as a workaround, instead of using localhost inside the Spring container, use the server's IP, i.e., mongodb://<server_ip>:27017/app (and make sure there is no firewall blocking it).
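One way to perform that network check (the <project> prefix is a placeholder; Compose derives it from the directory name):
docker network ls
docker network inspect <project>_default
Both the app and db containers should appear in the "Containers" section of the inspect output.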

docker-compose pull Error: "error creating temporary lease: read-only file system"

I'm trying to run docker-compose pull but I get some errors that I don't know what to do with.
My docker-compose.yaml file:
version: '3'
services:
  strapi:
    image: strapi/strapi
    environment:
      DATABASE_CLIENT: postgres
      DATABASE_NAME: strapi
      DATABASE_HOST: postgres
      DATABASE_PORT: 5432
      DATABASE_USERNAME: strapi
      DATABASE_PASSWORD: strapi
    volumes:
      - ./app:/srv/app
    ports:
      - '1337:1337'
    depends_on:
      - postgres
  postgres:
    image: postgres
    environment:
      POSTGRES_DB: strapi
      POSTGRES_USER: strapi
      POSTGRES_PASSWORD: strapi
    volumes:
      - ./data:/var/lib/postgresql/data
The error message:
Pulling postgres ... error
Pulling strapi ... error
ERROR: for strapi error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: for postgres error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
ERROR: error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
error creating temporary lease: write /var/lib/desktop-containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: read-only file system: unknown
I tried a multitude of things, so YMMV, but here are all of the steps I did that ultimately got it working.
I am using Windows 10 with the WSL2 backend on Ubuntu, so again YMMV, as I see macOS is tagged. This is one of the few questions I found related to mine, so I thought it would be valuable.
Steps for success:
Update WSL (wsl --update -- unrelated to the GitHub issue below)
stop Docker Desktop
stop WSL (wsl --shutdown)
unregister the docker-desktop distro (which contains binaries, but no data)
wsl --unregister docker-desktop
restart Docker Desktop (try running as admin)
Enable use of docker compose V2 (settings -> general -> Use Docker Compose V2)
Associated GitHub issue link
Extra Info:
I ended up using V2 of docker compose when it worked... it works either way now that the image has pulled properly, though.
I unsuccessfully restarted, reinstalled, and factory reset Docker Desktop many times.
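For reference, a sketch of the WSL commands from the steps above, collected in one place (run from an elevated PowerShell or cmd prompt on the Windows host):
wsl --update
wsl --shutdown
wsl --unregister docker-desktop
Docker Desktop should recreate the docker-desktop distro when it restarts.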

Killing the process using port 3306 causes Docker to crash

I'm using macOS, and after installing Docker I tried to install Laradock by running this command from the Laradock documentation:
laradock % docker-compose up -d nginx mariadb phpmyadmin redis workspace
It returned this error:
laradock_mariadb_1 is up-to-date
laradock_docker-in-docker_1 is up-to-date
laradock_redis_1 is up-to-date
Starting laradock_mysql_1 ...
Starting laradock_mysql_1 ... error
WARNING: Host is already in use by another container
ERROR: for laradock_mysql_1 Cannot start service mysql: driver failed programming external
connectivity on endpoint laradock_mysql_1 (a75f179cd36ac95540f346d1c75ff105904cc8717690152ac90b92383c847a3b): Bind for 0.0.0.0:3306 failed: port is already allocated
Starting laradock_workspace_1 ... error
ERROR: for laradock_workspace_1 Cannot start service workspace: driver failed programming external connectivity on endpoint laradock_workspace_1 (fd6a03d680c668acae7f6db40ad7f5d9951a267cdf7e7686f66f751f91cece17): Bind for 0.0.0.0:8080 failed: port is already allocated
ERROR: for mysql Cannot start service mysql: driver failed programming external connectivity on endpoint laradock_mysql_1 (a75f179cd36ac95540f346d1c75ff105904cc8717690152ac90b92383c847a3b): Bind for 0.0.0.0:3306 failed: port is already allocated
ERROR: for workspace Cannot start service workspace: driver failed programming external connectivity on endpoint laradock_workspace_1 (fd6a03d680c668acae7f6db40ad7f5d9951a267cdf7e7686f66f751f91cece17): Bind for 0.0.0.0:8080 failed: port is already allocated
and when I try to kill whatever is using port 3306, the Docker application crashes:
sudo kill `sudo lsof -t -i:3306`
Laradock configuration:
...
ports:
- "${MYSQL_PORT}:3306"
...
ports:
- "${MARIADB_PORT}:3306"
You should guarantee that MYSQL_PORT and MARIADB_PORT have different values, otherwise Docker will try to allocate the same host port for both (an example follows below).
Besides that, when you don't publish a port, containers can all keep their own internal ports, like many containers each listening on port 80, because by default every container gets its own network interface.
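In Laradock these values come from the .env file next to docker-compose.yml; a minimal sketch of the relevant lines (the values here are just examples):
# .env
MYSQL_PORT=3306
MARIADB_PORT=3307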
Pay attention to indentation; always use spaces instead of tabs:
...
version: '3.0'
services:
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
  mariadb:
    image: mariadb:10.4
    ports:
      - "3307:3306"

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command to execute my command? (The command is not a simple ls -al, but it's fine for the demo.)
When you run the containers separately with docker run, they are not attached to the same Docker network, so the mongo container is not reachable from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a user-defined Docker network that both containers join; this is slightly more involved, but is the recommended architecture (see the sketch after the quote below).
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
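To reproduce that behavior with plain docker run, a minimal sketch using a user-defined bridge network and the images from the question (the network name mgmt-net is made up here):
docker network create mgmt-net
docker run -d --network mgmt-net --name mgmt-mongo gitlab-lab:5005/dfc/mongo:latest
docker run --network mgmt-net \
  -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" \
  gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
On a user-defined network, Docker's embedded DNS resolves the container name mgmt-mongo, so the URL works just as it does under Compose.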

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using Elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking properly configured so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries specified:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, and leaving the values unspecified so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, the container logs the following on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on startup does not change. I verified that the elasticsearch.yml is being picked up inside of the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that Elasticsearch wasn't started inside of the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script, invoked from docker run, that starts both Elasticsearch and Cerebro; that should do the trick (a sketch follows below). However, I would still like to hear if there is a better way to configure this.
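A sketch of such a wrapper script (the name start-all.sh is hypothetical; the Cerebro path is taken from the docker run command above):
#!/bin/sh
# start-all.sh (hypothetical): start Elasticsearch via its init script,
# then keep Cerebro in the foreground as the container's main process
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro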
I made a GitHub docker-compose repo that will spin up an Elasticsearch, Kibana, Logstash, and Cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get Elasticsearch and Cerebro working in one Docker container, you need to use supervisord:
https://docs.docker.com/engine/admin/using_supervisord/
I will update with more details.
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.
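A quick smoke test once the stack is up, using the ports published above:
curl http://localhost:9200   # Elasticsearch answers with its JSON banner
curl http://localhost:9000   # Cerebro serves its UI here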
