MongoDB running in Docker: failed to auth with Robo 3T - Laravel

I'm facing a problem while trying to connect to my MongoDB with Robo 3T.
My Docker container configuration:
todo_mongodb:
  container_name: mongodb
  image: mongo:latest
  ports:
    - "27017:27017"
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=admin
    - MONGO_INITDB_DATABASE=admin
  volumes:
    - "./mongo/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
The docker ps command shows the container up and running.
I'm able to access the MongoDB shell and log in as admin using:
winpty docker exec -it mongodb bash
mongo admin -u admin -p admin
I've also verified the admin DB password using:
db.auth('admin','admin')
> 1
I've tried resetting my containers, but nothing worked. Any help please!

Another instance of Docker was running and blocking the connections! Hope this helps someone.
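For reference, the connection that matches the compose file above can be tested from the host like this (a sketch, assuming the 27017:27017 mapping and that mongosh, or the legacy mongo shell, is installed on the host; in Robo 3T the authentication database must be set to admin):
# Root user from MONGO_INITDB_ROOT_USERNAME/PASSWORD authenticates against the "admin" database
mongosh "mongodb://admin:admin@localhost:27017/?authSource=admin"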

Related

Connecting to a Mongo container from Spring container

I have a problem here that I really cannot understand. I have already seen a few topics here with the same problem, and those topics were successfully solved. I basically did the same thing and cannot understand what I'm doing wrong.
I have a Spring application container that tries to connect to a Mongo container through the following Docker Compose file:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
In my application.properties:
spring.data.mongodb.uri=mongodb://db:27017/app
Finally, my Dockerfile:
FROM eclipse-temurin:11-jre-alpine
WORKDIR /home/java
RUN mkdir /home/java/bar
COPY ./build/libs/foo.jar /home/java/bar/foo.jar
CMD ["java","-jar", "/home/java/bar/foo.jar"]
When I run docker compose up --build I get:
2022-11-17 12:08:53.452 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server db:27017
Caused by: java.net.UnknownHostException: db
Running docker compose ps I can see the Mongo container running fine, and I am able to connect to it with MongoDB Compass and with this same Spring application when it runs outside a container. The only difference when running outside a container is changing the host in spring.data.mongodb.uri from mongodb://db:27017/app to mongodb://localhost:27017/app.
I also already tried changing the host to localhost inside the Spring container, and that didn't work either.
You need to specify the MongoDB host, port and database as separate properties, as mentioned here.
spring.data.mongodb.host=db
spring.data.mongodb.port=27017
spring.data.mongodb.authentication-database=admin
As per the official docker-compose documentation, the above docker-compose file should work, since both db and app are on the same network (you can check whether they ended up on different networks, just in case).
If the networking is not working, as a workaround, instead of using localhost inside the Spring container, use the server's IP, i.e. mongodb://<server_ip>:27017/app (and make sure there is no firewall blocking it).
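To verify that both containers really ended up on the same network, something like the following can help (the network name is a guess based on Compose's default <project>_default naming):
# List the networks Compose created
docker network ls
# Inspect the default project network; replace "myproject" with your project/folder name
docker network inspect myproject_default
# Show which networks a given container is attached to
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' <container_name>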

How to check from inside a container if another container is running on port

I am running two containers at the same time (connected via docker-compose using the links and depends_on settings).
depends_on is not enough, so I want the script that runs in the entrypoint of one of the containers to check whether the other container is already listening on a given port.
I tried:
#!/bin/bash
until nc -z -w 10 <container_name> 3306
do
  echo waiting for db to be ready...
  sleep 2
done
echo code is ready
But this is not working.
Anyone got an idea?
I would suggest using the depends_on approach. However, you can use some of the advanced settings of this option. Please read the documentation on Control startup and shutdown order in Compose.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Since you are already using docker-compose to orchestrate your services, a better way would be to use condition: service_healthy of the depends_on long syntax. Instead of manually waiting in one container for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not already have a HEALTHCHECK specified in its image, you can define it manually in the docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service and wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
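To watch the health state while the stack starts, a quick check (assuming the service names above) is:
# The status column of docker-compose ps includes the health state
docker-compose ps
# Or read the raw health status of the db container directly
docker inspect -f '{{.State.Health.Status}}' $(docker-compose ps -q db)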

Oracle XE 11g in Docker, users created are lost after restarting Docker on Ubuntu

I have installed Docker on Ubuntu 21.10 and, following the official instructions, pulled the Oracle 11g XE image:
docker pull oracleinanutshell/oracle-xe-11g
Then I started the image:
docker run -d -p 49161:1521 -p 8080:8080 oracleinanutshell/oracle-xe-11g
and using the Oracle SQL Developer I connected as SYSTEM and created a standard user, granting the appropriate privileges (create/delete tables, sequences etc).
Then I connected as that standard user and started creating and populating some tables.
But when I stop the Docker container and restart it, the user and all the tables are lost. What can be done to resolve this issue?
Thanks a lot!
You need to create a volume in order to keep persistent data. Moreover, once you start dealing with these kinds of things, it is better to use Docker Compose.
Option 1 using docker:
First create the volume:
docker volume create db-vol
Then use this command to attach the volume where the data is stored:
docker run -d -p 49161:1521 -p 8080:8080 -v db-vol:/opt/oracle/oradata oracleinanutshell/oracle-xe-11g
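As an optional sanity check, you can confirm the named volume exists and see where Docker stores it on the host:
# Show the volume's mountpoint and metadata
docker volume inspect db-vol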
Option 2 using docker compose:
version: '3'
services:
  oracle-db:
    image: oracleinanutshell/oracle-xe-11g:latest
    ports:
      - 1521:1521
      - 5500:5500
    volumes:
      - db-vol:/opt/oracle/oradata
volumes:
  db-vol:
Please find the theory behind the concepts needed here:
https://docs.docker.com/storage/volumes/
https://hub.docker.com/r/oracleinanutshell/oracle-xe-11g

docker-compose Error Cannot start service mongo: driver failed programming external connectivity on endpoint

I'm setting up GrandNode with MongoDB in Docker using Docker Compose.
docker-compose.yml
version: "3.6"
services:
mongo:
image: mongo:3.6
volumes:
- mongo_data_db:/data/db
- mongo_data_configdb:/data/configdb
ports:
- 27017:27017
grandnode:
image: grandnode/grandnode:4.10
ports:
- 8080:8080
depends_on:
- mongo
volumes:
mongo_data_db:
external: true
mongo_data_configdb:
external: true
I get the error below when running docker-compose:
E:\docker\grandnode>docker-compose up
Creating network "grandnode_default" with the default driver
Creating grandnode_mongo_1 ... error
ERROR: for grandnode_mongo_1 Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: for mongo Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: Encountered errors while bringing up the project.
It happened to me on Xubuntu 20.04.
The problem was that I had mongod running on my computer.
Stopping mongod was the solution for me.
I did this:
sudo systemctl stop mongod
Check that mongod was stopped with:
systemctl status mongod | grep Active
The output of this command should be:
Active: inactive (dead)
Then I ran this again:
docker-compose up -d
Everything worked as expected.
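If you are not sure what is holding port 27017, a generic check on Linux looks like this (either tool works, if installed):
# Show the process listening on 27017
sudo ss -ltnp | grep 27017
# or
sudo lsof -i :27017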
Unless you want to connect to your MongoDB instance from your local host, you don't need that port mapping "27017:27017".
Both services are on the same network and will see each other anyway. GrandNode can connect to MongoDB at mongo:27017.
The problem was because the Shared Drives were unchecked.
Check the drives required
Click Apply
Restart Docker
This will fix the issue.
Stop the MongoDB server running on your OS.
For Linux:
sudo systemctl stop mongod
If this still doesn't work, uninstall MongoDB from the local machine and run Docker Compose once again.
For Linux users:
sudo systemctl stop mongod
sudo docker-compose up -d

How to use docker run with a Meteor image?

I have two containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I get this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run so that it executes my command? (The real command is not a simple ls -al, but that's fine for the demo.)
When you run the containers separately with docker run, they are not linked on the same docker network so the mongo container is not accessible from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a defined docker network for both containers to be linked by; this is more complex, but is the recommended architecture.
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
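For completeness, a rough docker run equivalent using a user-defined network (the network name mgmt-net is made up here; image and container names are taken from the question):
# Create a user-defined bridge network so containers can resolve each other by name
docker network create mgmt-net
# Start Mongo on that network; its container name becomes its DNS name
docker run -d --name mgmt-mongo --network mgmt-net gitlab-lab:5005/dfc/mongo:latest
# Run the one-off command on the same network, pointing MONGO_URL at the mongo container
docker run --rm --network mgmt-net -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al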
