Oracle XE 11g in Docker: users created are lost after restarting Docker on Ubuntu

I have installed Docker on Ubuntu 21.10 and, following the official instructions, I pulled the Oracle 11g XE image:
docker pull oracleinanutshell/oracle-xe-11g
Then I started the image:
docker run -d -p 49161:1521 -p 8080:8080 oracleinanutshell/oracle-xe-11g
and using the Oracle SQL Developer I connected as SYSTEM and created a standard user, granting the appropriate privileges (create/delete tables, sequences etc).
Then I connected as that standard user and started creating and populating some tables.
But after stopping the Docker container and restarting it, the user and all the tables were lost. What can be done to resolve this issue?
Thanks a lot!

You need to create a volume in order to keep persistent data. Moreover, once you start dealing with this kind of thing, it is easier to manage with Docker Compose.
Option 1 using docker:
First create the volume:
docker volume create db-vol
Then use this command to attach the volume where the data is stored:
docker run -d -p 49161:1521 -p 8080:8080 -v db-vol:/opt/oracle/oradata oracleinanutshell/oracle-xe-11g
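To check that the data really ends up in the named volume, you can inspect it and recreate the container against the same volume; the users and tables created earlier should survive (this assumes the image keeps its datafiles under /opt/oracle/oradata, as used above):
# Show the volume and its mountpoint on the Docker host
docker volume inspect db-vol
# Remove the container and start a fresh one attached to the same volume
docker stop <container_id> && docker rm <container_id>
docker run -d -p 49161:1521 -p 8080:8080 -v db-vol:/opt/oracle/oradata oracleinanutshell/oracle-xe-11g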
Option 2 using docker compose:
version: '3'
services:
  oracle-db:
    image: oracleinanutshell/oracle-xe-11g:latest
    ports:
      - 1521:1521
      - 5500:5500
    volumes:
      - db-vol:/opt/oracle/oradata
volumes:
  db-vol:
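Note that with this setup the named volume also survives docker-compose down, as long as you do not pass the -v/--volumes flag:
# Containers are removed, but the db-vol named volume is kept
docker-compose down
# Recreate the stack; Oracle reuses the datafiles stored in db-vol
docker-compose up -d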
You can find the theory behind these concepts here:
https://docs.docker.com/storage/volumes/
https://hub.docker.com/r/oracleinanutshell/oracle-xe-11g

Related

How to check from inside a container if another container is running on port

I am running 2 containers at the same time (connected via docker-compose using links and depends_on).
The depends_on is not enough, so I want the script that runs in the entrypoint of one of the containers to check whether the other container is already listening on some port.
I tried:
#!/bin/bash
until nc -z -w 10 <container_name> 3306
do
    echo "waiting for db to be ready..."
    sleep 2
done
echo "code is ready"
But this is not working..
Anyone got an idea?
I would suggest using the depends_on approach. However, you can use some of the advanced settings of this option. Please read the documentation on Control startup and shutdown order in Compose.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Since you are already using docker-compose to orchestrate your services, a better way would be to use condition: service_healthy of the depends_on long syntax. Instead of manually waiting in one container for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not have a specified HEALTHCHECK in its image already you can manually define it in the docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service and wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
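You can also check the health state that Compose is waiting on, for example with the commands below (the container name is whatever name Compose gave your db service, so a placeholder here):
# Show services and their current status, including health
docker-compose ps
# Or query the health status of the db container directly
docker inspect --format '{{.State.Health.Status}}' <db_container_name>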

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, my app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. Was hoping someone familiar with docker might be able to explain why since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
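If the two containers come from separate docker-compose projects, as in your setup, the name lookup only works when both are attached to the same network. A sketch of one way to do this, assuming the API's service is called my-api in its own compose file (my-api and my-api-image are illustrative names): create the network once,
docker network create my-network
then declare it as external and attach the service to it in each project's docker-compose.yml:
services:
  my-api:
    image: my-api-image
    networks:
      - my-network
networks:
  my-network:
    external: true
After both stacks are up, docker exec -ti laradock-workspace-1 ping my-api should resolve, provided the service (or container_name) on the other side is really called my-api.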

MongoDB running in Docker: failed to auth in Robo 3T

I am facing a problem while trying to connect to my MongoDB with Robo 3T.
My Docker container configuration:
todo_mongodb:
  container_name: mongodb
  image: mongo:latest
  ports:
    - "27017:27017"
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=admin
    - MONGO_INITDB_DATABASE=admin
  volumes:
    - "./mongo/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
The docker ps command shows the container running.
I am able to access the mongodb shell and log in as admin using:
winpty docker exec -it mongodb bash
mongo admin -u admin -p admin
I have also tried to verify the admin db password using:
db.auth('admin','admin')
> 1
I have tried resetting my containers, but nothing worked. Any help please!
Another instance of Docker was running and blocking the connections! Hope this helps someone.
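If you hit the same symptom, a quick way to see what is actually bound to port 27017 before digging into credentials (the netstat variant is for Windows, which the winpty usage suggests):
# List running containers and the ports they publish
docker ps
# What is listening on 27017 on the host (Windows)
netstat -ano | findstr 27017
# On Linux/macOS
lsof -i :27017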

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command to execute my command? (The real command is not a simple ls -al, but it is fine for the demo.)
When you run the containers separately with docker run, they are not attached to the same Docker network, so the mongo container is not accessible from the app container. To remedy this, you should use either:
- --link to mark the app container as linked to the mongo container. This works, but is deprecated.
- a user-defined Docker network that both containers join; this is more complex, but is the recommended approach.
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
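A minimal sketch of the network approach with plain docker run (meteor-net is just an illustrative network name):
# Create a user-defined bridge network
docker network create meteor-net

# Start the mongo container on it; the name must match the host used in MONGO_URL
docker run -d --network meteor-net --name mgmt-mongo gitlab-lab:5005/dfc/mongo:latest

# Run the app container on the same network, so mgmt-mongo resolves
docker run --network meteor-net \
  -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" \
  gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al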

Docker FATAL: could not write lock file "postmaster.pid": No space left on device

postgres:9.5
I tried rebooting,
docker-compose build --no-cache
and deleting the image and container and building again.
I have many projects and none of them starts; they all keep the same configuration...
Mac OS X Sierra
Apparently the containers were not deleted cleanly. I tried the following, and after rebuilding everything worked fine.
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)
docker-compose.yml
version: '2'
services:
  web:
    build: .
    image: imagename
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "3000:3000"
      - "8000:8000"
    volumes:
      - .:/code
    depends_on:
      - migration
      - redis
      - db
  redis:
    image: redis:3.2.3
  db:
    image: postgres:9.5
    volumes:
      - .:/tmp/data/
  npm:
    image: imagename
    command: npm install
    volumes:
      - .:/code
  migration:
    image: imagename
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
    depends_on:
      - db
Dockerfile:
FROM python:3.5.2
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
RUN mkdir /code
WORKDIR /code
RUN easy_install -U pip
ADD requirements.txt /code/requirements.txt
RUN pip install -r requirements.txt
If you're coming here from Google and finding that multiple containers are complaining of Disk space, the issue may be that your local Docker installation has maxed out its disk image size. This is configurable in Docker for Mac. Here are the instructions to change that disk image size.
You can do docker volume prune to remove all unused local volumes.
If you do not have any critical data you can blow away the docker volume.
docker volume ls
docker volume rm your_volume
In my case: go to the Docker Dashboard -> Settings, increase the Disk image size, and then restart.
Of late, I faced a similar issue with postgres and mysql databases. All of a sudden these containers exited without any external trigger. I spent much time on this issue; finally it was identified as a RAM allocation issue on the server.
There were 13 containers running on the same host, among which were 3 postgres and 1 mysql database containers. These containers exited and the application stopped working. There were 2 errors in the docker logs, mainly:
postgresql database directory appears to contain a database and
FATAL: could not write lock file "postmaster.pid": No space left on device
I tried stopping all other services and starting only the database containers, but the issue repeated.
First of all, check the storage utilisation status with the command below:
df [OPTION]... [FILE]...
df -hP
In my case, it was showing 98% utilised and the databases were not able to add new records, which caused the problem. After allocating additional space to the NFS mount, the issue cleared.
Once that is done, also verify the RAM utilisation status, which will have increased by now:
free -h
This returns values for total, used, free, shared, buff/cache and available. If you stop the containers one by one and restart them, you can see that they consume memory from the shared category. In my case, there was initially only about 18M shown under shared, which was not enough for the databases to run along with all the other containers. After the NFS mount was increased, the shared RAM also increased to 50M, which means all services work fine.
So it is purely a physical storage space issue, and you should act proactively to remove old unused files, docker images which take up huge space, local docker volumes, etc. Check the docker documentation to perform these steps:
https://docs.docker.com/config/pruning/
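A quick way to see where the space is going and to reclaim it (careful: prune commands delete data, and --volumes also removes unused named volumes):
# Show how much space images, containers, local volumes and the build cache use
docker system df
# Remove stopped containers, dangling images, unused networks and the build cache
docker system prune
# Additionally remove unused volumes (destructive!)
docker system prune --volumes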
I faced this problem on Docker Desktop for Mac after I had rebuilt the containers and started them via docker compose up. But the older versions of these containers were still running, because they were set to restart automatically.
I.e. the PostgreSQL DB couldn't take the lock on the named volume, because there was concurrent access from the already-running container.
