Docker Oracle Database - can't overwrite ENV variables for credentials - oracle

I would like to configure an Oracle database on a server. For that, I am using this image from DockerHub:
https://hub.docker.com/r/sath89/oracle-12c/
Having included the image in a docker-compose.yml file, I am having trouble overwriting the default credentials for accessing the database (the username is system and the password is oracle). This is what my docker-compose.yml file looks like:
version: '3.5'
services:
  oracle12c-db:
    image: sath89/oracle-12c
    restart: always # restart policy
    ports:
      - 1521:1521
    environment:
      - USER=myusername
      - PASS=mypass
      - HOST=oracle-database
      - PORT=1521
      - ORACLE_SID=XE
      - HTTP_PORT=8080
After successfully executing docker-compose up, I am still only able to access the database with the default credentials, not the new ones. Is my docker-compose file syntactically correct, or am I missing something else here? Thanks in advance for your help!

I don't think you can modify this at run time particularly easily.
Option 1 is to create your own Dockerfile based on theirs and pass in the user and password at build time (or hard-code them to something else).
Option 2 is to modify their entrypoint and run the appropriate Oracle commands at startup to change the user/password.
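For Option 2, the sath89 image (like many database images) appears to execute SQL scripts placed in /docker-entrypoint-initdb.d on first startup; assuming that holds for your image version, a small init script could create the account you want:

```sql
-- create-user.sql: run once at first startup (assumes the image
-- executes scripts mounted into /docker-entrypoint-initdb.d)
CREATE USER myusername IDENTIFIED BY mypass;
GRANT CONNECT, RESOURCE TO myusername;
```

You would then bind-mount the folder containing this script into /docker-entrypoint-initdb.d in your compose file.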


How to choose correct profile on different environments for Docker and Docker-Compose?

Actually, I have checked some questions like this.
What I do not understand is: if I change my docker-compose.yml and add a profile to it, should I then leave the Dockerfile without a profile?
For example my docker-compose file:
backend:
  container_name: backend
  image: backend
  build: ./backend
  restart: always
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 15
  ports:
    - '8080:8080'
  environment:
    - MYSQL_ROOT_PASSWORD=DbPass3008
    - MYSQL_PASSWORD=DbPass3008
    - MYSQL_USER=DbUser
    - MYSQL_DATABASE=db
  depends_on:
    - mysql
And I will add:
environment:
  - SPRING_PROFILES_ACTIVE=test
As far as I understand, I need three different compose files and have to run them with the -f parameter for the different environments, like:
docker-compose -f docker-compose-local/test/prod up -d
But my question is that my Dockerfile is already specifying profile as:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "-Dspring.profiles.active=TEST", "backend.jar"]
So how should I change this Dockerfile? Even if I create 3-4 different compose files, they all use the same Dockerfile. Should I create different Dockerfiles too (which seems ridiculous)? What is the correct way?
There's no need to add a java -Dspring.profiles.active=... command-line option; Spring will recognize the runtime SPRING_PROFILES_ACTIVE environment variable on its own. That means all of your environments can use the same image (which is generally a good practice).
Compose can also expand host environment variables in some contexts, so you may be able to use a single Compose file with environment-variable references:
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=${ENVIRONMENT:-dev}
ENVIRONMENT=test docker-compose up -d
I tend to discourage putting environment-specific settings in a src/main/resources/*.yml file, since it means you need to recompile the application jar file whenever you deploy to a new environment. Another possibility is to set most Spring properties as environment variables, and then use multiple Compose files to include environment-specific settings. The one downside here is that you need multiple docker-compose -f options and you need to repeat them on every docker-compose invocation.
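The multiple-file approach from the last paragraph can be sketched as a base file plus a small per-environment overlay (the file names here are illustrative):

```yaml
# docker-compose.prod.yml -- merged on top of the base docker-compose.yml
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=prod
```

Every invocation then repeats both files, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d.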

How to create additional buckets on InfluxDB Docker initialization

I don't know how to approach my problem because I can't find similar cases to use as an example.
I want to set up InfluxDB with two buckets to save Telegraf data, but it only sets up the init bucket.
These are the two influxdb services in my docker-compose file:
influxdb:
  image: influxdb:latest
  volumes:
    - ./influxdbv2:/root/.influxdbv2
  environment:
    # Use these same configuration parameters in your telegraf configuration, mytelegraf.conf.
    - DOCKER_INFLUXDB_INIT_MODE=setup
    - DOCKER_INFLUXDB_INIT_USERNAME=User
    - DOCKER_INFLUXDB_INIT_PASSWORD=****
    - DOCKER_INFLUXDB_INIT_ORG=org
    - DOCKER_INFLUXDB_INIT_BUCKET=data
    - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=****
  ports:
    - "8086:8086"
influxdb_cli:
  image: influxdb:latest
  links:
    - influxdb
  volumes:
    # Mount for influxdb data directory and configuration
    - ./influxdbv2:/root/.influxdbv2
  entrypoint: ["./entrypoint.sh"]
  restart: on-failure:10
  depends_on:
    - influxdb
When init runs, the InfluxDB setup completes correctly, but the script doesn't run, and Telegraf returns 404 when trying to write to the buckets.
I ran into the same issue today and as far as I am aware you cannot currently initialize two buckets with the DOCKER_INFLUXDB_INIT_BUCKET environment variable.
So I created a shell script called createSecondBucket.sh that I found in another answer to this question. It uses the influx CLI to create a new bucket. The script looks like this:
#!/bin/sh
set -e
influx bucket create -n YOUR_BUCKET_NAME -o YOUR_ORG_NAME -r 0
Note that I had to change the line endings to unix (LF) to run the script without errors.
Inside my Dockerfile I added the following lines:
COPY ./createSecondBucket.sh /docker-entrypoint-initdb.d
RUN chmod +x /docker-entrypoint-initdb.d/createSecondBucket.sh
which have the effect that the script is executed after the container starts for the first time. I found this information on the MongoDB dockerhub page which you can find here under the "Initializing a fresh instance" headline.
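If you would rather not build a custom image, bind-mounting the script into the same directory should have the same effect (assuming your influxdb image version also executes scripts from /docker-entrypoint-initdb.d on first run):

```yaml
services:
  influxdb:
    volumes:
      - ./createSecondBucket.sh:/docker-entrypoint-initdb.d/createSecondBucket.sh:ro
```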

Oracle XE 11g in Docker: created users are lost after restarting Docker on Ubuntu

I have installed docker on Ubuntu 21.10 and following the official
instructions I pulled oracle 11g xe image:
docker pull oracleinanutshell/oracle-xe-11g
Then I started the image:
docker run -d -p 49161:1521 -p 8080:8080 oracleinanutshell/oracle-xe-11g
and using the Oracle SQL Developer I connected as SYSTEM and created a standard user, granting the appropriate privileges (create/delete tables, sequences etc).
Then I connected as that standard user and started creating and populating some tables.
But when I stop and restart the Docker container, the user and all the tables are lost. What can be done to resolve this issue?
Thanks a lot!
You need to create a volume in order to keep persistent data. Moreover, once you start to deal with these kinds of things, it is better to use Docker Compose.
Option 1 using docker:
First create the volume:
docker volume create db-vol
Then use this command to attach the volume where the data is stored:
docker run -d -p 49161:1521 -p 8080:8080 -v db-vol:/opt/oracle/oradata oracleinanutshell/oracle-xe-11g
Option 2 using docker compose:
version: '3'
services:
  oracle-db:
    image: oracleinanutshell/oracle-xe-11g:latest
    ports:
      - 1521:1521
      - 5500:5500
    volumes:
      - db-vol:/opt/oracle/oradata
volumes:
  db-vol:
Please find the theory behind the concepts needed here:
https://docs.docker.com/storage/volumes/
https://hub.docker.com/r/oracleinanutshell/oracle-xe-11g

GitLab CI: Laravel migrate fails with MySQL 5.7 service

Please help me. When the GitLab CI job runs, the Laravel migration fails with an error: it can't connect to the MySQL host.
First:
Use Docker with environment variables for every deployment :D
Second:
cat the generated .env in your CI script and inspect its contents.
Your sed is not working right: it substitutes the variable name with a literal string rather than the data held in the variable.
Should be something like:
sed -i "s|DB_HOST=|DB_HOST=${DB_HOST}|g" .env
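To sanity-check the substitution locally (the sample value is illustrative):

```shell
# simulate the template line from .env.example, then apply the fix
echo 'DB_HOST=' > .env
DB_HOST=mysql
sed -i "s|DB_HOST=|DB_HOST=${DB_HOST}|g" .env
cat .env   # prints DB_HOST=mysql
```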
Third: don't use .env.example as the basis for building .env. Build the .env file from scratch.
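Building .env from scratch in the CI script might look like this (the DB_* names follow Laravel's defaults; the values would come from CI variables):

```shell
# write only the variables the pipeline actually needs
cat > .env <<EOF
DB_CONNECTION=mysql
DB_HOST=${DB_HOST}
DB_DATABASE=${DB_DATABASE}
DB_USERNAME=${DB_USERNAME}
DB_PASSWORD=${DB_PASSWORD}
EOF
```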
You didn't set up MySQL properly: under services you have MySQL, but no port is exposed and no MySQL root username, password, or database is set.
It should be something like...
services:
  mysql:
    image: mysql:5.7
    env:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: laravel
    ports:
      - 33306:3306
Then use that in your project Environment, try this for a complete yml.
If you want SQLite instead of MySQL you can try this

How can I mount data from an Oracle Docker container?

i just want to start an oracle docker container with docker-compose.yml.
that works so far, till i added a folder to sync/mount/whatever from the container.
problem 1: if the folder on the host is missing - docker doesn't mount anything
problem 2: a empty folder is getting ignored by git - so i added an empty file .emptyfileforgit
So if i now start my docker-compose up, docker mounts the folder with my fake file to the oracle container, and so the database is "broken".
docker compose file:
version: "3"
services:
  mysql_db:
    container_name: oracle_db
    image: wnameless/oracle-xe-11g:latest
    ports:
      - "49160:22"
      - "49161:1521"
      - "49162:8080"
    restart: always
    volumes:
      - "./oracle_data:/u01/app/oracle/oradata"
      - "./startup_scripts:/docker-entrypoint-initdb.d"
    environment:
      - ORACLE_ALLOW_REMOTE=true
Question: how can I get rid of this behaviour?
With a MySQL container this works fine...
Thanks a lot!
With a volume mapping of folder/path/in/host:folder/path/in/container, the data in the container folder is mapped to the given location on the host. Initially the DB data is empty, which is why your host folder does not contain any data. Do not put mock or invalid data in the container's DB folder, because it will corrupt your DB. If you want to add DB dump data, just put it in the host folder and it will be mapped to the path in the container.
You need to copy the files into /u01/app/oracle/oradata;
then you can access them outside the container on your system at ./oracle_data.
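Alternatively, a named volume (as in the answer to the previous question) avoids the empty-folder-in-git problem entirely, because Docker manages the storage itself and the host folder disappears from the repository:

```yaml
services:
  mysql_db:
    volumes:
      - db-vol:/u01/app/oracle/oradata
volumes:
  db-vol:
```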
