I am using Oracle Database Enterprise Edition, and this is my docker-compose.yml:
version: '3'
services:
  db:
    image: store/oracle/database-enterprise:12.2.0.1
    restart: "unless-stopped"
    ports:
      - 1523:1521
      - 5501:5500
    volumes:
      - ./setup-scripts:/opt/oracle/scripts/setup
      - ./dump:/opt/oracle/dump
    environment:
      - "ORACLE_PWD=oracle"
When I run docker-compose up, my SQL files show up in /opt/oracle/scripts/setup, but they are never executed. What could be the reason? "ORACLE_PWD=oracle" doesn't seem to work either; I have to use the default password Oradoc_db1.
version: "3.7"
services:
api_service:
build: .
restart: always
ports:
- 8080:8080
depends_on:
- postgres_db
links:
- postgres_db:database
postgres_db:
image: "postgres:11.4"
restart: always
ports:
- 5435:5432
environment:
POSTGRES_DB: testDb
POSTGRES_PASSWORD: admin
This is my YAML file, and these are my properties:
spring.datasource.platform=postgres
spring.datasource.url=jdbc:postgresql://database:5432/testDb
spring.datasource.username=postgres
spring.datasource.password=admin
If my Postgres is on RDS, do I still need to compose them, or can I just go with a Dockerfile for the jar only and skip the YAML file?
You can create environment variables for the RDS address, username, password, and port, and pass them through to the api_service container. Your api_service should know how to assemble the Postgres connection string from those environment variables. Please check: Spring Profiles in connection String.
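A minimal sketch of that approach in application.properties, assuming hypothetical RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME, and RDS_PASSWORD environment variables (Spring resolves ${...} placeholders from the environment):
spring.datasource.url=jdbc:postgresql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}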
Those Spring property values are probably incorrect in most of the environments in which you might run your application. In your unit-test environment the database might be H2 or HSQLDB instead of PostgreSQL; in a local-developer setup it will be on localhost; in RDS it will be a different host name again. These host names and credentials should not be anywhere in your src/main/resources tree.
Spring allows you to set Spring properties using environment variables. So you can set e.g. spring.datasource.url at the point where you deploy your application. In your Compose setup, for example:
version: "3.8"
services:
api_service:
build: .
restart: always
ports:
- 8080:8080
depends_on:
- postgres_db
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres_db:5432/testDb
- SPRING_DATASOURCE_USERNAME=postgres
- SPRING_DATASOURCE_PASSWORD=admin
postgres_db:
image: "postgres:11.4"
restart: always
ports:
- 5435:5432
environment:
POSTGRES_DB: testDb
POSTGRES_PASSWORD: admin
If your production environment uses RDS, then it should be enough to remove the postgres_db container and change the SPRING_DATASOURCE_* environment variables to match the RDS host name and credentials. You don't have to recompile anything or change the contents of your jar file.
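For instance, a production Compose file might drop the database service entirely and point at RDS; the host name below is purely a hypothetical placeholder:
version: "3.8"
services:
  api_service:
    build: .
    restart: always
    ports:
      - 8080:8080
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://my-db.abc123xyz.us-east-1.rds.amazonaws.com:5432/testDb
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=admin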
I want to create a MySQL database from a database.sql script on docker-compose startup. My database.sql script is located at src/main/java/com/project_name/resources/db/database.sql. How should I write that in my docker-compose.yml file? Right now neither of these works:
volumes:
  - ./database.sql:/data/application/database.sql
or something like:
volumes:
  - ./database.sql:/src/main/java/com/project_name/resources/db/database.sql
Try it like this, mounting the script file onto the MySQL image's init path rather than mounting a directory onto a file:
volumes:
  - ./src/main/java/com/project_name/resources/db/database.sql:/docker-entrypoint-initdb.d/database.sql
Or just use a database migration tool like Flyway or Liquibase.
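For example, a minimal Flyway service sketch that could sit alongside a MySQL service named db (the names and credentials below are placeholders; the flyway/flyway image applies the SQL migrations it finds under /flyway/sql):
  flyway:
    image: flyway/flyway
    command: -url=jdbc:mysql://db:3306/DB_NAME -user=root -password=DB_PASSWORD migrate
    volumes:
      - ./db/migrations:/flyway/sql
    depends_on:
      - db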
You can mount the schema and data as volumes, as demonstrated below. Make sure the backup file has proper access permissions, and verify the path on your machine.
version: '3.8'
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      - MYSQL_DATABASE=DB_NAME
      - MYSQL_USER=DB_USER
      # MYSQL_USER requires a matching password; placeholder value
      - MYSQL_PASSWORD=DB_USER_PASSWORD
      - MYSQL_ROOT_PASSWORD=DB_PASSWORD
    ports:
      - '3306:3306'
    volumes:
      - db:/var/lib/mysql
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
volumes:
  db:
    driver: local
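Note that the MySQL image only executes files mounted into /docker-entrypoint-initdb.d when the data directory is empty, i.e. on the very first start. To re-run the init script, drop the named volume first:
$ docker-compose down -v
$ docker-compose up -d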
I'm trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes, as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I'm unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ... \
  ... \
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and so forth folders and store data there as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf, and so forth folders, then save data therein.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the configuration below will get you there with a single command. Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, does it show that it's using the embedded H2 in-memory database? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service. For example, container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password, and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar, without specifying the default Postgres port.
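In Compose form, that suggestion looks roughly like this; the SONARQUBE_JDBC_* names are the legacy environment variables recognized by older sonarqube images:
  sonarqube:
    image: sonarqube
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar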
I am developing a web application using Laravel. I am an experienced Laravel developer, but now I am trying to use Docker as my development environment, and I am very new to Docker. I am trying to connect to the Postgres database: I added the Postgres image to the docker-compose.yml as well, but when I run the migration I get an error and it does not connect to the database.
This is my docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: ./
      dockerfile: docker/web.docker
    volumes:
      - ./:/var/www
    ports:
      - "80:80"
      - "443:443"
      - "9000:9000"
    links:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/app.docker
    volumes:
      - ./:/var/www
    links:
      - mysql
      - redis
      - beanstalk
      - cache
    environment:
      - "DB_PORT=3306"
      - "DB_HOST=mysql"
      - "REDIS_PORT=6379"
      - "REDIS_HOST=redis"
  mysql:
    image: mysql:5.7.18
    environment:
      - "MYSQL_ROOT_PASSWORD=secret"
      - "MYSQL_DATABASE=docker"
    ports:
      - "3306:3306"
  pgsql:
    image: postgres:10.1
    restart: always
    environment:
      - POSTGRES_DB=docker
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=secret
    ports:
      - 5431:5431
    volumes:
      - ./.docker/conf/postgres/:/docker-entrypoint-initdb.d/
  redis:
    image: redis:3.0
    ports:
      - "6379:6379"
  beanstalk:
    image: schickling/beanstalkd
    ports:
      - "11300:11300"
  cache:
    image: memcached:alpine
    ports:
      - "11211:11211"
I know I added the MySQL image as well. Connecting to the MySQL container works, but connecting to Postgres gives an error. These are the database settings in my .env file for connecting to Postgres:
DB_CONNECTION=pgsql
DB_HOST=pgsql
DB_PORT=5431
DB_DATABASE=docker
DB_USERNAME=root
DB_PASSWORD=secret
When I run the migration in the terminal like this:
docker-compose exec app php artisan migrate --seed
I get this error:
In Connection.php line 647:
could not find driver (SQL: select * from information_schema.tables where table_schema = public and table_name = migrations)
In PDOConnection.php line 50:
could not find driver
In PDOConnection.php line 46:
could not find driver
What is wrong with my installation?
You didn't add your docker/app.docker details, but if this is PHP running, you need to make sure both php7-pgsql and php7-pdo are installed in that container. The driver is usually just that: phpX-mysql or phpX-pgsql, where X is the optional version number, depending on which repo you're using for PHP. For example, with the default Ubuntu repo plain php-pgsql will be just fine, but with Alpine images you have to use php7-pgsql. With the sury repo, you'll have to use a specific version, i.e. php7.2-pgsql.
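A minimal sketch of what that could look like in docker/app.docker, assuming the official php image as the base (Debian-based; package names differ on Alpine):
FROM php:7.2-fpm
# libpq-dev provides the Postgres client headers needed to compile pdo_pgsql
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq-dev \
    && docker-php-ext-install pdo pdo_mysql pdo_pgsql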
I have a docker-compose.yml file that launches a PostGIS service with a shared folder of KML files. I also have a script that can import all of those KML files into my PostGIS database. However, I would like that to happen automatically after launch. How can docker-compose pick up that script and run the shell command after launch?
Thank you for the help; I am new to Docker.
version: '2'
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ~/test/dataPostgis:/var/lib/postgresql/data/pgdata
      - ./postgresql:/docker-entrypoint-initdb.d
      - ./KML_Data:/var/lib/postgresql/data/KML_Data
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
    ports:
      - 5432:5432
  pgadmin:
    image: chorss/docker-pgadmin4
    ports:
      - 5050:5050
    volumes:
      - ~/test/dataPgadminBackUp:/var/lib/postgresql/data/pgdata
      - ./scripts/pgadmin:/tmp/scripts
    links:
      - postgis
    depends_on:
      - postgis
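One possible approach: this compose file already mounts ./postgresql to /docker-entrypoint-initdb.d, and the standard postgres entrypoint (which mdillon/postgis inherits) executes any *.sh or *.sql files in that directory when an empty data directory is first initialized. So a wrapper script dropped into ./postgresql could invoke the existing import script; the file name and script path below are hypothetical placeholders:
#!/bin/bash
# ./postgresql/10-import-kml.sh -- runs once, on first initialization of the data directory
set -e
for f in /var/lib/postgresql/data/KML_Data/*.kml; do
  echo "importing $f"
  # replace with a call to your existing KML import script, e.g.:
  # /docker-entrypoint-initdb.d/import_kml.sh "$f"
done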