How do I check if Oracle is up in Docker?

As the title says: how do I check whether Oracle is up in Docker? At the moment my app tries to create a Hibernate session and I'm getting this error:
ORA-01033: ORACLE initialization or shutdown in progress
So I would like some kind of health check that needs only a URL. Is that possible? Thank you!

I'm using wnameless/oracle-xe-11g-r2 and the following works for me:
version: '3'
services:
  db:
    image: wnameless/oracle-xe-11g-r2
    environment:
      - ORACLE_ALLOW_REMOTE=true
    ports:
      - 49261:1521
    volumes:
      - ./0_init.sql:/docker-entrypoint-initdb.d/0_init.sql
    healthcheck:
      test: [ "CMD", "bash", "-c", "echo 'select 1 from dual;' | ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe /u01/app/oracle/product/11.2.0/xe/bin/sqlplus -s USERNAME/PASSWORD@localhost" ]
      # check with: docker inspect --format "{{json .State.Health }}" myproject_db_1
      interval: 10s
      timeout: 10s
      retries: 60
  myservice:
    image: xxx
    depends_on:
      db:
        condition: service_healthy
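
If you need to block a host-side script (for example in CI) until that healthcheck passes, you can poll the same status the docker inspect comment above queries. A minimal sketch, assuming the container comes up as myproject_db_1 as in that comment:

until [ "$(docker inspect --format '{{.State.Health.Status}}' myproject_db_1)" = "healthy" ]; do
    echo "waiting for db to become healthy..."
    sleep 10
done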

Using docker-compose.yml and the official Oracle docker images, you can use the checkDBStatus.sh script as a healthcheck. The script returns non-zero while the DB is in the ORA-01033 state. Below is an example; notice the combination of the db service's healthcheck and tomcat's depends_on with the service_healthy condition:
services:
  tomcat:
    image: "tomcat:9.0"
    depends_on:
      oracle-db:
        condition: service_healthy
    links:
      - oracle-db
  oracle-db:
    build:
      context: src/main/docker/oracle_db
      dockerfile: Dockerfile.xe
    mem_reservation: 2g
    environment:
      - ORACLE_PWD=oracle
    volumes:
      - oracle-data:/opt/oracle/oradata
    healthcheck:
      test: [ "CMD", "/opt/oracle/checkDBStatus.sh" ]
      interval: 2s
volumes:
  oracle-data:
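
If your image does not ship checkDBStatus.sh, a rough stand-in can be built on sqlplus. The following is only a sketch, not the official script: it assumes sqlplus is on the PATH inside the container and that system/oracle are valid credentials.

#!/bin/bash
# Hypothetical stand-in for checkDBStatus.sh: exit non-zero until the DB answers a query.
out=$(echo "select 1 from dual;" | sqlplus -s system/oracle 2>&1)
# Any ORA- error (e.g. ORA-01033 while initializing) means "not ready yet".
echo "$out" | grep -q "ORA-" && exit 1
exit 0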

You can mimic tnsping in your Java app: How to do oracle TNSPING with java?
If you can't modify the app, tnsping can be called from a bash script, provided you have an Oracle client installed. If you don't, simply build a small application from the link above and execute it in a script.
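
For the script route, a wait loop around the tnsping binary could look like the sketch below. This is only a sketch: it assumes an Oracle client where tnsping is on the PATH and a listener that accepts an EZConnect-style address (db:1521 here is a made-up example):

# Sketch: block until the TNS listener answers.
until tnsping db:1521 > /dev/null 2>&1; do
    echo "listener not ready, retrying in 5s..."
    sleep 5
done
echo "listener is up"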

I ended up with a simple check against APEX:
while [[ "$(curl -s -o /dev/null -w '%{http_code}' db:8080/apex)" != "302" ]]; do sleep 5; done
302 is used because APEX redirects /apex to /apex/some_stuff. In my case db is the name of the container running Oracle:
version: '3'
services:
  ...
  # other containers
  ...
  db:
    image: some/image
    ports:
      - "8383:8080"
      - "1521:1521"
Hope it helps someone!
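
If you would rather have Compose gate on the same condition, the probe can double as the container's healthcheck command; a sketch, assuming curl exists inside the DB image and APEX listens on 8080 there:

# Sketch: one-shot form of the probe, usable as a healthcheck test.
test "$(curl -s -o /dev/null -w '%{http_code}' localhost:8080/apex)" = "302"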

If you are starting an Oracle DB docker container from within a Jenkinsfile, you may find this useful:
def waitForDbHealthy(containerName)
{
    timeout(time: 4, unit: 'MINUTES')
    {
        def HEALTH_RESULT = ""
        while (!HEALTH_RESULT.toString().contains("healthy"))
        {
            echo "DB not yet healthy. going to sleep 10 sec."
            sleep 10
            HEALTH_RESULT = sh(returnStdout: true, script: "docker inspect --format='{{json .State.Health.Status}}' $containerName").trim()
            echo "HEALTH_RESULT: $HEALTH_RESULT"
            if (HEALTH_RESULT.toString().contains("unhealthy"))
            {
                sh("docker logs $containerName")
                echo "Going to throw IllegalStateException"
                throw new IllegalStateException("Oracle DB switched to state unhealthy")
            }
        }
    }
}
On my build server it took about 1 minute until the container was "healthy".
Be aware that Oracle's TNS listener might not be ready yet. I found that an additional "sleep 60" (seconds) does the trick. Alternatively, you can implement the Java TNSPING as Krzysztof Kaszkowiak pointed out in his answer.
Another note: throwing an IllegalStateException is not allowed by default in a Jenkinsfile's Groovy. Your Jenkins administrator must explicitly approve it (Jenkins / Manage Jenkins / In-process Script Approval).
Jenkins 2.249.2
Docker Version: 19.03.8
Oracle docker image: based on store/oracle/database-enterprise:12.2.0.1-slim

Related

Hosts aren't accessible by name in docker compose on Windows

I have two Windows-based images that I'm using with docker compose.
The docker-compose.yaml:
services:
  application:
    image: myapp-win:latest
    container_name: "my-app"
    # for diagnosis
    entrypoint: ["cmd"]
    stdin_open: true
    tty: true
    # /diagnosis
    env_file: .myapp/.env
    environment:
      - POSTGRES_URI=jdbc:postgresql://db0:5432/mydatabase
    depends_on:
      db0:
        condition: service_healthy
  db0:
    image: stellirin/postgres-windows:10.10
    container_name: "my-db"
    ports:
      - 10000:5432 # this doesn't seem to work in windows
    env_file:
      - .postgres/.env
    volumes:
      - .postgres\initdb\:c:\docker-entrypoint-initdb.d\
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "${POSTGRES_DATABASE}", "-U", "${POSTGRES_USER}" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: unless-stopped
With the two containers started, I accessed the terminal of the my-db container and got its IP address.
Next, I accessed the terminal of the my-app container. I was able to ping the my-db container by its IP address. However, it did not respond by its hostname:
c:\app> ping db0
Ping request could not find host db0.
This is symptomatic of why the application can't reach the database using the POSTGRES_URI variable.
Is there a different syntax for the hostname in a Windows container?
** edit **
I'm not able to ping outside the network from either container:
c:\app> ping 8.8.8.8
Request timed out.
Not sure if this is relevant.
Regardless of container OS, to my knowledge referring to the other service's name (db0) directly won't work inside the container; the name is simply exposed to the other compose entries.
Instead, set an env var dependent on the name and read it in the container:
environment:
  - "ADDRESS_DB=db0"
Then, if you want to be able to ping db0 or similar, have a script register the env var as an available host name on start.
Alternatively, you may have success with the extra_hosts field, but I haven't tested this and you may need to give it a different name to prevent interpolation:
extra_hosts:
  - db_url:db0

How to check from inside a container if another container is running on port

I am running 2 containers at the same time (connected via docker-compose using the links && depends_on settings).
The depends_on is not enough, so I want the script that runs in the entrypoint of one of the containers to check whether the other container is already listening on some port.
I tried:
#!/bin/bash
until nc -z -w10 <container_name> 3306
do
    echo waiting for db to be ready...
    sleep 2
done
echo code is ready
But this is not working.
Anyone got an idea?
I would suggest using the depends_on approach. However, you can use some of this feature's advanced settings; please read the Control startup and shutdown order in Compose documentation.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
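
Standalone, the script simply blocks until the given host:port accepts TCP connections, so you can also try it by hand; for example (assuming wait-for-it.sh is executable in the working directory):

./wait-for-it.sh db:5432 --timeout=30 -- echo "db is reachable"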
Since you are already using docker-compose to orchestrate your services, a better way would be to use the condition: service_healthy option of the depends_on long syntax. Instead of one container manually waiting for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not already have a HEALTHCHECK specified in its image, you can define one manually in the docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service, wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
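
To watch this from the host while Compose is waiting, the health state is visible through the usual inspection commands (substitute your actual db container name in the second one):

docker-compose ps      # the db service shows "Up (healthy)" once ready
docker inspect --format '{{.State.Health.Status}}' <db-container-name>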

Running Sonarqube with docker-compose using bind mount volumes

I'm trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes, as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes, so I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I'm unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it here:
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
    ...
    ...
    -e SONARQUBE_HOME=/sonarqube-data \
    -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data and so forth folders and store data therein, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
The name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and so forth folders, then save data therein.
And voila, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. The YAML and docker run examples are not exhaustive; they focus on the issue of persisting Sonarqube data.
Try it out, BobC, and let me know.
Have a great day.
The configuration below should get you there with a single command, I hope.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then, execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the H2 in-memory DB? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service, for example container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar, without specifying the default port for postgres.

Docker compose cannot start service: network not found after restarting Docker

I'm using Docker for Windows (version 18.03.0-ce-win59 (16762)) on Windows 10 Pro. All the containers run OK after running the command docker-compose up -d. The problem is when I restart the Docker service: once it has restarted, all the containers are stopped, and when I run the command docker-compose start the following error is shown:
Error response from daemon: network ccccccccccccc not found
I don't know what's happening. When I run the containers using run with the --restart=always option, everything works as expected; no error is shown on restart.
This is the docker-compose file:
version: '3'
services:
  service_1:
    image: image1
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_2:
    image: image2
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_3:
    image: image3
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
The dockerfiles are like this:
FROM microsoft/dotnet-framework:3.5
ARG ENTRY
ENV my_env=$ENTRY
WORKDIR C:\\foo2
ENTRYPOINT C:/foo2/app.exe %my_env%
The network has changed. I hit the same problem after running the docker network prune command. Recreating the containers fixes it, since Docker sets up the network again for the new containers.
# remove all containers
docker rm $(docker ps -qa)
# or
docker system prune
There might be some old container instances which were not removed. Check the instances with:
docker container ls -a
You might get output like this if you have some instances which were not removed:
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS                   PORTS   NAMES
8b4678e6666b   b4a75a01d539   "/bin/sh -c 'eval `s…"   6 weeks ago   Exited (1) 6 weeks ago           zealous_allen
ee862a3418f2   1eaaf48e9b42   "/bin/sh -c 'eval `s…"   6 weeks ago   Exited (1) 6 weeks ago           jolly_torvalds
Remove the containers by container id:
docker container rm 8b4678e6666b
docker container rm ee862a3418f2
Now start your containers with your docker-compose file.
This worked for me. Hope it helps!
I found a possible solution by editing the docker-compose.yml file as follows:
version: '3'
services:
  cm04:
    image: tnc530_cm04
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530/bin/x86/Release:C:/adontec
  cm06:
    image: tnc620_cm06
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
  cm08:
    image: tnc620_cm08
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
networks:
  test:
    external:
      name: nat
As you can see, I created a network called test linked to the external network nat. Now, when I restart the Docker services, the containers start with no errors.
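
If the error persists, it can be worth verifying that the external network actually exists before bringing the stack up; a sketch (nat is the default network on Docker for Windows, so it normally already exists):

# Fail early if the external network is missing.
docker network inspect nat > /dev/null 2>&1 || echo "network 'nat' not found - create it or reference another external network"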
Alternatively, you can just open your Docker app and manually delete the containers, then run docker-compose up in your terminal. It should be working now. Go to the port, 9000 or 9001 or whichever port you are using, and see if minio is actually running.

CockroachDB Docker Compose Script with SQL commands

I would like to accomplish 2 things:
1) Start a CockroachDB cluster with docker-compose (works)
2) Execute SQL commands on the cluster (I want to create a database)
My docker-compose file looks like this:
version: '3'
services:
  roach-ui:
    image: cockroachdb/cockroach
    command: start --insecure
    expose:
      - "8080"
      - "26257"
    ports:
      - "26257:26257"
      - "8080:8080"
    networks:
      - roachnet
  db-1:
    image: cockroachdb/cockroach
    command: start --insecure --join=roach-ui
    networks:
      - roachnet
    volumes:
      - ./data/db-1:/cockroach/cockroach-data
networks:
  roachnet:
When I run docker-compose up, everything works as expected.
While googling, I found that the solution is to run a bash script, so I created the following setup.sh:
sql --insecure --execute="CREATE TABLE testDB"
I tried to run the script via command: bash -c "setup.sh", but Docker says that it cannot run the command "bash".
Any suggestions? Thanks :)
EDIT:
I am running docker-compose up; the error I am getting is:
roach-ui_1 | Failed running "bash"
heimdall_roach-ui_1 exited with code 1
What you need is an extra init service to initialize the DB. This service runs a bash script that executes the commands to init the DB.
setup_db.sh
#!/bin/bash
echo Wait for servers to be up
sleep 10
HOSTPARAMS="--host db-1 --insecure"
SQL="/cockroach/cockroach.sh sql $HOSTPARAMS"
$SQL -e "CREATE DATABASE tarun;"
$SQL -d tarun -e "CREATE TABLE articles(name VARCHAR);"
Then you add this script to be executed in the docker-compose.yml:
docker-compose.yaml
version: '3'
services:
  roach-ui:
    image: cockroachdb/cockroach
    command: start --insecure
    expose:
      - "8080"
      - "26257"
    ports:
      - "26257:26257"
      - "8080:8080"
    networks:
      - roachnet
  db-1:
    image: cockroachdb/cockroach
    command: start --insecure --join=roach-ui
    networks:
      - roachnet
    volumes:
      - ./data/db-1:/cockroach/cockroach-data
  db-init:
    image: cockroachdb/cockroach
    networks:
      - roachnet
    volumes:
      - ./setup_db.sh:/setup_db.sh
    entrypoint: "/bin/bash"
    command: /setup_db.sh
networks:
  roachnet:
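
Once the stack is up, the one-shot db-init service runs setup_db.sh against db-1 and then exits. You can verify the result with a query from the host; a sketch (the service name roach-ui and the database name tarun come from the files above):

docker-compose exec roach-ui ./cockroach sql --insecure -e "SHOW DATABASES;"
# Expect "tarun" in the list once db-init has completed.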
