Connecting two containers fails - bash

I have an issue when I try to use docker-compose and a Dockerfile together.
I've done some research and I know it's possible to use docker-compose without a Dockerfile, but I think it's better for me to keep a Dockerfile as well, because I want an environment that is easy to modify.
The problem is that I want a container with Postgres, which another container depends on: a container named api that runs the application.
The api container has Java 17 and Maven 3, and docker-compose builds its image from the Dockerfile. When I use the Dockerfile alone, everything is fine, but when I use docker-compose, I get this error:
2021-12-08T08:36:37.221247254Z /usr/local/bin/mvn-entrypoint.sh: line 50: exec: mvn test: not found
Configuration files are:
Dockerfile
FROM openjdk:17-jdk-slim
ARG MAVEN_VERSION=3.8.4
ARG USER_HOME_DIR="/root"
ARG SHA=a9b2d825eacf2e771ed5d6b0e01398589ac1bfa4171f36154d1b5787879605507802f699da6f7cfc80732a5282fd31b28e4cd6052338cbef0fa1358b48a5e3c8
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
    apt-get install -y \
      curl procps \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
    && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
    && echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
    && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
    && rm -f /tmp/apache-maven.tar.gz \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY mvn-entrypoint.sh /usr/local/bin/mvn-entrypoint.sh
COPY settings-docker.xml /usr/share/maven/ref/
COPY . .
RUN ["chmod", "+x", "/usr/local/bin/mvn-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
CMD ["mvn", "test"]
And the docker-compose file:
services:
  api_service:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: api_core_backend
    ports:
      - 8080:8080
    depends_on:
      - postgres_db
  postgres_db:
    image: "postgres:latest"
    container_name: postgres_core_backend
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: postgres
      POSTGRES_PASSWORD: root
Can anyone explain why I get errors when I execute with docker-compose, but everything is fine when I use the Dockerfile directly?
Thank you.
Update: here is the error I get when I try to connect to the other container:
Caused by: org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08001
Error Code : 0
Message : Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

The issue
Based on the logs, it looks like the issue is that you're using localhost as the hostname when you connect.
Docker compose creates an internal network where the hostnames are mapped to the service names. So in your case, the hostname is postgres_db.
Please see the docker compose docs for more information.
Solution
Try specifying postgres_db as the hostname :)
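For example, if the application reads its datasource from the environment, the compose file can pass the hostname in. A minimal sketch; SPRING_DATASOURCE_URL assumes a Spring Boot app, so substitute whatever property your application actually reads:
api_service:
  build:
    context: .
    dockerfile: Dockerfile
  environment:
    # use the compose service name, not localhost
    SPRING_DATASOURCE_URL: jdbc:postgresql://postgres_db:5432/postgres
  depends_on:
    - postgres_db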

Related

docker container failing to start after running install.sh script

I am using this docker-compose file:
version: '3.8'
# Services
services:
  # Nginx Service
  nginx:
    image: nginx:1.21
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/php
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php
  # PHP Service
  php:
    build: ./.docker/php
    working_dir: /var/www/php
    volumes:
      - ./src:/var/www/php
    command: /bin/bash -c "./install.sh"
    depends_on:
      mysql:
        condition: service_healthy
  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 2s
      retries: 10
  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5
    ports:
      - 8080:80
    environment:
      PMA_HOST: mysql
    depends_on:
      mysql:
        condition: service_healthy
# Volumes
volumes:
  mysqldata:
I am trying to run a bash script (install.sh) after the container is created, to run apt-get update, install wget, etc., but the php container fails when I try to run it.
My bash script is:
#!/bin/bash
mkdir testdir && apt-get update && apt-get install wget -y
(this file is here: ./src/install.sh)
It creates the folder correctly, and the logs suggest it is trying to install wget (but it never seems to finish), yet the container never starts correctly.
If I remove the command: /bin/bash -c "./install.sh" line, everything works correctly (but wget is not installed).
I have tried moving the command to a Dockerfile as a RUN command, but it never seems to run.
Any ideas why this is happening?
Thanks
As Hans Kilian said in the comments, a docker-compose command replaces anything set by CMD or ENTRYPOINT. Those commands are what keep the container running, so with the override in place the container never does anything more than install wget.
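One way to keep the container alive is to run the script and then hand off to the image's usual foreground process. A sketch, assuming the base image's default command is php-fpm (as in the official php images; the original Dockerfile isn't shown):
php:
  build: ./.docker/php
  working_dir: /var/www/php
  volumes:
    - ./src:/var/www/php
  # run the script, then exec the image's normal long-running command
  command: /bin/bash -c "./install.sh && exec php-fpm"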
You appear to be trying to run a file located at ./install.sh, which is not an absolute path. Try running the command using the absolute path of the file; in my experience, a Dockerfile does not carry a directory change over from one RUN command to the next, so:
RUN cd /xyz
RUN /bin/bash -c "./install.sh"
does not have the same result as
RUN /bin/bash -c "/xyz/install.sh"
(where /xyz is the directory where install.sh is located)
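If you do need to run from that directory, note that WORKDIR (unlike cd in a separate RUN) persists across instructions:
# WORKDIR applies to every subsequent RUN/CMD/ENTRYPOINT
WORKDIR /xyz
RUN /bin/bash -c "./install.sh"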
Additionally, make sure the file is marked as executable with chmod when it is copied into your container.
However, if all you desire to do is create a directory and install wget, I would simply do this in the Dockerfile:
RUN mkdir testdir
RUN apt-get update && apt-get install -y wget
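In context, a minimal ./.docker/php/Dockerfile might look like this (the base image tag is an assumption, since the original isn't shown):
FROM php:8.1-fpm
# do the setup at build time instead of at container start
RUN mkdir /testdir \
    && apt-get update \
    && apt-get install -y wget \
    && rm -rf /var/lib/apt/lists/*
Note that anything created under /var/www/php at build time would be hidden at runtime by the ./src volume mount, which is why the directory is created elsewhere here.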

docker is not found when running a docker command in entrypoint.sh

I'm getting
app_1 | ./entrypoint.sh: line 2: docker: command not found
when running this line in entrypoint.sh:
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
How would I properly execute this command?
entrypoint.sh
# entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
npm run seed # my attempt to run seed first before the server kicks in, but it doesn't work
npm run server
docker-compose.yml
# docker-compose.yml
version: "3"
services:
app:
build: ./server
depends_on:
- database
ports:
- 5000:5000
environment:
PSQL_HOST: database
PSQL_PORT: 5430
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
entrypoint: ["/bin/bash", "./entrypoint.sh"]
client:
build: ./client
image: react_client
links:
- app
working_dir: /home/node/app/client
volumes:
- ./:/home/node/app
ports:
- 3001:3001
command: npm run start
env_file:
- ./client/.env
database:
image: postgres:9.6.8-alpine
volumes:
- database:/var/lib/postgresql/data
ports:
- 3030:5439
volumes:
database:
Try this Dockerfile:
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
ENV DOCKERVERSION=18.03.1-ce
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
    && tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
    && rm docker-${DOCKERVERSION}.tgz
EXPOSE 5000
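Installing the CLI only gives the container a client; it still needs a daemon to talk to. A common (and security-sensitive) approach is to mount the host's Docker socket into the container. A sketch:
# docker-compose.yml (sketch; this grants the container full control of the host's Docker daemon)
app:
  build: ./server
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock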
You're trying to run a Docker command inside a Docker container. In most cases that is a very bad approach and you should avoid it. But if you really need it, and you really understand what you are doing, you have to apply Docker-in-Docker (dind).
As far as I understand, you need to run the script CREATE DATABASE elitypescript. The better option would be the sidecar pattern: run one more container with a PostgreSQL client that executes your script.
Link the containers together and connect using the hostname.
# docker-compose
services:
  app:
    links:
      - database
    ...
then just:
# entrypoint.sh
# the database container is available under the hostname "database";
# inside the compose network, use the container port (5432), not the published host port
psql -h database -p 5432 -U postgres -c "CREATE DATABASE elitypescript"
Links are a legacy option, but easier to use than networks.
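For what it's worth, modern compose puts all services on a shared default network anyway, so the hostname resolves even without links. Since the app can start before Postgres is ready, a small wait loop helps. A sketch, assuming a psql client is installed in the app image (the node base images don't ship one) and the password matches the database service:
# entrypoint.sh (sketch)
export PGPASSWORD="${POSTGRES_PASSWORD:-password}"
# wait until postgres accepts connections before creating the database
until psql -h database -p 5432 -U postgres -c '\q' >/dev/null 2>&1; do
  sleep 1
done
psql -h database -p 5432 -U postgres -c "CREATE DATABASE elitypescript"
npm run seed
npm run server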

Connect to a remote database that lives in Docker from Mac OS X

I'm on Mac OS X, and I have this docker-compose.yml:
version: '3'
services:
  portalmodules:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8010:8000
    links:
      - database
  database:
    image: postgres:11.2
    expose:
      - "5432"
    environment:
      - "POSTGRES_PASSWORD=12345"
      - "POSTGRES_USER=john"
      - "POSTGRES_DB=api"
I also have a Dockerfile
FROM composer:1.8.5 as build_stage
COPY . /src
WORKDIR /src
RUN composer install

FROM alpine:3.8
RUN apk --no-cache add \
    php7 \
    php7-mbstring \
    php7-session \
    php7-openssl \
    php7-tokenizer \
    php7-json \
    php7-pdo \
    php7-pdo_pgsql \
    php7-pgsql
COPY --from=build_stage /src /src
RUN ls -al
RUN set -x \
    && addgroup -g 82 -S www-data \
    && adduser -u 82 -D -S -G www-data www-data
WORKDIR /src
RUN ls -al
RUN chmod -R 777 storage
CMD php artisan serve --host=0.0.0.0
The containers seem to build and start successfully when I run docker-compose up.
But when I tried to connect to the database:
IP: localhost
Port: 5432
UN : john
PW : 12345
I kept getting a connection error.
How would one go about debugging this further?
By specifying a port in the expose configuration, you're only opening that port to other Docker containers in the Docker network. It will not be open to connect to from your host machine (OS X).
You want to add a ports configuration, which allows you to map host machine ports to the container.
database:
  image: postgres:11.2
  expose:
    - "5432"
  ports:
    - "5432:5432"
  environment:
    - "POSTGRES_PASSWORD=12345"
    - "POSTGRES_USER=john"
    - "POSTGRES_DB=api"
More information in this helpful StackOverflow Q&A.
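With the port published, a quick check from the host (assuming psql is installed on the Mac) might look like:
# run from the OS X host, not inside a container
psql "host=localhost port=5432 user=john password=12345 dbname=api" -c '\conninfo'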

running flyway migrate in docker with oracle

I have a Dockerfile which runs an install script. It fails to find the Oracle connection when running migrate. In my install script I set and export ORACLE_HOME and the TNS directory.
structure
bin
conf
docker-compose-ccpdev1.yml
Dockerfile
HOSTNAMES.md
include
INSTALL.md
install.sh
README.md
sql
My Dockerfile contains the following:
# environment
ENV ORACLE_HOME="/opt/SP/instantclient_12_2"
ENV TNS_ADMIN="$ORACLE_HOME/network/admin"
ENV LD_LIBRARY_PATH="$ORACLE_HOME"
ENV PATH="$ORACLE_HOME:$TNS_ADMIN:/opt/SP/ccp-ops/bin:/opt/rh/rh-php71/root/bin:/opt/rh/rh-php71/root/sbin:/opt/rh/rh-nodejs8/root/usr/bin:$PATH"
ENV PHP_HOME="/opt/rh/rh-php71/root"
ENV https_proxy="proxy01.domain-is.de:8080"
# install
RUN yum update -y; yum install -y rh-php71 rh-php71-php-xml rh-php71-php-json rh-php71-php-ldap rh-php71-php-fpm rh-php71-php-devel rh-php71-php-opcache rh-nodejs8 libaio java wget; yum groupinstall 'Development Tools' -y; yum clean all; /root/install.sh;
VOLUME [ "/sys/fs/cgroup" ]
# run
CMD ["/usr/sbin/init"]
# ports
EXPOSE 80
EXPOSE 443
EXPOSE 8080
EXPOSE 3000
My install.sh file contains:
export ORACLE_HOME=/opt/SP/instantclient_12_2
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=$PATH:$ORACLE_HOME:$TNS_ADMIN
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
export https_proxy=proxy01.domain-is.de:8080
mv /opt/SP/flyway-commandline-5.1.4.tar.gz/flyway-5.1.4 /opt/SP
rm -rf /opt/SP/flyway-commandline-5.1.4.tar.gz
mv /root/flyway.conf /opt/SP/flyway-5.1.4/conf
cp /opt/SP/instantclient_12_2/ojdbc8.jar /opt/SP/flyway-5.1.4/jars/
cd /opt/SP/flyway-5.1.4
./flyway baseline
cp /root/create_ccp_schemas.sql sql/V2__create_ccp_schemas.sql
./flyway migrate
sed -i 's/flyway\.user\=sys as sysdba/flyway\.user\=c##CCP/' conf/flyway.conf
sed -i 's/flyway\.password\=Oradoc_db1/flyway\.password\=CCP/' conf/flyway.conf
./flyway baseline -baselineVersion=2
cp /root/import_schema.sql sql/V3__import_schema.sql
sed -i 's/CCPRW/C##CCPRW/' sql/V3__import_schema.sql
sed -i 's/CCPRO/C##CCPRO/' sql/V3__import_schema.sql
./flyway migrate
cp /root/import_data.sql sql/V4__import_data.sql
sed -i 's/CCPRW/C##CCPRW/' sql/V4__import_data.sql
sed -i 's/CCPRO/C##CCPRO/' sql/V4__import_data.sql
sed -i '/REM INSERTING into/d' sql/V4__import_data.sql
sed -i '/SET DEFINE OFF/d' sql/V4__import_data.sql
The error I get is:
WARNING: Connection error: IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain" (caused by could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain") Retrying in 1 sec...
...
ERROR:
Unable to obtain connection from database (jdbc:oracle:thin:@ccp.oracle:1521/ORCLCDB.localdomain) for user 'sys as sysdba': IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain"
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08006
Error Code : 17002
Message : IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain"
I build the image using setenforce 0; docker build -t ccp-apache-php-fpm .
If I log into the running container and run Flyway manually, it works. I log into the container using:
docker-compose -p ccpdev1 -f /root/ccp-apache-php-fpm/docker-compose-ccpdev1.yml up -d --remove-orphans
docker container exec -it ccp_app_1 /bin/bash
UPDATE
I have moved the Flyway setup to a post-install step in the docker-compose file. The problem I have now is that it runs continuously and the container keeps restarting.
docker-compose file:
version: '3'
services:
  ccp.oracle:
    container_name: ccp_oracle_1
    hostname: ccp_oracle1
    image: registry-beta.cdaas.domain.com/oracle/database/enterprise:12.2.0.1
    restart: unless-stopped
    ports:
      - "33001:1521"
    networks:
      - backend1
  ccp.app:
    container_name: ccp_app_1
    hostname: ccp_app1
    image: ccp-apache-php-fpm
    restart: unless-stopped
    ports:
      - "33080:80"
      - "33000:3000"
    links:
      - ccp.oracle
    command: ["./root/wait_for_oracle.sh"]
    networks:
      - backend1
  ccp.worker:
    container_name: ccp_worker_1
    hostname: ccp_worker1
    image: ccp-apache-php-fpm
    restart: unless-stopped
    links:
      - ccp.app
      - ccp.oracle
    networks:
      - backend1
  ccp.jenkins:
    container_name: ccp_jenkins_1
    hostname: ccp_jenkins1
    image: jenkins
    restart: unless-stopped
    ports:
      - "33081:8080"
      - "50001:50000"
    networks:
      - backend1
networks:
  backend1:
    driver: "bridge"
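A likely cause: command: replaces the image's CMD (/usr/sbin/init), so when wait_for_oracle.sh exits, the container exits, and restart: unless-stopped starts it again. A sketch of a script that runs the migration once and then hands control back; the sqlplus probe and credentials are placeholders:
#!/bin/bash
# wait_for_oracle.sh (sketch)
# probe the database service until it accepts connections
until echo "exit" | sqlplus -L user/password@ccp.oracle:1521/ORCLCDB.localdomain >/dev/null 2>&1; do
  sleep 5
done
cd /opt/SP/flyway-5.1.4 && ./flyway migrate
# hand off to the image's original CMD so the container stays up
exec /usr/sbin/init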

Bash: sudo: command not found

I have up-and-running containers and I wish to perform a database backup. Apparently, a simple command in the container such as sudo mkdir new_folder results in: bash: sudo: command not found
What have I tried (on an intuitive level): I accessed one of the running containers with docker exec -i -t 434a38fedd69 /bin/bash and ran:
apt-get update
apt-get install sudo
When I exited back out and tried to perform sudo mkdir new_folder from the host, I got the same message bash: sudo: command not found:
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ mkdir new_folder
mkdir: cannot create directory ‘new_folder’: Permission denied

Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ sudo mkdir new_folder
bash: sudo: command not found
BTW, I'm not sure if this is relevant but the docker-compose file I was using is:
version: '2'
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: changeme
      PGDATA: /data/postgres
    volumes:
      - /data/postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_postgres
  pgadmin:
    links:
      - postgres:postgres
    image: fenglc/pgadmin4
    volumes:
      - /data/pgadmin:/root/.pgadmin
    ports:
      - "5050:5050"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_pgadmin
networks:
  postgres:
    driver: bridge
First, nothing you do in a docker exec is persistent outside of that particular running container (a copy of the image), so if you want future containers run from that image to include sudo, those apt-get commands need to go into the Dockerfile that builds the image. Since you're using docker-compose, that means first writing a Dockerfile and then pointing to it with a build: entry in the YAML.
Second, what do you mean by "exit back to docker"? Nothing you do inside a container affects the system Docker itself runs on, but it looks like you're running software install commands inside a Docker container and then expecting the newly installed software to be available outside the container, on the Windows system that is running Docker.
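A minimal sketch of that, if you did want sudo baked into the image (though note, as explained below, you're already root inside the container and may not need sudo at all):
# Dockerfile (hypothetical)
FROM postgres
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/*

# in docker-compose.yml, replace "image: postgres" with:
#   build: .
# so compose builds this image instead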
To back up the postgres database in the container, you first have to enter the container (similar to what you already did):
docker exec -it postgres bash
(substitute postgres with the real container name you get from docker-compose ps)
Now you are in the container as root, which means you don't need sudo for anything. Next, create your backup folder:
mkdir /tmp/backup
Now run the backup command, from a quick Google I found the following (you might know better):
pg_dumpall > /tmp/backup/filename
Then exit the shell within the container by typing exit. From your host system run the following to copy the backup file out of the container:
docker cp postgres:/tmp/backup/filename .
(postgres is your container name again)
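As a one-step alternative, you can skip the interactive shell entirely and stream the dump straight to the host (container name taken from the compose file above):
# run from the host; the redirect writes the file on the host, not in the container
docker exec xx_postgres pg_dumpall -U postgres > backup.sql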
