I am trying to start a container from the given image below but I am getting the following error:
ERROR: for code_challenge_api Cannot start service api: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/app/entrypoint.sh": permission denied: unknown
ERROR: for api Cannot start service api: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/app/entrypoint.sh": permission denied: unknown
ERROR: Encountered errors while bringing up the project.
make: *** [Makefile:3: api] Error 1
This is my docker-compose.yml file:
version: "3"
services:
  postgres:
    image: postgres:13.2
    container_name: code_postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    command: postgres -c 'max_connections=200'
    ports:
      - "5439:5432"
    networks:
      - localdockernetwork
  api:
    build:
      context: ./api
    container_name: code_api
    volumes:
      - ./api:/app
      - ./api/models:/models
      - ./api/tests:/tests
    ports:
      - "3001:3001"
    networks:
      - localdockernetwork
    depends_on:
      - postgres
    tty: true
networks:
  localdockernetwork:
This is the /app/entrypoint.sh file:
#!/bin/bash
set -e
pushd models/migrations
alembic upgrade head
popd
exec gunicorn -b 0.0.0.0:3001 --worker-class gevent app:app "$@"
How can I fix it?
Before the exec line in entrypoint.sh, add pip install gunicorn.
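A sketch of the amended entrypoint.sh with the suggested line added (pip, alembic, and gunicorn exactly as in the original):

```shell
#!/bin/bash
set -e
# install gunicorn at container start, in case it is absent from the image
pip install gunicorn
pushd models/migrations
alembic upgrade head
popd
exec gunicorn -b 0.0.0.0:3001 --worker-class gevent app:app "$@"
```

Note that the reported error is exec: "/app/entrypoint.sh": permission denied, which usually means the script lacks the execute bit; since ./api is bind-mounted over /app, running chmod +x api/entrypoint.sh on the host is typically also needed.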
I have a Dockerfile that runs an install script. The script fails to find the Oracle connection when running the migration, even though I export ORACLE_HOME and the TNS directory in it.
The project structure:
bin
conf
docker-compose-ccpdev1.yml
Dockerfile
HOSTNAMES.md
include
INSTALL.md
install.sh
README.md
sql
My Dockerfile contains the following:
# environment
ENV ORACLE_HOME="/opt/SP/instantclient_12_2"
ENV TNS_ADMIN="$ORACLE_HOME/network/admin"
ENV LD_LIBRARY_PATH="$ORACLE_HOME"
ENV PATH="$ORACLE_HOME:$TNS_ADMIN:/opt/SP/ccp-ops/bin:/opt/rh/rh-php71/root/bin:/opt/rh/rh-php71/root/sbin:/opt/rh/rh-nodejs8/root/usr/bin:$PATH"
ENV PHP_HOME="/opt/rh/rh-php71/root"
ENV https_proxy="proxy01.domain-is.de:8080"
# install
RUN yum update -y; yum install -y rh-php71 rh-php71-php-xml rh-php71-php-json rh-php71-php-ldap rh-php71-php-fpm rh-php71-php-devel rh-php71-php-opcache rh-nodejs8 libaio java wget; yum groupinstall 'Development Tools' -y; yum clean all; /root/install.sh;
VOLUME [ "/sys/fs/cgroup" ]
# run
CMD ["/usr/sbin/init"]
# ports
EXPOSE 80
EXPOSE 443
EXPOSE 8080
EXPOSE 3000
My install.sh file contains:
export ORACLE_HOME=/opt/SP/instantclient_12_2
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=$PATH:$ORACLE_HOME:$TNS_ADMIN
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME
export https_proxy=proxy01.domain-is.de:8080
mv /opt/SP/flyway-commandline-5.1.4.tar.gz/flyway-5.1.4 /opt/SP
rm -rf /opt/SP/flyway-commandline-5.1.4.tar.gz
mv /root/flyway.conf /opt/SP/flyway-5.1.4/conf
cp /opt/SP/instantclient_12_2/ojdbc8.jar /opt/SP/flyway-5.1.4/jars/
cd /opt/SP/flyway-5.1.4
./flyway baseline
cp /root/create_ccp_schemas.sql sql/V2__create_ccp_schemas.sql
./flyway migrate
sed -i 's/flyway\.user\=sys as sysdba/flyway\.user\=c##CCP/' conf/flyway.conf
sed -i 's/flyway\.password\=Oradoc_db1/flyway\.password\=CCP/' conf/flyway.conf
./flyway baseline -baselineVersion=2
cp /root/import_schema.sql sql/V3__import_schema.sql
sed -i 's/CCPRW/C##CCPRW/' sql/V3__import_schema.sql
sed -i 's/CCPRO/C##CCPRO/' sql/V3__import_schema.sql
./flyway migrate
cp /root/import_data.sql sql/V4__import_data.sql
sed -i 's/CCPRW/C##CCPRW/' sql/V4__import_data.sql
sed -i 's/CCPRO/C##CCPRO/' sql/V4__import_data.sql
sed -i '/REM INSERTING into/d' sql/V4__import_data.sql
sed -i '/SET DEFINE OFF/d' sql/V4__import_data.sql
The error I get is:
WARNING: Connection error: IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain" (caused by could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain") Retrying in 1 sec...
...
ERROR:
Unable to obtain connection from database (jdbc:oracle:thin:@ccp.oracle:1521/ORCLCDB.localdomain) for user 'sys as sysdba': IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain"
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08006
Error Code : 17002
Message : IO Error: could not resolve the connect identifier "ccp.oracle:1521/ORCLCDB.localdomain"
I build the image using setenforce 0; docker build -t ccp-apache-php-fpm .
If I log into the container and run Flyway manually, it works. I log in using:
docker-compose -p ccpdev1 -f /root/ccp-apache-php-fpm/docker-compose-ccpdev1.yml up -d --remove-orphans
docker container exec -it ccp_app_1 /bin/bash
UPDATE
I have moved the Flyway setup to a post-install step in the docker-compose file. The problem I have now is that it runs continuously and the container keeps restarting.
docker-compose file:
version: '3'
services:
  ccp.oracle:
    container_name: ccp_oracle_1
    hostname: ccp_oracle1
    image: registry-beta.cdaas.domain.com/oracle/database/enterprise:12.2.0.1
    restart: unless-stopped
    ports:
      - "33001:1521"
    networks:
      - backend1
  ccp.app:
    container_name: ccp_app_1
    hostname: ccp_app1
    image: ccp-apache-php-fpm
    restart: unless-stopped
    ports:
      - "33080:80"
      - "33000:3000"
    links:
      - ccp.oracle
    command: ["./root/wait_for_oracle.sh"]
    networks:
      - backend1
  ccp.worker:
    container_name: ccp_worker_1
    hostname: ccp_worker1
    image: ccp-apache-php-fpm
    restart: unless-stopped
    links:
      - ccp.app
      - ccp.oracle
    networks:
      - backend1
  ccp.jenkins:
    container_name: ccp_jenkins_1
    hostname: ccp_jenkins1
    image: jenkins
    restart: unless-stopped
    ports:
      - "33081:8080"
      - "50001:50000"
    networks:
      - backend1
networks:
  backend1:
    driver: "bridge"
I think this might be related to file system incompatibility (ntfs/ext*).
How can I compose my containers and persist the DB without the container exiting?
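Since command replaces the image's CMD, the container exits (and restarts, under restart: unless-stopped) as soon as the script returns. A hypothetical sketch of the wait_for_oracle.sh referenced in the compose file, which blocks until the listener answers, runs the migration once, and then execs the long-running process:

```shell
#!/bin/bash
# Hypothetical /root/wait_for_oracle.sh: poll ccp.oracle:1521 with bash's
# /dev/tcp until the listener accepts connections (name resolution only
# works at run time, on the compose network - not during docker build).
until (exec 3<>/dev/tcp/ccp.oracle/1521) 2>/dev/null; do
  echo "waiting for oracle listener..."
  sleep 5
done
# run the Flyway migrations once the DB is reachable
cd /opt/SP/flyway-5.1.4 && ./flyway migrate
# hand over to the image's original long-running process so the
# container stays up instead of exiting and triggering a restart loop
exec /usr/sbin/init
```

Also note the compose file's command uses ./root/wait_for_oracle.sh, a relative path; the absolute /root/wait_for_oracle.sh may be what is intended.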
I'm using the Bitnami MongoDB image.
Error:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Full Output:
Recreating mongodb_1 ... done
Starting node_1 ... done
Attaching to node_1, mongodb_1
mongodb_1 |
mongodb_1 | Welcome to the Bitnami mongodb container
mongodb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb_1 |
mongodb_1 | nami INFO Initializing mongodb
mongodb_1 | mongodb INFO ==> Deploying MongoDB from scratch...
mongodb_1 | Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Docker Version:
Docker version 18.06.0-ce, build 0ffa825
Windows Version:
Microsoft Windows 10 Pro
Version 10.0.17134 Build 17134
This is my docker-compose.yml so far:
version: "2"
services:
  node:
    image: "node:alpine"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "tail -f /dev/null"
  mongodb:
    image: 'bitnami/mongodb'
    ports:
      - "27017:27017"
    volumes:
      - "./data/db:/bitnami"
      - "./conf/mongo:/opt/bitnami/mongodb/conf"
I do not use Windows, but you can definitely try a named volume and see if the permission problem goes away:
version: "2"
services:
  node:
    image: "node:alpine"
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=development
    volumes:
      - ./:/home/node/app
    ports:
      - "8888:8888"
    command: "tail -f /dev/null"
  mongodb:
    image: 'bitnami/mongodb'
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/bitnami:rw
      - "./conf/mongo:/opt/bitnami/mongodb/conf"
volumes:
  mongodata:
I would like to stress that this is a named volume, as opposed to the host volumes you are using. It is the best option for production, but be aware that Docker will manage and store the files for you, so you will not see them in your project folder.
If you still want to use host volumes (volumes that write to the location you specify in your project subfolder on the host machine), you need to apply a permission fix. Here is an example for MariaDB, but it will work for Mongo too:
https://github.com/bitnami/bitnami-docker-mariadb/issues/136#issuecomment-354644226
In short, you need to find out which user ID owns the folder on your host (in the example, 1001 is the ID of my logged-in user on my host machine) and then chown the folder to that user, so that the owner of the folder matches the user inside the container.
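The idea can be sketched as follows (a demo against your own UID; in the Bitnami case you would substitute 1001, the container's non-root user from the linked issue):

```shell
#!/bin/sh
# create the host folder that will back the bind mount
mkdir -p ./data
# find the numeric user id of the logged-in host user
echo "host uid: $(id -u)"
# give the folder to the UID the container runs as; for the Bitnami image
# that would be:  sudo chown -R 1001:1001 ./data
chown -R "$(id -u)":"$(id -g)" ./data
ls -ld ./data
```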
A full example:
version: "2"
services:
  fix-mongodb-permissions:
    image: 'bitnami/mongodb:latest'
    user: root
    command: chown -R 1001:1001 /bitnami
    volumes:
      - "./data:/bitnami"
  mongodb:
    image: 'bitnami/mongodb'
    ports:
      - "27017:27017"
    volumes:
      - ./data:/bitnami:rw
    depends_on:
      - fix-mongodb-permissions
I hope this helps
I'm new to Docker, so bear with me if I use any wrong terms.
I have Docker Tools installed on Windows 7 and I'm trying to run the Docker Compose file of a proprietary existing project stored in a git repository, which has probably only ever been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
(this command was printed by step 2)
docker-compose -f <docker-file.yml> up
Most of the Docker work went fine (image download, extraction, etc.).
It fails at container start: some containers run fine (I recognize a working MongoDB instance, since its log doesn't report any error), but others exit soon after with an error code, e.g.:
frontend_1 exited with code 127
Scrolling up a bit the console, I can see lines like:
/usr/bin/env: bash\r: No such file or directory
I have no idea where to go from here. I tried launching Compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
  frontend:
    command: "yarn start"
    image: company/application/frontend:1
    build:
      context: frontend
      dockerfile: docker/Dockerfile
    environment:
      <env entries>
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/opt/app
  backend:
    restart: "no"
    # source ~/.bashrc is needed to add the ssh private key, used by git
    command: bash -c "source ~/.bashrc && yarn run dev"
    image: company/application/backend:1
    build:
      context: backend
      dockerfile: docker/Dockerfile
    environment:
      <env entries>
    ports:
      - "4000:4000"
    volumes:
      - ./backend:/opt/app
      - ./:/opt:rw
      - ./.ssh/company_utils:/tmp/company_utils
    depends_on:
      - db
  generator-backend:
    restart: "no"
    # source ~/.bashrc is needed to add the ssh private key, used by git
    command: bash -c "source ~/.bashrc && npm run dev"
    image: company/generator/backend:1
    build:
      context: generator-backend
      dockerfile: docker/Dockerfile
    environment:
      <env entries>
    ports:
      - "5000:5000"
    volumes:
      - ./generator-backend:/opt/app
      - ./:/opt:rw
      - ./.ssh/company_utils:/tmp/company_utils
    depends_on:
      - db
  db:
    image: mongo:3.4
    volumes:
      - mongo:/data/db
    ports:
      - "27017:27017"
volumes:
  mongo:
It turned out to be a matter of file line endings, introduced by git clone, as pointed out by @mklement0 in his answer to the question env: bash\r: No such file or directory.
Disabling core.autocrlf and then recloning the repo solved it.
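The failure mode can be reproduced and repaired locally; a sketch (demo.sh is a throwaway file name):

```shell
#!/bin/sh
# simulate a script checked out with Windows CRLF endings
printf '#!/usr/bin/env bash\r\necho hello\r\n' > demo.sh
# the trailing \r makes the kernel look for an interpreter literally
# named "bash\r"; strip it to restore a valid shebang line
sed -i 's/\r$//' demo.sh
grep -q "$(printf '\r')" demo.sh && echo "CR still present" || echo "clean"
```

Setting git config --global core.autocrlf false (as in the answer) before cloning prevents git from converting LF to CRLF on checkout in the first place.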
I would like to accomplish 2 things:
1) Start a CockroachDB cluster with docker compose (works)
2) Execute SQL commands on the cluster (I want to create a Database)
My docker-compose file looks like this:
version: '3'
services:
  roach-ui:
    image: cockroachdb/cockroach
    command: start --insecure
    expose:
      - "8080"
      - "26257"
    ports:
      - "26257:26257"
      - "8080:8080"
    networks:
      - roachnet
  db-1:
    image: cockroachdb/cockroach
    command: start --insecure --join=roach-ui
    networks:
      - roachnet
    volumes:
      - ./data/db-1:/cockroach/cockroach-data
networks:
  roachnet:
When I run docker-compose up, everything works as expected.
While googling, I found that the solution is to run a bash script, so I created the following setup.sh:
sql --insecure --execute="CREATE TABLE testDB"
I tried to run the script via command: bash -c "setup.sh", but Docker says that it cannot run the command "bash".
Any Suggestions ? Thanks :)
EDIT:
I am running docker-compose up, the error I am getting:
roach-ui_1 | Failed running "bash"
heimdall_roach-ui_1 exited with code 1
So what you need is an extra init service to initialize the DB. This service runs a bash script that executes the initialization commands:
setup_db.sh
#!/bin/bash
echo Wait for servers to be up
sleep 10
HOSTPARAMS="--host db-1 --insecure"
SQL="/cockroach/cockroach.sh sql $HOSTPARAMS"
$SQL -e "CREATE DATABASE tarun;"
$SQL -d tarun -e "CREATE TABLE articles(name VARCHAR);"
And then you add this file to be executed in the docker-compose.yml:
docker-compose.yaml
version: '3'
services:
  roach-ui:
    image: cockroachdb/cockroach
    command: start --insecure
    expose:
      - "8080"
      - "26257"
    ports:
      - "26257:26257"
      - "8080:8080"
    networks:
      - roachnet
  db-1:
    image: cockroachdb/cockroach
    command: start --insecure --join=roach-ui
    networks:
      - roachnet
    volumes:
      - ./data/db-1:/cockroach/cockroach-data
  db-init:
    image: cockroachdb/cockroach
    networks:
      - roachnet
    volumes:
      - ./setup_db.sh:/setup_db.sh
    entrypoint: "/bin/bash"
    command: /setup_db.sh
networks:
  roachnet:
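Assumed usage with the files above (a sketch; db-init is a one-shot service that exits after setup_db.sh finishes):

```shell
# bring up the whole stack; db-init runs setup_db.sh once and exits
docker-compose up -d
# check that the init service ran the SQL successfully
docker-compose logs db-init
# verify the database exists, using the sql client bundled in the image
docker-compose exec roach-ui ./cockroach sql --insecure -e "SHOW DATABASES;"
```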
I am trying out the docker plugin for Heroku, just locally to start with. When I run docker-compose up web I get the following error:
Building web
Step 1 : FROM heroku/ruby
# Executing 7 build triggers...
Step 1 : COPY Gemfile Gemfile.lock /app/user/
ERROR: Service 'web' failed to build: lstat Gemfile: no such file or directory
This is my docker-compose.yml:
web:
  build: .
  command: 'bash -c ''bundle exec puma -C config/puma.rb'''
  working_dir: /app/user
  environment:
    PORT: 8080
    DATABASE_URL: 'postgres://postgres:@herokuPostgresql:5432/postgres'
  ports:
    - '8080:8080'
  links:
    - herokuPostgresql
shell:
  build: .
  command: bash
  working_dir: /app/user
  environment:
    PORT: 8080
    DATABASE_URL: 'postgres://postgres:@herokuPostgresql:5432/postgres'
  ports:
    - '8080:8080'
  links:
    - herokuPostgresql
  volumes:
    - '.:/app/user'
herokuPostgresql:
  image: postgres
Why is the Gemfile missing, and most importantly, what should the docker-compose.yml look like?