Docker Compose containers fail and exit with code 127, missing /usr/bin/env bash - Windows

I'm new to Docker so bear with me for any wrong term.
I have Docker Toolbox installed on Windows 7 and I'm trying to run the Docker Compose file of an existing proprietary project, stored in a git repository, that has probably only ever been run on Linux.
These are the commands I ran:
1. docker-machine start
2. docker-machine env
3. @FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i (this line was printed as output by step 2)
4. docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It fails at container start: some containers run fine - I can tell there is a working MongoDB instance, since its log doesn't report any error - but other containers exit pretty soon with an error code, e.g.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching Compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:

It turned out to be a matter of file line endings, introduced by git clone, as pointed out by @mklement0 in his answer to the question env: bash\r: No such file or directory.
Disabling core.autocrlf and then recloning the repo solved it.
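For reference, one way to apply that fix from the shell (a minimal sketch; the repo URL is a placeholder):
# stop git from converting LF to CRLF on checkout
git config --global core.autocrlf false
# reclone so the working tree is rewritten with LF line endings
git clone <repo-url>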

Related

Unable to run gradle tests using gitlab and docker-compose

I want to run tests using Gradle after docker-compose up (Postgres DB + Spring Boot app). The whole flow must run inside the GitLab merge request step. The problem appears when I run my tests from the script part of the gitlab-ci file. Note that in this situation we are in the correct directory, the one where GitLab checked out my project. Part of the gitlab-ci file:
before_script:
  - ./gradlew clean build
  - cp x.jar /path/x.jar
  - docker-compose -f /path/docker-compose.yaml up -d
script:
  - ./gradlew :functional-tests:clean test -Penv=gitlab --info
But from here I can't call http://localhost:8080 -> connection refused. I tried putting 0.0.0.0 or 172.17.0.3 or docker.host... etc. in the test config, but it didn't work.
So I added another container inside docker-compose, where I try to run my tests via its entrypoint command. To do that I need the current GitLab workspace directory, but I can't mount it.
My current solution:
Gitlab-ci:
run-functional-tests:
  stage: run_functional_tests
  image:
    name: 'xxxx/docker-compose-java-11:0.0.7'
  script:
    - ./gradlew clean build -x test
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})"' # current gitlab workspace dir
    - cp $CI_PROJECT_DIR/x.jar $CI_PROJECT_DIR/docker/gitlab/x.jar
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml up -d
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml logs -f
  timeout: 30m
docker-compose.yaml
version: '3'
services:
  postgres:
    build:
      context: ../postgres
    container_name: postgres
    restart: always
    networks:
      - app-postgres
    ports:
      - 5432
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    networks:
      - app-postgres
  functional-tests:
    build:
      context: .
    container_name: app-functional-tests
    working_dir: /app
    volumes:
      - ${SHARED_PATH}:/app
    depends_on:
      - app
    entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
    networks:
      - app-postgres
networks:
  app-postgres:
But in such a situation my working_dir - /app - is empty. Can someone assist with that?

docker container failing to start after running install.sh script [duplicate]

This question already has answers here: Docker-Compose + Command (2 answers). Closed 9 months ago.
I am using this docker-compose file:
version: '3.8'
# Services
services:
  # Nginx Service
  nginx:
    image: nginx:1.21
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/php
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php
  # PHP Service
  php:
    build: ./.docker/php
    working_dir: /var/www/php
    volumes:
      - ./src:/var/www/php
    command: /bin/bash -c "./install.sh"
    depends_on:
      mysql:
        condition: service_healthy
  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 2s
      retries: 10
  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5
    ports:
      - 8080:80
    environment:
      PMA_HOST: mysql
    depends_on:
      mysql:
        condition: service_healthy
# Volumes
volumes:
  mysqldata:
I am trying to run a bash script (install.sh) after the container is created, to run apt-get update, install wget, etc., but the php container fails when I try to run it.
My bash script is:
#!/bin/bash
mkdir testdir && apt-get update && apt-get install wget -y
(this file is here: ./src/install.sh)
It creates the folder correctly, and the logs suggest it is trying to install wget (but it never seems to finish), yet the container never starts correctly.
If I remove the command: /bin/bash -c "./install.sh" line everything works correctly (but wget is not installed).
I have tried moving the command to a Dockerfile as a RUN command, but it never seems to run.
Any ideas why this is happening?
Thanks
As Hans Kilian said in the comments, the command: set in docker-compose replaces whatever the image defines with CMD (or as arguments to ENTRYPOINT). That default command is what keeps the container running, so your container never does anything more than install wget and then stops.
You also appear to be running the file through a relative path, "./install.sh". Try using the absolute path of the file instead: in my experience, Dockerfiles do not carry a directory change from one RUN instruction over to the next, so:
RUN cd /xyz
RUN /bin/bash -c "./install.sh"
does not have the same result as
RUN /bin/bash -c "/xyz/install.sh"
(where /xyz is the directory where install.sh is located)
Additionally, make sure the file is marked as executable with chmod when it is copied into your container.
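(If you do want a directory change to stick in a Dockerfile, WORKDIR is the instruction for that; a minimal sketch, assuming install.sh lives in /xyz:)
# WORKDIR persists across instructions, unlike a plain RUN cd
WORKDIR /xyz
RUN bash ./install.sh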
However, if all you desire to do is create a directory and install wget, I would simply do this in the Dockerfile:
RUN mkdir testdir
RUN apt-get update && apt-get install -y wget
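Putting it together, a minimal sketch of a Dockerfile for the php service (the php:8.1-fpm base image here is an assumption; use whatever ./.docker/php currently builds FROM):
# hypothetical base image; substitute the one your existing Dockerfile uses
FROM php:8.1-fpm
RUN mkdir testdir
RUN apt-get update && apt-get install -y wget
WORKDIR /var/www/php
With the installation baked into the image like this, the command: override can be removed from docker-compose.yml so the php container starts its normal process again.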

Bash: sudo: command not found

I have up-and-running containers and I wish to make a database backup. Apparently, a simple command such as sudo mkdir new_folder results in: bash: sudo: command not found
What have I tried (on an intuitive level): I accessed one of the running containers with docker exec -i -t 434a38fedd69 /bin/bash and ran
apt-get update
apt-get install sudo
When I exited back out and tried to perform sudo mkdir new_folder, I got the same message: bash: sudo: command not found
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ mkdir new_folder
mkdir: cannot create directory ‘new_folder’: Permission denied
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ sudo mkdir new_folder
bash: sudo: command not found
BTW, I'm not sure if this is relevant but the docker-compose file I was using is:
version: '2'
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: changeme
      PGDATA: /data/postgres
    volumes:
      - /data/postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_postgres
  pgadmin:
    links:
      - postgres:postgres
    image: fenglc/pgadmin4
    volumes:
      - /data/pgadmin:/root/.pgadmin
    ports:
      - "5050:5050"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_pgadmin
networks:
  postgres:
    driver: bridge
First, nothing you do in a docker exec is persistent outside of that particular running container (a copy of the image), so if you want future containers run from that image to include sudo, those apt-get commands need to go into the Dockerfile that builds the image. Since you're using docker-compose, that means first writing a Dockerfile and specifying its location in the YAML.
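A minimal sketch of such a Dockerfile (the file name and basing it on the stock postgres image are assumptions):
# Dockerfile: bake sudo into the image so every container built from it has it
FROM postgres
RUN apt-get update && apt-get install -y sudo
Then, in docker-compose.yml, replace image: postgres under the postgres service with a build: section pointing at that Dockerfile.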
Second, what do you mean "exit back to docker"? Nothing you do inside a container is going to have any effect on the system that Docker itself is running on, but it looks like you're running software install commands inside a Docker container and then expecting that to result in the newly-installed software being available outside the container on the Windows system that is running Docker.
To do a backup of the postgres database in the container, you first have to enter the container (similar to how you do it):
docker exec -it postgres bash
(substitute postgres with the real container name you get from docker-compose ps)
Now you are in the container as root. That means you don't need sudo for anything. Next, create your backup folder:
mkdir /tmp/backup
Now run the backup command; from a quick Google search I found the following (you might know better):
pg_dumpall > /tmp/backup/filename
Then exit the shell within the container by typing exit. From your host system run the following to copy the backup file out of the container:
docker cp postgres:/tmp/backup/filename .
(postgres is your container name again)
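For what it's worth, the same backup can be taken in one step from the host, without an interactive shell (a sketch using the container name and POSTGRES_USER from the compose file above):
# dump every database straight to a file on the host machine
docker exec xx_postgres pg_dumpall -U postgres > backup.sql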

Docker compose working_dir issue

I am trying to run a golang app using docker-compose, below is my compose configuration.
version: '2'
services:
  # Application container
  go:
    image: golang:1.8-alpine
    ports:
      - "80:8080"
    links:
      - mongodb
    environment:
      DEBUG: 'true'
      PORT: '8080'
    working_dir: /go/src/simple-golang-app
    command: go run main.go
    volumes:
      - ./simple-golang-app:/go/src/simple-golang-app
  mongodb:
    image: mvertes/alpine-mongo:3.2.3
    restart: unless-stopped
    ports:
      - "27017:27017"
On running the stack with "docker-compose up" I get the error "stat main.go: no such file or directory", even though main.go is available in the working directory.
It works fine when your host dir layout is:
oxo#thor ~/Dropbox/Documents/code/docker/golang_working_dir $ find .
.
./docker-compose.yaml
./simple-golang-app
./simple-golang-app/main.go
so here we run:
cd ~/Dropbox/Documents/code/docker/golang_working_dir
docker-compose up
For a more complex build involving dependencies I use a Dockerfile:
FROM golang:1.8-alpine
RUN mkdir -p /go/src/simple-golang-app/
COPY simple-golang-app/main.go /go/src/simple-golang-app
WORKDIR /go/src/simple-golang-app
RUN apk add --no-cache git mercurial && go get -v -t ./... && apk del git mercurial
RUN go install ./...
RUN go build
ENV PORT 9000
Now update your docker-compose.yaml to use this new image:
old:
  image: golang:1.8-alpine
new:
  image: nirmal_golang_alpine:latest
so your commands are:
docker build --tag nirmal_golang_alpine .
docker-compose up
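Equivalently, Compose can build the image itself; a sketch of the updated service, assuming the Dockerfile sits next to docker-compose.yaml (the image name is just a tag choice):
  go:
    build:
      context: .
    image: nirmal_golang_alpine:latest
    # keep the ports, links, environment, working_dir and volumes entries from the original service
With that in place, docker-compose up --build rebuilds the image and starts the stack in one step.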

Docker compose can not start service network not found after restart docker

I'm using Docker for Windows (Version 18.03.0-ce-win59 (16762)) on Windows 10 Pro. All the containers run OK after running docker-compose up -d. The problem is when I restart the Docker service: once restarted, all the containers are stopped, and when I run docker-compose start the following error is shown:
Error response from daemon: network ccccccccccccc not found
I don't know what's happening. When I run the containers individually with docker run and the --restart=always option, everything works as expected; no error is shown on restart.
This is the docker-compose file:
version: '3'
services:
  service_1:
    image: image1
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_2:
    image: image2
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_3:
    image: image3
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
The dockerfiles are like this:
FROM microsoft/dotnet-framework:3.5
ARG ENTRY
ENV my_env=$ENTRY
WORKDIR C:\\foo2
ENTRYPOINT C:/foo2/app.exe %my_env%
The network has changed. I ran the docker network prune command and hit the same problem. Recreating the containers fixes it: Docker sets up the network again for the new containers.
#remove all containers
docker rm $(docker ps -qa)
#or
docker system prune
There might be some old container instances which were not removed. Check the instances with
docker container ls -a
You might get output like this if you have some instances which were not removed
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b4678e6666b b4a75a01d539 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago zealous_allen
ee862a3418f2 1eaaf48e9b42 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago jolly_torvalds
Remove the containers by the container id
docker container rm 8b4678e6666b
docker container rm ee862a3418f2
Now start your container with docker-compose file
This worked for me. Hope it helps!
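When the stack was created with Compose in the first place, a simpler equivalent is to let Compose remove and recreate everything itself (down also removes the networks it created):
docker-compose down
docker-compose up -d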
I found a possible solution by editing the docker-compose.yml file as follows:
version: '3'
services:
  cm04:
    image: tnc530_cm04
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530/bin/x86/Release:C:/adontec
  cm06:
    image: tnc620_cm06
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
  cm08:
    image: tnc620_cm08
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
networks:
  test:
    external:
      name: nat
As you can see, I declared a network called test that points at the external network nat. Now, when I restart the Docker service, the containers start with no errors.
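To double-check that the external network actually exists before bringing the stack up, a quick check from the host (on Windows containers, Docker creates a default network named nat):
docker network ls
docker network inspect nat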
Alternatively, you can open the Docker Desktop app and manually delete the containers, then run docker-compose up from your terminal. It should now be working: go to the port your service uses - 9000 or 9001 in the case of minio - and see if it is actually running.
