/bin/sh: No such file or directory when setting a docker-compose entrypoint - bash

I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose setup and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised, I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database, which is a pure shell (not bash) script and should thus work in an alpine container.
This is how the service is defined in the docker-compose:
services:
  database:
    # ...
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly, the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?

If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your input file, where that whole thing is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]

Running the migrate container by itself does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
entrypoint /bin/sh is executed.
When you run it using docker-compose:
entrypoint (/bin/sh) + command (./wait-for database:5432 ...) is executed.
Here the whole string ./wait-for database:5432 is treated as the name of a single executable to run, and no such file exists; that's why you get the error No such file or directory.
Try to specify an absolute path to wait-for in command: and split ./wait-for database:5432 into "./wait-for", "database:5432".
Splitting alone may already be enough.
As an alternative you can follow CMD syntax docs and use different command syntax without array: command: ./wait-for database:5432 ...
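Putting both suggestions together, the service definition could look roughly like this (an untested sketch; it assumes the scripts stay mounted at /wait-for and /migrations as in the question, and that the migrate binary lives at /migrate inside the image):
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    # every argument is its own array element, and the paths are absolute
    command: ["/wait-for", "database:5432", "--", "/migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]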

ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (testing a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash here):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
In docker-compose or Dockerfile, for an entrypoint, you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c flag tells /bin/sh to read the command to run from the next argument on its command line, rather than starting the shell on its own. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
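As a minimal sketch of how that looks in a compose file (the service name, image and command string here are placeholders):
services:
  myservice:
    image: alpine:3.10
    entrypoint: ["/bin/sh", "-c"]
    # with -c, the whole command is a single string that the shell interprets
    command: ["echo hello && sleep 10"]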

Related

Why does "docker-compose up" exit but "docker-compse run" enters into bash shell

Dockerfile
FROM get some base image
ENV ProjectDir /workarea/svc
RUN mkdir -p $ProjectDir
WORKDIR $ProjectDir
docker-compose.yaml
version: "3.7"
services:
svc:
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/workarea/svc
command: ["/opt/bb/bin/bash"]
When I run docker-compose up it exits immediately
r@PW02R9F3:$ docker-compose up
Creating svc_dev_1 ... done
Attaching to svc_dev_1
svc_dev_1 exited with code 0
But when I run "docker-compose run --rm dev" I am able to get into bash as specified in the command section of my docker-compose.yaml file
r#PW02R9F3:$ docker-compose run --rm dev
Creating svc_dev_run ... done
[root@ad5d3d7107b4 svc]#
Why is this happening? Isn't "docker-compose up" running my command "/opt/bb/bin/bash" in the docker-compose.yaml file?
I believe this is because docker compose run spawns a container in interactive mode (unless specified otherwise) by default. docker compose up does not.
That matters because when bash runs in a container that is not in interactive mode, it exits immediately with status code 0, not because there's an error, but because there's no input for bash (and there won't be).
It's like running docker run ubuntu and docker run -it ubuntu. The latter will keep STDIN open, "listening" for commands if you will.
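If you do want docker-compose up to keep an interactive shell alive, one option (a sketch based on the compose file above, not part of the original answer) is to keep stdin open and allocate a TTY for the service:
services:
  svc:
    build:
      context: .
      dockerfile: Dockerfile
    stdin_open: true   # like docker run -i
    tty: true          # like docker run -t
    command: ["/opt/bb/bin/bash"]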

How to run a bash script from a Dockerfile on a Mac

I'm trying to run a bash script from a Docker Image on a Mac. Here is my Dockerfile
FROM bash
ADD app.sh /
ENTRYPOINT ["/bin/bash", "/app.sh"]
Error
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
This is a simple exercise in creating Docker Images where I need to execute app.sh when I run docker run.
Any idea what I'm doing wrong?
According to your error message, the file /bin/bash does not exist in your Docker image. Why is this?
The bash image puts the bash executable at /usr/local/bin/bash. Here's how I determined this:
$ docker run -it bash
bash-5.1# which bash
/usr/local/bin/bash
bash-5.1#
I ran the bash image with -it to make it interactive, then used the which command to give me the full path to bash, which is /usr/local/bin/bash.
For that reason, you need to change your Dockerfile like this:
FROM bash
ADD app.sh /
ENTRYPOINT ["/usr/local/bin/bash", "/app.sh"]

Command Not Found with Dockerfile CMD

I have a Dockerfile that uses
CMD ['/usr/local/bin/gunicorn', '-b 0.0.0.0:8000', 'myapp.wsgi']
But when I run the container using docker run --rm myimage:latest I get an error:
/bin/sh: 1: [/usr/local/bin/gunicorn,: not found
Yet, when I run docker run --rm -it myimage:latest /bin/bash to go into the container, I can see that gunicorn runs, and running which gunicorn returns the correct path for gunicorn. Why is it failing to run?
Similarly, I planned on adding
ENTRYPOINT ['/entrypoint.sh']
to my Dockerfile, but when I run that, I get the error
/bin/sh: 1: /bin/sh: [/entrypoint.sh]: not found
The entrypoint.sh file contains:
#! /bin/bash
echo 'Starting app...'
cd /app || exit;
python manage.py migrate;
So why does it keep saying command not found when all the commands are there?
The issue here is the quotes. Use double quotes (").
From Docker Documentation:
The exec form is parsed as a JSON array, which means that you must use
double-quotes (“) around words not single-quotes (‘).
This is applicable for other instructions such as RUN, LABEL, ENV, ENTRYPOINT and VOLUME.
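Applied to the instructions from the question, that would become something like the lines below (note that -b and its value are also split into separate elements here, since the exec form does not do any word splitting):
CMD ["/usr/local/bin/gunicorn", "-b", "0.0.0.0:8000", "myapp.wsgi"]
ENTRYPOINT ["/entrypoint.sh"]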

How do I Run Docker cmds Exactly Like in a Dockerfile

There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shells you can start, a (I assume) non-interactive shell with a Dockerfile vs an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is behavior caused by the shell. Most of us are used to using the bash shell, so generally we would attempt to run the commands in the fashion below.
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify some command using RUN directive inside a Dockerfile
RUN echo Testing
Then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences, since the two shells are different.
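To reproduce that non-interactively against an existing image, you could run something along these lines (a sketch; <imageid> is a placeholder):
docker run --rm <imageid> /bin/sh -c 'echo Testing'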
In Docker 1.12 or higher there is a Dockerfile directive named SHELL that allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This would make the RUN command be executed as bash -c 'echo Testing'. You can learn more about the SHELL directive here
Short answer 1:
If the Dockerfile doesn't use the USER and SHELL instructions, then this (note that docker run's --entrypoint flag takes only the executable, so -c goes after the image name):
docker run -u root --entrypoint /bin/sh <image> -c "cmd"
Short answer 2:
If you don't squash or compress the image after the build, Docker creates an image layer for each Dockerfile command. You can see them in the output of docker build at the end of each step, after --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example you want to debug step 3 on above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd Docker in fact starts the cmd in this way:
Executes the default entrypoint of the image with cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e. if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
Starts it with the default user id of the image, which is defined by the last USER command in the Dockerfile.
Starts it with the environment variables from all ENV commands in the image's Dockerfile, cumulatively.
When docker build runs a Dockerfile RUN step, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENVs and the last USER command before your RUN line, and use those in the docker run command.
Most common images have /bin/sh -c or /bin/bash -c as the entrypoint, and the build most likely operates as the root user. Therefore docker run -u root --entrypoint /bin/bash <image> -c "cmd" should be sufficient.
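Putting that together, reproducing step 3 from the build output above might look roughly like this (a sketch; the image id comes from step 2, SOME_ENV stands for whatever ENV lines precede the RUN, and 'something' is the RUN command you want to debug):
docker run -it --rm \
  -e SOME_ENV=value \
  -u root \
  --entrypoint /bin/sh \
  5a5964bed25d -c 'something'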

Executing a shell script within docker with RUN command

New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and then starts the server.
The script startApp.sh
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, all the below docker run commands are unsuccessful in executing this script.
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of container is
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, the startApp.sh executes fine when I open a bash prompt in docker and run it.
I am unable to figure out what I am doing wrong, help please.
I would suggest you create an entrypoint.sh file:
#!/bin/sh
# Initialize start DB command
# Pick from env variable MONGOD_START if it exists
# else use the default value provided in quotes
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}
# This will start your DB in background
${START_DB} &
# Go to startApp directory and execute commands
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with following 3 lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify that the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
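If you only want to see the entrypoint instead of scrolling through the whole output, docker inspect's --format flag can print just that field:
docker inspect --format '{{json .Config.Entrypoint}}' NAME:TAG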
I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which
executes a shell
this shell executes a single command and waits for it to finish
this command launches MongoDB in daemon mode
MongoDB forks a child process and the parent immediately exits
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker does.
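If you want to keep the logging options from the question, a sketch of the same idea is to drop --fork so mongod stays in the foreground as the container's main process:
CMD ["mongod", "--logpath", "/var/log/mongodb.log", "--logappend", "--smallfiles"]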
