Why does "docker-compose up" exit but "docker-compse run" enters into bash shell - bash

Dockerfile
FROM get some base image
ENV ProjectDir /workarea/svc
RUN mkdir -p $ProjectDir
WORKDIR $ProjectDir
docker-compose.yaml
version: "3.7"
services:
svc:
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/workarea/svc
command: ["/opt/bb/bin/bash"]
When I run docker-compose up it exits immediately
r@PW02R9F3:$ docker-compose up
Creating svc_dev_1 ... done
Attaching to svc_dev_1
svc_dev_1 exited with code 0
But when I run "docker-compose run --rm dev" I am able to get into bash as specified in the command section of my docker-compose.yaml file
r@PW02R9F3:$ docker-compose run --rm dev
Creating svc_dev_run ... done
[root@ad5d3d7107b4 svc]#
Why is this happening? Isn't "docker-compose up" running my command "/opt/bb/bin/bash" from the docker-compose.yaml file?

I believe this is because docker compose run spawns the container in interactive mode by default (unless specified otherwise), while docker compose up does not.
That matters because bash running in a non-interactive container dies immediately with status code 0, not because there's an error, but because there is no input for bash (and never will be).
It's like running docker run ubuntu and docker run -it ubuntu. The latter will keep STDIN open, "listening" for commands if you will.
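If you want docker-compose up itself to keep that shell alive, a minimal sketch (reusing the OP's service definition; only the stdin_open/tty lines are the addition) is to keep STDIN open and allocate a TTY in the compose file:
version: "3.7"
services:
  svc:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/workarea/svc
    command: ["/opt/bb/bin/bash"]
    stdin_open: true   # keep STDIN open, like docker run -i
    tty: true          # allocate a pseudo-TTY, like docker run -t
With those two options bash no longer sees a closed STDIN, so the service stays up under docker-compose up, and you can attach to it or run docker-compose exec svc bash from another terminal.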

Related

Execute script when docker container starts

I want container with "centos:latest" image to be started and should execute my script. The scripts are copied with docker cp commands.
docker create --name centos1 centos:latest
docker cp . 5db38b908880:/opt ---> scripts are in current directory, hence .
docker commit centos1 new_centos1 --> now new_centos1 image has scripts
Now I want to start new container with the scripts to be executed: I tried below commands:
docker run -ti --rm --entrypoint "cd /opt && deploy_mediainfo_lambda.sh" new_centos1:latest
docker run -ti --rm new_centos1:latest "cd /opt && deploy_mediainfo_lambda.sh"
Both of above commands failed with:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"cd /opt && deploy_mediainfo_lambda.sh\": stat cd /opt && deploy_mediainfo_lambda.sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled
If I use the bash command while starting the container, I can run my script inside the container using '<executable path>/<executable name>', but I cannot do this while starting the container on the command line.
docker run -ti --rm new_centos1:latest bash
[root@c34207f3f1c4 /]# ./opt/deploy_mediainfo_lambda.sh
If I use the command below, which calls the executable directly, it gives a path error.
docker run -ti --rm new_centos1:latest "deploy_mediainfo_lambda.sh"
docker: Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "exec: \"deploy_mediainfo_lambda.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
I'm also not sure about setting $PATH from the command line while starting the container.
I know this is achievable using a Dockerfile, like:
set the path using ENV,
copy executables with ADD or COPY,
run executables using CMD or ENTRYPOINT.
How can I achieve this using the docker command line?
Thanks melpomene.
Here is my bash script to automate script execution inside the container, after copying the scripts in, all using docker commands.
# Create the docker container
docker create --name mediainfo_docker centos:latest
# copy script files
docker cp . mediainfo_docker:/opt
# save container with the new image, which contains all scripts.
docker commit mediainfo_docker mediainfo_docker_with_scripts
# Now run scripts inside docker container
docker run -ti --rm mediainfo_docker_with_scripts:latest /opt/deploy_mediainfo_lambda.sh
Since deploy_mediainfo_lambda.sh is a script, first line of it is:
#!/bin/bash
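For comparison, a minimal Dockerfile sketch that bakes the same script into an image at build time (only the script name is taken from above; the PATH handling and default CMD here are assumptions, not the OP's actual setup):
FROM centos:latest
# make scripts in /opt callable by bare name
ENV PATH="/opt:${PATH}"
# copy the scripts from the build context into /opt
COPY . /opt
# ensure the script is executable
RUN chmod +x /opt/deploy_mediainfo_lambda.sh
# run the script by default when the container starts
CMD ["/opt/deploy_mediainfo_lambda.sh"]
Build and run it with:
docker build -t mediainfo_docker_with_scripts .
docker run -ti --rm mediainfo_docker_with_scripts:latest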

Dockerfile CMD for taking bash commands from host

I've created a Dockerfile with various compile and build tools. The goal of the Docker image is to standardize our development tools and make development easy and consistent.
Everything is installed.
What I am stuck on is how to keep the docker container running, and how to get a bash shell in that container so that I can run, for example, make, etc.
If I use ENTRYPOINT /bin/bash my container exits immediately. How to keep the container running?
You should pass the command at run time: run your Docker container in interactive mode (-i) and set the command to "/bin/bash":
docker run -it myDockerImage myCommandToExecuteInteractively
For instance:
docker run -it myDocker /bin/bash
Here is a real life example:
a) Pulling the most basic image
docker pull debian:jessie-slim
b) Let's have a bash there:
docker run -it debian:jessie-slim /bin/bash
c) Enjoy:
A docker container will run for as long as the CMD/ENTRYPOINT from your Dockerfile takes to finish.
You can run your Docker container in interactive mode using the -i switch:
sudo docker run -it --entrypoint=/bin/bash <imagename>
Example : docker run -it --entrypoint=/bin/bash ubuntu:14.04
This will start an interactive shell in your container. Your container will exit as soon as you exit that shell.
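If you would rather keep the container running in the background and open shells against it on demand, one possible pattern (a sketch: myDockerImage and the name devbox are placeholders, and sleep infinity assumes the image ships GNU coreutils) is:
# keep the container alive with a no-op foreground process
docker run -d --name devbox myDockerImage sleep infinity
# open a bash shell in it whenever you need one
docker exec -it devbox /bin/bash
# or run a single build command directly
docker exec -it devbox make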

How do I Run Docker cmds Exactly Like in a Dockerfile

There seems to be a difference between how Docker runs commands in a Dockerfile versus running commands manually after starting a container. This seems to be due to the kind of shells you can start, a (I assume) non-interactive shell with a Dockerfile vs an interactive one when running something like docker run -it <some-img-id>.
How can I debug running commands in a Docker container so that it runs exactly like the commands are run from a Dockerfile? Would just adding /bin/bash --noprofile to the run cmd suffice? Or is there anything else different about the environment when started from a Dockerfile?
What you are experiencing is behavior caused by the shell. Most of us are used to using the bash shell, so generally we would attempt to run commands in the following fashion.
For new container
docker run -it <imageid> bash
For existing container
docker exec -it <containerid> bash
But when we specify some command using RUN directive inside a Dockerfile
RUN echo Testing
Then it is equivalent to running /bin/sh -c 'echo Testing'. So you can expect certain differences as both the shells are different.
In Docker 1.12 or higher there is a Dockerfile directive named SHELL; it allows you to override the default shell:
SHELL ["/bin/bash", "-c"]
RUN echo Testing
This would make the RUN command be executed as bash -c 'echo Testing'. You can learn more about the SHELL directive here
Short answer 1:
If the Dockerfile doesn't use the USER and SHELL commands, then this:
docker run --entrypoint /bin/sh -u root <image> -c 'cmd'
Short answer 2:
If you don't squash or compress the image after the build, Docker creates image layers for each of the Dockerfile commands. You can see them in the output of docker build at the end of each step with --->:
Step 2/8 : WORKDIR /usr/src/app
---> 5a5964bed25d # <== THIS IS IMAGE ID OF STEP 2
Removing intermediate container b2bc9558e499
Step 3/8 : RUN something
---> f6e90f0a06e2 # <== THIS IS IMAGE ID OF STEP 3
Removing intermediate container b2bc9558e499
Look for the image id just before the RUN step you want to debug (for example, to debug step 3 above, take the step 2 image id). Then just run the command in that image:
docker run -it 5a5964bed25d cmd
Long answer 1:
When you run docker run [image] cmd Docker in fact starts the cmd in this way:
Executes the default entrypoint of the image with the cmd as its argument. The entrypoint is stored in the image at build time by the ENTRYPOINT command in the Dockerfile. I.e. if cmd is my-app and the entrypoint is /bin/sh -c, it executes /bin/sh -c my-app.
Starts it with the default user id of the image, which is defined by the last USER command in the Dockerfile.
Starts it with the environment variables from all ENV commands in the image's Dockerfile, cumulatively.
When docker build runs a Dockerfile RUN step, it does exactly the same, only with the values present at that point (line) of the Dockerfile.
So to be exact, you have to take the values of the ENVs and of the last USER command before your RUN line, and use those in the docker run command.
Most common images have /bin/sh -c or /bin/bash -c as the entrypoint, and most likely the build operates as the root user. Therefore docker run --entrypoint /bin/bash -u root <image> -c 'cmd' should be sufficient.
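Putting that together, a sketch of re-running a RUN step under roughly the same conditions as the build (the image id is the step 2 id from the output above; the environment variable and the echo command are placeholders for whatever your Dockerfile actually sets and runs):
docker run -it --rm \
  -u root \
  -e APP_ENV=build \
  --entrypoint /bin/sh \
  5a5964bed25d \
  -c 'echo Testing'
Everything after the image id is passed as arguments to the entrypoint, so this executes /bin/sh -c 'echo Testing' inside the layer produced by step 2.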

How do you start a Docker-ubuntu container into bash?

The answers from this question do not work.
The docker container always exits before I can attach or won't accept the -t flag. I could list all of the commands I've tried, but it's a combination of start exec attach with various -it flags and /bin/bash.
How do I start an existing container into bash? Why is this so difficult? Is this an "improper" use of Docker?
EDITS:
I created the container with docker run ubuntu. The information about the container: 60b93bda690f ubuntu "/bin/bash" About an hour ago Exited (0) 50 minutes ago ecstatic_euclid
First of all, a container is not a virtual machine. A container is an isolation environment for running a process, and its life-cycle is bound to the process running inside it. When the process exits, the container also exits, and the isolation environment is gone. "Attaching to" or "entering" a container really means going inside the isolation environment of the running process, so if your process has exited, your container has exited too, and there is no container left to attach to or enter. That is why the docker attach and docker exec commands target running containers.
Which process is started when you docker run is configured in a Dockerfile and built into the docker image. Take the ubuntu image as an example: if you run docker inspect ubuntu, you'll find the following config in the output:
"Cmd": ["/bin/bash"]
which means the process started when you run docker run ubuntu is /bin/bash. But you are not in interactive mode and no tty is allocated to it, so the process exits immediately and the container exits with it. That's why you have no way to enter the container again.
To start a container and enter bash, just try:
docker run -it ubuntu
Then you'll be brought into the container shell. If you open another terminal and docker ps, you'll find the container is running and you can docker attach to it or docker exec -it <container_id> bash to enter it again.
You can also refer to this link for more info.
Here is a very simple Dockerfile with instructions as comments ... launch it to spin up a running container you can exec login to
FROM ubuntu:20.04
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y
CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_ubuntu . # creates image stens_ubuntu
#
# docker run -d stens_ubuntu sleep infinity # launches container
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_ubuntu | cut -d' ' -f1 ) bash # login to running container
# docker exec -ti 3cea1993ed28 bash # login to running container using sample containerId
#
A container will exit normally when it has no work to do ... if you give it no work it will exit immediately upon launch for this reason ... typically the last command of your Dockerfile executes some flavor of server which stays alive due to an internal event loop and in so doing keeps its enclosing container alive ... short of that, you can pass a server executable which has been installed into the container as the final parameter of your call to
docker run -d my-image-name my-server-executable

Interactive shell using Docker Compose

Is there any way to start an interactive shell in a container using Docker Compose only? I've tried something like this, in my docker-compose.yml:
myapp:
  image: alpine:latest
  entrypoint: /bin/sh
When I start this container using docker-compose up it exits immediately. Are there any flags I can add to the entrypoint command, or as an additional option to myapp, to start an interactive shell?
I know there are native docker command options to achieve this, just curious if it's possible using only Docker Compose, too.
You need to include the following lines in your docker-compose.yml:
version: "3"
services:
app:
image: app:1.2.3
stdin_open: true # docker run -i
tty: true # docker run -t
The first corresponds to -i in docker run and the second to -t.
The canonical way to get an interactive shell with docker-compose is to use:
docker-compose run --rm myapp
(With the service name myapp taken from your example. More generally: it must be an existing service name in your docker-compose file; myapp is not just a command of your choice. For example, bash instead of myapp would not work here.)
You can set stdin_open: true, tty: true, however that won't actually give you a proper shell with up, because logs are being streamed from all the containers.
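If you do set those flags and still want to go through up, one workaround (a sketch; the service name app matches the example above) is to start the stack detached and attach to just that container:
docker-compose up -d
docker attach $(docker-compose ps -q app)
Note that pressing Ctrl-C in an attached session stops the container; use the detach sequence Ctrl-P Ctrl-Q to leave it running.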
You can also use
docker exec -ti <container name> /bin/bash
to get a shell on a running container.
The official getting started example (https://docs.docker.com/compose/gettingstarted/) uses the following docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:5000"
redis:
image: "redis:alpine"
After you start this with docker-compose up, you can shell into either your redis container or your web container with:
docker-compose exec redis sh
docker-compose exec web sh
docker-compose run myapp sh should do the trick.
There is some confusion around up vs run, but the docker-compose run docs have a great explanation: https://docs.docker.com/compose/reference/run
If anyone from the future also wanders up here:
docker-compose exec service_name sh
or
docker-compose exec service_name bash
or you can run single lines like
docker-compose exec service_name php -v
That is after you already have your containers up and running.
The service_name is defined in your docker-compose.yml file
Using docker-compose, I found the easiest way to do this is to do a docker ps -a (after starting my containers with docker-compose up) and get the ID of the container I want to have an interactive shell in (let's call it xyz123).
Then it's a simple matter to execute
docker exec -ti xyz123 /bin/bash
and voila, an interactive shell.
This question is very interesting to me because I ran into the same problem: my container exited immediately after it finished executing. I fixed it with -it:
docker run -it -p 3000:3000 -v /app/node_modules -v $(pwd):/app <your_container_id>
And when I must automate it with docker compose:
version: '3'
services:
  frontend:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
This does the trick: stdin_open: true, tty: true
This is a project generated with create-react-app
Dockerfile.dev looks like this:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Hope this example helps others run a frontend (React in this example) in a Docker container.
I prefer
docker-compose exec my_container_name bash
If the yml is called docker-compose.yml it can be launched with a simple $ docker-compose up. Attaching a terminal is then simply (assuming the yml specifies a service called myservice):
$ docker-compose exec myservice sh
However, if you are using a different yml file name, such as docker-compose-mycompose.yml, it should be launched using $ docker-compose -f docker-compose-mycompose.yml up. To attach an interactive terminal you have to specify the yml file too, just like:
$ docker-compose -f docker-compose-mycompose.yml exec myservice sh
An addition to this old question, since I only ran into this case recently: the difference between sh and bash. It can happen that for some images bash doesn't work and only sh does.
So you can use:
docker-compose exec CONTAINER_NAME sh
and in most cases:
docker-compose exec CONTAINER_NAME bash
If you have time. The difference between sh and bash is well explained here:
https://www.baeldung.com/linux/sh-vs-bash
You can do docker-compose exec SERVICE_NAME sh on the command line. The SERVICE_NAME is defined in your docker-compose.yml. For example,
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
The SERVICE_NAME would be "zookeeper".
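For that file, getting a shell in the running zookeeper service would then be:
docker-compose exec zookeeper sh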
According to documentation -> https://docs.docker.com/compose/reference/run/
You can use this docker-compose run --rm app bash
[app] is the name of your service in docker-compose.yml
