Is there any way for me to know when a command has finished inside a docker container? I have created a docker container and am able to send commands from my local machine into it with docker exec.
So far, my bash script uses sleep to wait until the "cd root; npm install" command has finished inside the container. Without the sleep, "done" is printed right away, as soon as npm install is sent to the container. How can I remove the sleep so that "done" is printed only after npm install has finished inside the container?
docker exec -d <docker container name> bash -c "cd root;npm install"
sleep 100
echo "done"
Don't detach the command (drop the -d flag) if you want it to run in the foreground:
docker exec <docker container name> bash -c "cd root;npm install"
echo "done"
Or run it as a background process with & and then wait for it (note there is no -d here, so the backgrounded docker exec only returns once npm install does):
docker exec <docker container name> bash -c "cd root;npm install" &
wait
echo "done"
If you omit -d (detach), docker exec returns only after the command completes (not immediately), so no wait is needed.
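For example, here is a minimal sketch of the background-and-wait approach that also surfaces npm install's exit status (the container name mycontainer is a placeholder):
#!/bin/bash
# Without -d, docker exec stays attached until npm install finishes.
docker exec mycontainer bash -c "cd root; npm install" &
install_pid=$!
# Block until the backgrounded docker exec exits.
wait "$install_pid"
echo "done (npm install exited with status $?)"
Since docker exec propagates the exit code of the command it runs, $? here tells you whether npm install succeeded.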
Related
I am using a tool (gatk) distributed as a docker image and am trying to use its commands in a shell script.
I run the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Then I run the commands from a shell script:
#!/bin/bash
docker exec my_container gatk command1
wait
docker exec my_container gatk command2
command2 needs input from command1, so I use wait, but command2 is still executed before command1 is finished.
I also tried
#!/bin/bash
docker exec my_container gatk command1
docker wait my_container
docker exec my_container gatk command2
but then the script does not continue running after command1 is completed.
I managed to solve it. The problem was that when I ran docker exec I did not set it up to receive input from the shell. Adding the -i flag to docker exec solved the problem. Here is the full solution.
I start the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Now I can close the terminal; the docker container is up and running, and I can use it from a new terminal.
I generate a bash script called myscript.sh with the following code:
#!/bin/bash
docker exec -i my_container gatk command1
wait
docker exec -i my_container gatk command2
I run the script, disown it, and close the terminal:
./myscript.sh & disown; exit
You can run both commands in a single shot:
docker run image /bin/bash -c "gatk command1 && gatk command2"
To run a bash terminal in a Docker container I can run the following:
$ docker exec -it <container> /bin/bash
However, I want to execute a command in the container automatically. For example, if I want to open a bash terminal in the container and create a file I would expect to run something like:
docker exec -it <container> /bin/bash -c "touch foo.txt"
However, this doesn't work... Is there a simple way to achieve this? Of course, I could type the command after opening the container, but I want to open a bash terminal and run a command at the same time.
You can run your touch command and then spawn another shell:
docker exec -it <container> /bin/bash -c "touch foo.txt; exec bash"
Works perfectly fine for me:
~# docker run -tid --rm --name test ubuntu:20.04
~# docker exec -it test /bin/bash -c "touch /foo.txt"
~# docker exec -it test /bin/bash
root@b6b0efbb13be:/# ls -ltr foo.txt
-rw-r--r-- 1 root root 0 Mar 7 05:35 foo.txt
Easy solution:
docker exec -it <container> touch foo.txt
You can verify
docker exec -it <container> ls
This was tested with the alpine image.
Remember that docker images have an entrypoint and a command; here we are overriding the command of alpine's default entrypoint via docker exec.
It depends on the entrypoint whether env variables, $PATH, etc. are loaded, so with other images you may need to write /bin/touch or /usr/bin/ls.
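For example (the container name is a placeholder; these absolute paths hold for alpine, where the busybox applets live in /bin):
docker exec -it <container> /bin/touch /tmp/foo.txt
docker exec -it <container> /bin/ls -l /tmp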
Good luck!
When you run docker exec -it <container> /bin/bash -c "touch foo.txt", the shell runs the task, exits with code 0 to signal that it is done, and you are returned to your host.
When you run docker exec -it <container> /bin/bash, the bash shell is not terminated until you explicitly type exit or press CTRL+D; bash keeps running.
This is why the -c form starts bash, runs your command (touch), and then exits, while the plain form leaves you inside the shell.
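You can observe this from the host by checking docker exec's exit status, which mirrors the exit status of the command inside the container (reusing the test container from the transcript above):
docker exec -it test /bin/bash -c "touch /foo.txt"
echo $?    # 0: bash ran touch and exited on its own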
I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
The above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mount-folder without exiting the container.
(I don't want to go with docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
Docker containers shut down automatically when their main process exits. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it would stay running.
The solution is to update your script to add the tail command at the end:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
I am trying to deploy a Node.js app inside a docker container on a prod machine using Jenkins.
I have this shell script:
ssh -tt vagrant@10.2.3.129 <<EOF
cd ~/app/backend
git pull
cat <<EOM >./Dockerfile
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
EOM
docker build -t vagrant/node-web-app .
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker run -p 3000:3000 -d vagrant/node-web-app
exit
EOF
This connects via ssh to the prod machine, creates a Dockerfile, then builds and runs the image, but it fails.
Here is part of the Jenkins log:
Successfully built 8e5796ea9846
vagrant@ubuntu-xenial:~$ docker kill
"docker kill" requires at least 1 argument.
See 'docker kill --help'.
Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
Kill one or more running containers
vagrant@ubuntu-xenial:~$ docker rm
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
vagrant@ubuntu-xenial:~$ docker run -p 3000:3000 -d vagrant/node-web-app
0cc8b5b67f70065ace03e744500b5b66c79941b4cb36d53a3186845445435bb5
docker: Error response from daemon: driver failed programming external connectivity on endpoint stupefied_margulis (d0e4cdd5642c288a31537e1bb8feb7dde2d19c0f83fe5d8fdb003dcba13f53a0): Bind for 0.0.0.0:3000 failed: port is already allocated.
vagrant@ubuntu-xenial:~$ exit
logout
Connection to 10.2.1.129 closed.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
It seems like Jenkins doesn't execute "$(docker ps -q)" and "$(docker ps -a -q)", so docker kill and docker rm get 0 arguments.
But why does this happen?
I found the issue: I just had to replace "$" with "\$".
This solved the problem.
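For reference, a sketch of the corrected lines inside the heredoc: because the EOF delimiter is unquoted, the local shell expands $(...) before the text is sent over ssh, so escaping the dollar signs defers the expansion to the remote shell.
docker kill \$(docker ps -q)
docker rm \$(docker ps -a -q)
Alternatively, quoting the delimiter (ssh -tt vagrant@10.2.3.129 <<'EOF') disables all local expansion inside the heredoc.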
The following works fine when running the commands manually line by line in the terminal:
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
But when I run it as a shell script, the Docker container is neither stopped nor removed.
#!/usr/bin/env bash
set -e
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
How can I make it work from within a shell script?
If you use set -e, the script will exit when any command fails, i.e. when a command's return code is != 0. This means that if your start, exec, or stop fails, you will be left with the container still there.
You can remove the set -e, but you probably still want to use the return code of the go test command as the script's overall return code:
#!/usr/bin/env bash
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
rc=$?
docker stop test
docker rm test
exit $rc
Trap
Using set -e is actually quite useful and catches a lot of issues that are silently ignored in most scripts. A slightly more complex solution is to use a trap to run your cleanup steps on EXIT, so that set -e can be kept.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
  echo "Removing container"
  docker stop test || true
  docker rm -f test || true
  exit $RC
}
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?
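Note that with set -e, a failing docker exec jumps straight to the trap before the RC=$? line runs, so the script exits with the default code 2; only a successful test run reaches the last line and exits 0.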