The following works fine when running the commands manually line by line in the terminal:
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
But when I run it as a shell script, the Docker container is neither stopped nor removed.
#!/usr/bin/env bash
set -e
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
How can I make it work from within a shell script?
If you use set -e the script will exit as soon as any command fails, i.e. when a command's return code is non-zero. This means that if your start, exec or stop fails, you will be left with the container still there.
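To illustrate, a tiny sketch where false stands in for any failing docker command:
#!/usr/bin/env bash
set -e
false             # any non-zero exit stops the script right here
docker rm test    # this cleanup line is never reached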
You can remove the set -e, but you probably still want to use the return code of the go test command as the overall return code.
#!/usr/bin/env bash
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
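# capture the exit code of the test run immediately; any other command in between would overwrite $?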
rc=$?
docker stop test
docker rm test
exit $rc
Trap
Using set -e is actually quite useful and catches a lot of issues that are silently ignored in most scripts. A slightly more complex solution is to use a trap to run your clean up steps on EXIT, which means set -e can be used.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
  echo "Removing container"
  docker stop test || true
  docker rm -f test || true
  exit $RC
}
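# run cleanup whenever the script exits, whether normally or because set -e aborted it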
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?
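One caveat: because set -e is active, a failing go test aborts the script before the RC=$? line runs, so cleanup exits with the default RC=2 rather than the actual test status. If you need the real exit code propagated, one common idiom (a sketch, replacing the exec and RC lines above) is:
docker exec test /bin/sh -c "go test ./..." && RC=$? || RC=$?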
Related
I am using a tool (gatk) distributed as a Docker image and am trying to use its commands in a shell script.
I run the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Then I run the commands from a shell script:
#!/bin/bash
docker exec my_container gatk command1
wait
docker exec my_container gatk command2
command2 needs input from command1, so I use wait, but command2 is still executed before command1 is finished.
I also tried
#!/bin/bash
docker exec my_container gatk command1
docker wait my_container
docker exec my_container gatk command2
but then the script does not continue running after command1 is completed.
I managed to solve it. The problem was that when I ran docker exec I did not tell it to receive input from the shell. Adding the -i flag to docker exec solved the problem. Here is the full solution.
I start the container in detached mode:
sudo docker run --name my_container -d -v ~/test:/gatk/data -it broadinstitute/gatk:4.1.9.0
Now I can close the terminal; the container stays up and running and I can use it from a new terminal.
I generate a bash script called myscript.sh with the following code.
#!/bin/bash
docker exec -i my_container gatk command1
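# note: wait only waits for jobs started with &; without -d, these execs block until done anyway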
wait
docker exec -i my_container gatk command2
I run the script, disown it and close the terminal.
./myscript.sh & disown; exit
You can run both commands in a single shot:
docker run image /bin/bash -c "gatk command1 && gatk command2"
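If you would rather keep using the already-running detached container, the same chaining should work with docker exec (a sketch, reusing the container name from the question):
docker exec my_container /bin/bash -c "gatk command1 && gatk command2"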
I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder and then executes the bash script in the mount-folder, without the container exiting.
(I don't want to go with the docker exec command, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
A Docker container shuts down automatically as soon as its main process finishes. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
I have a bash script as follows:
#!/bin/bash
if [ "$1" = "first" ]
then
cd /Users/sulekahelmini/Documents/fyp/fyp_work/demo/target && docker build . -t suleka96/factorial
fi
docker run --rm --name factorialorialContainer -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
sleep 3
#run test
cd /Users/sulekahelmini/Documents/fyp/apache-jmeter-5.2.1/bin && sh jmeter -n -t /Users/sulekahelmini/Documents/fyp/jmeter_scripts/factorial.jmx -l /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_results.jtl
#convert result to csv
cd /Users/sulekahelmini/Documents/fyp/apache-jmeter-5.2.1/bin && ./JMeterPluginsCMD.sh --generate-csv /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/agg_test.csv --input-jtl /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_results.jtl --plugin-type AggregateReport
docker stop factorialorialContainer
when I run this script using:
sudo ./microwise.sh two
It starts the container and prints the Spring framework startup output and other information in the terminal. The problem is that the next two lines after docker run (executing the JMeter test and getting the results into a CSV) don't get executed.
What am I doing wrong?
This is because your container is still running in the foreground, so you need to add the -d flag to docker run; it will detach from the console and run the container in the background.
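For example, a sketch keeping the other options from the script above:
docker run --rm -d --name factorialorialContainer -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest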
Is there any way for me to know when a command has finished inside a Docker container? I have created a Docker container and am able to send commands from my local machine into the container with docker exec.
So far in my bash script I am using sleep to wait until the "cd root; npm install" command has finished inside the Docker container. If I do not use sleep, "done" is printed right away after npm install is sent into the container. How can I remove the sleep so that "done" is printed only after npm install has finished inside the container?
docker exec -d <docker container name> bash -c "cd root;npm install"
sleep 100
echo "done"
Don't detach the command (the -d flag) if you want to keep it running in the foreground:
docker exec <docker container name> bash -c "cd root;npm install"
echo "done"
Alternatively, run it as a background process with & and then wait for it:
docker exec <docker container name> bash -c "cd root;npm install" &
wait
echo "done"
If you omit the -d (detach) flag, docker exec returns only after completion (not immediately), so no wait is needed.
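If you also want to confirm that npm install succeeded, a minimal variant (same placeholder container name) could surface the exit status:
docker exec <docker container name> bash -c "cd root && npm install"
echo "npm install finished with exit code $?"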
I have a shell script which runs as follows:
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to enter the postgres container, which is already running. But the issue is that the shell script hangs on the 3rd line, during the docker exec step.
My end goal is using the bash script, enter a running postgres container and run another bash script inside that container.
However, when I run the same command from the command line, it works fine and gets me into the postgres container.
Please help, I have spent hours and hours trying to solve this but have made no progress.
Thanks again
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also --name containers when you run them, which is less fraught with danger than the script you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging as opposed to sitting there waiting for input. Try running a command directly.
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
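For the stated end goal of running another bash script inside the running container, one option (a sketch; inner.sh is a hypothetical local script) is to feed it to a shell over stdin:
#!/usr/bin/env bash
# -i keeps stdin open so bash inside the container can read the script
docker exec -i postgres_whatever bash < ./inner.sh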