Using timeout with docker run from within script - bash

In my Travis CI, part of my verification is to start a docker container and verify that it doesn't fail within 10 seconds.
I have a yarn script docker:run:local that calls docker run -it <mytag> node app.js.
If I call the yarn script with timeout from a bash shell, it works fine:
$ timeout 10 yarn docker:run:local; test $? -eq 124 && echo "Container ran for 10 seconds without error"
This calls docker run, lets it run for 10 seconds, then kills it (if not already returned). If the exit code is 124, the timeout did expire, which means the container was still running. Exactly what I need to verify that my docker container is reasonably sane.
However, as soon as I run this same command from within a script, either in a test.sh file called from the shell, or if putting it in another yarn script and calling yarn test:docker, the behaviour is completely different. I get:
ERRO[0000] error waiting for container: context canceled
Then the command hangs forever: there's no 10 second timeout, and I have to ctrl-Z it and then kill -9 the process. If I run top I now have a docker process using all my CPU forever. If I use timeout with any other command, such as sleep 20 && echo "Finished sleeping", this does not happen, so I suspect it has something to do with how docker works in interactive mode, but that's only my guess.
What's causing timeout docker:run to fail from a script but work fine from a shell and how do I make this work?

Looks like running docker in interactive mode is causing the issue.
Run the container without interactive mode by removing -it (it then runs attached in the foreground, but without a TTY), or run it detached by specifying -d instead of -it, like so:
docker run -d <mytag> node app.js
or
docker run <mytag> node app.js
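
If the check needs to live in a script, a minimal sketch (assuming the image tag is passed to docker run directly rather than through the yarn script) could look like this:

#!/bin/bash
# run the container without -it so no TTY is attached and timeout can manage it
timeout 10 docker run --rm <mytag> node app.js
status=$?
if [ "$status" -eq 124 ]; then
    echo "Container ran for 10 seconds without error"
else
    echo "Container exited early with status $status" >&2
    exit 1
fi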

Related

call a script automatically in container before docker stops the container

I want a custom bash script in the container that is called automatically before the container stops (docker stop or ctrl + c).
According to this docker doc and multiple StackOverflow threads, I need to catch the SIGTERM signal in the container and then run my custom script when the signal arrives. As far as I know, SIGTERM is only delivered to the container's main process (PID 1).
Relevant part of my Dockerfile:
...
COPY container-scripts/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
I use the exec ([]) form for the entrypoint, which as I understand it runs my script directly as PID 1, without a /bin/sh -c wrapper, and if the script eventually execs another process, that process becomes the main process and receives the docker stop signal.
entrypoint.sh:
...
# run the external bash script if it exists
BOOT_SCRIPT="/boot.sh"
if [ -f "$BOOT_SCRIPT" ]; then
    printf ">> executing the '%s' script\n" "$BOOT_SCRIPT"
    source "$BOOT_SCRIPT"
fi
# start something here
...
The boot.sh is used by child containers to execute anything extra they need. Everything is fine; my containers work like a charm.
ps axu in a child container:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
134 root 0:25 /usr/lib/jvm/java-17-openjdk/bin/java -server -D...
...
421 root 0:00 ps axu
Before stopping the container I need to run some commands automatically so I created a shutdown.sh bash script. This script works fine and does what I need. But I execute the shutdown script manually this way:
$ docker exec -it my-container /bin/bash
# /shutdown.sh
# exit
$ docker container stop my-container
I would like to automate the execution of the shutdown.sh script.
I tried to add the following to the entrypoint.sh but it does not work:
trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM
What is wrong with my code?
Your help and comments guided me in the right direction.
I went through again the official documentations here, here, and here and finally I found what was the problem.
The issue was the following:
My entrypoint.sh script, which kept the container alive, executed the following command at the end:
# start the ssh server
ssh-keygen -A
/usr/sbin/sshd -D -e "$@"
The -D option keeps the ssh daemon in the foreground, so sshd does not detach and become a daemon. That was my intention; this is how I kept the container alive.
But this foreground process prevented the trap from running: bash does not execute a trap handler until the current foreground command finishes. I changed the way I start sshd, and now it runs as a normal background process.
Then, I added the following command to keep my docker container alive (this is a recommended best practice):
tail -f /dev/null
But of course, the same issue appeared: tail runs as a foreground process, and again the trap does not fire.
The only way I found to keep the container alive and let entrypoint.sh remain the foreground process in docker is the following:
while true; do
    sleep 1
done
This way the trap works fine, and my bash function that handles SIGINT, SIGTERM, etc. runs properly when the time comes.
But honestly, I do not like this solution. The endless loop with a sleep looks ugly, but I have no idea at the moment how to manage it in a nicer way :(
That is another question that does not belong to this thread (but it would be great if you can suggest a better solution).
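
For what it is worth, a common pattern (a sketch, not part of the original answer) avoids the sleep loop entirely: background the long-running process and wait on it. bash's built-in wait is interrupted by trapped signals, so the handler runs as soon as SIGTERM arrives:

#!/bin/bash
trap 'echo "caught SIGTERM"; source /shutdown.sh; exit 0' SIGTERM

# start the ssh server in the background instead of the foreground
ssh-keygen -A
/usr/sbin/sshd -D -e &

# wait returns immediately when a trapped signal is received,
# so the shutdown script runs without waiting for sshd to exit
wait $!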

SIGTERM not trapped while command is running, but SIGINT is

I'm building some CI pipelines, and part of it is a bash wrapper script around a docker container running ansible commands. The trouble I'm having is that on job abort the container keeps running, which is potentially dangerous.
What I have currently is:
#!/bin/bash
CONTAINER=ansible
function kill_container() {
    echo "$0 caught $1" >&2
    docker kill ${CONTAINER}
    exit $?
}
trap 'kill_container SIGINT' SIGINT
trap 'kill_container SIGTERM' SIGTERM
function ansible_base() {
    docker run -d --rm --name ${CONTAINER} someorg/ansible:latest "$@"
    docker logs --follow ${CONTAINER}
}
ansible_base "$@"
and my local test is simply ./run.sh sleep 30.
For the purpose of reproducibility, you can substitute alpine:latest as the image and it behaves the same.
Prior to adding -d to the run and following with docker logs, it did not respect SIGINT at all, but now it works as expected. E.g.:
./ci/run.sh sleep 30
5f5d78cfea27cdc15f5fede2003352253ae3254f44489ab4689ebca8d0f91768
^C./ci/run.sh caught SIGINT
ansible
However, if I run a pkill run.sh from another terminal it still waits the full 30 seconds before handling the signal, raising an error that the container is already gone. Eg:
./ci/run.sh sleep 30
a642a1060dc9d340e92dc255d68a9d9cb26d62ec59c5ef8d4e3d4198f1692c3e
./ci/run.sh caught SIGTERM
Error response from daemon: Cannot kill container: ansible: Container a642a1060dc9d340e92dc255d68a9d9cb26d62ec59c5ef8d4e3d4198f1692c3e is not running
Ultimately, the observed behaviour in the CI system is the same. The process is issued a SIGTERM, and then after not responding for 30 seconds a SIGKILL. This terminates the wrapper script, but not the docker command.
As @brunson said, I needed an init process to handle signal propagation.
When I was originally writing this, my thought was "it's just a command, it doesn't need an init process", which was somewhat true until the very instant I needed it to respect signals at all. Frankly, it was a foolish thought in the first place.
Anyhow, to accomplish the fix I used tini.
Added to Dockerfile:
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
and run.sh is back down to a much more manageable:
#!/bin/bash
function ansible_base() {
    docker run --rm someorg/ansible:latest "$@"
}
ansible_base "$@"
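
As an aside, a pure-bash alternative (a sketch, not what the answer used) keeps the original detached-run-plus-logs structure but follows the logs in the background and waits; bash's wait is interrupted by trapped signals, so SIGTERM is also handled promptly:

#!/bin/bash
CONTAINER=ansible

kill_container() {
    echo "$0 caught $1" >&2
    docker kill "${CONTAINER}"
    exit 1
}
trap 'kill_container SIGINT'  SIGINT
trap 'kill_container SIGTERM' SIGTERM

docker run -d --rm --name "${CONTAINER}" someorg/ansible:latest "$@"
docker logs --follow "${CONTAINER}" &
# wait returns as soon as a trapped signal arrives, so the handler runs immediately
wait $!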

How can I execute the next command so that it waits for the previous one to complete first?

In https://squidfunk.github.io/mkdocs-material/creating-your-site/#previewing-as-you-write, there's a command that will launch my document site.
docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material
I want the browser to open automatically and show the site once it has launched.
I wrote a script as below:
docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material
open http://localhost:8000
But it turns out the open command is never triggered, because the preceding docker run keeps hold of the process and never returns.
If I use & as below, then open gets called too soon, before the page is ready:
docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material &
open http://localhost:8000
How can I get open called at the right time?
(FYI, I'm using GNU bash, version 3.2.57(1)-release)
How can I get open called at the right time?
Opening the browser at exactly the right time would require your server (mkdocs) to give some signal that it is ready. Since you probably don't want to modify the code of the server, you just have to wait for the right moment and then open the page.
Either measure the startup time once by hand and then use a fixed wait time, or check the page repeatedly until it loads.
In both cases, the docker command and the process of opening the page must run in parallel. bash can run things in parallel using background jobs (... &). Since docker -it must run in the foreground, we run open as the background job. This might seem a little strange, since we seemingly open the website before starting the server, but keep in mind that both commands run in parallel.
Either
# replace 2 with your measured time
sleep 2 && open http://localhost:8000 &
docker run --rm -it -p 8000:8000 -v "${PWD}:/docs" squidfunk/mkdocs-material
or
while ! curl http://localhost:8000 -s -f -o /dev/null; do
    sleep 0.2
done && open http://localhost:8000 &
docker run --rm -it -p 8000:8000 -v "${PWD}:/docs" squidfunk/mkdocs-material
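In the curl-based variant, -s silences progress output, -f makes curl exit non-zero on HTTP error responses, and -o /dev/null discards the page body, so the loop keeps polling until the server actually answers.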
It sounds (to me) like:
docker run is a blocking process (it does not exit and/or return control to the console) so ...
the open is never run (unless the docker run command is aborted in which case the open will fail), and ...
pushing docker run into the background means the open is run before the URL is fully functional
If this is the case I'm wondering if you could do something like:
docker run ... & # put in background, return control to console
sleep 3 # sleep 3 seconds
open ...
NOTE: manually picking the number of seconds to sleep (3 in this case) isn't ideal, but with some testing you should be able to find a number that guarantees URL availability without leaving you hanging too long.
Another 'basic' option might be a looping construct combined with a sleep, eg:
docker run ... &
while true                  # loop indefinitely
do
    sleep 1                 # sleep 1 sec
    open ... 2>/dev/null    # try the open
    [[ $? == 0 ]] && break  # if it doesn't fail then break out of loop, ie,
                            # if it does fail then repeat loop
done

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is shut down as well (which forces me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank You.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script. The start.sh script starts the ssh service and then runs the nodejs process as a child process. Once the nodejs process dies, the shell script exits as well, and so the container exits. So what you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
    sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why -
Running multiple applications in one docker container
If you still want to run multiple processes inside container, there are better ways to do it like using supervisord - https://docs.docker.com/config/containers/multi-service_container/
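
With node backgrounded as above, restarting only the node process from the host could look roughly like this (a sketch; the container name my-container is a placeholder, and pkill must be available in the image):

# kill just the node process inside the running container; sshd and the container keep running
docker exec my-container pkill -f "node /myApp/app.js"
# start node again, detached from the exec session
docker exec -d my-container /usr/bin/node /myApp/app.js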

Shell script - "Wait" does not wait for all processes to complete

In a shell script I am building some docker images (in the background); once that is done I am running them (in the background), and then I have to wait for all of them to complete. The code looks like this:
for tag in "${tags[@]}"
do
    docker build -f dockerFilePath -t $tag . &
done
wait
for tag in "${tags[@]}"
do
    docker run $tag arg1 arg2 | tee logoutput &
done
wait
The problem is that not all the docker run commands in the second wait section complete. The docker run commands take different amounts of time to finish, and one of them (out of a total of 4) always ends up incomplete.
Also, I read that wait only works for the direct children of the process calling wait; in this case I think all the docker build and docker run commands are direct children of the script process. Or is that wrong to assume?
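
One way to narrow this down (a sketch, not an answer from the thread) is to wait on each background job explicitly so a failure is not silently lost, and to give each run its own log file instead of having every tee write to the same logoutput:

pids=()
for tag in "${tags[@]}"
do
    # one log file per tag (slashes replaced) so the outputs do not clobber each other
    docker run "$tag" arg1 arg2 > "logoutput.${tag//\//_}" 2>&1 &
    pids+=($!)
done

# wait on each job individually and report any non-zero exit status
for pid in "${pids[@]}"
do
    if ! wait "$pid"; then
        echo "background job $pid exited with an error" >&2
    fi
done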
