Make container stop itself - bash

Is there any native way to make a Docker container stop itself? I can't find anything in the documentation.
I have a container that does some stuff, and I want to hook into the completion of that.
One way I thought of doing this is to block with docker wait until the container stops itself, then restart it with docker start and continue on to the subsequent commands that depend on those jobs being complete.
For instance:
docker run -d --name=my-container ...
# Wait for my-container to stop itself
docker wait my-container
# Once it stops itself, start it again.
docker start my-container
# Some other commands here that depend on my-container to finish its jobs...
But I can't find any way in the documentation to make a container stop itself.

There is docker stop to stop a container from outside. To stop a container from inside, make the entrypoint process exit, or kill it (the entrypoint process is the one specified in your docker run command, or by the ENTRYPOINT or last CMD in the Dockerfile, etc.).
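For example, a minimal sketch of an entrypoint script that makes the container stop itself by simply exiting once its work is done (run-jobs is a hypothetical command standing in for the actual work):

#!/bin/bash
# hypothetical entrypoint.sh: the container stops as soon as this script (PID 1) exits
/usr/local/bin/run-jobs    # assumed command that does the actual work
echo "jobs complete; exiting so the container stops"
exit 0

With an entrypoint like this, the docker wait my-container call from the question returns as soon as the jobs finish.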

Don't run the container in detached mode (remove the -d). It'll run in the foreground until the entrypoint/CMD exits.
You may need to allocate a pseudo-TTY with the -t option.
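A minimal sketch of that approach (my-image is an assumed image tag):

# blocks until the container's main process exits
docker run --rm -t --name=my-container my-image

# once we get here, the jobs are done
echo "my-container finished; run the dependent commands now"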

Related

call a script automatically in container before docker stops the container

I want a custom bash script in the container that is called automatically before the container stops (docker stop or Ctrl+C).
According to this Docker doc and multiple Stack Overflow threads, I need to catch the SIGTERM signal in the container and then run my custom script when the signal arrives. As far as I know, Docker delivers SIGTERM only to the root process with PID 1.
Relevant part of my Dockerfile:
...
COPY container-scripts/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
I use the exec ([]) form to define the entrypoint; as far as I know, this runs my script directly as PID 1, without a /bin/sh -c wrapper, and when the script eventually execs another process, that process becomes the main process and will receive the docker stop signal.
entrypoint.sh:
...
# run the external bash script if it exists
BOOT_SCRIPT="/boot.sh"
if [ -f "$BOOT_SCRIPT" ]; then
    printf ">> executing the '%s' script\n" "$BOOT_SCRIPT"
    source "$BOOT_SCRIPT"
fi
# start something here
...
The boot.sh is used by child containers to execute something else that the child container wants. Everything is fine, my containers work like a charm.
ps axu in a child container:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
134 root 0:25 /usr/lib/jvm/java-17-openjdk/bin/java -server -D...
...
421 root 0:00 ps axu
Before stopping the container I need to run some commands automatically so I created a shutdown.sh bash script. This script works fine and does what I need. But I execute the shutdown script manually this way:
$ docker exec -it my-container /bin/bash
# /shutdown.sh
# exit
$ docker container stop my-container
I would like to automate the execution of the shutdown.sh script.
I tried to add the following to the entrypoint.sh but it does not work:
trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM
What is wrong with my code?
Your help and comments guided me in the right direction.
I went through the official documentation again, here, here, and here, and finally found what the problem was.
The issue was the following:
My entrypoint.sh script, which kept the container alive, executed the following commands at the end:
# start the ssh server
ssh-keygen -A
/usr/sbin/sshd -D -e "$@"
The -D option runs the ssh daemon in the foreground, so sshd does not become a daemon. That was actually my intention; this is how I kept the container alive.
But this foreground process prevented the trap handler from being executed properly (a Bash script does not process its traps while it is blocked on a foreground child). I changed the way I start the sshd app, and now it runs as a normal background process.
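A minimal sketch of that change, assuming the rest stays the same (without -D, sshd forks into the background on its own):

# start the ssh server as a daemon (no -D, so sshd backgrounds itself)
ssh-keygen -A
/usr/sbin/sshd -e "$@"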
Then I added the following command to keep my Docker container alive (an often-recommended practice):
tail -f /dev/null
But of course the same issue appeared: tail runs as a foreground process, so the trap handler still does not do its job.
The only way I found to keep the container alive while letting entrypoint.sh remain the foreground process in Docker is the following:
while true; do
    sleep 1
done
This way the trap command works fine, and my bash function that handles SIGINT, SIGTERM, etc. runs properly when the time comes.
But honestly, I do not like this solution. The endless loop with a sleep looks ugly, but at the moment I have no idea how to manage it in a nicer way :(
But that is another question that does not belong in this thread (though it would be great if you could suggest a better solution).
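For what it's worth (this is not from the original thread), a common pattern that avoids the polling loop is to run the blocking command in the background and block on the wait builtin instead; unlike a foreground child, wait is interrupted as soon as a trapped signal arrives:

trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM

# keep the container alive, but in the background
tail -f /dev/null &

# wait is interruptible: the SIGTERM trap runs immediately,
# instead of being deferred until the child exits
wait $!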

Using timeout with docker run from within script

In my Travis CI, part of my verification is to start a docker container and verify that it doesn't fail within 10 seconds.
I have a yarn script docker:run:local that calls docker run -it <mytag> node app.js.
If I call the yarn script with timeout from a bash shell, it works fine:
$ timeout 10 yarn docker:run:local; test $? -eq 124 && echo "Container ran for 10 seconds without error"
This calls docker run, lets it run for 10 seconds, then kills it (if not already returned). If the exit code is 124, the timeout did expire, which means the container was still running. Exactly what I need to verify that my docker container is reasonably sane.
However, as soon as I run this same command from within a script, either in a test.sh file called from the shell, or if putting it in another yarn script and calling yarn test:docker, the behaviour is completely different. I get:
ERRO[0000] error waiting for container: context canceled
Then the command hangs forever, there's no 10 second timeout, I have to ctrl-Z it and then kill -9 the process. If I run top I now have a docker process using all my CPU forever. If using timeout with any other command like sleep 20 && echo "Finished sleeping", this does not happen, so I suspect it may have something to do with how docker works in interactive mode or something, but that's only my guess.
What's causing timeout docker:run to fail from a script but work fine from a shell and how do I make this work?
Looks like running docker in interactive mode is causing the issue.
Run docker without interactive mode: remove the -it so that no TTY is allocated and stdin is not attached (the container still runs in the foreground, and timeout can terminate the docker client normally), or run it fully detached with -d:
docker run -d <mytag> node app.js
or
docker run <mytag> node app.js
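With the non-detached variant, the timeout check from the question should then work from a script as well; a sketch, assuming the same image tag:

timeout 10 docker run --rm <mytag> node app.js
test $? -eq 124 && echo "Container ran for 10 seconds without error"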

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (forcing me to restart the container to reconnect).
Is there any simple way to avoid this behavior?
Thank You.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script, which starts the ssh service and then runs the nodejs process as a child. Once the nodejs process dies, the shell script exits as well, and so the container exits. So what you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
    sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why -
Running multiple applications in one docker container
If you still want to run multiple processes inside container, there are better ways to do it like using supervisord - https://docs.docker.com/config/containers/multi-service_container/
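For reference, a minimal sketch of the supervisord setup described in those docs, reusing the paths from the question (the section names and nodaemon setting are standard supervisord config, but this exact file is an assumption, not tested against this image):

[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:node]
command=/usr/bin/node /myApp/app.js

With supervisord as PID 1, each program is supervised independently, so killing the node process no longer takes the ssh server down with it.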

How can I gracefully recover from an attached Docker container terminating?

Say I run this Docker command in one Terminal window:
$ docker run --name stackoverflow --rm ubuntu /bin/bash -c "sleep 5"
And before it exits I run this in a second Terminal window:
$ docker run -it --rm --pid=container:stackoverflow terencewestphal/htop
I'll successfully see htop running in the second container, displaying the bash sleep process running. So far so good.
After 5 seconds, the first container will exit with code 0. All good.
At this time, the second container will exit with code 137 (128 + 9, i.e. SIGKILL). This also makes sense to me since the second container is just attached to the first one.
The problem is that this messes up macOS's Terminal.app's state:
The Terminal's cursor disappears.
Clicking the Terminal window causes mouse location characters to be entered as input.
I'm hoping to find a way to avoid messing up Terminal.app state. Any suggestions?
You can't avoid this behaviour, because it is htop's job to restore the terminal state when it terminates, and it can't do that when it is killed with SIGKILL. However, you can fix the terminal window yourself with the reset command, which is intended to reinitialize the terminal state.
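Applied to the question's command, the workaround looks like this:

docker run -it --rm --pid=container:stackoverflow terencewestphal/htop

# htop was SIGKILLed and could not restore the terminal; reinitialize it
reset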
About the "attached" container:
The --pid=container:<name> option means that the new container runs in the PID namespace of the first container, and as the pid_namespaces(7) man page says:
If the "init" process of a PID namespace terminates, the kernel
terminates all of the processes in the namespace via a SIGKILL signal.

Elasticsearch Docker stop seems to ignore SIGKILL

I'm trying to use Elasticsearch in Docker for local dev. While I can find containers that work, when docker stop is sent, the containers hang for the default 10s, then docker forcibly kills the container. My assumption here is that ES is either not on PID 1 or other services prevent it from shutting down immediately.
I'm curious if anyone can expand on this, or explain more accurately why this is happening. I'm running numerous tests, and 10+ seconds to shut down is just annoying when other containers shut down after 1-2s.
If you don't want to wait the 10 seconds, you can run a docker kill instead of a docker stop. You can also adjust the timeout on docker stop with the -t option, e.g. docker stop -t 2 $container_id to only wait 2 seconds instead of the default 10.
As for why it's ignoring the signal: docker stop first sends a SIGTERM, and only after the timeout does it send a SIGKILL, which cannot be ignored. Whether the SIGTERM gets through depends on what image you are running (there's more than one for elasticsearch). If pid 1 is a shell like /bin/sh or /bin/bash, it will not pass signals through. If pid 1 is the elasticsearch process, it may ignore the signal, or 10 seconds may not be long enough for it to fully clean up and shut down.
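A sketch of those two options:

# give the container only 2 seconds to shut down gracefully
docker stop -t 2 $container_id

# or send SIGKILL immediately, skipping the graceful SIGTERM phase
docker kill $container_id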
