Docker bash shell script does not catch SIGINT or SIGTERM

I have the following two files in a directory:
Dockerfile
FROM debian
WORKDIR /app
COPY start.sh /app/
CMD ["/app/start.sh"]
start.sh (with permissions 755 using chmod +x start.sh)
#!/bin/bash
trap "echo SIGINT; exit" SIGINT
trap "echo SIGTERM; exit" SIGTERM
echo Starting script
sleep 100000
I then run the following commands:
$ docker build . -t tmp
$ docker run --name tmp tmp
I then expect that pressing Ctrl+C would send a SIGINT to the program, which would print SIGINT to the screen then exit, but that doesn't happen.
I also try running $ docker stop tmp, which I expect would send a SIGTERM to the program, but checking $ docker logs tmp after shows that SIGTERM was not caught.
Why are SIGINT and SIGTERM not being caught by the bash script?

Actually, your Dockerfile and start.sh entrypoint script work as is for me with Ctrl+C, provided you run the container with one of the following commands:
docker run --name tmp -it tmp
docker run --rm -it tmp
Documentation details
As specified in docker run --help:
the --interactive = -i CLI flag asks to keep STDIN open even if not attached
(typically useful for an interactive shell, or when also passing the --detach = -d CLI flag)
the --tty = -t CLI flag asks to allocate a pseudo-TTY
(which notably forwards signals to the shell entrypoint, especially useful for your use case)
Related remarks
For completeness, note that there are several related issues that can make docker stop take too much time and "fall back" to docker kill, which can arise when the shell entrypoint starts some other process(es):
First, when the last line of the shell entrypoint runs another, main program, don't forget to prepend this line with the exec builtin:
exec prog arg1 arg2 ...
But when the shell entrypoint is intended to run for a long time, trapping signals (at least INT and TERM; KILL cannot be trapped) is very important
(see also this SO question: Docker Run Script to catch interruption signal).
Otherwise, if the signals are not forwarded to the child processes, we run the risk of hitting the "PID 1 zombie reaping problem"
(see also this SO question for details: Speed up docker-compose shutdown). A sketch combining these two remarks follows below.
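Here is a minimal sketch (prog, arg1 and arg2 are placeholders for the actual main program) of an entrypoint that traps INT/TERM and forwards them to its child:
#!/bin/bash
# start the main program in the background and remember its PID
prog arg1 arg2 &
child=$!
# forward INT/TERM to the child, then reap it before exiting
trap 'kill -TERM "$child"; wait "$child"; exit' INT TERM
# wait is interruptible, so the trap can run as soon as a signal arrives
wait "$child"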

Ctrl+C sends a signal to the docker client running on that console.
To send a signal to the script itself you could use
docker exec -it <containerId> /bin/sh -c "pkill -INT -f 'start\.sh'"
Or include echo "my PID: $$" in your script and send
docker exec -it <containerId> /bin/sh -c "kill -INT <script pid>"
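For example, the question's start.sh could be extended like this (a sketch; the extra echo makes the PID visible in docker logs tmp):
#!/bin/bash
echo "my PID: $$"
trap "echo SIGINT; exit" SIGINT
trap "echo SIGTERM; exit" SIGTERM
echo Starting script
sleep 100000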
Some shell implementations inside Docker images might ignore the signal.
The following script reacts correctly to pkill -15. Please note that in the trap command the signals are specified without the SIG prefix.
#!/bin/sh
trap "touch SIGINT.tmp; ls -l; exit" INT TERM
trap "echo 'really exiting'; exit" EXIT
echo Starting script
while true; do sleep 1; done
The long sleep command was replaced by an infinite loop of short ones because the shell does not run the trap until the foreground sleep finishes; with one-second sleeps the script reacts within about a second.
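To try it from the host, either of the following should deliver SIGTERM to the script (a sketch; the pkill variant assumes pkill exists in the image and that the entrypoint is still called start.sh):
$ docker kill --signal=TERM <containerId>
$ docker exec <containerId> pkill -15 -f 'start\.sh'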

The solution I found was to just use the --init flag.
docker run --init [MORE OPTIONS] IMAGE [COMMAND] [ARG...]
Per the docs, --init runs a small init process as PID 1 inside the container, which forwards signals to the container's main process and reaps zombies.
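Applied to the original question's image, the following should make both Ctrl+C and docker stop reach start.sh's traps (a sketch using the same image and container name as above):
$ docker run --init --name tmp tmp
then, from another terminal:
$ docker stop tmp
The injected init process forwards the SIGTERM to start.sh, whose trap prints SIGTERM before exiting.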

Related

Docker container with a shell script ignores SIGTERM

I have a very simple Docker container which runs a bash script:
# syntax=docker/dockerfile:1.4
FROM alpine:3
WORKDIR /app
RUN apk add --no-cache \
curl bash sed uuidgen
COPY demo.sh /app/demo.sh
RUN chmod +x /app/*.sh
CMD ["bash", "/app/demo.sh"]
#!/bin/bash
echo "Test 123.."
sleep 5m
echo "After sleep"
When running the container with docker run <image> the container cannot be stopped with docker stop <name>, it can only be killed.
I tried searching but everything with "bash" and "docker" leads me to managing docker on host with shell scripts.
sleep runs here as a foreground command; your shell does not act on the SIGTERM until sleep completes.
A common workaround is to run sleep in the background, and immediately wait on it, so that it's the shell built-in wait that's running when the signal arrives, and wait is interruptible.
echo "Test 123..."
sleep 5 & wait
echo "After sleep"
Can you try adding this before the sleep statement?
trap "echo Container received EXIT" EXIT
Or run docker stop -t 5 <container>, for example.
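Putting the two suggestions together, a demo.sh sketch that reacts promptly to docker stop could look like this (the TERM trap is added here so the behaviour is well-defined even with bash running as PID 1):
#!/bin/bash
trap "echo 'Container received EXIT'" EXIT
trap "echo 'Container received SIGTERM'; exit" TERM
echo "Test 123..."
# run sleep in the background and wait on it, so the shell can act on signals
sleep 5m & wait $!
echo "After sleep"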

Bash script start docker container script & pass in arguments

I have a bash script that runs command-line functions, and I need that script to run commands inside a docker container, pass arguments into it, and eventually exit. However, I'm unable to get the script to pass arguments into the docker container. How can I do this?
This is what the docker commands look like without the bash script for reference.
$ docker exec -it rti_cmd
root@29c:/data# rti
187.0.0.1:9806> run_cmd
(integer) 0
187.0.0.1:9806> exit
root@29c:/data# exit
exit
Code snippet with two variations of attempts:
#!/bin/bash
docker exec -it rti_cmd bash<< eeee
rti
run_cmd
exit
exit
eeee
#also have done without the ";"
docker exec -it rti_cmd bash /bin/sh -c
"rti;
run_cmd;
exit;
exit"
Errors:
$ chmod +x test.sh
$ ./test.sh
the input device is not a TTY
/bin/sh: /bin/sh: cannot execute binary file
./test.sh: line 17: $'rti;\nrun_cmd;\nexit;\nexit': command not found
You don't need -i (interactive) nor -t (tty) if you want to be non-interactive.
docker exec rti_cmd sh -c 'rti;run_cmd'
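If the script also needs to forward its own command-line arguments to run_cmd inside the container, the same pattern can be extended like this (a sketch; it assumes rti and run_cmd are ordinary shell commands in the container, as in the command above):
#!/bin/bash
# pass this wrapper's arguments through to run_cmd inside the container;
# the underscore fills $0 for the inner sh -c script
docker exec rti_cmd sh -c 'rti; run_cmd "$@"' _ "$@"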

exec as a pipeline component

For our application running inside a container it is preferable that it receives a SIGTERM when the container is being (gracefully) shut down. At the same time, we want its output to go to a log file.
In the start script of our docker container, we had therefore been using bash's exec similar to this
exec command someParam >> stdout.log
That worked just fine: command replaced the shell that had been the container's root process and would receive the SIGTERM.
Since the application tends to log a lot, we decided to add log rotation by using Apache's rotatelogs tool, i.e.
exec command | rotatelogs -n 10 stdout.log 10M
Alas, it seems that by using the pipe, exec can no longer have command replace the shell. When looking at the processes in the running container with pstree -p, it now looks like this
mycontainer:/# pstree -p
start.sh(1)-+-command(118)
`-rotatelogs(119)
So bash remains the root process, and does not pass the SIGTERM on to command.
Before stumbling upon exec, I had found an approach that installs a signal handler into the bash script, which would then itself send a SIGTERM to the command process using kill. However, this became really convoluted; getting the PID was not always straightforward either, and I would like to preserve the convenience of exec when it comes to signal handling while still getting piping for log rotation.
Any idea how to accomplish this?
Perhaps you want
exec sh -c 'command | rotatelogs -n 10 stdout.log 10M'
I was able to get around this by using process substitution. For your specific case the following may work.
exec command > >(rotatelogs -n 10 stdout.log 10M)
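Applied to the start script from the question, that would look roughly like this (a sketch; process substitution requires the script to run under bash, and someParam is the placeholder from above):
#!/bin/bash
# command replaces the shell (stays the container's root process and receives
# SIGTERM directly), while its output is fed to rotatelogs via process substitution
exec command someParam > >(rotatelogs -n 10 stdout.log 10M)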
To reproduce the scenario I built this simple Dockerfile
FROM perl
SHELL ["/bin/bash", "-c"]
# The following will gracefully terminate upon docker stop
CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 > >(tee /my_log)
# The following won't gracefully terminate upon docker stop
#CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 | tee /my_log
Build it: docker build -f Dockerfile.meu -t test .
Run it: docker run --name test --rm -ti test
Stop it: docker stop test
Output:
Caught a sigterm!
is the end! at -e line 1.

Why is executing "docker exec" killing my SSH session?

Let's say I have two servers, A and B. I also have a bash script that is executed on server A that looks like this:
build_test.sh
#!/bin/bash
ssh user@B <<'ENDSSH'
echo "doing test"
bash -ex test.sh
echo "completed test"
ENDSSH
test.sh
#!/bin/bash
docker exec -i my_container /bin/bash -c "echo hi!"
The problem is that completed test does not get printed to the terminal.
Here's the output of running build_test.sh:
$ ./build_test.sh
doing test
+ docker exec -i my_container /bin/bash -c "echo hi!"
hi!
I'm expecting completed test to be output after hi!, but it isn't. How do I fix this?
docker is consuming, though not using, its standard input, which it inherits from test.sh. test.sh inherits its standard input from bash, which inherits its standard input from ssh. This means that docker itself is reading the last line of the script before the remote shell can.
To fix, just redirect docker's standard input from /dev/null.
docker exec -i my_container /bin/bash -c "echo hi!" < /dev/null
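So the corrected test.sh would be, for example (dropping -i should have the same effect here, since echo never reads stdin, but the /dev/null redirect keeps the flag usable for commands that do):
#!/bin/bash
# stdin is detached so docker exec cannot swallow the rest of the heredoc
docker exec -i my_container /bin/bash -c "echo hi!" < /dev/null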

Creating a continuous background job during provisioning

During the provisioning of a VM I want to start a job which shall run in the background. This job shall continuously check whether certain files have been changed. In the Vagrantfile I reference a script which contains the following line (which does nothing but echo "x" every 3 seconds):
nohup sh -c 'while true; do sleep 3; echo x; done' &
If I run this directly in the command line a job is created, which I can check using jobs.
If I however run it from outside the VM using
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' &"
or if it is executed as part of the provisioning nothing seems to happen. (There is no job & no nohup.out file was created.)
I tried the following two answers to questions which seem to address the same issue:
(1) This answer suggests "properly daemonizing", which didn't work for me. I tried the following:
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' 0<&- &>/dev/null &"
(2) The second answer says to add "sleep 1" which didn't work either:
vagrant ssh -c "nohup sh -c 'while true; do sleep 3; echo x; done' & sleep 1"
For both attempts, directly executing the command on the command line worked just fine; however, executing it via vagrant ssh -c or during provisioning didn't seem to do anything.
This is how it works in my case
Vagrantfile provisioning
hub.vm.provision "shell", path: "script/run-test.sh", privileged: false, run: 'always', args: "#{selenium_version}"
I call a run-test script to be run as the vagrant user (hence privileged: false).
The interesting part of the script is
nohup java -jar /test/selenium-server-standalone-$1.jar -role hub &> /home/vagrant/nohup.grid.out&
In my case I start a Java daemon and redirect nohup's output to a specific file in my vagrant home. If I check, the job is running and owned by the vagrant user.
For me, what worked was running commands in screen, like:
screen -dm bash -c "my_cmd"
in provision shell scripts.
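For example, the original loop from the question could be started the same way from outside the VM (a sketch; it assumes screen is installed in the guest), or the equivalent screen command can go into the provisioning script:
vagrant ssh -c "screen -dm bash -c 'while true; do sleep 3; echo x; done'"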
