exec as a pipeline component - bash

For our application running inside a container, it is preferable that it receive a SIGTERM when the container is being (gracefully) shut down. At the same time, we want its output to go to a log file.
In the start script of our docker container, we had therefore been using bash's exec, similar to this:
exec command someParam >> stdout.log
That worked just fine: command replaced the shell that had been the container's root process, and so it would receive the SIGTERM.
Since the application tends to log a lot, we decided to add log rotation by using Apache's rotatelogs tool, i.e.
exec command | rotatelogs -n 10 stdout.log 10M
Alas, it seems that by using the pipe, exec can no longer have command replace the shell. When looking at the processes in the running container with pstree -p, it now looks like this
mycontainer:/# pstree -p
start.sh(1)-+-command(118)
            `-rotatelogs(119)
So bash remains the root process, and does not pass the SIGTERM on to command.
Before stumbling upon exec, I had found an approach that installs a signal handler in the bash script, which would then itself send a SIGTERM to the command process using kill. However, this became really convoluted, and getting the PID was not always straightforward. I would like to keep the signal-handling convenience of exec and still get piping for log rotation.
Any idea how to accomplish this?

Perhaps you want
exec sh -c 'command | rotatelogs -n 10 stdout.log 10M'

I was able to get around this by using process substitution. For your specific case the following may work.
exec command > >(rotatelogs -n 10 stdout.log 10M)
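In the start script from the question, that would look like the following (command, someParam and the rotatelogs arguments are copied from the question):
#!/bin/bash
# >(...) is a redirection rather than a pipeline, so exec can still
# replace this shell: command becomes the container's root process
# and receives the SIGTERM, while rotatelogs rotates its output.
exec command someParam > >(rotatelogs -n 10 stdout.log 10M)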
To reproduce the scenario I built this simple Dockerfile
FROM perl
SHELL ["/bin/bash", "-c"]
# The following will gracefully terminate upon docker stop
CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 > >(tee /my_log)
# The following won't gracefully terminate upon docker stop
#CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 | tee /my_log
Build it: docker build -f Dockerfile.meu -t test .
Run it: docker run --name test --rm -ti test
Stop it: docker stop test
Output:
Caught a sigterm!
is the end! at -e line 1.

Related

Docker bash shell script does not catch SIGINT or SIGTERM

I have the following two files in a directory:
Dockerfile
FROM debian
WORKDIR /app
COPY start.sh /app/
CMD ["/app/start.sh"]
start.sh (with permissions 755 using chmod +x start.sh)
#!/bin/bash
trap "echo SIGINT; exit" SIGINT
trap "echo SIGTERM; exit" SIGTERM
echo Starting script
sleep 100000
I then run the following commands:
$ docker build . -t tmp
$ docker run --name tmp tmp
I then expect that pressing Ctrl+C would send a SIGINT to the program, which would print SIGINT to the screen then exit, but that doesn't happen.
I also try running $ docker stop tmp, which I expect would send a SIGTERM to the program, but checking $ docker logs tmp after shows that SIGTERM was not caught.
Why are SIGINT and SIGTERM not being caught by the bash script?
Actually, your Dockerfile and start.sh entrypoint script work as is for me with Ctrl+C, provided you run the container with one of the following commands:
docker run --name tmp -it tmp
docker run --rm -it tmp
Documentation details
As specified in docker run --help:
the --interactive = -i CLI flag asks to keep STDIN open even if not attached
(typically useful for an interactive shell, or when also passing the --detach = -d CLI flag)
the --tty = -t CLI flag asks to allocate a pseudo-TTY
(which notably forwards signals to the shell entrypoint, especially useful for your use case)
Related remarks
For completeness, note that there are several related issues that can make docker stop take too much time and "fall back" to docker kill, which can arise when the shell entrypoint starts some other process(es):
First, when the last line of the shell entrypoint runs another, main program, don't forget to prepend this line with the exec builtin:
exec prog arg1 arg2 ...
But when the shell entrypoint is intended to run for a long time, trapping signals (at least INT and TERM; KILL cannot be trapped) is very important, as shown in the sketch below
(see also this SO question: Docker Run Script to catch interruption signal).
Otherwise, if the signals are not forwarded to the child processes, we run the risk of hitting the "PID 1 zombie reaping problem", for instance
(see also this SO question for details: Speed up docker-compose shutdown).
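A minimal sketch of an entrypoint combining both remarks (prog and its arguments are placeholders for the real main program):
#!/bin/bash
# Run the main program in the background so this shell can trap signals.
prog arg1 arg2 &
child=$!
# Forward INT/TERM to the child (KILL cannot be trapped), then collect
# its exit status and propagate it.
trap 'kill -TERM "$child" 2>/dev/null; wait "$child"; exit $?' INT TERM
wait "$child"
exit $?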
CTRL+C sends a signal to docker running on that console.
To send a signal to the script you could use
docker exec -it <containerId> /bin/sh -c "pkill -INT -f 'start\.sh'"
Or include echo "my PID: $$" in your script and send
docker exec -it <containerId> /bin/sh -c "kill -INT <script pid>"
Some shell implementations in docker might ignore the signal.
This script will correctly react to pkill -15. Please note that signals are specified without the SIG prefix.
#!/bin/sh
trap "touch SIGINT.tmp; ls -l; exit" INT TERM
trap "echo 'really exiting'; exit" EXIT
echo Starting script
while true; do sleep 1; done
The long sleep command was replaced by an infinite loop of short ones because the shell executes a trap handler only once the current foreground command (here sleep) has finished; with one-second sleeps, the script reacts to a signal within a second.
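An alternative that reacts immediately instead of within a second is to run sleep in the background and use the wait builtin, which trapped signals do interrupt; a sketch based on the script above:
#!/bin/sh
trap "touch SIGINT.tmp; ls -l; exit" INT TERM
trap "echo 'really exiting'; exit" EXIT
echo Starting script
# wait is interrupted as soon as a trapped signal arrives, so the
# handler runs at once rather than after the current sleep finishes.
sleep 100000 &
wait $!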
The solution I found was to just use the --init flag.
docker run --init [MORE OPTIONS] IMAGE [COMMAND] [ARG...]
Per their docs...
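Applied to the image from the question, for instance:
docker run --init --rm -it tmp
With --init, PID 1 inside the container is a small init process (Docker uses tini) that forwards signals to the script and reaps child processes, so the traps fire without changing the script.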

Get exit code from docker entrypoint command

I have a docker container that runs a script via the entrypoint directive. The container closes after the entrypoint script is finished. I need to get the exit code from the script in order to do some logging if the script fails. Right now I'm thinking of something like this
docker run container/myContainer:latest
if [ $? != 0 ];
then
do some stuff
fi
Is this the proper way to achieve this? Specifically, will this be the exit code of docker run or of my entrypoint script?
Yes, the docker container run exit code is the exit code from your entrypoint/cmd:
$ docker container run busybox /bin/sh -c "exit 5"
$ echo $?
5
You may also inspect the state of an exited container:
$ docker container inspect --format '{{.State.ExitCode}}' \
$(docker container ls -lq)
5
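Since the goal is to log when the script fails, a minimal sketch that keeps the code in a variable (the message and log file are placeholders):
docker run container/myContainer:latest
status=$?
if [ "$status" -ne 0 ]; then
    # hypothetical logging action
    echo "myContainer exited with code $status" >> failures.log
fi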
Checking the value of $? is not needed if you just want to act upon the exit status of the previous command.
if docker run container/myContainer:latest; then
do_stuff
fi
The above example will run/execute do_stuff if the exit status of docker run is zero which is a success.
You can also add else and elif clauses to that.
Or, if you want to negate the exit status of the command:
if ! docker run container/myContainer:latest; then
do_stuff
fi
The above example will run do_stuff if the exit status of docker run is anything but zero (i.e. 1 or higher), since the ! negates it.
If the command has some output and does not have a silent/quiet flag/option, you can redirect it to /dev/null:
if docker run container/myContainer:latest >/dev/null; then
do_stuff
fi
This should not output anything to stdout.
see help test | grep -- '^[[:blank:]]*!'
In some cases, if some output still shows, it might be on stderr, which you can silence with >/dev/null 2>&1 instead of just >/dev/null.
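For instance, with the else clause mentioned above (the two handler functions are placeholders):
if docker run container/myContainer:latest >/dev/null 2>&1; then
    handle_success
else
    handle_failure
fi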

Why is executing "docker exec" killing my SSH session?

Let's say I have two servers, A and B. I also have a bash script that is executed on server A that looks like this:
build_test.sh
#!/bin/bash
ssh user@B <<'ENDSSH'
echo "doing test"
bash -ex test.sh
echo "completed test"
ENDSSH
test.sh
#!/bin/bash
docker exec -i my_container /bin/bash -c "echo hi!"
The problem is that completed test does not get printed to the terminal.
Here's the output of running build_test.sh:
$ ./build_test.sh
doing test
+ docker exec -i my_container /bin/bash -c "echo hi!"
hi!
I'm expecting completed test to be output after hi!, but it isn't. How do I fix this?
docker is consuming, though not using, its standard input, which it inherits from test.sh. test.sh inherits its standard input from bash, which inherits its standard input from ssh. This means that docker itself is reading the last line of the script before the remote shell can.
To fix, just redirect docker's standard input from /dev/null.
docker exec -i my_container /bin/bash -c "echo hi!" < /dev/null
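Alternatively, since nothing is actually piped into the container here, dropping the -i flag also works: without it, docker exec does not read its inherited standard input at all.
docker exec my_container /bin/bash -c "echo hi!"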

Shell script to enter Docker container and execute command, and eventually exit

I want to write a shell script that enters into a running docker container, edits a specific file and then exits it.
My initial attempt was this -
Create run.sh file.
Paste the following commands into it
docker exec -it container1 bash
sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
exit
Run the script -
bash ./run.sh
However, once the script runs docker exec -it container1 bash, it lands in an interactive bash prompt inside the container. The rest of the script seems to break at that point: the remaining commands do not run inside the container, and only execute on the host after I leave that shell.
The issue is solved by using the below piece of code:
myHostName="$(hostname)"
docker exec -i -e VAR="${myHostName}" root_reverse-proxy_1 bash <<'EOF'
# the quoted delimiter ('EOF') keeps the outer shell from expanding $VAR;
# the inner bash expands it from the environment variable set with -e
sed -i -e "s/ServerName .*/ServerName $VAR/" /etc/httpd/conf.d/vhosts.conf
echo -e "\n Updated /etc/httpd/conf.d/vhosts.conf $VAR \n"
exit
EOF
I think you are close. You can try something like:
docker exec container1 sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
Explanations:
-it is for interactive session, so you don't need it here.
docker can execute any command (like sed), so you don't have to run sed via bash.
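If several commands really do need to run inside the container, they can still be passed to a single non-interactive shell; for example (the trailing echo is only an illustration):
docker exec container1 bash -c "sed -i -e 's/false/true/g' /opt/data_dir/gs.xml && echo updated"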

Ruby exits with exit code 1 responding to TERM if running without shell

If Ruby receives the TERM signal, it usually exits with the exit code 143, which according to this source indicates the process successfully responded to that signal. But if I make the script run without a shell, the exit code is 1.
With shell:
> cat Dockerfile
FROM ruby:alpine
CMD ruby -e "Process.kill('TERM', Process.pid)" # <- shell form
> docker build -t term_shell . > /dev/null
> docker run term_shell
Terminated
> echo $?
143
Without shell:
> cat Dockerfile
FROM ruby:alpine
CMD ["ruby", "-e", "Process.kill('TERM', Process.pid)"] # <- exec form
> docker build -t term_exec . > /dev/null
> docker run term_exec
> echo $?
1
But if I exit myself with 143, the exit code is as expected:
> cat Dockerfile
FROM ruby:alpine
CMD ["ruby", "-e", "exit(143)"] # <- exec form
> docker build -t exit_exec . > /dev/null
> docker run exit_exec
> echo $?
143
Why is that? Does the exit code when ruby receives TERM come not from Ruby, but from the shell?
The exit code of your second example is 1 because the call Process.kill('TERM', Process.pid) failed. ruby -e exited because of this failure, and the status code in that case is 1.
With CMD ruby -e "Process.kill('TERM', Process.pid)", docker executes the given command in a shell. In a running container, it means that the root process with pid 1 will be /bin/sh -c, and the ruby -e command will be executed in a child process with another pid (for example 6).
With CMD ["ruby", "-e", "Process.kill('TERM', Process.pid)"], docker executes directly ruby -e as the root process with pid 1.
The PID 1 on Linux behaves differently than the normal ones. From docker documentation:
Note: A process running as PID 1 inside a container is treated specially by Linux: it ignores any signal with the default action. So, the process will not terminate on SIGINT or SIGTERM unless it is coded to do so.
So in your case, the TERM signal is ignored by your process.
You can find more information on PID 1 behavior on this article:
https://hackernoon.com/my-process-became-pid-1-and-now-signals-behave-strangely-b05c52cc551c
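To see the expected 143 with the exec form, one option is to keep ruby off PID 1 by letting docker provide an init process (same image name as in the question), so that TERM takes its default action:
docker run --init term_exec
echo $?
This should print 143, since the init process propagates the child's exit status.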
