Why is executing "docker exec" killing my SSH session? - bash

Let's say I have two servers, A and B. I also have a bash script that is executed on server A that looks like this:
build_test.sh
#!/bin/bash
ssh user@B <<'ENDSSH'
echo "doing test"
bash -ex test.sh
echo "completed test"
ENDSSH
test.sh
#!/bin/bash
docker exec -i my_container /bin/bash -c "echo hi!"
The problem is that completed test does not get printed to the terminal.
Here's the output of running build_test.sh:
$ ./build_test.sh
doing test
+ docker exec -i my_container /bin/bash -c "echo hi!"
hi!
I'm expecting completed test to be output after hi!, but it isn't. How do I fix this?

docker is consuming, though not using, its standard input, which it inherits from test.sh. test.sh in turn inherits its standard input from bash, which inherits it from ssh. This means that docker itself reads the last line of the script (echo "completed test") before the remote shell can.
To fix, just redirect docker's standard input from /dev/null.
docker exec -i my_container /bin/bash -c "echo hi!" < /dev/null
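Alternatively, since the command inside the container never reads its input, you can drop the -i flag altogether; without it, docker exec does not attach your standard input in the first place:
docker exec my_container /bin/bash -c "echo hi!"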

Related

Bash script start docker container script & pass in arguments

I have a bash script that runs command-line functions, and I then need the script to run commands in a docker container, pass arguments into it, and eventually exit. However, I'm unable to get the script to pass arguments into the docker container. How can I do this?
This is what the docker commands look like without the bash script for reference.
$ docker exec -it rti_cmd
root@29c:/data# rti
187.0.0.1:9806> run_cmd
(integer) 0
187.0.0.1:9806> exit
root@29c:/data# exit
exit
Code snippet with two variations of attempts:
#!/bin/bash
docker exec -it rti_cmd bash << eeee
rti
run_cmd
exit
exit
eeee
# also tried without the ";"
docker exec -it rti_cmd bash /bin/sh -c
"rti;
run_cmd;
exit;
exit"
Errors:
$ chmod +x test.sh
$ ./test.sh
the input device is not a TTY
/bin/sh: /bin/sh: cannot execute binary file
./test.sh: line 17: $'rti;\nrun_cmd;\nexit;\nexit': command not found
You don't need -i (interactive) nor -t (tty) if you want to be non-interactive.
docker exec rti_cmd sh -c 'rti;run_cmd'
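Note that your transcript suggests run_cmd is typed at the rti prompt rather than at the shell, so it may need to go to rti's standard input instead of being run as a shell command. A sketch under that assumption (rti reading commands from stdin, as REPL-style CLIs usually do):
docker exec rti_cmd sh -c 'echo run_cmd | rti'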

Bash script with -e not terminated when using piped script and docker

The following script.sh is executed:
#!/bin/bash
set -eu
# code ...
su buser
mkdir /does/not/work
echo $?
echo This should not be printed
Output:
1
This should not be printed
How I execute the script:
docker exec -i fancy_container bash < script.sh
Question: Why does the script not terminate after the failing command, even though set -e was defined, and how can I get the script to exit on any failing command? I think the key point is the '<' operator; I don't understand exactly how it executes the script.
Notes:
-e means: Abort script at first error, when a command exits with non-zero status (except in until or while loops, if-tests, list constructs)
Possible solution:
docker exec -i fancy_container bash -c "cat > tmp.sh; bash tmp.sh" < script.sh
How it works:
< script.sh - Pipe all lines of this file from the host to the docker exec command.
cat > tmp.sh - Save the incoming piped content to a file inside the container.
bash tmp.sh - Execute the file as a whole inside the container, which means -e works as expected again!
But I still don't know why the initial approach isn't working.
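It appears to be the same stdin consumption described in the first answer above. With bash < script.sh, bash reads the script line by line from its standard input; when it reaches su buser (invoked without a command), su starts a child shell that reads the remaining lines from that same stdin. The mkdir and the echos therefore run in the child shell, where set -e was never enabled. A sketch of an alternative fix under that reading, keeping control in the parent script by passing the command to su -c:
#!/bin/bash
set -eu
# Run the command as buser without handing the rest of the script
# to a child shell; set -e in this script still applies:
su buser -c 'mkdir /does/not/work'
echo "This should not be printed"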

Docker bash shell script does not catch SIGINT or SIGTERM

I have the following two files in a directory:
Dockerfile
FROM debian
WORKDIR /app
COPY start.sh /app/
CMD ["/app/start.sh"]
start.sh (with permissions 755 using chmod +x start.sh)
#!/bin/bash
trap "echo SIGINT; exit" SIGINT
trap "echo SIGTERM; exit" SIGTERM
echo Starting script
sleep 100000
I then run the following commands:
$ docker build . -t tmp
$ docker run --name tmp tmp
I then expect that pressing Ctrl+C would send a SIGINT to the program, which would print SIGINT to the screen then exit, but that doesn't happen.
I also try running $ docker stop tmp, which I expect would send a SIGTERM to the program, but checking $ docker logs tmp after shows that SIGTERM was not caught.
Why are SIGINT and SIGTERM not being caught by the bash script?
Actually, your Dockerfile and start.sh entrypoint script work as is for me with Ctrl+C, provided you run the container with one of the following commands:
docker run --name tmp -it tmp
docker run --rm -it tmp
Documentation details
As specified in docker run --help:
the --interactive = -i CLI flag asks to keep STDIN open even if not attached
(typically useful for an interactive shell, or when also passing the --detach = -d CLI flag)
the --tty = -t CLI flag asks to allocate a pseudo-TTY
(which notably forwards signals to the shell entrypoint, especially useful for your use case)
Related remarks
For completeness, note that there are several related issues that can make docker stop take too long and "fall back" to docker kill, which can arise when the shell entrypoint starts some other process(es):
First, when the last line of the shell entrypoint launches the main program, don't forget to prefix that line with the exec builtin (see the sketches after this list):
exec prog arg1 arg2 ...
But when the shell entrypoint is intended to run for a long time, trapping signals (at least INT / TERM; KILL cannot be trapped) is very important;
(see also this SO question: Docker Run Script to catch interruption signal)
Otherwise, if the signals are not forwarded to the child processes, we run the risk of hitting the "PID 1 zombie reaping problem", for instance
(see also this SO question for details: Speed up docker-compose shutdown)
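Putting these two remarks together, here are two minimal entrypoint sketches (prog and its arguments are placeholders, not from the question):
#!/bin/bash
# Variant A: replace the shell with the main program, so it becomes
# PID 1 and receives the SIGTERM from docker stop directly.
exec prog arg1 arg2
#!/bin/bash
# Variant B: keep the shell as PID 1, but forward TERM/INT to the
# child, then wait for it to finish before exiting.
prog arg1 arg2 &
child=$!
trap 'kill -TERM "$child"; wait "$child"' TERM INT
wait "$child"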
Ctrl+C sends a signal to the docker process running in that console, not to the script inside the container.
To send a signal to the script you could use
docker exec -it <containerId> /bin/sh -c "pkill -INT -f 'start\.sh'"
Or include echo "my PID: $$" in your script and send
docker exec -it <containerId> /bin/sh -c "kill -INT <script pid>"
Some shell implementations in docker might ignore the signal.
This script will correctly react to pkill -15. Please note that signals are specified without the SIG prefix.
#!/bin/sh
trap "touch SIGINT.tmp; ls -l; exit" INT TERM
trap "echo 'really exiting'; exit" EXIT
echo Starting script
while true; do sleep 1; done
The long sleep command was replaced by an infinite loop of short ones, since the shell only runs a trap handler once the current foreground command (here, sleep) has finished.
The solution I found was to just use the --init flag.
docker run --init [MORE OPTIONS] IMAGE [COMMAND] [ARG...]
Per their docs...
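In short, --init makes Docker run a minimal init process (tini) as PID 1, which forwards signals to the command and reaps zombie processes. For the image built above, that would look like:
docker run --init --name tmp tmp
Ctrl+C and docker stop should then reach the script's traps.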

Shell script to enter Docker container and execute command, and eventually exit

I want to write a shell script that enters into a running docker container, edits a specific file and then exits it.
My initial attempt was this -
Create run.sh file.
Paste the following commands into it
docker exec -it container1 bash
sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
exit
Run the script -
bash ./run.sh
However, once the script enters container1, it lands in the container's bash prompt. The whole script seems to break as soon as I enter the container, leaving behind the parent shell that contains the script.
The issue is solved by using the piece of code below:
myHostName="$(hostname)"
docker exec -i -e VAR=${myHostName} root_reverse-proxy_1 bash <<'EOF'
sed -i -e "s/ServerName .*/ServerName $VAR/" /etc/httpd/conf.d/vhosts.conf
echo -e "\n Updated /etc/httpd/conf.d/vhosts.conf $VAR \n"
exit
EOF
I think you are close. You can try something like:
docker exec container1 sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
Explanations:
-it is for an interactive session, so you don't need it here.
docker exec can run any command (like sed) directly; you don't have to run sed via bash.
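If you do need shell features (multiple statements, pipes, globbing), you can still stay non-interactive by wrapping them in sh -c. A sketch that edits the file and then verifies the change (the grep is just an illustration, not from the question):
docker exec container1 sh -c 'sed -i -e "s/false/true/g" /opt/data_dir/gs.xml && grep -c true /opt/data_dir/gs.xml'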

Simplest way to "forward" script arguments to another command

I have the following script:
#!/bin/bash
docker exec my_container ./bin/cli
And I have to append all arguments passed to the script to the command inside the script. So, for example, executing
./script some_command -t --option a
Should run
docker exec my_container ./bin/cli some_command -t --option a
inside the script. I am looking for the simplest/most elegant way to do this.
"$#" represent all arguments and support quoted arguments too:
docker exec my_container ./bin/cli "$#"
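To see what the quoting buys you, here is a small standalone sketch (demo.sh is a hypothetical name, no container needed):
#!/bin/bash
printf 'arg: %s\n' "$@"   # prints one line per original argument
Running ./demo.sh "a b" c prints arg: a b followed by arg: c; with an unquoted $@ (or with $*), "a b" would be re-split into two separate arguments.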
