How to add automatic prefix before bash command ([prefix] docker exec) - bash

I'd like to ask if there is a way to add a prefix before a certain command. Most of the similar questions on SO are about adding a prefix to a command's output, not to the command invocation itself, so here is my example:
I need to connect to a docker container. I'm working on Windows and use ConEmu with a bash terminal, so I need the winpty prefix to be able to connect to the container's Unix terminal, as follows:
docker exec -it my_container bash
results in:
unable to setup input stream: unable to set IO streams as raw terminal: The handle is invalid.
so I need to use:
winpty docker exec -it my_container bash
root@0991eb946acc:/var/www/my_container#
Unfortunately, if I start the line with winpty, my autocompletion doesn't work, so I first have to type the docker command and then jump back to the beginning of the line to insert winpty. I'd like bash to automatically detect whenever I run "docker exec" and add the winpty prefix before it.
How to achieve that?
I know I could make an alias for
alias de='winpty docker exec'
but I would rather stay with normal docker command flow to have the autocompletion.

Write a shell function that wraps docker. If it's a docker exec command, call winpty; otherwise, use command to fall back to the underlying docker binary.
docker() {
    if [[ ${1:-} == exec ]]; then
        (set -x; winpty docker "$@")
    else
        command docker "$@"
    fi
}
I put the set -x in there so it'll print when winpty is being invoked; that way there's no hidden magic. I like to be reminded when my shell is doing sneaky things.
$ docker exec -it my_container bash
+ winpty docker exec -it my_container bash
root@0991eb946acc:/var/www/my_container#
I'm not familiar with winpty, but I expect winpty docker will call the docker binary and not this shell function. If I'm wrong, though, you're in trouble: the function will call itself over and over in an endless recursive loop. Yikes! If that happens you can use which to ensure it calls the binary.
docker() {
    if [[ ${1:-} == exec ]]; then
        (set -x; winpty "$(which docker)" "$@")
    else
        command docker "$@"
    fi
}
If you're wondering about the shell syntax:
${1} is the function's first argument.
${1:-} ensures you don't get an "unbound variable" error on the off-chance that you have set -u enabled to detect unset variables.
"$#" is an array of all the function's arguments.

Related

Docker bash shell script does not catch SIGINT or SIGTERM

I have the following two files in a directory:
Dockerfile
FROM debian
WORKDIR /app
COPY start.sh /app/
CMD ["/app/start.sh"]
start.sh (with permissions 755 using chmod +x start.sh)
#!/bin/bash
trap "echo SIGINT; exit" SIGINT
trap "echo SIGTERM; exit" SIGTERM
echo Starting script
sleep 100000
I then run the following commands:
$ docker build . -t tmp
$ docker run --name tmp tmp
I then expect that pressing Ctrl+C would send a SIGINT to the program, which would print SIGINT to the screen then exit, but that doesn't happen.
I also try running $ docker stop tmp, which I expect would send a SIGTERM to the program, but checking $ docker logs tmp after shows that SIGTERM was not caught.
Why are SIGINT and SIGTERM not being caught by the bash script?
Actually, your Dockerfile and start.sh entrypoint script work as is for me with Ctrl+C, provided you run the container with one of the following commands:
docker run --name tmp -it tmp
docker run --rm -it tmp
Documentation details
As specified in docker run --help:
the --interactive = -i CLI flag asks to keep STDIN open even if not attached
(typically useful for an interactive shell, or when also passing the --detach = -d CLI flag)
the --tty = -t CLI flag asks to allocate a pseudo-TTY
(which notably forwards signals to the shell entrypoint, especially useful for your use case)
Related remarks
For completeness, note that several related issues can make docker stop take too much time and "fall back" to docker kill; these can arise when the shell entrypoint starts some other process(es):
First, when the last line of the shell entrypoint runs another, main program, don't forget to prepend this line with the exec builtin:
exec prog arg1 arg2 ...
But when the shell entrypoint is intended to run for a long time, trapping signals (at least INT / TERM, but not KILL) is very important;
{see also this SO question: Docker Run Script to catch interruption signal}
Otherwise, if the signals are not forwarded to the child processes, we run the risk of hitting the "PID 1 zombie reaping problem", for instance
{see also this SO question for details: Speed up docker-compose shutdown}
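The first remark is easy to check outside Docker: exec replaces the shell process with the program, so the program keeps the shell's PID (in a container, PID 1) and receives docker stop's SIGTERM directly. A minimal sketch (the /tmp path is arbitrary):

```shell
# Before and after `exec`, the PID stays the same: the shell is
# replaced by the new program rather than forking a child.
cat > /tmp/entry_demo.sh <<'EOF'
#!/bin/sh
echo "before exec: $$"
exec sh -c 'echo "after exec: $$"'
EOF
sh /tmp/entry_demo.sh   # both lines print the same PID
```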
CTRL+C sends a signal to docker running on that console.
To send a signal to the script you could use
docker exec -it <containerId> /bin/sh -c "pkill -INT -f 'start\.sh'"
Or include echo "my PID: $$" on your script and send
docker exec -it <containerId> /bin/sh -c "kill -INT <script pid>"
Some shell implementations in docker might ignore the signal.
This script will correctly react to pkill -15. Please note that signals are specified without the SIG prefix.
#!/bin/sh
trap "touch SIGINT.tmp; ls -l; exit" INT TERM
trap "echo 'really exiting'; exit" EXIT
echo Starting script
while true; do sleep 1; done
The long sleep command was replaced by an infinite loop of short ones because the shell only runs a trap handler once the currently running command (here, sleep) has finished.
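The trap behaviour can be exercised locally, without a container; a sketch that starts the loop in the background and sends it SIGTERM:

```shell
cat > /tmp/loop_demo.sh <<'EOF'
#!/bin/sh
trap 'echo "caught TERM"; exit 0' TERM
while true; do sleep 1; done
EOF
sh /tmp/loop_demo.sh &
pid=$!
sleep 1            # give the script time to install its trap
kill -TERM "$pid"
wait "$pid"        # the trap prints "caught TERM" and exits 0
```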
The solution I found was to just use the --init flag.
docker run --init [MORE OPTIONS] IMAGE [COMMAND] [ARG...]
Per their docs...

Dereference environment variable on parameter expansion in shell script

I am trying to dereference the value of an environment variable using the parameter expansion $@, but it doesn't seem to work.
I need to call a shell script with certain arguments. The list of arguments contains environment variables, and those environment variables are expected to be present where the shell script is executed. I do not know the list of commands beforehand, so I expand them using $@. However, the script is not able to dereference the value of the environment variables.
A minimal setup which explains my problem can be done as below.
Dockerfile
FROM alpine:3.10
ENV MY_VAR=production
WORKDIR /app
COPY run.sh .
ENTRYPOINT [ "sh", "run.sh" ]
run.sh
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
I can build the image using docker build . -t env-test. When I run it using docker run env-test:latest 'echo $MY_VAR', I get the below output.
Value of MY_VAR is production
Begin
$MY_VAR
Done
While the output that I am expecting is:
Value of MY_VAR is production
Begin
production
Done
SideNote: In actuality I am trying to run it using a compose file like below:
version: '3'
services:
run:
image: env-test:latest
command: echo $$MY_VAR
but it again gives me a similar result.
Expanding on the eval approach, here is a simple bash script that will use eval to evaluate a string as a sequence of bash commands:
#!/usr/bin/env bash
echo program args: $@
eval $@
but beware, eval comes with dangers:
https://medium.com/dot-debug/the-perils-of-bash-eval-cc5f9e309cae
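A quick local check of what eval adds, with sh -c standing in for the container entrypoint (no Docker needed):

```shell
export MY_VAR=production
# Without eval, the argument is split into words but $MY_VAR is
# never re-expanded, so the literal string is echoed:
sh -c 'echo "Begin"; $@; echo "Done"' _ 'echo $MY_VAR'         # prints $MY_VAR
# With eval, the argument is re-parsed as shell code, so the
# variable is expanded by the inner shell:
sh -c 'echo "Begin"; eval "$@"; echo "Done"' _ 'echo $MY_VAR'  # prints production
```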
First thing: $# would just print the number of arguments.
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
$@ = stores all the arguments as a list of strings
$* = stores all the arguments as a single string
$# = stores the number of arguments
What does $@ mean in a shell script?
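The three expansions side by side (the script path is arbitrary):

```shell
cat > /tmp/args_demo.sh <<'EOF'
#!/bin/sh
echo "count: $#"
echo "star:  $*"
for a in "$@"; do echo "at:    $a"; done
EOF
sh /tmp/args_demo.sh one "two three"
# count: 2
# star:  one two three
# at:    one
# at:    two three
```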
Second thing: when you run the command below,
docker run env-test:latest 'echo $MY_VAR'
the single quotes keep your host shell from expanding $MY_VAR, and the container's shell does not re-expand variables when it expands $@, so the literal string is printed.
To set a variable in the container's environment, pass it with -e MY_VAR=test; don't pass it as an argument to docker run, where it would be expanded from the host's environment.
docker run -e MY_VAR=test env-test:latest
So the value of MY_VAR will be test, not production.
To see how an argument to docker run is expanded on the host, try:
export MY_VAR2="value_from_host"
Now run
docker run env-test:latest "echo $MY_VAR2"
The value will be value_from_host, because with double quotes the argument is expanded by the host shell before docker ever sees it.
There are more ways to skin this particular cat:
me@computer:~$ docker run -it --rm ubuntu:20.04 bash -c 'echo $HOSTNAME'
e610946f50c1
Here, we're calling on bash inside the container to process everything inside the single quotes, so variable substitution and/or expansion isn't applied by your shell, but by the shell inside the container.
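The same quoting rule can be reproduced with a plain sh -c standing in for the container's shell (variable names made up):

```shell
WHO=host
export WHO
sh -c "echo $WHO"             # double quotes: your shell expands it → host
WHO=inner sh -c 'echo $WHO'   # single quotes: the inner shell expands it → inner
```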
Another approach is:
me@computer:~$ cat test.sh
#!/bin/bash
echo $HOSTNAME
me@computer:~$ cat test.sh | docker run -i --rm ubuntu:20.04 bash
62ba950a60fe
In this case, cat is "pushing" the contents of the script to bash in the container, so it's functionally equivalent to my first example. The first method is "cleaner"; however, if you've got a more complex script, multi-line variables or other stuff that's difficult to put into a single command, the second method is a better choice.
Note: The hostname is different in each example, because I'm using the --rm option which discards the container once it exits. This is great when you want to run a command in a container but don't need to keep the container afterwards.

Run an arbitrary command in a docker container that runs on a remote host after sourcing some environment variables from another command

To show what I am trying to do, this is part of the bash script I have so far:
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior
ssh user@$MY_IP -t 'bash -c "docker exec -it $( docker ps -a -q -f name='$CONTAINER_DOCKER_NAME' | head -n 1 ) /bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND""'
So let's break down the long command:
I am ssh-ing into a host, where I run bash to fetch the correct container with docker ps, and then use docker exec to run a shell in the container that loads some environment variables my $COMMAND needs to work. Important to note: $BAR should be the value of the BAR variable inside the container.
So that's what I'm trying to accomplish in theory. However, when running this, no matter how I set the braces, quotes or escape characters, I always run into problems: either the shell syntax is not correct, or it does not run the correct command (especially when the command has multiple arguments), or it loads $BAR's value from my local desktop or the remote host, but not from the container.
Is this even possible at all with a single shell one-liner?
I think we can simplify your command quite a bit.
First, there's no need to use eval here, and you don't need the &&
operator, either:
/bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND"
Instead:
/bin/sh -c "FOO=$BAR $COMMAND"
That sets the environment variable FOO for the duration of
$COMMAND.
Next, you don't need this complex docker ps expression:
docker ps -a -q -f name="$CONTAINER_DOCKER_NAME"
Docker container names are unique. If you have a container name
stored in $CONTAINER_DOCKER_NAME, you can just run:
docker exec -it $CONTAINER_DOCKER_NAME ...
This simplifies the docker command down to:
docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c "FOO=\$BAR $COMMAND"
Note how we're escaping the $ in $BAR there, because we want that
interpreted inside the container, rather than by our current shell.
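The escaping can be sketched with a local sh standing in for the container (printenv and the variable values are only for illustration):

```shell
COMMAND='printenv FOO'
BAR=host-value
export BAR
# Unescaped: the current shell expands $BAR before sh ever runs.
sh -c "FOO=$BAR $COMMAND"                    # → host-value
# Escaped: the inner shell expands $BAR from its own environment.
BAR=inner-value sh -c "FOO=\$BAR $COMMAND"   # → inner-value
```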
Now we just need to arrange to run this via ssh. There are a couple
of solutions to that. We can just make sure to protect everything on
the command line against the extra level of shell expansion, like
this:
ssh user@$MY_IP "docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c \"FOO=\\\$BAR $COMMAND\""
We need to wrap the entire command in double quotes, which means we
need to escape any quotes inside the command (we can't use single
quotes because we actually want to expand the variable
$CONTAINER_DOCKER_NAME locally). We're going to lose one level of
\ expansion, so our \$BAR becomes \\\$BAR.
If your command isn't interactive, you can make this a little less
hairy by piping the script to bash rather than including it on the
command line, like this:
ssh user@$MY_IP docker exec -i $CONTAINER_DOCKER_NAME /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF
That simplifies the quoting and escaping necessary to get things
passed through to the container shell.
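In an unquoted heredoc, $VAR is expanded immediately while \$VAR survives for the next shell, which is easy to verify locally (run-something is a made-up command name, only echoed here):

```shell
COMMAND='run-something'
# $COMMAND is expanded now; \$BAR is left for the container's shell.
cat <<EOF
FOO=\$BAR $COMMAND
EOF
# → FOO=$BAR run-something
```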
Thanks to larsks' great explanation, I got it working; my final one-liner is:
ssh -i $ECS_SSH_KEY ec2-user@$EC2_IP -t "bash -c \"docker exec -it \$( docker ps -a -q -f name=$CONTAINER_DOCKER_NAME | head -n 1 ) /bin/sh -c \\\"eval \\\\\\\$(AWS_ENV_PATH=/\\\\\\\$ENVIRONMENT /bin/aws-env) && $COMMAND\\\"\""
So basically you wrap everything in double quotes, and then also use double quotes inside of it, because we need some variables (like $CONTAINER_DOCKER_NAME) from the host. To escape the quotes and the $ sign you use \.
But because we have multiple levels of shells (host, server, container), we also need multiple levels of escaping. The first level is just \$, which ensures that a variable (or a shell command, like docker ps) is run not on the host but on the server.
The next level of escaping is seven \ characters. Every \ escapes the character to its right, so it ends up as \\\$ at the second level (server) and \$ at the third level (container). This ensures the variable is evaluated in the container, not on the server.
The same principle applies to the double quotes: everything between \" is run at the second level, and everything between \\\" at the third level.
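The level peeling can be simulated with nested sh -c calls standing in for the server and the container (variable names made up):

```shell
LEVEL=host
export LEVEL
sh -c "echo \$LEVEL"                            # \$   → expanded one level down → host
sh -c "LEVEL=server sh -c \"echo \\\$LEVEL\""   # \\\$ → expanded two levels down → server
```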

How to pass a command with $() to exec.command() in golang

I want to execute a command like docker exec "$(docker-compose ps -q web)" start.sh from a Go program using exec.Command(). The problem is getting the command inside $() to execute.
The command inside $() is executed and replaced with its output by your shell on the command line (typically bash, but it can be sh or others). exec.Command runs the program directly, so that replacement doesn't happen. This means you need to pass the command to a shell that will interpret and execute the substitution:
bash -c "docker exec \"$(docker-compose ps -q web)\" start.sh"
Code Example:
exec.Command("/bin/sh", "-c", "docker exec \"$(docker-compose ps -q web)\" start.sh")
Alternatively, you can run docker-compose ps -q web yourself, get its output and do the substitution instead of having bash do it for you.
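The shell side of this is easy to demonstrate: $() is a shell feature, so it is only substituted when some shell parses the string (echo stands in for the docker commands here):

```shell
# A shell performs the command substitution...
sh -c 'echo "id=$(echo abc123)"'      # → id=abc123
# ...but a directly invoked program receives the literal characters,
# which is what happens with exec.Command and no shell in between:
printf '%s\n' 'id=$(echo abc123)'     # → id=$(echo abc123)
```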

Simplest way to "forward" script arguments to another command

I have following script
#!/bin/bash
docker exec my_container ./bin/cli
And I have to append all arguments passed to the script to the command inside script. So for example executing
./script some_command -t --option a
Should run
docker exec my_container ./bin/cli some_command -t --option a
Inside the script. I am looking for simplest/most elegant way.
"$#" represent all arguments and support quoted arguments too:
docker exec my_container ./bin/cli "$@"
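A quick check that quoting survives the forwarding (the script stands in for the docker exec wrapper):

```shell
cat > /tmp/fwd_demo.sh <<'EOF'
#!/bin/sh
# stands in for: docker exec my_container ./bin/cli "$@"
printf '<%s> ' "$@"; echo
EOF
sh /tmp/fwd_demo.sh some_command -t --option "a b"
# → <some_command> <-t> <--option> <a b>
```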
