Run an arbitrary command in a docker container that runs on a remote host after sourcing some environment variables from another command - bash

To show what I am trying to do, this is part of the bash script I have so far:
COMMAND="${@:1}"
CONTAINER_DOCKER_NAME=this-value-is-computed-prior
MY_IP=this-ip-is-computed-prior
ssh user@$MY_IP -t 'bash -c "docker exec -it $( docker ps -a -q -f name='$CONTAINER_DOCKER_NAME' | head -n 1 ) /bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND""'
So let's break down the long command:
I am ssh-ing into a host, where I run bash, which fetches the correct container with docker ps; then I do docker exec to run a shell in the container, which loads some environment variables that my $COMMAND needs to work. It is important to note that $BAR should be the value of the BAR variable inside the container.
So that's what I'm trying to accomplish in theory. However, when running this, no matter how I set the braces, quotes or escape characters, I always run into problems: either the shell syntax is not correct, or it does not run the correct command (especially when the command has multiple arguments), or it loads the value of $BAR from my local desktop or the remote host, but not from the container.
Is this even possible at all with a single shell one-liner?

I think we can simplify your command quite a bit.
First, there's no need to use eval here, and you don't need the &&
operator, either:
/bin/sh -c "eval $(echo export FOO=$BAR) && $COMMAND"
Instead:
/bin/sh -c "FOO=$BAR $COMMAND"
That sets the environment variable FOO for the duration of
$COMMAND.
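The scoping of such a prefix assignment can be checked locally without docker (values here are illustrative): the variable is visible to the child command but never set in the surrounding shell.

```shell
#!/usr/bin/env bash
# FOO exists only in the environment of the one command that follows it:
FOO=hello bash -c 'echo "inside: $FOO"'     # prints: inside: hello

# Back in the current shell, FOO was never set:
echo "outside: ${FOO:-<unset>}"             # prints: outside: <unset>
```

Note the single quotes around the inner echo: they ensure $FOO is expanded by the child shell, which is exactly the behaviour wanted for the container.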
Next, you don't need this complex docker ps expression:
docker ps -a -q -f name="$CONTAINER_DOCKER_NAME"
Docker container names are unique. If you have a container name
stored in $CONTAINER_DOCKER_NAME, you can just run:
docker exec -it $CONTAINER_DOCKER_NAME ...
This simplifies the docker command down to:
docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c "FOO=\$BAR $COMMAND"
Note how we're escaping the $ in $BAR there, because we want that
interpreted inside the container, rather than by our current shell.
Now we just need to arrange to run this via ssh. There are a couple
of solutions to that. We can just make sure to protect everything on
the command line against the extra level of shell expansion, like
this:
ssh user@$MY_IP "docker exec -it $CONTAINER_DOCKER_NAME \
/bin/sh -c \"FOO=\\\$BAR $COMMAND\""
We need to wrap the entire command in double quotes, which means we
need to escape any quotes inside the command (we can't use single
quotes because we actually want to expand the variable
$CONTAINER_DOCKER_NAME locally). We're going to lose one level of
\ expansion, so our \$BAR becomes \\\$BAR.
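The same peeling of escape levels can be reproduced locally by nesting bash -c calls in place of ssh and docker exec (the BAR values here are illustrative):

```shell
#!/usr/bin/env bash
BAR=outer

# No escaping: the current shell expands $BAR before the child runs.
bash -c "echo $BAR"                                   # prints: outer

# One level (like ssh alone): \$ survives to the child shell.
bash -c "BAR=inner; echo \$BAR"                       # prints: inner

# Two levels (like ssh + docker exec): \\\$ becomes \$ in the middle
# shell and a plain $ in the innermost shell.
bash -c "bash -c \"BAR=innermost; echo \\\$BAR\""     # prints: innermost
```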
If your command isn't interactive, you can make this a little less
hairy by piping the script to bash rather than including it on the
command line, like this:
ssh user@$MY_IP docker exec -i $CONTAINER_DOCKER_NAME /bin/sh <<EOF
FOO=\$BAR $COMMAND
EOF
That simplifies the quoting and escaping necessary to get things
passed through to the container shell.
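The here-document variant can be rehearsed locally, with a plain bash reading stdin standing in for the ssh + docker exec pipeline: the unescaped $COMMAND is filled in by the local shell, while the escaped \$BAR survives for the receiving shell.

```shell
#!/usr/bin/env bash
COMMAND='echo hi'   # stands in for the command you would forward

bash <<EOF
BAR=from-inner
echo "\$BAR: $COMMAND"
EOF
# prints: from-inner: echo hi
```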

Thanks to larsks' great explanation I got it working; my final one-liner is:
ssh -i $ECS_SSH_KEY ec2-user@$EC2_IP -t "bash -c \"docker exec -it \$( docker ps -a -q -f name=$CONTAINER_DOCKER_NAME | head -n 1 ) /bin/sh -c \\\"eval \\\\\\\$(AWS_ENV_PATH=/\\\\\\\$ENVIRONMENT /bin/aws-env) && $COMMAND\\\"\""
So basically you wrap everything in double quotes, and then also use double quotes inside of it, because we need some variables, like $CONTAINER_DOCKER_NAME, from the host. To escape the quotes and the $ sign you use \.
But because we have multiple levels of shells (host, server, container), we also need multiple levels of escaping. The first level is just \$, which ensures that the variable (or a shell command, like docker ps) is not expanded on the host but on the server.
The next level of escaping is seven \ characters. Every \ escapes the character to its right, so in the end it is \\\$ on the second level (server) and \$ on the third level (container). This ensures that the variable is evaluated in the container, not on the server.
The same principle applies to the double quotes: everything between \" runs on the second level and everything between \\\" runs on the third level.

Related

How to send a command properly into docker container?

I can execute the command below in my terminal successfully.
command:
gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Now, I want to store this command into a variable and expand this command inside a docker container.
My script run.sh looks like the following. I first store my target command into mycommand and run the container with the command as input.
mycommand=$@;
docker run -ti --rm osgeo/gdal:ubuntu-small-latest /bin/bash -c "cd $(pwd); ${mycommand}"
And then I execute the run.sh as following.
bash run.sh gdalwarp -s_srs "+datum=WGS84 +no_defs +geoidgrids=egm96-15.gtx" -t_srs "+datum=WGS84 +no_def" input.tif output.tif
Issue:
I was hoping everything after bash run.sh could be stored literally in the mycommand variable,
and that inside the docker container, mycommand could be expanded and executed literally. But it looks like the double quotes in my original command are lost during this process.
Thank you.
You could pass the command as arguments and then invoke "$@" inside the shell. I mostly prefer single quotes.
docker run -ti --rm osgeo/gdal:ubuntu-small-latest \
/bin/bash -c 'cd '"$(pwd)"' && "$@"' -- "$@"
If you only want the cd, just let docker change the directory with -w. In Bash, $PWD is faster than running the pwd command.
docker run ... -w "$PWD" image "$@"
Note that "$(pwd)" is not properly quoted inside the child shell: the result will undergo word splitting and filename expansion. Anyway, I recommend declare -p and Bash arrays (and declare -f for functions) to transfer data between Bash instances. declare always quotes everything properly, so that the child shell can import it correctly.
cmd=("$@")
pwd=$PWD
work() {
    cd "$pwd"
    "${cmd[@]}"
}
docker ... bash -c "$(declare -p pwd cmd); $(declare -f work); work"
Research: when to use quoting in shell, difference between single and double quotes, word splitting expansion and how to prevent it, https://mywiki.wooledge.org/Quotes , bash arrays, https://mywiki.wooledge.org/BashFAQ/050 .
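The declare -p hand-off can be exercised locally, with a child bash standing in for the container shell (the command and variable names here are illustrative):

```shell
#!/usr/bin/env bash
cmd=(printf '%s\n' "two words" more)   # an array survives the transfer intact
dir=$PWD

work() {
    cd "$dir"
    "${cmd[@]}"
}

# declare -p / declare -f emit properly quoted definitions that the
# child shell re-imports before calling work:
bash -c "$(declare -p dir cmd); $(declare -f work); work"
# prints: two words
#         more
```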

How to add automatic prefix before bash command ([prefix] docker exec)

I'd like to ask if there is a way to add a prefix before a certain command. Most of the similar questions on SO concern adding a prefix to the output of a command, not to the command execution itself, so here is my example:
I need to connect to docker container, I'm working on Windows and use ConEmu with bash terminal so I need to use winpty prefix to be able to connect to unix terminal of the container as follows:
docker exec -it my_container bash
results in:
unable to setup input stream: unable to set IO streams as raw terminal: The handle is invalid.
so I need to use:
winpty docker exec -it my_container bash
root@0991eb946acc:/var/www/my_container#
Unfortunately, if I add winpty at the beginning, my autocompletion doesn't work, so I first have to write the docker command and then jump to the beginning of the line to insert winpty. I'd like bash to automatically detect whenever I run "docker exec" and add the winpty prefix before it.
How to achieve that?
I know I could make an alias for
alias de='winpty docker exec'
but I would rather stay with normal docker command flow to have the autocompletion.
Write a shell function that wraps docker. If it's a docker exec command, call winpty; otherwise use command to fall back to the underlying docker binary.
docker() {
    if [[ ${1:-} == exec ]]; then
        (set -x; winpty docker "$@")
    else
        command docker "$@"
    fi
}
I put the set -x in there so it'll print when winpty is being invoked, that way there's no hidden magic. I like to be reminded when my shell is doing sneaky things.
$ docker exec -it my_container bash
+ winpty docker exec -it my_container bash
root@0991eb946acc:/var/www/my_container#
I'm not familiar with winpty, but I expect winpty docker will call the docker binary and not this shell function. But if I'm wrong you're in trouble, because the function will call itself over and over in an endless recursive loop. Yikes! If that happens you can use which to ensure it calls the binary.
docker() {
    if [[ ${1:-} == exec ]]; then
        (set -x; winpty "$(which docker)" "$@")
    else
        command docker "$@"
    fi
}
If you're wondering about the shell syntax:
${1} is the function's first argument.
${1:-} ensures you don't get an "unbound variable" error on the off-chance that you have set -u enabled to detect unset variables.
"$@" is an array of all the function's arguments.
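The dispatch pattern itself is easy to try without docker or winpty installed; here a hypothetical mytool function stands in for the wrapper, with echo in place of the prefixed and plain binaries so the branch logic is observable:

```shell
#!/usr/bin/env bash
# "mytool" is a stand-in for the docker wrapper; echo stands in for
# winpty and the real binary, so we can see which branch was taken.
mytool() {
    if [[ ${1:-} == exec ]]; then
        echo "prefixed: winpty mytool $*"
    else
        echo "plain: mytool $*"
    fi
}

mytool exec -it my_container bash   # prints: prefixed: winpty mytool exec -it my_container bash
mytool ps -a                        # prints: plain: mytool ps -a
```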

Dereference environment variable on parameter expansion in shell script

I am trying to dereference the value of an environment variable using the parameter expansion $@, but it doesn't seem to work.
I need to call a shell script with certain arguments. The list of arguments contains environment variables, and the environment variables are expected to be present where the shell script is executed. I do not know the list of commands beforehand, so I am expanding that list of commands using $@. However, the script is not able to dereference the value of the environment variables.
A minimal setup which explains my problem can be done as below.
Dockerfile
FROM alpine:3.10
ENV MY_VAR=production
WORKDIR /app
COPY run.sh .
ENTRYPOINT [ "sh", "run.sh" ]
run.sh
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
I can build the image using docker build . -t env-test. When I run it using docker run env-test:latest 'echo $MY_VAR', I get the below output.
Value of MY_VAR is production
Begin
$MY_VAR
Done
While the output that I am expecting is:
Value of MY_VAR is production
Begin
production
Done
SideNote: In actuality I am trying to run it using a compose file like below:
version: '3'
services:
run:
image: env-test:latest
command: echo $$MY_VAR
but it again gives me a similar result.
Expanding on the eval approach, here is a simple bash script that will use eval to evaluate a string as a sequence of bash commands:
#!/usr/bin/env bash
echo program args: $@
eval $@
but beware, eval comes with dangers:
https://medium.com/dot-debug/the-perils-of-bash-eval-cc5f9e309cae
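The difference eval makes can be seen locally with a positional parameter holding a command string, which is exactly the shape of the container's argument:

```shell
#!/usr/bin/env bash
export MY_VAR=production
set -- 'echo $MY_VAR'   # simulate the argument the container receives

# Without eval: $@ word-splits into `echo` and the literal `$MY_VAR`;
# the embedded variable reference is never re-expanded.
$@          # prints: $MY_VAR

# With eval: the string is re-parsed as shell code first.
eval "$@"   # prints: production
```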
First thing: in run.sh, the line containing $@ just expands to the script's arguments.
#!/bin/sh
echo "Value of MY_VAR is" $MY_VAR
echo "Begin"
$@
echo "Done"
$@ = stores all the arguments as a list of strings
$* = stores all the arguments as a single string
$# = stores the number of arguments
What does $@ mean in a shell script?
Second thing: when you run the command below
docker run env-test:latest 'echo $MY_VAR'
it will look for $MY_VAR on the host system, not in the container.
To set the container's environment, pass variables as -e MY_VAR=test, not as an argument to docker run, which would be expanded against the host's environment.
docker run -e MY_VAR=test env-test:latest
So the value of MY_VAR will be test, not production.
To debug the argument to docker run:
export MY_VAR2="value_from_host"
Now run
docker run env-test:latest "echo $MY_VAR2"
and the value will be value_from_host, because the argument is expanded on the host.
There are more ways to skin this particular cat:
me@computer:~$ docker run -it --rm ubuntu:20.04 bash -c 'echo $HOSTNAME'
e610946f50c1
Here, we're calling on bash inside the container to process everything inside the single quotes, so variable substitution and/or expansion isn't applied by your shell, but by the shell inside the container.
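A local child shell shows the same effect as the shell inside the container (the values here are illustrative):

```shell
#!/usr/bin/env bash
export MY_VAR=host-value

# Double quotes: the current shell substitutes $MY_VAR before the
# child even starts, so the child's own value never appears.
bash -c "MY_VAR=child-value; echo $MY_VAR"    # prints: host-value

# Single quotes: the string reaches the child untouched, and the
# child expands $MY_VAR itself.
bash -c 'MY_VAR=child-value; echo $MY_VAR'    # prints: child-value
```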
Another approach is:
me@computer:~$ cat test.sh
#!/bin/bash
echo $HOSTNAME
me@computer:~$ cat test.sh | docker run -i --rm ubuntu:20.04 bash
62ba950a60fe
In this case, cat is "pushing" the contents of the script to bash in the container, so it's functionally equivalent to my first example. The first method is "cleaner"; however, if you've got a more complex script, multi-line variables or other stuff that's difficult to put into a single command, then the second method is a better choice.
Note: The hostname is different in each example, because I'm using the --rm option which discards the container once it exits. This is great when you want to run a command in a container but don't need to keep the container afterwards.

Shell script to enter Docker container and execute command, and eventually exit

I want to write a shell script that enters into a running docker container, edits a specific file and then exits it.
My initial attempt was this -
Create run.sh file.
Paste the following commands into it
docker exec -it container1 bash
sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
exit
Run the script -
bash ./run.sh
However, once the script enters container1, it lands in the container's bash prompt. The whole script seems to break as soon as I enter the container, leaving behind the parent shell that contains the rest of the script.
The issue is solved by using the below piece of code:
myHostName="$(hostname)"
docker exec -i -e VAR=${myHostName} root_reverse-proxy_1 bash <<'EOF'
sed -i -e "s/ServerName .*/ServerName $VAR/" /etc/httpd/conf.d/vhosts.conf
echo -e "\n Updated /etc/httpd/conf.d/vhosts.conf $VAR \n"
exit
EOF
I think you are close. You can try something like:
docker exec container1 sed -i -e 's/false/true/g' /opt/data_dir/gs.xml
Explanations:
-it is for an interactive session, so you don't need it here.
docker can execute any command (like sed); you don't have to run sed via bash.
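The sed invocation itself can be rehearsed on a local file before wiring it into docker exec (the XML content here is a made-up stand-in for gs.xml):

```shell
#!/usr/bin/env bash
tmpfile=$(mktemp)
printf '<gs><flag>false</flag></gs>\n' > "$tmpfile"

# The exact expression docker exec would hand to sed in the container:
sed -i -e 's/false/true/g' "$tmpfile"

cat "$tmpfile"    # prints: <gs><flag>true</flag></gs>
rm -f "$tmpfile"
```

(Note that GNU sed's -i is assumed; BSD/macOS sed needs -i ''.)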

Bash: Execute command WITH ARGUMENTS in new terminal [duplicate]

This question already has answers here:
how do i start commands in new terminals in BASH script
(2 answers)
Closed 20 days ago.
So I want to open a new terminal in bash and execute a command with arguments.
As long as I only use something like ls as the command it works fine, but when I use something like route -n, i.e. a command with arguments, it doesn't work.
The code:
gnome-terminal --window-with-profile=Bash -e whoami #WORKS
gnome-terminal --window-with-profile=Bash -e route -n #DOESNT WORK
I already tried putting "" around the command and all that, but it still doesn't work.
You can start a new terminal with a command using the following:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>"
To continue the terminal with the normal bash profile, add exec bash:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>; exec bash"
Here's how to create a Here document and pass it as the command:
cmd="$(printf '%s\n' 'wc -w <<-EOF
First line of Here document.
Second line.
The output of this command will be '15'.
EOF' 'exec bash')"
xterm -e bash -c "${cmd}"
To open a new terminal and run an initial command with a script, add the following in a script:
nohup xterm -e bash -c "$(printf '%s\nexec bash' "$*")" &>/dev/null &
When $* is quoted, it expands the arguments to a single word, with each separated by the first character of IFS. nohup and &>/dev/null & are used only to allow the terminal to run in the background.
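How the quoted "$*" joins the arguments into one word, versus "$@" keeping them separate, can be checked in isolation:

```shell
#!/usr/bin/env bash
show() {
    # "$*": all arguments joined into one word by the first char of IFS.
    printf 'joined: [%s]\n' "$*"
    # "$@": one word per argument.
    printf 'word:   [%s]\n' "$@"
}

show route -n
# prints: joined: [route -n]
#         word:   [route]
#         word:   [-n]
```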
Try this:
gnome-terminal --window-with-profile=Bash -e 'bash -c "route -n; read"'
The final read prevents the window from closing after execution of the previous commands. It will close when you press a key.
If you want to experience headaches, you can try with more quote nesting:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "route -n; read -p '"'Press a key...'"'"'
(In the following examples there is no final read. Let’s suppose we fixed that in the profile.)
If you want to print an empty line and enjoy multi-level escaping too:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "printf \\\\n; route -n"'
The same, with another quoting style:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c '\''printf "\n"; route -n'\'
Variables are expanded in double quotes, not single quotes, so if you want them expanded you need to ensure that the outermost quotes are double:
command='printf "\n"; route -n'
gnome-terminal --window-with-profile=Bash \
-e "bash -c '$command'"
Quoting can become really complex. When you need something more advanced than a simple couple of commands, it is advisable to write an independent shell script with all the readable, parametrized code you need, save it somewhere, say /home/user/bin/mycommand, and then invoke it simply as
gnome-terminal --window-with-profile=Bash -e /home/user/bin/mycommand
