Surrounding bash command with $(<command>) - bash

I was reading up on docker-machine (https://github.com/docker/machine) when I noticed this instruction
$ docker $(docker-machine config dev) run busybox echo hello world
I was curious what the $(docker-machine config dev) part does; in particular, what is the point of the $() bit? docker-machine config dev is a command, so does wrapping it in $() do some bash magic?

This is command substitution: bash runs the command inside $(...) and replaces the whole $(...) expression with that command's output before invoking docker. It's like doing t=$(echo hello), which makes $t equal to "hello". You could achieve the same thing with t=$(docker-machine config dev) and then docker $t run busybox echo hello world
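To make that concrete, here is roughly what the substitution does. The flags printed by docker-machine config depend entirely on your machine, so treat the values below as illustrative:

$ docker-machine config dev
--tlsverify
--tlscacert="/Users/you/.docker/machine/machines/dev/ca.pem"
--tlscert="/Users/you/.docker/machine/machines/dev/cert.pem"
--tlskey="/Users/you/.docker/machine/machines/dev/key.pem"
-H=tcp://192.168.99.100:2376

# so after substitution the original command effectively becomes:
$ docker --tlsverify --tlscacert="..." --tlscert="..." --tlskey="..." -H=tcp://192.168.99.100:2376 run busybox echo hello world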

Related

Docker image env variables overwritten by local machine

Why is it that when checking the env for an image I create, I get the image's environment variables listed as expected, but when I try to access one of those env variables (e.g. $PATH), I get my local machine's environment variable output instead?
I believe I misunderstand how docker environment variables work. I'm attempting to run some commands against a docker container and am seeing what I consider unexpected behavior. I have created a simple example to try to demonstrate.
Dockerfile:
FROM node:12.13.0
ENV PATH="${PATH}:/custom-path/goes-here"
Commands:
docker build . --tag env-test
docker run env-test /bin/bash -c "env"
docker run env-test /bin/bash -c "echo $PATH"
Expected Output from final two commands.
docker run env-test /bin/bash -c "env"
...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
...
docker run env-test /bin/bash -c "echo $PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
Actual Output from final two commands
docker run env-test /bin/bash -c "env"
...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
...
docker run env-test /bin/bash -c "echo $PATH"
/Users/local-machine-user/Downloads/google-cloud-sdk/bin:/Users/local-machine-user/.nvm/versions/node/v12.16.1/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/local-machine-user/Downloads/google-cloud-sdk/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin
The output of running echo $PATH against the created image is returning my local machine's $PATH variable. What?
The primary thing I'm trying to do is execute a script against the docker image that requires those environment variables I set in the image, but the script fails because the environment variables the script uses end up being for my local machine and not the ones specified in the image.
Say you're trying to run your third example
docker run env-test /bin/bash -c "echo $PATH"
The first thing that happens here is that your local shell processes this command and does its usual set of expansions. Environment variable references in double quotes are expanded, for example. Once it has built the final command line, the shell executes it.
A generally useful trick is to just put echo at the front of the command
echo docker run env-test /bin/bash -c "echo $PATH"
This will show you the command that would have been run, but not actually run it.
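On the host from the question, that echo would print something like the following; note that the double quotes are already gone and $PATH has already been replaced with the host's value (truncated here):

docker run env-test /bin/bash -c echo /Users/local-machine-user/Downloads/google-cloud-sdk/bin:/Users/local-machine-user/.nvm/versions/node/v12.16.1/bin:...:/usr/bin:/bin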
To make this work, you need to prevent your local shell from expanding the variable, so that the shell you're launching in the container can do it instead. Either single quotes or backslash escaping will work for this:
docker run env-test /bin/sh -c 'echo $PATH'
docker run env-test /bin/sh -c "echo \$PATH"
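With either form the expansion happens inside the container, so the output should match the PATH shown in the env output above:

$ docker run env-test /bin/sh -c 'echo $PATH'
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here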
The primary thing I'm trying to do is execute a script against the docker image that requires those environment variables I set in the image
The best way to approach this is probably to write a normal shell script and COPY it into your image. This saves both layers of quoting and confusion around which shell is processing things like variables. If you can't modify the image, an alternative is to bind-mount a script from the host.
# If the script is in the image
docker run --rm env-test path-echoer.sh
# If not
docker run --rm -v $PWD:/scripts env-test /scripts/path-echoer.sh
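For completeness, a minimal sketch of what that could look like; path-echoer.sh is just the illustrative name used above, and the install location is an assumption:

#!/bin/sh
# path-echoer.sh -- runs inside the container, so it sees the image's PATH
echo "$PATH"

# and, if you bake it into the image, in the Dockerfile:
COPY path-echoer.sh /usr/local/bin/path-echoer.sh
RUN chmod +x /usr/local/bin/path-echoer.sh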
You should escape the dollar sign when using $PATH inside a double-quoted string: "echo \$PATH"
What happens is that when running this line:
docker run env-test /bin/bash -c "echo $PATH"
Bash first expands $PATH, then passes the resulting string to docker. So the command your host shell actually runs is:
docker run env-test /bin/bash -c "echo /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
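With the dollar sign escaped, the host shell passes $PATH through untouched and the shell inside the container expands it instead:

$ docker run env-test /bin/bash -c "echo \$PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here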

"docker run" command to evaluate bash $variable inside the container

How can I run a command inside a docker container, using docker run, where bash variables are evaluated inside the container?
E.g.:
$ SOMEONE=host
$ docker run --env SOMEONE=busybox busybox echo "Hello $SOMEONE"
Hello host
How can I make it output Hello busybox?
To prevent the replacement from happening from the outer shell, one needs to use single quotes, not double.
To ensure that there is an inner shell that can do a replacement (echo doesn't have any such functionality itself!), we need to explicitly call sh -c; otherwise, Docker will just directly invoke execlp("echo", "echo", "$SOMEONE", NULL) inside the container, which doesn't actually do any substitution.
Thus:
docker run --env SOMEONE=busybox busybox sh -c 'echo "Hello $SOMEONE"'
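Put side by side, the only difference is which shell gets to expand $SOMEONE:

$ SOMEONE=host
$ docker run --env SOMEONE=busybox busybox echo "Hello $SOMEONE"          # host shell expands it
Hello host
$ docker run --env SOMEONE=busybox busybox sh -c 'echo "Hello $SOMEONE"'  # shell inside the container expands it
Hello busybox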
Using docker run, where bash variables are evaluated inside
By far the easiest, non-cryptic approach is to write a bash function with all commands to be executed inside the container. Benefits:
Easy to write - no need to use special quote placement and escaping
Easy to debug - see what bash actually does inside the container
Easy to maintain - write readable scripts, not cryptic commands
Easy to write and maintain
Here's an example bash function that expands all variables inside a docker container.
#!/bin/bash
# create-db.sh -- run on the host as: ./create-db.sh [dbname]
function main_inside_docker {
    # all variables are expanded inside docker
    DBNAME=${1:-testdb}
    echo "creating database $DBNAME"
    PATH=$MSSQL_PATH:$PATH
    SQL="
    create database $DBNAME;
    select database_id, name, create_date from sys.databases;
    "
    sqlcmd -U SA -P $SA_PASSWORD -Q "$SQL"
}

# declare the function inside docker and run it there, passing along the host's arguments
CMD="$(declare -f main_inside_docker); main_inside_docker $@"
docker exec -it mssql bash -c "$CMD"
Essentially this declares the main_inside_docker function inside the container, then runs it with all arguments provided from the host invocation. All variables inside the function are expanded inside the docker container. The function just works the way one would expect.
Easy to debug
To debug the function, set "-x" as the first command in $CMD:
CMD="set -x; $(declare -f ...)"
When running it this way, it will print the bash trace from inside the container nicely:
(host) $ ./create-db.sh foodb
+ main_inside_docker foodb
+ DBNAME=foodb
+ echo 'creating database foodb'
creating database foodb
...

Eval in docker-machine: terminal vs shell script

I'm trying to run a simple shell script to automate changing docker-machine environments. The problem is this: when I run the following commands directly in the Mac terminal, the following is output:
eval $(docker-machine env default)
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically what you would expect. However, when I run the following .sh script:
#!/usr/bin/env bash
eval $(docker-machine env default)
The output is:
./run.sh
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically, it is not setting it as active and I cannot access it.
Has anyone run into this issue before and knows how to solve it? Seems really strange to me, have got pretty much everything else running and automated apart from this facet.
Cheers, Aaron
I think you need to source your shell script
source ./myscript.sh
as the exports performed by the eval happen in the child process that runs the script and are thrown away when it exits. They need to end up in the parent, e.g. your login shell.
Consider a.sh
#!/bin/bash
eval $(echo 'export a=123')
export b=234
when run in two ways
$ ./a.sh
$ echo $a
$ echo $b
$ source a.sh
$ echo $a
123
$ echo $b
234
$
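Applied to the run.sh from the question, the same fix looks like this; the output mirrors the terminal session above:

$ source ./run.sh
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * digitalocean Running tcp://***.**.***.***:**** v1.12.0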

Scripting Docker, Not Connected After Running Script?

So I have a script that looks like this:
#!/bin/bash
if [ $1 ]; then
    docker-machine start $1
    docker-machine env $1
    eval $(docker-machine env $1)
    docker ps -a
fi
Once it has run, though, the scope of these commands seems to be over. For instance, I don't have a connection to the docker-machine once the script has run, but I'd like to script this part out so I can have access to it.
For instance, after running this script ("./script.sh") I still can't run "docker ps -a".
What's the reason this happens and how could I get it to effectively be connected to after executing this script?
A script (or any other process) cannot modify the environment of its parent process. That is precisely why docker-machine env emits shell code that needs to be evaluated with eval.
If you want these variables accessible outside of your script, you would need to arrange to run eval $(docker-machine env <whatever>) in your current shell.
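Two common ways to arrange that (sketches; dev stands in for whatever machine name you would pass as $1):

# 1. Run the script in your current shell instead of a child process
source ./script.sh dev

# 2. Or skip the script for the environment part and eval directly in your shell
eval "$(docker-machine env dev)"
docker ps -a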

Setting environment variables when running docker in detached mode

If I include the following line in /root/.bashrc:
export A="AAA"
then when I run the docker container in interactive mode (docker run -i), the $A variable keeps its value. However if I run the container in detached mode I cannot access the variable. Even if I run the container explicitly sourcing the .bashrc like
docker run -d my_image /bin/bash -c "cd /root && source .bashrc && echo $A"
that line produces empty output.
So, why is this happening? And how can I set the environment variables defined in the .bashrc file?
Any help would be very much appreciated!
The first problem is that the command you are running has $A being interpreted by your host's shell (not the container's shell). On your host, $A is likely blank, so your command effectively becomes:
docker run -i my_image /bin/bash -c "cd /root && source .bashrc && echo "
Which does exactly as it says. We can escape the variable so it is sent to the container and properly evaluated there:
docker run -i my_image /bin/bash -c "echo \$A"
But this will also be blank because, although the container is interactive, the shell inside it is not an interactive shell, so it never reads .bashrc. We can force it to be:
docker run -i my_image /bin/bash -i -c "echo \$A"
Woohoo, we finally got our desired result, but with an added error from bash because there is no TTY. So, instead of docker's interactive mode, we can just allocate a pseudo-TTY:
docker run -t my_image /bin/bash -i -c "echo \$A"
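If the variable is only needed for non-interactive commands like these, a simpler route (a sketch, not from the question) is to pass it at run time with -e instead of going through .bashrc:

$ docker run -e A=AAA my_image /bin/bash -c 'echo $A'
AAA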
After running some tests, it appears that when running a container in detached mode, overriding the default environment variables doesn't always happen the way we want, depending on where you are in the Dockerfile.
As an example, if you run a container in detached mode like so:
docker run -d --name image_name_container image_name
then whatever ENV variables you defined within the Dockerfile take effect everywhere (read the rest and you will understand what "everywhere" means).
Example of a simple Dockerfile (alpine is just a lightweight Linux distribution):
FROM alpine:latest
#declaring a docker env variable and giving it a default value
ENV MY_ENV_VARIABLE dummy_value
#copying two dummy scripts into a place where i can execute them straight away
COPY ./start.sh /usr/sbin
COPY ./not_start.sh /usr/sbin
#in this script i could do: echo $MY_ENV_VARIABLE > /test1.txt
RUN not_start.sh
RUN echo $MY_ENV_VARIABLE > /test2.txt
#in this script i could do: echo $MY_ENV_VARIABLE > /test3.txt
ENTRYPOINT ["start.sh"]
Now if you want to run your container in detached and override some ENV variables, like so:
docker run -d -e MY_ENV_VARIABLE=new_value --name image_name_container image_name
Surprise! The variable MY_ENV_VARIABLE is only overridden inside the script that is run by the ENTRYPOINT (and I checked, the same thing happens if you replace ENTRYPOINT with CMD). It would also be overridden in a subscript that you call from this start.sh script. But the $MY_ENV_VARIABLE references inside a RUN Dockerfile instruction, or in the Dockerfile itself, do not get overridden, because those were already evaluated at build time.
In other words, $MY_ENV_VARIABLE expands to dummy_value or new_value depending on whether you are inside the ENTRYPOINT (or CMD) or not.
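A quick way to see this split with the Dockerfile above (a sketch; it assumes start.sh simply does echo $MY_ENV_VARIABLE):

# the ENTRYPOINT script sees the override:
docker run --rm -e MY_ENV_VARIABLE=new_value image_name
new_value

# the file written by the RUN step at build time still holds the default:
docker run --rm --entrypoint cat image_name /test2.txt
dummy_value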
