Pass ENV to Docker container running single command - bash

This prints blank:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo $HELLO"
However this works:
docker run --rm -it --env HELLO="world" ubuntu:18.04 bash
# in the container
echo $HELLO
HELLO does seem to be passed to the container, though:
docker run --rm --env HELLO="world" ubuntu:18.04 env
Why is the first command not seeing HELLO? What am I missing?

Because of the double quotes, $HELLO is expanded by the shell on the Docker host before the command ever reaches the container. So you need to either escape the dollar sign ($) with a backslash (\), which tells bash that the $ is part of the command itself and does not need to be evaluated by the current shell (the Docker host in our case), or use single quotes (''), like this:
Using single quotes
$ docker run --rm --env HELLO="world" ubuntu:18.04 bash -c 'echo $HELLO'
world
Using Backslash to escape
$ docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo \$HELLO"
world

The reason you are not seeing what you expect is that things are being evaluated earlier than you expect them to be.
When you run:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo $HELLO"
The "echo $HELLO" really isn't any different to bash than:
echo "echo $HELLO"
The shell (bash) parses double quotes (") and things inside them. It sees "echo $HELLO" and replaces the variable $HOME with it's value. If $HOME is not defined, this evaluates to echo.
So,
echo "echo $HELLO"
is parsed and evaluated by your shell, which then just runs this at the end:
echo "echo "
So the "echo $HELLO" in your docker command is evaluated to "echo " and that's what gets passed to the docker command.
What you want to do is to prevent your shell from evaluating the variable. You can do it a couple of ways:
You can use single quotes instead of double quotes. Your shell doesn't parse it; it will be passed to the bash inside the container as is:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c 'echo $HELLO'
You can escape the $ to avoid evaluating it in this shell and let the bash inside the docker container evaluate it:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo \$HELLO"

Execute commands with args and override entrypoint on docker run

I am trying to override the entrypoint in a docker image with a script execution that accepts arguments, and it fails as follows:
▶ docker run --entrypoint "/bin/sh -c 'my-script.sh arg1 arg2'" my-image:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/sh -c 'my-script.sh arg1 arg2'": stat /bin/sh -c 'my-script.sh arg1 arg2': no such file or directory: unknown.
However when I exec to the container, the above command succeeds:
▶ docker run --entrypoint sh -it my-image:latest
~ $ /bin/sh -c 'my-script.sh arg1 arg2'
Success
Am I missing something in the syntax?
Remember that arguments after the container image name are simply passed to the ENTRYPOINT script. So you can write:
docker run --entrypoint my-script.sh my-image:latest arg1 arg2
For example, if I have my-script.sh (mode 0755) containing:
#!/bin/sh
for arg in "$#"; do
echo "Arg: $arg"
done
And a Dockerfile like this:
FROM docker.io/alpine:latest
COPY my-script.sh /usr/local/bin/
ENTRYPOINT ["date"]
Then I can run:
docker run --rm --entrypoint my-script.sh my-image arg1 arg2
And get as output:
Arg: arg1
Arg: arg2
If you want to run an arbitrary sequence of shell commands, you can of course do this:
docker run --rm --entrypoint sh my-image \
-c 'ls -al && my-script.sh arg1 arg2'
If you need to do this at all regularly, you can refactor your Dockerfile to make this easier to do.
A Docker container's main process is run by concatenating together the "entrypoint" and "command" argument lists. In a Dockerfile, these come from the ENTRYPOINT and CMD directives. In the docker run command this is trickier: anything after the image name is the "command" part, but the "entrypoint" part needs to be provided by the --entrypoint argument; it needs to come before the image name, and it can only be a single word.
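For instance, with the Dockerfile above (whose entrypoint is date), you can see the two halves being concatenated:
docker run --rm my-image -u
# runs "date -u" and prints the current UTC time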
If you need to routinely replace the command, the syntax becomes much cleaner if you set it using CMD and not ENTRYPOINT in the Dockerfile.
# Dockerfile
CMD ["some", "main", "command"] # not ENTRYPOINT
If you make this change, then you can just put your alternate command after the image name in the docker run command, without a --entrypoint option and without splitting the command string around the image name.
docker run my-image:latest /bin/sh -c 'my-script.sh arg1 arg2'
I will somewhat routinely recommend a pattern where ENTRYPOINT is a wrapper script that does some first-time setup, then does something like exec "$@" to run the command that's passed to it as arguments. That setup is compatible with this CMD-first approach: the entrypoint wrapper will do its setup and then run the override command instead of the image's command.

How can I script a Docker command into a 'single word' binary? Using bash script?

When I install something like nmap (even from APT), I can't get it to execute correctly, so I like to go the container route. Instead of typing:
docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
I figured maybe I could script it out, but nothing I've learned or found on Google, YouTube, etc. has helped so far... Can somebody lend a hand? I need to know how to get Bash to execute a command with args:
execute like:
./nmap.sh -A -T4 -Pn x.x.x.x
#!/bin/bash
echo docker run --rm -it instrumentisto/nmap $1 $2 $3 $4 $5
but I don't know how to get bash to run this instead of just echoing it. Thanks ahead!
Two solutions: create an alias, or create a script.
With an alias
The command you write is replaced with the value of the alias, so
alias nmap="docker run --rm -it instrumentisto/nmap"
nmap -A -T4 -Pn x.x.x.x
# executes docker run --rm -it instrumentisto/nmap -A -T4 -Pn x.x.x.x
Aliases are not persistent, so you will have to store them in your bash config (generally ~/.bashrc).
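For example, to make it persistent, append the alias to ~/.bashrc and reload:
echo 'alias nmap="docker run --rm -it instrumentisto/nmap"' >> ~/.bashrc
source ~/.bashrc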
With a script
#!/bin/bash
set -Eeuo pipefail
docker run --rm -it instrumentisto/nmap "$@"
"$#" will forward all the arguments provided to the script directly to the command. The quotes are important, if you call your script with quoted values like ./nmap "something with spaces", that's one argument, it needs to be kept as one argument.
Bonus: With a function
Just like the script, you need to forward arguments when writing functions; and just like aliases, functions are not persistent, so you have to store them in your bash config:
nmap() {
  docker run --rm -it instrumentisto/nmap "$@"
}
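As with the alias, put the function in ~/.bashrc, then reload and call it like the real binary:
source ~/.bashrc
nmap -A -T4 -Pn x.x.x.x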

docker run entrypoint with multiple commands

How can I have an entrypoint in a docker run which executes multiple commands?
Something like:
docker run --entrypoint "echo 'hello' && echo 'world'" ... <image>
The image I'm trying to run already has an entrypoint set in the Dockerfile, so a solution like the following seems not to work, because it looks like my commands are ignored and only the original entrypoint is executed:
docker run ... <image> bash -c "echo 'hello' && echo 'world'"
In my use-case I must use the docker run command. Solutions which change the Dockerfile are not acceptable, since it is not in my hands.
As a style point, this gets vastly easier if your image has a CMD that can be overridden. If you only need to run one command with no initial setup, make it be the CMD and not the ENTRYPOINT:
# CMD, not ENTRYPOINT
CMD ./some_command
If you need to do some initial setup and then launch the main command, make the ENTRYPOINT be a shell script that ends with the special instruction exec "$@". The CMD will be passed into it as parameters, and this line replaces the shell script with that command.
#!/bin/sh
# entrypoint.sh
... do first time setup, run database migrations, set variables ...
exec "$#"
# Dockerfile
...
ENTRYPOINT ["./entrypoint.sh"] # MUST be JSON-array syntax
CMD ./some_command # as before
If you do these things, then you can use your initial docker run form. This will replace the CMD but leave the ENTRYPOINT intact. In the wrapper-script case, your alternate command will be run as the exec "$@" command, so all of the first-time setup will be done first.
# Assuming the image correctly honors the CMD
docker run ... \
image-name \
sh -c 'echo "foo is $FOO" && echo "bar is $BAR"'
If you really can't do this, you can override the entrypoint with docker run --entrypoint. This runs instead of the image's entrypoint (if you still want the image's entrypoint, you have to run it yourself), and the syntax is awkward:
# Run a shell command instead of the entrypoint
docker run ... \
--entrypoint /bin/sh \
image-name \
-c 'echo "foo is $FOO" && echo "bar is $BAR"'
Note that the --entrypoint option comes before the image name, and its arguments come after the image name.
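If you need to reproduce the image's original entrypoint as part of your replacement command, you can look it up first with docker inspect:
docker inspect --format '{{json .Config.Entrypoint}}' image-name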

Docker exec quoting variables

I'd like to know if there's a way to do this.
Let's say the Dockerfile contains this line, which specifies the path of an executable:
ENV CLI /usr/local/bin/myprogram
I'd like to be able to call this program using the ENV variable name through the exec command.
For example
docker exec -it <my container> 'echo something-${CLI}'
Expecting
something-/usr/local/bin/myprogram
However that returns:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"${CLI} do something\": executable file not found in $PATH": unknown
OK, I found a way to do it: all you need to do is evaluate the command with bash:
docker exec -it <container id> bash -c 'echo something-${CLI}'
returns something-/usr/local/bin/myprogram
If the CLI environment variable is not already set in the container, you can also pass it in such as:
docker exec -it -e CLI=/usr/local/bin/myprogram <container id> bash -c 'echo something-${CLI}'
See the help file:
docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Options:
  -d, --detach         Detached mode: run command in the background
  -e, --env list       Set environment variables
  ...
In its original revision, the question used docker exec -it <my container> '${CLI} do something', with the expectation that ${CLI} would be substituted with /usr/local/bin/myprogram (as the exec COMMAND) and everything after it passed as ARGs to /usr/local/bin/myprogram. This will not work, and it is clearly documented: https://docs.docker.com/engine/reference/commandline/exec/
COMMAND should be an executable, a chained or a quoted command will not work. Example:
docker exec -ti my_container "echo a && echo b" will not work, but
docker exec -ti my_container sh -c "echo a && echo b" will.
Following the documentation, this will work as expected: docker exec -ti my_container sh -c '${CLI} foo'. With single quotes, ${CLI} is expanded by the shell inside the container (where the ENV variable is set) and then executed with the argument foo (effectively sh -c '/usr/local/bin/myprogram foo').
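The quoting rules from the first question apply here as well; compare where the expansion happens:
docker exec -ti my_container sh -c 'echo ${CLI}'   # expanded by sh inside the container
docker exec -ti my_container sh -c "echo ${CLI}"   # expanded by your shell on the host first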
Alternatively, you could set the ENTRYPOINT to your script and pass in arguments with CMD, or at the command line with docker run. For example:
Given the below directory structure:
.
├── Dockerfile
└── example.sh
The Dockerfile contents:
FROM ubuntu:18.04
COPY example.sh /bin
RUN chmod u+x /bin/example.sh
ENTRYPOINT ["/bin/example.sh"]
CMD ["bla"]
And the example.sh script contents:
#!/bin/bash
echo "$1"
The CMD specified in the Dockerfile after the ENTRYPOINT will be the default argument for your script and you can override the default argument on the command line (assuming that the image is built and tagged as example:0.1):
user@host> docker run --rm example:0.1
bla
user#host> docker run --rm example:0.1 "arbitrary text"
arbitrary text
Note: this is my go-to article for the differences between ENTRYPOINT and CMD in Dockerfiles: https://medium.freecodecamp.org/docker-entrypoint-cmd-dockerfile-best-practices-abc591c30e21

How does docker run interpret dynamically generated --env arguments

I am trying to provide a dynamically generated list of --env VAR1 --env VAR2 --env-file env.list environment variables to docker run.
Unfortunately it is not working.
For --env mapped variables, the variables are not visible in the container.
For an --env-file provided file, docker complains that it cannot find the file: docker: open "env.list": no such file or directory.
Details
Running:
# env_params contains either --env or --env-file arguments
MY_VAR=123
env_params='--env "MY_VAR"'
echo ${env_params}
docker run -it --rm \
${env_params} \
my_docker_image env | grep MY_VAR
will not output anything. MY_VAR is not visible inside the container. But:
MY_VAR=123
docker run -it --rm \
--env "MY_VAR" \
my_docker_image env | grep MY_VAR
will work and 123 will be printed.
In a similar way --env-file will not work when provided through env_params but will work when provided directly to the docker run command.
What am I doing wrong?
There are two issues here.
First, when you run, in your shell:
MY_VAR=123
You have not set an environment variable. You have set a local shell variable. When you use --env MY_VAR, you are telling Docker that you want to make the environment variable MY_VAR available inside the container, and since it doesn't exist you get nothing:
$ MY_VAR=123
$ docker run -it --rm -e MY_VAR alpine env | grep MY_VAR
<crickets>
If you first export that to the environment:
$ export MY_VAR=123
$ docker run -it --rm -e MY_VAR alpine env | grep MY_VAR
MY_VAR=123
Then it will work as you expect. Alternately, you can use the VARNAME=VARVALUE form of the --env option:
docker run -e "MY_VAR=${MY_VAR}" ...
The second issue has to do with how shell variable interpolation works. If you have:
env_params='--env "MY_VAR"'
docker run -it --rm \
${env_params} \
alpine env
Then the resulting command line is:
docker run -it --rm --env '"MY_VAR"' alpine env
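You can verify the stray quotes by printing each word the shell produces on its own line:
$ printf '%s\n' ${env_params}
--env
"MY_VAR"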
That is, the argument you're passing to docker run includes literal double quotes. You can fix that through the use of the eval statement (keeping in mind that you'll need to modify your script to export MY_VAR):
eval docker run -it --rm \
${env_params} \
alpine env | grep MY_VAR
Alternately (and I would argue preferably) you can use your env_params variable as an array, as long as you're using bash:
env_params=(--env MY_VAR)
env_params+=(--env SOME_OTHER_VAR)
docker run -it --rm \
"${env_params[#]}" \
alpine env | grep MY_VAR
Which would result in the correct command line:
docker run -it --rm --env MY_VAR --env SOME_OTHER_VAR alpine env
I guess the summary here is that your issues ultimately have nothing to do with "how docker run interprets dynamically generated arguments", but have everything to do with "how shell variables and interpolation work".
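The same array approach also fixes the --env-file case from the question, since the filename is passed through as a single clean argument (this assumes env.list actually exists in the directory you run from):
env_params=(--env MY_VAR --env-file ./env.list)
docker run -it --rm "${env_params[@]}" my_docker_image env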
