Escaped $ (dollar) in a bash command in docker compose is not interpreted - bash

I need some help with docker compose and the $ character.
Here is an example of a container in docker compose:
services:
  setup:
    image: image:tag
    container_name: setup
    user: "0"
    command: >
      /bin/sh -c '
      sed -i 's/before with a $$/after with a $$/' /foo/bar/something.txt;
      '
When I try this, the sed command ends up without any $, even though I escaped it twice. What am I missing?
Best regards,
On the docker compose side, I see there is no other way to escape the $ character: I have to double it with another $: $$.
However, in the bash command no $ makes it through, so the sed command doesn't work.
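Besides the $$ escaping, the nested single quotes are likely part of the problem: the inner 's/…/' quotes terminate the outer ones. One way around that is the list form of command, so the script reaches /bin/sh -c as a single argument. A sketch, assuming the same image and file paths as above:

```yaml
services:
  setup:
    image: image:tag
    container_name: setup
    user: "0"
    # list form: the third element is passed to sh -c as one argument,
    # so the inner single quotes survive; $$ still becomes a literal $
    command:
      - /bin/sh
      - -c
      - sed -i 's/before with a $$/after with a $$/' /foo/bar/something.txt
```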

Related

Why variable assignment doesn't work as expected in docker-compose commands?

I have the following in my docker-compose.yml:
my-service:
  image: amazon/aws-cli
  entrypoint: /bin/sh -c
  command: >
    '
    a=1899
    echo "The value of \"a\" is $a"
    '
And when I run it, I see The value of "a" is ., so for some reason, the variable assignment is not working as I would expect. Do you know what's going on?
I tried simplifying my docker compose file to the bare minimum, but the problem persists. I would expect variable assignment and output to work the same as in a bash script.
You will need to escape the $ sign. If you do not escape it, the value will be taken from your host environment rather than from inside the docker container.
So you can change your command to
my-service:
  image: amazon/aws-cli
  entrypoint: /bin/sh -c
  command: >
    '
    a=1899
    echo "The value of \"a\" is $$a"
    '
If you want to pass it from host instead, you can do this instead
my-service:
  image: amazon/aws-cli
  entrypoint: /bin/sh -c
  command: >
    '
    echo "The value of \"a\" is $a"
    '
a=1899 docker-compose up
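You can see which shell does the expansion without docker at all; a minimal local sketch using plain sh in place of the container shell:

```shell
#!/bin/sh
# Double quotes: the outer shell expands $a before sh -c ever runs.
a=outer
sh -c "echo $a"            # the inner shell receives: echo outer

# Single quotes: the inner shell expands $b, so $b must be in its environment.
b=inner sh -c 'echo $b'    # the inner shell receives: echo $b
```

The first command prints outer, the second prints inner; docker compose's $$ plays the role of the single quotes here, deferring expansion to the shell inside the container.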

How can I script a Docker command into a 'single word' binary? Using bash script?

When I install something like nmap (even from APT), I can't get it to execute correctly, so I like to go the container route. Instead of typing:
docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
I figured maybe I could script it out, but nothing I've learned or found on Google, YouTube, etc. has helped so far. Can somebody lend a hand? I need to know how to get bash to execute a command with args:
execute like:
./nmap.sh -A -T4 -Pn x.x.x.x
#!/bin/bash
echo docker run --rm -it instrumentisto/nmap $1 $2 $3 $4 $5
but how to get bash to run this instead of just echoing it, I don't know. Thanks ahead!
Two solutions: create an alias, create a script.
With an alias
The command you write is replaced with the value of the alias, so
alias nmap="docker run --rm -it instrumentisto/nmap"
nmap -A -T4 -Pn x.x.x.x
# executes docker run --rm -it instrumentisto/nmap -A -T4 -Pn x.x.x.x
Aliases are not persistent, so you will have to store the alias in some bash config (generally ~/.bashrc).
With a script
#!/bin/bash
set -Eeuo pipefail
docker run --rm -it instrumentisto/nmap "$@"
"$@" will forward all the arguments provided to the script directly to the command. The quotes are important: if you call your script with quoted values like ./nmap.sh "something with spaces", that's one argument, and it needs to be kept as one argument.
Bonus: With a function
Just like in the script, you need to forward arguments when writing a function; and just like aliases, functions are not persistent, so you have to store them in your bash config:
nmap() {
  docker run --rm -it instrumentisto/nmap "$@"
}

Save output of bash command from Dockerfile after Docker container was launched

I have a Dockerfile with ubuntu image as a base.
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name"]
I expect from this:
1. Execution of a simple bash script, which takes an environment variable from user keyboard input and outputs its value when the docker container runs. This part works.
2. (the part where I have a problem) Saving the values of the environment variable to a file, so that after every run of docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME I can see a list of the values entered from the keyboard.
My idea for part 2 was docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME > /directory/tosave/values.txt. That works, but only the last value is saved, not a list of values.
How can I change the Dockerfile to save the values to a file which Docker will see, and from which it will read and output the values after running? Maybe I shouldn't use ENTRYPOINT?
I'd appreciate any possible help. I've been stuck.
To emphasize: both outputting and saving of the environment variable values are expected.
As @lojza hinted at, > overwrites files whereas >> appends to them, which is why your command is clobbering the file instead of adding to it. So you could fix it with this:
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME >> /directory/tosave/values.txt
Or using tee(1):
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME | tee -a /directory/tosave/values.txt
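The clobber-versus-append behaviour is plain shell redirection, independent of docker; a minimal local sketch:

```shell
#!/bin/sh
f=$(mktemp)
echo test1 > "$f"     # > truncates the file before writing
echo test2 > "$f"     # test1 is gone now
echo test3 >> "$f"    # >> appends to the existing content
cat "$f"              # prints test2 then test3
rm "$f"
```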
To clarify though, the docker container is not writing to values.txt; your shell is redirecting the output of the docker run command to the file. If you want the file to be written to by docker, you should mount a file or directory into the container using -v and redirect the output of the echo there. Here's an example:
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name | tee -a /data/values.txt"]
And then run it like so:
$ docker run --rm -e env_var_name=test1 -v "$(pwd):/data:rw" IMAGE-NAME
test1
$ docker run --rm -e env_var_name=test2 -v "$(pwd):/data:rw" IMAGE-NAME
test2
$ ls -l values.txt
-rw-r--r-- 1 root root 12 May 3 15:11 values.txt
$ cat values.txt
test1
test2
One more thing worth mentioning. echo $env_var_name is printing the value of the environment variable whose name is literally env_var_name. For example if you run the container with -e env_var_name=PATH it would print the literal string PATH and not the value of your $PATH environment variable. This does seem to be the desired outcome, but I thought it was worth explicitly spelling this out.
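That distinction can be checked locally with plain sh, no docker needed:

```shell
#!/bin/sh
# The inner shell prints the value of env_var_name itself,
# not the variable whose name that value happens to spell.
env_var_name=PATH sh -c 'echo "$env_var_name"'   # prints the literal string PATH
```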

Pass ENV to Docker container running single command

This prints blank:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo $HELLO"
However this works:
docker run --rm -it --env HELLO="world" ubuntu:18.04 bash
# in the container
echo $HELLO
HELLO does seem to be passed to the container, though:
docker run --rm --env HELLO="world" ubuntu:18.04 env
Why is the first command not seeing HELLO? What am I missing?
Because of the double quotes, $HELLO is evaluated by the docker host itself when the command is executed, before it ever reaches the container. So you need to either escape the dollar sign ($) with a backslash (\), which tells bash that the $ is part of the command itself and should not be evaluated by the current shell (the docker host in our case), or use single quotes (''), like this:
Using single quotes
$ docker run --rm --env HELLO="world" ubuntu:18.04 bash -c 'echo $HELLO'
world
Using Backslash to escape
$ docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo \$HELLO"
world
The reason you are not seeing what you expect is because things are being evaluated before you expect them to be evaluated.
When you run:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo $HELLO"
The "echo $HELLO" really isn't any different to bash than:
echo "echo $HELLO"
The shell (bash) parses double quotes (") and the text inside them. It sees "echo $HELLO" and replaces the variable $HELLO with its value. Since $HELLO is not defined on the host, this evaluates to just echo.
So,
echo "echo $HELLO"
is parsed and evaluated by your shell, which then just runs this at the end:
echo "echo "
So the "echo $HELLO" in your docker command is evaluated to "echo " and that's what gets passed to the docker command.
What you want to do is to prevent your shell from evaluating the variable. You can do it a couple of ways:
You can use single quotes instead of double quotes. Your shell doesn't parse it; it will be passed to the bash inside the container as is:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c 'echo $HELLO'
You can escape the $ to avoid evaluating it in this shell and let the bash inside the docker container evaluate it:
docker run --rm --env HELLO="world" ubuntu:18.04 bash -c "echo \$HELLO"
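You can see exactly what script text survives the host shell by printing it instead of running it; a local sketch:

```shell
#!/bin/sh
HELLO=world
printf '%s\n' "echo $HELLO"    # host expanded it: the script text is 'echo world'
printf '%s\n' 'echo $HELLO'    # preserved: the container's shell gets 'echo $HELLO'
printf '%s\n' "echo \$HELLO"   # the backslash also preserves it
```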

How to pass arguments with space by environment variable?

In a bash shell, I want to pass arguments via an environment variable,
like this:
$ export DOCKER_OPTIONS="-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d "
$ docker run -d $DOCKER_OPTIONS elasticsearch
I expect "ES_JAVA_OPTS='-Xmx1g -Xms1g'" to be passed as the option value of "-e", but I couldn't find a way.
$ set -x
$ docker run -d $DOCKER_OPTIONS elasticsearch
+ docker run -d -p 9200:9200 -e 'ES_JAVA_OPTS='\''-Xmx1g' '-Xms1g'\''' elasticsearch
unknown shorthand flag: 'X' in -Xms1g'
This split -Xms1g off as a separate option.
$ docker run -d "$DOCKER_OPTIONS" elasticsearch
+ docker run -d '-p 9200:9200 -e ES_JAVA_OPTS='\''-Xmx1g -Xms1g'\''' elasticsearch
docker: Invalid containerPort: 9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g'.
This bundled all the parameters together into one argument.
What should I do?
Use an array to circumvent these awkward parsing problems. Arrays are great because you don't need any special quoting when defining them. The only place you have to be careful with quotes is when expanding them: always put quotes around "${array[@]}".
dockerOptions=(-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d)
docker run -d "${dockerOptions[@]}" elasticsearch
Note that export isn't needed since you're passing the options to docker via its command-line rather than as an environment variable.
Also, all-uppercase names are conventionally reserved for the shell and environment variables. It's best to avoid them for your own variables.
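To check that the array really keeps the quoted option as one word, you can print each element on its own line (bash):

```shell
#!/bin/bash
# Each <...> below is one word, exactly as docker would receive it.
dockerOptions=(-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d)
printf '<%s>\n' "${dockerOptions[@]}"
```

The ES_JAVA_OPTS element comes out intact as <ES_JAVA_OPTS=-Xmx1g -Xms1g>, one word among five.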