It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
Which works, but I'm looking for a simple one-step solution that doesn't involve creating a temporary file. Perhaps a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note: this creates a temporary FIFO behind the scenes anyway.
Note: this will not work properly with variables containing newlines; their values get cut off at the newline. In that case you need something along these lines (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
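Since the original question was about docker exec on an existing container (which also accepts -e/--env and --env-file, as shown in the question), the same tricks can be forwarded there; a minimal sketch, with container_name and command as placeholders as in the question:
# simple variant (same newline caveat as above)
docker exec -ti --env-file <(env) container_name command
# newline-safe variant, reusing the args array built above
docker exec -ti "${args[@]}" container_name command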
When I install something like nmap (even from APT), I can't get it to execute correctly, so I like to go the container route. Instead of typing:
docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
I figured maybe I could script it out, but nothing I've learned or found on Google, YouTube, etc. has helped so far... Can somebody lend a hand? I need to know how to get Bash to execute a command with args, called like:
./nmap.sh -A -T4 -Pn x.x.x.x
#!/bin/bash
echo docker run --rm -it instrumentisto/nmap $1 $2 $3 $4 $5
but how to get Bash to run this instead of just echoing it, I don't know. Thanks ahead!
Two solutions: create an alias, create a script.
With an alias
The command you write is replaced with the value of the alias, so
alias nmap="docker run --rm -it instrumentisto/nmap"
nmap -A -T4 -Pn x.x.x.x
# executes docker run --rm -it instrumentisto/nmap -A -T4 -Pn x.x.x.x
Aliases are not persistent, so you will have to store the alias in some Bash config (generally ~/.bashrc).
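For example, a quick way to persist it, assuming ~/.bashrc is the config file your shell actually reads:
# append the alias to ~/.bashrc so new shells pick it up
echo 'alias nmap="docker run --rm -it instrumentisto/nmap"' >> ~/.bashrc
# reload the config in the current shell
source ~/.bashrc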
With a script
#!/bin/bash
set -Eeuo pipefail
docker run --rm -it instrumentisto/nmap "$@"
"$#" will forward all the arguments provided to the script directly to the command. The quotes are important, if you call your script with quoted values like ./nmap "something with spaces", that's one argument, it needs to be kept as one argument.
Bonus: With a function
Just like the script, a function needs to forward its arguments; and just like aliases, functions are not persistent, so you have to store them in your Bash config:
nmap() {
docker run --rm -it instrumentisto/nmap "$@"
}
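For example, one way to store and use it, assuming ~/.bashrc is your Bash config:
# append the function to ~/.bashrc, reload, then use it like the real binary
cat >> ~/.bashrc <<'EOF'
nmap() {
  docker run --rm -it instrumentisto/nmap "$@"
}
EOF
source ~/.bashrc
nmap -A -T4 -Pn x.x.x.x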
I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script from the mounted folder without exiting the container.
(Note: I don't want to go with the docker exec command, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details :
Image that i am using is rstudio server from rocker and container runs on AWS ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
Docker containers shut down automatically once their main process finishes. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the CMD specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
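Put together with the options from the question, that would look something like the sketch below (the host path is a placeholder and must be absolute; the image and container names are taken from the question):
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container \
  -v /abs/path/to/mount-folder:/home/rstudio/ \
  image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'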
What's the simplest way to get an environment variable from a docker container that has not been declared in the Dockerfile?
For instance, an environment variable that has been set through some docker exec container /bin/bash session?
I can do docker exec container env | grep ENV_VAR, but I would prefer something that just returns the value.
I've tried using docker exec container echo "$ENV_VAR", but the substitution seems to happen outside of the container, so I don't get the env var from the container, but rather the env var from my own computer.
Thanks.
To view all env variables:
docker exec container env
To get one:
docker exec container env | grep '^VARIABLE=' | cut -d= -f2-
The proper way to run echo "$ENV_VAR" inside the container so that the variable substitution happens in the container is:
docker exec <container_id> bash -c 'echo "$ENV_VAR"'
You can use printenv VARIABLE instead of /bin/bash -c 'echo "$VARIABLE"'. It's much simpler and it doesn't rely on shell substitution:
docker exec container printenv VARIABLE
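For example, to capture the value into a shell variable on the host (container and variable names are placeholders):
# printenv prints only the value, so command substitution captures it cleanly
value=$(docker exec container printenv VARIABLE)
echo "$value"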
The downside of using docker exec is that it requires a running container, so docker inspect -f might be handy if you're unsure a container is running.
Example #1. Output a list of space-separated environment variables in the specified container:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' container_name
The output will look like this:
ENV_VAR1=value1 ENV_VAR2=value2 ENV_VAR3=value3
Example #2. Output each env var on a new line and grep the needed items; for example, the MySQL container's settings could be retrieved like this:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' \
container_name | grep MYSQL_
will output:
MYSQL_PASSWORD=secret
MYSQL_ROOT_PASSWORD=supersecret
MYSQL_USER=demo
MYSQL_DATABASE=demodb
MYSQL_MAJOR=5.5
MYSQL_VERSION=5.5.52
Example #3. Let's modify the example above to get Bash-friendly output which can be used directly in your scripts:
docker inspect -f \
'{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' \
container_name | grep MYSQL
will output:
export MYSQL_PASSWORD=secret
export MYSQL_ROOT_PASSWORD=supersecret
export MYSQL_USER=demo
export MYSQL_DATABASE=demodb
export MYSQL_MAJOR=5.5
export MYSQL_VERSION=5.5.52
If you want to dive deeper, see Go's text/template package documentation for all the details of the format.
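As a rough sketch (not part of the original examples), that export-style output can even be loaded into the current shell, but only when the values contain no spaces or shell metacharacters:
# import the container's MYSQL_* settings into the current shell
# caution: eval breaks if any value contains spaces or special characters
eval "$(docker inspect -f \
  '{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' \
  container_name | grep MYSQL)"
echo "$MYSQL_USER"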
Since we are dealing with JSON, and unlike the accepted answer, we don't need to exec into the container.
docker inspect <NAME|ID> | jq '.[] | .Config.Env'
Output sample
[
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.4",
"NJS_VERSION=0.4.4",
"PKG_RELEASE=1~buster"
]
To retrieve a specific variable
docker inspect <NAME|ID> | jq -r '.[].Config.Env[]|select(match("^<VAR_NAME>"))|.[index("=")+1:]'
See the jq documentation for details.
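For example, against the nginx container shown in the sample above (my_nginx is a made-up container name):
docker inspect my_nginx | jq -r '.[].Config.Env[]|select(match("^NGINX_VERSION"))|.[index("=")+1:]'
# -> 1.19.4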
None of the above answers show you how to extract a variable from a non-running container (if you use the echo approach with run, you won't get any output).
Simply run with printenv, like so:
docker run --rm <container> printenv <MY_VAR>
(Note that docker-compose instead of docker works too)
If by any chance you use VS Code and have installed the Docker extension, just right-click on the container you want to check (within the Docker extension), click on Inspect, and then search for env; you will find all your env variable values.
We can override the entrypoint when starting a new container with the docker run command.
Example: show the PATH environment variable.
Using bash and echo (an earlier answer claims that echo will not produce any output, which is incorrect):
docker run --rm --entrypoint bash <container> -c 'echo "$PATH"'
Using printenv:
docker run --rm --entrypoint printenv <container> PATH
@aisbaa's answer works if you don't care when the environment variable was declared. If you want the environment variable even if it has been declared inside of an exec /bin/bash session, use something like:
IFS="=" read -a out <<< $(docker exec container /bin/bash -c "env | grep ENV_VAR" 2>&1)
It's not very pretty, but it gets the job done.
To then get the value, use:
echo ${out[1]}
This command inspects the environment of the Docker stack processes on the host:
pidof dockerd containerd containerd-shim | tr ' ' '\n' \
| xargs -L1 -I{} -- sudo xargs -a '/proc/{}/environ' -L1 -0
The first way to find the ENV variables is docker inspect <container name>.
The second way is docker exec <first 4 characters of the container ID> bash -c 'echo "$ENV_VAR"'.
There is a misconception in the question that causes confusion:
you cannot access a "running session" from outside, so no bash session can change anything for later commands.
docker exec -ti container /bin/bash
starts a new shell process in the container, so if you do export VAR=VALUE there, it will go away as soon as you leave the shell and won't exist anymore.
Perhaps a good example:
# assuming TESTVAR did not exist previously, this is empty
docker exec container env | grep TESTVAR
# -> TESTVAR=a new value!
docker exec container /bin/bash -c 'TESTVAR="a new value!" env' | grep TESTVAR
# again empty
docker exec container env | grep TESTVAR
The variables shown by env come from the Dockerfile, the docker run command, Docker itself, and whatever the entrypoint sets.
The other answers here are good. But if you really need to get the environment used when starting a program, then you can inspect the contents of /proc/<pid>/environ in the container, where <pid> is the container process ID of the running command.
# environmental props
docker exec container cat /proc/pid/environ | tr '\0' '\n'
# you can check this is the correct pid by checking the ran command
docker exec container cat /proc/pid/cmdline | tr '\0' ' '
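In most containers the main process is PID 1 inside the container's PID namespace, so a common shortcut is:
# environment of the container's main process
docker exec container cat /proc/1/environ | tr '\0' '\n'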
The following works fine in docker:
docker run -i -t -rm -e a="hello world" b=world ubuntu /bin/bash
What it does is pass env var a with value "hello world" and env var b with value "world" into the Docker container.
Thing is, I need to generate that from script.
It is super easy to get this working for env vars without spaces:
ENV_VARS='-e a=helloworld b=world'
docker run -i -t -rm $ENV_VARS ubuntu /bin/bash
However, once there is a space in the env var I am hosed:
ENV_VARS='-e a="hello world" b=world'
docker run -i -t -rm $ENV_VARS ubuntu /bin/bash
Unable to find image 'world"' (tag: latest) locally
2014/01/15 16:28:40 Invalid repository name (world"), only [a-z0-9-_.] are allowed
How can I get the above example to work? I also tried arrays but cannot get them to work.
Bash arrays are designed to solve exactly this sort of problem.
The first step is to declare the array:
docker_env=(-e "a=hello world" -e "b=world")
Which lets you programmatically populate more environment variables, for example:
docker_env+=(-e "c=foo bar")
Finally run it:
docker run -i -t -rm "${docker_env[@]}" ubuntu /bin/bash
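To see exactly what the array expands to before handing it to docker, print one element per line:
# each element stays intact, including the ones containing spaces
printf '%s\n' "${docker_env[@]}"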
How about instead:
export a='hello world'
export b='some more'
docker run -i -t -rm -e a -e b ...
Does this do what you need in an easier way?
eval docker run -i -t -rm "$ENV_VARS" ubuntu /bin/bash
I solved the problem of space-containing variables passed to the Docker environment using the --env-file option. In the env file, newline-separated variable definitions are expected. The following example illustrates the pattern.
DOCKER_ENV=$(mktemp run-docker-env.XXXXXXX)
echo Using env file: $DOCKER_ENV
echo VAR1=$VARZZZ > $DOCKER_ENV
echo VAR2=value >> $DOCKER_ENV
echo VAR_WITH_SPACE=variable value with space >> $DOCKER_ENV
docker run --env-file=${DOCKER_ENV} image:latest
rm $DOCKER_ENV
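A slightly more defensive variant of the same idea (a sketch, not the original script): quote the expansions and use a trap so the temporary file is removed even if docker run fails:
#!/bin/bash
DOCKER_ENV=$(mktemp run-docker-env.XXXXXXX)
# remove the temp env file on exit, even when docker run fails
trap 'rm -f "$DOCKER_ENV"' EXIT
echo "Using env file: $DOCKER_ENV"
{
  echo "VAR2=value"
  echo "VAR_WITH_SPACE=variable value with space"
} > "$DOCKER_ENV"
docker run --env-file="$DOCKER_ENV" image:latest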