I'm having trouble with Docker: containers started from my image don't have the environment variables that I know I set in the image definition.
My Dockerfile builds an openSUSE 42.3 image. I need some environment variables set in the image so that anyone who starts a container from it can use code that I've compiled and placed in the image.
I have created a shell script called "image_env_setup.sh" that contains the necessary environment variable definitions, and I also added those definitions directly to the Dockerfile:
USER codeUser
COPY ./docker/image_env_setup.sh /opt/MyCode
ENV PATH="$PATH":"/opt/MyCode/bin:/usr/lib64/mpi/gcc/openmpi/bin"
ENV LD_LIBRARY_PATH="/usr/lib64:/opt/MyCode/lib:"
ENV PS1="[\u#docker: \w]\$ "
ENV TERM="xterm-256color"
ENV GREP_OPTIONS="--color=auto"
ENV EDITOR=/usr/bin/vim
USER root
RUN chmod +x /opt/MyCode/image_env_setup.sh
USER codeUser
RUN /opt/MyCode/image_env_setup.sh
RUN /bin/bash -c "source /opt/MyCode/image_env_setup.sh"
The command that I use to create the container is:
docker run -it -d --name ${containerName} -u $userID:$groupID \
  -e USER=$USER --workdir="/home/codeUser" \
  --volume="${home}:/home/codeUser" ${imageName} /bin/bash
The only thing that works is to pass the shell file to be run again when the container starts up.
docker start $MyImageTag
docker exec -it $MyImageTag /bin/bash --rcfile /opt/MyCode/image_env_setup.sh
I didn't think it would be that difficult to have the shell variables set up within the container so that anyone entering it would find them already defined.
RUN entries cannot modify environment variables for later steps (I assume you want to set more variables in image_env_setup.sh). Only ENV entries in the Dockerfile (and docker options like --rcfile) can change the environment.
You can also decide to source image_env_setup.sh from the .bashrc, of course.
For example, you could either pre-fabricate a .bashrc and pull it in with COPY, or do
RUN echo '. /opt/MyCode/image_env_setup.sh' >> ~/.bashrc
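That source-from-profile pattern can be checked outside Docker; a minimal sketch, where a throwaway /tmp/demo directory stands in for the image's home (a login shell reads ~/.bash_profile the same way an interactive shell reads ~/.bashrc):

```shell
# Fake the image layout: a setup script plus a profile that sources it.
mkdir -p /tmp/demo
echo 'export MY_CODE_HOME=/opt/MyCode' > /tmp/demo/image_env_setup.sh
# The equivalent of the RUN echo '... ' >> ~/.bashrc step:
echo '. /tmp/demo/image_env_setup.sh' >> /tmp/demo/.bash_profile
# A login shell picks up the variable automatically:
HOME=/tmp/demo bash -lc 'echo "$MY_CODE_HOME"'   # prints: /opt/MyCode
```

Inside the container, the same line appended to ~/.bashrc takes effect for every interactive bash that a user starts with docker exec.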
You can source /opt/MyCode/image_env_setup.sh from ~/.bash_profile or ~/.bashrc of the container so that every time you get into the container you have the env vars set.
Related
I have a docker image inside which some installations require adding exports to .bashrc.
My export variables are inside /root/.bashrc on the image.
Here is the Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
RUN echo "export PATH=/path/to/stuff:\$PATH" >> /root/.bashrc
CMD ["python3"]
The environment variables are present when using the following command
docker run -it image /bin/bash
When I run the following command, environment variables are not present.
docker run -it image
That is expected: without /bin/bash, Docker runs the image's default command (here python3) directly, so no interactive shell ever reads .bashrc.
But after the following change, the environment variables are not set either:
docker commit --change='ENTRYPOINT ["/bin/bash","-c"]' container image
I tried different combinations such as
docker commit --change='CMD ["/bin/bash","-c","python3 myProgram.py"]' container image
or
docker commit --change='ENTRYPOINT ["/bin/bash","-c"]' --change='CMD ["source /root/.bashrc && python3 myProgram.py"]' container image
But the environment variables are not present.
How do I run the CMD statement with the environment variables from .bashrc loaded?
In order to see the path variable, I use echo $PATH when I run /bin/bash and import os followed by os.getenv("PATH") when I run python3 from CMD.
Edit:
The exports are part of the installation of a library. In order to use the library, the updated exports (such as PYTHONPATH and LD_LIBRARY_PATH) needs to be set.
If .bashrc is not intended to be sourced here, as mentioned in the comments, how can I make this library work in the Docker environment?
As mentioned by @itshosyn in the comments, the standard way to override environment variables such as PATH is to use the ENV directive.
So you may try writing something like this:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
ENV PATH="/path/to/stuff:$PATH"
CMD ["python3"]
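The underlying difference can be reproduced with plain bash, no Docker involved: ~/.bashrc is read only by interactive shells, and a container's CMD is not started through one. A small sketch (the /tmp/home2 path is illustrative):

```shell
# ~/.bashrc is only read by *interactive* bash sessions.
mkdir -p /tmp/home2
echo 'export STUFF=loaded' >> /tmp/home2/.bashrc
# Non-interactive bash (what a plain CMD invocation amounts to) skips .bashrc:
HOME=/tmp/home2 bash -c 'echo "${STUFF:-unset}"'             # prints: unset
# Forcing an interactive shell does read it:
HOME=/tmp/home2 bash -ic 'echo "${STUFF:-unset}"' 2>/dev/null  # prints: loaded
```

This is why ENV (which Docker injects into every process it starts, shell or not) is the reliable mechanism.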
I have a custom entrypoint where an environment variable is exported. The value of the environment variable is constructed from two other variables provided at runtime.
Snippet from the Dockerfile: CMD ["bash", "/opt/softwareag/laas-api-server/entrypoint.sh"]
Snippet from entrypoint.sh
export URL="$SCHEME://$HOST:$PORT"
echo "URL:$URL"
The command docker run -e HOST="localhost" -e PORT="443" mycentos prints URL:localhost:443 as expected, but the same variable appears to have lost its value when the following command is executed:
docker exec -ti <that-running-container-from-myimage> bash
container-prompt> echo $URL
<empty-line>
Why would the exported variable appear to have lost the value of URL? What is getting lost here?
An environment variable set with export does not persist across bash sessions. When the container runs, the variable is only available in the entrypoint's session; it is gone from any session started later.
docker ENV vs RUN export
If you want the variables available in every session, set them in the Dockerfile:
ENV SCHEME=http
ENV HOST=example.com
ENV PORT=3000
Then on the application side you can use them together; they will be available in every session:
curl "${SCHEME}://${HOST}:${PORT}"
Step 8/9 : RUN echo "${SCHEME}://${HOST}:${PORT}"
---> Running in afab41115019
http://example.com:3000
Now, if we look at the way you are using it, it will not work, because
export URL="$SCHEME://$HOST:$PORT"
# only in this session
echo "URL:$URL"
# will be available to the node process too, but only for this session
node app.js
For example, look at this Dockerfile:
FROM node:alpine
RUN echo $'#!/bin/sh \n\
export URL=example.com \n\
echo "${URL}" \n\
node -e \'console.log("ENV URL value inside nodejs", process.env.URL)\' \n\
exec "$@" \n\
' >> /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
ENTRYPOINT ["/bin/entrypoint.sh"]
So when you run the container for the first time, you will see the expected response:
docker run -it --rm myapp
example.com
ENV URL value inside nodejs example.com
Now we want to check a later session, so keep the container running:
docker run -it --rm --name myapp myapp tail -f /dev/null
example.com
ENV URL value inside nodejs example.com
While the container is still up, we can verify from another session:
docker exec -it myapp sh -c "node -e 'console.log(\"ENV URL value inside nodejs\", process.env.URL)'"
ENV URL value inside nodejs undefined
As we can see, it is the same script but different behaviour: because of how Docker works, the variable is only available within that session. You can write the variables to a file if you are interested in using them later.
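The write-them-to-a-file idea can be sketched like this (the file path and variable names are assumptions): the entrypoint persists the computed value, and any later docker exec session sources that file.

```shell
#!/bin/sh
# Entrypoint-style sketch: compute URL, then persist it beyond this shell.
SCHEME="${SCHEME:-http}"; HOST="${HOST:-localhost}"; PORT="${PORT:-443}"
URL="$SCHEME://$HOST:$PORT"
echo "export URL='$URL'" > /tmp/runtime_env.sh   # survives this shell's exit
echo "URL:$URL"                                  # prints: URL:http://localhost:443
# A later `docker exec ... sh` session would then run:
. /tmp/runtime_env.sh
echo "$URL"                                      # prints: http://localhost:443
```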
I was looking at the golang:1.10.2 Dockerfile (as of today), and couldn't understand why the PATH variable is being used in two different places. Here's the bottom of the file with the pertinent snippet:
RUN set -eux; \
# some code ...
export PATH="/usr/local/go/bin:$PATH"; \
go version
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
What is the purpose of
export PATH="/usr/local/go/bin:$PATH";
and
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
respectively?
My impression is that the ENV directive could be shortened to ENV PATH $GOPATH/bin:$PATH, since /usr/local/go/bin is already in $PATH.
Each RUN (when not in the explicit-argv usage mode) starts a new shell, which must exit before that RUN command is complete. That shell can only change its own environment and that of its children; it can't change the environment of other programs started after it exits.
By contrast, ENV controls the environment that Docker passes to future processes when they're started.
Thus, you could move the ENV above the RUN and remove the export from the RUN (so that PATH is correctly set before the shell starts), but you can't make a RUN do the work of an ENV.
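The same effect is easy to reproduce outside Docker, since each RUN is essentially a fresh child shell:

```shell
# A child shell's export cannot leak into the parent (or into later shells),
# which is exactly why `RUN export PATH=...` affects only its own RUN step.
sh -c 'export FOO=from_child; echo "child sees: $FOO"'  # prints: child sees: from_child
echo "parent sees: ${FOO:-unset}"                       # prints: parent sees: unset
```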
I am trying to develop a Dockerfile for my application that loads a large number of environment variables after initialisation. Somehow, these variables are not reachable when I later execute the following commands:
docker exec -it container_name bash
printenv
My environment variables are not visible. If I load the files manually however, they are:
docker exec -it container_name bash
source .env
printenv
... environment variables are shown ...
This is my dockerfile:
Dockerfile
FROM python:3.6
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
RUN chmod 755 load_env_variables.sh
ENTRYPOINT ["/bin/bash", "-c", "/usr/src/app/load_env_variables.sh"]
load_env_variables.sh
#!/bin/bash
source .env
python start_application
And my .env file contains lines as follows: 'export name=value'.
The reason for this behavior is that docker exec -it container_name bash starts a new bash shell. A new shell has only the standard environment variables, plus the ones specified in .bashrc or .bash_profile.
A proper solution to your problem is to use the --env-file option with docker run. Be aware that the env file needs to look like this:
test1=test1
test2=test2
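To illustrate the format (the file name here is illustrative): the file holds bare KEY=value lines; an `export` keyword or surrounding quotes would become part of the value.

```shell
# --env-file expects plain KEY=value lines, one per line, no `export`:
cat > /tmp/app.env <<'EOF'
test1=test1
test2=test2
EOF
# It would then be passed at startup (not executed here):
#   docker run --env-file /tmp/app.env image printenv
grep -c . /tmp/app.env   # prints: 2
```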
If I include the following line in /root/.bashrc:
export A="AAA"
then when I run the docker container in interactive mode (docker run -i), the $A variable keeps its value. However if I run the container in detached mode I cannot access the variable. Even if I run the container explicitly sourcing the .bashrc like
docker run -d my_image /bin/bash -c "cd /root && source .bashrc && echo $A"
such line produces an empty output.
So, why is this happening? And how can I set the environment variables defined in the .bashrc file?
Any help would be very much appreciated!
The first problem is that in the command you are running, $A is interpreted by your host's shell (not the container's shell). On your host, $A is likely blank, so your command effectively becomes:
docker run -i my_image /bin/bash -c "cd /root && source .bashrc && echo "
Which does exactly as it says. We can escape the variable so it is sent to the container and properly evaluated there:
docker run -i my_image /bin/bash -c "echo \$A"
But this will also be blank because, although the container is interactive, the shell is not. We can force it to be:
docker run -i my_image /bin/bash -i -c "echo \$A"
Woohoo, we finally got our desired result, but with an added error from bash because there is no TTY. So, instead of interactive mode, we can just allocate a pseudo-TTY:
docker run -t my_image /bin/bash -i -c "echo \$A"
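The escaping behaviour is independent of Docker and can be checked with nested shells on any machine:

```shell
A=outer_value
# Double quotes: the OUTER shell expands $A before bash -c ever runs:
bash -c "echo expanded-by-outer: $A"    # prints: expanded-by-outer: outer_value
# Escaped: the INNER shell expands it, and A is unset (not exported) there:
bash -c "echo expanded-by-inner: \$A"   # prints: expanded-by-inner:
```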
After running some tests, it appears that when running a container in detached mode, overriding the default environment variables doesn't always happen the way we want; it depends on where you are in the Dockerfile.
As an example, if you run a container in detached mode like so:
docker run -d --name image_name_container image_name
Whatever ENV variables you defined within the Dockerfile take effect everywhere (read the rest and you will understand what "everywhere" means).
Example of a simple Dockerfile (Alpine is just a lightweight Linux distribution):
FROM alpine:latest
#declaring a docker env variable and giving it a default value
ENV MY_ENV_VARIABLE dummy_value
#copying two dummy scripts into a place where i can execute them straight away
COPY ./start.sh /usr/sbin
COPY ./not_start.sh /usr/sbin
#in this script i could do: echo $MY_ENV_VARIABLE > /test1.txt
RUN not_start.sh
RUN echo $MY_ENV_VARIABLE > /test2.txt
#in this script i could do: echo $MY_ENV_VARIABLE > /test3.txt
ENTRYPOINT ["start.sh"]
Now if you want to run your container in detached and override some ENV variables, like so:
docker run **-d** -e MY_ENV_VARIABLE=new_value --name image_name_container image_name
Surprise! The var MY_ENV_VARIABLE is only overridden inside the script that runs in the ENTRYPOINT (and I checked, the same thing happens if you replace ENTRYPOINT with CMD). It would also be overridden in any subscript called from this start.sh script. But references to MY_ENV_VARIABLE within a RUN Dockerfile command, or within the Dockerfile itself, do not get overridden.
In other words, $MY_ENV_VARIABLE resolves to dummy_value or new_value depending on whether you are in the ENTRYPOINT or not.
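The build-time versus run-time split can be mimicked with plain shell (names mirror the example above): the RUN step bakes the default value into a file when the image is built, and -e only changes the environment handed to the ENTRYPOINT process later.

```shell
# "Build time": like `RUN echo $MY_ENV_VARIABLE > /test2.txt` with the default:
MY_ENV_VARIABLE=dummy_value
echo "$MY_ENV_VARIABLE" > /tmp/test2.txt
# "Run time": like `docker run -e MY_ENV_VARIABLE=new_value ...`, which only
# changes what the ENTRYPOINT process sees, not what was baked into files:
MY_ENV_VARIABLE=new_value
echo "entrypoint sees: $MY_ENV_VARIABLE"   # prints: entrypoint sees: new_value
cat /tmp/test2.txt                         # prints: dummy_value
```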