Why does this Dockerfile have both ENV and export with the same PATH? - bash

I was looking at the golang:1.10.2 Dockerfile (as of today), and couldn't understand why the PATH variable is being used in two different places. Here's the bottom of the file with the pertinent snippet:
RUN set -eux; \
# some code ...
export PATH="/usr/local/go/bin:$PATH"; \
go version
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
What is the purpose of
export PATH="/usr/local/go/bin:$PATH";
and
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
respectively?
My impression is that the ENV directive could be shortened to ENV PATH $GOPATH/bin:$PATH, since /usr/local/go/bin is already in $PATH.

Each RUN (when not written in the exec/JSON-array form) starts a new shell, which must exit before that RUN step is complete. That shell can only change its own environment and that of its children; it can't change the environment of other processes started after it exits.
By contrast, ENV controls the environment that Docker passes to future processes when they're started.
Thus, you could move the ENV above the RUN and remove the export from the RUN (so PATH is already set correctly before that shell starts), but you can't make a RUN do the work of an ENV.
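The scoping rule above can be demonstrated outside Docker with plain shells; each RUN behaves like the child shell in this sketch (GREETING is a made-up variable for illustration):

```shell
#!/bin/sh
# Each RUN is like a child shell: an export made inside it
# disappears once that shell exits.
sh -c 'export GREETING=hello; echo "inside child: $GREETING"'
# Back in the parent, the variable was never set:
echo "after child: ${GREETING:-unset}"   # prints: after child: unset
```

ENV, by contrast, is recorded in the image metadata and injected into every later process, which is why it survives across RUN boundaries and into the running container.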

Related

Setting environment variable of the container through bash script is not working

I'm trying to set an environment variable of a docker container through a bash script.
CMD ["/bin/bash", "-c","source runservice.sh"]
runservice.sh
#!/usr/bin/env bash
export "foo"="bar"
Now, after pushing it, when I go inside the container and do printenv, the environment variable is not set.
But if I run the same command inside the container, the variable is set.
What's the correct way I can export using bash script?
the environment variable is not set
It does set the environment, but only for the duration of the CMD process: the shell that CMD starts exits, and its environment goes with it, so nothing else ever sees the variable. See the Dockerfile documentation.
What's the correct way I can export using bash script?
There is no correct way to do it from a script. The correct way to affect the environment is to use ENV.
There is a workaround in contexts that use entrypoint - you can set entrypoint to a shell (or a custom process) that will first source the variables.
ENTRYPOINT ["bash", "-c", "source runservice.sh && \"$@\"", "--"]
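The pattern in that ENTRYPOINT can be tried directly in a shell. Here a stand-in runservice.sh is written to /tmp, and sh -c 'echo "$foo"' plays the role of the real container command; the "--" becomes $0 so the remaining arguments land in "$@":

```shell
#!/bin/bash
# Stand-in for runservice.sh (path and contents are for illustration).
cat > /tmp/runservice.sh <<'EOF'
export foo=bar
EOF
# Source the script, then run whatever command was passed after "--":
bash -c 'source /tmp/runservice.sh && "$@"' -- sh -c 'echo "$foo"'
# prints: bar
```

Because the command runs as a child of the shell that sourced the script, it inherits the exported variable.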

How to run a docker container with the exports of a `.bashrc`

I have a docker image inside which some installations require adding exports to .bashrc.
My export variables are inside /root/.bashrc on the image.
Here is the dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
RUN echo "export PATH=/path/to/stuff:\$PATH" >> /root/.bashrc
CMD ["python3"]
The environment variables are present when using the following command
docker run -it image /bin/bash
When I run the following command, environment variables are not present.
docker run -it image
It is expected since /bin/sh is the default entry point of docker
But after the following change, the environment variables are not set either.
docker commit --change='ENTRYPOINT ["/bin/bash","-c"]' container image
I tried different combinations such as
docker commit --change='CMD ["/bin/bash","-c","python3 myProgram.py"]' container image
or
docker commit --change='ENTRYPOINT ["/bin/bash","-c"]' --change='CMD ["source /root/.bashrc && python3 myProgram.py"]' container image
But the environment variables are not present.
How do I run the CMD statement with the environment variables from .bashrc loaded?
In order to see the path variable, I use echo $PATH when I run /bin/bash and import os followed by os.getenv("PATH") when I run python3 from CMD.
Edit:
The exports are part of the installation of a library. In order to use the library, the updated exports (such as PYTHONPATH and LD_LIBRARY_PATH) need to be set.
If .bashrc is not intended to be sourced here, as mentioned in the comments, how can I make this library work in the Docker environment?
As mentioned by @itshosyn in the comments, the standard way to set environment variables such as PATH is to use the ENV directive.
So you may try writing something like this:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
ENV PATH="/path/to/stuff:$PATH"
CMD ["python3"]
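The same directive covers the other variables mentioned in the edit. A sketch, assuming the library lives under /opt/mylib (a placeholder path, not the real install location):

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
# Placeholder paths -- substitute the library's real install prefix.
ENV PYTHONPATH="/opt/mylib/python:$PYTHONPATH" \
    LD_LIBRARY_PATH="/opt/mylib/lib:$LD_LIBRARY_PATH"
CMD ["python3"]
```

Unlike the .bashrc approach, these values are baked into the image metadata, so they are visible to python3 whether or not a shell is ever involved.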

the bashrc file is not working when I docker run --mount bashrc

I'm testing an app (a search engine) on Docker, but when I use docker run, the bashrc doesn't work: if, for example, there is an alias inside bashrc, I can't use it.
The bashrc file is mounted into the container, but I still can't use it.
My question is: why not? Is it only because the bashrc needs to be reloaded, or is there another reason?
sudo docker run \
--mount type=bind,source=$(pwd)/remise/bashrc,destination=/root/.bashrc,readonly \
--name="s-container" \
ubuntu /go/bin/s qewrty
If you start your container as
docker run ... image-name \
/go/bin/s qwerty
when Docker creates the container, it directly runs the command /go/bin/s qwerty; it does not invoke bash or any other shell to do it. Nothing will ever know to look for a .bashrc file.
Similarly, if your Dockerfile specifies
CMD ["/go/bin/s", "qwerty"]
it runs the command directly without a shell.
There's an alternate shell form of CMD that takes a command string, and runs it via /bin/sh -c. That does involve a shell; but it's neither an interactive nor a login shell, and it's invoked as sh, so it won't read any shell dotfiles (for the specific case where /bin/sh happens to be GNU Bash, see Bash Startup Files).
Since none of these common paths to specify the main container command will read .bashrc or other shell dotfiles, it usually doesn't make sense to try to write or inject these files. If you need to set environment variables, consider the Dockerfile ENV directive or an entrypoint wrapper script instead.
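A minimal sketch of such an entrypoint wrapper. The file name entrypoint.sh and the variable APP_MODE are made up for illustration; in a real image the file would be COPYed in and named in ENTRYPOINT ["/entrypoint.sh"], with the original command kept as CMD:

```shell
#!/bin/sh
# Create the wrapper script (here in /tmp so the sketch is self-contained).
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
export APP_MODE=search   # hypothetical variable the app needs
exec "$@"                # replace the shell with the real command
EOF
chmod +x /tmp/entrypoint.sh
# The wrapped command inherits the variable:
/tmp/entrypoint.sh sh -c 'echo "$APP_MODE"'   # prints: search
```

Because of the exec, the real command takes over the wrapper's process (PID 1 in a container), so signals and exit codes still work normally.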

Set system ENV with the shell file inside the container

I'm trying to set a system environment variable with a shell script when running the container. The problem: in the container logs, printenv shows MYENV=123, but when I echo it inside the container it is empty.
Dockerfile:
FROM ubuntu
ADD first.sh /opt/first.sh
RUN chmod +x /opt/first.sh
ADD second.sh /opt/second.sh
RUN chmod +x /opt/second.sh
ENTRYPOINT [ "/opt/first.sh" ]
first.sh
#!/bin/bash
source /opt/second.sh
printenv
tail -f /dev/null
second.sh
#!/bin/bash
BLA=`echo blabla 123 | sed 's/blabla //g'`
echo "${BLA}"
export MYENV=${BLA}
I don't want to use Docker's env options with docker run or docker-compose, because this workflow helps me change the variables when I'm running the container.
The technique you've described will work fine. I'd write it slightly differently:
#!/bin/sh
. /opt/second.sh
exec "$@"
This will set environment variables for the main process in your container (and not ignore the CMD or anything you set on the command line). It won't affect any other shells you happen to launch with docker exec: they don't run as children of the container's main process and won't have "seen" these environment variable settings.
This technique won't make it particularly easier or harder to change environment variables in your container. Since the only way one process's environment can affect another's is by providing the initial environment when it starts up, even if you edit the second.sh in the live container (not generally a best practice) it won't affect the main process's environment (in your case, the tail command). This is one of a number of common situations where you need to at least restart the container to make changes take effect.
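The reason the source step works at all: `.` runs second.sh in the current shell, so its exports persist there, whereas running it as a child process would discard them. A quick sketch with a stand-in for second.sh:

```shell
#!/bin/sh
# Stand-in for second.sh (same logic as in the question).
cat > /tmp/second.sh <<'EOF'
BLA=$(echo blabla 123 | sed 's/blabla //g')
export MYENV=${BLA}
EOF
# Run it as a child process: the export is lost.
sh /tmp/second.sh
echo "child:  ${MYENV:-unset}"   # prints: child:  unset
# Source it: the export persists in this shell.
. /tmp/second.sh
echo "source: ${MYENV:-unset}"   # prints: source: 123
```

This is exactly why first.sh must source second.sh rather than execute it, and why the exec "$@" at the end then hands the enriched environment to the main process.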

Set environment variables in Docker

I'm having trouble with Docker: it creates containers that don't have the environment variables I know I set in the image definition.
I have created a Dockerfile that generates an image of OpenSuse 42.3. I need to have some environment variables set up in the image so that anyone that starts a container from the image can use a code that I've compiled and placed in the image.
I have created a shell file called "image_env_setup.sh" that contains the necessary environment variable definitions. I also manually added those environment variable definitions to the Dockerfile.
USER codeUser
COPY ./docker/image_env_setup.sh /opt/MyCode
ENV PATH="$PATH":"/opt/MyCode/bin:/usr/lib64/mpi/gcc/openmpi/bin"
ENV LD_LIBRARY_PATH="/usr/lib64:/opt/MyCode/lib:"
ENV PS1="[\u@docker: \w]\$ "
ENV TERM="xterm-256color"
ENV GREP_OPTIONS="--color=auto"
ENV EDITOR=/usr/bin/vim
USER root
RUN chmod +x /opt/MyCode/image_env_setup.sh
USER codeUser
RUN /opt/MyCode/image_env_setup.sh
RUN /bin/bash -c "source /opt/MyCode/image_env_setup.sh"
The command that I use to create the container is:
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=$USER --workdir="/home/codeUser" \
--volume="${home}:/home/codeUser" ${imageName} /bin/bash
The only thing that works is to pass the shell file to be run again when the container starts up.
docker start $MyImageTag
docker exec -it $MyImageTag /bin/bash --rcfile /opt/MyCode/image_env_setup.sh
I didn't think it would be that difficult to just have the shell variables set up within the container so that any user entering it would find them already defined.
RUN entries cannot modify environment variables for later steps (I assume you want to set more variables in image_env_setup.sh). Only ENV entries in the Dockerfile (and runtime options such as docker run -e, or bash's --rcfile) can change the environment.
You can also decide to source image_env_setup.sh from the .bashrc, of course.
For example, you could either pre-fabricate a .bashrc and pull it in with COPY, or do
RUN echo '. /opt/MyCode/image_env_setup.sh' >> ~/.bashrc
You can put /opt/MyCode/image_env_setup.sh in ~/.bash_profile or ~/.bashrc of the container so that every time you get into the container you have the variables set.
