Set system ENV with the shell file inside the container - bash

I'm trying to set a system environment variable from a shell script when running the container. The problem is that when I look at the logs, printenv shows MYENV=123, but when I echo it inside the container it's empty.
Dockerfile:
FROM ubuntu
ADD first.sh /opt/first.sh
RUN chmod +x /opt/first.sh
ADD second.sh /opt/second.sh
RUN chmod +x /opt/second.sh
ENTRYPOINT [ "/opt/first.sh" ]
first.sh
#!/bin/bash
source /opt/second.sh
printenv
tail -f /dev/null
second.sh
#!/bin/bash
BLA=`echo blabla 123 | sed 's/blabla //g'`
echo "${BLA}"
export MYENV=${BLA}
I don't want to use Docker ENV in the Dockerfile or with docker-compose, because this workflow lets me change the variable's value when I run the container.

The technique you've described will work fine. I'd write it slightly differently:
#!/bin/sh
. /opt/second.sh
exec "$#"
This will set environment variables for the main process in your container (and not ignore the CMD or anything you set on the command line). It won't affect any other shells you happen to launch with docker exec: they don't run as children of the container's main process and won't have "seen" these environment variable settings.
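As a rough sketch of how the rest of the setup could look (the CMD here is only an illustration; your original Dockerfile has no CMD at all, so exec "$@" would have nothing to run):
FROM ubuntu
ADD first.sh /opt/first.sh
RUN chmod +x /opt/first.sh
ADD second.sh /opt/second.sh
RUN chmod +x /opt/second.sh
ENTRYPOINT [ "/opt/first.sh" ]
CMD [ "tail", "-f", "/dev/null" ]
With this, first.sh sources second.sh and then execs the CMD, so the tail process (or whatever command you substitute on the docker run command line) starts with MYENV already in its environment.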
This technique won't make it particularly easier or harder to change environment variables in your container. Since the only way one process's environment can affect another's is by providing the initial environment when it starts up, even if you edit the second.sh in the live container (not generally a best practice) it won't affect the main process's environment (in your case, the tail command). This is one of a number of common situations where you need to at least restart the container to make changes take effect.

Related

the bashrc file is not working when I docker run --mount bashrc

I'm testing an app in Docker (a search engine), but when I use docker run the bashrc doesn't work: if, for example, there is an alias inside bashrc, I can't use it.
The bashrc file is copied into the container, but I still can't use it.
My question is: why not? Is it only because the bashrc needs to be reloaded, or is there another reason?
sudo docker run \
--mount type=bind,source=$(pwd)/remise/bashrc,destination=/root/.bashrc,readonly \
--name="s-container" \
ubuntu /go/bin/s qewrty
If you start your container as
docker run ... image-name \
/go/bin/s qwerty
when Docker creates the container, it directly runs the command /go/bin/s qwerty; it does not invoke bash or any other shell to do it. Nothing will ever know to look for a .bashrc file.
Similarly, if your Dockerfile specifies
CMD ["/go/bin/s", "qwerty"]
it runs the command directly without a shell.
There's an alternate shell form of CMD that takes a command string, and runs it via /bin/sh -c. That does involve a shell; but it's neither an interactive nor a login shell, and it's invoked as sh, so it won't read any shell dotfiles (for the specific case where /bin/sh happens to be GNU Bash, see Bash Startup Files).
Since none of these common paths to specify the main container command will read .bashrc or other shell dotfiles, it usually doesn't make sense to try to write or inject these files. If you need to set environment variables, consider the Dockerfile ENV directive or an entrypoint wrapper script instead.
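For example, a minimal entrypoint wrapper along those lines (a sketch; the variable name and path are made up for illustration):
#!/bin/sh
# set whatever the main process needs, then replace this shell with the CMD
export SEARCH_HOME=/opt/s
exec "$@"
With ENTRYPOINT ["/entrypoint.sh"] and CMD ["/go/bin/s", "qwerty"] in the Dockerfile, /go/bin/s starts with SEARCH_HOME already set, and no .bashrc is involved at all.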

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi
    exec gosu postgres "$@"
fi
exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get a Dockerfile using an entrypoint bash script that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that's dependent on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think that rules out something inside the script being the cause of the failure.
What's also frustrating is that there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to using something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way a docker-entrypoint.sh script could be used to start services that I could also run with docker run, but even with Docker's official blog and countless questions and blog posts from other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the command-line arguments. You are providing none, so it is executing a null string. That exits and will kill the container. Note also that exec always ends the running script: it replaces the current shell with the command you exec, it doesn't keep the script running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while(1) loop would probably work.
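A minimal sketch of that keep-alive idea, if all you want is for the container to stay up without re-exec'ing anything:
#!/bin/bash
# do any one-time setup here, then idle cheaply
while true; do
  sleep 3600
done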
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist in the image; if it's a binary, it must be statically linked or all of its shared library dependencies must already be in the image. For your script you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)
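For reference, a self-contained variant of the same pattern that should run as shown (it swaps postgres for a command that exists in the base image, purely to demonstrate the mechanics):
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["sleep", "infinity"]
docker-entrypoint.sh
#!/bin/sh
# one-time setup would go here
exec "$@"
docker build -t test . followed by docker run --rm test should now stay up, and docker run --rm test env shows that you can still override the CMD and it still goes through the entrypoint.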

Docker why isn't $USER environment variable set

I'm doing a simple docker build where the image runs a script that assumes $USER is set, like it normally is in a bash shell.
However, $USER is not set when using /bin/bash or /bin/bash --login. It's very easy to demonstrate using the latest ubuntu:
$ docker run -t -i ubuntu:latest
root@1dbeaefd6cd4:/# echo $USER
root@1dbeaefd6cd4:/# exit
$ docker run -t -i ubuntu:latest /bin/bash --login
root@d2728a8188a5:/# echo $USER
root@d2728a8188a5:/# exit
However, if in the shell I su -l root, then $USER is set.
root@d2728a8188a5:/# su -l root
root@d2728a8188a5:~# echo $USER
root
root@d2728a8188a5:~# exit
I'm aware I could add ENV USER=root to the Dockerfile, but I'm trying to avoid hard-coding the value.
Does anyone have a suggestion of why this might be happening? I'm asking mostly out of curiosity to understand what's happening when Docker starts bash. It's clearly not exactly like a login shell and the --login option doesn't seem to be working.
The only environment variables documented to be set are $HOME, $HOSTNAME, $PATH, and (maybe) $TERM. (A docker build RUN step internally does the equivalent of docker run.) If you need other variables set you can use the ENV directive.
Typically in a Dockerfile there's no particular need to make path names or user names configurable, since these live in an isolated space separate from the host. For instance, it's extremely common to put "the application" in /app even though that's not a standard FHS path. It's considered a best practice, though infrequently done, to set up some non-root user to actually run the application and use a USER directive late in a Dockerfile. In all of these cases you know what the username actually is; it's not any sort of parameter.
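A short sketch of that non-root pattern (the username and paths are illustrative):
FROM ubuntu
RUN useradd --create-home appuser
WORKDIR /app
COPY . /app
USER appuser
CMD ["./run"]
Because the Dockerfile itself decides the username, anything the image runs can simply assume appuser rather than consulting $USER.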
According to https://www.tldp.org/LDP/abs/html/internalvariables.html:
The variables $ENV, $LOGNAME, $MAIL, $TERM, $USER, and $USERNAME are not Bash builtins. These are, however, often set as environmental variables in one of the Bash or login startup files.
Also as pointed out in this Unix&Linux SE answer Who sets $USER and $USERNAME environment variables?:
There's no rule. Some shells like tcsh or zsh set $LOGNAME. zsh sets $USER.
It may be set by some things that log you in like login (as invoked by getty when login on a terminal and sometimes by other things like in.rlogind), cron, su, sudo, sshd, rshd, graphical login managers or may not.
[…]
So in the context of a clean environment within a Docker container, you may rather want to rely on whoami or id:
$ docker run --rm -it ubuntu
root@7f6191875c62:/# whoami
root
root@7f6191875c62:/# id
uid=0(root) gid=0(root) groups=0(root)
root@7f6191875c62:/# id -u -n
root
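If a script genuinely has to read $USER, one option (my own suggestion, not part of the quoted answers) is to derive it at startup instead of hard-coding it, for example in an entrypoint wrapper:
#!/bin/sh
# populate $USER from the effective uid if the environment didn't provide it
export USER="${USER:-$(id -un)}"
exec "$@"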
Try running "/bin/bash -i" which will force interactive mode and execute the .profile where environment variables are usually stored.

How to Set docker container ip to environment variable dynamically on startup?

I want to export the docker container hostname as an environment variable which I can later use in my app. In my Dockerfile I call my script "run" as the last command:
CMD run
The run file is executable and works fine with the rest of the commands I perform, but before them I want to export the container hostname to an env variable as follows:
"run" File Try 1
#!/bin/bash
export DOCKER_MACHINE_IP=`hostname -i`
my_other_commands
exec tail -f /dev/null
But when I enter the docker container and check, the variable is not set. If I use
echo $DOCKER_MACHINE_IP
in the run file after exporting, it shows the IP on the console when I run
docker logs
I also tried sourcing another script from the "run" file, as follows:
"run" File Try 2
#!/bin/bash
source ./bin/script
my_other_commands
exec tail -f /dev/null
and the script again contains the export command. But this also does not set the environment variable. What am I doing wrong?
When you execute a script, any environment variable set by that script will be lost when the script exits.
But for both of the cases you've posted above, the environment variable should be accessible to the commands in your scripts; however, when you enter the docker container via docker exec you get a new shell, which does not contain your variable.
tl;dr Your exported environment variable will only be available to subshells of the shell which set the variable. And if you need it when logging in, you should source the ./bin/script file.
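You can confirm where the variable actually lives by reading the main process's environment from a docker exec shell, for example (assuming the tail command ended up as PID 1):
docker exec <container> sh -c 'tr "\0" "\n" < /proc/1/environ | grep DOCKER_MACHINE_IP'
The same grep against the exec'd shell's own printenv output will come up empty, which is exactly the behaviour you're seeing.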

Docker Ubuntu environment variables

During the build stage of my docker images, I would like to set some environment variables automatically for every subsequent RUN command.
However, I would like to set these variables from within the docker container, because setting them depends on some internal logic.
Using the Dockerfile ENV command is not good, because it cannot rely on internal logic. (It cannot rely on a command run inside the docker container.)
Normally (if this were not docker) I would set my ~/.profile file. However, docker does not load this file in non-interactive shells.
So at the moment I have to run each docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting a build arg as an env var like this in the Dockerfile:
ARG my_env
ENV my_env=${my_env}
and pass my_env=prod in the build args so that the env var is set for subsequent RUN commands.
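For example (image name assumed):
docker build --build-arg my_env=prod -t myimage .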
You can also use the env_file: option in a docker-compose YAML file in the case of a stack deploy.
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that sources the variables and then performs the operation, and rewrite the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables, runs the command given as its argument, and include that script in the docker image.
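A rough sketch of such a wrapper (file and command names are made up):
with_env.sh
#!/bin/bash
# load the variables this build step depends on, then run the given command
source ~/.profile
exec "$@"
and in the Dockerfile:
COPY with_env.sh /opt/with_env.sh
RUN chmod +x /opt/with_env.sh
RUN /opt/with_env.sh do_something_here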
