Docker: why isn't the $USER environment variable set? - bash

I'm doing a simple docker build where the image runs a script that assumes $USER is set, as it normally is in a bash shell.
However, $USER is not set when using /bin/bash or /bin/bash --login. This is easy to demonstrate with the latest ubuntu image:
$ docker run -t -i ubuntu:latest
root@1dbeaefd6cd4:/# echo $USER

root@1dbeaefd6cd4:/# exit
$ docker run -t -i ubuntu:latest /bin/bash --login
root@d2728a8188a5:/# echo $USER

root@d2728a8188a5:/# exit
However, if I run su -l root inside the shell, then $USER is set:
root@d2728a8188a5:/# su -l root
root@d2728a8188a5:~# echo $USER
root
root@d2728a8188a5:~# exit
I'm aware I could add ENV USER=root to the Dockerfile, but I'm trying to avoid hard-coding the value.
Does anyone have a suggestion as to why this might be happening? I'm asking mostly out of curiosity, to understand what happens when Docker starts bash. It's clearly not quite a login shell, and the --login option doesn't seem to make a difference.

The only environment variables documented to be set are $HOME, $HOSTNAME, $PATH, and (maybe) $TERM. (A docker build RUN step internally does the equivalent of docker run.) If you need other variables set you can use the ENV directive.
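For example, a minimal Dockerfile sketch (script.sh is a hypothetical stand-in for whatever script reads $USER):
FROM ubuntu:latest
# make $USER visible to every RUN step and to the container at runtime
ENV USER=root
COPY script.sh /script.sh
RUN /script.sh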
Typically in a Dockerfile there's no particular need to make path names or user names configurable, since these live in an isolated space separate from the host. For instance, it's extremely common to put "the application" in /app even though that's not a standard FHS path. It's considered a best practice, though infrequently done, to set up some non-root user to actually run the application and use a USER directive late in a Dockerfile. In all of these cases you know what the username actually is; it's not any sort of parameter.
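A sketch of that pattern, with placeholder names throughout:
FROM ubuntu:latest
COPY app/ /app/
# create a dedicated non-root account; "appuser" is a placeholder name
RUN useradd --system appuser
# everything after this line runs as appuser
USER appuser
CMD ["/app/run"]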

According to https://www.tldp.org/LDP/abs/html/internalvariables.html:
The variables $ENV, $LOGNAME, $MAIL, $TERM, $USER, and $USERNAME are not Bash builtins. These are, however, often set as environmental variables in one of the Bash or login startup files.
Also as pointed out in this Unix&Linux SE answer Who sets $USER and $USERNAME environment variables?:
There's no rule. Some shells like tcsh or zsh set $LOGNAME. zsh sets $USER.
It may be set by some things that log you in like login (as invoked by getty when login on a terminal and sometimes by other things like in.rlogind), cron, su, sudo, sshd, rshd, graphical login managers or may not.
[…]
So given the clean environment inside a Docker container, you may want to rely on whoami or id instead:
$ docker run --rm -it ubuntu
root@7f6191875c62:/# whoami
root
root@7f6191875c62:/# id
uid=0(root) gid=0(root) groups=0(root)
root@7f6191875c62:/# id -u -n
root
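So a script that expects $USER can derive it defensively when the variable is absent; a minimal sketch:
#!/bin/sh
# fall back to the effective user name when $USER is not in the environment
USER="${USER:-$(id -un)}"
export USER
echo "running as $USER"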

Try running /bin/bash -i, which will force interactive mode. Note, though, that an interactive non-login shell sources ~/.bashrc; ~/.profile is only read by login shells (/bin/bash --login), so put the variables in whichever file matches how the shell is started.

Related

the bashrc file is not working when I docker run --mount bashrc

I'm testing an app (a search engine) in docker, but when I use docker run, the bashrc doesn't take effect: if, for example, there is an alias inside bashrc, I can't use it.
The bashrc file is copied into the container, but I still can't use it.
My question is: why not? Is it only because the bashrc needs to be reloaded, or is there another reason?
sudo docker run \
--mount type=bind,source=$(pwd)/remise/bashrc,destination=/root/.bashrc,readonly \
--name="s-container" \
ubuntu /go/bin/s qwerty
If you start your container as
docker run ... image-name \
/go/bin/s qwerty
when Docker creates the container, it directly runs the command /go/bin/s qwerty; it does not invoke bash or any other shell to do it. Nothing will ever know to look for a .bashrc file.
Similarly, if your Dockerfile specifies
CMD ["/go/bin/s", "qwerty"]
it runs the command directly without a shell.
There's an alternate shell form of CMD that takes a command string, and runs it via /bin/sh -c. That does involve a shell; but it's neither an interactive nor a login shell, and it's invoked as sh, so it won't read any shell dotfiles (for the specific case where /bin/sh happens to be GNU Bash, see Bash Startup Files).
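For comparison, the shell form of the same command would be written like this (a sketch reusing the question's binary):
# Docker wraps this in: /bin/sh -c '/go/bin/s qwerty'
CMD /go/bin/s qwerty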
Since none of these common paths to specify the main container command will read .bashrc or other shell dotfiles, it usually doesn't make sense to try to write or inject these files. If you need to set environment variables, consider the Dockerfile ENV directive or an entrypoint wrapper script instead.
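A minimal sketch of such a wrapper (the variable name is an assumption; exec "$@" hands control to the CMD):
#!/bin/sh
# entrypoint.sh: set environment variables here instead of in a dotfile
export SEARCH_INDEX_DIR=/data/index   # hypothetical variable for the app
exec "$@"                             # becomes e.g. /go/bin/s qwerty
With a matching Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/go/bin/s", "qwerty"]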

AWS EC2 User Data: Commands not recognized when using sudo

I'm trying to create an EC2 User-data script to run other scripts on boot up. However, the scripts that I run fail to recognize some commands and variables that I'd already declared. I'm running the commands as the "ubuntu" user but it still isn't working.
My user-data script looks something like this:
export user="ubuntu"
sudo su $user -c ". ./run_script"
Within the script, I have these lines:
THIS_PATH="/some/path"
echo "export SOME_PATH=$THIS_PATH" >> ~/.bashrc
source ~/.bashrc
However, the script can't run $SOME_PATH/application, and echo $SOME_PATH returns a blank line. I'm confused, because $SOME_PATH/application works when I log into the EC2 instance over SSH, and my debug logging with whoami returns "ubuntu".
Am I missing something here?
Your user-data script is executed as root, and su without a login shell leaves $HOME and other environment variables intact (note that the sudo is redundant), so ~ still points at root's home. "su -" does not help either.
So, do not use ~ or $HOME; use the full path /home/ubuntu/.bashrc instead.
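A sketch of the corrected user-data fragment (paths taken from the question; single quotes so expansion happens in ubuntu's shell, not root's):
# append to the absolute path; ~ would resolve to /root here
echo "export SOME_PATH=/some/path" >> /home/ubuntu/.bashrc
# run as ubuntu and source the file inside that shell
su ubuntu -c '. /home/ubuntu/.bashrc && echo "$SOME_PATH"'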
I found out the problem. It seems that source ~/.bashrc isn't enough to refresh the shell's environment; the variables worked after I referenced them in another bash script.

How to make a docker entrypoint run as non-root for some particular commands only

My PHP image entrypoint is something like below. The entrypoint runs as root, and that is necessary in my case, so any command I run in the container runs as root. For some particular commands, though, I want to run as another user: e.g. when someone executes docker exec -it php composer install, composer should run as another user set in the entrypoint, and when someone executes docker exec -it php drush status, drush should run as another user set in the entrypoint. Probably an if or switch statement inside the entrypoint can help me. I was trying something like this https://unix.stackexchange.com/questions/476155/how-to-pass-multiple-parameters-to-su-user-c-command but passing parameters with a double dash (--) breaks some commands.
Dockerfile
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
entrypoint.sh
#!/bin/sh
set -e
# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
  set -- php-fpm "$@"
fi
exec "$@"
I'm not sure that I understand your use-case, but I use su-exec to drop privileges down to a non-root user within my entrypoint script. Most commonly I have to use this because I need to change permissions on a bind-mounted volume (usually /var/run/docker.sock).
Essentially I will do root level operations in my entrypoint, then drop down to a non-root user when executing the container service.
This blog post explains the concept using gosu; su-exec is a rewrite of gosu in C that comes in at about 10 kB instead of 1.8 MB: https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
Do note the security issues, which AFAIK are not a factor when using this in containers.
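A minimal sketch of that pattern adapted to the question's entrypoint (it assumes an image with su-exec installed and an existing non-root account; "app-user" is a placeholder name):
#!/bin/sh
set -e
# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
  set -- php-fpm "$@"
fi
case "$1" in
  composer|drush)
    # drop privileges for these specific commands only
    exec su-exec app-user "$@"
    ;;
esac
# everything else (php-fpm itself, shells, etc.) stays root
exec "$@"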

Set system ENV with the shell file inside the container

I'm trying to set a system-wide ENV variable with a shell script when the container runs. The problem is that when I look at the logs, printenv shows me MYENV=123, but when I echo it inside the container it is empty.
Dockerfile:
FROM ubuntu
ADD first.sh /opt/first.sh
RUN chmod +x /opt/first.sh
ADD second.sh /opt/second.sh
RUN chmod +x /opt/second.sh
ENTRYPOINT [ "/opt/first.sh" ]
first.sh
#!/bin/bash
source /opt/second.sh
printenv
tail -f /dev/null
second.sh
#!/bin/bash
BLA=`echo blabla 123 | sed 's/blabla //g'`
echo "${BLA}"
export MYENV=${BLA}
I don't want to use docker env options at docker run time or with docker-compose, because this workflow helps me change the env when I'm running the container.
The technique you've described will work fine. I'd write it slightly differently:
#!/bin/sh
. /opt/second.sh
exec "$@"
This will set environment variables for the main process in your container (and not ignore the CMD or anything you set on the command line). It won't affect any other shells you happen to launch with docker exec: they don't run as children of the container's main process and won't have "seen" these environment variable settings.
This technique won't make it particularly easier or harder to change environment variables in your container. Since the only way one process's environment can affect another's is by providing the initial environment when it starts up, even if you edit the second.sh in the live container (not generally a best practice) it won't affect the main process's environment (in your case, the tail command). This is one of a number of common situations where you need to at least restart the container to make changes take effect.
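For completeness, a sketch of how the wrapper pairs with the Dockerfile once first.sh ends in exec "$@" (the tail moves out of the script and into CMD so it becomes the main process):
ENTRYPOINT [ "/opt/first.sh" ]
CMD ["tail", "-f", "/dev/null"]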

How to run 'at' as a different user?

I often need to run at jobs as a different user. I've always done something like
$ echo "$PWD/batchToRun -parameters" | sudo su - otheruser -c "at now"
batchToRun is also scheduled to run via otheruser's crontab. This works out well until batchToRun starts depending on subtle side effects of environment variable settings, like LANG (sort, anyone?), that are passed in from the environment of the user running sudo.
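For instance, collation order differs between locales (assuming LC_ALL is unset, so LANG takes effect):
$ printf 'a\nB\n' | LANG=C sort
B
a
$ printf 'a\nB\n' | LANG=en_US.UTF-8 sort
a
B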
I typically don't want to log in as otheruser; it's a semi-privileged account and I would like a "paper trail" of its associated activity so that I can go back and see exactly what was done, by whom, when, etc.
Besides the obvious option of rewriting batchToRun to be independent of such settings, what's a good way to ensure that the sudoer's environment doesn't contaminate the target environment?
Note: this is on FC7 (sudo version 1.6.8p12) and other old distros, so any shiny new features of sudo/su/at (notably, the ability to pass an argument with -i to sudo) are outside my grasp.
Update: it turns out that the su - otheruser is actually a sufficient firewall between the users and that my contamination is coming from something in the interactive startup sequence. I still like the env edit capability, though.
You could strip the environment before you run at:
echo "command ..." | env - PATH="$PATH" sudo su - otheruser -c "at now"
You can also arrange for sudo to do this for you by setting the env_reset option. For example, you could give your user access to run the at command as otheruser directly (rather than sudo to root and then to the other user) and then set env_reset with a Defaults command for that user or that command (see the sudoers man page).
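A sketch of such a sudoers fragment (edit with visudo; "alice" is a hypothetical calling user):
# reset the environment whenever alice uses sudo
Defaults:alice env_reset
# let alice run only at, and only as otheruser
alice ALL = (otheruser) /usr/bin/at
With that in place, the invocation becomes: echo "$PWD/batchToRun -parameters" | sudo -u otheruser at now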
But the env - pipeline above is probably the easiest solution without changing how you're generally doing things today.
Any "contamination" of otheruser's environment should be limited to the at command's environment. When it actually comes time to run batchToRun, it will be run by otheruser using its typical default environment. That is, only at now is run in the shell spawned by sudo su.
