I don't want to use root, for safety, so I did as VS Code suggests. Here's my Dockerfile:
FROM ubuntu:focal
# non root user (https://code.visualstudio.com/remote/advancedcontainers/add-nonroot-user)
ARG USERNAME=dev
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Create the user
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& apt-get update \
&& apt-get install -y sudo \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
USER $USERNAME
I pass the current directory on github workflows:
docker run -u dev -v $PWD:/home/dev/project project /bin/bash -c "./my_script.sh"
but my_script.sh fails to create a directory because of permission problems. I also tried docker run -u $USER ..., but it does not find the user runner inside the container.
One option is to run as root: docker run -u root ..., but is there a better way? I also tried passing docker run -u dev ... and still get Permission Denied.
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
Those two lines defeat the entire reason for not running your container as root. They grant passwordless escalation to root, making the user effectively the same as having full root access.
docker run -u dev -v $PWD:/home/dev/project project /bin/bash -c "./my_script.sh"
but my_script.sh fails to create a directory with permission problems.
Host volumes are mounted with the same uid/gid as on the host (unless you're using user namespaces), so the uid/gid of the user inside the container needs to match the uid/gid that owns the host directory.
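As a concrete illustration (the uid values here are assumptions, not output from your runner), you can compare the owner of the checkout on the host with the user inside the container:

# On the host (the GitHub runner), check which uid owns the checkout:
ls -nd "$PWD"   # e.g. shows owner uid 1001
id -u           # e.g. 1001 for the runner user
# The bind mount keeps that uid inside the container, so a container user
# created with uid 1000 cannot write into a directory owned by uid 1001.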
I also tried docker run -u $USER ... but it does not find the user runner inside the container.
If you specify a username, it looks for that username inside the container's /etc/passwd. You can instead specify the uid/gid like:
docker run -u "$(id -u):$(id -g)" ...
Make sure the directory already exists on the host (which will be the case for $PWD) and that the user has access to write to that directory (which it should if you haven't done anything unusual in GHA).
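Putting that together with the command from the question, a sketch of the workflow step could look like this (the added -w flag is an assumption; the image and script names are from the question):

docker run \
  -u "$(id -u):$(id -g)" \
  -v "$PWD:/home/dev/project" \
  -w /home/dev/project \
  project /bin/bash -c "./my_script.sh"

Anything my_script.sh creates under the mount is then owned by the runner's uid/gid, so later workflow steps can clean it up without sudo.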
Related
I'm working on creating a Dockerfile that builds two volumes, /data/ and /artifacts/, and one user called "omnibo", and then assigns this user ownership of the two volumes. I tried using the chown command, but when I check, the volumes' ownership is still assigned to the root user.
This is what's in my Dockerfile script:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This script runs successfully when creating the container, but when doing "ll" it shows that these two volumes are owned by "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before the VOLUME instruction. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
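To check whether the reordering worked, you could build the image and inspect the ownership of the two directories (the image tag below is just an example):

docker build -t omnibo-test .
docker run --rm omnibo-test ls -ld /data /artifact
# If the chown now sticks, both directories show omnibo as the owner instead of root.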
I have a docker container created by the following Dockerfile:
ARG TAG=latest
FROM continuumio/miniconda3:${TAG}
ARG GROUP_ID=1000
ARG USER_ID=1000
ARG ORG=my-org
ARG USERNAME=user
ARG REPO=none
ARG COMMIT=none
ARG BRANCH=none
ARG MAKEAPI=True
RUN addgroup --gid $GROUP_ID $USERNAME
RUN adduser --uid $USER_ID --disabled-password --gecos "" $USERNAME --ingroup $USERNAME
COPY . /api_maker
RUN /opt/conda/bin/pip install pyyaml psutil packaging
RUN apt install -y openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/thekey"
RUN --mount=type=secret,id=thekey git clone git@github.com:$ORG/$REPO.git /repo
RUN /opt/conda/bin/python3 /api_maker/repo_setup.py $BRANCH $COMMIT
RUN /repo/root_script.sh
RUN chown -R $USERNAME:$USERNAME /api_maker
RUN chown -R $USERNAME:$USERNAME /repo
RUN mkdir -p /data
RUN chown -R $USERNAME:$USERNAME /data
RUN mkdir -p /working
RUN chown -R $USERNAME:$USERNAME /working
RUN mkdir -p /opt/conda/pkgs
RUN mkdir -p /opt/conda/envs
RUN chmod -R 777 /opt/conda
RUN touch /opt/conda/pkgs/urls.txt
USER $USERNAME
RUN /api_maker/user_env_setup.sh $MAKEAPI
CMD /repo/run_api.sh $@;
with the following run_api.sh script:
#!/bin/bash
cd /repo
PROCESSES=${1:-9}
LOCAL_DOCKER_PORT=${2:-7001}
exec /opt/conda/envs/environment/bin/gunicorn --bind 0.0.0.0:$LOCAL_DOCKER_PORT --workers=$PROCESSES restful_api:app
My app contains some signal handling. If I manually send SIGTERM to gunicorn (either the worker or the parent process) from inside the container, my signal handling works properly. However, it does not work right when I run docker stop on the container. How can I make my shell script properly forward the SIGTERM it is supposedly receiving?
You need to make sure the main container process is your actual application, and not a shell wrapper.
As you have the CMD currently, a shell invokes it. The argument list $@ will always be empty. The shell also parses /repo/run_api.sh and sees that it's followed by a semicolon, so it might need to do something else afterwards. So even though your script correctly ends with exec gunicorn ... to hand off control directly to the other process, it's still running underneath a shell, and when you docker stop the container, the signal goes to the shell wrapper instead.
The easiest way to avoid this shell is to use an exec form CMD:
CMD ["/repo/run_api.sh"]
This will cause your script to run directly, without a /bin/sh -c wrapper invoking it, and when the script eventually execs another process, that process becomes the main container process and will receive the docker stop signal.
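With the exec-form CMD the $@ from your original CMD goes away, but you can still pass the two positional arguments by overriding the command at run time; a rough example (the image and container names are made up):

docker run -d --name api my-image /repo/run_api.sh 4 8000
docker stop api   # SIGTERM now reaches gunicorn directly, since the script execs it as pid 1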
When entering my container, I want to log in as user ryan, in directory /home/ryan/cas, with the command eval "$(ssh-agent -c)" already run. Here is my Dockerfile:
FROM ubuntu:latest
ENV TZ=Australia/Sydney
RUN set -ex; \
# NOTE(Ryan): Prevent docker build hanging on timezone confirmation
ln -sf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone; \
apt update; \
apt install -y --no-install-recommends \
sudo ca-certificates git gnupg openssh-client vim; \
useradd -m ryan -g sudo; \
printf "ryan ALL=(ALL:ALL) NOPASSWD:ALL" | sudo EDITOR="tee -a" visudo; \
# NOTE(Ryan): Prevent sudo usage prompt appearing on startup
touch /home/ryan/.sudo_as_admin_successful; \
git clone https://github.com/ryan-mcclue/cas.git /home/ryan/cas; \
chmod 777 -R /home/ryan/cas;
ENTRYPOINT ["/bin/bash", "-l", "-c"]
USER ryan
WORKDIR /home/ryan/cas
CMD eval "$(ssh-agent -s)"
However, running ssh-add I still get Could not open a connection to your authentication agent, which indicates that the ssh-agent is not running. Manually typing eval "$(ssh-agent -c)" works.
I think you want to remove your ENTRYPOINT statement, and then you want:
USER ryan
WORKDIR /home/ryan/cas
CMD ["ssh-agent", "bash", "-l"]
This will get you a login shell, run under the control of ssh-agent (so you'll have the necessary SSH_* environment variables and an active socket available).
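A quick way to verify this, after rebuilding with those changes (the image tag is just an example):

docker build -t cas .
docker run --rm -it cas
# Inside the container:
echo "$SSH_AUTH_SOCK"   # points at the agent's socket
ssh-add -l              # reports "The agent has no identities." rather than
                        # "Could not open a connection to your authentication agent."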
To understand what's happening with your container, try running from the command line:
bash -l -c 'eval $(ssh-agent -s)'
What happens? The shell exits immediately, because running ssh-agent -s causes the agent to background itself, which looks pretty much the same as "exiting". Since you passed the -c flag, and the command given to -c has exited, the parent bash shell exits as well.
I am trying to match the host UID with the container UID, as below.
Dockerfile
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
USER deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
whoami # it outputs `deploy`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
gosu root usermod -u ${HOST_CURRENT_USER_ID} deploy
gosu root groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
whoami # It outputs "unknown user id 1000".
Please note the output of whoami above. Even though I changed the UID of deploy to the host UID, the entrypoint script's process doesn't change, since the entrypoint shell was started with UID 1000.
So I came up with a solution: make two entrypoint scripts, one to change the UID and another for the container's bootstrap process, which will run in a separate shell after I change the UID of deploy. So how can I make the two entrypoints run one after another? E.g. something like
ENTRYPOINT ["/fix-uid.sh && /entrypoint.sh"]
It looks like you're designing a solution very similar to one that I've created. As ErikMD mentions, do not use gosu to switch from a user to root; you want to go the other way, from root to a user. Otherwise, you will have an open security hole inside your container where any user can become root, defeating the purpose of running a container as a different user id.
For the solution that I put together, it works whether the container is run in production as just a user with no volume mounts, or in development with volume mounts by initially starting the container as root. You can keep an identical Dockerfile and change the entrypoint to something along the lines of:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  fix-perms -r -u deploy -g deploy /var/www/${PROJECT_NAME}
  exec gosu deploy "$@"
else
  exec "$@"
fi
The fix-perms script above is from my base image, and includes the following bit of code:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=`getent passwd "${opt_u}" | cut -f3 -d:`
  NEW_UID=`ls -nd "$1" | awk '{print $3}'`
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
(Note, I really like your use of stat -c and will likely be updating my fix-perms script to leverage that over the ls command I have in there now.)
The important part to this is running the container. When you need the fix-perms code to run (which for me is only in development), I start the container as root. This can be a docker run -u root:root ... or user: "root:root" in a compose file. That launches the container as root initially, which triggers the first half of the if/else in the entrypoint that runs fix-perms and then runs a gosu deploy to drop from root to deploy before calling "$@", which is your command (CMD). The end result is pid 1 in the container is now running your command as the deploy user.
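As a sketch of the two modes (the image name, project name, and mount path are placeholders):

# Development: start as root so the entrypoint can run fix-perms, then gosu drops to deploy
docker run -u root:root -e PROJECT_NAME=myproject \
  -v "$PWD:/var/www/myproject" my-image

# Production: no bind mounts, run directly as the user baked into the image
docker run my-image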
As an aside, if you really want an easier way to run multiple entrypoint fragments in a way that's easy to extend with child images, I use an entrypoint.d folder that is processed by an entrypoint script in my base image. The code to implement that logic is as simple as:
for ep in /etc/entrypoint.d/*.sh; do
  if [ -x "${ep}" ]; then
    echo "Running: ${ep}"
    "${ep}"
  fi
done
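A child image then only needs to drop an executable fragment into that folder; for example (the base image and file name below are placeholders):

FROM my-base-image
COPY 50-migrate.sh /etc/entrypoint.d/50-migrate.sh
RUN chmod +x /etc/entrypoint.d/50-migrate.sh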
All of this can be seen, along with an example using nginx, at: https://github.com/sudo-bmitch/docker-base
The behavior you observe seems fairly normal: in your entrypoint script you changed the UID associated with the username deploy, but the two whoami commands still run as the same user (a user is identified by its UID in the first place, not by its username).
For more information about UIDs and GIDs in a Docker context, see e.g. that reference.
Note also that using gosu to re-become root is not a standard practice (see in particular that warning in the upstream doc).
For your use case, I'd suggest removing the USER deploy command and switch user in the very end, by adapting your entrypoint script as follows:
Dockerfile
(…)
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
#!/bin/sh
whoami # it outputs `root`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
usermod -u ${HOST_CURRENT_USER_ID} deploy
groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
# don't forget the "exec" builtin
exec gosu ${HOST_CURRENT_USER_ID}:${HOST_CURRENT_USER_ID} "$@"
This can be tested using id, for example:
$ docker build -t test-gosu .
$ docker run --rm -it test-gosu /bin/sh
$ id   # run inside the container's shell
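If you additionally bind-mount a host directory and set PROJECT_NAME, the uid/gid reported by id should match the host owner of that directory; a hypothetical run (host uid 1001) might look like:

$ docker run --rm -e PROJECT_NAME=myapp -v "$PWD:/var/www/myapp" test-gosu id
uid=1001(deploy) gid=1001(deploy) groups=1001(deploy)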
I have a script that needs to be run as root. In this script I create directories and files. The files and directories cannot be modified by the user who ran the script (unless they're root, of course).
I have tried several solutions found here and on other sites. First I tried to mkdir -m 777 the directories, like so:
#!/bin/bash
...
#Check execution location
CDIR=$(pwd)
#File setup
DATE=$(date +"%m-%d_%H:%M:%S")
LFIL="$CDIR/android-tools/logcat/logcat_$DATE.txt"
BFIL="$CDIR/android-tools/backup/backup_$DATE"
mkdir -m 777 -p "$CDIR/android-tools/logcat/"
mkdir -m 777 -p "$CDIR/android-tools/backup/"
...
I have also tried touching every created file and directory as $USER while running as root, like so:
#!/bin/bash
...
#Check execution location
CDIR=$(pwd)
#File setup
DATE=$(date +"%m-%d_%H:%M:%S")
LFIL="$CDIR/android-tools/logcat/logcat_$DATE.txt"
BFIL="$CDIR/android-tools/backup/backup_$DATE"
mkdir -p "$CDIR/android-tools/logcat/"
mkdir -p "$CDIR/android-tools/backup/"
sudo -u $USER touch "$CDIR/"
sudo -u $USER touch "$CDIR/android-tools/"
sudo -u $USER touch "$CDIR/android-tools/logcat/"
sudo -u $USER touch "$CDIR/android-tools/backup/"
sudo -u $USER touch "$CDIR/android-tools/logcat/logcat_*.txt"
sudo -u $USER touch "$CDIR/android-tools/logcat/Backup_*"
...
I have also tried manually running sudo chmod 777 /android-tools/* and sudo chmod 777 /* from the script directory; it gave no errors, but I still cannot delete the files without root permissions.
Here's the full script. It's not done yet. Don't run it with an Android device connected to your computer.
http://pastebin.com/F20rLJQ4
touch doesn't change ownership. I think you want chown.
If you're using sudo to run your script, $USER is root, but $SUDO_USER is the user who ran sudo, so you can use that.
If you're not using sudo, you can't trust $USER to be anything in particular. The caller can set it to anything (like "root cat /etc/shadow", which would make your above script do surprising things you didn't want it to do because you said $USER instead of "$USER").
If you're running this script using setuid, you need something safer, like id -u, to get the calling process's legitimate UID regardless of what arbitrary string happens to be in $USER.
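To see the difference concretely (the username and uid below are illustrative):

$ echo "$USER"; id -u; echo "$SUDO_USER"                 # without sudo: alice / 1000 / (empty)
$ sudo sh -c 'echo "$USER"; id -u; echo "$SUDO_USER"'    # under sudo:   root  / 0    / alice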
If you cover both possibilities by making makestuff.sh like this:
#!/bin/bash
# $SUDO_USER if set, otherwise the current user
caller="${SUDO_USER:-$(id -u)}"
mkdir -p foo/bar/baz
chown -R "$caller" foo
Then you can use it this way:
sudo chown root makestuff.sh
sudo chmod 755 makestuff.sh
# User runs it with sudo
sudo ./makestuff.sh
# User can remove the files
rm -r foo
Or this way (if you want to use setuid so regular users can run the script without having sudo access -- which you probably don't, because you're not being careful enough for that):
sudo chown root makestuff.sh
sudo chmod 4755 makestuff.sh # Danger! I told you not to do this.
# User runs it without sudo
./makestuff.sh
# User can remove the files
rm -r foo