How to run multiple entrypoint scripts one after another inside a Docker container?

I am trying to match the host UID with container UID as below.
Dockerfile
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
USER deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
whoami # it outputs `deploy`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
gosu root usermod -u ${HOST_CURRENT_USER_ID} deploy
gosu root groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
whoami # It outputs as unknown user id 1000.
Please note the output of whoami above. Even though I changed the UID of deploy to match the host UID, the entrypoint script's process doesn't pick up the change, since the entrypoint shell was already started with UID 1000 (which no longer maps to a username).
So I came up with a solution: use two entrypoint scripts, one that changes the UID and another for the container's bootstrap process, which would run in a separate shell after the UID of deploy has been changed. So how can I make two entrypoints run one after another? E.g. something like
ENTRYPOINT ["/fix-uid.sh && /entrypoint.sh"]

It looks like you're designing a solution very similar to one that I've created. As ErikMD mentions, do not use gosu to switch from a user to root; you want to go the other way, from root to a user. Otherwise, you will have an open security hole inside your container where any user can become root, defeating the purpose of running the container as a different user id.
The solution I put together works whether the container is run in production as just a user with no volume mounts, or in development with volume mounts where the container is initially started as root. You can keep an identical Dockerfile and change the entrypoint to something along the lines of:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
  fix-perms -r -u deploy -g deploy /var/www/${PROJECT_NAME}
  exec gosu deploy "$@"
else
  exec "$@"
fi
The fix-perms script above is from my base image, and includes the following bit of code:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=`getent passwd "${opt_u}" | cut -f3 -d:`
  NEW_UID=`ls -nd "$1" | awk '{print $3}'`
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
(Note, I really like your use of stat -c and will likely be updating my fix-perms script to leverage that over the ls command I have in there now.)
The important part of this is running the container. When you need the fix-perms code to run (which for me is only in development), I start the container as root. This can be a docker run -u root:root ... or user: "root:root" in a compose file. That launches the container as root initially, which triggers the first half of the if/else in the entrypoint: it runs fix-perms and then gosu deploy to drop from root to deploy before calling "$@", which is your command (CMD). The end result is that pid 1 in the container is now running your command as the deploy user.
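For example, a development-time compose override might look roughly like this (a sketch; the service name and host path are placeholders):
# docker-compose.override.yml, used only in development
services:
  app:
    user: "root:root"           # start as root so the entrypoint can run fix-perms
    volumes:
      - ./:/var/www/myproject   # bind mount whose owner's UID the container should adopt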
As an aside, if you really want an easy way to run multiple entrypoint fragments in a way that's easy to extend with child images, I use an entrypoint.d folder that is processed by an entrypoint script in my base image. The code to implement that logic is as simple as:
for ep in /etc/entrypoint.d/*.sh; do
  if [ -x "${ep}" ]; then
    echo "Running: ${ep}"
    "${ep}"
  fi
done
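A child image can then contribute its own fragment without touching the parent's entrypoint, for example (a sketch; the base image name and script are placeholders):
# child image Dockerfile
FROM my-base-image
COPY 20-migrate.sh /etc/entrypoint.d/20-migrate.sh
RUN chmod +x /etc/entrypoint.d/20-migrate.sh
Fragments are picked up in lexical order, so numeric prefixes control the sequence.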
All of this can be seen, along with an example using nginx, at: https://github.com/sudo-bmitch/docker-base

The behavior you observe seems fairly normal: in your entrypoint script, you changed the UID associated with the username deploy, but the two whoami commands are still run with the same user (identified by the UID in the first place, not the username).
For more information about UIDs and GIDs in a Docker context, see e.g. that reference.
Note also that using gosu to re-become root is not a standard practice (see in particular that warning in the upstream doc).
For your use case, I'd suggest removing the USER deploy command and switching user at the very end, by adapting your entrypoint script as follows:
Dockerfile
(…)
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
#!/bin/sh
whoami # it outputs `root`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
  usermod -u ${HOST_CURRENT_USER_ID} deploy
  groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
# don't forget the "exec" builtin
exec gosu ${HOST_CURRENT_USER_ID}:${HOST_CURRENT_USER_ID} "$@"
this can be tested using id, for example:
$ docker build -t test-gosu .
$ docker run --rm -it test-gosu /bin/sh
$ id
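To see the UID remapping take effect, bind-mount a host directory at the path the entrypoint stats; a sketch (PROJECT_NAME and the host path are placeholders):
$ docker run --rm -it -e PROJECT_NAME=myapp -v "$PWD":/var/www/myapp test-gosu id
# id should now report the same UID/GID as the owner of $PWD on the host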

Related

How to run ENTRYPOINT as root and switch to non-root to run CMD using gosu?

To connect to my container from the Azure WebApp admin I need to start an SSH server at startup. Then I need to run the web server once the DB is up.
In my Dockerfile I create a dedicated non-root user to run the web server.
RUN groupadd -g 1000 wagtail && \
useradd -u 1000 wagtail -m -d /home/wagtail -g wagtail
I copy startup-ssh.sh and startup-main.sh scripts into the container:
COPY startup-ssh.sh /app/
COPY startup-main.sh /app/
RUN chmod +x /app/startup-ssh.sh
RUN chmod +x /app/startup-main.sh
ENTRYPOINT ["/bin/bash", "-c", "/app/startup-ssh.sh"]
CMD ["/bin/bash", "-c", "/app/startup-main.sh"]
In the startup-ssh.sh I start the ssh server and then use gosu to switch user:
#!/bin/bash
# start ssh server
sed -i "s/SSH_PORT/$SSH_PORT/g" /etc/ssh/sshd_config
/usr/sbin/sshd
# restore /app directory rights
chown -R wagtail:wagtail /app
# switch to the non-root user
exec gosu wagtail "$@"
I expect the CMD's startup-main.sh script to be executed next but I get this in the Docker Desktop logs when the container is started.
Exited(1)
Usage: gosu user-spec command [args]
   ie: gosu tianon bash
       gosu nobody:root bash -c 'whoami && id'
       gosu 1000:1 id
gosu version: 1.10 (go1.11.5 on linux/amd64; gc)
license: GPL-3 (full text at https://github.com/tianon/gosu)
I believe that Docker Desktop uses root when connecting to the container.
Maybe I'm missing something critical and/or obvious. Please point me in the right direction.
The code passes no arguments to the script. Imagine it like this:
bash -c '/app/startup-ssh.sh <NO ARGUMENTS HERE>' ignored ignored2 ignored3...
Test:
bash -c 'echo' 1
bash -c 'echo' 1 2
bash -c 'echo $0' 1 2 3
bash -c 'echo $1' 1 2 3
bash -c 'echo "$@"' 1 2 3
You want:
ENTRYPOINT ["/bin/bash", "-c", "/app/startup-ssh.sh \"$@\"", "--"]
Or, since the file has a shebang and is executable, why use an explicit shell at all? Really, just:
ENTRYPOINT ["/app/startup-ssh.sh"]
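With an exec-form ENTRYPOINT like that, the CMD simply becomes its arguments, so the pair could be reduced to something like (a sketch keeping the existing script names):
ENTRYPOINT ["/app/startup-ssh.sh"]
CMD ["/app/startup-main.sh"]
Then "$@" inside startup-ssh.sh expands to /app/startup-main.sh, and the final exec gosu wagtail "$@" runs it as the wagtail user.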

How to catch SIGTERM properly in Docker?

I have a docker container created by the following Dockerfile:
ARG TAG=latest
FROM continuumio/miniconda3:${TAG}
ARG GROUP_ID=1000
ARG USER_ID=1000
ARG ORG=my-org
ARG USERNAME=user
ARG REPO=none
ARG COMMIT=none
ARG BRANCH=none
ARG MAKEAPI=True
RUN addgroup --gid $GROUP_ID $USERNAME
RUN adduser --uid $USER_ID --disabled-password --gecos "" $USERNAME --ingroup $USERNAME
COPY . /api_maker
RUN /opt/conda/bin/pip install pyyaml psutil packaging
RUN apt install -y openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/thekey"
RUN --mount=type=secret,id=thekey git clone git@github.com:$ORG/$REPO.git /repo
RUN /opt/conda/bin/python3 /api_maker/repo_setup.py $BRANCH $COMMIT
RUN /repo/root_script.sh
RUN chown -R $USERNAME:$USERNAME /api_maker
RUN chown -R $USERNAME:$USERNAME /repo
RUN mkdir -p /data
RUN chown -R $USERNAME:$USERNAME /data
RUN mkdir -p /working
RUN chown -R $USERNAME:$USERNAME /working
RUN mkdir -p /opt/conda/pkgs
RUN mkdir -p /opt/conda/envs
RUN chmod -R 777 /opt/conda
RUN touch /opt/conda/pkgs/urls.txt
USER $USERNAME
RUN /api_maker/user_env_setup.sh $MAKEAPI
CMD /repo/run_api.sh $@;
with the following run_api.sh script:
#!/bin/bash
cd /repo
PROCESSES=${1:-9}
LOCAL_DOCKER_PORT=${2:-7001}
exec /opt/conda/envs/environment/bin/gunicorn --bind 0.0.0.0:$LOCAL_DOCKER_PORT --workers=$PROCESSES restful_api:app
My app contains some signal handling. If I manually send SIGTERM to gunicorn (either the worker or the parent process) from inside the container, my signal handling works properly. However, it does not work right when I run docker stop on the container. How can I make my shell script properly forward the SIGTERM it is supposedly receiving?
You need to make sure the main container process is your actual application, and not a shell wrapper.
As you have the CMD currently, a shell invokes it. The argument list $@ will always be empty. The shell parses /repo/run_api.sh and sees that it's followed by a semicolon, so the shell keeps running in case there is something else to do afterwards. So even though your script correctly ends with exec gunicorn ... to hand off control directly to the other process, it is still running underneath a shell, and when you docker stop the container, the signal goes to the shell wrapper.
The easiest way to avoid this shell is to use an exec form CMD:
CMD ["/repo/run_api.sh"]
This will cause your script to run directly, without having a /bin/sh -c wrapper invoking it, and when the script eventually exec another process, that process becomes the main process and will receive the docker stop signal.
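One quick way to confirm the fix (the container name is a placeholder):
$ docker top my_container    # the first process listed is the container's main process; it should be gunicorn, not sh
$ docker stop my_container   # SIGTERM is now delivered straight to gunicorn, so your handlers run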

Unable to Find Entrypoint For Nextcloud (Alpine-based Version) For a Cron Container

I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
  bash -c 'bash -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while /bin/true; do
    su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not have bash, and so it cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working with my docker-compose.yml file.
I don't want to use an external file; I'd rather have the script entirely in the docker-compose.yml file, to make preparation and changes a bit easier.
Thank you!
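For reference, the bash-only parts of that snippet translate fairly directly to ash/sh; a rough sketch of the same loop for the Alpine image (paths as in the original, busybox su assumed to accept the same options):
entrypoint: |
  sh -c 'sh -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while /bin/true; do
    su -s "/bin/sh" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'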

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me a "No such file or directory" error, because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some escape characters like quotes ("") or parentheses (()) are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about escape characters.
To run multiple commands in docker, use /bin/bash -c and semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
In case command2 (python) should be executed only if command1 (cd) returned a zero (no error) exit status, use && instead of ;:
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
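A quick illustration of the difference, using a directory that does not exist (/missing/dir is just a placeholder):
docker run image_name /bin/bash -c "cd /missing/dir; python a.py"    # python still runs, from the wrong directory
docker run image_name /bin/bash -c "cd /missing/dir && python a.py"  # stops after the failed cd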
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside Docker container, bash -c "<command1> | <command2>" for example:
docker run img /bin/bash -c "ls -1 | wc -l"
But note that without invoking a shell inside the container, the pipe is interpreted by your local shell, and the output is redirected in your local terminal instead.
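To make that distinction concrete (a sketch):
docker run img /bin/bash -c "ls -1 | wc -l"   # the pipe runs inside the container
docker run img ls -1 | wc -l                  # the pipe runs in your local shell, counting docker's local output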
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
Just to make a proper answer from @Eddy Hernandez's comment, which is very correct since Alpine comes with ash, not bash.
The question now refers to starting a shell in the Docker Alpine container, which implies using sh, ash, /bin/sh or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this.
RES_FILE=$(readlink -f /tmp/result.txt)
touch "${RES_FILE}"   # the file must exist first, otherwise Docker will create a directory at that path
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt on your local machine.
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, just put a set of parentheses around the multiple commands inside the shell invocation:
docker run image bash -c "(cd /path/to/somewhere && python a.py)"
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer is below.
Nobody has mentioned that docker run image_name /bin/bash -c just appends a command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you build this image with the name echo and run:
$ docker run echo /bin/sh -c date
You will get your command appended to the entrypoint, so the result would be echo "/bin/sh -c date" (the string is printed, not executed).
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
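A quick way to check whether an image defines an entrypoint (and therefore whether your command will be appended to it) is docker inspect:
$ docker inspect --format '{{.Config.Entrypoint}}' image_name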
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
In many situations, perhaps also consider rewriting a.py so that it doesn't need a wrapper. Either make it os.chdir() where it needs to be, or have it look for its data files in a directory you configure in its environment or similar.

Dockerfile CMD not running at container start

So I've written a Dockerfile for a project and defined a CMD to run when the container starts, to bootstrap the application.
The Dockerfile looks like
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff like cloning the right repo, setting environment vars, as well as starting supervisor.
The start script begins with this
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container I see that supervisor hasn't been started, and neither have nginx or php5-fpm. The /tmp/start.txt file with the timestamp from the startContainer script doesn't exist either, showing it never ran the CMD in the Dockerfile.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
says 'run /bin/bash' when instantiating the container, i.e. it overrides and skips the CMD.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
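If you still need an interactive shell for debugging, exec into the running container instead of overriding the command (the container name is a placeholder):
docker exec -it my_container /bin/bash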
