Why does my bash script not run properly on every docker container that starts up? - bash

My Dockerfile copies an init.sh script to the container.
# DOCKERFILE
FROM ubuntu:latest
# a bunch of installation commands
COPY init.sh /
ENTRYPOINT bash init.sh
EXPOSE 80
And I have a docker-compose file with 2 services:
Service1: This service is being scaled.
Service2: Database
I have it so that when the Service1 container starts up, this script will run.
#!/bin/bash
# script
# Missing files directory
if [[ ! -e /var/www/drupal/sites/default/files ]]; then
mkdir /var/www/drupal/sites/default/files
chmod a+w /var/www/drupal/sites/default/files
fi
# Missing settings file
cp /var/www/drupal/sites/default/default.settings.php /var/www/drupal/sites/default/settings.php
chmod a+w /var/www/drupal/sites/default/settings.php
# Install Drush & Install Drupal
cd /var/www/drupal && composer require --dev drush/drush
cd /var/www/drupal && vendor/bin/drush site-install standard \
--db-url=mysql://root:random@mariadb:3306/drupaldb -y \
--site-name=ExampleWebsite \
--account-name=random \
--account-pass=random
# Post-Installation Steps
chmod go-w /var/www/drupal/sites/default/settings.php
chmod go-w /var/www/drupal/sites/default
cd /var/www/drupal && vendor/bin/drush cache-rebuild
/usr/sbin/apache2ctl -D FOREGROUND
However, when I start the containers with docker-compose up -d --scale Service1=5, some of the containers run the script properly on startup but some don't. For the ones that don't, I have to go into the container and manually run the script, and then it's fine.
Shouldn't all the containers be identical and run the same script properly?
Instead, I have to manually go into some of the containers and run the script.

Related

How to catch SIGTERM properly in Docker?

I have a docker container created by the following Dockerfile:
ARG TAG=latest
FROM continuumio/miniconda3:${TAG}
ARG GROUP_ID=1000
ARG USER_ID=1000
ARG ORG=my-org
ARG USERNAME=user
ARG REPO=none
ARG COMMIT=none
ARG BRANCH=none
ARG MAKEAPI=True
RUN addgroup --gid $GROUP_ID $USERNAME
RUN adduser --uid $USER_ID --disabled-password --gecos "" $USERNAME --ingroup $USERNAME
COPY . /api_maker
RUN /opt/conda/bin/pip install pyyaml psutil packaging
RUN apt install -y openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/thekey"
RUN --mount=type=secret,id=thekey git clone git@github.com:$ORG/$REPO.git /repo
RUN /opt/conda/bin/python3 /api_maker/repo_setup.py $BRANCH $COMMIT
RUN /repo/root_script.sh
RUN chown -R $USERNAME:$USERNAME /api_maker
RUN chown -R $USERNAME:$USERNAME /repo
RUN mkdir -p /data
RUN chown -R $USERNAME:$USERNAME /data
RUN mkdir -p /working
RUN chown -R $USERNAME:$USERNAME /working
RUN mkdir -p /opt/conda/pkgs
RUN mkdir -p /opt/conda/envs
RUN chmod -R 777 /opt/conda
RUN touch /opt/conda/pkgs/urls.txt
USER $USERNAME
RUN /api_maker/user_env_setup.sh $MAKEAPI
CMD /repo/run_api.sh $@;
with the following run_api.sh script:
#!/bin/bash
cd /repo
PROCESSES=${1:-9}
LOCAL_DOCKER_PORT=${2:-7001}
exec /opt/conda/envs/environment/bin/gunicorn --bind 0.0.0.0:$LOCAL_DOCKER_PORT --workers=$PROCESSES restful_api:app
My app contains some signal handling. If I manually send SIGTERM to gunicorn (either the worker or the parent process) from inside the container, my signal handling works properly. However, it does not work right when I run docker stop on the container. How can I make my shell script properly forward the SIGTERM it is supposedly receiving?
You need to make sure the main container process is your actual application, and not a shell wrapper.
As you have the CMD currently, a shell invokes it. The argument list $@ will always be empty. The shell parses /repo/run_api.sh and sees that it's followed by a semicolon, so it might need to do something else afterwards, and therefore the shell itself stays around as the main container process. So even though your script correctly ends with exec gunicorn ... to hand off control directly to the other process, it's still running underneath a shell, and when you docker stop the container, the signal goes to the shell wrapper instead of to gunicorn.
The easiest way to avoid this shell is to use an exec form CMD:
CMD ["/repo/run_api.sh"]
This will cause your script to run directly, without a /bin/sh -c wrapper invoking it, and when the script eventually execs another process, that process becomes the main container process and will receive the docker stop signal.
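As a quick sanity check (a sketch, assuming a container named api is running with the new image), you can confirm that gunicorn really is PID 1 after the change:
docker exec api cat /proc/1/comm
# should print gunicorn, not sh or bash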

How to use docker ENTRYPOINT with shell script file combine parameter

I write shell script file and use this with docker ENTRYPOINT
but when I run the docker image, it just stops without any error log, because of the entrypoint line
my Dockerfile
FROM ubuntu:16.04
MAINTAINER limtaegeun <imori333@gmail.com>
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
ENV CONTAINER_NAME nodejs
ENV SERVER_NAME myserver.com
ENV PEM_PATH /etc/nginx/certs/cert.pem
ENV KEY_PATH /etc/nginx/certs/cert.key
WORKDIR /etc/nginx
ADD ./sites-available/ssl /etc/nginx/sites-available/ssl
ADD ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 777 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT /etc/nginx/docker-entrypoint.sh ${CONTAINER_NAME} ${SERVER_NAME} ${PEM_PATH} ${KEY_PATH}
CMD ["nginx"]
docker-entrypoint.sh
#!/bin/sh
CONTAINER_NAME=$1
SERVER_NAME=$2
PEM_PATH=$3
KEY_PATH=$4
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#SERVER_NAME#'${SERVER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#PEM_PATH#'${PEM_PATH}'#' /etc/nginx/sites-available/ssl
sed -ri 's#KEY_PATH#'${KEY_PATH}'#' /etc/nginx/sites-available/ssl
# cp -f sites-available/ssl sites-available/default
ln -s /etc/nginx/sites-available/ssl /etc/nginx/sites-enabled/default
my docker run command
docker run -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
--name nginx-ssl -p 443:443 -p 80:80 --network nginx-net --rm -d nginx-cloudfare-ssl-proxy
what is the problem??
When a Docker container is run, it runs the ENTRYPOINT (only), passing the CMD as command-line parameters, and when the ENTRYPOINT completes the container exits. In the Dockerfile the ENTRYPOINT has to use JSON-array syntax for it to be able to see the CMD arguments, and the script itself needs to actually run the CMD, typically with a line like exec "$@".
The single simplest thing you can do to clean this up is not to try to go back and forth between environment variables and positional parameters. The ENTRYPOINT script will be able to directly read the ENV variables you set in the Dockerfile (or override with docker run -e options). So if you delete the first lines of the script that set these variables from positional parameters, and make sure to run the CMD
#!/bin/sh
# delete the lines that set CONTAINER_NAME et al.
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
...
# and add this at the end
exec "$#"
and then change the Dockerfile to not pass positional parameters but do use JSON-array syntax for ENTRYPOINT
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]
that should get you off the ground.
It's worth considering how much of this you actually need to be configurable. For instance, would you ever need a path different from the default /etc/nginx/certs inside the isolated container filesystem space? Usually, with the standard nginx Docker Hub image, you work by injecting an entire complete configuration file, and if you choose to do that it simplifies your Docker setup.
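For example, with the stock nginx image from Docker Hub that approach looks roughly like this (a sketch; nginx.conf here stands in for a complete configuration file you maintain yourself):
docker run -d --name nginx-ssl -p 80:80 -p 443:443 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v "$PWD/certs:/etc/nginx/certs:ro" \
  nginx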
Other generic suggestions: remove the VOLUME declarations (they potentially cause confusing behavior later in the Dockerfile and leak anonymous volumes and aren't otherwise necessary); don't make executable files world-writable (chmod 0755, not 0777); RUN apt-get update && apt-get install in the same Dockerfile command.
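Putting those suggestions and the JSON-array ENTRYPOINT together, the Dockerfile from the question would end up roughly like this (a sketch based on the paths and names above, not a tested drop-in replacement):
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ENV CONTAINER_NAME=nodejs
ENV SERVER_NAME=myserver.com
ENV PEM_PATH=/etc/nginx/certs/cert.pem
ENV KEY_PATH=/etc/nginx/certs/cert.key
WORKDIR /etc/nginx
COPY ./sites-available/ssl /etc/nginx/sites-available/ssl
COPY ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 0755 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]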

docker run fails to run shell script (file not found) although the shell script was added successfully with docker build

I am using a private Docker Hub repository (https://hub.docker.com/u/privaterepoexample/), and I have built, tagged, and pushed my docker image using the commands below:
docker login
docker build -t privaterepoexample/sre:local .
docker tag 85cf9475bc1c privaterepoexample/sre
docker push privaterepoexample/sre
The output of docker build which shows login.sh added to container:
Executing busybox-1.29.3-r10.trigger
OK: 85 MiB in 57 packages
Removing intermediate container 12fd67450dfc
---> e9ca0b9e4ac4
Step 5/7 : WORKDIR /opt
---> Running in ce881ede94aa
Removing intermediate container ce881ede94aa
---> 2335b4f522ac
Step 6/7 : ADD login.sh /opt
---> 2aabf1712153
Step 7/7 : CMD ["chmod 755 login.sh && ./login.sh"]
---> Running in 8ec824d4e561
Removing intermediate container 8ec824d4e561
---> c97a4ad61578
Successfully built c97a4ad61578
Successfully tagged privaterepoexample/sre:local
The Dockerfile below is built successfully and login.sh is added successfully:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
curl
FROM openjdk:8-jre-alpine
RUN apk --no-cache add curl
WORKDIR /opt
ADD login.sh /opt
CMD ["chmod 755 login.sh && ./login.sh"]
Now here comes with my problem, when I execute docker run like below, I get the error:
docker run -i privaterepoexample/sre
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"chmod 755 login.sh && ./login.sh\": stat chmod 755 login.sh && ./login.sh: no such file or directory": unknown.
but why does it say no such file? given when I go inside the docker container, I can see the login.sh script with the command below:
$ docker run -it privaterepoexample/sre /bin/sh
/opt # ls
login.sh
/opt # cat login.sh
#!/bin/sh
# Black Box Tester!
content=$(curl --location --request POST "https://api.platform.abc.com/auth/oauth/token" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header 'Authorization: Basic ' \
  --data-raw 'grant_type=password&username=event@abc.com&password=fJff' | jq -r '.domain_id')
if [ $content = abc ]
then
echo “Valid Login Token”
else
echo “invalid url”
fi
/opt # exit
You get the error no such file or directory because you are using the exec form of CMD in an unexpected way: in exec form, the first element of the JSON array must be an executable file, and there is no file literally named chmod 755 login.sh && ./login.sh.
You can fix your Dockerfile in several ways, e.g.:
either use a CMD in shell form:
CMD chmod 755 login.sh && ./login.sh
or keep a CMD in exec form (which is often a good idea), but ensure the first argument of the JSON array is a program, not a composite command. You can do this e.g. by running chmod 755 … beforehand, at build time:
ADD login.sh /opt
RUN chmod 755 login.sh
CMD ["./login.sh"]
For more information on the CMD command and its brother command ENTRYPOINT, see also this other SO answer: CMD doesn't run after ENTRYPOINT in Dockerfile
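For reference, the two forms roughly translate as follows (a sketch of what Docker ends up executing, not the literal runtime internals):
# shell form: the whole string is handed to a shell
CMD chmod 755 login.sh && ./login.sh      # effectively /bin/sh -c 'chmod 755 login.sh && ./login.sh'
# exec form: the array is used verbatim as argv, with no shell in between
CMD ["./login.sh"]                        # effectively ./login.sh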

Dockerfile commands not executing in MacOS

I am trying to build a container using a Dockerfile which has some statements to execute, as below:
# Create folder for caching files
RUN mkdir -p /Library/WebServer/docroot/publish
RUN chown -R daemon:daemon /Library/WebServer/docroot
I am using below command to build :
$ docker build --no-cache -t dispatcher-apache -f Dockerfile
I can see the below execution :
Step 7/10 : RUN mkdir -p /Library/WebServer/docroot/publish
---> Running in 4c8f7c3e2238
But the directory isn't created at that location when I check:
-bash: cd: /Library/WebServer/docroot/publish: No such file or directory
However, if I create commands from terminal, it works fine.
Dockerfile :
FROM httpd:2.4
# Copy dispatcher module
RUN mkdir -p /private/libexec/apache2/
COPY ./apache2-modules/ /private/libexec/apache2/
RUN ln -s /private/libexec/apache2/dispatcher-apache2.4-4.2.3.so /private/libexec/apache2/mod_dispatcher.so
# Copy new apache dependencies
RUN mkdir -p /private/etc/apache2/conf
COPY ./publish/etc/httpd/conf.d/ /private/etc/apache2/conf/
# Create folder for caching files
RUN mkdir -p /Library/WebServer/docroot/publish
RUN chown -R daemon:daemon /Library/WebServer/docroot
# Create folder for log files
RUN mkdir -p /private/var/log/apache2
# Replace httpd.conf with enabled modules
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
EDIT after some help:
Now, after the build, I started the container, and below is the error:
$ docker run -dit -e HOSTIP=$(ipconfig getifaddr en0) --rm --name dispatcher-app -p 8080:80 dispatcher-apache
6a032a50be846bef06027976b990da27bcb446c28d582cf6c3a4dc4ad4361e1c
$ docker exec -it dispatcher-app /bin/bash
Error: No such container: dispatcher-app
Any troubleshooting tips?

Dockerfile CMD not running at container start

So I've written a Dockerfile for a project and defined a CMD to run on starting the container to bootstrap the application.
The Dockerfile looks like this:
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff like cloning the right repo and setting environment vars, as well as starting supervisor.
The start script begins with this:
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container I see that supervisor hasn't been started, and neither has nginx or php5-fpm. The /tmp/start.txt file with a timestamp set from the startContainer script doesn't exist, showing it never ran the CMD in the Dockerfile.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
says "run /bin/bash as the container's command". The /bin/bash at the end overrides the CMD from the Dockerfile, i.e. the CMD is skipped and startContainer never runs.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
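If you still want an interactive shell for poking around while the CMD keeps running, you can open one in the already-running container instead of overriding the command (a sketch, using whatever name or ID the container has):
docker exec -it <container_id> /bin/bash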
