How to use docker ENTRYPOINT with a shell script file and parameters - bash

I wrote a shell script file and use it as the docker ENTRYPOINT,
but when I run the docker image, it just stops without any error log, because of the entrypoint line.
my Dockerfile
FROM ubuntu:16.04
MAINTAINER limtaegeun <imori333@gmail.com>
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
ENV CONTAINER_NAME nodejs
ENV SERVER_NAME myserver.com
ENV PEM_PATH /etc/nginx/certs/cert.pem
ENV KEY_PATH /etc/nginx/certs/cert.key
WORKDIR /etc/nginx
ADD ./sites-available/ssl /etc/nginx/sites-available/ssl
ADD ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 777 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT /etc/nginx/docker-entrypoint.sh ${CONTAINER_NAME} ${SERVER_NAME} ${PEM_PATH} ${KEY_PATH}
CMD ["nginx"]
docker-entrypoint.sh
#!/bin/sh
CONTAINER_NAME=$1
SERVER_NAME=$2
PEM_PATH=$3
KEY_PATH=$4
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#SERVER_NAME#'${SERVER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#PEM_PATH#'${PEM_PATH}'#' /etc/nginx/sites-available/ssl
sed -ri 's#KEY_PATH#'${KEY_PATH}'#' /etc/nginx/sites-available/ssl
# cp -f sites-available/ssl sites-available/default
ln -s /etc/nginx/sites-available/ssl /etc/nginx/sites-enabled/default
my docker run command
docker run -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
--name nginx-ssl -p 443:443 -p 80:80 --network nginx-net --rm -d nginx-cloudfare-ssl-proxy
What is the problem?

When a Docker container is run, it runs the ENTRYPOINT (only), passing the CMD as command-line parameters, and when the ENTRYPOINT completes the container exits. In the Dockerfile, the ENTRYPOINT has to use JSON-array syntax for it to be able to see the CMD arguments, and the script itself needs to actually run the CMD, typically with a line like exec "$@".
The single simplest thing you can do to clean this up is not to try to go back and forth between environment variables and positional parameters. The ENTRYPOINT script can directly read the ENV variables you set in the Dockerfile (or override with docker run -e options). So delete the first lines of the script that set these variables from positional parameters, and make sure to run the CMD at the end:
#!/bin/sh
# delete the lines that set CONTAINER_NAME et al.
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
...
# and add this at the end
exec "$@"
and then change the Dockerfile so it doesn't pass positional parameters, but does use JSON-array syntax for ENTRYPOINT:
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]
That should get you off the ground.
It's worth considering how much of this actually needs to be configurable. For instance, would you ever need a path different from the default /etc/nginx/certs inside the isolated container filesystem? With the standard nginx Docker Hub image, the usual approach is to inject a complete configuration file; if you choose to do that, it simplifies your Docker setup.
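For example, a minimal sketch of that approach, assuming you have already written out a complete nginx-ssl.conf with the real server name and certificate paths:
# Dockerfile: bake the finished config into the stock image
FROM nginx
COPY nginx-ssl.conf /etc/nginx/conf.d/default.conf
or bind-mount it at run time without building a custom image at all:
docker run -d -p 80:80 -p 443:443 \
  -v $PWD/nginx-ssl.conf:/etc/nginx/conf.d/default.conf:ro \
  -v $PWD/certs:/etc/nginx/certs:ro \
  nginx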
Other generic suggestions: remove the VOLUME declarations (they can cause confusing behavior later in the Dockerfile, leak anonymous volumes, and aren't otherwise necessary); don't make executable files world-writable (chmod 0755, not 0777); and combine apt-get update && apt-get install into a single RUN command.
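Putting those suggestions together, a cleaned-up sketch of the Dockerfile might look roughly like this (untested; it keeps the paths and the sites-available/ssl template from the question):
FROM ubuntu:16.04
# update and install in one layer so the package index is never stale
RUN apt-get update \
 && apt-get install -y nginx \
 && echo "\ndaemon off;" >> /etc/nginx/nginx.conf
ENV CONTAINER_NAME=nodejs \
    SERVER_NAME=myserver.com \
    PEM_PATH=/etc/nginx/certs/cert.pem \
    KEY_PATH=/etc/nginx/certs/cert.key
WORKDIR /etc/nginx
COPY sites-available/ssl sites-available/ssl
COPY docker-entrypoint.sh docker-entrypoint.sh
# executable, but not world-writable
RUN chmod 0755 docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]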

Related

Access argument from docker build inside script

So in my Dockerfile I am running an entrypoint:
ARG WP_IMAGE=latest
FROM wordpress:$WP_IMAGE
ARG VERSION
RUN curl -o /usr/local/bin/wp https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
&& chmod +x /usr/local/bin/wp
RUN apt update && apt install -y vim
ADD ./bin/ /
RUN chmod +x /*.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]
And I have this script entrypoint.sh:
#!/bin/bash
/usr/local/bin/docker-entrypoint.sh php-fpm || /configure.sh
exec "$@"
And there is a configure.sh script, and inside this script I want to access the VERSION argument from the Dockerfile.
This is how I build it: docker-compose build --build-arg WP_IMAGE=latest --build-arg VERSION=7.0 && docker-compose up -d.
You can use the ENV keyword in the Dockerfile like this:
ARG VERSION
ENV VERSION=${VERSION}
Now the script running in the image can access VERSION from the environment.
The ENV instruction sets the environment variable <key> to the value <value>. The environment variables set using ENV will persist when a container is run from the resulting image.
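Inside configure.sh the value is then an ordinary environment variable; a minimal sketch (the version check is only an illustration):
#!/bin/bash
# configure.sh (sketch)
echo "Configuring for version: $VERSION"
if [ "$VERSION" = "7.0" ]; then
    # any 7.0-specific setup would go here
    :
fi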

Why does my bash script not run properly on every docker container that starts up?

My Dockerfile copies an init.sh script to the container.
# DOCKERFILE
FROM ubuntu:latest
# a bunch of installation commands
COPY init.sh /
ENTRYPOINT bash init.sh
EXPOSE 80
And I have a docker-compose file with 2 services:
Service1: This service is being scaled.
Service2: Database
I have it so that when the Service1 container starts up, this script will run.
#!/bin/bash
# script
# Missing files directory
if [[ ! -e /var/www/drupal/sites/default/files ]]; then
mkdir /var/www/drupal/sites/default/files
chmod a+w /var/www/drupal/sites/default/files
fi
# Missing settings file
cp /var/www/drupal/sites/default/default.settings.php /var/www/drupal/sites/default/settings.php
chmod a+w /var/www/drupal/sites/default/settings.php
# Install Drush & Install Drupal
cd /var/www/drupal && composer require --dev drush/drush
cd /var/www/drupal && vendor/bin/drush site-install standard \
--db-url=mysql://root:random@mariadb:3306/drupaldb -y \
--site-name=ExampleWebsite \
--account-name=random \
--account-pass=random
# Post-Installation Steps
chmod go-w /var/www/drupal/sites/default/settings.php
chmod go-w /var/www/drupal/sites/default
cd /var/www/drupal && vendor/bin/drush cache-rebuild
/usr/sbin/apache2ctl -D FOREGROUND
However, when I run the command to start up the containers with --scale (docker-compose up -d --scale Service1=5), some of the containers run the script properly on startup but some don't. For the ones that don't, I have to go into the container and manually run the script, then it's fine.
Shouldn't all the containers be the same and run the same script properly?
Instead, I have to manually go into some of the containers and run the script.

docker run entrypoint with multiple commands

How can I have an entrypoint in a docker run which executes multiple commands?
Something like:
docker run --entrypoint "echo 'hello' && echo 'world'" ... <image>
The image I'm trying to run already has an entrypoint set in the Dockerfile, so a solution like the following doesn't seem to work, because it looks like my commands are ignored and only the original entrypoint is executed:
docker run ... <image> bash -c "echo 'hello' && echo 'world'"
In my use case I must use the docker run command. Solutions which change the Dockerfile are not acceptable, since it is not in my hands.
As a style point, this gets vastly easier if your image has a CMD that can be overridden. If you only need to run one command with no initial setup, make it the CMD and not the ENTRYPOINT:
CMD ./some_command # not ENTRYPOINT
If you need to do some initial setup and then launch the main command, make the ENTRYPOINT a shell script that ends with the special instruction exec "$@". The CMD will be passed into it as parameters, and this line replaces the shell script with that command.
#!/bin/sh
# entrypoint.sh
... do first time setup, run database migrations, set variables ...
exec "$@"
# Dockerfile
...
ENTRYPOINT ["./entrypoint.sh"] # MUST be JSON-array syntax
CMD ./some_command # as before
If you do these things, then you can use your second docker run form, passing the command after the image name. This will replace the CMD but leave the ENTRYPOINT intact. In the wrapper-script case, your alternate command will be run as the exec "$@" command, so all of the first-time setup will be done first.
# Assuming the image correctly honors the CMD
docker run ... \
image-name \
sh -c 'echo "foo is $FOO" && echo "bar is $BAR"'
If you really can't do this, you can use docker run --entrypoint to override the entrypoint. This runs instead of the image's entrypoint (if you still want the image's entrypoint you have to run it yourself), and the syntax is awkward:
# Run a shell command instead of the entrypoint
docker run ... \
--entrypoint /bin/sh \
image-name \
-c 'echo "foo is $FOO" && echo "bar is $BAR"'
Note that the --entrypoint option comes before the image name, and its arguments come after the image name.
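For example, to run the image's own entrypoint script and then your extra commands, you can invoke it yourself; a sketch, assuming the image's script lives at /docker-entrypoint.sh (adjust the path to whatever the image actually uses):
docker run ... \
  --entrypoint /bin/sh \
  image-name \
  -c '/docker-entrypoint.sh echo "hello" && echo "world"'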

How to set up a flag in the entrypoint of a docker container

I have a Dockerfile with this entrypoint:
ENTRYPOINT ["bash", "-c", "source /code/entrypoint.sh | ts '[%Y-%m-%d %H:%M:%S]' &>> /output/stderr.log"]
and this command in entrypoint.sh:
fmriprep /input /output participant --fs-license-file /opt/freesurfer/license.txt \
  --use-aroma --ignore fieldmaps --n_cpus 12 --force-bbr \
  --participant_label "${ids[@]}" -w /output
How could I set flags for the command inside the entrypoint, for example add a flag --some_flag to the fmriprep command, so that I can run it with:
docker run my_image --some-flag
You should be able to pass an environment variable from your run command to your CMD before the command is triggered.
To do so, try using the '-e' option this way (not tested, but should work):
docker run -e 'EXTRA_OPTS=--some-flag' my_image
and in your command:
fmriprep /input /output participant --fs-license-file /opt/freesurfer/license.txt \
  --use-aroma --ignore fieldmaps --n_cpus 12 --force-bbr \
  --participant_label "${ids[@]}" -w /output $EXTRA_OPTS
That's the basic idea.
Pass an environment variable:
docker run -e flag=somevalue my_image
You can then access the flag via $flag inside the container and pass it on to your script.
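For example, a sketch of how the entrypoint script could pick that value up (the fmriprep command is copied from the question; $flag is left unquoted so an empty value simply disappears):
#!/bin/bash
# entrypoint.sh (sketch)
fmriprep /input /output participant --fs-license-file /opt/freesurfer/license.txt \
  --use-aroma --ignore fieldmaps --n_cpus 12 --force-bbr \
  --participant_label "${ids[@]}" -w /output $flag
and then run it with docker run -e flag=--some-flag my_image.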
If you use the JSON-array form of ENTRYPOINT, then everything in CMD is passed as command-line arguments to the entrypoint.
I would encourage not trying to write complex scripts inline in a Dockerfile or docker-compose.yml file. Write an ordinary script, COPY it into the image, and make that script the ENTRYPOINT. It can refer to the shell variable "$@" to run its CMD.
For example, I could refactor your script into:
#!/bin/bash
# I am /entrypoint.sh
"$#" | ts '[%Y-%m-%d %H:%M:%S]' &>> /output/stderr.log
#!/bin/bash
# I am /run.sh
fmriprep /input /output participant --fs-license-file /opt/freesurfer/license.txt --use-aroma --ignore fieldmaps --n_cpus 12 --force-bbr --participant_label "${ids[@]}" -w /output $EXTRA_OPTS
And then write a Dockerfile:
...
COPY entrypoint.sh run.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/run.sh"]
And then you would get to run:
docker run my_image /run.sh --some-flag
You would also get to run ordinary debugging commands like:
docker run --rm -it my_image /bin/sh
docker run --rm my_image cat /run.sh
In this specific example I would probably rely on some external system to format and capture the log messages and not try to do it inside the container. Routing Docker container logs to logstash, for instance, is a pretty typical setup.
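For instance, one common pattern is to point the container's log driver at a logstash GELF input instead of appending to a file inside the container (the hostname and port here are placeholders):
docker run --log-driver gelf --log-opt gelf-address=udp://logstash-host:12201 my_image
with a matching gelf { port => 12201 } input configured on the logstash side.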

Dockerfile CMD not running at container start

So I've written a Dockerfile for a project, and I've defined a CMD to run when the container starts to bootstrap the application.
The Dockerfile looks like this:
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff like cloning the right repo, setting environment vars, as well as starting supervisor.
The start script begins with this:
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container I see that supervisor hasn't been started, and neither has nginx or php5-fpm. The /tmp/start.txt file with the timestamp from the startContainer script doesn't exist, showing it never ran the CMD in the Dockerfile.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
says 'run /bin/bash' after instantiating the container, i.e. it skips the CMD.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
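If the trailing /bin/bash was only there to get an interactive shell for debugging, a less intrusive option is to leave the CMD alone and open a shell in the already-running container instead, for example (replace the placeholder with your container's name or ID):
docker exec -it <container-name-or-id> /bin/bash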
