So in my Dockerfile I am running an entrypoint:
ARG WP_IMAGE=latest
FROM wordpress:$WP_IMAGE
ARG VERSION
RUN curl -o /usr/local/bin/wp https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
&& chmod +x /usr/local/bin/wp
RUN apt update && apt install -y vim
ADD ./bin/ /
RUN chmod +x /*.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["apache2-foreground"]
And I have this script entrypoint.sh:
#!/bin/bash
/usr/local/bin/docker-entrypoint.sh php-fpm || /configure.sh
exec "$@"
And there is a configure.sh script; inside this script I want to access the VERSION argument from the Dockerfile.
This is how I build it: docker-compose build --build-arg WP_IMAGE=latest --build-arg VERSION=7.0 && docker-compose up -d.
You can use the ENV instruction in your Dockerfile, like:
ARG VERSION
ENV VERSION=${VERSION}
Now the script running in the image can access VERSION from the environment.
The ENV instruction sets the environment variable <key> to the value <value>. The environment variables set using ENV will persist when a container is run from the resulting image.
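For example, a minimal sketch of what configure.sh could read at run time, assuming the ARG/ENV pair above is in the Dockerfile (the wp call is only illustrative):
#!/bin/bash
# configure.sh (sketch): VERSION is inherited from the image environment,
# populated at build time by ARG VERSION + ENV VERSION=${VERSION}
echo "Configuring WordPress version: ${VERSION:?VERSION build arg was not set}"
wp core version --allow-root   # the wp-cli binary installed in the Dockerfile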
My Dockerfile copies an init.sh script to the container.
# DOCKERFILE
FROM ubuntu:latest
# a bunch of installation commands
COPY init.sh /
ENTRYPOINT bash init.sh
EXPOSE 80
And I have a docker-compose file with 2 services:
Service1: This service is being scaled.
Service2: Database
I have it so that when the Service1 container starts up, this script will run.
#!/bin/bash
# script
# Missing files directory
if [[ ! -e /var/www/drupal/sites/default/files ]]; then
mkdir /var/www/drupal/sites/default/files
chmod a+w /var/www/drupal/sites/default/files
fi
# Missing settings file
cp /var/www/drupal/sites/default/default.settings.php /var/www/drupal/sites/default/settings.php
chmod a+w /var/www/drupal/sites/default/settings.php
# Install Drush & Install Drupal
cd /var/www/drupal && composer require --dev drush/drush
cd /var/www/drupal && vendor/bin/drush site-install standard \
--db-url=mysql://root:random@mariadb:3306/drupaldb -y \
--site-name=ExampleWebsite \
--account-name=random \
--account-pass=random
# Post-Installation Steps
chmod go-w /var/www/drupal/sites/default/settings.php
chmod go-w /var/www/drupal/sites/default
cd /var/www/drupal && vendor/bin/drush cache-rebuild
/usr/sbin/apache2ctl -D FOREGROUND
However, when I run the command to start up the containers with scaling, docker-compose up -d --scale Service1=5, some of the containers run the script properly on startup but some don't. For the ones that don't, I have to go into the container and run the script manually; then it's fine.
Shouldn't all the containers be the same and run the same script properly? Instead, I have to manually go into some of the containers and run the script.
I am on Ubuntu 20.04. I installed Docker using sudo snap install docker. When I run the docker command directly from the terminal (the terminal that ships with Ubuntu) it works fine, but when I execute a .sh script from the terminal using either bash ./script.sh or ./script.sh, I get the error docker: command not found.
This is the script:
#!/bin/bash
source $(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/env.sh
docker run -e "NODE_ENV=dev" -it --rm --name my-npm-2 -v $PROJECT_HOME/code:/var/www/html/code -w /var/www/html/code node:14 npm install
docker run -e "NODE_ENV=dev" -it --rm --name my-npm -v $PROJECT_HOME/code/web:/var/www/html/code/web -w /var/www/html/code/web node:14 npm install
$SCRIPT_HOME/buildjs_dev.sh
docker exec project_php sudo php -d memory_limit=-1 /usr/local/bin/composer install --working-dir=/var/www/html/code
docker exec project_php chown -R www-data:www-data /var/www/html/code/var/cache
docker exec project_php chown -R www-data:www-data /var/www/html/code/var/log
I am new to Linux in general, and I don't know if the problem is with the script itself, or why it isn't recognizing docker.
You are sourcing a file at the start of your script, which might be changing the PATH variable. Try either commenting out the source line or calling the docker command by its full path.
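A quick way to check is to print the PATH the script actually sees and try the full path (a sketch; /snap/bin is where snap normally installs commands, adjust if yours differs):
#!/bin/bash
echo "$PATH"          # compare with the PATH in your interactive shell
command -v docker     # shows which docker binary, if any, the script can see
/snap/bin/docker ps   # calling by full path bypasses PATH entirely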
I am trying to run a container (hello-world) as a sibling from another container (dev).
But the container's script is not able to access docker; I am getting a
Docker not found
error. Here is what I am doing: the dev Dockerfile downloads the Docker CLI binary, like this:
ENV DOCKER_VERSION=19.03.8
RUN curl -sfL -o docker.tgz \
"https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz" && \
tar -xzf docker.tgz docker/docker --strip=1 --directory /usr/local/bin && \
rm docker.tgz
RUN ["chmod","+x","./script.sh"]
ENTRYPOINT ["sh","./script.sh"]
script.sh is:
#!/bin/bash
docker run hello-world
Docker Build command:
docker build -t dev .
Docker run command:
docker run -v /var/run/docker.sock:/var/run/docker.sock <container_image>
I wrote a shell script and use it with the docker ENTRYPOINT,
but when I run the Docker image it just stops, without any error log, because of the ENTRYPOINT line.
my Dockerfile
FROM ubuntu:16.04
MAINTAINER limtaegeun <imori333@gmail.com>
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
ENV CONTAINER_NAME nodejs
ENV SERVER_NAME myserver.com
ENV PEM_PATH /etc/nginx/certs/cert.pem
ENV KEY_PATH /etc/nginx/certs/cert.key
WORKDIR /etc/nginx
ADD ./sites-available/ssl /etc/nginx/sites-available/ssl
ADD ./docker-entrypoint.sh /etc/nginx/docker-entrypoint.sh
RUN chmod 777 /etc/nginx/docker-entrypoint.sh
EXPOSE 80 443
ENTRYPOINT /etc/nginx/docker-entrypoint.sh ${CONTAINER_NAME} ${SERVER_NAME} ${PEM_PATH} ${KEY_PATH}
CMD ["nginx"]
docker-entrypoint.sh
#!/bin/sh
CONTAINER_NAME=$1
SERVER_NAME=$2
PEM_PATH=$3
KEY_PATH=$4
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#SERVER_NAME#'${SERVER_NAME}'#' /etc/nginx/sites-available/ssl
sed -ri 's#PEM_PATH#'${PEM_PATH}'#' /etc/nginx/sites-available/ssl
sed -ri 's#KEY_PATH#'${KEY_PATH}'#' /etc/nginx/sites-available/ssl
# cp -f sites-available/ssl sites-available/default
ln -s /etc/nginx/sites-available/ssl /etc/nginx/sites-enabled/default
my docker run command
docker run -v /home/ubuntu/Docker-nginx-cloudflare-ssl-proxy/certs:/etc/nginx/certs \
--name nginx-ssl -p 443:443 -p 80:80 --network nginx-net --rm -d nginx-cloudfare-ssl-proxy
What is the problem?
When a Docker container is run, it runs the ENTRYPOINT (only), passing the CMD as command-line parameters, and when the ENTRYPOINT completes the container exits. In the Dockerfile the ENTRYPOINT has to be JSON-array syntax for it to be able to see the CMD arguments, and the script itself needs to actually run the CMD, typically with a line like exec "$@".
The single simplest thing you can do to clean this up is not to try to go back and forth between environment variables and positional parameters. The ENTRYPOINT script will be able to directly read the ENV variables you set in the Dockerfile (or override with docker run -e options). So delete the first lines of the script that set these variables from positional parameters, and make sure to run the CMD:
#!/bin/sh
# delete the lines that set CONTAINER_NAME et al.
rm -f /etc/nginx/sites-enabled/default
sed -ri 's#CONTAINER_NAME#'${CONTAINER_NAME}'#' /etc/nginx/sites-available/ssl
...
# and add this at the end
exec "$@"
and then change the Dockerfile so it does not pass positional parameters, but does use JSON-array syntax for ENTRYPOINT:
ENTRYPOINT ["/etc/nginx/docker-entrypoint.sh"]
CMD ["nginx"]
That should get you off the ground.
It's worth considering how much of this actually needs to be configurable. For instance, would you ever need a path different from the default /etc/nginx/certs inside the isolated container filesystem? Usually you work with the standard nginx Docker Hub image by injecting an entire, complete configuration file, and if you choose to do that it simplifies your Docker setup.
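For instance, with the stock nginx image that approach might look roughly like this (paths and names are illustrative):
docker run -d --name nginx-ssl -p 80:80 -p 443:443 \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v "$PWD/certs:/etc/nginx/certs:ro" \
  nginx
With the complete configuration file mounted in, no entrypoint templating is needed at all.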
Other generic suggestions: remove the VOLUME declarations (they potentially cause confusing behavior later in the Dockerfile and leak anonymous volumes and aren't otherwise necessary); don't make executable files world-writable (chmod 0755, not 0777); RUN apt-get update && apt-get install in the same Dockerfile command.
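Applied to the Dockerfile above, those suggestions would look roughly like this (a sketch, not a drop-in replacement):
RUN apt-get update \
 && apt-get install -y nginx   # one layer, so the package index is never stale
RUN chmod 0755 /etc/nginx/docker-entrypoint.sh   # executable, not world-writable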
So I've written a Dockerfile for a project, and I've defined a CMD to run on starting the container to bootstrap the application.
The Dockerfile looks like
# create our mount folders and volumes
ENV MOUNTED_VOLUME_DIR=sites
RUN mkdir /$MOUNTED_VOLUME_DIR
ENV PATH=$MOUNTED_VOLUME_DIR/sbin:$MOUNTED_VOLUME_DIR/common/bin:$PATH
RUN chown -Rf www-data:www-data /$MOUNTED_VOLUME_DIR
# Mount folders
VOLUME ["/$MOUNTED_VOLUME_DIR/"]
# Expose Ports
EXPOSE 443
# add our environment variables to the server
ADD ./env /env
# Add entry point script
ADD ./start.sh /usr/bin/startContainer
RUN chmod 755 /usr/bin/startContainer
# define entrypoint command
CMD ["/bin/bash", "/usr/bin/startContainer"]
The start.sh script does some git stuff, like cloning the right repo and setting environment vars, as well as starting supervisor.
The start script begins with this
#!/bin/bash
now=$(date +"%T")
echo "Container Start Time : $now" >> /tmp/start.txt
/usr/bin/supervisord -n -c /etc/supervisord.conf
I start my new container like this
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
When I log in to the container I see that supervisor hasn't been started, and neither has nginx or php5-fpm. The /tmp/start.txt file with the timestamp set by the startContainer script doesn't exist, showing it never ran the CMD from the Dockerfile.
Any hints on how to get this fixed would be great.
This:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID /bin/bash
says 'run /bin/bash' after instantiating the container, i.e. it overrides and skips the CMD.
Try this:
docker run -d -p expoPort:contPort -t -i -v /$MOUNTED_VOLUME_DIR/$PROJECT:/$MOUNTED_VOLUME_DIR $CONTAINER_ID
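Once the container starts with its default CMD, you can check that the script actually ran, for example:
docker logs <container>                        # output from startContainer
docker exec <container> cat /tmp/start.txt     # timestamp written by start.sh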