I have created an image from the following Dockerfile.
FROM alpine
WORKDIR /usr/src/app
RUN apk add nodejs-current
RUN apk add nodejs-npm
RUN npm install pm2 -g
COPY process.yaml .
CMD pm2 start process.yaml --no-daemon --log-date-format 'DD-MM HH:mm:ss.SSS'
process.yaml looks like this:
- script: ./run-services.sh
  watch: false
But run-services.sh does not run in my docker. What is the problem?
The problem is that bash is not installed by default in Alpine, and pm2 runs .sh script files with the bash command. There are two ways to solve the problem:
Change the default pm2 interpreter from bash to /bin/sh:
- script: ./run-services.sh
  interpreter: /bin/sh
  watch: false
Install bash in Alpine, so the Dockerfile changes as follows:
FROM alpine
RUN apk update && apk add bash
WORKDIR /usr/src/app
RUN apk add nodejs-current
RUN apk add nodejs-npm
RUN npm install pm2 -g
COPY process.yaml .
CMD pm2 start process.yaml --no-daemon --log-date-format 'DD-MM HH:mm:ss.SSS'
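Whichever fix you pick, it also helps to confirm that run-services.sh starts with a /bin/sh shebang and carries the executable bit, so the script itself never depends on bash. A quick local check (the script body here is a hypothetical stand-in):

```shell
# Recreate a minimal run-services.sh with a POSIX shebang,
# mark it executable, and run it without needing bash at all.
printf '%s\n' '#!/bin/sh' 'echo "services started"' > run-services.sh
chmod +x run-services.sh
./run-services.sh
```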
Related
I want to start my docker container with a docker-compose command.
The underlying docker image CMD should just be executed regularly, and I want to append my script at some point.
When reading about executing shell commands in docker, entrypoint is brought up.
But according to the documentation, running the image's normal startup plus an appended script, without overriding entrypoint or cmd, is not possible through entrypoint (https://docs.docker.com/compose/compose-file/#entrypoint):
Compose implementations MUST clear out any default command on the Docker image - both ENTRYPOINT and CMD instruction in the Dockerfile - when entrypoint is configured by a Compose file.
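Since Compose clears both values when entrypoint is set, the usual workaround is to wrap the image's original startup in your own script. The original values can be read out of the image first (a sketch, assuming the image is already pulled locally):

```shell
# Print the ENTRYPOINT and CMD baked into an image, so a wrapper
# script can re-run them after doing its own setup.
docker inspect --format '{{json .Config.Entrypoint}}' jenkins/jenkins:2.361.1
docker inspect --format '{{json .Config.Cmd}}' jenkins/jenkins:2.361.1
```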
A similar question was asked here, but the answer did not address the issue:
docker-compose, how to run bash commands after container has started, without overriding the CMD or ENTRYPOINT in the image docker is pulling in?
Another option would be to copy and edit the Dockerfile of the pulled image, but that would not be great for future updates of the image:
docker-compose: run a command without overriding anything
What I actually want to do, is coupling the install of php & composer to the docker-compose up process.
Here is my docker-compose file:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:2.361.1
    restart: always
    privileged: true
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    container_name: "aaa-jenkins"
    volumes:
      - "./jenkins_configuration:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/usr/bin/docker:/usr/bin/docker"
My script looks something like this:
#!/bin/bash
apt update
apt -y install php
apt -y install php-xml php-zip php-curl php-mbstring php-xdebug
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php composer-setup.php --install-dir=/usr/local/bin --filename=composer
composer
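One pattern that satisfies the constraint above (the image's own ENTRYPOINT and CMD stay untouched) is to bring the stack up normally and then feed the script into the running container. A sketch, assuming the jenkins service from the compose file and a hypothetical script path:

```shell
# Start the stack with the image's default entrypoint/CMD intact,
# then run the provisioning script inside the running container.
docker-compose up -d
docker-compose exec -T jenkins bash < ./scripts/install-php.sh
```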
I have a Jenkins pipeline, just for learning purposes, which should build a Laravel app via docker-compose. The "docker-compose up -d --build" step works fine, but the next step runs "docker-compose run --rm composer update", which then stops with no error and no output.
When I run the command manually after accessing the server via SSH, the command runs with no issues.
Composer service in docker-compose file:
composer:
  build:
    context: .
    dockerfile: composer.dockerfile
  container_name: composer
  volumes:
    - ./src:/var/www/html
  working_dir: /var/www/html
  depends_on:
    - php
  user: laravel
  entrypoint: ['composer', '--ignore-platform-reqs']
  networks:
    - laravel
Build step in jenkinsfile:
stage('Build') {
    steps {
        echo 'Building..'
        sh 'chmod +x scripts/jenkins-build.sh'
        sh './scripts/jenkins-build.sh'
    }
}
Command in shell script:
echo "Building docker app"
sudo docker-compose up -d --build site # works fine
sudo chown jenkins -R ./
echo "Running composer"
sudo docker-compose run --rm composer update # hangs in jenkins but works in cmd?
(Screenshots omitted: the Jenkins console shows the command hanging, while the same command completes normally when run from a terminal on the same server.)
I know there are some bad practices in here, but this is just for learning purposes. Jenkins server is running Ubuntu 20.04 on AWS EC2 instance.
In the end I resorted to installing composer directly into my PHP docker image. Therefore instead of running the composer service, I now use docker exec php composer update.
From what I can see, none of the services invoked via docker-compose run worked in the Jenkins pipeline. In my case these were all short-lived services that only ran while performing some action (like composer update), so maybe that is why Jenkins did not like it.
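For anyone hitting the same hang: docker-compose run allocates a pseudo-TTY by default, and Jenkins build steps have no TTY attached, which is a common cause of exactly this symptom. The -T flag disables the allocation (a sketch against the compose file above):

```shell
# -T disables pseudo-TTY allocation, which docker-compose run enables
# by default; without a TTY attached (as in Jenkins), the run can
# block silently instead of completing.
sudo docker-compose run --rm -T composer update
```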
I am trying to reduce the time it takes to build a Docker image for a React app; the app should be static, without server-side rendering.
Right now it takes around 5-10 minutes to create an image, and the image size on the local machine is around 1.5 GB! The issue is also that on the second build, even if I change something in the code, it doesn't use any cache.
I am looking for a solution to cut both the time and the size. Here is my Dockerfile after a lot of changes:
# Production and dev build
FROM node:14.2.0-alpine3.10 AS test1
RUN apk update
RUN apk add \
    build-base \
    libtool \
    autoconf \
    automake \
    jq \
    openssh \
    libexecinfo-dev
ADD package.json package-lock.json /app/
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
ADD . /app/
RUN rm -rf node_modules
RUN npm install --production
# copy production node_modules aside, to prevent collecting them
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
RUN npm install react-scripts@3.4.1 -g --silent
RUN npm run build
RUN rm -rf node_modules
RUN cp -R prod_node_modules node_modules
#FROM node:13.14.0
FROM test1
# copy app sources
COPY --from=test1 /app/build .
COPY --from=test1 /app/env-config.js .
# serve is what we use to run the web application
RUN npm install -g serve
# remove the sources & other needless stuff
RUN rm -rf ./src
RUN rm -rf ./prod_node_modules
# Add bash
RUN apk add --no-cache bash
CMD ["/bin/bash", "-c", "serve -s build"]
You're hitting two basic dynamics here. The first is that your image contains a pretty large amount of build-time content, including at least some parts of a C toolchain; since your run-time "stage" is built FROM the build stage as is, it brings all of the build toolchain along with it. The second is that each RUN command produces a new Docker layer with differences from the previous layer, so RUN commands only make the container larger. More specifically RUN rm -rf ... makes the image slightly larger and does not result in space savings.
You can use a multi-stage build to improve this. Each FROM line causes docker build to start over from some specified base image, and you can COPY --from=... previous build stages. I'd do this in two stages, a first stage that builds the application and a second stage that runs it.
# Build stage:
FROM node:14.2.0-alpine3.10 AS build
# Install OS-level dependencies (including C toolchain)
RUN apk update \
 && apk add \
    build-base \
    libtool \
    autoconf \
    automake \
    jq \
    openssh \
    libexecinfo-dev
# set working directory
WORKDIR /app
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY package.json package-lock.json ./
RUN npm install
# build the application
COPY . ./
RUN npm run build
# Final stage:
FROM node:14.2.0-alpine3.10
# set working directory
WORKDIR /app
# install dependencies
COPY package.json package-lock.json ./
RUN npm install --production
# get the build tree
COPY --from=build /app/build/ ./build/
# explain how to run the application
ENTRYPOINT ["npx"]
CMD ["serve", "-s", "build"]
Note that when we get to the second stage, we run npm install --production on a clean Node installation; we don't try to shuffle back and forth between dev and prod dependencies. Rather than trying to RUN rm -rf src, we just don't COPY it into the final image.
This also requires making sure you have a .dockerignore file that contains node_modules (which will reduce build times and avoid some potential conflicts; RUN npm install will recreate it in the directory). If you need react-scripts or serve those should be listed in your package.json file.
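For reference, a minimal .dockerignore along those lines might look like this (the build and .git entries are extra suggestions, not requirements):

```
node_modules
build
.git
```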
I have installed:
https://laradock.io
When I run this command:
docker-compose exec --user=laradock workspace bash php project1/artisan preset none
I have error:
/usr/bin/php: /usr/bin/php: cannot execute binary file
My file/folder structure:
- laradock
- project1/public
How can I run this command?
You should remove "bash" from your command.
Run this:
docker-compose exec --user=laradock workspace php project1/artisan preset none
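The error itself comes from bash being handed php as a script file to interpret: invoked as "bash php project1/artisan ...", bash looks up php on the PATH, finds the /usr/bin/php binary, and fails because it is not a shell script. If you do want a shell in between, the whole command has to be passed as a single string to bash -c:

```shell
docker-compose exec --user=laradock workspace bash -c "php project1/artisan preset none"
```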
This question already has answers here:
How to source a script with environment variables in a docker build process?
(4 answers)
Closed 3 years ago.
.env file is in project root.
I am using a Dockerfile as follows:
FROM alpine:3.7
WORKDIR /app
COPY . /app
RUN apk update && apk add build-base python3 python3-dev --no-cache bash && \
    pip3 install --upgrade pip && \
    pip3 install --trusted-host pypi.python.org --no-cache-dir -e . && \
    ./scripts/install.sh
EXPOSE 5000 3306
CMD ["myserver", "run"]
and the install.sh file as follows
#!/usr/bin/env bash
source .env
When I log in to the docker container I noticed that the .env file is not getting sourced, even though the .env file is in the folder. How can I source the .env file in the docker container from the Dockerfile?
RUN only takes effect at build time, so your source will never affect the container at run time. You should do the source in CMD or ENTRYPOINT, which run when the container starts; write an entrypoint.sh for your project. Something like this:
Dockerfile:
ENTRYPOINT ["docker-entrypoint.sh"]
docker-entrypoint.sh:
#!/usr/bin/env bash
# Load the env vars, then hand off to the real server process.
source .env
exec myserver run
And using ENV in the Dockerfile is another way to affect the runtime container.
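For completeness, the Dockerfile side has to copy the script in and make it executable before the ENTRYPOINT line can find it; a minimal sketch (paths are illustrative):

```
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
# The ENV alternative mentioned above bakes a value in at build time,
# e.g. (APP_MODE is a hypothetical variable):
# ENV APP_MODE=production
```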