Either I'm doing something wrong or Heroku is messing up. Heroku supports targeting a particular stage in a Dockerfile. I have a multistage Dockerfile but Heroku is not respecting the build.docker.release.target in my heroku.yml. For what it's worth, targeting works fine with docker-compose.yml.
I'm trying to keep dev and prod in the same Dockerfile. Essentially dev and prod are forked from base. I could flesh it out more, but the stages are:
FROM python:3.10.0-slim-buster AS venv
...
FROM python:3.10.0-slim-buster AS base
...
FROM base AS dev
...
ENTRYPOINT ["entrypoint.dev.sh"]
FROM base AS prod
...
ENTRYPOINT ["entrypoint.prod.sh"]
My heroku.yml specifically targets the prod stage:
setup:
  addons:
    - plan: heroku-postgresql
      as: DATABASE
build:
  docker:
    release:
      dockerfile: image/app/Dockerfile
      target: prod
    web: image/app/Dockerfile
  config:
    DJANGO_ENV: production
release:
  image: web
  command:
    - ./deployment-tasks.sh
run:
  web: gunicorn server.wsgi:application --bind 0.0.0.0:$PORT --log-level debug --access-logfile - --error-logfile -
However, Heroku builds all the stages; it seems to just run down the Dockerfile to the end. The Heroku build logs show dev being built after base, and then prod being built after dev.
I would expect it to jump from base straight to prod, skipping dev.
Is this an issue on my side or Heroku's?
I haven't tested this with heroku.yml because I've since moved to GitHub Actions, but I believe the error was having prod come after dev. Apparently the --target flag in docker build means the build stops at that stage, so it first runs every stage that precedes it in the file.
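If that is the cause, simply reordering the stages so prod comes directly after base should make the build stop before dev is ever reached. A minimal sketch of the reordered Dockerfile (stage bodies elided, exactly as in the question):

FROM python:3.10.0-slim-buster AS venv
...
FROM python:3.10.0-slim-buster AS base
...
# prod now precedes dev, so a classic-builder "docker build --target prod"
# stops here and never builds the dev stage
FROM base AS prod
...
ENTRYPOINT ["entrypoint.prod.sh"]
FROM base AS dev
...
ENTRYPOINT ["entrypoint.dev.sh"]

Note that with BuildKit enabled, stages not needed by the target are skipped regardless of their order in the file.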
I use Laravel Vapor for deploying our microservices based on Laravel. This works very well so far, as long as an app and its dependencies are not too large. But if they are, it gets a little tricky.
Vapor provides a Docker runtime for this case, where you are able to deploy apps up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means if we deploy from our local environment, it's easy to enter the workspace container and run the vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into a GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But with the Docker runtime I'm stuck on the CI deployment.
The Docker runtime needs a running Docker instance where Vapor is invoked. That means in the .gitlab-ci.yml I have to add an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the related part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've tried using the standard 'laravelphp/vapor:php80' image and installing Docker via the before_script section as well:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. It seems there is a problem with the docker.sock.
Did anybody manage to add a Vapor Docker runtime deployment to their CI scripts?
Best,
Michael
I'd like to tell you that you only need to add the service dind, but after you do that it will throw an error related to the image GitLab creates for your pipelines. So you need to create a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{ your_token }} \
  --executor docker \
  --description "{{ Describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
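For reference, a registration like that should leave an entry roughly like this in the runner's config.toml (a sketch; name, url, and token will be your own values):

[[runners]]
  name = "{{ Describe your runner }}"
  url = "{{ your_url }}"
  token = "..."
  executor = "docker"
  [runners.docker]
    image = "docker:20.10.12-alpine3.15"
    privileged = true
    volumes = ["/certs/client", "cache", "/var/run/docker.sock:/var/run/docker.sock"]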
Once you've done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it didn't work when I tried version 20 or latest:
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $IMAGE_TAG -f {{your Dockerfile}} .
    - docker push $IMAGE_TAG
All the variables are predefined by GitLab, so don't worry, you can copy & paste. I also added some of the advice GitLab gives in its documentation for pushing your Docker image to the GitLab Container Registry.
So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion in combining them.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each of them via the IntelliJ IDEA green "run" button works fine; now I'd like to automate things and run them via Docker.
I have Dockerfiles that look the same in both cases
(in both modules it's the same, only the JAR name differs):
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} /application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
# wait for the hosts listed in WAIT_HOSTS, then start the app
CMD /wait && java -jar /application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I also have docker-compose:
version: '2.1'
services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2
networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
I run mvn package for each module and get files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders.
"docker build -t springio/app1 ."
"docker-compose up --build"
And it works, but I feel I'm doing some extra steps.
How can I set up the project so that I ONLY have to run docker-compose
(after each time I change things in the code)?
Again, I know it's quite a simple thing, but I kinda lost the logic.
Thanks!
P.S.
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait":
it's important that the apps start one after another. I tried different solutions; unfortunately, they don't really work as well as I would like. But I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked, here's the answer: you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does is basically: it first builds the jar in the build stage, copies it into the package stage, and eventually runs it.
That allows you to run your app in Docker by running only docker-compose.
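To tie this back to the original two-module compose setup: once each module's Dockerfile is multi-stage like the above, the compose file only needs the build contexts to point at the modules; nothing has to be packaged beforehand. A sketch reusing the names from the question:

version: '2.1'
services:
  application1:
    build:
      context: ../app1   # module folder containing the multi-stage Dockerfile
    image: docker.io/myname/app1:latest
    ports:
      - "8080:8080"
  application2:
    build:
      context: ../app2
    depends_on:
      - application1
    image: docker.io/myname/app2:latest
    ports:
      - "8070:8070"

With that in place, docker-compose up --build re-runs the Maven package step inside the build stage whenever the code changes; no separate mvn package or docker build invocation is needed.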
I'm new to Elixir and Phoenix, and having to work in a CI/CD environment, I'm trying to figure out how to use Phoenix with Docker.
I've tried various tutorials and videos out there; many of them don't work, but those that do work all have the same result:
the Phoenix server doesn't seem to find some resources (the assets folder?).
But inside my Dockerfile I'm copying the entire app folder, and I can confirm that /assets is inside the container by attaching to it.
Dockerfile:
FROM elixir:alpine
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
COPY . .
RUN mix do deps.get, deps.compile
CMD ["mix", "phx.server"]
Docker-compose
version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:
Steps I'm doing to create the containers and run the server:
docker-compose build
docker-compose run web mix ecto.create
docker-compose up
The database is created successfully in the db container.
What can be happening here?
Sorry if it's straightforward; I haven't used Docker in a while and I still haven't completely understood the Phoenix boilerplate.
If you know some good resources about Docker and CI/CD pipelines with Phoenix, I'd appreciate those too so I can study them.
You also need to build the assets: npm install --prefix assets. This needs to be done after mix deps.get. The mix deps.compile step isn't really needed: you can start the server right after mix deps.get, and it will compile the deps and your app automatically.
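As a sketch, the Dockerfile from the question with the asset step added could look like this (assuming a webpack-era assets/ folder; the nodejs and npm packages are an assumption on top of the original image):

FROM elixir:alpine
# nodejs and npm are required for the Phoenix asset pipeline
RUN apk add --no-cache build-base git nodejs npm
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix deps.get
# install the JS dependencies so phx.server can build and serve the assets
RUN npm install --prefix assets
CMD ["mix", "phx.server"]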
I'm facing a strange problem (or better: two different, weird problems) trying to pass build-args to my Dockerfile through docker-compose up.
My files - initial setup
Dockerfile:
ARG NODE_VERSION
FROM node:${NODE_VERSION}
ARG NPM_REGISTRY_TOKEN
RUN echo "=====> token ${NPM_REGISTRY_TOKEN}"
... ... ...
docker-compose.yml:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
With this initial setup in place, I have the following behaviour (on Linux Mint 20, docker-compose version 1.26.2, build eefe0d31):
running docker build --build-arg NPM_REGISTRY_TOKEN=xyz123 produces in output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running docker-compose build --build-arg NPM_REGISTRY_TOKEN=xyz123 myservice produces in output =====> token xyz123: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running NPM_REGISTRY_TOKEN=xyz123 docker-compose up myservice produces in output =====> token : the NPM_REGISTRY_TOKEN env var should flow to the Dockerfile thanks to the value-less - NPM_REGISTRY_TOKEN entry (according to https://docs.docker.com/compose/compose-file/#args: "You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running"), but it seems not to be available during the build
My files - reloaded
Simply changing my docker-compose.yml file to
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
      dockerfile: ../Dockerfile
seems to solve the problem: swapping the args and dockerfile entries in the yml file unlocks the ability to pass environment variables to the Dockerfile as build args through docker-compose up, too. Problem solved. Or not?
Changing OS, getting new problem
So, developers in my team use a bunch of different operating systems: Linux, macOS, and Windows, too.
Running the same commands on the same docker-compose version (1.26.2) on Windows 10 Professional 1909, we get the same problem we faced initially, both with the initial version of the docker-compose.yml file and with the version that works on Linux.
We tried passing env vars from the command line, setting them in the command prompt, setting them as system variables through the GUI... we tried launching docker-compose up from git-bash, too, but we were not able to get the variable value into the Dockerfile.
I googled around a bit, but I have not found any reference to known bugs or limitations of the Windows version of docker-compose.
Does anyone have any idea what the problem might be? Thank you very much in advance!
So, finally, after some trial and error on different OSs and with different configurations, I ended up with an explanation of my problem, and therefore with a viable workaround which allowed me to reach a satisfactory configuration for my docker-compose.yml file.
Short answer: it wasn't a matter of OSs, nor of env var passing, nor of the order of the context / dockerfile sections; it was a matter of a clash between different services in my compose file.
In more detail: my docker-compose.yml file contained an additional service, too, whose job was to initialize the database the application points to:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
      - db_initializer
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run start:dev'
  persistence:
    # Setting up the DBMS here
  db_initializer:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate'
So, the problem was that I was configuring two services based on the same self-built image, launched with different commands (npm run db:migrate for the db_initializer service, npm run start:dev for the application service). Apparently, Compose took the build configuration provided for the first initialized service (db_initializer, because myservice depends on it) and used that configuration for both services, ignoring the (different) args section I was providing for the second container. So I was able to solve the problem (this time really!) simply by merging the service declarations, including all the args I needed:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate && npm run start:dev'
  persistence:
    # Setting up the DBMS here
So, after a bunch of months without collecting answers, I think it's time to share my experience, hoping it can help someone encountering this weird behaviour.
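Incidentally, one way to spot this kind of clash early is to ask Compose to print the configuration it has actually resolved before building anything:

# prints the merged, fully resolved compose file, including the
# build args each service would really be built with
docker-compose config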
Say I'm working on a web project that runs a GitLab CI shell runner on my own CI server to build Docker images and deploy them to Heroku. I've gone through some docs from both GitLab and Heroku, like "GitLab CI: using Docker build" and "Heroku: Build and Deploy with Docker". Can I deploy the Docker project without using the heroku-docker plugin, which seems not so flexible to me? Whatever I tried, the following approach succeeded in deploying to Heroku, but the app crashes. Heroku logs say a start script is missing from package.json, but since I'm deploying a Docker project, I couldn't put "start": "docker-compose up" there, could I?
# .gitlab-ci.yml
stages:
  - deploy

before_script:
  - npm install
  - bower install

dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - gem install dpl
    - dpl --provider=heroku --app=xixi-web-dev --api-key=$HEROKU_API_KEY
  only:
    - dev
# docker-compose.yml
app:
  build: .
  volumes:
    - .:/code:ro
  expose:
    - "3000"
  working_dir: /code
  command: pm2 start app.dev.json5
nginx:
  build: ./setup/nginx
  restart: always
  volumes:
    - ./setup/nginx/sites-enabled:/etc/nginx/sites-enabled:ro
    - ./dist:/var/app:ro
  ports:
    - "$PORT:80"
  links:
    - app
I don't want to use the Heroku Docker plugin, because it seems less flexible: I can't create an app.json, because I don't want to use an existing Docker image for my app. Instead, I define custom Dockerfiles for app and nginx, used in docker-compose.yml.
Now it seems that Heroku won't detect my project as a Docker project unless I deploy it using the Heroku Docker plugin, but as I mentioned above, I can't do that. Is there any documentation on Heroku or GitLab I'm missing that could help me out? Or do you have any idea that might be helpful? Thanks a lot!
OK, it seems that heroku docker:release is required. I ended up installing the Heroku CLI and the Heroku Docker plugin on my CI server and using heroku docker:release --app app to release my app.
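For anyone wiring the same thing into GitLab CI: a minimal sketch of how the deploy job could then look on a shell runner (assuming the Heroku CLI and its Docker plugin are already installed on the CI server, and reusing the app name from the question):

dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - heroku docker:release --app xixi-web-dev
  only:
    - dev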