When I push an existing Docker image to Heroku, Heroku provides a $PORT environment variable. How can I pass this property to the Heroku run instance?
On localhost this would work:
docker pull swaggerapi/swagger-ui
docker run -p 80:8080 swaggerapi/swagger-ui
On Heroku I should do:
docker run -p $PORT:8080 swaggerapi/swagger-ui
Is something like this possible?
The question is quite old now, but I will still write my answer here in case it can help others.
I have a Spring Boot app, along with swagger-ui, Dockerized and deployed on Heroku.
This is what my application.yml looks like:
server:
  port: ${PORT:8080}
  forward-headers-strategy: framework
  servlet:
    contextPath: /my-app
springdoc:
  swagger-ui:
    path: '/swagger-ui.html'
Below is my Dockerfile configuration.
FROM maven:3.5-jdk-8 as maven_build
WORKDIR /app
COPY pom.xml .
# resolve dependencies first so Docker caches them in a layer separate from the source
RUN mvn clean package -Dmaven.main.skip -Dmaven.test.skip && rm -r target
COPY src ./src
RUN mvn package spring-boot:repackage
########run stage########
FROM openjdk:8-jdk-alpine
WORKDIR /app
RUN apk add --no-cache bash
COPY --from=maven_build /app/target/springapp-1.1.1.jar ./
#run the app
# 256m was necessary for me: I am on the free tier, and Heroku was giving me a memory quota limit exception, so I capped the heap at 256m
ENV JAVA_OPTS "-Xmx256m"
# Note: Docker itself does not expand ${...} in an exec-form ENTRYPOINT;
# this works on Heroku because the dyno command is executed through a shell
# (see the dyno command below), which expands ${JAVA_OPTS} and ${PORT}
ENTRYPOINT ["java","${JAVA_OPTS}", "-jar","-Dserver.port=${PORT}", "springapp-1.1.1.jar"]
The commands I used to create the Heroku app:
heroku create
heroku stack:set container
The commands I used to build the image and deploy:
docker build -t app-image .
heroku container:push web
heroku container:release web
Finally, make sure that on the Heroku dashboard the dyno information looks like this:
web java ${JAVA_OPTS} -jar -Dserver.port=${PORT} springapp-1.1.1.jar
After all these steps, I was able to access the swagger-ui via
https://testapp.herokuapp.com/my-app/swagger-ui.html
Your Docker container is required to listen for HTTP traffic on the port specified by Heroku.
Looking at the Dockerfile in the GitHub repo for swaggerapi/swagger-ui, it looks like it already supports the PORT environment variable: https://github.com/swagger-api/swagger-ui/blob/be72c292cae62bcaf743adc6236707962bc60bad/Dockerfile#L13
So maybe you don't really need to do anything?
It looks like this image would just work if shipped to Heroku as a web app.
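As a quick local sanity check, and assuming the image honours PORT exactly as that Dockerfile suggests, you can simulate Heroku's behaviour by setting PORT yourself:
# set PORT the way Heroku would, and publish that same port locally
docker run --rm -e PORT=9000 -p 9000:9000 swaggerapi/swagger-ui
# then browse to http://localhost:9000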
I have a project with Laravel 9 and Vite and ran into a problem when deploying the app to the host. I use Docker, docker-compose and GitLab CI/CD.
TL;DR: the manifest.json produced by Vite must be accessible from the app container, but the assets are built and stored in another container. How can I get this file into the app container?
My flow:
Push a tag to GitLab, which starts CI/CD
(Build stage) Build the Docker containers: an app container with Laravel exposing php-fpm, and a web container with Nginx that proxies requests to app and hosts the static files (public/build)
(Prepare stage) Clean dev dependencies out of the containers and upload them to the registry
(Deploy stage) Start the docker-compose service on the host with the built containers
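A minimal docker-compose sketch of the stack described above (the image names are hypothetical; the service names app and web match the flow):
# docker-compose.yml sketch -- image names are hypothetical
version: '3'
services:
  app:
    image: registry.example.com/project/app:latest   # Laravel + php-fpm
  web:
    image: registry.example.com/project/web:latest   # Nginx + built assets (public/build)
    ports:
      - "80:80"
    depends_on:
      - app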
The problem is with the manifest.json file from Vite. This is the web Dockerfile:
FROM nginx:1.15.3 as base
COPY ./deployment/dockerfiles/web/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www/public
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
FROM node:16.15.1-alpine as builder
WORKDIR /source
COPY ./package.json /source/
COPY ./yarn.lock /source/
RUN yarn install --frozen-lockfile
COPY ./vite.config.js /source
COPY ./resources/js /source/resources/js
COPY ./resources/css /source/resources/css
RUN yarn run build
FROM base
COPY --from=builder /source/public .
CMD ["service", "nginx", "start"]
For now, I have a /public/build/ directory in the web container with manifest.json and the assets, but the app container doesn't know about it, so when I start the service I get this exception:
Vite manifest not found at: /var/www/public/build/manifest.json
For now, I work around this in the deploy stage: after docker-compose up I just copy the file from the web service to the app service. But this doesn't look like a production-ready solution, because the container in the registry is not ready to start without manual manipulation of the service.
So, the question: how can I share manifest.json with the app container at build stage? Or is there another way?
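For reference, the copy workaround described above can be scripted in the deploy stage roughly like this (a sketch; the path follows the error message, and the service names come from the flow above):
# after the stack is up, copy the built manifest from the web container into the app container
docker-compose up -d
docker cp "$(docker-compose ps -q web)":/var/www/public/build/manifest.json ./manifest.json
docker cp ./manifest.json "$(docker-compose ps -q app)":/var/www/public/build/manifest.json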
I have created a Spring Boot web API and containerized it with Docker; the Dockerfile is below.
When I build the Dockerfile and run it with docker container run -p 8080:8080 --name backend b7908762020, the container is created. But when I navigate to http://localhost:8080/mywar, it shows "HTTP Status 404 – Not Found".
Why is that the case? Is my Dockerfile correct?
FROM maven:3.3.9-jdk-8-alpine as build-env
COPY pom.xml /tmp/
COPY settings.xml /root/.m2/settings.xml
COPY src /tmp/src/
WORKDIR /tmp/
RUN mvn package -Dmaven.test.skip=true
FROM tomcat:8.0
COPY --from=build-env /tmp/target/cbm-server-0.0.1-SNAPSHOT.war $CATALINA_HOME/webapps/cbm-server.war
# The env variable to pick up the correct environment properties file
ADD setenv.sh $CATALINA_HOME/bin/setenv.sh
ADD tomcat-users.xml $CATALINA_HOME/conf/tomcat-users.xml
EXPOSE 8009
EXPOSE 8080
CMD ["catalina.sh", "run"]
I'm having a problem installing Laravel through a Dockerfile. I'm using docker-compose, which pulls in a Dockerfile where I basically have this:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel app
CMD apachectl -D FOREGROUND
But when I access the container to look at the files that should have been created by Composer, I see that the directory is empty, even though I saw the command execute during the container build.
The container is working perfectly and I can even access it ... only the files never appear.
If I run the Composer command manually after the container is created, the files appear.
In your Dockerfile, you used WORKDIR /var/www and then RUN composer create-project ... which makes composer create files under /var/www on the container file system.
In your docker-compose.yml file you used to start your container:
version: '3.7'
services:
  app:
    container_name: "app"
    build:
      context: ./docker
      dockerfile: Dockerfile-app
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
    volumes:
      - ".:/var/www"
You are declaring a volume that will be mounted on that same location /var/www in your container.
What happens is that the volume content will take the place of whatever you had on /var/www in the container file system.
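A minimal illustration of that shadowing effect (the image name and host directory are hypothetical):
# the image has files baked into /var/www ...
docker run --rm my-image ls /var/www                            # -> app/ ...
# ... but bind-mounting an (empty) host directory hides them
docker run --rm -v "$PWD/empty:/var/www" my-image ls /var/www   # -> (nothing)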
I suggest you carefully read the documentation on Docker volumes, more specifically the part titled Populate a volume using a container.
Now, to move on, ask yourself why you needed that volume in the first place. Is it necessary to change files at run time?
If not, just add your files at build time:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel app
COPY . /var/www
CMD apachectl -D FOREGROUND
and remove the volume for /var/www.
EDIT
Developing with the help of a Docker container
During development, you change PHP files on your Docker host (assumed to be your development computer) and need to frequently test the result by exercising your app as served by the webserver in the Docker container.
It would be cumbersome to have to rebuild a Docker image every time you need to test your app. The solution is to mount a volume so that the container can serve the files from your development computer:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
CMD apachectl -D FOREGROUND
and start it with:
version: '3.7'
services:
  app:
    container_name: "app"
    build:
      context: ./docker
      dockerfile: Dockerfile-app
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
    volumes:
      - ".:/var/www"
...
When you need to run some commands within that container, just use docker exec:
docker-compose exec app composer create-project --prefer-dist laravel/laravel app
Producing project artifacts
Since what you will be deploying is not a zip/tar archive containing your source code and configurations but a docker image, you need to build the docker image you will use at deployment time.
Dockerfile for production
For production use, you want a Docker image that holds all required files and does not need any Docker volume, except to hold data produced by users (uploaded files, database files, etc.).
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
COPY . /var/www
CMD apachectl -D FOREGROUND
Notice that there is no RUN composer create-project --prefer-dist laravel/laravel app in this Dockerfile. This is because this command is to initialise your project and this is a development time task, not a deployment time task.
You will also need a place to host your Docker images (a Docker registry). You can deploy your own registry as a Docker container using the official registry image, or use one hosted by companies:
Gitlab.com - Gitlab Registry (free)
Docker.com - hub.docker.com (1 private image free)
Google.com - Google Container Registry
...
So you need to build a docker image, and then push that image on your registry. Best practice is to automate those tasks with the help of continuous integration tools such as Jenkins, Gitlab CI, Travis CI, Circle CI, Google Cloud Build ...
Your CI job will run the following commands:
git clone <url of your git repo> my_app
cd my_app
git checkout <some git tag>
docker build -t <registry>/<my_app>:<version> .
docker login <registry> --username=<registry user> --password=<registry password>
docker push <registry>/<my_app>:<version>
Deploying your Docker image
Start your container with:
version: '3.7'
services:
  app:
    container_name: "app"
    image: <registry>/<my_app>:<version>
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
...
Notice here that the docker-compose file does not build any image. For production it is a better practice to refer to an already built docker image (which has been deployed earlier on a staging environment for validation).
I am using a CI tool called Drone (drone.io), and I really want to run some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour
compose:
  my_application:
    # build and run a container based on the Dockerfile in the local repo, on port 5000
publish:
deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a Docker container, you have to build it beforehand, outside this build, push it to Docker Hub or your own registry, and reference it in the compose section; see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
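A sketch of what that compose section could look like (the image name is hypothetical and must already exist in a registry):
compose:
  my_application:
    image: your-registry/my-application:latest   # pre-built image, pulled when the build starts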
You can also start your application in the build section with nohup python manage.py server -h 127.0.0.1:5000 & before running your integration tests. Make sure that your application has started and is listening on port 5000 before you run integration_test.
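In the .drone.yml above, that approach would look roughly like this (a sketch; the sleep is a crude stand-in for a real readiness check against the port):
commands:
  - pip install --user --no-cache-dir -r requirements.txt
  - nohup python manage.py server -h 127.0.0.1:5000 &
  - sleep 5  # crude wait; poll the port instead if startup is slow
  - python manage.py integration_test -h 127.0.0.1:5000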
I recommend you use Drone 0.5 with pipelines: you can build a Docker image, push it to a registry before the build, and use it as a service inside your build.
Say I'm working on a web project that runs a gitlab-ci shell runner on my own CI server to build Docker images and deploy them to Heroku. I've gone through docs from both GitLab and Heroku, such as gitlab-ci: using docker build and heroku: Build and Deploy with Docker. Can I deploy the Docker project without using the heroku-docker plugin, which seems not so flexible to me? The approach below succeeded in deploying to Heroku, but the app crashes. The Heroku logs say a start script is missing from package.json, but since I'm deploying a Docker project, I couldn't put "start": "docker-compose up" there, could I?
#.gitlab-ci.yml
stages:
  - deploy

before_script:
  - npm install
  - bower install

dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - gem install dpl
    - dpl --provider=heroku --app=xixi-web-dev --api-key=$HEROKU_API_KEY
  only:
    - dev
# docker-compose.yml
app:
  build: .
  volumes:
    - .:/code:ro
  expose:
    - "3000"
  working_dir: /code
  command: pm2 start app.dev.json5

nginx:
  build: ./setup/nginx
  restart: always
  volumes:
    - ./setup/nginx/sites-enabled:/etc/nginx/sites-enabled:ro
    - ./dist:/var/app:ro
  ports:
    - "$PORT:80"
  links:
    - app
I don't want to use the heroku-docker plugin because it seems less flexible: I can't create an app.json, since I don't want to use an existing Docker image for my app. Instead, I define custom Dockerfiles for app and nginx, which are used in docker-compose.yml.
Now it seems that Heroku won't detect my project as a Docker project unless I deploy it using the heroku-docker plugin, but as I mentioned above, I can't do that. Is there any documentation on Heroku or GitLab that I'm missing and that could help me out? Or do you have any ideas that might be helpful? Thanks a lot!
OK, it seems that heroku docker:release is required. I ended up installing the Heroku CLI and the Heroku Docker plugin on my CI server and using heroku docker:release --app app to release my app.
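For completeness, the deploy script in the .gitlab-ci.yml above then looks roughly like this (a sketch; it assumes the Heroku CLI and the heroku-docker plugin are already installed on the CI server and that HEROKU_API_KEY is available to the runner):
dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - heroku docker:release --app xixi-web-dev   # app name taken from the original config
  only:
    - dev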