How to push an entire project image to Docker Hub? - spring-boot

I have a Spring Boot MVC project with a MySQL database, a Dockerfile, and a docker-compose.yml, and I want to push this project to Docker Hub so that every client can run it. I pushed to Docker Hub successfully with the docker-compose push command, but when I pull my image from Docker Hub it doesn't work: I get errors such as "connection refused". On my own machine it works perfectly, i.e. I can run the project successfully in a Docker container.
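For reference, this is roughly the push workflow I used (a sketch; the image name is the one declared in the docker-compose.yml below):
docker login            # authenticate against Docker Hub
docker-compose build    # build the anar1501/emp-managment image locally
docker-compose push     # push the built image to Docker Hub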
This is my Dockerfile:
FROM maven:3.8.2-jdk-11
WORKDIR /empmanagment-app
COPY . .
RUN mvn clean install
CMD mvn spring-boot:run
and this is my docker-compose.yml file
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ROOT_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
    ports:
      - "3307:3306"
    networks:
      - common-network
    volumes:
      - mysql-standalone:/var/lib/mysql
  springboot-docker-container:
    build: ./
    image: anar1501/emp-managment
    ports:
      - "8080:8080"
    networks:
      - common-network
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone
    volumes:
      - .m2:/root/.m2
volumes:
  mysql-standalone:
networks:
  common-network:
    driver: bridge
Can anyone suggest what I am doing wrong?
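One point worth making explicit (a sketch based on the assumption that clients consume the already-pushed image): only the anar1501/emp-managment application image ends up on Docker Hub, so a client still needs a compose file that starts MySQL alongside it, roughly like this:
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
    networks:
      - common-network
  springboot-docker-container:
    image: anar1501/emp-managment   # pulled from Docker Hub instead of built locally
    ports:
      - "8080:8080"
    networks:
      - common-network
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone
networks:
  common-network:
    driver: bridge
Without the database service (or an equivalent reachable MySQL instance), the application container starts but fails with connection refused.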

Related

Bind mounting Spring micro-services with Docker results in Eureka not discovering other services

I had previously built a Docker image for each Spring micro-service, and in the docker-compose.yml file this is what I did to start them and have the other services register with Eureka, by overriding defaultZone with the container's name:
version: '3'
services:
  eureka-discovery:
    image: eureka-discovery-service:0.0.1
    ports:
      - 8761:8761
  zuul-gateway:
    image: zuul-gateway-service:0.0.1
    environment:
      - eureka.client.serviceUrl.defaultZone=http://eureka-discovery:8761/eureka
    depends_on:
      - eureka-discovery
    ports:
      - 8765:8765
It worked. Now, I did things a little different, by bind mounting my project into a container for local development:
version: '3.1'
services:
  eureka-discovery-service:
    container_name: eureka-discovery-service
    image: openjdk:11.0.9.1-jdk
    ports:
      - 8761:8761
    volumes:
      - ./eureka-discovery-service:/usr/src/app
    working_dir: /usr/src/app
    command: ./mvnw spring-boot:run
  zuul-gateway-service:
    container_name: zuul-gateway-service
    image: openjdk:11.0.9.1-jdk
    environment:
      - eureka.client.serviceUrl.defaultZone=http://eureka-discovery-service:8761/eureka
    ports:
      - 8765:8765
    volumes:
      - ./zuul-gateway-service:/usr/src/app
    working_dir: /usr/src/app
    command: ./mvnw spring-boot:run
    depends_on:
      - eureka-discovery-service
    links:
      - eureka-discovery-service
But set up like this, zuul-gateway-service doesn't see eureka-discovery-service like it did before.
2020-12-17 23:26:29.997 ERROR 78 --- [freshExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_ZUUL-GATEWAY-SERVICE/7c580b412063:zuul-gateway-service:8765 - was unable to refresh its cache! status = Cannot execute request on any known server
I notice that it's using the Eureka container's ID instead of its name.
I managed to make it work by altering the command and overriding the property there instead of in environment:
command: ./mvnw spring-boot:run -Dspring-boot.run.arguments=--eureka.client.serviceUrl.defaultZone=http://eureka-discovery-service:8761/eureka
If there's a better way, I'd like to know.
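One commonly used alternative (a sketch that relies on Spring Boot's relaxed binding of environment variables, so treat the exact variable name as an assumption to verify) is to pass the Eureka URL as an upper-case, underscore-separated variable instead of a dotted property name:
    environment:
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka-discovery-service:8761/eureka
Spring maps that variable back onto eureka.client.serviceUrl.defaultZone, so the command can stay a plain ./mvnw spring-boot:run.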

Docker-compose for production running laravel with nginx on azure

I have an app that works, but I'm having problems getting it to run on Azure.
This is my docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes in the nginx service are OK; I think that perhaps I should create a new Dockerfile to copy the files into the image, but this would increase the size of the project a lot.
When using App Services on Azure, the app is deployed on a randomly assigned port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestion to make this project run on Azure.
I'm assuming you're trying to persist the storage in your app to a volume. Check out this doc issue. Now I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the path /var/www/simple/public/uploads to the file path of the storage container.
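For illustration, mounting such a share could look roughly like this with the Azure CLI (a sketch; the resource names are placeholders and the flags should be checked against the current az documentation):
az webapp config storage-account add \
  --resource-group my-rg \
  --name my-app-service \
  --custom-id uploads-mount \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name uploads \
  --access-key "<storage-account-key>" \
  --mount-path /var/www/simple/public/uploads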

Connect service container to db container

I'm new to docker and started to play with it on my small project.
I have dockerized the service itself with the following Docker file:
FROM adoptopenjdk:11-jdk-hotspot AS DEPENDENCIES_BUILD_IMAGE
ENV APP_HOME=/usr/app/
WORKDIR $APP_HOME
COPY build.gradle settings.gradle gradlew $APP_HOME
COPY gradle $APP_HOME/gradle
RUN ./gradlew build || return 0
COPY . .
RUN ./gradlew build
FROM adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine AS FINAL
ENV JAR_TEMPLATE=myapp-0.0.1-SNAPSHOT.jar
ENV ARTIFACT_NAME=myapp.jar
ENV APP_HOME=/usr/app
WORKDIR $APP_HOME
COPY --from=DEPENDENCIES_BUILD_IMAGE $APP_HOME/build/libs/$JAR_TEMPLATE .
RUN mv $JAR_TEMPLATE $ARTIFACT_NAME
EXPOSE 8080
CMD ["java", "-jar", "budget-calculator.jar"]
Side note - I know there's a problem in that I'm always copying 0.0.1-SNAPSHOT, but I'm not sure how to solve it at the moment.
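One common way to avoid hard-coding the version (a sketch, assuming the version is passed in as a build argument; the names are illustrative) is a Dockerfile ARG in the final stage:
ARG APP_VERSION=0.0.1-SNAPSHOT
ENV JAR_TEMPLATE=myapp-${APP_VERSION}.jar
# built with, for example:
#   docker build --build-arg APP_VERSION=0.0.2 -t myapp .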
After that I wanted to connect my service to a Postgres DB with docker-compose using this configuration:
version: '3'
services:
  backend:
    build: .
    container_name: myapp
    ports:
      - "8080:8080"
    links:
      - "db"
    depends_on:
      - db
    networks:
      - backend
  db:
    restart: unless-stopped
    image: postgres:10
    container_name: myapp-db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=myapp
    ports:
      - 5436:5436
    networks:
      - backend
networks:
  backend:
After that I updated my application.properties file to point the DB connection at the other container, as follows:
spring.flyway.url=jdbc:postgresql://db:5436/myapp
spring.flyway.user=postgres
spring.flyway.password=secret
spring.flyway.baseline-on-migrate=true
spring.datasource.url=jdbc:postgresql://db:5436/myapp
spring.datasource.username=postgres
spring.datasource.password=secret
spring.datasource.driverClassName=org.postgresql.Driver
Now I had 2 problems:
1. While I assumed that build: . would rebuild my image every time I run docker-compose up if something changed, in practice I saw that this is not the case.
2. When the backend service starts, Flyway (a DB migration library) tries to connect to the database and cannot resolve the connection.
I've seen online that the usage of links is deprecated and I should use networks instead, but neither seems to work - what am I missing?
There were 2 problems with my configuration. The first one: the internal port of Postgres was configured as 5436, while the default port of the image is 5432 (I've updated both of them to 5432).
The second one: in order to pass the DB's address to the service, I've added the following environment variables to the service container:
environment: # Pass environment variables to the service
  SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/budget
  SPRING_DATASOURCE_USERNAME: postgres
  SPRING_DATASOURCE_PASSWORD: secret
  SPRING_FLYWAY_URL: jdbc:postgresql://db:5432/budget
  SPRING_FLYWAY_USER: postgres
  SPRING_FLYWAY_PASSWORD: secret
So my current working configuration is this:
version: '3.8'
services:
  backend:
    build: .
    container_name: app-service
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/myapp
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: secret
      SPRING_FLYWAY_URL: jdbc:postgresql://db:5432/myapp
      SPRING_FLYWAY_USER: postgres
      SPRING_FLYWAY_PASSWORD: secret
  db:
    restart: unless-stopped
    image: postgres:10
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=secret
      - POSTGRES_USER=postgres
    volumes:
      - myapp_data:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  myapp_data:
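Regarding the first problem (the image not being rebuilt): docker-compose up only builds an image if it does not already exist, so after code changes the rebuild has to be requested explicitly:
docker-compose up --build
# or as two steps:
docker-compose build backend
docker-compose up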

Docker shared volume not working in MacOs

I have a docker-compose.yml file. It works fine on Windows 10, but whenever I try to run it on macOS it doesn't work, especially the shared volumes.
Here is the content of my docker-compose.yml file and directory structure
version: '3'
services:
  database:
    image: mongo
    container_name: pcore-database
    ports:
      - '27017:27017'
  node-server:
    image: node
    container_name: pcore-node-server
    volumes:
      - ./node-services :/usr/app/node-services
    working_dir: /usr/app/node-services
    command: npm run dev
    ports:
      - '3000:3000'
    links:
      - database
      - nginx-server
    depends_on:
      - database
  apache-server:
    image: webdevops/php-apache
    container_name: pcore-apache-server
    working_dir: /app
    volumes:
      - ./php-services :/app
    ports:
      - '8000:80'
Check the node-server service and nginx-server
Now when I run docker-compose up, it creates additional directories with the same name and throws an error.
Check the error and additional directories it created.
I don't know what's going on. It works fine on Windows 10, but on macOS it creates additional directories and does not share the volumes. Can someone guide me?
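One detail that stands out in the compose file above (an observation, not a confirmed fix): there is a space before the colon in both bind-mount entries, so the host path is parsed with a trailing space and Docker may create a second, almost identically named directory. Without the space the mappings would read:
    volumes:
      - ./node-services:/usr/app/node-services
and
    volumes:
      - ./php-services:/app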

Docker deployment works with MacOs but not with Ubuntu 16.04

I'm trying to dockerize my Laravel app: https://github.com/xoco70/kendozone/tree/docker-local
My dev environment is working; now I am working on a deployable app in a local environment.
On macOS, everything is OK.
I build it with:
docker build . -f app.dockerfile.local -t kendozone:local-1.0.0
And run it with
docker-compose -f docker-compose-local.yml up --force-recreate
The problem is with npm run dev, which is a webpack build command.
It just compiles Sass, combines JS and CSS, and copies the result to the /var/www/public folder.
But when I run my app on Ubuntu, I can access the login page, but it seems to load without any CSS / JS.
On macOS, I can see them with no problem.
Here is my docker-compose:
version: '2'
services:
  # The Application
  app:
    image: kendozone:local-1.0.0
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    environment:
      - "DB_DATABASE=homestead"
      - "DB_USERNAME=homestead"
      - "DB_PASSWORD=secret"
      - "DB_PORT=3306"
      - "DB_HOST=database"
    depends_on:
      - database
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: nginx.dockerfile
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    ports:
      - 8090:80
    depends_on:
      - app
  # The Database
  database:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=root"
    ports:
      - "33061:3306"
volumes:
  dbdata:
  codevolume:
Any idea?
One way to fix this is to make Node available in your Docker base image, and then actually run npm install and npm run production to build a production-ready image of your application.
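As a rough sketch of that idea (the base images and paths here are assumptions, since the app's Dockerfile isn't shown), a multi-stage build could compile the assets with Node and copy them into the PHP image:
# Stage 1: build front-end assets with Node
FROM node:14 AS assets
WORKDIR /var/www
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run production    # compiles Sass/JS into /var/www/public

# Stage 2: the PHP application image
FROM php:7.4-fpm
WORKDIR /var/www
COPY . /var/www
# overwrite the public assets with the compiled versions from stage 1
COPY --from=assets /var/www/public /var/www/public
Building the assets at image-build time means the Ubuntu host no longer needs Node or a separate npm run dev step.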
