Spring Boot Docker updates do not appear - spring-boot

I am running a web project and a database through Docker Compose, but my updates do not appear on the page. Here is my docker-compose.yml:
version: '3.2'
services:
  app:
    image: springio/gs-spring-boot-docker
    ports:
      - "8080:8080"
    depends_on:
      - mypostgres
  mypostgres:
    image: image
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=ps
      - POSTGRES_USER=us
      - POSTGRES_DB=db
I changed Application.java to print something other than "Hello World" and refreshed localhost:8080, but the page still shows no changes.

Code changes are baked into the image at build time, so a running container keeps serving the old build until the image is rebuilt. Go to the directory containing your Dockerfile and run the following commands.
Build the new image:
docker build --build-arg JAR_FILE=build/libs/*.jar -t springio/gs-spring-boot-docker .
and then run the new image:
docker run -p 8080:8080 -t springio/gs-spring-boot-docker
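Since the project is started through Docker Compose, an equivalent approach is to let Compose rebuild and recreate the container in one step. This is a sketch and assumes a build: context pointing at the Dockerfile is added to the app service in docker-compose.yml:

./gradlew build                              # rebuild the jar first (or ./mvnw package)
docker-compose build app                     # rebuild the app image from the Dockerfile
docker-compose up -d --force-recreate app    # recreate the container from the new image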

Related

How to push an entire project image to Docker Hub?

I have a Spring Boot MVC project with a MySQL database, a Dockerfile, and a docker-compose.yml, and I want to push it to Docker Hub so that any client can run it. I pushed to Docker Hub successfully with the docker-compose push command, but when I pull the image from Docker Hub it doesn't work; errors such as "connection refused" occur. On my own machine it works perfectly, i.e. the project runs fine in its Docker container.
This is my Dockerfile:
# Build and run the app inside a Maven + JDK 11 image
FROM maven:3.8.2-jdk-11
WORKDIR /empmanagment-app
# Copy the whole project and build it at image build time
COPY . .
RUN mvn clean install
# Start the app via the Maven plugin when the container runs
CMD mvn spring-boot:run
and this is my docker-compose.yml file:
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ROOT_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
    ports:
      - "3307:3306"
    networks:
      - common-network
    volumes:
      - mysql-standalone:/var/lib/mysql
  springboot-docker-container:
    build: ./
    image: anar1501/emp-managment
    ports:
      - "8080:8080"
    networks:
      - common-network
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone
    volumes:
      - .m2:/root/.m2
volumes:
  mysql-standalone:
networks:
  common-network:
    driver: bridge
Can anyone suggest what I am doing wrong?
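For reference, a hedged sketch of the publish/consume flow with Compose. The key point (an assumption about the cause here, not a confirmed diagnosis) is that clients need the docker-compose.yml as well: the pushed image alone contains neither the mysql-standalone service nor common-network, which commonly shows up as "connection refused" when the pulled image is run on its own.

# on the publishing machine: build and push the image tagged anar1501/emp-managment
docker-compose build
docker-compose push

# on a client machine: the same docker-compose.yml is needed, then
docker-compose pull
docker-compose up -d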

Unable to run Gradle tests using GitLab and docker-compose

I want to run tests with Gradle after docker-compose up brings up a Postgres DB and a Spring Boot app. The whole flow must run inside the GitLab merge-request step. The problem appeared when I ran my tests from the script part of the gitlab-ci file. Importantly, in that situation we are in the correct directory, where GitLab checked out my project. Part of the gitlab-ci file:
before_script:
  - ./gradlew clean build
  - cp x.jar /path/x.jar
  - docker-compose -f /path/docker-compose.yaml up -d
script:
  - ./gradlew :functional-tests:clean test -Penv=gitlab --info
But from here I can't call http://localhost:8080 -> connection refused. I tried putting 0.0.0.0 or 172.17.0.3 or docker.host... etc. into the test config, but it didn't work.
So I added another container inside docker-compose, in which I try to run my tests via an entrypoint command. To do that I need the current GitLab project directory, but I can't mount it.
My current solution:
Gitlab-ci:
run-functional-tests:
  stage: run_functional_tests
  image:
    name: 'xxxx/docker-compose-java-11:0.0.7'
  script:
    - ./gradlew clean build -x test
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})"'   # current GitLab workspace dir
    - cp $CI_PROJECT_DIR/x.jar $CI_PROJECT_DIR/docker/gitlab/x.jar
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml up -d
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml logs -f
  timeout: 30m
docker-compose.yaml
version: '3'
services:
  postgres:
    build:
      context: ../postgres
    container_name: postgres
    restart: always
    networks:
      - app-postgres
    ports:
      - 5432
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    networks:
      - app-postgres
  functional-tests:
    build:
      context: .
    container_name: app-functional-tests
    working_dir: /app
    volumes:
      - ${SHARED_PATH}:/app
    depends_on:
      - app
    entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
    networks:
      - app-postgres
networks:
  app-postgres:
But in this setup my working_dir, /app, ends up empty. Can someone assist with that?
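One direction worth sketching (an assumption, not a verified fix): when the job talks to the host's Docker daemon, bind mounts in docker-compose.yaml are resolved on the Docker host rather than inside the CI job container, so ${SHARED_PATH} can easily be an empty directory from the daemon's point of view. A way around that is to bake the project into the test image at build time instead of mounting it; the Dockerfile.tests name and base image below are hypothetical:

# Dockerfile.tests (hypothetical): copy the checked-out project into the test image
FROM eclipse-temurin:11-jdk
WORKDIR /app
COPY . /app
ENTRYPOINT ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]

The functional-tests service would then use a build: section pointing at the project root and this Dockerfile instead of the ${SHARED_PATH} volume.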

docker-compose for production running Laravel with nginx on Azure

I have an app that works locally, but I am having problems getting it to run on Azure.
I have the following docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes on the nginx service are OK; perhaps I should create a new Dockerfile that copies the files instead. However, that would increase the size of the project a lot.
When using App Services on Azure, the platform assigns a random port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestions for getting this project to run on Azure.
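For context, the envsubst step only needs the template to reference ${PORT} where nginx listens. A minimal sketch of what nginx.conf.template might look like (the upstream name, root, and PHP-FPM port are assumptions, not taken from the question):

# nginx.conf.template (sketch)
server {
    listen ${PORT};
    root /var/www/simple/public;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass app:9000;    # php-fpm in the app service, exposed on port 9000
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}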
I'm assuming you're trying to persist your app's storage to a volume. Check out this doc issue. I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (this isn't available for Windows app plans yet), and map the path /var/www/simple/public/uploads to the file path of the storage container.
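If you go that route, the mount can be created from the Azure CLI. This is a hedged sketch with placeholder names; double-check the flags against the current az webapp documentation:

# Sketch: mount an Azure Files share into the Linux App Service at the uploads path
az webapp config storage-account add \
  --resource-group my-rg \
  --name my-laravel-app \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name uploads \
  --access-key "<storage-access-key>" \
  --mount-path /var/www/simple/public/uploads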

Docker deployment works on macOS but not on Ubuntu 16.04

I'm trying to dockerise my Laravel app: https://github.com/xoco70/kendozone/tree/docker-local
My dev environment is working; now I am working on a deployable app in a local environment.
On macOS, everything is OK.
I build it with:
docker build . -f app.dockerfile.local -t kendozone:local-1.0.0
and run it with:
docker-compose -f docker-compose-local.yml up --force-recreate
The problem is with npm run dev, which is a webpack build command.
It just compiles Sass, combines the JS and CSS, and copies them to the /var/www/public folder.
But when I run my app on Ubuntu, I can access the login page, but it seems to load without any CSS / JS.
On macOS, I can see them with no problem....
Here is my docker-compose:
version: '2'
services:
  # The Application
  app:
    image: kendozone:local-1.0.0
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    environment:
      - "DB_DATABASE=homestead"
      - "DB_USERNAME=homestead"
      - "DB_PASSWORD=secret"
      - "DB_PORT=3306"
      - "DB_HOST=database"
    depends_on:
      - database
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: nginx.dockerfile
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    ports:
      - 8090:80
    depends_on:
      - app
  # The Database
  database:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=root"
    ports:
      - "33061:3306"
volumes:
  dbdata:
  codevolume:
Any idea?
One way to fix this is to make Node available in your Docker base image, and then actually run npm install and npm run production at build time so you get a production-ready image of your application.
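A minimal sketch of what that could look like in the app image's Dockerfile; the base image, package manager, and paths are assumptions, not taken from the repository:

# Sketch: build the frontend assets inside the image so they exist on every host
FROM php:7.4-fpm

# Node is needed for the webpack / Laravel Mix build
RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*

WORKDIR /var/www
COPY . /var/www

# Compile Sass/JS into /var/www/public at image build time instead of relying on the host
RUN npm install && npm run production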

How to have "RUN" command in docker-compose similar to dockerfile?

Dockerfile:
FROM elasticsearch:2
RUN /usr/share/elasticsearch/bin/plugin install --batch cloud-aws
from https://www.elastic.co/blog/elasticsearch-docker-plugin-management
Can someone please help me add the ES plugin via the docker-compose file?
version: '2'
services:
  nitrogen:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ~/mycode:/mycode
    depends_on:
      - couchdb
      - elasticsearch
  elasticsearch:
    image: elasticsearch:1.7.5
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
What is missing in the above docker-compose is the installation of the plugin.
I tried the following, but it runs on the local machine instead of in the Docker container:
command: /usr/share/elasticsearch/bin/plugin install elasticsearch/elasticsearch-river-couchdb/2.6.0
You have to create your own Docker image, e.g. my-elasticsearch, with the Dockerfile you mentioned, and then refer to that image in docker-compose.yml.
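A minimal sketch of that, assuming the Dockerfile from the question is placed in an ./elasticsearch directory next to docker-compose.yml; Compose can then build it in place, so pre-building and naming the image by hand is optional:

elasticsearch:
  build: ./elasticsearch    # directory containing the FROM elasticsearch:2 Dockerfile above
  ports:
    - "9200:9200"
    - "9300:9300"

Running docker-compose up --build then bakes the plugin into the image via the RUN instruction, which command: cannot do, since command: only replaces the container's startup command.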
