I have a Spring Boot MVC project with a MySQL database, a Dockerfile and a docker-compose.yml, and I want to push it to Docker Hub so that any client can run it. I pushed to Docker Hub successfully with the docker-compose push command, but when I pull my image from Docker Hub it doesn't work: I get errors such as "connection refused". On my own machine it works perfectly, i.e. the project runs successfully in a Docker container.
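For reference, the push/pull workflow described above looks roughly like this (the image name is taken from the compose file below; a docker login to Docker Hub is assumed):

# on the machine where the project lives:
docker-compose build
docker-compose push                    # pushes anar1501/emp-managment to Docker Hub
# on the client machine:
docker pull anar1501/emp-managment     # pulls only the app image; the MySQL container, network and environment still come from docker-compose.yml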
This is my Dockerfile:
FROM maven:3.8.2-jdk-11
WORKDIR /empmanagment-app
COPY . .
RUN mvn clean install
CMD mvn spring-boot:run
and this is my docker-compose.yml file
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ROOT_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
    ports:
      - "3307:3306"
    networks:
      - common-network
    volumes:
      - mysql-standalone:/var/lib/mysql
  springboot-docker-container:
    build: ./
    image: anar1501/emp-managment
    ports:
      - "8080:8080"
    networks:
      - common-network
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone
    volumes:
      - .m2:/root/.m2
volumes:
  mysql-standalone:
networks:
  common-network:
    driver: bridge
Can anyone suggest what I am doing wrong?
I have an app that is working, but I am having problems making it run on Azure.
I have the following docker-compose:
version: "3.6"
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./:/var/www/
      - ./setup/azure/nginx/conf.d/:/etc/nginx/template
    environment:
      PORT: ${PORT}
    command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
    networks:
      - mynet
    depends_on:
      - app
      - worker
  app:
    image: myimage:latest
    build:
      context: .
      dockerfile: ./setup/azure/Dockerfile
    restart: unless-stopped
    tty: true
    expose:
      - 9000
    volumes:
      - uploads:/var/www/simple/public/uploads
      - logos:/var/www/simple/public/logos
    networks:
      - mynet
  worker:
    image: my_image:latest
    command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
    depends_on:
      - app
    networks:
      - mynet
volumes:
  uploads:
  logos:
networks:
  mynet:
I am unsure if the volumes in the nginx service are OK; I think that perhaps I should create a new Dockerfile to copy the files. However, this would increase the size of the project a lot.
When using App Services on Azure, the deployment assigns a random port, which is why I have the envsubst instruction in the command. I appreciate any other suggestion to make this project run on Azure.
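For context, a minimal sketch of what the nginx.conf.template referenced above could contain, assuming a single PHP-FPM upstream on app:9000 (the real template is not shown here, so root and paths are assumptions):

# /etc/nginx/template/nginx.conf.template (hypothetical)
server {
    listen ${PORT};                      # substituted by envsubst at container start
    root /var/www/simple/public;         # assumed document root
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass app:9000;           # the "app" service exposes port 9000
    }
}

Because envsubst is restricted to '${PORT}' in the command, nginx's own $-variables survive the substitution.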
I'm assuming you're trying to persist your app's storage in a volume. Check out this doc issue. I don't think you need
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the relative path /var/www/simple/public/uploads to the file path of the storage container.
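A hedged sketch of the corresponding Azure CLI call (resource group, app name, storage account, share name and access key are placeholders):

az webapp config storage-account add \
  --resource-group <resource-group> \
  --name <app-name> \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name <storage-account> \
  --share-name <share-name> \
  --access-key <key> \
  --mount-path /var/www/simple/public/uploads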
I am running a web project and a database through docker compose, but my updates do not appear on the page.
version: '3.2'
services:
  app:
    image: springio/gs-spring-boot-docker
    ports:
      - "8080:8080"
    depends_on:
      - mypostgres
  mypostgres:
    image: image
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=ps
      - POSTGRES_USER=us
      - POSTGRES_DB=db
I changed Application.java to print something other than "Hello World" and refreshed the page at localhost:8080, but there are still no changes on my web page.
Go to the directory of your Dockerfile and run the following commands:
Build the new image:
docker build --build-arg JAR_FILE=build/libs/*.jar -t springio/gs-spring-boot-docker .
and then run the new image:
docker run -p 8080:8080 -t springio/gs-spring-boot-docker
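Since the question runs everything through docker-compose, an equivalent approach (assuming a build: section pointing at your Dockerfile is added to the app service) would be to rebuild and recreate the container in one step:

docker-compose up -d --build app    # rebuild the image and recreate just the app container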
I'm trying to dockerise my Laravel app: https://github.com/xoco70/kendozone/tree/docker-local
My dev env is working; now I am working on a deployable app in a local environment.
On macOS, everything is OK.
I build it with:
docker build . -f app.dockerfile.local -t kendozone:local-1.0.0
And run it with:
docker-compose -f docker-compose-local.yml up --force-recreate
The problem is with npm run dev, which is a webpack build command.
It just compiles Sass, combines the JS and CSS, and copies them to the /var/www/public folder.
But when I run my app on Ubuntu, I can access the login page, but it seems to load without any CSS/JS.
On macOS, I can see them with no problem.
Here is my docker-compose:
version: '2'
services:
  # The Application
  app:
    image: kendozone:local-1.0.0
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    environment:
      - "DB_DATABASE=homestead"
      - "DB_USERNAME=homestead"
      - "DB_PASSWORD=secret"
      - "DB_PORT=3306"
      - "DB_HOST=database"
    depends_on:
      - database
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: nginx.dockerfile
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    ports:
      - 8090:80
    depends_on:
      - app
  # The Database
  database:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=root"
    ports:
      - "33061:3306"
volumes:
  dbdata:
  codevolume:
Any idea?
One way to fix this is to make Node available in your Docker base image, and then actually run npm install and npm run production to build a production-ready image of your application.
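A minimal sketch of what that could look like, assuming a php-fpm base image and that the Laravel Mix/webpack config lives in the project root (the base image tag and paths are placeholders, not taken from the repository):

FROM php:7.4-fpm                          # hypothetical base image, adjust to your app
WORKDIR /var/www
COPY . /var/www
# Node.js/npm are only needed to run the webpack (Laravel Mix) build
RUN apt-get update && apt-get install -y nodejs npm \
 && npm install \
 && npm run production \
 && rm -rf node_modules                   # compiled assets end up in /var/www/public

With the assets baked into the image, they are present even when the code volume is recreated on another host.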
I have a docker-compose.yml file with an elastic search image:
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  container_name: custom_elasticsearch_1
If I want to install additional plugins like the HQ interface or the attachment-mapper I have to do a manual installation with the following commands:
$ docker exec custom_elasticsearch_1 plugin install royrusso/elasticsearch-HQ
$ docker exec custom_elasticsearch_1 plugin install mapper-attachments
Is there a way to install them automatically when I run the docker-compose up command?
Here is a blog post by Elastic pertaining to exactly that! You need to use a Dockerfile which executes commands to extend an image. Your Dockerfile will look something like this:
FROM elasticsearch
RUN elasticsearch-plugin install royrusso/elasticsearch-HQ
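You can then point the compose service at this Dockerfile instead of the stock image, along these lines (service layout follows the question; adjust to your setup):

elasticsearch:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "9200:9200"
  container_name: custom_elasticsearch_1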
Inspired by #NickPridorozhko's answer, but updated and tested with elasticsearch^7.0.0 (with docker stack / swarm), example with analysis-icu:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
  user: elasticsearch
  command: >
    /bin/sh -c "./bin/elasticsearch-plugin list | grep -q analysis-icu
    || ./bin/elasticsearch-plugin install analysis-icu;
    /usr/local/bin/docker-entrypoint.sh"
  ...
The main differences are the updated commands for ^7.0.0, and the use of the Docker entrypoint instead of ./bin/elasticsearch (in a stack's context, you'd get an error related to a limit on spawnable processes).
This works for me: install the plugin first and then continue with starting Elasticsearch.
elasticsearch:
  image: elasticsearch
  command:
    - sh
    - -c
    - "plugin list | grep -q plugin_name || plugin install plugin_name;
      /docker-entrypoint.sh elasticsearch"
The ingest-attachment plugin requires additional permissions and prompts the user during installation. I used the yes command:
elasticsearch:
  image: elasticsearch:6.8.12
  command: >
    /bin/sh -c "./bin/elasticsearch-plugin list | grep -q ingest-attachment
    || yes | ./bin/elasticsearch-plugin install --silent ingest-attachment;
    /usr/local/bin/docker-entrypoint.sh eswrapper"
If you're using the ELK stack from sebp/elk, you need to set up your Dockerfile like this:
FROM sebp/elk
ENV ES_HOME /opt/elasticsearch
WORKDIR ${ES_HOME}
RUN yes | CONF_DIR=/etc/elasticsearch gosu elasticsearch bin/elasticsearch-plugin \
install -b mapper-attachments
As seen on https://elk-docker.readthedocs.io/#installing-elasticsearch-plugins.
It should also work for Elasticsearch on its own.
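One possible way to build and run that image (the tag elk-with-plugins is just a placeholder; the published ports follow the elk-docker documentation):

docker build -t elk-with-plugins .
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it elk-with-plugins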
For anybody who is using Elasticsearch version 7 or later and wants to install a plugin through the Dockerfile: use the --batch flag.
FROM elasticsearch:7.16.2
RUN bin/elasticsearch-plugin install repository-azure --batch
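To check that the plugin actually ended up in the image, one option is (the image tag is a placeholder):

docker build -t es-with-azure:7.16.2 .
docker run --rm es-with-azure:7.16.2 bin/elasticsearch-plugin list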
An example with Elasticsearch v6.8.15.
For simplicity we will use a docker-compose.yml and a Dockerfile.
The content of Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.8.15
RUN elasticsearch-plugin install analysis-icu
RUN elasticsearch-plugin install analysis-phonetic
And the content of docker-compose.yml:
version: '2.2'
services:
  elasticsearch:
    #image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=X-Requested-With,Content-Type,Content-Length,Authorization
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
      - esplugins1:/usr/share/elasticsearch/plugins
    ports:
      - 9268:9200
    networks:
      - esnet
  elasticsearch2:
    #image: docker.elastic.co/elasticsearch/elasticsearch:6.8.15
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - http.cors.allow-headers=X-Requested-With,Content-Type,Content-Length,Authorization
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
      - esplugins2:/usr/share/elasticsearch/plugins
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esplugins1:
    driver: local
  esplugins2:
    driver: local
networks:
  esnet:
This is the default Elasticsearch 6.8.15 docker-compose.yml file from the Elasticsearch website itself: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/docker.html#docker-cli-run-prod-mode. I added two named data volumes, esplugins1 and esplugins2, one for each node, so the plugins are persisted between docker-compose down runs.
Note: if you ever run docker-compose down -v, these volumes will be removed!
I commented out the image line and moved that image into the Dockerfile, then added the elasticsearch-plugin install commands with RUN. The elasticsearch-plugin binary is natively available in the Elasticsearch container, and you can check this once you are in the container shell.
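For example, the installed plugins can be checked like this (the container name matches container_name above):

docker exec -it elasticsearch bash    # open a shell in the running container
bin/elasticsearch-plugin list         # inside the container: list installed plugins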