I am building a project for local development with Dockerized apps. My company has 3 different domains, and each domain has one docker-compose file with 5 services (15 projects in total).
If a user of my project wants to deploy only 1 service of their own domain and/or 2 services from the other domains, I have to comment out the services in the other docker-compose files that should not be deployed.
So my question is: how can I comment out a block of a docker-compose file with a bash script? I want to select the lines by their context. In the example below, I want to comment out the ap2-php-fpm section. I can't keep doing this as a manual workaround because more projects are coming, so I have to do it from a bash script.
Demonstration
version: '3.3'
services:
app-php-fpm:
container_name: app
build:
context: ${src}/
volumes:
- $path:path
networks:
general-nt:
aliases:
- app
expose:
- "9000"
ap2-php-fpm:
container_name: app
build:
context: ${src}/
volumes:
- $path:path
networks:
general-nt:
aliases:
- app
expose:
- "9000"
networks:
general-nt:
external: true
I want to turn the file into the following with a bash script.
version: '3.3'
services:
app-php-fpm:
container_name: app
build:
context: ${src}/
volumes:
- $path:path
networks:
general-nt:
aliases:
- app
expose:
- "9000"
# ap2-php-fpm:
# container_name: app
# build:
# context: ${src}/
# volumes:
# - $path:path
# networks:
# general-nt:
# aliases:
# - app
# expose:
# - "9000"
networks:
general-nt:
external: true
For many practical purposes, it may be enough to run docker-compose up with specific service names. If you run
docker-compose up -d app-php-fpm
it will start the service(s) named on the command line, plus anything they depends_on:, but nothing else. That avoids the need to comment out parts of the YAML file, and you can otherwise interact with the containers normally.
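Since your services are split across three compose files (one per domain), you can also point docker-compose at several files at once with -f and still name only the services you want. A rough sketch, where the file paths are placeholders for your actual per-domain files:
docker-compose -f domain1/docker-compose.yml -f domain2/docker-compose.yml up -d app-php-fpm ap2-php-fpm
Compose merges the files in the order given and only starts the named services (plus their depends_on: dependencies), so nothing needs to be commented out.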
I have an app that works locally, but I am having problems getting it to run on Azure.
I have the following docker-compose file:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes in nginx are OK; I think perhaps I should create a new Dockerfile to copy the files in, but that would increase the size of the project a lot.
When using App Service on Azure, the deployment is assigned a random port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestions to get this project running on Azure.
I'm assuming you're trying to persist your app's storage to a volume. Check out this doc issue. Now I don't think you need
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and mount the container path /var/www/simple/public/uploads to a file share in the storage account.
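As a sketch of what that looks like with the Azure CLI (the resource group, app name, storage account, share name and key below are all placeholders you would substitute):
az webapp config storage-account add \
  --resource-group <resource-group> \
  --name <app-name> \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name <storage-account> \
  --share-name <share-name> \
  --access-key <storage-key> \
  --mount-path /var/www/simple/public/uploads
You would repeat this for the logos path with its own share and mount path.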
I have a docker-compose.yml file. It works fine on Windows 10, but whenever I try to run it on macOS it doesn't work, especially the shared volumes.
Here is the content of my docker-compose.yml file and directory structure:
version: '3'
services:
database:
image: mongo
container_name: pcore-database
ports:
- '27017:27017'
node-server:
image: node
container_name: pcore-node-server
volumes:
- ./node-services :/usr/app/node-services
working_dir: /usr/app/node-services
command: npm run dev
ports:
- '3000:3000'
links:
- database
- nginx-server
depends_on:
- database
apache-server:
image: webdevops/php-apache
container_name: pcore-apache-server
working_dir: /app
volumes:
- ./php-services :/app
ports:
- '8000:80'
Check the node-server service and the nginx-server.
Now when I run docker-compose up, it creates additional directories with the same names and throws an error.
Check the error and the additional directories it created.
I don't know what's going on. It works fine on Windows 10, but on macOS it creates additional directories and does not share the volumes. Can someone guide me?
I want to deploy a Spring Boot project with Spring Cloud Config to Docker. Below is the docker-compose.yml file, but I'm getting the following error when running it.
Error:
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose.yml", line 4, column 4
expected <block end>, but found '<block mapping start>'
in "./docker-compose.yml", line 48, column 5
Below is my docker-compose.yml file:
version: '3'
services:
discovery:
image: pl.app.service/discovery-service:0.0.1-SNAPSHOT
ports:
- 8061:8061
config:
image: pl.app.service/config-service:0.0.1-SNAPSHOT
volumes:
- ./config-data:/config-data
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.cloud.config.server.native.searchLocations=/config-data
depends_on:
- discovery
ports:
- 8088:8088
proxy-service:
image: pl.app.service/proxy-service:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
depends_on:
- discovery
- config
ports:
-8060:8060
employee-service:
image: pl.app.service/employee-service:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.profiles.active=dev
restart: on-failure
depends_on:
- discovery
- config
ports:
-8090:8090
department-service:
image: pl.app.service/organization-service:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.profiles.active=dev
restart: on-failure
depends_on:
- discovery
- config
ports:
-8091:8091
organization-service:
image: pl.app.service/organization-service:0.0.1-SNAPSHOT
environment:
- JAVA_OPTS=
-DEUREKA_SERVER=http://discovery:8761/eureka
-Dspring.profiles.active=dev
restart: on-failure
depends_on:
- discovery
- config
ports:
-8092:8092
I have tried multiple indentation changes in the docker-compose.yml file.
The services mentioned are already built by Maven. I need help running Docker Compose for the application.
There are multiple errors.
Make sure that you only use spaces for indentation (instead of tabs). If you are interested in why tabs don't work within YAML files, have a look at A YAML file cannot contain tabs as indentation.
Put your ports into strings (e.g. - "8060:8060" instead of - 8060:8060).
I think you are misusing environment variables. They should look like, e.g.:
environment:
- JAVA_OPTS
- EUREKA_SERVER=http://discovery:8761/eureka
- ANOTHER_ENV_VARIABLE=/config-data
Have a look at the docs for details: https://docs.docker.com/compose/environment-variables/
After fixing your docker-compose.yml, you can validate it by running docker-compose config inside the directory where the docker-compose.yml is located.
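Applied to one of your services, a corrected block following the suggestions above might look roughly like this (only the ports and environment sections differ from your file):
proxy-service:
    image: pl.app.service/proxy-service:0.0.1-SNAPSHOT
    environment:
      - EUREKA_SERVER=http://discovery:8761/eureka
    depends_on:
      - discovery
      - config
    ports:
      - "8060:8060"
Note the space after the dash and the quotes around the port mapping; without the space, -8060:8060 is not recognized as a YAML list item.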
I’m trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I'm unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
.....
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data and so forth folders under that path and store data therein as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
- name: SONARQUBE_HOME
value: /sonarqube-data
...
...
volumeMounts:
- name: app-volume
mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and so forth folders, then save data therein.
And voilà, your Sonarqube data is thereby persisted.
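Since the question uses docker-compose, the same idea might translate roughly like this (an untested sketch; the host path is a placeholder):
sonarqube:
    image: sonarqube
    environment:
      - SONARQUBE_HOME=/sonarqube-data
    volumes:
      - /data/sonarqube:/sonarqube-data
This replaces the four separate bind mounts with a single one rooted at SONARQUBE_HOME.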
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the compose file below will help you get this running with a single command.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the H2 in-memory db? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service. For example, container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar without specifying the default port for Postgres.
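Putting those two suggestions together, the relevant part of the sonarqube service might look like this (a sketch based on the points above, not a full file):
sonarqube:
    image: sonarqube
    container_name: sonarqube
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
The username and password variable names are assumptions following the same SONARQUBE_JDBC_* convention; check the image documentation for your Sonarqube version.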
I am trying to figure out how to set up a simple stack for development and later deployment. I want to use Docker to run Traefik in a container as the public-facing reverse proxy, which then interfaces as needed with an Nginx container that is used only to serve static frontend files (HTML, CSS, JS) and a backend PHP container that runs Laravel (I'm intentionally decoupling the frontend and API for this project).
I am trying my best to learn through all of the video and written tutorials out there, but things become complicated very quickly (at least, for my uninitiated brain) and it's a bit overwhelming. I have a one-week deadline to complete this project, and I'm strongly considering dropping Docker altogether for the time being out of fear that I'll spend the whole time messing around with the configuration instead of actually coding!
To get started, I have a simple docker-compose with the following configuration that I've verified at least runs correctly:
version: '3'
services:
reverse-proxy:
image: traefik
command: --api --docker # Enables Web UI and tells Traefik to listen to Docker.
ports:
- "80:80" # HTTP Port
- "8080:8080" # Web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events.
Now, I need to figure out how to connect Nginx and PHP/Laravel effectively.
First of all, don't put yourself under stress to learn new stuff, because if you do, learning it won't feel comfortable anymore. Take the technology you already know and get things done. When you're done and you realize you still have 1-2 days before your deadline, try to over-deliver by including the new technology. This way you won't blow your deadline and you won't be under stress figuring out new technology or configuration.
The configuration you see below is neither complete nor functionally tested. I just copied most of it out of 3 of my main projects in order to give you a starting point. Traefik as-is can be complicated to set up properly.
version: '3'
# Instantiate your own configuration with a Dockerfile!
# This way you can build somewhere and just deploy your container
# anywhere without the need to copy files around.
services:
# traefik as reverse-proxy
traefik:
build:
context: .
dockerfile: ./Dockerfile-for-traefik # including traefik.toml
command: --docker
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# this file you'll have to create manually `touch acme.json && chmod 600 acme.json`
- /home/docker/volumes/traefik/acme.json:/opt/traefik/acme.json
networks:
- overlay
ports:
- 80:80
- 443:443
nginx:
build:
context: .
dockerfile: ./Dockerfile-for-nginx
networks:
- overlay
depends_on:
- laravel
volumes:
# you can copy your assets to production with
# `tar -c -C ./myassets . | docker cp - myfolder_nginx_1:/var/www/assets`
# there are many other ways to achieve this!
- assets:/var/www/assets
# define your application + whatever it needs to run
# important:
# - "build:" will search for a Dockerfile in the directory you're specifying
laravel:
build: ./path/to/laravel/app
environment:
MYSQL_ROOT_PASSWORD: password
ENVIRONMENT: development
MYSQL_DATABASE: your_database
MYSQL_USER: your_database_user
networks:
- overlay
links:
- mysql
volumes:
# this path is for development
- ./path/to/laravel/app:/app
# you need a database, right?
mysql:
image: mysql:5
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: your_database
MYSQL_USER: your_database_user
networks:
- overlay
volumes:
- mysql-data:/var/lib/mysql
volumes:
mysql-data:
assets:
networks:
overlay:
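If you go with something along these lines, building the images and starting the whole stack is then a single command from the directory containing the compose file:
docker-compose up -d --build
From there you can iterate on the Laravel code through the bind-mounted ./path/to/laravel/app directory without rebuilding the image on every change.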