I'm a student, and at my college we're trying to set up our own backend service for our applications, since a hosted backend like Firebase would cost us more.
We settled on using OpenStack to combine and manage the compute resources of multiple computers in our college lab, but now we want to build a web portal where our students can log in and use the Parse Server dashboard.
How do we set up one or more Parse instances per user, and which containers should we use, and how?
You can try Docker Compose. You would need to write a docker-compose.yml file like this:
version: '2'
services:
  mongo-db:
    image: mongo
    ports:
      - 27017:27017
  parse-server1-user1:
    image: parseplatform/parse-server
    links:
      - mongo-db
    environment:
      - PARSE_SERVER_APPLICATION_ID=parse1-user1
      - PARSE_SERVER_MASTER_KEY=SOME_SECRET_MASTER1_USER1
      - PARSE_SERVER_DATABASE_URI=mongodb://mongo-db:27017/parse1-user1
    ports:
      - 1337:1337
  parse-server2-user1:
    image: parseplatform/parse-server
    links:
      - mongo-db
    environment:
      - PARSE_SERVER_APPLICATION_ID=parse2-user1
      - PARSE_SERVER_MASTER_KEY=SOME_SECRET_MASTER2_USER1
      - PARSE_SERVER_DATABASE_URI=mongodb://mongo-db:27017/parse2-user1
    ports:
      - 1338:1337
  parse-server1-user2:
    image: parseplatform/parse-server
    links:
      - mongo-db
    environment:
      - PARSE_SERVER_APPLICATION_ID=parse1-user2
      - PARSE_SERVER_MASTER_KEY=SOME_SECRET_MASTER1_USER2
      - PARSE_SERVER_DATABASE_URI=mongodb://mongo-db:27017/parse1-user2
    ports:
      - 1339:1337
  parse-server2-user2:
    image: parseplatform/parse-server
    links:
      - mongo-db
    environment:
      - PARSE_SERVER_APPLICATION_ID=parse2-user2
      - PARSE_SERVER_MASTER_KEY=SOME_SECRET_MASTER2_USER2
      - PARSE_SERVER_DATABASE_URI=mongodb://mongo-db:27017/parse2-user2
    ports:
      - 1340:1337
  parse-dashboard:
    image: parseplatform/parse-dashboard
    links:
      - parse-server1-user1
      - parse-server2-user1
      - parse-server1-user2
      - parse-server2-user2
    depends_on:
      - parse-server1-user1
      - parse-server2-user1
      - parse-server1-user2
      - parse-server2-user2
    environment:
      - PARSE_DASHBOARD_CONFIG={"apps":[{"appId":"parse1-user1","serverURL":"http://localhost:1337/parse","masterKey":"SOME_SECRET_MASTER1_USER1","appName":"parse1-user1"},{"appId":"parse2-user1","serverURL":"http://localhost:1338/parse","masterKey":"SOME_SECRET_MASTER2_USER1","appName":"parse2-user1"},{"appId":"parse1-user2","serverURL":"http://localhost:1339/parse","masterKey":"SOME_SECRET_MASTER1_USER2","appName":"parse1-user2"},{"appId":"parse2-user2","serverURL":"http://localhost:1340/parse","masterKey":"SOME_SECRET_MASTER2_USER2","appName":"parse2-user2"}],"users":[{"user":"user1","pass":"secret-pass1","apps":[{"appId":"parse1-user1"},{"appId":"parse2-user1"}]},{"user":"user2","pass":"secret-pass2","apps":[{"appId":"parse1-user2"},{"appId":"parse2-user2"}]}]}
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=1
    ports:
      - 4040:4040
Then run:
docker-compose up -d
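The one-line PARSE_DASHBOARD_CONFIG value above is hard to read and maintain. For reference, here is the same configuration pretty-printed as JSON; depending on the image version you may be able to mount it into the container as a config file instead of passing it through the environment (the exact mount path varies between parse-dashboard releases, so check the image documentation):

```json
{
  "apps": [
    { "appId": "parse1-user1", "serverURL": "http://localhost:1337/parse",
      "masterKey": "SOME_SECRET_MASTER1_USER1", "appName": "parse1-user1" },
    { "appId": "parse2-user1", "serverURL": "http://localhost:1338/parse",
      "masterKey": "SOME_SECRET_MASTER2_USER1", "appName": "parse2-user1" },
    { "appId": "parse1-user2", "serverURL": "http://localhost:1339/parse",
      "masterKey": "SOME_SECRET_MASTER1_USER2", "appName": "parse1-user2" },
    { "appId": "parse2-user2", "serverURL": "http://localhost:1340/parse",
      "masterKey": "SOME_SECRET_MASTER2_USER2", "appName": "parse2-user2" }
  ],
  "users": [
    { "user": "user1", "pass": "secret-pass1",
      "apps": [{ "appId": "parse1-user1" }, { "appId": "parse2-user1" }] },
    { "user": "user2", "pass": "secret-pass2",
      "apps": [{ "appId": "parse1-user2" }, { "appId": "parse2-user2" }] }
  ]
}
```

The per-user pattern is visible here: each user gets their own Parse Server services (each with its own app ID, master key, database name, and host port), and the dashboard's "users" list restricts which apps each user can see after logging in.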
I have a problem running the docker-compose.yml in my Spring Boot app.
When I run docker-compose up -d, I get an issue with the image.
I tried to solve it but couldn't.
How can I fix it?
Here is the error:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Here is my project: My Project
Here is my docker-compose.yml:
version: '3.8'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.2
    user: root
    command: -f /etc/logstash/conf.d/
    volumes:
      - ./logstash/:/etc/logstash/conf.d/
    ports:
      - "5000:5000"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.2
    user: root
    volumes:
      - ./kibana/:/usr/share/kibana/config/
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    entrypoint: ["./bin/kibana", "--allow-root"]
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    user: root
    volumes:
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  app:
    image: 'springbootelk:latest'
    build:
      context: .
    container_name: SpringBootElk
    depends_on:
      - db
      - logstash
    ports:
      - '8077:8077'
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/springbootexample?useSSL=false&serverTimezone=Turkey
      - SPRING_DATASOURCE_USERNAME=springexample
      - SPRING_DATASOURCE_PASSWORD=111111
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  db:
    container_name: mysql-latest
    image: 'mysql:latest'
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    volumes:
      - db-data:/var/lib/mysql

# Volumes
volumes:
  db-data:
There are several issues that prevent it from working.
First, your tests aren't set up properly. The application tries to connect to the database during the test stage, but at that point no containers are running yet. You can set the tests up correctly, remove the MainApplicationTests class from the test directory, or skip test execution by adding -Dmaven.test.skip to the mvnw package command in the Dockerfile. After that, your image will build properly.
Second, you need to allow public key retrieval for your application. To do that, add allowPublicKeyRetrieval=true to your JDBC URL. You can read more about it here: Connection Java - MySQL : Public Key Retrieval is not allowed.
These steps will allow your application to start (at least they will resolve the database connectivity problems). But I found another issue: in your application configuration you set context-path to /api, and you also added @RequestMapping("/api") on your PersonController. To access the list of persons you would then need to use http://localhost:8077/api/api/persons, which is probably not what you wanted. To fix it, remove the prefix from one of the two places.
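Putting the second fix into the compose file, the datasource URL in the app service would look something like this (everything except the added allowPublicKeyRetrieval=true parameter is copied from the original file):

```yaml
  app:
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/springbootexample?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=Turkey
```

With this flag the MySQL Connector/J driver is allowed to request the server's RSA public key during authentication, which is what the caching_sha2_password default in recent MySQL versions requires over non-SSL connections.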
I've got a docker-compose project in Visual Studio which starts three services. One of them uses Cosmos DB.
I've followed the instructions at https://hub.docker.com/r/microsoft/azure-cosmosdb-emulator/ to start the emulator in a Docker container, and it worked.
But now I want to get it up and running through a docker-compose file. The following is my current configuration.
version: '3.4'
services:
  gateway:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}gateway
    ports:
      - "7000:80"
    depends_on:
      - servicea
      - serviceb
    build:
      context: .\ApiGateways\IAGTO.Fenix.ApiGateway
      dockerfile: Dockerfile
  servicea:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}servicea
    depends_on:
      - email.db
    build:
      context: .\Services\ServiceA
      dockerfile: Dockerfile
  serviceb:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}serviceb
    build:
      context: .\Services\ServiceB
      dockerfile: Dockerfile
  email.db:
    image: microsoft/azure-cosmosdb-emulator
    container_name: cosmosdb-emulator
    ports:
      - "8081:8081"
I can see the container running when I run docker container list, but requests to https://localhost:8081/_explorer/index.html fail.
Any help on this is much appreciated.
I was in the same situation, but with the following docker-compose.yml the container started and became accessible.
I can browse https://localhost:8081/_explorer/index.html.
version: '3.7'
services:
  cosmosdb:
    container_name: cosmosdb
    image: microsoft/azure-cosmosdb-emulator
    tty: true
    restart: always
    ports:
      - "8081:8081"
      - "8900:8900"
      - "8901:8901"
      - "8979:8979"
      - "10250:10250"
      - "10251:10251"
      - "10252:10252"
      - "10253:10253"
      - "10254:10254"
      - "10255:10255"
      - "10256:10256"
      - "10350:10350"
    volumes:
      - vol_cosmos:C:\CosmosDB.Emulator\bind-mount
volumes:
  vol_cosmos:
Probably I needed to set "tty" or "volumes".
Using the Linux Cosmos DB image, I set it up like this:
version: '3.4'
services:
  db:
    container_name: cosmosdb
    image: "mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator"
    tty: true
    restart: always
    mem_limit: 2G
    cpu_count: 2
    environment:
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
    ports:
      - "8081:8081"
      - "8900:8900"
      - "8901:8901"
      - "8979:8979"
      - "10250:10250"
      - "10251:10251"
      - "10252:10252"
      - "10253:10253"
      - "10254:10254"
      - "10255:10255"
      - "10256:10256"
      - "10350:10350"
    volumes:
      - vol_cosmos:/data/db
volumes:
  vol_cosmos:
Part of the problem is that the emulator takes a while to start, and there is a timeout of 2 minutes before it just stops waiting.
I'm trying to hack my way through it, but I haven't had much success.
For now the image only works standalone (via docker run), and that's it.
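One way to deal with the slow startup is a Compose healthcheck, so dependent services only start once the emulator actually answers. This is a sketch, with two assumptions: that a curl binary exists inside the emulator image (if not, swap in another probe), and that your Compose version supports the condition form of depends_on (it was dropped in the 3.x file format and reintroduced in the Compose specification):

```yaml
services:
  db:
    image: "mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator"
    healthcheck:
      # -k skips certificate validation (the emulator uses a self-signed cert);
      # -f makes curl fail on HTTP errors so the check reports unhealthy.
      test: ["CMD", "curl", "-k", "-f", "https://localhost:8081/_explorer/index.html"]
      interval: 10s
      timeout: 5s
      retries: 30
  myservice:
    depends_on:
      db:
        condition: service_healthy
```

With 30 retries at 10-second intervals, dependent services wait up to about five minutes for the emulator to come up instead of racing ahead of it.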
I've been having an issue with my docker-compose setup on OS X where the containers can't talk to each other on their own Docker network, whether it's set explicitly or left in the default config. Now, I'm no Docker expert, but from all the reading I've done it sounds like this should work out of the box.
Anyway, please have a look at my config and let me know if I've missed something really dumb. (Which I hope I have, as I need to move on to the next task.)
version: "3"
services:
  ui-app:
    build: ./src/ui
    env_file:
      - "./envs/ui-app.env"
    ports:
      - "3400:3400"
    networks:
      - local_dev_network
    links:
      - api-gateway
  api-gateway:
    build: ./src/api-gateway
    depends_on:
      - redis
    env_file:
      - "./envs/api-gateway.env"
    ports:
      - "5050:5050"
    networks:
      - local_dev_network
    links:
      - redis
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - local_dev_network
  debug:
    build: ./src/debug
    ports:
      - "5001:5001"
    depends_on:
      - ui-app
      - redis
    networks:
      - local_dev_network
    links:
      - redis
networks:
  local_dev_network:
I see that you're using links together with a user-defined bridge network.
Links are deprecated as a way to connect containers, in favor of networks:
https://docs.docker.com/compose/compose-file/#links
So, let me recommend removing links, since you've already created local_dev_network.
Although the default network_mode is bridge, I would also specify it explicitly, because network_mode: host is not compatible with links. This isn't needed in your case if you remove links, but it is good practice.
If you do want to keep links, note that you're defining a network but not linking every container with every other one, despite the bridge definition; that's why several entries are missing from your links: sections.
In short, you have two options:
links option: remove the networks: section and list in each service's links every container it needs to reach: A linked to B, C, D; B linked to A, C, D; and so on, not just A linked with B and B linked with C.
networks option (recommended): remove the links: sections from your compose file.
version: "3"
services:
  ui-app:
    build: ./src/ui
    env_file:
      - "./envs/ui-app.env"
    ports:
      - "3400:3400"
    networks:
      - local_dev_network
  api-gateway:
    build: ./src/api-gateway
    depends_on:
      - redis
    env_file:
      - "./envs/api-gateway.env"
    ports:
      - "5050:5050"
    networks:
      - local_dev_network
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    networks:
      - local_dev_network
  debug:
    build: ./src/debug
    ports:
      - "5001:5001"
    depends_on:
      - ui-app
      - redis
    networks:
      - local_dev_network
networks:
  local_dev_network:
If none of this works for you, try network_mode: host, again of course removing the links sections.
I have a docker-compose.yml file that launches a PostGIS service with a shared folder of KML files. I also have a script that can import all of those KML files into my PostGIS database, but I would like this to happen automatically after launch. How can docker-compose pick up that script and run the shell command after launch?
Thank you for the help; I am new to Docker.
version: '2'
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ~/test/dataPostgis:/var/lib/postgresql/data/pgdata
      - ./postgresql:/docker-entrypoint-initdb.d
      - ./KML_Data:/var/lib/postgresql/data/KML_Data
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: password
      POSTGRES_DB: db
    ports:
      - 5432:5432
  pgadmin:
    image: chorss/docker-pgadmin4
    ports:
      - 5050:5050
    volumes:
      - ~/test/dataPgadminBackUp:/var/lib/postgresql/data/pgdata
      - ./scripts/pgadmin:/tmp/scripts
    links:
      - postgis
    depends_on:
      - postgis
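The compose file above already mounts ./postgresql at /docker-entrypoint-initdb.d, and Postgres-based images (including mdillon/postgis) run any *.sh and *.sql files in that directory once, when the data directory is first initialized. So one approach is to drop an import script there. This is only a sketch: the file name import_kml.sh is made up, and the use of ogr2ogr (from GDAL) is an assumption — it is not necessarily installed in the mdillon/postgis image, so you may need to extend the image or import another way:

```shell
#!/bin/bash
# Hypothetical ./postgresql/import_kml.sh — runs once, on first database init.
set -e
imported=0
for f in /var/lib/postgresql/data/KML_Data/*.kml; do
  [ -e "$f" ] || continue   # no KML files mounted: nothing to do
  # ogr2ogr loads a KML file into PostGIS (assumed to be installed in the image)
  ogr2ogr -f PostgreSQL PG:"dbname=$POSTGRES_DB user=postgres" "$f"
  imported=$((imported + 1))
done
echo "Imported $imported KML file(s)"
```

Note that init scripts run only when PGDATA is empty; since the compose file persists PGDATA on the host, the script will not run again on subsequent launches unless you wipe that directory.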
Currently I have something like this for my development environment on Windows 10 (not the entire file):
db:
  image: mysql:5.6
  ports:
    - "3307:3306"
  environment:
    - "MYSQL_ROOT_PASSWORD=password"
    - "MYSQL_USER=root"
    - "MYSQL_PASSWORD=password"
    - "MYSQL_DATABASE=simtp"
engine:
  build: ./docker/engine/
  volumes:
    - "c:/working_directory/simtp:/var/www/docker:rw"
    - "c:/working_directory/simtp/docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
  links:
    - "db:db"
  working_dir: "/var/www/simtp"
The problem is the Windows paths.
How can I do this properly? When I run docker-compose in production, I'm sure I'll get an error.
Thanks.
Use relative paths in your docker-compose.yml:
db:
  image: mysql:5.6
  ports:
    - "3307:3306"
  environment:
    - "MYSQL_ROOT_PASSWORD=password"
    - "MYSQL_USER=root"
    - "MYSQL_PASSWORD=password"
    - "MYSQL_DATABASE=simtp"
engine:
  build: ./docker/engine/
  volumes:
    - "./:/var/www/docker:rw"
    - "./docker/engine/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro"
  links:
    - "db:db"
  working_dir: "/var/www/simtp"
Instruct them to put the docker-compose.yml file in c:/working_directory/simtp, or wherever their files are.
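If some machines really do need machine-specific absolute paths, another option is Compose's standard override mechanism: keep the portable settings in docker-compose.yml and put host-specific volume mounts in a docker-compose.override.yml that each developer keeps out of version control (docker-compose up merges the two files automatically). A sketch, using the same service names as above:

```yaml
# docker-compose.override.yml (not committed; merged automatically
# with docker-compose.yml by `docker-compose up`)
engine:
  volumes:
    - "c:/working_directory/simtp:/var/www/docker:rw"
```

In production, where the override file is absent, only the relative paths from docker-compose.yml apply.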
Regards