I don't know how to approach my problem because I can't find similar cases to use as an example.
I want to set up InfluxDB with two buckets to store Telegraf data, but it only sets up the init bucket.
These are the two InfluxDB services in my Docker Compose file:
influxdb:
  image: influxdb:latest
  volumes:
    - ./influxdbv2:/root/.influxdbv2
  environment:
    # Use these same configuration parameters in your telegraf configuration, mytelegraf.conf.
    - DOCKER_INFLUXDB_INIT_MODE=setup
    - DOCKER_INFLUXDB_INIT_USERNAME=User
    - DOCKER_INFLUXDB_INIT_PASSWORD=****
    - DOCKER_INFLUXDB_INIT_ORG=org
    - DOCKER_INFLUXDB_INIT_BUCKET=data
    - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=****
  ports:
    - "8086:8086"
influxdb_cli:
  image: influxdb:latest
  links:
    - influxdb
  volumes:
    # Mount for influxdb data directory and configuration
    - ./influxdbv2:/root/.influxdbv2
  entrypoint: ["./entrypoint.sh"]
  restart: on-failure:10
  depends_on:
    - influxdb
On init it runs the InfluxDB setup correctly but doesn't run the script, and Telegraf returns 404 when trying to write to the buckets.
I ran into the same issue today, and as far as I am aware you cannot currently initialize two buckets with the DOCKER_INFLUXDB_INIT_BUCKET environment variable.
So I created a shell script called createSecondBucket.sh that I found in another answer to this question. It uses the influx CLI to create a new bucket. The script looks like this:
#!/bin/sh
set -e
# Create the second bucket with an infinite retention period (-r 0)
influx bucket create -n YOUR_BUCKET_NAME -o YOUR_ORG_NAME -r 0
Note that I had to change the line endings to Unix (LF) to run the script without errors.
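If the script was written on Windows, one way to convert the line endings, assuming the dos2unix tool is available (that tool choice is just a suggestion):
dos2unix createSecondBucket.sh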
Inside my Dockerfile I added the following lines:
COPY ./createSecondBucket.sh /docker-entrypoint-initdb.d
RUN chmod +x /docker-entrypoint-initdb.d/createSecondBucket.sh
These lines have the effect that the script is executed after the container starts for the first time. I found this information on the MongoDB Docker Hub page, which you can find here under the "Initializing a fresh instance" headline.
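For completeness, a minimal sketch of how the pieces can fit together, assuming the Dockerfile sits next to the docker-compose.yml (that layout is my assumption, not part of the original answer):
# Dockerfile (sketch)
FROM influxdb:latest
COPY ./createSecondBucket.sh /docker-entrypoint-initdb.d/
RUN chmod +x /docker-entrypoint-initdb.d/createSecondBucket.sh
The influxdb service then builds this image instead of pulling the stock one:
influxdb:
  build: . # instead of image: influxdb:latest
Keep in mind that scripts in /docker-entrypoint-initdb.d only run during first-time setup, i.e. when the container starts with an empty data directory.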
Related
I am running 2 containers at the same time (connected via docker-compose using links and depends_on).
depends_on is not enough, so I want the script that runs in the entrypoint of one container to check whether the other container is already listening on some port.
I tried:
#!/bin/bash
# Poll the other container's port until it accepts connections
until nc -z -w 10 <container_name> 3306
do
    echo "waiting for db to be ready..."
    sleep 2
done
echo "db is ready"
But this is not working.
Anyone got an idea?
I would suggest using the depends_on approach. However, you can use some of the advanced settings of that option. Please read the documentation on Control startup and shutdown order in Compose.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Since you are already using docker-compose to orchestrate your services, a better way would be to use condition: service_healthy of the depends_on long syntax. Instead of manually waiting in one container for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not already have a HEALTHCHECK specified in its image, you can manually define one in the docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service and wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
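If the default probing is too slow or gives up too early for your database, the check can be tuned. A sketch with illustrative values (the specific numbers are assumptions, adjust them to your setup):
healthcheck:
  test: "healthcheck.sh --connect"
  interval: 5s # run the check every 5 seconds
  timeout: 3s # fail a single probe after 3 seconds
  retries: 10 # mark the container unhealthy after 10 failed probes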
I am trying to set up tests for my Laravel application.
The application runs with Docker compose.
When I try to start my tests with this command:
docker-compose -p tests --env-file .env_tests run --rm myapp ./vendor/bin/phpunit
the tests start to run before the database container is ready.
How can I make my tests wait for the database to become ready?
My docker-compose.yml looks like this:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.1'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=my_user
      - MARIADB_DATABASE=my_database
      - MARIADB_PASSWORD=my_password
    ports:
      # connect your dbeaver/workbench to localhost:${WORKBENCH_PORT}
      - ${WORKBENCH_PORT}:3306
    # volumes:
    #   Do not load databases here, as there is no
    #   good way for other containers to wait for this to finish
    #   - ./database:/docker-entrypoint-initdb.d
  myapp:
    tty: true
    image: bitnami/laravel:6-debian-9
    environment:
      - DB_HOST=mariadb
      - DB_USERNAME=my_user
      - DB_DATABASE=my_database
      - DB_PASSWORD=my_password
    depends_on:
      - mariadb
    ports:
      - 3000:3000
    volumes:
      - ./:/app
When I start the application normally (docker-compose up), Laravel waits for the mariadb container to finish loading, but I couldn't find out how this is done.
---- Edit ----
I found that the bitnami/laravel Docker image that I use has a script called wait_for_db() that seems to wait for the database.
What I haven't found out yet is why this script runs in normal mode but not when I start the tests.
According to the official docs, it is not possible to wait until the database is ready, but only until it has started:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
(...)
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
The difference in your app's behaviour between the general case and the test case may be related to other reasons, such as the tests taking less time to load (giving the database less time to get ready) or the tests handling connection failure in a different way (not retrying after some time).
EDIT
Using docker-compose run overrides the command of the container. Therefore, even if originally there was a command intended to wait for the database initialization, it will not be run.
Check the docs of the command:
First, the command passed by run overrides the command defined in the service configuration. For example, if the web service configuration is started with bash, then docker-compose run web python app.py overrides it with python app.py.
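One possible workaround, sketched under the assumption that wait-for-it.sh has been copied into the app container (it is not there by default), is to make the wait part of the command passed to run:
# assumes wait-for-it.sh exists in the container and the db service is named mariadb
docker-compose -p tests --env-file .env_tests run --rm myapp \
  sh -c './wait-for-it.sh mariadb:3306 -- ./vendor/bin/phpunit'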
I wrote a script that creates a local development environment using a docker-compose.yml file.
When running the script, I want to use a YAML linter to validate that the file is valid YAML before upping the environment; for that I'm using the yamllint command.
In this docker-compose.yml file, more than one service "depends_on" another service, but when I run yamllint, it returns the following error:
47:5 error duplication of key "depends_on" in mapping (key-duplicates)
This is not a real error, but since the lint is part of the script run, I cannot rely on its exit code, as yamllint counts this as an error when in reality it is not.
An example portion of the docker-compose.yml file:
microservice-one:
  image: ms-one:feature-local_development_env
  environment:
    NODE_ENV: 'development'
    NPM_TOKEN: 'SECRET'
  ports:
    - "3013:3000"
  depends_on:
    - redis-cluster
microservice-two:
  image: ms-two:feature-local_development_env
  environment:
    NODE_ENV: 'development'
    NPM_TOKEN: 'SECRET'
  ports:
    - "3014:3000"
  depends_on:
    - redis-cluster
networks:
  default:
Is there any other command-line YAML linter you know of that will not count more than one "depends_on" as an error?
I found my answer and thought I'd share it with whoever gets here.
So the solution is to override yamllint's default configuration by creating a specific yamllint configuration file.
In my case, the file looks like so:
extends: default
rules:
  key-duplicates: disable
Then, I'm running the command like so:
yamllint -c config_file docker-compose.yml
More options can be found on yamllint's official documentation page.
If you need only syntax errors and nothing else, the command below can be used:
yamllint -d "{rules: {}}"
I would like to configure an Oracle database on a server. For that, I am using this image from Docker Hub:
https://hub.docker.com/r/sath89/oracle-12c/
Having included the image in a docker-compose.yml file, I am having trouble overwriting the default credentials for accessing the database (the username is system and the password is oracle). This is what my docker-compose.yml file looks like:
version: '3.5'
services:
  oracle12c-db:
    image: sath89/oracle-12c
    restart: always # restart policy
    ports:
      - 1521:1521
    environment:
      - USER=myusername
      - PASS=mypass
      - HOST=oracle-database
      - PORT=1521
      - ORACLE_SID=XE
      - HTTP_PORT=8080
After successfully executing docker-compose up, I am still not able to access the database with the new credentials (only with the default ones). Is my docker-compose file syntactically correct, or am I missing something else here? Thanks in advance for your help!
I don't think you can modify this at run time particularly easily.
Option 1 is to create your own Dockerfile based on theirs and pass in the user and password at build time (or hard-code them to something else).
Option 2 is to modify their entrypoint and run the appropriate Oracle commands at startup to change the user/password.
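For Option 2, a rough sketch of the kind of command such an entrypoint would run once the database is up (the system/oracle defaults come from the question; the connect string and grants are my assumptions):
# sketch: create the desired user with standard Oracle SQL via sqlplus
echo 'CREATE USER myusername IDENTIFIED BY mypass;
GRANT CONNECT, RESOURCE TO myusername;' | sqlplus -s system/oracle@//localhost:1521/xe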
I believe it is a simple question, but I still do not get it from the Docker Compose documentation. What is the difference between links and external_links?
I like external_links, as I want to have a core docker-compose file and extend it without overriding the core links.
What exactly I have: I am trying to set up Logstash, which depends on Elasticsearch. Elasticsearch is in the core docker-compose file and Logstash is in the dependent one, so I had to define Elasticsearch in the dependent docker-compose file as a reference, since Logstash needs it as a link. But Elasticsearch already has its own links, which I do not want to repeat in the dependent file.
Can I do that with external_links instead of links?
I know that links will make sure the linked container is up before linking; will external_links do the same?
Any help is appreciated. Thanks.
Use links when you want to link together containers within the same docker-compose.yml. All you need to do is set the link to the service name. Like this:
---
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  links:
    - elasticsearch
If you want to link a container inside your docker-compose.yml to another container that was not included in the same docker-compose.yml or was started in a different manner, then you can use external_links and set the link to the container's name. Like this:
---
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  external_links:
    - my_elasticsearch_container
I would suggest the first way unless your use case for some reason requires that they cannot be in the same docker-compose.yml.
I don't think external_links does the same as links during docker-compose up.
links waits for the container to boot up and get an IP address, which is then used in the /etc/hosts file, whereas external_links expects the container to already be running with its IP:hostname mapping in place, as described in the docker-compose file.
Moreover, links will be deprecated.
Here is a link to a Docker Compose project that uses Elasticsearch, Logstash, and Kibana. You will see that I'm using links:
https://github.com/bahaaldine/elasticsearch-paris-accidentology-demo