Unable to configure Alerts and Actions in Kibana - elasticsearch

I'm using a Docker Compose file for the ELK setup, with the latest Kibana version (above 7). I set the xpack.encryptedSavedObjects.encryptionKey parameter in kibana.yml so that I can use the Alerts and Actions feature, but even after that I'm not able to create an alert. Can anyone help me, please?
I generated a 32-character encryption key using the Python uuid module.
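For reference, one way to generate such a key (a sketch, assuming Python 3 is available; any random string of 32 or more characters works):

python3 -c "import uuid; print(uuid.uuid4().hex)"   # prints 32 hex characters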

According to https://github.com/elastic/kibana/issues/57773, the environment variable XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY was missing from the Kibana config; the fix was merged in February 2020 and it now works.
The encryption key XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY has to be 32 characters or longer: https://www.elastic.co/guide/en/kibana/current/using-kibana-with-security.html
A working configuration could look like this:
...
kibana:
  depends_on:
    - elasticsearch
  image: docker.elastic.co/kibana/kibana:8.0.0-rc2
  container_name: kibana
  environment:
    - ...
    - SERVER_PUBLICBASEURL=https://kibana.stackoverflow.com/
    - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=a7a6311933d3503b89bc2dbc36572c33a6c10925682e591bffcab6911c06786d
    - ...
...

I have tried using the environment variable in my docker-compose.yml file like this:
kib01:
  image: docker.elastic.co/kibana/kibana:${VERSION}
  container_name: kib01
  depends_on: {"es01": {"condition": "service_healthy"}}
  ports:
    - 5601:5601
  environment:
    SERVERNAME: localhost
    ELASTICSEARCH_URL: https://es01:9200
    ELASTICSEARCH_HOSTS: https://es01:9200
    XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: "743787217A45432B462D4A614EF35266"
  volumes:
    - /var/elasticsearch/config/certs:$CERTS_DIR
  networks:
    - elastic
We converted the kibana.yml setting xpack.encryptedSavedObjects.encryptionKey into the environment-variable format XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY by replacing each . with _ and upper-casing everything.
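For reference, the two equivalent forms side by side (using the placeholder key from the compose file above):

# kibana.yml
xpack.encryptedSavedObjects.encryptionKey: "743787217A45432B462D4A614EF35266"

# docker-compose environment equivalent: dots replaced with underscores, upper-cased
XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: "743787217A45432B462D4A614EF35266"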

Since there may be a problem with mounting the file, I opted for environment variables in my docker-compose file instead.
services:
  kibana:
    ...
    environment:
      ...
      XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: abcd...
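To confirm that the variable actually reaches the container, a quick check could look like this (a sketch, assuming the container name kib01 from the question's compose file):

docker exec kib01 env | grep ENCRYPTEDSAVEDOBJECTS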

Related

docker-compose: no declaration was found in the volumes section

I'm trying to use Docker Compose on Microsoft Windows to create a stack for Seafile.
The error message after deploying is:
Deployment error
failed to deploy a stack: Named volume "C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql:rw" is used in service "db" but no declaration was found in the volumes section. : exit status 1
Here's my problematic docker-compose.yaml file:
version: '2'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev  # Required, sets the root password of the MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql  # Required, specifies the path to the MySQL persistent data store.
    networks:
      - seafile-net
  memcached:
    image: memcached:1.5.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "9000:80"
      # - "443:443"  # If https is enabled, uncomment this line.
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Seafile:/shared  # Required, specifies the path to the Seafile persistent data store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev  # Required, should be the root password of the MySQL service.
      - TIME_ZONE=Etc/UTC  # Optional, default is UTC. Uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com  # Specifies the Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=asecret  # Specifies the Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false  # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com  # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net
networks:
  seafile-net:
If you see the error "no declaration was found in the volumes section", you are probably not declaring the volumes in the top-level volumes section.
The error message can be confusing. Here is how to do it correctly:
...
services:
  ...
    volumes:
      - a:/path1
      - b:/path2
  ...
volumes:
  a:
  b:
...
I know this can feel scattered, and Docker could have designed it differently, but in the current version this is how it works: the top-level volumes section declares the volumes, while the services section just uses them.
Let me know if this was your problem.
More info:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
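Applied to the Seafile compose file above, a minimal sketch could look like this (an assumption on my part: it switches to a named volume, so the MySQL data then lives under Docker's own volumes directory instead of the Windows path):

services:
  db:
    ...
    volumes:
      - seafile-mysql:/var/lib/mysql
volumes:
  seafile-mysql: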

Why does Elasticsearch on Docker swarm require a transport.host=localhost setting?

I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch container in the transport.host setting, or when transport.host=localhost is omitted entirely.
I think that using transport.host=localhost is wrong. Is there a proper way to configure Elasticsearch in Docker swarm?
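For a deliberately single-node setup, one common alternative (an assumption on my part, not confirmed in this thread) is to drop transport.host and declare the node as a single-node cluster:

environment:
  - cluster.name=elasticsearch
  - discovery.type=single-node   # the node forms a one-node cluster on its own
  - bootstrap.memory_lock=true
  - ES_JAVA_OPTS=-Xms512m -Xmx512m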

Connect Kibana to Elasticsearch - ELASTICSEARCH_URL vs ELASTICSEARCH_HOSTS

I don't know which environment variable to use:
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_HOSTS: http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200
      ELASTICSEARCH_URL: http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200
Should I be using ELASTICSEARCH_URL or ELASTICSEARCH_HOSTS?
Since you are using the Kibana 6.2.4 Docker image, it has to be ELASTICSEARCH_URL. In the official guide for configuring Kibana 6.2 the setting ELASTICSEARCH_HOSTS is not even listed; that one came with later versions.
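A minimal sketch of the corrected service for 6.2.4 would therefore keep only the URL variable:

kibana:
  image: docker.elastic.co/kibana/kibana:6.2.4
  environment:
    SERVER_NAME: kibana.example.org
    ELASTICSEARCH_URL: http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200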

Running Sonarqube with docker-compose using bind mount volumes

I’m trying to run Sonarqube in a Docker container on a Centos 7 server using docker-compose. Everything works as expected using named volumes as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind-mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the Sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I'm unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
.....
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ... \
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data and other folders and store its data in them, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
The name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and other folders, and save its data there.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the configuration below will get you there with a single command.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then execute the commands:
$ docker-compose up -d
$ docker container ps
It sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the embedded H2 database? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening during Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service, for example container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar, without specifying the default Postgres port.
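A minimal sketch of that environment block (assuming the matching SONARQUBE_JDBC_USERNAME / SONARQUBE_JDBC_PASSWORD variables alongside the URL this answer mentions):

sonarqube:
  image: sonarqube
  environment:
    SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar
    SONARQUBE_JDBC_USERNAME: sonar
    SONARQUBE_JDBC_PASSWORD: sonar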

Configuring ssl in rabbitmq.config using rabbitmq docker image

My goal is to set up RabbitMQ with SSL support, which I previously achieved using the rabbitmq.config file below, which resides in the host's /etc/rabbitmq path.
Now I also want to configure a RabbitMQ user and password other than the default guest/guest.
I'm using the rabbitmq Docker image with the following docker-compose configuration:
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
Rabbitmq config:
[{rabbit,
  [
   {loopback_users, []},
   {heartbeat, 0},
   {ssl_listeners, [8181]},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/ca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/server/key.pem"},
                  {verify, verify_none},
                  {fail_if_no_peer_cert, false}]}
  ]}
].
RabbitMQ Dockerfile:
FROM rabbitmq:management
# ... plus some certificate-generating logic
I noticed that as soon as I add the environment section, the current rabbitmq.config file is overridden with an auto-generated configuration, presumably by the image's docker-entrypoint.sh.
For building the configuration that uses the certificates, I found environment variables that can do this (look here).
However, I didn't find any reference for defining the ssl_listeners section with its port, as seen in the rabbitmq.config above.
My question is: how can I create the exact configuration shown above using environment variables, OR how can I keep my own rabbitmq.config while defining a new RabbitMQ user and password in some dynamic way (maybe by templating the config file)?
Try this
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    command: rabbitmq-server
    entrypoint: ""
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
This overrides the docker-entrypoint and just runs the RabbitMQ server. Note that docker-entrypoint.sh also sets certain environment variables and configuration, which may be needed in your case, so make sure you still have everything you need.
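Since bypassing the entrypoint means RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS may no longer be applied, one option (a sketch, assuming the classic rabbitmq.config format already shown above) is to set the credentials directly in that file:

[{rabbit,
  [
   {default_user, <<"user123">>},
   {default_pass, <<"1234">>},
   {loopback_users, []},
   %% ... keep the existing ssl_listeners / ssl_options entries here ...
   {heartbeat, 0}
  ]}
].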
