docker-compose.yml for elasticsearch and kibana

My aim is to get the elasticsearch and kibana images from DockerHub working locally using Docker.
This does the trick and works perfectly...
docker network create mynetwork --driver=bridge
docker run -p 5601:5601 --name kibana -d --network mynetwork kibana
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch
Today a bird whispered in my ear and said I should learn docker-compose. So I tried to do all of what's above inside a docker-compose.yml.
Here is my attempt.
version: "2.0"
services:
elasticsearch:
image: elasticsearch:latest
ports:
- "9200:9200"
- "9300:9300"
networks:
- docker_elk
kibana:
image: kibana:latest
ports:
- "5601:5601"
networks:
- docker_elk
networks:
docker_elk:
driver: bridge
Unfortunately this does not work. I've been racking my brains as to why I always get the ECONNREFUSED error shown below when I run docker-compose up.
$ docker-compose up
Starting training_elasticsearch_1
Recreating training_kibana_1
Attaching to training_elasticsearch_1, training_kibana_1
elasticsearch_1 | [2016-11-02 22:39:55,798][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | [2016-11-02 22:39:56,036][INFO ][node ] [Caliban] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
elasticsearch_1 | [2016-11-02 22:39:56,036][INFO ][node ] [Caliban] initializing ...
elasticsearch_1 | [2016-11-02 22:39:56,713][INFO ][plugins ] [Caliban] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
elasticsearch_1 | [2016-11-02 22:39:56,749][INFO ][env ] [Caliban] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/vda2)]], net usable_space [54.8gb], net total_space [59gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2016-11-02 22:39:56,749][INFO ][env ] [Caliban] heap size [990.7mb], compressed ordinary object pointers [true]
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:kibana#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:elasticsearch#1.0.0","info"],"pid":11,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["error","elasticsearch"],"pid":11,"message":"Request error, retrying -- connect ECONNREFUSED 172.20.0.2:9200"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:kbn_vislib_vis_types#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["warning","elasticsearch"],"pid":11,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["warning","elasticsearch"],"pid":11,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:elasticsearch#1.0.0","error"],"pid":11,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:markdown_vis#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:metric_vis#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:spyModes#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:statusPage#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:table_vis#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:39:58Z","tags":["listening","info"],"pid":11,"message":"Server running at http://0.0.0.0:5601"}
elasticsearch_1 | [2016-11-02 22:39:58,515][INFO ][node ] [Caliban] initialized
elasticsearch_1 | [2016-11-02 22:39:58,515][INFO ][node ] [Caliban] starting ...
elasticsearch_1 | [2016-11-02 22:39:58,587][INFO ][transport ] [Caliban] publish_address {172.20.0.2:9300}, bound_addresses {[::]:9300}
elasticsearch_1 | [2016-11-02 22:39:58,594][INFO ][discovery ] [Caliban] elasticsearch/1Cf9qz7CSCqHBEEuwG7PQw
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:40:00Z","tags":["warning","elasticsearch"],"pid":11,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:40:00Z","tags":["warning","elasticsearch"],"pid":11,"message":"No living connections"}
elasticsearch_1 | [2016-11-02 22:40:01,650][INFO ][cluster.service ] [Caliban] new_master {Caliban}{1Cf9qz7CSCqHBEEuwG7PQw}{172.20.0.2}{172.20.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
elasticsearch_1 | [2016-11-02 22:40:01,661][INFO ][http ] [Caliban] publish_address {172.20.0.2:9200}, bound_addresses {[::]:9200}
elasticsearch_1 | [2016-11-02 22:40:01,661][INFO ][node ] [Caliban] started
elasticsearch_1 | [2016-11-02 22:40:01,798][INFO ][gateway ] [Caliban] recovered [1] indices into cluster_state
elasticsearch_1 | [2016-11-02 22:40:02,149][INFO ][cluster.routing.allocation] [Caliban] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1 | {"type":"log","#timestamp":"2016-11-02T22:40:03Z","tags":["status","plugin:elasticsearch#1.0.0","info"],"pid":11,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
^CGracefully stopping... (press Ctrl+C again to force)
Stopping training_kibana_1 ... done
Stopping training_elasticsearch_1 ... done
Can someone please help me understand why?
Thanks

To add a hard dependency on elasticsearch for kibana, you need the depends_on option to be set as shown below. Also, to add to @Phil McMillan's answer, you can set the ELASTICSEARCH_URL variable for kibana without static addressing, using Docker's built-in DNS mechanism.
version: '2.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    container_name: elasticsearch
    networks:
      docker-elk:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.3
    container_name: kibana
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    networks:
      - docker-elk
    depends_on:
      - elasticsearch
networks:
  docker-elk:
    driver: bridge
Note that the environment variable ELASTICSEARCH_URL=http://elasticsearch:9200 just uses the container name (elasticsearch), which the Docker DNS server is able to resolve.
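Also note that depends_on only controls start order; it does not wait for Elasticsearch to actually be ready, so Kibana may still log a few connection retries right after startup. If you want compose to wait, file format 2.1 also supports a healthcheck-based condition. A minimal sketch (the curl-based check assumes curl is present in the image, as it is in the docker.elastic.co images, and only verifies that the HTTP port answers):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    healthcheck:
      # Succeeds as soon as port 9200 answers HTTP (even a 401 from X-Pack),
      # which is enough for Kibana's retries to start succeeding.
      test: ["CMD-SHELL", "curl -s http://localhost:9200 > /dev/null || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 12
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.3
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    depends_on:
      elasticsearch:
        condition: service_healthy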

You need to include the links.
version: "2.0"
services:
elasticsearch:
image: elasticsearch:latest
ports:
- "9200:9200"
- "9300:9300"
networks:
- docker_elk
kibana:
image: kibana:latest
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- docker_elk
networks:
docker_elk:
driver: bridge
UPDATED
When using the image elasticsearch:latest, it's Elasticsearch 5.0, which requires increasing the Docker host's virtual memory setting.
Before running docker-compose, please make sure to run this command on your Docker host.
Linux:
su root
sysctl -w vm.max_map_count=262144
Windows (boot2docker)
docker-machine ssh default
sudo sysctl -w vm.max_map_count=262144
If you don't want to change your Docker host, just use the Elasticsearch 2.x image at elasticsearch:2
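For example, a minimal sketch pinning the 2.x line (the kibana:4.6 tag is an assumption here; pick the Kibana 4.x release that matches your Elasticsearch 2.x version, and note the official kibana image also reads ELASTICSEARCH_URL):

services:
  elasticsearch:
    # 2.x does not need the vm.max_map_count change
    image: elasticsearch:2
    ports:
      - "9200:9200"
  kibana:
    image: kibana:4.6
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200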

I'm using the docker-compose v3 format, following this post:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - node.name=es-node
      - cluster.name=es-cluster
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - local-es:/usr/share/elasticsearch/data
    networks:
      - es-net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    container_name: kibana
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    ports:
      - 5601:5601
    networks:
      - es-net
    depends_on:
      - elasticsearch
    restart: "unless-stopped"
networks:
  es-net:
volumes:
  local-es:
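One caveat with the 7.x images: the Docker environment variable Kibana actually reads is ELASTICSEARCH_HOSTS (mapped to elasticsearch.hosts); ELASTICSEARCH_URL was the 6.x-and-earlier name and is not picked up by the 7.x images (the setup above still works only because Kibana's default is already http://elasticsearch:9200). A sketch of the more robust spelling:

  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    environment:
      # 7.x setting name; points Kibana at the compose service name
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200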

This works for me
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - 5601:5601

File docker-compose.yml:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.2
    container_name: elasticsearch
    environment:
      - node.name=es-node
      - cluster.name=es-cluster
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./docker-data/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.2
    container_name: kibana
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
    restart: 'unless-stopped'
networks:
  elastic:
Official documentation:
Install Kibana with Docker

I have this working. No links are needed and it doesn't have anything to do with elasticsearch starting before kibana. The issue is that when running under compose, a new bridged network is defined with its own set of IPs. Kibana needs to communicate with the cluster over this bridged network - "localhost" is not available anymore for the connectivity.
You need to do a couple of things:
You need to set a couple of values in kibana.yml (or under the environment: section of kibana in the compose file):
a. elasticsearch.url in kibana.yml (or ELASTICSEARCH_URL under the environment: section of kibana in the compose file) must be set to the specific IP of the cluster and port 9200 - localhost will not work, as it does when you run outside of compose.
elasticsearch.url: "http://172.16.238.10:9200"
b. You also need to set server.host (SERVER_HOST) to the bridged IP of the Kibana container.
server.host: "172.16.238.12"
Note: you still access the Kibana UI at http://127.0.0.1:5601, and you still need those "ports" entries!
You need to set an "ipam" configuration under your bridged network and assign elasticsearch and kibana static IPs so that Kibana can reach Elasticsearch via the configuration above.
Something like this should suffice:
elasticsearch:
  networks:
    esnet:
      ipv4_address: 172.16.238.10
kibana:
  networks:
    esnet:
      ipv4_address: 172.16.238.12
networks:
  esnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
Don't forget to use one of the documented methods to set Kibana configuration - ELASTICSEARCH_URL is required to be set!
I have a docker compose file that creates two elasticsearch nodes and a kibana instance all running on the same bridged network. It is possible.
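For reference, here is a sketch of the same two settings expressed under environment: rather than in kibana.yml (this assumes a 5.x/6.x-era Kibana image, whose Docker entrypoint maps these variables onto kibana.yml settings):

  kibana:
    networks:
      esnet:
        ipv4_address: 172.16.238.12
    environment:
      # maps to elasticsearch.url
      - ELASTICSEARCH_URL=http://172.16.238.10:9200
      # maps to server.host; the Kibana container's own bridged IP, per the answer above
      - SERVER_HOST=172.16.238.12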

Related

Elastic + APM Not Found (":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [logs-apm.error] not found\"}]

Small question regarding Elastic + APM server please.
For testing purpose only, I would like to spin up an Elastic instance + APM server, and visualize traces in Kibana.
Therefore, I started this docker compose file, this is what I tried:
version: "3.9"
services:
elasticsearch:
networks: ["mynetwork"]
container_name: elasticsearch.mynetwork
hostname: elasticsearch.mynetwork
image: elasticsearch:8.6.0
ports:
- 9200:9200
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ulimits:
memlock:
soft: -1
hard: -1
apm-server:
networks: [ "mynetwork" ]
container_name: apm-server.mynetwork
hostname: apm-server.mynetwork
image: elastic/apm-server:8.6.0
ports:
- 8200:8200
command: >
apm-server -e
-E apm-server.rum.enabled=true
-E setup.kibana.host=kibana.mynetwork:5601
-E setup.template.settings.index.number_of_replicas=0
-E apm-server.kibana.enabled=true
-E apm-server.kibana.host=kibana.mynetwork:5601
-E output.elasticsearch.hosts=["elasticsearch.mynetwork:9200"]
cap_add: [ "CHOWN", "DAC_OVERRIDE", "SETGID", "SETUID" ]
cap_drop: [ "ALL" ]
healthcheck:
interval: 10s
retries: 12
test: curl --write-out 'HTTP %{http_code}' --fail --silent --output /dev/null http://localhost:8200/
Upon container start, the APM server keeps printing this error in a loop:
{"log.level":"error","@timestamp":"2023-01-31T22:30:31.944Z","log.logger":"beater","log.origin":{"file.name":"beater/waitready.go","file.line":62},"message":"precondition 'apm integration installed' failed: error querying Elasticsearch for integration index templates: unexpected HTTP status: 404 Not Found ({\"error\":{\"root_cause\":[{\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"}],\"type\":\"resource_not_found_exception\",\"reason\":\"index template matching [traces-apm.sampled] not found\"},\"status\":404}): to remediate, please install the apm integration: https://ela.st/apm-integration-quickstart","service.name":"apm-server","ecs.version":"1.6.0"}
Even after looking at the official website, I am having trouble understanding what is missing.
May I ask what I should edit in order to have Elastic and the APM server running, please?
Thank you
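One thing that stands out: the apm-server command points at kibana.mynetwork:5601, but the compose file defines neither a kibana service nor the top-level mynetwork network, and the error's remediation link is about installing the APM integration, which is normally done through Kibana (Fleet). A hedged sketch of the missing pieces only (the image tag and settings are assumptions chosen to match the rest of the file, not a confirmed fix):

  kibana:
    networks: [ "mynetwork" ]
    container_name: kibana.mynetwork
    hostname: kibana.mynetwork
    image: kibana:8.6.0
    ports:
      - 5601:5601
    environment:
      # point Kibana at the Elasticsearch container by its hostname on mynetwork
      - ELASTICSEARCH_HOSTS=http://elasticsearch.mynetwork:9200
    depends_on:
      - elasticsearch

networks:
  mynetwork:
    driver: bridge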

Could not communicate to Elasticsearch

I am trying to send my Node app logs through Fluentd to Elasticsearch and on to Kibana, but I'm having a problem connecting Fluentd with Elasticsearch using Docker. I want to dockerize this EFK stack.
I have shared the relevant files below (the folder-structure screenshot is not reproduced here).
Error
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 172.20.0.2:9200 (Errno::ECONNREFUSED)
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    user elastic
    password pass
  </store>
</match>
Dockerfile
FROM fluent/fluentd:v1.15-1
USER root
RUN gem install elasticsearch -v 7.6.0
# RUN gem install fluent-plugin-elasticsearch -v 7.6.0
RUN gem install fluent-plugin-elasticsearch -v 4.1.1
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
USER fluent
docker-compose.yml
version: '3'
services:
  fluentd:
    build: ./fluentd
    container_name: loggingFluent
    volumes:
      - ./fluentd/conf:/fluentd/etc
      # - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch-Logging
    ports:
      - 9200:9200
    expose:
      - 9200
    environment:
      discovery.type: 'single-node'
      ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
      xpack.security.enabled: 'true'
      ELASTIC_PASSWORD: 'pass'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kibana-Logging
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
Maybe I am missing something with Docker networking, since I am using Docker for the first time. I have checked the ports exposed by the Docker containers and they are fine. I have done this without Docker with the same settings, but I'm having a problem doing it with Docker. Looking forward to your responses. Thank you very much.
Adding a username to the Elasticsearch environment solved the issue:
elasticsearch environment:
environment:
  discovery.type: 'single-node'
  ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
  xpack.security.enabled: 'true'
  ELASTIC_PASSWORD: 'pass'
  ELASTIC_USERNAME: 'elastic'

How to communicate between two services in Fargate using docker compose

I am trying to host Elasticsearch and Kibana in AWS ECS (Fargate). I have created a docker-compose.yml file:
version: '2.2'
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        limits:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-$ENV
    environment:
      - node.name=es-$ENV
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      # - discovery.seed_hosts=es02,es03
      # - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ES_DB_PASSWORD
      - xpack.security.enabled=true
    logging:
      driver: awslogs
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: es-node
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana-$ENV
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: $ES_DB_URL
      ELASTICSEARCH_HOSTS: '["http://es-$ENV:9200"]'
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ES_DB_PASSWORD
    networks:
      - elastic
    logging:
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: "kibana-node"
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
networks:
  elastic:
    driver: bridge
and pass in the env variables using a .env.development file
ENV="development"
ES_DB_URL="localhost"
ES_DB_PORT=9200
ES_DB_USER="elastic"
ES_DB_PASSWORD="****"
and bring the stack up in ECS (after creating a Docker context pointing to ECS) with this command:
docker compose --env-file ./.env.development up
However, after creating the stack, the Kibana node fails to establish communication with the Elasticsearch node. Here are the logs from the Kibana node container:
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: beats_management"
}
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"
}
[BABEL] Note: The code generator has deoptimised the styling of /usr/share/kibana/x-pack/plugins/canvas/server/templates/pitch_presentation.js as it exceeds the max of 500KB.
After doing some research I have found that the ECS CLI does not support the service.networks docker-compose field, and the documentation gives this instruction: "Communication between services is implemented by SecurityGroups within the application VPC." I am wondering how to express this in the docker-compose.yml file, because the IP addresses only get assigned after the stack is created.
These containers should be able to communicate with each other via their compose service names. So for example the kibana container should be able to reach the ES node using es-node. I assume this means you need to set ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'?
I am also not sure about ELASTICSEARCH_URL: $ES_DB_URL. I see you set ES_DB_URL="localhost" but that means that the kibana container will be calling localhost to try to reach the ES service (this may work on a laptop where all containers run on a flat network but that's not how it will work on ECS - where each compose service is a separate ECS service).
[UPDATE]
I took a stab at the compose file provided. Note that I have simplified it a bit to remove some things such as the env file and the logging entries (why did you need them? Compose/ECS will create the logging infra for you).
This file works for me (with gotchas - see below):
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-node
    environment:
      - node.name=es-node
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=thisisawesome
      - xpack.security.enabled=true
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    container_name: kibana-node
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: es-node
      ELASTICSEARCH_HOSTS: http://es-node:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: thisisawesome
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
There are two major things I had to fix:
1- the kibana task needed more horsepower (the 0.5 vCPU and 512MB of memory - default - was not enough). I set the memory to 8GB (which set the CPU to 1) and the Kibana container came up.
2- I had to increase ulimits for the ES container. Some of the error messages in the logs pointed to max open files and vm.max_map_count, both of which pointed to ulimits needing to be adjusted. For Fargate you need a special section in the task definition. I know there is a way to embed CFN code into the compose file via overlays, but I found it easier/quicker to run docker compose convert to turn the compose file into a CFN template and tweak that by adding this section right below the image:
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
]
So to recap, you'd need to take my compose above, convert it into a CFN file, add the ulimits snippet and run it directly in CFN.
You can work backwards from here to re-add your variables etc.
HTH

Kibana server is not ready yet! Trying to visualize data push to Elasticsearch

I have this problem in the architecture:
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
Below is the docker-compose file, which gives me no errors except for Kibana. Searching around the net I've seen the problem could be a memory requirement that I need to add, but when I add a deploy section with resources I get other Docker-related issues.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "prova1:1:1,stream:1,1,output:1,1,input:1,1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS:"-Xms1g-Xmx1g"
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://localhost:9200
      ELASTICSEARCH_HOSTS: "http://localhost:9200"
      elasticsearch.ssl.verificationMode: none
I managed to solve it with the help of the Elastic team; here's a link to the relevant thread on the Elastic community forum.
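For readers hitting the same warnings: inside the Kibana container, localhost refers to the Kibana container itself, not to Elasticsearch, so pointing ELASTICSEARCH_HOSTS at http://localhost:9200 cannot work under compose. A minimal sketch of the usual fix, using the Elasticsearch service/container name instead (note that Kibana 7.12 against elasticsearch-oss 7.4 may also complain about a version mismatch, which is a separate issue):

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    environment:
      # resolve Elasticsearch by its compose service name on the shared network
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
    depends_on:
      - elasticsearch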

Why elasticsearch on docker swarm requires a transport.host=localhost setting?

I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indexes require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch Docker container in the transport.host setting, or when I leave transport.host=localhost out entirely.
I think that using the transport.host=localhost setting is wrong. What is the proper configuration of Elasticsearch in Docker swarm?
