I am following https://www.elastic.co/guide/en/elasticsearch/reference/6.5/docker.html
and
https://www.elastic.co/guide/en/kibana/6.5/docker.html
But it does not seem to work with Kibana; Elasticsearch works fine.
I tried starting Kibana alone, but eventually I added it to a single docker-compose file.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
kibana.yml is:
server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.url: http://elasticsearch:9200
I get the following error:
kibana_1 | {"type":"log","@timestamp":"2019-06-11T08:55:30Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The kibana container isn't on the same network as the two elasticsearch containers: it doesn't have a networks: block and so is on an automatically-created default network, but the two elasticsearch containers are on an explicitly-declared esnet network. Since they're not on the same network, inter-container DNS doesn't work.
I'd suggest just deleting all of the networks: blocks and using the default network Docker Compose creates for you. If you want an explicit named network, copy the same networks: [esnet] lines into the kibana: service block.
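For example, a minimal sketch of the kibana service with the same esnet network attached (service and network names taken from the compose file above):
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    networks:
      - esnet
With all three services on esnet, the hostname elasticsearch resolves from inside the Kibana container.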
Related
I have successfully installed a 3-node Elasticsearch cluster with a docker-compose file from the Elasticsearch website (see the Elasticsearch install link) with no problems. I am trying to add Kibana to the .yml file so I can run it all with docker-compose up, and was looking at the Kibana install page to figure out what to add. When I try to start the file I made, I get this error: kibana_1 | {"type":"log","@timestamp":"2021-02-13T11:20:31Z","tags":["error","elasticsearch","data"],"pid":8,"message":"[ConnectionError]: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}. When I open http://localhost:5601, it says "Kibana server is not ready yet". Can somebody please help me get this working? I marked the section I added in the .yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  ##############################
  # My attempt at adding Kibana to the docker file. This file works
  # fine if commenting out this whole section.
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
  ##############################
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
Make sure Docker Engine is allotted at least 4GiB of memory.
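You can check how much memory the Docker Engine currently has available, for example:
docker info | grep -i "total memory"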
docker-compose.yml
version: '3.7'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  logstash:
    image: logstash:7.9.2
    ports:
      - '5000:5000'
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
networks:
  elastic:
    driver: bridge
kibana.yml
server.name: kibana
server.host: 0.0.0.0
server.port: 5601
elasticsearch.hosts: [ "http://<ELK server ip>:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: password
Create a directory named logstash_pipeline and within that directory create a file beats.conf.
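For example:
mkdir logstash_pipeline
touch logstash_pipeline/beats.conf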
beats.conf
input {
  beats {
    port => 5044
  }
}

## Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "filebeat-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "password"
    ecs_compatibility => disabled
  }
}
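Note that this beats input listens on port 5044, while the compose file above publishes port 5000 for the logstash service, so to reach the input from outside the Docker network you would publish 5044 as well. Assuming that mapping, a minimal Filebeat output pointing at this pipeline (hypothetical host) could look like:
output.logstash:
  hosts: ["localhost:5044"]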
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
Set vm.max_map_count to at least 262144. The vm.max_map_count kernel setting must be set to at least 262144 for production use.
How you set vm.max_map_count depends on your platform. On Linux, the setting should be set permanently in /etc/sysctl.conf:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, run:
sysctl -w vm.max_map_count=262144
Run docker-compose to bring up the three-node Elasticsearch cluster and Kibana:
docker-compose up
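Once the containers are up, you can verify the cluster is reachable, using the elastic user and the ELASTIC_PASSWORD value from the compose file above:
curl -u elastic:password http://localhost:9200/_cat/nodes?v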
As far as I can see, there are two problems in your docker-compose file:
Kibana is not in the elastic network.
In the Kibana configuration you set ELASTICSEARCH_HOSTS=http://elasticsearch:9200, but none of your Elasticsearch containers is named elasticsearch.
The correct configuration should look something like this:
kibana:
  image: docker.elastic.co/kibana/kibana:7.10.2
  container_name: kibana
  ports:
    - 5601:5601
  environment:
    ELASTICSEARCH_URL: http://es01:9200
    ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
  networks:
    - elastic
I am unable to access the Elasticsearch instances in Kibana while setting up a multi-node cluster with docker-compose.
The Elasticsearch nodes are running fine.
Below I have shared the docker-compose.yml and the issue screenshot; can somebody help me with this?
Please let me know if any further information is needed.
docker-compose.yml
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch2,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch2
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
networks:
  esnet:
    driver: bridge
(Error and result screenshots omitted.)
Can you confirm the Elasticsearch nodes are available?
curl -XGET http://elasticsearch:9200/_cat/nodes?v
And also check which node is running as master:
curl -XGET http://elasticsearch:9200/_cat/master?v
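Note that the hostname elasticsearch only resolves inside the esnet Docker network; from the host you would use localhost:9200 (the published port), or run the check from inside a container, for example:
docker exec -it elasticsearch curl -XGET http://elasticsearch:9200/_cat/nodes?v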
It may also be worth adding Elasticsearch as a dependency for Kibana in your docker-compose, e.g.:
kibana:
  image: docker.elastic.co/kibana/kibana:7.8.0
  container_name: kibana
  depends_on:
    - elasticsearch
  environment:
    SERVER_NAME: kibana.local
    ELASTICSEARCH_URL: http://elasticsearch:9200
  ports:
    - '5601:5601'
  networks:
    - esnet
I got the error "initial heap size [536870912] not equal to maximum heap size"; once I made them the same, it worked.
The initial heap size (-Xms) and the maximum heap size (-Xmx) must be equal.
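In a compose file this simply means keeping the two values in ES_JAVA_OPTS identical, as in the examples above:
environment:
  - "ES_JAVA_OPTS=-Xms512m -Xmx512m"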
I am trying to set up Elasticsearch for a Laravel project using Docker and babenkoivan/scout-elasticsearch-driver.
When I start Docker, all containers are working, including Elasticsearch, but when I try to use
php artisan elastic:create-index "App\TutorialIndexConfigurator"
I get an error:
No alive nodes found in your cluster
Also, when I try to access port 9200 via curl from the Docker workspace container, I get
curl: (7) Failed to connect to localhost port 9200: Connection refused
but when I do the same thing from the terminal, I get information about the Docker Elasticsearch cluster. I think that may be related.
I have spent three days on this problem and have no solution; please help.
I tried the same things with Laradock and got the same results.
Here is my docker-compose.yml content:
version: '3.1'
#volumes:
#  elasticsearch:
#    driver: local
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
networks:
  esnet:
  frontend:
  backend:
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./hosts:/etc/nginx/conf.d
      - ./www:/var/www
      - ./logs:/var/log/nginx
    links:
      - php
    networks:
      esnet:
      frontend:
        aliases:
          - api.dev
      backend:
        aliases:
          - api.dev
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    networks:
      - esnet
      - frontend
      - backend
#  postgres:
#    image: postgres
#    ports:
#      - "3306:3306"
#    environment:
#      MYSQL_ROOT_PASSWORD: secret
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  php:
    build: ./images/php
    links:
      - mysql
    volumes:
      - ./www:/var/www
    networks:
      - esnet
      - frontend
      - backend
  workspace:
    build: ./images/workspace
    volumes:
      - ./www:/var/www:cached
    extra_hosts:
      - "dockerhost:10.0.75.1"
    ports:
      - "2222:22"
    tty: true
    networks:
      - esnet
      - frontend
      - backend
  redis:
    image: redis:latest
    volumes:
      - ./www/redis:/data
    ports:
      - "6379:6379"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
      - frontend
      - backend
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
      - frontend
      - backend
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    container_name: elasticsearch3
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - discovery.zen.minimum_master_nodes=2
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
      - frontend
      - backend
  kibana:
    image: 'docker.elastic.co/kibana/kibana:6.4.2'
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
      - frontend
      - backend
  headPlugin:
    image: 'mobz/elasticsearch-head:5'
    container_name: head
    ports:
      - '9100:9100'
    networks:
      - esnet
      - frontend
      - backend
Here is the scout_elastic config:
<?php

return [
    'client' => [
        'hosts' => [
            env('SCOUT_ELASTIC_HOST', 'localhost:9200'),
        ],
    ],
    'update_mapping' => env('SCOUT_ELASTIC_UPDATE_MAPPING', true),
    'indexer' => env('SCOUT_ELASTIC_INDEXER', 'single'),
    'document_refresh' => env('SCOUT_ELASTIC_DOCUMENT_REFRESH'),
];
And the .env scout config:
SCOUT_DRIVER=elastic
I believe your problem is a wrong SCOUT_ELASTIC_HOST.
It should be SCOUT_ELASTIC_HOST=http://elasticsearch:9200, like in the kibana service, not http://localhost:9200.
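So, assuming the rest of your .env stays the same, the scout section becomes:
SCOUT_DRIVER=elastic
SCOUT_ELASTIC_HOST=http://elasticsearch:9200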
The production setup for Elasticsearch in Docker looks like this, according to the official website:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
elasticsearch1 is exposed, and it is connected to elasticsearch2 over the Docker network, but they each have their own storage.
Now my question: how does this setup work? Is ES2 (elasticsearch2) doing nothing until ES1 (elasticsearch1) goes down, or is it replicating everything?
Because when I use the API I always connect to localhost:9200, so I always reach ES1. I don't know what ES1 does with this information relative to ES2.
Another case: inside my logstash.conf I have to define the destination of my output:
output {
  elasticsearch { hosts => ["elasticsearch1:9200"] }
  xxx
}
I keep it internal over the Docker network (logstash is linked with elasticsearch1), but I don't know if I also have to define elasticsearch2, or what happens now.
How do elasticsearch1 and elasticsearch2 work together?
The two instances act as a cluster. Even if you query one single node, internally your query gets forwarded to all nodes of the cluster (2 in your case) because data is shared across them. Have a look at this official page for more details: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/_basic_concepts.html
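If you want Logstash itself to tolerate one node going down, you can also list both nodes in the output for client-side failover; a minimal sketch:
output {
  elasticsearch { hosts => ["elasticsearch1:9200", "elasticsearch2:9200"] }
}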
I have docker-compose working to make a cluster of Elasticsearch nodes with 1 master and 2 data nodes, as seen below, but I'm wondering how you extend this to add more master nodes, since there is still a single point of failure with this setup if the master node goes down.
More specifically, how does the second master node interact with the host and the application that uses it? Do you have to bind to a different port on the host for the second master node? And does the application then have to go through a load balancer to handle the case when either of the master nodes goes down?
elasticsearch_master:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.node.master=true -Des.node.data=false"
  ports:
    - "9200:9200"
    - "9300:9300"
elasticsearch1:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "~/esdata:/usr/share/elasticsearch/data"
elasticsearch2:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "~/esdata:/usr/share/elasticsearch/data"
I know we're not doing it the same way, but here is a code sample that worked for me.
services:
  esmaster1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: esmaster1
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=true"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/home/ruan/workspace/docker/elasticsearch/data
  esmaster2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: esmaster2
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=true"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/home/ruan/workspace/docker/elasticsearch/data
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch1
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=false"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/home/ruan/workspace/docker/elasticsearch/data
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch2
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=false"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata4:/home/ruan/workspace/docker/elasticsearch/data
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
  esdata4:
    driver: local
If the master node dies, the second one takes over as master and handles the cluster. If you want to use Kibana (or any other visualization tool), you should also add another Elasticsearch instance to the cluster to handle that connection. This new instance does not need to be a master, data, or ingest node.
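For example, a coordinating-only node for Kibana to connect to could be sketched like this (same 6.3.2 style as above; the service name and published port are my own choice):
  escoordinator:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: escoordinator
    environment:
      - cluster.name=es_cluster
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=false"
      - "node.data=false"
      - "node.ingest=false"
    ports:
      - 9200:9200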
I hope this helps!