Docker: How are 2 elasticsearch containers working together? - elasticsearch

The production setup for Elasticsearch in Docker looks like this according to the official website:
version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    container_name: elasticsearch1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
elasticsearch1 is exposed, and it is connected to elasticsearch2 over the Docker network, but each node has its own storage.
Now my question is: how does this setup work? Is ES2 (elasticsearch2) doing nothing until ES1 (elasticsearch1) goes down, or is it replicating everything?
Because when I use the API I always connect to localhost:9200, I always reach ES1, and I don't know what ES1 does with that data relative to ES2.
Another case: inside my logstash.conf I have to define the destination of my output:
output {
  elasticsearch { hosts => ["elasticsearch1:9200"] }
  xxx
}
I keep it internal on the Docker network (Logstash is linked with elasticsearch1), but I don't know whether I also have to define elasticsearch2, or what happens otherwise.
How do elasticsearch1 and elasticsearch2 work together?

The two instances act as a cluster. Even if you query a single node, your query is internally forwarded to all nodes of the cluster (two in your case), because the data is distributed across them. Have a look at this official page for more details: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/_basic_concepts.html
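You can check this yourself from the host: only elasticsearch1 publishes port 9200, but the response describes the whole cluster. A minimal sanity check (a sketch, not part of the original answer):
curl http://localhost:9200/_cluster/health?pretty   # expect "number_of_nodes" : 2
curl http://localhost:9200/_cat/nodes?v             # lists both nodes and marks the elected master
For Logstash, pointing at elasticsearch1 alone works, because any node routes requests to the rest of the cluster. Listing both nodes is an optional, hypothetical tweak that lets Logstash fail over if one node is down (assuming Logstash is attached to the esnet network so both container names resolve):
output {
  elasticsearch { hosts => ["elasticsearch1:9200", "elasticsearch2:9200"] }
}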

Related

How do I have Elasticsearch (with multiple nodes) and Kibana in one docker-compose file?

I have successfully installed a 3-node Elasticsearch cluster with a docker-compose file from the Elasticsearch web site with no problems. I am trying to add Kibana to the .yml file so I can run it all with docker-compose up, and was looking at the Elasticsearch Kibana install site to figure out what to add. When I try to start the file I made, I get this error:
kibana_1 | {"type":"log","@timestamp":"2021-02-13T11:20:31Z","tags":["error","elasticsearch","data"],"pid":8,"message":"[ConnectionError]: getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
When I open http://localhost:5601, it says Kibana server not ready yet. Can somebody please help me get this working? I marked the section I added in the .yml file.
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.11.0
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic
  ##############################
  # My attempt at adding Kibana to the docker file. This file works
  # fine if commenting out this whole section.
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
  ##############################
volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local
networks:
  elastic:
    driver: bridge
Make sure Docker Engine is allotted at least 4GiB of memory.
docker-compose.yml
version: '3.7'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=password
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elastic
  kibana:
    image: kibana:7.9.2
    ports:
      - '5601:5601'
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  logstash:
    image: logstash:7.9.2
    ports:
      - '5000:5000'
    volumes:
      - type: bind
        source: ./logstash_pipeline/
        target: /usr/share/logstash/pipeline
        read_only: true
networks:
  elastic:
    driver: bridge
kibana.yml
server.name: kibana
server.host: 0.0.0.0
server.port: 5601
elasticsearch.hosts: [ "http://<ELK server ip>:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: password
Create a directory named logstash_pipeline and within that directory create a file beats.conf.
beats.conf
input {
  beats {
    port => 5044
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    # Use the same reachable Elasticsearch address as in kibana.yml;
    # 127.0.0.1 inside the Logstash container points at Logstash itself.
    hosts => "<ELK server ip>:9200"
    index => "filebeat-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "password"
    ecs_compatibility => disabled
  }
}
The following requirements and recommendations apply when running Elasticsearch in Docker in production.
Set vm.max_map_count to at least 262144. The vm.max_map_count kernel setting must be set to at least 262144 for production use; how you set it depends on your platform. The setting should be set permanently in /etc/sysctl.conf:
grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144
To apply the setting on a live system, run:
sysctl -w vm.max_map_count=262144
Run docker-compose to bring up the three-node Elasticsearch cluster and Kibana:
docker-compose up
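Once the stack is up, a quick sanity check from the host (a sketch, assuming the elastic/password credentials from the compose file above):
curl -u elastic:password http://localhost:9200/_cat/nodes?v
curl -u elastic:password "http://localhost:9200/_cat/indices/filebeat-*?v"
The first call should list es01, es02, and es03; the second shows the daily filebeat index once beats.conf starts receiving events.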
As far as I can see, there are two problems in your docker-compose file:
Kibana is not on the elastic network.
In the Kibana configuration you set ELASTICSEARCH_HOSTS=http://elasticsearch:9200, but none of your Elasticsearch containers is named elasticsearch.
The correct configuration should look something like this:
kibana:
  image: docker.elastic.co/kibana/kibana:7.10.2
  container_name: kibana
  ports:
    - 5601:5601
  environment:
    ELASTICSEARCH_URL: http://es01:9200
    ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'
  networks:
    - elastic
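With Kibana on the same network and pointed at the real service names, you can confirm it from the host via its status endpoint (a quick check, not part of the original answer):
curl http://localhost:5601/api/status
Once Kibana can reach the cluster, this reports the overall state instead of Kibana server not ready yet.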

Unable to access Elasticsearch instance in Kibana while setting up a multi-node cluster

I am unable to access the Elasticsearch instance in Kibana when setting up a multi-node cluster with docker-compose.
The Elasticsearch nodes are running fine.
Below I have shared the docker-compose.yaml and the error screenshot; can somebody help me with this?
Please let me know if any further information is needed.
docker-compose.yml
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch2,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch3
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - discovery.seed_hosts=elasticsearch,elasticsearch2
      - cluster.initial_master_nodes=elasticsearch,elasticsearch2,elasticsearch3
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      SERVER_NAME: kibana.local
      ELASTICSEARCH_URL: http://elasticsearch:9200
    ports:
      - '5601:5601'
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
networks:
  esnet:
    driver: bridge
[Error screenshot and result images omitted]
Can you confirm the Elasticsearch nodes are available?
curl -XGET http://elasticsearch:9200/_cat/nodes?v
Also check which node is running as master:
curl -XGET http://elasticsearch:9200/_cat/master?v
It may also be worth adding Elasticsearch as a dependency for Kibana in your docker-compose, e.g.:
kibana:
  image: docker.elastic.co/kibana/kibana:7.8.0
  container_name: kibana
  depends_on:
    - elasticsearch
  environment:
    SERVER_NAME: kibana.local
    ELASTICSEARCH_URL: http://elasticsearch:9200
  ports:
    - '5601:5601'
  networks:
    - esnet
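Note that depends_on only orders container startup; it does not wait for Elasticsearch to actually accept connections. A sketch of a stricter dependency, assuming the Compose 2.x file format (which supports condition: service_healthy):
elasticsearch:
  # ...settings as above, plus a readiness probe:
  healthcheck:
    test: ["CMD-SHELL", "curl -sf http://localhost:9200 || exit 1"]
    interval: 10s
    retries: 12
kibana:
  depends_on:
    elasticsearch:
      condition: service_healthy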
In my case the error was initial heap size [536870912] not equal to maximum heap size; once I made them the same, it worked.
The initial heap size and the maximum heap size must be equal (note the -Xms512m -Xmx1024m mismatch in the compose file above).
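In practice that means giving -Xms and -Xmx the same value in ES_JAVA_OPTS, for example:
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
Any heap size works as long as the two values agree.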

Kibana fails to connect to Elasticsearch on docker

I am following https://www.elastic.co/guide/en/elasticsearch/reference/6.5/docker.html
and
https://www.elastic.co/guide/en/kibana/6.5/docker.html
But it does not seem to work well with Kibana; ES works fine.
I tried starting Kibana alone, but finally I added it to one docker-compose file:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
kibana.yml is:
server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.url: http://elasticsearch:9200
I get the following error:
kibana_1 | {"type":"log","@timestamp":"2019-06-11T08:55:30Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The kibana container isn't on the same network as the two elasticsearch containers: it doesn't have a networks: block and so is on an automatically-created default network, but the two elasticsearch containers are on an explicitly-declared esnet network. Since they're not on the same network, inter-container DNS doesn't work.
I'd suggest just deleting all of the networks: blocks and using the default network Docker Compose creates for you. If you want an explicit named network, copy the same networks: [esnet] lines into the kibana: service block.
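For the second option, the kibana service would look something like this (a sketch of the suggestion above, reusing the esnet network):
kibana:
  image: docker.elastic.co/kibana/kibana:6.5.4
  volumes:
    - ./kibana.yml:/usr/share/kibana/config/kibana.yml
  ports:
    - 5601:5601
  networks:
    - esnet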

How can I use an elasticsearch add-on container/service with ddev?

How can I set up a service/container to provide elasticsearch with ddev? I have tried some experiments from https://ddev.readthedocs.io/en/latest/users/extend/additional-services/ but don't have enough docker-compose knowhow to do one for elasticsearch.
Edit 2022-03: There is now an official elasticsearch ddev-get add-on for ddev v1.19+, ddev get drud/ddev-elasticsearch, see https://github.com/drud/ddev-elasticsearch.
@thursdaybw provided this recipe in https://github.com/drud/ddev/pull/1320, but it never gained traction and nobody reviewed it, so it's being moved here to percolate and incubate in the community. Please provide your suggestions if you use it.
Edit 2019-09-30: There is now an Elasticsearch example in ddev-contrib at https://github.com/drud/ddev-contrib/tree/master/docker-compose-services/elasticsearch
Basic information (and reviewed examples) for setting up additional services is at https://ddev.readthedocs.io/en/latest/users/extend/additional-services/
version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - VIRTUAL_HOST=$DDEV_HOSTNAME # This defines the host name the service should be accessible from. This will be sitename.ddev.local
      - HTTP_EXPOSE=9200
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200
    labels:
      # These labels ensure this service is discoverable by ddev
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
volumes:
  esdata1:
    driver: local
For starting a single node, the example above didn't help me; the container just stopped again without any further error message. With the following configuration I was able to start a single ES node rather than a cluster (as in the previous answer):
version: '3.6'
services:
  elasticsearch:
    container_name: ddev-${DDEV_SITENAME}-elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    environment:
      - node.name=${DDEV_SITENAME}-es01
      - discovery.type=single-node
      - cluster.name=docker-${DDEV_SITENAME}-es-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.platform: ddev
      com.ddev.app-type: elasticsearch
      com.ddev.approot: $DDEV_APPROOT
  web:
    links:
      - elasticsearch:elasticsearch
volumes:
  esdata01:
    driver: local
    name: "${DDEV_SITENAME}-es"
Additionally, with this configuration you can directly access the node from within another container using the host name elasticsearch.
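A quick way to verify that (a sketch using ddev's shell access):
ddev ssh                  # opens a shell in the web container
curl http://elasticsearch:9200/_cluster/health?pretty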

How do you set up an elastic search cluster with multiple master nodes in docker compose?

I have docker-compose working to make a cluster of Elasticsearch nodes with 1 master and 2 data nodes, as seen below, but I'm wondering how you extend this to add more master nodes, since there is still a single point of failure with this setup if the master node goes down.
More specifically, how does the second master node interact with the host and the application that's using it? Do you have to bind to a different port on the host for the second master node? And does the application then have to go through a load balancer to handle the case when either of the master nodes goes down?
elasticsearch_master:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.node.master=true -Des.node.data=false"
  ports:
    - "9200:9200"
    - "9300:9300"
elasticsearch1:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "~/esdata:/usr/share/elasticsearch/data"
elasticsearch2:
  image: elasticsearch:latest
  command: "elasticsearch -Des.cluster.name=vi -Des.discovery.zen.ping.unicast.hosts=elasticsearch_master"
  links:
    - elasticsearch_master
  volumes:
    - "~/esdata:/usr/share/elasticsearch/data"
I know we're not doing it the same way but here is a code sample that worked for me.
services:
  esmaster1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: esmaster1
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=true"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/home/ruan/workspace/docker/elasticsearch/data
  esmaster2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: esmaster2
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=true"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/home/ruan/workspace/docker/elasticsearch/data
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch1
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=false"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata3:/home/ruan/workspace/docker/elasticsearch/data
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    container_name: elasticsearch2
    environment:
      - cluster.name=es_cluster
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - "http.cors.allow-origin=*"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
      - "node.master=false"
      - "node.data=true"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata4:/home/ruan/workspace/docker/elasticsearch/data
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
  esdata3:
    driver: local
  esdata4:
    driver: local
If the master node dies, the second one takes over as master and handles the cluster. If you want to use Kibana (or any other visualization tool), you should also add another Elasticsearch instance to the cluster to handle that connection. This new instance does not need to be a master, data, or ingest node.
I hope this helps!
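Such a coordinating-only node could look like this (a hypothetical sketch in the style of the compose file above; the service name escoordinator is made up):
escoordinator:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
  environment:
    - cluster.name=es_cluster
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    - "discovery.zen.ping.unicast.hosts=esmaster1,esmaster2"
    - "node.master=false"
    - "node.data=false"
    - "node.ingest=false"
  ports:
    - "9200:9200"
Kibana (and your application) would then talk to escoordinator:9200, which routes requests into the cluster without holding data or being master-eligible.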
