How to use the official docker elasticsearch container? - elasticsearch

I have the following Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.4.0
RUN elasticsearch
EXPOSE 80
I think the 3rd line is never reached.
When I try to access the docker container from my local machine at:
172.17.0.2:9300
I get nothing. What am I missing? I want to access Elasticsearch from the local host machine.

I recommend using docker-compose (which makes a lot of things much easier) with the following configuration.
Configuration (for development)
This configuration starts 3 services: Elasticsearch itself plus extra development utilities
like Kibana and the head plugin (these can be omitted if you don't need them).
In the same directory you will need three files:
docker-compose.yml
elasticsearch.yml
kibana.yml
With the following contents:
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.0
    container_name: elasticsearch_540
    environment:
      - http.host=0.0.0.0
      - transport.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - esdata:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    mem_limit: 2g
    cap_add:
      - IPC_LOCK
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.0
    container_name: kibana_540
    environment:
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
  headPlugin:
    image: mobz/elasticsearch-head:5
    container_name: head_540
    ports:
      - 9100:9100
volumes:
  esdata:
    driver: local
elasticsearch.yml
cluster.name: "chimeo-docker-cluster"
node.name: "chimeo-docker-single-node"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: "Authorization"
kibana.yml
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
Running
With the above three files in the same directory, and that directory set as the current working directory, run (this could require sudo, depending on how your docker-compose is set up):
docker-compose up
It will start up and you will see logs from three different services: elasticsearch_540, kibana_540 and head_540.
After the initial start-up you will have your Elasticsearch cluster available over HTTP on port 9200 and over the TCP transport on port 9300. Validate that the cluster started up with the following curl:
curl -u elastic:changeme http://localhost:9200/_cat/health
Then you can view and play with your cluster using either kibana (with credentials elastic / changeme):
http://localhost:5601/
or head plugin:
http://localhost:9100/?base_uri=http://localhost:9200&auth_user=elastic&auth_password=changeme
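As a quick smoke test you could also index and read back a document (a sketch; the test-index name and the sample document here are just examples):
curl -u elastic:changeme -H 'Content-Type: application/json' -XPUT http://localhost:9200/test-index/doc/1 -d '{"hello": "world"}'
curl -u elastic:changeme http://localhost:9200/test-index/doc/1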

Your container is auto-exiting because of insufficient virtual memory. By default, to run an Elasticsearch container, vm.max_map_count should be at least 262144, but if you run the command sysctl vm.max_map_count you will see it is around 65530. Please increase the virtual memory map count with sysctl -w vm.max_map_count=262144 and run the container again (docker run <IMAGE ID>); then your container should stay running and you should be able to access Elasticsearch on port 9200 or 9300.
Edit: check this link https://www.elastic.co/guide/en/elasticsearch/reference/5.0/vm-max-map-count.html#vm-max-map-count
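To make the setting survive reboots (a sketch for a Linux host, in line with the linked docs), you can persist it in /etc/sysctl.conf:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p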

The best option would be to follow the official Elasticsearch documentation, which has a nice section on running a single-node Elasticsearch cluster, and also on running a multi-node Elasticsearch cluster using docker-compose.
Please refer to the version-specific documentation, which can be accessed via the version drop-down in the official Elasticsearch documentation.
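For reference, the single-node example from those docs boils down to something like this (a sketch; adjust the image tag to the version you need):
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.17.0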

Related

Could not communicate to Elasticsearch

I am trying to send my Node app logs through Fluentd to Elasticsearch and on to Kibana, but I am having a problem connecting Fluentd with Elasticsearch in Docker. I want to dockerize this EFK stack.
I have attached the folder structure and shared the relevant files below.
Error
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 172.20.0.2:9200 (Errno::ECONNREFUSED)
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    user elastic
    password pass
  </store>
</match>
Dockerfile
FROM fluent/fluentd:v1.15-1
USER root
RUN gem install elasticsearch -v 7.6.0
# RUN gem install fluent-plugin-elasticsearch -v 7.6.0
RUN gem install fluent-plugin-elasticsearch -v 4.1.1
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
USER fluent
docker-compose.yml
version: '3'
services:
  fluentd:
    build: ./fluentd
    container_name: loggingFluent
    volumes:
      - ./fluentd/conf:/fluentd/etc
      # - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch-Logging
    ports:
      - 9200:9200
    expose:
      - 9200
    environment:
      discovery.type: 'single-node'
      ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
      xpack.security.enabled: 'true'
      ELASTIC_PASSWORD: 'pass'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kibana-Logging
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
Maybe I am missing something with Docker networking, because I am using Docker for the first time. I have checked the ports exposed by the Docker containers and they are fine. I have done this without Docker using the same settings, but I am having a problem doing it with Docker. Looking forward to seeing your responses. Thank you very much.
Adding a username to the Elasticsearch environment solved the issue:
elasticsearch environment
environment:
  discovery.type: 'single-node'
  ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
  xpack.security.enabled: 'true'
  ELASTIC_PASSWORD: 'pass'
  ELASTIC_USERNAME: 'elastic'
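To double-check that Fluentd can actually reach Elasticsearch with those credentials, one quick test is to run a request from inside the fluentd container (a sketch; it assumes wget is available in the image, which is usually the case for the busybox/Alpine-based fluentd images):
docker exec -it loggingFluent wget -qO- http://elastic:pass@elasticsearch:9200/_cluster/health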

How to communicate between two services in Fargate using docker compose

I am trying to host Elasticsearch and Kibana in AWS ECS (Fargate). I have created a docker-compose.yml file:
version: '2.2'
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        limits:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-$ENV
    environment:
      - node.name=es-$ENV
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      # - discovery.seed_hosts=es02,es03
      # - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ES_DB_PASSWORD
      - xpack.security.enabled=true
    logging:
      driver: awslogs
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: es-node
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana-$ENV
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: $ES_DB_URL
      ELASTICSEARCH_HOSTS: '["http://es-$ENV:9200"]'
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ES_DB_PASSWORD
    networks:
      - elastic
    logging:
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: "kibana-node"
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
networks:
  elastic:
    driver: bridge
and pass in the env variables using a .env.development file:
ENV="development"
ES_DB_URL="localhost"
ES_DB_PORT=9200
ES_DB_USER="elastic"
ES_DB_PASSWORD="****"
and bring up the stack in ECS, after creating a Docker context pointing to ECS, using this command: docker compose --env-file ./.env.development up
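(For reference, creating and switching to such an ECS context typically looks like the following; the context name is just an example:)
docker context create ecs myecscontext
docker context use myecscontext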
However, after creating the stack, the Kibana node fails to establish communication with the Elasticsearch node. Here are the logs from the Kibana node container:
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: beats_management"
}
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": [
    "warning",
    "plugins-discovery"
  ],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"
}
[BABEL] Note: The code generator has deoptimised the styling of /usr/share/kibana/x-pack/plugins/canvas/server/templates/pitch_presentation.js as it exceeds the max of 500KB.
After doing some research I found that the ECS CLI does not support the service.networks docker compose file field, and the documentation gives these instructions: "Communication between services is implemented by SecurityGroups within the application VPC." I am wondering how to apply these instructions in the docker-compose.yml file, because the IP addresses only get assigned after the stack has been created.
These containers should be able to communicate with each other via their compose service names. So for example the Kibana container should be able to reach the ES node using es-node. I assume this means you need to set ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'?
I am also not sure about ELASTICSEARCH_URL: $ES_DB_URL. I see you set ES_DB_URL="localhost" but that means that the kibana container will be calling localhost to try to reach the ES service (this may work on a laptop where all containers run on a flat network but that's not how it will work on ECS - where each compose service is a separate ECS service).
[UPDATE]
I took a stab at the compose file provided. Note that I have simplified it a bit to remove some variables, such as the env file and the logging entries (why did you need them? Compose/ECS will create the logging infra for you).
This file works for me (with gotchas - see below):
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-node
    environment:
      - node.name=es-node
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=thisisawesome
      - xpack.security.enabled=true
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    container_name: kibana-node
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: es-node
      ELASTICSEARCH_HOSTS: http://es-node:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: thisisawesome
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
There are two major things I had to fix:
1. The Kibana task needed more horsepower (the default 0.5 vCPU and 512MB of memory was not enough). I set the memory to 8GB (which set the CPU to 1) and the Kibana container came up.
2. I had to increase the ulimits for the ES container. Some of the error messages in the logs pointed to max open files and vm.max_map_count, both of which pointed to ulimits needing to be adjusted. For Fargate you need a special section in the task definition. I know there is a way to embed CFN code into the compose file via overlays, but I found it easier/quicker to docker compose convert the compose file into a CFN template and tweak that by adding this section right below the image:
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
]
So to recap, you'd need to take my compose above, convert it into a CFN file, add the ulimits snippet, and run it directly in CFN.
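A sketch of that flow, assuming an ECS Docker context is active and using placeholder file/stack names:
docker compose convert > es-kibana-cfn.yml
# edit es-kibana-cfn.yml and add the ulimits section under the ES container definition
aws cloudformation deploy --template-file es-kibana-cfn.yml --stack-name es-kibana --capabilities CAPABILITY_IAM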
You can work backwards from here to re-add your variables etc.
HTH

Why elasticsearch on docker swarm requires a transport.host=localhost setting?

I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch docker container in the transport.host setting, or when I leave out the transport.host=localhost setting entirely.
I think that using the transport.host=localhost setting is wrong. Is there a proper configuration of Elasticsearch for Docker swarm?

How to create 3 node (1 master,2 worker) elasticsearch cluster on docker swarm?

I want to create a 3-node Elasticsearch cluster with 1 master node and 2 worker nodes, using ES v6 and Swarm v1.18. Could anyone help?
You need to create an Elasticsearch stack with 3 replicas of the service.
Create the file 'elasticsearch-swarm.yaml':
sudo nano elasticsearch-swarm.yaml
Type the following:
version: '3.7'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    hostname: elasticsearch1
    volumes:
      - elasticsearch1-data:/usr/share/elasticsearch/data
    environment:
      - cluster.name=elasticsearch-cluster
      - "discovery.zen.ping.unicast.hosts=tasks.elasticsearch1"
      - "network.host=0.0.0.0"
      - "node.max_local_storage_nodes=2"
    ports:
      - "9200:9200"
    networks:
      - elasticsearch_distributed
    deploy:
      replicas: 3
      restart_policy:
        delay: 30s
        max_attempts: 10
        window: 120s
volumes:
  elasticsearch1-data:
networks:
  elasticsearch_distributed:
    driver: overlay
Deploy the stack file:
sudo docker stack deploy --compose-file=elasticsearch-swarm.yaml elasticsearch
This command will create 3 replicas of the Elasticsearch server inside the same cluster.
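To verify that the replicas came up, you can list the stack's services and tasks (service names follow the <stack>_<service> convention, so this assumes the stack was deployed as 'elasticsearch'):
sudo docker stack services elasticsearch
sudo docker service ps elasticsearch_elasticsearch1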
If you receive an error saying that max_map_count is not high enough and must be set to at least 262144, execute the steps below:
Edit the file /etc/sysctl.conf:
sudo nano /etc/sysctl.conf
Add this key at the end of the file:
vm.max_map_count=262144
Apply the settings to the current instance:
sudo sysctl -w vm.max_map_count=262144
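On a swarm, note that this kernel setting has to be applied on every node that may run an Elasticsearch task; you can verify the current value with:
sysctl vm.max_map_count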

Configure an ELK cluster in docker containers

I'm trying to configure an ELK cluster using 2 docker containers.
I'm using the following image:
es241_l240_k461: Elasticsearch 2.4.1, Logstash 2.4.0, and Kibana 4.6.1. (reference: https://hub.docker.com/r/sebp/elk/ )
I have created 2 docker containers for that image with docker-compose; each works perfectly in standalone mode.
I want to link the 2 ELK nodes to each other so as to form a cluster, but I haven't found a proper solution.
The Elasticsearch node in container1 doesn't communicate with the Elasticsearch node in container2.
These are the two docker-compose.yml:
CONTAINER1:
version: '2'
services:
  elasticsearch01:
    image: sebp/elk:es241_l240_k461
    ports:
      - "5601:5601"
      - "9200:9200"
      - "9300:9300"
      - "5044:5044"
    volumes:
      - /opt/ELK1/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
CONTAINER2:
version: '2'
services:
  elasticsearch02:
    image: sebp/elk:es241_l240_k461
    ports:
      - "5602:5601"
      - "9201:9200"
      - "9301:9300"
      - "5045:5044"
    volumes:
      - /opt/ELK2/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
I have configured the elasticsearch.yml inside the docker containers in this way:
NODE IN CONTAINER1:
cluster.name: elasticsearchcluster
node.name: node1
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "172.21.0.2"]
discovery.zen.minimum_master_nodes: 1
NODE IN CONTAINER2:
cluster.name: elasticsearchcluster
node.name: node2
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "172.22.0.2"]
discovery.zen.minimum_master_nodes: 1
The key is the discovery.zen.ping.unicast.hosts parameter: I don't have the real IP address, because it's a docker container.
I tried docker inspect elasticsearch01, which shows the following "IPAddress" property:
"NetworkSettings": {
...
"Networks": {
"ELK1_default": {
...
"Gateway": "172.22.0.1",
"IPAddress": "172.22.0.2",
...
}
}
}
But it doesn't work if I set that IP address.
How do I configure the cluster properly?
EDIT
Trying the host IP address and the port, node 1 starts but node 2 fails with no errors.
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "192.168.0.1:9300"] -> OK
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "192.168.0.2:9300"] -> FAILS with no errors
Instead of using a prepared docker image with the whole ELK stack, you could go for something like this:
version: '3'
services:
  elasticsearch:
    image: elasticsearch:2.4.1
    ports:
      - 9200:9200
    networks:
      - elk
  elasticsearch_slave:
    image: elasticsearch:2.4.1
    networks:
      - elk
    depends_on:
      - elasticsearch
    command: elasticsearch --discovery.zen.ping.unicast.hosts=elasticsearch
  logstash:
    image: logstash:2.3.3
    hostname: logstash
    networks:
      - elk
    volumes:
      - ./logstash.conf:/config/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - 5044:5044
    command: logstash -f /config/logstash.conf
  kibana:
    image: kibana:4.5.1
    hostname: kibana
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
    ports:
      - 5601:5601
networks:
  elk:
    driver: bridge
Once you start your images using docker-compose up -d, you can scale the slaves with the following command: docker-compose scale elasticsearch_slave=5
Once that's done, you'll have 5 slaves plus a client node that opens up port 9200 as a gateway to the whole cluster.
For instance, after doing that, http://localhost:9200/_cat/nodes?v displays the following:
Thanks to Evaldas Buinauskas, I've found the solution with the stack!
First, we need only one docker-compose.yml.
In that file we need to configure two services (one for each container to create), and a network to be shared between the two services.
This is the new docker-compose.yml:
version: '2'
services:
  elk1:
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    image: sebp/elk:es241_l240_k461
    networks:
      - elk_net
    ports:
      - "5601:5601"
      - "9200:9200"
      - "9300:9300"
      - "5044:5044"
    volumes:
      - /opt/elk/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
  elk2:
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    image: sebp/elk:es241_l240_k461
    networks:
      - elk_net
    ports:
      - "5602:5601"
      - "9201:9200"
      - "9301:9300"
      - "5045:5044"
    depends_on:
      - elk1
    volumes:
      - /opt/elk/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
networks:
  elk_net:
    driver: bridge
The command docker-compose up will create 3 elements:
the container elk1
the container elk2
the network elk_net
With the docker network inspect elk_net command we can view the (Docker) IP addresses assigned to the 2 containers.
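As a shortcut, a Go-template filter can print just the container names and addresses (a sketch; the exact output format can vary slightly between Docker versions):
docker network inspect elk_net -f '{{range .Containers}}{{println .Name .IPv4Address}}{{end}}'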
The elasticsearch.yml files have to be configured as follows.
For node1 (container elk1):
cluster.name: elasticsearchcluster
node.name: node1
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: ${IP_ADDRESS_ELK1}
discovery.zen.ping.unicast.hosts: ["${IP_ADDRESS_ELK2}"]
discovery.zen.minimum_master_nodes: 1
For node2 (container elk2):
cluster.name: elasticsearchcluster
node.name: node2
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: ${IP_ADDRESS_ELK2}
discovery.zen.ping.unicast.hosts: ["${IP_ADDRESS_ELK1}"]
discovery.zen.minimum_master_nodes: 1
With this configuration, the cluster works perfectly: the two nodes join correctly, and an HTTP GET to either Elasticsearch server returns all the documents saved across the 2 nodes.
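For example, to confirm that both nodes joined, a quick check against either published port is:
curl http://localhost:9200/_cat/nodes?v
curl http://localhost:9201/_cat/nodes?v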
