Basic authentication in ELK Stack

I am trying to enable basic authentication in our ELK Stack. I tried the steps below to enable it:
1. docker run -d -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.10.1
2. docker exec -it stoic_darwin /bin/bash
3. Inside the container, executed # bin/elasticsearch-certutil ca
4. Entered no password and exited the container.
5. Copied the generated file to host system - docker cp stoic_darwin:/usr/share/elasticsearch/elastic-stack-ca.p12 .
6. Updated the docker-compose file as below.
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.keystore.type=PKCS12
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.type=PKCS12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.1
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      ELASTICSEARCH_USERNAME: "kibana"
      ELASTICSEARCH_PASSWORD: "kibana"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
volumes:
  esdata1:
    driver: local
7. # docker-compose up -d elasticsearch
8. But it fails with the errors below.
elasticsearch | "at org.elasticsearch.xpack.core.ssl.SSLService.loadSSLConfigurations(SSLService.java:524) ~[?:?]",
elasticsearch | ElasticsearchSecurityException[failed to load SSL configuration [xpack.security.transport.ssl]]; nested: ElasticsearchException[failed to initialize SSL TrustManager - not permitted to read truststore file [/usr/share/elasticsearch/config/elastic-stack-ca.p12]]; nested: AccessDeniedException[/usr/share/elasticsearch/config/elastic-stack-ca.p12];
elasticsearch | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/elastic-stack-ca.p12
elasticsearch | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
elasticsearch | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
elasticsearch | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
elasticsearch | at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218)
elasticsearch | "at org.elasticsearch.xpack.core.ssl.SSLService.<init>(SSLService.java:142) ~[?:?]",
elasticsearch | "at org.elasticsearch.xpack.core.XPackPlugin.createSSLService(XPackPlugin.java:455) ~[?:?]",
elasticsearch | "at org.elasticsearch.xpack.core.XPackPlugin.createComponents(XPackPlugin.java:288) ~[?:?]",
elasticsearch | "at org.elasticsearch.node.Node.lambda$new$15(Node.java:553) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch | "at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:271) ~[?:?]",
elasticsearch | "at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) ~[?:?]",
elasticsearch | "at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]",
elasticsearch | at java.base/java.nio.file.Files.newByteChannel(Files.java:375)
I believe that enabling basic authentication in ELK requires the SSL certificate to connect with single/multi-node clusters. So how can I resolve this error?
Also, is there a way to generate the certificate (performed at step 3) and to set up the built-in users' passwords using the bin/elasticsearch-setup-passwords interactive command from docker-compose itself?
(Or) if there is any simple way to enable authentication through docker-compose, that would be helpful. Please help me with the steps. Thanks in advance.
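(A sketch of one common approach, not from the original post: in the official image, the ELASTIC_PASSWORD environment variable sets the bootstrap password for the elastic user, and the built-in users can then be initialized once via docker exec. The container name and password here are illustrative.)

# docker-compose.yml excerpt (elasticsearch service):
environment:
  - ELASTIC_PASSWORD=changeme

# one-off, after the container is up:
docker exec -it elasticsearch bin/elasticsearch-setup-passwords auto --batch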
The above error was due to file permissions; I have modified the permissions and fixed it.
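(For reference, a fix along those lines, run on the host before starting the stack; this assumes the official image runs Elasticsearch as uid 1000, so the mounted keystore must be readable by that user:

chmod 644 elastic-stack-ca.p12    # or: chown 1000:0 elastic-stack-ca.p12
)
Now my Logstash continuously throws the errors below.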
logstash | [2021-01-27T06:27:42,365][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
logstash | [2021-01-27T06:27:42,738][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash | [2021-01-27T06:28:12,358][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
logstash | [2021-01-27T06:28:12,753][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash | [2021-01-27T06:28:42,366][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
logstash | [2021-01-27T06:28:42,766][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
In my Logstash conf file's output section, I have already provided authentication credentials:
output {
  elasticsearch {
    action => "index"
    hosts => "http://elasticsearch:9200"
    index => "project-info"
    user => "elastic"
    password => "password"
  }
}
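Worth noting (not from the original post): those 401s are raised by Logstash's license checker, which authenticates with the monitoring settings in logstash.yml, not with the credentials in the pipeline's output block. A sketch of the logstash.yml entries it would need, with illustrative values:

# logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "password"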

Related

Could not communicate to Elasticsearch

I am trying to send my Node app logs through Fluentd into Elasticsearch and on to Kibana, but I am having a problem connecting Fluentd with Elasticsearch in Docker. I want to dockerize this EFK stack.
I have shared the relevant files below.
Error
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 172.20.0.2:9200 (Errno::ECONNREFUSED)
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    user elastic
    password pass
  </store>
</match>
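As an aside (an assumption about usage, not shown in the post): with the forward input above, a container's logs can be routed into Fluentd through Docker's fluentd logging driver, which is one way to test the pipeline end to end:

docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=app.node \
  alpine echo "hello from the app"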
Dockerfile
FROM fluent/fluentd:v1.15-1
USER root
RUN gem install elasticsearch -v 7.6.0
# RUN gem install fluent-plugin-elasticsearch -v 7.6.0
RUN gem install fluent-plugin-elasticsearch -v 4.1.1
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
USER fluent
docker-compose.yml
version: '3'
services:
  fluentd:
    build: ./fluentd
    container_name: loggingFluent
    volumes:
      - ./fluentd/conf:/fluentd/etc
      # - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch-Logging
    ports:
      - 9200:9200
    expose:
      - 9200
    environment:
      discovery.type: 'single-node'
      ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
      xpack.security.enabled: 'true'
      ELASTIC_PASSWORD: 'pass'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kibana-Logging
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
Maybe I am missing something with Docker networking, because I am using Docker for the first time. I have checked the ports exposed by the Docker containers and they are fine. I have done this without Docker using the same settings, but I'm having problems doing it with Docker. Looking forward to your responses. Thank you very much.
Adding a username to the Elasticsearch environment solved the issue:
elasticsearch environment:
environment:
  discovery.type: 'single-node'
  ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
  xpack.security.enabled: 'true'
  ELASTIC_PASSWORD: 'pass'
  ELASTIC_USERNAME: 'elastic'
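A quick sanity check that basic auth is then working (not part of the original answer; credentials as above):

curl -u elastic:pass http://localhost:9200/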

How to run a Beats container that requires authentication from Elasticsearch

The main purpose: I want to use Logstash to collect log files that reside on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Then I want to install Filebeat on the target host in order to send logs to the ELK host.
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
-E setup.kibana.host=x.x.x.x:5601 \
-E ELASTIC_PASSWORD="changeme" \
-E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, the error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic" but the error still persists.
You can disable basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by setting the environment variable below for the ES Docker image and starting the ES container:
xpack.security.enabled: false
After this, there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
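Alternatively, if security should stay enabled, Filebeat takes credentials as regular settings rather than a bare ELASTIC_PASSWORD variable; a sketch with illustrative values:

docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E output.elasticsearch.hosts=["x.x.x.x:9200"] \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme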

Kibana 7.7.0 Basic version: management tab missing Security panel when started from docker

Context: I want to use X-Pack in order to control which user can see which dashboard, using only the free version.
I downloaded the Kibana 7.7.0 zip from here, installed it, and I can see the Security options to create users/roles. In fact, I created an index, a user, and a role, and successfully assigned the index to that role with Elastic/Kibana installed on my Windows machine.
The issue happens only with Elastic/Kibana started from Docker. I started Kibana 7.7.0 from Docker and I can't see the Security panel under the Management page. Googling, I found I must use the Basic version instead of the Open Source one. As far as I can see, the docker-compose below is downloading the Basic version, since there isn't "-oss" at the end. Also, I must use the images provided by Elastic instead of the Apache-licensed ones; as far as I can see, it is pulling an image not related to Apache.
I am not sure if the issue is only with Kibana, since I could enable xpack security on Elastic and run elasticsearch-setup-passwords interactive inside the Elastic Docker container. I can log in to Kibana with the elastic user, but I don't see the Security tab under Management.
Also, I am getting an issue from Logstash trying to connect to Elasticsearch, even though I set up the logstash_system user (see logstash.conf below).
You can see that I have enabled xpack.security.enabled=true on Elasticsearch.
docker-compose.yml
version: '3.2'
services:
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    depends_on:
      - zoo1
    links:
      - zoo1
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_CREATE_TOPICS: "log:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.7.0
    command: filebeat -e -strict.perms=false
    volumes:
      - "//c/Users/my-comp/docker_folders/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "//c/Users/my-comp/docker_folders/sample-logs:/sample-logs"
    links:
      - kafka1
    depends_on:
      - kafka1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "//c/Users/my-comp/docker_folders/esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://x.x.x.x:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
kibana.yml
server.name: kibana
server.host: "0"
xpack.monitoring.ui.container.elasticsearch.enabled: false
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "kibana"
elasticsearch.password: "k12345"
logstash.conf
input {
  kafka {
    codec => "json"
    bootstrap_servers => "kafka1:9092"
    topics => ["app_logs","request_logs"]
    tags => ["myapp"]
  }
}
filter {
  *** not relevant
}
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "logstash_system"
    password => "l12345"
  }
}
In case it is worth mentioning, Logstash is failing to connect to Elasticsearch with the log below and, as you can see from logstash.conf, I set up logstash_system (the user created by elasticsearch-setup-passwords interactive):
logstash_1 | [2020-05-19T20:18:45,559][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-19T20:19:13,815][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
So, my straight question is: am I missing some extra configuration in order to enable Security on Kibana? Surrounding questions: are Kibana/Elastic from Docker not the same as from the zip file? Am I missing some extra configuration to allow Logstash to connect to Elasticsearch?
*** edited
Logstash is still failing to connect to Elasticsearch after I changed to
logstash.conf
...
output {
  elasticsearch {
    #hosts => [ "${ELASTIC_HOST1}", "${ELASTIC_HOST2}", "${ELASTIC_HOST3}" ]
    #hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
    xpack.monitoring.elasticsearch.username: "logstash_system"
    xpack.monitoring.elasticsearch.password: => "l12345"
  }
}
The logs are
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-20T13:39:05,095][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-20T13:39:05,120][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-20T13:39:06,134][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2020-05-20T13:39:06,150][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
filebeat_1 | 2020-05-20T13:38:53.069Z INFO log/harvester.go:297 Harvester started for file: /sample-logs/request-2019-11-17F.log
logstash_1 | [2020-05-20T13:39:08,611][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-20T13:39:11,449][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"=>\" at line 86, column 7 (byte 2771) after output {\r\n elasticsearch {\r\n #hosts => [ \"${ELASTIC_HOST1}\", \"${ELASTIC_HOST2}\", \"${ELASTIC_HOST3}\" ]\r\n\t#hosts => [\"http://192.168.99.100:9200\"]\r\n index => \"%{[fields][topic_name]}-%{+YYYY.MM.dd}\"\r\n\txpack", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
I guess the most relevant part of this log is:
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
Note that it is failing with the "Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'" error. I guess that in my particular Docker setup it needs to be the Docker Machine IP, which in my case is 192.168.99.100. Is there some way to replace elasticsearch with this IP?
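For what it's worth (not from the original thread): the parse error occurs because the xpack.monitoring.* keys are logstash.yml settings, not pipeline syntax, and within the Compose network the service name elasticsearch already resolves through Docker's built-in DNS, so no static IP is required. A sketch of a valid output block, with illustrative credentials (the user needs index-write privileges, which the monitoring-only logstash_system user does not have):

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}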

failed to load elasticsearch nodes

Recently, I began to learn JHipster. After I built a JHipster project and ran it, it throws an error:
ERROR 16725 --- [ restartedMain] .d.e.r.s.AbstractElasticsearchRepository : failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{yaU7NNKDRzK_pZir0MCKNw}{localhost}{127.0.0.1:9300}]
and here is my elasticsearch.yml:
version: '2'
services:
  cmpmanager-elasticsearch:
    image: elasticsearch:5.6.5
    # volumes:
    #   - ~/volumes/jhipster/CMPManager/elasticsearch/:/usr/share/elasticsearch/data/
    ports:
      - 9200:9200
      - 9300:9300
    command: -Enetwork.host=0.0.0.0 -Ediscovery.type=single-node
If you are accessing the ES nodes from another host, check the node status using the command below:
curl '192.168.x.y:9200/_cat/indices?v'
If it fails to show the status, you have to open up ES for access from outside the host.
host=0.0.0.0
Does not work for me either.
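For context (an assumption about the application side, not from the thread): JHipster of that era used the Elasticsearch transport client on port 9300, so the Spring properties must point at a reachable node with a matching cluster name. A sketch with illustrative values:

# application-dev.yml (legacy Spring Data Elasticsearch property names)
spring:
  data:
    elasticsearch:
      cluster-name: elasticsearch
      cluster-nodes: localhost:9300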

docker-compose.yml for elasticsearch and kibana

My aim is to get the elasticsearch and kibana images from DockerHub working locally using Docker.
This does the trick and works perfectly...
docker network create mynetwork --driver=bridge
docker run -p 5601:5601 --name kibana -d --network mynetwork kibana
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch
Today a bird whispered in my ear and said I should learn docker-compose. So I tried to do all of what's above inside a docker-compose.yml.
Here is my attempt.
version: "2.0"
services:
elasticsearch:
image: elasticsearch:latest
ports:
- "9200:9200"
- "9300:9300"
networks:
- docker_elk
kibana:
image: kibana:latest
ports:
- "5601:5601"
networks:
- docker_elk
networks:
docker_elk:
driver: bridge
Unfortunately this does not work. I've been racking my brains as to why I always get the ECONNREFUSED error shown below when I run docker-compose up.
$ docker-compose up
Starting training_elasticsearch_1
Recreating training_kibana_1
Attaching to training_elasticsearch_1, training_kibana_1
elasticsearch_1 | [2016-11-02 22:39:55,798][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | [2016-11-02 22:39:56,036][INFO ][node ] [Caliban] version[2.4.1], pid[1], build[c67dc32/2016-09-27T18:57:55Z]
elasticsearch_1 | [2016-11-02 22:39:56,036][INFO ][node ] [Caliban] initializing ...
elasticsearch_1 | [2016-11-02 22:39:56,713][INFO ][plugins ] [Caliban] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
elasticsearch_1 | [2016-11-02 22:39:56,749][INFO ][env ] [Caliban] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/vda2)]], net usable_space [54.8gb], net total_space [59gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2016-11-02 22:39:56,749][INFO ][env ] [Caliban] heap size [990.7mb], compressed ordinary object pointers [true]
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":11,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["error","elasticsearch"],"pid":11,"message":"Request error, retrying -- connect ECONNREFUSED 172.20.0.2:9200"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["warning","elasticsearch"],"pid":11,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["warning","elasticsearch"],"pid":11,"message":"No living connections"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:elasticsearch@1.0.0","error"],"pid":11,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:39:58Z","tags":["listening","info"],"pid":11,"message":"Server running at http://0.0.0.0:5601"}
elasticsearch_1 | [2016-11-02 22:39:58,515][INFO ][node ] [Caliban] initialized
elasticsearch_1 | [2016-11-02 22:39:58,515][INFO ][node ] [Caliban] starting ...
elasticsearch_1 | [2016-11-02 22:39:58,587][INFO ][transport ] [Caliban] publish_address {172.20.0.2:9300}, bound_addresses {[::]:9300}
elasticsearch_1 | [2016-11-02 22:39:58,594][INFO ][discovery ] [Caliban] elasticsearch/1Cf9qz7CSCqHBEEuwG7PQw
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:40:00Z","tags":["warning","elasticsearch"],"pid":11,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:40:00Z","tags":["warning","elasticsearch"],"pid":11,"message":"No living connections"}
elasticsearch_1 | [2016-11-02 22:40:01,650][INFO ][cluster.service ] [Caliban] new_master {Caliban}{1Cf9qz7CSCqHBEEuwG7PQw}{172.20.0.2}{172.20.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
elasticsearch_1 | [2016-11-02 22:40:01,661][INFO ][http ] [Caliban] publish_address {172.20.0.2:9200}, bound_addresses {[::]:9200}
elasticsearch_1 | [2016-11-02 22:40:01,661][INFO ][node ] [Caliban] started
elasticsearch_1 | [2016-11-02 22:40:01,798][INFO ][gateway ] [Caliban] recovered [1] indices into cluster_state
elasticsearch_1 | [2016-11-02 22:40:02,149][INFO ][cluster.routing.allocation] [Caliban] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1 | {"type":"log","@timestamp":"2016-11-02T22:40:03Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":11,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
^CGracefully stopping... (press Ctrl+C again to force)
Stopping training_kibana_1 ... done
Stopping training_elasticsearch_1 ... done
Can someone please help me understand why?
Thanks
To add the hard dependency on elasticsearch for kibana, you need the depends_on variable to be set as shown below. Also, to add to @Phil McMillan's answer, you can set the ELASTICSEARCH_URL variable in kibana without static addressing, using Docker's built-in DNS mechanism.
version: '2.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    container_name: elasticsearch
    networks:
      docker-elk:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.4.3
    container_name: kibana
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    networks:
      - docker-elk
    depends_on:
      - elasticsearch
networks:
  docker-elk:
    driver: bridge
Note the environment variable ELASTICSEARCH_URL=http://elasticsearch:9200 just uses the container name (elasticsearch), which the Docker DNS server is able to resolve.
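A quick way to confirm that resolution from inside the Kibana container (assuming curl is available in the image):

docker exec kibana curl -s http://elasticsearch:9200/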
You need to include the links.
version: "2.0"
services:
elasticsearch:
image: elasticsearch:latest
ports:
- "9200:9200"
- "9300:9300"
networks:
- docker_elk
kibana:
image: kibana:latest
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- docker_elk
networks:
docker_elk:
driver: bridge
UPDATED
When using the image elasticsearch:latest, it's Elasticsearch 5.0, which requires us to increase the Docker host's virtual memory.
Before running the docker-compose, please make sure to run this command on your Docker host.
Linux:
su root
sysctl -w vm.max_map_count=262144
Windows (boot2docker)
docker-machine ssh default
sudo sysctl -w vm.max_map_count=262144
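To make the setting survive reboots on a Linux host (standard sysctl practice, not specific to this answer):

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p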
If you don't want to change your Docker host, just use the Elasticsearch 2.x image at elasticsearch:2
I'm using docker-compose v3 format according to this post:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    container_name: elasticsearch
    environment:
      - node.name=es-node
      - cluster.name=es-cluster
      - discovery.type=single-node
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - local-es:/usr/share/elasticsearch/data
    networks:
      - es-net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.2
    container_name: kibana
    environment:
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    ports:
      - 5601:5601
    networks:
      - es-net
    depends_on:
      - elasticsearch
    restart: "unless-stopped"
networks:
  es-net:
volumes:
  local-es:
This works for me:
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    ports:
      - 5601:5601
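Then bring it up with standard Compose usage; Kibana becomes available on port 5601 once it connects to Elasticsearch:

docker-compose up -d
# Kibana: http://localhost:5601, Elasticsearch: http://localhost:9200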
File docker-compose.yml:
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.2
    container_name: elasticsearch
    environment:
      - node.name=es-node
      - cluster.name=es-cluster
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - ./docker-data/elasticsearch/data:/usr/share/elasticsearch/data
    networks:
      - elastic
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.2
    container_name: kibana
    ports:
      - 5601:5601
    networks:
      - elastic
    depends_on:
      - elasticsearch
    restart: 'unless-stopped'
networks:
  elastic:
Official documentation:
Install Kibana with Docker
I have this working. No links are needed and it doesn't have anything to do with elasticsearch starting before kibana. The issue is that when running under compose, a new bridged network is defined with its own set of IPs. Kibana needs to communicate with the cluster over this bridged network - "localhost" is not available anymore for the connectivity.
You need to do a couple of things:
You need to set a couple of values in kibana.yml (or under the environment: section of kibana in the compose file):
a. elasticsearch.url in kibana.yml (or ELASTICSEARCH_URL under the environment: section of kibana in the compose file) must be set to the specific IP of the cluster and port 9200 - localhost will not work, as it does when you run outside of compose.
elasticsearch.url: "http://172.16.238.10:9200"
b. You also need to set server.host (SERVER_HOST) to the bridged IP of the Kibana container.
server.host: "172.16.238.12"
Note: you still access the Kibana UI at http://127.0.0.1:5601, and you still need those "ports" entries!
You need to set an "ipam" configuration under your bridged network and assign elasticsearch and kibana static IPs so that Kibana can reach Elasticsearch via the configuration above.
Something like this should suffice:
elasticsearch:
  networks:
    esnet:
      ipv4_address: 172.16.238.10
kibana:
  networks:
    esnet:
      ipv4_address: 172.16.238.12
networks:
  esnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
Don't forget to use one of the documented methods to set Kibana configuration - ELASTICSEARCH_URL is required to be set!
I have a docker compose file that creates two elasticsearch nodes and a kibana instance all running on the same bridged network. It is possible.
