Configuring Elasticsearch and Kibana with Docker

I'm working with docker for the first time.
I successfully installed elasticsearch and kibana on docker, but when I try to connect kibana with elastic I get a red status with the following errors:
ui settings Elasticsearch plugin is red
plugin:elasticsearch#5.1.1 Authentication Exception
I'm not sure, but I think the problem is that Kibana doesn't pass Elasticsearch's X-Pack authentication.
Now I'm trying to disable this authentication via the elasticsearch.yml file, according to the instructions here.
But I can't find the yml file anywhere (I searched /usr/share/elasticsearch, but I can't find either a config directory or an elasticsearch.yml file).
How do I configure Elasticsearch with Docker?
P.S.
I'm working with Ubuntu 16.04.

For Debian/Ubuntu/Mint, you can find the config files under the /etc folder:
/etc/elasticsearch/elasticsearch.yml
Take a look at: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-dir-layout.html
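If Elasticsearch is running in a Docker container rather than installed from a package, the config directory lives inside the container instead of on the host. A minimal sketch of how to inspect it, assuming your container is named elasticsearch:
# list the config directory inside the container
docker exec -it elasticsearch ls /usr/share/elasticsearch/config
# print the effective elasticsearch.yml
docker exec -it elasticsearch cat /usr/share/elasticsearch/config/elasticsearch.yml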

I'm wondering why this is even happening. With the following docker-compose.yml it's working fine for me with security enabled:
---
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    links:
      - elasticsearch
    ports:
      - 5602:5601
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.1.1
    cap_add:
      - IPC_LOCK
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9201:9200
volumes:
  esdata1:
    driver: local
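To try this out, save the file and bring the stack up. X-Pack security is enabled by default in these 5.x images; a quick smoke test, assuming the 5.x default credentials elastic/changeme (change them for anything beyond a local experiment):
docker-compose up -d
# Elasticsearch is published on 9201 and Kibana on 5602 in the file above
curl -u elastic:changeme http://localhost:9201
# then browse to http://localhost:5602 and log in with the same credentials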

I successfully ran Elasticsearch and Kibana using the official Elastic Docker images. Somehow, the container version in the official Elastic documentation didn't work for me.

If you prefer to start the containers using docker run rather than a compose file (only use this for dev environments; it is not recommended for prod):
docker network create elastic
docker run --network=elastic --name=elasticsearch docker.elastic.co/elasticsearch/elasticsearch:5.2.2
docker run --network=elastic -p 5601:5601 docker.elastic.co/kibana/kibana:5.2.2
A brief description can be found here:
https://discuss.elastic.co/t/kibana-docker-image-doesnt-connect-to-elasticsearch-image/79511/4
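This works because Kibana's 5.x image defaults ELASTICSEARCH_URL to http://elasticsearch:9200, and Docker's embedded DNS resolves the elasticsearch container name on the user-defined network. A variant of the commands above that runs detached and also publishes 9200 so you can smoke-test from the host (the -d, --name=kibana, and -p 9200:9200 flags are my additions):
docker run -d --network=elastic --name=elasticsearch -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.2.2
docker run -d --network=elastic --name=kibana -p 5601:5601 docker.elastic.co/kibana/kibana:5.2.2
# X-Pack ships enabled in these images; the 5.x default credentials are elastic / changeme
curl -u elastic:changeme http://localhost:9200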

Related

Connect Kibana to Elasticsearch - ELASTICSEARCH_URL vs ELASTICSEARCH_HOSTS

I don't know which environment variable to use:
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    environment:
      SERVER_NAME: kibana.example.org
      ELASTICSEARCH_HOSTS: http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200
      ELASTICSEARCH_URL: http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200
Should I be using ELASTICSEARCH_URL or ELASTICSEARCH_HOSTS?
Since you are using the Kibana 6.2.4 Docker image, it has to be ELASTICSEARCH_URL. In the official guide for configuring Kibana 6.2, the ELASTICSEARCH_HOSTS setting is not even listed; it came with later versions.
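For reference, the Kibana Docker image translates environment variables like ELASTICSEARCH_URL into the corresponding kibana.yml settings (elasticsearch.url here). A minimal docker run equivalent of the compose service above, keeping only the variable that 6.2.4 understands:
docker run -d --name kibana -p 5601:5601 \
  -e SERVER_NAME=kibana.example.org \
  -e ELASTICSEARCH_URL=http://ip-xxx-31-9-xxx.us-west-2.compute.internal:9200 \
  docker.elastic.co/kibana/kibana:6.2.4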

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking properly configured so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries specified:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, as well as leaving these settings unspecified so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, the container logs on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on startup does not change. I verified that the elasticsearch.yml is being picked up inside of the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that Elasticsearch wasn't started inside of the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script that I'll call from docker run to start Elasticsearch and Cerebro, and that should do the trick. However, I would still like to hear if there is a better way to configure this.
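For reference, a minimal sketch of such a wrapper script (call it start.sh; the name and exact layout are my assumptions), reusing the same service command and Cerebro path from above:
#!/usr/bin/env bash
# hypothetical start.sh passed to docker run: start the elasticsearch
# service in the background, then keep the container alive on Cerebro
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro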
I made a GitHub docker-compose repo that will spin up an elasticsearch, kibana, logstash, cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get Elasticsearch and Cerebro working in one Docker container, you need to use supervisord:
https://docs.docker.com/engine/admin/using_supervisord/
I will update with more details.
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.
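In other words, assuming the directory layout above:
docker-compose build
docker-compose up -d
# verify both endpoints once the containers are up; X-Pack is enabled by
# default in this image, so the 5.x default credentials elastic/changeme apply
curl -u elastic:changeme http://localhost:9200
curl http://localhost:9000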

Linking Kibana with Elasticsearch

I have the following docker containers running on my box...
CONTAINER ID   IMAGE           COMMAND                  CREATED              STATUS              PORTS                              NAMES
5da7523e527b   kibana          "/docker-entrypoint.s"   About a minute ago   Up About a minute   0.0.0.0:5601->5601/tcp             elated_lovelace
20aea0e545ca   elasticsearch   "/docker-entrypoint.s"   3 hours ago          Up 3 hours          0.0.0.0:9200->9200/tcp, 9300/tcp   sad_meitner
My aim was to get Kibana to link to my Elasticsearch container; however, when I hit Kibana it tells me that I do not have any document stores. I know this is not right because I definitely have documents in Elasticsearch. I'm guessing my link command is wrong.
This is the docker command I used to start the kibana container.
docker run -p 5601:5601 --link sad_meitner:elasticsearch -d kibana
Can someone tell me what I've done wrong?
thanks
First of all, linking is a legacy feature. Create a user-defined network first:
docker network create mynetwork --driver=bridge
Now use mynetwork for containers you want to be able to communicate with each other.
docker run -p 5601:5601 --name kibana -d --network mynetwork kibana
docker run -p 9200:9200 -p 9300:9300 --name elasticsearch -d --network mynetwork elasticsearch
Docker will run a DNS server for your user-defined network, so you can ping other containers by name.
docker exec -it kibana /bin/bash
ping elasticsearch
You can use telnet or curl to verify kibana->elasticsearch connectivity from kibana container.
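For example (assuming curl is available in the Kibana image; otherwise try telnet elasticsearch 9200):
# from inside the kibana container, hit Elasticsearch by its container name
docker exec -it kibana curl http://elasticsearch:9200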
P.S. I used the official (library) Docker images for the ELK stack with user-defined networking recently, and it worked like a charm.
You can add ENV ELASTICSEARCH_URL=elasticsearch:9200 to your Dockerfile before building Kibana, then use docker-compose to run Elasticsearch with Kibana like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  kibana:
    image: docker.elastic.co/kibana/kibana:5.3.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

Docker-compose links vs external_links

I believe this is a simple question, but I still do not get it from the Docker Compose documentation. What is the difference between links and external_links?
I like external_links as I want to have core docker-compose and I want to extend it without overriding the core links.
Specifically, I am trying to set up Logstash, which depends on Elasticsearch. Elasticsearch is in the core docker-compose file and Logstash is in the dependent one, so I had to define Elasticsearch in the dependent docker-compose file as a reference, since Logstash needs it as a link. BUT Elasticsearch already has its own links, which I do not want to repeat in the dependent file.
Can I do that with external_links instead of links?
I know that links makes sure the linked container is up before linking; does external_links do the same?
Any help is appreciated. Thanks.
Use links when you want to link together containers within the same docker-compose.yml. All you need to do is set the link to the service name. Like this:
---
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  links:
    - elasticsearch
If you want to link a container inside of the docker-compose.yml to another container that was not included in the same docker-compose.yml or started in a different manner then you can use external_links and you would set the link to the container's name. Like this:
---
logstash:
  image: logstash:latest
  command: logstash -f logstash.conf
  ports:
    - "5000:5000"
  external_links:
    - my_elasticsearch_container
I would suggest the first way, unless your use case for some reason requires that they cannot be in the same docker-compose.yml.
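With external_links, the container you link to must already be running before you bring the compose project up, for example:
# start the external container first (the name matches the external_links entry above)
docker run -d --name my_elasticsearch_container elasticsearch:latest
# then start the compose service that references it
docker-compose up -d logstash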
I don't think external_links does the same as links during docker-compose up. links waits for the container to boot up and get an IP address, which is then written to the /etc/hosts file, whereas external_links expects the container named in the docker-compose file to already exist with its IP:hostname mapping. Moreover, links will be deprecated.
Here is a link to a Docker Compose project that uses Elasticsearch, Logstash, and Kibana. You will see that I'm using links:
https://github.com/bahaaldine/elasticsearch-paris-accidentology-demo

"message":"No living connections","node_env":"production"

I'm trying to install Kibana 4 on my machine, but it's giving the following errors.
{"#timestamp":"2015-04-15T06:25:50.688Z","level":"error","node_env":"production","error":"Request error, retrying -- connect ECONNREFUSED"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"Unable to revive connection: http://0.0.0.0:9200/","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.693Z","level":"warn","message":"No living connections","node_env":"production"}
{"#timestamp":"2015-04-15T06:25:50.698Z","level":"fatal","message":"No Living connections","node_env":"production","error":{"message":"No Living connections","name":"Error","stack":"Error: No Living connections\n at sendReqWithConnection (/home/kibana-4.0.0-rc1-linux-x64/src/node_modules/elasticsearch/src/lib/transport.js:174:15)\n
The ECONNREFUSED is telling you that it can't connect to Elasticsearch. The http://0.0.0.0:9200/ tells you what it's trying to connect to.
You need to modify the config/kibana.yml and change the elasticsearch_url setting to point to your cluster. If you are running Elasticsearch on the same box, the correct value is http://localhost:9200.
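For example, on the same box (a sketch; the sed pattern assumes the default Kibana 4 config layout, and you would adjust the URL for a remote cluster):
# point Kibana 4 at the local cluster instead of 0.0.0.0
sed -i 's|^elasticsearch_url:.*|elasticsearch_url: "http://localhost:9200"|' config/kibana.yml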
Your Elasticsearch is down.
In my case it was because the environment variable JAVA_HOME was not set correctly; you have to set it manually. These are the guidelines to do it:
Go to your PC's environment variables.
Create a new variable with the name JAVA_HOME. The variable value should be the Java installation path.
Make sure your path has no spaces. If your Java is in Program Files (x86), you can use the short name Progra~2 instead of Program Files (x86).
As a result you have something like this: C:\Progra~2\Java\jre1.8.0_131
There is another reason why this might happen if you are using the AWS Elasticsearch service: not granting the right access policies for ES, or not loading the right AWS credentials, will be the root cause.
There is one more possibility: maybe your Elasticsearch is not running properly. Please check this link and try to dockerize Elasticsearch.
For me, this docker-compose.yml file dockerizes Elasticsearch:
services:
  elasticsearch:
    image: "${CREATED_IMAGE_NAME_PREFIX}:1"
    container_name: efk_elastic
    build:
      context: ./elasticsearch
      args:
        EFK_VERSION: $EFK_VERSION
        ELASTIC_PORT1: $ELASTIC_PORT1
        ELASTIC_PORT2: $ELASTIC_PORT2
    environment:
      # node.name: node
      # cluster.name: elasticsearch-default
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      discovery.type: single-node
      ELASTIC_PASSWORD: changeme
      http.cors.enabled: "true"
      http.cors.allow-credentials: "true"
      http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
    hostname: elasticsearch
    ports:
      - "${ELASTIC_EXPOSED_PORT1}:$ELASTIC_PORT1"
      - "$ELASTIC_EXPOSED_PORT2:${ELASTIC_PORT2}"
    volumes:
      # - type: bind
      #   source: ./elasticsearch/config/elasticsearch.yml
      #   target: /usr/share/elasticsearch/config/elasticsearch.yml
      #   read_only: true
      - type: volume
        source: elasticsearch_data
        target: /usr/share/elasticsearch/data
    networks:
      - efk
Please note that this is not complete; for more details, please see my GitHub repository.
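Since the file is parameterized, Compose also needs the referenced variables, typically from a .env file next to docker-compose.yml. A hypothetical example (every value below is a placeholder of mine, not taken from the repository):
# .env — placeholder values only
CREATED_IMAGE_NAME_PREFIX=efk/elasticsearch
EFK_VERSION=7.10.2
ELASTIC_PORT1=9200
ELASTIC_PORT2=9300
ELASTIC_EXPOSED_PORT1=9200
ELASTIC_EXPOSED_PORT2=9300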
