Kibana server is not ready yet! Trying to visualize data pushed to Elasticsearch

I have this problem in my architecture:
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
Below is the docker-compose file; it produces no errors except for Kibana. Searching the web, I've seen that the problem could be a memory requirement I need to set, but when I add a deploy section with resources I run into other Docker-related issues.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "prova1:1:1,stream:1,1,output:1,1,input:1,1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS:"-Xms1g-Xmx1g"
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://localhost:9200
      ELASTICSEARCH_HOSTS: "http://localhost:9200"
      elasticsearch.ssl.verificationMode: none

I managed to solve it with the help of the Elastic team on the Elastic community forum.
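For reference, the usual culprit behind the "No living connections" warnings in a setup like this is pointing Kibana at http://localhost:9200: inside the Kibana container, localhost is the Kibana container itself, not Elasticsearch. A minimal sketch of the Kibana service using the Compose service name instead (assuming the Elasticsearch service keeps the name elasticsearch, as above):

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    restart: always
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    environment:
      # use the Compose service name so Docker's embedded DNS can resolve it
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"

Note also that Kibana and Elasticsearch versions are expected to match; Kibana 7.12.0 against elasticsearch-oss 7.4.0 may still refuse to start even once the connection works.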

Related

Docker-Compose - TheHive, Cortex, Elasticsearch using Cassandra - question regarding ES localhost listening

I'm deploying on an AWS Ubuntu instance (a VM) using this yml:
version: "3.7"
services:
cassandra:
container_name: cassandra
image: cassandra:3.11
restart: unless-stopped
hostname: cassandra
environment:
- MAX_HEAP_SIZE=1G
- HEAP_NEWSIZE=1G
- CASSANDRA_CLUSTER_NAME=thp
volumes:
- ./cassandra/data:/var/lib/cassandra/data
networks:
- Hive
elasticsearch:
container_name: elasticsearch
image: elasticsearch:7.11.1
environment:
- http.host=0.0.0.0
- discovery.type=single-node
- cluster.name=hive
- script.allowed_types= inline
- thread_pool.search.queue_size=100000
- thread_pool.write.queue_size=10000
- gateway.recover_after_nodes=1
- xpack.security.enabled=false
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms256m -Xmx256m
ulimits:
nofile:
soft: 65536
hard: 65536
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
- ./elasticsearch/logs:/usr/share/elasticsearch/logs
networks:
- Hive
cortex:
container_name: cortex
image: thehiveproject/cortex:latest
depends_on:
- elasticsearch
environment:
- 'JOB_DIRECTORY=/opt/cortex/jobs'
ports:
- '0.0.0.0:9001:9001'
volumes:
- ./cortex/application.conf:/etc/cortex/application.conf
- '/var/run/docker.sock:/var/run/docker.sock'
- ./cortex/log/:/var/log/cortex
- /tmp:/tmp
#- ./cortex/Cortex-Analyzers:/opt/cortex/analyzers
#- .cortex/Cortex-Analyzers/analyzers.json:/opt/cortex/analyzers/analyzers.json
privileged: true
networks:
- Hive
thehive:
container_name: thehive
image: 'thehiveproject/thehive4:latest'
restart: unless-stopped
depends_on:
- cassandra
ports:
- '0.0.0.0:9000:9000'
volumes:
- ./thehive/application.conf:/etc/thehive/application.conf
- ./thehive/data:/opt/thp/thehive/data
- ./thehive/index:/opt/thp/thehive/index
command:
--cortex-port 9001
--cortex-keys ${CORTEX_KEY}
networks:
- Hive
networks:
Hive:
driver: bridge
plus two additional application.conf files for TheHive and Cortex. The problem I have is that when I list the containers with docker ps or docker-compose ps, I can see that thehive and cortex are published on 0.0.0.0:9000 and 0.0.0.0:9001 respectively, but elasticsearch only shows 9200/tcp, 9300/tcp. How can I access the web interface of ES locally? I can't figure this out. Using netstat I can't find port 9200 or 9300 listening anywhere.
Elasticsearch does not natively come with a web interface. Elasticsearch exposes a REST API that third-party tools can interact with.
One of the most popular tools for visualizing or viewing data in the Elastic Stack is Kibana, which interfaces with Elasticsearch. See this link for more details: https://www.elastic.co/kibana/
ES API Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
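As for why nothing shows up in netstat: the elasticsearch service in the compose file above has no ports: mapping, so 9200/9300 are only reachable on the internal Hive network and are never published on the host. If you want to query the REST API from the host, a minimal sketch (assumption: the rest of the elasticsearch service definition stays as above) is to publish the port:

  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:7.11.1
    ports:
      - "9200:9200"   # publish the REST API on the host
    networks:
      - Hive

After recreating the container, curl http://localhost:9200 should return the cluster info JSON.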

How can I run Elasticsearch on Docker via docker-compose

I could not find a docker-compose file for running Elasticsearch on Docker. I found a few, but they don't work.
You can use this:
version: '3.1'
services:
  elasticsearch:
    container_name: elasticsearch_compose
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    ports:
      - 9200:9200
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    networks:
      - elastic
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://localhost:9200
    networks:
      - elastic
networks:
  elastic:
    driver: bridge
volumes:
  elasticsearch-data:
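One caveat with this file: inside the kibana container, localhost refers to the Kibana container itself, so ELASTICSEARCH_URL=http://localhost:9200 does not actually point at the Elasticsearch container. In practice this setup still works because the Kibana 7.x Docker image reads ELASTICSEARCH_HOSTS, whose default (http://elasticsearch:9200) happens to match the service name here. A more explicit sketch of the kibana service (assumption: the Elasticsearch service keeps the name elasticsearch):

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    environment:
      # ELASTICSEARCH_HOSTS is the setting Kibana 7.x reads; use the service name, not localhost
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - elastic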

Elasticsearch service running on minikube cluster not reachable from within the cluster

I am using kompose to deploy this docker-compose.yaml
version: '3'
services:
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: myWebApp-dev
    command: ["/bin/sh", "-ec","sleep 1000"]
    image: 'localhost:5002/webapp:1'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    labels:
      kompose.image-pull-policy: 'IfNotPresent'
      kompose.service.type: nodeport
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    command: ["/bin/sh", "-ec","sleep 1000"]
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    command: ["/bin/sh", "-ec","sleep 1000"]
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
to minikube.
The elasticsearch pod and service are running. However, the webapp cannot access the elasticsearch cluster as I get a connection refused error when curling from within the webapp pod -> curl: (7) Failed to connect to 10.108.5.31 port 9200: Connection refused. Does anyone know what the reason for this problem is and how to fix it?
In the elasticsearch section, you have a shell command that just sleeps, and no Elasticsearch instance is ever started after that.
command: ["/bin/sh", "-ec","sleep 1000"]
So it looks like no Elasticsearch process is running inside the container, which is why the connection is refused.
To Fix:
Remove the command: lines from elasticsearch and es02; that way the image's default command will be used.
Note:
Now, when Elasticsearch starts, you will face two errors (described below) with this compose yaml in Kubernetes. They are unrelated to this post, but I will try to point you in the right direction.
ERROR: [2] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Here you need to update the host system's vm.max_map_count. Exec into the minikube VM with minikube ssh and run sudo -s sysctl -w vm.max_map_count=262144 to change the setting on the host kernel. This works because containers do not provide kernel-level isolation.
For minikube,
minikube ssh 'sudo -s sysctl -w vm.max_map_count=262144'
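To confirm the change took effect, a quick check (same minikube assumption):
minikube ssh 'sysctl vm.max_map_count'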
ulimits is not supported by kompose (see the issue linked here). So you either have to remove bootstrap.memory_lock=true from both environment: sections, or you may need to update the Docker image. This question has already been asked here on Stack Overflow.
So the improved kompose yaml (works well on minikube):
version: '3'
services:
  webapp:
    build:
      context: ../../../
      dockerfile: config/docker/dev/Dockerfile-dev
    container_name: myWebApp-dev
    command: ["/bin/sh", "-ec","sleep 1000"]
    image: 'localhost:5002/webapp:1'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_HOST=elasticsearch
    labels:
      kompose.image-pull-policy: 'IfNotPresent'
      kompose.service.type: nodeport
    ports:
      - "4000:4000"
      - "3000:3000"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch,es02
      - cluster.name=docker-cluster
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata02:/usr/share/elasticsearch/data
However, I would suggest following the official Elasticsearch documentation instead of using compose to install Elasticsearch in Kubernetes.
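If you do stay with kompose, a typical workflow (a sketch; the file name docker-compose.yaml and the output directory k8s/ are assumptions) is to convert the compose file into Kubernetes manifests and apply them:

# generate Kubernetes manifests from the compose file into k8s/
kompose convert -f docker-compose.yaml -o k8s/
# apply the generated manifests to the current cluster (minikube here)
kubectl apply -f k8s/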

Kibana fails to connect to Elasticsearch on docker

I am following https://www.elastic.co/guide/en/elasticsearch/reference/6.5/docker.html
and
https://www.elastic.co/guide/en/kibana/6.5/docker.html
But it does not seem to work well with Kibana; ES works fine.
I tried starting Kibana alone, but in the end I added it to one docker-compose file.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
The kibana.yml is:
server.host: "0.0.0.0"
server.name: "kibana"
elasticsearch.url: http://elasticsearch:9200
I get following error:
kibana_1 | {"type":"log","#timestamp":"2019-06-11T08:55:30Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
The kibana container isn't on the same network as the two elasticsearch containers: it doesn't have a networks: block and so is on an automatically-created default network, but the two elasticsearch containers are on an explicitly-declared esnet network. Since they're not on the same network, inter-container DNS doesn't work.
I'd suggest just deleting all of the networks: blocks and using the default network Docker Compose creates for you. If you want an explicit named network, copy the same networks: [esnet] lines into the kibana: service block.
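For the second option, a minimal sketch of the kibana service joined to the same network (assuming the rest of the file stays as above):

  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.4
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    networks:
      - esnet   # same network as the elasticsearch containers, so the hostname "elasticsearch" resolves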

cadvisor, elasticsearch, docker: no Elasticsearch node available

I'm trying to connect cadvisor to elasticsearch with docker and I'm getting the error:
cadvisor.go:113] Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: "elasticsearch:2.3.3"
    container_name: "elasticsearch"
    ports:
      - "9200:9200"
  kibana:
    image: "kibana:4.5.1"
    container_name: "kibana"
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  cadvisor:
    image: "google/cadvisor:latest"
    container_name: "cadvisor"
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    links:
      - elasticsearch
    restart: always
    command: -storage_driver="elasticsearch" -storage_driver_es_host="http://elasticsearch:9200"
If I change the command to
command: -storage_driver="elasticsearch" -storage_driver_es_host="http://172.22.0.5:9200"
everything works just fine. Any ideas?
What you are missing is an index pattern in Elasticsearch; unfortunately this is not well documented.
Go to your Kibana dashboard, open Dev Tools, and send this request:
PUT /.kibana/index-pattern/cadvisor
{"title" : "cadvisor", "timeFieldName": "container_stats.timestamp"}
