Getting "Kibana server is not ready yet" when running from docker - elasticsearch

I'm trying to run Elasticsearch and Kibana via Docker, and I'm getting errors with Kibana.
I'm using Elasticsearch and Kibana version 7.6.2
on Ubuntu 18.04.6 LTS.
I run Elasticsearch with the following command:
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
Elasticsearch seems to be up (I can bulk-index documents and get information about the index from Python code).
I'm running Kibana with the following commands:
docker network create elastic
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://127.0.0.1:9200" docker.elastic.co/kibana/kibana:7.6.2
I see the following message in the web browser: Kibana server is not ready yet
And I see the following logs in the console:
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["info","savedobjects-service"],"pid":7,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nHEAD http://127.0.0.1:9200/.apm-agent-configuration => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","admin"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
Could not create APM Agent configuration: No Living connections
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
How can I run Kibana via Docker?

Did you try enrolling Kibana to your Elasticsearch cluster?
The enrollment token is valid for 30 minutes. If you need to generate a new enrollment token, run the elasticsearch-create-enrollment-token tool on your existing node. This tool is available in the Elasticsearch bin directory of the Docker container.
For example, run the following command on the existing es01 node to generate an enrollment token for newer nodes to be added:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
When you start Kibana, a unique link is output to your terminal.
To access Kibana, click the generated link in your terminal.
Then in your browser, paste the enrollment token that you copied when starting Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.
Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.
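For enrolling Kibana specifically (rather than adding another Elasticsearch node), the same tool can also generate a Kibana-scoped token. A minimal sketch, assuming the same es01 container name used in the example above:
# Generate an enrollment token scoped to Kibana instead of a new node
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana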
More details here

You've created a docker network for the Kibana container, but the Elastic container is not joined to it. Since you can access Elastic from your localhost:9200, there is no need to use the elastic network for the Kibana container.
Update the Kibana docker run command to:
docker run -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://host.docker.internal:9200" docker.elastic.co/kibana/kibana:7.6.2
This removes the join to the elastic network and updates the ELASTICSEARCH_HOSTS environment variable so that it points at the host machine rather than the Kibana container's own localhost.
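Note that host.docker.internal is not resolvable by default on Linux hosts such as Ubuntu 18.04 (on Docker 20.10+ you can pass --add-host=host.docker.internal:host-gateway to enable it). An alternative sketch is to keep the elastic network and attach both containers to it, so Kibana reaches Elasticsearch by container name; the name es01 here is an arbitrary choice:
# Run both containers on the same user-defined network (skip the create if it already exists)
docker network create elastic
docker run --name es01 --net elastic -p 127.0.0.1:9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.6.2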

Related

Can't access Elastic Search after installing using Docker Desktop on Mac

I'm on Mac, trying to run Elasticsearch using Docker Desktop. Below are the commands I ran. I have no problem running queries in Kibana; the problem is that I can't connect to Elasticsearch via localhost:9200. Please help!!
$docker network create elastic
$docker pull docker.elastic.co/elasticsearch/elasticsearch:8.1.2
$docker run --name es-node01 --net elastic -p 9200:9200 -p 9300:9300 -t docker.elastic.co/elasticsearch/elasticsearch:8.1.2
$docker pull docker.elastic.co/kibana/kibana:8.1.2

$docker run --name kib-01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.1.2
Please remember to be specific in your questions. "Can't connect" - what happens when you try to connect?
If what you get is
curl: (52) Empty reply from server
The problem is that Elasticsearch 8.0+ turns security on by default, so you need a certificate and a password.
Full instructions are on the elasticsearch website
But the two steps that are new are:
Copy the certificate:
docker cp es-node01:/usr/share/elasticsearch/config/certs/http_ca.crt .
and use the certificate and the password with curl:
curl --cacert http_ca.crt -u elastic https://localhost:9200
You will be prompted for a password. The password is printed when your Elasticsearch instance starts. Yes, it's buried in the log output, but it is surrounded by blank lines, so it's not hard to find. It looks like:
-> Elasticsearch security features have been automatically configured!
-> Authentication is enabled and cluster connections are encrypted.
-> Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
PLoSd-3iJTncwmdSAwaku
-> HTTP CA certificate SHA-256 fingerprint:
722460137abbd54249a056698d4ac3d05495de9c18e7ac4aba9e3e07814fe3c79
There is an extra step, and the same user (elastic) and password are used to access Kibana.
Directions here: https://www.elastic.co/guide/en/kibana/current/docker.html
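If the password or the Kibana enrollment token has scrolled out of your terminal, a sketch of how you might regenerate them (using the es-node01 container name from the question; both tools ship in the Elasticsearch bin directory):
# Reset the password for the elastic user
docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
# Generate a fresh enrollment token for Kibana
docker exec -it es-node01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana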

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11) and I am using Filebeat to automatically collect the logs.
The Kibana dashboard is up and the Elasticsearch and Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I do not see any data arriving at the Logstash listener on port 5044.
I found on the Elastic forums that the following iptables command should resolve my issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives at the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml and logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I have followed to debug this issue are:
Check if Kibana is coming up,
Check if Elastic API's are working,
Check if Logstash is accessible from Filebeat.
All three checks passed in my case. I then raised the log level in filebeat.yml and found a "Permission Denied" error when Filebeat accessed the Docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to permissive mode with the following command:
sudo setenforce Permissive
After this, the ELK stack started to sync the logs.
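A brief note on that fix: setenforce only changes the SELinux mode until the next reboot. A sketch of how you might check the current mode and make the change persistent on a RHEL/CentOS-style node (assuming the standard /etc/selinux/config location):
# Show the current SELinux mode
getenforce
# Make permissive mode survive a reboot
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config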

Kibana: Unable to revive connection: http://elastic-url:9200/

I installed on Centos8:
elasticsearch version 7.3.1
kibana version 7.3.1
curl -I localhost:9200/status is ok
curl -I localhost:5601/status --> kibana is not ready yet
On a machine with CentOS 7 (.226) everything is OK.
This is the Kibana log:
Can somebody help me, please?
Elasticsearch 7.x requires cluster bootstrapping at first launch, and Kibana won't start unless Elasticsearch is ready and every node is running Elasticsearch 7.x.
I will write the steps you would normally follow on a real machine, so that anybody else can do the same. In Docker it looks similar, except that you work inside the containers.
Before we kick off, stop kibana and elasticsearch:
service kibana stop
service elasticsearch stop
killall kibana
killall elasticsearch
Make sure it's dead:
service kibana status
service elasticsearch status
Then head into /etc/elasticsearch/ and edit the elasticsearch.yml file. Add at the end of the file:
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
Where master-* equals node.name on each node. Save and exit. Start Elasticsearch and then Kibana. On machines with less memory (~4 GB, and probably in Docker too, as it normally gives 4 GB of memory to containers) you may have to start Kibana first, let it "compile", stop it, start Elasticsearch, and then start Kibana again.
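If you are only running a single node (as in the Docker commands earlier on this page), a simpler sketch is to skip the master list and declare the node standalone in elasticsearch.yml; this assumes a one-node cluster, not the multi-master setup above:
# Single-node clusters can bypass the bootstrap master list entirely
discovery.type: single-node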
On machines managed by Puppet, make sure that Puppet or cron is not running, just so Kibana/Elasticsearch is not started too early.
Here's source: https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-discovery-bootstrap-cluster.html

NodePort Service not accessible from webapp inside minikube cluster, but from outside

I have installed a kubernetes elasticsearch (v. 7.0.1) environment with a deployment and service using type NodePort running on minikube. When I hit kubectl get services, I get the relevant line:
elasticsearch NodePort 10.101.5.85 <none> 9200:31066/TCP 27m
If I do
$curl http://$(minikube ip):31066
I get the usual elasticsearch page. If, however, I do
root@webapp-5489d8d6fd-2ml2w:/# curl http://localhost:9200
as root in a webapp pod on the same cluster, I get this error:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Can anyone hint at the reason for my problem?
First of all, your elasticsearch service is of type NodePort with ports 9200:31066/TCP.
That means Elasticsearch is exposed on service port 9200, and the NodePort is 31066.
1) curl http://$(minikube ip):31066
The minikube IP is your node IP; you can verify this with $ kubectl describe node. So if you use port 31066, it connects correctly.
2) curl http://localhost:9200
You did not provide any information about other Deployments or pods, so I assume you have an Elasticsearch Deployment with a pod.
If you execute $ curl http://localhost:9200 inside the Elasticsearch container, it will work, because Elasticsearch is running locally inside that container.
If you want to curl from another (non-Elasticsearch) pod, you have to use the service you created, with the Elasticsearch port:
$ curl elasticsearch:9200 or $ curl 10.101.5.85:9200
From other containers you can also curl using the node IP with the NodePort:
$ curl $(minikube ip):31066, the same as in point 1.
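A quick way to verify this from inside the cluster is to run a throwaway curl pod against the service name; this is just a sketch, where curl-test and the curlimages/curl image are arbitrary choices and elasticsearch is the service name from the kubectl get services output above:
# Start a temporary pod, curl the service, then clean up automatically
$ kubectl run curl-test -it --rm --restart=Never --image=curlimages/curl -- curl http://elasticsearch:9200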
Useful links:
https://gardener.cloud/050-tutorials/content/howto/service-access/
Hope it helps!

How to setup Elasticsearch server on remote Ubuntu server?

I have purchased a 1 GB Ubuntu server to deploy my Elasticsearch application.
I followed the guide below to deploy Elasticsearch server.
Link to guide
Now whenever I try to access the Elasticsearch server using a curl command, it shows the following error:
curl: (7) Failed to connect to 0.0.0.0 port 9200: Connection refused
Here is the curl command I tried
curl -XGET '0.0.0.0:9200/?pretty'
Which step could I have missed or is not shown in the guide?
Thank you
Is your elasticsearch service running?
Check with the following command:
systemctl status elasticsearch
If it is not running, try to start it with:
systemctl start elasticsearch
After a few minutes, check whether it is still running or has crashed using systemctl status elasticsearch. If it has crashed, please add more details to your question.
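If the service keeps dying, a sketch of what you might check next (the elasticsearch unit name matches the systemctl commands above; on a 1 GB machine, an over-sized JVM heap is a common cause of crashes):
# Look at the most recent service logs for the crash reason
sudo journalctl -u elasticsearch --no-pager | tail -n 50
# Once it is running, test against localhost rather than 0.0.0.0
curl -XGET 'http://localhost:9200/?pretty'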
