Elasticsearch Kibana is in Red Status - elasticsearch

Hi, when I access Kibana I get this:
Status: Red
Does anyone know what is wrong with Elasticsearch/Kibana, or what I should do to fix it?
PS: I am using Elasticsearch 5.1.1.

Kibana is unable to communicate with Elasticsearch. The Kibana logs should give you more insight into the issue.
The elasticsearch section of kibana.yml defines how Kibana connects to Elasticsearch: elasticsearch.url specifies the URL of the Elasticsearch instance, and that URL must be reachable from the machine running Kibana.
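A minimal kibana.yml for this setup might look as follows (the URL and port are assumptions; use whatever host and port your Elasticsearch listens on, and note that in Kibana 7.0+ this setting was renamed elasticsearch.hosts):

```yaml
# kibana.yml -- Kibana 5.x/6.x setting names
server.port: 5601
elasticsearch.url: "http://localhost:9200"
```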
For example, using cURL:
> curl http://localhost:9200
{
  "name" : "sRp21hL",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "ZGmAUJMwRP-txHNr73vcNg",
  "version" : {
    "number" : "7.0.0-alpha1",
    "build_hash" : "ba9e2e4",
    "build_date" : "2018-01-23T14:08:40.916181Z",
    "build_snapshot" : true,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "6.3.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
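The same reachability check can be scripted; here is a small Python sketch (the URL is an assumption — use the value of elasticsearch.url from your kibana.yml):

```python
# Probe the Elasticsearch root endpoint the same way Kibana's status check does.
import json
from urllib import error, request

def probe(url="http://localhost:9200", timeout=5):
    """Return (True, parsed JSON) if the endpoint answers, else (False, reason)."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return True, json.load(resp)
    except (error.URLError, OSError) as exc:
        return False, str(exc)

ok, info = probe()
print("reachable:", ok)
```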

We resolved this issue with the steps below:
Run the command below to find the processes listening on Kibana's port:
Command: lsof -i tcp:5601 (5601 is Kibana's default port; substitute yours if you changed it)
Kill those processes.
Restart Kibana.
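The steps above can be sketched in Python (a rough sketch assuming lsof is available; 5601 is Kibana's default port — adjust if yours differs, and killing the processes may need elevated privileges):

```python
# Find whatever is listening on a TCP port so it can be terminated
# before restarting Kibana.
import os
import signal
import subprocess

def pids_on_port(port):
    """Return the PIDs listening on the given TCP port ([] if none, or if lsof is missing)."""
    try:
        out = subprocess.run(["lsof", "-t", "-i", f"tcp:{port}"],
                             capture_output=True, text=True)
    except FileNotFoundError:  # lsof not installed
        return []
    return [int(p) for p in out.stdout.split()]

def free_port(port):
    """Politely terminate every listener on the port (SIGKILL only as a last resort)."""
    for pid in pids_on_port(port):
        os.kill(pid, signal.SIGTERM)

print(pids_on_port(5601))
# ...then restart Kibana, e.g. `systemctl restart kibana` on systemd hosts.
```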

Related

Elasticsearch is running but I still get no reply on curl localhost:9200

I am using Elasticsearch in my Laravel project via the elasticquent package. I followed this specific tutorial:
Laravel Elastic Search
Before that, I installed Elasticsearch on my OS (Ubuntu) by following the documentation.
After setting everything up I got a "no reply" error. After spending some time on the error I came across a solution which says I have to make changes in my elasticsearch.yml file.
Changes I made (elasticsearch.yml is YAML, so the values are set with colons):
cluster.name: <my-cluster-name>
node.name: <my-node-name>  # basically my system name
I got the values for the fields above from the following command:
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
which gives a response as follows:
{
  "name" : "Cp8oag6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "8.4.1",
    "build_type" : "tar",
    "build_hash" : "f27399d",
    "build_flavor" : "default",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "9.3.0",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}
So after doing all the setup and following the tutorial, when I visit my search URL I get
No nodes alive
and if I try curl -X GET localhost:9200 it gives a "no reply" error.
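For reference, the cluster.name and node.name values mentioned in the question come straight from that root-endpoint JSON response; a minimal illustration (body abbreviated to the two relevant fields):

```python
import json

# Abbreviated root-endpoint response; the full body also carries version info.
body = '{"name": "Cp8oag6", "cluster_name": "elasticsearch"}'
info = json.loads(body)

print("cluster.name:", info["cluster_name"])  # value for cluster.name
print("node.name:", info["name"])             # value for node.name
```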

Basic authentication with hey tool fails

I have an elasticsearch cluster which I can hit as follows:
▶ curl -k https://localhost:9200 -u username:password
{
  "name" : "elasticsearch-sample-es-master-1",
  "cluster_name" : "elasticsearch-sample",
  "cluster_uuid" : "XXXXXXXXXXXXXXXXXXX",
  "version" : {
    "number" : "7.11.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "8ced7813d6f16d2ef30792e2fcde3e755795ee04",
    "build_date" : "2021-02-08T22:44:01.320463Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
(I have port-forwarded the service locally)
However when hitting it with hey
hey https://localhost:9200 -a username:password
Status code distribution:
[401] 200 responses
Any ideas?
You can embed the credentials in the URL instead:
hey https://username:password@localhost:9200
That works! hey parses its options with Go's flag package, so flags such as -a must come before the URL; placed after it they are silently ignored, which would explain the 401 responses.
Note: there are open issues about this in the official hey repository, https://github.com/rakyll/hey/issues/150
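To rule out the credentials themselves, the Basic-auth header such a client should send can be built and inspected by hand (username/password are the placeholders from the question; a sketch, not a full client):

```python
import base64
from urllib import request

# Build the same Authorization header a Basic-auth client sends.
token = base64.b64encode(b"username:password").decode("ascii")
req = request.Request("https://localhost:9200",
                      headers={"Authorization": "Basic " + token})

print(req.get_header("Authorization"))  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=
# The request is not actually sent here: opening it would additionally need
# the cluster's CA certificate (the curl above used -k to skip verification).
```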

Launching Elastic Kibana - internal server 500 error - [illegal_argument_exception] application privileges must refer to at least one resource

I launched Kibana in my Elastic Cloud account and see this message. Why can I not log in to my Kibana account? I restarted my deployment and still see the same error.
If it is relevant, I should add that there is an issue with my Elasticsearch deployment: it is reported as "unhealthy".
However, when I query the Elasticsearch instance, I get an apparently healthy response:
{
  "name" : "instance-0000000003",
  "cluster_name" : "d972<hidden for privacy>665aee2",
  "cluster_uuid" : "9IOP<hidden for privacy>iflw",
  "version" : {
    "number" : "7.5.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "8bec5<hidden for privacy>580cd1afcdf",
    "build_date" : "2020-01-15T12:11:52.313576Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
If you stop the Kibana server and restart it, the default space document will be recreated.
You get this error if the .kibana index is deleted while Kibana is running. It seems something is removing your .kibana index; in my case it was Curator, so I added these two lines to Curator's action_file.yml under actions/1/filters:
---
actions:
  1:
    filters:
    - filtertype: kibana
      exclude: True

Elasticsearch remote connection from a Spring Boot application

I have a Spring Boot application. If I use my local Elasticsearch I can connect, but if I use the remote Elasticsearch I cannot.
spring:
  data:
    elasticsearch:
      cluster-name: es_psc
      cluster-nodes: 100.84.57.2:9300
If I open http://100.84.57.2:9200/ in a browser, I can see the details:
{
  "name" : "i58Q9JC",
  "cluster_name" : "es_psc",
  "cluster_uuid" : "EKeTJwviQvWeTzbS1h1w4w",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
But when I run my Spring Boot application, it gives the error below:
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{6SiLGoTHTmmiuzBTvweb3A}{100.84.57.2}{100.84.57.2:9300}]
You are using the wrong port in your config file. You should be using port 9200, which is meant for REST communication.
Port 9300 is the transport port, used for communication between Elasticsearch nodes.
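If you move to the REST client instead of the transport client, the equivalent configuration would look roughly like this (property name assumes Spring Boot 2.x with the high-level REST client; verify against your Spring Data Elasticsearch version):

```yaml
spring:
  elasticsearch:
    rest:
      uris: "http://100.84.57.2:9200"
```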

X-Pack 401: Logstash can't connect to Elasticsearch

I changed the JVM heap size, and after restarting Elasticsearch, Logstash can't connect to it.
logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["es:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: pass
Logstash logs:
[WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://es:9200/'"}
curl from the Logstash host to Elasticsearch with the same credentials:
logstash: curl -ulogstash_system es:9200
Enter host password for user 'logstash_system':
{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "",
  "version" : {
    "number" : "7.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "",
    "build_date" : "2019-11-26T01:06:52.518245Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
