License information could not be obtained from Elasticsearch for the [data] cluster - elasticsearch

I have installed Elasticsearch 7.5.1 and the same version of Kibana. My ES cluster seems fine, but Kibana is not able to connect to Elasticsearch.
My kibana.yml is as below:
server.port: 5601
server.host: "<IP of the kibana instance>"
server.name: "<Name of the kibana instance>"
elasticsearch.hosts: [ "https://<IP of ES instance 1>:443" , "https://<IP of ES instance 2>:443" ]
elasticsearch.username: "<kibana_user>"
elasticsearch.password: "<kibana_user_password>"
server.ssl.enabled: true
server.ssl.certificate:
server.ssl.key:
xpack.security.enabled: true
xpack.reporting.kibanaServer.port: 443
xpack.reporting.kibanaServer.protocol: https
elasticsearch.ssl.certificateAuthorities: [ "" ]
elasticsearch.ssl.verificationMode: certificate
logging.dest: /etc/kibana/log/kibana.log
I have tried both kibana_oss and the non-OSS package, but I get the same error.

This can happen if the ES cluster's master node has not been set up correctly.
Test the ES cluster health first:
curl <ES_IP:PORT>/_cluster/health?pretty
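As a hedged illustration (this uses a sample JSON body, not output from a live cluster), here is roughly what a healthy response looks like and how to read the status field out of it:

```shell
# Sample _cluster/health body; in practice capture the real one with:
#   response=$(curl -s "<ES_IP:PORT>/_cluster/health?pretty")
response='{"cluster_name":"es_cluster","status":"green","number_of_nodes":2}'

# Extract the "status" field: green/yellow is usable, while red means the
# cluster (often master election or shard allocation) is broken, which
# matches the "License information could not be obtained" symptom.
status=$(printf '%s' "$response" | sed 's/.*"status" *: *"\([a-z]*\)".*/\1/')
echo "cluster status: $status"
```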


Couldn't configure Elastic Retry or update the kibana.yml file manually

I am trying to configure Kibana but I get the following error:
Couldn't configure Elastic Retry or update the kibana.yml file manually.
This is my /etc/kibana/kibana.yml:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
server.host: '127.0.0.1'
kibana.index: ".kibana"
elasticsearch.username: "user"
#elasticsearch.password: "pass"
xpack.encryptedSavedObjects.encryptionKey: 706c88e045c127e21b81c902425cdb54
xpack.reporting.encryptionKey: d67296d7d4958bdd1594e965e6b97ab9
xpack.security.encryptionKey: d496d7cb6a5983c213f7902767069744
xpack.encryptedSavedObjects.encryptionKey: 706c88e045c127e21b81c902425cdb54
xpack.reporting.encryptionKey: d67296d7d4958bdd1594e965e6b97ab9
xpack.security.encryptionKey: d496d7cb6a5983c213f7902767069744
How can I fix this error?
Please help me!
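One likely culprit (an assumption here, since the full Kibana log is not shown) is that the three xpack.*.encryptionKey settings appear twice; Kibana's YAML parser rejects duplicated mapping keys. A cleaned-up fragment would keep each key exactly once and set the password:

```yaml
server.port: 5601
server.host: '127.0.0.1'
kibana.index: ".kibana"
elasticsearch.username: "user"
elasticsearch.password: "pass"  # uncommented; required when a username is set
# Each encryption key appears exactly once.
xpack.encryptedSavedObjects.encryptionKey: 706c88e045c127e21b81c902425cdb54
xpack.reporting.encryptionKey: d67296d7d4958bdd1594e965e6b97ab9
xpack.security.encryptionKey: d496d7cb6a5983c213f7902767069744
```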

How to reset X-Pack security in Elasticsearch?

I want to reset the ID and password of Elasticsearch and Kibana.
I tried to reset them, but an error occurred, as below.
ubuntu#elk:/usr/share/elasticsearch/bin$ sudo ./elasticsearch-setup-passwords auto
Connection failure to: http://10.0.10.4:9200/_security/_authenticate?pretty failed: Connection refused
ERROR: Failed to connect to elasticsearch at http://10.0.10.4:9200/_security/_authenticate?pretty. Is the URL correct and elasticsearch running?
My elasticsearch.yml file:
xpack.security.enabled: true
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
My kibana.yml file:
server.port: 5601
server.host: 0.0.0.0
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "dlswp12"
#xpack.security.encryptionKey: "something_at_least_32_characters"
#xpack.security.sessionTimeout: 600000
#xpack.monitoring.enabled: false
How do I reset the X-Pack security credentials (ID & password) in Elasticsearch?
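For what it's worth, the "Connection refused" in the output above means nothing was listening on 10.0.10.4:9200, so elasticsearch-setup-passwords never reached the node at all. A hedged command sequence for a systemd-based install (assuming the service is named elasticsearch) would be:

```shell
# Start Elasticsearch and confirm it is actually listening before the reset.
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch --no-pager

# The JSON banner should come back once the node is up (can take ~30s).
curl -s http://10.0.10.4:9200/

# Only then run the password tool; it talks to the live node over HTTP.
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
```

These commands act on a live system, so they are a sketch of the order of operations rather than something to paste blindly.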

Kibana configuration for ES with production and monitoring clusters

I referred to this and this.
Why do we need two fields to tell Kibana where the monitoring data is?
elasticsearch.hosts
monitoring.ui.elasticsearch.hosts
But when I set my monitoring cluster at either of these properties, it works. I wrongly assumed that elasticsearch.hosts was my actual production cluster rather than the monitoring cluster.
Apart from the "why" part, is my understanding of these integration attributes correct?
Any thoughts? Thanks.
kibana.yml:
server.host: "ip.ad.re.ss"
#elasticsearch.hosts: ["http://host1:9200","http://host2:9200","FewMoreHosts"]
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]
I haven't changed any part in elasticsearch.yml of monitoring node.
metricbeat.yml:
output.elasticsearch:
  hosts: ["http://MonitoringNode:9200"]
setup.kibana:
  host: kibanaHost
In modules.d/elasticsearch-xpack.yml, I left the default configurations.
elasticsearch.yml:
cluster.name: es_cluster
node.name: master-1
node.data: false
node.master: true
node.ingest: true
node.max_local_storage_nodes: 3
transport.tcp.port: 9300
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["master1.ip", "master2.ip","master3.ip"]
cluster.initial_master_nodes: ["master-1","master-2"]
Monitoring cluster elasticsearch.yml:
network.host: 0.0.0.0
discovery.type: single-node
When I enable both properties in kibana.yml, I get the below error in the log.
{
  "type": "log",
  "@timestamp": "2021-04-21T14:48:34-04:00",
  "tags": ["error", "plugins", "data", "data", "indexPatterns"],
  "pid": 29959,
"message": "Error: No indices match pattern \"metricbeat-*\"\n at createNoMatchingIndicesError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:45:29)\n at convertEsError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:71:12)\n at callFieldCapsApi (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/es_api.js:69:38)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:93:5)\n at getFieldCapabilities (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/field_capabilities/field_capabilities.js:35:23)\n at IndexPatternsFetcher.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/index_patterns_fetcher.js:49:31)\n at IndexPatternsApiServer.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/index_patterns_api_client.js:27:12)\n at IndexPatternsService.refreshFieldSpecMap (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:216:27)\n at IndexPatternsService.getSavedObjectAndInit (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:320:23) {\n data: null,\n isBoom: true,\n isServer: false,\n output: {\n statusCode: 404,\n payload: {\n statusCode: 404,\n error: 'Not Found',\n message: 'No indices match pattern \"metricbeat-*\"',\n code: 'no_matching_indices'\n },\n headers: {}\n }\n}"
}
But if I set only monitoring.ui.elasticsearch.hosts, Kibana shows the data.
The elasticsearch.hosts setting is where you set the hosts that store the data you want to query; this should be your production cluster.
The monitoring.ui.elasticsearch.hosts setting is where you set the hosts of your monitoring cluster, if you have a separate monitoring cluster.
Depending on the size of your cluster, it is recommended to have a separate cluster just for monitoring; this could be a single-node cluster using the basic license, for example.
The security tokens that are used in these contexts are cluster-specific, therefore you cannot use a single Kibana instance to connect to both production and monitoring clusters.
As mentioned in the documentation, I think we need a separate Kibana instance to collect data from both clusters. I am not sure about the security tokens, though.
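To make the split concrete, here is a sketch of the relevant kibana.yml fragment for this layout (the host names are the placeholders from the question, not real endpoints):

```yaml
# Production cluster: the data Kibana queries for dashboards and index patterns.
elasticsearch.hosts: ["http://host1:9200", "http://host2:9200"]

# Dedicated monitoring cluster: where Metricbeat ships the stack metrics.
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]
```

Note that with both keys set, ordinary index patterns such as metricbeat-* are still resolved against the production cluster, which may explain the "No indices match pattern" error when the metricbeat indices exist only on the monitoring node.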

Kibana startup fails with License information and later with Unable to retrieve version information

I tried to follow this guideline for installing ELK on CentOS 8 (on top of an AWS cluster).
After installing Elasticsearch and Kibana, the Kibana startup failed with:
"message":"License information could not be obtained from Elasticsearch
I googled it and realized I should use the OSS version (latest is 7.10.2), so make sure to install only the OSS version; you can use this guideline.
After that, I got a new error in kibana.log:
-08T07:19:32Z","tags":["error","savedobjects-service"],"pid":62767,"message":"Unable to retrieve version information from Elasticsearch nodes."}
I tried to google it, but no solution worked for me.
My kibana.yml:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: "[my public AWS instance ip:9200]"
My elasticsearch.yml:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: "[my private AWS instance ip]"
cluster.initial_master_nodes: "[my private AWS instance ip]"
Update:
If I change this line in the kibana.yml file to:
elasticsearch.hosts: "http://localhost:9200"
then it works. What is the root cause? Why can't it access the Elasticsearch public IP, only the local one?
Per @leandrojmp's comment, the issue was indeed the public IP in elasticsearch.hosts. Once I replaced it with my private IP, it worked.
Also:
When installing the Elastic Stack, you must use the same version across the entire stack. For example, if you are using Elasticsearch 7.9.3, you install Beats 7.9.3, APM Server 7.9.3, Elasticsearch Hadoop 7.9.3, Kibana 7.9.3, and Logstash 7.9.3.
Using Docker, I had to specify elasticsearch.hosts as an environment variable, -e "ELASTICSEARCH_HOSTS=http://localhost:9200", so:
docker run -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://localhost:9200" arm64v8/kibana:7.16.3
Set the elasticsearch.hosts IP address to the local system's host IP address in the kibana.yml file. You also need to mount the local kibana.yml file when running the Docker container.
docker run -d --name kibana -p 5601:5601 -v /home/users/mySystemUserName/config/kibana.yml:/opt/kibana/config/kibana.yml kibana:7.16.3
Add the below configs to kibana.yml:
server.name: kibana
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.0.102:9200" ]

kibana not able to connect to server elasticsearch index - ECONNREFUSED

I have an Elasticsearch server with indexes, running at XX.XXX.XXX.XXX:9200.
I have an index in that ES cluster (XX.XXX.XXX.XXX:9200) for which I am trying to create dashboards in my localhost:5601 (Kibana).
In my kibana.yml I have this configuration:
server.port: 5601
server.host: "localhost"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://XX.XXX.XXX.XXX:9200"
In elasticsearch.yml I have this config:
network.host: 0.0.0.0 # to accept all the IPs
http.port: 9200
But I am getting this error when running Kibana:
connect ECONNREFUSED http://XX.XXX.XXX.XXX:9200
Unable to connect to ElasticSearch at http://XX.XXX.XXX.XXX:9200
Can anyone tell me where I am going wrong here in getting Kibana up and running with the server's ES index?
In your kibana.yml, put this configuration:
server.port: 5601
server.host: "0.0.0.0"
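As a sketch (using the placeholder IP from the question), the combined kibana.yml would look like this; note that elasticsearch.url is the pre-7.0 key the question already uses, while 7.x renamed it to elasticsearch.hosts:

```yaml
server.port: 5601
server.host: "0.0.0.0"
# Pre-7.0 key (as in the question); on 7.x use elasticsearch.hosts instead.
elasticsearch.url: "http://XX.XXX.XXX.XXX:9200"
```

ECONNREFUSED means no process accepted the TCP connection, so it is also worth verifying from the Kibana machine that curl http://XX.XXX.XXX.XXX:9200 succeeds; server.host only controls which interface Kibana itself listens on.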
