When I am trying to start Kibana I am facing the following issue. I first restarted my Elasticsearch server and it was running successfully. After starting Elasticsearch I tried to start Kibana, but with no luck.
{"name":"Kibana","hostname":"ABCD","pid":3848,"level":30,"msg":"Elasticsearch is still initializing the kibana index... Trying again in 2.5 second.","time":"2015-07-03T07:35:34.936Z","v":0}
Thanks in advance
The curl -XDELETE http://localhost:9200/.kibana command works fine; however, you lose all your Kibana settings (indexes, graphs, dashboards). I solved the problem without losing my data by just querying and updating the index. For example:
curl -s http://localhost:9200/.kibana/_recovery?pretty
curl -XPUT 'localhost:9200/.kibana/_settings' -d '
{
  "index" : {
    "number_of_replicas" : 0
  }
}'
Then start Kibana, it should work.
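To double-check before starting Kibana, you can verify that the replica setting was applied and that the index health is no longer red (a hedged sketch, assuming the same localhost:9200 endpoint used above):

```shell
# confirm number_of_replicas is now "0"
curl -s 'http://localhost:9200/.kibana/_settings?pretty'

# on a single-node setup, zero replicas should let the index report "green"
curl -s 'http://localhost:9200/_cluster/health/.kibana?pretty'
```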
Gael Le Moellic
Warning: Removing the .kibana index will make you lose all your Kibana settings (indexes, graphs, dashboards).
This behavior is sometimes caused by an existing .kibana index.
Kindly delete the .kibana index in Elasticsearch using the following command:
curl -XDELETE http://localhost:9200/.kibana
After deleting the index, restart Kibana.
If the problem still persists and you are willing to lose any existing data, you can try deleting all indexes using the following command:
curl -XDELETE http://localhost:9200/*
Followed by restarting Kibana.
Note: localhost:9200 is the Elasticsearch server's host:port, which may be different in your case.
Sometimes you need to wait a few minutes after restarting ES.
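Rather than guessing how long to wait, a small polling loop (assuming the default localhost:9200 endpoint) can watch the cluster health until it leaves red:

```shell
# block until cluster health reports yellow or green, then it is safe to start Kibana
until curl -s 'http://localhost:9200/_cluster/health' | grep -qE '"status":"(yellow|green)"'; do
  echo 'Elasticsearch still initializing...'
  sleep 5
done
```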
This can also be connected with low disk space; observed on an AWS t2.small machine running the ELK stack.
Something is wrong with your Kibana index inside Elasticsearch.
I had the same message; I just deleted my Kibana index, and when I restarted Kibana, a new index was created by the service.
Related
Hi, I am new to Elassandra. I want to set it up (Windows 10) and run queries against the Elasticsearch URL on stored documents. I have installed Elassandra and started it, and it is working fine, but I am not able to access the Elasticsearch URL. I also tried to configure host and http.port in elasticsearch.yml, but it did not work.
From bin I am running cassandra -f -e. There is no error in the logs, but I am still not able to access ES on localhost:9200.
Please help me out on the steps.
Thanks in advance.
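A quick way to narrow this down (hypothetical commands, assuming the default 9200 port) is to check whether anything is listening on the port at all before editing elasticsearch.yml:

```shell
# on Windows 10, check whether port 9200 is bound at all
netstat -an | findstr 9200

# if it is, this should return the Elasticsearch cluster banner JSON
curl -s http://localhost:9200/
```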
I have a stack of Filebeat -> Logstash -> Elasticsearch -> Kibana. Recently I had an issue with the server disk space, and my logs weren't written for one day. Once I got the issue fixed, I couldn't see my logs for that day when the whole stack wasn't working.
This command doesn't show my custom index, only system ones.
$ curl -XGET localhost:9200/_cat/indices?pretty | grep 2019.10.17
What should I do to create the index and restore/push the logs to it?
Is it an issue with Logstash or Elasticsearch?
Looks like logstash.
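One possibility worth ruling out (a sketch, not confirmed from the question): when a node runs out of disk, Elasticsearch can put indices into a read-only state via the flood-stage watermark, and that block has to be cleared manually after space is freed, or writes keep failing:

```shell
# after freeing disk space, clear the read-only block on all indices
curl -XPUT 'localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '
{
  "index.blocks.read_only_allow_delete": null
}'
```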
I am trying to reindex from Elasticsearch 1.0 to Elasticsearch 5.0 directly, using the reindex-from-remote option.
Both versions are installed on the remote system, running on ports 9200 and 9201 respectively.
I have followed the steps for reindexing from remote. First I created a snapshot of the data in Elasticsearch 1.0. The mapping for the data was created in Elasticsearch 5.0 with a new index name. But whenever I try to post the JSON document using the curl command:
curl -XPOST "localhost:9201/_reindex" -d @reindex.json
{
  "source": {
    "remote": {
      "host": "localhost:9200",
      "index" : "customer"
    }
  },
  "dest": {
    "index": "new_customer"
  }
}
I am getting an error like this.
Please help me resolve the issue.
Please copy & paste the error messages instead of posting a screenshot in the future.
Your screenshot shows that Elasticsearch actually returns a useful error message: you did not specify a scheme for the hostname. A scheme here means you have to specify http or https as part of the hostname.
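For reference, a corrected request (same hosts and index names as in the question) would look like this; the scheme is added, and note that in reindex-from-remote the index belongs at the source level, next to remote, rather than inside it:

```shell
curl -XPOST 'localhost:9201/_reindex' -H 'Content-Type: application/json' -d '
{
  "source": {
    "remote": {
      "host": "http://localhost:9200"
    },
    "index": "customer"
  },
  "dest": {
    "index": "new_customer"
  }
}'
```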
Answering because I lack the reputation to comment.
The following probably isn't the cause of your error, but it'll help you once you get past it.
A snippet from ES documentation:
A snapshot of an index created in 2.x can be restored to 5.x.
A snapshot of an index created in 1.x can be restored to 2.x.
A snapshot of an index created in 1.x can not be restored to 5.x.
To restore a snapshot of an index created in 1.x to 5.x you can restore it to a 2.x cluster and use reindex-from-remote to rebuild the index in a 5.x cluster.
Link to documentation
Kibana 4.3 has great features for importing/exporting dashboards, searches, and visualizations. However, the related index patterns are not contained in the generated export.json file. When importing an export.json file into another Kibana index, Kibana reports errors like Could not locate that index-pattern-field (id: <index-pattern name>).
How do you migrate kibana's index-patterns from one Elasticsearch instance to another?
Thanks,
Nathan
From the official documentation (emphasis added)
Exported dashboards do not include their associated index patterns. Re-create the index patterns manually before importing saved dashboards to a Kibana instance running on another Elasticsearch cluster.
Since index patterns are saved in the .kibana index just like anything else, instead of having to recreate them manually you can copy them with an ad hoc tool such as elasticdump, like this:
elasticdump \
--input=http://host1:9200/.kibana \
--input-index=.kibana/index-pattern \
--output=http://host2:9200/.kibana \
--output-index=.kibana/index-pattern \
--type=data
You could also use snapshot/restore on your .kibana index
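A rough sketch of that snapshot/restore route (hypothetical repository name and path; the location must be whitelisted under path.repo in elasticsearch.yml on both clusters):

```shell
# on the source cluster: register a filesystem repository and snapshot only .kibana
curl -XPUT 'http://host1:9200/_snapshot/kibana_backup' -d '
{ "type": "fs", "settings": { "location": "/mnt/es_backups" } }'

curl -XPUT 'http://host1:9200/_snapshot/kibana_backup/snap1?wait_for_completion=true' -d '
{ "indices": ".kibana" }'

# on the target cluster: register the same repository, then restore;
# note the restore fails if an open .kibana index already exists on the target
curl -XPOST 'http://host2:9200/_snapshot/kibana_backup/snap1/_restore' -d '
{ "indices": ".kibana" }'
```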
For anyone trying to migrate an AWS elasticsearch instance from one cluster to a new one... (hope this helps...)
I had a similar problem to the OP (I was trying to migrate data from one AWS Elasticsearch instance to a new one, using the AWS instructions).
For some reason, restoration of the cluster would fail with the following cryptic error:
"cannot restore index [.kibana] because it's open"
After much googling and head scratching, I decided it would be easier to migrate the .kibana index separately from the rest of the indices.
I tried @Val's awesome suggestion to use elasticdump; however, @Val's example didn't work for me.
I ended up basing my command on the example from the elasticdump README:
elasticdump \
--input=https://search-some-prod-instance.ap-southeast-2.es.amazonaws.com/.kibana \
--output=https://search-other-prod-instance.ap-southeast-2.es.amazonaws.com/.kibana \
--type=data
After running this command, the index patterns from my old Kibana were available in the new Kibana (finally :p).
NB: I also used Kibana's Management -> Saved Objects export/import to migrate my visualisations, searches, dashboards, etc.
Kibana 3 works successfully when Elasticsearch is on a different machine, by setting elasticsearch: "http://different_machine_ip:9200" in Kibana 3's config.js.
Now I want to run all three of them on my local machine for testing. I'm using Windows 7 and the Chrome browser. I installed Kibana 3 on Tomcat 7 and started the embedded Elasticsearch from the Logstash jar file.
I set the Elasticsearch location to "localhost:9200", "127.0.0.1:9200", or "computer_name:9200". When I check Kibana 3 in the browser, the Elasticsearch query (revealed by inspecting the requests) has no logstash index:
curl -XGET 'http://localhost:9200//_search?pretty' -d ''
As you can see, the index part is empty, showing // only. The expected query should look like this:
curl -XGET 'http://localhost:9200/logstash-2013.08.13/_search?pretty' -d 'Some JSON Data'
The browser is able to call the Elasticsearch API successfully. For example, typing http://localhost:9200/logstash-2013.08.13/_mapping?pretty=true in the address bar returns the mapping of the logstash index. This proves that there is no problem connecting to Elasticsearch.
The problem here is that the index is empty from Kibana query. Why is the index empty?
Kibana 3 works differently from Kibana 1 and 2: it runs entirely in the browser.
The config file is read by JavaScript and executed in your browser, so localhost:9200 tells Kibana to look for Elasticsearch running on the laptop in front of you, not on the server.
BTW, recent versions of Logstash have Kibana bundled, so you don't have to host it independently.