I have a Filebeat -> Logstash -> Elasticsearch -> Kibana stack. Recently I had an issue with the server's disk space and my logs weren't being written for one day. Once I got the issue fixed, I couldn't see my logs for that particular day when the whole stack didn't work.
This command shows only the system indices, not my custom one:
$ curl -XGET 'localhost:9200/_cat/indices?pretty' | grep 2019.10.17
What should I do to create the index and restore/push the logs into it?
Is this an issue with Logstash or Elasticsearch?
Looks like a Logstash issue.
After following this tutorial (https://www.bmc.com/blogs/elasticsearch-logs-beats-logstash/) in order to use Logstash to analyze some log files, my index was created fine the first time. Then I wanted to re-index new files with new filters and new repositories, so I deleted the index via curl -XDELETE, and now when I restart Logstash and Filebeat the index is not created anymore. I don't see any errors while launching the components.
Do I need to delete something else in order to re-create my index?
OK, since my guess (see comments) was correct, here's the explanation:
To avoid reading and publishing lines of a file over and over again, Filebeat uses a registry to store the current state of the harvester:
The registry file stores the state and location information that Filebeat uses to track where it was last reading.
As you stated, Filebeat successfully harvested the files, sent the lines to Logstash, and Logstash published the events to Elasticsearch, which created the desired index. Since Filebeat had updated its registry, no more lines needed to be harvested, and thus no events were published to Logstash again, even after you deleted the index. When you inserted some new lines, Filebeat reopened the harvester and published only the new lines (those after the "registry checkpoint") to Logstash.
The default location of the registry file is ${path.data}/registry (see Filebeat's Directory Layout Overview).
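If you just want to see what Filebeat currently remembers, you can inspect that file directly; on a default Linux package install it usually ends up under /var/lib/filebeat (the exact path depends on your path.data setting):
$ cat /var/lib/filebeat/registry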
... maybe the curl api call is not the best solution to restart the index
This has nothing to do with deleting the index. Deleting the index happens inside Elasticsearch; Filebeat has no clue about your actions in Elasticsearch.
Q: Is there a way to re-create an index based on old logs?
Yes, there are a few approaches you could consider (see the sketches after this list):
You can use the reindex API, which copies documents from one index into another. You can update the documents while reindexing them into the new index.
In contrast to reindexing, you can use the update by query API to update documents in place, so that they remain in the original index.
Lastly, you could of course delete the registry file. However, this could cause data loss; for development purposes that's usually fine.
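For example, a minimal sketch of the reindex option; the index names here are placeholders:
curl -XPOST 'localhost:9200/_reindex?pretty' -H 'Content-Type: application/json' -d '
{
  "source": { "index": "my-old-index" },
  "dest": { "index": "my-new-index" }
}'
And a sketch of the registry option, assuming a default Linux package install (Filebeat will then re-read all files from the beginning):
$ sudo service filebeat stop
$ sudo rm /var/lib/filebeat/registry
$ sudo service filebeat start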
Hope this helps.
I have been planning to use ELK for our production environment and seem to be running into a weird problem:
While loading a sample of the production log file, I realized there is a huge mismatch between the number of events published by Filebeat and what we see in Kibana. My first suspicion was Filebeat, but I could verify that all the events were successfully received in Logstash.
I also checked Logstash (by enabling debug mode) and could see that all the events were received and processed successfully (I am using the date and json filters).
But when I do a search in Kibana, I only see a fraction of the logs that were actually published (e.g. only 16,000 out of 350K). There are no exceptions or errors in either the Logstash or Elasticsearch logs.
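For reference, the raw document count in Elasticsearch can be cross-checked against Kibana's hit count with something like this (the logstash-* index pattern is an assumption):
curl -XGET 'localhost:9200/logstash-*/_count?pretty'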
I have tried zapping all the data by doing the following so far (roughly the commands sketched after this list):
Stopped all processes for ES, Logstash and kibana.
Deleted all the index files, cleared the cache, deleted the mappings
Stopped Filebeat, deleted the registry files (since it's running on Windows)
Restarted Elasticsearch, Logstash and Filebeat (in that order)
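Roughly, the zapping amounted to something like the following (the index pattern and the Windows registry path are assumptions, not the exact values used):
curl -XDELETE 'localhost:9200/logstash-*'
curl -XPOST 'localhost:9200/_cache/clear'
del C:\ProgramData\filebeat\registry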
But the results are the same: I get only 2 out of 8 records (in the shortened file), and even fewer when I use the full file.
I tried increasing the time window in Kibana to 10 years (:)) to see if the events were being pushed to the wrong year, but got nothing.
I have read almost all the threads related to missing data, but nothing seems to work.
Any pointers would help!
I'm new to the ELK stack, so I'm not sure what the problem is. I have a configuration file (see the screenshot below; it's based on the Elasticsearch tutorial):
[screenshot: Configuration File]
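For reference, a minimal tutorial-style configuration looks roughly like this (the path, port and options here are placeholders, not my exact values):
input {
  file {
    path => "/var/log/myapp/*.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}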
Logstash is able to read the logs (it says "Pipeline main started"), but when the configuration file is run, Elasticsearch doesn't react. I can search through the files.
However, when I open Kibana, it says no results were found. I checked and made sure that my time range covers the full day.
Any help would be appreciated!
When I try to start Kibana I face the following issue. I first restarted my Elasticsearch server and it ran successfully, but after starting Elasticsearch I tried to start Kibana, with no luck:
{"name":"Kibana","hostname":"ABCD","pid":3848,"level":30,"msg":"Elasticsearch is still initializing the kibana index... Trying again in 2.5 second.","time":"2015-07-03T07:35:34.936Z","v":0}
Thanks in advance
The curl -XDELETE http://localhost:9200/.kibana command works fine; however, you lose all your Kibana settings (indexes, graphs, dashboards). I solved the problem by just querying the index instead, without losing my data. For example:
curl -s http://localhost:9200/.kibana/_recovery?pretty
curl -XPUT 'localhost:9200/.kibana/_settings' -d '
{
  "index": {
    "number_of_replicas": 0
  }
}'
Then start Kibana; it should work.
Warning: Removing the .kibana index will make you lose all your Kibana settings (indexes, graphs, dashboards).
This behavior is sometimes caused by an existing .kibana index.
Delete the .kibana index in Elasticsearch using the following command:
curl -XDELETE http://localhost:9200/.kibana
After deleting the index, restart Kibana.
If the problem still persists and you are willing to lose any existing data, you can try deleting all indices using the following command:
curl -XDELETE http://localhost:9200/*
Followed by restarting Kibana.
Note: localhost:9200 is the elasticsearch server's host:port, which may be different in your case.
Sometimes you need to wait a few minutes after restarting ES.
This can also be connected with low disk space.
Observed on an AWS t2.small machine running the ELK stack.
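A quick way to check whether the node is short on disk is the allocation API:
curl -XGET 'localhost:9200/_cat/allocation?v'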
Something is wrong with your Kibana index inside Elasticsearch.
I had the same message, and I just deleted my Kibana index; when I restarted Kibana, a new index was created by the service.
Kibana 3 works successfully when Elasticsearch is on a different machine, by setting elasticsearch: "http://different_machine_ip:9200" in Kibana 3's config.js.
Now I want to run all three of them on my local machine for testing. I'm using Windows 7 and the Chrome browser. I installed Kibana 3 on Tomcat 7 and started the embedded Elasticsearch from the LogStash jar file.
I set the Elasticsearch location to "localhost:9200", "127.0.0.1:9200" or "computer_name:9200". When I check Kibana 3 in the browser, the Elasticsearch query revealed via spying has no logstash index:
curl -XGET 'http://localhost:9200//_search?pretty' -d ''
As you can see, the index part is empty, showing // only. The expected query should look like this:
curl -XGET 'http://localhost:9200/logstash-2013.08.13/_search?pretty' -d 'Some JSON Data'
The browser is able to call the Elasticsearch API successfully. For example, typing http://localhost:9200/logstash-2013.08.13/_mapping?pretty=true in the address bar returns the mapping of the logstash index. This proves there is no problem connecting to Elasticsearch.
The problem here is that the index is empty in the Kibana query. Why is the index empty?
Kibana 3 works differently from Kibana 1 and 2: it runs entirely in the browser.
The config file is read by JavaScript and executed in your browser, so localhost:9200 tells Kibana to look for Elasticsearch running on the machine in front of you, not on the server.
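In other words, the host in config.js must be reachable from the browser itself, e.g. (the hostname is a placeholder):
elasticsearch: "http://host_reachable_from_the_browser:9200",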
BTW, recent versions of LogStash have Kibana bundled, so you don't have to host it independently.