I am trying to delete all the logs stored in Elasticsearch 14 days ago or earlier. I have installed Curator and created the config file and the action file, like this:
curator.yml configuration file
My Elasticsearch is running on localhost:8080, and Kibana on localhost:80.
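For reference, a minimal curator.yml matching that setup could look like the sketch below (an assumption: no authentication or SSL; only the port comes from the line above):

client:
  hosts:
    - 127.0.0.1
  port: 8080        # the port Elasticsearch is reachable on, per the note above
  timeout: 30

logging:
  loglevel: INFO
  logformat: default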
delete_indices action file
With both configuration files in place, I execute Curator and get this:
command execution
You can see in the following image, my index name in kibana:
filebeat index in kibana
I've already tried many things, but I haven't managed to make it work; it always says there is no index with this name. Does anyone know where the issue could be?
Edit 1:
With your help, I managed to get the exact index name, but I still have the same problem:
modified delete_indices.yml file
This is what I get when I run GET _cat/indices:
my indices
The problem was that Curator will not act on any index associated with an ILM policy unless allow_ilm_indices is set to true.
The solution was:
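A sketch of what the working delete_indices action file could look like with that option enabled (the filebeat- prefix and the %Y.%m.%d timestring are assumptions based on the index shown above; adjust to your naming):

actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 14 days, based on the date in the index name.
    options:
      ignore_empty_list: True
      allow_ilm_indices: True   # required for indices managed by an ILM policy
    filters:
      - filtertype: pattern
        kind: prefix
        value: filebeat-        # assumed prefix; match your index names
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'  # assumed date format in the index name
        unit: days
        unit_count: 14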
More information: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/option_allow_ilm.html
I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs from months ago that are viewable in Kibana (well, an explanation that I understand). I just want to clear my Kibana and start afresh by loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs at work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I've typed "GET /_cat/indices/*?v&s=index" into Dev Tools > Console and got a list of indices.
I initially used the "DELETE" function, and it didn't appear to be working. However, after restarting everything, it worked the second time and I was able to remove all the existing indices, which subsequently removed all the logs displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your index.
Version 5.4 is very old and already past its EOL date; it does not have any UI to delete an index, so you will need to use the Elasticsearch REST API to delete it.
You can do it from Kibana: just click on Dev Tools. First, you will need to list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that, you will need to use the delete API endpoint to delete your index.
DELETE /name-of-your-index
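Equivalently, from the command line (a sketch; localhost:9200 and the index name logstash-2017.05.04 are assumptions, substitute your own):

# list indices, then delete one by name
curl -s "http://localhost:9200/_cat/indices?v&s=index"
curl -s -XDELETE "http://localhost:9200/logstash-2017.05.04"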
On newer versions you can do it using the Index Management UI; you should try to talk with your company about getting a newer version.
I am new to Elasticsearch, and this question might look weird, but is it possible to index a file automatically (i.e. given a file path, Elasticsearch should index its contents automatically)? I have found open-source tools like elasticdump and tried using them for this purpose, but I would prefer an Elasticsearch plugin that supports almost all Elasticsearch versions. Can anyone suggest one?
I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can issue cURL commands to Elasticsearch with it; for example,
curl "http://myhost/elasticsearch/_cat/indices?v"
which returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that is removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch", which leads me to believe it isn't reserved.
Upon changing the endpoint URL to be
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0
You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
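A sketch of that 2.x output block (the host name myhost and the /elasticsearch path come from the question; treat the exact layout as an assumption):

output {
  elasticsearch {
    hosts => ["myhost"]        # 2.x: bare hosts, no URI path allowed here
    path => "/elasticsearch"   # proxy path prefix prepended to every request
  }
}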
I have created an index in Elasticsearch with the mapper attachment plugin. However, when I try to create an index pattern in Kibana, I cannot find any of the data created in Elasticsearch for building a dashboard in Kibana.
Is there any way to resolve this issue?
Try running http://<your-host>:9200/_cat/indices?v
The above will return all the indices you have. Once you have verified that your mapper attachment index is there, go to Kibana's Settings tab and select the checkbox that says your index does not contain time-series data. Now enter your index name, and I hope you will find it. Also, make sure your Kibana is configured to point to the Elasticsearch server where your index resides. This is configured in config/kibana.yml.
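A minimal sketch of that setting (the URL is an assumption; the key name also varies by version, as noted in the comment):

# config/kibana.yml — point Kibana at the Elasticsearch server holding your index.
# Kibana 4.x uses elasticsearch_url; 5.x/6.x use elasticsearch.url; 7.x+ uses elasticsearch.hosts.
elasticsearch_url: "http://localhost:9200"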
Hope I have managed to help!
I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything installs, and everything is working just fine, except for one thing.
Logstash is not creating an index on Elasticsearch. Whenever I try to access Kibana, it wants me to choose an index from Elasticsearch.
Logstash is on the ES node, but the index is missing. Here's the message I get:
"Unable to fetch mapping. Do you have indices matching the pattern?"
Am I missing something? I followed this tutorial: Digital Ocean
EDIT:
Here's the screenshot of the error I'm facing:
Yet another screenshot:
I got identical results on an Amazon AMI (a CentOS/RHEL clone).
In fact, exactly as above… until I injected some data into Elastic, which creates the first day's index; then Kibana starts working. My simple .conf is:
input {
  stdin {                        # read events from standard input
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }  # print parsed events for debugging
  elasticsearch {                # pre-2.0 output options: host/port/protocol
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
then
cat /var/log/messages | logstash -f your.conf
Why stdin, you ask? Well, it's not super clear anywhere (I'm also a new Logstash user and found this very unclear) that Logstash will never terminate (e.g. when using the file plugin); it's designed to keep watching.
But using stdin, Logstash will run, send the data to Elastic (which creates the index), and then exit.
If I did the same thing with the file input plugin instead, it would never create the index; I don't know why this is.
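(A likely explanation, offered as an assumption rather than something stated in the original post: the file input defaults to start_position => "end", i.e. it tails the file and only emits new lines, so a file that never changes produces no events and no index. A sketch that forces a full read:)

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"  # read existing lines, not just new ones
    sincedb_path => "/dev/null"    # forget the read position between runs
  }
}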
I finally managed to identify the issue. For some reason, port 5000 was being used by another service, which prevented Logstash from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000 to 5001, or anything convenient.
Make sure all of your logstash-forwarders are sending the logs to the new port, and you should be good to go. If you have generated the logstash-forwarder.crt using the FQDN method, then the logstash-forwarder should be pointing to the same FQDN and not an IP.
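A sketch of the adjusted input block, assuming the lumberjack input from the Digital Ocean tutorial (the certificate paths are that tutorial's defaults; substitute your own):

input {
  lumberjack {
    port => 5001                # moved off 5000, which another service was using
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}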
Is this Kibana 3 or 4?
If it's Kibana 4, click on Settings in the top menu, choose Indices, and make sure the index name contains 'logstash-*'; then click on the 'Time-field name' and choose '@timestamp'.
I've added a screenshot of my settings below; be careful which options you tick.