Elasticsearch Get Index API to get indices older than 30 days

We are using Elasticsearch 5 and I need to delete indices older than 30 days through the API. Can anyone help me out with this?

You can use curator to do that.
https://www.elastic.co/guide/en/elasticsearch/client/curator/5.x/delete_indices.html
curator --host <hostname> delete indices --older-than 30
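If you would rather stay with the plain Elasticsearch API, a rough sketch (the host, port and index name below are placeholders) is to list the indices together with their creation dates via the cat indices API, then delete the ones that are too old by name:
curl -s "http://localhost:9200/_cat/indices?h=index,creation.date.string"
curl -X DELETE "http://localhost:9200/logstash-2017.01.01"
The first call prints every index with its creation date so you can pick out anything older than 30 days; the second deletes a single index by name. There is no single "delete everything older than 30 days" call in the core API, which is why Curator exists.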

Related

Elasticsearch delete documents from index

I have an Elasticsearch cluster on Kubernetes, and I also have Curator deleting indices older than 7 days.
I want to change the Curator setup to work according to a certain condition:
If a document has key1=value1, delete it after 10 days; otherwise delete it after 7 days.
Is there any way to do it?
Curator is limited to deleting indices as a whole and does not work at the document level.
What Curator does under the hood is call DELETE index-name, and there is no way to configure it to call the Delete By Query API, which is what you're asking for.
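If you do need document-level deletion, a minimal sketch with the Delete By Query API would look like this (the index name my-index and the fields key1 and @timestamp are assumptions; adjust them to your mapping):
curl -X POST "http://localhost:9200/my-index/_delete_by_query" -H 'Content-Type: application/json' -d '{
  "query": {
    "bool": {
      "must": [
        { "term": { "key1": "value1" } },
        { "range": { "@timestamp": { "lt": "now-10d" } } }
      ]
    }
  }
}'
You would run one query like this for the key1=value1 documents after 10 days, and a second one using a must_not clause on key1 and now-7d for everything else, for example from a scheduled job.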

How to delete documents in Kibana by days or date

I am using AWS Elasticsearch. Right now my Kibana has 0 free space, and I want to delete documents that are older than 30 days.
Is there a setting for that, or a query that can clear out documents older than 30 days?
You could simply use the Delete By Query API to delete the documents which are older than 30 days :)
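Assuming your documents carry a @timestamp date field (the field and index names below are placeholders), something along these lines should work:
curl -X POST "http://localhost:9200/my-index/_delete_by_query?conflicts=proceed" -H 'Content-Type: application/json' -d '{
  "query": {
    "range": { "@timestamp": { "lt": "now-30d" } }
  }
}'
Keep in mind that delete-by-query does not free disk space immediately; the documents are only purged when segments merge, so if your data is in time-based indices it is usually cheaper to delete whole indices instead.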

Speed up parsing in Elasticsearch Logstash Kibana

I am new to ELK (Elasticsearch, Logstash and Kibana). I installed Elasticsearch, Logstash and Kibana on one server, then installed Logstash on two more machines. Total RAM on each system is around 30 GB, and the total volume of files to parse is around 300 GB. It took 6 days to filter out the items I searched for (a 10-digit number, a timestamp and the name of one API across all these logs) and display them in Kibana. Am I doing something wrong here? Is there any other way to speed up this process?
Thanks in Advance,
Paul
You can filter based on time in the Kibana UI. Also, if you are pushing data to Logstash from a Beats shipper, Logstash adds extra time before the data reaches Elasticsearch.
Many Beats can ship data directly to Elasticsearch instead.

How to delete Elasticsearch clusters

How can I delete ES clusters?
Every time I start ES locally, it brings my indexes back from the cluster state, which is now up to 33 indexes, and I believe they are taking up much of my RAM (8 GB).
I only have 3 very small indexes, the biggest being just about 3 MBs.
Simply delete all the indices that you do not need. Have a look at https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html
You should delete the indexes you no longer want.
For example, list your indices:
curl -X GET "http://127.0.0.1:9200/_cat/indices?v"
Say there is an index called web that you no longer need; just delete it:
curl -X DELETE "http://127.0.0.1:9200/web"

Elasticsearch Marvel set .marvel-2015-* indices to max number

Hey, I am using Marvel alongside Elasticsearch and I am trying to avoid using Curator to clean up indices that look like ".marvel-2015-*". Is there a specific config, or set of configs, that I can use to accomplish this?
Note: I am using Chef to provision the node, and inside the logstash cookbook I am setting the following attribute in default.rb:
default['logstash']['instance_default']['curator_days_to_keep'] = 14
I would assume this limits retention of these indices to 14 days, but when I added some fake ".marvel-2015-*" indices they still appear and are not cleared out.
I realize I am conflating a tool for managing Marvel indices (Curator) with Marvel itself, but I am new to these tools and need help connecting the dots.
Ideally I want Marvel to have the logic to remove these indices by itself, but I don't know if there is an option for this in plugins/marvel/marvel-1.3.1.jar.
Any help would be appreciated.
I agree that ideally Marvel should provide this as a configuration option but, at the time of writing, it does not, and over time the Marvel indexes can become quite big.
I know you want to avoid Curator, but short of writing your own script or plugin, it is by far the easiest way to deal with this.
To purge Marvel indexes older than 30 days, you can do:
curator delete indices --timestring '%Y.%m.%d' --prefix '.marvel-2' --older-than 30 --time-unit 'days'
To test what would be deleted, I recommend doing a --dry-run first:
curator --dry-run delete indices --timestring '%Y.%m.%d' --prefix '.marvel-2' --older-than 30 --time-unit 'days'
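If you want this to run automatically, one simple option is a nightly cron entry along these lines (the Curator path, host and schedule are assumptions for illustration):
0 1 * * * /usr/local/bin/curator --host localhost delete indices --timestring '%Y.%m.%d' --prefix '.marvel-2' --older-than 30 --time-unit 'days'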
