Why auto delete index policy doesn't work in elasticsearch? - elasticsearch

I created an index policy in Kibana to delete indices older than 7 days. Below is the configuration:
I have indices that use this policy, but none of them get deleted. Below is the settings output for one of the indices; it already specifies the policy to use: metrics-log-retention. Is there anything I missed?
{
"aws-logs-2022-02-01" : {
"settings" : {
"index" : {
"lifecycle" : {
"name" : "metrics-log-retention"
},
"routing" : {
"allocation" : {
"include" : {
"_tier_preference" : "data_content"
}
}
},
"number_of_shards" : "1",
"provided_name" : "aws-logs-2022-02-01",
"creation_date" : "1643673636747",
"priority" : "100",
"number_of_replicas" : "1",
"uuid" : "lLmO753nRpuw6bauKIJI2Q",
"version" : {
"created" : "7150299"
}
}
}
}
}
Below is the hot phase. I have disabled all options under hot, as shown in the screenshot below, but it still doesn't work.
Below is the raw data for the index policy:
{
"metrics-log-retention" : {
"version" : 4,
"modified_date" : "2022-02-10T22:24:14.492Z",
"policy" : {
"phases" : {
"hot" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "50gb",
"max_primary_shard_size" : "50gb",
"max_age" : "1d"
}
}
},
"delete" : {
"min_age" : "6d",
"actions" : {
"delete" : {
"delete_searchable_snapshot" : true
}
}
}
}
},
"in_use_by" : {
"indices" : [
"aws-logs-2022-02-01",
"aws-logs-2022-02-04",
"aws-logs-2022-02-05",
"aws-logs-2022-02-02",
"aws-logs-2022-02-03",
"aws-metrics-2022-02-01",
"aws-metrics-2022-02-07",
"aws-logs-2022-02-08",
"aws-metrics-2022-02-06",
"aws-logs-2022-02-09",
"aws-logs-2022-02-06",
"aws-metrics-2022-02-09",
"aws-logs-2022-02-07",
"aws-metrics-2022-02-08",
"aws-metrics-2022-02-03",
"aws-metrics-2022-02-02",
"aws-metrics-2022-02-05",
"aws-metrics-2022-02-04",
"aws-logs-2022-02-11",
"aws-logs-2022-02-12",
"aws-logs-2022-02-10",
"aws-logs-2022-02-13",
"aws-metrics-2022-02-10",
"aws-metrics-2022-02-12",
"aws-metrics-2022-02-11",
"aws-metrics-2022-02-13"
],
"data_streams" : [ ],
"composable_templates" : [ ]
}
}
}

As you can see in the hot phase's advanced settings, the default rollover settings are 30 days or 50 GB, so your indices will stay in the hot phase for 30 days unless they grow past 50 GB first.
Once an index leaves the hot phase it enters the delete phase, and if you hover over the (i) icon you can see that the 7 days are counted AFTER the rollover from the hot phase.
So if you really want your indices to be deleted after 7 days, you need to:
configure the hot phase to be shorter (say, 6 days)
configure the delete phase to kick in 1 day after rollover
That way, an index is created, stays six days in the hot phase, and is then deleted one day later.
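Put differently, the adjusted policy could look like the sketch below (keeping the policy name from the question; the exact thresholds are up to you). One caveat: for date-named indices like aws-logs-2022-02-01 that have no rollover alias configured, the rollover action can never complete, so such an index may sit in check-rollover-ready forever and never reach the delete phase; in that case, consider dropping the rollover action entirely.

```console
PUT _ilm/policy/metrics-log-retention
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": { "max_age": "6d", "max_primary_shard_size": "50gb" }
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": { "delete": {} }
      }
    }
  }
}
```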

Alternatively, just add this to the crontab on the ES host and it will delete old indices automatically (make sure the date format matches your index names, e.g. %Y-%m-%d for aws-logs-2022-02-01):
0 7 * * * curl -u LOGIN:PASSWORD -XDELETE http://localhost:9200/aws-logs-$(date --date="7 days ago" +"%Y-%m-%d")
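The date arithmetic behind a cron entry like this can be sketched in Python. The helper below is illustrative, not part of any tool; the format string must match your index naming, e.g. %Y-%m-%d for the dash-separated aws-logs-2022-02-01 indices in the question:

```python
from datetime import date, timedelta

def old_index_name(prefix: str, today: date, days: int = 7) -> str:
    """Name of the date-stamped index created `days` days before `today`,
    using the dash-separated date format of the indices in the question."""
    return f"{prefix}-{(today - timedelta(days=days)).strftime('%Y-%m-%d')}"

print(old_index_name("aws-logs", date(2022, 2, 10)))  # aws-logs-2022-02-03
```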


Simple Example of ElasticSearch Index LifeCycle Policy in Kibana

Elasticsearch version 7.10.2, with X-Pack enabled and a Basic license.
Hot Phase of metricbeat policy
Delete Phase of metricbeat policy
Why is it not getting applied?
metricbeat-7.10.2-2021.02.10-000001 index details
{
"indices" : {
"metricbeat-7.10.2-2021.02.10-000001" : {
"index" : "metricbeat-7.10.2-2021.02.10-000001",
"managed" : true,
"policy" : "metricbeat",
"lifecycle_date_millis" : 1612959479882,
"age" : "8m",
"phase" : "hot",
"phase_time_millis" : 1612959480192,
"action" : "rollover",
"action_time_millis" : 1612959917863,
"step" : "check-rollover-ready",
"step_time_millis" : 1612959917863,
"phase_execution" : {
"policy" : "metricbeat",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "5b",
"max_age" : "5s",
"max_docs" : 5
}
}
},
"version" : 2,
"modified_date_in_millis" : 1612959551839
}
}
}
}
If a policy is modified AFTER an index has been created, it might not kick in as you expect.
ILM runs every 10 minutes by default, but that can be changed via the indices.lifecycle.poll_interval cluster setting.
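While testing, you can shorten the poll interval so you don't have to wait up to 10 minutes between ILM runs; a sketch using the cluster settings API:

```console
PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "1m"
  }
}
```

Setting the value back to null restores the default.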

Elasticsearch ILM not rolling

I have configured my ILM policy to roll over when the index reaches 20GB or after 30 days in the hot node,
but my index passed 20GB and still didn't move on to the cold node.
When I run GET _cat/indices?v I get:
green open packetbeat-7.9.2-2020.10.22-000001 RRAnRZrrRZiihscJ3bymig 10 1 63833049 0 44.1gb 22gb
Could you tell me how to solve this, please?
Note that in my packetbeat configuration file, I have only changed the number of shards:
setup.template.settings:
index.number_of_shards: 10
index.number_of_replicas: 1
when I run the command GET packetbeat-7.9.2-2020.10.22-000001/_settings I get this output:
{
"packetbeat-7.9.2-2020.10.22-000001" : {
"settings" : {
"index" : {
"lifecycle" : {
"name" : "packetbeat",
"rollover_alias" : "packetbeat-7.9.2"
},
"routing" : {
"allocation" : {
"include" : {
"_tier_preference" : "data_content"
}
}
},
"mapping" : {
"total_fields" : {
"limit" : "10000"
}
},
"refresh_interval" : "5s",
"number_of_shards" : "10",
"provided_name" : "<packetbeat-7.9.2-{now/d}-000001>",
"max_docvalue_fields_search" : "200",
"query" : {
"default_field" : [
"message",
"tags",
"agent.ephemeral_id",
"agent.id",
"agent.name",
"agent.type",
"agent.version",
"as.organization.name",
"client.address",
"client.as.organization.name",
and the output of the command GET /packetbeat-7.9.2-2020.10.22-000001/_ilm/explain is:
{
"indices" : {
"packetbeat-7.9.2-2020.10.22-000001" : {
"index" : "packetbeat-7.9.2-2020.10.22-000001",
"managed" : true,
"policy" : "packetbeat",
"lifecycle_date_millis" : 1603359683835,
"age" : "15.04d",
"phase" : "hot",
"phase_time_millis" : 1603359684332,
"action" : "rollover",
"action_time_millis" : 1603360173138,
"step" : "check-rollover-ready",
"step_time_millis" : 1603360173138,
"phase_execution" : {
"policy" : "packetbeat",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "50gb",
"max_age" : "30d"
}
}
},
"version" : 1,
"modified_date_in_millis" : 1603359683339
}
}
}
}
It's weird that it's 50GB!
Thanks for your help.
So I found the solution to this problem.
After updating the policy, I removed the policy from the indices using it, and then added it back to those indices.
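For reference, the remove-and-reapply step can be done from the Dev Tools console; a sketch, assuming the index, policy, and alias names shown above:

```console
POST packetbeat-7.9.2-2020.10.22-000001/_ilm/remove

PUT packetbeat-7.9.2-2020.10.22-000001/_settings
{
  "index.lifecycle.name": "packetbeat",
  "index.lifecycle.rollover_alias": "packetbeat-7.9.2"
}
```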

Problem with Elasticsearch Index Lifecycle Policy that doesn't rollover

In order to evaluate its potential to help with our daily operations, I have deployed Elasticsearch and Kibana (7.7.1 with a BASIC license) and created an index template for Ntopng (our monitoring platform).
Since the indexes keep growing, I want to delete Ntopng indexes older than 20 days or so. Therefore I created a lifecycle policy called ntopng in which the time-stamped index should roll over after 1 day (for testing purposes) and then be deleted 2 days after the rollover:
Next I picked a time-stamped index created that day and applied the lifecycle policy to it:
Before that, I had to create an alias for that index, so I did it manually:
POST /_aliases
{
"actions" : [
{ "add" : { "index" : "ntopng-2020.09.09", "alias" : "ntopng_Alias" } }
]
}
All looked good after that (I guess), as no errors or alarms were displayed:
"indices" : {
"ntopng-2020.09.09" : {
"index" : "ntopng-2020.09.09",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599609600433,
"age" : "20.14h",
"phase" : "hot",
"phase_time_millis" : 1599681721821,
"action" : "rollover",
"action_time_millis" : 1599680521920,
"step" : "check-rollover-ready",
"step_time_millis" : 1599681721821,
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 1,
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
},
"version" : 4,
"modified_date_in_millis" : 1599509572867
}
}
My expectation was that the next day the policy would roll over to the next index (ntopng-2020.09.10), so that the initial index would eventually be deleted two days later.
Instead, I got the following errors:
GET ntopng-*/_ilm/explain
{
"indices" : {
"ntopng-2020.09.09" : {
"index" : "ntopng-2020.09.09",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599609600433,
"age" : "1.94d",
"phase" : "hot",
"phase_time_millis" : 1599776521822,
"action" : "rollover",
"action_time_millis" : 1599680521920,
"step" : "ERROR",
"step_time_millis" : 1599777121822,
"failed_step" : "check-rollover-ready",
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 80,
"step_info" : {
"type" : "illegal_argument_exception",
"reason" : """index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'""",
"stack_trace" : """java.lang.IllegalArgumentException: index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.generateRolloverIndexName(TransportRolloverAction.java:241)
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:133)
at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.masterOperation(TransportRolloverAction.java:73)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.lambda$doStart$3(TransportMasterNodeAction.java:170)
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:170)
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:133)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:110)
at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:59)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:153)
at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:123)
at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:151)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:129)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64)
at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:399)
at org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin(ClientHelper.java:92)
at org.elasticsearch.xpack.core.ClientHelper.executeWithHeadersAsync(ClientHelper.java:155)
at org.elasticsearch.xpack.ilm.LifecyclePolicySecurityClient.doExecute(LifecyclePolicySecurityClient.java:51)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:399)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1234)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.rolloverIndex(AbstractClient.java:1736)
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:127)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
"""
},
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
},
"version" : 4,
"modified_date_in_millis" : 1599509572867
}
}
"ntopng-2020.09.10" : {
"index" : "ntopng-2020.09.10",
"managed" : true,
"policy" : "ntopng",
"lifecycle_date_millis" : 1599696000991,
"age" : "22.57h",
"phase" : "hot",
"phase_time_millis" : 1599776521844,
"action" : "rollover",
"action_time_millis" : 1599696122033,
"step" : "ERROR",
"step_time_millis" : 1599777121839,
"failed_step" : "check-rollover-ready",
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 67,
"step_info" : {
"type" : "illegal_argument_exception",
"reason" : "index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]",
"stack_trace" : """java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:104)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:173)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
"""
},
"phase_execution" : {
"policy" : "ntopng",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_age" : "1d"
},
"set_priority" : {
"priority" : 100
}
}
}
The first index's error reads "index name [ntopng-2020.09.09] does not match pattern '^.*-\d+$'",
while the second one displays "index.lifecycle.rollover_alias [ntopng_Alias] does not point to index [ntopng-2020.09.10]".
Please note that I'm learning the basics of ES index management, so I'd appreciate any clue about what the problem might be.
OK, I just found that the index name must end with a numeric pattern like -000001, not 2020.09.09, so I may need to find an alternative way to make this work.
As per the Kibana regex you can use a date pattern as well: instead of 2020.01.01, use 2020-01-01.
This should work too. You can check the regex here: https://regex101.com/r/VclptX/1
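The pattern from the error message is easy to check locally. A small Python sketch (the helper name is illustrative) shows why the dotted date fails while a dashed date or the canonical -000001 suffix passes:

```python
import re

# ILM derives the next rollover index name by incrementing a trailing
# number, so the current index name must match this pattern:
ROLLOVER_PATTERN = re.compile(r'^.*-\d+$')

def rollover_ready(index_name: str) -> bool:
    """True if ILM can generate a rollover name from this index name."""
    return ROLLOVER_PATTERN.match(index_name) is not None

print(rollover_ready("ntopng-2020.09.09"))                    # False: dots after the last dash
print(rollover_ready("ntopng-2020-01-01"))                    # True: last segment "01" is all digits
print(rollover_ready("metricbeat-7.10.2-2021.02.10-000001"))  # True: canonical -000001 suffix
```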

Setting enabled to true in Elasticsearch

I'm new to Elasticsearch. I have an index whose mapping is as follows:
{
"myindex" : {
"mappings" : {
"systemChanges" : {
"_all" : {
"enabled" : false
},
"properties" : {
"autoChange" : {
"type" : "boolean"
},
"changed" : {
"type" : "object",
"enabled" : false
},
"created" : {
"type" : "date",
"format" : "strict_date_optional_time||epoch_millis"
}
}
}
}
}
}
I'm unable to fetch documents where changed.new = completed. After some research I found that this is because the changed field is set to enabled: false, and I need to change that. I tried the following:
curl -X PUT "localhost:9200/myindex/" -H 'Content-Type: application/json' -d' {
"mappings": {
"systemChanges" : {
"properties" : {
"changed" : {
"enabled" : true
}
}
}
}
}'
But I'm getting the following error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already exists","index":"myindex"}],"type":"index_already_exists_exception","reason":"already exists","index":"myindex"},"status":400}
How can I set enabled to true in order to fetch documents by the changed.new field?
You are trying to create an index that already exists with the same name, hence the error.
See the link below for how to update a mapping:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
The enabled setting can be updated on existing fields using the PUT mapping API.
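For this pre-7.0 index (which still uses a mapping type, systemChanges), the request would target the mapping endpoint rather than index creation; a sketch:

```console
PUT myindex/_mapping/systemChanges
{
  "properties": {
    "changed": {
      "type": "object",
      "enabled": true
    }
  }
}
```

Note that newer Elasticsearch versions do not allow changing enabled on an existing field; in that case you would need to reindex into a new index created with the desired mapping.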

Elasticsearch TTL not working

I use Elasticsearch for logs. I don't want to use daily indices and delete them with a cron job; instead I want to use the TTL. I have activated and set the TTL to 30s, I get a successful answer when I send this operation, and I can see the TTL value (in milliseconds) when I request the mapping.
All seems good, but the documents are not being deleted...
_mapping :
{
"logs" : {
"webservers" : {
"_ttl" : {
"default" : 30000
},
"properties" : {
"@timestamp" : {
"type" : "date",
"format" : "dateOptionalTime"
}
}
}
}
}
I guess you just need to enable _ttl for your type, which is disabled by default. Have a look here.
{
"webservers" : {
"_ttl" : { "enabled" : true, "default" : "30s" }
}
}
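For completeness, that mapping change could be applied with the put-mapping API of that era; a sketch, assuming a logs index with a webservers type:

```console
PUT /logs/_mapping/webservers
{
  "webservers" : {
    "_ttl" : { "enabled" : true, "default" : "30s" }
  }
}
```

Keep in mind that _ttl was deprecated in Elasticsearch 2.0 and removed in 5.0, so on modern clusters you would use ILM (or date-based indices plus deletion) instead.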
