I have set up a simple ILM policy on my fluentd.* indices so that they are deleted after a short period of time (shortened for testing).
ILM:
PUT _ilm/policy/fluentd
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_age": "1d",
            "max_size": "1gb"
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "4d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
Index Template:
PUT _template/fluentd
{
  "order": 0,
  "index_patterns": [
    "fluentd.*"
  ],
  "settings": {
    "index": {
      "lifecycle": {
        "name": "fluentd"
      }
    }
  },
  "aliases": {
    "fluent": {}
  }
}
With these settings, I expected ES to delete indices older than 5-6 days, but there are still indices from 3 weeks ago in ES. Currently, it says there are 108 indices linked to this ILM policy.
What is it actually doing? It seems it's not doing anything at all... How do I delete indices after x days?
I first tried to use the index template, but it seems useless: it does not apply the settings to each existing index (or maybe it does, but only on creation?).
Then I put the ILM policy on each index by hand (another annoyance: you can't select all indices and hit "add ILM policy" - you have to add the policy one by one), which required me to click about 600 times.
Now the problem was that I had a "hot" phase defined, but it never triggered (is it buggy?) - and because the hot phase didn't trigger (I set it to roll over 1 day after index creation), the delete phase didn't trigger either. When I removed the hot phase and applied the ILM policy to the indices again with only the delete phase, it worked! But adding and removing all this is flaky; I get "Oops, something went wrong" errors here and there.
I don't understand why I have to remove the ILM policy and reapply it to each index whenever I change something in the policy. It's 1000% inconvenient.
ES really needs to put some work into this; it still feels too beta, and I got a whole lot of status code 500 errors, even though I am using the most recent version directly on Elastic Cloud.
With these settings, I expected ES to delete indices older than 5-6 days, but there are still indices from 3 weeks ago in ES. Currently, it says there are 108 indices linked to this ILM policy.
With your settings, the delete phase starts 4 days after rollover. If you want the delete phase to start 4 days after index creation, you need to remove the rollover action from the hot phase:
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "set_priority": {
            "priority": 100
          }
        }
      },
      "delete": {
        "min_age": "4d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
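You can also check what ILM is actually doing (or waiting on) for each index with the ILM explain API, using the same index pattern from the question:
GET fluentd.*/_ilm/explain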
I first tried to use the index template, but it seems useless: it does not apply the settings to each existing index (or maybe it does, but only on creation?).
Yes, index templates only apply at index creation; they do not change the settings of already existing indices.
Then I put the ILM policy on each index by hand (another annoyance: you can't select all indices and hit "add ILM policy" - you have to add the policy one by one), which required me to click about 600 times.
Kibana does not let you apply an ILM policy to all indices at once, but the Elasticsearch API does!
Simply open Kibana Dev Tools and run the following request:
PUT fluentd.*/_settings
{
  "index": {
    "lifecycle": {
      "name": "fluentd"
    }
  }
}
Now the problem was that I had a "hot" phase defined, but it never triggered (is it buggy?) - and because the hot phase didn't trigger (I set it to roll over 1 day after index creation), the delete phase didn't trigger either. When I removed the hot phase and applied the ILM policy to the indices again with only the delete phase, it worked! But adding and removing all this is flaky; I get "Oops, something went wrong" errors here and there.
If the rollover action never triggers, ILM cannot progress past the hot phase, so the delete phase never runs either.
I don't understand why I have to remove the ILM policy and reapply it to each index whenever I change something in the policy. It's 1000% inconvenient.
Because each index caches the definition of its current ILM phase; changes to the policy are only picked up when an index moves on to its next phase.
See the docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/ilm-index-lifecycle.html#ilm-phase-execution
A little bit late, but maybe it will help somebody.
Another possible reason, as was mentioned here:
ILM is not really intended to be used on a 1-minute lifecycle; I do not believe you will achieve your desired behavior. My understanding is that ILM is an opportunistic background task, not a preemptive one, so it is not going to execute on an exact time frame. It's designed to work on the order of hours or days, not minutes.
I have the same situation with my indices, and I checked: the indices do get deleted, just later than I set them up to be.
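If you are testing with short lifetimes, note that ILM only checks its conditions periodically; the check interval is controlled by the cluster setting indices.lifecycle.poll_interval (10 minutes by default). A minimal sketch of lowering it for testing:
PUT _cluster/settings
{
  "transient": {
    "indices.lifecycle.poll_interval": "1m"
  }
}
Remember to reset it afterwards; a short poll interval adds cluster overhead.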
I have created an ILM policy with the following configuration:
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_docs": 15000
          }
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
With an index template for the matching indices.
Now, when I bootstrap the index with an initial index, let's say <index-name>-000001, the expectation is that it rolls over once docs.count reaches 15000.
The rollover is happening, but at a random docs.count, and I'm not sure why.
Also, docs.count is not updating; I have to manually hit the refresh API, and only then does the doc count get updated in /_cat/indices. Please let me know what is wrong in the config. Did I miss anything?
The problem was with the refresh_interval.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received at least one search request in the last 30 seconds.
My indices were not receiving any search requests, so they were never refreshed and the doc count never updated.
I changed the default setting: I added "refresh_interval": "10s" to my index template, which is inherited by each newly created index.
I also changed the ILM poll interval in the cluster settings to 1 minute for testing, and it worked!
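For reference, a minimal sketch of the template change (the template name my_template and the index pattern are placeholders, not from the original question):
PUT _template/my_template
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "refresh_interval": "10s"
  }
}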
Can I filter the documents in Elasticsearch before rolling them up, or can I define a filter query in a rollup job? If yes, how?
There's no way to filter data before rolling it up into a new rollup index. However, you can achieve what you want by first defining a filtered alias and then rolling up on that alias.
Say you want to roll up the index test, but only for customers 1, 2 and 3. You can create the following filtered alias:
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "test",
        "alias": "filtered-test",
        "filter": { "terms": { "customer.id": [1, 2, 3] } }
      }
    }
  ]
}
You can then roll up on the filtered-test alias instead of the test index, and that will only roll up data from customers 1, 2 and 3:
PUT _rollup/job/sensor
{
  "index_pattern": "filtered-test",
  "rollup_index": "customer_rollup",
  ...
}
PS: It is worth noting that you're not alone; the Elastic folks specifically decided not to allow filtering in rollups, for various reasons (you can read more in the issue I linked to). The issue has been reopened because there's a big refactoring of the rollup feature going on. Stay tuned...
I'm new to setting up a proper lifecycle policy, so I'm hoping someone can give me a hand with this. I have an existing index being created on a weekly basis by a third-party integration (they provided me with the pipeline and index template for the incoming logs). Logs are being written weekly to indexes named in the pattern "name-YYYY-MM-DD". I'm attempting to set up a lifecycle policy for these indexes so they transition from hot -> warm -> delete. So far, I have done the following:
Updated the index template to add the policy and set an alias:
{
  "index": {
    "lifecycle": {
      "name": "Cloudflare",
      "rollover_alias": "cloudflare"
    },
    "mapping": {
      "ignore_malformed": "true"
    },
    "number_of_shards": "1",
    "number_of_replicas": "1"
  }
}
On the existing indexes, set the alias and mark which one is the "write" index:
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "cloudflare-2020-07-13",
        "alias": "cloudflare",
        "is_write_index": true
      }
    }
  ]
}
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "cloudflare-2020-07-06",
        "alias": "cloudflare",
        "is_write_index": false
      }
    }
  ]
}
Once I did that, I started seeing the following 2 errors (1 on each index):
ILM error #1
ILM error #2
I'm not sure why the "is not the write index" error is showing up on the older index. Perhaps this is because it is still "hot" and ILM is trying to move it to another phase even though it is not the write index?
For the second error, is it because the name of the index is wrong for rollover?
I'm also not clear on whether this is a good scenario for rollover. These indexes are being created weekly, which I assume is OK. I would think that normally you would create a single index and let the policy split off the older ones based upon your criteria (size, age, etc.). Should I change this, or can I make the policy work with the existing weekly indexes? In case you need it, here is the part of the pipeline that I imported into Elasticsearch that I believe is responsible for the index naming:
{
  "date_index_name": {
    "field": "EdgeStartTimestamp",
    "index_name_prefix": "cloudflare-",
    "date_rounding": "w",
    "timezone": "UTC",
    "date_formats": [
      "uuuu-MM-dd'T'HH:mm:ssX",
      "uuuu-MM-dd'T'HH:mm:ss.SSSX",
      "yyyy-MM-dd'T'HH:mm:ssZ",
      "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
    ]
  }
},
So, for me, the more important error at the moment is the "number_format_exception". I'm thinking it is due to this setting I'm seeing in the index (provided_name):
{
  "settings": {
    "index": {
      "lifecycle": {
        "name": "Cloudflare",
        "rollover_alias": "cloudflare"
      },
      "mapping": {
        "ignore_malformed": "true"
      },
      "number_of_shards": "1",
      "provided_name": "<cloudflare-{2020-07-20||/w{yyyy-MM-dd|UTC}}>",
      "creation_date": "1595203589799",
      "priority": "100",
      "number_of_replicas": "1",
I've been looking for a way to create a fixed index name without the "date_index_name" processor, but I haven't found one yet. Alternatively, if I could create an index name with a date plus a suffix that would allow ILM to add the incremental number at the end, that might work as well. Any help here would be greatly appreciated!
The main issue is that the existing indexes do not end with a sequence number (i.e. 0001, 0002, etc.), hence ILM doesn't really know how to proceed.
The name of this index must match the template’s index pattern and end with a number
You'd be better off letting ILM manage the index creation and rollover, since that's exactly what it's supposed to do. All you need to do is keep writing to the same cloudflare alias, and that's it. No need for a date_index_name ingest processor.
So your index template is correct as it is.
Next, you need to bootstrap the initial index:
PUT cloudflare-2020-08-11-000001
{
  "aliases": {
    "cloudflare": {
      "is_write_index": true
    }
  }
}
You can then either reindex your old indices into ILM-managed indices or apply lifecycle policies to your old indices.
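If you go the reindex route, here is a rough sketch using one of the weekly indices from the question (reindexing into the cloudflare alias writes into whatever the current write index is):
POST _reindex
{
  "source": {
    "index": "cloudflare-2020-07-06"
  },
  "dest": {
    "index": "cloudflare"
  }
}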
I have indexes totalling around 250 GB on each of 3 hosts, i.e. 750 GB of data in the ELK cluster.
How can I rotate the ELK logs so that the cluster keeps three months of data, with older logs pushed somewhere else?
You could create your index using the "indexname-%{+YYYY.MM}" naming format. This will create a distinct index every month.
You could then filter these indexes by timestamp using a tool like Curator.
Curator can be run from a cron job to purge those older indexes or back them up to some S3 repository.
Reference - Backup or Restore using curator
Moreover, you can even restore these backed-up indexes whenever needed, directly from the S3 repo, for historical analysis.
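For the S3 part, a rough sketch of the snapshot calls involved (the repository name my_s3_repo and the bucket name are placeholders, and registering an S3 repository requires the repository-s3 plugin):
PUT _snapshot/my_s3_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-logs-bucket"
  }
}
PUT _snapshot/my_s3_repo/indexname-2019.06?wait_for_completion=true
{
  "indices": "indexname-2019.06"
}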
The answer by dexter_ is correct, but as that answer is old, a better one would be:
Version 7.x of the Elastic stack provides index lifecycle management (ILM) policies, which can be easily managed through the Kibana GUI and are native to the Elastic stack.
PS: you still have to name the indices like "indexname-%{+YYYY.MM}", as suggested by dexter_.
elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html
It took me a while to figure out the exact syntax and rules, so I'll post the final policy I used to remove old indexes. Note that this is an Index State Management (ISM) policy for Amazon OpenSearch Service, not an Elasticsearch ILM policy; it's based on the example from https://aws.amazon.com/blogs/big-data/automating-index-state-management-for-amazon-opensearch-service-successor-to-amazon-elasticsearch-service/:
{
  "policy": {
    "description": "Removes old indexes",
    "default_state": "active",
    "states": [
      {
        "name": "active",
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "14d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": [
        "mylogs-*"
      ]
    }
  }
}
It will automatically apply the policy to any new mylogs-* indexes, but you'll need to apply it manually to existing ones (under "Index Management" -> "Indices", or via the API as sketched below).
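If you prefer the API over the UI, recent OpenSearch versions also let you attach an ISM policy to existing indices in one call (a sketch; the policy ID remove_old_indexes is a placeholder for whatever you named the policy above):
POST _plugins/_ism/add/mylogs-*
{
  "policy_id": "remove_old_indexes"
}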
In our application, we create an Elasticsearch index on a daily basis, with the index name pattern index-<date> (e.g. index-17-09-2019). Our application accesses the index through an alias that points to the current index. At the moment, attaching the alias to the new index and detaching it from the old one is done by a cron job. Is it possible to do this through an index template instead, as we want to avoid the cron job?
We can attach an alias to an index through an index template, but I am not sure whether we can detach the alias from the old index and add it to the new one through an index template.
That can be done with the built-in index lifecycle management (ILM). Your application keeps sending data to the index alias, and ILM takes care of the rest.
Here is a description of how it can be done; basically, you need to:
1. Create an ILM policy
PUT /_ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d"
          }
        }
      }
    }
  }
}
2. Create an index template with the ILM policy attached
PUT _template/my_template
{
  "index_patterns": ["test-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "my_policy",
    "index.lifecycle.rollover_alias": "test-alias"
  }
}
3. Start the process by creating the initial index
PUT test-000001
{
  "aliases": {
    "test-alias": {
      "is_write_index": true
    }
  }
}
That will handle the creation of a new index every day without an external cron job. You can also extend your policy later on, e.g. to delete old indices 7 days after rollover, as sketched below.
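A minimal sketch of the same policy extended with a delete phase (with rollover in place, min_age is measured from the rollover, not from index creation):
PUT /_ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}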
Hope that helps.