Delete daily filebeat index - elasticsearch

I'm using the Kibana interface to manage ELK in Kubernetes. ELK creates a new filebeat index every day, filebeat-<date>, each several GB in size.
I created an index lifecycle policy, but I can only add it to an existing index.
I want it to be applied to new filebeat indices as well.
Kibana has the concept of index patterns, but I cannot find the place to link a pattern to a policy.
Is this possible to do in Kibana?
I'm using Kibana 7.12.0

You need to add the ILM policy to the index, as described in https://www.elastic.co/guide/en/elasticsearch/reference/7.12/ilm-with-existing-indices.html
However, this should be handled automatically in 7.12, unless you've changed the default config? https://www.elastic.co/guide/en/beats/filebeat/7.12/ilm.html
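For reference, a minimal sketch of the relevant filebeat.yml ILM settings; the policy name is an assumption for a custom policy, not something taken from the linked docs:
# filebeat.yml (sketch): let Filebeat attach an ILM policy to the indices it creates
setup.ilm.enabled: true
setup.ilm.rollover_alias: "filebeat"
setup.ilm.policy_name: "my-filebeat-policy"   # assumed name of your custom policy
setup.ilm.overwrite: false
With this in place, Filebeat loads an index template that applies the policy to every new index it writes, so newly created filebeat indices pick up the policy without manual steps.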

Related

How to do some pre-processing of custom logs before indexing in Elasticsearch?

I am using "Custom Logs Integration" from Fleet. I have done following things and I can see the logs as well in Kibana.
I have created Custom Policy and added "Custom Logs Integration" to that policy.
Assigned my elastic agent (one of my local server) to this custom policy.
Go to the, Kibana -> Discover tab and able to see my logs in Kibana.
Want to do some pre-processing before indexing docs (already done the same using logstash using grok filters), Not sure how can I do the same using Elastic agents?
Note: I am aware about the Ingest Pipeline, but not sure how can I add those pipeline in above steps. (I dn't want to use ingest APIs because I want to automate everything.)
Version:
ElasticSearch : 7.10.2
To add an ingest pipeline to the Custom Logs Integration in Fleet via Kibana (and not the API):
Create the ingest pipeline (Stack Management > Ingest Pipelines)
Edit the concerned Custom Logs integration > Custom log file > Expand the Advanced options > Under Custom configurations, add the pipeline as below (it's YAML format, not JSON):
pipeline: "logs-api-json"
Elasticsearch 7.12.0
You are right that Ingest Pipelines are the right tool for you if you don't want to put a Logstash instance in the middle. The way you can integrate them in your flow is to:
create the ingest pipeline that does your pre-processing job (if you're familiar with Logstash, you won't have much trouble, as the ingest processors are very similar to the Logstash filters),
set the pipeline created above as the default pipeline of the indices in question, so that it is applied to every document you add to those indices. You set the default pipeline in the index settings:
PUT your-index/_settings
{
"index.default_pipeline": "your-ingest-pipeline"
}
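To check that the pipeline behaves as expected before wiring it in, you can run it against a test document with the _simulate endpoint; the sample document below is just an illustrative assumption:
# dry-run the pipeline against a sample document
POST _ingest/pipeline/your-ingest-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "2021-05-04T10:00:00.000Z INFO service started" } }
  ]
}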

Kibana index pattern not saved

I have an ELK stack running on a Kubernetes cluster with security enabled. Everything is running fine and I am able to push data to an index. After logging in to Kibana as an admin user and going to "Discover", it asks me to create an index pattern. I have some metricbeat data, so I create a pattern and save it. But when I go back to Discover, it prompts me to create an index pattern again!
I don't find any errors in the Kibana/Elasticsearch pods.
I'd really appreciate any pointers.
Elasticsearch version: 7.10.1
What finally worked for me was destroying and recreating Kibana. After recreating Kibana, I was able to see all the index patterns I had been trying to save.

Where is the Elasticsearch data stored?

I've installed Filebeat on a server, collecting all the logs from all the containers I have. In Filebeat I indicate the Elasticsearch and Kibana hosts it must send them to (both Elasticsearch and Kibana are running as services on another server). Now all the logs appear in Kibana. My question is: are all those logs that appear there stored somewhere? In Elasticsearch or in Kibana?
Thank you in advance
All the data is stored inside Elasticsearch.
Kibana is a visualization engine on top of Elasticsearch. Kibana itself also stores its configuration data inside an internal Elasticsearch index called .kibana.
Whatever you can see from Kibana always comes from Elasticsearch.
You can learn more in the Elasticsearch and Kibana documentation.
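A quick way to confirm this from Kibana's Dev Tools is to list the indices Elasticsearch holds; the filebeat-* pattern below is just the usual default naming, so adjust it if your setup differs:
# list the filebeat indices and their sizes
GET _cat/indices/filebeat-*?v
Every line returned is an index living on the Elasticsearch data nodes, which is where the log documents themselves are persisted.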

ELK Logstash can't create index in ES

After following this tutorial (https://www.bmc.com/blogs/elasticsearch-logs-beats-logstash/) in order to use Logstash to analyze some log files, my index was created fine the first time. Then I wanted to re-index new files with new filters and new repositories, so I deleted the index via curl -XDELETE, and now when I restart Logstash and Filebeat, the index is not created anymore. I don't see any errors while launching the components.
Do I need to delete something else in order to re-create my index?
OK, since my guess (see comments) was correct, here's the explanation:
To avoid reading and publishing lines of a file over and over again, Filebeat uses a registry to store the current state of the harvester:
The registry file stores the state and location information that Filebeat uses to track where it was last reading.
As you stated, Filebeat successfully harvested the files, sent the lines to Logstash, and Logstash published the events to Elasticsearch, which created the desired index. Since Filebeat updated its registry, no more lines had to be harvested and thus no events were published to Logstash again, even when you deleted the index. When you inserted some new lines, Filebeat reopened the harvester and published only the new lines (which came after the "registry checkpoint") to Logstash.
The default location of the registry file is ${path.data}/registry (see Filebeat's Directory Layout Overview).
Q: "... maybe the curl API call is not the best solution to restart the index"
This has nothing to do with deleting the index. Deleting the index happens inside Elasticsearch. Filebeat has no clue about your actions in Elasticsearch.
Q: Is there a way to re-create an index based on old logs?
Yes, there are a few options you should take into consideration:
You can use the reindex API, which copies documents from one index to another (see the sketch after this list). You can update the documents while reindexing them into the new index.
In contrast to reindex, you can use the update by query API to update documents that will remain in the original index.
Lastly, you could of course delete the registry file. However, this could cause data loss. But for development purposes I guess that's fine.
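A minimal sketch of the first option; the index names are placeholders, not the ones from your setup:
# copy everything from the old index into a freshly created one
POST _reindex
{
  "source": { "index": "old-logs-index" },
  "dest": { "index": "new-logs-index" }
}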
Hope I could help you.

ELK Docker - Kibana saved objects

Does anyone know if it's possible to provide saved objects (dashboards/visualizations) to a dockerized Kibana container during container startup? I didn't notice any specific configuration for this in the elastic.co guides. Are there volumes on the container to which I can copy my .json files?
Thanks
Kibana uses an index in Elasticsearch to store saved searches, visualizations and dashboards.
It creates a new index if the index doesn’t already exist.
kibana.index: ".kibana"
