Provision ILM policy for Elastic Agent (Fleet) streams - elasticsearch

I'm new to Elastic Agent configuration and running the latest ECK build on Kubernetes.
While I really like the visibility Fleet offers, it seems I'm unable to provision custom ILM policies like I can with Beats, for example:
setup.ilm:
  enabled: true
  overwrite: true
  policy_name: "filebeat"
  policy_file: /usr/share/filebeat/ilm-policy.json
  pattern: "{now/d}-000001"
Is there a way I can create a policy that does daily rollovers and only keeps the data for 90 days, preferably in code like the above block?
I've tried to add an ilm config to the Elasticsearch config block, but you can't provide this config to Elasticsearch:
config:
  node.roles: ["data", "ingest", "remote_cluster_client"]
  setup.ilm:
    enabled: true
    overwrite: true
    policy_name: "logs"
    policy_file: /usr/share/elasticsearch/ilm-policy.json
    pattern: "{now/d}-000001"
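For concreteness, the policy being asked for (daily rollover, delete after 90 days) would be an ILM policy body along these lines, the same JSON a Beats policy_file such as ilm-policy.json above contains; this is a sketch of the desired policy, not a Fleet provisioning mechanism:

{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}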

Related

Filebeat Kubernetes cannot output to ElasticSearch

Filebeat on Kubernetes cannot output to Elasticsearch, although Elasticsearch itself is OK. Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
Kubernetes: I use an nginx app to test (image=nginx:latest). The Deployment annotation has been added:
co.elastic.logs/enabled: "true"
pod.yaml (in node1)
But Filebeat cannot output to Elasticsearch; logs and indices for the related input are not seen.
filebeat pod (node1) logs
I expect Filebeat to collect logs from the specified container (Pod) and ship them to Elasticsearch.
@baymax, first off, you don't need to explicitly define the property anywhere:
co.elastic.logs/enabled: "true"
since filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config, which means Filebeat will only read the log files of pods that are annotated as above; however, you haven't provided any template config to be used for reading such log files.
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your filebeat logs, do you see any harvesters being started, handles created, and events published? Posting a snapshot of logs doesn't give a clear picture. Maybe try starting filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
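For comparison, a minimal hints-based setup that simply collects all container logs on the node (which individual pods can then opt out of or tweak via co.elastic.logs/* annotations) would look roughly like this; a sketch based on the documented autodiscover-hints example linked above, with the paths taken from the original config:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log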

How to create multiple filebeats dashboard in Kibana

I have multiple Filebeats running on multiple systems with a custom index name. Filebeat sends data to Logstash, then Logstash sends data to Elasticsearch. Everything is working fine and the logs show up in the Discover tab. But when I try to load the dashboards into Kibana by running 'filebeat setup -e', the dashboards do not load and an error is shown (image is attached).
Filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.dashboards.enabled: true
setup.dashboards.index: "care-stagging-*"
setup.kibana:
  host: "http://xx.xx.xx.xx:5601"
  username: "elastic"
  password: "VKkLOmFXUupzgXNnahp"
  ssl.verification_mode: none
output.logstash:
  hosts: ["xx.xx.xx.xx:5044"]
  index: "care-stagging"
setup.template.name: "care-stagging"
setup.template.pattern: "care-stagging-*"
setup.ilm.enabled: false
setup.template.enabled: true
setup.template.overwrite: false
processors:
  - add_fields:
      fields:
        host.ip: "xx.xx.xx.xx"
logging.metrics.period: 30s
Please share how I can load multiple Filebeat dashboards in Kibana.
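One documented approach for loading dashboards while the Logstash output is enabled is to temporarily point setup at Elasticsearch and Kibana, then revert. Expressed as temporary filebeat.yml overrides (a sketch; the Elasticsearch host is an assumption, as it does not appear in the original post):

# Temporary overrides while running `filebeat setup -e`; revert afterwards.
output.logstash:
  enabled: false
output.elasticsearch:
  hosts: ["xx.xx.xx.xx:9200"]  # assumed Elasticsearch host, not from the original config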

How to Use a Custom Ingest Pipeline with a Filebeat Module

How do I use a custom ingest pipeline with a Filebeat module? In my case, I'm using the apache module.
According to multiple sources, this is supposedly configurable via output.elasticsearch.pipeline / output.elasticsearch.pipelines[pipeline]. Sources follow:
https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html#pipelines-option-es
https://stackoverflow.com/a/58726519/1026263
However, after many attempts at different permutations, I have never been able to influence which ingest pipeline is used by Filebeat; it always uses the module's stock ingest pipeline.
This is just one of the many attempts:
filebeat.modules:
  - module: apache
    access:
      enabled: true
      var.paths: ["/var/log/apache2/custom_access*"]
    error:
      enabled: true
      var.paths: ["/var/log/apache2/custom_error*"]
filebeat.config.modules:
  reload.enabled: true
  reload.period: 5s
output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  pipeline: "apache_with_optional_x_forwarded_for"
Running filebeat with debug (-d "*") shows the following, which, I assume, demonstrates that my specification has been ignored. (I can also tell by the resulting docs in Elasticsearch that my custom pipeline was sidestepped.)
2021-12-16T23:23:47.464Z DEBUG [processors] processing/processors.go:203 Publish event: {
  "@timestamp": "2021-12-16T23:23:47.464Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.10.2",
    "pipeline": "filebeat-7.10.2-apache-access-pipeline"
  },
I have tried this in both Filebeat v6.8 and v7.10 (in the docker.elastic.co/beats/filebeat docker images).
This is similar to these threads, which never had a satisfactory conclusion:
How to use custom ingest pipelines with docker autodiscover
How to specify pipeline for Filebeat Nginx module?
Well, according to this PR on the beats repository, to override the module pipeline you need to specify the custom pipeline in the input configuration, not on the output.
Try this:
filebeat.modules:
  - module: apache
    access:
      enabled: true
      input.pipeline: your-custom-pipeline
      var.paths: ["/var/log/apache2/custom_access*"]
    error:
      enabled: true
      input.pipeline: your-custom-pipeline
      var.paths: ["/var/log/apache2/custom_error*"]
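If the override takes effect, the same debug output shown above should report your custom pipeline under @metadata.pipeline instead of the stock filebeat-7.10.2-apache-access-pipeline.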

Filebeat 7.5.1 Missing a step for custom index names config?

I should be able to see a new custom index name dev-7.5.1-yyyy.MM.dd within Kibana, but instead it is still called filebeat.
I have the following set in my filebeat.yml config file, which should allow me to see a dev-* index in Kibana. However, events are still being indexed as filebeat-7.5.1-yyyy-MM.dd. Filebeat is running on an EC2 instance that can communicate with the Elasticsearch server on port 9200.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.elasticsearch:
  hosts: ["my_es_host_ip:9200"]
  index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.enabled: true
setup.template.name: "dev-%{[agent.version]}"
setup.template.pattern: "dev-%{[agent.version]}-*"
Is there a step I'm missing? From reading over the docs and the reference file, I believe I have everything set correctly.
Appreciate any advice.
Looks like the following setting is needed in this case:
setup.ilm.enabled: false
After adding that to my config, it worked as expected. There is an open issue to improve how to find this out here: https://github.com/elastic/beats/issues/11866
Use this configuration to get a custom index template and name (the index setting belongs under output.elasticsearch, as in the question above):
setup.ilm.enabled: false
setup.template.name: "dev"
setup.template.pattern: "dev-*"
output.elasticsearch:
  index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"
When index lifecycle management (ILM) is enabled, the default index is "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{index_num}", for example, "filebeat-7.13.0-2021-05-25-000001". Custom index settings are ignored when ILM is enabled. If you're sending events to a cluster that supports index lifecycle management, see Index lifecycle management (ILM) to learn how to change the index name.
If index lifecycle management is enabled (which is typically the default), setup.template.name and setup.template.pattern are ignored.
Ref:
https://www.elastic.co/guide/en/beats/filebeat/current/change-index-name.html
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html
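Conversely, if you keep ILM enabled, the ILM docs linked above change the index name through the rollover alias rather than the output index setting; a minimal sketch, assuming the same "dev" name as the question:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "dev"
setup.ilm.pattern: "{now/d}-000001"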

How to configure metricbeat helm chart to output to elasticsearch?

I'm trying to install the metricbeat helm chart to forward my Kubernetes metrics to Elasticsearch.
The default configuration works, but when I configure the output to Elasticsearch, the pod tells me:
Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'metricbeat.yml')
I downloaded the values.yaml and modified output.file in both the daemonset and deployment sections from
output.file:
  path: "/usr/share/metricbeat/data"
  filename: metricbeat
  rotate_every_kb: 10000
  number_of_files: 5
to
output.file:
  enable: false
output.elasticsearch:
  enable: true
  hosts: ["http://192.168.10.156:9200/"]
How do I modify the config to forward metrics to elasticsearch?
According to the fine manual, the property is actually enabled:, not enable:, so I would presume you actually want:
output.file:
  enabled: false
Although, to be honest, I always thought you could have as many outputs as you wish, but that is clearly not true.
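Putting the answer together, the modified values.yaml section would presumably need both the enabled spelling and only one active output, something like this (a sketch assembled from the snippets above; the host is the one from the question):

output.file:
  enabled: false  # disabled so only one output is active
output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.10.156:9200/"]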
