Automated Setup of Kibana and Elasticsearch with Filebeat Module in Elastic Cloud on Kubernetes (ECK)

I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
  --dashboards --pipelines --template \
  -E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a Beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set up Elasticsearch/Kibana with the Apache module additions so that logs shipped by external Apache services' Filebeat instances are ingested and displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.

You can try autodiscovery using a label-based approach.
Config:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.default_config.enabled: "false"
      templates:
        - condition.contains:
            kubernetes.labels.app: "apache"
          config:
            - module: apache
              access:
                enabled: true
                var.paths: ["/path/to/log/apache/access.log*"]
              error:
                enabled: true
                var.paths: ["/path/to/log/apache/error.log*"]

Related

Filebeat Kubernetes cannot output to ElasticSearch

Filebeat on Kubernetes cannot output to Elasticsearch, although Elasticsearch itself is fine. Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
An nginx app is used to test:
image=nginx:latest
The Deployment annotation has been added:
co.elastic.logs/enabled: "true"
pod.yaml (on node1)
But nothing is output to Elasticsearch; no logs or indices for the related input are seen, including in the filebeat pod (node1) logs.
The expectation is that Filebeat collects logs from the specified container (Pod) and ships them to Elasticsearch.
@baymax First off, you don't need to explicitly define this property anywhere:
co.elastic.logs/enabled: "true"
since Filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config, which ensures Filebeat will only read the log files of pods that are annotated as above; however, you haven't provided any template config to be used for reading such log files.
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your Filebeat logs, do you see any harvesters being started, handles being created, and events being published? Posting a snapshot of logs doesn't give a clear picture. Maybe try starting Filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
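A minimal sketch of the second point, with an explicit templates entry so that matching pods get an input config (the label and paths below are illustrative assumptions, not from the question):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config.enabled: false
      templates:
        # Hypothetical condition: adjust the label to match your pods.
        - condition.equals:
            kubernetes.labels.app: "nginx"
          config:
            - type: container
              paths:
                - /var/log/containers/*-${data.container.id}.log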

Filebeat reads all logs, not only the one defined in the configuration

I'm trying to configure Filebeat version 7.17.5 (amd64), libbeat 7.17.5, to read Spring Boot logs and send them via Logstash to Elasticsearch. That part works: logs are sent and I can read them in Kibana. The problem is that I configured Filebeat in /etc/filebeat/filebeat.yml and defined only one log source there, yet Filebeat still collects all the logs under /var/log.
This is my only input config:
filebeat.inputs:
  - type: filestream
    id: some_id
    enabled: true
    paths:
      - "/var/log/dir_with_logs/application.log"
But when I check the status of Filebeat, I see:
[input] log/input.go:171 Configured paths: [/var/log/auth.log* /var/log/secure*]
And I also get logs from the auth and secure files in Kibana, which I don't want.
What am I doing wrong, or what am I missing?
Based on the configured paths of /var/log/auth.log* and /var/log/secure*, I think this is the Filebeat system module. You can disable the system module by renaming /etc/filebeat/modules.d/system.yml to /etc/filebeat/modules.d/system.yml.disabled.
Alternatively you can run the filebeat modules command to disable the module (it simply renames the file for you).
filebeat modules disable system
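To confirm the result, the same command group can list modules; filebeat modules list prints the enabled and disabled modules:

filebeat modules list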

Filebeat's GCP Module keeps getting hash config error

I am currently trying to forward GCP's Cloud Logging to Filebeat, and from there to Elasticsearch, following this doc, with the GCP module settings on Filebeat according to this doc.
Currently I am only trying to forward audit logs, so my gcp.yml module config is as follows:
- module: gcp
  vpcflow:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-flowlogs
    var.subscription_name: filebeat-gcp-vpc-flowlogs-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  firewall:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-firewall
    var.subscription_name: filebeat-gcp-firewall-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  audit:
    enabled: true
    var.project_id: <my prod name>
    var.topic: sample_topic
    var.subscription_name: filebeat-gcp-audit
    var.credentials_file: ${path.config}/<something>.<something>
When I run sudo filebeat setup, I keep getting this error:
2021-05-21T09:02:25.232Z ERROR cfgfile/reload.go:258 Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.firewall' (source:'/etc/filebeat/modules.d/gcp.yml')
I can start the service, but I don't see any logs forwarded from GCP's Cloud Logging pub/sub topic to Elasticsearch.
Help or tips on best practice would also be appreciated.
Update
If I follow the docs here instead, I get the same error, but for audit.

Filebeat 7.5.1 Missing a step for custom index names config?

I should be able to see a new custom index named dev-7.5.1-yyyy.MM.dd in Kibana, but instead it is still called filebeat.
I have the following set in my filebeat.yml config file, which should let me see a dev-* index in Kibana. However, events are still being indexed as filebeat-7.5.1-yyyy.MM.dd. Filebeat is running on an EC2 instance that can communicate with the Elasticsearch server on port 9200.
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.elasticsearch:
  hosts: ["my_es_host_ip:9200"]
  index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.enabled: true
setup.template.name: "dev-%{[agent.version]}"
setup.template.pattern: "dev-%{[agent.version]}-*"
Is there a step I'm missing? From reading over the docs and the reference file, I believe I have everything set correctly.
Appreciate any advice.
Looks like the following setting is needed in this case:
setup.ilm.enabled: false
After adding that to my config, it worked as expected. There is an open issue to improve how discoverable this is: https://github.com/elastic/beats/issues/11866
Use this configuration to get a custom index template and name.
setup.ilm.enabled: false
setup.template.name: "dev"
setup.template.pattern: "dev-*"
index: "dev-%{[agent.version]}-%{+yyyy.MM.dd}"
When index lifecycle management (ILM) is enabled, the default index is "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{index_num}", for example, "filebeat-7.13.0-2021.05.25-000001". Custom index settings are ignored when ILM is enabled. If you're sending events to a cluster that supports index lifecycle management, see Index lifecycle management (ILM) to learn how to change the index name.
If index lifecycle management is enabled (which is typically the default), setup.template.name and setup.template.pattern are ignored.
Ref:
https://www.elastic.co/guide/en/beats/filebeat/current/change-index-name.html
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-template.html
https://www.elastic.co/guide/en/beats/filebeat/current/ilm.html
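Conversely, if you would rather keep ILM enabled, the ILM doc linked above describes changing the rollover alias rather than the index setting. A minimal sketch, with the alias name assumed from the question's naming scheme:

setup.ilm.enabled: true
setup.ilm.rollover_alias: "dev"
setup.ilm.pattern: "{now/d}-000001"

With this, Filebeat writes to the dev alias and ILM manages indices like dev-2021.05.25-000001 behind it.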

How to configure metricbeat helm chart to output to elasticsearch?

I'm trying to install the Metricbeat Helm chart to forward my Kubernetes metrics to Elasticsearch.
The default configuration works, but when I configure the output to Elasticsearch, the pod tells me:
Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'metricbeat.yml')
I downloaded the values.yaml and modified output.file in both the daemonset and deployment sections from
output.file:
  path: "/usr/share/metricbeat/data"
  filename: metricbeat
  rotate_every_kb: 10000
  number_of_files: 5
to
output.file:
  enable: false
output.elasticsearch:
  enable: true
  hosts: ["http://192.168.10.156:9200/"]
How do I modify the config to forward metrics to elasticsearch?
According to the fine manual, the property is actually enabled:, not enable:, so I would presume you actually want:
output.file:
  enabled: false
Although, to be honest, I always thought you could have as many outputs as you wished, but that is clearly not true.
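So a corrected pair of sections in the values.yaml would presumably look like this (the host is copied from the question). If Metricbeat still reports more than one output namespace after that, removing the output.file block entirely is the safe fallback:

output.file:
  enabled: false

output.elasticsearch:
  enabled: true
  hosts: ["http://192.168.10.156:9200/"]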
