how to make filebeat pick up project-specific configuration file - elasticsearch

I have the following yml file in a project directory, not the global Filebeat configuration directory:
filebeat:
  idle_timeout: 5s
  prospectors:
    -
      paths:
        - "data-log/*"
output:
  elasticsearch:
    hosts: ["localhost:9200"]
Running filebeat -configtest produces no output.
Running filebeat produces no output either.
I would like the running filebeat daemon to dynamically pick up the configuration from this directory, assuming that the filebeat command should do that. I know this can be set up in the global config file, but I would rather perform this dynamically.
What am I doing wrong or what assumptions implied here are false?

Try to strace the filebeat process with
strace -fp {pid} -s 1024; the lines you should be looking for are stat({file_name}).
This way you will see whether filebeat resolves the path correctly.
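If strace shows filebeat stat-ing a file other than your project's yml, the usual fix is to pass the config path explicitly. A minimal sketch, assuming the project config is ./filebeat.yml (the -c and -configtest flags also appear in the answers further down):

cd /path/to/project                     # hypothetical project directory
filebeat -c filebeat.yml -configtest    # validate the project-local config
filebeat -c filebeat.yml                # run with exactly that config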

Related

I keep seeing logs in Kibana from files that are not configured in filebeat.yml

I'm a beginner in ELK.
I have Elasticsearch 8.5.3, Filebeat and Kibana all running on the same Windows machine.
In the filebeat.yml I have configured the following paths:
- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\ProgramData\mycompany\Logs\specific.Log
I want Filebeat to ship data from that specific file only.
For some reason, no matter what I configure under paths, Filebeat ships data from all the log files in C:\ProgramData\mycompany\Logs.
Each time I change the paths to test something I restart Filebeat:
filebeat.exe -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml
The path to the yml file is verified and correct.
Yet, the result is the same.
I see in Kibana the data and documents from all the files in that folder.
Filebeat is running in PowerShell and shows no errors there.
I tried deleting the Filebeat registry and it didn't help.
I also tried restarting Elasticsearch, Filebeat and Kibana all together.
What am I missing here?
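One way to narrow this down is to print the configuration Filebeat actually resolved, which shows whether an enabled module or a second input is contributing the extra paths. A hedged sketch, assuming Filebeat 7.x/8.x, where the test config and export config subcommands exist (verify against your version):

# Check that the file parses cleanly
filebeat.exe test config -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml
# Dump the fully resolved configuration filebeat runs with
filebeat.exe export config -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml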

Filebeat Kubernetes cannot output to ElasticSearch

Filebeat on Kubernetes cannot output to Elasticsearch; Elasticsearch itself is OK.
Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log
output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
Kubernetes
I use an nginx app to test:
image=nginx:latest
The Deployment annotation has been added:
co.elastic.logs/enabled: "true"
pod.yaml (in node1)
But it cannot output to Elasticsearch; logs and indexes for the related input are not seen.
filebeat pod (node1) logs
I expect filebeat to collect the logs of the specified container (Pod) into elasticsearch.
@baymax First off, you don't need to explicitly define this property anywhere:
co.elastic.logs/enabled: "true"
since filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config, which ensures filebeat will only read the log files of pods that are annotated as above; however, you haven't provided any template config to be used for reading those log files.
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your filebeat logs, do you see any harvesters being started, handles created and events published? Posting a snapshot of logs doesn't give a clear picture. Maybe try starting filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
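To illustrate the second point, here is a minimal sketch of one way to give annotated pods a config to run with: re-enable hints.default_config so the annotation alone is enough (option names taken from the question's own config; verify against the autodiscover docs linked above):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      # With enabled: true, pods annotated co.elastic.logs/enabled: "true"
      # fall back to this container input when no hints template matches.
      hints.default_config:
        enabled: true
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log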

Filebeat read all logs, not only that one defined in configuration

I am trying to configure filebeat version 7.17.5 (amd64), libbeat 7.17.5, to read Spring Boot logs and send them via logstash to elasticsearch. Everything works: logs are sent and I can read them in Kibana. The problem is that I configured filebeat in /etc/filebeat/filebeat.yml and defined only one source of logs there, yet filebeat still picks up all logs from /var/log.
This is my only input config:
filebeat.inputs:
  - type: filestream
    id: some_id
    enabled: true
    paths:
      - "/var/log/dir_with_logs/application.log"
But when I check the status of filebeat, I see this information:
[input] log/input.go:171 Configured paths: [/var/log/auth.log* /var/log/secure*]
And in Kibana I also get logs from the auth and secure files, which I don't want.
What am I doing wrong, or what am I missing?
Based on the configured paths of /var/log/auth.log* and /var/log/secure*, I think this is the Filebeat system module. You can disable the system module by renaming /etc/filebeat/modules.d/system.yml to /etc/filebeat/modules.d/system.yml.disabled.
Alternatively you can run the filebeat modules command to disable the module (it simply renames the file for you).
filebeat modules disable system
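To confirm the change took effect, filebeat modules list prints which modules are enabled and disabled (a standard subcommand; verify on your version):

filebeat modules list   # "system" should now show up under Disabled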

Content repeat collecting problems while use filebeat

Recently we started using filebeat to collect our system logs into elasticsearch via:
${local_log_file} -> filebeat -> kafka -> logstash -> elasticsearch -> kibana
While testing our system, we found a scenario where filebeat collects logs repeatedly: once there is a change, it re-collects logs from the start of the file.
Here is my configuration for filebeat:
filebeat.prospectors:
  - input_type: log
    paths:
      - /home/XXX/exp/*.log
    scan_frequency: 1s
    #tail_files: true
#================================ Outputs =====================================
#----------------------------- Logstash output --------------------------------
#output.logstash:
#  hosts: ["localhost:5044"]
#----------------------------- Kafka output -----------------------------------
output.kafka:
  enabled: true
  hosts: ["10.10.1.103:9092"]
  topic: egou
#----------------------------- console output ---------------------------------
output.console:
  enabled: true
  pretty: true
Notice:
we construct the log files manually, and we are sure there is a newline at the end of the file
to watch the output on a console, we enable output.console
Once content is appended to the end of the log file, filebeat collects from the beginning of the file again, but we only want it to fetch the change.
filebeat version is 5.6.X
Hope you all can offer a useful hint.
I think this is because the editor you are using creates a new file on save, with new metadata. Filebeat identifies the state of a file using its metadata, not its content.
Try:
echo "something" >> /path/to/file.log
ref: https://discuss.elastic.co/t/filebeat-repeatedly-sending-old-entries-in-log-file/55796
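A quick way to see the difference between an append and an editor save, assuming Linux with GNU coreutils (stat -c '%i' prints the inode, part of the metadata Filebeat's registry keys on):

stat -c '%i' /home/XXX/exp/test.log    # hypothetical file; note the inode
echo "something" >> /home/XXX/exp/test.log
stat -c '%i' /home/XXX/exp/test.log    # same inode: only the new line is collected
# An editor save often writes a temp file and renames it over the original,
# producing a new inode, so filebeat reads the "new" file from the beginning.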

Filebeat doesn't forward data to logstash

I have a setup using elasticsearch, kibana and logstash on one VM and filebeat on the slave machine. I managed to send syslog messages and logs from the auth.log file following the tutorial from here. In the filebeat log I saw that those messages were published, but when I try to send a JSON file I don't see any publish event (I just see "Flushing spooler because of timeout. Events flushed: 0").
My filebeat.yml file is
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        # - /var/log/syslog
        # - /var/log/*.log
        - /home/slave/data_2/*
      input_type: log
      document_type: log
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.132.207:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
PLEASE NOTE that tabs are not allowed in your filebeat.yml! I used Notepad++ with View > Show Symbol > Show White Space and TAB. Sure enough, there was a TAB character on a blank line and filebeat wouldn't start. Use filebeat -c filebeat.yml -configtest and it will give you more information.
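A quick way to hunt for stray tabs without a GUI editor, assuming GNU grep (-P enables Perl-style escapes, -n prints line numbers):

grep -Pn "\t" filebeat.yml   # prints the line number of every TAB character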
Go into your logstash input for filebeat and comment out the tls section!
Don't forget to check your log file permissions. If everything is owned by root, filebeat won't have read access to it.
Set your file group to adm.
sudo chgrp adm /var/log/*.log
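For reference, a minimal sketch of the matching Logstash side under these assumptions: the beats input plugin listening on port 5044, with the ssl options commented out as suggested above (option names from the logstash-input-beats plugin; adjust to your setup):

input {
  beats {
    port => 5044
    # ssl => true
    # ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    # ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}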
