filebeat config changes on old log data - elasticsearch

I have a small question regarding the Filebeat config. I have made the following change:
from:
processors:
#- add_host_metadata: ~
to:
processors:
- add_host_metadata: ~
But this adds the fields only to new logs; the old logs do not have the host metadata fields. Is there any way we can achieve that? SO advised deleting the registry, but then the user was not able to get the logs (Resend old logs from filebeat to logstash). Is this even advisable?
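Worth noting: add_host_metadata runs in Filebeat's publishing pipeline, so it only affects events as they are shipped; documents already indexed in Elasticsearch are left untouched. The registry trick mentioned above forces Filebeat to re-read and re-ship every configured file from the beginning. A minimal sketch, assuming a Linux package install with the default data path:

sudo systemctl stop filebeat
sudo rm -rf /var/lib/filebeat/registry   # Filebeat forgets all stored file offsets
sudo systemctl start filebeat            # configured files are harvested again from line one

The re-shipped lines are indexed as new documents, so everything that was already in Elasticsearch ends up duplicated, which is usually why this is not advisable.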

Related

I keep seeing in Kibana logs from files that are not configured in filebeat.yml

I'm a beginner in ELK.
I have Elasticsearch 8.5.3, Filebeat and Kibana all running on the same Windows machine.
In the filebeat.yml I have configured the following paths:
- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\ProgramData\mycompany\Logs\specific.Log
I want Filebeat to ship data from that specific file only.
For some reason, no matter what I configure under paths
Filebeat ships data from all the log files in C:\ProgramData\mycompany\Logs.
Each time I change the paths to test something I restart Filebeat:
filebeat.exe -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml
The path to the yml file is verified and correct.
Yet, the result is the same.
I see in Kibana the data and documents from all the files in that folder.
Filebeat is running in PowerShell and there are no errors there.
I tried to delete the Filebeat registry and it didn't help.
I also tried restarting Elasticsearch, Filebeat and Kibana all together.
What am I missing here?
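One way to narrow this down is to check which configuration Filebeat actually loads and whether anything besides the filestream input above contributes extra inputs. A hedged sketch using the path from the question:

filebeat.exe test config -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml    # validate the file being loaded
filebeat.exe export config -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml  # print the effective configuration
filebeat.exe modules list -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml   # enabled modules add inputs of their own

If the effective configuration shows another input, an enabled module, or a broader glob than the one above, that would explain the extra files.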

How to send custom logs in a specified path to filebeat running inside docker

I am new to Filebeat and ELK. I am trying to send custom logs using Filebeat directly to Elasticsearch. Both the ELK stack and Filebeat are running inside Docker containers. The custom logs are in /home/username/docker/hello.log. Here is my filebeat.yml file:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["my_ip:9200"]
And here is my custom log file:
This is a custom log file
Sending logs to elastic search
And this is the command I am using to run Filebeat:
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false
When I use the above command to run Filebeat, I can see the logs of the Docker containers on my Kibana dashboard. But I am struggling with how to make Filebeat read my custom logs from the location specified above and show the lines inside the log file on the Kibana dashboard.
Any help would be appreciated.
Filebeat inputs can generally accept multiple log file paths to harvest. In your case, you just need to add the log file location to the paths attribute of your log input, similar to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log
    - /home/username/docker/hello.log
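Because this Filebeat runs inside a container, it can only harvest paths that exist inside that container, so the host directory holding hello.log also has to be mounted into it. A hedged sketch of the docker run command from the question with one extra bind mount (the host path is an assumption, mounted read-only at the same path so the input's paths entry matches):

# Same docker run as above, plus a read-only bind mount of the assumed
# host directory /home/username/docker so the container sees hello.log
# at the path configured in filebeat.inputs.
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  --volume="/home/username/docker:/home/username/docker:ro" \
  docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false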

Filebeat Kubernetes cannot output to ElasticSearch

Filebeat on Kubernetes cannot output to Elasticsearch; Elasticsearch itself is OK.
Filebeat runs as a DaemonSet, and the relevant environment variables have been added.
filebeat.yml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        enabled: false
        type: container
        paths:
          - /var/log/containers/*-${data.container.id}.log

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
Kubernetes
Use nginx app to test:
image=nginx:latest
Deployment annotations have been added.
co.elastic.logs/enabled: "true"
pod.yaml (in node1)
But it cannot output to Elasticsearch; logs and indices for the related input are not seen.
filebeat pod (node1) logs
I expect Filebeat to collect the logs of the specified container (Pod) and ship them to Elasticsearch.
#baymax first off, you don't need to explicitly define the property anywhere:
co.elastic.logs/enabled: "true"
since filebeat, by default, reads all the container log files on the node.
Secondly, you are disabling hints.default_config which ensures filebeat will only read the log files of pods which are annotated as above; however, you haven't provided any template config to be used for reading such log files.
For more info, read: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
Thirdly, in your filebeat logs, do you see any harvesters being started, handles created and events published? Posting a snapshot of logs doesn't give a clear picture. Maybe try starting Filebeat in debug mode for a few minutes and paste the logs here with proper formatting.
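As a hedged illustration of the debug-mode suggestion: on a DaemonSet you can temporarily add debug selectors to the Filebeat container's args (image tag, config path and selector names below follow the reference manifest and are assumptions for your setup):

containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:8.5.3
    # -d takes a comma-separated list of debug selectors; "*" enables everything
    args: [
      "-c", "/etc/filebeat.yml",
      "-e",
      "-d", "autodiscover,kubernetes,publish"
    ]

With that in place, kubectl logs on the Filebeat pod should make it clear whether a harvester ever starts for the nginx container's log file and whether events are published to or rejected by the Elasticsearch output.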

Filebeat read all logs, not only that one defined in configuration

I'm trying to configure Filebeat version 7.17.5 (amd64), libbeat 7.17.5, to read Spring Boot logs and send them via Logstash to Elasticsearch. Everything works, logs are sent and I can read them in Kibana, but the problem is that although I configured Filebeat in /etc/filebeat/filebeat.yml and defined only one source of logs there, Filebeat is still picking up all logs from /var/log.
This is my only input configuration:
filebeat.inputs:
- type: filestream
  id: some_id
  enabled: true
  paths:
    - "/var/log/dir_with_logs/application.log"
But when I check the status of Filebeat, I see the information:
[input] log/input.go:171 Configured paths: [/var/log/auth.log* /var/log/secure*]
And I also have logs from the auth and secure files in Kibana, which I don't want.
What am I doing wrong, or what am I missing?
Based on the configured paths of /var/log/auth.log* and /var/log/secure*, I think this is the Filebeat system module. You can disable the system module by renaming /etc/filebeat/modules.d/system.yml to /etc/filebeat/modules.d/system.yml.disabled.
Alternatively you can run the filebeat modules command to disable the module (it simply renames the file for you).
filebeat modules disable system
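To verify the change took effect, a quick check (a hedged sketch, assuming the DEB/RPM package with its systemd service):

filebeat modules list             # "system" should now be listed under Disabled
sudo systemctl restart filebeat   # restart so the running instance drops the module's inputs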

Filebeat doesn't forward data to logstash

I have a setup using Elasticsearch, Kibana and Logstash on one VM and Filebeat on the slave machine. I managed to send syslog messages and logs from the auth.log file following the tutorial from here. In the Filebeat log I saw that the messages are published, but when I try to send a JSON file I don't see any publish event (I just see Flushing spooler because of timeout. Events flushed: 0).
My filebeat.yml file is
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        # - /var/log/syslog
        # - /var/log/*.log
        - /home/slave/data_2/*
      input_type: log
      document_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.132.207:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
PLEASE NOTE that tabs are not allowed in your filebeat.yml! I used Notepad++ with View > Show Symbol > Show White Space and TAB. Sure enough there was a TAB character in a blank line and Filebeat wouldn't start. Use filebeat -c filebeat.yml -configtest and it will give you more information.
Go into your Logstash input for Filebeat and comment out the tls section!
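For illustration, a hedged sketch of the Beats input file from the same tutorial generation with the TLS lines commented out (file name and key path are assumptions). Note the two sides must match: if the Logstash input no longer uses TLS, the tls section in filebeat.yml has to be removed as well, otherwise the connection handshake fails.

# /etc/logstash/conf.d/02-beats-input.conf (assumed file name)
input {
  beats {
    port => 5044
    # ssl => true
    # ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    # ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}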
Don't forget to check your log file permissions. If everything is owned by root, Filebeat won't have read access to it.
Set your file group to adm.
sudo chgrp adm /var/log/*.log
