unmarshal errors in filebeat configuration - yaml

I have configured filebeat for logstash, but when starting filebeat I get the following error:
main.go:42: CRIT Config error: Error reading config file: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: unmarshal errors:
line 2: cannot unmarshal !!str `paths:
...` into []config.ProspectorConfig. Exiting.
I have configured filebeat with the same configuration on another server, and there it works perfectly, but I don't understand why I am getting this syntax error on this server.
Here is the configuration file:
filebeat:
  prospectors: |-
    paths:
      '/var/log/*.log'
  registry_file: /var/lib/filebeat/registry
  config_dir: /etc/filebeat/conf.d
output:
  elasticsearch:
    enabled: false
    hosts:
      - 52.35.55.85:9200
  logstash:
    enabled: true
    hosts:
      - 52.32.18.237:5044
  file:
    enabled: false
    path: /tmp/filebeat
    filename: filebeat
    rotate_every_kb: 1000
    number_of_files: 7

I don't know anything about filebeat (or even Go, really), but this error message:
cannot unmarshal !!str `paths:
...` into []config.ProspectorConfig. Exiting.
...suggests to me that it's expecting the paths value to be a sequence (YAML parlance for array), not a scalar (string). Instead of this:
paths:
  '/var/log/*.log'
...try this:
paths:
  - '/var/log/*.log'
...or, since the quotes are extraneous here:
paths:
  - /var/log/*.log
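Note also that the |- after prospectors turns everything indented below it into a single literal string, which would explain why the parser says it cannot unmarshal a !!str into []config.ProspectorConfig. A minimal sketch of that section written as an actual list of prospectors, assuming the same Filebeat 1.x prospector syntax used in the question:
filebeat:
  prospectors:
    # each item in this list is one prospector
    - input_type: log
      paths:
        - /var/log/*.log
  registry_file: /var/lib/filebeat/registry
  config_dir: /etc/filebeat/conf.d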

Related

Filebeat unable to send logs to Kafka

Filebeat is unable to send logs from a particular folder; this is the application logs folder.
Things that have been tried:
Created a new topic in Kafka to retest the settings.
Checked file permissions for the folder and the files to send (see the commands sketched after the configurations below).
Updated Filebeat from 5.5 to 6.7.
Changed from filebeat.prospectors to filebeat.inputs.
Working configuration:
filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log
  fields_under_root: true
output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
With this I am able to see all the logs in "testtopic".
Not working configuration:
filebeat.inputs:
- type: log
  paths:
    - /app/log/server/*.log
  fields_under_root: true
output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
Expected result: logs from the path /app/log/server/*.log should be sent to Kafka.
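A generic way to sanity-check the permissions item from the list above (paths are just examples) is to list the directory and run Filebeat's own config test:
# confirm the directory and files are readable by the user Filebeat runs as
ls -ld /app/log/server
ls -l /app/log/server/*.log
# ask Filebeat 6.x to validate the configuration it will actually load
filebeat test config -c /etc/filebeat/filebeat.yml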

Filebeat & test inputs

I'm working on a Filebeat solution and I'm having a problem setting up my configuration. Let me explain my setup:
I have an app that produces a CSV file containing data that I want to ingest into Elasticsearch using Filebeat.
I'm using Filebeat 5.6.4 running on a Windows machine.
Provided below is my filebeat.yml configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\App\fitbit-daily-activites-heart-rate-*.log
output.elasticsearch:
  hosts: ["http://esldemo.com:9200"]
  index: "fitbit-daily-activites-heartrate-%{+yyyy.MM.dd}"
setup.template:
  name: "fitbit-daily-activites-heartrate"
  pattern: "fitbit-daily-activites-heartrate-*"
  fields: "fitbit-heartrate-fields.yml"
  overwrite: false
  settings:
    index.number_of_shards: 1
    index.number_of_replicas: 0
And my data looks like this:
0,2018-12-13 00:00:02.000,66.0,$
1,2018-12-13 00:00:07.000,66.0,$
2,2018-12-13 00:00:12.000,67.0,$
3,2018-12-13 00:00:17.000,67.0,$
4,2018-12-13 00:00:27.000,67.0,$
5,2018-12-13 00:00:37.000,66.0,$
6,2018-12-13 00:00:52.000,66.0,$
I'm trying to figure out why my configuration is not picking up my data and outputting it to Elasticsearch. Please help.
There are some differences in the way you configure Filebeat in versions 5.6.X and in the 6.X branch.
For 5.6.X you need to configure your input like this:
filebeat.prospectors:
- input_type: log
  paths:
    - 'C:/App/fitbit-daily-activites-heart-rate-*.log'
You also need to put your path between single quotes and use forward slashes.
Filebeat 5.6.X configuration
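Putting that together with the output section from the question, a minimal 5.6.x-style sketch might look like this (hosts and index copied from above; the setup.template block is left out because, as far as I know, template loading is also configured differently on the 5.x branch):
filebeat.prospectors:
- input_type: log
  paths:
    - 'C:/App/fitbit-daily-activites-heart-rate-*.log'
output.elasticsearch:
  hosts: ["http://esldemo.com:9200"]
  index: "fitbit-daily-activites-heartrate-%{+yyyy.MM.dd}"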

Where to find the valid structure of filebeat?

I'm trying to configure a simple filebeat.yml file, but I get syntax errors and I cannot find more details on what the correct structure should be. Here is my filebeat.yml file, with which I get the error "line 8: did not find expected '-' indicator":
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/telnet.log
      # fields: {nodeIP: "130.245.82.32"}
      document_type: telnet
     - input_type: log
       paths:
         - /var/log/ssh.log
       document_type: myssh
  registry_file: /var/lib/filebeat/registry
.......
How can I find what it expects to see?
I found the bug in my YML code: there is one extra space before the second input_type, and that messed up everything.
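For comparison, a corrected sketch with both list items starting in the same column (the stray leading space before the second - input_type is exactly what produces "did not find expected '-' indicator"):
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/telnet.log
      document_type: telnet
    - input_type: log
      paths:
        - /var/log/ssh.log
      document_type: myssh
  registry_file: /var/lib/filebeat/registry
Depending on the Filebeat version, filebeat -c filebeat.yml -configtest will also report the offending line number, which helps track down this kind of mistake.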

Filebeat is processing all the logs instead of the specified application logs

I have an app server where I have configured filebeat (through Chef) to extract the logs and publish them to logstash (a separate ELK server), and subsequently to ES and Kibana.
I have configured filebeat to process logs only from /opt/app_logs/*.log, but it seems to be reading logs from other locations too: in the /etc/filebeat configuration directory I have filebeat.full.yml and other yml files that were generated automatically, and they seem to contain all those other file locations. Because of the huge amount of logs, the logstash service runs out of memory within minutes with logstash.log. How can I stop the other yml files from being autogenerated?
I tried to remove this file and also tried to comment out all the /var/log paths from the prospectors, but then filebeat itself does not start.
filebeat.yml file:
filebeat:
  prospectors: []
  registry_file: "/var/lib/filebeat/registry"
  config_dir: "/etc/filebeat"
output:
  logstash:
    hosts:
      - elk_host:5044
    index: logstash-filebeat
shipper:
  name: serverA
  tags:
    - A
logging:
  to_files: 'true'
  files:
    path: "/var/log/filebeat"
    name: filebeat_log
    rotateeverybytes: '10485760'
  level: info
prospectors:
  - paths:
      - "/opt/app_logs/*.log"
    encoding: plain
    input_type: log
    ignore_older: 24h
The main problem with your configuration is that for Filebeat 1.2.3 you have the prospectors list defined twice, and the second one is not in the correct location.
The second problem is that you have defined the config_dir as /etc/filebeat. config_dir is used to specify an additional directory where to look for config files. It should never be set to /etc/filebeat because this is where the main config file should be located. See https://stackoverflow.com/a/39987501/503798 for usage information.
A third problem is that you have used string types in to_files and rotateeverybytes. They should be boolean and integer types respectively.
Here's how the config should look for Filebeat 1.x.
filebeat:
  registry_file: "/var/lib/filebeat/registry"
  config_dir: "/etc/filebeat/conf.d"
  prospectors:
    - paths:
        - "/opt/app_logs/*.log"
      encoding: plain
      input_type: log
      ignore_older: 24h
output:
  logstash:
    hosts:
      - elk_host:5044
    index: logstash-filebeat
shipper:
  name: serverA
  tags:
    - A
logging:
  to_files: true
  files:
    path: "/var/log/filebeat"
    name: filebeat_log
    rotateeverybytes: 10485760
  level: info
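Since config_dir points at /etc/filebeat/conf.d here, any additional prospector files would live in that directory. A hypothetical example of such a file (the filename and path are made up), assuming each extra file repeats the filebeat/prospectors hierarchy the same way as the main config:
# /etc/filebeat/conf.d/other_logs.yml (hypothetical)
filebeat:
  prospectors:
    - paths:
        - /opt/app_logs/other/*.log
      input_type: log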
I highly recommend that you upgrade to Filebeat 5.x because it has better configuration validation using filebeat -configtest.

Filebeat doesn't forward data to logstash

I have a setup using Elasticsearch, Kibana, and Logstash on one VM, and filebeat on the slave machine. I managed to send syslog messages and logs from the auth.log file following the tutorial from here. In the filebeat log I saw that the messages were published, but when I try to send a json file I don't see any publish event (I just see "Flushing spooler because of timeout. Events flushed: 0").
My filebeat.yml file is:
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        # - /var/log/syslog
        # - /var/log/*.log
        - /home/slave/data_2/*
      input_type: log
      document_type: log
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.132.207:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
shipper:
logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
Please note that tabs are not allowed in your filebeat.yml! I used Notepad++ with View > Show Symbol > Show White Space and TAB. Sure enough, there was a TAB character in a blank line and filebeat wouldn't start. Use filebeat -c filebeat.yml -configtest and it will give you more information.
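If you want to hunt for stray tabs without an editor, something like this works (GNU grep assumed):
# list any line that contains a literal tab character
grep -nP '\t' /etc/filebeat/filebeat.yml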
Go into your logstash input for filebeat and comment out the tls section!
Don't forget to check your log file permissions. If everything is owned by root with restrictive permissions, filebeat won't have read access to it.
Set your file group to adm.
sudo chgrp adm /var/log/*.log
