Filebeat not starting in windows - elasticsearch

I'm facing a problem starting up Filebeat on Windows 10. I modified the Filebeat prospector log path to point at the Elasticsearch log folder on my local machine's E: drive, and I validated the format of filebeat.yml after making the change, but I still get the error below on startup.
Filebeat version : 6.2.3
Windows version: 64 bit
filebeat.yml (validated YAML format):
filebeat.prospectors:
-
  type: log
  enabled: true
  paths:
    - 'E:\Research\ELK\elasticsearch-6.2.3\logs\*.log'

filebeat.config.modules:
  path: '${path.config}/modules.d/*.yml'
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:
  host: 'localhost:5601'

output.elasticsearch:
  hosts:
    - 'localhost:9200'
  username: elastic
  password: elastic
Filebeat Startup Log:
E:\Research\ELK\filebeat-6.2.3-windows-x86_64>filebeat --setup -e
2018-03-24T22:58:39.660+0530 INFO instance/beat.go:468 Home path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Config path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Data path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data] Logs path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\logs]
2018-03-24T22:58:39.661+0530 INFO instance/beat.go:475 Beat UUID: f818bcc0-25bb-4545-bcd4-3523366a4c0e
2018-03-24T22:58:39.662+0530 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-03-24T22:58:39.662+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.665+0530 INFO pipeline/module.go:76 Beat name: DESKTOP-J932HJH
2018-03-24T22:58:39.666+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-03-24T22:58:39.666+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.672+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:58:39.672+0530 INFO kibana/client.go:69 Kibana url: http://localhost:5601
2018-03-24T22:59:08.882+0530 INFO instance/beat.go:583 Kibana dashboards successfully loaded.
2018-03-24T22:59:08.882+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:59:08.885+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:59:08.888+0530 INFO instance/beat.go:301 filebeat start running.
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:108 Loading registrar data from E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data\registry
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:119 States Loaded from registrar: 5
2018-03-24T22:59:08.888+0530 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-03-24T22:59:08.889+0530 INFO log/prospector.go:111 Configured paths: [E:\Research\ELK\elasticsearch-6.2.3\logs\*.log]
2018-03-24T22:59:08.890+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log
2018-03-24T22:59:08.892+0530 ERROR fileset/factory.go:69 Error creating prospector: No paths were defined for prospector accessing config
2018-03-24T22:59:08.892+0530 INFO crawler/crawler.go:109 Stopping Crawler
2018-03-24T22:59:08.893+0530 INFO crawler/crawler.go:119 Stopping 1 prospectors
2018-03-24T22:59:08.897+0530 INFO log/prospector.go:410 Scan aborted because prospector stopped.
2018-03-24T22:59:08.897+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch_deprecation.log
2018-03-24T22:59:08.897+0530 INFO prospector/prospector.go:121 Prospector ticker stopped
2018-03-24T22:59:08.898+0530 INFO prospector/prospector.go:138 Stopping Prospector: 18361622063543553778
2018-03-24T22:59:08.898+0530 INFO log/harvester.go:237 Reader was closed: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log. Closing.
2018-03-24T22:59:08.898+0530 INFO crawler/crawler.go:135 Crawler stopped
2018-03-24T22:59:08.899+0530 INFO registrar/registrar.go:210 Stopping Registrar
2018-03-24T22:59:08.908+0530 INFO registrar/registrar.go:165 Ending Registrar
2018-03-24T22:59:08.910+0530 INFO instance/beat.go:308 filebeat stopped.
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:132 Total non-zero metrics
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:133 Uptime: 29.3387858s
2018-03-24T22:59:08.949+0530 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-03-24T22:59:08.950+0530 ERROR instance/beat.go:667 Exiting: No paths were defined for prospector accessing config
Exiting: No paths were defined for prospector accessing config

Check the path ${path.config}/modules.d/, or check from the command line with "filebeat.exe modules list", whether any modules are enabled that do not work on Windows.
For instance, the system module (system.yml) does not run on plain Windows, because there is no syslog. But the system module is enabled by default, so you have to disable it first ("filebeat.exe modules disable system").
With it enabled, I ran into exactly the same error message, and Filebeat stopped.

Rewrite the first part of the yml using this format:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
Also remove the empty line, and pay attention to the indentation.
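To catch this class of mistake before starting Filebeat, you can run `filebeat test config -c filebeat.yml`. Purely as an illustration, here is a rough Python sketch (a line-based heuristic, not a real YAML parser; the function name is mine) that flags prospector items with no `paths:` key:

```python
# Rough sanity check for the "No paths were defined" class of error: verify
# that every prospector/input item in a filebeat.yml declares a `paths:` key.
# This is a line-based heuristic, NOT a real YAML parser; for an authoritative
# check, run `filebeat test config` (or `filebeat.exe test config` on Windows).

def prospectors_have_paths(cfg_text: str) -> bool:
    """Return True if every list item under filebeat.prospectors/inputs
    declares a `paths:` key. Heuristic: an item starts at a flush-left
    `- type:` (or a bare `-`) and ends at the next flush-left key."""
    items = []          # one bool per prospector item: did we see `paths:`?
    in_item = False
    has_paths = False
    for line in cfg_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("- type:") or stripped == "-":
            if in_item:
                items.append(has_paths)
            in_item, has_paths = True, False
        elif in_item and stripped.startswith("paths:"):
            has_paths = True
        elif in_item and line and not line[0].isspace() and not stripped.startswith("-"):
            # a flush-left key (e.g. output.elasticsearch:) ends the item list;
            # it also catches flattened configs where sub-keys lost their indent
            items.append(has_paths)
            in_item = False
    if in_item:
        items.append(has_paths)
    return bool(items) and all(items)
```

Note that a flattened config, where `type:` and `paths:` sit at column zero, fails this check just like a config with no paths at all, which mirrors how Filebeat reads it.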

I understand that this topic is a bit old; however, looking at the number of views it has received at the time of posting this (June 2019), I think it is safe to add more information, as this error is fairly frustrating to hit while being very easy to fix.
Before I explain what I did, allow me to say that I had this problem on a Linux system, but the problem/solution should be the same on all platforms.
After updating logback-spring.xml and restarting the service, it kept refusing to start, spitting back the following error:
ERROR instance/beat.go:824 Exiting: Can only start an input when all related states are finished: {Id:163850-64780 Finished:false Fileinfo:0xc42016c1a0 Source:/some/path/here/error.log Offset:0 Timestamp:2019-06-13 09:15:35.481163602 -0400 EDT m=+0.107516982 TTL:-1ns Type:log Meta:map[] FileStateOS:163850-64780}
My solution was simply to edit /etc/filebeat/filebeat.yml and comment out as much as I could, going back to a nearly vanilla/basic configuration.
After having done so, restarting Filebeat worked; the root cause turned out to be a duplicate path entry shared with another file somewhere on the system, possibly under the modules.
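For reference, "nearly vanilla" here means something like the minimal configuration below (the input path is a placeholder, and on Filebeat 6.x the top-level key is filebeat.prospectors rather than filebeat.inputs). Keeping each log path claimed by exactly one input or module is what avoids the duplicate-state error:

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log   # placeholder path; list it in only one input

output.elasticsearch:
  hosts: ["localhost:9200"]
```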

Related

Filebeat not creating index in Opensearch

I have installed filebeat-oss 7.12.0, opensearch-2.4.0 and opensearchDashboard-2.4.0 on Windows.
Every service is working fine, but no index is getting created in the OpenSearch dashboard, and there is no error.
Logs are:
INFO log/harvester.go:302 Harvester started for file: D:\data\logs.txt
2022-12-08T18:28:17.584+0530 INFO [crawler] beater/crawler.go:141 Starting input (ID: 16780016071726099597)
2022-12-08T18:28:17.585+0530 INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 2
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:164 Config reloader started
2022-12-08T18:28:17.584+0530 INFO [input.filestream] compat/compat.go:111 Input filestream starting
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:224 Loading of config files completed.
2022-12-08T18:28:20.428+0530 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2022-12-08T18:28:21.428+0530 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(http://localhost:9200))
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:223 done
2022-12-08T18:28:21.433+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.537+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.620+0530 INFO template/load.go:117 Try loading template filebeat-7.12.0 to Elasticsearch
filebeat.yml is:
filebeat.inputs:
- type: log
  paths:
    - D:\data\*
- type: filestream
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\data\*

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

#============================== Kibana =====================================
setup.kibana:
  host: "localhost:5601"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
I don't know what the problem is. No index named filebeat-7.12.0 is created in the OpenSearch dashboard.
@Android see my reply on this thread: https://stackoverflow.com/a/74984260/6101900.
You cannot forward events from Filebeat to OpenSearch, since it is not Elasticsearch.

Run filebeat on windows 10

I'm trying to run Filebeat on Windows 10 and send data to Elasticsearch and Kibana, all on localhost. This is my config file, filebeat.yml:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - D:\AppData\Elastic\filebeat\logs
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
# filestream is an input for collecting log messages from files. It is going to replace log input in the future.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    - D:\AppData\Elastic\filebeat\logs
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "localhost:5601"

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
I've run
./filebeat -c filebeat.yml -e
This is the result:
2021-09-20T09:55:05.324+0700 INFO instance/beat.go:665 Home path: [D:\AppData\Elastic\filebeat] Config path: [D:\AppData\Elastic\filebeat] Data path: [D:\AppData\Elastic\filebeat\data] Logs path: [D:\AppData\Elastic\filebeat\logs]
2021-09-20T09:55:05.327+0700 INFO instance/beat.go:673 Beat ID: 3957662b-f353-4de0-a6a7-3260cb6481ba
2021-09-20T09:55:05.375+0700 INFO [beat] instance/beat.go:1014 Beat info {"system_info": {"beat": {"path": {"config": "D:\\AppData\\Elastic\\filebeat", "data": "D:\\AppData\\Elastic\\filebeat\\data", "home": "D:\\AppData\\Elastic\\filebeat", "logs": "D:\\AppData\\Elastic\\filebeat\\logs"}, "type": "filebeat", "uuid": "3957662b-f353-4de0-a6a7-3260cb6481ba"}}}
2021-09-20T09:55:05.376+0700 INFO [beat] instance/beat.go:1023 Build info {"system_info": {"build": {"commit": "703d589a09cfdbfd7f84c1d990b50b6b7f62ac29", "libbeat": "7.14.1", "time": "2021-08-26T09:12:57.000Z", "version": "7.14.1"}}}
2021-09-20T09:55:05.376+0700 INFO [beat] instance/beat.go:1026 Go runtime info {"system_info": {"go": {"os":"windows","arch":"amd64","max_procs":12,"version":"go1.16.6"}}}
2021-09-20T09:55:05.403+0700 INFO [beat] instance/beat.go:1030 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-09-20T08:27:03.12+07:00","name":"nnhai2","ip":["fe80::6036:7939:ebe1:1d3e/64","192.168.82.42/23","fe80::984a:b076:b82f:bedf/64","169.254.190.223/16","fe80::d148:87f2:9bc8:8452/64","169.254.132.82/16","fe80::c4c:5978:a65:9c2a/64","169.254.156.42/16","fe80::488c:c4a3:51de:f987/64","169.254.249.135/16","fe80::e1fb:7ed2:192d:c665/64","169.254.198.101/16","::1/128","127.0.0.1/8","fe80::a42f:ed21:3139:b3a7/64","172.28.96.1/20"],"kernel_version":"10.0.19041.1237 (WinBuild.160101.0800)","mac":["70:b5:e8:5a:d1:0a","ac:82:47:8d:80:2e","ac:82:47:8d:80:2f","ae:82:47:8d:80:2e","00:09:0f:fe:00:01","ac:82:47:8d:80:32","00:15:5d:d4:9f:62"],"os":{"type":"windows","family":"windows","platform":"windows","name":"Windows 10 Pro","version":"10.0","major":10,"minor":0,"patch":0,"build":"19042.1237"},"timezone":"+07","timezone_offset_sec":25200,"id":"85952915-f150-4943-835a-55ae79b7bcb0"}}}
2021-09-20T09:55:05.404+0700 INFO [beat] instance/beat.go:1059 Process info {"system_info": {"process": {"cwd": "D:\\AppData\\Elastic\\filebeat", "exe": "D:\\AppData\\Elastic\\filebeat\\filebeat.exe", "name": "filebeat.exe", "pid": 15268, "ppid": 21388, "start_time": "2021-09-20T09:55:04.931+0700"}}}
2021-09-20T09:55:05.405+0700 INFO instance/beat.go:309 Setup Beat: filebeat; Version: 7.14.1
2021-09-20T09:55:05.405+0700 INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.14.1' as ILM is enabled.
2021-09-20T09:55:05.405+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-09-20T09:55:05.406+0700 INFO [publisher] pipeline/module.go:113 Beat name: nnhai2
2021-09-20T09:55:05.410+0700 INFO instance/beat.go:473 filebeat start running.
2021-09-20T09:55:05.410+0700 INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2021-09-20T09:55:05.418+0700 INFO memlog/store.go:119 Loading data file of 'D:\AppData\Elastic\filebeat\data\registry\filebeat' succeeded. Active transaction id=0
2021-09-20T09:55:05.418+0700 INFO memlog/store.go:124 Finished loading transaction log file for 'D:\AppData\Elastic\filebeat\data\registry\filebeat'. Active transaction id=0
2021-09-20T09:55:05.420+0700 INFO [registrar] registrar/registrar.go:109 States Loaded from registrar: 0
2021-09-20T09:55:05.420+0700 INFO [crawler] beater/crawler.go:71 Loading Inputs: 2
2021-09-20T09:55:05.420+0700 INFO [input] log/input.go:164 Configured paths: [D:\AppData\Elastic\filebeat\logs] {"input_id": "444806ec-503a-4a80-812f-a8c78e3f69a4"}
2021-09-20T09:55:05.421+0700 INFO [crawler] beater/crawler.go:141 Starting input (ID: 1263043090716372778)
2021-09-20T09:55:05.456+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_deprecation.log c:\ProgramData\Elastic\Elasticsearch\logs\*_deprecation.json] {"input_id": "aa771bd1-e31c-4061-bfe6-2897ff20dde4"}
2021-09-20T09:55:05.456+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\gc.log.* c:\ProgramData\Elastic\Elasticsearch\logs\gc.log] {"input_id": "5c90cdc0-ca6b-4d51-a33c-3ce661ff324b"}
2021-09-20T09:55:05.457+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*.log c:\ProgramData\Elastic\Elasticsearch\logs\*_server.json] {"input_id": "c7ba61e1-c4cb-42e6-8d9e-7acaa5c0d982"}
2021-09-20T09:55:05.457+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_index_search_slowlog.log c:\ProgramData\Elastic\Elasticsearch\logs\*_index_indexing_slowlog.log c:\ProgramData\Elastic\Elasticsearch\logs\*_index_search_slowlog.json c:\ProgramData\Elastic\Elasticsearch\logs\*_index_indexing_slowlog.json] {"input_id": "a83c170f-f55c-4de2-b6e3-969a3686c403"}
2021-09-20T09:55:05.458+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_access.log c:\ProgramData\Elastic\Elasticsearch\logs\*_audit.log c:\ProgramData\Elastic\Elasticsearch\logs\*_audit.json] {"input_id": "0696e6da-a676-4e5b-a550-0165db7b89af"}
2021-09-20T09:55:05.472+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\MySQL\MySQL Server*\error.log*] {"input_id": "cc0b720d-c796-47a2-87e7-7a3244fe8174"}
2021-09-20T09:55:05.472+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\MySQL\MySQL Server*\mysql-slow.log*] {"input_id": "2129affd-14ec-4468-8710-39eccfffb356"}
2021-09-20T09:55:05.487+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\nginx\logs\*access.log*] {"input_id": "44009138-8a88-4b07-8bd2-d04ab9d520bd"}
2021-09-20T09:55:05.487+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\nginx\logs\error.log*] {"input_id": "daa74eb0-538d-45fc-b657-13431186e186"}
2021-09-20T09:55:05.488+0700 INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
2021-09-20T09:55:05.488+0700 INFO cfgfile/reload.go:164 Config reloader started
2021-09-20T09:55:05.498+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*.log c:\ProgramData\Elastic\Elasticsearch\logs\*_server.json] {"input_id": "77a9d5c1-6f98-414d-b368-dec4f7163ed2"}
2021-09-20T09:55:05.499+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_index_search_slowlog.log c:\ProgramData\Elastic\Elasticsearch\logs\*_index_indexing_slowlog.log c:\ProgramData\Elastic\Elasticsearch\logs\*_index_search_slowlog.json c:\ProgramData\Elastic\Elasticsearch\logs\*_index_indexing_slowlog.json] {"input_id": "beaa87b1-8ce8-4374-bee5-7372846a1968"}
2021-09-20T09:55:05.500+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_access.log c:\ProgramData\Elastic\Elasticsearch\logs\*_audit.log c:\ProgramData\Elastic\Elasticsearch\logs\*_audit.json] {"input_id": "b6c46042-3acf-440e-b788-3dfdaf789c10"}
2021-09-20T09:55:05.500+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\*_deprecation.log c:\ProgramData\Elastic\Elasticsearch\logs\*_deprecation.json] {"input_id": "d673b872-e9b2-496d-96b5-1e9792169b76"}
2021-09-20T09:55:05.501+0700 INFO [input] log/input.go:164 Configured paths: [c:\ProgramData\Elastic\Elasticsearch\logs\gc.log.* c:\ProgramData\Elastic\Elasticsearch\logs\gc.log] {"input_id": "5e4465a8-9ffb-42c6-ab6f-c9d269419ed4"}
2021-09-20T09:55:05.501+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-09-20T09:55:08.389+0700 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2021-09-20T09:55:15.302+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
2021-09-20T09:55:15.454+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-gc-pipeline"}
2021-09-20T09:55:15.874+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-server-pipeline"}
2021-09-20T09:55:15.986+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-server-pipeline-plaintext"}
2021-09-20T09:55:16.108+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-server-pipeline-json"}
2021-09-20T09:55:16.227+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-slowlog-pipeline"}
2021-09-20T09:55:16.335+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-slowlog-pipeline-plaintext"}
2021-09-20T09:55:16.450+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-slowlog-pipeline-json"}
2021-09-20T09:55:16.558+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-audit-pipeline"}
2021-09-20T09:55:16.674+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-audit-pipeline-json"}
2021-09-20T09:55:16.789+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-audit-pipeline-plaintext"}
2021-09-20T09:55:16.898+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-deprecation-pipeline"}
2021-09-20T09:55:17.003+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-deprecation-pipeline-plaintext"}
2021-09-20T09:55:17.111+0700 INFO [modules] fileset/pipelines.go:133 Elasticsearch pipeline loaded. {"pipeline": "filebeat-7.14.1-elasticsearch-deprecation-pipeline-json"}
2021-09-20T09:55:17.116+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\MySQL\MySQL Server*\error.log*] {"input_id": "0043e24c-4f0f-487a-ab3e-2d2254e613ac"}
2021-09-20T09:55:17.117+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\MySQL\MySQL Server*\mysql-slow.log*] {"input_id": "eddd4b6e-3b7e-4cca-a0b0-e5422c6b7ccf"}
2021-09-20T09:55:17.117+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-09-20T09:55:17.120+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
2021-09-20T09:55:17.137+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\nginx\logs\*access.log*] {"input_id": "4af5fcde-53a7-4e67-9819-c153919b5f05"}
2021-09-20T09:55:17.137+0700 INFO [input] log/input.go:164 Configured paths: [c:\programdata\nginx\logs\error.log*] {"input_id": "f4dafaa1-89cf-48d8-927e-eb292c5b186f"}
2021-09-20T09:55:17.137+0700 INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://localhost:9200
2021-09-20T09:55:17.141+0700 INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.14.1
2021-09-20T09:55:17.153+0700 INFO cfgfile/reload.go:224 Loading of config files completed.
2021-09-20T09:55:35.468+0700 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":78,"time":{"ms":93}},"total":{"ticks":390,"time":{"ms":405},"value":390},"user":{"ticks":312,"time":{"ms":312}}},"handles":{"open":306},"info":{"ephemeral_id":"f9fdf685-5d19-459c-a646-fceea8d53c4e","uptime":{"ms":30152},"version":"7.14.1"},"memstats":{"gc_next":19839152,"memory_alloc":12307216,"memory_sys":34381016,"memory_total":62482208,"rss":62062592},"runtime":{"goroutines":78}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":3,"starts":3},"reloads":1,"scans":1},"output":{"events":{"active":0},"type":"elasticsearch"},"pipeline":{"clients":10,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":12}}}}}
2021-09-20T09:56:05.478+0700 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":109,"time":{"ms":32}},"total":{"ticks":421,"time":{"ms":32},"value":421},"user":{"ticks":312}},"handles":{"open":307},"info":{"ephemeral_id":"f9fdf685-5d19-459c-a646-fceea8d53c4e","uptime":{"ms":60164},"version":"7.14.1"},"memstats":{"gc_next":19839152,"memory_alloc":12714808,"memory_total":62889800,"rss":62160896},"runtime":{"goroutines":78}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"active":0}},"pipeline":{"clients":10,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}
2021-09-20T09:56:35.470+0700 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":{"ms":46}},"total":{"ticks":452,"time":{"ms":46},"value":452},"user":{"ticks":312}},"handles":{"open":304},"info":{"ephemeral_id":"f9fdf685-5d19-459c-a646-fceea8d53c4e","uptime":{"ms":90157},"version":"7.14.1"},"memstats":{"gc_next":19839152,"memory_alloc":13110648,"memory_total":63285640,"rss":62164992},"runtime":{"goroutines":72}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":3}},"output":{"events":{"active":0}},"pipeline":{"clients":10,"events":{"active":0}}},"registrar":{"states":{"current":0}}}}}
But I haven't seen any data in Kibana.
This is the result of GET _cat/indices?v
#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.14/security-minimal-setup.html to enable security.
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .geoip_databases qeZJyRoKRlW7Xu7EE2ytcw 1 0 42 0 40.8mb 40.8mb
green open .apm-custom-link 5aba-WkIS5yl-Eui7vpfcQ 1 0 0 0 208b 208b
green open .kibana_task_manager_7.14.1_001 1zkRA6c6SXiMcqCtSyrSqQ 1 0 14 344 1.4mb 1.4mb
green open .apm-agent-configuration dppb9LlKQLWTzYBiLZjViA 1 0 0 0 208b 208b
yellow open filebeat-7.14.1-2021.09.19-000001 TpsVGELhRwC-_dxGH7nGKQ 1 1 0 0 208b 208b
green open .async-search KZgq-leNT_qt_dM8TOUQ6A 1 0 0 0 231b 231b
green open .kibana_7.14.1_001 D4UISLPMQlGmCjNgGIrOTw 1 0 2251 11 2.7mb 2.7mb
green open .kibana-event-log-7.14.1-000001 cspR3zh9T1emwvNA131noQ 1 0 3 0 16.4kb 16.4kb
green open .tasks 5dHd_BZpSVilDmVQy6kE7w 1 0 4 0 27.3kb 27.3kb
Does Filebeat require a specific log file structure, or can I put anything in there? And where can I get a sample log file to test the connection, to put in my folder at D:\AppData\Elastic\filebeat\logs?
Also, where can I find best practices for configuring Filebeat? I've read the document at https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html, but it is too simple; many things are not explained, such as how to configure and test modules (there are dozens of modules: pensando, postgresql, proofpoint, rabbitmq, ...).
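For reference, the log input has no format requirement at all: every line of a plain-text file becomes one event, so any content works for a connectivity test. A quick way to produce such a file (filename and path here are placeholders; point the input's paths at wherever you write it):

```python
from datetime import datetime, timezone

# Each line of a plain-text file becomes one Filebeat event, so any
# content is fine for a connectivity test. "sample.log" is a placeholder.
lines = [
    f"{datetime.now(timezone.utc).isoformat()} INFO sample message {i}"
    for i in range(3)
]
with open("sample.log", "w") as f:
    f.write("\n".join(lines) + "\n")
```

After dropping such a file into the watched folder, `filebeat test config -e` and `filebeat test output` are also handy for verifying the configuration and the Elasticsearch connection before chasing missing data in Kibana.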

Filebeat is not sending the logs to Logstash

I am using Filebeat and the ELK stack. I am not getting the logs from Filebeat into Logstash. Can anyone help?
Filebeat version: 6.3.0
ELK version: 6.0.0
Filebeat config:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - '/var/lib/docker/containers/*/*.log'
  ignore_older: 0
  scan_frequency: 10s
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: false
  multiline.match: after
registry_file: usr/share/filebeat/data/registry
output.logstash:
  hosts: ["172.31.34.173:5044"]
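As an aside, the multiline.pattern above targets Java stack-trace continuation lines. Beats uses Go's regexp engine, where `[[:space:]]` is valid; a rough Python equivalent (substituting `\s`) shows which lines the pattern flags for merging:

```python
import re

# Approximate Python translation of the Beats pattern
# "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"  ([[:space:]] -> \s)
pattern = re.compile(r"^\s+(at|\.{3})\b|^Caused by:")

lines = [
    "java.lang.RuntimeException: boom",           # start of an event
    "    at com.example.Foo.bar(Foo.java:10)",    # continuation line
    "Caused by: java.lang.NullPointerException",  # continuation line
]
flags = [bool(pattern.search(line)) for line in lines]
print(flags)  # [False, True, True]
```

With `multiline.negate: false` and `multiline.match: after`, lines matching the pattern are appended to the preceding line, which is the usual setup for stack traces.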
Filebeat logs:
2018-07-23T08:29:34.701Z INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.0
2018-07-23T08:29:34.701Z INFO pipeline/module.go:81 Beat name: ff01ed6d5ae4
2018-07-23T08:29:34.702Z WARN [cfgwarn] beater/filebeat.go:61 DEPRECATED: prospectors are deprecated, Use `inputs` instead. Will be removed in version: 7.0.0
2018-07-23T08:29:34.702Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-07-23T08:29:34.702Z INFO instance/beat.go:315 filebeat start running.
2018-07-23T08:29:34.702Z INFO registrar/registrar.go:75 No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2018-07-23T08:29:34.704Z INFO registrar/registrar.go:112 Loading registrar data from /usr/share/filebeat/data/registry
2018-07-23T08:29:34.704Z INFO registrar/registrar.go:123 States Loaded from registrar: 0
2018-07-23T08:29:34.704Z WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-23T08:29:34.704Z INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-23T08:29:34.705Z INFO log/input.go:111 Configured paths: [/var/lib/docker/containers/*/*.log]
2018-07-23T08:29:34.705Z INFO input/input.go:87 Starting input of type: log; ID: 2696038032251986622
2018-07-23T08:29:34.705Z INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-23T08:30:04.705Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":22}},"total":{"ticks":50,"time":{"ms":60},"value":50},"user":{"ticks":30,"time":{"ms":38}}},"info":{"ephemeral_id":"5193ce7d-8d09-4e9d-ab4e-e55a5972b4
I know I'm a bit late to reply, but I was having the same issue, and after some searching I found that this layout works for me.
filebeat.prospectors:
- paths:
    - '<path to your log>'
  multiline.pattern: '<whatever pattern is needed>'
  multiline.negate: true
  multiline.match: after
  processors:
  - decode_json_fields:
      fields: ['<whatever field you need to decode>']
      target: json
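For context, `decode_json_fields` parses a field containing a JSON string and stores the result under `target`. A minimal Python sketch of the effect (the field names mirror the snippet above; this is an illustration, not the processor's actual code):

```python
import json

# Event as Filebeat sees it: "message" holds a raw JSON string.
event = {"message": '{"level": "info", "msg": "service started"}'}

# Roughly what decode_json_fields with fields: ['message'],
# target: json does to the event:
event["json"] = json.loads(event["message"])
print(event["json"]["level"])  # info
```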
Here's a link to a similar problem.

Filebeat ignores log files in multiple prospectors

I am trying to configure Filebeat with multiple prospectors. Filebeat registers all of the prospectors but ignores the localhost log files from appA and the log files from appB.
My filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - /vol1/appA_instance01/logs/wrapper_*.log
    - /vol1/appA_instance02/logs/wrapper_*.log
  fields:
    log_type: "appAlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
- type: log
  paths:
    - /vol1/appA_instance01/logs/localhost.*.log
    - /vol1/appA_instance02/logs/localhost.*.log
  fields:
    log_type: "localhostlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
- type: log
  paths:
    - /vol1/appB_instance01/logs/*.log
    - /vol1/appB_instance02/logs/*.log
  fields:
    log_type: "appBlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
output.logstash:
  hosts: ["<HOST>:5044"]
The filebeat log file:
2017-11-15T17:32:56+01:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-11-15T17:32:56+01:00 INFO Setup Beat: filebeat; Version: 5.6.3
2017-11-15T17:32:56+01:00 INFO Max Retries set to: 3
2017-11-15T17:32:56+01:00 INFO Activated logstash as output plugin.
2017-11-15T17:32:56+01:00 INFO Publisher name: host
2017-11-15T17:32:56+01:00 INFO Flush Interval set to: 1s
2017-11-15T17:32:56+01:00 INFO Max Bulk Size set to: 2048
2017-11-15T17:32:56+01:00 INFO filebeat start running.
2017-11-15T17:32:56+01:00 INFO Registry file set to: /var/lib/filebeat/registry
2017-11-15T17:32:56+01:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-11-15T17:32:56+01:00 INFO States Loaded from registrar: 222
2017-11-15T17:32:56+01:00 INFO Loading Prospectors: 3
2017-11-15T17:32:56+01:00 INFO Starting Registrar
2017-11-15T17:32:56+01:00 INFO Start sending events to output
2017-11-15T17:32:56+01:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 40
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 12115431240338587115
2017-11-15T17:32:56+01:00 INFO Harvester started for file: /vol1/appA_instance01/logs/wrapper_20171115.log
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 182
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 18163435272915459714
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 0
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 16959079668827945694
2017-11-15T17:32:56+01:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 3
2017-11-15T17:33:06+01:00 INFO Harvester started for file: /vol1/appA_instance02/logs/wrapper_20171115.log
Why does Filebeat ignore these log files?
/vol1/appA_instance01/logs/localhost.*.log
/vol1/appA_instance02/logs/localhost.*.log
/vol1/appB_instance01/logs/*.log
/vol1/appB_instance02/logs/*.log
greetings niesel
The attached log shows that all three prospectors have been started, and the registry file seems to contain states. Are you sure the ignored log files haven't been read by Filebeat before? Does it read new lines from those log files?
Filebeat does not reread log files, so it is possible that those files were read previously.
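That resume behaviour is visible in the registry itself: in Filebeat 5.x/6.x the registry is a single JSON array of per-file states, each with the byte offset already shipped. A synthetic example (contents and offset are made up for illustration):

```python
import json

# Synthetic registry content modeled on what
# /var/lib/filebeat/registry holds in Filebeat 5.x/6.x.
registry = json.loads(
    '[{"source": "/vol1/appA_instance01/logs/wrapper_20171115.log",'
    ' "offset": 5120, "ttl": -1}]'
)

# A harvester resumes each file at its stored offset,
# so bytes before it are never sent again.
for state in registry:
    print(state["source"], "resumes at byte", state["offset"])
```

If a file's state already covers its whole length and no new lines arrive, the harvester has nothing to send, which looks like the file is being "ignored".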

Duplicated log events when deleting registry

I'm currently working on a PoC ELK installation and I'd like to re-send every log line of a file which is registered in Filebeat for testing purposes.
This is what I do:
I stop Filebeat
I delete the index in Logstash through Kibana
I delete the Filebeat registry file
I start Filebeat
In Kibana I can see that twice as many events are there as log lines, and I can also see that every event is duplicated once.
Why is that?
Filebeat logs:
2017-05-05T14:25:16+02:00 INFO Setup Beat: filebeat; Version: 5.2.2
2017-05-05T14:25:16+02:00 INFO Max Retries set to: 3
2017-05-05T14:25:16+02:00 INFO Activated logstash as output plugin.
2017-05-05T14:25:16+02:00 INFO Publisher name: anonymized
2017-05-05T14:25:16+02:00 INFO Flush Interval set to: 1s
2017-05-05T14:25:16+02:00 INFO Max Bulk Size set to: 2048
2017-05-05T14:25:16+02:00 INFO filebeat start running.
2017-05-05T14:25:16+02:00 INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-05-05T14:25:16+02:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-05-05T14:25:16+02:00 INFO States Loaded from registrar: 0
2017-05-05T14:25:16+02:00 INFO Loading Prospectors: 1
2017-05-05T14:25:16+02:00 INFO Prospector with previous states loaded: 0
2017-05-05T14:25:16+02:00 INFO Loading Prospectors completed. Number of prospectors: 1
2017-05-05T14:25:16+02:00 INFO All prospectors are initialised and running with 0 states to persist
2017-05-05T14:25:16+02:00 INFO Starting Registrar
2017-05-05T14:25:16+02:00 INFO Start sending events to output
2017-05-05T14:25:16+02:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-05-05T14:25:16+02:00 INFO Starting prospector of type: log
2017-05-05T14:25:16+02:00 INFO Harvester started for file: /some/where/anonymized.log
2017-05-05T14:25:46+02:00 INFO Non-zero metrics in the last 30s: registrar.writes=2 libbeat.logstash.publish.read_bytes=54 libbeat.logstash.publish.write_bytes=32390 libbeat.logstash.published_and_acked_events=578 filebeat.harvester.running=1 registar.states.current=1 libbeat.logstash.call_count.PublishEvents=1 libbeat.publisher.published_events=578 publish.events=579 filebeat.harvester.started=1 registrar.states.update=579 filebeat.harvester.open_files=1
2017-05-05T14:26:16+02:00 INFO No non-zero metrics in the last 30s
Deleting the registry file caused the problem.
Filebeat manages the state of each file and the ACK of each event with the prospector (in memory) and with the registry file (persisted on disk).
Please read the documentation here.
You can manage the _id field of each event yourself, so that any event that is duplicated (for any reason, even in a production environment) will not be stored twice in Elasticsearch; the duplicate will update the existing document instead.
Create the following configuration in your Logstash pipeline config file.
# if your logs don't have a unique ID, use the following to generate one
fingerprint {
  # with the message field, or choose other field(s) that can give you a unique ID
  source => ["message"]
  target => "LogID"
  key => "something"
  method => "MD5"
  concatenate_sources => true
}
# in your output section
elasticsearch {
  hosts => ["localhost:9200"]
  document_id => "%{LogID}"
  index => "yourindex"
}