Filebeat is loading 0 inputs and no logs are coming through.
Filebeat should read the inputs (some log files) and send them to Logstash. I have some filters in logstash.conf, but I removed them temporarily. Logstash then sends the events to Elasticsearch and finally to Kibana.
filebeat.config.modules:
  path: "${path.config}/modules.d/*.yml"
  reload.enabled: true
  reload.period: 10s
filebeat.inputs:
  enabled: true
  paths:
    - /var/log/TestLog/*.log
  type: log
filebeat.registry.path: /var/lib/filebeat/registry/filebeat
logging.files:
  name: filebeat.log
  path: /var/log/filebeat
logging.level: info
logging.selectors:
  - "*"
logging.to_files: true
monitoring.enabled: false
output.logstash:
  enabled: true
  hosts:
    - "192.168.80.20:5044"
setup.kibana: ~
setup.template.settings:
  index.number_of_shards: 1
The output of journalctl -fu filebeat is:
INFO instance/beat.go:422 filebeat start running.
INFO registrar/migrate.go:104 No registry home found. Create: /var/lib/filebeat/registry/filebeat/filebeat
INFO registrar/migrate.go:112 Initialize registry meta file
INFO registrar/registrar.go:108 No registry file found under: /var/lib/filebeat/registry/filebeat/filebeat/data.json. Creating a new registry file.
INFO registrar/registrar.go:145 Loading registrar data from /var/lib/filebeat/registry/filebeat/filebeat/data.json
INFO registrar/registrar.go:152 States Loaded from registrar: 0
WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
INFO crawler/crawler.go:72 Loading Inputs: 0
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
INFO cfgfile/reload.go:171 Config reloader started
INFO log/input.go:148 Configured paths: [/var/log/elasticsearch/*_access.log /var/log/elasticsearch/*_audit.log /var/log/elasticsearch/*_audit.json]
INFO log/input.go:148 Configured paths: [/var/log/elasticsearch/*_deprecation.log /var/log/elasticsearch/*_deprecation.json]
INFO log/input.go:148 Configured paths: [/var/log/elasticsearch/gc.log.[0-9]* /var/log/elasticsearch/gc.log]
INFO log/input.go:148 Configured paths: [/var/log/elasticsearch/*.log /var/log/elasticsearch/*_server.json]
INFO log/input.go:148 Configured paths: [/var/log/elasticsearch/*_index_search_slowlog.log /var/log/elasticsearch/*_index_indexing_slowlog.log /var/log/elasticsearch/*_index_search_slowlog.json /var/log/elasticsearch/*_index_indexing_slowlog.json]
INFO input/input.go:114 Starting input of type: log; ID: 10720371839583549447
INFO input/input.go:114 Starting input of type: log; ID: 8161597721645621668
INFO input/input.go:114 Starting input of type: log; ID: 15537576637552474368
INFO input/input.go:114 Starting input of type: log; ID: 14070679154152675563
INFO input/input.go:114 Starting input of type: log; ID: 7953850694515857477
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_audit.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_audit.json
INFO log/input.go:148 Configured paths: [/var/log/logstash/logstash-plain*.log]
INFO log/input.go:148 Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]
INFO input/input.go:114 Starting input of type: log; ID: 17306378383715639109
INFO input/input.go:114 Starting input of type: log; ID: 14725834876846155099
INFO log/harvester.go:253 Harvester started for file: /var/log/logstash/logstash-plain.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_index_search_slowlog.json
INFO log/harvester.go:253 Harvester started for file: /var/log/logstash/logstash-slowlog-plain.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_deprecation.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_deprecation.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.27
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_server.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_deprecation.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.31
INFO log/input.go:148 Configured paths: [/var/log/auth.log* /var/log/secure*]
INFO log/input.go:148 Configured paths: [/var/log/messages* /var/log/syslog*]
INFO input/input.go:114 Starting input of type: log; ID: 14797590234914819083
INFO input/input.go:114 Starting input of type: log; ID: 16974178264304869863
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_deprecation.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_server.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_index_indexing_slowlog.json
INFO log/harvester.go:253 Harvester started for file: /var/log/secure-20191201
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch.log
INFO log/harvester.go:253 Harvester started for file: /var/log/messages-20191117
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_index_indexing_slowlog.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.02
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log
INFO log/harvester.go:253 Harvester started for file: /var/log/messages-20191124
INFO log/harvester.go:253 Harvester started for file: /var/log/secure
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_index_search_slowlog.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.03
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.08
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.18
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.11
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.26
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.06
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.12
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.20
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.29
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.21
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.07
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.13
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.19
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.28
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.22
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.24
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.23
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.05
INFO log/harvester.go:253 Harvester started for file: /var/log/secure-20191110
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.09
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.10
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_index_search_slowlog.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.14
INFO log/harvester.go:253 Harvester started for file: /var/log/secure-20191117
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.16
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.30
INFO log/harvester.go:253 Harvester started for file: /var/log/secure-20191124
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.01
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.04
INFO log/harvester.go:253 Harvester started for file: /var/log/messages-20191201
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.15
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.17
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.00
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/gc.log.25
INFO log/harvester.go:253 Harvester started for file: /var/log/messages
INFO log/harvester.go:253 Harvester started for file: /var/log/messages-20191110
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_index_indexing_slowlog.log
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/elasticsearch_index_search_slowlog.json
INFO log/harvester.go:253 Harvester started for file: /var/log/elasticsearch/my-application_index_indexing_slowlog.log
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.80.20:5044))
INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.80.20:5044)) established
INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":550,"time":{"ms":560}},"total":{"ticks":4600,"time":{"ms":4612},"value":4600},"user":{"ticks":4050,"time":{"ms":4052}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":70},"info":{"ephemeral_id":"e901ac2b-21fa-47b1-a84d-3ddc10b068fd","uptime":{"ms":30285}},"memstats":{"gc_next":57786240,"memory_alloc":50264424,"memory_total":511186464,"rss":92864512},"runtime":{"goroutines":387}},"filebeat":{"events":{"active":4139,"added":34923,"done":30784},"harvester":{"open_files":64,"running":64,"started":64}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"events":{"acked":30720,"active":4096,"batches":17,"total":34816},"read":{"bytes":96},"type":"logstash","write":{"bytes":5233807}},"pipeline":{"clients":9,"events":{"active":4119,"filtered":64,"published":34836,"retry":2048,"total":34903},"queue":{"acked":30720}}},"registrar":{"states":{"current":63,"update":30784},"writes":{"success":48,"total":48}},"system":{"cpu":{"cores":2},"load":{"1":3.55,"15":3.97,"5":4.77,"norm":{"1":1.775,"15":1.985,"5":2.385}}}}}}
This is my logstash.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["192.168.80.20:9200"]
    manage_template => false
  }
}
Can you share your Logstash pipelines.yml for further investigation?
Also, it would be better to first try ingesting one specific log file and to change the filebeat.inputs section as shown below. In your current config, filebeat.inputs is not defined as a list of inputs (there is no "- type: log" entry), which is why the log reports "Loading Inputs: 0".
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/playground/logs/test.log
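Once the inputs section is fixed, you can sanity-check the setup before restarting the service. The commands below are only a suggestion and assume a standard package install with the config at /etc/filebeat/filebeat.yml:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml

The first verifies that the YAML parses and is valid; the second verifies that Filebeat can reach the Logstash host on 192.168.80.20:5044.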
I have installed Filebeat OSS 7.12.0, OpenSearch 2.4.0, and OpenSearch Dashboards 2.4.0 on Windows.
Every service is running fine, but the index is not getting created in OpenSearch Dashboards.
There is no error.
The logs are:
INFO log/harvester.go:302 Harvester started for file: D:\data\logs.txt
2022-12-08T18:28:17.584+0530 INFO [crawler] beater/crawler.go:141 Starting input (ID: 16780016071726099597)
2022-12-08T18:28:17.585+0530 INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 2
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:164 Config reloader started
2022-12-08T18:28:17.584+0530 INFO [input.filestream] compat/compat.go:111 Input filestream starting
2022-12-08T18:28:17.585+0530 INFO cfgfile/reload.go:224 Loading of config files completed.
2022-12-08T18:28:20.428+0530 INFO [add_cloud_metadata] add_cloud_metadata/add_cloud_metadata.go:101 add_cloud_metadata: hosting provider type not detected.
2022-12-08T18:28:21.428+0530 INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(http://localhost:9200))
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
2022-12-08T18:28:21.428+0530 INFO [publisher] pipeline/retry.go:223 done
2022-12-08T18:28:21.433+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.537+0530 INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 2.4.0
2022-12-08T18:28:21.620+0530 INFO template/load.go:117 Try loading template filebeat-7.12.0 to Elasticsearch
filebeat.yml is:
filebeat.inputs:
- type: log
  paths:
    - D:\data\*
- type: filestream
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\data\*
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
#============================== Kibana =====================================
setup.kibana:
  host: "localhost:5601"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
I don't know what the problem is. No index named filebeat-7.12.0 is created in OpenSearch Dashboards.
@Android, see my reply on this thread: https://stackoverflow.com/a/74984260/6101900.
You cannot forward events directly from Filebeat to OpenSearch, since OpenSearch is not Elasticsearch.
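If you still want Filebeat OSS data to end up in OpenSearch, a common route is to send events to Logstash and use the logstash-output-opensearch plugin there instead of Filebeat's Elasticsearch output. The pipeline below is only a sketch under that assumption; the port, hosts, and index name are illustrative:

input {
  beats {
    port => 5044
  }
}
output {
  opensearch {
    hosts => ["http://localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}

Filebeat would then point at Logstash with output.logstash rather than output.elasticsearch.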
I am using Filebeat and the ELK stack. The logs are not getting from Filebeat to Logstash. Can anyone help?
Filebeat version: 6.3.0
ELK version: 6.0.0
Filebeat config:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - '/var/lib/docker/containers/*/*.log'
  ignore_older: 0
  scan_frequency: 10s
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: false
  multiline.match: after
registry_file: usr/share/filebeat/data/registry
output.logstash:
  hosts: ["172.31.34.173:5044"]
Filebeat logs:
2018-07-23T08:29:34.701Z INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.0
2018-07-23T08:29:34.701Z INFO pipeline/module.go:81 Beat name: ff01ed6d5ae4
2018-07-23T08:29:34.702Z WARN [cfgwarn] beater/filebeat.go:61 DEPRECATED: prospectors are deprecated, Use `inputs` instead. Will be removed in version: 7.0.0
2018-07-23T08:29:34.702Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-07-23T08:29:34.702Z INFO instance/beat.go:315 filebeat start running.
2018-07-23T08:29:34.702Z INFO registrar/registrar.go:75 No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2018-07-23T08:29:34.704Z INFO registrar/registrar.go:112 Loading registrar data from /usr/share/filebeat/data/registry
2018-07-23T08:29:34.704Z INFO registrar/registrar.go:123 States Loaded from registrar: 0
2018-07-23T08:29:34.704Z WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-07-23T08:29:34.704Z INFO crawler/crawler.go:48 Loading Inputs: 1
2018-07-23T08:29:34.705Z INFO log/input.go:111 Configured paths: [/var/lib/docker/containers/*/*.log]
2018-07-23T08:29:34.705Z INFO input/input.go:87 Starting input of type: log; ID: 2696038032251986622
2018-07-23T08:29:34.705Z INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-07-23T08:30:04.705Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":22}},"total":{"ticks":50,"time":{"ms":60},"value":50},"user":{"ticks":30,"time":{"ms":38}}},"info":{"ephemeral_id":"5193ce7d-8d09-4e9d-ab4e-e55a5972b4
A bit late to reply, I know, but I was having the same issue, and after some searching I found the following layout to work for me.
filebeat.prospectors:
- paths:
    - '<path to your log>'
  multiline.pattern: '<whatever pattern is needed>'
  multiline.negate: true
  multiline.match: after
processors:
- decode_json_fields:
    fields: ['<whatever field you need to decode>']
    target: json
Here's a link to a similar problem.
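For the Docker-logs setup in the question above, a filled-in version of that layout might look like the following; the path and multiline pattern come from the question, while the decoded field name ('log', the message key used by Docker's json-file driver) is an assumption about the log format:

filebeat.prospectors:
- paths:
    - '/var/lib/docker/containers/*/*.log'
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: true
  multiline.match: after
processors:
- decode_json_fields:
    fields: ['log']
    target: json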
I am facing a problem starting up Filebeat on Windows 10. I have modified the Filebeat prospector log path to point to the Elasticsearch log folder located on my local machine's E: drive, and I have validated the format of filebeat.yml after making the correction, but I am still getting the error below on startup.
Filebeat version : 6.2.3
Windows version: 64 bit
Filebeat.yml (validated yml format)
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - 'E:\Research\ELK\elasticsearch-6.2.3\logs\*.log'
filebeat.config.modules:
  path: '${path.config}/modules.d/*.yml'
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: 'localhost:5601'
output.elasticsearch:
  hosts:
    - 'localhost:9200'
  username: elastic
  password: elastic
Filebeat Startup Log:
E:\Research\ELK\filebeat-6.2.3-windows-x86_64>filebeat --setup -e
2018-03-24T22:58:39.660+0530 INFO instance/beat.go:468 Home path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Config path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Data path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data] Logs path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\logs]
2018-03-24T22:58:39.661+0530 INFO instance/beat.go:475 Beat UUID: f818bcc0-25bb-4545-bcd4-3523366a4c0e
2018-03-24T22:58:39.662+0530 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-03-24T22:58:39.662+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.665+0530 INFO pipeline/module.go:76 Beat name: DESKTOP-J932HJH
2018-03-24T22:58:39.666+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-03-24T22:58:39.666+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.672+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:58:39.672+0530 INFO kibana/client.go:69 Kibana url: http://localhost:5601
2018-03-24T22:59:08.882+0530 INFO instance/beat.go:583 Kibana dashboards successfully loaded.
2018-03-24T22:59:08.882+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:59:08.885+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:59:08.888+0530 INFO instance/beat.go:301 filebeat start running.
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:108 Loading registrar data from E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data\registry
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:119 States Loaded from registrar: 5
2018-03-24T22:59:08.888+0530 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-03-24T22:59:08.889+0530 INFO log/prospector.go:111 Configured paths: [E:\Research\ELK\elasticsearch-6.2.3\logs\*.log]
2018-03-24T22:59:08.890+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log
2018-03-24T22:59:08.892+0530 ERROR fileset/factory.go:69 Error creating prospector: No paths were defined for prospector accessing config
2018-03-24T22:59:08.892+0530 INFO crawler/crawler.go:109 Stopping Crawler
2018-03-24T22:59:08.893+0530 INFO crawler/crawler.go:119 Stopping 1 prospectors
2018-03-24T22:59:08.897+0530 INFO log/prospector.go:410 Scan aborted because prospector stopped.
2018-03-24T22:59:08.897+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch_deprecation.log
2018-03-24T22:59:08.897+0530 INFO prospector/prospector.go:121 Prospector ticker stopped
2018-03-24T22:59:08.898+0530 INFO prospector/prospector.go:138 Stopping Prospector: 18361622063543553778
2018-03-24T22:59:08.898+0530 INFO log/harvester.go:237 Reader was closed: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log. Closing.
2018-03-24T22:59:08.898+0530 INFO crawler/crawler.go:135 Crawler stopped
2018-03-24T22:59:08.899+0530 INFO registrar/registrar.go:210 Stopping Registrar
2018-03-24T22:59:08.908+0530 INFO registrar/registrar.go:165 Ending Registrar
2018-03-24T22:59:08.910+0530 INFO instance/beat.go:308 filebeat stopped.
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:132 Total non-zero metrics
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:133 Uptime: 29.3387858s
2018-03-24T22:59:08.949+0530 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-03-24T22:59:08.950+0530 ERROR instance/beat.go:667 Exiting: No paths were defined for prospector accessing config
Exiting: No paths were defined for prospector accessing config
Check the path ${path.config}/modules.d/, or check via the command line with "filebeat.exe modules list" whether any modules are active that do not work on Windows.
For instance, the system module (system.yml) does not run on plain Windows because there is no syslog, yet it is enabled by default, so you have to disable it first.
If I have it enabled, I run into exactly the same error message and Filebeat stops. The commands are sketched below.
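A minimal sketch of those checks, assuming they are run from the Filebeat install directory on Windows:

filebeat.exe modules list
filebeat.exe modules disable system

The first command shows which modules are enabled; the second disables the system module if it turns out to be active.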
Rewrite the first part of the yml using this format:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
Also remove the empty line and pay attention to the indentation.
I understand that this topic is a bit old; however, looking at the number of views it has received at the time of posting this (June 2019), I think it is safe to add more information, as this error is fairly frustrating to run into while being very easy to fix.
Before I explain what I did, allow me to say that I had this problem on a Linux system, but the problem and solution should be the same on all platforms.
After updating logback-spring.xml and restarting the service, Filebeat kept refusing to start, spitting back the following error:
ERROR instance/beat.go:824 Exiting: Can only start an input when all related states are finished: {Id:163850-64780 Finished:false Fileinfo:0xc42016c1a0 Source:/some/path/here/error.log Offset:0 Timestamp:2019-06-13 09:15:35.481163602 -0400 EDT m=+0.107516982 TTL:-1ns Type:log Meta:map[] FileStateOS:163850-64780}
My solution was simply to edit /etc/filebeat/filebeat.yml and comment out as much as I could, going back to a nearly vanilla/basic configuration (see the sketch below).
After having done so, restarting Filebeat worked. The problem ended up being a duplicate path entry shared with another file somewhere in the system, possibly under the modules.
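A stripped-down configuration of the kind described above might look like this. The input path is taken from the error message; the output block is only an assumption and should match whatever output you actually use:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /some/path/here/error.log
output.logstash:
  hosts: ["localhost:5044"]

The point is to keep a single input so that no two inputs (or enabled modules) try to harvest the same file.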
I am trying to configure Filebeat with multiple prospectors. Filebeat registers all of the prospectors, but it ignores the localhost log files from appA and the log files from appB.
My filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - /vol1/appA_instance01/logs/wrapper_*.log
    - /vol1/appA_instance02/logs/wrapper_*.log
  fields:
    log_type: "appAlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
- type: log
  paths:
    - /vol1/appA_instance01/logs/localhost.*.log
    - /vol1/appA_instance02/logs/localhost.*.log
  fields:
    log_type: "localhostlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
- type: log
  paths:
    - /vol1/appB_instance01/logs/*.log
    - /vol1/appB_instance02/logs/*.log
  fields:
    log_type: "appBlogs"
    environment: "stage1"
  exclude_files: [".gz$"]
output.logstash:
  hosts: ["<HOST>:5044"]
The filebeat log file:
2017-11-15T17:32:56+01:00 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-11-15T17:32:56+01:00 INFO Setup Beat: filebeat; Version: 5.6.3
2017-11-15T17:32:56+01:00 INFO Max Retries set to: 3
2017-11-15T17:32:56+01:00 INFO Activated logstash as output plugin.
2017-11-15T17:32:56+01:00 INFO Publisher name: host
2017-11-15T17:32:56+01:00 INFO Flush Interval set to: 1s
2017-11-15T17:32:56+01:00 INFO Max Bulk Size set to: 2048
2017-11-15T17:32:56+01:00 INFO filebeat start running.
2017-11-15T17:32:56+01:00 INFO Registry file set to: /var/lib/filebeat/registry
2017-11-15T17:32:56+01:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-11-15T17:32:56+01:00 INFO States Loaded from registrar: 222
2017-11-15T17:32:56+01:00 INFO Loading Prospectors: 3
2017-11-15T17:32:56+01:00 INFO Starting Registrar
2017-11-15T17:32:56+01:00 INFO Start sending events to output
2017-11-15T17:32:56+01:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 40
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 12115431240338587115
2017-11-15T17:32:56+01:00 INFO Harvester started for file: /vol1/appA_instance01/logs/wrapper_20171115.log
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 182
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 18163435272915459714
2017-11-15T17:32:56+01:00 INFO Prospector with previous states loaded: 0
2017-11-15T17:32:56+01:00 INFO Starting prospector of type: log; id: 16959079668827945694
2017-11-15T17:32:56+01:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 3
2017-11-15T17:33:06+01:00 INFO Harvester started for file: /vol1/appA_instance02/logs/wrapper_20171115.log
What is the reason why Filebeat ignores these log files?
/vol1/appA_instance01/logs/localhost.*.log
/vol1/appA_instance02/logs/localhost.*.log
/vol1/appB_instance01/logs/*.log
/vol1/appB_instance02/logs/*.log
greetings niesel
The attached log shows that all three prospectors have been started and the registry file seems to have states. Are you sure that the ignored log files haven't been read before by Filebeat? Does it read new lines from those log files?
Log files are not re-read by Filebeat, so it is possible that those files were previously read.
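If those files really should be shipped again from the beginning, one option (a sketch only, and it will cause every matching file to be re-sent in full) is to stop Filebeat and remove its registry, whose path appears in the log above; this assumes a systemd-managed service:

sudo systemctl stop filebeat
sudo rm /var/lib/filebeat/registry
sudo systemctl start filebeat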
I am trying to fetch Twitter data into HDFS but I am running into an issue.
Here is my flume.conf file:
TwitterAgent.sources= Twitter
TwitterAgent.channels= MemChannel
TwitterAgent.sinks=HDFS
TwitterAgent.sources.TwitterSource.type=org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels=MemChannel
TwitterAgent.sources.Twitter.consumerKey=xxxxxxxxxxx
TwitterAgent.sources.Twitter.consumerSecret= xxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.accessToken=xxxxxxxxxx
TwitterAgent.sources.Twitter.accessTokenSecret=xxxxxxxxxxx
TwitterAgent.sources.Twitter.keywords= hadoop,election,sports, cricket,Big data
TwitterAgent.sinks.HDFS.channel=MemChannel
TwitterAgent.sinks.HDFS.type=hdfs
TwitterAgent.sinks.HDFS.hdfs.path=hdfs://localhost:9000/user/flume/tweets
TwitterAgent.sinks.HDFS.hdfs.fileType=DataStream
TwitterAgent.sinks.HDFS.hdfs.writeformat=Text
TwitterAgent.sinks.HDFS.hdfs.batchSize=1000
TwitterAgent.sinks.HDFS.hdfs.rollSize=0
TwitterAgent.sinks.HDFS.hdfs.rollCount=10000
TwitterAgent.sinks.HDFS.hdfs.rollInterval=600
TwitterAgent.channels.MemChannel.type=memory
TwitterAgent.channels.MemChannel.capacity=10000
TwitterAgent.channels.MemChannel.transactionCapacity=100
In the env.sh file, I have the path:
#FLUME_CLASSPATH="/usr/lib/flume-sources-1.0-SNAPSHOT.jar"
Now I am using the command below to get the data:
[cloudera@quickstart etc]$ flume-ng agent -n TwitterAgent -c conf -f /etc/flume-ng/conf/flume.conf
It shows some logs, but I am getting the error below, and it gets stuck after the HDFS sink has started.
16/09/25 05:18:36 WARN conf.FlumeConfiguration: Could not configure source Twitter due to: Component has no type. Cannot configure. Twitter
org.apache.flume.conf.ConfigurationException: Component has no type. Cannot configure. Twitter
at org.apache.flume.conf.ComponentConfiguration.configure(ComponentConfiguration.java:76)
at org.apache.flume.conf.source.SourceConfiguration.configure(SourceConfiguration.java:56)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:567)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.isValid(FlumeConfiguration.java:346)
at org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.access$000(FlumeConfiguration.java:213)
at org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:127)
at org.apache.flume.conf.FlumeConfiguration.<init>(FlumeConfiguration.java:109)
at org.apache.flume.node.PropertiesFileConfigurationProvider.getFlumeConfiguration(PropertiesFileConfigurationProvider.java:189)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:89)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/09/25 05:18:36 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [TwitterAgent]
16/09/25 05:18:36 INFO node.AbstractConfigurationProvider: Creating channels
16/09/25 05:18:36 INFO channel.DefaultChannelFactory: Creating instance of channel MemChannel type memory
16/09/25 05:18:36 INFO node.AbstractConfigurationProvider: Created channel MemChannel
16/09/25 05:18:36 INFO sink.DefaultSinkFactory: Creating instance of sink: HDFS, type: hdfs
16/09/25 05:18:36 INFO node.AbstractConfigurationProvider: Channel MemChannel connected to [HDFS]
16/09/25 05:18:36 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{HDFS=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#3963542c counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} }
16/09/25 05:18:36 INFO node.Application: Starting Channel MemChannel
16/09/25 05:18:36 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
16/09/25 05:18:36 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: MemChannel started
16/09/25 05:18:36 INFO node.Application: Starting Sink HDFS
16/09/25 05:18:36 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
16/09/25 05:18:36 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started
In the configuration file, please replace
TwitterAgent.sources.TwitterSource.type=org.apache.flume.source.twitter.TwitterSource
with
TwitterAgent.sources.Twitter.type=org.apache.flume.source.twitter.TwitterSource
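The property prefix has to match the source name declared in TwitterAgent.sources= Twitter, otherwise Flume reports "Component has no type. Cannot configure." for that source. With the change applied, the source section reads (credential values still elided, spacing normalized):

TwitterAgent.sources = Twitter
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = xxxxxxxxxxx
TwitterAgent.sources.Twitter.consumerSecret = xxxxxxxxxxxxxxx
TwitterAgent.sources.Twitter.accessToken = xxxxxxxxxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxxxxxxxxxx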