Filebeat failed to fill data into Redis (EFK)

I enabled debug output but still don't know what the reason is. Please help me find out why. Thanks.
filebeat config: (screenshot, not reproduced here)
redis log: (screenshot, not reproduced here)
filebeat log
2018-11-28T19:02:52.470+0800 DEBUG [input] input/input.go:152 Run input
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:174 Start next scan
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:404 Check file for harvesting: /application/nginx/logs/access_json-1.log
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:494 Update existing file for harvesting: /application/nginx/logs/access_json-1.log, offset: 1758536
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:548 File didn't change: /application/nginx/logs/access_json-1.log
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:404 Check file for harvesting: /application/nginx/logs/access_json.log
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:494 Update existing file for harvesting: /application/nginx/logs/access_json.log, offset: 35070
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:546 Harvester for file is still running: /application/nginx/logs/access_json.log
2018-11-28T19:02:52.470+0800 DEBUG [input] log/input.go:195 input states cleaned up. Before: 2, After: 2, Pending: 0
2018-11-28T19:02:55.468+0800 DEBUG [harvester] log/log.go:102 End of file reached: /application/nginx/logs/access_json.log; Backoff now.
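The DEBUG lines above only show the harvester side: it has reached the end of the file and is backing off while it waits for new data. They say nothing about the Redis output. Since the config screenshot is not reproduced here, below is a minimal sketch of what a Filebeat Redis output section typically looks like, for comparison against your own config; the host, key, and db values are assumptions.

# minimal sketch of a Filebeat -> Redis output (all values are assumptions)
output.redis:
  hosts: ["192.168.1.10:6379"]   # must point at the Redis instance you are checking
  key: "filebeat"                # Redis list key the events are pushed to
  db: 0
  timeout: 5

If events are actually arriving, the list should grow, which you can verify with LLEN filebeat in redis-cli.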

Related

Filebeat index is getting created but with 0 documents

I am trying to index my custom log file using Filebeat. I am successfully running Filebeat with pre-built modules like mysql, nginx, etc., but when I try to use it with my application-specific log file, the index is created with 0 documents.
I could not find anywhere in the Filebeat documentation whether any specific steps need to be taken to ensure indexing takes place for custom log files.
I did not get any errors when I set up Filebeat or ran it after setup.
Below is the filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.log
  include_lines: ['^INFO', '^ERROR']
  fields:
    app_id: crm
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
As can be seen, it is mostly the default .yml file with very minor changes.
My custom log file log-2020-12-21.php is:
INFO - 2020-12-21 15:10:26 --> index Logging details have been captured for employee. Details are : Array
INFO - 2020-12-21 15:10:36 --> editpartner partner_id:1
INFO - 2020-12-21 15:10:36 --> SELECT DISTINCT service_id, brand, active
ERROR - 2020-12-21 15:10:36 --> Query error: Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'boloaaka.collateral.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
INFO - 2020-12-21 15:10:36 --> Database Error: A Database Error Occurred<br/>Array
ERROR - 2020-12-21 15:10:54 --> Query error: Expression #5 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'boloaaka.service_centres.district' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
INFO - 2020-12-21 15:10:54 --> Database Error: A Database Error Occurred<br/>Array
INFO - 2020-12-21 23:53:21 --> Loginindex
INFO - 2020-12-21 23:54:50 --> Loginindex
INFO - 2020-12-21 23:55:42 --> Loginindex
INFO - 2020-12-21 23:56:24 --> Loginindex
The index is getting created with 0 documents.
Log file showing the output from the Filebeat setup and the Filebeat run:
https://pastebin.com/TK6uYXuq
Please help:
Why are there no error messages if something is wrong that prevents documents from being indexed? I should be getting some error if things are not right.
How should I index my log file?
Where should I add a pattern for my log file (such as key-value pairs) that would help me search the documents for relevant values later on?
Thanks for your help.
In your Filebeat configuration, are you sure you are referring to the exact file where your logs are stored? The 'paths' entry in your filebeat.yml refers to a .log file extension, while the custom log file you've pasted is log-2020-12-21.php. Try changing your paths to match the .php extension instead.
If Filebeat picks this file up correctly, you should see something like the line below in your Filebeat logs:
INFO log/harvester.go:287 Harvester started for file: /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.php
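For reference, a sketch of the corrected input section; only the file extension changes, everything else is copied from the question:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /Applications/MAMP/htdocs/247around-adminp-aws/application/logs/log-2020-12-21.php   # .php, matching the pasted log file
  include_lines: ['^INFO', '^ERROR']
  fields:
    app_id: crm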

Spring boot logging file name

🐞 Bug report
logging:
  level:
    com.zaxxer.hikari: DEBUG
    org.springframework: INFO
    org.kafka.test: TRACE
  file: "logs/%d{yyyy-MM-dd HH_mm_ss} pid-${PID}.log"
  pattern.console: "%d{HH:mm:ss} - %msg%n"
Hello, please help with the file name. The time format does not work well.
I expected to see one file named "2020-02-07 10_38_40 pid-17996.log", but I got 2 files and the file names are bad.
Please do not advise using logback-spring.xml; I configure logging through the .yml file.
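For comparison, a minimal sketch with a literal file name (the name itself is an assumption): logging.file takes a plain path, and the two oddly named files suggest the logback-style %d{...} date pattern is not being expanded there.

logging:
  level:
    com.zaxxer.hikari: DEBUG
    org.springframework: INFO
    org.kafka.test: TRACE
  # plain path (name is an assumption); date/time stamping of the file name
  # does not appear to be handled by the file property itself
  file: "logs/app.log"
  pattern.console: "%d{HH:mm:ss} - %msg%n"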

Logstash does not run my config

I'm using Filebeat on the client side > Logstash on the server side > Elasticsearch on the server side.
Filebeat on the client side works properly and sends the file, but the configuration I've made on Logstash fails with:
[WARN ] 2019-12-18 14:53:30.987 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[FATAL] 2019-12-18 14:53:31.341 [LogStash::Runner] runner - Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[ERROR] 2019-12-18 14:53:31.364 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Here is my config file:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}] %{WORD:test}\[%{NUMBER:nom}]\[%{DATA:tes}\] %{DATA:module_name}\: %{WORD:method}%{GREEDYDATA:log_message}" }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "test_log_pbx"
  }
}
Command used to run my Logstash config:
/usr/share/logstash/bin/logstash -f logstash.conf
When I run a config test, it returns:
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-12-18 14:59:53.300 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-12-18 14:59:56.566 [LogStash::Runner] Reflections - Reflections took 139 ms to scan 1 urls, producing 20 keys and 40 values
Configuration OK
[INFO ] 2019-12-18 14:59:57.923 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
Please help me, I don't know what's wrong.
A Logstash instance is already running, so you cannot start another one. If you run Logstash as a service, you should stop the service first. If you want to run multiple instances, you should modify pipelines.yml.
If you want to learn more about pipelines.yml, see the link below.
https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
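For example, assuming Logstash was installed as a systemd service (the alternate data path below is an assumption):

# stop the service-managed instance that is holding the default data directory
sudo systemctl stop logstash

# or keep both running by giving the second instance its own data directory
/usr/share/logstash/bin/logstash -f logstash.conf --path.data /tmp/logstash-data-2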

Spring Boot: Log to multiple files. All DEBUGs to debug.log, all INFOs to info.log

I'm using logging provided by Spring Boot in an application.yml like this:
logging:
  file: log/info.log
  level.com.mycompany.app: INFO
What I actually want is:
1) Log every DEBUG message from our application (com.mycompany.app) to debug.log,
(optional: every INFO message from the whole app / ROOT to debug.log, too)
2) log every INFO message from the whole app / ROOT to info.log
So in pseudo code, it should look like this:
logging:
level: DEBUG
file: debug.log
com.mycompany.app: DEBUG
level:
ROOT: INFO
file: debug.log
level:
ROOT: INFO
file: info.log
How can I achieve this? Please note, we're using SLF4j, not logback (I've read in other threads about logback for writing to multiple files).
Regards,
Bernhard
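For context, a sketch of what the logging.* properties alone can express, namely one file target plus per-logger levels (the file name is an assumption); routing DEBUG and INFO to two different files is not something these properties cover by themselves:

logging:
  file: log/debug.log            # single file target (name is an assumption)
  level:
    root: INFO                   # whole app / ROOT at INFO
    com.mycompany.app: DEBUG     # our own packages down to DEBUG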

Flume agent throws DEBUG messages, what could be the issue?

When I try to run the Flume agent, I get the following statement repeatedly. Unless I stop the task forcefully, it keeps displaying continuously. What could be the issue?
Please help me out.
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
2013-05-27 03:47:12,517 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/etc/flume-ng/conf/loclog.conf for changes
This is normal behaviour and should be ignored.
Flume automatically checks its config files for changes. If it detects a change it will reconfigure itself with those changes. The DEBUG entries you see above are Flume checking its config file.
Note that the reconfiguration process will not pick up all changes. I've noticed that new sources and sinks will often require a process restart.
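If the goal is simply to quiet these messages, the agent's log level can be raised when starting it; the agent name and config paths below are assumptions:

# run the agent logging at INFO instead of DEBUG (name and paths are assumptions)
flume-ng agent --conf /etc/flume-ng/conf --conf-file /etc/flume-ng/conf/loclog.conf \
  --name agent1 -Dflume.root.logger=INFO,console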
