Logstash is unable to find log4j2.properties configuration file - ruby

When trying to run Logstash 5 on Windows:
C:\Development\workspace\logstash>C:\Development\Software\logstash-5.1.2\bin\logstash.bat -f robot-log.js
it gives the following error:
Could not find log4j2 configuration at path /Development/Software/logstash-5.1.2/config/log4j2.properties. Using default config which logs to console
15:03:53.667 [[main]-pipeline-manager] INFO logstash.filters.multiline - Grok loading patterns from file {:path=>"C:/Development/Software/logstash-5.1.2/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/aws"}
15:03:53.684 [[main]-pipeline-manager] INFO logstash.filters.multiline - Grok loading patterns from file {:path=>"C:/Development/Software/logstash-5.1.2/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/bacula"}
15:03:53.693 [[main]-pipeline-manager] INFO logstash.filters.multiline - Grok loading patterns from file ...
The file is actually present in that directory. Why is Logstash unable to find it?
Note:
I originally thought this was a problem with Ruby using the Linux path separator. However, as @Stefan pointed out in the comments below, Ruby accepts Linux-style paths even on Windows.

This seems to be a bug in the latest version of Logstash. In logger.rb it has the following code:
def self.initialize(config_location)
  @@config_mutex.synchronize do
    if @@logging_context.nil?
      file_path = URI(config_location).path
      if ::File.exists?(file_path)
        logs_location = java.lang.System.getProperty("ls.logs")
        puts "Sending Logstash's logs to #{logs_location} which is now configured via log4j2.properties"
        @@logging_context = Configurator.initialize(nil, config_location)
      else
        # fall back to default config
        puts "Could not find log4j2 configuration at path #{file_path}. Using default config which logs to console"
        @@logging_context = Configurator.initialize(DefaultConfiguration.new)
      end
    end
  end
end
The call to URI.path seems problematic: according to the documentation it returns /posts when the input is http://foo.com/posts?id=30&limit=5#time=1305298413. For a Windows path, the drive letter is parsed as a URI scheme and stripped from the path, which matches the /Development/... path in the error message above.
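A quick check in plain Ruby illustrates this (a sketch; JRuby, which Logstash runs on, should behave the same):
require 'uri'

config = "C:/Development/Software/logstash-5.1.2/config/log4j2.properties"
# "C:" is treated as the URI scheme, so the drive letter is stripped from the path.
# This stripped path is what logger.rb then hands to File.exists?.
puts URI(config).path
# => /Development/Software/logstash-5.1.2/config/log4j2.properties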
I'm not a Ruby programmer, so I have no idea why the Logstash devs used it here. But simply replacing file_path = URI(config_location).path with file_path = config_location fixes the problem for me:
C:\Development\workspace\logstash>C:\Development\Software\logstash-5.1.2\bin\logstash.bat -f robot-log.js
Sending Logstash's logs to C:/Development/Software/logstash-5.1.2/logs which is now configured via log4j2.properties
[2017-01-24T15:22:04,754][INFO ][logstash.filters.multiline] Grok loading patterns from file {:path=>"C:/Development/Software/logstash-5.1.2/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/aws"}
[2017-01-24T15:22:04,769][INFO ][logstash.filters.multiline] Grok loading patterns from file {:path=>"C:/Development/Software/logstash-5.1.2/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/bacula"}
[2017-01-24T15:22:04,772][INFO ][logstash.filters.multiline] Grok loading patterns from file {:path=>"C:/Development/Software/logstash-5.1.2/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/bro"}

Related

How can I store the logs that are generated using log4j into Elasticsearch using filebeat?

I have a log file containing logs (sent from log4j). I would like to store these logs in Elasticsearch. The log file is dynamic, meaning that it is constantly appended with logs from log4j. I don't want to store system logs (which is what most tutorials cover). How can I configure the filebeat.yml file? Even some resources would be helpful. Much appreciated.
PS: I'm using Ubuntu 20.04, and this is the path of my file:
/home/user/Log/Logging.log
The log in my file looks something like this:
2022-01-22 21:04:40 INFO CalcServlet:135 - sort
You can use the dissect processor:
processors:
  - dissect:
      tokenizer: "%{date} %{time} %{level} %{component}:%{line|integer} - %{message}"
      field: "message"
      target_prefix: "dissect"
You can find a detailed example here.
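To tie it together, a minimal filebeat.yml sketch for this case (an illustration only; it assumes Filebeat ships straight to a local Elasticsearch on localhost:9200, so adjust the output section for your setup):

filebeat.inputs:
  - type: filestream
    id: log4j-app-log   # an id is recommended (and required in newer versions) for filestream inputs
    paths:
      - /home/user/Log/Logging.log

processors:
  - dissect:
      tokenizer: "%{date} %{time} %{level} %{component}:%{line|integer} - %{message}"
      field: "message"
      target_prefix: "dissect"

output.elasticsearch:
  hosts: ["http://localhost:9200"]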

Proper setup of parsing custom logs with Logstash to Kibana, I see no errors and no data

I'm playing a bit with Kibana to see how it works.
I was able to add nginx log data directly from the same server without Logstash and it works properly, but using Logstash to read log files from a different server doesn't show data. No error... but no data.
I have custom logs from PM2 that runs some PHP scripts for me, and the format of the messages is:
Timestamp [LogLevel]: msg
example:
2021-02-21 21:34:17 [DEBUG]: file size matches written file size 1194179
so my grok filter is:
"%{DATESTAMP:timestamp} \[%{LOGLEVEL:loglevel}\]: %{GREEDYDATA:msg}"
I checked with a Grok validator and the syntax matches the file format.
I've got files with the suffix out that are debug level, and files with the suffix error for error level.
So to configure Logstash on the Kibana server, I added the file /etc/logstash/conf.d/pipeline.conf with the following:
input {
  beats {
    port => 5544
  }
}
filter {
  grok {
    match => { "message" => "%{DATESTAMP:timestamp} \[%{LOGLEVEL:loglevel}\]: %{GREEDYDATA:msg}" }
  }
  mutate {
    rename => ["host", "server"]
    convert => { "server" => "string" }
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "<USER>"
    password => "<PASSWORD>"
  }
}
I needed to rename the host variable to server or I would get errors like Can't get text on a START_OBJECT and failed to parse field [host] of type [text].
On the 2nd server, where the PM2 logs reside, I configured Filebeat with the following:
- type: filestream
  enabled: true
  paths:
    - /home/ubuntu/.pm2/*-error-*log
  fields:
    level: error
- type: filestream
  enabled: true
  paths:
    - /home/ubuntu/.pm2/logs/*-out-*log
  fields:
    level: debug
I tried to use log instead of filestream; the results are the same.
But it makes sense to use filestream, since the logs are updated constantly, right?
So I have Logstash running on one server and Filebeat on the other, opened the firewall ports, and I can see they're connecting, but I don't see any new data in the Kibana logs dashboard relevant to the files I fetch with Logstash.
The Filebeat log always shows this line: Feb 24 04:41:56 vcx-prod-backup-01 filebeat[3797286]: 2021-02-24T04:41:56.991Z INFO [file_watcher] filestream/fswatch.go:131 Start next scan and something about analytics metrics, so it looks fine, and still no data.
I tried to provide as much information here as I can. I'm new to Kibana, and I have no idea why data is not shown in Kibana if there are no errors.
I thought maybe I didn't escape the square brackets properly in the grok filter, so I tried using "%{DATESTAMP:timestamp} \\[%{LOGLEVEL:loglevel}\\]: %{GREEDYDATA:msg}", which replaces \[ with \\[, but the results are the same.
Any information regarding this issue would be greatly appreciated.
Update:
Using stack version 7.11.1.
I changed back to log instead of filestream based on @leandrojmp's recommendations.
I checked for harvester.go-related lines in the Filebeat log and found these:
Feb 24 14:16:36 SERVER filebeat[4128025]: 2021-02-24T14:16:36.566Z INFO log/harvester.go:302 Harvester started for file: /home/ubuntu/.pm2/logs/cdr-ssh-out-1.log
Feb 24 14:16:36 SERVER filebeat[4128025]: 2021-02-24T14:16:36.567Z INFO log/harvester.go:302 Harvester started for file: /home/ubuntu/.pm2/logs/cdr-ftp-out-0.log
I also noticed that when I configured the output to stdout, I do see the events coming from the other server, so Logstash does receive them properly, but for some reason I don't see them in Kibana.
If your output uses both the stdout and elasticsearch outputs but you do not see the logs in Kibana, you need to create an index pattern in Kibana so it can show your data.
After creating an index pattern for your data, which in your case could be something like logstash-*, you will need to configure the Logs app inside Kibana to look for this index; by default the Logs app looks for the filebeat-* index.
OK... so @leandrojmp helped me a lot in understanding what's going on with Kibana. Thank you! All the credit goes to you! I just wanted to write a long answer that may help other people overcome the initial setup.
Let's start fresh.
I wanted one Kibana node that monitors custom logs on a different server.
I have the latest Ubuntu LTS installed on both, added the deb repositories, and installed Kibana, Elasticsearch and Logstash on the first and Filebeat on the 2nd.
The basic setup is without much security and SSL, which is not what I'm looking into here since I'm new to this topic; everything is mostly set up.
In kibana.yml I changed the host to 0.0.0.0 instead of localhost so I can connect from outside, and in Logstash I added the following conf file:
input {
  beats {
    port => 5544
  }
}
filter {
  grok {
    match => { "message" => "%{DATESTAMP:timestamp} \[%{LOGLEVEL:loglevel}\]: %{GREEDYDATA:msg}" }
  }
  mutate {
    rename => ["host", "server"]
    convert => { "server" => "string" }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
I didn't complicate things and didn't need to set up additional authentication.
My filebeat.yml configuration:
- type: log
  enabled: true
  paths:
    - /home/ubuntu/.pm2/*-error-*log
  fields:
    level: error
- type: log
  enabled: true
  paths:
    - /home/ubuntu/.pm2/logs/*-out-*log
  fields:
    level: debug
I started everything; there were no errors in any logs, but still no data in Kibana. Since I had no clue how Elasticsearch stores its data, I needed to find out how to connect to Elasticsearch and see if the data is there, so I executed curl -X GET http://localhost:9200/_cat/indices?v and noticed a logstash index, then I executed curl -X GET http://localhost:9200/logstash-2021.02.24-000001/_search and saw that the log data is present in the database.
So it must mean that it's something with Kibana. Using the web interface of Kibana, under Settings I noticed a configuration called Index pattern for matching indices that contain log data, and the input there did not match the logstash index name, so I appended ,logstash* to it and voila! It works :)
thanks

Pass parameters from Airflow to Logstash

I have configured Logstash to listen to the logs at the default Airflow logs path. I want to create the index in Elasticsearch as {dag_id}-{task_id}-{execution_date}-{try_number}. All of these are parameters from Airflow. These are the modified values in airflow.cfg:
[core]
remote_logging = True
[elasticsearch]
host = 127.0.0.1:9200
log_id_template = {{dag_id}}-{{task_id}}-{{execution_date}}-{{try_number}}
end_of_log_mark = end_of_log
write_stdout = True
json_format = True
json_fields = asctime, filename, lineno, levelname, message
These task instance details need to be passed from Airflow to Logstash:
dag_id,
task_id,
execution_date,
try_number
This is my logstash config file.
input {
  file {
    path => "/home/kmeenaravich/airflow/logs/Helloworld/*/*/*.log"
    start_position => beginning
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logginapp-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
I have two questions. First, how do I pass the parameters from Airflow to Logstash?
Second, I have configured Logstash to listen to the logs path. Since remote_logging is True in airflow.cfg, logs are not written to the base log folder. If that is false, or if I connect to Amazon S3, logs are written to the base_log_folder path too. But for me to configure Logstash, the logs need to be written to a local folder. I use Airflow version 1.10.9. What can I do to stream my logs to an Elasticsearch index?
To answer your first question (I assume you mean passing the logs directly to Elasticsearch): you cannot. Airflow's "Elasticsearch logging" is not really logging to Elasticsearch, but rather a configuration that enables the logs to be shipped to Elasticsearch. The naming of the attributes is (in my opinion) a little confusing, as it suggests that you can write directly to Elasticsearch.
You can configure Airflow to read logs from Elasticsearch. See Airflow Elasticsearch documentation for more information:
Airflow can be configured to read task logs from Elasticsearch and optionally write logs to stdout in standard or json format. These logs can later be collected and forwarded to the Elasticsearch cluster using tools like fluentd, logstash or others.
As you have enabled write_stdout = True, the output is written to stdout. If you want the output to be written to files, you have to set write_stdout = False or leave it empty. Your Logstash configuration should then find the files, which answers your second question.
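For the index naming itself, here is a rough sketch of what the Logstash side could look like once the logs are written to files again. It assumes that, with json_format = True, each log line is a JSON object that carries dag_id, task_id, execution_date and try_number fields; verify this against your actual log output before relying on it. Note also that execution_date typically contains characters that are not allowed in index names (colons, plus signs, uppercase letters), so it has to be sanitised first:

input {
  file {
    path => "/home/kmeenaravich/airflow/logs/Helloworld/*/*/*.log"
    start_position => "beginning"
    codec => json
  }
}
filter {
  mutate {
    # index names must be lowercase and must not contain ':' or '+'
    gsub => ["execution_date", "[:+]", "-"]
    lowercase => ["execution_date"]
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "%{dag_id}-%{task_id}-%{execution_date}-%{try_number}"
  }
}

Keep in mind that this creates one index per task try, which adds up quickly; a single index with those values as fields is usually easier to manage.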
Cheers
Michael

Export data from Elasticsearch to CSV using Logstash

How can I export data from Elasticsearch to CSV using Logstash? I need to include only specific columns.
Install two plugins: the elasticsearch input plugin and the csv output plugin.
Then create a configuration file. Here is a good example for this particular case.
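As a rough sketch of what such a configuration can look like (the index name, query and field names below are placeholders; replace them with your own and list only the columns you need):

input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my-index"
    query => '{ "query": { "match_all": {} } }'
  }
}
output {
  csv {
    # only these fields become columns in the CSV file
    fields => ["field1", "field2", "field3"]
    path => "/path/to/export.csv"
  }
}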
You are ready to go now, just run:
bin/logstash -f /path/to/logstash-es-to-csv-example.conf
Then check the export.csv file specified in output -> csv -> path.
Important note:
There is a known bug in the csv output plugin when working with Logstash 5.x: instead of the expected rows, the plugin generates a string of %{host} %{message}%{host} %{message}%{host} %{message}.
There's an open issue for it: https://github.com/logstash-plugins/logstash-output-csv/issues/10
As a workaround you may:
downgrade to Logstash 2.x until this gets resolved
use the file output instead:
file {
  codec => line { format => "%{field1},%{field2}" }
  path => "/path/to/data_export.csv"
}
modify the plugin's code according to the GitHub discussion...

Generating filebeat custom fields

I have an Elasticsearch cluster (ELK) and some nodes sending logs to Logstash using Filebeat. All the servers in my environment are CentOS 6.5.
The filebeat.yml file on each server is enforced by a Puppet module (both my production and test servers get the same configuration).
I want to have a field in each document which tells whether it came from a production or a test server.
I wanted to generate a dynamic custom field in every document which indicates the environment (production/test) using the filebeat.yml file.
In order to work this out I thought of running a command which returns the environment (it is possible to know the environment through facter) and adding it under an "environment" custom field in the filebeat.yml file, but I couldn't find any way of doing so.
Is it possible to run a command through filebeat.yml?
Is there any other way to achieve my goal?
In your filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /path/to/my/folder
      input_type: log
      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files
      fields:
        mycustomvar: production
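Since your filebeat.yml is already enforced by Puppet, another option is to resolve the environment at catalog compile time instead of at Filebeat runtime (as far as I know, Filebeat itself cannot execute commands from filebeat.yml). A sketch, assuming you render the file from an ERB template and that an environment fact or class parameter is visible in the template's scope (the template name and variable are hypothetical):

# filebeat.yml.erb (fragment)
fields:
  environment: <%= @environment %>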
In filebeat 7.2.0 I use the following syntax:
processors:
  - add_fields:
      target: ''
      fields:
        mycustomfieldname: customfieldvalue
Note: target: '' means that mycustomfieldname is a top-level field.
official 7.2 docs
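For illustration, a sketch of the resulting documents (other fields omitted): with add_fields and target: '' the field sits at the top level, while the fields option from the first answer nests it under fields unless fields_under_root: true is set:

{ "message": "...", "mycustomfieldname": "customfieldvalue" }
{ "message": "...", "fields": { "mycustomvar": "production" } }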
Yes, you can add fields to the document through Filebeat.
The official doc shows you how.
