Logstash Grok error - elasticsearch

My Logstash configuration gives me the error below whenever I run this command:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --auto-reload --debug
reason=>"Expected one of #, {, ,, ] at line 27, column 95 (byte 677) after filter {\n\n\tif [type] == \"s3\" {\n\t\tgrok {\n\t\n \t\t\tmatch => [\"message\", \"%{IP:client} %{USERNAME} %{USERNAME} \\[%{HTTPDATE:timestamp}\\] (?:\"", :level=>:error, :file=>"logstash/agent.rb", :line=>"430", :method=>"create_pipeline"}
This has to do with my pattern, but when I check the same pattern in the online Grok debugger it gives me the expected result. Please help.
Here is my logstash configuration:
input {
s3 {
access_key_id => ""
bucket => ""
region => ""
secret_access_key => ""
prefix => "access"
type => "s3"
add_field => { source => gzfiles }
sincedb_path => "/dev/null"
#path => "/home/shubham/logstash.json"
#temporary_directory => "/home/shubham/S3_temp/"
backup_add_prefix => "logstash-backup"
backup_to_bucket => "logstash-nginx-overcart"
}
}
filter {
if [type] == "s3" {
grok {
match => ["message", "%{IP:client} %{USERNAME} %{USERNAME} \[%{HTTPDATE:timestamp}\] (?:"%{WORD:request}
%{URIPATHPARAM:path} HTTP/%{NUMBER:version}" %{NUMBER:reponse} %{NUMBER:bytes} "%{USERNAME}" %{GREEDYDATA:responseMessage})"]
}
}
}
output {
elasticsearch {
hosts => ''
index => "accesslogs"
}
}

You have a couple of unescaped " characters in your match assignment (around the username var for example), which trip up the parser. If you escape those with a \ it should work.
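For reference, a minimal sketch of what the corrected filter block could look like; it keeps the original pattern and only escapes the literal double quotes (untested against your data):
filter {
  if [type] == "s3" {
    grok {
      # the literal " characters around the request and user-agent parts are escaped
      # as \" so the config parser no longer treats them as the end of the string
      match => ["message", "%{IP:client} %{USERNAME} %{USERNAME} \[%{HTTPDATE:timestamp}\] (?:\"%{WORD:request} %{URIPATHPARAM:path} HTTP/%{NUMBER:version}\" %{NUMBER:reponse} %{NUMBER:bytes} \"%{USERNAME}\" %{GREEDYDATA:responseMessage})"]
    }
  }
}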

Related

Change a specific "Value" inside a "Field" in Logstash Config file

I want to change a value inside a field in my Logstash config file. My config file looks like this:
# Read input from filebeat by listening to port 5044 on which filebeat will send the data
input {
beats {
port => "5044"
}
}
filter {
######################################### For Solr ##############################################
if "solr" in [log][file][path] {
grok {
match => {"message" => "%{DATA:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## For Server ##############################################
if "server.log" in [log][file][path] {
grok {
match => {"message" => "%{DATA:timestamp}%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA}\]%{SPACE}\(%{DATA:thread}\)%{SPACE}%{GREEDYDATA:log-message}"}
#match => { "[log][file][path]" => "%{GREEDYDATA}/%{GREEDYDATA:jboss-log}.log"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## For Mongo ##############################################
else if "mongos" in [log][file][path] or "config" in [log][file][path] or "shard" in [log][file][path] or "metrics_" in [log][file][path]{
grok {
match => {"message" => "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## For mongo.log #####################################################
else if "mongo" in [log][file][path] {
grok {
match => {"message" => "\[%{DATA:timestamp}\]%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:class}\]%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## For Kafka ##############################################
else if "kafka" in [log][file][path] {
grok {
match => {"message" => "\[%{DATA:timestamp}\]%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:class}\]%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## For mongodb_output & mongodb_exception ##############################################
else if "mongodb_exception" in [log][file][path] or "mongodb_output" in [log][file][path]{
grok {
match => {"message" => "\[%{DATA:timestamp}\]%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:class}\]%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
############################################## Other Logs ##############################################
else {
grok {
#match => {"message" => "\[%{MONTHDAY:day}%{SPACE}%{MONTH:month}%{SPACE}%{YEAR:year},%{SPACE}%{TIME:time}\]%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:class}\]\[%{DATA:thread}\]%{SPACE}%{GREEDYDATA:log-message}"}
match => {"message" => "\[%{DATA:timestamp}\]%{SPACE}%{LOGLEVEL:log-level}%{SPACE}\[%{DATA:class}\]\[%{DATA:thread}\]%{SPACE}%{GREEDYDATA:log-message}"}
#remove_field => ["message"]
#add_field => {"message" => "%{log-message}"}
}
}
################################################################
grok {
match => { "[log][file][path]" => ["%{GREEDYDATA}/%{GREEDYDATA:component}.log" , "%{PATH}\\%{GREEDYDATA:component}\_%{GREEDYDATA}.log" ]}
}
if [component] =~ "^server" {
mutate {
rename => { "%{server}" => "renamed_server" }
}
}
}
output {
# sending properly parsed log events to elasticsearch
elasticsearch {
hosts => ["localhost:9200"]
}
}
I am getting the value of the component field as server, but I want to change that value from server to renamed_server.
I have tried the above, but I am not getting any output.
Please help me find the required solution.
I guess the problem is with this block:
if [component] =~ "^server" {
mutate {
rename => { "%{server}" => "renamed_server" }
}
}
... since it doesn't do what you want, i.e.
I want to change the value of component field server to renamed_server.
The rename option of the mutate filter doesn't change values; it renames fields.
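For comparison, a minimal sketch of what rename is actually for; note that it takes literal field names, not %{...} references (the new field name here is just an example):
mutate {
  # moves the value of the "component" field into a field called "component_name"
  rename => { "component" => "component_name" }
}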
If you want to change a value, you can use gsub. And since you want to change an exact value, you can probably get by without the conditional altogether. E.g.:
mutate {
gsub => [
# replace `server` value with `renamed_server` in component field
"component", "^server$", "renamed_server"
]
}
I modified this field with gsub and it worked as well.
mutate {
gsub => [
"component", "^server$", "renamed_server",
"component", "^[0-9]{3}.*[0-9]{3}.*[0-9]{2}.*[0-9]{2}.*[0-9]{5}.*output$" , "client_output"
]
}

Logstash doesn't report all the events

I can see that some events are missing when logs are reported to Elasticsearch. For example, when I send 5 log events, only 3 or 4 are reported.
I am using Logstash 7.4 to read my log messages and store the information in Elasticsearch 7.4. Below is my Logstash configuration:
input {
file {
type => "web"
path => ["/Users/a0053/Downloads/logs/**/*-web.log"]
start_position => "beginning"
sincedb_path => "/tmp/sincedb_file"
codec => multiline {
pattern => "^(%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{TIME}) "
negate => true
what => previous
}
}
}
filter {
if [type] == "web" {
grok {
match => [ "message","(?<frontendDateTime>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{TIME})%{SPACE}(\[%{DATA:thread}\])?( )?%{LOGLEVEL:level}%{SPACE}%{USERNAME:zhost}%{SPACE}%{JAVAFILE:javaClass} %{USERNAME:orgId} (?<loginId>[\w.+=:-]+#[0-9A-Za-z][0-9A-Za-z-]{0,62}(?:[.](?:[0-9A-Za-z][0-9A-Za-zā€Œā€‹-]{0,62}))*) %{GREEDYDATA:jsonstring}"]
}
json {
source => "jsonstring"
target => "parsedJson"
remove_field=>["jsonstring"]
}
mutate {
add_field => {
"actionType" => "%{[parsedJson][actionType]}"
"errorMessage" => "%{[parsedJson][errorMessage]}"
"actionName" => "%{[parsedJson][actionName]}"
"Payload" => "%{[parsedJson][Payload]}"
"pageInfo" => "%{[parsedJson][pageInfo]}"
"browserInfo" => "%{[parsedJson][browserInfo]}"
"dateTime" => "%{[parsedJson][dateTime]}"
}
}
}
}
output{
if "_grokparsefailure" in [tags]
{
elasticsearch
{
hosts => "localhost:9200"
index => "grokparsefailure-%{+YYYY.MM.dd}"
}
}
else {
elasticsearch
{
hosts => "localhost:9200"
index => "zindex"
}
}
stdout{codec => rubydebug}
}
As new logs keep being written to the log files, I can see a difference in the log counts.
Any suggestions would be appreciated.

How to write an if condition inside the logstash grok pattern?

My question is related to a Logstash grok pattern. I created the pattern below and it works fine, but the big problem is the non-string values. Sometimes "Y" and "age" can be null, so my grok pattern doesn't create any log in Elasticsearch. It is not working properly. I need to tell my grok pattern:
if (age is null || age is empty) {
    updatefield["age", 0]
}
but I don't know how to do it. By the way, I have checked many solutions by googling, but none is directly related to my problem.
input {
file {
path => ["C:/log/*.log"]
start_position => "beginning"
discover_interval => 10
stat_interval => 10
sincedb_write_interval => 10
close_older => 10
codec => multiline {
pattern => "^%{TIMESTAMP_ISO8601}\|"
negate => true
what => "previous"
}
}
}
filter {
grok {
match => { "message" => "%{TIMESTAMP_ISO8601:formattedDate}.* X: %{DATA:X} Y: %{NUMBER:Y} Z: %{DATA:Z} age: %{NUMBER:age:int} "}
}
date {
timezone => "Europe/Istanbul"
match => ["TimeStamp", "ISO8601"]
}
json{
source => "request"
target => "parsedJson"
}
mutate {
remove_field => [ "path","message","tags","#version"]
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => [ "http://localhost:9200" ]
index => "logstash-%{+YYYY.MM}"
}
}
You can check whether your fields exist or are empty using conditionals in your filter:
filter {
if ![age] or [age] == "" {
mutate {
update => { "age" => "0" }
}
}
}
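If needed, the same idea can be extended to the other field mentioned in the question. Below is a sketch that defaults both age and Y to 0, using replace instead of update, because replace also creates the field when it is missing, while update only changes a field that already exists:
filter {
  # set age to "0" when grok did not capture it or it came through empty
  if ![age] or [age] == "" {
    mutate { replace => { "age" => "0" } }
  }
  # same treatment for Y (assuming 0 is an acceptable default for it as well)
  if ![Y] or [Y] == "" {
    mutate { replace => { "Y" => "0" } }
  }
}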

Logstash Not Reading "File" Input

I am trying to use a file as an input to Logstash. Here is my logstash.conf:
input {
file {
path => "/home/dxp/elb.log"
type => "elb"
start_position => "beginning"
sincedb_path => "/home/dxp/log.db"
}
}
filter {
if [type] == "elb" {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} %{IP:backend_ip}:%{NUMBER:backend_port:int} %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{NUMBER:elb_status_code:int} %{NUMBER:backend_status_code:int} %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} %{QS:request}" ]
}
}
}
output
{
elasticsearch {
hosts => "10.99.0.180:9200"
manage_template => false
index => "elblog-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
My logs show this:
[2017-10-27T13:11:31,164][DEBUG][logstash.inputs.file ]_globbed_files: /home/dxp/elb.log: glob is []
I guess my file has not been read by Logstash, so no new index is created in Elasticsearch.
Please help me figure out what I am missing here.

logstash: multiple logfiles with different pattern

We want to set up a Logstash server for a couple of different projects in our company. Now I am trying to enable them in Kibana. My question is:
If I have different log file patterns, how can I build a filter for them?
Example logstash.conf:
input {
file {
type => "A"
path => "/home/logstash/A/*"
start_position => "beginning"
}
file {
type => "B"
path => "/home/logstash/B*"
start_position => "beginning"
}
}
filter {
multiline {
pattern => "^%{TIMESTAMP_ISO8601}"
negate => true
what => "previous"
}
grok {
type => A
match => [ "message", "%{TIMESTAMP_ISO8601:logdate} %{DATA:thread %{LOGLEVEL:level}\s*%{DATA:logger_name}\s*-\s*%{GREEDYDATA:log_text}"]
add_tag => [ "level_%{level}" ]
}
date {
match => ["logdate", "YYYY-MM-dd HH:mm:ss,SSS"]
}
grok {
type => B
match => [ any other pattern ...
}
}
output {
elasticsearch { embedded => true }
}
Do I have to create a separate filter for each project (A, B, C, ...), and what do I have to do when each project in turn has different log file patterns?
You only need to create one filter for all projects.
In Logstash 1.3.3, you can use an if statement to distinguish the grok for each project. For example:
filter {
multiline {
pattern => "^%{TIMESTAMP_ISO8601}"
negate => true
what => "previous"
}
if [type] == "A" {
grok {
match => [ any other pattern ...
}
}
else if [type] == "B" {
grok {
match => [ any other pattern ...
}
}
}
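Filled in with the pattern from the question for type A, the whole filter could look roughly like this; the grok for type B is just a placeholder since its format isn't shown:
filter {
  multiline {
    pattern => "^%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
  }
  if [type] == "A" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:logdate} %{DATA:thread} %{LOGLEVEL:level}\s*%{DATA:logger_name}\s*-\s*%{GREEDYDATA:log_text}" ]
      add_tag => [ "level_%{level}" ]
    }
    date {
      match => ["logdate", "YYYY-MM-dd HH:mm:ss,SSS"]
    }
  }
  else if [type] == "B" {
    grok {
      # placeholder pattern: replace with project B's actual log format
      match => [ "message", "%{GREEDYDATA:log_text}" ]
    }
  }
}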
Hope this can help you.
