Error when configuring Logstash on Linux (still runnable) - elasticsearch

I have encountered some issues while configuring Logstash.
I used Filebeat to forward logs and it worked fine the first time. But when I close and reopen the terminal to configure Logstash and Filebeat, an error appears, even though the Kibana UI shows that log files are still being sent and read:
Settings: Default pipeline workers: 8
Beats inputs: Starting input listener {:address=>"0.0.0.0:5044", :level=>:info}
The error reported is:
Address already in use - bind - Address already in use
Here is the config file:
input {
  beats {
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/filebeat.crt"
    ssl_key => "/etc/pki/tls/private/filebeat.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
I have no idea what's going on. Could anyone please tell me? Thanks.
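"Address already in use" on bind usually means another process, often a previous Logstash instance that kept running after the terminal was closed, is still holding port 5044. A minimal diagnostic sketch, assuming a systemd-based Linux host (adjust the service name or PID to your setup):

# See which process currently holds port 5044 (either command works)
sudo lsof -i :5044
sudo netstat -plnt | grep 5044

# If an old Logstash instance is listed, stop it before starting a new one
sudo systemctl stop logstash    # or: sudo kill <PID shown above>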

Related

Logstash doesn't report all the events

I can see that some events are missing when reporting logs to Elasticsearch. For example, if I send 5 log events, only 3 or 4 are reported.
Basically, I am using Logstash 7.4 to read my log messages and store the information in Elasticsearch 7.4. Below is my Logstash configuration:
input {
  file {
    type => "web"
    path => ["/Users/a0053/Downloads/logs/**/*-web.log"]
    start_position => "beginning"
    sincedb_path => "/tmp/sincedb_file"
    codec => multiline {
      pattern => "^(%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{TIME}) "
      negate => true
      what => previous
    }
  }
}
filter {
  if [type] == "web" {
    grok {
      match => [ "message", "(?<frontendDateTime>%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{TIME})%{SPACE}(\[%{DATA:thread}\])?( )?%{LOGLEVEL:level}%{SPACE}%{USERNAME:zhost}%{SPACE}%{JAVAFILE:javaClass} %{USERNAME:orgId} (?<loginId>[\w.+=:-]+@[0-9A-Za-z][0-9A-Za-z-]{0,62}(?:[.](?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*) %{GREEDYDATA:jsonstring}"]
    }
    json {
      source => "jsonstring"
      target => "parsedJson"
      remove_field => ["jsonstring"]
    }
    mutate {
      add_field => {
        "actionType" => "%{[parsedJson][actionType]}"
        "errorMessage" => "%{[parsedJson][errorMessage]}"
        "actionName" => "%{[parsedJson][actionName]}"
        "Payload" => "%{[parsedJson][Payload]}"
        "pageInfo" => "%{[parsedJson][pageInfo]}"
        "browserInfo" => "%{[parsedJson][browserInfo]}"
        "dateTime" => "%{[parsedJson][dateTime]}"
      }
    }
  }
}
output {
  if "_grokparsefailure" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "grokparsefailure-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "zindex"
    }
  }
  stdout { codec => rubydebug }
}
As new logs keep being written to the log files, I can see a difference in the log counts.
Any suggestions would be appreciated.
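One behaviour worth knowing about: the multiline codec holds the last event of a file back until the next line arrives, because it cannot know the event is complete, which can make the most recent event appear "missing". A minimal sketch of the relevant codec option (auto_flush_interval is a real multiline codec setting; the 2-second value is only an illustrative choice):

codec => multiline {
  pattern => "^(%{MONTHDAY}-%{MONTHNUM}-%{YEAR} %{TIME}) "
  negate => true
  what => previous
  # flush a pending multiline event if no new line arrives within 2 seconds,
  # so the final event of a file is not held back indefinitely
  auto_flush_interval => 2
}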

Show Kafka producer ip/port as a field in Kibana, logstash add_field?

I have Logstash with Elasticsearch & Kibana 7.6.2.
I connect Logstash to Kafka as follows:
input {
  kafka {
    bootstrap_servers => "------"
    topics_pattern => ["----"]
    decorate_events => true
  }
}
filter {
  mutate { add_field => { "[topic_name]" => "%{[@metadata][kafka][topic]}" } }
  mutate { add_field => { "[ip_port]" => "X*X*X*X*X*X*X" } }
  date { match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash"
    document_type => "logs"
  }
}
What should I put in place of X*X*X*X*X*X*X to get the messages' producer IP/port?
I mean the client IP of the producers when they publish a message to Kafka.
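For context, decorate_events only exposes the per-record metadata the Kafka consumer sees (the plugin documents topic, consumer_group, partition, offset, key and timestamp); a producer's client IP is not part of that set. A hedged sketch of how those documented fields are typically copied onto the event (field names on the left are just illustrative):

filter {
  mutate {
    add_field => {
      "[topic_name]" => "%{[@metadata][kafka][topic]}"
      "[partition]"  => "%{[@metadata][kafka][partition]}"
      "[offset]"     => "%{[@metadata][kafka][offset]}"
      "[kafka_key]"  => "%{[@metadata][kafka][key]}"
    }
  }
}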

Show Kafka topic title as a field in Kibana, logstash add_field?

I have Logstash with Elasticsearch & Kibana 7.6.2.
I connect Logstash to Kafka as follows:
input {
  kafka {
    bootstrap_servers => "******"
    topics_pattern => [".*"]
    decorate_events => true
    add_field => { "[topic_name]" => "%{[@metadata][kafka][topic]}" }
  }
}
filter {
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash"
    document_type => "logs"
  }
}
It's OK and works. But the field topic_name shows up as the literal string %{[@metadata][kafka][topic]}.
How can I fix it?
The syntax of the sprintf format you are using ( %{[@metadata][kafka][topic]} ) to get the value of that field is correct.
Apparently there is no such field @metadata.kafka.topic in your document. Therefore the sprintf call can't obtain the field value and, as a result, the newly created field contains the sprintf call as a literal string.
However, since you set decorate_events => true, the metadata fields should be available, as stated in the documentation (https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html):
Metadata is only added to the event if the decorate_events option is set to true (it defaults to false).
I can imagine that the add_field action set in the input plugin causes the issue. Since the decorate_events option is what enables the addition of the metadata fields in the first place, the add_field action should come second, after the input plugin.
Your configuration would then look like this:
input {
  kafka {
    bootstrap_servers => "******"
    topics_pattern => [".*"]
    decorate_events => true
  }
}
filter {
  mutate {
    add_field => { "[topic_name]" => "%{[@metadata][kafka][topic]}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash"
    document_type => "logs"
  }
}
How about
add_field => { "topic_name" => "%{[#metadata][kafka][topic]}"}
i.e. [topic_name] -> topic_name
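If it is unclear whether the Kafka metadata fields are populated at all, one hedged way to check is to print them during testing; the metadata option of the rubydebug codec is a real setting, the rest is just an illustrative snippet:

output {
  # rubydebug normally hides @metadata; metadata => true prints it as well,
  # so you can confirm [@metadata][kafka][topic] actually exists on the event
  stdout { codec => rubydebug { metadata => true } }
}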

:reason=>"Something is wrong with your configuration." GeoIP.dat Mutate Logstash

I have the following configuration for Logstash.
There are 3 parts to it. The first is a general log, which we use for all applications; they all land here.
The second part is the application stats, for which we have a specific logger configured to push the application statistics.
The third is the click stats: whenever an event occurs on the client side, we may want to push it to Logstash on the UDP address.
All 3 are UDP based, and we also use log4net to send the logs to Logstash.
The base install did not have a GeoIP.dat file, so I downloaded the file from https://dev.maxmind.com/geoip/legacy/geolite/
and put it in /opt/logstash/GeoIPDataFile with 777 permissions on the file and folder.
The second thing is that I have a country name and need a way to show how many users from each country are viewing the application in the last 24 hours.
For that reason we also capture the country name as it appears in their profile in the application.
Now I need a way to get the geo coordinates to use the tilemap in Kibana.
What am I doing wrong?
If I take out the geoip { source => "country" } section, Logstash works fine.
When I check the config with
/opt/logstash/bin/logstash -t -f /etc/logstash/conf.d/logstash.conf
"The configuration file is ok" is what I receive. Where am I going wrong?
Any help would be great.
input {
  udp {
    port => 5001
    type => generallog
  }
  udp {
    port => 5003
    type => applicationstats
  }
  udp {
    port => 5002
    type => clickstats
  }
}
filter {
  if [type] == "generallog" {
    grok {
      remove_field => message
      match => { message => "(?m)%{TIMESTAMP_ISO8601:sourcetimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:requesthost} - %{WORD:applicationname} - %{WORD:envname} - %{GREEDYDATA:logmessage}" }
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "message" , "%{logmessage}" ]
        replace => [ "host" , "%{requesthost}" ]
        add_tag => "generalLog"
      }
    }
  }
  if [type] == "applicationstats" {
    grok {
      remove_field => message
      match => { message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} - %{WORD:envName}\|%{IPORHOST:actualHostMachine}\|%{WORD:applicationName}\|%{NUMBER:empId}\|%{WORD:regionCode}\|%{DATA:country}\|%{DATA:applicationName}\|%{NUMBER:staffapplicationId}\|%{WORD:applicationEvent}" }
    }
    geoip {
      source => "country"
      target => "geoip"
      database => "/opt/logstash/GeoIPDataFile/GeoIP.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float"]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        add_tag => "applicationstats"
        add_tag => [ "eventFor_%{applicationName}" ]
      }
    }
  }
  if [type] == "clickstats" {
    grok {
      remove_field => message
      match => { message => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} - %{IPORHOST:remoteIP}\|%{IPORHOST:fqdnHost}\|%{IPORHOST:actualHostMachine}\|%{WORD:applicationName}\|%{WORD:envName}\|(%{NUMBER:clickId})?\|(%{DATA:clickName})?\|%{DATA:clickEvent}\|%{WORD:domainName}\\%{WORD:userName}" }
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        add_tag => "clicksStats"
        add_tag => [ "eventFor_%{clickName}" ]
      }
    }
  }
}
output {
  if [type] == "applicationstats" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "applicationstats-%{+YYYY-MM-dd}"
      template => "/opt/logstash/templates/udp-applicationstats.json"
      template_name => "applicationstats"
      template_overwrite => true
    }
  }
  else if [type] == "clickstats" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "clickstats-%{+YYYY-MM-dd}"
      template => "/opt/logstash/templates/udp-clickstats.json"
      template_name => "clickstats"
      template_overwrite => true
    }
  }
  else if [type] == "generallog" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "generallog-%{+YYYY-MM-dd}"
      template => "/opt/logstash/templates/udp-generallog.json"
      template_name => "generallog"
      template_overwrite => true
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-%{+YYYY-MM-dd}"
    }
  }
}
As per the error message, the mutation which you're trying to do could be wrong. Could you please change your mutate as below:
mutate {
  convert => { "geoip" => "float" }
  convert => { "coordinates" => "float" }
}
I guess you've given the mutate as an array, while it is a hash type by origin. Try converting both values individually. Your database path for geoip seems fine in your filter. Is that the whole error you've mentioned in the question? If not, update the question with the whole error if possible.
Refer here for in-depth explanations.

I don't see the results of my Logstash filter in Kibana

I have successfully set up my system for centralized logging using elasticsearch-logstash-filebeat-kibana.
I can't see logs using the Filebeat template index in Kibana. The problem arrived when I tried to create a Logstash filter in order to parse my log files properly.
I'm using grok patterns, so first I created this pattern (/opt/logstash/patterns/grok-paterns):
CUSTOMLOG %{TIMESTAMP_ISO8601:timestamp} - %{USER:auth} - %{LOGLEVEL:loglevel} - \[%{DATA:pyid}\]\[%{DATA:source}\]\[%{DATA:searchId}\] - %{GREEDYDATA:logmessage}
And this is the logstash filter (/etc/logstash/conf.d/11-log-filter.conf):
filter {
  if [type] == "log" {
    grok {
      match => { "message" => "%{CUSTOMLOG}" }
      patterns_dir => "/opt/logstash/patterns"
    }
    mutate {
      rename => [ "logmessage", "message" ]
    }
    date {
      timezone => "Europe/London"
      locale => "en"
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}
Apparently the parser is working fine when I test it from the command line:
[root@XXXXX logstash]# bin/logstash -f test.conf
Settings: Default pipeline workers: 4
Logstash startup completed
2016-06-03 12:55:57,718 - root - INFO - [27232][service][751282714902528] - here goes my message
{
"message" => "here goes my message",
"#version" => "1",
"#timestamp" => "2016-06-03T11:55:57.718Z",
"host" => "19598",
"timestamp" => "2016-06-03 12:55:57,718",
"auth" => "root",
"loglevel" => "INFO",
"pyid" => "27232",
"source" => "service",
"searchId" => "751282714902528"
}
However... the logs do not appear in Kibana. I don't even see "_grokparsefailure" tags, so I guess the parser is working but I can't find the logs.
What am I doing wrong? Am I forgetting something?
Thanks in advance.
Edit
Input (02-beats-input.conf):
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Output (30-elasticsearch-output.conf):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
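A quick way to narrow this down is to check whether the parsed documents are actually reaching Elasticsearch before suspecting Kibana. A minimal sketch, assuming the default Filebeat index naming from the output above (adjust the index pattern to your setup):

# List all indices and their document counts
curl 'localhost:9200/_cat/indices?v'

# Look at a recent document in the Filebeat indices to see whether the
# parsed fields (auth, loglevel, searchId, ...) are present
curl 'localhost:9200/filebeat-*/_search?size=1&pretty'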
