Monitor Kong API Logs Using ELK

We are using ELK (Elasticsearch, Logstash, Kibana) version 8.x to collect logs from Kong API Gateway version 2.8 using the tcp-log plugin.
We have configured the tcp-log plugin to use Logstash as its endpoint, so the logs are sent to Logstash and Logstash then forwards them to Elasticsearch.
Kong tcp-log plugin -> Logstash -> Elasticsearch
I would appreciate your help clarifying the following, please:
How do I display the Kong API Gateway logs in Kibana? Where should I start?
Is an index for the Kong logs created by default in Elasticsearch?
Which Elasticsearch index pattern do I need to use to get the Kong API logs?
Note: I am not using the Filebeat agent on the Kong API nodes. I am using the tcp-log plugin to send the Kong logs to Logstash.
The content of /etc/logstash/conf.d/beats.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["Elastic_IP_Address:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Thanks so much for your support!

To fix this issue, we set index => "transaction" in the /etc/logstash/conf.d/beats.conf configuration file.
The transaction index is then used as the index pattern in Kibana to display the logs.
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["Elastic_IP_Address:9200"]
    index => "transaction"
  }
}
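Since the logs come from Kong's tcp-log plugin rather than from a Beats agent, the pipeline also needs a tcp input: the tcp-log plugin opens a TCP connection to the configured host and port and writes each request/response record as one JSON object. A minimal sketch of such an input, assuming the plugin is pointed at port 5045 (the port number is only a placeholder; keep whatever port you configured in Kong):
input {
  tcp {
    # port the Kong tcp-log plugin sends to (placeholder value)
    port  => 5045
    # Kong emits one JSON document per request; json_lines also works for newline-delimited streams
    codec => json
  }
}
Once documents are being written to the transaction index, Kibana will not show them automatically: in Kibana 8.x you create a data view (formerly called an index pattern) under Stack Management matching transaction or transaction*, and the Kong logs then appear in Discover.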

Related

Match missing LogStash logs to the correct date in Kibana dashboard

TL;DR: Map the missing logs from Logstash to their correct date and time in the Kibana dashboard.
I have an Amazon Elasticsearch domain configured (which includes both the Elasticsearch service and a Kibana dashboard). My source of logs is a Beanstalk environment. I have installed Filebeat inside that environment, which sends the logs to an EC2 instance configured with Logstash. The Logstash server then sends the logs to the ES domain endpoint.
This worked for a while without issue, but when I checked yesterday I saw that logs had not been sent to ES for about 4 days. I got it fixed and the current logs are being transferred fine, but the missing logs are still stacked on the Logstash server.
I modified my logstash.conf file to include the missing log files, but they all appear as a single bar on the current date in my Kibana graph. What I want is for each missing set of logs to be shown in Kibana at its respective date and time.
Example date and time part:
2021-05-20 19:44:34.700+0000
The following is my logstash.conf configuration. (Please let me know if I should post my FileBeat config too).
input {
  beats {
    port => 5044
  }
}
filter {
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss.SSS+Z" ]
  }
  mutate {
    split => { "message" => "|" }
    add_field => { "messageDate" => "%{[message][0]}" }
    add_field => { "messageLevel" => "%{[message][1]}" }
    add_field => { "messageContextName" => "%{[message][2]}" }
    add_field => { "messagePID" => "%{[message][3]}" }
    add_field => { "messageThread" => "%{[message][4]}" }
    add_field => { "messageLogger" => "%{[message][5]}" }
    add_field => { "messageMessage" => "%{[message][6]}" }
  }
}
output {
  amazon_es {
    hosts => ["hostname"]
    index => "dev-%{+YYYY.MM.dd}"
    region => "region"
    aws_access_key_id => "ackey_id"
    aws_secret_access_key => "ackey"
  }
}
Use a date filter to parse the date contained in the message. This will set [@timestamp], and the event will then appear in the right bucket in Kibana.
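A minimal sketch of that approach, using the fields the question's mutate/split already creates: run the date filter after the split, against messageDate, with a pattern that matches 2021-05-20 19:44:34.700+0000 (in the posted configuration the date filter runs before the split and targets a logdate field that is never created, so it has nothing to parse):
filter {
  mutate {
    split     => { "message" => "|" }
    add_field => { "messageDate" => "%{[message][0]}" }
  }
  date {
    # parse the timestamp extracted from the log line and write it to @timestamp
    match  => [ "messageDate", "yyyy-MM-dd HH:mm:ss.SSSZ" ]
    target => "@timestamp"
  }
}
With @timestamp taken from the log line itself, the back-filled events land on their original dates in the Kibana histogram instead of piling up on the ingest date.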

How to send logs from multiple servers to ELK server

I have a server with ELK installed. On the other end I have 2 source servers which send logs to the ELK server through Filebeat. The issue is that both servers' logs show up on the same page in Kibana, which makes it hard to identify which log is coming from which server. How can each server's logs be shown separately in Kibana?
The following is my logstash.conf:
input {
  beats {
    port => 5044
  }
}
# Used to parse syslog messages and send them to Elasticsearch for storing
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
# Specify an Elasticsearch instance
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
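One way to tell the two sources apart (a sketch, not from the original post): the Beats input adds the sending host's name to each event, so you can either filter on that field in Kibana or route each host to its own index in the output. Depending on the Filebeat version the field is [host][name], [beat][hostname], or [agent][hostname]; the sketch below assumes [host][name]:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per source host, e.g. web01-2023.01.31 (host names here are hypothetical)
    index => "%{[host][name]}-%{+YYYY.MM.dd}"
  }
}
Alternatively, keep a single index and simply add a filter or a split series on that host field in the Kibana visualizations.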

Elasticsearch monitoring search queries

For more than a week I have been struggling to log information about the queries I run into an Elasticsearch index, so that I can compare performance between different types of queries. I have configured this config file in the Logstash home directory:
input {
  beats {
    port => 5044
  }
}
filter {
  if "search" in [request] {
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    if [index] {
    } else {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}
output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "http://localhost:9200"
    }
  }
}
I also installed Packetbeat and configured packetbeat.yml with the Logstash hosts set to http://localhost:9200.
The tutorial I followed says that after starting Packetbeat it will listen for packets on port 9200 and send them to Logstash, and from there to the monitoring Elasticsearch cluster, where they are indexed in indices like logstash-2016.05.24. But these indices do not exist.
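For reference, the elasticsearch output in the pipeline above does not set an index option, so matching events would be written to Logstash's default index (logstash-%{+YYYY.MM.dd} in the Logstash versions of that era) rather than to anything query-specific. A hedged variant that makes the destination explicit (the index name search-queries is only an illustration):
output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "http://localhost:9200"
      # write the captured queries to a dedicated, easy-to-find index
      index => "search-queries-%{+YYYY.MM.dd}"
    }
  }
}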

Uncooperative ELK Docker Instance

I have ELK 5.5.1 running in a Docker container, and it will parse most of my logs, except for the ones that originate from my Spring application. I'm kind of running out of ideas.
I've traced it down to the Logstash -> Elasticsearch pipeline. Filebeat is doing its job, and Logstash is receiving logs from the application in question, based on tailing Logstash's stdout log.
I wiped the Docker volume that stores my ELK data clean and started fresh, with Filebeat just forwarding the logs in question.
Take a log line like this:
FINEST|8384/0|Service tsoft_spring|17-08-31 14:12:01|2017-08-31 14:12:01.260 INFO 8384 --- [ taskExecutor-2] c.t.s.c.s.a.ConfirmationService : Will not persist empty response notes
Using a very minimal Logstash configuration, it winds up being persisted in Elasticsearch:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{GREEDYDATA:logmessage}" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
Using a more complete configuration, the log is just ignored by Elasticsearch: no _grokparsefailure, no _dateparsefailure:
input {
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  if [message] =~ /tsoft_spring/ {
    grok {
      match => [ "message", "%{WORD}\|%{NUMBER}/%{NUMBER}\|%{WORD}%{SPACE}%{WORD}\|%{TIMESTAMP_ISO8601:timestamp}\|%{TIMESTAMP_ISO8601}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER:pid}%{SPACE}---%{SPACE}%{SYSLOG5424SD:threadname}%{SPACE}%{JAVACLASS:classname}%{SPACE}:%{SPACE}%{GREEDYDATA:logmessage}" ]
    }
    date {
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
I've checked that this pattern will parse that line, using http://grokconstructor.appspot.com/do/match#result, and I could've sworn it was working last weekend, but could be my imagination.
Maybe the problem here is not in your grok filter but in the date match: the resulting year is 0017 instead of 2017. Maybe that's why you can't find the event in ES? Can you try this:
date {
  match => [ "timestamp" , "yy-MM-dd HH:mm:ss" ]
}
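One way to confirm that (a debugging sketch, not part of the original answer) is to keep the stdout output but give it the rubydebug codec, so the parsed event, including the @timestamp the date filter produced, is printed for inspection:
output {
  # print every parsed event, including @timestamp, to Logstash's stdout
  stdout { codec => rubydebug }
  elasticsearch { hosts => ["localhost:9200"] }
}
If the printed @timestamp starts with 0017 rather than 2017, the two-digit year pattern above is the fix; the document simply sits far outside the time range Kibana is showing.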

Elasticsearch Logstash Filebeat mapping

I'm having a problem with the ELK Stack + Filebeat.
Filebeat is sending Apache-like logs to Logstash, which should be parsing the lines. Elasticsearch should be storing the split data in fields so I can visualize them using Kibana.
Problem:
Elasticsearch receives the logs but stores them in a single "message" field.
Desired solution:
Input:
10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]
ES:
"ip":"10.0.0.1"
"hostname":"some.hostname.at"
"timestamp":"27/Jun/2017:23:59:59 +0200"
My logstash configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "web-apache" {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "IP: %{IPV4:client_ip}, Hostname: %{HOSTNAME:hostname}, - \[timestamp: %{HTTPDATE:timestamp}\]" }
      break_on_match => false
      remove_field => [ "message" ]
    }
    date {
      locale => "en"
      timezone => "Europe/Vienna"
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    useragent {
      source => "agent"
      prefix => "browser_"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test1"
    document_type => "accessAPI"
  }
}
My Elasticsearch Discover output:
I hope there are some ELK experts around who can help me.
Thank you in advance,
Matthias
The grok filter you stated will not work here.
Try using:
%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]
There is no need to specify the desired names separately in front of the fields (you're not trying to format the message here, but to extract separate fields); just stating the field name after the ':' inside the brackets will lead to the result you want.
Also, use the overwrite option instead of remove_field for message.
More information here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-options
It will look similar to this in the end:
filter {
  grok {
    match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
    overwrite => [ "message" ]
  }
}
You can test grok filters here:
http://grokconstructor.appspot.com/do/match
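Folded back into the configuration from the question, the filter block would then look roughly like this (a sketch; the web-apache type, the timezone, and the date pattern are taken from the question, with remove_field replaced by overwrite):
filter {
  if [type] == "web-apache" {
    grok {
      match     => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
      overwrite => [ "message" ]
    }
    date {
      locale   => "en"
      timezone => "Europe/Vienna"
      match    => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}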
