Using logstash index in SIEM - elasticsearch

I am using Beats -> Kafka streams -> Logstash -> Elasticsearch -> Kibana.
Is it possible to use my new index in the Security (SIEM) app?
My logstash configuration:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["apache"]
  }
}
filter {
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
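The Security (SIEM) app only reads the indices listed in Kibana's securitySolution:defaultIndex advanced setting (siem:defaultIndex on older versions), which by default covers patterns such as auditbeat-*, filebeat-*, packetbeat-*, winlogbeat-* and logs-*, and it expects ECS-compliant field names (at minimum @timestamp, plus fields like source.ip and destination.ip for most detections). So you would either write to an index matching one of those patterns or add your own pattern to that setting. A minimal sketch of the output side, assuming your filter already produces ECS fields; kafka-apache-* is a made-up index name that you would then add to securitySolution:defaultIndex:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # hypothetical index name; add "kafka-apache-*" to the
    # securitySolution:defaultIndex setting so the Security app reads it
    index => "kafka-apache-%{+YYYY.MM.dd}"
  }
}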

Related

Monitor Kong API Logs Using ELK

We are using ELK (Elasticsearch, Logstash, Kibana) version 8.x to collect logs from Kong API Gateway version 2.8 using the tcp-logs plugin.
We have configured the tcp-logs plugin to use Logstash as its endpoint, so Kong sends the logs to Logstash and Logstash forwards them to Elasticsearch.
Kong TCP-Logs Plugin -> Logstash -> Elasticsearch
I would appreciate your help clarifying the following, please:
How do I display the Kong API Gateway logs in Kibana? Where should I start?
Is an index for the Kong logs created by default in Elasticsearch?
Which Elasticsearch index pattern do I need to use to get the Kong API logs?
Note: I am not using the Filebeat agent on the Kong API nodes. I am using the tcp-logs plugin to send Kong logs to Logstash.
The content of /etc/logstash/conf.d/beats.conf
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["Elastic_IP_Address:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Thanks so much for your support!
To fix this issue, set index => "transaction" in the /etc/logstash/conf.d/beats.conf configuration file, then use the transaction index pattern to display the logs in Kibana.
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["Elastic_IP_Address:9200"]
    index => "transaction"
  }
}
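Separately, Kong's tcp-log plugin ships each entry as a JSON object over a plain TCP connection rather than through a Beats agent, so on the Logstash side a tcp input with the json codec is the usual counterpart to it. A minimal sketch, assuming the tcp-log plugin is pointed at port 5045 on the Logstash host (the port number is made up and must match whatever the plugin is configured with):
input {
  tcp {
    port  => 5045     # hypothetical port; must match the tcp-log plugin's target
    codec => "json"   # the tcp-log plugin emits one JSON object per request
  }
}
output {
  elasticsearch {
    hosts => ["Elastic_IP_Address:9200"]
    index => "transaction"
  }
}
With the documents in the transaction index, creating a transaction index pattern (data view) in Kibana is enough to browse them in Discover.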

How to import an Apache log into Elasticsearch and create an index pattern in Kibana

I have an Apache log file which I want to import into Elasticsearch and create an index pattern for in Kibana. I have installed the ELK stack on my CentOS 7 machine and the services are running. I configured the /etc/logstash/conf.d/apachelog.conf file, but nothing shows up in Kibana. Please suggest what is missing or what other configuration is required. elasticsearch.yml and kibana.yml have also been configured. The /etc/logstash/conf.d/apachelog.conf file is as follows:
input {
  file {
    path => "/home/vagrant/apache.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "project"
  }
}
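Two things commonly bite this setup: the file input remembers how far it has read in a sincedb file, so start_position => "beginning" only applies to files Logstash has never seen before, and the project index pattern still has to be created in Kibana before anything shows in Discover. A sketch of the input for testing, assuming the logstash user is allowed to read /home/vagrant/apache.log:
input {
  file {
    path           => "/home/vagrant/apache.log"
    start_position => "beginning"
    # for testing only: do not persist read positions, so the whole
    # file is re-read every time the pipeline starts
    sincedb_path   => "/dev/null"
  }
}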

How to send logs from multiple servers to ELK server

I have a server on which ELK is installed. On the other end I have two source servers which send logs to the ELK server through Filebeat. The issue is that both servers' logs show up on the same page in Kibana, which makes it hard to identify which log came from which server. How can each server's logs be shown separately in Kibana?
The following is my logstash.conf:
input {
  beats {
    port => 5044
  }
}
# Used to parse syslog messages and send them to Elasticsearch for storing
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
# Specify an Elasticsearch instance
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
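Filebeat stamps every event with the host it was read on (host.name on recent versions, beat.hostname on older ones), so the simplest option is to keep one index and filter on that field in Kibana. If you want a separate index per server instead, the hostname can be interpolated into the index name. A sketch, assuming recent Beats field names and lowercase hostnames (index names must be lowercase):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # one index per source host; older Filebeat versions expose the
    # hostname as [beat][hostname] instead of [host][name]
    index => "%{[host][name]}-%{+YYYY.MM.dd}"
  }
}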

Elasticsearch monitoring search queries

For more than a week I have been struggling to log, into an Elasticsearch index, information about the queries I run, so that I can compare performance between different types of queries. I have configured this config file in the Logstash home directory:
input {
  beats {
    port => 5044
  }
}
filter {
  if "search" in [request] {
    grok {
      match => { "request" => ".*\n\{(?<query_body>.*)" }
    }
    grok {
      match => { "path" => "\/(?<index>.*)\/_search" }
    }
    # default the index field when the request path did not contain one
    if ![index] {
      mutate {
        add_field => { "index" => "All" }
      }
    }
    mutate {
      update => { "query_body" => "{%{query_body}" }
    }
  }
}
output {
  if "search" in [request] and "ignore_unmapped" not in [query_body] {
    elasticsearch {
      hosts => "http://localhost:9200"
    }
  }
}
I also installed Packetbeat and configured packetbeat.yml, setting the Logstash hosts to http://localhost:9200.
The tutorial I followed also mentions that after starting Packetbeat it will listen for packets on port 9200, send them to Logstash, and from there to the monitoring Elasticsearch cluster, where they will be indexed into indices like logstash-2016.05.24. But these indices do not exist.
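Note that 9200 is the port Packetbeat should sniff (the Elasticsearch HTTP traffic), not the port it should ship events to: the beats input above listens on 5044, so the Logstash output in packetbeat.yml has to point there. A sketch, assuming a recent Packetbeat version and that Elasticsearch and Logstash run on the same machine:
# packetbeat.yml (sketch)
packetbeat.interfaces.device: any

packetbeat.protocols:
  - type: http
    ports: [9200]             # sniff the queries going to Elasticsearch

output.logstash:
  hosts: ["localhost:5044"]   # the port the Logstash beats input listens on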

Filebeat -> Logstash indexing documents twice

I have Nginx logs being sent from Filebeat to Logstash which is indexing them into Elasticsearch.
Every entry gets indexed twice. Once with the correct grok filter and then again with no fields found except for the "message" field.
This is the logstash configuration.
02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => false
  }
}
11-nginx-filter.conf
filter {
  if [type] == "nginx-access" {
    grok {
      patterns_dir => ['/etc/logstash/patterns']
      match => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "d/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
Nginx Patterns
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip}\s+%{NGUSER:ident}\s+%{NGUSER:auth}\s+\[%{HTTPDATE:timestamp}\]\s+\"%{WORD:verb}\s+%{URIPATHPARAM:request}\s+HTTP/%{NUMBER:httpversion}\"\s+%{NUMBER:response}\s+(?:%{NUMBER:bytes}|-)\s+(?:\"(?:%{URI:referrer}|-)\"|%{QS:referrer})\s+%{QS:agent}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["elastic00:9200", "elastic01:9200", "elastic02:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Check your filebeat configuration!
During setup I had accidentally un-commented and configured the output.elasticsearch section of filebeat.yml.
I then also configured the output.logstash section of the configuration but forgot to comment the Elasticsearch output section back out.
This caused one copy of each entry to be sent to Logstash, where it was grok'd, and another copy to be sent directly to Elasticsearch.
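In other words, filebeat.yml should have exactly one output enabled. A sketch of the relevant parts, assuming a plain log input for the Nginx access log (the path is only an example):
# filebeat.yml (sketch) - enable exactly one output
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log

# keep the Elasticsearch output commented out ...
#output.elasticsearch:
#  hosts: ["elastic00:9200"]

# ... so that events are shipped only to Logstash
output.logstash:
  hosts: ["localhost:5044"]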
