Logstash 5.6.0 and Elasticsearch 6.2.1

I have the configuration below in logstash.conf and started Logstash with the following command:
`logstash --verbose -f D:\ELK\logstash-5.6.0\logstash-5.6.0\logstash.conf`
Elasticsearch is running on port 9200, but Logstash is not pipelining the parsed log file contents into Elasticsearch. Did I miss any configuration, or what am I doing wrong here?
input{
file{
path => "D:/server.log"
start_position => "beginning"
type => "logs"
}
}
filter{
grok{
match => {'message'=>'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
'message1'=>'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
'message2'=>'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
}
add_field => {
'eventName' => 'grok'
}
}
}
output{
elasticsearch{
hosts=>["localhost:9200"]
index=>"tuesday"
}
}
Here is my sample log content:
[2018-02-12 05:25:22,996] ERROR [VBH-1] (ClassA.java:55) - Could not process a new task
[2018-02-13 08:02:24,690] ERROR [CTY-2] C:31:cvbb09:0x73636711c67k4g2e (ClassB.java:159) - Calling command G Update on server http://localhost/TriggerDXFGeneration?null failed because server responded with http status 400 response was: ?<?xml version="1.0" encoding="utf-8"?>
[2018-02-13 08:02:24,690] DEBUG [BHU-2] C:31:cvbb09:0x73636711c67k4g2e (ClassC.java:836) - insertDxfProcessingQueue() called with ConfigID : FTCC08_0X5A3A7E222DD2171B
[2018-02-13 08:07:51,087] ERROR [http-apr-50101-exec-2] C:10:cvbb09 (ClassD.java:133) - Exception on TestScheduler():
It is failing to parse the log content:
{
"path" => "D://ELK/server.log",
"#timestamp" => 2018-02-19T16:01:12.083Z,
"#version" => "1",
"host" => "AAEINBLR05971L",
"message" => "[2018-02-13 08:02:24,690] DEBUG [BHU-2] C:31:cvbb09:0x73636711c67k4g2e (ClassC.java:836) - insertDxfProcessingQueue() called with ConfigID : FTCC08_0X5A3A7E222DD2171B\r",
"type" => "logs",
"tags" => [
[0] "_grokparsefailure"
]
}
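Note that the file input only creates a message field, so the second and third patterns above are matched against fields (message1, message2) that never exist on the event. A common way to try several patterns against the same field is to pass an array under message; a minimal sketch reusing the patterns from the config above, not a tested fix:
filter {
grok {
# grok tries the patterns in order and stops at the first match (break_on_match defaults to true)
match => {
"message" => [
'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}',
'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}%{WORD}\:%{WORD}\:%{WORD}:%{WORD}%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}',
'\[%{TIMESTAMP_ISO8601:logtime}\]%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}\[(?<threadname>[^\]]+)\]%{SPACE}\(%{WORD:className}\.%{WORD}\:%{WORD}\)%{SPACE}\-%{SPACE}%{GREEDYDATA:errorDescription}'
]
}
add_field => { 'eventName' => 'grok' }
}
}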

Related

Elasticsearch : Encountered a retryable error

I have written a Logstash config file, pipe.conf, that reads log messages from a file and then transfers the data to Elasticsearch.
Location of the config file: /etc/logstash/conf.d
pipe.conf has the following contents:
input
{
file
{
path => "/var/log/elasticsearch/file.log"
sincedb_path => "/dev/null"
start_position => "beginning"
type => "doc"
}
}
output
{
elasticsearch
{
hosts => ["localhost:9200"]
action => "create"
index => ["logs"]
}
}
When Logstash runs, the following error occurs:
"[Ruby-0-Thread-10#[main]>worker3: :1] elasticsearch - Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://localhost:9200/_bulk"}"
The default action is index, so there is no need to set action => "create". Use:
elasticsearch {
hosts => ["http://localhost:9200"]
index => "logs"
}
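Putting it together, pipe.conf would then look roughly like this (input block copied from the question, output as above; a sketch, not a tested config):
input {
file {
path => "/var/log/elasticsearch/file.log"
sincedb_path => "/dev/null"
start_position => "beginning"
type => "doc"
}
}
output {
elasticsearch {
# no explicit action; the default (index) is used
hosts => ["http://localhost:9200"]
index => "logs"
}
}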

Use logstash to copy index on same cluster

I have an ES 1.7.1 cluster and I wanted to "reindex" an index. Since I cannot use _reindex, I came across this tutorial for using Logstash to copy an index onto a different cluster. However, in my case I want to copy it over to the same cluster. This is my logstash.conf:
input {
# We read from the "old" cluster
elasticsearch {
hosts => "myhost"
port => "9200"
index => "my_index"
size => 500
scroll => "5m"
docinfo => true
}
}
output {
# We write to the "new" cluster
elasticsearch {
host => "myhost"
port => "9200"
protocol => "http"
index => "logstash_test"
index_type => "%{[@metadata][_type]}"
document_id => "%{[@metadata][_id]}"
}
# We print dots to see it in action
stdout {
codec => "dots"
}
}
filter {
mutate {
remove_field => [ "#timestamp", "#version" ]
}
}
This errors out with the following error message:
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Elasticsearch hosts=>[{:scheme=>"http", :user=>nil, :password=>nil, :host=>"myhost", :path=>"", :port=>"80", :protocol=>"http"}], index=>"my_index", scroll=>"5m", query=>"{\"query\": { \"match_all\": {} } }", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"]>
Error: Connection refused - Connection refused {:level=>:error}
Failed to install template: http: nodename nor servname provided, or not known {:level=>:error}
Any help would be appreciated
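Note that the plugin dump above shows the input ended up with :port=>"80" even though port => "9200" was set, so the separate port option does not seem to be applied. A sketch that folds the port into hosts instead (an assumption based on that output, not a verified fix for this setup):
input {
elasticsearch {
# port included in the host string instead of a separate port option
hosts => "myhost:9200"
index => "my_index"
size => 500
scroll => "5m"
docinfo => true
}
}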

Elasticsearch Logstash Filebeat mapping

I'm having a problem with the ELK stack + Filebeat.
Filebeat is sending Apache-like logs to Logstash, which should parse the lines. Elasticsearch should store the split data in fields so I can visualize them using Kibana.
Problem:
Elasticsearch receives the logs but stores them in a single "message" field.
Desired solution:
Input:
10.0.0.1 some.hostname.at - [27/Jun/2017:23:59:59 +0200]
ES:
"ip":"10.0.0.1"
"hostname":"some.hostname.at"
"timestamp":"27/Jun/2017:23:59:59 +0200"
My logstash configuration:
input {
beats {
port => 5044
}
}
filter {
if [type] == "web-apache" {
grok {
patterns_dir => ["./patterns"]
match => { "message" => "IP: %{IPV4:client_ip}, Hostname: %{HOSTNAME:hostname}, - \[timestamp: %{HTTPDATE:timestamp}\]" }
break_on_match => false
remove_field => [ "message" ]
}
date {
locale => "en"
timezone => "Europe/Vienna"
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
useragent {
source => "agent"
prefix => "browser_"
}
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => ["localhost:9200"]
index => "test1"
document_type => "accessAPI"
}
}
My Elasticsearch Discover output: (screenshot omitted)
I hope there are some ELK experts around who can help me.
Thank you in advance,
Matthias
The grok pattern you supplied will not work here.
Try using:
%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]
There is no need to put literal labels such as IP: or Hostname: in front of the patterns (you're not formatting the message here, but extracting separate fields); naming the field after the ':' inside the %{...} braces is enough to get the result you want.
Also, use the overwrite option instead of remove_field for message.
More information here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-options
In the end it will look similar to this:
filter {
grok {
match => { "message" => "%{IPV4:client_ip} %{HOSTNAME:hostname} - \[%{HTTPDATE:timestamp}\]" }
overwrite => [ "message" ]
}
}
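Against the sample line from the question, that pattern should yield roughly these fields (a sketch of the expected result, not output from a live run):
"client_ip" => "10.0.0.1",
"hostname" => "some.hostname.at",
"timestamp" => "27/Jun/2017:23:59:59 +0200"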
You can test grok filters here:
http://grokconstructor.appspot.com/do/match

How do you create a filter based on grok in Logstash

I am trying to insert this entry into Elasticsearch using Logstash:
2016-05-18 00:14:30,915 DEBUG http-bio-/158.134.18.57-8200-exec-1, HTTPReport - Saved report job 1000 for report
2016-05-18 00:14:30,937 DEBUG http-bio-/158.134.18.57-8200-exec-1, JavaReport -
************************************************************************************************
Report Job information
Job ID : 12000
Job name : 101
Job priority : 1
Job group : BACKGROUND
Report : Month End
2016-05-18 00:17:38,868 DEBUG JobsMaintenanceScheduler_Worker-1, DailyReport - System information: available processors = 12; memory status : 2638 MB of 4096 MB
I have this filter in the logstash conf file:
input {
file {
path => "/data/*.log"
type => "app_log"
start_position => "beginning"
}
}
filter {
multiline {
pattern => "(([\s]+)20[0-9]{2}-)|20[0-9]{2}-"
negate => true
what => "previous"
}
if [type] == "app_log" {
grok {
patterns_dir => ["/pattern"]
match => {"message" => "%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:Num_field} %{WORD:error_level} %{GREEDYDATA:origin}, %{WORD:logger} - %{GREEDYDATA:event%}"}
}
}
mutate { add_field => {"type" => "app_log"}}
mutate { add_field => {"machine_name" => "server101"}}
}
output {
elasticsearch {
hosts=> "localhost:9200"
index => "app_log-%{+YYYY.MM.dd}"
manage_template => false
}
}
I am getting this error:
translation missing: en.logstash.runner.configuration.file-not-found {:level=>:error}
I am not able to insert it. Any ideas what might be wrong?
Upgrade to the latest version of Logstash (2.3.2 at the time of writing), fix your grok filter as shown below, and it will work:
grok {
add_field => {"machine_name" =>"server010"}
match =>{"message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:error_level} %{DATA:origin}, %{DATA:logger_name} - %{GREEDYDATA:EVENT}"}
}
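For example, against the first sample line (2016-05-18 00:14:30,915 DEBUG http-bio-/158.134.18.57-8200-exec-1, HTTPReport - Saved report job 1000 for report) this match should extract roughly the following (a sketch, not live output):
"timestamp" => "2016-05-18 00:14:30,915",
"error_level" => "DEBUG",
"origin" => "http-bio-/158.134.18.57-8200-exec-1",
"logger_name" => "HTTPReport",
"EVENT" => "Saved report job 1000 for report",
"machine_name" => "server010"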

How to check if logstash receiving/parsing data from suricata to elasticsearch?

I am trying to configure Suricata v2.0.8 with Elasticsearch (v1.5.2), Logstash (v1.4.2) and Kibana (v4.0.2) on Mac OS X 10.10.3 Yosemite.
suricata.yaml:
# Extensible Event Format (nicknamed EVE) event log in JSON format
- eve-log:
enabled: yes
type: file #file|syslog|unix_dgram|unix_stream
filename: eve.json
# the following are valid when type: syslog above
#identity: "suricata"
#facility: local5
#level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug
types:
- alert
- http:
extended: yes # enable this for extended logging information
# custom allows additional http fields to be included in eve-log
# the example below adds three additional fields when uncommented
#custom: [Accept-Encoding, Accept-Language, Authorization]
- dns
- tls:
extended: yes # enable this for extended logging information
- files:
force-magic: yes # force logging magic on all logged files
force-md5: yes # force logging of md5 checksums
#- drop
- ssh
#- smtp
#- flow
logstash.conf:
input {
file {
path => ["/var/log/suricata/eve.json"]
sincedb_path => ["/var/lib/logstash/"]
codec => json
type => "SuricataIDPS"
start_position => "beginning"
}
}
filter {
if [type] == "SuricataIDPS" {
date {
match => [ "timestamp", "ISO8601" ]
}
ruby {
code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type']=event['fileinfo']['magic'].to_s.split(',')[0]; end;"
}
}
if [src_ip] {
geoip {
source => "src_ip"
target => "geoip"
#database => "/usr/local/opt/logstash/libexec/vendor/geoip/GeoLiteCity.dat"
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
convert => [ "[geoip][coordinates]", "float" ]
}
if ![geoip.ip] {
if [dest_ip] {
geoip {
source => "dest_ip"
target => "geoip"
#database => "/usr/local/opt/logstash/libexec/vendor/geoip/GeoLiteCity.dat"
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
convert => [ "[geoip][coordinates]", "float" ]
}
}
}
}
}
output {
elasticsearch {
host => localhost
#protocol => http
}
}
Suricata logs all events successfully into eve.json. When I open Kibana in the browser, I see no dashboards or any information from Suricata, so I assume Logstash either isn't reading the data from eve.json or isn't passing the data to Elasticsearch (or both). Is there any way to check what's going on?
Turn on a debug output in Logstash:
output {
stdout {
codec => rubydebug
}
}
Also, try running your query against Elasticsearch directly (with curl) rather than through Kibana.
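For example, assuming the elasticsearch output's default logstash-YYYY.MM.dd index naming, a quick check like this (a sketch; adjust the index pattern and host if yours differ) shows whether any Suricata events were indexed:
curl 'http://localhost:9200/logstash-*/_search?q=type:SuricataIDPS&size=1&pretty'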
I adapted the Suricata log to the nginx log format so I can get the GeoIP information for Suricata events. I do the adaptation with swatch and send the result to a log file that Filebeat is configured to read.
Ex:
nginx.access.referrer: ET INFO Session Traversal Utilities for NAT (STUN Binding Request) [**
nginx.access.geoip.location:
{
"lon": -119.688,
"lat": 45.8696
}
Use swatch to read the Suricata logs and send them to a shell script that does the adaptation.
Ex:
echo "$IP - - [$nd4] \"GET $IP2:$PORT2 --- $TYPE HTTP/1.1\" 777 0 \"$CVE\" \"Mozilla/5.0 (NONE) (NONE) NONE\"" >> /var/log/suricata_mod.log
Then configure filebeat.yml:
document_type: nginx-access
paths:
- /var/log/suricata_mod.log
Restart filebeat.
Finally, configure Logstash:
filter {
if [type] == "nginx-access" {
grok {
match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[$
remove_field => "message"
}
mutate {
add_field => { "read_timestamp" => "%{@timestamp}" }
}
date {
match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
remove_field => "[nginx][access][time]"
}
useragent {
source => "[nginx][access][agent]"
target => "[nginx][access][user_agent]"
remove_field => "[nginx][access][agent]"
}
geoip {
source => "[nginx][access][remote_ip]"
target => "[nginx][access][geoip]"
database => "/opt/GeoLite2-City.mmdb"
}
}
}
output {
elasticsearch {
hosts => [ "xxx.xxx.xxx.xxx:9200" ]
manage_template => false
document_type => "%{[@metadata][type]}"
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
}
}
Then restart Logstash. In Kibana, create a filebeat-* index pattern. Done.
