I'm setting up a Logstash cluster and I configured authentication on the elasticsearch output.
However, I can't figure out why it isn't working...
I tried brackets, no brackets, an IP, an FQDN...
input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { hosts => ["localhost.enedis.fr:9200"] }
  user = sec-svc-log01
  password => 3N3D1S!!
  stdout { codec => rubydebug }
}
Am I missing something?
Thanks for your help!
Try the output section below. The user and password options must go inside the elasticsearch block, use => instead of =, and the values need quotes:
output {
  elasticsearch {
    hosts => ["localhost.enedis.fr:9200"]
    user => "sec-svc-log01"
    password => "3N3D1S!!"
  }
  stdout { codec => rubydebug }
}
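These credentials are sent to Elasticsearch as an HTTP Basic Authorization header, so a quick way to sanity-check them independently of Logstash is curl -u user:password against port 9200. As a small illustrative sketch (credentials taken from the config above), this is what the header Logstash will send looks like:

```python
import base64

# HTTP Basic auth: the header value is "Basic " + base64("user:password").
user, password = "sec-svc-log01", "3N3D1S!!"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
auth_header = f"Basic {token}"
print(auth_header)
```

If curl with the same credentials returns 401, the problem is on the Elasticsearch side rather than in the Logstash config.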
I'm setting up an ELK stack with Kafka and want to send logs through two Kafka topics (topic1 for Windows logs and topic2 for Wazuh logs) to Logstash, each with a different codec and filter. I tried the input config below for Logstash, but it doesn't work:
input {
  kafka {
    bootstrap_servers => "kafka:9000"
    topics => ["windowslog", "system02"]
    decorate_events => true
    codec => "json"
    auto_offset_reset => "earliest"
  }
  kafka {
    bootstrap_servers => "kafka-broker:9000"
    topics => ["wazuh-alerts"]
    decorate_events => true
    codec => "json_lines"
  }
}
and the filter.conf file:
filter {
  if [@metadata][kafka][topic] == "wazuh-alerts" {
    if [data][srcip] {
      mutate {
        add_field => [ "@src_ip", "%{[data][srcip]}" ]
      }
    }
    if [data][aws][sourceIPAddress] {
      mutate {
        add_field => [ "@src_ip", "%{[data][aws][sourceIPAddress]}" ]
      }
    }
    geoip {
      source => "@src_ip"
      target => "GeoLocation"
      fields => ["city_name", "country_name", "region_name", "location"]
    }
    date {
      match => ["timestamp", "ISO8601"]
      target => "@timestamp"
    }
    mutate {
      remove_field => [ "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip", "host"]
    }
  }
}
How can I do this?
Try using tags on each input and then filter based on those tags.
For example:
input {
  kafka {
    bootstrap_servers => "kafka-broker:9000"
    topics => ["wazuh-alerts"]
    decorate_events => true
    codec => "json_lines"
    tags => ["wazuh-alerts"]
  }
}
And in your filters and outputs you need a conditional based on that tag.
filter {
  if "wazuh-alerts" in [tags] {
    # your filters
  }
}
output {
  if "wazuh-alerts" in [tags] {
    # your output
  }
}
This is my logstash.conf file:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
filter {
  mutate {
    split => ["%{headers.request_path}", "/"]
    add_field => { "index_id" => "%{headers.request_path[0]}" }
    add_field => { "document_id" => "%{headers.request_path[1]}" }
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "%{index_id}"
    document_id => "%{document_id}"
  }
  stdout {
    codec => "rubydebug"
  }
}
When I send a PUT request like
C:\Users\BolverkXR\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe
-XPUT 'http://127.0.0.1:31311/twitter'
I want a new index to be created with the name twitter, instead of using the ElasticSearch default.
However, Logstash crashes immediately with the following (truncated) error message:
Exception in pipelineworker, the pipeline stopped processing new
events, please check your filter configuration and restart Logstash.
org.logstash.FieldReference$IllegalSyntaxException: Invalid
FieldReference: headers.request_path[0]
I am sure I have made a syntax error somewhere, but I can't see where it is. How can I fix this?
EDIT:
The same error occurs when I change the filter segment to the following:
filter {
  mutate {
    split => ["%{[headers][request_path]}", "/"]
    add_field => { "index_id" => "%{[headers][request_path][0]}" }
    add_field => { "document_id" => "%{[headers][request_path][1]}" }
  }
}
The %{foo} syntax is not used when naming the field to split. Also, you should start at position [1] of the array: position [0] will hold an empty string ("") because there are no characters to the left of the first separator (/). Your filter section should instead look like this:
filter {
  mutate {
    split => ["[headers][request_path]", "/"]
    add_field => { "index_id" => "%{[headers][request_path][1]}" }
    add_field => { "document_id" => "%{[headers][request_path][2]}" }
  }
}
You can now use the values in %{index_id} and %{document_id}. I tested this with Logstash 6.5.3, using Postman to send an HTTP request to 'http://127.0.0.1:31311/twitter/1', and the console output was as follows:
{
    "message" => "",
    "index_id" => "twitter",
    "document_id" => "1",
    "@version" => "1",
    "host" => "127.0.0.1",
    "@timestamp" => 2019-04-09T12:15:47.098Z,
    "headers" => {
        "connection" => "keep-alive",
        "http_version" => "HTTP/1.1",
        "http_accept" => "*/*",
        "cache_control" => "no-cache",
        "content_length" => "0",
        "postman_token" => "cb81754f-6d1c-4e31-ac94-fde50c0fdbf8",
        "accept_encoding" => "gzip, deflate",
        "request_path" => [
            [0] "",
            [1] "twitter",
            [2] "1"
        ],
        "http_host" => "127.0.0.1:31311",
        "http_user_agent" => "PostmanRuntime/7.6.1",
        "request_method" => "PUT"
    }
}
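The leading empty element at [0] of request_path comes from splitting a string that begins with the separator, which is ordinary split behavior in any language. A quick Python illustration:

```python
# Splitting a path that starts with "/" yields an empty string
# before the first separator, so the real segments start at index 1.
path = "/twitter/1"
parts = path.split("/")
print(parts)  # ['', 'twitter', '1']
```

This is why the corrected filter reads the index name from position [1] and the document id from position [2].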
The output section of your configuration does not change. So, your final logstash.conf file will be something like this:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
filter {
  mutate {
    split => ["[headers][request_path]", "/"]
    add_field => { "index_id" => "%{[headers][request_path][1]}" }
    add_field => { "document_id" => "%{[headers][request_path][2]}" }
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "%{index_id}"
    document_id => "%{document_id}"
  }
  stdout {
    codec => "rubydebug"
  }
}
I'm using Filebeat to push nginx logs to Logstash and then to Elasticsearch.
Logstash filter:
filter {
  if [fileset][module] == "nginx" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[nginx][access][time]"
      }
      useragent {
        source => "[nginx][access][agent]"
        target => "[nginx][access][user_agent]"
        remove_field => "[nginx][access][agent]"
      }
      geoip {
        source => "[nginx][access][remote_ip]"
        target => "[nginx][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
        remove_field => "message"
      }
      mutate {
        rename => { "@timestamp" => "read_timestamp" }
      }
      date {
        match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
        remove_field => "[nginx][error][time]"
      }
    }
  }
}
There is just one file /var/log/nginx/access.log.
In Kibana, I see about half of the rows with a parsed message and the other half without.
All of the rows in Kibana have the tag "beats_input_codec_plain_applied".
Example events from filebeat -e.
A row that works fine:
"source": "/var/log/nginx/access.log",
"offset": 5405195,
"message": "...",
"fileset": {
  "module": "nginx",
  "name": "access"
}
A row that doesn't work (no "fileset"):
"offset": 5405397,
"message": "...",
"source": "/var/log/nginx/access.log"
Any idea what could be the cause?
I want to create a tile map in Kibana to show source IP's from countries around the world.
When trying to set up a tile map, I get an error saying that "The "logstash-*" index pattern does not contain any of the following field types: geo_point"
I've googled the problem and found this link https://github.com/elastic/logstash/issues/3137, and at the end of that page it states this was fixed in 2.x. But I am on 2.1.
Here are my configs:
1inputs.conf:
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
5pfsense.conf:
filter {
  # Replace with your IP
  if [host] =~ /10\.1\.15\.200/ {
    grok {
      match => [ 'message', '.* %{WORD:program}:%{GREEDYDATA:rest}' ]
    }
    if [program] == "filterlog" {
      # Grab fields up to IP version. The rest will vary depending on IP version.
      grok {
        match => [ 'rest', '%{INT:rule_number},%{INT:sub_rule_number},,%{INT:tracker_id},%{WORD:interface},%{WORD:reason},%{WORD:action},%{WORD:direction},%{WORD:ip_version},%{GREEDYDATA:rest2}' ]
      }
    }
    mutate {
      replace => [ 'message', '%{rest2}' ]
    }
    if [ip_version] == "4" {
      # IPv4. Grab field up to dest_ip. Rest can vary.
      grok {
        match => [ 'message', '%{WORD:tos},,%{INT:ttl},%{INT:id},%{INT:offset},%{WORD:flags},%{INT:protocol_id},%{WORD:protocol},%{INT:length},%{IP:src_ip},%{IP:dest_ip},%{GREEDYDATA:rest3}' ]
      }
    }
    if [protocol_id] != 2 {
      # Non-IGMP has more fields.
      grok {
        match => [ 'rest3', '^%{INT:src_port:int},%{INT:dest_port:int}' ]
      }
    }
    else {
      # IPv6. Grab field up to dest_ip. Rest can vary.
      grok {
        match => [ 'message', '%{WORD:class},%{WORD:flow_label},%{INT:hop_limit},%{WORD:protocol},%{INT:protocol_id},%{INT:length},%{IPV6:src_ip},%{IPV6:dest_ip},%{GREEDYDATA:rest3}' ]
      }
    }
    mutate {
      replace => [ 'message', '%{rest3}' ]
      lowercase => [ 'protocol' ]
    }
    if [message] {
      # Non-ICMP has more fields
      grok {
        match => [ 'message', '^%{INT:src_port:int},%{INT:dest_port:int},%{INT:data_length}' ]
      }
    }
    mutate {
      remove_field => [ 'message' ]
      remove_field => [ 'rest' ]
      remove_field => [ 'rest2' ]
      remove_field => [ 'rest3' ]
      remove_tag => [ '_grokparsefailure' ]
      add_tag => [ 'packetfilter' ]
    }
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
    }
  }
}
Lastly, the 50outputs.conf:
output {
  elasticsearch {
    hosts => localhost
    index => "logstash-%{+YYYY.MM.dd}"
    template_overwrite => "true"
  }
  stdout { codec => rubydebug }
}
I have ELK installed and working on my machine, but now I want to do more complex filtering and field adding depending on event messages.
Specifically, I want to set "id_error" and "descripcio" depending on the message pattern.
I have been trying many code combinations in the "logstash.conf" file, but I am not able to get the expected behavior.
Can someone tell me what I am doing wrong, what I have to do, or if this is not possible? Thanks in advance.
This is my "logstash.conf" file, with the last test I have made, resulting in no events captured in Kibana:
input {
  file {
    path => "C:\xxx.log"
  }
}
filter {
  grok {
    patterns_dir => "C:\elk\patterns"
    match => [ "message", "%{ERROR2:error2}" ]
    add_field => [ "id_error", "2" ]
    add_field => [ "descripcio", "error2!!!" ]
  }
  grok {
    patterns_dir => "C:\elk\patterns"
    match => [ "message", "%{ERROR1:error1}" ]
    add_field => [ "id_error", "1" ]
    add_field => [ "descripcio", "error1!!!" ]
  }
  if ("_grokparsefailure" in [tags]) { drop {} }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "xxx-%{+YYYY.MM.dd}"
  }
}
I have also tried the following code, which results in the fields "id_error" and "descripcio" containing both values, "[1,2]" and "[error1!!!,error2!!!]" respectively, in each matched event.
Since "break_on_match" is true by default, I expected to get only the fields behind the matching clause, but this doesn't happen.
input {
  file {
    path => "C:\xxx.log"
  }
}
filter {
  grok {
    patterns_dir => "C:\elk\patterns"
    match => [ "message", "%{ERROR1:error1}" ]
    add_field => [ "id_error", "1" ]
    add_field => [ "descripcio", "error1!!!" ]
    match => [ "message", "%{ERROR2:error2}" ]
    add_field => [ "id_error", "2" ]
    add_field => [ "descripcio", "error2!!!" ]
  }
  if ("_grokparsefailure" in [tags]) { drop {} }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "xxx-%{+YYYY.MM.dd}"
  }
}
I have solved the problem. I get the expected results with the following code in "logstash.conf":
input {
  file {
    path => "C:\xxx.log"
  }
}
filter {
  grok {
    patterns_dir => "C:\elk\patterns"
    match => [ "message", "%{ERROR1:error1}" ]
    match => [ "message", "%{ERROR2:error2}" ]
  }
  if [message] =~ /error1_regex/ {
    grok {
      patterns_dir => "C:\elk\patterns"
      match => [ "message", "%{ERROR1:error1}" ]
    }
    mutate {
      add_field => [ "id_error", "1" ]
      add_field => [ "descripcio", "Error1!" ]
      remove_field => [ "message" ]
      remove_field => [ "error1" ]
    }
  }
  else if [message] =~ /error2_regex/ {
    grok {
      patterns_dir => "C:\elk\patterns"
      match => [ "message", "%{ERROR2:error2}" ]
    }
    mutate {
      add_field => [ "id_error", "2" ]
      add_field => [ "descripcio", "Error2!" ]
      remove_field => [ "message" ]
      remove_field => [ "error2" ]
    }
  }
  if ("_grokparsefailure" in [tags]) { drop {} }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "xxx-%{+YYYY.MM.dd}"
  }
}
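The dispatch logic in that filter is essentially first-match-wins over an ordered list of patterns, with unmatched events dropped. A minimal sketch of the same idea (Python, with the literal strings error1_regex/error2_regex standing in for the custom ERROR1/ERROR2 grok patterns, which are not shown in the question):

```python
import re

# Ordered rules: the first pattern that matches decides the added fields.
# The regexes here are placeholders for the real ERROR1/ERROR2 patterns.
RULES = [
    (re.compile(r"error1_regex"), {"id_error": "1", "descripcio": "Error1!"}),
    (re.compile(r"error2_regex"), {"id_error": "2", "descripcio": "Error2!"}),
]

def classify(message):
    """Return the fields for the first matching pattern, or None (event dropped)."""
    for pattern, fields in RULES:
        if pattern.search(message):
            return fields
    return None  # corresponds to dropping _grokparsefailure events

print(classify("... error1_regex ..."))  # → {'id_error': '1', 'descripcio': 'Error1!'}
```

Putting each add_field inside its own conditional branch is what prevents the "[1,2]" merged values seen earlier: only one branch ever runs per event.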