Logstash with Helm in Kubernetes: grok filter not working - elasticsearch

I installed a Filebeat -> Logstash -> Elasticsearch -> Kibana stack in Kubernetes with Helm charts:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name elastic --namespace monitoring incubator/elasticsearch --set client.replicas=1,master.replicas=2,data.replicas=1
helm install --name logstash --namespace monitoring incubator/logstash -f logstash_values.yaml
helm install --name filebeat stable/filebeat -f filebeat_values.yaml
helm install stable/kibana --name kibana --namespace monitoring
The logs are indexed in ES, but "message" contains the whole string rather than the fields I defined. My grok filter doesn't seem to be applied in the Logstash config.
There is no documentation at https://github.com/helm/charts/tree/master/incubator/logstash about how to set the patterns.
Here is what I tried:
My log format:
10-09-2018 11:57:55.906 [Debug] [LOG] serviceName - Technical - my specific message - correlationId - userId - data - operation - error - stackTrace escaped on one line
logstash_values.yaml (from https://github.com/helm/charts/blob/master/incubator/logstash/values.yaml):
elasticsearch:
  host: elasticsearch-client.default.svc.cluster.local
  port: 9200

patterns:
  main: |-
    (?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3})} [(?<logLevel>.*)] [(?<code>.*)] (?<caller>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)

inputs:
  main: |-
    input {
      beats {
        port => 5044
      }
    }

filters:

outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }
This becomes a Kubernetes ConfigMap "logstash-patterns":
apiVersion: v1
kind: ConfigMap
data:
  main: (?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3}) [(?<code>.*)] [(?<logLevel>.*)] (?<service>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)
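To double-check what actually ended up in the cluster, the rendered ConfigMaps can be inspected directly (a quick sanity check, using the monitoring namespace from the install commands above):
kubectl -n monitoring get configmap | grep logstash
kubectl -n monitoring get configmap logstash-patterns -o yaml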
I don't see any error logs in the Logstash pod.
Do you have any idea how to configure grok patterns for Logstash in Kubernetes?
Thanks.

I was confusing "patterns" with "filters".
In the Helm chart, "patterns" is for specifying custom grok patterns (https://grokdebug.herokuapp.com/patterns), for example:
MY_CUSTOM_ALL_CHARS .*
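For illustration only (my own sketch, not from the chart docs): a pattern declared under patterns.main can then be referenced by name inside a grok filter, assuming the chart mounts the patterns file somewhere grok can find it; if it does not, patterns_dir has to point at the mount path (the path below is an assumption, check the chart's deployment spec):
patterns:
  main: |-
    MY_CUSTOM_ALL_CHARS .*

filters:
  main: |-
    filter {
      grok {
        # patterns_dir may be needed if the mounted patterns are not picked up automatically
        patterns_dir => ["/usr/share/logstash/patterns"]
        match => { "message" => "%{MY_CUSTOM_ALL_CHARS:everything}" }
      }
    }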
My grok filter should go in the filters section:
patterns:
  # nothing here for me

filters:
  main: |-
    filter {
      grok {
        match => { "message" => "\{%{TIMESTAMP_ISO8601:time}\} \[%{DATA:logLevel}\] \[%{DATA:code}\] %{DATA:caller} &\$ %{DATA:logMessageType} &\$ %{DATA:message} &\$ %{DATA:correlationId} &\$ %{DATA:userId} &\$ %{DATA:data} &\$ %{DATA:operation} &\$ %{DATA:error} &\$ (?<stackTrace>.*)" }
        overwrite => [ "message" ]
      }
      date {
        match => ["time", "ISO8601"]
        target => "time"
      }
    }
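To verify the fields are extracted before they reach Elasticsearch, a temporary stdout output can be added alongside the existing one (a debugging sketch based on the outputs block above), then checked with kubectl logs on the Logstash pod:
outputs:
  main: |-
    output {
      # temporary: print parsed events to the pod logs to confirm the grok fields
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }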

Related

Cisco-module (Filebeat) to Logstash - Configuration issue - Unable to write to existing indices

I was able to send logs to Elasticsearch successfully using Filebeat with the configuration below.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "XXXXXXXXXXXXX"
  # Index name customization as we do not want the "filebeat-" prefix for the
  # indices that Filebeat creates by default
  index: "network-%{[event.dataset]}-%{+yyyy.MM.dd}"

# The configuration settings below are mandatory when customizing the index name
setup.ilm.enabled: false
setup.template:
  name: 'network'
  pattern: 'network-*'
  enabled: false

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

# ============================= X-Pack Monitoring ==============================
#monitoring.elasticsearch:
monitoring:
  enabled: true
  cluster_uuid: 9ZIXSpCDBASwK5K7K1hqQA
  elasticsearch:
    hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    username: beats_system
    password: XXXXXXXXXXXXXX
I enabled all Cisco modules and they are able to create indices as below:
network-cisco.ios-YYYY.MM.DD
network-cisco.nexus-YYYY.MM.DD
network-cisco.asa-YYYY.MM.DD
network-cisco.ftd-YYYY.MM.DD
Up to this point there was no issue, but it all came to a halt when I tried to introduce Logstash between Filebeat and Elasticsearch.
Below is the network.conf file for your analysis.
input {
  beats {
    port => "5046"
  }
}
output {
  if [event.dataset] == "cisco.ios" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.nexus" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.asa" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.ftd" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cef.log" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "panw.panos" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  stdout { codec => rubydebug }
}
With the above configuration I am unable to get the Filebeat --> Logstash --> Elasticsearch pipeline I am looking to achieve.
No data is being added, even though stdout produces output when I run Logstash as below:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf
With --config.test_and_exit the config file tests successfully, and the command above does produce JSON lines on stdout, but despite that no documents are being added to the existing indices (network-cisco.ios-YYYY.MM.DD, network-cisco.nexus-YYYY.MM.DD, etc.).
When I changed the index name to 'test-%{+yyyy.MM.dd}' and tested with a single elasticsearch output, the same execution did create that index.
Also, when I take Logstash out of the equation, Filebeat keeps writing to the existing indices, but that does not happen with the above Logstash configuration.
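One thing I still need to verify (noting it here as an assumption, not something I have confirmed): Logstash usually stores fields arriving from Beats as nested objects, so the conditionals and index references above may need the [event][dataset] form instead of [event.dataset]. The rubydebug output from the stdout block shows the actual field structure. A sketch of one output block written that way:
output {
  if [event][dataset] == "cisco.ios" {
    elasticsearch {
      hosts    => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      # nested field reference in the index name as well
      index    => "network-%{[event][dataset]}-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => false
      ilm_enabled     => false
    }
  }
}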
Any help would be greatly appreciated!
Thanks,
Arun

How to make Logstash public over the network

I need your support. As shown in the image below, I configured ELK and deployed it on a separate server. It works from within that server, but I'm trying to reach Logstash from other servers on the same network.
My question: is this possible in a local environment?
Logstash Configuration
# Read input from Filebeat by listening on port 5044, where Filebeat will send the data
input {
  beats {
    host => "0.0.0.0"
    port => "5044"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Sending properly parsed log events to Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "e-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filebeat Configuration
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - E:\IMR-App\imrh\logs\imrh.log

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["0.0.0.0:5044"]

Filebeat to logstash connection refused

I'm trying to send log files through Filebeat -> Logstash -> Elasticsearch, but I'm getting the following error in the Filebeat log:
2017-12-07T16:15:38+05:30 ERR Failed to connect: dial tcp [::1]:5044: connectex: No connection could be made because the target machine actively refused it.
My filebeat and logstash configurations are as follows:
1. filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Users\shreya\Data\mylog.log
  document_type: springlog
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: before

output.logstash:
  hosts: ["localhost:5044"]
2. logstash.yml
http.host: "127.0.0.1"
http.port: 5044
3. Logstash conf file:
input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^(%{TIMESTAMP_ISO8601})"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    id => "myspringlogfilter"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}; [LOG_LEVEL=%{LOGLEVEL:log-level}, CMPNT_NM= %{GREEDYDATA:component}, MESSAGE=%{GREEDYDATA:message}" }
    overwrite => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
The problem got solved after I commented out the metrics settings in logstash.yml as follows:
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
#http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
#http.port: 5044
#
But I still do not know why this solved the issue, as both (Filebeat and Logstash) were pointing to the same port. If someone could explain the reason, thanks in advance!
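My best guess so far (unconfirmed): http.port in logstash.yml is the bind port for Logstash's monitoring/metrics REST endpoint, which is separate from the Beats input defined in the pipeline, so setting it to 5044 made the two compete for the same port. Keeping the metrics endpoint on its default range avoids the clash, a sketch:
# logstash.yml - keep the metrics REST endpoint away from the Beats input port
http.host: "127.0.0.1"
http.port: 9600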

CircuitBreaker::rescuing exceptions {:name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}

I am new to the ELK stack. I am trying to set up Filebeat --> Logstash --> Elasticsearch --> Kibana. While sending Filebeat output to the Logstash input I am getting the error below on the Logstash side:
CircuitBreaker::rescuing exceptions {:name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover. {:exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
I am using Logstash 2.3.2 with Filebeat 1.2.2 and Elasticsearch 2.2.1.
My Logstash config:
input {
  beats {
    port => 5044
    # codec => multiline {
    #   pattern => "^%{TIME}"
    #   negate => true
    #   what => previous
    # }
  }
}
filter {
  grok {
    match => { "message" => "^%{TIME:time}\s+%{LOGLEVEL:level}" }
  }
}
output {
  elasticsearch {
    hosts => ["host:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
My Filebeat config:
filebeat:
  prospectors:
    - paths:
        - "*.log"
      input_type: log
      tail_files: false
output:
  logstash:
    hosts: ["host:5044"]
    compression_level: 3
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    level: error
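One knob I have come across for the 2.x beats input is congestion_threshold, the number of seconds the input waits for events to enter the pipeline before the circuit breaker trips; slow filters or a slow Elasticsearch output are the usual cause of the stall. A sketch of raising it while investigating the real bottleneck (an assumption that the installed plugin version still supports this option):
input {
  beats {
    port => 5044
    # seconds the input waits before the circuit breaker trips
    congestion_threshold => 30
  }
}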

Docker - ELK stack -- "Elasticsearch appears to be unreachable or down"

So I am using docker-compose to launch the ELK stack, which will be fed by Filebeat... my config is something like this:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=_non_loopback_
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf -b 10000 -w 1
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5044:5044"
  links:
    - elasticsearch
  environment:
    - LS_HEAP_SIZE=2048m
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
My logstash.conf file looks something like this:
input {
  beats {
    port => 5044
  }
}
....
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
These docker containers are running on the same instance and I have confirmed being able to hit both ports externally.
The error that appears when Filebeat ships a file is:
logstash_1 | {:timestamp=>"2016-05-19T19:52:55.167000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200/\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>9200}}, :level=>:error}
Thanks,
You are trying to reach Elasticsearch on localhost, but that won't work: inside the Logstash container, localhost is the Logstash container itself.
You have to access it via the link:
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Or, if you want to access your Elasticsearch instance from "outside" rather than via localhost, use your host's IP address (not 127.0.0.1).
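To confirm the link resolves from inside the Logstash container, something like this can be run on the Docker host (a sketch; it assumes curl is available in the logstash image, and the container name logstash_1 matches the log prefix above):
docker exec -it logstash_1 curl http://elasticsearch:9200
# a JSON banner from Elasticsearch means the hostname resolves and port 9200 is reachable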
