We are using the ELK stack to monitor our logs. I am a total newbie to the ELK environment; recently I was working on a task where I needed to configure Logstash to push our logs to Redis.
Below is the config I am using. It works with Elasticsearch, but it does not work with Redis:
input {
  file {
    path => "E:/Logs/**/*.log"
    start_position => "beginning"
    codec => json
  }
}
filter {
  date {
    match => [ "TimeCreated", "YYYY-MM-dd HH:mm:ss Z" ]
  }
  mutate {
    add_field => {
      #"debug" => "true"
      "index_prefix" => "logstash-app"
    }
  }
}
output {
  #elasticsearch {
    #local env
    #hosts => ["localhost:9200"]
    #preprod env
    #hosts => ["elk.logs.abc.corp.com:9200"]
    #prod env
    #hosts => ["elk.logs.abc.prod.com:9200"]
    #index => "logstash-app"
  #}
  redis {
    #local env
    #host => "localhost:5300"
    #preprod env
    host => "redis.logs.abc.corp.com"
    #prod env
    #host => "redis.logs.abc.prod.com"
    data_type => "list"
    key => "logstash"
  }
  if [debug] == "true" {
    stdout {
      codec => rubydebug
    }
    file {
      path => "../data/logstash-app-%{+YYYYMMdd}.log"
    }
  }
}
I commented out the Elasticsearch output; with Elasticsearch I am able to view the logs in Kibana, but with Redis I am not.
Can anyone point out what I am doing wrong? How could I debug or verify that my logs are shipped correctly?
Based on the documentation of the Logstash Redis output plugin, host should be an array:
redis {
  #preprod env
  host => ["redis.logs.abc.corp.com"]
  data_type => "list"
  key => "logstash"
}
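To verify that events are actually reaching Redis, you can check the length of the list directly with redis-cli (just a quick sketch; the host and key are the ones from your config above):
redis-cli -h redis.logs.abc.corp.com LLEN logstash
If the number grows while Logstash is running, the shipper side works and the problem is on the consumer side; if it stays at 0 (and nothing is draining the list), the events are never leaving Logstash.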
Related
I'm trying to send logs from a specific source to a specific index.
So in logstash.conf I did the following:
input {
  gelf {
    port => 12201
    # type => docker
    use_tcp => true
    tags => ["docker"]
  }
}
filter {
  if "test_host" in [_source][host] {
    mutate { add_tag => "test_host" }
  }
}
output {
  if "test_host" in [tags] {
    stdout { }
    opensearch {
      hosts => ["https://opensearch:9200"]
      index => "my_host_index"
      user => "administrator"
      password => "some_password"
      ssl => true
      ssl_certificate_verification => false
    }
  }
}
But unfortunately it's not working.
What am I doing wrong?
Thanks.
I use Logstash 7.9.3, and with this version I have problems creating the right index name, like logstash-2021.01.01. I need the first 9 days of the month zero-padded.
With the config logstash-%{+yyyy.MM.dd} the result is => logstash-2021.01.01-000001
With the config logstash-%{+yyyy.MM.d} the result is => logstash-2021.01.1
input {
  redis {
    host => "someip_of_redis"
    data_type => "list"
    key => "logstash"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://someip_of_elastic:9200"]
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
Thank you in advance
To disable it (index lifecycle management, which is what appends the -000001 rollover suffix), I added ilm_enabled => false to the config:
input {
  redis {
    host => "someip_of_redis"
    data_type => "list"
    key => "logstash"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://someip_of_elastic:9200"]
    ilm_enabled => false
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
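To confirm the daily indices now come out zero-padded, you can list them with the cat API (only a verification sketch; the host is the placeholder from the config above):
curl 'http://someip_of_elastic:9200/_cat/indices/logstash-*?v'
With ilm_enabled => false the index option is used literally, so %{+yyyy.MM.dd} should produce e.g. logstash-2021.01.01 without the -000001 rollover suffix.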
I have successfully set up my system for centralized logging using elasticsearch-logstash-filebeat-kibana.
I can see logs using the Filebeat template index in Kibana. The problem arises when I try to create a Logstash filter in order to parse my log files properly.
I'm using grok patterns, so first I created this pattern (/opt/logstash/patterns/grok-patterns):
CUSTOMLOG %{TIMESTAMP_ISO8601:timestamp} - %{USER:auth} - %{LOGLEVEL:loglevel} - \[%{DATA:pyid}\]\[%{DATA:source}\]\[%{DATA:searchId}\] - %{GREEDYDATA:logmessage}
And this is the logstash filter (/etc/logstash/conf.d/11-log-filter.conf):
filter {
  if [type] == "log" {
    grok {
      match => { "message" => "%{CUSTOMLOG}" }
      patterns_dir => "/opt/logstash/patterns"
    }
    mutate {
      rename => [ "logmessage", "message" ]
    }
    date {
      timezone => "Europe/London"
      locale => "en"
      match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
  }
}
Apparently the parser is working fine when I test it from the command line:
[root@XXXXX logstash]# bin/logstash -f test.conf
Settings: Default pipeline workers: 4
Logstash startup completed
2016-06-03 12:55:57,718 - root - INFO - [27232][service][751282714902528] - here goes my message
{
       "message" => "here goes my message",
      "@version" => "1",
    "@timestamp" => "2016-06-03T11:55:57.718Z",
          "host" => "19598",
     "timestamp" => "2016-06-03 12:55:57,718",
          "auth" => "root",
      "loglevel" => "INFO",
          "pyid" => "27232",
        "source" => "service",
      "searchId" => "751282714902528"
}
However... the logs do not appear in Kibana. I don't even see "_grokparsefailure" tags, so I guess the parser is working, but I can't find the logs.
What am I doing wrong? Am I forgetting something?
Thanks in advance.
Edit
Input (02-beats-input.conf):
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Output (30-elasticsearch-output.conf):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
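One quick way to see whether documents are reaching Elasticsearch at all (as opposed to being indexed but hidden by the Kibana time range) is to list the Beats indices with the cat API. This is only a debugging sketch and assumes the index really is named filebeat-YYYY.MM.dd as produced by the output above:
curl 'http://localhost:9200/_cat/indices/filebeat-*?v'
If the document count grows, the events are being indexed and the issue is on the Kibana side (for example, the date filter rewriting @timestamp to a time outside the range you are looking at); if not, they never leave Logstash.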
I used the following piece of code to create an index in logstash.conf
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "trial_indexer"
  }
}
To create another index I generally replace the index name with another one in the above code. Is there any way of creating many indices in the same file? I'm new to ELK.
You can use a pattern in your index name based on the value of one of your fields. Here we use the value of the type field in order to name the index:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "%{type}_indexer"
  }
}
You can also use several elasticsearch outputs either to the same ES host or to different ES hosts:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "trial_indexer"
  }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "movie_indexer"
  }
}
Or maybe you want to route your documents to different indices based on some variable:
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "movie_indexer"
    }
  }
}
UPDATE
The syntax has changed a little bit in Logstash 2 and 5:
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "movie_indexer"
    }
  }
}
I have an ELK instance that uses a Redis channel as a buffer. Logs are imported and correctly parsed into Redis by the shipper, but nothing makes it to Elasticsearch.
My shipper config looks like this:
input {
  file {
    path => [ "/var/log/aggregates.log" ]
    type => "aggregates"
  }
}
output {
  redis {
    host => "xxxx"
    data_type => "channel"
    key => "logstash-aggregates"
  }
}
filter {
  csv {
    columns => [ 'start_time', 'end_time', 'total_count' ... ]
    separator => ","
  }
}
The indexer config looks like this:
input {
  redis {
    host => "xxxx"
    type => "aggregates"
    data_type => "channel"
    key => "logstash-aggregates"
    format => "json_event"
  }
}
output {
  elasticsearch {
    bind_host => "xxxx"
    cluster => "default_cluster"
    host => "xxxx"
    action => "index"
  }
}
Is there something I'm missing here? I can't seem to figure it out.
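One thing worth checking: with data_type => "channel" the shipper publishes to a Redis pub/sub channel instead of pushing onto a list, and pub/sub messages are not stored, so anything published while the indexer is not subscribed is silently dropped. You can watch the channel from another terminal to see whether events are flowing at all (just a debugging sketch; the host is the placeholder from your config):
redis-cli -h xxxx SUBSCRIBE logstash-aggregates
If events show up there but still never reach Elasticsearch, the problem is on the indexer side; if nothing shows up, the shipper is not publishing.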