Logstash creates strange index name - elasticsearch

I use Logstash 7.9.3, and with this version I have problems creating the right index name, like logstash-2021.01.01. I need the first 9 days of the month zero-padded.
With the config logstash-%{+yyyy.MM.dd} the result is => logstash-2021.01.01-000001
With the config logstash-%{+yyyy.MM.d} the result is => logstash-2021.01.1
input {
  redis {
    host => "someip_of_redis"
    data_type => "list"
    key => "logstash"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://someip_of_elastic:9200"]
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
Thank you in advance

The -000001 suffix comes from index lifecycle management (ILM), which the Elasticsearch output enables by default on Logstash 7.x when the cluster supports it; with ILM active, the index option is ignored in favour of a rollover alias. To disable it, I added ilm_enabled => false to the config:
input {
  redis {
    host => "someip_of_redis"
    data_type => "list"
    key => "logstash"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["http://someip_of_elastic:9200"]
    ilm_enabled => false
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
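If you would rather keep ILM and only steer the naming, the Elasticsearch output also exposes ilm_rollover_alias and ilm_pattern. A minimal sketch, assuming the defaults of the Logstash 7.9 output plugin:

output {
  elasticsearch {
    hosts => ["http://someip_of_elastic:9200"]
    ilm_rollover_alias => "logstash"    # write alias the plugin creates and targets
    ilm_pattern => "{now/d}-000001"     # produces names like logstash-2021.01.01-000001 and rolls over from there
  }
}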

Related

Push data from a Logstash pipeline to the Elasticsearch index that is mapped to an alias

We have an alias in Elasticsearch, and this alias is mapped to one index at a time. The mapped index changes daily.
Each day we have to write the data to two indexes:
employee-%{+YYYY.MM.dd} and
the index that is mapped to the alias.
Writing to the first index is no problem, but how do we write the data to the index that is mapped to a particular alias?
We are using Kafka and a Logstash pipeline to push the data; below is the pipeline:
input {
  kafka {
    bootstrap_servers => "SomeServer"
    client_dns_lookup => "use_all_dns_ips"
    topics => ["TOPIC_NAME"]
    codec => json
    group_id => "kalavakuri"
    decorate_events => true
    consumer_threads => 2
    security_protocol => "SSL"
    ssl_keystore_location => "${SSL_KEYSTORE_LOCATION}"
    ssl_keystore_password => "${SSL_KEYSTORE_PASSWORD}"
    ssl_key_password => "${SSL_KEYSTORE_PASSWORD}"
    ssl_truststore_location => "${SSL_TRUSTSTORE_LOCATION}"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  if [type] == "EMP" {
    elasticsearch {
      document_id => "%{id}"
      index => "employee-%{+YYYY.MM.dd}"
      hosts => ["SomeHost"]
      user => "${DEFAULT_LOGSTASH_USER}"
      password => "${DEFAULT_LOGSTASH_USER_PW}"
      cacert => "/etc/logstash/certs/tls.crt"
      action => "update"
      doc_as_upsert => true
    }
  } else if [type] == "STD" {
    elasticsearch {
      document_id => "%{id}"
      index => "employee-%{+YYYY.MM.dd}"
      hosts => ["SomeHost"]
      user => "${DEFAULT_LOGSTASH_USER}"
      password => "${DEFAULT_LOGSTASH_USER_PW}"
      cacert => "/etc/logstash/certs/tls.crt"
      scripted_upsert => true
      action => "update"
      upsert => {}
      script => "
        if (ctx._source.associatedparties == null) {
          ctx._source.associatedparties = [];
        }
        ctx._source.associatedparties.add(params.event.get('associatedparty'));
      "
    }
  }
}
In the pipeline configuration above we currently push the data to the first index. I want to know how to push the data to the index that is mapped to a particular alias in Elasticsearch.
To get the index that is mapped to the alias, we use the Elasticsearch command GET _cat/aliases/ra_employee.
Is there any way to query Elasticsearch and get the index details based on the alias?
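One approach worth noting: Elasticsearch lets you index through an alias directly, as long as the alias points to a single index or has a designated write index, so Logstash never needs to resolve the alias itself. A minimal sketch, assuming the ra_employee alias is flipped daily by whatever process rotates the indexes:

output {
  elasticsearch {
    hosts => ["SomeHost"]
    index => "ra_employee"   # the alias name; Elasticsearch routes writes to its write index
    document_id => "%{id}"
  }
}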

Logstash redis configuration not pushing logs to ES

We are using the ELK stack to monitor our logs. I am a total newbie to the ELK environment. Recently I was working on a task where I needed to configure Logstash to push our logs to Redis.
Below is the config I am using. It works with Elasticsearch, but it does not work with Redis:
input {
  file {
    path => "E:/Logs/**/*.log"
    start_position => beginning
    codec => json
  }
}
filter {
  date {
    match => [ "TimeCreated", "YYYY-MM-dd HH:mm:ss Z" ]
  }
  mutate {
    add_field => {
      #"debug" => "true"
      "index_prefix" => "logstash-app"
    }
  }
}
output {
  #elasticsearch {
    #local env
    #hosts => ["localhost:9200"]
    #preprod env
    #hosts => ["elk.logs.abc.corp.com:9200"]
    #prod env
    #hosts => ["elk.logs.abc.prod.com:9200"]
    #index => "logstash-app"
  #}
  redis {
    #local env
    #host => "localhost:5300"
    #preprod env
    host => "redis.logs.abc.corp.com"
    #prod env
    #host => "redis.logs.abc.prod.com"
    data_type => "list"
    key => "logstash"
  }
  if [debug] == "true" {
    stdout {
      codec => rubydebug
    }
    file {
      path => "../data/logstash-app-%{+YYYYMMdd}.log"
    }
  }
}
I commented out the Elasticsearch output. With Elasticsearch I am able to view the logs in Kibana, but with Redis I am unable to see them.
Can anyone point out what I am doing wrong? How can I debug this or verify that my logs are shipped correctly?
Based on the documentation of the Logstash Redis output plugin, host should be an array:
redis {
  #preprod env
  host => ["redis.logs.abc.corp.com"]
  data_type => "list"
  key => "logstash"
}
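Also remember that Redis is only a broker here: the logs will not reach Elasticsearch (and hence Kibana) until something consumes the list. A minimal sketch of the indexer side, assuming a second Logstash instance sits between Redis and Elasticsearch (hostnames reused from the question):

input {
  redis {
    host => "redis.logs.abc.corp.com"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["elk.logs.abc.corp.com:9200"]
    index => "%{index_prefix}-%{+YYYY.MM.dd}"   # index_prefix was added in the shipper's filter
  }
}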

Logstash Elasticsearch output configuration based on inputs

Is there any way I can use the Logstash configuration file to route output to different types/indexes accordingly?
For example:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "index_resources"
    if(%{some_field_id}==kb){
      document_type => "document_type"
      document_id => "%{some_id}"
    }
    else {
      document_type => "other_document_type"
      document_id => "%{some_other_id}"
    }
  }
}
Yes, you can route your documents to multiple indexes within Logstash itself; the conditional goes around the output, not inside it. The output could look something like this:
output {
  stdout { codec => rubydebug }
  if [some_field_id] == "kb" {   # <---- insert your condition here
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "index1"
      document_type => "document_type"
      document_id => "%{some_id}"
    }
  } else {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "index2"
      document_type => "other_document_type"
      document_id => "%{some_other_id}"
    }
  }
}
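Note that host and protocol are Logstash 1.x options. A sketch of the same routing on current Logstash versions (hosts replaces them, and document_type is gone along with mapping types):

output {
  if [some_field_id] == "kb" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "index1"
      document_id => "%{some_id}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "index2"
      document_id => "%{some_other_id}"
    }
  }
}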
This thread might help you as well.

Logstash Duplicate Data

I have duplicate data in Logstash. How can I remove this duplication?
My input is:
input {
  file {
    path => "/var/log/flask/access*"
    type => "flask_access"
    max_open_files => 409599
  }
  stdin {}
}
The filter for these files is:
filter {
  mutate { replace => { "type" => "flask_access" } }
  grok {
    match => { "message" => "%{FLASKACCESS}" }
  }
  mutate {
    add_field => {
      "temp" => "%{uniqueid} %{method}"
    }
  }
  if "Entering" in [api_status] {
    aggregate {
      task_id => "%{temp}"
      code => "map['blockedprocess'] = 2"
      map_action => "create"
    }
  }
  if "Entering" in [api_status] or "Leaving" in [api_status] {
    aggregate {
      task_id => "%{temp}"
      code => "map['blockedprocess'] -= 1"
      map_action => "update"
    }
  }
  if "End Task" in [api_status] {
    aggregate {
      task_id => "%{temp}"
      code => "event['blockedprocess'] = map['blockedprocess']"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}
Take a look at the image: the same log data appears twice, with the same timestamp, even though I only sent one log request.
I solved it.
I created a unique id via document_id in the output section. document_id points to my temp field, and temp is the unique id in my project.
My output changed to:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{temp}"
    # sniffing => true
    # manage_template => false
    # index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}
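An alternative sketch for the same idea: derive the document id with the fingerprint filter instead of concatenating fields yourself (uniqueid and method are the fields from the grok pattern above; this assumes a reasonably recent logstash-filter-fingerprint plugin):

filter {
  fingerprint {
    source => ["uniqueid", "method"]
    concatenate_sources => true
    method => "SHA256"
    target => "[@metadata][fingerprint]"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{[@metadata][fingerprint]}"   # re-indexing the same event overwrites instead of duplicating
  }
}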
Executing tests in my local lab, I found out that Logstash is sensitive to the number of config files kept in the /etc/logstash/conf.d directory: Logstash concatenates every file there into a single pipeline, so each event passes through every output.
If there is more than one config file, you can therefore see duplicates of the same record.
So remove all backup configs from the /etc/logstash/conf.d directory and restart Logstash.
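If you genuinely need several independent configs, one sketch of the usual fix (Logstash 6+) is to declare them as separate pipelines in pipelines.yml instead of letting conf.d merge them (file names here are hypothetical):

# /etc/logstash/pipelines.yml
- pipeline.id: flask
  path.config: "/etc/logstash/conf.d/flask.conf"
- pipeline.id: other
  path.config: "/etc/logstash/conf.d/other.conf"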

How to create multiple indexes in logstash.conf file?

I used the following piece of code to create an index in logstash.conf:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "trial_indexer"
  }
}
To create another index, I generally replace the index name in the code above. Is there any way to create many indexes in the same file? I'm new to ELK.
You can use a pattern in your index name based on the value of one of your fields. Here we use the value of the type field in order to name the index:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "%{type}_indexer"
  }
}
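One caveat: if an event has no type field, the sprintf reference is left unresolved and the index is literally named %{type}_indexer. A small sketch of a guard (the "default" value is made up for illustration):

filter {
  if ![type] {
    mutate { add_field => { "type" => "default" } }
  }
}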
You can also use several elasticsearch outputs either to the same ES host or to different ES hosts:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "trial_indexer"
  }
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "movie_indexer"
  }
}
Or maybe you want to route your documents to different indices based on some variable:
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      host => "localhost"
      protocol => "http"
      index => "movie_indexer"
    }
  }
}
UPDATE
The syntax has changed a little bit in Logstash 2 and 5:
output {
  stdout { codec => rubydebug }
  if [type] == "trial" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "trial_indexer"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "movie_indexer"
    }
  }
}
