How to set Kibana index pattern from Filebeat? - elasticsearch

I am using the ELK stack with a Node application. I send logs from the host to Logstash with Filebeat; Logstash formats the data and sends it to Elasticsearch, and Kibana reads from Elasticsearch. In Kibana I see a default index pattern like filebeat-2019.06.16.
I want to change this to application-name-filebeat-2019.06.16, but it's not working. I am looking for a way to do it in Filebeat, since there will be multiple applications/Filebeats but one single Logstash/Elasticsearch/Kibana.
I have tried these Filebeat settings in filebeat.yml:
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  fields:
    - app_name: myapp
output.logstash:
  index: "%{fields.app_name}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  hosts: ["${ELK_ENDPOINT}"]
  ssl.enabled: true
  ssl:
    certificate_authorities:
      - /etc/pki/tls/certs/logstash-beats.crt
setup.template.name: "%{fields.app_name}-filebeat-%{[agent.version]}"
The same kind of file will be on each Node application host with its Filebeat.
Logstash is initialized with these configs:
02-beats-input.conf
input {
  beats {
    port => 5044
    codec => "json"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}
30-output.conf
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
It is generating an index pattern like filebeat-2019.06.16; I want something like application-name-filebeat-2019.06.16.

You are sending your Filebeat logs to Logstash, so you need to define the index name in the Logstash pipeline, not in the Filebeat config file.
Try the following output:
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
To set the index name in Filebeat itself you would need to send the logs directly to Elasticsearch.
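For example, a minimal sketch of a direct-to-Elasticsearch setup, assuming Filebeat 7.x, an Elasticsearch endpoint at localhost:9200 and a hard-coded application name (placeholders mirroring the app_name from the question); with a custom index you also have to set the template name/pattern, and on 7.x disable ILM or the index setting is ignored:

# filebeat.yml - sketch: ship straight to Elasticsearch with a custom index
output.elasticsearch:
  hosts: ["http://localhost:9200"]   # placeholder endpoint
  index: "myapp-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.name: "myapp-filebeat"
setup.template.pattern: "myapp-filebeat-*"
setup.ilm.enabled: false             # ILM otherwise overrides the custom index name on 7.x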
Back in the Logstash setup: if you have other Beats sending data to the same port and some of them do not have the field [fields][app_name], you could use a conditional in your output or create the field in your pipeline; a sketch of such a conditional follows.
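A rough sketch, reusing the same placeholder hosts as above; events without the custom field fall back to the default Filebeat index name:

output {
  if [fields][app_name] {
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}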

Related

Elasticsearch + Filebeat + Logstash

I am new to the Elastic Stack and recently tried to ship logs to ELK, but I observed a strange issue.
Could someone please help me with the configuration?
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    #- /var/log/*.log
    - D:\apps\logs\RGGYSLT-0473\learnings-elasticsearch\*.log
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
logstash.conf
input {
  beats {
    type => "v1-elasticsearch"
    host => "127.0.0.1"
    port => "5044"
  }
}
filter {
  if [type] == "v1-elasticsearch" {
    # If a log line contains a tab character followed by 'at', tag that entry as a stacktrace
    if [message] =~ "\tat" {
      grok {
        match => ["message", "^(\tat)"]
        add_tag => ["stacktrace"]
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "dhisco-learnings-elasticseach-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
}
Kibana output -
Jun 7, 2020 @ 23:58:58.067    2020-06-07 23:58:48,480 88900 [http-nio-9090-exec-2] INFO c.d.l.e.web.HotelController - Brand: RADISSON
2020-06-07 23:58:49,297 88900 [http-nio-9090-exec-3] INFO c.d.l.e.web.HotelController - Brand: RADISSON
I hit my controller twice, but unfortunately, both logs got concatenated and shown against the same timestamp.
Could someone please advise?
I did the same thing using a JSON encoder and never faced any issues.

Filebeat -> Logstash indexing documents twice

I have Nginx logs being sent from Filebeat to Logstash which is indexing them into Elasticsearch.
Every entry gets indexed twice. Once with the correct grok filter and then again with no fields found except for the "message" field.
This is the logstash configuration.
02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => false
  }
}
11-nginx-filter.conf
filter {
  if [type] == "nginx-access" {
    grok {
      patterns_dir => ['/etc/logstash/patterns']
      match => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "d/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
Nginx Patterns
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip}\s+%{NGUSER:ident}\s+%{NGUSER:auth}\s+\[%{HTTPDATE:timestamp}\]\s+\"%{WORD:verb}\s+%{URIPATHPARAM:request}\s+HTTP/%{NUMBER:httpversion}\"\s+%{NUMBER:response}\s+(?:%{NUMBER:bytes}|-)\s+(?:\"(?:%{URI:referrer}|-)\"|%{QS:referrer})\s+%{QS:agent}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["elastic00:9200", "elastic01:9200", "elastic02:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Check your filebeat configuration!
During setup I had accidentally uncommented and configured the output.elasticsearch section of filebeat.yml.
I then also configured the output.logstash section but forgot to comment the Elasticsearch output section back out.
This caused one copy of each entry to be sent to Logstash, where it was grok'd, and another copy to be sent directly to Elasticsearch.
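In other words, filebeat.yml should end up with only the Logstash output enabled, roughly like this (hosts are placeholders):

# filebeat.yml - keep only one output enabled
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]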

Logs in Kibana are not sorted by log timestamp

I use Filebeat and Logstash to send logs to Elasticsearch. I can see my logs in Kibana, but the logs are not sorted correctly according to the timestamp in the log record.
I tried to create a separate field dateTime for the log record timestamp, but it looks like it's not possible to sort the table in Kibana by that column.
(Kibana screenshot)
Could anybody explain what a solution could be in this situation?
filebeat
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app.log
  fields_under_root: true
  multiline.pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
  multiline.negate: true
  multiline.match: after
registry_file: /var/lib/filebeat/registry
output.logstash:
  hosts: ["host_name:5044"]
  ssl.certificate_authorities: ["..."]
logstash
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "..."
    ssl_key => "..."
  }
}
filter {
  if [type] == "filebeat" {
    grok {
      match => { "message" => "(?<dateTime>[0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3})" }
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
This functionality is exactly what the date filter is made for. Add this after your grok expression:
date {
  match => [ "dateTime", "HH:mm:ss,SSS" ]
}
This will set the @timestamp field that Kibana sorts by to the value in this field.
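Put together with the grok from the question, the filter section would look roughly like this (note the comma before the milliseconds, matching the format the grok captures):

filter {
  if [type] == "filebeat" {
    grok {
      match => { "message" => "(?<dateTime>[0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3})" }
    }
    date {
      match => [ "dateTime", "HH:mm:ss,SSS" ]   # sets @timestamp from the captured time
    }
  }
}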

Logstash not able to pass data to elasticsearch

I am using a Logstash server 1 -> Kafka -> Logstash server 2 -> Elasticsearch -> Kibana setup. Below are the configuration files from Logstash server 2.
1) 03-logstash-logs-kafka-consumer.conf
input {
  kafka {
    zk_connect => 'zk_netaddress:2181'
    topic_id => 'logstash_logs'
    codec => "json"
  }
}
output {
  stdout {}
}
2) 30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Though logs are travelling from Logstash server 1 to Logstash server 2 through Kafka, and Logstash server 2 can write to the /var/log/logstash/logstash.stdout file, Logstash server 2 is not able to output to the Elasticsearch instance configured with it. I have checked all services; they are running well and there are no exceptions in the logs of any of the services.
Please post your suggestions.

How to move data from one Elasticsearch index to another using the Bulk API

I am new to Elasticsearch. How can I move data from one Elasticsearch index to another using the Bulk API?
I'd suggest using Logstash for this, i.e. you use one elasticsearch input plugin to retrieve the data from your index and another elasticsearch output plugin to push the data to your other index.
The Logstash config file would look like this:
input {
  elasticsearch {
    hosts => "localhost:9200"
    index => "source_index"            # the name of your source index
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
    manage_template => false
    index => "target_index"            # the name of your target index
    document_type => "your_doc_type"   # make sure to set the appropriate type
    document_id => "%{id}"
    workers => 5
  }
}
After installing Logstash, you can run it like this:
bin/logstash -f logstash.conf
