Logstash not able to pass data to Elasticsearch

I am using a Logstash server 1 -> Kafka -> Logstash server 2 -> Elasticsearch -> Kibana setup. Below are the configuration files from Logstash server 2.
1) 03-logstash-logs-kafka-consumer.conf

input {
  kafka {
    zk_connect => 'zk_netaddress:2181'
    topic_id => 'logstash_logs'
    codec => "json"
  }
}
output {
  stdout {}
}
2) 30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Logs are travelling from Logstash server 1 to Logstash server 2 through Kafka, and Logstash server 2 can also write to the /var/log/logstash/logstash.stdout file, but Logstash server 2 is not able to output to the Elasticsearch instance configured with it. I have checked all services; they are running well, and there are no exceptions in the logs of any of them.
Please post your suggestions.
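One thing worth checking (an assumption, since the question reports no errors): events arriving from Kafka carry no [@metadata][beat] field, so the sprintf-style index name will not resolve, and Elasticsearch may reject the resulting index name. A minimal sketch of a hard-coded index to rule this out:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # hard-coded index name: no sprintf fields that Kafka events might lack
    index => "kafka-logs-%{+YYYY.MM.dd}"
  }
}
```

If documents appear under kafka-logs-*, the original index template referencing beat metadata is the likely culprit.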

Related

Logstash doesn't send logs to elastic (Windows)

I have a Spring Boot application that produces logs into a file.
I also have Elasticsearch running (in Docker), plus Kibana and Logstash (not in Docker).
This is my Logstash config:
input {
  file {
    type => "java"
    path => "C:\Users\user\Documents\logs\semblogs.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
Elastic is up and running. When I check for the data in the index that was created, like this:
http://localhost:9200/logstash-2019.11.04-000001/_search
it shows:
{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  }
}
In Kibana I also can't create an index pattern; it says there is no data in Elasticsearch.
I suspect that Logstash is not sending anything incoming to Elastic, but I don't know why. There ARE logs in the log file from the app...
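A detail worth checking (an assumption, not confirmed in the question): the Logstash file input expects forward slashes in paths even on Windows, and it records its read position in a sincedb file, so a file it has already seen is not re-read from the beginning. A sketch of the input with both points addressed:

```
input {
  file {
    type => "java"
    # forward slashes, even on Windows
    path => "C:/Users/user/Documents/logs/semblogs.log"
    start_position => "beginning"
    # "NUL" disables sincedb persistence on Windows, so the
    # file is read from the beginning on every restart
    sincedb_path => "NUL"
  }
}
```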

How to set kibana index pattern from filebeat?

I am using the ELK stack with a Node application. I am sending logs from the host to Logstash with Filebeat; Logstash formats and sends the data to Elasticsearch, and Kibana reads from Elasticsearch. In Kibana I see a default index pattern like filebeat-2019.06.16.
I want to change this to application-name-filebeat-2019.06.16, but it's not working. I am looking for a way to do it in Filebeat, since there will be multiple applications/Filebeats but one single Logstash/Elasticsearch/Kibana.
I have tried these Filebeat configs in filebeat.yml:
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
    fields:
      - app_name: myapp
output.logstash:
  index: "%{fields.app_name}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  hosts: ["${ELK_ENDPOINT}"]
  ssl.enabled: true
  ssl:
    certificate_authorities:
      - /etc/pki/tls/certs/logstash-beats.crt
setup.template.name: "%{fields.app_name}-filebeat-%{[agent.version]}"
The same kind of file will be on each Node application host running Filebeat.
Logstash is initialized with these configs:
02-beats-input.conf

input {
  beats {
    port => 5044
    codec => "json"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}
30-output.conf

filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
It is generating an index pattern like filebeat-2019.06.16. I want something like application-name-filebeat-2019.06.16.
You are sending your Filebeat logs to Logstash, so you need to define the index name in the Logstash pipeline, not in the Filebeat config file.
Try the following output:
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
To set the index name in Filebeat itself, you would need to send the logs directly to Elasticsearch.
If you have other Beats sending data to the same port and some of them do not have the field [fields][app_name], you can use a conditional on your output or create the field in your pipeline.
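A sketch of such a conditional output (the fallback index name here is an assumption; adjust it to your naming scheme):

```
output {
  if [fields][app_name] {
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  } else {
    # fallback for beats that do not set [fields][app_name]
    elasticsearch {
      hosts => ["localhost"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}
```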

Kinesis input stream into Logstash

I am currently evaluating Logstash for our data ingestion needs. One of the use cases is to read data from an AWS Kinesis stream. I have installed the logstash-input-kinesis plugin. When I run it, I do not see Logstash processing any events from the stream. My Logstash works fine with other types of inputs (tcp). There are no errors in the debug logs; it just behaves as if there is nothing to process. My config file is:
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name => "logstash"
    type => "kinesis"
  }
  tcp {
    port => 10000
    type => "tcp"
  }
}
filter {
  if [type] == "kinesis" {
    json {
      source => "message"
    }
  }
  if [type] == "tcp" {
    grok {
      match => { "message" => "Hello, %{WORD:name}" }
    }
  }
}
output {
  if [type] == "kinesis" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elasticpoc"
    }
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elkpoc"
    }
  }
}
I have not tried the Logstash way, but if you are running on AWS, there is Kinesis Firehose to Elasticsearch ingestion available, as documented at http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html#console-to-es
You can see if that would work as an alternative to Logstash.
We need to provide the AWS credentials for accessing the AWS services for this integration to work.
You can find the details here: https://github.com/logstash-plugins/logstash-input-kinesis#authentication
This plugin also requires access to AWS DynamoDB as a checkpointing database.
You need to use application_name to specify the table name in DynamoDB if you have multiple streams.
https://github.com/logstash-plugins/logstash-input-kinesis
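A minimal sketch of the input with checkpointing in mind (the region and application name here are assumptions; the plugin picks up AWS credentials through the standard AWS credential chain, e.g. environment variables or an instance profile):

```
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    # also the name of the DynamoDB checkpoint table; use a
    # distinct name per stream when consuming multiple streams
    application_name => "logstash-gwselasticpoc"
    region => "us-east-1"
    type => "kinesis"
  }
}
```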

logstash kafka input not working

I am trying to get the data from Kafka and push it to Elasticsearch.
Here is the Logstash configuration I am using:
input {
  kafka {
    zk_connect => "localhost:2181"
    topic_id => "beats"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "elasticse"
  }
}
Can anyone help with the Logstash configuration? If I run this, I get an invalid configuration error:
D:\logstash-5.0.0\bin>logstash -f log-uf.conf
Sending Logstash logs to D:\logstash-5.0.0\logs\logstash-plain.txt which is now configured via log4j2.properties.
[2016-11-11T16:31:32,429][ERROR][logstash.inputs.kafka ] Unknown setting 'zk_connect' for kafka
[2016-11-11T16:31:32,438][ERROR][logstash.inputs.kafka ] Unknown setting 'topic_id' for kafka
[2016-11-11T16:31:32,452][ERROR][logstash.agent ] fetched an invalid config {:config=>"input {\n kafka {\n zk_connect => \"localhost:2181\"\n topic_id => \"beats\"\n consumer_threads => 16\n }\n}\noutput {\nelasticsearch {\nhosts => [\"localhost:9200\"]\nindex => \"elasticse\"\n}\n}\n", :reason=>"Something is wrong with your configuration."}
You're running Logstash 5 with a config for Logstash 2.4.
In 5.0, zk_connect (the Zookeeper host) was replaced by bootstrap_servers (the Kafka broker), and topic_id by topics.
Try this config instead:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["beats"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "elasticse"
  }
}

How to stop logstash input when an output error occurs

I'm running a Logstash instance that reads records from Kafka and inserts them into Elasticsearch.
I had a problem with the Elasticsearch configuration, and new records weren't being inserted into Elasticsearch.
Eventually I was able to fix the Elasticsearch output. But even though the Elasticsearch output wasn't able to write the records, Logstash didn't stop reading data from Kafka.
So when I restarted Logstash, it didn't pick up from the last successful Kafka offset. Basically, I lost all the records written since the Elasticsearch output stopped working.
How can I avoid that happening again? Is there a way to stop the whole pipeline when there is an error on the output?
My simplified config file:
input {
  kafka {
    zk_connect => "zk01:2181,zk02:2181,zk03:2181/kafka"
    topic_id => "my-topic"
    auto_offset_reset => "smallest"
    group_id => "logstash-es"
    codec => "json"
  }
}
output {
  elasticsearch {
    index => "index-%{+YYYY-MM-dd}"
    document_type => "dev"
    hosts => ["elasticsearch01","elasticsearch02","elasticsearch03","elasticsearch04","elasticsearch05","elasticsearch06"]
    template => "/my-template.json"
    template_overwrite => true
    manage_template => true
  }
}