Logstash UDP issue - elasticsearch

I have a simple Logstash setup where I route (ship) logs to a secondary Logstash instance based on the Beat host name; the primary and secondary configs are below (IPs masked). The primary Logstash is not shipping any logs to the secondary.
1) Does the if condition in the primary Logstash need to use [host] or [agent][hostname]?
2) On the secondary, do I need to receive over UDP or TCP?
I checked the netstat output on the primary and the secondary: it shows nothing on the primary node, but does show the port on the secondary node.
sudo netstat -ulpat | grep 8011
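Note that the udp output on the primary only sends datagrams; it does not listen on 8011, so it will typically not show up in netstat the way the secondary's udp input does. A more direct check (a sketch, assuming tcpdump is available) is to capture on the secondary while the primary is receiving beats:
sudo tcpdump -n -i any udp port 8011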
My primary Logstash config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://<ip1>:9200","http://<ip2>:9200","http://<ip3>:9200"]
  }
  # stdout { codec => rubydebug }
}
output {
  if [host] =~ "moaf-iws" {
    udp {
      host => "XX.XXX.XXX.XXX"
      port => "8011"
    }
  }
}
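Regarding question 1: newer Beats versions send host as an object rather than a plain string, so a regex condition on [host] may never match; [host][name] or [agent][hostname] is usually where the hostname lives. A hedged sketch of the conditional (field names depend on your Beats version; the prefix is taken from the original condition):
output {
  if [host][name] =~ /moaf-iws/ or [agent][hostname] =~ /moaf-iws/ {
    udp {
      host => "XX.XXX.XXX.XXX"
      port => 8011
    }
  }
}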
Secondary Logstash config:
input {
  beats {
    port => 5044
  }
}
input {
  udp {
    port => 8011
  }
}
output {
  elasticsearch {
    hosts => ["http://<ip1>:9200","http://<ip2>:9200"]
  }
  stdout { codec => rubydebug }
}
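Regarding question 2: since the primary ships with the udp output plugin, the secondary must listen with a udp input on the same port, as above; a TCP listener would not receive anything. One thing worth checking is the codec: the udp output serializes each event as JSON by default, while the udp input defaults to the plain codec, so received events end up as a JSON string inside message. A sketch of the input with an explicit json codec (an assumption, not part of the original config):
input {
  udp {
    port => 8011
    codec => json
  }
}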

Related

How can I create an index in Elasticsearch using the TCP input plugin?

I have configured Logstash 5.5 to use the TCP input to receive JSON messages.
input {
  tcp {
    port => 9001
    codec => json
    type => "test-tcp-1"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}
filter {
  json { source => "message" }
}
The message is received by Logstash successfully, but Elasticsearch does not create an index. Why?
If I use the same configuration with the stdin input plugin, it works fine.
Many thanks.
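One way to narrow this down is to confirm whether events actually leave the pipeline and whether the index exists. A debugging sketch, assuming Logstash and Elasticsearch are local as in the config above:
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
  # temporary: print every event so you can see what (if anything) reaches the output stage
  stdout { codec => rubydebug }
}
Then list the indices Elasticsearch actually has:
curl '127.0.0.1:9200/_cat/indices?v'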

Unavailable_shards_exception, reason: primary shard is not active

I am trying to send data from Filebeat --> Logstash --> Elasticsearch cluster --> Kibana.
I have a cluster with 3 nodes: 2 are master-eligible nodes and 1 is a client node.
I have checked the health of the cluster using the command below,
curl -XGET "http://132.186.102.61:9200/_cluster/state?pretty"
and I can see the output properly, with an elected master.
When Filebeat pushes data to Logstash, I see the following error:
logstash.outputs.elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-2017.06.05][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-2017.06.05][1]] containing [3] requests]"})
This is my logstash.conf:
input {
  beats {
    port => "5043"
    #ssl => true
    #ssl_certificate_authorities => "D:/Softwares/ELK/ELK_SSL_Certificates/testca/cacert.pem"
    #ssl_certificate => "D:/Softwares/ELK/ELK_SSL_Certificates/server/cert.pem"
    #ssl_key => "D:/Softwares/ELK/ELK_SSL_Certificates/server/pkcs8.key"
    #ssl_key_passphrase => "MySecretPassword"
    #ssl_verify_mode => "force_peer"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{NUMBER:duration} %{GREEDYDATA:messageFromClient}" }
  }
}
#filter {
#  if "_grokparsefailure" in [tags] {
#    drop { }
#  }
#}
output {
  elasticsearch { hosts => ["132.186.189.127:9200","132.186.102.61:9200","132.186.102.43:9200"] }
  stdout { codec => rubydebug }
}
May I please know the reason for this issue?
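The 503 usually means the shard is unassigned, so Elasticsearch cannot index into it; the cluster APIs can show which shards are unassigned and (on 5.x and later) why. A diagnostic sketch using the same node address as in the question:
curl -XGET "http://132.186.102.61:9200/_cluster/health?pretty"
curl -XGET "http://132.186.102.61:9200/_cat/shards?v"
curl -XGET "http://132.186.102.61:9200/_cluster/allocation/explain?pretty"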

Kinesis input stream into Logstash

I am currently evaluating Logstash for our data ingestion needs. One of the use cases is to read data from an AWS Kinesis stream. I have installed the logstash-input-kinesis plugin, but when I run Logstash I do not see it processing any events from the stream. Logstash works fine with other types of inputs (tcp). There are no errors in the debug logs; it just behaves as if there is nothing to process. My config file is:
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name => "logstash"
    type => "kinesis"
  }
  tcp {
    port => 10000
    type => "tcp"
  }
}
filter {
  if [type] == "kinesis" {
    json {
      source => "message"
    }
  }
  if [type] == "tcp" {
    grok {
      match => { "message" => "Hello, %{WORD:name}" }
    }
  }
}
output {
  if [type] == "kinesis" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elasticpoc"
    }
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "changeme"
      index => "elkpoc"
    }
  }
}
I have not tried the Logstash way, but if you are running on AWS, Kinesis Firehose to Elasticsearch ingestion is available, as documented at http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html#console-to-es
You can see if that would work as an alternative to Logstash.
You need to provide AWS credentials for accessing the AWS services for this integration to work.
You can find the details here: https://github.com/logstash-plugins/logstash-input-kinesis#authentication
The plugin also requires access to AWS DynamoDB, which it uses as a checkpointing database.
Use 'application_name' to specify the DynamoDB table name if you have multiple streams.
https://github.com/logstash-plugins/logstash-input-kinesis
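A minimal sketch of supplying those credentials, assuming the default AWS credential provider chain described in the plugin README (environment variables shown here; an instance profile or IAM role works too). The region value is an example, not from the original post:
export AWS_ACCESS_KEY_ID=...        # must be allowed to read Kinesis and to use DynamoDB for checkpointing
export AWS_SECRET_ACCESS_KEY=...

input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name => "logstash"   # also used as the DynamoDB checkpoint table name
    region => "us-east-1"            # example value
    type => "kinesis"
  }
}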

ELK Elasticsearch Logstash configuration

I'm new to ELK. I have already installed Logstash, Elasticsearch, and Kibana on Ubuntu 14.04. When I try to test ELK with an existing log file on my Ubuntu machine, Logstash does not load the log into Elasticsearch and shows nothing. This is my Logstash config file: sudo gedit /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/home/chayma/logs/catalina.2016-02-02.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout {
    codec => rubydebug
  }
}
However, my elasticsearch.yml contains:
cluster.name: my-application
node.name: node-1
node.master: true
node.data: true
index.number_of_shards: 1
index.number_of_replicas: 0
network.host: localhost
http.port: 9200
Please help
I presume Logstash and Elasticsearch are installed on the same machine and Logstash is running?
sudo service logstash status
Try checking the Logstash log file to see if it's a connection issue or a syntax error (config looks OK, so probably the former):
tail -f /var/log/logstash/logstash.log
Does your COMMONAPACHELOG pattern match the log lines that you are trying to parse with grok?
By default, on Ubuntu 14.04 the core patterns live at
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/grok-patterns
You can verify your pattern against a sample line here:
https://grokdebug.herokuapp.com/
The grok in our case applies the following regex:
COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
Please provide the log entries.
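For reference, COMMONAPACHELOG expects access-log style lines, which a Tomcat catalina.*.log generally does not produce; a mismatch would show up as a _grokparsefailure tag. An example line that the pattern does match (illustrative, not from the poster's file):
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326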
Change your elasticsearch output by adding an index name to it and try:
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
    index => "testindex-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
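After restarting Logstash with that output, a quick way to confirm documents are arriving (assuming Elasticsearch on 127.0.0.1:9200 as configured):
curl '127.0.0.1:9200/_cat/indices?v'
curl '127.0.0.1:9200/testindex-*/_count?pretty'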
You're missing input {}; input {} and output {} are necessary in a Logstash pipeline.
input {
  file {
    path => "/home/chayma/logs/catalina.2016-02-02.log"
    start_position => "beginning"
  }
}
Or you can check in a simple way whether text can be forwarded to Elasticsearch: just test using stdin and stdout in a terminal. Make sure the local Elasticsearch service is running.
input {
  stdin {
    type => "true"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
  stdout {
    codec => rubydebug
  }
}
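To run that test, point Logstash at the config and type a line; with a local Elasticsearch running, the event should be echoed by rubydebug and indexed. The paths below are examples for a package install (the config file name is made up), so adjust them to your setup:
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/stdin-test.conf
# then type any line and press Enter; it should appear in the rubydebug output and in Elasticsearch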

Kibana displaying JSON incorrectly

I'm using ELK (Elasticsearch, Logstash, Kibana) for logging purposes. The problem is Kibana doesn't seem to recognize my JSON, because it puts my JSON inside message.
Here's how I run Logstash:
bin/logstash -e 'input { udp { port => 5000 type => json_logger } }
output { stdout { } elasticsearch { host => localhost } }'
Here's an example Logstash output for my logs (for debugging purposes I also output logs to stdout):
2014-10-07T10:28:19.104+0000 127.0.0.1
{"user_id":1,"object_id":6,"@timestamp":"2014-10-07T13:28:19.101+03:00","@version":"1","severity":"INFO","host":"sergey-System"}
How do I make Elasticsearch/Kibana/Logstash recognize JSON?
Try bin/logstash -e 'input { udp { port => 5000 type => json_logger codec => json} } output { stdout { } elasticsearch { host => localhost } }'.
Note the codec => json option.
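With codec => json the UDP payload is parsed on input, so fields like user_id, object_id, and severity become top-level fields of the event instead of being buried inside message, and Kibana can filter on them directly. Roughly what the indexed event should then look like (a sketch based on the sample log above):
{
  "user_id": 1,
  "object_id": 6,
  "severity": "INFO",
  "host": "sergey-System",
  "@version": "1",
  "@timestamp": "2014-10-07T13:28:19.101+03:00"
}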
