Logstash cannot connect to Elasticsearch because of a wrong IP address - elasticsearch

I have this same problem:
Logstash can not connect to Elastic search
My /conf.d/logstash.conf:
input {
  beats {
    port            => 5044
    ssl             => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-beats.key"
  }
}
filter {
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    host            => "http://anotherip:9200"   # another IP, not localhost
    manage_template => false
    index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Error:
error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/]
Where does Logstash pick up this IP address, "localhost:9200"?

I solved my problem: I just killed the service with kill -9, then started it again with systemctl, and now it works well. Thanks.
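For reference (a guess, since the full setup is not shown): the elasticsearch output targets the local machine on port 9200 when no host list is configured, so the "localhost:9200" in the error suggests the running process was using a default or stale pipeline rather than the file above, which would also explain why killing and restarting the service fixed it. Note too that current versions of the plugin expect hosts (plural); a minimal sketch of the output block with a placeholder address:
output {
  elasticsearch {
    hosts           => ["http://192.0.2.10:9200"]   # placeholder; use the real Elasticsearch address
    manage_template => false
    index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}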

Related

How to set the Kibana index pattern from Filebeat?

I am using the ELK stack with a Node application. I am sending logs from the host to Logstash with Filebeat; Logstash formats the data and sends it to Elasticsearch, and Kibana reads from Elasticsearch. In Kibana I see a default index pattern like filebeat-2019.06.16.
I want to change this to application-name-filebeat-2019.06.16, but it's not working. I am looking for a way to do it in Filebeat, since there will be multiple applications/Filebeats but one single Logstash/Elasticsearch/Kibana.
I have tried these Filebeat settings in filebeat.yml:
filebeat.inputs:
- type: log
  paths:
    - /var/log/*.log
  fields:
    app_name: myapp
output.logstash:
  index: "%{fields.app_name}-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
  hosts: ["${ELK_ENDPOINT}"]
  ssl.enabled: true
  ssl:
    certificate_authorities:
      - /etc/pki/tls/certs/logstash-beats.crt
setup.template.name: "%{fields.app_name}-filebeat-%{[agent.version]}"
The same kind of file will be on each Node application host running Filebeat.
Logstash is initialized with these configs:
02-beats-input.conf
input {
  beats {
    port            => 5044
    codec           => "json"
    ssl             => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key         => "/etc/pki/tls/private/logstash-beats.key"
  }
}
30-output.conf
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts           => ["localhost"]
    manage_template => false
    index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type   => "%{[@metadata][type]}"
  }
}
It is generating an index pattern like filebeat-2019.06.16. I want something like application-name-filebeat-2019.06.16.
You are sending your Filebeat logs to Logstash, so you need to define the index name in the Logstash pipeline, not in the Filebeat config file.
Try the following output:
output {
  elasticsearch {
    hosts           => ["localhost"]
    manage_template => false
    index           => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type   => "%{[@metadata][type]}"
  }
}
To set the index name on the Filebeat side, you would need to send the logs directly to Elasticsearch.
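A minimal filebeat.yml sketch of that approach (the host and index prefix are placeholders; depending on the Filebeat version, a matching setup.template.pattern is also required when overriding the index):
output.elasticsearch:
  hosts: ["elastic-host:9200"]                               # placeholder
  index: "myapp-filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"  # placeholder prefix
setup.template.name: "myapp-filebeat-%{[agent.version]}"
setup.template.pattern: "myapp-filebeat-*"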
If you have other beats sending data to the same port and some of them do not have the field [fields][app_name], you could use a conditional on your output or create the field in your pipeline.
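A sketch of such a conditional, reusing the output above (the else branch simply falls back to the plain beat-based index name):
output {
  if [fields][app_name] {
    elasticsearch {
      hosts           => ["localhost"]
      manage_template => false
      index           => "%{[fields][app_name]}-%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts           => ["localhost"]
      manage_template => false
      index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}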

Use Logstash to copy an index on the same cluster

I have an ES 1.7.1 cluster and I want to "reindex" an index. Since I cannot use _reindex, I came across this tutorial on using Logstash to copy an index onto a different cluster. However, in my case I want to copy it over to the same cluster. This is my logstash.conf:
input {
  # We read from the "old" cluster
  elasticsearch {
    hosts   => "myhost"
    port    => "9200"
    index   => "my_index"
    size    => 500
    scroll  => "5m"
    docinfo => true
  }
}
output {
  # We write to the "new" cluster
  elasticsearch {
    host        => "myhost"
    port        => "9200"
    protocol    => "http"
    index       => "logstash_test"
    index_type  => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
  # We print dots to see it in action
  stdout {
    codec => "dots"
  }
}
filter {
  mutate {
    remove_field => [ "@timestamp", "@version" ]
  }
}
This errors out with the following error message:
A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Elasticsearch hosts=>[{:scheme=>"http", :user=>nil, :password=>nil, :host=>"myhost", :path=>"", :port=>"80", :protocol=>"http"}], index=>"my_index", scroll=>"5m", query=>"{\"query\": { \"match_all\": {} } }", docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"]>
Error: Connection refused - Connection refused {:level=>:error}
Failed to install template: http: nodename nor servname provided, or not known {:level=>:error}
Any help would be appreciated.
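One guess, based on the plugin dump in the error: the elasticsearch input ended up targeting myhost on port 80 rather than 9200, so it may be worth putting the port directly into the hosts string and dropping the separate port option, along these lines:
input {
  elasticsearch {
    hosts   => ["myhost:9200"]
    index   => "my_index"
    size    => 500
    scroll  => "5m"
    docinfo => true
  }
}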

Filebeat -> Logstash indexing documents twice

I have Nginx logs being sent from Filebeat to Logstash, which is indexing them into Elasticsearch.
Every entry gets indexed twice: once with the correct grok filter applied, and then again with no fields found except for the "message" field.
This is the Logstash configuration.
02-beats-input.conf
input {
  beats {
    port => 5044
    ssl  => false
  }
}
11-nginx-filter.conf
filter {
  if [type] == "nginx-access" {
    grok {
      patterns_dir => ['/etc/logstash/patterns']
      match        => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z", "d/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
Nginx Patterns
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip}\s+%{NGUSER:ident}\s+%{NGUSER:auth}\s+\[%{HTTPDATE:timestamp}\]\s+\"%{WORD:verb}\s+%{URIPATHPARAM:request}\s+HTTP/%{NUMBER:httpversion}\"\s+%{NUMBER:response}\s+(?:%{NUMBER:bytes}|-)\s+(?:\"(?:%{URI:referrer}|-)\"|%{QS:referrer})\s+%{QS:agent}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts           => ["elastic00:9200", "elastic01:9200", "elastic02:9200"]
    manage_template => false
    index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type   => "%{[@metadata][type]}"
  }
}
Check your Filebeat configuration!
During setup I had accidentally uncommented and configured the output.elasticsearch section of filebeat.yml.
I then also configured the output.logstash section of the configuration but forgot to comment the elasticsearch output section back out.
This caused one copy of each entry to be sent to Logstash, where it was grok'd, and another to be sent directly to Elasticsearch.
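In other words, only one output section should end up active in filebeat.yml; a minimal sketch (the Logstash host below is a placeholder):
# filebeat.yml
output.logstash:
  hosts: ["logstash-host:5044"]   # placeholder hostname

# output.elasticsearch:           # leave this section commented out
#   hosts: ["elastic00:9200"]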

Kinesis input stream into Logstash

I am currently evaluating Logstash for our data ingestion needs. One of the use cases is to read data from an AWS Kinesis stream. I have installed the logstash-input-kinesis plugin. When I run it, I do not see Logstash processing any events from the stream. My Logstash works fine with other types of inputs (tcp). There are no errors in the debug logs; it just behaves as if there is nothing to process. My config file is:
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name    => "logstash"
    type                => "kinesis"
  }
  tcp {
    port => 10000
    type => "tcp"
  }
}
filter {
  if [type] == "kinesis" {
    json {
      source => "message"
    }
  }
  if [type] == "tcp" {
    grok {
      match => { "message" => "Hello, %{WORD:name}" }
    }
  }
}
output {
  if [type] == "kinesis" {
    elasticsearch {
      hosts    => "http://localhost:9200"
      user     => "elastic"
      password => "changeme"
      index    => "elasticpoc"
    }
  }
  if [type] == "tcp" {
    elasticsearch {
      hosts    => "http://localhost:9200"
      user     => "elastic"
      password => "changeme"
      index    => "elkpoc"
    }
  }
}
I have not tried the Logstash route, but if you are running on AWS, there is Kinesis Firehose to Elasticsearch ingestion available, as documented at http://docs.aws.amazon.com/firehose/latest/dev/basic-create.html#console-to-es
You can see if that would work as an alternative to Logstash.
We need to provide AWS credentials for accessing the AWS services for this integration to work.
You can find the details here: https://github.com/logstash-plugins/logstash-input-kinesis#authentication
This plugin also requires access to AWS DynamoDB as a 'checkpointing' database.
You need to use 'application_name' to specify the table name in DynamoDB if you have multiple streams.
https://github.com/logstash-plugins/logstash-input-kinesis
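Putting those points together, a sketch of the kinesis input (the region and application_name values are illustrative; credentials are picked up from the usual AWS provider chain, e.g. environment variables or an instance profile, as described in the plugin README):
input {
  kinesis {
    kinesis_stream_name => "GwsElasticPoc"
    application_name    => "logstash-gws-poc"   # also used as the DynamoDB checkpoint table name
    region              => "us-east-1"          # assumed; set to the stream's actual region
    type                => "kinesis"
  }
}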

I want to delete documents via Logstash, but it throws an exception

I have run into a problem. My Logstash configuration file is as follows:
input {
  redis {
    host      => "127.0.0.1"
    port      => 6379
    db        => 10
    data_type => "list"
    key       => "local_tag_del"
  }
}
filter {
}
output {
  elasticsearch {
    action        => "delete"
    hosts         => ["127.0.0.1:9200"]
    codec         => "json"
    index         => "mbd-data"
    document_type => "localtag"
    document_id   => "%{album_id}"
  }
  file {
    path => "/data/elasticsearch/result.json"
  }
  stdout {}
}
I want to read IDs from Redis via Logstash and have it tell Elasticsearch to delete the corresponding documents.
Excuse me, my English is poor; I hope someone can help me.
Thanks.
I can't help you in detail, because your problem is spelled out in your error message: Logstash couldn't connect to your Elasticsearch instance.
That usually means one of:
elasticsearch isn't running
elasticsearch isn't bound to localhost
That has nothing to do with your Logstash config. Using Logstash to delete documents is a bit unusual, though, so I'm not entirely sure this isn't an XY problem.
