Logstash not able to connect to secured (SSL) Elasticsearch cluster - elasticsearch

I have installed Logstash, Elasticsearch, and Kibana on a single instance, and also installed X-Pack for TLS communication. I enabled SSL communication in Elasticsearch, and Kibana works fine, but Logstash is unable to connect to Elasticsearch. I can curl the Elasticsearch URL https://localhost:9200, and there is no firewall blocking the connection.
I generated an OpenSSL certificate and key file and placed them on the Elasticsearch node.
input {
  beats {
    client_inactivity_timeout => 1000
    port => 5044
  }
}
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601} %{LOGLEVEL:loglevel} zeppelin IDExtractionService transactionId %{WORD:transaction_id} operation %{WORD:otype} received request duration %{NUMBER:duration} exception %{WORD:error}" ]
  }
}
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
filter {
  if "_grokparsefailure" in [tags] {
    mutate {
      remove_tag => ["_grokparsefailure"]
    }
  }
}
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: https://localhost:9200
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    user => elastic
    password => password
    manage_template => false
    # ssl_certificate_verification => false
    ssl => true
    cacert => '/etc/elasticsearch/ca/key.pem'
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Elasticsearch config file:
cluster.name: my-application
network.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /opt/elasticsearch/ca/ca.key
xpack.security.http.ssl.certificate: /opt/elasticsearch/ca/ca.crt
Logstash log files:
[2018-05-16T05:28:16,421][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:17,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:21,422][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,422][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,424][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:21,425][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:22,202][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:26,425][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,426][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,427][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:26,427][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:27,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
root@5c417caecc5f:/var/log/logstash#

You have to enable monitoring for Elasticsearch in the logstash.yml configuration file:
/etc/logstash/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://X.X.X.X:9200
See this post for more information:
https://discuss.elastic.co/t/elasticsearch-unreachable-error-in-logstash/75157/7
And the documentation (the TLS/SSL monitoring settings may be needed):
https://www.elastic.co/guide/en/logstash/6.2/configuring-logstash.html#monitoring-settings
xpack.monitoring.elasticsearch.ssl.ca
xpack.monitoring.elasticsearch.ssl.truststore.path
xpack.monitoring.elasticsearch.ssl.truststore.password
xpack.monitoring.elasticsearch.ssl.keystore.path
xpack.monitoring.elasticsearch.ssl.keystore.password
If this doesn't work, can I see your /etc/logstash/logstash.yml configuration file?
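For example, a minimal /etc/logstash/logstash.yml sketch for monitoring over TLS could look like the following; the host, CA path, and credentials here are placeholders, not values taken from the question:

```yaml
# Hypothetical values - replace with your own host, CA path, and credentials
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "https://localhost:9200"
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "changeme"
# CA certificate that signed the Elasticsearch HTTP certificate
xpack.monitoring.elasticsearch.ssl.ca: "/etc/elasticsearch/ca/ca.crt"
```

Note that the monitoring URL must use https:// when Elasticsearch has xpack.security.http.ssl.enabled: true, otherwise the health check fails exactly as in the logs above.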

Related

Got response code '400' contacting Elasticsearch at URL in logstash

I am new to Elasticsearch. I tried to configure Elasticsearch, Kibana, and Logstash with the MQTT plugin. I am supposed to send logs to Elasticsearch through the Logstash MQTT plugin. I installed them locally on a Mac, but when starting Logstash it throws the following error.
[2021-11-12T17:26:37,976][ERROR][logstash.outputs.elasticsearch][logstash_pipeline] Unable to get license information {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_license'"}
My Logstash configuration file looks like this:
input {
  mqtt {
    host => "localhost"
    port => 1883
    topic => "test"
    qos => 0
    certificate_path => "/Users/john/logstash-7.10.2/logstash/m2mqtt_ca.crt"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Can anybody tell me what the issue is?

find elasticsearch service endpoint

I'm on a trial to test Elastic Cloud, but now I have a problem creating a pipeline from Logstash to Elastic Cloud. Here is my logstash.conf output:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["https://<clusterid>.asia-southeast1.gcp.cloud.es.io:9243"]
    index => "testindex"
    user => elasticdeploymentcredentials
    password => elasticdeploymentcredentials
  }
}
But it always returns an error:
[WARN ] 2021-03-29 12:24:50.148 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:24:55.158 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:00.163 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:25:05.170 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:10.175 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
It is possible for me to curl it with my credentials:
[root@localhost testconfig]# curl https://elasticdeploymentcredentials:elasticdeploymentcredentials@<clusterid>.asia-southeast1.gcp.elastic-cloud.com:9243
It returns:
{
  "name" : "name",
  "cluster_name" : "<clusterid>",
  "cluster_uuid" : "<clusteruuid>",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "build_hash",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Am I missing something?
Instead of trying to connect to Elastic Cloud via the username/password from the deployment, try using the cloud_id/cloud_auth combination:
output {
  elasticsearch {
    hosts => ["https://<clusterid>.asia-southeast1.gcp.cloud.es.io:9243"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    cloud_id => "your cloudid from the console"
    cloud_auth => "elastic:password"
  }
}
The cloud_auth parameter is where you are actually going to use the username/password from the deployment. More information here:
https://www.elastic.co/guide/en/logstash/7.12/connecting-to-cloud.html

Unable to index with docker logstash

I am using the latest code of the git@github.com:deviantony/docker-elk.git repository to host the ELK stack with the docker-compose up command. Elasticsearch and Kibana are running fine.
However, I cannot index with my logstash.conf, which is shown below:
input {
    file {
        # Configure your path below
        path => ["C:/Users/matt/Desktop/temp/logs/*.txt*"]
        ignore_older => "141 days"
        start_position => "beginning"
        file_sort_by => "last_modified"
        file_sort_direction => "desc"
        sincedb_path => "NUL"
        type => "appl"
        codec => multiline {
            pattern => "^<log4j:event"
            negate => true
            what => "previous"
        }
    }
}
filter {
    if [type] == "appl" {
        grok {
            add_tag => [ "groked" ]
            match => ["message", ".*"]
            remove_tag => ["_grokparsefailure"]
        }
        mutate {
            gsub => ["message", "log4j:", ""]
        }
        xml {
            source => "message"
            remove_namespaces => true
            target => "log4jevent"
            xpath => [ "//event/@timestamp", "timestamp" ]
            xpath => [ "//event/@level", "loglevel" ]
            xpath => [ "/event/message/text()", "message" ]
            xpath => [ "/event/throwable/text()", "exception" ]
            xpath => [ "//event/properties/data[@name='log4jmachinename']/@value", "machinename" ]
            xpath => [ "//event/properties/data[@name='log4japp']/@value", "app" ]
            xpath => [ "//event/properties/data[@name='log4net:UserName']/@value", "username" ]
            xpath => [ "//event/properties/data[@name='log4net:Identity']/@value", "identity" ]
            xpath => [ "//event/properties/data[@name='log4net:HostName']/@value", "hostname" ]
        }
        mutate {
            remove_field => ["type"]
            gsub => [
            "message", "&amp;", "&",
            "message", "&lt;", "<",
            "message", "&gt;", ">",
            "message", "&quot;", "\"",
            "message", "&apos;", "'"
            ]
        }
        date {
            match => [ "[timestamp][0]","UNIX_MS" ]
            target => "@timestamp"
            remove_field => ["timestamp"]
        }
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "log4jevents"
        user => "elastic"
        password => "changeme"
        ecs_compatibility => disabled
    }
    stdout {
        codec => rubydebug
    }
}
And the log file that I want to index with Logstash is shown below:
<log4j:event logger="Microsoft.Unity.ApplicationBlocks.Logging.Logger" timestamp="1615025506621" level="DEBUG" thread="13"><log4j:message>SSO->AccountController->Login->Before ClientID Check</log4j:message><log4j:properties><log4j:data name="log4jmachinename" value="hostname01" /><log4j:data name="log4japp" value="/LM/W3SVC/2/ROOT-1-132594985694777790" /><log4j:data name="log4net:UserName" value="IIS APPPOOL\default" /><log4j:data name="log4net:Identity" value="" /><log4j:data name="log4net:HostName" value="hostname01" /></log4j:properties><log4j:locationInfo class="Microsoft.Unity.ApplicationBlocks.Logging.Logger" method="Debug" file="F:\somefolder\Agent\_work\1\s\Unity\Microsoft.Unity.ApplicationBlocks\Logging\Logging.cs" line="353" /></log4j:event>
The error shown for Logstash when starting docker-compose up is below:
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The same logstash.conf was working earlier with ELK version 6.8. What's wrong with my logstash.conf?
In your output elasticsearch plugin, set the hosts property to elasticsearch:9200. Inside the docker-compose network, containers reach each other by service name, so localhost points at the Logstash container itself, not at Elasticsearch.
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "log4jevents"
    user => "elastic"
    password => "changeme"
    ecs_compatibility => disabled
  }
  stdout {
    codec => rubydebug
  }
}

Change http_port number of Elasticsearch to 80

I am trying to set up ELK on Ubuntu 18.04, and I only have port 80 available for now to test the Elasticsearch dashboard, so I modified elasticsearch.yml as below:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: x.x.x.x
#
# Set a custom port for HTTP:
#
http.port: 80
#
# For more information, consult the network module documentation.
#
But the Logstash logs say:
[2019-05-10T08:46:01,216][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://x.x.x.x:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://x.x.x.x:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
I think it is still trying to find Elasticsearch on 9200.
Any help on this will be appreciated.
You need to edit the output port of Logstash; you can look it up here.
input { stdin { } }
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] } <-- change this
  stdout { codec => rubydebug }
}
Also change http.port in the elasticsearch.yml file in the config directory.
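Putting the two pieces together: with http.port: 80 in elasticsearch.yml, the Logstash output must point at port 80 as well (x.x.x.x stands in for your host, as in the question):

```conf
output {
  # Match the custom http.port set in elasticsearch.yml
  elasticsearch { hosts => ["x.x.x.x:80"] }
  stdout { codec => rubydebug }
}
```

Also note that binding a port below 1024 normally requires elevated privileges, and Elasticsearch refuses to run as root, so serving directly on port 80 may additionally need a reverse proxy or a capability such as CAP_NET_BIND_SERVICE on the Elasticsearch binary.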

logstash not pushing logs to AWS Elasticsearch

I am trying to push my logs from Logstash to Elasticsearch, but it's failing. Here is my logstash.conf file:
input {
  file {
    path => "D:/shweta/ELK_poc/test3.txt"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
output {
  elasticsearch {
    hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com" ]
    index => "testindex4-5july"
    document_type => "test-file"
  }
}
The ES endpoint that I have provided in hosts is open, so there should not be an access issue, but it still gives the following error:
[2018-07-05T13:59:05,753][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/, :path=>"/"}
[2018-07-05T13:59:05,769][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:9200/][Manticore::ResolutionFailure] This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server (search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com)"}
I am stuck here. But when I downloaded ES, installed it on my machine, and ran it locally, replacing hosts in the output with hosts => [ "localhost:9200" ], it worked fine, pushing data to the local ES.
I have tried a lot of things but am not able to resolve the issue. Can anyone please help? I don't want to use localhost but the AWS ES domain endpoint. Any hints or leads will be highly appreciated.
Thanks in advance,
Shweta
In my opinion, you simply need to explicitly add port 443 and it will work. The elasticsearch output plugin automatically uses port 9200 if no port is explicitly given.
elasticsearch {
  hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com:443" ]
  index => "testindex4-5july"
  document_type => "test-file"
}
An alternative would be to not add the port but to specify ssl => true, as depicted in the official AWS ES docs:
elasticsearch {
  hosts => [ "https://search-test-domain2-2msy6ufh2vl2ztfulhrtoat6hu.us-west-2.es.amazonaws.com" ]
  index => "testindex4-5july"
  document_type => "test-file"
  ssl => true
}