Unable to index with docker logstash - elasticsearch

I am using the latest code of the git@github.com:deviantony/docker-elk.git repository to host the ELK stack via the docker-compose up command. Elasticsearch and Kibana are running fine.
However, I cannot index anything through Logstash with my logstash.conf, which is shown below:
input {
    file {
        # Configure your path below
        path => ["C:/Users/matt/Desktop/temp/logs/*.txt*"]
        ignore_older => "141 days"
        start_position => "beginning"
        file_sort_by => "last_modified"
        file_sort_direction => "desc"
        sincedb_path => "NUL"
        type => "appl"
        codec => multiline {
            pattern => "^<log4j:event"
            negate => true
            what => "previous"
        }
    }
}
filter {
    if [type] == "appl" {
        grok {
            add_tag => [ "groked" ]
            match => ["message", ".*"]
            remove_tag => ["_grokparsefailure"]
        }
        mutate {
            gsub => ["message", "log4j:", ""]
        }
        xml {
            source => "message"
            remove_namespaces => true
            target => "log4jevent"
            xpath => [ "//event/#timestamp", "timestamp" ]
            xpath => [ "//event/#level", "loglevel" ]
            xpath => [ "/event/message/text()", "message" ]
            xpath => [ "/event/throwable/text()", "exception" ]
            xpath => [ "//event/properties/data[#name='log4jmachinename']/#value", "machinename" ]
            xpath => [ "//event/properties/data[#name='log4japp']/#value", "app" ]
            xpath => [ "//event/properties/data[#name='log4net:UserName']/#value", "username" ]
            xpath => [ "//event/properties/data[#name='log4net:Identity']/#value", "identity" ]
            xpath => [ "//event/properties/data[#name='log4net:HostName']/#value", "hostname" ]
        }
        mutate {
            remove_field => ["type"]
            gsub => [
                "message", "&amp;", "&",
                "message", "&lt;", "<",
                "message", "&gt;", ">",
                "message", "&quot;", "\"",
                "message", "&apos;", "'"
            ]
        }
        date {
            match => [ "[timestamp][0]","UNIX_MS" ]
            target => "@timestamp"
            remove_field => ["timestamp"]
        }
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "log4jevents"
        user => "elastic"
        password => "changeme"
        ecs_compatibility => disabled
    }
    stdout {
        codec => rubydebug
    }
}
The log file that I want to index with Logstash is shown below:
<log4j:event logger="Microsoft.Unity.ApplicationBlocks.Logging.Logger" timestamp="1615025506621" level="DEBUG" thread="13"><log4j:message>SSO->AccountController->Login->Before ClientID Check</log4j:message><log4j:properties><log4j:data name="log4jmachinename" value="hostname01" /><log4j:data name="log4japp" value="/LM/W3SVC/2/ROOT-1-132594985694777790" /><log4j:data name="log4net:UserName" value="IIS APPPOOL\default" /><log4j:data name="log4net:Identity" value="" /><log4j:data name="log4net:HostName" value="hostname01" /></log4j:properties><log4j:locationInfo class="Microsoft.Unity.ApplicationBlocks.Logging.Logger" method="Debug" file="F:\somefolder\Agent\_work\1\s\Unity\Microsoft.Unity.ApplicationBlocks\Logging\Logging.cs" line="353" /></log4j:event>
The error Logstash shows after starting docker-compose up is below:
Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
The same logstash.conf was working earlier with ELK version 6.8. What's wrong with my logstash.conf?

In your output elasticsearch plugin, set the hosts property to elasticsearch:9200. Inside the Docker Compose network, localhost refers to the Logstash container itself, not the host machine; the Elasticsearch container is reachable under its Compose service name, elasticsearch:
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "log4jevents"
        user => "elastic"
        password => "changeme"
        ecs_compatibility => disabled
    }
    stdout {
        codec => rubydebug
    }
}
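To verify that the service name resolves from inside the Logstash container, a quick check along these lines (assuming the stock docker-elk service names and elastic/changeme credentials, and that curl is available in the image) should print the cluster banner:
# query Elasticsearch by its Compose service name from inside the logstash container
docker-compose exec logstash curl -u elastic:changeme http://elasticsearch:9200/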

Related

Got response code '400' contacting Elasticsearch at URL in logstash

I am new to Elasticsearch. I tried to configure Elasticsearch, Kibana, and Logstash with the MQTT plugin; I am supposed to send logs to Elasticsearch through the Logstash MQTT plugin. I installed them locally on a Mac, but when starting Logstash, it throws the following error.
[2021-11-12T17:26:37,976][ERROR][logstash.outputs.elasticsearch][logstash_pipeline] Unable to get license information {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_license'"}
My Logstash configuration file is:
input {
    mqtt {
        host => "localhost"
        port => 1883
        topic => "test"
        qos => 0
        certificate_path => "/Users/john/logstash-7.10.2/logstash/m2mqtt_ca.crt"
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
    }
}
Can anybody tell me what the issue is?

Logstash creating pipelines from Kafka not working

I am trying to get data from a Kafka topic to flow into the ELK stack with Logstash, but can't get the data moving.
I edited logstash.conf to the following:
input {
    tcp {
        port => 5000
    }
    kafka {
        bootstrap_servers => "broker:29092"
        topics => ["PLACES_ROWKEY"]
    }
}
## Add your filters / logstash plugins configuration here
output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        user => "elastic"
        password => "changeme"
        index => "from_logstash"
    }
}
I'm running this setup in Docker, if it matters (broker is the hostname of the Kafka broker container). I restart Logstash but can't see any new indices in Elasticsearch.

Logstash not able to connect to secured (SSL) Elasticsearch cluster

I have installed Logstash, Elasticsearch, and Kibana on a single instance, and also installed X-Pack for TLS communication. SSL is enabled in Elasticsearch, and Kibana works fine, but Logstash is unable to connect to Elasticsearch, even though I can curl the Elasticsearch URL https://localhost:9200 and there is no firewall blocking it either.
I have generated an OpenSSL certificate and key file and placed them in Elasticsearch.
input {
    beats {
        client_inactivity_timeout => 1000
        port => 5044
    }
}
filter {
    grok {
        match => [ "message", "%{TIMESTAMP_ISO8601} %{LOGLEVEL:loglevel} zeppelin IDExtractionService transactionId %{WORD:transaction_id} operation %{WORD:otype} received request duration %{NUMBER:duration} exception %{WORD:error}" ]
    }
}
filter {
    if "beats_input_codec_plain_applied" in [tags] {
        mutate {
            remove_tag => ["beats_input_codec_plain_applied"]
        }
    }
}
filter {
    if "_grokparsefailure" in [tags] {
        mutate {
            remove_tag => ["_grokparsefailure"]
        }
    }
}
logstash.yml settings:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: https://localhost:9200
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        user => elastic
        password => password
        manage_template => false
        # ssl_certificate_verification => false
        ssl => true
        cacert => '/etc/elasticsearch/ca/key.pem'
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
}
Elasticsearch config file:
cluster.name: my-application
network.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /opt/elasticsearch/ca/ca.key
xpack.security.http.ssl.certificate: /opt/elasticsearch/ca/ca.crt
Logstash log files:
[2018-05-16T05:28:16,421][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:17,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:21,422][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,422][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,424][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:21,425][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:22,202][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:26,425][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,426][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,427][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:26,427][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:27,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
root@5c417caecc5f:/var/log/logstash#
You have to enable monitoring for Elasticsearch in the logstash.yml configuration file.
/etc/logstash/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://X.X.X.X:9200
See this post for more information:
https://discuss.elastic.co/t/elasticsearch-unreachable-error-in-logstash/75157/7
And the documentation (maybe needed for TLS/SSL monitoring settings):
https://www.elastic.co/guide/en/logstash/6.2/configuring-logstash.html#monitoring-settings
xpack.monitoring.elasticsearch.ssl.ca
xpack.monitoring.elasticsearch.ssl.truststore.path
xpack.monitoring.elasticsearch.ssl.truststore.password
xpack.monitoring.elasticsearch.ssl.keystore.path
xpack.monitoring.elasticsearch.ssl.keystore.password
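Putting those together, a minimal sketch of the monitoring section of /etc/logstash/logstash.yml for a TLS-secured cluster might look like this (the CA path and credentials are placeholders, assuming the 6.x setting names from the documentation linked above):
# monitoring connection to the secured cluster
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: https://localhost:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: changeme
# CA certificate used to verify the Elasticsearch TLS certificate
xpack.monitoring.elasticsearch.ssl.ca: /etc/elasticsearch/ca/ca.crt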
If this doesn't work, can I see your /etc/logstash/logstash.yml configuration file?

Logstash cannot connect to Elasticsearch

{:timestamp=>"2017-07-19T15:56:36.517000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2017-07-19T15:56:37.761000+0530", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:79:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2017-07-19T15:56:38.520000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
Elasticsearch is running on 127.0.0.1:9200, though.
I do not understand where Logstash is taking this configuration from; I have not configured Logstash to connect to Elasticsearch on localhost.
In logstash.service I have:
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
and in /etc/logstash I have logstash.yml with:
path.config: /etc/logstash/conf.d
In /etc/logstash/conf.d I have:
output {
    elasticsearch {
        hosts => ["10.2.0.10:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
    }
}
conf.d is your directory, right? You need a file in there, something like myconf.conf, in the following format:
input {
}
filter {
    # can be empty
}
output {
}
Once you apply all your changes, you need to restart your Logstash service so it picks them up. You can also control that in your LS settings file logstash.yml, if you need it to reload itself once you apply a new change to any of the files under conf.d; see the sketch below.
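A minimal sketch of those auto-reload settings in logstash.yml (these options exist in Logstash 5.x and later; the time-qualified interval form shown here is the 6.x+ syntax):
# re-scan the config files under conf.d and reload the pipeline on changes
config.reload.automatic: true
# how often to check the config files for changes
config.reload.interval: 3s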
You can also break up your conf files, like 1_input.conf, 2_filter.conf, and 99_output.conf, such that each contains its own plugin type, i.e. input, filter, and output.
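Before restarting, the combined configuration can be validated in place; something like the following (paths taken from the question's logstash.service entry) parses the config and exits without starting the pipeline:
# check the pipeline configuration for errors, then exit
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit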
Start Elasticsearch.
Write a conf file for Logstash to connect and upload data into Elasticsearch.
input {
    file {
        type => "csv"
        path => "path for csv."
        start_position => "beginning"
    }
}
filter {
    csv {
        columns => ["Column1","Column2"]
        separator => ","
    }
    mutate {
        convert => {"Column1" => "float"}
    }
}
output {
    elasticsearch {
        hosts => "http://localhost:9200"
    }
    stdout { codec => rubydebug }
}
The host for Elasticsearch can be configured in the elasticsearch.yml file.
Run the conf file (logstash.bat -f abc.conf).
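For reference, a minimal sketch of the relevant binding settings in elasticsearch.yml (the values here are illustrative defaults, not taken from the question):
# address and port the Elasticsearch HTTP API listens on
network.host: 127.0.0.1
http.port: 9200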

Unable to load index to Elasticsearch using Logstash

I'm unable to load an index into Elasticsearch using Logstash. The following are my logstash.conf settings. The config settings seem fine to me; please help if I'm missing something.
Assume that the Logstash and Elasticsearch services are running fine.
input {
    file {
        type => "IISLog"
        path => "C:/inetpub/logs/LogFiles/W3SVC1/u_ex140930.log"
        start_position => "beginning"
    }
}
output {
    stdout { debug => true debug_format => "ruby" }
    elasticsearch_http {
        host => "localhost"
        port => 9200
        protocol => "http"
        index => "iislogs2"
    }
}
You can start with checking the following:
Check the Logstash log file for errors.
Run the following command: telnet localhost 9200 and verify you are able to connect.
Check the Elasticsearch log files for errors.
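The connectivity check can also be done with curl, which additionally confirms that it is Elasticsearch answering on the port (the default port 9200 is assumed):
# prints the cluster banner JSON if Elasticsearch is up and reachable
curl http://localhost:9200/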