Logstash 5 Alpha4 to Elasticsearch 5 Alpha4 communication error

Elasticsearch 5 is secured with X-Pack security and hooked up to LDAP, which is working fine. The user even has admin rights in role_mapping.
The Logstash 5 output configuration is as below:
output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'gaurav@gmail.com'
    password => 'pwd'
  }
}
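Since the cluster is secured with X-Pack, it's worth first verifying the credentials outside Logstash. A minimal check with curl, using the host and credentials from the config above:
# Should return the cluster banner JSON if authentication succeeds;
# an HTTP 401 here means the credentials, not Logstash, are the problem
curl -u 'gaurav@gmail.com:pwd' http://localhost:9200/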
Getting the below error, because of which Logstash is not able to pass data to Elasticsearch:
{:timestamp=>"2016-07-14T16:32:29.592000+0530",
:message=>"Encountered an unexpected error submitting a bulk request! Will retry.",
:error_message=>"undefined method `code' for #",
:class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:217:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:105:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:72:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:22:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:136:in `threadsafe_multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_

I think I may have figured it out. I am using the Logstash 5.1.1-alpine docker image. As far as I can tell, it comes with the elasticsearch-output plugin v4.5.0, which seems to have this bug. Forcing an update of that plugin to the latest (6.2) has fixed this issue.
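One way to confirm which plugin version an image actually ships is the logstash-plugin CLI that comes with the distribution (the --verbose flag prints installed versions):
# Lists the installed elasticsearch output plugin together with its version
$LOGSTASH_PATH/logstash-plugin list --verbose logstash-output-elasticsearch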
My Dockerfile is now:
FROM logstash:5.1.1-alpine
RUN $LOGSTASH_PATH/logstash-plugin install --version 6.2.0 logstash-output-elasticsearch
With the updated plugin, I no longer see this error.

Windows 7 Logstash JRuby Error

I am new to the entire ELK Stack, and I am trying to set up Logstash. I followed all of the instructions (unzipping, setting up the config file, starting Logstash). My setup is Windows 7, and my Java version is 1.8.0_51.
When I run the following command (pipeline.conf is my config file):
C:\Elastic\logstash-6.2.2\bin>logstash -f pipeline.conf
I am getting the following error:
[ERROR] 2018-03-15 12:30:05.101 [main] Logstash -
java.lang.IllegalStateException:
org.jruby.exceptions.RaiseException:
(LoadError) Could not load FFI Provider:
(NotImplementedError) FFI not available:
com.kenai.jffi.Foreign.getVersion()I
See http://jira.codehaus.org/browse/JRUBY-4583
Here is my config file:
input {
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Any help would be appreciated. http://jira.codehaus.org/browse/JRUBY-4583 doesn't seem to be a valid site anymore. I have tried the exact same process on a different machine, and Logstash works there. I have been trying to find a solution for about 2 days now. HELP PLS
The issue was resolved on the Elastic discussion site:
https://discuss.elastic.co/t/windows-7-logstash-jruby-error/124152

Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

I have installed Logstash version 5.2.2 by downloading the zip file on a VM with a fresh Ubuntu installation.
I have created a sample config file, logstash-sample.conf, with the following entry:
input {
  stdin { }
}
output {
  stdout { }
}
When I execute the command bin/logstash -f logstash-sample.conf, it runs absolutely fine.
Now, on the same Ubuntu machine, I installed Kafka by following the exact process mentioned here: https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04 (followed up to step 7).
Then I modified the logstash-sample.conf file to contain the following:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["TutorialTopic"]
  }
}
output {
  stdout { codec => rubydebug }
}
And this time I am getting the following error,
sample@sample-VirtualBox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf
Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Can anyone help me resolve this issue? I have been stuck setting up the ELK stack for the last few weeks without success.
You most probably have a version conflict that is causing this issue. Check out the compatibility matrix in the Logstash Kafka input plugin's documentation.
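The stack trace shows logstash-input-kafka 5.1.6; you can confirm what's installed with the plugin CLI (a quick check, assuming the standard logstash-plugin tool in the distribution's bin directory):
# Prints the installed Kafka input plugin together with its version
bin/logstash-plugin list --verbose logstash-input-kafka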
The link you mentioned for installing Kafka has you install version 0.8.2.1, which will not work with Kafka 0.10 clients. Kafka has version checking and backwards compatibility, but only if the broker is newer than the client, which is not the case here.
I'd recommend installing a current version of Kafka; there have been immense improvements since version 0.8 that you'd miss out on if you downgraded Logstash instead.
Check out the Confluent Platform Quickstart for an easy way to get started.
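As a sketch of that upgrade path, something like the following brings up a 0.10.x broker that the Kafka 0.10 client in logstash-input-kafka 5.x can talk to (the exact version and download URL are assumptions; check the plugin's compatibility matrix first):
# Download and unpack a 0.10.x broker to match the plugin's 0.10 client
wget https://archive.apache.org/dist/kafka/0.10.2.1/kafka_2.11-0.10.2.1.tgz
tar -xzf kafka_2.11-0.10.2.1.tgz && cd kafka_2.11-0.10.2.1
# Start ZooKeeper and the broker, then re-run the same Logstash pipeline
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties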

Logstash error message when using Elasticsearch output: "Failed to flush outgoing items"

I'm using ES 1.4.4, LS 1.5, and Kibana 4 on Debian.
I start Logstash and it works fine for a couple of minutes, then I get a fatal error.
In order to shut down Logstash I have to delete the recent data stored in ES; that's the only way I have found.
One more relevant fact is that Elasticsearch looks OK: I can see old data in Kibana, and the head plugin works fine.
My output config:
output {
  elasticsearch {
    host => "127.0.0.1"
    port => 9200
    protocol => "http"
  }
}
Any help will be appreciated :)
Here is the full error message :
Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1362, :exception=>#, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:in `receive'", "/opt/logstash/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):1070:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:805:in `flat_map'", "(eval):1067:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/lib/logstash/pipeline.rb:279:in `output'", "/opt/logstash/lib/logstash/pipeline.rb:235:in `outputworker'", "/opt/logstash/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Your Elasticsearch has most likely run out of storage and is unable to write the new documents coming from Logstash. Try deleting old indices and then clearing the read-only block:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
I hope this works for you. Thanks!
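Before deleting anything, it may be worth confirming that disk space is actually the problem; the cat APIs give a quick view (this assumes the default HTTP port on localhost):
# Disk usage per node; a full disk is the usual reason an index gets a read-only block
curl -s 'localhost:9200/_cat/allocation?v'
# Shows whether the index currently carries the read-only block
curl -s 'localhost:9200/your_index/_settings?pretty'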

Logstash Config File for IIS

I have recently installed the ELK stack on a Windows server (following this: https://community.ulyaoth.net/threads/how-to-install-logstash-on-a-windows-server-with-kibana-in-iis.17/)
I can get the IIS logs from the server into Logstash and into Elasticsearch, but I can't get the same logs from another server.
Here is my Logstash config file from my second server:
input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC*/*.log"
  }
}
filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
  dns {
    reverse => [ "host" ]
    action => "replace"
  }
}
output {
  elasticsearch {
    host => "ELK01v"
    port => "9301"
  }
}
However, nothing is showing up in Kibana.
In Logstash's stderr.log I can see the following:
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)
and this from the stdout.log:
{:timestamp=>"2014-08-22T15:04:55.775000+0100", :message=>"Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-08-22T15:04:55.853000+0100", :message=>"Using milestone 2 filter plugin 'dns'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
log4j, [2014-08-22T15:05:34.215] WARN: org.elasticsearch.discovery: [logstash-WEB01v-3460-4038] waited for 30s and no initial state was set by the discovery
log4j, [2014-08-22T15:09:06.334] WARN: org.elasticsearch.transport: [logstash-WEB01v-3460-4038] Transport response handler not found of id [240]
I've confirmed that I can telnet to ELK01v on port 9301, but I can't think of what else could be causing these errors. Is there anyone with ELK knowledge who could help?
Thanks
This is an indication that it's trying to join your cluster but wasn't able to for some reason (for example a firewall: there is communication in both directions when a node joins the cluster). The easiest solution is to add protocol => "http" to your elasticsearch output. This will work since you've already verified the firewall is open in that direction.
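With that change, the output section would look something like the sketch below. Note that the HTTP protocol talks to Elasticsearch's REST port (9200 by default) rather than the 9301 transport port used above; the port value here is an assumption based on a default ES setup:
output {
  elasticsearch {
    host => "ELK01v"
    protocol => "http"
    # 9200 is the default REST port; 9301 above was a transport-protocol port
    port => "9200"
  }
}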

Logstash stuck when starting up

What's wrong with the following logstash configuration?
input {
  file {
    type => "access_log"
    # Wildcards work, here :)
    path => [ "/root/isaac/my_logs/access_logs/gw_access_log*" ]
    start_position => "beginning"
  }
}
output {
  stdout { debug => true }
  elasticsearch { embedded => true }
}
When running the above configuration, logstash is stuck on startup as follows:
[root@myvm logstash]# java -jar logstash-1.3.3-flatjar.jar agent -f logstash-complex.conf
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.3/plugin-milestones {:level=>:warn}
More importantly, what are the ways to debug this issue?
I have already checked that the files I am pointing to in the path do exist.
That isn't stuck, that's running.
You get this message once Logstash has started successfully:
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.3/plugin-milestones {:level=>:warn}
If you add -- web onto the end of your command, you should also be able to see output in the Kibana web interface.
If you aren't seeing messages appear in the console, first I would check that new entries are definitely being written to the file(s) that you're trying to tail. Since you're using the stdout output, you should see the messages written to the console at the same time as they're going into the embedded Elasticsearch.
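A quick way to force a new entry, assuming shell access on the box (the filename is hypothetical, but it matches the gw_access_log* pattern from the config):
# Append a line to a file matched by the input's glob; the file input tails it
# and a new event should appear on stdout within a few seconds
echo "test entry $(date)" >> /root/isaac/my_logs/access_logs/gw_access_log_test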
What I would suggest is that you simplify your config by removing the elasticsearch output; this should speed up the startup time (it can take a minute or two for the embedded Elasticsearch instance to start up) and lets you focus on getting messages onto the console output first, as in the sketch below.
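A minimal sketch of that simplified configuration:
input {
  file {
    type => "access_log"
    path => [ "/root/isaac/my_logs/access_logs/gw_access_log*" ]
    start_position => "beginning"
  }
}
output {
  stdout { debug => true }
}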
If you do want more verbose debug output from Logstash you can start the program with -v, -vv or -vvv for progressively more detailed debug information. E.g.:
java -jar logstash-1.3.3-flatjar.jar agent -f logstash-complex.conf -vvv
Fair warning that -vvv does produce a LOT of debug information, so start with -v and work your way up.
