Logstash Config File for IIS - Windows

I have recently installed the ELK stack on a Windows server (following this guide: https://community.ulyaoth.net/threads/how-to-install-logstash-on-a-windows-server-with-kibana-in-iis.17/).
I can get the IIS logs from that server into Logstash and on into Elasticsearch, but I can't get the same logs from another server.
Here is my Logstash config file from the second server:
input {
  file {
    type => "IISLog"
    path => "C:/inetpub/logs/LogFiles/W3SVC*/*.log"
  }
}
filter {
  mutate {
    add_field => [ "hostip", "%{host}" ]
  }
  dns {
    reverse => [ "host" ]
    action => replace
  }
}
output {
  elasticsearch {
    host => "ELK01v"
    port => "9301"
  }
}
but nothing is showing up in Kibana.
In Logstash's stderr.log I can see the following:
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)
and this in stdout.log:
{:timestamp=>"2014-08-22T15:04:55.775000+0100", :message=>"Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-08-22T15:04:55.853000+0100", :message=>"Using milestone 2 filter plugin 'dns'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
log4j, [2014-08-22T15:05:34.215] WARN: org.elasticsearch.discovery: [logstash-WEB01v-3460-4038] waited for 30s and no initial state was set by the discovery
log4j, [2014-08-22T15:09:06.334] WARN: org.elasticsearch.transport: [logstash-WEB01v-3460-4038] Transport response handler not found of id [240]
I've confirmed that I can telnet to ELK01v on port 9301, but I can't think what else could be causing these errors. Is there anyone with ELK knowledge who could help?
Thanks

This is an indication that Logstash is trying to join your cluster but isn't able to for some reason (for example a firewall -- there is communication in both directions when a node joins the cluster). The easiest solution is to add protocol => http to your elasticsearch output. This will work since you've already verified that the firewall is open in that direction.
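With that change, the output block from the question might look like the sketch below. Note that the Elasticsearch HTTP API normally listens on port 9200 rather than the 9300-9302 transport range, so the port shown here is an assumption you may need to adjust for your setup:
output {
  elasticsearch {
    host => "ELK01v"
    protocol => "http"
    # The HTTP API usually listens on 9200; 9301 is a transport-protocol port.
    port => "9200"
  }
}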

Related

Loading logs in one machine into elasticsearch located setup in another machine using logstash

I have my logs and Logstash running on one EC2 machine (M1), so I read the logs on that machine with this config:
input {
  file {
    path => "/path/to/logs/in/M1"
    start_position => "beginning"
  }
}
Now, we have Elasticsearch running on a different EC2 machine (M2), and I need to transfer the logs from M1 to Elasticsearch on M2 using Logstash. I used the following output config:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "http://<M2 ip address>:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
When I run the config file, I get the following error:
04:18:57.640 [[main]>worker0] WARN logstash.outputs.elasticsearch - UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
04:18:57.646 [[main]>worker0] ERROR logstash.outputs.elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
04:18:59.682 [[main]>worker0] WARN logstash.outputs.elasticsearch - UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
04:18:59.686 [[main]>worker0] ERROR logstash.outputs.elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
04:19:01.109 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:188] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x1d08c988 URL:http://10.60.40.120:9200>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://10.60.40.120:9200][Manticore::ConnectTimeout] connect timed out"}
04:19:02.111 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.4.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:188] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x55444fcf URL:http://10.60.40.120:9200>, :healthcheck_path=>"/"}
I am new to logstash. Any help is appreciated.
UPDATE:
So I looked around in forums, and one solution I found said to update the Logstash Elasticsearch output plugin using the command:
sudo /usr/share/logstash/bin/logstash-plugin update logstash-output-elasticsearch
I also updated the Logstash config file to include a username and password:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["<M2 ip address>"]
    user => 'username'
    password => 'changeme'
    index => "logstash-%{+YYYY.MM.dd}"
    manage_template => false
  }
}
Now I'm getting a different error. Please help:
09:16:21.305 [[main]>worker0] WARN logstash.outputs.elasticsearch - Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.04.17", :_type=>"Messagelog", :_routing=>nil}, 2017-04-17T10:06:11.348Z ip-10-60-40-201 No valid licenses found for COLL], :response=>{"index"=>{"_index"=>"logstash-2017.04.17", "_type"=>"Messagelog", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index and [action.auto_create_index] ([.security,.monitoring*,.watches,.triggered_watches,.watcher-history*]) doesn't match", "index_uuid"=>"_na_", "index"=>"logstash-2017.04.17"}}}}
Thanks.
It looks like you have disabled automatic index creation on Elasticsearch. By default, Elasticsearch creates indices automatically.
Remove
action.auto_create_index: -b*,+a*,-*
(whatever the pattern is) from your elasticsearch.yml and you will be good.
Alternatively, if you only want to allow automatic creation of indices whose names start with l, use the pattern +l*, i.e. add
action.auto_create_index: +l*
See the Elasticsearch documentation on automatic index creation for more information.
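For example, to let the daily logstash-* indices from the config above be created automatically while everything else stays blocked, a pattern along these lines could go in elasticsearch.yml (a sketch; you may also want to keep the X-Pack system index patterns listed in the original error message):
action.auto_create_index: +logstash-*,-*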

Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available

I have installed Logstash 5.2.2 from the zip download on a VM with a fresh Ubuntu install.
I have created a sample config file, logstash-sample.conf, with the following content:
input {
  stdin { }
}
output {
  stdout { }
}
When I execute the command $ bin/logstash -f logstash-sample.conf, it runs absolutely fine.
Now, on the same Ubuntu machine, I installed Kafka by following the exact process described here (https://www.digitalocean.com/community/tutorials/how-to-install-apache-kafka-on-ubuntu-14-04), up to step 7.
Then I modified logstash-sample.conf to contain the following:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["TutorialTopic"]
  }
}
output {
  stdout { codec => rubydebug }
}
This time I am getting the following error:
sample@sample-VirtualBox:~/Downloads/logstash-5.2.2$ bin/logstash -f logstash-sample.conf
Sending Logstash's logs to /home/rs-switch/Downloads/logstash-5.2.2/logs which is now configured via log4j2.properties
[2017-03-07T00:26:25,629][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-03-07T00:26:25,650][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-07T00:26:26,039][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.consumer.ConsumerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "Ruby-0-Thread-14: /home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:229" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 873589, only 41 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(org/apache/kafka/common/protocol/types/Schema.java:73)
at org.apache.kafka.clients.NetworkClient.parseResponse(org/apache/kafka/clients/NetworkClient.java:380)
at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(org/apache/kafka/clients/NetworkClient.java:449)
at org.apache.kafka.clients.NetworkClient.poll(org/apache/kafka/clients/NetworkClient.java:269)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(org/apache/kafka/clients/consumer/internals/ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java:179)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:974)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:938)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at RUBY.thread_runner(/home/rs-switch/Downloads/logstash-5.2.2/vendor/bundle/jruby/1.9/gems/logstash-input-kafka-5.1.6/lib/logstash/inputs/kafka.rb:239)
at java.lang.Thread.run(java/lang/Thread.java:745)
[2017-03-07T00:26:28,742][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Can anyone help me resolve this issue? I have been stuck setting up ELK for the last few weeks without success.
You most probably have a version conflict that is causing this issue. Check the compatibility matrix in the documentation for the Logstash Kafka input plugin.
The link you followed for installing Kafka has you install version 0.8.2.1, which will not work with Kafka 0.10 clients. Kafka has version checking and backwards compatibility, but only if the broker is newer than the client, which is not the case here.
I'd recommend installing a current version of Kafka; there have been immense improvements since version 0.8 that you'd miss out on if you downgraded Logstash instead.
Check out the Confluent Platform Quickstart for an easy way to get started.
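To confirm the mismatch before upgrading the broker, you can check which version of the Kafka input plugin (and therefore which Kafka client) your Logstash ships with and compare it against the compatibility matrix, for example from the Logstash install directory:
bin/logstash-plugin list --verbose logstash-input-kafka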

Logstash 5 Alpha4 to elasticsearch5 Alpha4 communication error

Elasticsearch 5 is secured with X-Pack security and hooked up to LDAP, which is working fine. The user even has admin rights in role_mapping.
The Logstash 5 output configuration is as below:
output {
  elasticsearch {
    hosts => ['localhost:9200']
    user => 'gaurav@gmail.com'
    password => 'pwd'
  }
}
I am getting the error below, because of which Logstash is not able to pass data to Elasticsearch.
{:timestamp=>"2016-07-14T16:32:29.592000+0530",
:message=>"Encountered an unexpected error submitting a bulk request! Will retry.",
:error_message=>"undefined method `code' for #",
:class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:217:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:105:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:72:in `retrying_submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:22:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:136:in `threadsafe_multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_
I think I may have figured it out. I am using the Logstash 5.1.1-alpine Docker image. As far as I can tell, it comes with v4.5.0 of the elasticsearch output plugin, which seems to have this bug. Forcing an update of that plugin to the latest version (6.2.0) has fixed the issue.
My Dockerfile is now:
FROM logstash:5.1.1-alpine
RUN $LOGSTASH_PATH/logstash-plugin install --version 6.2.0 logstash-output-elasticsearch
With the updated plugin, I no longer see this error.
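If you are not running in Docker, the equivalent fix should simply be updating the plugin in place in a regular install, along these lines (path assumes a standard package install):
/usr/share/logstash/bin/logstash-plugin update logstash-output-elasticsearch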

Logstash error message when using ElasticSearch output=>"Failed to flush outgoing items"

I'm using ES 1.4.4, LS 1.5 and Kibana 4 on Debian.
I start Logstash, it works fine for a couple of minutes, then I get a fatal error.
In order to shut down Logstash I have to delete the recent data stored in ES; that's the only way I've found.
One more relevant fact: Elasticsearch itself looks OK, I can see old data in Kibana and the head plugin works fine.
My output config: output { elasticsearch { port => 9200 protocol => http host => "127.0.0.1" } }
Any help will be appreciated :)
Here is the full error message:
Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1362, :exception=>#, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:in `perform_request'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:in `receive'", "/opt/logstash/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):1070:in `initialize'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:805:in `flat_map'", "(eval):1067:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/lib/logstash/pipeline.rb:279:in `output'", "/opt/logstash/lib/logstash/pipeline.rb:235:in `outputworker'", "/opt/logstash/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Your Elasticsearch has most likely run out of storage and is unable to write the new documents coming from Logstash. Try deleting old indices and then re-enable writes on the blocked index:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
I hope this works for you. Thanks!
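As a quick sketch, the same request can be sent with curl against the host from the question (your_index is a placeholder for the blocked index name):
curl -XPUT 'http://127.0.0.1:9200/your_index/_settings' -d '{
  "index": {
    "blocks.read_only": false
  }
}'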

Logstash stuck when starting up

What's wrong with the following logstash configuration?
input {
  file {
    type => "access_log"
    # Wildcards work, here :)
    path => [ "/root/isaac/my_logs/access_logs/gw_access_log*" ]
    start_position => "beginning"
  }
}
output {
  stdout { debug => true }
  elasticsearch { embedded => true }
}
When running the above configuration, logstash is stuck on startup as follows:
[root@myvm logstash]# java -jar logstash-1.3.3-flatjar.jar agent -f logstash-complex.conf
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.3/plugin-milestones {:level=>:warn}
More importantly, what are the ways to debug this issue?
I have already checked that the files matching the path do exist.
That isn't stuck, that's running.
You get this:
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.3.3/plugin-milestones {:level=>:warn}
once Logstash has started successfully.
If you add -- web onto the end of your command, you should be able to see some output in the Kibana web interface.
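For example, using the command from the question:
java -jar logstash-1.3.3-flatjar.jar agent -f logstash-complex.conf -- web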
If you aren't seeing messages appear in the console, first I would check that new entries are definitely being written to the file(s) you're trying to tail. Since you're using the stdout output, you should see messages written to the console at the same time as they go into the embedded Elasticsearch.
What I would suggest is simplifying your config by removing the elasticsearch output, which should speed up the startup time (it can take a minute or two for the embedded Elasticsearch instance to start up), and focusing on getting messages onto the console first, as in the sketch below.
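A stripped-down version of the original config for that test might look like this (same file input, elasticsearch output removed):
input {
  file {
    type => "access_log"
    path => [ "/root/isaac/my_logs/access_logs/gw_access_log*" ]
    start_position => "beginning"
  }
}
output {
  stdout { debug => true }
}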
If you do want more verbose debug output from Logstash you can start the program with -v, -vv or -vvv for progressively more detailed debug information. E.g.:
java -jar logstash-1.3.3-flatjar.jar agent -f logstash-complex.conf -vvv
Fair warning that -vvv does produce a LOT of debug information, so start with -v and work your way up.
