I use Google protobuf in the Logstash input, and Logstash fails with an error on startup.
./bin/logstash -f logstash.conf -r
The error is:
[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"NoMethodError", :message=>"undefined method `msgclass' for nil:NilClass", :backtrace=>["/home/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-protobuf-1.1.0/lib/logstash/codecs/protobuf.rb:101:in `register'", "/home/logstash/logstash-core/lib/logstash/codecs/base.rb:20:in `initialize'", "/home/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:97:in `plugin'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:110:in `plugin'", "(eval):8:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:82:in `initialize'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:167:in `initialize'", "/home/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/home/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}
My logstash.conf is:
input {
  beats {
    port => 5044
    ssl => false
    codec => protobuf {
      class_name => ["Elk.ElkData"]
      include_path => ["/home/logstash/test_code/elk.pb.rb"]
      protobuf_version => 3
    }
    type => "protobuf"
  }
}
output {
  stdout { codec => rubydebug }
}
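For context, with protobuf_version 3 the file in include_path is assumed to have been generated by the official protoc Ruby plugin, along these lines (an elk.proto declaring package Elk and message ElkData is my assumption; those names must match class_name exactly):

protoc --ruby_out=/home/logstash/test_code/ elk.proto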
The register method in logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-protobuf-1.1.0/lib/logstash/codecs/protobuf.rb is:
def register
  @metainfo_messageclasses = {}
  @metainfo_enumclasses = {}
  @metainfo_pb2_enumlist = []
  include_path.each { |path| load_protobuf_definition(path) }
  if @protobuf_version == 3
    @pb_builder = Google::Protobuf::DescriptorPool.generated_pool.lookup(class_name).msgclass
  else
    @pb_builder = pb2_create_instance(class_name)
  end
end
Line 101, where the error is raised, is the call Google::Protobuf::DescriptorPool.generated_pool.lookup(class_name).msgclass, so the NoMethodError on nil means lookup(class_name) returned nil.
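A minimal way to check that lookup outside Logstash, in plain JRuby/Ruby (a sketch under my assumptions, not the codec's actual code; the file path and message name are taken from the config above):

require '/home/logstash/test_code/elk.pb.rb'

# Proto3 Ruby code generated by `protoc --ruby_out` registers each message in the
# generated pool under its fully qualified protobuf name, e.g. "Elk.ElkData"
# for a file declaring `package Elk; message ElkData { ... }`.
descriptor = Google::Protobuf::DescriptorPool.generated_pool.lookup("Elk.ElkData")
if descriptor.nil?
  puts "lookup returned nil: the name does not match any registered message"
else
  puts descriptor.msgclass # the class the codec instantiates for decoding
end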
Logstash version is 6.3.0, protoc version is 3.6.1, and ruby-protoc version is 1.6.1.
The Elastic community question link is as follows:
https://discuss.elastic.co/t/logstash-uses-protobuf-running-error-nomethoderror-message-undefined-method-msgclass-for-nil-nilclass/144806?u=sun_changlong
Is this caused by my environment or by the protobuf version? The same setup works in a protobuf 2 environment. Any suggestions are welcome.
Related
I am running Logstash 8.2.2 on Ubuntu 16.04 with the following command:
bin/logstash -f /etc/logstash/conf.d/twitter.conf
Here is the content of twitter.conf:
input {
  twitter {
    consumer_key => 'nmOC0'
    consumer_secret => 'TQajpe0PSLwCP4M'
    oauth_token => '380242506-P2P'
    oauth_token_secret => 'OLhqUoIjnLj'
    keywords => ["AWS","Qbox","Elasticsearch"]
    full_tweet => true
  }
}
output {
  stdout {
    codec => dots
  }
}
Here is the error:
[WARN ][logstash.inputs.twitter ] Twitter client error {:message=>"", :exception=>Twitter::Error::Forbidden, :backtrace=>["/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/response.rb:24:in `on_headers_complete'", "org/ruby_http_parser/RubyHttpParser.java:370:in `<<'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/response.rb:19:in `<<'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/connection.rb:20:in `stream'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/client.rb:119:in `request'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/client.rb:38:in `filter'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-twitter-4.1.0/lib/logstash/inputs/twitter.rb:166:in `do_run'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-twitter-4.1.0/lib/logstash/inputs/twitter.rb:146:in `run'", "/usr/share/logstash-8.2.2/logstash-core/lib/logstash/java_pipeline.rb:410:in `inputworker'", "/usr/share/logstash-8.2.2/logstash-core/lib/logstash/java_pipeline.rb:401:in `block in start_input'"], :options=>nil}
I'm a newbie with Logstash and Elasticsearch. I wanted to sync my MongoDB data into Elasticsearch using the Logstash plugin logstash-input-mongodb.
My mongodata.conf is:
input {
  uri => 'mongodb://127.0.0.1:27017/final?ssl=true'
  placeholder_db_dir => '/opt/logstash-mongodb/'
  placeholder_db_name => 'logstash_sqlite.db'
  collection => 'twitter_stream'
  batch_size => 5000
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "twitter_stream"
    hosts => ["localhost:9200"]
  }
}
When I run bin/logstash -f /etc/logstash/conf.d/mongodata.conf --path.settings /etc/logstash/, the following error is displayed:
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2020-02-28T08:48:20,246][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-02-28T08:48:20,331][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.6.0"}
[2020-02-28T08:48:20,883][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"{\" at line 2, column 13 (byte 21) after input {\n uri ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:47:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:55:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:17:in `block in compile_sources'", "org/jruby/RubyArray.java:2580:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:14:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:161:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326:in `block in converge_state'"]}
[2020-02-28T08:48:21,114][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-02-28T08:48:25,969][INFO ][logstash.runner ] Logstash shut down.
Please help me; I have no idea what is causing this.
Your configuration is wrong; you need to specify which input plugin you are using.
Try to change your input to this one:
input {
  mongodb {
    uri => 'mongodb://127.0.0.1:27017/final?ssl=true'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'twitter_stream'
    batch_size => 5000
  }
}
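Note that logstash-input-mongodb is a community plugin that does not ship with Logstash, so if it is not installed yet you would also need something like:

bin/logstash-plugin install logstash-input-mongodb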
This configuration is used to send the slowlog of Elasticsearch 5.1.1 to Logstash 5.1.1 as an input:
input {
  file {
    path => "C:\Users\571952\Downloads\elasticsearch-5.1.1\elasticsearch-5.1.1\logs\elasticsearch_index_search_slowlog"
    start_position => "beginning"
  }
}
filter {
  grok { # parses the common bits
    match => [ "message", "\[%{URIHOST}:%{ISO8601_SECOND}\]\[%{LOGLEVEL:log_level}\]\[%{DATA:es_slowquery_type}\]\s*\[%{DATA:es_host}\]\s*\[%{DATA:es_index}\]\s*\[%{DATA:es_shard}\]\s*took\[%{DATA:es_duration}\],\s*took_millis\[%{DATA:es_duration_ms:float}\],\s*types\[%{DATA:es_types}\],\s*stats\[%{DATA:es_stats}\],\s*search_type\[%{DATA:es_search_type}\],\s*total_shards\[%{DATA:es_total_shards:float}\],\s*source\[%{GREEDYDATA:es_source}\],\s*extra_source\[%{GREEDYDATA:es_extra_source}\]" ]
  }
  mutate {
    gsub => [
      "source_body", "], extra_source[$", ""
    ]
  }
}
output {
  file {
    path => "C:\Users\571952\Desktop\logstash-5.1.1\just_queries"
    codec => "json_lines"
  }
}
When I ran this, it showed an error like this in the command prompt:
[2017-01-04T18:30:32,032][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<RegexpError: premature end of char-class: /], extra_source[$/>, :backtrace=>["org/jruby/RubyRegexp.java:1424:in `initialize'", "C:/Users/571952/Desktop/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:196:in `register'", "org/jruby/RubyArray.java:1653:in `each_slice'", "C:/Users/571952/Desktop/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/logstash-filter-mutate-3.1.3/lib/logstash/filters/mutate.rb:184:in `register'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:230:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:230:in `start_workers'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/pipeline.rb:183:in `run'", "C:/Users/571952/Desktop/logstash-5.1.1/logstash-core/lib/logstash/agent.rb:292:in `start_pipeline'"]}
[2017-01-04T18:30:32,141][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-01-04T18:30:35,036][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
Can anyone help me in solving this problem?
This is a sample line from my slowlog:
[2016-12-28T15:53:21,341][DEBUG][index.search.slowlog.query] [vVhZxH7] [sw][0] took[184.7micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{
"ext" : { }
}],
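For what it's worth, the RegexpError itself points at the cause: in the gsub pattern ], extra_source[$ the unescaped [ opens a character class that never closes. A minimal sketch of the corrected filter, escaping the literal brackets, would be:

mutate {
  gsub => [
    # escape [ and ] so they match literal brackets instead of starting a character class
    "source_body", "\], extra_source\[$", ""
  ]
}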
I'm trying to set up Logstash and I'm following this tutorial exactly. But when I run the command
bin/logstash -e 'input { stdin { } } output { stdout {} }'
it gives me the following error:
warning: --1.9 ignored
LoadError: no such file to load -- bundler
  require at org/jruby/RubyKernel.java:940
  require at C:/jruby-9.0.0.0/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:54
  setup! at C:/Users/ryan.dai/Desktop/logstash-1.5.3/lib/bootstrap/bundler.rb:43
  <top> at c:/Users/ryan.dai/Desktop/logstash-1.5.3/lib/bootstrap/environment.rb:46
I tried jruby -S gem install bundler as suggested by someone else, but it didn't work. I'm totally new to Ruby; what is happening and what should I do?
You can follow the URL below for installing the entire ELK setup.
Here you need to pass the log file path to the file input in the Logstash configuration.
input {
  file {
    path => "/tmp/access_log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
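For reference, %{COMBINEDAPACHELOG} expects Apache's combined log format, so /tmp/access_log is assumed to contain lines like this made-up example:

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I; Nav)"

The date filter then parses the bracketed timestamp that grok captured into the timestamp field and uses it as the event's @timestamp.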
ELK Setup Installation
Commands for running from the CMD prompt:
logstash -f logstash.conf to run Logstash
logstash --configtest -f logstash.conf to test the configuration
logstash --debug -f logstash.conf to debug the Logstash configuration
Logstash configuration Examples
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 filter plugin 'csv'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
My configuration:
input {
  file {
    path => [ "e:\mycsvfile.csv" ]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["col1","col2"]
    source => "csv_data"
    separator => ","
  }
}
output {
  elasticsearch {
    host => localhost
    port => 9200
    index => test
    index_type => test_type
    protocol => http
  }
  stdout {
    codec => rubydebug
  }
}
My environment:
Windows 8
logstash 1.4.2
Question: Has anyone experienced this before? Where do the Logstash logs go? Are there known Logstash bugs on Windows? My experience is that Logstash does not do anything.
I tried:
logstash.bat agent -f test.conf --verbose
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 2 filter plugin 'csv'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Registering file input {:path=>["e:/temp.csv"], :level=>:info}
No sincedb_path set, generating one based on the file path {:sincedb_path=>"C:\Users\gemini/.sincedb_d8e46c18292a898ea0b5b1cd94987f21", :path=>["e:/temp.csv"], :level=>:info}
Pipeline started {:level=>:info}
New Elasticsearch output {:cluster=>nil, :host=>"localhost", :port=>9200, :embedded=>false, :protocol=>"http", :level=>:info}
Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>"{ \"template\" : \"logstash-*\", \"settings\" : { \"index.refresh_interval\" : \"5s\" }, \"mappings\" : { \"_default_\" : { \"_all\" : {\"enabled\" : true}, \"dynamic_templates\" : [ { \"string_fields\" : { \"match\" : \"*\", \"match_mapping_type\" : \"string\", \"mapping\" : { \"type\" : \"string\", \"index\" : \"analyzed\", \"omit_norms\" : true, \"fields\" : { \"raw\" : {\"type\": \"string\", \"index\" : \"not_analyzed\", \"ignore_above\" : 256} } } } } ], \"properties\" : { \"@version\": { \"type\": \"string\", \"index\": \"not_analyzed\" }, \"geoip\" : { \"type\" : \"object\", \"dynamic\": true, \"path\": \"full\", \"properties\" : { \"location\" : { \"type\" : \"geo_point\" } } } } } }}", :level=>:info}
It stays like this for a while and no new index is created in elasticsearch.
I had to add:
sincedb_path => "NIL"
and it worked.
http://logstash.net/docs/1.1.0/inputs/file#setting_sincedb_path
sincedb_path: Value type is string. There is no default value for this setting. Where to write the since database (keeps track of the current position of monitored log files). Defaults to the value of environment variable "$SINCEDB_PATH" or "$HOME/.sincedb".
I've had several sincedb files generated in my C:\Users\{user} directory.
While using CSV as the input data, I had to add sincedb_path => "NIL" inside the file {} block.
Example:
input {
  file {
    path => [ "C:/csvfilename.txt" ]
    start_position => "beginning"
    sincedb_path => "NIL"
  }
}
and it worked for Logstash version 1.4.2.
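One caveat worth adding: "NIL" is not a special value here; it simply makes Logstash write the since database to a file literally named NIL, which starts position tracking fresh. On Windows the null device is NUL, so a variant like the following (same config, only the path changed) discards sincedb state entirely:

input {
  file {
    path => [ "C:/csvfilename.txt" ]
    start_position => "beginning"
    sincedb_path => "NUL"   # Windows null device; position tracking is never persisted
  }
}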