I am running Logstash 8.2.2 on Ubuntu 16.04 with the following command:
bin/logstash -f /etc/logstash/conf.d/twitter.conf
Here is the content of twitter.conf:
input {
  twitter {
    consumer_key => 'nmOC0'
    consumer_secret => 'TQajpe0PSLwCP4M'
    oauth_token => '380242506-P2P'
    oauth_token_secret => 'OLhqUoIjnLj'
    keywords => ["AWS","Qbox","Elasticsearch"]
    full_tweet => true
  }
}
output {
  stdout {
    codec => dots
  }
}
Here is the error:
[WARN ][logstash.inputs.twitter ] Twitter client error {:message=>"", :exception=>Twitter::Error::Forbidden, :backtrace=>["/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/response.rb:24:in `on_headers_complete'", "org/ruby_http_parser/RubyHttpParser.java:370:in `<<'",
"/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/response.rb:19:in `<<'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/connection.rb:20:in `stream'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/client.rb:119:in `request'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/twitter-6.2.0/lib/twitter/streaming/client.rb:38:in `filter'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-twitter-4.1.0/lib/logstash/inputs/twitter.rb:166:in `do_run'", "/usr/share/logstash-8.2.2/vendor/bundle/jruby/2.5.0/gems/logstash-input-twitter-4.1.0/lib/logstash/inputs/twitter.rb:146:in `run'", "/usr/share/logstash-8.2.2/logstash-core/lib/logstash/java_pipeline.rb:410:in `inputworker'", "/usr/share/logstash-8.2.2/logstash-core/lib/logstash/java_pipeline.rb:401:in `block in start_input'"], :options=>nil}
I am trying to consume data from RabbitMQ into Elasticsearch, following this tutorial: https://akintola-lonlon.medium.com/logstash-5-easy-steps-to-consume-data-from-rabbitmq-to-elasticsearch-8fb0eb6e9196 (my RabbitMQ queue is system_logs).
This is my logstash-rabbitmq.conf:
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    ack => false
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # COMBINEDAPACHELOG timestamps use an abbreviated month name, e.g. 10/Oct/2000:13:55:36 -0700
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash_rabbit_mq_hello"
  }
  stdout {
    codec => rubydebug
  }
}
Then I run sudo bin/logstash -f conf.d/logstash-rabbitmq.conf and get the following error:
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Error while setting up connection, will retry {:exception=>MarchHare::PreconditionFailed, :message=>"PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'system_logs' in vhost '/': received 'false' but current is 'true'", :cause=>#<Java::JavaIo::IOException: >}
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] RabbitMQ connection was closed {:url=>"amqp://guest:XXXXXX@localhost:5672/", :automatic_recovery=>true, :cause=>#<Java::ComRabbitmqClient::ShutdownSignalException: clean connection shutdown; protocol method: #method<connection.close>(reply-code=200, reply-text=OK, class-id=0, method-id=0)>}
[2022-10-17T10:08:44,929][INFO ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Connected to RabbitMQ {:url=>"amqp://guest:XXXXXX@localhost:5672/"}
How can I fix this problem? I am a beginner with RabbitMQ and the ELK stack; please help.
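A note on the error above: RabbitMQ refuses to redeclare an existing queue with different arguments, and system_logs already exists on the broker as durable while the rabbitmq input declares it with durable => false (the plugin's default). A minimal sketch of the likely fix, assuming the queue should stay durable:
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    durable => true   # match the broker's existing declaration
    ack => false
  }
}
Alternatively, setting passive => true makes the plugin consume from the queue without trying to redeclare it.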
I want to send data to two endpoints from Logstash, one of which is an HTTP endpoint and the other HTTPS.
I tried putting the username and password for the HTTPS endpoint in the URL itself, but Logstash applies those credentials to the other endpoint as well.
My current output block looks like:
output {
  elasticsearch {
    index => "index_name"
    document_id => "%{index_id}"
    hosts => ["https://elastic:pass@clusterid.asia-northeast1.gcp.cloud.es.io:9243",
              "http://127.0.0.1:9200"]
  }
}
I'm getting this message in the logs:
Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@clusterid.asia-northeast1.gcp.cloud.es.io:9243/, https://elastic:xxxxxx@127.0.0.1:9200/]}}
And this:
[logstash.agent] Failed to execute action {:id=>:"cloud-elastic", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<cloud-elastic>, action_result: false", :backtrace=>nil}
Please try using separate elasticsearch outputs for the HTTPS and HTTP endpoints, with the settings below:
if "Https" in [tag]{
elasticsearch {
hosts => [ "https://elastic:pass#clusterid.asia-northeast1.gcp.cloud.es.io:9243" ]
user => "${ES_USER:admin}"
password => "${ES_PWD:admin}"
ssl => true
ssl_certificate_verification => false
cacert => "${CERTS_DIR}/certs/ca.crt"
index => "%{[docInfo][index]}"
action => "index"
}
} else {
elasticsearch {
hosts => [ "http://127.0.0.1:9200" ]
index => "%{[docInfo][index]}"
action => "index"
}
}
In your .bashrc file, set the environment variables below:
export ES_USER=elastic
export ES_PWD=pass
export CERTS_DIR=/home/abc
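One caveat on the conditional above: it only routes events that already carry the Https tag, and nothing shown in the question sets that tag. A minimal sketch of how it could be set, where the [destination] field and its "cloud" value are purely hypothetical placeholders for whatever distinguishes your secure-cluster events:
filter {
  if [destination] == "cloud" {
    mutate {
      add_tag => ["Https"]   # routes the event to the HTTPS output above
    }
  }
}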
I use Google protobuf in the Logstash input, and Logstash fails at startup when I run:
./bin/logstash -f logstash.conf -r
The error is:
[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"NoMethodError", :message=>"undefined method `msgclass' for nil:NilClass", :backtrace=>["/home/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-protobuf-1.1.0/lib/logstash/codecs/protobuf.rb:101:in `register'", "/home/logstash/logstash-core/lib/logstash/codecs/base.rb:20:in `initialize'", "/home/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:97:in `plugin'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:110:in `plugin'", "(eval):8:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:82:in `initialize'", "/home/logstash/logstash-core/lib/logstash/pipeline.rb:167:in `initialize'", "/home/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/home/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}
The logstash.conf settings are:
input {
  beats {
    port => 5044
    ssl => false
    codec => protobuf {
      class_name => ["Elk.ElkData"]
      include_path => ["/home/logstash/test_code/elk.pb.rb"]
      protobuf_version => 3
    }
    type => "protobuf"
  }
}
output {
stdout { codec => rubydebug }
}
The register method in logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-protobuf-1.1.0/lib/logstash/codecs/protobuf.rb is:
def register
  @metainfo_messageclasses = {}
  @metainfo_enumclasses = {}
  @metainfo_pb2_enumlist = []
  include_path.each { |path| load_protobuf_definition(path) }
  if @protobuf_version == 3
    @pb_builder = Google::Protobuf::DescriptorPool.generated_pool.lookup(class_name).msgclass
  else
    @pb_builder = pb2_create_instance(class_name)
  end
end
The failing call is:
Google::Protobuf::DescriptorPool.generated_pool.lookup(class_name).msgclass
The Logstash version is 6.3.0, protoc is 3.6.1, and ruby-protoc is 1.6.1. The related discussion on the Elastic forum is:
https://discuss.elastic.co/t/logstash-uses-protobuf-running-error-nomethoderror-message-undefined-method-msgclass-for-nil-nilclass/144806?u=sun_changlong
Is this an environment issue or a protobuf version issue? The same setup works in a protobuf 2 environment. Any suggestions are welcome.
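A hedged reading of the backtrace: lookup(class_name) returned nil, meaning the descriptor pool has no message registered under that name. Two things worth checking, offered as a sketch rather than a confirmed fix: the plugin documents class_name as a string, and for protobuf_version => 3 the definition file must be generated by protoc --ruby_out (the google-protobuf gem API); files generated with ruby-protoc follow the protobuf 2 API and register nothing in generated_pool, which would also explain why the same setup works with protobuf 2.
protoc --ruby_out=/home/logstash/test_code elk.proto
codec => protobuf {
  class_name => "Elk.ElkData"                              # a plain string, not an array
  include_path => ["/home/logstash/test_code/elk_pb.rb"]   # protoc emits elk_pb.rb for elk.proto
  protobuf_version => 3
}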
I'm using Logstash with persistent queueing enabled.
I've set up Logstash to inject rows from MySQL via the jdbc input plugin on startup. Currently this injects 1846 rows. I also have an http input.
When I take down Elasticsearch and restart Logstash, I get the expected errors:
logstash_1 WARN logstash.outputs.amazones - Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Faraday::ConnectionFailed", :backtrace=>nil}
logstash_1 ERROR logstash.outputs.amazones - Attempted to send a bulk request to Elasticsearch configured at ...
I'd expect that hitting the Logstash http input in this situation would still result in an ack. Actually, the HTTP POST does not return, and the injection is not seen in the Logstash logs.
My logstash.yml looks like:
queue.type: persisted
queue.checkpoint.writes: 1
queue.max_bytes: 8gb
queue.page_capacity: 512mb
And my logstash.conf:
input {
  jdbc {
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    jdbc_driver_library => "/home/logstash/jdbc_driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "
      SELECT blah blah blah
    "
  }
  http {
    host => "0.0.0.0"
    port => 31311
  }
}
output {
  stdout { codec => json_lines }
  amazon_es {
    hosts => ["${AWS_ES_HOST}"]
    region => "${AWS_REGION}"
    aws_access_key_id => "${AWS_ACCESS_KEY_ID}"
    aws_secret_access_key => "${AWS_SECRET_ACCESS_KEY}"
    index => "${INDEX_NAME}"
    document_type => "data"
    document_id => "%{documentid}"
  }
}
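As an aside, a quick way to probe whether the http input still acknowledges while the output is down; the port and a throwaway JSON payload follow the config above. When Logstash applies back-pressure, the http input is expected to answer with HTTP 429 rather than hang:
curl -v -XPOST -H 'Content-Type: application/json' -d '{"message":"queue probe"}' http://localhost:31311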
Is it possible for the http input to still ack events? I'm pretty sure the queue cannot be full, since each event payload is about 850 characters.
Thanks in advance.
I'm trying to run this:
input {
  twitter {
    # add your data
    consumer_key => "shhhhh"
    consumer_secret => "shhhhh"
    oauth_token => "shhhhh"
    oauth_token_secret => "shhhhh"
    keywords => ["words"]
    full_tweet => true
  }
}
output {
  elasticsearch_http {
    host => "shhhhh"
    index => "idx_ls"
    index_type => "tweet_ls"
  }
}
This is the error I got:
Sending Logstash's logs to /usr/local/Cellar/logstash/5.2.1/libexec/logs which is now configured via log4j2.properties
[2017-02-24T04:48:03,060][ERROR][logstash.plugins.registry] Problems loading a plugin with {:type=>"output", :name=>"elasticsearch_http", :path=>"logstash/outputs/elasticsearch_http", :error_message=>"NameError", :error_class=>NameError, :error_backtrace=>["/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:221:in `namespace_lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:157:in `legacy_lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:133:in `lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugins/registry.rb:175:in `lookup_pipeline_plugin'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/plugin.rb:129:in `lookup'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/pipeline.rb:452:in `plugin'", "(eval):12:in `initialize'", "org/jruby/RubyKernel.java:1079:in `eval'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/pipeline.rb:98:in `initialize'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/agent.rb:246:in `create_pipeline'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/agent.rb:95:in `register_pipeline'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/runner.rb:264:in `execute'", "/usr/local/Cellar/logstash/5.2.1/libexec/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/logstash-core/lib/logstash/runner.rb:183:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/local/Cellar/logstash/5.2.1/libexec/lib/bootstrap/environment.rb:71:in `(root)'"]}
[2017-02-24T04:48:03,073][ERROR][logstash.agent ] fetched an invalid config {:config=>"input { \n twitter {\n # add your data\n consumer_key => \"shhhhh\"\n consumer_secret => \"Shhhhhh\"\n oauth_token => \"shhhh\"\n oauth_token_secret => \"shhhhh\"\n keywords => [\"word\"]\n full_tweet => true\n }\n}\noutput { \n elasticsearch_http {\n host => \"shhhhh.amazonaws.com\"\n index => \"idx_ls\"\n index_type => \"tweet_ls\"\n }\n}\n", :reason=>"Couldn't find any output plugin named 'elasticsearch_http'. Are you sure this is correct? Trying to load the elasticsearch_http output plugin resulted in this error: Problems loading the requested plugin named elasticsearch_http of type output. Error: NameError NameError"}
I've tried installing elasticsearch_http but it doesn't seem to be a package. I've also tried
logstash-plugin install logstash-input-elasticsearch
and
logstash-plugin install logstash-output-elasticsearch
which did install, but I got the same error.
I'm totally new to Logstash, so this might be very simple.
I am trying to follow this: https://www.rittmanmead.com/blog/2015/08/three-easy-ways-to-stream-twitter-data-into-elasticsearch/
I tried Val's answer and got this:
[2017-02-24T05:12:45,385][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x4c2332e0 URL:http://shhhhh:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://sshhhhhh:9200/][Manticore::ConnectTimeout] connect timed out"}
I can open the URL in a browser and get a response, and the permissions are wide open, so I'm not sure what the issue would be.
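A hedged sanity check: the browser test may travel a different network path than Logstash does. Running the same request from the machine where Logstash runs (the host placeholder stands in for the masked URL above) separates connectivity problems from configuration ones:
curl -v http://<your-es-host>:9200/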
The elasticsearch_http output no longer exists. You need to use the elasticsearch output instead.
elasticsearch {
  hosts => "localhost:9200"
  index => "idx_ls"
  document_type => "tweet_ls"
}
Just an addition to @Val's answer: what if you have your hosts parameter without the port?
output {
  elasticsearch {
    index => "idx_ls"
    document_type => "tweet_ls"
    hosts => "localhost"
  }
}
By default, Elasticsearch runs on port 9200, so you don't have to set it explicitly.