Consume messages from RabbitMQ in Logstash - Elasticsearch

I'm trying to read logs from a RabbitMQ queue with Logstash and then pass them to Elasticsearch, but with no success. Here is my Logstash config:
input {
  rabbitmq {
    host => "localhost"
    port => 15672
    heartbeat => 30
    durable => true
    exchange => "logging_queue"
    exchange_type => "logging_queue"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout {}
}
But no index is created, so of course I can't see any logs in Kibana.
There are some messages in the queue.

I think the correct (default) port is 5672, as 15672 is the port of the web admin console.
input {
  rabbitmq {
    host => "localhost"
    port => 5672   # <-- change this
    heartbeat => 30
    durable => true
    exchange => "logging_queue"
    exchange_type => "logging_queue"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout {}
}
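Beyond the port, it may also be worth double-checking exchange_type: in the logstash-input-rabbitmq plugin this option takes an AMQP exchange type (direct, fanout, topic, headers) rather than the exchange name, and the input normally consumes from a named queue bound to that exchange. A minimal sketch under those assumptions (the fanout type and the queue name are placeholders, not something from the question):
input {
  rabbitmq {
    host => "localhost"
    port => 5672
    heartbeat => 30
    durable => true
    exchange => "logging_queue"
    exchange_type => "fanout"       # assumed type; use the type the exchange was declared with
    queue => "logstash_consumer"    # hypothetical queue name to bind to the exchange
  }
}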

Related

Trying to consume data from RabbitMQ to Elasticsearch

I am trying to consume data from RabbitMQ to Elasticsearch, and I followed this tutorial: https://akintola-lonlon.medium.com/logstash-5-easy-steps-to-consume-data-from-rabbitmq-to-elasticsearch-8fb0eb6e9196
This is my RabbitMQ queue.
This is my logstash-rabbitmq.conf:
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    ack => false
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => ["timestamp", "dd/MM/yyyy:HH:mm:ss Z"]
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash_rabbit_mq_hello"
  }
  stdout {
    codec => rubydebug
  }
}
Then when I run sudo bin/logstash -f conf.d/logstash-rabbitmq.conf I get the following error:
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Error while setting up connection, will retry {:exception=>MarchHare::PreconditionFailed, :message=>"PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'system_logs' in vhost '/': received 'false' but current is 'true'", :cause=>#<Java::JavaIo::IOException: >}
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] RabbitMQ connection was closed {:url=>"amqp://guest:XXXXXX@localhost:5672/", :automatic_recovery=>true, :cause=>#<Java::ComRabbitmqClient::ShutdownSignalException: clean connection shutdown; protocol method: #method<connection.close>(reply-code=200, reply-text=OK, class-id=0, method-id=0)>}
[2022-10-17T10:08:44,929][INFO ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Connected to RabbitMQ {:url=>"amqp://guest:XXXXXX@localhost:5672/"}
How can I fix this problem?
I am a beginner with RabbitMQ and ELK, please help me.
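The PRECONDITION_FAILED message says the system_logs queue already exists on the broker as a durable queue, while the rabbitmq input is redeclaring it with durable set to false (the plugin's default). A minimal sketch of one way to reconcile that, assuming the rest of the pipeline stays unchanged, is to declare the queue the same way the broker already has it:
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    durable => true   # match the broker-side durable declaration reported in the error
    ack => false
  }
}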

Got response code '400' contacting Elasticsearch at URL in logstash

I am new to Elasticsearch. I tried to configure Elasticsearch, Kibana, and Logstash with the MQTT plugin. I am supposed to send logs to Elasticsearch through the Logstash MQTT plugin. I installed them locally on a Mac, but when starting Logstash it throws the following error:
[2021-11-12T17:26:37,976][ERROR][logstash.outputs.elasticsearch][logstash_pipeline] Unable to get license information {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '400' contacting Elasticsearch at URL 'http://localhost:9200/_license'"}
My Logstash configuration file is like:
input {
  mqtt {
    host => "localhost"
    port => 1883
    topic => "test"
    qos => 0
    certificate_path => "/Users/john/logstash-7.10.2/logstash/m2mqtt_ca.crt"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Can anybody tell what the issue is?
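One thing to check in the config shown, independent of the 400 response: the index setting references Beats metadata fields (%{[@metadata][beat]} and %{[@metadata][version]}) that an mqtt input does not populate, so the index name would not resolve even once the connection works. A minimal sketch with a hypothetical static index name instead:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "mqtt-test-%{+YYYY.MM.dd}"   # hypothetical name; the Beats fields are not set by the mqtt input
  }
}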

Logstash creating pipelines from Kafka not working

I am trying to get data from a Kafka topic into the ELK stack with Logstash, but can't get the data moving.
I edited logstash.conf to the following:
input {
  tcp {
    port => 5000
  }
  kafka {
    bootstrap_servers => "broker:29092"
    topics => ["PLACES_ROWKEY"]
  }
}
## Add your filters / logstash plugins configuration here
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
    index => "from_logstash"
  }
}
I'm running this setup in Docker, if it matters (broker is the hostname of the Kafka broker container). I restarted Logstash but can't see any new indices in Elasticsearch.
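One way to narrow this down, in the spirit of the other configs in this thread, is to add a stdout output with the rubydebug codec so you can see whether events arrive from Kafka at all; if nothing prints, common suspects are the topic name and the consumer starting at the latest offset. A hedged sketch (the auto_offset_reset value is an assumption about wanting to re-read existing messages):
input {
  kafka {
    bootstrap_servers => "broker:29092"
    topics => ["PLACES_ROWKEY"]
    auto_offset_reset => "earliest"   # assumption: consume existing messages, not only new ones
  }
}
output {
  stdout { codec => rubydebug }       # print events to confirm they are leaving Kafka
}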

Logstash with queue enabled not acking http input events after jdbc input runs

I'm using Logstash with the persistent queue enabled.
I've set up Logstash to inject rows from MySQL via the jdbc input plugin on startup. Currently this injects 1846 rows.
I also have an http input.
When I take down ES and restart Logstash, as expected I get errors:
logstash_1 WARN logstash.outputs.amazones - Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Faraday::ConnectionFailed", :backtrace=>nil}
logstash_1 ERROR logstash.outputs.amazones - Attempted to send a bulk request to Elasticsearch configured at …
I'd expect that in this situation, hitting the Logstash http input would still result in an ack. Actually the http POST does not return, and the injection is not seen in the Logstash logs.
My logstash.yml looks like:
queue.type: persisted
queue.checkpoint.writes: 1
queue.max_bytes: 8gb
queue.page_capacity: 512mb
And my logstash.conf:
input {
  jdbc {
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    jdbc_driver_library => "/home/logstash/jdbc_driver.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "
      SELECT blah blah blah
    "
  }
  http {
    host => "0.0.0.0"
    port => 31311
  }
}
output {
  stdout { codec => json_lines }
  amazon_es {
    hosts => ["${AWS_ES_HOST}"]
    region => "${AWS_REGION}"
    aws_access_key_id => '${AWS_ACCESS_KEY_ID}'
    aws_secret_access_key => '${AWS_SECRET_ACCESS_KEY}'
    "index" => "${INDEX_NAME}"
    "document_type" => "data"
    "document_id" => "%{documentid}"
  }
}
Is it possible for the http input to still ack events? I'm pretty sure the queue cannot be full, as each event payload is about 850 characters.
Thanks in advance

Logstash - Elasticsearch

I'm getting the below error while trying to connect Logstash with Elasticsearch:
log4j, [2015-01-17T17:19:00.559] WARN: org.elasticsearch.discovery: [logstash-ip-10-181-166-160-1026-2020] waited for 30s and no initial state was set by the discovery
logstash.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  grok {
    match => [ "message", "%{TIME:log_time}\|%{WORD:Message_type}\|%{GREEDYDATA:Component}\|%{NUMBER:line_number}\| %{GREEDYDATA:log_message}" ]
    match => [ "path", "%{GREEDYDATA}/%{GREEDYDATA:loccode}/%{GREEDYDATA:_machine}\:%{DATE:logdate}.log" ]
    break_on_match => false
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    embedded => false
    host => "localhost"
    #host => "http://xxx.xxx.xxx:9200"
    port => "9300"
    cluster => "Elasticsearch-Logstash"
    manage_template => false
    index => "doppleml-%{loccode}-%{+YYYY.MM.dd}"
    #template => "/home/hduser/elasticsearch/logstash-1.4.2/doppleML_template.json"
    #template => "/home/ubuntu/elkproject/logstash-1.4.2/doppleML_template.json"
  }
}
elasticsearch.yml:
network.host: localhost
cluster.name: Elasticsearch-Logstash
Do I need to make any changes to the Logstash or Elasticsearch configuration files?
If you are not setting protocol it's likely defaulting to "http", but you're directly setting port to "9300", which isn't going to work. I'd either set protocol to "http" and port to "9200", or set protocol to "node" or "transport" and port to "9300".
This page of the docs is helpful in setting/debugging output settings for logstash and elasticsearch:
http://logstash.net/docs/1.4.2/outputs/elasticsearch
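For reference, a minimal sketch of the first option from the answer (protocol "http" on port 9200) applied to the elasticsearch output above; the cluster and embedded settings should only matter for the node/transport protocols, so they are left out of this sketch:
elasticsearch {
  protocol => "http"    # from the answer: talk to Elasticsearch over HTTP
  host => "localhost"
  port => "9200"        # HTTP port instead of the transport port 9300
  manage_template => false
  index => "doppleml-%{loccode}-%{+YYYY.MM.dd}"
}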
