Reading RabbitMQ From Logstash (ELK Instance)

I have two servers: one RabbitMQ server and one ELK server.
Both run independently as they should. My ELK instance receives input messages from several sources across my network, and both servers are on the same internal network.
I am trying to make Logstash read from RabbitMQ, to pick up any log messages I push there.
Here's my Logstash conf.d file:
input {
  rabbitmq {
    host => "1.66.66.66"
    queue => "logs"
    durable => true
    exchange => "event.log"
    threads => 1
    prefetch_count => 50
    port => 5672
    user => "elk"
    password => "*******"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["1.66.66.1:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
After I run a config test ($ sudo service logstash configtest)
I get: ConfigTest OK
So this seems good.
But when it runs I get the following error in my /var/log/logstash/logstash.log file:
{:timestamp=>"2017-06-15T10:28:18.727000-0400", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:79:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
If, from my ELK server, I do:
curl -X GET http://1.66.66.1:9200
I get the proper response:
{
  "name" : "Hera",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "-NmCwDk_RA2qzTwWlNNizQ",
  "version" : {
    "number" : "2.4.5",
    "build_hash" : "c849dd13904f53e63e88efc33b2ceeda0b6a1276",
    "build_timestamp" : "2017-04-24T16:18:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
If you know anything I can try, that would be much appreciated. I am running Ubuntu 16.04 on both servers. Thanks!
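One thing worth checking, as a guess rather than a confirmed fix: the backtrace ends in start_sniffing!, even though this file sets sniffing => false, and Logstash concatenates every file under /etc/logstash/conf.d, so a leftover file with its own elasticsearch output could be the one that fails to connect. A quick sketch of how to check (paths assume the default package layout):

ls -l /etc/logstash/conf.d/
# Look for a second output with sniffing enabled or a different hosts value
grep -rn "sniffing\|hosts" /etc/logstash/conf.d/
# Confirm Elasticsearch is reachable from the Logstash host itself
curl -X GET http://1.66.66.1:9200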

Related

Trying to Consume Data From RabbitMQ To Elasticsearch

I am trying to consume data from RabbitMQ into Elasticsearch, and I followed this tutorial: https://akintola-lonlon.medium.com/logstash-5-easy-steps-to-consume-data-from-rabbitmq-to-elasticsearch-8fb0eb6e9196
This is my RabbitMQ queue.
This is my logstash-rabbitmq.conf
input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    ack => false
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => ["timestamp", "dd/MM/yyyy:HH:mm:ss Z"]
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logstash_rabbit_mq_hello"
  }
  stdout {
    codec => rubydebug
  }
}
Then I try to run sudo bin/logstash -f conf.d/logstash-rabbitmq.conf and I get the following error:
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Error while setting up connection, will retry {:exception=>MarchHare::PreconditionFailed, :message=>"PRECONDITION_FAILED - inequivalent arg 'durable' for queue 'system_logs' in vhost '/': received 'false' but current is 'true'", :cause=>#<Java::JavaIo::IOException: >}
[2022-10-17T10:08:43,917][WARN ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] RabbitMQ connection was closed {:url=>"amqp://guest:XXXXXX@localhost:5672/", :automatic_recovery=>true, :cause=>#<Java::ComRabbitmqClient::ShutdownSignalException: clean connection shutdown; protocol method: #method<connection.close>(reply-code=200, reply-text=OK, class-id=0, method-id=0)>}
[2022-10-17T10:08:44,929][INFO ][logstash.inputs.rabbitmq ][main][rabbitmq_logs] Connected to RabbitMQ {:url=>"amqp://guest:XXXXXX@localhost:5672/"}
How can I fix this problem?
I am a beginner with RabbitMQ and ELK; please help me.
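The PRECONDITION_FAILED line means the broker refuses to redeclare the existing queue: system_logs was declared durable on the server, while the plugin declares it with its default of durable => false. A minimal sketch of the input with the declaration matched, assuming the queue should stay durable:

input {
  rabbitmq {
    id => "rabbitmq_logs"
    host => "localhost"
    port => 5672
    vhost => "/"
    queue => "system_logs"
    durable => true   # must match how system_logs was declared on the broker
    ack => false
  }
}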

Logstash cannot connect to Elasticsearch

{:timestamp=>"2017-07-19T15:56:36.517000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2017-07-19T15:56:37.761000+0530", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:79:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
{:timestamp=>"2017-07-19T15:56:38.520000+0530", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
Though Elasticsearch is running on 127.0.0.1:9200,
I do not understand where Logstash is taking this configuration from.
I have not configured Logstash to connect to Elasticsearch on localhost.
In logstash.service:
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
and in /etc/logstash I have logstash.yml with:
path.config: /etc/logstash/conf.d
In /etc/logstash/conf.d:
output {
  elasticsearch {
    hosts => ["10.2.0.10:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
conf.d is your directory, right? You need a file there, something like myconf.conf, in the following format:
input {
}
filter {
  # can be empty
}
output {
}
Once you apply all your changes you need to restart your Logstash service, and it will pick up the new configuration. You can also control that in your LS settings file, logstash.yml, if you want Logstash to reload itself whenever a file under conf.d changes; see the sketch below.
You can also break your conf files up, e.g. 1_input.conf, 2_filter.conf, and 99_output.conf, so that each contains its own plugin section, i.e. input, filter, and output.
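On Logstash versions that support it, a minimal logstash.yml sketch of that auto-reload behaviour (the interval value here is illustrative):

# /etc/logstash/logstash.yml
path.config: /etc/logstash/conf.d
config.reload.automatic: true  # re-read pipeline files without restarting the service
config.reload.interval: 3s     # how often to poll conf.d for changes (example value)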
Start Elasticsearch.
Write a conf file for Logstash to connect and upload data into Elasticsearch.
input {
  file {
    type => "csv"
    path => "path for csv."
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["Column1","Column2"]
    separator => ","
  }
  mutate {
    convert => {"Column1" => "float"}
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
  }
  stdout { codec => rubydebug }
}
The host for Elasticsearch can be configured in the elasticsearch.yml file.
Run the conf file (logstash.bat -f abc.conf).
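For reference, a minimal elasticsearch.yml sketch of those host settings (the values shown are the usual defaults, not taken from the answer):

# elasticsearch.yml
network.host: 127.0.0.1  # interface Elasticsearch binds to
http.port: 9200          # HTTP port that Logstash's elasticsearch output connects to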

Logstash for Vagrant: Address already in use

I have a Vagrant image in which there is an application; it is reachable inside the Vagrant image on port 2401, and depending on the service you want, you call a specific address (i.e. "curl -X GET http://127.0.0.1:2401/provider/ipfix"). To retrieve the output outside the Vagrant machine I have set up port forwarding in the Vagrantfile ("config.vm.network :forwarded_port, guest: 2401, host: 8080"), so using the command "curl -X GET http://127.0.0.1:8080/provider/ipfix" from the host I get the same output.
I am now at the stage of installing Logstash. My issue is that when I run Logstash with the config file I get the error "Address already in use". I also tried using fields to route to the specific output. Below is my Logstash config file. What workaround would you suggest?
input {
  tcp {
    host => localhost
    port => 8080
    add_field => {
      "field1" => "provider"
      "field2" => "ipfix"
    }
    codec => netflow {
      versions => [10]
      target => ipfix
    }
    type => ipfix
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    index => "IPFIX-logstash-%{+YYYY.MM.dd}"
  }
}
If I'm reading this right, you're expecting Logstash to use TCP to connect to localhost:8080 to fetch information that it will then process.
That's not what this input does. This creates a listener on 127.0.0.1:8080, so the error message about 'already in use' is quite correct.
Considering you're using curl as an example of fetching this data, I suggest the http_poller plugin is better for what you want.
input {
  http_poller {
    urls => {
      IPFIX => "http://127.0.0.1:8080/provider/ipfix"
    }
    request_timeout => 30
    schedule => { "every" => "5s" }
    tags => [ 'ipfix' ]
  }
}
This will hit the known-working curl URL every 5 seconds with a GET request.
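If you then want the elasticsearch output to handle only these polled events, a sketch of a tag-based conditional (the hosts value is assumed; the index name is lowercased here because Elasticsearch rejects uppercase index names):

output {
  if "ipfix" in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]               # assumed local Elasticsearch
      index => "ipfix-logstash-%{+YYYY.MM.dd}"  # lowercase, unlike the IPFIX-... name above
    }
  }
}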

Elasticsearch with Yii 2.0: Error: Elasticsearch request failed: 7 - Failed to connect to ##.##.##.### port 9200: Connection refused

I have Elasticsearch properly configured on my server. I can do everything from the command line using cURL. I can even connect to it using cURL from a PHP script outside Yii. However, I can't seem to get it to work from within Yii 2.0.
In my config, I have:
'elasticsearch' => [
    'class' => 'yii\elasticsearch\Connection',
    'nodes' => [
        ['http_address' => 'localhost:9200'],
        // configure more hosts if you have a cluster
    ],
],
But when I try to do a simple query in Yii, I get this error. Note how it's using my server IP address rather than 'localhost' or '127.0.0.1'. Note: I've hashed out my IP address for security.
Elasticsearch Database Exception – yii\elasticsearch\Exception
Elasticsearch request failed: 7 - Failed to connect to ##.##.##.### port 9200: Connection refused
Error Info: Array
(
    [requestMethod] => GET
    [requestUrl] => http://##.##.##.###:9200/profiles/profile/_search
    [requestBody] => {"size":100,"query":{"match_all":{}}}
    [responseHeaders] => Array
        (
        )
    [responseBody] =>
)
I was able to fix this error by updating Elasticsearch to a version > 1.3.0, as this is the minimum requirement for yiisoft/yii2-elasticsearch.
Run curl -X GET 'http://127.0.0.1:9200' to check which version you are running.
First, follow these steps to download Elasticsearch:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz
mkdir es
tar -xf elasticsearch-1.5.2.tar.gz -C es
cd es
./bin/elasticsearch
Then you should be able to access localhost:9200 and get something like this:
{
  "name" : "Sigyn",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.0",
    "build_hash" : "ce9f0c7394dee074091dd1bc4e9469251181fc55",
    "build_timestamp" : "2016-08-29T09:14:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
Then, secondly, follow the instructions at https://github.com/yiisoft/yii2-elasticsearch. Then you are done.
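One plausible explanation for the request going to the public IP even though 'nodes' says localhost: the yii2-elasticsearch Connection autodetects the cluster's nodes and then uses their published addresses. A sketch that pins the connection to the configured node, assuming your extension version exposes the autodetectCluster option:

'elasticsearch' => [
    'class' => 'yii\elasticsearch\Connection',
    'autodetectCluster' => false, // assumption: keep the configured node, skip node discovery
    'nodes' => [
        ['http_address' => '127.0.0.1:9200'],
    ],
],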

Logstash-Elasticsearch

I'm getting the below error while trying to connect Logstash with Elasticsearch:
log4j, [2015-01-17T17:19:00.559] WARN: org.elasticsearch.discovery: [logstash-ip-10-181-166-160-1026-2020] waited for 30s and no initial state was set by the discovery
logstash.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  grok {
    match => [ "message", "%{TIME:log_time}\|%{WORD:Message_type}\|%{GREEDYDATA:Component}\|%{NUMBER:line_number}\| %{GREEDYDATA:log_message}" ]
    match => [ "path", "%{GREEDYDATA}/%{GREEDYDATA:loccode}/%{GREEDYDATA:_machine}\:%{DATE:logdate}.log" ]
    break_on_match => false
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    embedded => false
    host => "localhost"
    #host => "http://xxx.xxx.xxx:9200"
    port => "9300"
    cluster => "Elasticsearch-Logstash"
    manage_template => false
    index => "doppleml-%{loccode}-%{+YYYY.MM.dd}"
    #template => "/home/hduser/elasticsearch/logstash-1.4.2/doppleML_template.json"
    #template => "/home/ubuntu/elkproject/logstash-1.4.2/doppleML_template.json"
  }
}
elasticsearch.yml:
network.host: localhost
cluster.name: Elasticsearch-Logstash
Do I need to make any changes to the Logstash or Elasticsearch configuration files?
If you are not setting protocol, it's likely defaulting to "http", but you're directly setting port to "9300", which isn't going to work. I'd either set protocol to "http" and port to "9200", or set protocol to "node" or "transport" and port to "9300".
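A minimal sketch of the first option against the 1.4.x elasticsearch output (protocol and port are the changes; the index setting carries over from the question's config):

output {
  elasticsearch {
    protocol => "http"  # speak HTTP instead of joining the cluster as a node
    host => "localhost"
    port => "9200"      # HTTP port; 9300 is the transport port
    index => "doppleml-%{loccode}-%{+YYYY.MM.dd}"
  }
}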
This page of the docs is helpful in setting/debugging output settings for logstash and elasticsearch:
http://logstash.net/docs/1.4.2/outputs/elasticsearch
