RabbitMQ - Elasticsearch consumer

I have a RabbitMQ docker container that runs perfectly and receives messages, storing them inside a queue. What I'm trying to do now is move those messages and insert them into Elasticsearch. I've spent some time reading about it, and according to the Elastic documentation this can be achieved by running an instance of Logstash and configuring it with the RabbitMQ input plugin.
So my questions are:
Can Logstash actually play the role of a consumer, pulling messages from a queue and inserting them into Elasticsearch?
Assuming that this is the case, and given a Logstash docker container, is the following correct?
Command to run the docker logstash container:
docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:7.8.0 - In this situation I'm mounting a host directory (~/pipeline/) into the container at /usr/share/logstash/pipeline/, so the container uses a config file that lives outside of it on the server. I took that command from the Elastic documentation.
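For reference, Logstash picks up the pipeline file(s) placed in the mounted host directory, so ~/pipeline/ only needs to contain the configuration; assuming a single, hypothetically named file:
~/pipeline/rabbitmq.conf   # hypothetical file name; holds the input/output configuration shown below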
Is the following example config file actually correct? Bear in mind that Elasticsearch, Kibana, and Logstash are all on the same server, running in separate containers.
input {
  rabbitmq {
    host => "IP OF RABBITMQ"   # located on another VM
    durable => true
    password => "guest"
    user => "guest"
    exchange => "RLMF"
    exchange_type => "topic"
    queue => "db.rlmf"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
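Once the container is running with this pipeline, a quick way to check that events are actually making it from the queue into Elasticsearch, assuming the default logstash-* index naming, is a count query against Elasticsearch (run it wherever port 9200 is reachable):
curl "http://localhost:9200/logstash-*/_count?pretty"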

Related

Source of host Variable in Logstash

I'm using ELK (Kibana, Elasticsearch and Logstash running as Docker containers) and LogstashTcpSocketAppender in a Spring Boot app to forward data to Logstash.
Logstash config is very simple:
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}
The issue is that in Kibana I see a host field with the value "gateway" ("host: gateway").
What I don't understand is HOW this field is populated and added to the logstash-* index in Kibana, because:
I do not set any host variable in the logback config, and I can clearly see it's not coming from there.
This might be set by Logstash itself, but I couldn't find a concrete reference in the documentation for how this field is populated. And what does "gateway" really mean?
This is very confusing to me.
Could anyone please explain?
Thanks in advance.
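A hedged side note, not a confirmed answer for this setup: the Logstash tcp input commonly adds a host field taken from the address of the peer that opened the connection, and when the app reaches a containerized Logstash through the Docker bridge, that peer can show up as the network gateway. If the field conflicts with your own data, a minimal sketch of a filter that renames or drops it:
filter {
  mutate {
    rename => { "host" => "source_host" }   # or: remove_field => ["host"]
  }
}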

Shipping logs from a remote CentOS 7 machine to my local Windows ELK/Logstash using Filebeat

I have all three ELK components configured on my local Windows machine, up and running.
I have a logfile on a remote CentOS 7 machine which I want to ship from there to my local Windows machine with the help of Filebeat. How can I achieve that?
I have installed Filebeat on CentOS with the help of rpm.
In the configuration file I have made the following changes:
commented out output.elasticsearch
and uncommented output.logstash (which IP of my Windows machine shall I give over here? How do I get that IP?)
AND
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - <path to my log file>
The flow of the data is:
Agent => Logstash => Elasticsearch
The agent can be any member of the Beats family, and you are using Filebeat, which is fine.
You have to configure all of these:
on Filebeat you have to configure filebeat.yml
on Logstash you have to configure logstash.conf
on Elasticsearch you have to configure elasticsearch.yml
I would personally start with logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  # I assume you just want the log to go into elasticsearch as-is
}
output {
  elasticsearch {
    hosts => "(YOUR_ELASTICSEARCH_IP):9200"
    index => "insertindexname"
  }
  stdout {
    codec => rubydebug
  }
}
This is a very minimal working configuration. It means Logstash will listen for input from Filebeat on port 5044. The filter section is needed when you want to parse the data. Output is where you want to store the data; we use the elasticsearch output plugin since you want to store it in Elasticsearch. stdout is super helpful for debugging your configuration file; add it and you will regret nothing, as it prints every message that is sent to Elasticsearch.
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /directory/of/your/file/file.log
output.logstash:
  hosts: ["YOUR_LOGSTASH_IP:5044"]
This is a very minimal working filebeat.yml. paths is where Filebeat harvests the file from.
When you are done configuring the files, start Elasticsearch, then Logstash, then Filebeat.
Let me know if you run into any difficulties.
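Regarding which IP of the Windows machine to use (a hedged addition, since it is not covered above): it is whatever address the CentOS machine can reach the Windows host on. On the Windows machine, list its IPv4 addresses:
ipconfig
On the CentOS machine, check that the Beats port (5044 in the config above) is reachable, assuming netcat or telnet is installed:
nc -zv YOUR_WINDOWS_IP 5044   # or: telnet YOUR_WINDOWS_IP 5044
You may also need to allow inbound connections on that port in the Windows firewall.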

Filebeat configuration to send a logfile to ELK installed in CloudFoundry

I've been working on installing the ELK stack in CloudFoundry and sending log files to it from another local server using Filebeat.
I have successfully installed ELK in CloudFoundry and am able to see sample messages.
Now I am trying to send log files from the local server using Filebeat. Can you suggest how to configure Filebeat to send log files from the local server to Logstash in CloudFoundry?
You'll need to configure the Logstash output in Filebeat for this, specifying the host and port of the target Logstash:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
On the logstash side, you'll need to add a beats input to the config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
Read the complete documentation here.
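For completeness, a hedged sketch of what the Filebeat side might look like end to end (the log path and the Logstash address are placeholders; the hosts entry must point at the externally reachable address and port of the Logstash running in CloudFoundry):
filebeat.inputs:
- type: log
  paths:
    - /var/log/your-app/*.log
output.logstash:
  hosts: ["YOUR_CF_LOGSTASH_HOST:5044"]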

Error "INFO No non-zero metrics in the last 30s" message in Filebeat

I'm a newbie with ELK and I'm getting issues while running Logstash. I set up Logstash step by step as described, the same way I did for Filebeat.
But when I run Filebeat and Logstash, it shows Logstash successfully running on port 9600. Filebeat keeps printing this:
INFO No non-zero metrics in the last 30s
Logstash is not getting input from Filebeat. Please help.
My problem is the same as in this article; I did what it said but nothing changed.
The filebeat.yml is:
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/share/tomcat/log_app/news/*.log
output.logstash:
  hosts: ["10.0.20.163:5000"]
and I ran this command: sudo ./filebeat -e -c filebeat.yml -d "publish"
The Logstash config file is:
input {
  beats {
    port => "5000"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Then I ran the commands:
1) bin/logstash -f first-pipeline.conf --config.test_and_exit - this gave OK
2) bin/logstash -f first-pipeline.conf --config.reload.automatic - this started Logstash on port 9600
I couldn't proceed after this since Filebeat keeps giving the INFO message
INFO No non-zero metrics in the last 30s
I use:
Elasticsearch: 5.5.1
Kibana: 5.5.1
Logstash: 5.5.1
Filebeat: 5.5.1
If you want to resend your data, you can try deleting Filebeat's registry file; when you restart Filebeat, it will send the data again.
File location depends on your platform. See https://www.elastic.co/guide/en/beats/filebeat/5.3/migration-registry-file.html
Registry file location can also be defined in your filebeat.yml:
filebeat.registry_file: registry
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-global-options.html
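As a hedged example (the path depends on your install; /var/lib/filebeat/registry is the common location for an rpm install, but adjust it if you run Filebeat from its own directory):
# stop Filebeat first, then remove the registry and start it again
sudo rm /var/lib/filebeat/registry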
Every time you stop and restart Filebeat, it starts reading from the tail of the file. Because the sample file you are using is not getting frequent new data, Filebeat has nothing new to fetch and send to Elasticsearch.
Edit your log file, add a few more redundant lines, and then try again. It should work.
The message you mentioned appears because Filebeat is not able to get any updated data from that file.
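For example, a quick way to produce a new line in a watched file and confirm the pipeline moves (the directory is the one from the prospector path above; any file matching *.log works):
echo "test log line $(date)" | sudo tee -a /usr/share/tomcat/log_app/news/test.log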

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I have then tried setting the port to 9200 (which gives connection refused error) and 9300 which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?
The way to tell Logstash to output to ES is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
If it's not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration:
Check the http.port property of ES
Check the network.bind_host property of ES
Check the network.publish_host property of ES
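As an illustration only (the values are placeholders, not recommendations for this particular setup), those properties live in elasticsearch.yml and might look like:
http.port: 9200
network.bind_host: 0.0.0.0
network.publish_host: 192.168.248.4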
