Source of host Variable in Logstash - spring-boot

I'm using ELK (Kibana, Elasticsearch and Logstash running as Docker containers) and LogstashTcpSocketAppender in a Spring Boot app to forward data to Logstash.
Logstash config is very simple:
input {
  tcp {
    port  => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}
The issue is that in Kibana I see a host field with the value "gateway" ("host: gateway").
What I don't understand is HOW this field is populated and added to the logstash-* index in Kibana, because:
I do not set any host variable in the logback config, and I can clearly see it is not going out from there.
It might be set by Logstash itself, but I couldn't find a concrete reference in the Logstash documentation explaining how this field is populated. And what does "gateway" actually mean?
This is very confusing to me.
Could anyone please explain?
Thanks in advance.
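One way to see which fields come from the appender and which ones Logstash adds on the tcp input (the tcp input attaches a host field describing the connecting peer) is to print the raw event with a temporary rubydebug output. The snippet below is only a sketch built on the config above; the mutate block is optional and the logstash_peer name is made up for the example:
input {
  tcp {
    port  => 4560
    codec => json_lines
  }
}
filter {
  # Optional: rename the field added on the input side so it cannot be
  # confused with anything the application itself sends.
  mutate {
    rename => { "host" => "logstash_peer" }
  }
}
output {
  # Temporary debug output: prints every field of each event, including
  # the ones Logstash adds itself.
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}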

Related

RabbitMQ - Elasticsearch consumer

I have a RabbitMQ Docker container that runs perfectly and receives messages, storing them inside a queue. What I'm trying to do now is move those messages and insert them into Elasticsearch. I've spent some time reading about it, and according to the Elastic documentation this can be achieved by running an instance of Logstash and configuring it with the RabbitMQ plugin.
So my questions are:
Can Logstash actually play the role of a consumer, getting the messages from a queue and inserting them into Elasticsearch?
Assuming that is the case, and that I run a Logstash Docker container, is the following correct?
Command to run the Logstash Docker container:
docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:7.8.0
In this situation I'm pointing the container at a config file outside of the container, mounted from ~/pipeline/ on the server into /usr/share/logstash/pipeline/. I took that command from the Elastic documentation.
Is the following example config file actually correct? Bear in mind that Elasticsearch, Kibana and Logstash are all on the same server, running in separate containers.
input {
  rabbitmq {
    host          => "IP OF RABBITMQ"   # located on another VM
    durable       => true
    password      => "guest"
    user          => "guest"
    exchange      => "RLMF"
    exchange_type => "topic"
    queue         => "db.rlmf"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
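As a side note, the mounted pipeline can be syntax-checked with the same image before running it for real. A sketch, assuming the file is saved as ~/pipeline/rabbitmq.conf (the file name is made up for the example) and that the official image's entrypoint passes extra flags through to Logstash:
docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ \
  docker.elastic.co/logstash/logstash:7.8.0 \
  -f /usr/share/logstash/pipeline/rabbitmq.conf --config.test_and_exit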

Filebeat configuration to send log files to ELK installed in Cloud Foundry

I've been working on installing the ELK stack in Cloud Foundry and sending log files from another local server using Filebeat.
I have successfully installed ELK in Cloud Foundry and am able to see sample messages.
Now I am trying to send log files from the local server using Filebeat. Can you suggest how to configure Filebeat to send log files from the local server to Logstash in Cloud Foundry?
You'll need to configure the Logstash output in Filebeat for this, specifying the host and port of the target Logstash:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
On the logstash side, you'll need to add a beats input to the config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
See the Filebeat documentation on the Logstash output for the complete details.

"INFO No non-zero metrics in the last 30s" message in Filebeat

I'm a newbie with ELK and I'm running into issues with Logstash. I set Logstash up step by step, following the same structure as I did for Filebeat.
When I run Filebeat and Logstash, Logstash reports that it started successfully on port 9600, but Filebeat keeps printing:
INFO No non-zero metrics in the last 30s
Logstash is not getting any input from Filebeat. Please help.
My problem is the same as in this article; I did what it said, but nothing changed.
The filebeat.yml is:
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/share/tomcat/log_app/news/*.log
output.logstash:
  hosts: ["10.0.20.163:5000"]
I ran Filebeat with this command: sudo ./filebeat -e -c filebeat.yml -d "publish"
The Logstash config file is:
input {
  beats {
    port => "5000"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Then I ran these commands:
1) bin/logstash -f first-pipeline.conf --config.test_and_exit - this reported OK
2) bin/logstash -f first-pipeline.conf --config.reload.automatic - this started Logstash on port 9600
I couldn't proceed after this, since Filebeat only prints the INFO message:
INFO No non-zero metrics in the last 30s
I am using:
Elasticsearch: 5.5.1
Kibana: 5.5.1
Logstash: 5.5.1
Filebeat: 5.5.1
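Before digging into Filebeat itself, it can help to confirm that the beats port is actually reachable; a quick sketch using the host and port from the configs above:
# On the Logstash host: confirm the beats input is listening on port 5000
sudo netstat -tlnp | grep 5000
# On the Filebeat host: confirm the port is reachable over the network
telnet 10.0.20.163 5000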
If you want to resend your data, you can try deleting Filebeat's registry file; when you restart Filebeat, it will send the data again.
The file location depends on your platform. See https://www.elastic.co/guide/en/beats/filebeat/5.3/migration-registry-file.html
Registry file location can also be defined in your filebeat.yml:
filebeat.registry_file: registry
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-global-options.html
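A sketch of that procedure for a tar.gz install run from the Filebeat directory (the registry path data/registry is an assumption here; check the links above for your platform):
# Stop Filebeat, remove the registry so files are read from the beginning again,
# then restart with the same command as before.
sudo rm data/registry
sudo ./filebeat -e -c filebeat.yml -d "publish"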
Every time you stop Filebeat, it resumes reading from where it left off at the tail of the file. Because the sample file you are using is not receiving new data, Filebeat has nothing to fetch and send to Elasticsearch.
Edit your log file, add a few more lines of data, and then try again. It should work.
The message you mentioned appears because Filebeat is not seeing any new data in that file.
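For example, appending a line to one of the watched files is enough to give Filebeat something new to publish (the file name below is made up; any file matching the path in filebeat.yml works):
echo "$(date) test log entry" | sudo tee -a /usr/share/tomcat/log_app/news/app.log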

Kibana and Elasticsearch error

I want to access to Kibana by http://IP:80.
Nevertheless, when I visit the page I get these errors:
Upgrade Required Your version of Elasticsearch is too old. Kibana
requires Elasticsearch 0.90.9 or above.
and
Error Could not reach http://localhost:80/_nodes. If you are using a
proxy, ensure it is configured correctly
I have been looking these problems up on the internet and have included these lines, without success:
http.cors.enabled: true
http.cors.allow-origin: http://localhost:80
My Elasticsearch version is in fact 0.90.9.
What can I do? Please help me.
In my scenario, Logstash was using the node protocol by default. If you run this command:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
and you get "number_of_nodes" : 2, it means Logstash is using the node protocol and joining the cluster as a node, so Kibana sees it as another node running an older version of Elasticsearch.
Solution:
Set protocol => transport in the Logstash config file used for shipping to ES, like this:
input { }
output {
  elasticsearch {
    action             => ... # string (optional), default: "index"
    embedded_http_port => ... # string (optional), default: "9200-9300"
    index              => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    node_name          => ... # string (optional)
    port               => ... # string (optional)
    protocol           => ... # string, one of ["node", "transport", "http"]
  }
}
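Filled in with the transport protocol recommended above, a minimal output block might look like this (host and port are placeholders; the transport protocol talks to the 9300 port range rather than 9200):
output {
  elasticsearch {
    protocol => "transport"
    host     => "localhost"
    port     => "9300"
  }
}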
If you want to access Kibana on port 80, you have to set up a proxy; otherwise Kibana listens on port 5601 by default. If you are still facing the same issue, use the latest versions of Logstash, Kibana and Elasticsearch.
Download a newer version of Elasticsearch, as the version you are using is not compatible with Kibana. Try using the latest Elasticsearch version.

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 set up via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host     => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I have then tried setting the port to 9200 (which gives a connection refused error) and to 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?
The way to tell Logstash to send output to ES is:
elasticsearch {
  protocol => "http"
  host     => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host     => "192.168.248.4:9200"
}
If it's not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration, check the following (see the sketch after this list):
Check the http.port property of ES
Check the network.bind_host property of ES
Check the network.publish_host property of ES
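Those three settings live in elasticsearch.yml; a sketch with placeholder values based on the host used above:
# elasticsearch.yml (values are placeholders for the example)
http.port: 9200
network.bind_host: 192.168.248.4
network.publish_host: 192.168.248.4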
