Connection refused from filebeat to logstash - elasticsearch

I have an issue when I try to connect to my Logstash from Filebeat.
Logstash version: 2.0.0
Filebeat version: 1.0.1
Here is the error:
INFO Connecting error publishing events (retrying): dial tcp 192.168.50.5:14560: getsockopt: connection refused
This is my Logstash configuration:
input {
  beats {
    codec => json
    port => 14560
  }
}
output {
  elasticsearch { hosts => ["localhost"] }
  stdout { codec => rubydebug }
}
Here is my Filebeat configuration:
logstash:
  # The Logstash hosts
  hosts: ["192.168.50.5:14560", "192.168.50.15:14560"]
I installed the beats input plugin for Logstash, as the documentation suggests:
./plugin install logstash-input-beats
I have completely run out of ideas, and I would love to use this framework, but it seems not to be responding at all.
Any ideas would be great.

This happens when your Logstash is not up, or when the Logstash host cannot be reached from the host running Filebeat (because of a firewall, for example). Try a telnet to 192.168.50.5 14560 from the host where Filebeat is running.
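A minimal connectivity check from the Filebeat host, assuming telnet or netcat is installed; the IP and port are the ones from the question:
telnet 192.168.50.5 14560
# or, with netcat:
nc -vz 192.168.50.5 14560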

Related

Shipping logs to my local windows ELK/Logstash file from remote centos7 using filebeats

I have all three ELK components configured and running on my local Windows machine.
I have a logfile available on a remote CentOS 7 machine which I want to ship from there to my local Windows machine with the help of Filebeat. How can I achieve that?
I have installed Filebeat on CentOS using the RPM package.
In the configuration file I have made the following changes:
commented out output.elasticsearch
and uncommented output.logstash (which IP of my Windows machine shall I give here? How do I get that IP?)
AND
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - path to my log file
The flow of the data is:
Agent => Logstash => Elasticsearch
The agent can be any member of the Beats family, and you are using Filebeat, which is fine.
You have to configure all of these pieces:
on Filebeat you configure filebeat.yml
on Logstash you configure logstash.conf
on Elasticsearch you configure elasticsearch.yml
I would personally start with logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  # I assume you just want the log to go into Elasticsearch as-is
}
output {
  elasticsearch {
    hosts => "(YOUR_ELASTICSEARCH_IP):9200"
    index => "insertindexname"
  }
  stdout {
    codec => rubydebug
  }
}
This is a very minimal working configuration: Logstash will listen for input from Filebeat on port 5044. The filter section is only needed when you want to parse the data. The output section is where you want to store the data; we use the elasticsearch output plugin since you want to store it in Elasticsearch. stdout is very helpful for debugging your configuration file; add it and you will regret nothing, since it prints every event that is sent to Elasticsearch.
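A quick way to validate the file before starting anything, assuming Logstash 5.x or later installed under /usr/share/logstash (adjust the paths to your setup):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit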
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /directory/of/your/file/file.log
output.logstash:
  hosts: ["YOUR_LOGSTASH_IP:5044"]
This is a very minimal working filebeat.yml. paths is where Filebeat looks for the files to harvest.
When you are done configuring the files, start Elasticsearch, then Logstash, then Filebeat.
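For example, if the services are managed by systemd (the Filebeat RPM on CentOS installs a unit; on Windows you would start Elasticsearch and Logstash from their bin directories in the same order):
sudo systemctl start elasticsearch
sudo systemctl start logstash
sudo systemctl start filebeat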
Let me know if you run into any difficulties.

Filebeat configuration to send a logfile to ELK installed in CloudFoundry

I've been working on installing the ELK stack in CloudFoundry and sending log files from another local server using Filebeat.
I have successfully installed ELK in CloudFoundry and am able to see sample messages.
Now I am trying to send log files from a local server using Filebeat. Can you suggest how to configure Filebeat to send log files from the local server to Logstash in CloudFoundry?
You'll need to configure the Logstash output in Filebeat for this, specifying the host and port of the target Logstash:
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["127.0.0.1:5044"]
On the logstash side, you'll need to add a beats input to the config:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
Read the complete documentation here.
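On the Logstash host you can also confirm that the beats input is actually listening on 5044 (a sketch, assuming the ss utility is available):
sudo ss -tlnp | grep 5044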

Filebeat sending logs to Logstash through an nginx proxy

I am trying to make Filebeat send logs to Logstash using Docker containers.
The problem is that I have an nginx proxy in between, and the Filebeat-Logstash communication is not based on HTTPS.
What is the solution to make it work?
I was trying to make nginx proxy TCP streams by configuring it this way:
stream {
  upstream logs {
    server logstash:5044;
  }
  server {
    listen 5088;
    proxy_pass logs;
  }
}
And this is my filebeat output config:
output.logstash:
  hosts: ["IP_OF_NGINX:5088"]
  ssl.verification_mode: none
But it seems not to work.
Filebeat shows me this error in its logs:
pipeline/output.go:100 Failed to connect to backoff(async(tcp://IP_OF_NGINX:5088)): dial tcp IP_OF_NGINX:5088: connect: connection refused
Any help?

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I then tried setting the port to 9200 (which gives a connection refused error) and 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get Logstash talking to my ES?
The way to configure the Logstash output for ES is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
If it's still not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration, check the following (a quick reachability check is sketched after this list):
Check the http.port property of ES
Check network.bind_host property of ES
Check network.publish_host property of ES
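A minimal reachability sketch from the Logstash host, assuming ES is meant to answer directly on 9200; if the API is only exposed through the Apache proxy, use the proxy's port and add -u user:password for basic auth:
curl -v http://192.168.248.4:9200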

Logstash ganglia input plugin - udp listener died

I am using the Logstash ganglia input plugin. The Ganglia gmond daemon and Logstash are installed on the same machine, and gmond sends metrics to itself. Here is the gmond configuration:
udp_send_channel {
  host = 10.0.3.167
  port = 8649
  ttl = 1
}
The Logstash configuration file is like this:
input {
  ganglia {
    host => "127.0.0.1"
    type => "ganglia"
  }
}
output {
  elasticsearch {
    host => "10.0.3.168"
  }
}
When Logstash tries to bind to the port that gmond unicasts to, I get this error:
{:timestamp=>"2014-01-04T12:50:38.422000+0000",
:message=>"ganglia udp listener died",
:address=>"127.0.0.1:8649",
:exception=>#<SocketError: bind: name or service not known>,
:backtrace=>
[
"org/jruby/ext/socket/RubyUDPSocket.java:160:in `bind'",
"file:/etc/logstash/logstash.jar!/logstash/inputs/ganglia.rb:61:in `udp_listener'",
"file:/etc/logstash/logstash.jar!/logstash/inputs/ganglia.rb:39:in `run'",
"file:/etc/logstash/logstash.jar!/logstash/pipeline.rb:156:in `inputworker'",
"file:/etc/logstash/logstash.jar!/logstash/pipeline.rb:150:in `start_input'"
],
:level=>:warn}
Any help is appreciated, thanks in advance.
I am answering my own question, so that hopefully nobody else has to spend time tracking down this error.
The Logstash ganglia input plugin could not bind to the port because the Ganglia monitoring agent (gmond) was already listening on the same port as Logstash. You either need to move gmond to another port or reconfigure the ganglia input plugin of Logstash to use a different one.
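A quick way to confirm which process already holds the port, assuming netstat is available and the default Ganglia UDP port 8649:
sudo netstat -ulnp | grep 8649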
