Filebeat to logstash connection refused - elasticsearch

I'm trying to send log files from Filebeat -> Logstash -> Elasticsearch, but I'm getting the following error in the Filebeat log:
2017-12-07T16:15:38+05:30 ERR Failed to connect: dial tcp [::1]:5044: connectex: No connection could be made because the target machine actively refused it.
My Filebeat and Logstash configurations are as follows:
1. filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Users\shreya\Data\mylog.log
  document_type: springlog
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: before

output.logstash:
  hosts: ["localhost:5044"]
2. logstash.yml
http.host: "127.0.0.1"
http.port: 5044
3. Logstash conf file:
input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^(%{TIMESTAMP_ISO8601})"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    id => "myspringlogfilter"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}; [LOG_LEVEL=%{LOGLEVEL:log-level}, CMPNT_NM= %{GREEDYDATA:component}, MESSAGE=%{GREEDYDATA:message}" }
    overwrite => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
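As far as I can tell from the beats input documentation, the multiline codec is not recommended on a beats input when Filebeat already assembles multiline events, as filebeat.yml above does. A minimal sketch of the same input without the codec, everything else unchanged:

input {
  beats {
    # Multiline grouping is already done by Filebeat (multiline.* settings above),
    # so the beats input only needs the port.
    port => 5044
  }
}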

The problem got solved after I commented out the metrics settings in logstash.yml as follows:
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
#http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
#http.port: 5044
#
But I still do not know why this solved the issue, as both (Filebeat and Logstash) were pointing to the same port. If someone could explain the reason, thanks in advance!
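My only guess so far is that http.port in logstash.yml binds Logstash's own metrics REST endpoint, so setting it to 5044 would have competed with the beats input for the same port, and commenting it out let the endpoint fall back to its default range and freed 5044 for the beats listener. A minimal logstash.yml sketch along those lines (values are the defaults, shown only for illustration):

# logstash.yml -- keep the metrics REST endpoint off the beats port
http.host: "127.0.0.1"
# Default range; Logstash picks the first free port (usually 9600).
http.port: 9600-9700
# The beats input in the pipeline .conf then keeps 5044 to itself:
# input { beats { port => 5044 } }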

Related

Cisco-module (Filebeat) to Logstash - Configuration issue - Unable to write to existing indices

I was able to send logs to Elasticsearch directly from Filebeat using the configuration below.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log

# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "XXXXXXXXXXXXX"
  # Index name customization, as we do not want the 'filebeat-' prefix for the indices that Filebeat creates by default
  index: "network-%{[event.dataset]}-%{+yyyy.MM.dd}"

# The settings below are mandatory when customizing the index name
setup.ilm.enabled: false
setup.template:
  name: 'network'
  pattern: 'network-*'
  enabled: false

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

# ============================= X-Pack Monitoring ==============================
#monitoring.elasticsearch:
monitoring:
  enabled: true
  cluster_uuid: 9ZIXSpCDBASwK5K7K1hqQA
  elasticsearch:
    hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    username: beats_system
    password: XXXXXXXXXXXXXX
I enabled all Cisco modules and they are able to create indices as below:
network-cisco.ios-YYYY.MM.DD
network-cisco.nexus-YYYY.MM.DD
network-cisco.asa-YYYY.MM.DD
network-cisco.ftd-YYYY.MM.DD
Up to this point there was no issue, but it all came to a halt when I tried to introduce Logstash between Filebeat and Elasticsearch.
Below are the network.conf file details for your analysis.
input {
  beats {
    port => "5046"
  }
}
output {
  if [event.dataset] == "cisco.ios" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.nexus" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.asa" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.ftd" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cef.log" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "panw.panos" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  stdout { codec => rubydebug }
}
With the above configuration I am unable to get the Filebeat --> Logstash --> Elasticsearch pipeline that I am looking to achieve.
No data is getting added, even though stdout produces output when I run Logstash as below:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf
With --config.test_and_exit the config file tests successfully, and the command above produces JSON lines on stdout, but in spite of that no documents are getting added to the existing indices (network-cisco.ios-YYYY.MM.DD, network-cisco.nexus-YYYY.MM.DD, etc.).
When I tried changing the index name to 'test-%{+yyyy.MM.dd}' in a test with a single elasticsearch output, I found that the same run does create that index.
Also, when I take Logstash out of the equation, Filebeat is able to continue writing to the existing indices, but that is not happening with the above Logstash configuration.
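One thing I am not sure about is the field reference in the conditionals: as far as I understand, Beats ships nested fields, so in Logstash the dataset would be referenced as [event][dataset], whereas [event.dataset] matches a top-level field literally named "event.dataset". A sketch of one branch rewritten with the nested reference (the other branches would follow the same pattern):

output {
  # [event][dataset] is the nested field reference; [event.dataset] would only
  # match a flat field whose name literally contains a dot.
  if [event][dataset] == "cisco.ios" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event][dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => false
      ilm_enabled => false
    }
  }
}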
Any help would be greatly appreciated!
Thanks,
Arun

How to make Logstash public over the network

I need your support. As shown in the image below, I have configured ELK and deployed it on a separate server. It works inside that server, but I'm trying to access Logstash from other servers on the same network.
My question: is this possible in a local environment?
Logstash Configuration
# Read input from filebeat by listening on port 5044, to which filebeat will send the data
input {
  beats {
    host => "0.0.0.0"
    port => "5044"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "e-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filebeat Configuration
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - E:\IMR-App\imrh\logs\imrh.log

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["0.0.0.0:5044"]

why is filebeat unable to connect to logstash

I am trying to send a log4net log to Logstash to get parsed and then end up in Elasticsearch. I have added the ports to the Windows firewall security settings and allowed all connections, to both 5044 and 9600.
In the Filebeat log, I get this error:
pipeline/output.go:100 Failed to connect to backoff(async(tcp://[http://hostname:5044]:5044)): lookup http://hostname:5044: no such host
Filebeat.yml (Logstash section)
#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
hosts: ["http://hostname:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
Logstash.yml
I have set the http.host to 0.0.0.0
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
http.host: "0.0.0.0"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
Logstash Filter Config
input {
  beats {
    port => "5044"
  }
}
filter {
  if [type] == "log4net" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NUMBER:threadid}\] %{WORD:level}\s*%{DATA:class} \[%{DATA:NDC}\]\s+-\s+%{GREEDYDATA:message}" ]
    }
    date {
      match => ["timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
      remove_field => ["timestamp"]
    }
    mutate {
      update => {
        "type" => "log4net-logs"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://hostname:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
You can try using just the hostname:
hosts: ["hostname:5044"]
As mentioned by @Adrian Dr, try using:
hosts: ["hostname:5044"]
But also bind logstash to a single port:
http.port: 9600
I had the same error. It's because you include the protocol.
You have to remove 'http://' from the hosts field:
hosts: ["somename.com:5044"]
or an IP address:
hosts: ["10.10.10.1:5044"]

CircuitBreaker::rescuing exceptions {:name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}

I am new to the ELK stack. I am trying to set up FileBeat --> Logstash --> Elasticsearch --> Kibana. While trying to send Filebeat output to the Logstash input, I am getting the error below on the Logstash side:
CircuitBreaker::rescuing exceptions {:name=>"Beats input", :exception=>LogStash::Inputs::Beats::InsertingToQueueTakeTooLong, :level=>:warn}
Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover. {:exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
I am using Logstash 2.3.2 with Filebeat 1.2.2 and Elasticsearch 2.2.1.
My Logstash config:
input {
  beats {
    port => 5044
    # codec => multiline {
    #   pattern => "^%{TIME}"
    #   negate => true
    #   what => previous
    # }
  }
}
filter {
  grok {
    match => { "message" => "^%{TIME:time}\s+%{LOGLEVEL:level}" }
  }
}
output {
  elasticsearch {
    hosts => ["host:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
My Filebeat config:
filebeat:
  prospectors:
    - paths:
        - '*.log'
      input_type: log
      tail_files: false
output:
  logstash:
    hosts: ["host:5044"]
    compression_level: 3
shipper:
logging:
  to_files: true
  files:
    path: /tmp
    name: mybeat.log
    level: error
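One setting I have seen suggested for this circuit-breaker warning on the 2.x-era beats input is congestion_threshold, which controlled how long the input waits on a blocked pipeline before tripping; I am not certain it applies to every plugin version (it was removed in later rewrites of the plugin), so this is only a sketch:

input {
  beats {
    port => 5044
    # Give the pipeline more time before the circuit breaker opens
    # (seconds; option specific to the old 2.x beats input).
    congestion_threshold => 30
  }
}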

Docker - ELK stack -- "Elasticsearch appears to be unreachable or down"

So I am using docker-compose to launch the ELK stack, which will be fed by Filebeat... my config is something like this:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=_non_loopback_
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf -b 10000 -w 1
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5044:5044"
  links:
    - elasticsearch
  environment:
    - LS_HEAP_SIZE=2048m
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
My logstash.conf file looks something like this:
input {
  beats {
    port => 5044
  }
}
....
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
These Docker containers are running on the same instance, and I have confirmed that I can hit both ports externally.
The error that appears when Filebeat ships a file is:
logstash_1 | {:timestamp=>"2016-05-19T19:52:55.167000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200/\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>9200}}, :level=>:error}
Thanks,
You are trying to reach Elasticsearch on localhost, but that is not possible: in this case, localhost is the Docker container running Logstash itself.
You have to access it via the link:
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Or, if you want to access your Elasticsearch instance from "outside", fill in your host's IP instead of localhost (not 127.0.0.1), for example as sketched below.
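A sketch of that variant, with a hypothetical host IP 192.168.1.20 standing in for the real address:

output {
  elasticsearch {
    # Use the Docker host's routable IP, not 127.0.0.1 / localhost
    hosts => "192.168.1.20:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}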
