Filebeat over HTTPS - elasticsearch

I am a total newbie in ELK, but I'm currently deploying the ELK stack via docker-compose (https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html, TLS part).
Elasticsearch and Kibana work correctly in HTTPS.
However, I don't understand how to enable Filebeat over HTTPS. I would like to ship my nginx logs, which live on another server (over the internet, so I do not want to send logs in clear text). Everything works fine over HTTP, but when I switch to HTTPS and reload Filebeat I get the following message:
Error: ... Get https://10.15.0.12:9200: x509: certificate is valid for 127.0.0.1, not 10.15.0.12
I know I'm doing something wrong, but I can't find the answer for Filebeat over HTTPS...
Here is my Filebeat configuration:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.15.0.12:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "myelasticpassword"
Thanks in advance.

I found the error:
My self-signed certificate was issued for the 127.0.0.1 host.
I changed the IP in instances.yml (see the sketch below).
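A minimal sketch of what that instances.yml might contain, assuming the elasticsearch-certutil format from the guide (the instance name, DNS name, and IP here are placeholders for your own):

instances:
  - name: es01
    dns:
      - es01
    ip:
      - 10.15.0.12

After regenerating the certificates from this file, the new certificate lists the server's real address as a subject alternative name, so the x509 mismatch above goes away.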
Then I changed my Filebeat config:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xx.xx.xx.xx:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "mypassword"
  ssl.verification_mode: none
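Note that ssl.verification_mode: none turns off certificate verification entirely. A stricter alternative, assuming you can copy the CA certificate generated for the stack onto the Filebeat host (the path below is a placeholder), would be to keep verification on and point Filebeat at that CA:

output.elasticsearch:
  hosts: ["xx.xx.xx.xx:9200"]
  protocol: "https"
  username: "elastic"
  password: "mypassword"
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]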

Related

Logstash error: Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer

I am using Filebeat to push my logs to Elasticsearch via Logstash, and this setup was working fine for me before. Now I am getting a "Failed to publish events" error.
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
Filebeat configuration
Sharing my filebeat config at /usr/share/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
When I do telnet xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'
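Besides telnet, Filebeat's own connectivity check can show whether the configured Logstash output is reachable and whether TLS negotiates; a hedged example, assuming the usual deb/rpm config path:

filebeat test output -c /etc/filebeat/filebeat.yml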
I had the same problem. Here are some steps that could help you find the root cause of your problem.
First I tested this way: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana, with each service on the same machine.
My /etc/logstash/conf.d/config.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Here, I deliberately disabled SSL (in my case this was the main cause of the issue, even though the certificates were correct, strangely enough).
After that, don't forget to restart Logstash and test with the sudo filebeat -e command.
If everything is OK, you shouldn't see the 'connection reset by peer' error anymore.
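For that local test, the Filebeat side is just the plain Logstash output with no ssl.* options set, roughly:

output.logstash:
  hosts: ["localhost:5044"]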
I had the same problem. Starting filebeat as a sudo user worked for me.
sudo ./filebeat -e
I had made some changes to the input plugin config, such as specifying ssl => false, but it did not work without starting Filebeat as a sudo-privileged user or as root.
In order to start Filebeat as a sudo user, the filebeat.yml file must be owned by root. Change the ownership of the whole Filebeat folder to a sudo-privileged user with sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/, and then chown root filebeat.yml to change the ownership of the file.

Shipping Logs Securely to a Remote Process Group using MiNiFi

I am having a bit of a challenge with NiFi, or MiNiFi to be precise. We use MiNiFi to ship logs from remote systems to a NiFi instance, from there to Kafka, and into Elasticsearch. We can do this successfully without HTTPS; however, I was recently tasked with doing the same securely over HTTPS.
Using certificates, I can connect to the NiFi UI. The challenge is that MiNiFi is unable to connect to the RPG on the remote NiFi, with the error "Unable to communicate with Remote NiFi at URI https://xxxx.com:9443/nifi due to: Received fatal alert: handshake_failure". I suspect this is because of the errors below:
2018-07-23 16:27:23,083 INFO [main] o.apache.nifi.controller.FlowController Not enabling RAW Socket Site-to-Site functionality because nifi.remote.input.socket.port is not set
2018-07-23 16:27:23,083 INFO [main] o.apache.nifi.controller.FlowController Not enabling HTTP(S) Site-to-Site functionality because the 'nifi.remote.input.http.enabled' property is not true
I have tried to set these properties in the nifi.properties file of MiNiFi, but the file is always overwritten with default values at each restart.
Please, do you have any ideas on how to resolve this?
How can I bootstrap these settings at startup, in the config.yml file or anywhere else?
You'll need to set those in the original flow that you export from NiFi to MiNiFi. The nifi.properties of the MiNiFi instance is automatically generated from the provided config.yml file. That file is generated by using the MiNiFi Converter Toolkit to convert the exported template XML file.
For more, you can watch these videos or read the Getting Started Guide.
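Roughly, the conversion step with the MiNiFi Converter Toolkit looks like this (the file names here are placeholders):

./bin/config.sh transform exported-template.xml config.yml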
You'll want to look for lines like the following in the config.yml:
Security Properties:
  keystore: /tmp/ssl/localhost-ks.jks
  keystore type: JKS
  keystore password: localtest
  key password: localtest
  truststore: /tmp/ssl/localhost-ts.jks
  truststore type: JKS
  truststore password: localtest
  ssl protocol: TLS
  Sensitive Props:
    key:
    algorithm: PBEWITHMD5AND256BITAES-CBC-OPENSSL
    provider: BC

Remote Processing Groups:
- name: http://localhost:8080/nifi
  url: http://localhost:8080/nifi
  comment: ''
  timeout: 30 sec
  yield period: 10 sec

Input Ports:
- id: AUTOGENERATED_NIFI_PORT_ID_HERE
  name: MiNiFi-input
  comment: ''
  max concurrent tasks: 1
  use compression: false
  Properties: # Deviates from spec and will later be removed when this is autonegotiated
    Port: 1026
    Host Name: localhost
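For the secure case, the Remote Processing Groups entry would point at the HTTPS URL of the remote NiFi instance; a sketch using the host and port from the question:

Remote Processing Groups:
- name: https://xxxx.com:9443/nifi
  url: https://xxxx.com:9443/nifi
  comment: ''
  timeout: 30 sec
  yield period: 10 sec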

Setting up ELK stack

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide here https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
Filebeat template was installed as well.
I also installed Filebeat on another server (server B) and was trying to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 163.172.167.147
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
Things seem to be OK, yet Filebeat on server B doesn't seem to be sending any data to Logstash.
Accessing Kibana keeps redirecting me back to the Create Index Pattern page, with the message
Couldn't find any Elasticsearch data
Any direction pointing would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
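Once logs are flowing, a quick way to confirm that data actually reached Elasticsearch on server A is to list the indices and look for the index written by your Logstash output:

curl 'http://localhost:9200/_cat/indices?v'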

Error in shipping logs between different servers using ELK and Filebeat

I have installed the Filebeat deb package on the client server (Linux Wind River) and ELK on the ELK server (Ubuntu 16.04 server). The problem is that I can't receive logs from the client server. I checked the network statistics, and port 5044 (the listening port) on the ELK server is LISTENING. I can ping from both sides, and I also have an SSH connection in both directions.
This is the link which I used to install these packages on servers.
My Filebeat configuration:
filebeat.prospectors:
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/filebeat/*
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  document_type: log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.10.3:5044"]
  proxy_url: socks5://wwproxy.seln.ete.ericsson.se:808
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  # Certificate for SSL client authentication
  ssl.certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"
  # Client Certificate Key
  ssl.key: "/etc/pki/tls/private/logstash-forwarder.key"
I figured out the error! The problem is that the server IP in openssl.cnf should be the IP address of the bridged interface, and the certificate generated with this openssl.cnf should be used on both servers. Further, I also shared the .key generated on the ELK server with the client server, to make the setup more secure/authenticated.
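For reference, the relevant part of openssl.cnf is the subjectAltName entry; a minimal sketch, using the ELK server address from the config above as a placeholder:

[ v3_ca ]
subjectAltName = IP: 192.168.10.3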

Getting 401 Authorization Required from client Filebeat in ELK (Elasticsearch Logstash Kibana)

I'm trying to set up my first ELK environment on RHEL7 using this guide.
I installed all required components (Nginx, Logstash, Kibana, Elasticsearch).
I also installed Filebeat on the client machine that I'm trying to pull the logs from, but when checking the installation I get a 401:
[root@myd-vm666 beats-dashboards-1.1.0]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>
In my Filebeat configuration I set the Logstash host and the certificate location as follows:
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["16.XX.XXX.XXX:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["16.XX.XXX.XXX:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  tls:
    # List of root certificates for HTTPS server verifications
    certificate_authorities: "/etc/pki/tls/certs/logstash-forwarder.crt"
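Note that the tls: block above is the old Filebeat 1.x syntax; on Filebeat 5.x and later the equivalent setting lives under the output's ssl.* options, roughly:

output.logstash:
  hosts: ["16.XX.XXX.XXX:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]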
I verified that the logstash-forwarder.crt is in the right place.
And on my server, I have this configuration, /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
What am I missing? Is there another key/certificate I need to place on the client?
If you are using AWS Elasticsearch with username/password security, and the versions on both sides are compatible, then either:
in AWS, while configuring your Elasticsearch service, configure it for IP whitelisting instead of a master user,
or
configure Filebeat -> Logstash -> Elasticsearch with the master username/password; that will also work.
Reference: https://learningsubway.com/filebeat-401-unauthorized-error-with-aws-elasticsearch/
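A sketch of the second option in the Logstash elasticsearch output, with the AWS endpoint and credentials as placeholders:

output {
  elasticsearch {
    hosts => ["https://my-domain.eu-west-1.es.amazonaws.com:443"]
    user => "master_user"
    password => "master_password"
    index => "index-%{+YYYY.MM.dd}"
  }
}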
