Redis input over TLS for Logstash - elasticsearch

I am trying to set up a secured ELK stack with Redis as a buffer:
filebeat -> redis -> logstash -> elastic
I installed Redis with a TLS configuration, and Filebeat can communicate with Redis over TLS without any issue.
But I don't understand how to configure Logstash. There is a boolean option ssl, but where can I provide the Redis certificate?
filebeat.yml
output.redis:
  hosts: ["redishost:6379"]
  password: "password"
  key: "filebeat"
  db: 0
  timeout: 5
  ssl:
    enabled: true
    certificate_authorities: ["/etc/filebeat/cert/ca.crt"]
    insecure: true
    supported_protocols: [TLSv1.2]
    verification_mode: none
redis.conf in logstash
redis {
  host => "redishost"
  password => "password"
  db => 0
  key => "filebeat"
  data_type => "list"
  ssl => true
}
Thanks in advance

You cannot configure the Logstash redis input to trust the Redis certificate or the authority that signed it. The certificate has to be trusted by the JRE/JDK that runs Logstash.
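If it helps, here is a rough sketch of importing the Redis CA into the JVM truststore with keytool. The cacerts path, CA file path, alias, and the default "changeit" password are assumptions that depend on how Logstash was installed; a bundled Logstash JDK typically keeps its truststore under the install directory.

# Import the Redis CA into the truststore of the JDK that runs Logstash
# (paths, alias, and store password below are placeholders for your install)
sudo keytool -importcert \
  -keystore /usr/share/logstash/jdk/lib/security/cacerts \
  -storepass changeit \
  -alias redis-ca \
  -file /path/to/redis-ca.crt

After importing, restart Logstash so the JVM picks up the updated truststore.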

Related

I set up Keycloak authentication for Elasticsearch, but login does not work properly.

I have added my elasticsearch and kibana yml files and a few screenshots. The Keycloak login page accepts my credentials, but the Kibana dashboard does not appear; it redirects back to the login page.
Screenshot: Keycloak client configurations
elasticsearch.yml
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["surangas-MacBook-Air.local"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
xpack.security.authc.realms.oidc.oidc1:
  order: 0
  rp.client_id: "kibana"
  rp.response_type: code
  rp.redirect_uri: "http://localhost:5601/app/home#/"
  op.issuer: "http://localhost:8080/realms/oidc1"
  op.authorization_endpoint: "http://localhost:8080/realms/oidc1/protocol/openid-connect/auth"
  op.token_endpoint: "http://localhost:8080/realms/oidc1/protocol/openid-connect/token"
  op.jwkset_path: "https://localhost:8080/realms/oidc1/protocol/openid-connect/certs"
  op.userinfo_endpoint: "http://localhost:8080/realms/oidc1/protocol/openid-connect/userinfo"
  op.endsession_endpoint: "http://localhost:8080/realms/oidc1/protocol/openid-connect/logout"
  rp.post_logout_redirect_uri: "http://localhost:5601/security/logged_out"
  claims.principal: email
  claims.groups: "http://localhost:8080/claims/groups"
kibana.yml
This is my kibana.yml file. Here I have configured the Keycloak login settings.
# =================== Search Autocomplete ===================
xpack.security.session.idleTimeout: "30m"
xpack.security.session.cleanupInterval: "1d"
xpack.security.authc.providers:
  oidc.oidc1:
    order: 0
    realm: "oidc1"
    description: "Keycloak"
  basic.basic:
    order: 1
# This section was automatically generated during setup.
elasticsearch.hosts: ['https://192.168.8.184:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2NTUwMDc0NDM1MDY6YXRIcmpaVnRRclMwSHM4NmVJcWpVZw
elasticsearch.ssl.certificateAuthorities: [/Users/suranga/Desktop/Monitoring/test/keycloak/kibana-8.2.0/data/ca_1655007444485.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.8.184:9200'], ca_trusted_fingerprint: 82d6e3b36b6132052fb895809a97588fb366edf7f3dfba981e724194d2d19af3}]

Filebeat over HTTPS

I am a total newbie with ELK, but I'm currently deploying the ELK stack via docker-compose (https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html, TLS part).
Elasticsearch and Kibana work correctly in HTTPS.
However, I don't understand how to enable Filebeat over HTTPS. I would like to send my nginx logs, which are located on another server (over the internet, so I do not want to send logs in clear text). Everything works fine over HTTP, but when I switch to HTTPS and reload Filebeat I get the following message:
Error: ... Get https://10.15.0.12:9200: x509: certificate is valid for 127.0.0.1, not 10.15.0.12
I know I'm doing something wrong, but I can't find the answer for Filebeat over HTTPS...
Here is my Filebeat configuration:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.15.0.12:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "myelasticpassword"
Thanks in advance.
I found the error:
My self-signed certificate was issued for the 127.0.0.1 host.
I changed the IP in instances.yml (a sketch of that file follows the config below).
Then I changed my Filebeat config:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["xx.xx.xx.xx:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "mypassword"
  ssl.verification_mode: none
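For reference, a rough sketch of the instances.yml change mentioned above, assuming the certutil/certgen workflow from the Docker TLS guide; the instance name is a placeholder and the IP is the one Filebeat connects to:

instances:
  - name: "elasticsearch"   # placeholder instance name
    ip:
      - "10.15.0.12"        # the address Filebeat actually connects to

After regenerating the certificates with the correct IP, pointing Filebeat at the CA via ssl.certificate_authorities should also let you drop ssl.verification_mode: none, which otherwise disables certificate verification entirely.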

Logstash error: Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer

I am using Filebeat to push my logs to Elasticsearch through Logstash, and the setup was working fine for me before. Now I am getting a Failed to publish events error.
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
Filebeat configuration
Sharing my filebeat config at /usr/share/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
When I do telnet xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'
I had the same problem. Here are some steps which could help you find the core of your problem.
First, I tested this way: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana. Each service is on the same machine.
My /etc/logstash/conf.d/config.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Here I deliberately disabled ssl (in my case that was the main cause of the issue, even though the certificates were correct, oddly enough).
After that, don't forget to restart Logstash and test with the sudo filebeat -e command.
If everything is OK, you won't see the 'connection reset by peer' error anymore.
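If you do want to keep TLS between Filebeat and Logstash, both sides have to agree. Below is a rough sketch, assuming you have a CA plus a server certificate/key for Logstash; all paths and hostnames are placeholders, not values from the original post.

Logstash beats input:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"   # placeholder path
    ssl_key => "/etc/pki/tls/private/logstash.key"         # placeholder path
  }
}

Matching Filebeat output:

output.logstash:
  hosts: ["logstash-host:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]   # CA that signed the Logstash cert

A mismatch, with TLS enabled on only one side, is a common cause of the 'connection reset by peer' error shown above.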
I had the same problem. Starting Filebeat as a sudo user worked for me:
sudo ./filebeat -e
I made some changes to the input plugin config, such as specifying ssl => false, but it did not work until I started Filebeat as a sudo-privileged user or as root.
In order to start Filebeat as a sudo user, the filebeat.yml file must be owned by root. Change the ownership of the whole Filebeat folder to a sudo-privileged user with sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/, and then chown root filebeat.yml will change the ownership of that file.
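Put together, the steps look roughly like this; the user, group, and Filebeat version are placeholders:

# Hand the unpacked Filebeat directory to your sudo-capable user (placeholder names)
sudo chown -R some_sudo_user:some_group filebeat-7.15.0-linux-x86_64/
# filebeat.yml itself must stay owned by root
sudo chown root filebeat-7.15.0-linux-x86_64/filebeat.yml
# Run Filebeat in the foreground, logging to stderr
cd filebeat-7.15.0-linux-x86_64/
sudo ./filebeat -e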

Error in shipping logs between different servers using ELK and Filebeat

I have installed the Filebeat deb package on the client server (Linux Wind River) and ELK on the ELK server (Ubuntu 16.04 server). The problem is that I can't receive logs from the client server. I checked the network statistics, and port 5044 (the listening port) on the ELK server is LISTENING. I can ping from both sides, and I have an SSH connection in both directions.
This is the link I used to install these packages on the servers.
My Filebeat configurations:
filebeat.prospectors:
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/filebeat/*
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  document_type: log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.10.3:5044"]
  proxy_url: socks5://wwproxy.seln.ete.ericsson.se:808
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  # Certificate for SSL client authentication
  ssl.certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"
  # Client Certificate Key
  ssl.key: "/etc/pki/tls/private/logstash-forwarder.key"
I figured out the error! The server IP in openssl.cnf should be the IP address of the bridged interface, and the certificate generated with this openssl.cnf should be used on both servers. Further, I also copied the .key generated on the ELK server to the client server for mutual authentication.
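For illustration, a minimal sketch of the relevant openssl.cnf part, assuming the bridged interface address is the 192.168.10.3 used in the Filebeat config above; the section name, paths, and openssl command follow the common logstash-forwarder setup, so treat them as assumptions rather than the poster's exact steps:

[ v3_ca ]
subjectAltName = IP: 192.168.10.3

# Regenerate the certificate/key pair from the updated openssl.cnf (paths are placeholders)
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt

The resulting logstash-forwarder.crt is then distributed to both the ELK server and the client server.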

Getting 401 Authorization Required from client Filebeat in ELK (Elasticsearch, Logstash, Kibana)

I'm trying to set up my first ELK environment on RHEL 7 using this guide.
I installed all required components (Nginx, Logstash, Kibana, Elasticsearch),
and I also installed Filebeat on the client machine that I'm trying to pull the logs from. But when checking the installation I get a 401:
[root@myd-vm666 beats-dashboards-1.1.0]# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.10.2</center>
</body>
</html>
In my Filebeat configuration I set the Logstash host and the certificate location as follows:
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["16.XX.XXX.XXX:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["16.XX.XXX.XXX:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  tls:
    # List of root certificates for HTTPS server verifications
    certificate_authorities: "/etc/pki/tls/certs/logstash-forwarder.crt"
I verified that the logstash-forwarder.crt is in the right place.
And on my server I have this configuration in /etc/logstash/conf.d/02-beats-input.conf:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
What am I missing? Is there another key/certificate I need to place on the client?
If you are connecting to AWS Elasticsearch with username/password security, and the versions of both are compatible, then either:
In AWS, while configuring your Elasticsearch service, configure it to whitelist IPs instead of using the master user,
or
configure Filebeat -> Logstash -> Elasticsearch with the master username/password; that will also work.
Reference: https://learningsubway.com/filebeat-401-unauthorized-error-with-aws-elasticsearch/
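For the second option, a rough sketch of the Logstash elasticsearch output with basic auth; the endpoint and credentials are placeholders, not values from the original post:

output {
  elasticsearch {
    hosts => ["https://your-aws-es-endpoint:443"]   # placeholder AWS Elasticsearch endpoint
    user => "master_user"                           # placeholder master username
    password => "master_password"                   # placeholder master password
    ssl => true
  }
}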
