Setting up ELK stack - elasticsearch

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide here https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
The Filebeat template was installed as well.
I also installed Filebeat on another server (server B) and tried to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml \
    -path.home /usr/share/filebeat -path.config /etc/filebeat \
    -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 163.172.167.147
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.2
dial up... OK
talk to server... OK
Things seem to be OK, yet Filebeat on server B does not appear to be sending any data to Logstash.
Accessing Kibana keeps redirecting me back to the Create Index Pattern page, with the message
Couldn't find any Elasticsearch data
Any pointers would be really appreciated.

Can you check your filebeat.yml file and see whether the configuration for logs is activated:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
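If the input is already enabled and Logstash still receives nothing, it can also help to run Filebeat once in the foreground with debug logging to confirm that harvesters actually pick up your files and publish events (a rough sketch, reusing the paths from the question):

sudo systemctl stop filebeat
sudo /usr/share/filebeat/bin/filebeat -e -d "*" -c /etc/filebeat/filebeat.yml \
    -path.home /usr/share/filebeat -path.config /etc/filebeat \
    -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Watch for "Harvester started" messages (the inputs are reading files) and for connection or publish errors in the output, then restart the service.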

Related

use tls and elastic in fluentbit

I'm trying to send logs to my Elastic pod from a FluentBit service on a different VM.
I configured an ingress for Elastic.
I configured FluentBit this way:
[OUTPUT]
    Name        es
    Match       *
    Host        <host_ip>
    Port        443
    #Retry_Limit 1
    URI         /elastic
    tls         On
    tls.verify  Off
but I keep getting the following error:
[2020/10/25 07:34:09] [debug] [out_es] HTTP Status=413 URI=/_bulk
Is it possible to use TLS in the Elasticsearch output? If so, can you suggest what I configured wrong?
HTTP 413 is the status code for Payload Too Large. Try increasing http.max_content_length in elasticsearch.yml.
Also note that you are using tls.verify Off, which does not make sense long term. If you have an ingress with a certificate (Let's Encrypt?), it should be fine to set tls.verify On. Otherwise everything looks correct.
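For context, http.max_content_length defaults to 100mb; a minimal sketch of raising it on the Elasticsearch side (the 200mb value is only an example, size it to your largest bulk request and restart the node afterwards):

# elasticsearch.yml
http.max_content_length: 200mb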

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11) and using Filebeat to automatically detect the logs.
The Kibana dashboard is up and the Elasticsearch and Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I do not see any data arriving on the Logstash listener on port 5044.
I found on the Elastic forums that the following iptables command would resolve my issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives on the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml and logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I have followed to debug this issue are:
Check if Kibana is coming up,
Check if Elastic API's are working,
Check if Logstash is accessible from Filebeat.
Everything was working fine in my case. I raised the log level in filebeat.yml and found a "Permission Denied" error while Filebeat was accessing the Docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to "Permissive" with the following command:
sudo setenforce Permissive
After this ELK started to sync the logs.
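For anyone debugging the same symptom, a short sketch of the SELinux check and the runtime switch; note that setenforce does not survive a reboot, and a proper SELinux policy for the container log path is the cleaner long-term fix:

getenforce                      # shows Enforcing / Permissive / Disabled
sudo setenforce Permissive      # runtime only, reverts after reboot
# To persist: set SELINUX=permissive in /etc/selinux/config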

How to collect logs from different servers to a central server (Elasticsearch and Kibana)

I have been assigned the task of creating a central logging server. In my case there are many web app servers spread across different hosts. My task is to get the logs from these different servers and manage them on a central server running Elasticsearch and Kibana.
Question
Is it possible to get logs from servers that have different public IPs? If so, how?
How many resources (CPU, memory, storage) are required on the central server?
Things seen
I have only seen example setups where all the logs and applications are on the same machine.
I'm looking for a way to send logs over public IP to Elasticsearch.
I would like to differ from Ishara's answer. You can ship logs directly from Filebeat to Elasticsearch without using Logstash if your logs are of generic types (system logs, nginx logs, Apache logs). With this approach you don't incur the extra cost and maintenance of Logstash, since Filebeat provides built-in parsing through its modules and processors.
If your servers run a Debian-based OS, I have prepared a shell script to install and configure Filebeat. You need to change the Elasticsearch server URLs and modify the second-to-last line based on the modules you want to enable.
Regarding your first question: yes, you can run a Filebeat agent on each server and send the data to a centralized Elasticsearch.
For your second question, it depends on the amount of logs the Elasticsearch server is going to process and store. It also depends on where Kibana is hosted.
sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y filebeat
sudo systemctl enable filebeat
# Quote the heredoc delimiter so ${path.config} reaches the file literally instead of being expanded by the shell
sudo bash -c "cat >/etc/filebeat/filebeat.yml" <<'FBEOL'
filebeat.inputs:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.name: "filebeat-system"
setup.template.pattern: "filebeat-system-*"
setup.template.settings:
  index.number_of_shards: 1
setup.ilm.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["10.32.66.55:9200", "10.32.67.152:9200", "10.32.66.243:9200"]
  indices:
    - index: "filebeat-system-%{+yyyy.MM.dd}"
      when.equals:
        event.module: system
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logging.level: warning
FBEOL
sudo filebeat modules enable system
sudo systemctl restart filebeat
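Once the script has run, Filebeat's built-in checks are a quick way to confirm that the configuration parses and the Elasticsearch output is reachable (sketch, using the config path written by the script above):

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml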
Yes, it is possible to get logs from servers that have different public IPs. You need to set up an agent like Filebeat (provided by Elastic) on each server that produces logs.
Each Filebeat instance will watch the log files on its machine and forward them to the Logstash instance you specify in the filebeat.yml configuration file, like below:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path_to_your_log_1/ELK/your_log1.log
    - /path_to_your_log_2/ELK/your_log2.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["private_ip_of_logstash_server:5044"]
The Logstash server listens on port 5044 and streams all logs through the Logstash configuration files:
input {
  beats { port => 5044 }
}
filter {
  # your log filtering logic goes here
}
output {
  elasticsearch {
    hosts => [ "elasticsearch_server_private_ip:9200" ]
    index => "your_index_name"
  }
}
In Logstash you can filter and split your logs into fields and send them to Elasticsearch, as sketched below.
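For illustration only, a hypothetical filter that splits a simple timestamped log line into fields; the grok pattern and field names are assumptions, not part of the original setup:

filter {
  grok {
    # e.g. "2021-01-15 10:23:45 INFO Request completed"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
  }
}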
Resources depend on how much data you produce, your data retention plan, TPS and your custom requirements. If you can provide some more details, I can give you a rough idea of the resource requirements.

Integration between ELK and LDAP

I recently got to manage an open-source-based infrastructure composed of multiple Debian servers. On some of them, the ELK stack is installed.
I am verifying the presence of any integration between ELK and LDAP or other IAMs. On the dedicated monitoring node, I looked for IAM-related information in the following configuration files:
/etc/elasticsearch/elasticsearch.yml
/etc/kibana/kibana.yml
/etc/logstash/logstash.yml
but the only login/account credentials I have been able to find are in the kibana.yml file:
elasticsearch.username: "username"
elasticsearch.password: "password"
In /etc/kibana/kibana.yml and /etc/elasticsearch/elasticsearch.yml I find the following:
xpack.security.enabled: false
which leads me to think that an "xpack" plugin is somehow related to LDAP. Where should I look for the LDAP integration?
Thanks to #Wonka for suggesting the presence of ReadOnlyRest. I found a readonlyrest.yml in /etc/elasticsearch. There, the following was present:
ldaps:
  - name: ldap1
    host: "ourldapserver.ourdomain"
    [...]
This is where the LDAP integration occurred.
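For completeness: if the integration had gone through X-Pack security rather than ReadOnlyRest, it would typically appear in elasticsearch.yml as an LDAP realm along these lines (illustrative values only, not taken from this setup; the exact key layout varies by Elasticsearch version):

xpack.security.enabled: true
xpack.security.authc.realms.ldap.ldap1:
  order: 0
  url: "ldaps://ourldapserver.ourdomain:636"
  bind_dn: "cn=elastic-bind,dc=ourdomain"
  user_search:
    base_dn: "dc=ourdomain"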

Error in shipping logs between different servers using ELK and Filebeat

I have installed the Filebeat deb package on the client server (Wind River Linux) and ELK on the ELK server (Ubuntu 16.04). The problem is that I can't receive logs from the client server. I checked the network statistics and port 5044 (the listening port) on the ELK server is LISTENING. I can ping from both sides, and I also have an SSH connection in both directions.
This is the link I used to install these packages on the servers.
My Filebeat configuration:
filebeat.prospectors:
- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/filebeat/*
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
  document_type: log

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.10.3:5044"]
  proxy_url: socks5://wwproxy.seln.ete.ericsson.se:808
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  # Certificate for SSL client authentication
  ssl.certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"
  # Client Certificate Key
  ssl.key: "/etc/pki/tls/private/logstash-forwarder.key"
I figured out the error! The problem is that the server IP in openssl.cnf should be the IP address of the bridged interface, and the certificate generated with this openssl.cnf should be used on both servers. Further, I also shared the .key generated on the ELK server with the client server for client authentication.
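For reference, a sketch of how such a certificate is typically regenerated once openssl.cnf carries the right subjectAltName (the paths match the Filebeat config above; the command itself is illustrative, not from the original post):

# openssl.cnf is assumed to contain e.g.: subjectAltName = IP: 192.168.10.3
openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 \
  -keyout /etc/pki/tls/private/logstash-forwarder.key \
  -out /etc/pki/tls/certs/logstash-forwarder.crt

The resulting logstash-forwarder.crt is then referenced on both the Logstash side and in the Filebeat ssl.certificate_authorities setting shown above.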
