Zeek logs to ELK - Elasticsearch

I've installed ELK on one server and Zeek with Filebeat on another.
I followed the documentation to install each one, but Filebeat is not shipping the Zeek logs to Kibana.
Filebeat's basic logs are shipped to Kibana, but the Zeek logs are missing.
For the record:
1 - I've enabled the Zeek module.
2 - I've added @load policy/tuning/json-logs.zeek to local.zeek.
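One thing worth double-checking is that Filebeat's Zeek module actually points at the JSON log files. A minimal modules.d/zeek.yml sketch, assuming Zeek writes its logs to /opt/zeek/logs/current (the var.paths values are assumptions; adjust them to your install):

- module: zeek
  connection:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/conn.log"]
  dns:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/dns.log"]
  http:
    enabled: true
    var.paths: ["/opt/zeek/logs/current/http.log"]

Running sudo filebeat setup --pipelines --modules zeek afterwards reloads the module's ingest pipelines.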

Related

Why don't I receive the FortiGate logs from Filebeat (ELK)?

I installed Elasticsearch, Kibana, and Filebeat in the same Ubuntu 22.04 VM, and I installed FortiGate 7.2.0 in another VM. I want to collect the FortiGate logs with Filebeat, but I don't receive them.
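If it helps, Filebeat's fortinet module by default listens for syslog rather than reading files, so the FortiGate must be configured to send syslog to the Filebeat VM. A minimal modules.d/fortinet.yml sketch (0.0.0.0 and 9004 are the module's documented defaults):

- module: fortinet
  firewall:
    enabled: true
    var.input: udp
    # Listen on all interfaces for syslog from the FortiGate
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9004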

"Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)" error appears

I am running Filebeat, Elasticsearch, and Kibana to collect nginx logs from my local machine. I connect Filebeat directly to Elasticsearch in the Filebeat configuration, but as soon as I start Filebeat it shows errors like "pipeline/output.go:145 Attempting to reconnect to backoff(elasticsearch(http://localhost:9200)) with 3 reconnect attempt(s)", and no logs are received by Kibana.
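Two quick checks, assuming Elasticsearch really should be on localhost:9200:

# Is Elasticsearch up and answering on the default port?
curl -s http://localhost:9200
# Ask Filebeat itself to test the output configured in filebeat.yml
sudo filebeat test output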

Filebeat unable to send data to Logstash, which results in empty data in Elastic & Kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11), using Filebeat to automatically detect the logs.
The Kibana dashboard is up and the Elastic & Logstash APIs are working fine, but Filebeat is not sending any data to Logstash: I do not see anything arriving at Logstash, which is listening on port 5044.
I found on the Elastic forums that the following iptables command should resolve my issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing is arriving at the Logstash listener. Please help me if I am missing anything, and let me know if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I followed to debug this issue were:
1 - Check if Kibana is coming up.
2 - Check if the Elastic APIs are working.
3 - Check if Logstash is accessible from Filebeat.
Everything was working fine in my case. I raised the log level in filebeat.yml and found a "Permission Denied" error while Filebeat was accessing the Docker container logs under the "/var/lib/docker/containers//" folder.
I fixed the issue by setting SELinux to permissive mode with the following command:
sudo setenforce Permissive
After this, ELK started to sync the logs.
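Note that setenforce only changes the running state; to keep SELinux permissive across reboots, the config file has to be changed as well (a sketch, not part of the original fix):

sudo setenforce Permissive                                           # runtime only
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # survives a reboot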

How to collect logs from different servers to a central server (Elasticsearch and Kibana)

I have been assigned the task of creating a central logging server. In my case there are many web app servers spread across different locations. My task is to get the logs from these different servers and manage them on a central server running Elasticsearch and Kibana.
Question
1 - Is it possible to get logs from servers that have different public IPs? If so, how?
2 - How much resource (CPU, memory, storage) is required on the central server?
Things seen
I have only seen example setups where all the logs and applications are on the same machine.
I am looking for a way to send logs over a public IP to Elasticsearch.
I would like to differ from Ishara's answer. You can ship logs directly from Filebeat to Elasticsearch without using Logstash if your logs are of generic types (system logs, nginx logs, Apache logs). With this approach you don't incur the extra cost and maintenance of Logstash, as Filebeat provides built-in parsing through its modules and ingest pipelines.
If you have a Debian-based OS on your servers, I have prepared a shell script to install and configure Filebeat. You need to change the Elasticsearch server URLs and modify the second-to-last line based on the modules that you want to configure.
Regarding your first question: yes, you can run a Filebeat agent on each server and send data to the centralized Elasticsearch.
For your second question: it depends on the amount of logs the Elasticsearch server is going to process and store. It also depends on where Kibana is hosted.
sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y filebeat
sudo systemctl enable filebeat
# The heredoc delimiter is quoted so the shell does not try to expand ${path.config}
sudo bash -c "cat >/etc/filebeat/filebeat.yml" <<'FBEOL'
filebeat.inputs:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.name: "filebeat-system"
setup.template.pattern: "filebeat-system-*"
setup.template.settings:
  index.number_of_shards: 1
setup.ilm.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["10.32.66.55:9200", "10.32.67.152:9200", "10.32.66.243:9200"]
  indices:
    - index: "filebeat-system-%{+yyyy.MM.dd}"
      when.equals:
        event.module: system
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logging.level: warning
FBEOL
sudo filebeat modules enable system
sudo systemctl restart filebeat
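Once the script has run, one way to confirm documents are arriving (using the first Elasticsearch host from the script above; substitute your own):

# List the daily filebeat-system indices with document counts
curl -s 'http://10.32.66.55:9200/_cat/indices/filebeat-system-*?v'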
Yes, it is possible to get logs from servers that have different public IPs. You need to set up an agent like Filebeat (provided by Elastic) on each server that produces logs.
Filebeat will watch the log files on each machine and forward them to the Logstash instance you specify in the filebeat.yml configuration file, like below:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path_to_your_log_1/ELK/your_log1.log
    - /path_to_your_log_2/ELK/your_log2.log

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["private_ip_of_logstash_server:5044"]
The Logstash server listens on port 5044 and streams all logs through its configuration files:
input {
  beats {
    port => 5044
  }
}
filter {
  # your log filtering logic goes here
}
output {
  elasticsearch {
    hosts => [ "elasticsearch_server_private_ip:9200" ]
    index => "your_index_name"
  }
}
In Logstash you can filter and split your logs into fields and send them to Elasticsearch.
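As a rough illustration of that filter step (the pattern and field names are hypothetical, not from this answer), a grok filter can split a timestamped line into fields:

filter {
  grok {
    # Parse lines like: 2023-01-01T12:00:00Z INFO something happened
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    # Use the parsed timestamp as the event's @timestamp
    match => [ "timestamp", "ISO8601" ]
  }
}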
Resources depend on how much data you produce, your data retention plan, TPS (transactions per second), and your custom requirements. If you can provide some more details, I can give a rough idea of the resource requirements.

Can't get centralized pipeline management working in Elastic Cloud X-Pack

I'm trying to set up centralized pipeline management, but it is still not working.
I'm using an Elastic Cloud trial and Logstash running on a local VM.
My logstash.yml looks like:
xpack.management.elasticsearch.url: "https://xxx.eu-central-1.aws.cloud.es.io:xxx/"
xpack.management.enabled: true
xpack.management.elasticsearch.username: elastic
xpack.management.elasticsearch.password: password
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["apache", "cloudwatch_logs"]
cloud.auth: "elastic:xxxx"
cloud.id: "yyyy=="
path.data: /var/lib/logstash
path.logs: /var/log/logstash
I followed instructions from https://www.elastic.co/guide/en/logstash/6.x/logstash-centralized-pipeline-management.html#logstash-centralized-pipeline-management and https://www.elastic.co/guide/en/logstash/6.x/configuring-centralized-pipelines.html
But Logstash isn't shipping anything to Elastic when I manually set up a conf file and pipeline on the Logstash VM (whereas these were working fine on the hosted trial), and if I create a new pipeline from the Kibana UI, nothing happens beyond my pipeline being saved under Logstash pipeline management.
Any tips? Did I miss some steps?
