Filebeat Netflow not showing up in Kibana

I use Filebeat to send Netflow to Elasticsearch and visualize it with Kibana.
The problem is that the Netflow events are not showing up in Kibana.
Here are my netflow and filebeat configuration files.
netflow.yml
# Module: netflow
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 2055
filebeat.yml
Kibana section:
setup.kibana:
  host: "X.X.X.X:5601"
Elasticsearch output section:
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "xxxxxxxxxxxx"
  password: "XXXXXXXXXXXX"

Related

Why doesn't the metricbeat index name change each day?

I am using a Metricbeat (7.3) Docker container alongside several other Docker containers, and sending the results to an Elasticsearch (7.3) instance. This works, and the first time everything spins up I get an index in Elasticsearch called metricbeat-7.3.1-2019.09.06-000001.
The initial problem is that I have a Grafana dashboard set up to look for an index with today's date, so it seems to ignore one created several days ago altogether. I could try to figure out what's wrong with those Grafana queries, but more generically I need those index names to roll at some point - the index that's there is already over 1.3 GB, and at some point that will just be too big for the system.
My initial metricbeat.yml config:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
Searching around a bit, it seems like the index field on the elasticsearch output should configure the index name, so I tried the following:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"
That throws an error about needing setup.template settings, so I settled on this:
- module: docker
  metricsets:
    - "container"
    - "cpu"
    - "diskio"
    - "memory"
    - "network"
  hosts: ["unix:///var/run/docker.sock"]
  period: 10s
  enabled: true

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_URL}"]
  index: "metricbeat-%{[beat.version]}-instance1-%{+yyyy.MM.dd}"

setup.template:
  overwrite: true
  name: "metricbeat"
  pattern: "metricbeat-*"
I don't really know what the setup.template section does, so most of that is a guess from Google searches.
I'm not really sure if the issue is on the Metricbeat side, or on the Elasticsearch side, or somewhere in between. But bottom line: how do I get them to roll the index to a new one when the day changes?
These are the settings/steps that worked for me:
metricbeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<es-ip>:9200"]
  index: metricbeat-%{[beat.version]}
  index_pattern: -%{+yyyy.MM.dd}
  ilm.enabled: true
Then, over in Kibana (i.e. :5601):
Go to "Stack Monitoring" and select "metricbeat-*".
Apply this kind of setting to begin with; what follows later is self-explanatory too.
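As an aside, and not part of the original answer: on Metricbeat 7.x the ILM settings I know of live at the top level under setup.ilm rather than under the output. A minimal sketch of that variant, assuming a rollover alias named metricbeat:
setup.ilm.enabled: true
setup.ilm.rollover_alias: "metricbeat"
setup.ilm.pattern: "{now/d}-000001"
With ILM enabled, rollover is driven by the ILM policy (age or size) rather than by a date baked into the index name.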

Unable to use custom index in filebeat configuration

I'm working on Elasticsearch version 7.2.0 and shipping the logs using Filebeat. I can use the custom pipeline, but I'm unable to set a custom index name. Kindly help.
Below is my filebeat output configuration:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: reindex_timestamp
  index: "logstash-%{+yyyy.MM.dd}"

setup.template.name: "logstash"
setup.template.pattern: "logstash-*"
setup.template.enabled: true
setup.template.overwrite: true
Here, though, I'm not sure how I'm supposed to create the custom template whose name I specified.
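A hedged aside, not from the original post: with setup.template.enabled: true, Filebeat loads the template itself, so nothing has to be created by hand. Also, on Filebeat 7.x a custom index and the setup.template settings are ignored while ILM is enabled (the default against a cluster that supports it). A minimal sketch of the top-level settings under which the custom name above could take effect:
setup.ilm.enabled: false
setup.template.enabled: true
setup.template.name: "logstash"
setup.template.pattern: "logstash-*"
setup.template.overwrite: true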
Update: I found the solution to my requirement. The configuration below worked for me, since my requirement is to write web logs (logs containing name: web) to a separate index and the rest of the application logs to another index (currently written to the default index, filebeat-*).
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  #index: "filebeat-7.2.0-logstash-%{+yyyy.MM.dd}"  # It's not taking the custom index
  pipeline: reindex_timestamp_logstash
  indices:
    - index: "node-%{+yyyy.MM.dd}"
      when.contains:
        name: web
  pipelines:
    - pipeline: reindex_timestamp_node
      when.contains:
        name: web

setup.template.name: "filebeat-7.2.0"
setup.template.pattern: "filebeat-7.2.0-*"
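A side note on how the indices list behaves, as far as I understand it: events that match no rule fall back to the default index setting, so an explicit catch-all rule is optional. A sketch, with a hypothetical app-%{+yyyy.MM.dd} name for everything else:
output.elasticsearch:
  indices:
    - index: "node-%{+yyyy.MM.dd}"
      when.contains:
        name: web
    # a rule with no condition always matches, so it acts as a catch-all
    - index: "app-%{+yyyy.MM.dd}"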

How to constrain Filebeat to only ship logs to ELK if they contain a specific field?

I’m trying to collect logs from Kubernetes nodes using Filebeat and ONLY ship them to ELK IF the logs originate from a specific Kubernetes Namespace.
So far I’ve discovered that you can define processors, which I think should accomplish this. However, no matter what I do, I cannot get the shipped logs to be constrained. Does this configuration look correct?
filebeat.config:
  inputs:
    path: ${path.config}/inputs.d/*.yml
    reload.enabled: true
    reload.period: 10s
    when.contains:
      kubernetes.namespace: "NAMESPACE"
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_kubernetes_metadata:
      namespace: "NAMESPACE"

xpack.monitoring.enabled: true

output.elasticsearch:
  hosts: ['elasticsearch:9200']
Despite this configuration I still get logs from all of the namespaces.
Filebeat is running as a DaemonSet on Kubernetes. Here is an example of an expanded log entry: https://i.imgur.com/xfTwbhl.png
You have a number of options to do this:
Filter the data in Filebeat:
processors:
  - drop_event:
      when:
        contains:
          source: "field"
Use an ingest pipeline in Elasticsearch:
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my_pipeline_id
And then drop events in the pipeline:
{
  "drop": {
    "if": "ctx['field'] == null"
  }
}
Use the drop filter in Logstash:
filter {
  if ![field] {
    drop { }
  }
}
In the end, I resolved this by moving the drop_event processor from the main configuration file into the input configuration file.
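For illustration, a minimal sketch of what such an input-level processor might look like, assuming the kubernetes.namespace field is already populated (e.g. by add_kubernetes_metadata), with the path and namespace as placeholders:
- type: log
  paths:
    - /var/log/containers/*.log
  processors:
    - drop_event:
        when:
          not:
            contains:
              kubernetes.namespace: "NAMESPACE"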

Filebeat not sending specific Log Files

I have configured Filebeat 6.6 on a Windows instance. The weird thing is, it is sending logs for IIS but not for the files I have specified, even though Filebeat can detect them.
Filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\LowError.txt
- type: log
  enabled: true
  paths:
    - C:\inetpub\logs\LogFiles\*\*
    - C:\Hosting\stagingb2c\PaymentGatewayLogs\*\*
  recursive_glob.enabled: true
- type: log
  enabled: true
  paths:
    - C:\Hosting\stagingb2c\ErrorLogs\*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

output.logstash:
  hosts: ["13.234.83.186:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logging:
  to_files: true
  level: info
  files:
    path: C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\filebeat-6.6.1-windows-x86_64\LOG
I can see logs from the C:\inetpub\logs\LogFiles folder, but not from C:\Hosting\stagingb2c\PaymentGatewayLogs.
I cannot see any errors or warnings in filebeat.log when I start it with:
PS C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\filebeat-6.6.1-windows-x86_64> .\filebeat.exe -e -d "*"
2019-03-04T21:15:51.602+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId_12f1050220190810\CredimaxPayment_TransactionDetails_OrderId_12f1050220190810
2019-03-04T21:15:51.761+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId_Sw2m\CredimaxPayment_PROCESS_ACS_RESULT_Response_20190213124610_OrderId_Sw2m.txt
2019-03-04T21:15:51.920+0300 INFO log/harvester.go:255 Harvester started for file: C:\Hosting\stagingb2c\PaymentGatewayLogs\CredimaxPaymentGateway_OrderId__SoLx\CredimaxPayment_PAY_Request_20190205085701_OrderId__SoLx.txt
I am not able to see these logs in Logstash, though I can see other files coming into Logstash.
Change your input section to this and check:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\ELK-Logger\filebeat-6.6.1-windows-x86_64\LowError.txt
    - C:\inetpub\logs\LogFiles\*\*
    - C:\Hosting\stagingb2c\PaymentGatewayLogs\*\*
    - C:\Hosting\stagingb2c\ErrorLogs\*
  recursive_glob.enabled: true
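If the payment gateway logs sit more than one directory level deep, the single-level *\* pattern will not reach them. A sketch of a recursive variant, assuming Filebeat 6.1+ where ** expands to nested subdirectories:
  paths:
    - C:\Hosting\stagingb2c\PaymentGatewayLogs\**
  recursive_glob.enabled: true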

Non-Zero Metrics-FileBeat

I am using elastic.co/filebeat:6.3.1 and ELK elastic.co:6.3.0 on Ubuntu as Docker containers. While running Filebeat, I am facing this issue:
https://i.stack.imgur.com/9rZjt.png
And my filebeat.yml is
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: false
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/java/ABC_LOGS/*/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================
setup.dashboards.enabled: true

#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.0:5601"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.0:9200"]
Please help me, thanks in advance.
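A hedged observation rather than a confirmed fix: in the config above the log input has enabled: false, so Filebeat would publish no events from those paths, which would match "non-zero metrics" log messages with nothing ever indexed. A minimal sketch of that input with the flag flipped:
- type: log
  enabled: true
  paths:
    - /usr/local/java/ABC_LOGS/*/*.log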
