Setup Logstash Netflow Dashboards and Visualizations - elasticsearch

Is there some way to set up the Kibana Netflow dashboards and visualizations from the Logstash Netflow module?
Versions: Elasticsearch 7.0.1, Kibana 7.0.1 and Logstash 7.3.1
Note: I can't use the CLI option (bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=NNNN).
I tried putting these lines in the logstash.yml file:
modules:
  - name: netflow
    var.input.udp.port: 9995
    var.elasticsearch.hosts: "ELASTICSEARCH-IP:9200"
    var.kibana.host: "KIBANA-IP:5601"
I also tried putting --setup in the startup.options file:
# Arguments to pass to logstash
LS_OPTS="--path.settings ${LS_SETTINGS_DIR} --setup"
The Dashboards and Visualizations still aren't loading.

Related

I keep seeing logs in Kibana from files that are not configured in filebeat.yml

I'm a beginner in ELK.
I have Elasticsearch 8.5.3, Filebeat and Kibana all running on the same Windows machine.
In the filebeat.yml I have configured the following paths:
- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\ProgramData\mycompany\Logs\specific.Log
I want Filebeat to ship data from that specific file only.
For some reason, no matter what I configure under paths, Filebeat ships data from all the log files in C:\ProgramData\mycompany\Logs.
Each time I change the paths to test something, I restart Filebeat:
filebeat.exe -c C:\ProgramData\Elastic\Beats\filebeat\filebeat.yml
The path to the yml file is verified and correct.
Yet, the result is the same.
I see in Kibana the data and documents from all the files in that folder.
Filebeat is running in PowerShell and there are no errors there.
I tried to delete the Filebeat registry and it didn't help.
I also tried restarting Elasticsearch, Filebeat and Kibana all together.
What am I missing here?

How to send custom logs in a specified path to filebeat running inside docker

I am new to Filebeat and ELK. I am trying to send custom logs using Filebeat to Elasticsearch directly. Both the ELK stack and Filebeat are running inside Docker containers. The custom log file is home/username/docker/hello.log. Here is my filebeat.yml file:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: ["my_ip:9200"]
And here is my custom log file:
This is a custom log file
Sending logs to elastic search
And this is the command I am using to run Filebeat:
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false
When I use the above command to run Filebeat, I can see the logs of the Docker containers on my Kibana dashboard. But I am struggling with how to make Filebeat read my custom logs from the specified location and show the lines inside that log file on the Kibana dashboard.
Any help would be appreciated.
Filebeat inputs can generally accept multiple log file paths for harvesting. In your case, you just need to add the log file location to your log input's paths attribute, similar to:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/raju/elk/docker/*.log
    - /home/username/docker/hello.log
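Note that since Filebeat itself runs inside a container, the host directory holding the custom log also needs to be mounted into that container, otherwise the configured path won't exist from Filebeat's point of view. A minimal sketch based on the docker run command in the question (the extra mount path is an assumption taken from the file location given above):

# assumption: the added volume mounts the host directory that contains hello.log
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/home/username/docker:/home/username/docker:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:8.5.3 filebeat -e --strict.perms=false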

Filebeat reads all logs, not only the one defined in the configuration

I am trying to configure Filebeat version 7.17.5 (amd64), libbeat 7.17.5, to read Spring Boot logs and send them via Logstash to Elasticsearch. All works fine, logs are sent and I can read them in Kibana, but the problem is that I configured Filebeat in /etc/filebeat/filebeat.yml and defined only one source of logs there, yet Filebeat is still picking up all logs from /var/log.
This is my only input configuration:
filebeat.inputs:
- type: filestream
  id: some_id
  enabled: true
  paths:
    - "/var/log/dir_with_logs/application.log"
But when I check the status of Filebeat, I see the message:
[input] log/input.go:171 Configured paths: [/var/log/auth.log* /var/log/secure*]
And I also get logs from the auth and secure files in Kibana, which I don't want.
What am I doing wrong, or what am I missing?
Based on the configured paths of /var/log/auth.log* and /var/log/secure*, I think this is the Filebeat system module. You can disable the system module by renaming /etc/filebeat/modules.d/system.yml to /etc/filebeat/modules.d/system.yml.disabled.
Alternatively, you can run the filebeat modules command to disable the module (it simply renames the file for you):
filebeat modules disable system
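To confirm the result, filebeat modules list prints which modules are currently enabled and which are disabled:

filebeat modules list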

Automated Setup of Kibana and Elasticsearch with Filebeat Module in Elastic Cloud for Kubernetes (ECK)

I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
--dashboards --pipelines --template \
-E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set Elasticsearch/Kibana up with the Apache module additions so that external Apache services' Filebeats can get ingested/displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.
You can try autodiscover using a label-based approach. Example config:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.default_config.enabled: "false"
      templates:
        - condition.contains:
            kubernetes.labels.app: "apache"
          config:
            - module: apache
              access:
                enabled: true
                var.paths: ["/path/to/log/apache/access.log*"]
              error:
                enabled: true
                var.paths: ["/path/to/log/apache/error.log*"]

Filebeat is not creating index in Elasticsearch

I'm setting up Filebeat to send logs to Elasticsearch. This is my filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - '/var/log/project/*.log'
  json.message_key: message

output.elasticsearch:
  hosts: ["localhost:9200"]
I have this file /var/log/project/test.log with this content:
{ "message": "This is a test" }
and I was expecting this log to be sent to Elasticsearch. Elasticsearch is running in a Docker container on localhost at port 9200.
When I run Filebeat (in Docker), no index is created in Elasticsearch, so I don't see any data in Kibana.
Why is that? Isn't Filebeat supposed to create the index automatically?
Solved! I wasn't sharing the logs directory between the host and the Filebeat container, so there were no logs to send.
I added a volume when running Filebeat:
docker run -it -v $(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/log/project/:/var/log/project/ docker.elastic.co/beats/filebeat:6.4.0
You can set the index name as below:
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "test-%{+yyyy.MM.dd}"
