How to create multiple Filebeat dashboards in Kibana - elasticsearch

I have multiple Filebeat instances running on multiple systems, each with a custom index name. Filebeat sends data to Logstash, and Logstash sends data to Elasticsearch. Everything is working fine and the logs show up in the Discover tab. The issue is that when I try to load the dashboards into Kibana by running 'filebeat setup -e', the dashboards do not get loaded and the error below is shown (image attached).
Filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.dashboards.enabled: true
setup.dashboards.index: "care-stagging-*"
setup.kibana:
  host: "http://xx.xx.xx.xx:5601"
  username: "elastic"
  password: "VKkLOmFXUupzgXNnahp"
  ssl.verification_mode: none
output.logstash:
  hosts: ["xx.xx.xx.xx:5044"]
  index: "care-stagging"
setup.template.name: "care-stagging"
setup.template.pattern: "care-stagging-*"
setup.ilm.enabled: false
setup.template.enabled: true
setup.template.overwrite: false
processors:
  - add_fields:
      fields:
        host.ip: "xx.xx.xx.xx"
logging.metrics.period: 30s
Please share how I can load multiple Filebeat dashboards in Kibana.
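For reference, here is a minimal sketch of the setup-related keys for a second Filebeat instance that ships to its own custom index; the index name care-other and the addresses are hypothetical, not taken from the post:

setup.dashboards.enabled: true
setup.dashboards.index: "care-other-*"   # loaded dashboards are rewritten to reference this index pattern
setup.template.name: "care-other"
setup.template.pattern: "care-other-*"
setup.ilm.enabled: false
setup.kibana:
  host: "http://xx.xx.xx.xx:5601"
output.logstash:
  hosts: ["xx.xx.xx.xx:5044"]
  index: "care-other"

Note that because the configured output is Logstash rather than Elasticsearch, filebeat setup may also need a temporary Elasticsearch connection (for example via -E overrides on the command line) before it can install templates and dashboards; that is an assumption about the error in the screenshot, not something confirmed in the thread.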

Related

How to configure the name/index pattern and the number of data streams in filebeat.yml

For my application I would like to have two Data Views inside Kibana. I'm using Filebeat as the data shipper and configure Kibana with my filebeat.yml (see below). I get one Data View with the correct index pattern and name, but it is configured through setup.dashboards.index. If I comment that out or delete it, I get the default filebeat-* name and pattern, which doesn't match anything. And especially since I need to use two different Data Views inside one dashboard, setup.dashboards.index can't be used, because it overwrites my default settings and I need two different index patterns for the two Data Views.
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.template.name: pac-filebeat-%{[agent.version]}
setup.template.pattern: pac-filebeat-%{[agent.version]}
setup.template.fields: ${path.config}/fields.yml
setup.dashboards.enabled: false
setup.dashboards.directory: ${path.config}\kibana\custom
setup.dashboards.index: pac-filebeat*

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "http"
  index: pac-filebeat-%{[agent.version]}

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I would like to configure two Data Views in my project, with different names and different index patterns for their corresponding data streams.
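One way to get two index patterns out of a single Filebeat, rather than relying on setup.dashboards.index, is to route two inputs to two indices with conditions on the output. This is only a sketch; the input ids, paths, tags and index names are illustrative and not from the original post:

filebeat.inputs:
  - type: filestream
    id: app-a-logs                 # hypothetical input for the first Data View
    paths:
      - /var/log/app-a/*.log
    tags: ["app-a"]
  - type: filestream
    id: app-b-logs                 # hypothetical input for the second Data View
    paths:
      - /var/log/app-b/*.log
    tags: ["app-b"]

output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "pac-filebeat-app-a-%{[agent.version]}"
      when.contains:
        tags: "app-a"
    - index: "pac-filebeat-app-b-%{[agent.version]}"
      when.contains:
        tags: "app-b"

setup.template.pattern would then need to be broad enough to match both indices (for example pac-filebeat-*), and the two Data Views themselves would be created in Kibana, one per index pattern.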

How to configure Filebeat to read log files, using ELK stack?

I am new to the current ELK stack.
I need to be able to read logs from a path, using Elasticsearch, Kibana and Filebeat.
I've tried to configure them step by step with the ELK guides, but I still cannot see my logs in Kibana.
For now I am only working with localhost.
Here is how my .yml files are configured:
elasticsearch.yml:
xpack.security.enabled: true
kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "elastic1"
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\\logs\\*.log
- type: filestream
  enabled: false
  paths:
    - C:\logs\*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"
setup.kibana:
  host: "localhost:5601"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I start Elasticsearch and Kibana, and that works. I set up Filebeat using PowerShell as in the guide. Many dashboards get loaded, but I can't see anything related to my logs in the Discover tab...
Please tell me if I am doing anything wrong, or whether I need to configure the files in more depth.
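One detail worth noting in the filebeat.yml above is that setup.kibana appears twice; consolidating it into a single block removes any ambiguity about which values Filebeat actually uses. A minimal sketch, reusing the localhost credentials from the post:

setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"

Whether the duplicated block is actually what keeps the logs out of Discover is only an assumption; checking Filebeat's own log for harvester messages about C:\logs\*.log would show whether the input is picking up files at all.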

How to make Promtail read new log lines written to a log file that was already read?

I have a very simple test setup. Data flow is as follows:
sample.log -> Promtail -> Loki -> Grafana
I am using this sample log file from Microsoft: sample log file download link
My Promtail config is as follows:
server:
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: C:\Users\user\Desktop\tmp\positions.yaml
clients:
  - url: http://localhost:3100/loki/api/v1/push
scrape_configs:
  - job_name: testing_logging_a_log_file
    static_configs:
      - targets:
          - localhost
        labels:
          job: testing_logging_a_log_file_labels_job_what_even_is_this
          host: testing_for_signs_of_life_probably_my_computer_name
          __path__: C:\Users\user\Desktop\sample.log
  - job_name: testing_logging_a_log_file_with_no_timestamp_test_2
    static_configs:
      - targets:
          - localhost
        labels:
          job: actor_v2
          host: ez_change
          __path__: C:\Users\user\Desktop\Actors_2.txt
Loki config:
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0
schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h
storage_config:
  boltdb:
    directory: C:\Users\user\Desktop\tmp\loki\index
  filesystem:
    directory: C:\Users\user\Desktop\tmp\loki\chunks
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
The sample files are read properly the first time. I can query WARN logs with: {host="testing_for_signs_of_life_probably_my_computer_name"} |= "WARN"
The problem arises when I manually add a new log line to the sample.log file (to emulate log lines being written to the file):
2012-02-03 20:11:56 SampleClass3 [WARN] missing id 42334089511
This new line is not visible in Grafana. Is there any particular piece of config I need to know about to make this work?
It was a problem with the network: if you remove the Loki port and don't configure any network, you can access it by putting http://loki:3100 in your Grafana panel.
Yes, it's weird: when I append a line to an existing log file, it can't be seen in Grafana Explore. But if I do it again and append one more line, the previous line then shows up in Grafana.
It happens when you use Notepad; it works fine with Notepad++.
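Promtail decides what counts as new content from the byte offset it records per file in the positions file configured above, so the way an editor saves the file can matter. An illustrative sketch of what that positions file might contain (the offset value is made up):

positions:
  C:\Users\user\Desktop\sample.log: "24613"   # bytes already shipped; only data after this offset is read next

If an editor rewrites the whole file on save instead of appending to it, the recorded offset no longer lines up with what Promtail expects, which would be consistent with the Notepad vs. Notepad++ observation above.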

Kubernetes Filebeat: disable metrics monitoring

I do not want Filebeat to report any metrics to Elasticsearch.
Once I start the DaemonSet I can see the following message:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
How can I disable that?
Basically, what I think I need is logging.metrics.enabled: false, or is it monitoring.enabled: false?
I just cannot make it work, and I'm not sure where to put it. The documentation just says to put it into the logging section of my filebeat.yml, so I added it at the same indentation level as "filebeat.inputs", to no success. Where do I need to put it? Or am I looking at the completely wrong configuration setting?
https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    logging.metrics.enabled: false
---
The filebeat.yml is the configuration file that is mounted at /etc/filebeat.yml in the Filebeat DaemonSet.
There are directory layout and configuration reference pages for Filebeat in the elastic.co documentation.
Update:
Setting logging.metrics.enabled: false will only disable the internal metrics logging.
Take a look at this post.
Note the difference between this INFO log for the internal metrics:
2019-03-26T16:16:02.557Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s
And the one in Your case:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
Unfortunately, this configuration will not stop Filebeat from reporting metrics to Elasticsearch.
Hope it helps.
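For reference, Filebeat has two separate settings in this area, and they control different things; a minimal sketch contrasting them (placement and values illustrative):

# Silences the periodic "Non-zero metrics in the last 30s" lines in Filebeat's own log output
logging.metrics.enabled: false

# Disables shipping Filebeat's monitoring data to an Elasticsearch monitoring cluster
# (older releases used xpack.monitoring.enabled for the same purpose)
monitoring.enabled: false

In the DaemonSet example above, either setting would go at the top level of the filebeat.yml key in the ConfigMap, at the same indentation as filebeat.inputs.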

Filebeat unable to send logs to Kafka

Filebeat is unable to send logs from one particular folder, the application logs folder.
Things that have been tried:
Created a new topic in Kafka to retest the settings.
Checked the file permissions on the folder and the files to send.
Updated Filebeat from 5.5 to 6.7.
Changed from filebeat.prospectors to filebeat.inputs.
Working configuration:
filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log
  fields_under_root: true
output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
With this configuration I am able to see all the logs in "testtopic".
Non-working configuration:
filebeat.inputs:
- type: log
  paths:
    - /app/log/server/*.log
  fields_under_root: true
output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true
Expected result: logs from the path /app/log/server/*.log should be sent to Kafka.
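Since the Kafka output itself evidently works (the /var/log/containers input reaches the topic), one way to narrow this down is to list both paths under the same input and watch Filebeat's own log for harvester messages about /app/log/server. A sketch of that test, assuming nothing else in the configuration changes:

filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log   # path already known to work
    - /app/log/server/*.log       # path under test
  fields_under_root: true
output.kafka:
  hosts: ["10.0.0.0:9092"]
  topic: "testtopic"
  codec.json:
    pretty: true

If no harvester is ever started for /app/log/server/*.log, likely candidates are missing read or traversal permissions for the Filebeat user on one of the parent directories, or a security module such as SELinux or AppArmor blocking access; both are assumptions to verify rather than confirmed causes.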
