I am new to the ELK stack.
I need to be able to read logs from a path using Elasticsearch, Kibana and Filebeat.
I've tried to configure them step by step following the ELK guides, but I still cannot see my logs in Kibana.
For now I am working only on localhost.
Here is how my .yml files are configured:
elasticsearch.yml:
xpack.security.enabled: true
kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "elastic1"
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\\logs\\*.log
- type: filestream
  enabled: false
  paths:
    - C:\logs\*

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"

setup.kibana:
  host: "localhost:5601"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I start Elasticsearch and Kibana, and both run fine. I set up Filebeat using PowerShell as in the guide, and many dashboards get loaded. But I can't see anything related to my logs in the Discover tab...
Please tell me if I am doing anything wrong, or whether I need to configure the files more deeply.
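As a starting point for debugging, here is a minimal sanity check from PowerShell, assuming Filebeat was installed as a Windows service in the default directory (the paths are illustrative, not from the post). Note also that the filebeat.yml above defines setup.kibana twice; it is worth consolidating that into a single block, since duplicate YAML keys can silently override each other.

PS> cd 'C:\Program Files\Filebeat'
PS> .\filebeat.exe test config -e   # validate filebeat.yml syntax
PS> .\filebeat.exe test output -e   # check connectivity and credentials to Elasticsearch
PS> .\filebeat.exe setup -e         # load the index template and dashboards
PS> Start-Service filebeat          # start shipping from C:\logs\*.log

If "test output" succeeds but nothing appears in Discover, check that the filebeat-* data view exists and that the selected time range covers the timestamps in your log files.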
For my application I would like to have two data views inside Kibana. I'm using Filebeat as the data shipper and configure Kibana with my filebeat.yml (see below). I've got one data view with the correct index pattern and name, but it is configured via setup.dashboards.index. If I comment that out or delete it, I get the default filebeat-* name and pattern, which doesn't match anything. And especially since I need to use two different data views inside a dashboard, setup.dashboards.index can't be used, because it would overwrite my default settings with the two different index patterns for the data views.
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

setup.template.name: pac-filebeat-%{[agent.version]}
setup.template.pattern: pac-filebeat-%{[agent.version]}
setup.template.fields: ${path.config}/fields.yml

setup.dashboards.enabled: false
setup.dashboards.directory: ${path.config}\kibana\custom
setup.dashboards.index: pac-filebeat*

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # Protocol - either `http` (default) or `https`.
  protocol: "http"
  index: pac-filebeat-%{[agent.version]}

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I would like to configure two data views in my project, with different names and different index patterns for their corresponding data streams.
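One possible approach, sketched under the assumption that this is Kibana 8.x: instead of steering everything through setup.dashboards.index, create the two data views explicitly through Kibana's Data Views API, one per index pattern. The names and the second pattern below are illustrative, and credentials must be added if security is enabled.

# first data view, matching the custom Filebeat index from the config above
curl -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "pac-filebeat-*", "name": "pac-filebeat"}}'

# second data view for the other data stream (hypothetical pattern)
curl -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "pac-other-*", "name": "pac-other"}}'

A dashboard can then reference either data view, and nothing overwrites the default setup.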
I have multiple Filebeats running on multiple systems, each with a custom index name. Filebeat sends data to Logstash, and Logstash sends the data on to Elasticsearch. Everything is working fine and the logs show up in the Discover tab. The issue is that when I try to load the dashboards into Kibana by running 'filebeat setup -e', the dashboards do not get loaded and it shows the error (image attached):
[screenshot: error output from 'filebeat setup -e', 1353×453]
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.dashboards.enabled: true
setup.dashboards.index: "care-stagging-*"

setup.kibana:
  host: "http://xx.xx.xx.xx:5601"
  username: "elastic"
  password: "VKkLOmFXUupzgXNnahp"
  ssl.verification_mode: none

output.logstash:
  hosts: ["xx.xx.xx.xx:5044"]
  index: "care-stagging"

setup.template.name: "care-stagging"
setup.template.pattern: "care-stagging-*"
setup.ilm.enabled: false
setup.template.enabled: true
setup.template.overwrite: false

processors:
  - add_fields:
      fields:
        host.ip: "xx.xx.xx.xx"

logging.metrics.period: 30s
Please share how I can load multiple Filebeat dashboards into Kibana.
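One documented workaround worth trying here (stated as an assumption about this setup, since the thread does not show the error text): filebeat setup needs to talk to Elasticsearch directly, which it cannot do while output.logstash is the active output. The output can be overridden on the command line just for the setup run, leaving filebeat.yml untouched:

filebeat setup --dashboards -e \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["xx.xx.xx.xx:9200"]' \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password='<elastic password>'

Running this once from any one of the Filebeat machines is enough, since the dashboards are stored in Kibana, not per host.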
I do not want Filebeat to report any metrics to Elasticsearch.
Once I start the DaemonSet I can see the following message:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
How can I disable that?
Basically, what I think I need is logging.metrics.enabled: false, or is it monitoring.enabled: false?
I just cannot make it work, and I'm not sure where to put it. The documentation just says to put it into the logging section of my filebeat.yml, so I added it at the same indentation level as filebeat.inputs, to no success. Where do I need to put it? Or is it the completely wrong configuration setting to be looking at?
https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    logging.metrics.enabled: false
---
The filebeat.yml is the configuration file mounted at /etc/filebeat.yml in the Filebeat DaemonSet.
There are directory layout and configuration reference pages for Filebeat in the elastic.co documentation.
Update:
Setting logging.metrics.enabled: false will only disable the logging of the internal metrics.
Take a look at this post.
Note the difference between this INFO log for the internal metrics:
2019-03-26T16:16:02.557Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s
And the one in Your case:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
Unfortunately, this configuration will not stop Filebeat from reporting metrics to Elasticsearch.
Hope it helps.
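For completeness, a hedged sketch (not from the original thread) of the two separate switches involved, both at the top level of filebeat.yml:

# stops the periodic "metrics" INFO lines in Filebeat's own log output
logging.metrics.enabled: false
# stops Filebeat from shipping its own monitoring data to Elasticsearch
# (7.2+; older releases use xpack.monitoring.enabled instead)
monitoring.enabled: false

Note that monitoring.enabled defaults to false, so Filebeat only ships monitoring data to Elasticsearch if it was explicitly turned on somewhere in the configuration.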
We just installed Elasticsearch 7.x. We want to use the X-Pack security module. We have already automated everything via Ansible, but we have a problem creating/setting the built-in users and their passwords.
Elasticsearch how-to:
Run on the system: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive (or auto).
You are then asked for a password for each built-in user in Elasticsearch. Is there any way to automate this, such as auto-answering the questions in Ansible or anything else?
Thanks
You can try to use interactive mode together with the Ansible expect module: https://docs.ansible.com/ansible/latest/modules/expect_module.html
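A rough sketch of that idea (untested; the prompt regexes and the builtin_user_password variable are assumptions, so adjust them to the prompts your Elasticsearch version actually prints, and note the target host needs pexpect installed for the expect module to work):

- name: Set built-in user passwords non-interactively
  ansible.builtin.expect:
    command: /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
    responses:
      "continue \\[y/N\\]": "y"
      "Enter password.*": "{{ builtin_user_password }}"
      "Reenter password.*": "{{ builtin_user_password }}"
    timeout: 120
  no_log: true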
- hosts: all
  name: "Elasticsearch with SSL/TLS enabled"
  roles:
    - role: elastic.elasticsearch
  vars:
    es_api_port: 9200
    es_config:
      action.destructive_requires_name: true
      bootstrap.memory_lock: true
      cluster.name: lab
      discovery.seed_hosts: "0.0.0.0:9300"
      discovery.type: single-node
      http.port: 9200
      indices.query.bool.max_clause_count: 8192
      network.host: "0.0.0.0"
      node.data: true
      node.master: true
      node.ml: false
      node.name: lab1
      reindex.remote.whitelist: "*:*"
      search.max_buckets: 250000
      transport.port: 9300
      xpack.ilm.enabled: true
      xpack.ml.enabled: false
      xpack.monitoring.collection.enabled: true
      xpack.monitoring.collection.interval: 30s
      xpack.monitoring.enabled: true
      xpack.security.audit.enabled: false
      #xpack.security.enabled: true
      xpack.sql.enabled: true
      xpack.watcher.enabled: false
    es_api_basic_auth_username: "elastic"
    es_api_basic_auth_password: "changeme"
    es_data_dirs:
      - /opt/elasticsearch/data
    es_heap_size: 2g
    es_plugins:
      - plugin: ingest-attachment
    es_validate_certs: false
    es_version: "7.17.0"
    es_users:
      native:
        elastic:
          password: helloakash1234
        kibana_system:
          password: hellokibana1234
        logstash_system:
          password: hellologs1234
This works fine for me!!
es_users:
  native:
    elastic:
      password: helloakash1234
With the above configuration, the username will be "elastic" and the password will be "helloakash1234".
If you use auto mode, random passwords are generated and written to the console, from which you can capture them.
Another solution is to call the Change Password API to change user passwords after the fact.
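For example (the Change Password API is a standard Elasticsearch security endpoint; the host and credentials below are placeholders):

curl -X POST "localhost:9200/_security/user/kibana_system/_password" \
  -u elastic:<current elastic password> \
  -H "Content-Type: application/json" \
  -d '{"password": "hellokibana1234"}'

This can be wrapped in an Ansible uri task, which avoids the interactive prompt entirely.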
I am following this tutorial to set up an ELK stack (VPS B) that will receive the logs of some Docker/docker-compose images (VPS A), using Filebeat as the forwarder; my diagram is as shown below.
So far, I have managed to get all the interfaces working with green ticks. However, there are still some remaining issues that I am not able to understand, so I would appreciate it if someone could help me out a bit.
My main issue is that I don't get any Docker/docker-compose logs from VPS A into the Filebeat server on VPS B; nevertheless, I do get other logs from VPS A, such as rsyslog and authentication logs, on the Filebeat server of VPS B. I have configured my docker-compose file to forward the logs using rsyslog as the logging driver, and Filebeat then forwards that syslog to VPS B. At this point I do see logs from the Docker daemon itself, such as virtual interfaces going up/down and processes starting, but not the "debug" logs of the containers themselves.
The configuration of the Filebeat client on VPS A looks like this:
root@VPSA:/etc/filebeat# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        # - /var/log/*.log
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ipVPSB:5044"]
    bulk_max_size: 2048
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
  level: debug
One of the logging-driver blocks in the docker-compose file looks like this:
redis:
  logging:
    driver: syslog
    options:
      syslog-facility: user
Finally, I would like to ask whether it is possible to forward logs natively from docker-compose to the Filebeat client on VPS A (the red arrow in the diagram), so that it can forward them on to VPS B.
Thank you very much,
Regards!
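One hedged idea for that red arrow (an assumption on my part, not something confirmed in this thread): the Docker syslog driver can be pointed at the host's rsyslog explicitly and each container tagged, so the container lines land in /var/log/syslog where Filebeat is already reading, e.g.:

redis:
  logging:
    driver: syslog
    options:
      syslog-address: "udp://127.0.0.1:514"
      syslog-facility: user
      tag: "{{.Name}}"

rsyslog would need its UDP input enabled for this to work, and this Filebeat generation (1.x, with prospectors) has no native Docker input, so going through syslog is the practical route here.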
The issue seemed to be with Filebeat on VPS A: since it has to collect data from syslog, it has to start after syslog does!
Updating rc.d made it work (defaults 95 10 registers Filebeat with start priority 95 and stop priority 10, so it comes up late in the boot sequence, after syslog):
sudo update-rc.d filebeat defaults 95 10
My filebeat.yml, in case someone needs it:
root@VPSA:/etc/filebeat# cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        # - /var/log/auth.log
        - /var/log/syslog
        # - /var/log/*.log
      input_type: log
      ignore_older: 24h
      scan_frequency: 10s
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["ipVPSB:5044"]
    bulk_max_size: 2048
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
    rotateeverybytes: 10485760 # = 10MB
Regards