I do not want Filebeat to report any metrics to Elasticsearch.
Once I start the DaemonSet I can see the following message:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
How can I disable that?
Basically, what I think I need is logging.metrics.enabled: false — or is it monitoring.enabled: false?
I just cannot make it work, and I'm not sure where to put it. The documentation just says to put it into the logging section of my filebeat.yml, so I added it at the same indentation level as "filebeat.inputs". To no avail. Where do I need to put it? Or am I looking at the wrong configuration setting entirely?
https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}

    logging.metrics.enabled: false
---
The filebeat.yml is the configuration file that is mounted at /etc/filebeat.yml in the Filebeat DaemonSet.
There are directory layout and configuration reference pages for Filebeat in the elastic.co documentation.
Update:
The logging.metrics.enabled: false setting will only disable the periodic logging of internal metrics.
Take a look at this post.
Note the difference between this INFO log for the internal metrics:
2019-03-26T16:16:02.557Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s
And the one in your case:
2020-03-17T09:14:59.524Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
Unfortunately, this setting will not stop Filebeat from reporting metrics to Elasticsearch.
Hope it helps.
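To stop Filebeat from shipping its own monitoring data to Elasticsearch (as opposed to just silencing the log line), a sketch of the relevant top-level filebeat.yml fragment — assuming a 7.x release, where monitoring.enabled replaced the older xpack.monitoring.enabled setting:

# Silence the periodic internal-metrics INFO lines in the Filebeat log:
logging.metrics.enabled: false

# Keep Filebeat from shipping its own monitoring data to Elasticsearch
# (on older 6.x releases the setting is xpack.monitoring.enabled):
monitoring.enabled: false

Both keys go at the top level of filebeat.yml, at the same indentation as filebeat.inputs and output.elasticsearch.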
Related
I am new to the modern ELK stack.
I need to be able to read logs from a path using Elasticsearch, Kibana and Filebeat.
I've tried to configure them step by step with the ELK guides, but I still cannot see my logs in Kibana.
For now I am working only with localhost.
Here is how my .yml files are configured:
elasticsearch.yml:
xpack.security.enabled: true
kibana.yml:
elasticsearch.username: "elastic"
elasticsearch.password: "elastic1"
filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\\logs\\*.log
- type: filestream
  enabled: false
  paths:
    - C:\logs\*
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: "localhost:5601"
  username: "kibana_system"
  password: "kibana_system1"
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "elastic1"
setup.kibana:
  host: "localhost:5601"
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
So I start Elasticsearch and Kibana, and that works fine. I set up Filebeat using PowerShell as in the guide, and many dashboards are loaded. But I can't see anything related to my logs in the Discover tab...
Please tell me if I am doing anything wrong, or whether I need to configure the files further.
I am launching an Elasticsearch cluster in K8s; below is the spec file. It failed to launch the pods with the error below. I am trying to disable authentication because I want to connect to the cluster without any credentials, but it stops me from doing that, saying the configuration is for internal use. What is the correct way for me to set these settings?
Warning ReconciliationError 84s elasticsearch-controller Failed to apply spec change: adjust resources: adjust discovery config: Operation cannot be fulfilled on elasticsearches.elasticsearch.k8s.elastic.co "datasource": the object has been modified; please apply your changes to the latest version and try again
Normal AssociationStatusChange 1s (x16 over 86s) es-monitoring-association-controller Association status changed from [] to []
Warning Validation 1s (x20 over 84s) elasticsearch-controller [spec.nodeSets[0].config.xpack.security.enabled: Forbidden: Configuration setting is reserved for internal use. User-configured use is unsupported, spec.nodeSets[0].config.xpack.security.http.ssl.enabled: Forbidden: Configuration setting is reserved for internal use. User-configured use is unsupported, spec.nodeSets[0].config.xpack.security.transport.ssl.enabled: Forbidden: Configuration setting is reserved for internal use. User-configured use is unsupported]
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: datasource
spec:
  version: 7.14.0
  nodeSets:
  - name: node
    count: 2
    config:
      node.store.allow_mmap: false
      xpack.security.http.ssl.enabled: false
      xpack.security.transport.ssl.enabled: false
      xpack.security.enabled: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            storage: 1024Gi
You can try this:
https://discuss.elastic.co/t/cannot-disable-tls-and-security-in-eks/222335/2
I have tested it, and it's working fine for me without any issues:
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.15.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc:
        anonymous:
          username: anonymous
          roles: superuser
          authz_exception: false
EOF
To disable basic authentication:
https://www.elastic.co/guide/en/elasticsearch/reference/7.14/anonymous-access.html
To disable the SSL self-signed certificate:
https://www.elastic.co/guide/en/cloud-on-k8s/0.9/k8s-accessing-elastic-services.html#k8s-disable-tls
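Per the linked ECK docs, disabling the self-signed certificate on the HTTP layer comes down to one flag in the Elasticsearch spec; a minimal sketch of the relevant fragment:

spec:
  http:
    tls:
      selfSignedCertificate:
        # Serve plain HTTP instead of HTTPS with the operator-generated certificate
        disabled: true

Note that this only disables TLS on the HTTP endpoint; anonymous access (as in the manifest above) is still needed to skip credentials.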
I have Elasticsearch and Kibana running on Kubernetes, both created by ECK. Now I am trying to add Filebeat and configure it to index data coming from a Kafka topic. This is my current configuration:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: my-filebeat
  namespace: my-namespace
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: my-elastic
  kibanaRef:
    name: my-kibana
  config:
    filebeat.inputs:
      - type: kafka
        hosts:
          - host1:9092
          - host2:9092
          - host3:9092
        topics: ["my.topic"]
        group_id: "my_group_id"
        index: "my_index"
  deployment:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
In the logs of the pod I can see entries like the following:
log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2470,"time":{"ms":192}},"total":{"ticks":7760,"time":{"ms":367},"value":7760},"user":{"ticks":5290,"time":{"ms":175}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"5ce8521c-f237-4994-a02e-dd11dfd31b09","uptime":{"ms":181997}},"memstats":{"gc_next":23678528,"memory_alloc":15320760,"memory_total":459895768},"runtime":{"goroutines":106}},"filebeat":{"harvester":{"open_files":0,"running":0},"inputs":{"kafka":{"bytes_read":46510,"bytes_write":37226}}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.18,"15":0.77,"5":0.97,"norm":{"1":0.0738,"15":0.0481,"5":0.0606}}}}}}
And no error entries are there, so I assume that the connection to Kafka works. Unfortunately, there is no data in the my_index specified above. What am I doing wrong?
I guess you are not able to connect to the Elasticsearch instance specified in the output.
As per the docs, ECK secures the Elasticsearch cluster it deploys and stores the credentials in Kubernetes Secrets.
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html
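One way to verify the credentials ECK generated is to read them back from the Secret it creates; a sketch, assuming the cluster name my-elastic from the spec above (ECK names the secret {cluster-name}-es-elastic-user):

# Print the password of the built-in "elastic" user stored by ECK:
kubectl get secret my-elastic-es-elastic-user -n my-namespace \
  -o go-template='{{.data.elastic | base64decode}}'

With that password you can curl the Elasticsearch service directly and check whether my_index exists and receives documents.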
I have an Elasticsearch cluster with the storage field set to 10Gi, and I want to resize it (for testing purposes) to 15Gi. However, after changing the storage value from 10Gi to 15Gi, I can see that the cluster still has not been resized and the generated PVC is still set to 10Gi.
From what I can tell, the aws-ebs storage class (https://kubernetes.io/docs/concepts/storage/storage-classes/) allows for volume expansion when the field allowVolumeExpansion is true. But even when I have this set, the volume is never expanded when I change the storage value.
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: elasticsearch-storage
  namespace: test
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: test
spec:
  version: 7.4.2
  http:
    tls:
      certificate:
        secretName: es-cert
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
        annotations:
          volume.beta.kubernetes.io/storage-class: elasticsearch-storage
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: elasticsearch-storage
        resources:
          requests:
            storage: 15Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms:
        native:
          native1:
            order: 1
---
Technically it should work, but your Kubernetes cluster might not be able to connect to the AWS API to expand the volume. Did you check the actual EBS volume in the EC2 console or via the AWS CLI? You can debug this issue by looking at the kube-controller-manager and cloud-controller-manager logs.
My guess is that there is some kind of permission issue and your K8s cluster cannot talk to the AWS/EC2 API.
If you are running EKS, make sure that the IAM cluster role that you are using has permissions for EC2/EBS. You can check the control plane logs (kube-controller-manager, kube-apiserver, cloud-controller-manager, etc) on CloudWatch.
EDIT:
The Elasticsearch operator uses StatefulSets, and as of this date volume expansion is not supported on StatefulSets.
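Since the StatefulSet's volume claim template cannot be mutated, a common workaround is to expand the generated PersistentVolumeClaims directly; a sketch, assuming the PVC names ECK derives from the spec above ({claim-name}-{cluster-name}-es-{nodeset-name}-{ordinal}):

# Expand one generated PVC in place; repeat for each replica.
# The StorageClass must have allowVolumeExpansion: true for this to take effect.
kubectl patch pvc elasticsearch-data-elasticsearch-es-default-0 -n test \
  --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'

The kubelet then triggers the EBS volume and filesystem expansion on the node where the pod runs, without recreating the StatefulSet.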
I am using the ELK stack (Elasticsearch, Logstash, Kibana) for log processing and analysis in a Kubernetes (minikube) environment. To capture logs I am using Filebeat. Logs are propagated successfully from Filebeat through to Elasticsearch and are viewable in Kibana.
My problem is that I am unable to get the name of the pod actually issuing the log records. Rather, I only get the name of the Filebeat pod that is gathering the log files, not the name of the pod originating the log records.
The information I can get from Filebeat is (as viewed in Kibana):
beat.hostname: the value of this field is the filebeat pod name
beat.name: value is the filebeat pod name
host: value is the filebeat pod name
I can also see/discern container information in Kibana, which flows through from Filebeat / Logstash / Elasticsearch:
app: value is {log-container-id}-json.log
source: value is /hostfs/var/lib/docker/containers/{log-container-id}-json.log
As shown above, I seem to be able to get the container Id but not the pod name.
To mitigate the situation, I could probably embed the pod name in the actual log message and parse it from there, but I am hoping there is a solution in which I can configure Filebeat to emit actual pod names.
Does anyone know how to configure Filebeat (or other components) to capture Kubernetes (minikube) pod names in their logs?
My current filebeat configuration is listed below:
ConfigMap is shown below:
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat
namespace: logging
labels:
component: filebeat
data:
filebeat.yml: |
filebeat.prospectors:
- input_type: log
tags:
- host
paths:
- "/hostfs/var/log"
- "/hostfs/var/log/*"
- "/hostfs/var/log/*/*"
exclude_files:
- '\.[0-9]$'
- '\.[0-9]\.gz$'
- input_type: log
tags:
- docker
paths:
- /hostfs/var/lib/docker/containers/*/*-json.log
json:
keys_under_root: false
message_key: log
add_error_key: true
multiline:
pattern: '^[[:space:]]+|^Caused by:'
negate: false
match: after
output.logstash:
hosts: ["logstash:5044"]
logging.level: info
The DaemonSet is shown below:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  template:
    metadata:
      labels:
        component: filebeat
    spec:
      containers:
      - name: filebeat
        image: giantswarm/filebeat:5.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: hostfs-var-lib-docker-containers
          mountPath: /hostfs/var/lib/docker/containers
          readOnly: true
        - name: hostfs-var-log
          mountPath: /hostfs/var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat
      - name: hostfs-var-log
        hostPath:
          path: /var/log
      - name: hostfs-var-lib-docker-containers
        hostPath:
          path: /var/lib/docker/containers
Disclaimer: I'm a Beats developer.
What you want to do is not yet supported by Filebeat, but it's definitely something we want to put some effort into, so you can expect future releases to support this kind of mapping.
In the meantime, I think your approach is correct. You can append the info you need to your logs so you have it in Elasticsearch.
I have achieved what you are looking for by assigning a group of specific pods to a namespace. I can now query the logs using a combination of namespace, pod name and container name, which is included in the generated log shipped by Filebeat without any extra effort, as you can see here.
For future people coming here: this is now available via a Filebeat processor:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages
    - /var/log/syslog
- type: docker
  containers.ids:
    - "*"
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
    - drop_event:
        when:
          equals:
            kubernetes.container.name: "filebeat"
Helm chart default values: https://github.com/helm/charts/blob/master/stable/filebeat/values.yaml
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html
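For reference, on current Filebeat releases the deprecated filebeat.prospectors / type: docker syntax above maps to a container input with the same add_kubernetes_metadata processor, as in the ConfigMap from the first question; a sketch:

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        # NODE_NAME is assumed to be injected via the pod spec (fieldRef: spec.nodeName)
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

The processor enriches each event with kubernetes.pod.name, kubernetes.namespace and kubernetes.container.name fields, which is exactly the pod-name information asked about above.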