Schedule Filebeat daemonset on particular ip nodes in kubernetes - elasticsearch

I am trying to gather logs from a Kubernetes cluster into an ELK cluster via Filebeat.
As of now, Filebeat runs on all EC2 nodes as a DaemonSet and it works fine, but I want to schedule Filebeat only on the node whose IP address I choose.
My Kubernetes version is 1.15 and my Helm version is 2.17, which is why I am using the Elasticsearch Helm chart at version 7.17.3.
As per the Kubernetes documentation this can be achieved, and I have tried to modify the Helm chart with the following entry for Filebeat, but then no pod comes up:
daemonset:
  # Annotations to apply to the daemonset
  annotations: {}
  # additionals labels
  labels: {}
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - 10.17.7.7
Kindly help.

Kindly follow these steps:
Get the labels of all nodes by running the following command:
kubectl get nodes --show-labels
ip-xx-xx-x-xx.xx.compute.internal Ready worker 511d v1.15.11 KubernetesCluster=XXdev.com,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=xx,failure-domain.beta.kubernetes.io/zone=xx,kubernetes.io/arch=amd64,kubernetes.io/hostname=XXdev-aworker1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=true
ip-xx-xx-x-xx.xx.compute.internal Ready controlplane,etcd,worker 688d v1.15.11 KubernetesCluster=XXdev.com,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.2xlarge,beta.kubernetes.io/os=linux,cattle.io/creator=norman,failure-domain.beta.kubernetes.io/region=xx,failure-domain.beta.kubernetes.io/zone=xx,kubernetes.io/arch=amd64,kubernetes.io/hostname=XXdev-all3,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true,node-role.kubernetes.io/worker=true
Now, in the LABELS column, note the key-value pair node-role.kubernetes.io/etcd=true and use it in the affinity block. Two details matter here: matchFields only supports the metadata.name field (and on EC2 the node name is the internal DNS name, e.g. ip-xx-xx-x-xx.xx.compute.internal, not the bare IP address, which is why your original attempt scheduled nothing), so for a label you need matchExpressions, and the label value must be quoted as the string "true":
daemonset:
  # Annotations to apply to the daemonset
  annotations: {}
  # additionals labels
  labels: {}
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.kubernetes.io/etcd
            operator: In
            values:
            - "true"
Another way to achieve the same with nodeSelector is as follows:
nodeSelector:
  node-role.kubernetes.io/etcd: "true"
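If you want Filebeat on one specific node rather than on every node with a given role, a simple alternative is to add your own label to that node and select on it. A minimal sketch, assuming a hypothetical node name and a made-up filebeat=enabled label (the nodeSelector sits under the same daemonset block of the chart values as above):
kubectl label node ip-10-17-7-7.xx.compute.internal filebeat=enabled

daemonset:
  nodeSelector:
    filebeat: "enabled"
The DaemonSet controller will then only run Filebeat pods on nodes carrying that label, and you can move Filebeat later simply by relabeling nodes instead of editing chart values.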

Related

How to configure Filebeat on ECK for kafka input?

I have Elasticsearch and Kibana running on Kubernetes, both created by ECK. Now I am trying to add Filebeat and configure it to index data coming from a Kafka topic. This is my current configuration:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: my-filebeat
  namespace: my-namespace
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: my-elastic
  kibanaRef:
    name: my-kibana
  config:
    filebeat.inputs:
    - type: kafka
      hosts:
      - host1:9092
      - host2:9092
      - host3:9092
      topics: ["my.topic"]
      group_id: "my_group_id"
      index: "my_index"
  deployment:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
In the logs of the pod I can see entries like the following:
log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":2470,"time":{"ms":192}},"total":{"ticks":7760,"time":{"ms":367},"value":7760},"user":{"ticks":5290,"time":{"ms":175}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":13},"info":{"ephemeral_id":"5ce8521c-f237-4994-a02e-dd11dfd31b09","uptime":{"ms":181997}},"memstats":{"gc_next":23678528,"memory_alloc":15320760,"memory_total":459895768},"runtime":{"goroutines":106}},"filebeat":{"harvester":{"open_files":0,"running":0},"inputs":{"kafka":{"bytes_read":46510,"bytes_write":37226}}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.18,"15":0.77,"5":0.97,"norm":{"1":0.0738,"15":0.0481,"5":0.0606}}}}}}
And there are no error entries. So I assume that the connection to Kafka works. Unfortunately, there is no data in the my_index specified above. What am I doing wrong?
I guess Filebeat is not able to connect to the Elasticsearch cluster referenced in the manifest.
As per the docs, ECK secures the Elasticsearch it deploys and stores the credentials in Kubernetes Secrets:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html
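A quick way to rule out authentication problems and to check whether anything reached the index, sketched under the assumption that the resources keep the names from the manifest above (my-elastic, my-namespace, my_index); ECK stores the elastic user's password in the <cluster-name>-es-elastic-user secret:
# Fetch the elastic user's password from the ECK-managed secret
PASSWORD=$(kubectl get secret my-elastic-es-elastic-user -n my-namespace \
  -o go-template='{{.data.elastic | base64decode}}')
# Port-forward the ES HTTP service and count documents in the target index
kubectl port-forward -n my-namespace service/my-elastic-es-http 9200 &
curl -k -u "elastic:$PASSWORD" "https://localhost:9200/my_index/_count"
If the count stays at zero while the Kafka byte counters in the monitoring output keep growing, it is also worth checking whether index lifecycle management is enabled, since Filebeat ignores custom index settings while ILM is active and writes to the default filebeat-* alias instead.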

Fix elasticsearch broken cluster within kubernetes

I deployed an elasticsearch cluster with the official Helm chart (https://github.com/elastic/helm-charts/tree/master/elasticsearch).
There are 3 Helm releases:
master (3 nodes)
client (1 node)
data (2 nodes)
The cluster was running fine; I did a crash test by removing the master release and re-creating it.
After that, the master nodes are OK, but the data nodes complain:
Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid xeQ6IVkDQ2es1CO2yZ_7rw than local cluster uuid 9P9ZGqSuQmy7iRDGcit5fg, rejecting
which is normal because the master nodes are new.
How can I fix the data nodes' cluster state without removing the data folder?
Edit:
I know why it is broken, and I know the basic solution is to remove the data folder and restart the node (as I can see on the Elastic forum, there are a lot of similar questions without answers). But I am looking for a production-aware solution, maybe with the https://www.elastic.co/guide/en/elasticsearch/reference/current/node-tool.html tool?
Using the elasticsearch-node utility, it's possible to reset the cluster state so that the fresh node can join another cluster.
The tricky part is running this utility with Docker, because the Elasticsearch server must be stopped first!
Solution with Kubernetes:
Stop the pods by scaling the data StatefulSet to 0: kubectl scale statefulset data-nodes --replicas=0
Create a k8s Job that resets the cluster state, with the data volume attached
Apply the Job for each PVC
Scale the StatefulSet back up and enjoy! (A sketch of these kubectl steps follows after job.yaml below.)
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: test-fix-cluster-m[0-3]
spec:
  template:
    spec:
      containers:
      - args:
        - -c
        - yes | elasticsearch-node detach-cluster; yes | elasticsearch-node remove-customs '*'
        # uncomment for at least 1 PVC
        #- yes | elasticsearch-node unsafe-bootstrap -v
        command:
        - /bin/sh
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
        name: elasticsearch
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: es-data
      restartPolicy: Never
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-test-master-es-test-master-[0-3]
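For completeness, a rough sketch of the surrounding kubectl steps; data-nodes stands for whatever your data StatefulSet is called, and the Job has to be applied once per PVC with the matching name and claimName:
kubectl scale statefulset data-nodes --replicas=0        # step 1: stop the Elasticsearch pods
kubectl apply -f job.yaml                                # steps 2-3: repeat per PVC, editing name/claimName
kubectl wait --for=condition=complete job/test-fix-cluster-m0
kubectl scale statefulset data-nodes --replicas=2        # step 4: bring the data nodes back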
If you are interested, here is the code behind unsafe-bootstrap: https://github.com/elastic/elasticsearch/blob/master/server/src/main/java/org/elasticsearch/cluster/coordination/UnsafeBootstrapMasterCommand.java#L83
I have written a small story at https://medium.com/@thomasdecaux/fix-broken-elasticsearch-cluster-405ad67ee17c.

Packetbeat does not add Kubernetes metadata

I've started a minikube (using Kubernetes 1.18.3) to test out ECK, and specifically Packetbeat. The minikube profile is called "packetbeat" (important, as that's the hostname of the VirtualBox VM as well) and I followed the ECK quickstart to get it up and running. Elasticsearch (single node) and Kibana are running fine, and Packetbeat is gathering flows as well; however, I'm unable to make it add the Kubernetes metadata to the fields.
I'm working in the default namespace and created a ClusterRoleBinding to the view ClusterRole for the default ServiceAccount in the namespace. This is working well; if I do not do that, Packetbeat reports that it is unable to list the Pods on the API server.
This is the Beat config I'm using to make ECK deploy packetbeat:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  version: 7.9.0
  elasticsearchRef:
    name: quickstart
  kibanaRef:
    name: kibana
  config:
    packetbeat.interfaces.device: any
    packetbeat.protocols:
    - type: http
      ports: [80, 8000, 8080, 9200]
    - type: tls
      ports: [443]
    packetbeat.flows:
      timeout: 30s
      period: 10s
    processors:
    - add_kubernetes_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        containers:
        - name: packetbeat
          securityContext:
            runAsUser: 0
            capabilities:
              add:
              - NET_ADMIN
(This is mostly a slightly modified example from the ECK example page.) However, this is not working at all. I tried it with "add_kubernetes_metadata: {}" first, but that errors with the message:
2020-08-19T14:23:38.550Z ERROR [kubernetes] kubernetes/util.go:117 kubernetes: Querying for pod failed with error: pods "packetbeat" not found {"libbeat.processor": "add_kubernetes_metadata"}
This message goes away when I add "host: packetbeat". I'm no longer getting an error, but I'm not getting the Kubernetes metadata either. I'm mostly interested in the namespace tag, but I'm not getting any. I do not see any additional errors in the log; it just reports monitoring details every 30 seconds.
What am I doing wrong? Any more information I can provide to help me debug this?
So the docs are just unclear. Although they do not explicitly state it, you do need to add indexers and matchers. My understanding was that there are "default" ones (since you can disable them), but that does not seem to be the case. Adding the indexers and matchers as per the example in the docs makes the Kubernetes metadata part of the data.
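For reference, here is one possible shape of that processor block, sketched from the add_kubernetes_metadata documentation; the ip_port indexer and field_format matcher are what generally make sense for Packetbeat, but the exact lookup fields are an assumption and must match the IP/port fields your events actually carry:
processors:
- add_kubernetes_metadata:
    host: packetbeat
    indexers:
    - ip_port:
    matchers:
    - field_format:
        format: '%{[destination.ip]}:%{[destination.port]}'
With the ip_port indexer, pods are indexed by their IP and IP:port, and the field_format matcher builds the lookup key from each event, so traffic to a pod's address gets annotated with that pod's namespace, name, and labels.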

Service discovery does not discover multiple nodes

I have an Elasticsearch cluster on Kubernetes on AWS. I have used the UPMC operator and the ReadonlyREST security plugin.
I started my cluster by passing a YAML manifest to the UPMC operator with 3 data, master, and ingest nodes.
However, when I call localhost:9200/_nodes, I see that only one node is assigned. Service discovery did not attach the other nodes to the cluster. Essentially I have a single-node cluster. Are there any settings I am missing, or do I need to apply some settings after creating the cluster so that all nodes become part of it?
Here is my YAML file, which is used to create the cluster.
This config creates 3 master/data/ingest nodes, and the UPMC operator uses pod affinity to allocate the pods in different zones.
All 9 nodes are getting created just fine, but they are unable to become part of the cluster.
====================================
apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
  name: es-cluster
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
    image-pull-policy: Always
  cerebro:
    image: upmcenterprises/cerebro:0.7.2
    image-pull-policy: Always
  elastic-search-image: myelasticimage:elasticsearch-mod-v0.1
  elasticsearchEnv:
  - name: PUBLIC_KEY
    value: "default"
  - name: NETWORK_HOST
    value: "_eth0:ipv4_"
  image-pull-secrets:
  - name: egistrykey
  image-pull-policy: Always
  client-node-replicas: 3
  master-node-replicas: 3
  data-node-replicas: 3
  network-host: 0.0.0.0
  zones: []
  data-volume-size: 1Gi
  java-options: "-Xms512m -Xmx512m"
  snapshot:
    scheduler-enabled: false
    bucket-name: somebucket
    cron-schedule: "@every 1m"
    image: upmcenterprises/elasticsearch-cron:0.0.4
  storage:
    type: standard
    storage-class-provisioner: volume.alpha.kubernetes.io/storage-class
    volume-reclaim-policy: Delete
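A quick sanity check to confirm which nodes actually joined, sketched with placeholder pod names and assuming curl is available in the image; because network.host is bound to _eth0:ipv4_, query the pod's own IP rather than localhost (and add credentials if ReadonlyREST protects the HTTP layer):
# List the nodes that joined the cluster, as seen from inside one data pod
kubectl exec -it <es-data-pod> -- sh -c 'curl -s "http://$(hostname -i):9200/_cat/nodes?v"'
# Discovery problems usually show up in the pod logs
kubectl logs <es-master-pod> | grep -i discovery
If every node only ever lists itself, the usual suspects are the discovery host settings and whether port 9300 is reachable between the pods.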

How can I visualize Elasticsearch metrics in Prometheus? Both are installed in a GKE cluster

I have a GKE cluster with this Elasticsearch logging solution installed:
https://console.cloud.google.com/marketplace/details/google/elastic-gke-logging
And prometheus-operator is installed by Helm inside the same cluster.
I would like to configure a Grafana dashboard to visualize metrics from my Elasticsearch.
I read that the Elastic application from GKE has the elastic_exporter installed: https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/elastic-gke-logging/README.md
But if I go to my Prometheus panel I don't see any metrics about Elasticsearch. I tried installing another elastic_exporter, but nothing changed.
Am I missing something? Do I need to configure Prometheus to read from the elastic_exporter?
I see the metrics when I port-forward the elastic_exporter, but I don't see the metrics inside the Prometheus panel.
# HELP elasticsearch_breakers_estimated_size_bytes Estimated size in bytes of breaker
# TYPE elasticsearch_breakers_estimated_size_bytes gauge
elasticsearch_breakers_estimated_size_bytes{breaker="accounting",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="fielddata",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="in_flight_requests",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
elasticsearch_breakers_estimated_size_bytes{breaker="parent",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 4.6637464e+07
elasticsearch_breakers_estimated_size_bytes{breaker="request",cluster="elastic-gke-logging-1-cluster",es_client_node="true",es_data_node="true",es_ingest_node="true",es_master_node="true",host="10.50.2.54",name="elastic-gke-logging-1-elasticsearch-0"} 0
# HELP elasticsearch_breakers_limit_size_bytes Limit size in bytes for breaker
# TYPE elasticsearch_breakers_limit_size_bytes gauge
Thank you
You are probably missing a ServiceMonitor; this should work:
k apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
  labels:
    release: prom
  name: elasticsearch
spec:
  endpoints:
  - port: metrics
  selector:
    matchLabels:
      app: es-exporter
EOF
Your Elasticsearch service must expose a metrics port and carry the label app: es-exporter, similar to this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: es-exporter
    component: elasticsearch
  name: elasticsearch
spec:
  ports:
  - name: transport
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: metrics
    port: 9108
    protocol: TCP
    targetPort: 9108
  selector:
    component: elasticsearch
  type: ClusterIP
After that you should find the metrics in Prometheus; to confirm, you can always use the Status -> Targets tab in Prometheus.
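If the target still does not show up, a common cause is that the Prometheus Operator only picks up ServiceMonitors whose labels match its serviceMonitorSelector (the release: prom label above assumes the Helm release is called prom). A quick sketch of how to compare both sides:
# What label selector does the Prometheus resource expect on ServiceMonitors?
kubectl get prometheus -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
# What labels does the ServiceMonitor actually carry?
kubectl get servicemonitor elasticsearch -o yaml | grep -A3 'labels:'
If they do not match, either relabel the ServiceMonitor or adjust the selector; the es-exporter endpoint should then appear under Status -> Targets.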
