SOS
I'm trying to deploy the ELK stack on my Kubernetes cluster.
Elasticsearch, Metricbeat, Filebeat, and Kibana are running on Kubernetes, but there is no Filebeat index of logs in Kibana.
Kibana is accessible: URL here
Only the Metricbeat index is available.
I don't know where the issue is; please help me figure it out. Any ideas?
Pods:
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 21h
es-mono-0 1/1 Running 0 19h
filebeat-4446k 1/1 Running 0 11m
filebeat-fwb57 1/1 Running 0 11m
filebeat-mk5wl 1/1 Running 0 11m
filebeat-pm8xd 1/1 Running 0 11m
kibana-86d8ccc6bb-76bwq 1/1 Running 0 24h
logstash-deployment-8ffbcc994-bcw5n 1/1 Running 0 24h
metricbeat-4s5tx 1/1 Running 0 21h
metricbeat-sgf8h 1/1 Running 0 21h
metricbeat-tfv5d 1/1 Running 0 21h
metricbeat-z8rnm 1/1 Running 0 21h
Services:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch LoadBalancer 10.245.83.99 159.223.240.9 9200:31872/TCP,9300:30997/TCP 19h
kibana NodePort 10.245.229.75 <none> 5601:32040/TCP 24h
kibana-external LoadBalancer 10.245.184.232 <pending> 80:31646/TCP 24h
logstash-service ClusterIP 10.245.113.154 <none> 5044/TCP 24h
Logstash logs (raw)
Filebeat logs (raw)
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elk
labels:
run: kibana
spec:
replicas: 1
selector:
matchLabels:
run: kibana
template:
metadata:
labels:
run: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:6.5.4
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch.elk:9200/
- name: XPACK_SECURITY_ENABLED
value: "true"
#- name: CLUSTER_NAME
# value: elasticsearch
#resources:
# limits:
# cpu: 1000m
# requests:
# cpu: 500m
ports:
- containerPort: 5601
name: http
protocol: TCP
#volumes:
# - name: logtrail-config
# configMap:
# name: logtrail-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elk
labels:
#service: kibana
run: kibana
spec:
type: NodePort
selector:
run: kibana
ports:
- port: 5601
targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana-external
spec:
type: LoadBalancer
selector:
    run: kibana  # must match the Deployment's pod label (run: kibana)
ports:
- name: http
port: 80
targetPort: 5601
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elk
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ['logstash-service:5044']
setup.kibana.host: "http://kibana.elk:5601"
setup.kibana.protocol: "http"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: elk
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
        #- name: data
        #  mountPath: /usr/share/filebeat/data
        #  subPath: filebeat/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
#- name: data
# persistentVolumeClaim:
# claimName: elk-pvc
---
metricbeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: elk
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
setup.kibana:
host: "kibana.elk:5601"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
namespace: elk
labels:
k8s-app: metricbeat
data:
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
serviceAccountName: metricbeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.5.4
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: data
mountPath: /usr/share/metricbeat/data
subPath: metricbeat/
volumes:
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-daemonset-modules
- name: data
persistentVolumeClaim:
claimName: elk-pvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: elk
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
logstash.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: elk
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch.elk:9200"]
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elk
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elk
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
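One thing worth checking in logstash.conf above: the elasticsearch output sets no index option, so Logstash writes events to its default logstash-* indices, and no filebeat-* index will ever appear in Kibana under that name. If Beats-style index names are wanted, here is a minimal sketch of the amended ConfigMap data (the %{[@metadata][beat]} and %{[@metadata][version]} fields are populated automatically by the beats input; filter stage omitted for brevity):
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch.elk:9200"]
        # Index per shipper and day, e.g. filebeat-6.5.4-2022.01.01
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
    }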
Full source files (GitHub)
Try using Fluentd for log transport instead.
fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: elk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.elk.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
resources:
limits:
memory: 512Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
Related
How to set up EFK logging in AKS cluster nodes?
Below are my spec files for EFK logging in AKS clusters.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
The setup works fine, except that no logs reach the Elasticsearch cluster from Fluentd, whereas the same spec files work fine inside a minikube cluster.
Kibana is up and able to connect to Elasticsearch, and the same is true for Fluentd; the logs just never arrive in Elasticsearch.
What extra configuration is needed to make these config files work with Azure Kubernetes Service (AKS) cluster nodes?
I had to add the environment variables below for Fluentd.
Reference Link: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
Here's the complete spec.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods
I need to create a cluster with ELK.
Elasticsearch should be stateful, but I'm not able to attach disks; the error below occurs.
Does anyone have a solution?
Sincerely, Pablo
Error message: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: elk-iot-cloud
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
name: elasticsearch-logging
namespace: elk-iot-cloud
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch-logging
apiGroup: ""
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
spec:
serviceName: elasticsearch-logging
replicas: 3
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
k8s-app: elasticsearch-logging
template:
metadata:
labels:
k8s-app: elasticsearch-logging
spec:
serviceAccountName: elasticsearch-logging
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
name: elasticsearch-logging
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
# sets a list of master-eligible nodes in the cluster.
- name: discovery.seed_hosts
value: 'elasticsearch-logging-0.elasticsearch-logging.elk-iot-cloud.svc.cluster.local,elasticsearch-logging-1.elasticsearch-logging.elk-iot-cloud.svc.cluster.local ,elasticsearch-logging-2.elasticsearch-logging.elk-iot-cloud.svc.cluster.local'
# specifies a list of master-eligible nodes that will participate in the master election process.
- name: cluster.initial_master_nodes
value: 'elasticsearch-logging-0,elasticsearch-logging-1,elasticsearch-logging-2'
- name: ES_JAVA_OPTS
value: '-Xms1g -Xmx1g'
- name: ELASTICSEARCH_USERNAME
value: 'elastic'
- name: ELASTIC_PASSWORD
value: 'elastic'
#- name: xpack.license.self_generated.type
# value: "basic"
- name: xpack.security.enabled
value: 'true'
#- name: xpack.security.transport.ssl.enabled
# value: 'true'
#- name: xpack.security.audit.enabled
# value: 'true'
- name: xpack.monitoring.collection.enabled
value: 'true'
volumes:
- name: elasticsearch-logging
emptyDir: {}
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: elasticsearch-logging-init
image: busybox
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
app: elasticsearch-logging
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: do-block-storage
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: data
namespace: elk-iot-cloud
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv0
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv1
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv2
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-2
Config:
Ubuntu / Microk8s / K8S 1.21.7
Check your PersistentVolumeClaims (kubectl get pvc -n elk-iot-cloud).
The "has unbound immediate PersistentVolumeClaims" message suggests that your PVC status is Pending, meaning that either the StorageClass do-block-storage doesn't exist, or the provisioner behind that class did not create the underlying volume and the corresponding PersistentVolume object.
Check your StorageClasses (kubectl get sc).
Make sure the storageClassName in your StatefulSet's volumeClaimTemplate refers to an existing StorageClass; note that the template above requests do-block-storage, while the only StorageClass defined in the manifest is named data.
Make sure the provisioner for that StorageClass works as expected (kubectl logs).
Alternatively, for a test, you could use ephemeral storage instead: remove the volumeClaimTemplate and add an emptyDir volume, as sketched below.
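A minimal sketch of that ephemeral variant, assuming it is acceptable for the test that data is lost whenever a pod is rescheduled:
# StatefulSet fragment: volumeClaimTemplates removed, "data" volume declared inline.
spec:
  template:
    spec:
      containers:
      - name: elasticsearch-logging
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      # Replaces the volumeClaimTemplates entry named "data";
      # contents are lost whenever the pod is rescheduled.
      - name: data
        emptyDir: {}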
I am trying to deploy Elasticsearch to a K8s cluster. Below is my configuration. It works fine with replicas: 1, but if I change that to a value greater than 1, multiple pods get deployed; how can I make them work together as the nodes of one Elasticsearch cluster?
I know I can manually add nodes to the Elasticsearch cluster by using the API, but is there any way to let the cluster discover the extra nodes automatically? (See the sketch after the manifest below.)
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es
namespace: default
spec:
serviceName: es-entrypoint
replicas: 1
selector:
matchLabels:
name: es
template:
metadata:
labels:
name: es
spec:
volumes:
- name: es-config
configMap:
name: es-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
- name: persistent-storage
persistentVolumeClaim:
claimName: es-claim
initContainers:
- name: permissions-fix
image: busybox
volumeMounts:
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
command: [ 'chown' ]
args: [ '1000:1000', '/usr/share/elasticsearch/data' ]
containers:
- name: es
image: elasticsearch:7.10.1
resources:
requests:
cpu: 2
memory: 8
ports:
- name: http
containerPort: 9200
- containerPort: 9300
name: inter-node
volumeMounts:
- name: es-config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
---
apiVersion: v1
kind: Service
metadata:
name: es-entrypoint
spec:
selector:
name: es
ports:
- port: 9200
targetPort: 9200
protocol: TCP
type: NodePort
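No answer is recorded here, but the es-cluster StatefulSet earlier on this page shows the usual pattern: Elasticsearch discovers its peers by itself once every pod gets the same discovery.seed_hosts and cluster.initial_master_nodes settings and the StatefulSet is fronted by a headless service that gives each pod a stable DNS name. A sketch of the env block for replicas: 3, assuming es-entrypoint is changed to a headless service (clusterIP: None) so that es-0.es-entrypoint etc. resolve; note, separately, that memory: 8 in the resources request above means 8 bytes, so something like 8Gi was probably intended:
        env:
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Per-pod DNS names are provided by the headless service.
        - name: discovery.seed_hosts
          value: "es-0.es-entrypoint,es-1.es-entrypoint,es-2.es-entrypoint"
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
These env vars only take effect if the elasticsearch.yml from the es-config ConfigMap (not shown) does not override them, e.g. with discovery.type: single-node.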
I'm running the EFK stack in my Kubernetes cluster; however, each time I start the Kibana dashboard I have to manually import export.ndjson. I've heard that all Kibana objects are stored in Elasticsearch, so I mounted this file to /usr/share/elasticsearch/data/, but I still can't see it in the dashboard. (A possible import approach is sketched after the manifests below.)
Here are my yaml files:
kibana :
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: tools
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.6.0
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: tools
spec:
selector:
app: kibana
ports:
- name: client
port: 5601
protocol: TCP
type: ClusterIP
es:
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
env:
- name: discovery.type
valueFrom:
configMapKeyRef:
name: tools-config
key: discovery.type
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
- name: kibana-dashboard
mountPath: /usr/share/elasticsearch/data/export.ndjson
subPath: export.ndjson
volumes:
- name: elasticsearch-volume
persistentVolumeClaim:
claimName: elasticsearch-storage-pvc
- name: kibana-dashboard
configMap:
name: kibana-dashboard
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
app: elasticsearch
ports:
- name: http
protocol: TCP
port: 9200
- name: transport
protocol: TCP
port: 9300
type: ClusterIP
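Mounting export.ndjson into the Elasticsearch data directory won't import anything: saved objects live in the .kibana index and are created through Kibana's saved objects API, not read from disk. A minimal sketch of a one-shot Job that posts the file from the existing kibana-dashboard ConfigMap to Kibana's import endpoint (the image tag and the overwrite flag are assumptions):
apiVersion: batch/v1
kind: Job
metadata:
  name: kibana-import-dashboard
  namespace: tools
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: import
        image: curlimages/curl:7.85.0
        # The image entrypoint is curl; the kbn-xsrf header is required by the Kibana API.
        args:
        - -sf
        - -X
        - POST
        - http://kibana.tools.svc.cluster.local:5601/api/saved_objects/_import?overwrite=true
        - -H
        - "kbn-xsrf: true"
        - --form
        - file=@/import/export.ndjson
        volumeMounts:
        - name: dashboard
          mountPath: /import
      volumes:
      - name: dashboard
        configMap:
          name: kibana-dashboard
Run it once after Kibana is up; the dashboard should then survive restarts, since it is stored in Elasticsearch rather than re-imported by hand.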
I am configuring my EFK stack to keep all Kubernetes-related logs, including the events.
I searched for and found the Metricbeat config file and deployed it in my cluster.
Problem: all the other Metricbeat modules are working fine except for the event metricset. I can see logs from state_pod, state_node, etc., but no logs are available for the events module.
ERROR : 2019/09/04 11:53:23.961693 watcher.go:52: ERR kubernetes: List API error kubernetes api: Failure 403 events is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "events" in API group "" at the cluster scope
My metricbeat.yml file:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-deployment-modules
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
data:
# This module requires `kube-state-metrics` up and running under `kube-system` namespace
kubernetes.yml: |-
- module: kubernetes
metricsets:
- state_node
- state_deployment
- state_replicaset
- state_pod
- state_container
period: 10s
hosts: ["kube-state-metrics:5602"]
- module: kubernetes
enabled: true
metricsets:
- event
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.0.1
args: [
"-c", "/etc/metricbeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: "elasticsearch"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
---
You are running your Deployment with the default service account.
Set the name of the ServiceAccount in the spec.template.spec.serviceAccountName field of the Deployment definition:
kind: Deployment
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
      serviceAccountName: metricbeat  # <<--- here
You may also need to add the pods/log resource to your ClusterRole definition, as sketched below.
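For reference, the amended rules block might look like the following; whether pods/log is actually needed depends on which metricsets you enable, so treat it as something to verify:
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - events
  - pods
  - pods/log
  verbs:
  - get
  - watch
  - list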