How do I switch from Table-manager to Compactor in an existing Loki deploy? - grafana-loki

I have an issue where chunks are not being deleted according to the configured max_look_back_period. After some research I discovered here that the table manager is no longer supported, as noted in the comment here.
However, I am unsure how to amend my current configuration, which looks like this:
loki-stack-values.yml
grafana:
  enabled: true
  persistence:
    enabled: true
    size: 5Gi
  adminPassword: Vfgfhdjdkdisynwtey678CMX7xghuy879
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: true
      size: 2Gi
  server:
    persistentVolume:
      enabled: true
      size: 10Gi
loki:
  enabled: true
  persistence:
    enabled: true
    size: 70Gi
  config:
    chunk_store_config:
      max_look_back_period: 672h
    table_manager:
      retention_deletes_enabled: true
      retention_period: 672h
statefulset.yml
# Source: loki-stack/charts/loki/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
  namespace: loki
  labels:
    app: loki
    chart: loki-2.11.0
    release: loki
    heritage: Helm
  annotations:
    {}
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: loki
      release: loki
  serviceName: loki-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: loki
        name: loki
        release: loki
      annotations:
        checksum/config: f1685c19aa5e8157738636fd074c7cb45763c46d46f08d9f05ca31e4445cd082
        prometheus.io/port: http-metrics
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: loki
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      initContainers:
        []
      containers:
        - name: loki
          image: "grafana/loki:2.5.0"
          imagePullPolicy: IfNotPresent
          args:
            - "-config.file=/etc/loki/loki.yaml"
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: config
              mountPath: /etc/loki
            - name: storage
              mountPath: "/data"
              subPath:
          ports:
            - name: http-metrics
              containerPort: 3100
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          readinessProbe:
            httpGet:
              path: /ready
              port: http-metrics
            initialDelaySeconds: 45
          resources:
            {}
          securityContext:
            readOnlyRootFilesystem: true
          env:
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      terminationGracePeriodSeconds: 4800
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config
          secret:
            secretName: loki
  volumeClaimTemplates:
    - metadata:
        name: storage
        annotations:
          {}
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: "70Gi"
        storageClassName:
I would like to amend my configuration as below:
grafana:
  enabled: true
  persistence:
    enabled: true
    size: 5Gi
  adminPassword: Vfgfhdjdkdisynwtey678CMX7xghuy879
prometheus:
  enabled: true
  alertmanager:
    persistentVolume:
      enabled: true
      size: 2Gi
  server:
    persistentVolume:
      enabled: true
      size: 10Gi
loki:
  enabled: true
  persistence:
    enabled: true
    size: 70Gi
  config:
    compactor:
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
    limits_config:
      retention_period: 672h
Is this correct? Or are there additional options I will need to add as well?
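For what it's worth, here is a minimal sketch of the shape this usually takes with the loki-stack chart's default boltdb-shipper/filesystem storage, based on the Loki 2.x compactor documentation. The working_directory path is an assumption (match it to your storage_config), and the table_manager and chunk_store_config.max_look_back_period blocks would be removed rather than kept alongside it:

loki:
  config:
    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor  # assumption: adjust to your storage_config paths
      shared_store: filesystem                                 # must match the object store in storage_config
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
    limits_config:
      retention_period: 672h
    # the table_manager and chunk_store_config.max_look_back_period blocks are removed entirely

With retention handled by the compactor, retention_period under limits_config becomes the single knob that replaces max_look_back_period and the table manager's retention_period.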

Related

K8s - Metricbeat sending data to Elasticsearch but Filebeat isn't

SOS
I'm trying to deploy the ELK stack on my Kubernetes cluster.
Elasticsearch, Metricbeat, Filebeat and Kibana are running on Kubernetes, but there are no Filebeat index logs in Kibana.
Kibana accessible: URL here
Only the Metricbeat index is available.
I don't know where the issue is; please help me figure it out.
Any ideas?
Pods:
NAME READY STATUS RESTARTS AGE
counter 1/1 Running 0 21h
es-mono-0 1/1 Running 0 19h
filebeat-4446k 1/1 Running 0 11m
filebeat-fwb57 1/1 Running 0 11m
filebeat-mk5wl 1/1 Running 0 11m
filebeat-pm8xd 1/1 Running 0 11m
kibana-86d8ccc6bb-76bwq 1/1 Running 0 24h
logstash-deployment-8ffbcc994-bcw5n 1/1 Running 0 24h
metricbeat-4s5tx 1/1 Running 0 21h
metricbeat-sgf8h 1/1 Running 0 21h
metricbeat-tfv5d 1/1 Running 0 21h
metricbeat-z8rnm 1/1 Running 0 21h
SVC
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch LoadBalancer 10.245.83.99 159.223.240.9 9200:31872/TCP,9300:30997/TCP 19h
kibana NodePort 10.245.229.75 <none> 5601:32040/TCP 24h
kibana-external LoadBalancer 10.245.184.232 <pending> 80:31646/TCP 24h
logstash-service ClusterIP 10.245.113.154 <none> 5044/TCP 24h
Logstash logs (Raw)
Filebeat logs (Raw)
kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elk
labels:
run: kibana
spec:
replicas: 1
selector:
matchLabels:
run: kibana
template:
metadata:
labels:
run: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:6.5.4
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch.elk:9200/
- name: XPACK_SECURITY_ENABLED
value: "true"
#- name: CLUSTER_NAME
# value: elasticsearch
#resources:
# limits:
# cpu: 1000m
# requests:
# cpu: 500m
ports:
- containerPort: 5601
name: http
protocol: TCP
#volumes:
# - name: logtrail-config
# configMap:
# name: logtrail-config
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elk
labels:
#service: kibana
run: kibana
spec:
type: NodePort
selector:
run: kibana
ports:
- port: 5601
targetPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana-external
spec:
type: LoadBalancer
selector:
app: kibana
ports:
- name: http
port: 80
targetPort: 5601
filebeat.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: elk
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: false
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
output.logstash:
hosts: ['logstash-service:5044']
setup.kibana.host: "http://kibana.elk:5601"
setup.kibana.protocol: "http"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: elk
labels:
k8s-app: filebeat
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.5.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
securityContext:
runAsUser: 0
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
#- name: data
# mountPath: /usr/share/filebeat/data
subPath: filebeat/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
#- name: data
# persistentVolumeClaim:
# claimName: elk-pvc
---
Metricbeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: elk
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
setup.kibana:
host: "kibana.elk:5601"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
namespace: elk
labels:
k8s-app: metricbeat
data:
system.yml: |-
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
#- core
#- diskio
#- socket
processes: ['.*']
process.include_top_n:
by_cpu: 5 # include top 5 processes by CPU
by_memory: 5 # include top 5 processes by memory
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event.when.regexp:
system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
kubernetes.yml: |-
- module: kubernetes
metricsets:
- node
- system
- pod
- container
- volume
period: 10s
hosts: ["localhost:10255"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
serviceAccountName: metricbeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.5.4
args: [
"-c", "/etc/metricbeat.yml",
"-e",
"-system.hostfs=/hostfs",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
- name: dockersock
mountPath: /var/run/docker.sock
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: data
mountPath: /usr/share/metricbeat/data
subPath: metricbeat/
volumes:
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-daemonset-modules
- name: data
persistentVolumeClaim:
claimName: elk-pvc
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: elk
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: elk
labels:
k8s-app: metricbeat
Logstash.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: elk
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
output {
elasticsearch {
hosts => ["elasticsearch.elk:9200"]
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elk
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elk
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
Full source files (GitHub)
Try using Fluentd for log transport instead:
fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: elk
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: elk
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch.elk.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
resources:
limits:
memory: 512Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers

NodePort works but Ingress doesn't

I am new to Kubernetes and I am trying to set up a web app for production. I tested an nginx app and it works perfectly with both NodePort and Ingress.
When I set up my web app and exposed it using NodePort it works, but when I expose it via Ingress it doesn't. I have since deleted the nginx app's Deployment, Service and Ingress, but my web app behaves the same (works via NodePort, not via Ingress). Any help getting this to work is appreciated.
This is the nginx YAML (works via both NodePort and Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:stable
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
protocol: TCP
readinessProbe:
httpGet:
port: web
path: /
---
apiVersion: v1
kind: Service
metadata:
name: nginx-nodeport
namespace: default
spec:
type: NodePort
selector:
# app: nginx
app: mruser-dev
ports:
- name: web
port: 80
targetPort: web
nodePort: 31122
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-nodeport
port:
number: 80
This is my web app YAML (works via NodePort but not via Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
name: mywebapp-dev
spec:
replicas: 2
selector:
matchLabels:
app: mywebapp-dev
template:
metadata:
labels:
app: mywebapp-dev
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": ["server-node-1"]
}
]
}
]
}
}
}
spec:
volumes:
- name: logs
emptyDir: {}
- name: cache
emptyDir: {}
- name: testing
emptyDir: {}
- name: sessions
emptyDir: {}
- name: views
emptyDir: {}
- name: mywebapp-dev-nfsvolume
nfs:
server: 10.0.0.20
path: /var/www/html/misteruser-dev
- name: mywebapp-nginx-nfsvolume
nfs:
server: 10.0.0.20
path: /var/www/nginx
securityContext:
fsGroup: 82
initContainers:
- name: database-migrations
image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
imagePullPolicy: Never
envFrom:
- configMapRef:
name: mywebapp-dev-laravel
- secretRef:
name: mywebapp-dev-laravel
volumeMounts:
- name: mywebapp-dev-nfsvolume
mountPath: /var/www/html/
command:
- "php"
args:
- "artisan"
- "migrate"
- "--force"
containers:
- name: nginx
imagePullPolicy: Never
image: mywebapp-dev-laravel-nginx:stable-alpine
volumeMounts:
- name: mywebapp-nginx-nfsvolume
mountPath: /etc/nginx/conf.d/
resources:
limits:
cpu: 200m
memory: 50M
ports:
- containerPort: 80
name: web
protocol: TCP
readinessProbe:
httpGet:
port: web
path: /
- name: fpm
imagePullPolicy: Never
envFrom:
- configMapRef:
name: mywebapp-dev-laravel
- secretRef:
name: mywebapp-dev-laravel
securityContext:
runAsUser: 82
readOnlyRootFilesystem: true
volumeMounts:
- name: logs
mountPath: /var/www/html/storage/logs
- name: cache
mountPath: /var/www/html/storage/framework/cache
- name: sessions
mountPath: /var/www/html/storage/framework/sessions
- name: views
mountPath: /var/www/html/storage/framework/views
- name: testing
mountPath: /var/www/html/storage/framework/testing
- name: mywebapp-dev-nfsvolume
mountPath: /var/www/html/
resources:
limits:
cpu: 500m
memory: 200Mi
image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
name: mywebapp-dev-svc
namespace: default
spec:
type: NodePort
selector:
app: mywebapp-dev
ports:
- name: web
port: 80
targetPort: 80
nodePort: 31112
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mywebapp-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mywebapp-dev-svc
port:
number: 80

How to set up EFK logging on AKS cluster nodes?

Below are my spec files for EFK logging on AKS clusters.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
The setup is working fine, except that no logs are reaching the Elasticsearch cluster from Fluentd, whereas the same spec files work fine inside a Minikube cluster.
Kibana is up and able to connect to Elasticsearch, and the same is true for Fluentd; the logs just aren't arriving in Elasticsearch.
What extra configuration is needed to make these config files work with Azure Kubernetes Service (AKS) cluster nodes?
I had to add the environment variables below for Fluentd.
Reference link: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
  value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
  value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
Here's the complete spec.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods

ELK Stateful - ERROR: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims

I need to create a cluster with ELK.
Elasticsearch should be stateful, but I'm not able to attach the disks; the error highlighted below occurs.
Does anyone have a solution?
Sincerely, Pablo
Error message: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: elk-iot-cloud
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
name: elasticsearch-logging
namespace: elk-iot-cloud
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch-logging
apiGroup: ""
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
spec:
serviceName: elasticsearch-logging
replicas: 3
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
k8s-app: elasticsearch-logging
template:
metadata:
labels:
k8s-app: elasticsearch-logging
spec:
serviceAccountName: elasticsearch-logging
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch:7.15.1
name: elasticsearch-logging
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
# sets a list of master-eligible nodes in the cluster.
- name: discovery.seed_hosts
value: 'elasticsearch-logging-0.elasticsearch-logging.elk-iot-cloud.svc.cluster.local,elasticsearch-logging-1.elasticsearch-logging.elk-iot-cloud.svc.cluster.local ,elasticsearch-logging-2.elasticsearch-logging.elk-iot-cloud.svc.cluster.local'
# specifies a list of master-eligible nodes that will participate in the master election process.
- name: cluster.initial_master_nodes
value: 'elasticsearch-logging-0,elasticsearch-logging-1,elasticsearch-logging-2'
- name: ES_JAVA_OPTS
value: '-Xms1g -Xmx1g'
- name: ELASTICSEARCH_USERNAME
value: 'elastic'
- name: ELASTIC_PASSWORD
value: 'elastic'
#- name: xpack.license.self_generated.type
# value: "basic"
- name: xpack.security.enabled
value: 'true'
#- name: xpack.security.transport.ssl.enabled
# value: 'true'
#- name: xpack.security.audit.enabled
# value: 'true'
- name: xpack.monitoring.collection.enabled
value: 'true'
volumes:
- name: elasticsearch-logging
emptyDir: {}
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: elasticsearch-logging-init
image: busybox
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data
labels:
app: elasticsearch-logging
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: do-block-storage
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: elk-iot-cloud
labels:
k8s-app: elasticsearch-logging
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: data
namespace: elk-iot-cloud
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv0
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol0
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv1
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-pv2
namespace: elk-iot-cloud
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: data
local:
path: /mnt/disk/vol2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- elasticsearch-logging-2
Config:
Ubuntu / Microk8s / K8S 1.21.7
Check your PersistentVolumeClaims (kubectl get pvc -n elk-iot-cloud).
The "has unbound immediate PersistentVolumeClaim" message suggests that your PVC status is "Pending". Meaning that either you don't have StorageClass "do-block-storage", or that the provisioner corresponding to that class did not create the underlying volume and corresponding PersistentVolume object.
Check your StorageClasses (kubectl get sc)
Make sure the storageClassName in your StatefulSets volumeClaimTemplate refers to an existing StorageClass.
Make sure the provisioner for that StorageClass works as expected (kubectl logs).
Alternatively, for a test, you could use ephemeral storage instead - remove the volumeClaimTemplate, add some emptyDir volume instead.
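A minimal sketch of that ephemeral-storage test, assuming the StatefulSet above: drop volumeClaimTemplates and declare the data volume as emptyDir at the pod level (data stored this way does not survive pod rescheduling):

      containers:
        - name: elasticsearch-logging
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          emptyDir: {}   # test only; replaces the "data" volumeClaimTemplate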

How can I create an Elasticsearch pod on Kubernetes without this error?

I want to create an Elasticsearch pod on Kubernetes.
I made some config changes to edit path.data and path.logs, but I'm getting this error:
error: error validating "es-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec.containers[0]): unknown
field "volumes" in io.k8s.api.core.v1.Container; if you choose to
ignore these errors, turn validation off with --validate=false
service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch
es-svc.yml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
component: elasticsearch
spec:
# type: LoadBalancer
selector:
component: elasticsearch
ports:
- name: http
port: 9200
protocol: TCP
- name: transport
port: 9300
protocol: TCP
elasticsearch.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-config
data:
elasticsearch.yml: |
cluster:
name: ${CLUSTER_NAME:elasticsearch-default}
node:
master: ${NODE_MASTER:true}
data: ${NODE_DATA:true}
name: ${NODE_NAME}
ingest: ${NODE_INGEST:true}
max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
processors: ${PROCESSORS:1}
network.host: ${NETWORK_HOST:_site_}
path:
data: ${DATA_PATH:"/data/elk"}
repo: ${REPO_LOCATIONS:[]}
bootstrap:
memory_lock: ${MEMORY_LOCK:false}
http:
enabled: ${HTTP_ENABLE:true}
compression: true
cors:
enabled: true
allow-origin: "*"
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
xpack:
license.self_generated.type: basic
es-deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
There is a syntax problem in your es-deploy.yml file.
This should work:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
The volumes section does not belong under the containers section; it should be under the pod spec, as the error suggests.
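Here is a sketch of just the relevant structure from the corrected manifest above: volumes sits at the pod spec level, as a sibling of containers, not inside the container definition.

spec:
  template:
    spec:
      containers:
        - name: es
          volumeMounts:        # per-container: where the volumes get mounted
            - name: storage
              mountPath: /data/elk
      volumes:                 # pod-level: the volume definitions themselves
        - name: storage
          emptyDir: {}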
You can validate your k8s YAML files for syntax errors online using this site.
Hope this helps.
