I have a Kubernetes cluster running in minikube, and I would like to collect metrics from it via Elasticsearch Metricbeat. Almost all of the metrics show up except the CPU usage, which is always 0, even when I'm running a pod with high CPU usage. Here are my YAMLs; what could I possibly be doing wrong?
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: metricbeat
namespace: default
spec:
type: metricbeat
version: 8.1.0
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
setup:
kibana:
path: "/kibana"
metricbeat:
autodiscover:
providers:
- hints:
default_config: {}
enabled: "true"
node: ${NODE_NAME}
type: kubernetes
modules:
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
process:
include_top_n:
by_cpu: 5
by_memory: 5
processes:
- .*
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event:
when:
regexp:
system:
filesystem:
mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
- module: kubernetes
period: 10s
node: ${NODE_NAME}
hosts:
- https://${NODE_NAME}:10250
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl:
verification_mode: none
metricsets:
- node
- system
- pod
- container
- volume
processors:
- add_cloud_metadata: {}
- add_host_metadata: {}
daemonSet:
podTemplate:
spec:
serviceAccountName: metricbeat
automountServiceAccountToken: true
containers:
- args:
- -e
- -c
- /etc/beat.yml
- -system.hostfs=/hostfs
name: metricbeat
volumeMounts:
- mountPath: /hostfs/sys/fs/cgroup
name: cgroup
- mountPath: /var/run/docker.sock
name: dockersock
- mountPath: /hostfs/proc
name: proc
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /sys/fs/cgroup
name: cgroup
- hostPath:
path: /var/run/docker.sock
name: dockersock
- hostPath:
path: /proc
name: proc
And here is the ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
namespace: default
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- events
- pods
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
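Note that this ClusterRole only takes effect once it is bound to the metricbeat ServiceAccount that the DaemonSet's pod template references. Assuming those objects are not already deployed elsewhere, a minimal sketch would be:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: default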
I am trying to send the logs from my AKS cluster into Elasticsearch. The log that I am getting in Kibana is "event.agent_id_status: auth_metadata_missing", even after all the volume mounts are done correctly.
Here's my ConfigMap for the standalone Elastic Agent:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: elastic
labels:
k8s-app: elastic-agent-standalone
data:
agent.yml: |-
outputs:
default:
type: elasticsearch
protocol: https
ssl.verification_mode: 'none'
allow_older_versions: true
hosts:
- >-
${ES_HOST}
username: ${ES_USERNAME}
password: ${ES_PASSWORD}
indices:
- index: "journalbeat-alias"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "journal"
- index: "test-audit"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "audit"
- index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
when.not:
has_fields: ['kubernetes.namespace']
agent:
monitoring:
enabled: true
use_output: default
logs: true
metrics: false
providers.kubernetes:
node: ${NODE_NAME}
scope: node
#Uncomment to enable hints' support
#hints.enabled: true
inputs:
- name: system-logs
type: logfile
use_output: default
meta:
package:
name: system
version: 0.10.7
data_stream:
namespace: filebeat
streams:
- data_stream:
dataset: audit
type: logfile
paths:
- /var/log/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: journald
type: logfile
paths:
- /var/lib/host/log/journal
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: container
type: logfile
paths:
- /var/log/containers/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
And here's my DaemonSet file:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent-standalone
namespace: elastic
labels:
app: elastic-agent-standalone
spec:
selector:
matchLabels:
app: elastic-agent-standalone
template:
metadata:
labels:
app: elastic-agent-standalone
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent-standalone
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: elastic-agent-standalone
image: docker.elastic.co/beats/elastic-agent:8.4.3
args: [
"-c", "/etc/elastic-agent/agent.yml",
"-e",
]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: FLEET_ENROLL_INSECURE
value: "1"
- name: ES_USERNAME
value: <CORRECT USER>
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: <MY CORRECT PASSWORD>
# The Elasticsearch host to communicate with
- name: ES_HOST
value: <CORRECT HOST>
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: STATE_PATH
value: "/etc/elastic-agent"
securityContext:
runAsUser: 0
resources:
limits:
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/elastic-agent/agent.yml
readOnly: true
subPath: agent.yml
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlogcontainers
mountPath: /var/log/containers
readOnly: true
- name: varlogpods
mountPath: /var/log/pods
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: kubenodevarlogs
mountPath: /var/lib/host/log
readOnly: true
# - name: varlog
# mountPath: /var/log
# readOnly: true
- name: etc-full
mountPath: /hostfs/etc
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: kubenodevarlogs
hostPath:
path: /var/log
# The following volumes are needed for Cloud Security Posture integration (cloudbeat)
# If you are not using this integration, then these volumes and the corresponding
# mounts can be removed.
- name: etc-full
hostPath:
path: /etc
- name: var-lib
hostPath:
path: /var/lib
And this is the log that I get in Kibana for the path /var/log/containers, but it's the same for all the other inputs, and no data streams are getting generated for any of the paths.
I'm trying to set up an Elasticsearch cluster on Kubernetes, and the problem I am having is that the pods aren't able to talk to each other by their respective hostnames, though the IP addresses work.
So, for example, I'm currently trying to set up 3 master nodes, es-master-0, es-master-1, and es-master-2. If I log into one of the containers and ping another based on the pod IP, it's fine, but if I try to ping, say, es-master-1 from es-master-0 by hostname, it can't find it.
I'm clearly missing something here. I'm currently launching this config to try to get it working:
apiVersion: v1
kind: Service
metadata:
name: ed
labels:
component: elasticsearch
role: master
spec:
selector:
component: elasticsearch
role: master
ports:
- name: transport1
port: 9300
protocol: TCP
clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-master
labels:
component: elasticsearch
role: master
spec:
selector:
matchLabels:
component: elasticsearch
role: master
serviceName: ed
replicas: 3
template:
metadata:
labels:
component: elasticsearch
role: master
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- { key: es-master, operator: In, values: [ "true" ] }
initContainers:
- name: init-sysctl
image: busybox:1.27.2
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
dnsPolicy: "None"
dnsConfig:
options:
- name: ndots
value: "6"
nameservers:
- 10.85.0.10
searches:
- ed.es.svc.cluster.local
- es.svc.cluster.local
- svc.cluster.local
- cluster.local
- home
- node1
containers:
- name: es-master
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.5
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: ES_JAVA_OPTS
value: -Xms2048m -Xmx2048m
resources:
requests:
cpu: "0.25"
limits:
cpu: "2"
ports:
- containerPort: 9300
name: transport1
livenessProbe:
tcpSocket:
port: transport1
initialDelaySeconds: 60
periodSeconds: 10
volumeMounts:
- name: storage
mountPath: /data
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
volumes:
- name: config
configMap:
name: es-master-config
volumeClaimTemplates:
- metadata:
name: storage
spec:
storageClassName: "local-path"
accessModes: [ ReadWriteOnce ]
resources:
requests:
storage: 2Gi
It's clearly somehow not resolving the hostnames.
For pod-to-pod communication you can use the headless Kubernetes Service you have already defined (ed, with clusterIP: None): each StatefulSet pod then gets a stable DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, so the pods should address each other by those names rather than the bare hostnames, as sketched below.
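As an illustrative sketch (assuming the namespace is es, as the dnsConfig search domains suggest), elasticsearch.yml can then point discovery at those stable DNS names:
discovery.seed_hosts:
  - es-master-0.ed.es.svc.cluster.local
  - es-master-1.ed.es.svc.cluster.local
  - es-master-2.ed.es.svc.cluster.local
cluster.initial_master_nodes:
  - es-master-0
  - es-master-1
  - es-master-2
One caveat: DNS records for a StatefulSet pod are normally published only once the pod is ready, so if the masters need to resolve each other during cluster bootstrap, setting publishNotReadyAddresses: true on the headless Service is a common workaround.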
I'm having an issue with Filebeat in an environment that suddenly stopped sending logs to Elasticsearch. We have the same setup in both environments, but in this one it just stopped. Filebeat, Elasticsearch, and Kibana are all version 7.15.0, all Helm deployments.
/var/lib/docker/containers/ is empty in the Filebeat container, but it is also empty in the other, working environment.
Filebeat logs:
2022-07-02T16:56:12.731Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.976Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.976Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:12.977Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
Inside the filebeat container:
ls data/registry/filebeat
log.json
meta.json
cat logs/filebeat
2022-07-02T17:37:30.639Z INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2022-07-02T17:37:30.640Z DEBUG [beat] instance/beat.go:723 Beat metadata path: /usr/share/filebeat/data/meta.json
2022-07-02T17:37:30.640Z INFO instance/beat.go:673 Beat ID: b0e19db9-df61-4eec-9a95-1cd5ef653718
2022-07-02T17:37:30.640Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.0' as ILM is enabled.
2022-07-02T17:37:30.641Z INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://elasticsearch.logging:9200
2022-07-02T17:37:30.740Z DEBUG [esclientleg] eslegclient/connection.go:249 ES Ping(url=http://elasticsearch.logging:9200)
2022-07-02T17:37:30.742Z DEBUG [esclientleg] transport/logging.go:41 Completed dialing successfully {"network": "tcp", "address": "elasticsearch.logging:9200"}
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:272 Ping status code: 200
2022-07-02T17:37:30.743Z INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:328 GET http://elasticsearch.logging:9200/_license?human=false <nil>
cat data/meta.json
{"uuid":"b0e19db9-df61-4eec-9a95-1cd5ef653718","first_start":"2022-05-29T00:10:26.137238912Z"}
ls data/registry/filebeat
log.json
meta.json
cat data/registry/filebeat/log.json
cat data/registry/filebeat/meta.json
{"version":"1"}
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1e66a1c066aa10de73834586c605c7adf71b2c652498b0de7a9d94b44633f919
cni.projectcalico.org/podIP: 10.0.4.120/32
cni.projectcalico.org/podIPs: 10.0.4.120/32
co.elastic.logs/enabled: "false"
configChecksum: 9e8011c4cd9f9bf36cafe98af8e7862345164b1c11f062f4ab9a67492248076
kubectl.kubernetes.io/restartedAt: "2022-04-14T16:22:07+03:00"
creationTimestamp: "2022-07-01T13:53:29Z"
generateName: filebeat-filebeat-
labels:
app: filebeat-filebeat
chart: filebeat-7.15.0
controller-revision-hash: 79bdd78b56
heritage: Helm
pod-template-generation: "21"
release: filebeat
name: filebeat-filebeat-95l2d
namespace: logging
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: filebeat-filebeat
uid: 343f6f76-ffde-11e9-bf3f-42010a9c01ac
resourceVersion: "582889515"
uid: 916d7dc9-f4b2-498a-9963-91213f568560
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- ..mynode
containers:
- args:
- -e
- -E
- http.enabled=true
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: ELASTICSEARCH_HOSTS
value: elasticsearch.logging:9200
image: docker.elastic.co/beats/filebeat:7.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: filebeat
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
filebeat test output
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 50m
memory: 50Mi
securityContext:
privileged: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/filebeat/filebeat.yml
name: filebeat-config
readOnly: true
subPath: filebeat.yml
- mountPath: /usr/share/filebeat/my_ilm_policy.json
name: filebeat-config
readOnly: true
subPath: my_ilm_policy.json
- mountPath: /usr/share/filebeat/data
name: data
- mountPath: /var/lib/docker/containers
name: varlibdockercontainers
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: true
- mountPath: /var/run/docker.sock
name: varrundockersock
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2gvbn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ..mynode
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: filebeat-filebeat
serviceAccountName: filebeat-filebeat
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
volumes:
- configMap:
defaultMode: 384
name: filebeat-filebeat-daemonset-config
name: filebeat-config
- hostPath:
path: /var/lib/filebeat-filebeat-logging-data
type: DirectoryOrCreate
name: data
- hostPath:
path: /var/lib/docker/containers
type: ""
name: varlibdockercontainers
- hostPath:
path: /var/log
type: ""
name: varlog
- hostPath:
path: /var/run/docker.sock
type: ""
name: varrundockersock
- name: kube-api-access-3axln
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
Actually, it worked with another configuration, posted on the elastic.co website:
filebeat.autodiscover:
providers:
- type: kubernetes
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/log/containers/*-${data.container.id}.log # CRI path
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
I'm still not sure why this happened suddenly, but the reason might be a container runtime change for Kubernetes on the node; I don't have access to check that.
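That hypothesis would fit the symptom: with a CRI runtime such as containerd, the kubelet writes container logs under /var/log/pods, with symlinks in /var/log/containers, so /var/lib/docker/containers stays empty. As a minimal sketch for that layout (not the exact Helm values used here), a static Filebeat container input would look like:
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log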
I am trying to deploy an Elasticsearch StatefulSet and have the storage provisioned by a rook-ceph StorageClass.
The pod is stuck in Pending because of:
Warning FailedScheduling 87s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
The StatefulSet looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: rcc
labels:
app: elasticsearch
tier: backend
type: db
spec:
serviceName: es
replicas: 3
selector:
matchLabels:
app: elasticsearch
tier: backend
type: db
template:
metadata:
labels:
app: elasticsearch
tier: backend
type: db
spec:
terminationGracePeriodSeconds: 300
initContainers:
- name: fix-the-volume-permission
image: busybox
command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data
securityContext:
privileged: true
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
- name: increase-the-vm-max-map-count
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
- name: increase-the-ulimit
image: busybox
command:
- sh
- -c
- ulimit -n 65536
securityContext:
privileged: true
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: tcp
resources:
requests:
memory: 4Gi
limits:
memory: 6Gi
env:
- name: cluster.name
value: elasticsearch
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.zen.ping.unicast.hosts
value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
- name: ES_JAVA_OPTS
value: -Xms4g -Xmx4g
volumeMounts:
- name: data
mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: rook-cephfs
resources:
requests:
storage: 5Gi
And the reason the PVC is not getting bound is:
Normal FailedBinding 47s (x62 over 15m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Any idea what I'm doing wrong?
After adding the rook-ceph-block StorageClass and changing the storageClassName to that, it worked without any issues.
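For reference, a sketch of the volumeClaimTemplates after that change (assuming the rook-ceph-block StorageClass is installed and has a working provisioner):
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: rook-ceph-block
    resources:
      requests:
        storage: 5Gi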
I am trying to deploy Elasticsearch in Kubernetes with a local drive volume, but I get the following error; can you please correct me?
Using Ubuntu 16.04
Kubernetes v1.11.0
Docker version 17.03.2-ce
I'm getting the error 'unknown field hostPath' when using Kubernetes Elasticsearch with a local volume:
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the YAML file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You have the wrong structure: volumes must be at the same level as containers and initContainers.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You can find an example here.
Check your format: hostPath is not supposed to be under the container section, and volumes is not in its correct position.
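To make the corrected placement concrete, here is a trimmed sketch; the original snippet is cut off before the volume source, so an emptyDir is assumed purely for illustration:
spec:
  template:
    spec:
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
        volumeMounts:
        - name: storage
          mountPath: /es
      volumes:            # sibling of containers/initContainers, not nested inside a container
      - name: storage
        emptyDir: {}      # hypothetical source; the original snippet is truncated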