Getting error 'unknown field hostPath' in Kubernetes Elasticsearch using a local volume - elasticsearch

I am trying to deploy Elasticsearch in Kubernetes with a local drive volume, but I get the following error. Can you please tell me what I am doing wrong?
Environment: Ubuntu 16.04, Kubernetes v1.11.0, Docker 17.03.2-ce.
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the YAML file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: alpine:3.6
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: myesdb
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        livenessProbe:
          tcpSocket:
            port: transport
          initialDelaySeconds: 20
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: storage
          mountPath: /es
        volumes:
        - name: storage

You have the wrong structure: volumes must be at the same level as containers and initContainers, not inside a container.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: alpine:3.6
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: myesdb
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        livenessProbe:
          tcpSocket:
            port: transport
          initialDelaySeconds: 20
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: storage
          mountPath: /es
      volumes:
      - name: storage
You can find an example here.
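Since the error itself came from hostPath, note that the hostPath block (elided in the post above) belongs under this top-level volumes entry as well. A minimal sketch of the tail of the spec, with a hypothetical host directory:

      volumes:
      - name: storage
        hostPath:
          # hypothetical path; point this at your local data directory
          path: /data/es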

Check your formatting: hostPath is not supposed to be under the container section, and volumes is not in its correct position.

Related

Setting up an Elasticsearch cluster on Kubernetes: pods can't talk to each other by hostname

I am trying to set up an Elasticsearch cluster on Kubernetes. The problem I am having is that the pods can't talk to each other by their respective hostnames, but the IP addresses work.
For example, I am currently trying to set up 3 master nodes: es-master-0, es-master-1, and es-master-2. If I log into one of the containers and ping another by its pod IP it's fine, but if I try to ping, say, es-master-1 from es-master-0 by hostname, it can't find it.
I am clearly missing something here. This is the config I am currently launching to try to get it working:
apiVersion: v1
kind: Service
metadata:
  name: ed
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: transport1
    port: 9300
    protocol: TCP
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    matchLabels:
      component: elasticsearch
      role: master
  serviceName: ed
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - { key: es-master, operator: In, values: [ "true" ] }
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      dnsPolicy: "None"
      dnsConfig:
        options:
        - name: ndots
          value: "6"
        nameservers:
        - 10.85.0.10
        searches:
        - ed.es.svc.cluster.local
        - es.svc.cluster.local
        - svc.cluster.local
        - cluster.local
        - home
        - node1
      containers:
      - name: es-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.5
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: ES_JAVA_OPTS
          value: -Xms2048m -Xmx2048m
        resources:
          requests:
            cpu: "0.25"
          limits:
            cpu: "2"
        ports:
        - containerPort: 9300
          name: transport1
        livenessProbe:
          tcpSocket:
            port: transport1
          initialDelaySeconds: 60
          periodSeconds: 10
        volumeMounts:
        - name: storage
          mountPath: /data
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:
      - name: config
        configMap:
          name: es-master-config
  volumeClaimTemplates:
  - metadata:
      name: storage
    spec:
      storageClassName: "local-path"
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 2Gi
It's clearly somehow not resolving the hostnames.
For pod-to-pod communication you can use the headless Service you have already defined (ed): each pod in the StatefulSet gets a DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, e.g. es-master-0.ed.es.svc.cluster.local.
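As a quick sanity check (assuming the namespace is es, which the search domains in the manifest suggest), you can try resolving a peer through the headless Service from inside one of the pods; getent is usually available in the Elasticsearch image even when nslookup is not:

  # resolve es-master-1 via the headless Service "ed" from inside es-master-0
  kubectl exec -n es es-master-0 -- getent hosts es-master-1.ed.es.svc.cluster.local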

Elasticsearch cluster on Kubernetes - nodes are not communicating

I have an Elasticsearch cluster (6.3) running on Kubernetes (GKE) with the following manifest file:
---
# Source: elasticsearch/templates/manifests.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-configmap
  labels:
    app.kubernetes.io/name: "elasticsearch"
    app.kubernetes.io/component: elasticsearch-server
data:
  elasticsearch.yml: |
    cluster.name: "${CLUSTER_NAME}"
    node.name: "${NODE_NAME}"
    path.data: /usr/share/elasticsearch/data
    path.repo: ["${BACKUP_REPO_PATH}"]
    network.host: 0.0.0.0
    discovery.zen.minimum_master_nodes: 1
    discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
  log4j2.properties: |
    status = error
    appender.console.type = Console
    appender.console.name = console
    appender.console.layout.type = PatternLayout
    appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
    rootLogger.level = info
    rootLogger.appenderRef.console.ref = console
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  labels: &ElasticsearchDeploymentLabels
    app.kubernetes.io/name: "elasticsearch"
    app.kubernetes.io/component: elasticsearch-server
spec:
  selector:
    matchLabels: *ElasticsearchDeploymentLabels
  serviceName: elasticsearch-svc
  replicas: 2
  updateStrategy:
    # The procedure for updating the Elasticsearch cluster is described at
    # https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html
    type: OnDelete
  template:
    metadata:
      labels: *ElasticsearchDeploymentLabels
    spec:
      terminationGracePeriodSeconds: 180
      initContainers:
      # This init container sets the appropriate limits for mmap counts on the hosting node.
      # https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
      - name: set-max-map-count
        image: marketplace.gcr.io/google/elasticsearch/ubuntu16_04#...
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        command:
        - /bin/bash
        - -c
        - 'if [[ "$(sysctl vm.max_map_count --values)" -lt 262144 ]]; then sysctl -w vm.max_map_count=262144; fi'
      containers:
      - name: elasticsearch
        image: eu.gcr.io/projectId/elasticsearch6.3#sha256:...
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: "elasticsearch-cluster"
        - name: DISCOVERY_SERVICE
          value: "elasticsearch-svc"
        - name: BACKUP_REPO_PATH
          value: ""
        ports:
        - name: prometheus
          containerPort: 9114
          protocol: TCP
        - name: http
          containerPort: 9200
        - name: tcp-transport
          containerPort: 9300
        volumeMounts:
        - name: configmap
          mountPath: /etc/elasticsearch/elasticsearch.yml
          subPath: elasticsearch.yml
        - name: configmap
          mountPath: /etc/elasticsearch/log4j2.properties
          subPath: log4j2.properties
        - name: elasticsearch-pvc
          mountPath: /usr/share/elasticsearch/data
        readinessProbe:
          httpGet:
            path: /_cluster/health?local=true
            port: 9200
          initialDelaySeconds: 5
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -x
            - "java"
          initialDelaySeconds: 5
        resources:
          requests:
            memory: "2Gi"
      - name: prometheus-to-sd
        image: marketplace.gcr.io/google/elasticsearch/prometheus-to-sd#sha256:8e3679a6e059d1806daae335ab08b304fd1d8d35cdff457baded7306b5af9ba5
        ports:
        - name: profiler
          containerPort: 6060
        command:
        - /monitor
        - --stackdriver-prefix=custom.googleapis.com
        - --source=elasticsearch:http://localhost:9114/metrics
        - --pod-id=$(POD_NAME)
        - --namespace-id=$(POD_NAMESPACE)
        - --monitored-resource-types=k8s
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: configmap
        configMap:
          name: "elasticsearch-configmap"
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-pvc
      labels:
        app.kubernetes.io/name: "elasticsearch"
        app.kubernetes.io/component: elasticsearch-server
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard
      resources:
        requests:
          storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-prometheus-svc
  labels:
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/component: elasticsearch-server
spec:
  clusterIP: None
  ports:
  - name: prometheus-port
    port: 9114
    protocol: TCP
  selector:
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/component: elasticsearch-server
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-svc-internal
  labels:
    app.kubernetes.io/name: "elasticsearch"
    app.kubernetes.io/component: elasticsearch-server
spec:
  ports:
  - name: http
    port: 9200
  - name: tcp-transport
    port: 9300
  selector:
    app.kubernetes.io/name: "elasticsearch"
    app.kubernetes.io/component: elasticsearch-server
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ilb-service-elastic
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: elasticsearch-svc
spec:
  type: LoadBalancer
  loadBalancerIP: some-ip-address
  selector:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch
  ports:
  - port: 9200
    protocol: TCP
This manifest was written from the template that used to be available on the GCP Marketplace.
I'm encountering the following issue: the cluster is supposed to have 2 nodes, and indeed 2 pods are running. However:
a call to ip:9200/_nodes returns just one node;
there still seems to be a second node running that receives traffic (at least read traffic), as visible in the logs. Those requests typically fail because the requested entities don't exist on that node (only on the master node).
I can't wrap my head around the fact that the node is invisible to the master node and at the same time receives read traffic from the load balancer pointing to the StatefulSet.
Am I missing something subtle?
Did you check which type each of the two nodes is?
There are master nodes and data nodes; at any given time only one master is elected, while the others stay in the background. If the current master goes down, a new master is elected and handles further requests.
I can't see any node-type configuration in the StatefulSet. I would recommend checking out the official Elasticsearch Helm chart to set up and deploy on GKE.
Helm chart: https://github.com/elastic/helm-charts/tree/main/elasticsearch
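For example, with Helm 3 the chart can be installed like this (the release name elasticsearch is just a placeholder):

  helm repo add elastic https://helm.elastic.co
  helm repo update
  helm install elasticsearch elastic/elasticsearch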
Sharing an example env config for reference:
env:
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: CLUSTER_NAME
  value: my-es
- name: NODE_MASTER
  value: "false"
- name: NODE_INGEST
  value: "false"
- name: HTTP_ENABLE
  value: "false"
- name: ES_JAVA_OPTS
  value: -Xms256m -Xmx256m
Read more at: https://faun.pub/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61

Elasticsearch pods failing at readiness probe

Elasticsearch pod is not becoming active.
logging-es-data-master-ilmz5zyt-3-deploy 1/1 Running 0 5m
logging-es-data-master-ilmz5zyt-3-qxkml 1/2 Running 0 5m
and the events are:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned logging-es-data-master-ilmz5zyt-3-qxkml to digi-srv-pp-01
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/origin-logging-elasticsearch:v3.10" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/oauth-proxy:v1.0.0" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Warning Unhealthy 13s (x55 over 4m) kubelet, digi-srv-pp-01 Readiness probe failed: Elasticsearch node is not ready to accept HTTP requests yet [response code: 000]
The deployment config is:
# oc export dc/logging-es-data-master-ilmz5zyt -o yaml
Command "export" is deprecated, use the oc get --export
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: null
  generation: 5
  labels:
    component: es
    deployment: logging-es-data-master-ilmz5zyt
    logging-infra: elasticsearch
    provider: openshift
  name: logging-es-data-master-ilmz5zyt
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    component: es
    deployment: logging-es-data-master-ilmz5zyt
    logging-infra: elasticsearch
    provider: openshift
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        component: es
        deployment: logging-es-data-master-ilmz5zyt
        logging-infra: elasticsearch
        provider: openshift
      name: logging-es-data-master-ilmz5zyt
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: logging-infra
                  operator: In
                  values:
                  - elasticsearch
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - env:
        - name: DC_NAME
          value: logging-es-data-master-ilmz5zyt
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: KUBERNETES_TRUST_CERTIFICATES
          value: "true"
        - name: SERVICE_DNS
          value: logging-es-cluster
        - name: CLUSTER_NAME
          value: logging-es
        - name: INSTANCE_RAM
          value: 12Gi
        - name: HEAP_DUMP_LOCATION
          value: /elasticsearch/persistent/heapdump.hprof
        - name: NODE_QUORUM
          value: "1"
        - name: RECOVER_EXPECTED_NODES
          value: "1"
        - name: RECOVER_AFTER_TIME
          value: 5m
        - name: READINESS_PROBE_TIMEOUT
          value: "30"
        - name: POD_LABEL
          value: component=es
        - name: IS_MASTER
          value: "true"
        - name: HAS_DATA
          value: "true"
        - name: PROMETHEUS_USER
          value: system:serviceaccount:openshift-metrics:prometheus
        image: docker.io/openshift/origin-logging-elasticsearch:v3.10
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: restapi
          protocol: TCP
        - containerPort: 9300
          name: cluster
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /usr/share/java/elasticsearch/probe/readiness.sh
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 120
        resources:
          limits:
            memory: 12Gi
          requests:
            cpu: "1"
            memory: 12Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/elasticsearch/secret
          name: elasticsearch
          readOnly: true
        - mountPath: /usr/share/java/elasticsearch/config
          name: elasticsearch-config
          readOnly: true
        - mountPath: /elasticsearch/persistent
          name: elasticsearch-storage
      - args:
        - --upstream-ca=/etc/elasticsearch/secret/admin-ca
        - --https-address=:4443
        - -provider=openshift
        - -client-id=system:serviceaccount:openshift-logging:aggregated-logging-elasticsearch
        - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
        - -cookie-secret=endzaVczSWMzb0NoNlVtVw==
        - -basic-auth-password=NXd9xTjg4npjIM0E
        - -upstream=https://localhost:9200
        - '-openshift-sar={"namespace": "openshift-logging", "verb": "view", "resource":
          "prometheus", "group": "metrics.openshift.io"}'
        - '-openshift-delegate-urls={"/": {"resource": "prometheus", "verb": "view",
          "group": "metrics.openshift.io", "namespace": "openshift-logging"}}'
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - -pass-access-token
        - -pass-user-headers
        image: docker.io/openshift/oauth-proxy:v1.0.0
        imagePullPolicy: IfNotPresent
        name: proxy
        ports:
        - containerPort: 4443
          name: proxy
          protocol: TCP
        resources:
          limits:
            memory: 64Mi
          requests:
            cpu: 100m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: proxy-tls
          readOnly: true
        - mountPath: /etc/elasticsearch/secret
          name: elasticsearch
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        node-role.kubernetes.io/compute: "true"
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        supplementalGroups:
        - 65534
      serviceAccount: aggregated-logging-elasticsearch
      serviceAccountName: aggregated-logging-elasticsearch
      terminationGracePeriodSeconds: 30
      volumes:
      - name: proxy-tls
        secret:
          defaultMode: 420
          secretName: prometheus-tls
      - name: elasticsearch
        secret:
          defaultMode: 420
          secretName: logging-elasticsearch
      - configMap:
          defaultMode: 420
          name: logging-elasticsearch
        name: elasticsearch-config
      - name: elasticsearch-storage
        persistentVolumeClaim:
          claimName: logging-es-0
  test: false
  triggers: []
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0
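Since the probe reports response code 000 (the readiness script got no HTTP response at all), a reasonable first step is to check whether Elasticsearch itself ever started; a sketch, reusing the pod name from the listing above:

  # logs of the Elasticsearch container (the pod also runs an oauth-proxy container)
  oc logs logging-es-data-master-ilmz5zyt-3-qxkml -c elasticsearch

  # run the readiness script by hand to see its output
  oc exec logging-es-data-master-ilmz5zyt-3-qxkml -c elasticsearch -- /usr/share/java/elasticsearch/probe/readiness.sh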

My Spring Boot application pod is stuck in Kubernetes and not coming up

The Spring Boot application pod is not coming up, nor is it failing; it is just stuck, and I am not getting any clue about what is going wrong. The attached screenshot shows the logs it generates.
(screenshot: pod log)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployement-name>
spec:
  selector:
    matchLabels:
      app: <app-name>
  replicas: 1
  template:
    metadata:
      labels:
        app: <app-name>
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8090"
        prometheus.io/path: "/app/actuator/prometheus"
    spec:
      securityContext:
        runAsGroup: 3000
        runAsUser: 3000
      containers:
      - name: <app-name>
        image: <url>/REPLACE_ME
        imagePullPolicy: Always
        env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: <random>-db-secret
              key: db-user
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: <random>-db-secret
              key: db-pass
        - name: spring-profile
          valueFrom:
            configMapKeyRef:
              name: spring-profile-stage
              key: ENV
        - name: securityaudit.hostname
          valueFrom:
            configMapKeyRef:
              name: securityaudit-config
              key: securityaudit.hostname
        - name: securityaudit.ipaddress
          valueFrom:
            configMapKeyRef:
              name: securityaudit-config
              key: securityaudit.ipaddress
        - name: securityaudit.product
          valueFrom:
            configMapKeyRef:
              name: securityaudit-config
              key: securityaudit.product
        - name: splunkurl
          valueFrom:
            configMapKeyRef:
              name: splunk-config
              key: splunkurl
        - name: splunktoken
          valueFrom:
            secretKeyRef:
              name: hec-token-secret
              key: hec-token
        - name: splunkindex
          valueFrom:
            configMapKeyRef:
              name: splunk-config
              key: splunkindex
        - name: splunkCertValidDisable
          valueFrom:
            configMapKeyRef:
              name: splunk-config
              key: splunkDisableCertValidation
        - name: MY_POD_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        resources:
          limits:
            memory: 4Gi
            cpu: 1
          requests:
            memory: 4Gi
            cpu: 1
        securityContext:
          privileged: false
        ports:
        - containerPort: 8090
        livenessProbe:
          httpGet:
            path: /<app-name>/actuator/health/liveness
            port: 8090
          initialDelaySeconds: 180
        readinessProbe:
          httpGet:
            path: /<app-name>/actuator/health/readiness
            port: 8090
          initialDelaySeconds: 180
          periodSeconds: 10
        volumeMounts:
        - name: <app-name>-heapdump
          mountPath: /<app-name>heapdump
      volumes:
      - name: <app-name>-heapdump
        persistentVolumeClaim:
          claimName: <app-name>heap-dump-stage
The answer by @Rakesh Gupta resolved my problem. The pod was actually up; the problem was that the logging level was not set, so I thought something was wrong with the pod itself. Being new to this project, I missed checking the logging level.
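For anyone hitting the same symptom: raising the log level in the application configuration makes startup progress visible in the pod logs; a minimal sketch using standard Spring Boot properties (application.yml):

  logging:
    level:
      # INFO is the Spring Boot default; raise a silent app back to it (or to DEBUG)
      root: INFO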
Does this answer your question? stackoverflow.com/questions/40762547/…

Elasticsearch StatefulSet does not create volumeClaims on rook-cephfs

I am trying to deploy an Elasticsearch StatefulSet and have the storage provisioned by the rook-ceph StorageClass.
The pod is stuck in Pending because of:
Warning FailedScheduling 87s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
The StatefulSet looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: rcc
  labels:
    app: elasticsearch
    tier: backend
    type: db
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      tier: backend
      type: db
  template:
    metadata:
      labels:
        app: elasticsearch
        tier: backend
        type: db
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
      - name: fix-the-volume-permission
        image: busybox
        command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-the-vm-max-map-count
        image: busybox
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      - name: increase-the-ulimit
        image: busybox
        command:
        - sh
        - -c
        - ulimit -n 65536
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        resources:
          requests:
            memory: 4Gi
          limits:
            memory: 6Gi
        env:
        - name: cluster.name
          value: elasticsearch
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
        - name: ES_JAVA_OPTS
          value: -Xms4g -Xmx4g
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: rook-cephfs
      resources:
        requests:
          storage: 5Gi
and the reason the PVC is not getting created is:
Normal FailedBinding 47s (x62 over 15m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Any idea what I am doing wrong?
After adding the rook-ceph-block StorageClass and changing the storageClassName in the claim template to it, everything worked without any issues.
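In other words, only the storageClassName in the claim template had to change; a sketch of the working claim template, per the answer above:

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 5Gi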
