Can I mount a volume to a Katib Experiment?

I am using the .yaml file below to create a Katib Experiment in Kubeflow. However, I am getting
Failed to reconcile: cannot restore struct from: string
errors. Is there a solution for this? Most of the Katib Experiment examples don't include a volume, but I'm trying to mount one so I can download the data from my S3 bucket first.
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  namespace: apple
  labels:
    controller-tools.k8s.io: "1.0"
  name: transformer-experiment
spec:
  objective:
    type: maximize
    goal: 0.8
    objectiveMetricName: Train-accuracy
    additionalMetricNames:
      - Train-loss
  algorithm:
    algorithmName: random
  parallelTrialCount: 3
  maxTrialCount: 12
  maxFailedTrialCount: 3
  metricsCollectorSpec:
    collector:
      kind: StdOut
  parameters:
    - name: --lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.03"
    - name: --dropout_rate
      parameterType: double
      feasibleSpace:
        min: "0.005"
        max: "0.020"
    - name: --layer_count
      parameterType: int
      feasibleSpace:
        min: "2"
        max: "5"
    - name: --d_model_count
      parameterType: categorical
      feasibleSpace:
        list:
          - "64"
          - "128"
          - "256"
  trialTemplate:
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.Trial}}
          namespace: {{.NameSpace}}
        spec:
          template:
            spec:
              volumes:
                - name: train-data
                  emptyDir: {}
              containers:
                - name: data-download
                  image: amazon/aws-cli
                  command:
                    - "aws s3 sync s3://kubeflow/kubeflowdata.tar.gz /train-data"
                  volumeMounts:
                    - name: train-data
                      mountPath: /train-data
                - name: {{.Trial}}
                  image: <Our Image>
                  command:
                    - "cd /train-data"
                    - "ls"
                    - "python"
                    - "/opt/ml/src/main.py"
                    - "--train_batch=64"
                    - "--test_batch=64"
                    - "--num_workers=4"
                  volumeMounts:
                    - name: train-data
                      mountPath: /train-data
                  {{- with .HyperParameters}}
                  {{- range .}}
                  - "{{.Name}}={{.Value}}"
                  {{- end}}
                  {{- end}}
              restartPolicy: Never

As answered here, the following works for me:
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      containers:
        - name: training-container
          image: docker.io/romeokienzler/claimed-train-mobilenet_v2:0.4
          command:
            - "ipython"
            - "/train-mobilenet_v2.ipynb"
            - "optimizer=${trialParameters.optimizer}"
          volumeMounts:
            - mountPath: /data/
              name: data-volume
      restartPolicy: Never
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: data-pvc
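If the data really must be pulled from S3 before training starts, an initContainer that populates a shared volume is a common pattern in plain Kubernetes Jobs. The sketch below is an assumption-laden variant of the trial Job above, not from the original answer: the bucket, training image, and script path are all placeholders.

```yaml
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      volumes:
        - name: train-data
          emptyDir: {}
      initContainers:
        - name: data-download            # runs to completion before the trial container starts
          image: amazon/aws-cli
          # hypothetical bucket/prefix; aws s3 sync expects a prefix, not a single .tar.gz
          command: ["aws", "s3", "sync", "s3://my-bucket/data/", "/train-data"]
          volumeMounts:
            - name: train-data
              mountPath: /train-data
      containers:
        - name: training-container
          image: my-training-image       # placeholder
          command: ["python", "/opt/ml/src/main.py"]
          volumeMounts:
            - name: train-data
              mountPath: /train-data
      restartPolicy: Never
```

Note that the original manifest listed the download container as a second entry under containers, where it runs concurrently with the trial rather than before it.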


Keep getting error: "0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims."

I'm trying to set up an elasticsearch stateful set. I realise there are some similar questions that have been asked, but none help in my circumstance.
The first version of setting up an elasticsearch stateful set worked fine with the following config:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-volume
  labels:
    type: local
spec:
  storageClassName: do-block-storage
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch"
---
apiVersion: v1
kind: PersistentVolumeClaim # Create PVC
metadata:
  name: elasticsearch-volume-claim # Sets PVC's name
  labels:
    app: elasticsearch # Defines app to create PVC for
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100M # Sets PVC's size
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200 # To get at the elasticsearch container, just hit the service on 9200
      targetPort: 9200 # routes to the exposed port on elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch # name of stateful set
  namespace: default
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch # should match service > spec.selector.app
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
        - name: elasticsearch-pvc
          persistentVolumeClaim:
            claimName: elasticsearch-volume-claim
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: search
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.type
              value: single-node
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: xpack.security.enabled
              value: "false"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
I then tried to implement a version of this with multiple replicas:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200 # To get at the elasticsearch container, just hit the service on 9200
      targetPort: 9200 # routes to the exposed port on elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster # name of stateful set
spec:
  serviceName: elasticsearch
  replicas: 2
  selector:
    matchLabels:
      app: elasticsearch # should match service > spec.selector.app
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-pvc
        labels:
          app: elasticsearch
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100M
        storageClassName: do-block-storage
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      # volumes:
      #   - name: elasticsearch-pvc
      #     persistentVolumeClaim:
      #       claimName: elasticsearch-volume-claim
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: search
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
However I ran into the error: 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I subsequently reduced the replicas to just 1 and manually created the PV, in case DO was having an issue creating the PVC without a PV (even though DO should dynamically create both the PVC and the PV; that works with my multi-replica Postgres stateful set, which I set up in exactly the same way):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-volume-1
spec:
  capacity:
    storage: 100M
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: do-block-storage
  hostPath:
    path: "/data/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - es-cluster-0
This again yielded the error: 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
After spending a while debugging I gave up and reverted to my single-replica elasticsearch stateful set, using the method I had originally used.
But once again I got the error 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
I don't have a clue what's going on here. Why am I getting this error even though I'm only trying to create a single replica, and I have manually defined the PV and PVC that worked fine before?
It turns out the issue was indeed DigitalOcean-specific. In the second attempt, when I tried to create multiple replicas, I had to use dynamic volume provisioning via volumeClaimTemplates and set the storage class to do-block-storage, which as it turns out has a minimum volume size of 1Gi!
Once I updated the request to 1Gi, it all started working.
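Given that minimum, the fix amounts to raising the request in the volumeClaimTemplates. A minimal sketch, reusing the names from the manifest above:

```yaml
volumeClaimTemplates:
  - metadata:
      name: elasticsearch-pvc
      labels:
        app: elasticsearch
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: do-block-storage
      resources:
        requests:
          storage: 1Gi   # do-block-storage volumes must be at least 1Gi
```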

How to include different script inside k8s kind Secret.stringData

I have a k8s cronjob which exports metrics periodically. There's a k8s secret.yaml from which I execute a script called run.sh.
Within run.sh, I would like to refer to another script, and I can't seem to find the right way to access this script or specify it in cronjob.yaml.
cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: exporter
  labels:
    app: metrics-exporter
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: exporter
        spec:
          volumes:
            - name: db-dir
              emptyDir: { }
            - name: home-dir
              emptyDir: { }
            - name: run-sh
              secret:
                secretName: exporter-run-sh
            - name: clusters
              emptyDir: { }
          containers:
            - name: stats-exporter
              image: XXXX
              imagePullPolicy: IfNotPresent
              command:
                - /bin/bash
                - /home/scripts/run.sh
              resources: { }
              volumeMounts:
                - name: db-dir
                  mountPath: /.db
                - name: home-dir
                  mountPath: /home
                - name: run-sh
                  mountPath: /home/scripts
                - name: clusters
                  mountPath: /db-clusters
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              securityContext:
                capabilities:
                  drop:
                    - ALL
                privileged: false
                runAsUser: 1000
                runAsNonRoot: true
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: false
          terminationGracePeriodSeconds: 30
          restartPolicy: OnFailure
Here's how, in secret.yaml, I run the script run.sh and refer to another script inside /db-clusters.
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run.sh: |
    #!/bin/sh
    source $(dirname $0)/db-clusters/cluster1.sh
    # further work here
Here's the error message:
/home/scripts/run.sh: line 57: /home/scripts/db-clusters/cluster1.sh: No such file or directory
As per the secret yaml, in stringData you need to use the key "run-sh", as that is what the run-sh volume mount is expecting from the secret.
Try using run-sh in stringData as below:
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run-sh: |
    #!/bin/sh
    source $(dirname $0)/db-clusters/cluster1.sh
    # further work here
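Note that a Secret mounted as a volume produces one file per stringData key, all in the same mount directory. So another option (an assumption on my part, not from the original answer) is to ship both scripts in the same Secret and source the sibling file from the mount path, since /db-clusters is an empty emptyDir in the cronjob above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: exporter-run-sh
type: Opaque
stringData:
  run.sh: |
    #!/bin/sh
    # both keys are rendered as files in /home/scripts, so source the sibling directly
    source $(dirname $0)/cluster1.sh
  cluster1.sh: |
    #!/bin/sh
    # hypothetical contents of the second script
    echo "cluster1 setup"
```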

Elasticsearch statefulset does not create volumesClaims on rook-cephfs

I am trying to deploy an elasticsearch statefulset and have the storage provisioned by the rook-ceph storageclass.
The pod is stuck in Pending because of:
Warning FailedScheduling 87s default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
The stateful set looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: rcc
  labels:
    app: elasticsearch
    tier: backend
    type: db
spec:
  serviceName: es
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
      tier: backend
      type: db
  template:
    metadata:
      labels:
        app: elasticsearch
        tier: backend
        type: db
    spec:
      terminationGracePeriodSeconds: 300
      initContainers:
        - name: fix-the-volume-permission
          image: busybox
          command:
            - sh
            - -c
            - chown -R 1000:1000 /usr/share/elasticsearch/data
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-the-vm-max-map-count
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
        - name: increase-the-ulimit
          image: busybox
          command:
            - sh
            - -c
            - ulimit -n 65536
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: tcp
          resources:
            requests:
              memory: 4Gi
            limits:
              memory: 6Gi
          env:
            - name: cluster.name
              value: elasticsearch
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "elasticsearch-0.es.default.svc.cluster.local,elasticsearch-1.es.default.svc.cluster.local,elasticsearch-2.es.default.svc.cluster.local"
            - name: ES_JAVA_OPTS
              value: -Xms4g -Xmx4g
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: rook-cephfs
        resources:
          requests:
            storage: 5Gi
and the reason the PVC is not getting created is:
Normal FailedBinding 47s (x62 over 15m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Any idea what I'm doing wrong?
After adding the rook-ceph-block storage class and changing storageClassName to that, it worked without any issues.
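In other words, only the storage class name in the claim template changes. A sketch, with the class name taken from the answer above and everything else mirroring the question's claim template:

```yaml
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: rook-ceph-block   # block storage class instead of rook-cephfs
      resources:
        requests:
          storage: 5Gi
```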

How do you pass a variable value to .Files.Glob in a Helm chart?

The invocation of .Files.Glob below needs to take its pattern from a variable supplied as a value, .Values.initDBFilesGlob. The value is getting set properly, but the if condition is not evaluating to truthy, even though .Values.initDBConfigMap is empty.
How do I pass a variable argument to .Files.Glob?
Template in question (templates/initdb-configmap.yaml from my WIP chart https://github.com/northscaler/charts/tree/support-env-specific-init/bitnami/cassandra, which I'll submit to https://github.com/bitnami/charts/tree/master/bitnami/cassandra as a PR once this is fixed):
{{- $initDBFilesGlob := .Values.initDBFilesGlob -}}
# "{{ $initDBFilesGlob }}" "{{ .Values.initDBConfigMap }}"
# There should be content below this
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cassandra.fullname" . }}-init-scripts
  labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}
File values.yaml:
dbUser:
  forcePassword: true
  password: cassandra
initDBFilesGlob: 'files/devops/docker-entrypoint-initdb.d/*'
Command: helm template -f values.yaml foobar /Users/matthewadams/dev/bitnami/charts/bitnami/cassandra
There are files in files/devops/docker-entrypoint-initdb.d, relative to the directory from which I'm invoking the command.
Output:
---
# Source: cassandra/templates/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  maxUnavailable: 1
---
# Source: cassandra/templates/cassandra-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
type: Opaque
data:
  cassandra-password: "Y2Fzc2FuZHJh"
---
# Source: cassandra/templates/configuration-cm.yaml
# files/conf/*
apiVersion: v1
kind: ConfigMap
# files/conf/*
metadata:
  name: foobar-cassandra-configuration
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    Place your Cassandra configuration files here. This will override the values set in any configuration environment variable. This will not be used in case the value *existingConfiguration* is used.
    More information [here](https://github.com/bitnami/bitnami-docker-cassandra#configuration)
---
# Source: cassandra/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: intra
      port: 7000
      targetPort: intra
    - name: tls
      port: 7001
      targetPort: tls
    - name: jmx
      port: 7199
      targetPort: jmx
    - name: cql
      port: 9042
      targetPort: cql
    - name: thrift
      port: 9160
      targetPort: thrift
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: cql
      port: 9042
      targetPort: cql
      nodePort: null
    - name: thrift
      port: 9160
      targetPort: thrift
      nodePort: null
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  serviceName: foobar-cassandra-headless
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: cassandra
        chart: cassandra-5.1.2
        release: foobar
        heritage: Helm
    spec:
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
        - name: cassandra
          command:
            - bash
            - -ec
            # Node 0 is the password seeder
            - |
              if [[ $HOSTNAME =~ (.*)-0$ ]]; then
                echo "Setting node as password seeder"
                export CASSANDRA_PASSWORD_SEEDER=yes
              else
                # Only node 0 will execute the startup initdb scripts
                export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
              fi
              /entrypoint.sh /run.sh
          image: docker.io/bitnami/cassandra:3.11.6-debian-10-r26
          imagePullPolicy: "IfNotPresent"
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: CASSANDRA_CLUSTER_NAME
              value: cassandra
            - name: CASSANDRA_SEEDS
              value: "foobar-cassandra-0.foobar-cassandra-headless.default.svc.cluster.local"
            - name: CASSANDRA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: foobar-cassandra
                  key: cassandra-password
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CASSANDRA_USER
              value: "cassandra"
            - name: CASSANDRA_NUM_TOKENS
              value: "256"
            - name: CASSANDRA_DATACENTER
              value: dc1
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_RACK
              value: rack1
            - name: CASSANDRA_ENABLE_RPC
              value: "true"
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status"]
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status | grep -E \"^UN\\s+${POD_IP}\""]
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          ports:
            - name: intra
              containerPort: 7000
            - name: tls
              containerPort: 7001
            - name: jmx
              containerPort: 7199
            - name: cql
              containerPort: 9042
            - name: thrift
              containerPort: 9160
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/cassandra
            - name: init-db
              mountPath: /docker-entrypoint-initdb.d
            - name: configurations
              mountPath: /bitnami/cassandra/conf
      volumes:
        - name: configurations
          configMap:
            name: foobar-cassandra-configuration
        - name: init-db
          configMap:
            name: foobar-cassandra-init-scripts
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: cassandra
          release: foobar
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/devops/docker-entrypoint-initdb.d/*" ""
# There should be content below this
If I comment out the line in my values.yaml that sets initDBFilesGlob, the template renders correctly:
...
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/docker-entrypoint-initdb.d/*" ""
# There should be content below this
apiVersion: v1
kind: ConfigMap
metadata:
  name: foobar-cassandra-init-scripts
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    You can copy here your custom `.sh` or `.cql` file so they are executed during the first boot of the image.
    More info in the [bitnami-docker-cassandra](https://github.com/bitnami/bitnami-docker-cassandra#initializing-a-new-instance) repository.
I was able to accomplish this by using a printf function to initialize the variable, like this:
{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
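The printf here coerces the value to a plain Go string before it reaches .Files.Glob. If I recall sprig correctly, piping through its toString function should have the same effect; this is an assumption I have not tested against this chart:

```yaml
{{- $initDBFilesGlob := .Values.initDBFilesGlob | toString -}}
```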

Getting error 'unknown field hostPath' Kubernetes Elasticsearch using with local volume

I am trying to deploy elasticsearch in kubernetes with a local drive volume, but I get the following error. Can you please correct me?
Using Ubuntu 16.04, Kubernetes v1.11.0, Docker 17.03.2-ce.
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the yaml file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
        - name: init-sysctl
          image: alpine:3.6
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: es-data
          image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CLUSTER_NAME
              value: myesdb
            - name: NODE_MASTER
              value: "false"
            - name: NODE_INGEST
              value: "false"
            - name: HTTP_ENABLE
              value: "true"
            - name: ES_JAVA_OPTS
              value: -Xms256m -Xmx256m
            - name: PROCESSORS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
          resources:
            requests:
              cpu: 0.25
            limits:
              cpu: 1
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 20
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: storage
              mountPath: /es
          volumes:
            - name: storage
You have the wrong structure: volumes must be at the same level as containers and initContainers.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
        - name: init-sysctl
          image: alpine:3.6
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: es-data
          image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: CLUSTER_NAME
              value: myesdb
            - name: NODE_MASTER
              value: "false"
            - name: NODE_INGEST
              value: "false"
            - name: HTTP_ENABLE
              value: "true"
            - name: ES_JAVA_OPTS
              value: -Xms256m -Xmx256m
            - name: PROCESSORS
              valueFrom:
                resourceFieldRef:
                  resource: limits.cpu
          resources:
            requests:
              cpu: 0.25
            limits:
              cpu: 1
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 20
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: storage
              mountPath: /es
      volumes:
        - name: storage
You can find an example here.
Check your format: hostPath is not supposed to be under the container part, and volumes is not in its correct position.
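Reduced to just the relevant fields, the corrected nesting looks like this. The hostPath path is a placeholder of mine, since the original snippet is truncated before it:

```yaml
spec:
  template:
    spec:
      containers:
        - name: es-data
          volumeMounts:
            - name: storage        # refers to the pod-level volume below
              mountPath: /es
      volumes:                     # sibling of containers, not inside one
        - name: storage
          hostPath:
            path: /data/es         # placeholder path
```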
