Multiple json_query in Ansible?

I have the following YAML file:
resources:
- apiVersion: v1
  kind: Deployment
  metadata:
    labels:
      app: test
    name: test-cluster-operator
    namespace: destiny001
  spec:
    selector:
      matchLabels:
        name: test-cluster-operator
        test.io/kind: cluster-operator
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: test-cluster-operator
          test.io/kind: cluster-operator
      spec:
        containers:
        - args:
          - /path/test/bin/cluster_operator_run.sh
          env:
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthy
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          name: test-cluster-operator
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: '1'
              memory: 256Mi
            requests:
              cpu: 200m
              memory: 256Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/data
            name: data-cluster-operator
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: test-cluster-operator
        serviceAccountName: test-cluster-operator
        terminationGracePeriodSeconds: 30
        volumes:
        - name: data-cluster-operator
          persistentVolumeClaim:
            claimName: data-cluster-operator
I am trying to get the value of the env variable called MY_NAMESPACE.
This is what I tried in Ansible to get to the env tree path:
- name: "set test fact"
  set_fact:
    myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"containers[?name=='test-cluster-operator'].env\") }}"

- name: "debug"
  debug:
    msg: "{{ myresult }}"
This produces an empty list; however, the first json_query works well on its own.
How do I use json_query correctly in this case?
Can I achieve this with just one json_query?
EDIT:
I seem to be closer to a solution, but the result is a list and not a string, which I find annoying.
- name: "set test fact"
  set_fact:
    myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"[].containers[?name=='test-cluster-operator']\") | json_query(\"[].env[?name=='MY_NAMESPACE'].name\") }}"
This prints - - MY_NAMESPACE instead of just MY_NAMESPACE.
Do I have to use the first filter every time after json_query? I know for sure that there is only one containers element. I don't understand why json_query returns a list.
This is finally working, but I have no idea whether it's the correct way to do it:
- name: "set test fact"
  set_fact:
    myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | first | json_query(\"containers[?name=='test-cluster-operator']\") | first | json_query(\"env[?name=='MY_NAMESPACE'].valueFrom\") | first }}"

json_query uses JMESPath, and JMESPath queries like these always return a list. This is why your first example isn't working: the first query returns a list, but the second one tries to query a key on it. You've corrected that in the second attempt with [].
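For instance, even a filter that can only ever match one element still comes back as a list; a minimal, self-contained illustration (inline test data, not the question's variable):

- debug:
    msg: "{{ [{'name': 'a'}, {'name': 'b'}] | json_query(\"[?name=='a']\") }}"
# prints [{'name': 'a'}] -- a list, even though only one element matches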
You're also missing the JMESPath pipe expression, |, which works much as you might expect: the result of the first query can be piped into a new one. Note that this is separate from Ansible filters, which use the same character.
This query:
resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom
Should give you the following output:
[
  {
    "fieldRef": {
      "apiVersion": "v1",
      "fieldPath": "metadata.namespace"
    }
  }
]
Your task should look like this:
- name: "set test fact"
  set_fact:
    myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom\") | first }}"
To answer your other question: yes, you'll need the first filter. As mentioned, JMESPath will always return a list, so if you just want the value of a key you'll need to pull it out.
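If you'd rather not reach for the Ansible first filter at all, JMESPath can do the unwrapping itself with an index expression ([0]) after each pipe. A sketch along those lines (untested against your exact data):

- name: "set test fact"
  set_fact:
    myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [0].containers[?name=='test-cluster-operator'] | [0].env[?name=='MY_NAMESPACE'] | [0].valueFrom\") }}"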

Related

Is there a way to give different parameters to a single job in helm charts having parallelism

This is my Job configuration when parallelism is set to 2:
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  parallelism: 2
Two pods for the same Job will come up. How do I give two different parameters to the two pods using Helm charts?
I have tried to do this as follows. Below is my Helm template for the Job:
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-{{ index $.Values.ip_range $index }}
  labels:
    app: alpine-app
    chart: alpine-chart
  annotations:
    "helm.sh/hook-delete-policy": "hook-succeeded"
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  parallelism: {{ $.Values.replicaCount }}
  completions: {{ $.Values.replicaCount }}
  template:
    metadata:
      labels:
        app_type: base
    spec:
      containers:
      - name: alpine-base
        image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
        imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
        volumeMounts:
        - mountPath: /scripts
          name: scripts-vol
        env:
        - name: IP
          value: {{ index $.Values.slave_producer.ip_range $index }}
        - name: SLEEP_TIME
          value: {{ $.Values.slave_producer.sleepTime }}
        command: ["/bin/bash"]
        args: ["/scripts/slave_trigger_script.sh"]
        ports:
        - containerPort: 1093
        - containerPort: 5000
        - containerPort: 3445
        resources:
          requests:
            memory: "1024Mi"
            cpu: "1024m"
          limits:
            memory: "1024Mi"
            cpu: "1024m"
      volumes:
      - name: scripts-vol
        configMap:
          name: scripts-configmap
      restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable.
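One possible direction (a sketch, not a verified fix): since the Job already uses completionMode: Indexed, each pod of the Job gets its own index in the JOB_COMPLETION_INDEX environment variable (0, 1, ...), so the template could hand every pod the full list and let the entrypoint script pick its own entry. IP_LIST is an illustrative name, not something from the original chart:

        env:
        # JOB_COMPLETION_INDEX is set automatically for Indexed Jobs; the script could
        # select its entry with: IP=$(echo "$IP_LIST" | cut -d' ' -f$((JOB_COMPLETION_INDEX + 1)))
        - name: IP_LIST
          value: {{ $.Values.slave_producer.ip_range | join " " | quote }}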

Multiple expression RegEx in Ansible

Note: I have next to zero experience with Ansible.
I need to be able to conditionally modify the configuration of a Kubernetes cluster control plane service. To do this I need to be able to find a specific piece of information in the file and if its value matches a specific pattern, change the value to something else.
To illustrate, consider the following YAML file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
...
In this scenario, the line I'm interested in is the line containing --bind-address. If that field's value is "127.0.0.1", it needs to be changed to "0.0.0.0". If it's already "0.0.0.0", nothing needs to be done. (I could also approach it from the point of view of: if it's not "0.0.0.0" then it needs to change to that.)
The initial thought that comes to mind is: just search for "--bind-address=127.0.0.1" and replace it with "--bind-address=0.0.0.0". Simple enough, eh? No, not that simple. What if, for some reason, there is another piece of configuration in this file that also matches that pattern? Which one is the right one?
The only way I can think of to ensure I find the right text to change is a multiple-expression RegEx match, something along the lines of:
find spec:
  if found, find containers: "within" or "under" spec:
    if found, find - command: "within" or "under" containers: (Note: there can be more than one "command")
      if found, find - kube-controller-manager "within" or "under" - command:
        if found, find - --bind-address "within" or "under" - kube-controller-manager
          if found, get the value after the =
            if 127.0.0.1 change it to 0.0.0.0, otherwise do nothing
How could I write an Ansible playbook to perform these steps, in sequence and only if each step returns true?
Read the data from the file into a dictionary
- include_vars:
    file: conf.yml
    name: conf
gives
conf:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=127.0.0.1
Update the containers
- set_fact:
    containers: []
- set_fact:
    containers: "{{ containers +
                    (update_candidate is all)|ternary([_item], [item]) }}"
  loop: "{{ conf.spec.containers|d([]) }}"
  vars:
    update_candidate:
      - item is contains 'command'
      - item.command is contains 'kube-controller-manager'
      - item.command|select('match', '--bind-address')|length > 0
    update: "{{ item.command|map('regex_replace',
                                 '--bind-address=127.0.0.1',
                                 '--bind-address=0.0.0.0') }}"
    _item: "{{ item|combine({'command': update}) }}"
gives
containers:
- command:
  - kube-controller-manager
  - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --bind-address=0.0.0.0
Update conf
conf_update: "{{ conf|combine({'spec': spec}) }}"
spec: "{{ conf.spec|combine({'containers': containers}) }}"
give
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
conf_update:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=0.0.0.0
Write the update to the file
- copy:
    dest: /tmp/conf.yml
    content: |
      {{ conf_update|to_nice_yaml(indent=2) }}
gives
shell> cat /tmp/conf.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
Example of a complete playbook for testing
- hosts: localhost
  vars:
    conf_update: "{{ conf|combine({'spec': spec}) }}"
    spec: "{{ conf.spec|combine({'containers': containers}) }}"
  tasks:
    - include_vars:
        file: conf.yml
        name: conf
    - debug:
        var: conf
    - set_fact:
        containers: []
    - set_fact:
        containers: "{{ containers +
                        (update_candidate is all)|ternary([_item], [item]) }}"
      loop: "{{ conf.spec.containers|d([]) }}"
      vars:
        update_candidate:
          - item is contains 'command'
          - item.command is contains 'kube-controller-manager'
          - item.command|select('match', '--bind-address')|length > 0
        update: "{{ item.command|map('regex_replace',
                                     '--bind-address=127.0.0.1',
                                     '--bind-address=0.0.0.0') }}"
        _item: "{{ item|combine({'command': update}) }}"
    - debug:
        var: containers
    - debug:
        var: spec
    - debug:
        var: conf_update
    - copy:
        dest: /tmp/conf.yml
        content: |
          {{ conf_update|to_nice_yaml(indent=2) }}
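If you want the play to fail loudly when the rewrite didn't take effect, a check task could be appended to the tasks above; a sketch, not part of the original answer:

    - assert:
        that:
          - (conf_update.spec.containers | map(attribute='command') | flatten) is contains '--bind-address=0.0.0.0'
        fail_msg: "--bind-address was not rewritten to 0.0.0.0"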

Can I mount volume to Katib Experiment?

I am using the .yaml file below to create a Katib Experiment in Kubeflow. However, I am getting
Failed to reconcile: cannot restore struct from: string
errors. Does anyone have a solution for this? Most of the Katib Experiment examples don't have a volume in them, but I'm trying to mount a volume after downloading the data from my S3 bucket.
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  namespace: apple
  labels:
    controller-tools.k8s.io: "1.0"
  name: transformer-experiment
spec:
  objective:
    type: maximize
    goal: 0.8
    objectiveMetricName: Train-accuracy
    additionalMetricNames:
      - Train-loss
  algorithm:
    algorithmName: random
  parallelTrialCount: 3
  maxTrialCount: 12
  maxFailedTrialCount: 3
  metricsCollectorSpec:
    collector:
      kind: StdOut
  parameters:
    - name: --lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.03"
    - name: --dropout_rate
      parameterType: double
      feasibleSpace:
        min: "0.005"
        max: "0.020"
    - name: --layer_count
      parameterType: int
      feasibleSpace:
        min: "2"
        max: "5"
    - name: --d_model_count
      parameterType: categorical
      feasibleSpace:
        list:
          - "64"
          - "128"
          - "256"
  trialTemplate:
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.Trial}}
          namespace: {{.NameSpace}}
        spec:
          template:
            spec:
              volumes:
              - name: train-data
                emptyDir: {}
              containers:
              - name: data-download
                image: amazon/aws-cli
                command:
                - "aws s3 sync s3://kubeflow/kubeflowdata.tar.gz /train-data"
                volumeMounts:
                - name: train-data
                  mountPath: /train-data
              - name: {{.Trial}}
                image: <Our Image>
                command:
                - "cd /train-data"
                - "ls"
                - "python"
                - "/opt/ml/src/main.py"
                - "--train_batch=64"
                - "--test_batch=64"
                - "--num_workers=4"
                volumeMounts:
                - name: train-data
                  mountPath: /train-data
                {{- with .HyperParameters}}
                {{- range .}}
                - "{{.Name}}={{.Value}}"
                {{- end}}
                {{- end}}
              restartPolicy: Never
As answered here, the following works for me:
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      containers:
        - name: training-container
          image: docker.io/romeokienzler/claimed-train-mobilenet_v2:0.4
          command:
            - "ipython"
            - "/train-mobilenet_v2.ipynb"
            - "optimizer=${trialParameters.optimizer}"
          volumeMounts:
            - mountPath: /data/
              name: data-volume
      restartPolicy: Never
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: data-pvc
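For completeness, the claimName above has to point at an existing PVC in the same namespace as the Experiment; a minimal sketch, where the name matches the trial template and the size and access mode are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi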

How do you pass a variable value to .Files.Glob in a Helm chart?

The invocation of .Files.Glob below needs to use a variable supplied as a value via .Values.initDBFilesGlob. The value is getting set properly, but the if condition is not evaluating as truthy, even though .Values.initDBConfigMap is empty.
How do I pass a variable argument to .Files.Glob?
Template in question (templates/initdb-configmap.yaml from my WIP chart https://github.com/northscaler/charts/tree/support-env-specific-init/bitnami/cassandra which I'll submit to https://github.com/bitnami/charts/tree/master/bitnami/cassandra as a PR once this is fixed):
{{- $initDBFilesGlob := .Values.initDBFilesGlob -}}
# "{{ $initDBFilesGlob }}" "{{ .Values.initDBConfigMap }}"
# There should be content below this
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cassandra.fullname" . }}-init-scripts
  labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}
File values.yaml:
dbUser:
  forcePassword: true
  password: cassandra
initDBFilesGlob: 'files/devops/docker-entrypoint-initdb.d/*'
Command: helm template -f values.yaml foobar /Users/matthewadams/dev/bitnami/charts/bitnami/cassandra
There are files in files/devops/docker-entrypoint-initdb.d, relative to the directory from which I'm invoking the command.
Output:
---
# Source: cassandra/templates/pdb.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  maxUnavailable: 1
---
# Source: cassandra/templates/cassandra-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
type: Opaque
data:
  cassandra-password: "Y2Fzc2FuZHJh"
---
# Source: cassandra/templates/configuration-cm.yaml
# files/conf/*
apiVersion: v1
kind: ConfigMap
# files/conf/*
metadata:
  name: foobar-cassandra-configuration
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    Place your Cassandra configuration files here. This will override the values set in any configuration environment variable. This will not be used in case the value *existingConfiguration* is used.
    More information [here](https://github.com/bitnami/bitnami-docker-cassandra#configuration)
---
# Source: cassandra/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra-headless
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: intra
      port: 7000
      targetPort: intra
    - name: tls
      port: 7001
      targetPort: tls
    - name: jmx
      port: 7199
      targetPort: jmx
    - name: cql
      port: 9042
      targetPort: cql
    - name: thrift
      port: 9160
      targetPort: thrift
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
  annotations:
    {}
spec:
  type: ClusterIP
  ports:
    - name: cql
      port: 9042
      targetPort: cql
      nodePort: null
    - name: thrift
      port: 9160
      targetPort: thrift
      nodePort: null
  selector:
    app: cassandra
    release: foobar
---
# Source: cassandra/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foobar-cassandra
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
spec:
  selector:
    matchLabels:
      app: cassandra
      release: foobar
  serviceName: foobar-cassandra-headless
  replicas: 1
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        app: cassandra
        chart: cassandra-5.1.2
        release: foobar
        heritage: Helm
    spec:
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
        - name: cassandra
          command:
            - bash
            - -ec
            # Node 0 is the password seeder
            - |
              if [[ $HOSTNAME =~ (.*)-0$ ]]; then
                echo "Setting node as password seeder"
                export CASSANDRA_PASSWORD_SEEDER=yes
              else
                # Only node 0 will execute the startup initdb scripts
                export CASSANDRA_IGNORE_INITDB_SCRIPTS=1
              fi
              /entrypoint.sh /run.sh
          image: docker.io/bitnami/cassandra:3.11.6-debian-10-r26
          imagePullPolicy: "IfNotPresent"
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: CASSANDRA_CLUSTER_NAME
              value: cassandra
            - name: CASSANDRA_SEEDS
              value: "foobar-cassandra-0.foobar-cassandra-headless.default.svc.cluster.local"
            - name: CASSANDRA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: foobar-cassandra
                  key: cassandra-password
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CASSANDRA_USER
              value: "cassandra"
            - name: CASSANDRA_NUM_TOKENS
              value: "256"
            - name: CASSANDRA_DATACENTER
              value: dc1
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: SimpleSnitch
            - name: CASSANDRA_RACK
              value: rack1
            - name: CASSANDRA_ENABLE_RPC
              value: "true"
          livenessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status"]
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            exec:
              command: ["/bin/sh", "-c", "nodetool status | grep -E \"^UN\\s+${POD_IP}\""]
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          ports:
            - name: intra
              containerPort: 7000
            - name: tls
              containerPort: 7001
            - name: jmx
              containerPort: 7199
            - name: cql
              containerPort: 9042
            - name: thrift
              containerPort: 9160
          resources:
            limits: {}
            requests: {}
          volumeMounts:
            - name: data
              mountPath: /bitnami/cassandra
            - name: init-db
              mountPath: /docker-entrypoint-initdb.d
            - name: configurations
              mountPath: /bitnami/cassandra/conf
      volumes:
        - name: configurations
          configMap:
            name: foobar-cassandra-configuration
        - name: init-db
          configMap:
            name: foobar-cassandra-init-scripts
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: cassandra
          release: foobar
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/devops/docker-entrypoint-initdb.d/*" ""
# There should be content below this
If I comment out the line in my values.yaml that sets initDBFilesGlob, the template renders correctly:
...
---
# Source: cassandra/templates/initdb-configmap.yaml
# "files/docker-entrypoint-initdb.d/*" ""
# There should be content below this
apiVersion: v1
kind: ConfigMap
metadata:
  name: foobar-cassandra-init-scripts
  labels:
    app: cassandra
    chart: cassandra-5.1.2
    release: foobar
    heritage: Helm
data:
  README.md: |
    You can copy here your custom `.sh` or `.cql` file so they are executed during the first boot of the image.
    More info in the [bitnami-docker-cassandra](https://github.com/bitnami/bitnami-docker-cassandra#initializing-a-new-instance) repository.
I was able to accomplish this by using the printf function to initialize the variable, like this:
{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
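For reference, this is how the top of the template looks with that workaround in place (assembled from the snippets above, not re-tested):

{{- $initDBFilesGlob := printf "%s" .Values.initDBFilesGlob -}}
{{- if and (.Files.Glob $initDBFilesGlob) (not .Values.initDBConfigMap) }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "cassandra.fullname" . }}-init-scripts
  labels: {{- include "cassandra.labels" . | nindent 4 }}
data:
{{ (.Files.Glob $initDBFilesGlob).AsConfig | indent 2 }}
{{- end }}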

Getting error 'unknown field hostPath' Kubernetes Elasticsearch using with local volume

I am trying to deploy Elasticsearch in Kubernetes with a local drive volume, but I get the following error. Can you please correct me?
using ubuntu 16.04
kubernetes v1.11.0
Docker version 17.03.2-ce
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the YAML file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: alpine:3.6
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: myesdb
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        livenessProbe:
          tcpSocket:
            port: transport
          initialDelaySeconds: 20
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: storage
          mountPath: /es
        volumes:
        - name: storage
You have the wrong structure: volumes must be at the same level as containers and initContainers.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  serviceName: elasticsearch-data
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: alpine:3.6
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CLUSTER_NAME
          value: myesdb
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m
        - name: PROCESSORS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        resources:
          requests:
            cpu: 0.25
          limits:
            cpu: 1
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
        livenessProbe:
          tcpSocket:
            port: transport
          initialDelaySeconds: 20
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_cluster/health
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: storage
          mountPath: /es
      volumes:
      - name: storage
You can find an example here.
Check your format: hostPath is not supposed to be under the container part, and volumes is not in its correct position.
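For the local-drive setup in the question, the volumes entry (at the pod spec level, alongside containers and initContainers) also needs a volume source; a minimal hostPath sketch, with an illustrative path:

      volumes:
      - name: storage
        hostPath:
          path: /data/elasticsearch
          type: DirectoryOrCreate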
