Include inside range with merge using Helm

I have a Helm deployment.yaml template file containing the below:
containers:
{{- include "app.container" (merge .Values.app $) | nindent 8 }}
{{- range $k, $v := .Values.extraContainers }}
{{- $nameDict := dict "name" $k -}}
{{- include "app.container" (mustMergeOverwrite $.Values.app $nameDict $v) | nindent 8 }}
{{- end }}
a values.yaml that looks like the below (the values under .Values.app are intended to be the values for the main container, but should also act as the defaults for the extraContainers when values are not specified explicitly for them):
app:
  image:
    name: xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/foobar
    tag: latest
  probes:
    readinessProbe:
      path: /_readiness
      enabled: true
    livenessProbe:
      path: /_liveness
      enabled: true
  port: 80
  resources:
    limits:
      cpu: 10Mi
      memory: 2Gi
    requests:
      cpu: 10Mi
      memory: 2Gi

extraContainers:
  extra-container1:
    image:
      name: xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/foobar-extra
      tag: test
    command:
      - sleep
      - 10
    envVars:
      - name: FOO
        value: BAR
    port: 8080
    probes:
      readinessProbe:
        path: /extraContainerReadinessPath
        port: 8080
      livenessProbe:
        path: /extraContainerLivenessPath
        port: 8080
    resources:
      limits:
        cpu: 1000Mi
        memory: 1Gi
      requests:
        cpu: 1000Mi
        memory: 1Gi

  extra-container2:
    image:
      name: 211161777205.dkr.ecr.eu-west-1.amazonaws.com/foobar-two
      tag: latest
    command:
      - sleep
      - 10
    probes:
      readinessProbe:
        enabled: false
      livenessProbe:
        enabled: false
and a helper containing the below:
{{/*
app container base
*/}}
{{- define "app.containerBase" -}}
- name: {{ .name | default "app" }}
  image: {{ .image.name }}:{{ .image.tag }}
  {{- if .command }}
  command:
    {{- range .command }}
    - {{ . | quote }}
    {{- end }}
  {{- end }}
  {{- if .args }}
  args:
    {{- range .args }}
    - {{ . | quote }}
    {{- end }}
  {{- end }}
  env:
    {{- range .envVars }}
    - name: {{ .name }}
      {{- with .value }}
      value: {{ . | quote }}
      {{- end }}
      {{- with .valueFrom }}
      valueFrom:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- end }}
{{- end -}}

{{/*
app container
*/}}
{{- define "app.container" -}}
{{- include "app.containerBase" . }}
  imagePullPolicy: {{ .image.pullPolicy | default "Always" }}
  {{- if .port }}
  ports:
    - containerPort: {{ .port }}
      protocol: {{ .protocol | upper }}
  {{- end }}
  {{- if .probes.readinessProbe.enabled }}
  readinessProbe:
    {{- if .probes.readinessProbe.custom }}
    {{- toYaml .probes.readinessProbe.custom | nindent 4 }}
    {{- else }}
    httpGet:
      port: {{ .probes.readinessProbe.port | default .port }}
      path: "{{ required "app.probes.readinessProbe.path is required" .probes.readinessProbe.path }}"
      scheme: HTTP
    initialDelaySeconds: {{ .probes.readinessProbe.initialDelaySeconds | default "40" }}
    periodSeconds: {{ .probes.readinessProbe.periodSeconds | default 10 }}
    timeoutSeconds: {{ .probes.readinessProbe.timeoutSeconds | default "30" }}
    successThreshold: {{ .probes.readinessProbe.successThreshold | default "3" }}
    failureThreshold: {{ .probes.readinessProbe.failureThreshold | default "2" }}
    {{- end }}
  {{- end }}
  {{- if .probes.livenessProbe.enabled }}
  livenessProbe:
    {{- if .probes.livenessProbe.custom }}
    {{- toYaml .probes.livenessProbe.custom | nindent 4 }}
    {{- else }}
    httpGet:
      port: {{ .probes.livenessProbe.port | default .port }}
      path: "{{ required "app.probes.livenessProbe.path is required" .probes.livenessProbe.path }}"
      scheme: HTTP
    initialDelaySeconds: {{ .probes.livenessProbe.initialDelaySeconds | default "40" }}
    periodSeconds: {{ .probes.livenessProbe.periodSeconds | default "10" }}
    timeoutSeconds: {{ .probes.livenessProbe.timeoutSeconds | default "30" }}
    successThreshold: {{ .probes.livenessProbe.successThreshold | default "1" }}
    failureThreshold: {{ .probes.livenessProbe.failureThreshold | default "2" }}
    {{- end }}
  {{- end }}
  {{- if .probes.startupProbe.enabled }}
  startupProbe:
    {{- if .probes.startupProbe.custom }}
    {{- toYaml .probes.startupProbe.custom | nindent 4 }}
    {{- else }}
    httpGet:
      port: {{ .probes.startupProbe.port | default .port }}
      path: "{{ required "app.probes.startupProbe.path is required" .probes.startupProbe.path }}"
      scheme: HTTP
    initialDelaySeconds: {{ .probes.startupProbe.initialDelaySeconds | default "40" }}
    periodSeconds: {{ .probes.startupProbe.periodSeconds | default "10" }}
    timeoutSeconds: {{ .probes.startupProbe.timeoutSeconds | default "30" }}
    successThreshold: {{ .probes.startupProbe.successThreshold | default "1" }}
    failureThreshold: {{ .probes.startupProbe.failureThreshold | default "2" }}
    {{- end }}
  {{- end }}
  resources:
    limits:
      cpu: {{ .resources.limits.cpu | default "100m" | quote }}
      memory: {{ .resources.limits.memory | default "128Mi" | quote }}
      {{- if .resources.limits.ephemeralStorage }}
      ephemeral-storage: {{ .resources.limits.ephemeralStorage }}
      {{- end }}
    requests:
      cpu: {{ .resources.requests.cpu | default "100m" | quote }}
      memory: {{ .resources.requests.memory | default "128Mi" | quote }}
      {{- if .resources.requests.ephemeralStorage }}
      ephemeral-storage: {{ .resources.requests.ephemeralStorage }}
      {{- end }}
{{- end -}}
Running helm template results in the below:
containers:
  - name: app
    image: xxxxx.dkr.ecr.eu-west-1.amazonaws.com/foobar:latest
    imagePullPolicy: Always
    ports:
      - containerPort: 80
        protocol: TCP
    readinessProbe:
      httpGet:
        port: 80
        path: "/_readiness"
        scheme: HTTP
      initialDelaySeconds: 40
      periodSeconds: 10
      timeoutSeconds: 30
      successThreshold: 3
      failureThreshold: 2
    livenessProbe:
      httpGet:
        port: 80
        path: "/_liveness"
        scheme: HTTP
      initialDelaySeconds: 40
      periodSeconds: 10
      timeoutSeconds: 30
      successThreshold: 1
      failureThreshold: 2
    resources:
      limits:
        cpu: "10Mi"
        memory: "2Gi"
      requests:
        cpu: "10Mi"
        memory: "2Gi"
  - name: extra-container1
    image: xxxx.dkr.ecr.eu-west-1.amazonaws.com/foobar-extra:test
    command:
      - "sleep"
      - "10"
    env:
      - name: FOO
        value: "BAR"
    imagePullPolicy: Always
    ports:
      - containerPort: 8080
        protocol: TCP
    readinessProbe:
      httpGet:
        port: 8080
        path: "/extraContainerReadinessPath"
        scheme: HTTP
      initialDelaySeconds: 40
      periodSeconds: 10
      timeoutSeconds: 30
      successThreshold: 3
      failureThreshold: 2
    livenessProbe:
      httpGet:
        port: 8080
        path: "/extraContainerLivenessPath"
        scheme: HTTP
      initialDelaySeconds: 40
      periodSeconds: 10
      timeoutSeconds: 30
      successThreshold: 1
      failureThreshold: 2
    resources:
      limits:
        cpu: "1000Mi"
        memory: "1Gi"
      requests:
        cpu: "1000Mi"
        memory: "1Gi"
  - name: extra-container2
    image: xxxxx.dkr.ecr.eu-west-1.amazonaws.com/foobar-two:latest
    command:
      - "sleep"
      - "10"
    env:
      - name: FOO
        value: "BAR"
    imagePullPolicy: Always
    ports:
      - containerPort: 8080
        protocol: TCP
    resources:
      limits:
        cpu: "1000Mi"
        memory: "1Gi"
      requests:
        cpu: "1000Mi"
        memory: "1Gi"
As you can see, extra-container2 should have the default resource limits/requests of 10Mi and 2Gi, but it ends up with the values from the previous iteration of the loop (extra-container1). The same happens with the FOO=BAR env var, which shows up in extra-container2 even though it is only set for extra-container1.
Essentially, it looks like values set in the first iteration of the range are persisting into the second iteration whenever the second container doesn't explicitly set those values.
Thank you for reading.
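A hedged note on the likely cause and one possible fix: Sprig's merge and mustMergeOverwrite mutate the dict passed as their first argument, so each call above that uses $.Values.app as the destination writes the merged result back into .Values.app, and later loop iterations inherit whatever the previous container set. A minimal sketch of the same template with the merges done against a fresh copy instead (assuming deepCopy is acceptable here; everything else is unchanged):
containers:
{{/* deepCopy prevents merge/mustMergeOverwrite from mutating .Values.app between iterations */}}
{{- include "app.container" (merge (deepCopy .Values.app) $) | nindent 8 }}
{{- range $k, $v := .Values.extraContainers }}
{{- $nameDict := dict "name" $k -}}
{{- include "app.container" (mustMergeOverwrite (deepCopy $.Values.app) $nameDict $v) | nindent 8 }}
{{- end }}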

Related

How to set a Spring Boot array property as a kubernetes secret?

I want to use the direct translation from k8s secret keys to Spring Boot properties.
Therefore I have a Helm chart (it would be similar with plain k8s):
apiVersion: v1
data:
  app.entry[0].name: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
With that, my intention is that this behaves as if I'd set this in the Spring properties file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly, I tried these, which all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set using the APP_ENTRY_0_NAME environment variable, which could be set in your deployment.
Using a secret like:
apiVersion: v1
data:
  value: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
and then use it with
env:
  - name: APP_ENTRY_0_NAME
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: value
What you can do is pass the application.properties file specified within a k8s Secret to your Spring Boot application.
For instance, define your k8s Opaque Secret this way:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-secret
data:
  application.properties: "app.entry[0].name={{ .Values.firstEntry.name }}"
Of course you will have more properties that you want to set in your application.properties file, so just see this as an example with the type of entry that you need, as stated in your question. I'm not a Spring Boot specialist, but one idea could be (if possible) to tell the Spring Boot application to look for more than a single application.properties file, so that you only need to pass some of the configuration parameters from the outside instead of all of them.
When using kubernetes secrets as files in pods, as specified within the official kubernetes documentation, each key in the secret data map becomes a filename under a volume mountpath (See point 4).
Hence, you can just mount the application.properties file defined within your k8s secret into the container in which your Spring Boot application is running. Assuming that you make use of a deployment template in your Helm chart, here is a sample deployment.yaml template that would do the job (please focus on the part where the volumes and volumeMounts are specified):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sample.fullname" . }}
  labels:
    {{- include "sample.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "sample.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "sample.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "sample.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: my-awesome-volume
              mountPath: /path/where/springboot/app/expects/application.properties
              subPath: application.properties
      volumes:
        - name: my-awesome-volume
          secret:
            secretName: my-secret
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
As desired, this gives you a solution that requires no changes to your application code. I hope this gets you going in the intended way.
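If you do want the application to pick up an extra properties file on top of its built-in configuration, one option (my assumption, not something stated in the original answer) is Spring Boot's spring.config.additional-location property, which could be passed to the container, for example as an argument:
args:
  # hypothetical: points Spring Boot at the application.properties mounted from the Secret above
  - "--spring.config.additional-location=file:/path/where/springboot/app/expects/application.properties"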
You can do this by saving a JSON file as a secret.
Step 1: Create the JSON file which needs to be stored as a secret, for example secret-data.json:
{
  "entry": [
    {
      "name1": "data1",
      "key1": "dataX"
    },
    {
      "name2": "data2",
      "key2": "dataY"
    }
  ]
}
Step 2: Create a secret from the file
kubectl create secret generic data-1 --from-file=secret-data.json
Step 3: Attach the secret to the pod
env:
  - name: APP_DATA
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json
You can verify this by exec-ing into the container and checking the environment.
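For example (the pod name here is hypothetical):
kubectl exec -it my-pod -- printenv APP_DATA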

How to properly set up Health and Liveness Probes in Helm

As a stepping stone to a more complicated problem, I have been following this example: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/, step by step. The next step I have been trying to learn is using Helm files to deploy the Golang service instead of a makefile. Specifically, I am converting a deployment.yaml that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
        - name: {{ .ServiceName }}
          image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8000
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8000
          resources:
            limits:
              cpu: 10m
              memory: 30Mi
            requests:
              cpu: 10m
              memory: 30Mi
      terminationGracePeriodSeconds: 30
to a Helm deployment.yaml that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
{{- include "mychart.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "mychart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8000
readinessProbe:
httpGet:
path: /readyz
port: 8000
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
However, when I run the Helm chart, the probes (which work perfectly fine when not using Helm) fail. Specifically, when describing the pod, I get the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503".
I have obviously set up the probes wrong in the Helm chart. How do I convert these probes from one system to the other?
Solution:
The solution I found was that the probes in the Helm chart needed initial time delays. When I replaced
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
with
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15
Because the probes were running before the container had fully started, they were automatically concluding that it had failed.
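As a side note (not part of the original answer), a startupProbe could achieve the same thing without permanently delaying the probes once the container is up; a minimal sketch, assuming the same endpoint and port:
startupProbe:
  httpGet:
    path: /healthz
    port: 8000
  # allow up to ~60s (12 x 5s) for the app to start before the liveness/readiness probes take over
  periodSeconds: 5
  failureThreshold: 12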

ingress variables syntax from values.yaml

I have a simple API that I've been deploying to K8s as a NodePort service via Helm. I'm working to add an ingress to the Helm chart, but I'm having some difficulty getting the variables correct.
values.yaml
ingress:
  metadata:
    name: {}
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
  spec:
    rules:
      - host: "testapi.local.dev"
        http:
          paths:
            - pathType: Prefix
              path: "/"
              backend:
                service:
                  name: {}
                  port:
                    number: 80
templates/ingress.yaml, showing only the spec section where I'm having issues.
spec:
  rules:
    {{- range .Values.ingress.spec.rules }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path | quote }}
            pathType: {{ .pathType | quote }}
            backend:
              service:
                name: {{ include "testapi.service.name" . }}
                port:
                  {{- range $key, $value := (include "testapi.deployment.ports" . | fromYaml) }}
                  number: {{ .port }}
                  {{- end }}
          {{- end }}
    {{- end }}
When running helm template it just leaves these values blank, and I'm not sure where the syntax is wrong. Removing the {{- range .paths }} and the following .path and .pathType and replacing them with literal values corrects the issue:
spec:
  rules:
    - host: "testapi.local.dev"
      http:
        paths:
Comments revealed I should be using {{- range .http.paths }}.
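With that fix applied, the inner loop of the template above would look like this (backend section omitted for brevity, everything else unchanged):
      http:
        paths:
          {{- range .http.paths }}
          - path: {{ .path | quote }}
            pathType: {{ .pathType | quote }}
          {{- end }}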

How to range optional array on Helm

Consider the following template:
...
{{- range .Values.additionalMetrics }}
- interval: 1m
  port: {{ .name }}
{{- end }}
...
And the following values:
additionalMetrics:
  - name: kamon-metrics
    port: 9096
    targetPort: 9096
If additionalMetrics is missing, the helm template will fail.
Is it possible to first check whether additionalMetrics is defined and only then range over the values, or continue otherwise?
Note: I'd like this in one condition, without first doing an if and then a range; for example, this is not my desired solution:
{{- if .Values.additionalMetrics }}
{{- range .Values.additionalMetrics }}
- name: {{ .name }}
  port: {{ .port }}
  targetPort: {{ .targetPort }}
{{- end }}
{{- end }}
Thanks in advance.
The solution that you say you don't want is alright, in my opinion: it's simple and does what it's supposed to do. There is no need to make things complicated.
You could make it a bit prettier with a with clause:
{{- with .Values.additionalMetrics }}
{{- range . }}
- name: {{ .name }}
  port: {{ .port }}
  targetPort: {{ .targetPort }}
{{- end }}
{{- end }}
If you really want to do it in a single statement, you could use an empty list as the default:
{{- range .Values.additionalMetrics | default list }}
- name: {{ .name }}
  port: {{ .port }}
  targetPort: {{ .targetPort }}
{{- end }}

Pod to Pod communication within a Service

Is it possible to send an HTTP REST request to another K8s Pod that belongs to the same Service in Kubernetes?
E.g.:
Service name = UserService, 2 Pods (replicas = 2)
Pod 1 --> Pod 2 // using the pod IP, not the load-balanced hostname
Pod 2 --> Pod 1
The connection is over REST: GET 1.2.3.4:7079/user/1
The values for host + port are taken from kubectl get ep.
Both of the pod IPs work successfully outside of the pods, but when I kubectl exec -it into a pod and make the request via curl, it returns a 404 not found for the endpoint.
Q: What I would like to know is whether it is possible to make a request to another K8s Pod that is in the same Service?
Q: Why am I able to get a successful ping to 1.2.3.4, but not hit the REST API?
Below are my config files.
# values.yml
replicaCount: 1
image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"
service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079
ingress:
  enabled: false
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
service.yml
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}
(Screenshots omitted: the request executed from the master node, and executed from inside a pod of the same microservice.)
Is it possible to send a http Rest request to another K8 Pod that belongs to the same Service in Kubernetes?
For sure, yes; that's actually exactly why every Pod in the cluster has a cluster-wide routable address. You can programmatically ask Kubernetes for the list of a Pod's "peers" by requesting the Endpoints object that is named the same as the Service, then subtracting out your own Pod's IP address. It seems like you kind of knew that from kubectl get ep, but since you then asked the question, I thought I would be explicit that your experience wasn't an accident.
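For example, a minimal sketch of that lookup using the Service name from the question (the jsonpath expression is just one way to print the peer IPs; subtract your own address, e.g. the MY_POD_IP env var from the deployment above):
kubectl get endpoints UserService -o jsonpath='{.subsets[*].addresses[*].ip}'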
Q Why am I able to get a successful ping 1.2.3.4, but not hit the Rest API?
We can't help you troubleshoot your app without some app logs, but the fact that you got a 404 and not "connection refused" or a 504 or similar means your connectivity worked fine; it's just the app that is broken.
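To see that distinction for yourself, repeat the request from inside one of the pods with verbose output (the pod name is hypothetical; the IP and path are the ones from the question):
kubectl exec -it my-pod -- curl -v http://1.2.3.4:7079/user/1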
Yes, as Mathew answered, you can indeed communicate between pods in a Service with Kubernetes; the problem I was having was that Istio was blocking the requests between them.
Solution:
Disabling Istio injection solved this problem for me. I then re-enabled it afterwards and load balancing continued to work; hopefully this can help someone in the future.
see answer here: Envoy Pod to Pod communication within a Service in K8
