This is my Job configuration with parallelism: 2:
spec:
ttlSecondsAfterFinished: 60
backoffLimit: 0
parallelism: 2
Two pods will come up for the same Job. How do I give two different parameters to the two pods using Helm charts?
I have tried to do it like this. Below is my Helm template for the Job.
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
name: alpine-base-{{ index $.Values.ip_range $index }}
labels:
app: alpine-app
chart: alpine-chart
annotations:
"helm.sh/hook-delete-policy": "hook-succeeded"
spec:
ttlSecondsAfterFinished: 60
backoffLimit: 0
completionMode: Indexed
parallelism: {{ $.Values.replicaCount }}
completions: {{ $.Values.replicaCount }}
template:
metadata:
labels:
app_type: base
spec:
containers:
- name: alpine-base
image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
volumeMounts:
- mountPath: /scripts
name: scripts-vol
env:
- name: IP
value: {{ index $.Values.slave_producer.ip_range $index }}
- name: SLEEP_TIME
value: {{ $.Values.slave_producer.sleepTime }}
command: ["/bin/bash"]
args: ["/scripts/slave_trigger_script.sh"]
ports:
- containerPort: 1093
- containerPort: 5000
- containerPort: 3445
resources:
requests:
memory: "1024Mi"
cpu: "1024m"
limits:
memory: "1024Mi"
cpu: "1024m"
volumes:
- name: scripts-vol
configMap:
name: scripts-configmap
restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable.
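One direction that may help (a sketch only, not taken from the chart above, and only on clusters where Indexed Jobs are available): because the Job uses completionMode: Indexed, the Job controller sets a distinct JOB_COMPLETION_INDEX environment variable (0, 1, ...) in every pod, so the container itself can pick its own entry from a list passed in a single variable. IP_LIST is an illustrative name:

env:
  - name: IP_LIST
    # join is a Sprig function; renders e.g. "10.0.0.1,10.0.0.2"
    value: {{ join "," $.Values.slave_producer.ip_range | quote }}
command: ["/bin/bash", "-c"]
args:
  - 'export IP="$(echo "$IP_LIST" | cut -d"," -f$((JOB_COMPLETION_INDEX + 1)))"; exec /scripts/slave_trigger_script.sh'

With that approach the per-index range loop is no longer needed to set the env value, since each pod resolves its own IP at runtime.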
I want to use the direct translation from k8s secret keys to Spring Boot properties.
Therefore I have a Helm chart (but it is similar with plain k8s):
apiVersion: v1
data:
app.entry[0].name: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
My intention is that this behaves as if I'd set the following in a Spring property file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into Spring directly from a k8s secret?
Shooting around wildly, I tried these, which all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set using the APP_ENTRY_0_NAME environment variable. This could be set in your deployment.
Using a secret like:
apiVersion: v1
data:
value: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
and then use it with
env:
- name: APP_ENTRY_0_NAME
valueFrom:
secretKeyRef:
name: my-secret
key: value
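If there are several entries, the same pattern could be generated from a list, since Spring Boot's relaxed binding turns each index into its own variable (APP_ENTRY_0_NAME, APP_ENTRY_1_NAME, ...). A minimal sketch, where the .Values.entries list and the entry-<i>-name secret keys are assumptions of this example:

env:
{{- range $i, $entry := .Values.entries }}
  - name: APP_ENTRY_{{ $i }}_NAME
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: entry-{{ $i }}-name
{{- end }}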
What you can do is pass an application.properties file, defined within a k8s Secret, to your Spring Boot application.
For instance, define your k8s Opaque Secret this way:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: my-secret
stringData:
  # stringData accepts the value as-is, without base64-encoding it first
  application.properties: "app.entry[0].name={{ .Values.firstEntry.name }}"
Of course you will have more properties that you want to set in your application.properties file, so take this just as an example of the type of entry you need, as stated in your question. I'm not a Spring Boot specialist, but an idea could be (if possible) to tell the Spring Boot application to look for more than a single application.properties file, so that you only need to pass some of the configuration parameters from the outside instead of all of them.
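On that last point, Spring Boot does let you add extra config locations next to the default ones, for example via an environment variable in the deployment. A sketch, where the /config/ path is only an assumption about where the secret gets mounted:

env:
  - name: SPRING_CONFIG_ADDITIONAL_LOCATION
    value: "file:/config/"

That way the application.properties baked into the image keeps the bulk of the configuration and the mounted file only overrides selected keys.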
When using Kubernetes secrets as files in pods, as specified in the official Kubernetes documentation, each key in the secret data map becomes a filename under the volume mountPath (see point 4).
Hence, you can just mount the application.properties file defined in your k8s secret into the container in which your Spring Boot application is running. Assuming that you use a deployment template in your Helm chart, here is a sample deployment.yaml template that would do the job (please focus on the part where the volumes and volumeMounts are specified):
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "sample.fullname" . }}
labels:
{{- include "sample.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "sample.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "sample.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "sample.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: my-awesome-volume
mountPath: /path/where/springboot/app/expects/application.properties
subPath: application.properties
volumes:
- name: my-awesome-volume
secret:
secretName: my-secret
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
As desired, this gives you a solution that does not require changing any of your application code. I hope this gets you going in the intended direction.
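If you want to double-check the result, the mounted file can be inspected in the running pod. A sketch, with my-release-sample standing in for your actual deployment name and the mountPath from above:

kubectl exec deploy/my-release-sample -- cat /path/where/springboot/app/expects/application.properties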
You can do this by saving a JSON file as a secret.
Step 1: Create the JSON file that needs to be stored as a secret, for example secret-data.json:
{
"entry": [
{
"name1": "data1",
"key1": "dataX"
},
{
"name2": "data2",
"key2": "dataY"
}
]
}
Step 2: Create a secret from the file
kubectl create secret generic data-1 --from-file=secret-data.json
Step 3: Attach secret to pod
env:
- name: APP_DATA
valueFrom:
secretKeyRef:
name: data-1
key: secret-data.json
You can verify this by exec'ing into the container and checking the environment.
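For example (my-pod is a placeholder for the actual pod name):

kubectl exec my-pod -- printenv APP_DATA

Note that if the goal is for Spring Boot itself to consume this JSON directly, the variable could instead be named SPRING_APPLICATION_JSON, which Spring Boot reads as a block of inline JSON properties; with the APP_DATA name above, the application has to parse the JSON on its own.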
I need to run multiple containers in one cronjob execution. Currently I have the following cronjob.yaml template:
jobTemplate:
spec:
template:
metadata:
labels:
app: {{ .Release.Name }}
cron: {{ .Values.filesjob.jobName }}
spec:
containers:
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
env:
- name: FILE_MASK
value: "{{ .Values.filesjob.fileMask }}"
- name: ID
value: "{{ .Values.filesjob.id }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: {{ .Values.filesjob.jobName }}
volumeMounts:
- mountPath: /data
name: path-to-clean
- name: path-logfiles
mountPath: /log
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
env:
- name: FILE_MASK
value: "{{ .Values.filesjob.fileMask }}"
- name: ID
value: "{{ .Values.filesjob.id }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: {{ .Values.filesjob.jobName2 }}
volumeMounts:
- mountPath: /data
name: path2-to-clean
- name: path2-logfiles
mountPath: /log
The above generates a CronJob that executes two containers, passing different env variables. Can I generate the same thing from values.yaml by iterating over a variable?
I was able to solve this based on the example from this article: https://nikhils-devops.medium.com/helm-chart-for-kubernetes-cronjob-a694b47479a
Here is my template:
{{- if .Values.cleanup.enabled -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: registry-cleanup-job
namespace: service
labels:
{{- include "some.labels" . | nindent 4 }}
spec:
schedule: {{ .Values.cleanup.schedule }}
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 2
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
{{- range .Values.cleanup.crons }}
- name: {{ .name | quote }}
image: {{ $.Values.cleanup.imageName }}
imagePullPolicy: {{ $.Values.cleanup.pullPolicy }}
args:
- {{ .command }}
{{- end}}
restartPolicy: Never
{{- end }}
And related values:
cleanup:
enabled: true
schedule: "0 8 * * 5"
imageName: digitalocean/doctl:1.60.0
pullPolicy: IfNotPresent
crons:
- command0:
name: "cleanup0"
command: command0
- command1:
name: "cleanup1"
command: command1
- command2:
name: "cleanup2"
command: command2
- command3:
name: "cleanup3"
command: command3
- command4:
name: "cleanup4"
command: command4
I'm creating a Helm package for an application, and it works fine if I use Helm without replacing the values file. But when I try to replace values with the --values flag, the final YAML is different from what it's supposed to be.
In the chart I've included a MariaDB database and added the following PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-mariadb-data
labels:
app.funpresp.k8s/part-of: {{ .Release.Name }}
spec:
storageClassName: {{ .Values.mariadb.storage.class }}
accessModes:
- {{ .Values.mariadb.storage.accessMode }}
resources:
requests:
storage: {{ .Values.mariadb.storage.size }}
which in the package's values.yaml is set like:
mariadb:
storage:
class: resize-sc
accessMode: ReadWriteOnce
size: 2Gi
That application chart is used in another chart as a dependency, and in the new chart the value is replaced with:
mariadb:
storage:
class: resize-sc
accessMode: ReadWriteOnce
size: 10Gi
The expectation here is that the final YAML declares a 10Gi storage size, but it turns out to be 2Gi. To view that YAML I'm running helm template --debug.
I've uploaded the chart code to https://github.com/bspa10/helmcharts/tree/master/rundeck alongside the client chart code.
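One thing worth checking (a sketch, assuming the dependency is declared under the name rundeck in the parent chart's Chart.yaml): when a chart is used as a dependency, the parent chart can only override its values by nesting them under the dependency's name, so the parent's values.yaml would need to look like:

rundeck:
  mariadb:
    storage:
      size: 10Gi

Values placed at the top level of the parent chart are not passed down to the subchart, which would explain why the rendered output keeps the subchart's 2Gi default.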
I want to put the following CRD into a Helm chart, but it contains a raw Go template. How do I make Helm not interpret the {{ and }} inside rawTemplate? Thanks for your response.
https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
namespace: katib
labels:
controller-tools.k8s.io: "1.0"
name: random-example
spec:
studyName: random-example
owner: crd
optimizationtype: maximize
objectivevaluename: Validation-accuracy
optimizationgoal: 0.99
requestcount: 4
metricsnames:
- accuracy
workerSpec:
goTemplate:
rawTemplate: |-
apiVersion: batch/v1
kind: Job
metadata:
name: {{.WorkerId}}
namespace: katib
spec:
template:
spec:
containers:
- name: {{.WorkerId}}
image: katib/mxnet-mnist-example
command:
- "python"
- "/mxnet/example/image-classification/train_mnist.py"
- "--batch-size=64"
{{- with .HyperParameters}}
{{- range .}}
- "{{.Name}}={{.Value}}"
{{- end}}
{{- end}}
restartPolicy: Never
In the Go template language, the expression
{{ "{{" }}
will expand to two open curly braces, for cases when you need to use Go template syntax to generate documents in Go template syntax; for example
{{ "{{" }}- if .Values.foo }}
- name: FOO
value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using helm template to dump out what gets generated.)
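For a large block like the rawTemplate above, escaping every brace gets noisy. An alternative sketch (assuming the worker manifest is saved as files/worker-template.yaml inside the chart) is to keep it in a plain file and read it with .Files.Get, which does not run the file through the template engine:

workerSpec:
  goTemplate:
    rawTemplate: |-
{{ .Files.Get "files/worker-template.yaml" | indent 8 }}

The indent 8 lines the file's content up under the |- block scalar; again, rendering with helm template is a good way to confirm the whitespace comes out right.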