I'm creating a Helm package for an application, and it works fine when I use the chart without replacing the values file. But when I try to override values with the --values flag, the final YAML is different from what it's supposed to be.
In the chart I've included a MariaDB database and added the following PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-mariadb-data
  labels:
    app.funpresp.k8s/part-of: {{ .Release.Name }}
spec:
  storageClassName: {{ .Values.mariadb.storage.class }}
  accessModes:
    - {{ .Values.mariadb.storage.accessMode }}
  resources:
    requests:
      storage: {{ .Values.mariadb.storage.size }}
which in the package's values.yaml is set as:
mariadb:
  storage:
    class: resize-sc
    accessMode: ReadWriteOnce
    size: 2Gi
That application chart is used in another chart as a dependency, and in the new chart the value is overridden with:
mariadb:
  storage:
    class: resize-sc
    accessMode: ReadWriteOnce
    size: 10Gi
The expectation is that the final YAML declares a 10Gi storage size, but it turns out to be 2Gi. To inspect that YAML I'm running helm template --debug.
I've uploaded the chart code to https://github.com/bspa10/helmcharts/tree/master/rundeck alongside the client chart code.
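For context on how Helm scopes values: a parent chart only passes overrides down to a dependency when they are nested under that dependency's name (or alias) in the parent's values.yaml; top-level keys stay in the parent's own scope. A minimal sketch of the parent chart's values.yaml, assuming the dependency is declared under the name rundeck (that name is an assumption based on the repository path):
# parent chart values.yaml (sketch; "rundeck" is the assumed dependency name/alias)
rundeck:
  mariadb:
    storage:
      class: resize-sc
      accessMode: ReadWriteOnce
      size: 10Gi
With that nesting, the subchart's .Values.mariadb.storage.size resolves to 10Gi when rendered from the parent.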
I'm trying to create a default template for many similar applications. I need to share the same PVC between two or more pods, and the chart must either create a PVC or not, depending on whether it already exists.
This is the relevant portion of my values.yml for volumes:
persistence:
  enabled: true
volumeMounts:
  - name: vol1
    mountPath: /opt/vol1
  - name: vol2
    mountPath: /opt/vol2
volumes:
  - name: vol1
    create: true
    claimName: claim-vol1
    storageClassName: gp2
    accessModes: ReadWriteOnce
    storage: 1Gi
  - name: vol2
    create: false
    claimName: claim-vol2
    storageClassName: gp2
    accessModes: ReadWriteOnce
    storage: 1Gi
And this is my pvclaim.yml:
{{- if .Values.persistence.enabled }}
{{- if .Values.volumes.create }}
{{- range .Values.volumes }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .claimName }}
spec:
  storageClassName: {{ .storageClassName }}
  accessModes:
    - {{ .accessModes }}
  resources:
    requests:
      storage: {{ .storage }}
{{- end }}
{{- end }}
{{- end }}
I thought I'd add a create field to each entry in volumes to control whether its PVC is created (assuming in this example that PVC vol2 already exists from another Helm chart).
I would like Helm to read the create field inside the range, but as written I get this error:
evaluate field create in type interface {}
If you have any other ideas, they are welcome. Thanks!
volumes is an array; the array itself does not have a create field.
The elements of volumes have that field, so .Values.volumes.create does not make sense. Inside the range you can check the create field of each element using .create, e.g.:
{{- range .Values.volumes }}
{{if .create}}do something here{{end}}
{{- end}}
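Applied to the pvclaim.yml above, a minimal sketch (reusing the field names from the question's values.yml) drops the outer .Values.volumes.create check and guards each PVC inside the range instead:
{{- if .Values.persistence.enabled }}
{{- range .Values.volumes }}
{{- if .create }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .claimName }}
spec:
  storageClassName: {{ .storageClassName }}
  accessModes:
    - {{ .accessModes }}
  resources:
    requests:
      storage: {{ .storage }}
{{- end }}
{{- end }}
{{- end }}
With the values above, this renders a PVC for vol1 only, since vol2 has create: false.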
This is my Job configuration with parallelism: 2:
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  parallelism: 2
Two pods for the same Job will come up. How do I give two different parameters to the two pods using Helm charts?
I have tried to do this as follows; below is my Helm template for the Job:
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-{{ index $.Values.ip_range $index }}
  labels:
    app: alpine-app
    chart: alpine-chart
  annotations:
    "helm.sh/hook-delete-policy": "hook-succeeded"
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  parallelism: {{ $.Values.replicaCount }}
  completions: {{ $.Values.replicaCount }}
  template:
    metadata:
      labels:
        app_type: base
    spec:
      containers:
        - name: alpine-base
          image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
          imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
          volumeMounts:
            - mountPath: /scripts
              name: scripts-vol
          env:
            - name: IP
              value: {{ index $.Values.slave_producer.ip_range $index }}
            - name: SLEEP_TIME
              value: {{ $.Values.slave_producer.sleepTime }}
          command: ["/bin/bash"]
          args: ["/scripts/slave_trigger_script.sh"]
          ports:
            - containerPort: 1093
            - containerPort: 5000
            - containerPort: 3445
          resources:
            requests:
              memory: "1024Mi"
              cpu: "1024m"
            limits:
              memory: "1024Mi"
              cpu: "1024m"
      volumes:
        - name: scripts-vol
          configMap:
            name: scripts-configmap
      restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable.
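Two details worth noting about the template above. First, the range stamps out one Job per index, and every one of those Jobs runs replicaCount pods from the same pod template, so every pod of a given Job necessarily gets the same rendered IP value. Second, because the Job uses completionMode: Indexed, Kubernetes exposes each pod's index via the batch.kubernetes.io/job-completion-index annotation and, on recent versions, the JOB_COMPLETION_INDEX environment variable, which the script can use to pick its own parameter. A minimal sketch, assuming the whole list is passed as one comma-separated variable and slave_trigger_script.sh selects its entry by index (the IP_LIST name is illustrative):
          env:
            # Every pod receives the full list; each pod selects its own entry
            # using the index Kubernetes injects for Indexed Jobs.
            - name: IP_LIST
              value: "{{ join "," $.Values.slave_producer.ip_range }}"
            # JOB_COMPLETION_INDEX is set automatically for Indexed Jobs; inside
            # the script: IP=$(echo "$IP_LIST" | cut -d, -f$((JOB_COMPLETION_INDEX + 1)))
With that approach a single Indexed Job suffices, so the outer range over until is no longer needed.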
Looking for some help/guidance on writing a shell script to generate a ConfigMap for my cluster. I want the script to loop through a map or dictionary of values and fill out the ConfigMap.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:
    - seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
      metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "^(.*)_total$"
        # The metric we want the individual pod to match
        as: "dfs_requests_per_second"
I want the shell script to emit everything from - seriesQuery down, replacing values such as proxy="value" and the [2m] metric interval wherever those variables appear.
For each key/value pair it would append the same block, correctly indented. Something like this:
pod names: [pod1, pod2]
interval: [2m, 3m]
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}'
  metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod1"}[2m])) by (<<.GroupBy>>)'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "^(.*)_total$"
    # The metric we want the individual pod to match
    as: "dfs_requests_per_second"
- seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}'
  metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="pod2"}[3m])) by (<<.GroupBy>>)'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "^(.*)_total$"
    # The metric we want the individual pod to match
    as: "dfs_requests_per_second"
And it would append those blocks to the ConfigMap header below:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:
It would be preferable to use a template engine that's YAML-aware, but provided your pod names and intervals don't contain any problematic characters, you can generate your config file easily with bash:
#!/bin/bash
pods=( pod1 pod2 )
intervals=( 2m 3m )

cat <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus-adapter
    meta.helm.sh/release-namespace: prometheus
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: prometheus-adapter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.1
  name: prometheus-adapter
  namespace: prometheus
data:
  config.yaml: |
    rules:
EOF

# Loop over the array indices so pods[i] and intervals[i] stay paired.
for i in "${!pods[@]}"
do
  cat <<EOF
    - seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}'
      metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="${pods[i]}"}[${intervals[i]}])) by (<<.GroupBy>>)'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "^(.*)_total$"
        # The metric we want the individual pod to match
        as: "dfs_requests_per_second"
EOF
done
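Since the ConfigMap in question is Helm-managed, the YAML-aware route this answer alludes to can also be done with a Helm template driven by values. A minimal sketch; the customRules value name and its shape are assumptions for illustration, not the prometheus-adapter chart's actual values schema:
# values.yaml (assumed shape)
customRules:
  - pod: pod1
    interval: 2m
  - pod: pod2
    interval: 3m

# templates/configmap.yaml (fragment)
data:
  config.yaml: |
    rules:
    {{- range .Values.customRules }}
    - seriesQuery: 'haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="{{ .pod }}"}'
      metricsQuery: 'sum(rate(haproxy_backend_http_requests_total{namespace!="", pod!="", proxy="{{ .pod }}"}[{{ .interval }}])) by (<<.GroupBy>>)'
      resources:
        overrides:
          namespace: {resource: "namespace"}
      name:
        matches: "^(.*)_total$"
        as: "dfs_requests_per_second"
    {{- end }}
The <<.GroupBy>> placeholder uses different delimiters than Helm's {{ }}, so it passes through the render untouched.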
I installed Elasticsearch from the Helm stable chart and I'm stuck with the PVC pending, never binding to the PV.
I have created a PV with the same labels as the PVC from the Helm Elasticsearch chart.
How do I bind this Elasticsearch PVC to the PV?
Update config:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    component: data
    release: es
    role: data
  name: data-es-elasticsearch-data-0
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-es-elasticsearch-data-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
status: {}
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"task-pv-volume"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"15Gi"},"hostPath":{"path":"/app/k8s-volumes"},"storageClassName":"manual"}}
  creationTimestamp: null
  finalizers:
    - kubernetes.io/pv-protection
  labels:
    app: elasticsearch
    component: data
    release: es
    role: data
    type: local
  name: task-pv-volume
  selfLink: /api/v1/persistentvolumes/task-pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 15Gi
  hostPath:
    path: /app/k8s-volumes
    type: ""
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  volumeMode: Filesystem
status: {}
The PersistentVolumeClaim and PersistentVolume storage classes (not labels) must match. The PVC has no storageClassName, so change the PV to have storageClassName: '' and it should work.
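A minimal sketch of that suggestion applied to the PV from the question (only spec.storageClassName changes):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: ""   # explicit empty string instead of "manual"
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 15Gi
  hostPath:
    path: /app/k8s-volumes
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem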
I guess you're not using a storage class, since you created the PV manually, yet you're defining a storage class in the PV. So, try re-creating the PV without the storageClassName field:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 15Gi
  hostPath:
    path: /app/k8s-volumes
    type: ""
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
I want to put the following custom resource into a Helm chart, but it contains a raw Go template. How do I keep Helm from rendering the {{ and }} inside rawTemplate? Thanks for your response.
https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
namespace: katib
labels:
controller-tools.k8s.io: "1.0"
name: random-example
spec:
studyName: random-example
owner: crd
optimizationtype: maximize
objectivevaluename: Validation-accuracy
optimizationgoal: 0.99
requestcount: 4
metricsnames:
- accuracy
workerSpec:
goTemplate:
rawTemplate: |-
apiVersion: batch/v1
kind: Job
metadata:
name: {{.WorkerId}}
namespace: katib
spec:
template:
spec:
containers:
- name: {{.WorkerId}}
image: katib/mxnet-mnist-example
command:
- "python"
- "/mxnet/example/image-classification/train_mnist.py"
- "--batch-size=64"
{{- with .HyperParameters}}
{{- range .}}
- "{{.Name}}={{.Value}}"
{{- end}}
{{- end}}
restartPolicy: Never
In the Go template language, the expression
{{ "{{" }}
will expand to two literal open curly braces, for cases when you need to use Go template syntax to generate documents that are themselves Go templates; for example:
{{ "{{" }}- if .Values.foo }}
- name: FOO
value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using helm template to dump out what gets generated.)
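Another option that avoids escaping every delimiter: keep the raw Go template in a plain file inside the chart and read it with .Files.Get, since files pulled in that way are not run through Helm's template engine, so the inner {{ ... }} survives untouched. A sketch, assuming the worker template is saved in the chart as files/worker-template.yaml (the path is illustrative):
# templates/studyjob.yaml (fragment, assumed file layout)
  workerSpec:
    goTemplate:
      rawTemplate: |-
{{ .Files.Get "files/worker-template.yaml" | indent 8 }}
The indent 8 matches the indentation of the rawTemplate block scalar so the file's contents land inside it unchanged.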