Assigning an environment variable's value to a helm variable - ruby

I'm new to helm and I want to be able to write gitlab project variables to files using config maps and shared environment variables.
I have a set of environment variables defined as gitlab project variables (the gitlab runner exposes them as environment variables) for each environment (where <ENV> is DEV/TEST/PROD for the sake of brevity):
MYSQL_USER_<ENV> = "user"
MYSQL_PASSWORD_<ENV> = "pass"
In the helm chart every environment has a map of its variables. For example, values.<ENV>.yaml contains:
envVars:
  MYSQL_USER: $MYSQL_USER_<ENV>
  MYSQL_PASSWORD: $MYSQL_PASSWORD_<ENV>
values.yaml contains a Ruby file which will consume those variables:
files:
  config.rb: |
    mysql['username'] = ENV["MYSQL_USER"]
    mysql['password'] = ENV["MYSQL_PASSWORD"]
configmap.env.yaml defines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config-env
data:
  {{- range $config_key, $config_value := .Values.envVars }}
  {{ $config_key }}: {{ $config_value | quote | nindent 4 }}
  {{- end }}
configmap.files.yaml defines:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config-volume
data:
  {{- range $file_path, $file_content := .Values.files }}
  {{ $file_path }}: |
{{ $file_content | indent 4 -}}
  {{- end }}
Finally, the deployment of the config map (only the config map part is shown, and I'm not using secrets because this question is long enough as it is):
volumes:
  - name: {{ include "mychart.fullname" . }}-config-volume
    configMap:
      name: {{ include "mychart.fullname" . }}-config-volume
containers:
  - name: {{ .Chart.Name }}
    volumeMounts:
      - name: {{ include "mychart.fullname" . }}-config-volume
        mountPath: /etc/my-config-dir
    envFrom:
      - configMapRef:
          name: {{ include "mychart.fullname" . }}-config-env
So, in one sentence, the workflow should be:
MYSQL_USER/PASSWORD_<ENV> saved into MYSQL_USER/PASSWORD, which are then written to /etc/my-config-dir/config.rb
I can't seem to make the environment variables of values.yaml (MYSQL_USER, MYSQL_PASSWORD) get the value of the project variables (MYSQL_USER_<ENV>, MYSQL_PASSWORD_<ENV>).
I use helm 3, but {{ env "MYSQL_USER_<ENV>" }} fails.
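(As far as I can tell, helm renders the values file literally and never expands shell-style $VARS, and Helm 3 also does not ship Sprig's env function, so the ConfigMap ends up holding the raw placeholder. Roughly, for DEV:)
data:
  MYSQL_USER: "$MYSQL_USER_DEV"
  MYSQL_PASSWORD: "$MYSQL_PASSWORD_DEV"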
I could use string interpolation with the environment variable's name in the Ruby file, but then I would have to know which environment variables should be created for every container.
I'm trying to avoid having multiple --set arguments. Also I'm not sure how envsubst can be used here...
Any help will be greatly appreciated, thanks!

So eventually I used envsubst:
script:
  - VALUES_FILE=values.${ENV}.yaml
  - envsubst < ${VALUES_FILE}.env > ${VALUES_FILE}
  - helm upgrade ... -f ${VALUES_FILE}
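For completeness, the committed template that envsubst consumes could look like this (a sketch; the .env suffix and the DEV names just follow the convention above):
# values.DEV.yaml.env -- checked in; envsubst substitutes the ${...} references at deploy time
envVars:
  MYSQL_USER: ${MYSQL_USER_DEV}
  MYSQL_PASSWORD: ${MYSQL_PASSWORD_DEV}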

Related

How to set a Spring Boot array property as a kubernetes secret?

I want to use the direct translation from k8s secret keys to Spring Boot properties.
Therefore I have a helm chart (but it would be similar with plain k8s):
apiVersion: v1
data:
  app.entry[0].name: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
With that, my intention is that this behaves as if I'd set this in the Spring property file:
app.entry[0].name: "someName"
But when I do this I get an error:
Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
So, [0] seems not to be allowed as a key name for the secrets.
Any idea how I can inject an array entry into spring directly from a k8s secret name?
Shooting around wildly, I tried these, which all failed:
app.entry[0].name: ... -- k8s rejects '['
app.entry__0.name: ... -- k8s ok, but Spring does not recognize this as array (I think)
"app.entry[0].name": ... -- k8s rejects '['
'app.entry[0].name': ... -- k8s rejects '['
You should be able to use environment variables as described in spring-boot-env.
The app.entry[0].name property will be set from the APP_ENTRY_0_NAME environment variable. This could be set in your deployment.
Using a secret like:
apiVersion: v1
data:
  value: {{ .Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
  name: my-secret
type: Opaque
and then use it with
env:
  - name: APP_ENTRY_0_NAME
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: value
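Relaxed binding extends to further array elements in the same way, so a second entry is just another variable (a sketch; the second secret key is hypothetical):
env:
  - name: APP_ENTRY_1_NAME        # binds to app.entry[1].name
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: value2               # hypothetical additional key in the secret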
What you can do is pass an application.properties file, defined within a k8s Secret, to your Spring Boot application.
For instance, define your k8s Opaque Secret this way:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-secret
stringData:  # stringData accepts plain text; the data field would require base64-encoded values
  application.properties: "app.entry[0].name={{ .Values.firstEntry.name }}"
Of course you will have more properties to set in your application.properties file, so see this just as an example of the kind of entry you need, as stated in your question. I'm not a Spring Boot specialist, but an idea could be (if possible) to tell the Spring Boot application to look for more than a single application.properties file, so that you would only need to pass some of the configuration parameters from the outside instead of all of them.
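If that route is of interest: Spring Boot 2.x has a spring.config.additional-location property for exactly this. A sketch of wiring it up (the env-var spelling assumes relaxed binding, and the path is just an example):
env:
  - name: SPRING_CONFIG_ADDITIONALLOCATION   # relaxed-binding form of spring.config.additional-location
    value: "file:/config/secret/"            # example directory where the secret is mounted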
When using kubernetes secrets as files in pods, as specified within the official kubernetes documentation, each key in the secret data map becomes a filename under a volume mountpath (See point 4).
Hence, you can just mount the application.properties file defined within your k8s secret into the container in which your Spring Boot application is running. Assuming that you use a deployment template in your helm chart, here is a sample deployment.yaml template that would do the job (please focus on the part where the volumes and volumeMounts are specified):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "sample.fullname" . }}
  labels:
    {{- include "sample.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "sample.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "sample.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "sample.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: my-awesome-volume
              mountPath: /path/where/springboot/app/expects/application.properties
              subPath: application.properties
      volumes:
        - name: my-awesome-volume
          secret:
            secretName: my-secret
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
As desired, this gives you a solution that requires no changes to your application code. I hope this gets you going in the intended way.
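One detail worth noting in the template above: because the volumeMount uses subPath, only the single application.properties file is projected to the mountPath, rather than the secret volume shadowing the entire target directory.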
You can save a JSON file as a secret.
Step 1: Create the JSON file which needs to be stored as a secret, for example secret-data.json:
{
  "entry": [
    {
      "name1": "data1",
      "key1": "dataX"
    },
    {
      "name2": "data2",
      "key2": "dataY"
    }
  ]
}
Step 2: Create a secret from the file:
kubectl create secret generic data-1 --from-file=secret-data.json
Step 3: Attach the secret to the pod:
env:
  - name: APP_DATA
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json
You can verify it by exec'ing into the container and checking the environment.
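One caveat (my addition, not part of the original answer): a variable named APP_DATA is not something Spring Boot reads on its own. Spring Boot does, however, parse inline JSON configuration from the SPRING_APPLICATION_JSON environment variable, so wiring the secret to that name would let the properties land directly:
env:
  - name: SPRING_APPLICATION_JSON   # Spring Boot parses this env var as JSON config
    valueFrom:
      secretKeyRef:
        name: data-1
        key: secret-data.json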

ingress variables syntax from values.yaml

I have a simple api that I've been deploying to K8s as a NodePort service via Helm. I'm working on adding an ingress to the Helm chart, but I'm having some difficulty getting the variables correct.
values.yaml
ingress:
  metadata:
    name: {}
    labels: {}
    annotations:
      kubernetes.io/ingress.class: nginx
  spec:
    rules:
      - host: "testapi.local.dev"
        http:
          paths:
            - pathType: Prefix
              path: "/"
              backend:
                service:
                  name: {}
                  port:
                    number: 80
templates/ingress.yaml, showing only the spec section where I'm having issues:
spec:
  rules:
  {{- range .Values.ingress.spec.rules }}
    - host: {{ .host | quote }}
      http:
        paths:
        {{- range .paths }}
          - path: {{ .path | quote }}
            pathType: {{ .pathType | quote }}
            backend:
              service:
                name: {{ include "testapi.service.name" . }}
                port:
                {{- range $key, $value := (include "testapi.deployment.ports" . | fromYaml) }}
                  number: {{ .port }}
                {{- end }}
        {{- end }}
  {{- end }}
When running helm template, it just leaves these values blank, and I'm not sure where the syntax is wrong. Removing the {{- range .paths }} loop and the .path and .pathType references, and replacing them with literal values, corrects the issue:
spec:
  rules:
    - host: "testapi.local.dev"
      http:
        paths:
Comments revealed I should be using {{- range .http.paths }}.
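For reference, here is the paths loop with that fix applied. Note that inside the range the dot becomes the current list element, so the include calls need the root context $ instead (a sketch, keeping the question's helper names):
paths:
{{- range .http.paths }}
  - path: {{ .path | quote }}
    pathType: {{ .pathType | quote }}
    backend:
      service:
        name: {{ include "testapi.service.name" $ }}
        port:
        {{- range $key, $value := (include "testapi.deployment.ports" $ | fromYaml) }}
          number: {{ $value.port }}
        {{- end }}
{{- end }}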

Call variable in Ansible

Apologies for this simple question, but I tried various approaches without success.
This is my vars file:
---
preprod:
  name: nginx
prod:
  name: apache
I am trying to pass the value of name based on the environment name the user provides (preprod, prod, etc.).
This is my template:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ env.name }}
  name: {{ env.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      app: {{ env.name }}
  template:
    metadata:
      labels:
        app: {{ env.name }}
    spec:
      containers:
        - image: {{ env.name }}
          imagePullPolicy: Always
          name: {{ env.name }}
          resources: {}
However, when I run it with the following command:
ansible-playbook playbook.yaml -e env=preprod
I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'str object' has no attribute 'name'"}
My expectation is that {{ env.name }} would be replaced with the value of preprod.name, i.e. nginx in this case.
I want users to provide the value for env via -e on the command line. If I put preprod.name directly in the template, it works, but I don't want that.
I hope this clarifies what I am trying to do. May I know what I am missing?
This error message indicates that the extra var passed on the command line via -e is a string, not the dict we are loading from the vars file, so it has no name attribute.
I'm making up an example playbook, as you have not shown how you load your vars file. I'm using include_vars so that we can name the variable the dict is loaded into.
# include vars with a name, so that we can access testvars[env]
- include_vars:
    file: testvars.yml
    name: testvars

- template:
    src: test/deployment.yaml.j2
    dest: /tmp/deployment.yaml
  vars:
    _name: "{{ testvars[env]['name'] }}"
With this approach, the prod and preprod keys will be available under testvars, and can be referenced with a variable such as env.
Then the template should use the _name variable, like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ _name }}
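With that in place, the invocation from the question works unchanged and renders nginx into /tmp/deployment.yaml:
ansible-playbook playbook.yaml -e env=preprod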
Put the variables in a place where the play can find them, e.g. group_vars/all. Optionally add a default_env:
shell> cat group_vars/all
preprod:
  name: nginx
prod:
  name: apache
default_env:
  name: lighttpd
Use the vars lookup plugin to "Retrieve the value of an Ansible variable". See:
shell> ansible-doc -t lookup vars
For example, the playbook
shell> cat playbook.yml
- hosts: test_11
  vars:
    env: "{{ lookup('vars', my_env|default('default_env')) }}"
    app: "{{ env.name }}"
  tasks:
    - debug:
        var: app
by default displays
shell> ansible-playbook playbook.yml
...
app: lighttpd
Now you can select the environment by declaring the variable my_env, e.g.
shell> ansible-playbook playbook.yml -e my_env=prod
...
app: apache
and
shell> ansible-playbook playbook.yml -e my_env=preprod
...
app: nginx

How to avoid translating some `{{` of the helm chart?

I want to put the following CRD into a helm chart, but it contains a raw Go template. How can I make helm not render the {{ and }} inside rawTemplate? Thanks for your response.
https://github.com/kubeflow/katib/blob/master/examples/random-example.yaml
apiVersion: "kubeflow.org/v1alpha1"
kind: StudyJob
metadata:
  namespace: katib
  labels:
    controller-tools.k8s.io: "1.0"
  name: random-example
spec:
  studyName: random-example
  owner: crd
  optimizationtype: maximize
  objectivevaluename: Validation-accuracy
  optimizationgoal: 0.99
  requestcount: 4
  metricsnames:
    - accuracy
  workerSpec:
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.WorkerId}}
          namespace: katib
        spec:
          template:
            spec:
              containers:
                - name: {{.WorkerId}}
                  image: katib/mxnet-mnist-example
                  command:
                    - "python"
                    - "/mxnet/example/image-classification/train_mnist.py"
                    - "--batch-size=64"
                    {{- with .HyperParameters}}
                    {{- range .}}
                    - "{{.Name}}={{.Value}}"
                    {{- end}}
                    {{- end}}
              restartPolicy: Never
In the Go template language, the expression
{{ "{{" }}
will expand to two open curly braces, for cases when you need to use Go template syntax to generate documents in Go template syntax; for example
{{ "{{" }}- if .Values.foo }}
- name: FOO
  value: {{ "{{" }} .Values.foo }}
{{ "{{" }}- end }}
(In a Kubernetes Helm context where you're using this syntax to generate YAML, be extra careful with how whitespace is handled; consider using helm template to dump out what gets generated.)
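An alternative that avoids doubling every brace: a Go template action may contain a backquoted raw string literal, which is emitted as-is, so each embedded placeholder can be wrapped individually (a sketch):
name: {{ `{{.WorkerId}}` }}    # renders as: name: {{.WorkerId}}
This keeps the embedded template readable at the cost of wrapping each {{...}} occurrence separately.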

Is it possible to use a template inside a template with go template

Using https://golang.org/pkg/text/template/, I sometimes need to use variables in the accessed path (for kubernetes deployments).
I end up writing something like:
{{ if (eq .Values.cluster "aws") }}{{ .Values.redis.aws.masterHost | quote }}{{else}}{{ .Values.redis.gcp.masterHost | quote }}{{end}}
What I'd really like to write is pretty much {{ .Values.redis.{{.Values.cluster}}.masterHost | quote }}, which doesn't compile.
Is there a way to write something similar, i.e. having a kind of variable in the accessed path?
You can use the _helpers.tpl file to define logic and operate on values.
_helpers.tpl
{{/*
Get redis host based on cluster.
*/}}
{{- define "chart.getRedis" -}}
{{- if eq .Values.cluster "aws" -}}
{{- .Values.redis.aws.masterHost | quote -}}
{{- else -}}
{{- .Values.redis.gcp.masterHost | quote -}}
{{- end -}}
{{- end -}}
values.yaml
cluster: local
redis:
  aws:
    masterHost: "my-aws-host"
  gcp:
    masterHost: "my-gcp-host"
And use it in your Deployment (here's a ConfigMap example to keep it shorter)
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: Configmap
data:
  redis: {{ template "chart.getRedis" . }}
Output:
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '64712'
...
COMPUTED VALUES:
cluster: local
redis:
  aws:
    masterHost: my-aws-host
  gcp:
    masterHost: my-gcp-host
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: Configmap
data:
  redis: "my-gcp-host"
Set the cluster value to aws:
helm install --dry-run --debug mychart --set-string=cluster=aws
[debug] Created tunnel using local port: '64712'
...
COMPUTED VALUES:
cluster: aws
redis:
  aws:
    masterHost: my-aws-host
  gcp:
    masterHost: my-gcp-host
HOOKS:
MANIFEST:
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: Configmap
data:
  redis: "my-aws-host"
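Worth adding (not part of the original answer): Go templates also provide the built-in index function, which accepts dynamic keys, so the helper can be skipped entirely, assuming cluster is set to one of the keys that actually exists under redis (aws or gcp):
data:
  redis: {{ index .Values.redis .Values.cluster "masterHost" | quote }}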
