Set multiple environment variables in a Helm chart from a file - bash

I am writing a Helm chart that should deploy a simple NGINX resource; this resource needs to have all the environment variables located in an .env file.
I was wondering whether it is possible to loop over some data structure inside the template and pick up the needed data from the command line.
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}-{{ .Values.country }}-deployment
  labels:
    name: {{ .Values.applicationName }}-{{ .Values.country }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}-{{ .Values.country }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}-{{ .Values.country }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}-{{ .Values.country }}
          image: localhost:5000/{{ .Values.applicationName }}
          env:
            - name: APP_NAME
              value: "{{ .Values.applicationName }}"
            - name: COUNTRY
              value: "{{ .Values.country }}"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
Is it possible to use something dynamic in the env section? The environment variables are located in a file like the one below and they can change.
staging.env
APP_NAME=app2-staging
APP_VERSION=1.0.2
...
...

I am not aware of a way to do it in Helm directly, but you could create a Secret or a ConfigMap based on your staging.env file and then use envFrom in your pod.
First create the Secret based on the content of your file (or a ConfigMap if the content is not sensitive):
kubectl create secret generic prod-secret-envs --from-env-file=staging.env
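If the content is not sensitive, the ConfigMap equivalent would be the following (a hedged alternative; prod-env-config is just a placeholder name, and you would then use configMapRef instead of secretRef below):
kubectl create configmap prod-env-config --from-env-file=staging.env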
Then reference it in your pod via envFrom:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}-{{ .Values.country }}-deployment
  labels:
    name: {{ .Values.applicationName }}-{{ .Values.country }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}-{{ .Values.country }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}-{{ .Values.country }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}-{{ .Values.country }}
          image: localhost:5000/{{ .Values.applicationName }}
          envFrom:
            - secretRef:
                name: prod-secret-envs
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
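If you want the chart itself to pick the right secret per environment, a minimal sketch would be to expose the name in values.yaml (envSecretName is a hypothetical key, not part of the original chart):
# values.yaml (hypothetical key)
envSecretName: prod-secret-envs

# templates/deployment.yaml (excerpt)
envFrom:
  - secretRef:
      name: {{ .Values.envSecretName }}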

Related

Getting ERR_TOO_MANY_REDIRECTS on Ingress

I've built a simple Spring Boot app that contains a REST API "http://localhost:9000/check" which returns "the server is alive", and it works fine on my local machine. I built a Helm chart for my application and deployed it in a cluster. I also implemented an ingress controller with NGINX to redirect my requests to the ClusterIP service.
This is my service:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "oauth-idp.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "oauth-idp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.Version }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  selector:
    app.kubernetes.io/name: {{ include "oauth-idp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
  ports:
    - name: http
      port: 9000
      targetPort: 9000
This is my ingress resource config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth-idp
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: oauth-idp.test.com
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: provider-oauth-idp # this is the name of my service
                port:
                  number: 9000
When I try to call "http://oauth-idp.test.com/check" I get an "ERR_TOO_MANY_REDIRECTS" error.

Error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"if .Values.configmap.enabled":interface {}(nil)} [kubeval] linter

{{- if .Values.configmap.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  namespace: {{ .Values.namespace }}
data:
{{ toYaml $.Values.configmap.data | indent 2 }}
{{- end }}
Make sure that there is a space after/before the curly brackets:
namespace: {{ .Values.namespace }}
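One way to check that the chart now renders and passes the linter (a suggestion not taken from the original answer, and assuming your kubeval version reads manifests from stdin) is to render the chart locally and pipe the output into kubeval:
helm template . --set configmap.enabled=true | kubeval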

Ansible Nested variables and Jinja2 templates

I'm trying to figure out why my jinja2 template (and ansible for that matter) cannot find my variables in my inventory file.
Here is my inventory file:
all:
  hosts:
    test05:
      ansible_host: 192.168.x.x
      filebeat:
        version: 7.15.2
        applog:
          - title: Separate Application Log Path with Tags
            type: log
            paths:
              - /var/log/something/moresomething/current
            tags: '["something", "application"]'
          - title: Separate Application Log Path, with Tags, and "decode_json_fields" processor.
            type: log
            paths:
              - /var/log/something/moresomething/blah-shell.log
            tags: ["application", "something"]
            fields: ["message"]
            depth: 2
          - title: Separate Application Log Path, with Tags, and Multiline fields
            type: log
            paths:
              - /var/log/something/moresomething/production.log
            tags: ["application", "something"]
            multiline_type: pattern
            multiline_patern: 'Started'
            multiline_negate: true
            multiline_match: after
Then, attempting to get the first title, I'm doing the following:
- name: debugging
  debug:
    var: filebeat.applog.title
When I run this I end up getting filebeat.applog.title: VARIABLE IS NOT DEFINED!, which I think makes sense since it doesn't know which title I want. So I change this to:
- name: debugging
  debug:
    var: filebeat.applog.0.title
and I end up getting what I want: filebeat.applog.0.title: Separate Application Log Path with Tags. However, how do I use this in a Jinja2 template?
I have the following template. I know I need to update it to loop through the different items in my inventory; that is a separate problem of how to loop through this.
title: {{ filebeat.applog.title }}
- type: {{ filebeat.applog.type }}
  enabled: true
  paths:
    - {{ filebeat.applog.path }}
  tags: {{ filebeat.applog.tags }}
{% if filebeat.applog.fields is defined %}
  processors:
    - decode_json_fields:
        fields: {{ filebeat.applog.fields }}
        max_depth: {{ filebeat.applog.depth }}
        target: {{ filebeat.applog.target | default "" }}
{% endif %}
{% if filebeat.applog.multiline_pattern is defined %}
  multiline.type: {{ filebeat.applog.multiline_type }}
  multiline.pattern: {{ filebeat.applog.multiline_pattern }}
  multiline.negate: {{ filebeat.applog.multiline_negate }}
  multiline.match: {{ filebeat.applog.multiline_match }}
{% endif %}
Each time I get the following, even when I do use {{ filebeat.applog.0.logtitle }} in the template:
fatal: [test05]: FAILED! => changed=false
  msg: |-
    AnsibleError: template error while templating string: expected token 'end of print statement', got 'string'. String: title: {{ filebeat.applog.title }}
    - type: {{ filebeat.applog.type }}
      enabled: true
      paths:
        - {{ filebeat.applog.path }}
      tags: {{ filebeat.applog.tags }}
    {% if filebeat.applog.fields is defined %}
      processors:
        - decode_json_fields:
            fields: {{ filebeat.applog.fields }}
            max_depth: {{ filebeat.applog.depth }}
            target: {{ filebeat.applog.target | default "" }}
    {% endif %}
    {% if filebeat.applog.multiline_pattern is defined %}
      multiline.type: {{ filebeat.applog.multiline_type }}
      multiline.pattern: {{ filebeat.applog.multiline_pattern }}
      multiline.negate: {{ filebeat.applog.multiline_negate }}
      multiline.match: {{ filebeat.applog.multiline_match }}
    {% endif %}
I'm not sure what I'm missing or doing wrong. I suspect I'm doing something wrong since this is the first time I'm doing something like this.
So the template you have should either:
have a for loop to iterate over filebeat.applog
OR
reference the n'th element of filebeat.applog
Aside from that, there are a few other errors, listed below:
1.
target: {{ filebeat.applog.target | default "" }}
This is the main one, and it is what the error message is complaining about, i.e. got 'string'. The correct usage of the default filter is {{ some_variable | default("") }}.
2.
{% if filebeat.applog.multiline_pattern is defined %}
In the inventory this variable is mis-spelled, i.e. multiline_patern (missing one "t"). Fix this in your inventory.
3.
when I do use {{ filebeat.applog.0.logtitle }} in the template
This should be {{ filebeat.applog.0.title }} to work.
Considering the above fixes, a template that loops over filebeat.applog such as below should work:
{% for applog in filebeat.applog %}
title: {{ applog.title }}
- type: {{ applog.type }}
  enabled: true
  paths: {{ applog.paths }}
  tags: {{ applog.tags }}
{% if applog.fields is defined %}
  processors:
    - decode_json_fields:
        fields: {{ applog.fields }}
        max_depth: {{ applog.depth }}
        target: {{ applog.target | default("") }}
{% endif %}
{% if applog.multiline_pattern is defined %}
  multiline.type: {{ applog.multiline_type }}
  multiline.pattern: {{ applog.multiline_pattern }}
  multiline.negate: {{ applog.multiline_negate }}
  multiline.match: {{ applog.multiline_match }}
{% endif %}
{% endfor %}
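For completeness, a minimal sketch of a play that renders such a template with the template module (the file name applog.yml.j2 and the destination path are hypothetical, not taken from the question):
- name: Render the filebeat applog configuration
  hosts: test05
  tasks:
    - name: Template the applog inputs
      ansible.builtin.template:
        src: applog.yml.j2
        dest: /etc/filebeat/inputs.d/applog.yml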

My Helm chart does not work but the original YAMLs do

This is my problem: I have a set of Kubernetes YAMLs that works fine. So I am putting these YAMLs into a Helm chart, but when I deploy the Helm chart I receive a 5032 error from NGINX:
[error] 41#41: *1176907 connect() failed (111: Connection refused) while connecting to upstream, client: 79.144.175.25, server: envtest.westeurope.cloudapp.azure.com, request: "POST /es/api/api/v1/terminals/login/123456789/monitor HTTP/1.1", upstream: "http://10.0.63.136:80/v1/terminals/login/123456789/monitor", host: "envtest.westeurope.cloudapp.azure.com"
I am at a beginner level, so I am quite confused about the problem. I have compared my original files with the generated Helm files and I don't see the error.
Since my original YAMLs work, I am just going to copy my Helm files here:
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "example.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "example.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "example.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "example.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.env }}
          env:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "example.fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "example.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    microservice: {{ .Values.microservice }}
    {{- include "example.selectorLabels" . | nindent 4 }}
I have a secret.yaml too:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.secretname }}
  namespace: {{ .Values.namespace }}
type: Opaque
data:
  redis-connection-string: {{ .Values.redisconnectionstring | b64enc }}
  event-hub-connection-string: {{ .Values.eventhubconnectionstring | b64enc }}
  blob-storage-connection-string: {{ .Values.blobstorageconnectionstring | b64enc }}
  #(ingesters) sql
  sql-user-for-admin-user: {{ .Values.sqluserforadminuser | b64enc }}
  sql-user-for-admin-password: {{ .Values.sqluserforadminpassword | b64enc }}
On the other hand, I still have the ingress and the external service outside of the Helm chart, in conventional YAMLs (as part of the set of YAMLs that works fine).
External Service
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example-ingress
spec:
  type: ExternalName
  externalName: example.example-tpvs.svc.cluster.local
  ports:
    - port: 80
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-logic
  namespace: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  labels:
    version: "0.1"
spec:
  rules:
    - host: envtest.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: /es/example/api/(.*)
            backend:
              serviceName: example
              servicePort: 80
So, when I install the Helm chart, I know the pod is up, and I can see in the pod logs that some background jobs are running, so I'm almost sure that the problem is in the service of the Helm chart... because the external service and the ingress work when I deploy the original YAMLs.
I hope someone can help me. Thanks!
OK, the problem is that I forgot the selectors and matchLabels in the Service and the Deployment.
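In the manifests as posted, the Service selects on microservice: {{ .Values.microservice }} plus the example.selectorLabels, while the Deployment's pod template only carries the selectorLabels, so the Service matches no pods. A minimal sketch of one way to line them up (assuming you keep the extra microservice label rather than dropping it from the Service):
# templates/deployment.yaml (excerpt)
  selector:
    matchLabels:
      microservice: {{ .Values.microservice }}
      {{- include "example.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        microservice: {{ .Values.microservice }}
        {{- include "example.selectorLabels" . | nindent 8 }}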

helm chart for Istio Gateway parsing issue: can't evaluate field ... in type interface {}

I got a parsing issue when I tried to install a Helm chart that has an Istio Gateway and a VirtualService. Can anyone please help find out what's wrong with this? Thanks a lot!
Error:
$ helm install my-release ./pipeline-nodejs-app/ --dry-run
Error: template: pipeline-nodejs-app/templates/gateway.yaml:16:17: executing "pipeline-nodejs-app/templates/gateway.yaml" at <.number>: can't evaluate field number in type interface {}
templates/gateway.yaml
{{- if .Values.gateway.enabled -}}
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: {{ .Values.gateway.name }}
  {{- with .Values.gateway.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  selector:
    istio: ingressgateway
  servers:
  {{- range .Values.gateway.port }}
  - port:
      number: {{ .number }}
      name: {{ .portname }}
      protocol: {{ .protocol }}
  {{- end }}
    hosts:
    {{- range .Values.gateway.hosts }}
    - {{ .host | quote }}
    {{- end }}
{{- end }}
values.yaml
...
gateway:
  enabled: true
  annotations: {}
  name: pipeline-nodejs-app-gateway
  hosts:
    host: '*'
  port:
    number: 80
    name: http
    protocol: HTTP
...
My expected output of the gateway.yaml is:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: pipeline-javascript-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
I think your issue is here:
{{- range .Values.gateway.port }}
- port:
    number: {{ .number }}
    name: {{ .portname }}
    protocol: {{ .protocol }}
{{- end }}
To use range you have to have a list under port, i.e. entries starting with -:
port:
  - number: 80
    name: http
    protocol: HTTP
Just change the values.yaml like this:
...
gateway:
  enabled: true
  annotations: {}
  name: pipeline-nodejs-app-gateway
  hosts:
    host: '*'
  port:
    - number: 80
      name: http
      protocol: HTTP
...
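The same reasoning would apply to hosts, since the template also does range .Values.gateway.hosts and then reads .host from each item. A hedged sketch of a matching shape (not spelled out in the original answer):
hosts:
  - host: '*'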
