error converting YAML to JSON: yaml: found character that cannot start any token

I get the error from the title when I try to run a Helm chart that contains the following code:
{{ if eq .Values.environment "generic" }}
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: core-ingress
  annotations:
    kubernetes.io/ingress.class: {{ .Values.core_ingress.class }}
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    host: {{ .Values.core_ingress.host }}
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: {{ .Values.core_ingress.service }}
              port:
                number: {{ .Values.core_ingress.port }}
{{ end }}
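As an aside (no answer was included with this question): this parser error usually means the rendered manifest contains a character YAML cannot start a token with, most often a literal tab used for indentation, or a reserved character such as @ or a backtick at the start of a value. Inspecting exactly what Helm renders for this one template usually narrows it down; a sketch, where the release name, chart path, and template file name are assumptions:

helm template my-release . --show-only templates/ingress.yaml --debug
# look for literal tab characters in the source template (GNU grep)
grep -nP '\t' templates/ingress.yaml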


Unable to load configtree if we use spring-cloud-config-server

NOT WORKING SCENARIO:
I am trying to deploy a Spring Boot application in an EKS cluster.
spring-config-server helps to load the myapplication (project) properties from the repository.
myapplication-dev.yml (From repository)
spring:
  config:
    import: optional:configtree:/usr/src/apps/secrets/database/postgresql/ # not effective while loading into application by configserver
  datasource:
    url: jdbc:postgresql://{xxxxxxxxxxxxxx}:{5444}/postgres
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    hikari:
      schema: mydbschema
spring-config-server is up and running!
When I deploy myapplication with the deployment.yaml file in EKS, the configtree:/usr/src/apps/secrets/database/postgresql/ property has no effect, so the environment properties are not set.
deployment.yaml
volumes:
  - name: tmpfs-1
    emptyDir: {}
  - name: secret-volume-postgresql
    secret:
      secretName: db_secrets
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    args:
      - "--server.port=8080"
      - {{ default "dev" .Values.applicationActiveProfiles }}
    env:
      - name: ACTIVE_PROFILES
        value: {{ .Values.applicationActiveProfiles }}
      - name: DB_USERNAME
        value: {{ .Values.db.userName }}
      - name: NAME_SPACE
        value: {{ default "dev" .Values.selectedNamespace }}
    ports:
      - name: http
        containerPort: {{ .Values.containerPort }}
        protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /actuator/health/readiness
        port: http
        scheme: HTTP
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /actuator/health/liveness
        port: http
        scheme: HTTP
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    volumeMounts:
      - name: secret-volume-postgresql
        mountPath: /usr/src/apps/secrets/database/postgresql
      - name: tmpfs-1
        mountPath: /tmp
myapplication (project)
application.yml
---
spring:
  application:
    name: myapplication
  config:
    import: optional:configserver:http://config-server.dev.svc:8080/
  cloud:
    config:
      label: develop
  profiles:
    active:
      - dev
Working scenario:
With one additional change in the deployment.yaml file: setting spring.config.import as an arg.
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    args:
      - "--server.port=8080"
      - {{ default "dev" .Values.applicationActiveProfiles }}
      # ******It's working after adding the below line******
      - "--spring.config.import=optional:configtree:/usr/src/apps/secrets/database/postgresql/"
Question: Why is the property I set in the myapplication-dev.yml file not being considered? What is missing here?
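A hedged side note (not part of the original post): judging by the behaviour described above, a spring.config.import declared inside a file served by the config server (myapplication-dev.yml) is not re-processed as an import on the client, while an import supplied locally, either as a command-line argument or in the application's own application.yml, is. An alternative to the extra container arg might therefore be to declare both imports in the local application.yml; a minimal sketch, assuming the same mount path:

spring:
  application:
    name: myapplication
  config:
    import:
      - optional:configserver:http://config-server.dev.svc:8080/
      - optional:configtree:/usr/src/apps/secrets/database/postgresql/
  cloud:
    config:
      label: develop
  profiles:
    active:
      - dev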

How would I configure a dictionary which requires variables in a loop?

I have a docker_container which I want to deploy for multiple users, and I want to name the traefik routes after the users. But I'm confused about how I can achieve this.
Here is what I have:
- name: Run syncthing
  docker_container:
    name: "{{ item.name }}-syncthing"
    image: "lscr.io/linuxserver/syncthing"
    state: started
    restart_policy: "always"
    env:
      PUID: "1000"
      PGID: "1000"
    volumes:
      - "{{ item.data_dir }}:/data"
      # ... other volumes
    labels:
      traefik.enable: true
      "traefik.http.routers.{{ item.name }}-syncthing.entrypoints": websecure
      "traefik.http.routers.{{ item.name }}-syncthing.rule": Host(`{{ item.name }}.{{ fqdn_real }}`)
      "traefik.http.routers.{{ item.name }}-syncthing.tls": true
      "traefik.http.routers.{{ item.name }}-syncthing.tls.certresolver": le
      "traefik.http.routers.{{ item.name }}-syncthing.service": "{{ item.name }}-syncthing"
      "traefik.http.routers.{{ item.name }}-syncthing.middlewares": "{{ item.name }}-basicauth"
      "traefik.http.services.{{ item.name }}-syncthing.loadbalancer.server.port": 8080
      "traefik.http.middlewares.{{ item.name }}-syncthing-basicauth.basicauth.users": "{{ item.auth }}"
  with_items: "{{ syncthing_containers_info }}"
And a syncthing_containers_info like this:
syncthing_containers_info:
  - { name: "c1", data_dir: "/mnt/data/c1/data", auth: "..." }
  - { name: "c2", data_dir: "/mnt/data/c2/data", auth: "..." }
  - { name: "c3", data_dir: "/mnt/data/c3/data", auth: "..." }
That snippet doesn't work because Ansible doesn't like the syntax. I also tried it with with_nested, but I ran into a similar problem with the nested loop while trying to set_fact as in the example, since the set of labels depends on syncthing_containers_info. Is there a better way for me to do this?
It sounds like you need the labels: to be an actual dict, since YAML keys are not subject to Jinja2 interpolation:
labels: >-
  {%- set key_prefix = "traefik.http.routers." ~ item.name ~ "-syncthing" -%}
  {{ {
    "traefik.enable": True,
    key_prefix ~ ".entrypoints": "websecure",
    key_prefix ~ ".rule": "Host(`" ~ item.name ~ "." ~ fqdn_real ~ "`)",
    key_prefix ~ ".tls": True,
    key_prefix ~ ".tls.certresolver": "le",
    key_prefix ~ ".service": item.name ~ "-syncthing",
    key_prefix ~ ".middlewares": item.name ~ "-basicauth",
    "traefik.http.services." ~ item.name ~ "-syncthing.loadbalancer.server.port": 8080,
    "traefik.http.middlewares." ~ item.name ~ "-syncthing-basicauth.basicauth.users": item.auth,
  } }}
(be aware I didn't test that, just eyeballed it from your question, but that's the general idea)
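For illustration only (not part of the original answer): with item.name set to c1 and fqdn_real assumed to be example.com, that expression renders to roughly this labels mapping:

labels:
  traefik.enable: true
  traefik.http.routers.c1-syncthing.entrypoints: websecure
  traefik.http.routers.c1-syncthing.rule: Host(`c1.example.com`)
  traefik.http.routers.c1-syncthing.tls: true
  traefik.http.routers.c1-syncthing.tls.certresolver: le
  traefik.http.routers.c1-syncthing.service: c1-syncthing
  traefik.http.routers.c1-syncthing.middlewares: c1-basicauth
  traefik.http.services.c1-syncthing.loadbalancer.server.port: 8080
  traefik.http.middlewares.c1-syncthing-basicauth.basicauth.users: "..."

Note that docker_container may insist on string values for labels, in which case the booleans and the port number in the dict would need to be quoted.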

Append new webhook to alertmanager config file using shell

I am using Alertmanager for notifications, and by default Alertmanager has only one webhook. As per my requirement, one more webhook is required.
Before
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: {{ getsdcv "system" "group_wait" "10s"}}
  group_interval: {{ getsdcv "system" "group_interval" "10s"}}
  {{- $repeat_interval := getsdcv "system" "repeat_interval" "" }}
  {{- if not (eq $repeat_interval "") }}
  repeat_interval: {{ $repeat_interval }}
  {{- end }}
  receiver: 'web.hook'
receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: 'http://{{ getsdcv "system" "ip" "localhost"}}:{{ getsdcv "system" "port" "8005"}}/'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
Expected output after adding the new webhook
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: {{ getsdcv "system" "group_wait" "10s"}}
  group_interval: {{ getsdcv "system" "group_interval" "10s"}}
  {{- $repeat_interval := getsdcv "system" "repeat_interval" "" }}
  {{- if not (eq $repeat_interval "") }}
  repeat_interval: {{ $repeat_interval }}
  {{- end }}
  receiver: 'web.hook'
  routes:
    - receiver: "web.hook"
      continue: true
    - receiver: "web.hook1"
      match:
        alertname: BusinessKpiDown
      continue: true
receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: 'http://{{ getsdcv "system" "ip" "localhost"}}:{{ getsdcv "system" "port" "8005"}}/'
  - name: 'web.hook1'
    webhook_configs:
      - url: 'http://0.0.0.0:8010/hooks/my-webhook'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
I tried doing it through a shell script with awk but failed:
awk -v RS='^$' '{$0=gensub(/(receivers:\s+users:)(\s+)/,"\\1\\2- name: 'web.hook2'\\2",1)}1'
I also tried with sed:
sed -i "/receivers:/a- name: 'webhook1'\n webhook_configs:\n - url: ''" config.yaml
I'm not allowed to use the yq library because of the restricted environment.
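Without yq, one approach that is usually less fragile than in-place sed is to let awk print the new lines immediately before the inhibit_rules: block, so the indentation stays fully under your control. A hedged sketch (assumes the layout shown above, with inhibit_rules: starting at column zero, and that writing a temporary file is acceptable):

awk '
  /^inhibit_rules:/ && !done {
    # hypothetical extra receiver; quoting the scalars is optional in YAML
    print "  - name: web.hook1"
    print "    webhook_configs:"
    print "      - url: http://0.0.0.0:8010/hooks/my-webhook"
    done = 1
  }
  { print }
' config.yaml > config.yaml.tmp && mv config.yaml.tmp config.yaml

The routes: block under route: could be appended the same way, keying on the receiver: 'web.hook' line instead of inhibit_rules:.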

helm yaml iterate causes nil pointer

I am trying to iterate over the jobContainers array to generate multiple container instances in the CronJob I am creating.
My values.yaml looks like the below:
jobContainers:
  - cleaner1:
      env:
        keepRunning: false
      logsPath: /nfs/data_etl/logs
  - cleaner2:
      env:
        keepRunning: false
      logsPath: /nfs/data_etl/logs
and my template cronJob.yaml looks like:
{{- range $job, $val := .Values.jobContainers }}
- image: "{{ $image.repository }}:{{ $image.tag }}"
  imagePullPolicy: {{ $image.pullPolicy }}
  name: {{ $job }}
  env:
    - name: KEEP_RUNNING
      value: "{{ .env.keepRunning }}"
  volumeMounts:
    - name: {{ .logsPathName }}
      mountPath: /log
restartPolicy: Never
{{- end }}
helm install returns the following error:
executing "/templates/cronjob.yaml" at <.env.keepRunning>: nil pointer evaluating interface {}.keepRunning
My cronjob.yaml is below:
{{- $image := .Values.image }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: ni-filecleaner
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
            cron: {{ .Values.filesjob.jobName }}
        spec:
          containers:
          {{- range $job, $val := .Values.jobContainers }}
          - image: "{{ $image.repository }}:{{ $image.tag }}"
            imagePullPolicy: {{ $image.pullPolicy }}
            name: {{ $job }}
            env:
              - name: KEEP_RUNNING
                value: "{{ $val.env.keepRunning }}"
              - name: FILE_RETENTION_DAYS
                value: "{{ .env.retentionPeriod }}"
              - name: FILE_MASK
                value: "{{ .env.fileMask }}"
              - name: ID
                value: "{{ .env.id }}"
            volumeMounts:
              - mountPath: /data
                name: {{ .dataPathName }}
              - name: {{ .logsPathName }}
                mountPath: /log
          restartPolicy: Never
          volumes:
            - name: {{ .dataPathName }}
              nfs:
                server: {{ .nfsIp }}
                path: {{ .dataPath }}
            - name: {{ .logsPathName }}
              nfs:
                server: {{ .nfsIp }}
                path: {{ .logsPath }}
          {{- end }}
  schedule: "{{ .Values.filesjob.schedule }}"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: {{ .Values.filesjob.successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.filesjob.failedJobsHistoryLimit }}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 12 }}
  {{- end }}
The full values.yaml is below:
replicaCount: 1
image:
  repository: app.corp/ni-etl-filecleaner
  tag: "3.0.3.1"
  pullPolicy: IfNotPresent
jobContainers:
  - processed:
      env:
        keepRunning: false
        fileMask: ne*.*
        retentionPeriod: 3
        id: processed
      jobName: processed
      dataPathName: path-to-clean-processed
      logsPathName: path-logfiles-processed
      dataPath: /nfs/data_etl/loader/processed
      logsPath: /nfs/data_etl/logs
      nfsIp: ngs.corp
  - incoming:
      env:
        keepRunning: false
        fileMask: ne*.*
        retentionPeriod: 3
        id: incoming
      jobName: incoming
      dataPathName: path-to-clean-incoming
      logsPathName: path-logfiles-incoming
      dataPath: /nfs/data_etl/loader/incoming
      logsPath: /nfs/data_etl/logs
      nfsIp: ngs.corp
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
It seems there are some indentation issues with both your values file and your template file. Here are the corrected template and values files.
{{- $image := .Values.image }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
    chart: ni-filecleaner
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: {{ .Release.Name }}
            cron: {{ .Values.filesjob.jobName }}
        spec:
          containers:
          {{- range $job, $val := .Values.jobContainers }}
          - image: "{{ $image.repository }}:{{ $image.tag }}"
            imagePullPolicy: {{ $image.pullPolicy }}
            name: "{{ $job }}"
            env:
              - name: KEEP_RUNNING
                value: "{{ $val.env.keepRunning }}"
              - name: FILE_RETENTION_DAYS
                value: "{{ $val.env.retentionPeriod }}"
              - name: FILE_MASK
                value: "{{ $val.env.fileMask }}"
              - name: ID
                value: "{{ $val.env.id }}"
            volumeMounts:
              - mountPath: /data
                name: {{ $val.dataPathName }}
              - name: {{ $val.logsPathName }}
                mountPath: /log
          restartPolicy: Never
          volumes:
            - name: {{ $val.dataPathName }}
              nfs:
                server: {{ $val.nfsIp }}
                path: {{ $val.dataPath }}
            - name: {{ $val.logsPathName }}
              nfs:
                server: {{ $val.nfsIp }}
                path: {{ $val.logsPath }}
          {{- end }}
  schedule: "{{ .Values.filesjob.schedule }}"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: {{ .Values.filesjob.successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.filesjob.failedJobsHistoryLimit }}
  {{- with .Values.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 12 }}
  {{- end }}
replicaCount: 1
image:
  repository: app.corp/ni-etl-filecleaner
  tag: "3.0.3.1"
  pullPolicy: IfNotPresent
filesjob:
  name: cleaner
jobContainers:
  - processed:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: processed
    jobName: processed
    dataPathName: path-to-clean-processed
    logsPathName: path-logfiles-processed
    dataPath: /nfs/data_etl/loader/processed
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp
  - incoming:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: incoming
    jobName: incoming
    dataPathName: path-to-clean-incoming
    logsPathName: path-logfiles-incoming
    dataPath: /nfs/data_etl/loader/incoming
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
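A side note on top of the answer above (an observation, not from the original post): because jobContainers is a YAML list, range $job, $val := .Values.jobContainers binds $job to the numeric index, so name: "{{ $job }}" renders 0 and 1 rather than processed and incoming. If named containers are wanted, one hedged alternative is to key jobContainers by job name instead of using a list, for example:

jobContainers:
  processed:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: processed
    dataPathName: path-to-clean-processed
    logsPathName: path-logfiles-processed
    dataPath: /nfs/data_etl/loader/processed
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp
  incoming:
    env:
      keepRunning: false
      fileMask: ne*.*
      retentionPeriod: 3
      id: incoming
    dataPathName: path-to-clean-incoming
    logsPathName: path-logfiles-incoming
    dataPath: /nfs/data_etl/loader/incoming
    logsPath: /nfs/data_etl/logs
    nfsIp: ngs.corp

Ranging over a map in Go templates yields the key as $job (in sorted key order), and the $val.env.* references in the template keep working unchanged.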

Jinja2 nested variables

Below are my Jinja2 template file and the variables used to populate it. However, I want to include a new section only if aditional_keys = true. Is this possible?
My variables
- { name: 'container1', version: '1.0.0.0', port: '', registry_path: 'container1', replicas: '1', namespace: 'general', aditional_keys: 'false'}
- { name: 'container2', version: '3.6.14.1', port: '8080', registry_path: 'container2', replicas: '1', namespace: 'general', aditional_keys: 'true'}
My template
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ item.name }}
    environment: {{ location }}_{{ env }}
  name: {{ item.name }}
  namespace: "{{ item.namespace }}"
spec:
  replicas: {{ item.replicas }}
  selector:
    matchLabels:
      app: {{ item.name }}
      environment: {{ location }}_{{ env }}
  template:
    metadata:
      labels:
        app: {{ item.name }}
        environment: {{ location }}_{{ env }}
    spec:
      containers:
        - envFrom:
            - configMapRef:
                name: {{ item.name }}
          image: registry.com/{{ item.registry_path }}:{{ item.version }}
          imagePullPolicy: Always
          name: {{ item.name }}
          ports:
            - containerPort: {{ item.port }}
              protocol: TCP
I tried adding this, but I am obviously not calling the variable correctly:
{% if item.additional_keys == true %}
          env:
            - name: PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  key: id_{{ item.name }}_rsa
                  name: id-{{ item.name }}-rsa-priv
                  optional: false
            - name: PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  key: id_{{ item.name }}.pub
                  name: id-{{ item.name }}-rsa-pub
                  optional: false
{% else %}
{% endif %}
To start with, avoiding literal comparison to boolean values is one of the ansible-lint rules you might want to follow.
Now there are two real issues in your example:
1. You have a typo in your variable definition (aditional_keys), while you spelled it correctly in your template (additional_keys).
2. Your variable is specified as a string ('false') while you expect a boolean (false). Meanwhile, it often happens in Ansible that correct boolean values get turned into strings upon parsing (e.g. extra_vars on the command line). To overcome this, the good practice is to systematically transform the value to a boolean with the bool filter when you don't totally trust the source.
Once you fix the variable name and the boolean definition in your var file as additional_keys: false, the following conditional in your template will make sure you don't get into that trouble again:
{% if item.additional_keys | bool %}
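Putting the two fixes together, the variable entries would look something like this (illustrative, based on the answer above; only the renamed key and the unquoted boolean change):

- { name: 'container1', version: '1.0.0.0', port: '', registry_path: 'container1', replicas: '1', namespace: 'general', additional_keys: false }
- { name: 'container2', version: '3.6.14.1', port: '8080', registry_path: 'container2', replicas: '1', namespace: 'general', additional_keys: true }

With those entries, {% if item.additional_keys | bool %} evaluates to false for container1 and true for container2, so the env: section is rendered only for container2.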
