Generate a file from a YAML template

Is it possible in Helm to generate a file from a YAML template?
I need to create a configuration that is dynamic depending on the setup.
I need to add it as a secret/configuration file to the container when starting it.
Update:
These are the file contents that I would like to parameterize:
version: 1.4.9
port: 7054
debug: {{ $debug }}
...
tls:
  enabled: {{ $tls_enable }}
  certfile: {{ $tls_certfile }}
  keyfile: {{ $tls_keyfile }}
....
ca:
  name: {{ $ca_name }}
  keyfile: {{ $ca_keyfile }}
  certfile: {{ $ca_certfile }}
....
affiliations:
{{- range .Values.Organiza }}: []
All these values are dynamic, depending on the setup.
I don't have a clue how to pass these file contents into a ConfigMap, or into any other k8s object that would generate the final version of the file.
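One common approach (a minimal sketch, not from the original thread; the file name and .Values keys are assumptions) is to put the file under the chart's templates/ directory and wrap it in a ConfigMap, replacing the $-variables with .Values lookups:
# templates/ca-configmap.yaml -- sketch; value names are illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-ca-config
data:
  ca-config.yaml: |-
    version: 1.4.9
    port: 7054
    debug: {{ .Values.debug }}
    tls:
      enabled: {{ .Values.tls.enabled }}
      certfile: {{ .Values.tls.certfile }}
      keyfile: {{ .Values.tls.keyfile }}
Helm renders everything under templates/, so the file arrives in the cluster fully substituted; the ConfigMap can then be mounted into the container as a file, and a Secret with stringData works the same way for the sensitive parts.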

Related

Spring boot admin Kubernetes Ingress

I'm trying to add a Spring Boot Admin interface to some microservices that are deployed in a Kubernetes cluster. The Spring Boot Admin app has the following configuration:
spring:
  application:
    name: administrator-interface
  boot:
    admin:
      context-path: "/ui"
server:
  use-forward-headers: true
The Kubernetes cluster has an ingress that works as an API gateway:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
{{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
{{- end }}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
{{- range $host := .Values.ingress.hosts }}
  - host: {{ $host }}
    http:
      paths:
      - path: /admin/(.+)
        backend:
          serviceName: administrator-interface-back
          servicePort: 8080
{{- end -}}
{{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
When I try to open the Spring Boot Admin UI I get the following error:
URL in browser: https://XXXXXX(thisisgivenBYDNS)/admin/ui
GET https://XXXXXX/ui/assets/css/chunk-common.6aba055e.css net::ERR_ABORTED 404
The URL is wrong: it should be https://XXXXXX/admin/ui/assets/css/chunk-common.6aba055e.css
It is not adding the /admin path that is given by the ingress.
How can I solve this and configure an additional path so the static content is requested from the right URL?
Thanks in advance
The problem is that your Spring Boot Admin interface has no way to know that you're using the "/admin" sub-path.
nginx.ingress.kubernetes.io/rewrite-target: /$1 asks nginx to rewrite the URL to the first capture group of the path regex.
So when you hit https://XXXXX/admin/ui, nginx rewrites the URL to https://XXXXXX/ui and then sends it to Spring Boot.
I don't know Spring Boot well, but you should have a way to give it a sub-path, so that instead of serving at /ui it serves at /$BASE_URL/ui.
Then, depending on how that works, you might need to change how nginx rewrites the URL, with something like:
path: ^(/admin)/(.+)
nginx.ingress.kubernetes.io/rewrite-target: $1/$2
Finally I found the solution.
Spring Boot Admin has a setting for that called "public-url":
spring:
  application:
    name: administrator-interface
  boot:
    admin:
      context-path: "/ui"
      ui:
        public-url: "https://XXX/admin/ui"
server:
  use-forward-headers: true
With this configuration I am telling Spring Boot Admin that I want to connect with the context /ui, but that when loading resources it should make the requests to /admin/ui.
Now I can connect to the interface through https://XXX/ui and it sends requests to load resources from https://XXX/admin/ui, adding the prefix set by the ingress.
Thank you @Noé

How to access vars outside item when using with_items?

I have a role which uses with_items:
- name: Create backup config files
  template:
    src: "config.yml.j2"
    dest: "/tmp/{{ project }}_{{ env }}_{{ item.type }}.yml"
  with_items:
    - "{{ backups }}"
I can access item.type, as usual, but not project or env, which are defined outside the collection:
deploy/main.yml
- hosts: ...
  vars:
    project: ...
    rails_env: qa
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ project }}"
      env: "{{ rails_env }}"
      backups:
        - type: mysql
          username: ...
          password: ...
The error I get is:
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ project }}'
The template, config.yml.j2, is:
type: {{ item.type }}
project: {{ project }}
env: {{ env }}
database:
  username: {{ item.username }}
  password: {{ item.password }}
It turns out you can't define a role parameter with the same name as an existing var: project: "{{ project }}" makes the variable refer to itself, so templating it will always fail with an error.
Instead, project can be omitted, and the existing definition in vars will be used:
- hosts: ...
  vars:
    project: ... # <- already defined here
  roles:
    - role: ../../../roles/deploy/dolly
      backups:
        - type: mysql
          username: ...
          password: ...
If the var is not defined under that name in vars, it can be passed to the role from a differently named var:
- hosts: ...
  vars:
    name: ...
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ name }}" # <- define here
      backups:
        - type: mysql
          username: ...
          password: ...
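With the colliding definition gone, the template renders cleanly. For the mysql item above, the file written to /tmp would look roughly like this (a sketch; the project, username, and password values are illustrative, since they are elided above, and it assumes env is still passed as in the original playbook):
# /tmp/dolly_qa_mysql.yml -- illustrative values
type: mysql
project: dolly
env: qa
database:
  username: backup_user
  password: secret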

Add Slack fields to Prometheus Alertmanager Slack notifications

The new version of Prometheus Alertmanager added support for a fields section in Slack attachments. I'm trying to set up a Go template that loops over the alert labels and generates a field for each one. After testing the config, I got the syntax error "cannot read an implicit mapping pair; a colon is missed". Has anyone tried the same thing and succeeded? Thanks very much. My config is below:
global:
  resolve_timeout: 5m
templates:
  - '/etc/alertmanager/template/*.tmpl'
route:
  # All alerts in a notification have the same value for these labels.
  group_by: ['alertname', 'instance', 'pod']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'slack-test'
  routes:
    # Go spam channel
    - match:
        alertname: DeadMansSwitch
      receiver: 'null'
receivers:
  - name: 'slack-test'
    slack_configs:
      - channel: '#alert'
        api_url: 'https://hooks.slack.com/services/XXXXX/XXXX/XXXX'
        username: 'Prometheus Event Notification'
        color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
        title: '[`{{ .Labels.severity }}`] Server alert'
        text: |-
          {{ range .Alerts }}
          {{ .Annotations.message }}
          {{ end }}
        short_fields: true
        fields:
          {{ range .Labels.SortedPairs }}
          title: {{ .Name }}
          value: `{{ .Value }}`
          {{ end }}
        send_resolved: true
    #email_configs:
    #- to: 'your_alert_email_address'
    #  send_resolved: true
  - name: 'null'
I tried this as well; it doesn't work either:
fields:
  {{ range .Labels.SortedPairs }}
  - title: {{ .Name }}
    value: `{{ .Value }}`
  {{ end }}
The issue is that you are using a Go template inside the config file structure, but Alertmanager only supports Go templating inside the config values. title and value are both of type tmpl_string, meaning each is a string that is itself a Go template: https://prometheus.io/docs/alerting/configuration/#field_config
Correct:
fields:
  - title: '{{ if (true) }}inside the title VALUE{{ end }}'
    value: 'foo'
Incorrect:
fields:
  {{ if (true) }}outside the config values
  - title: 'inside the title VALUE'
    value: 'foo'
  {{ end }}
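Since fields entries themselves cannot be produced by a loop, one workaround (a sketch, not from the original answer) is to keep the config structure static and render the per-label pairs inside a single templated value, such as text:
text: |-
  {{ range .Alerts }}{{ .Annotations.message }}
  {{ range .Labels.SortedPairs }}*{{ .Name }}*: `{{ .Value }}`
  {{ end }}{{ end }}
The loop now lives entirely inside one tmpl_string value, which is the only place Alertmanager evaluates Go templates.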

Loop through an Ansible variable in a .j2 file

I have an Ansible var list defined in group_vars:
member_list:
  - a
  - b
and a proxy.j2 template:
{% for var in members_list %}
server {
    server_name {{ var }}-{{ server_name }};
{% endfor %}
How can I loop through that list to get the values in the .j2 file?
The loop syntax itself is fine; the problem is that the template iterates over members_list while the variable defined in group_vars is member_list. Use the same name in both places, and close the server { block before {% endfor %}.
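A corrected template would look roughly like this (a sketch, assuming server_name is defined elsewhere in your vars):
{% for member in member_list %}
server {
    server_name {{ member }}-{{ server_name }};
}
{% endfor %}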

Is it possible to use an Ansible fact with with_dict?

I'm trying to write a role to configure a keepalived cluster. I was hoping to pass unique info into a template based on the IP of the target box.
In this scenario, Server A is 192.168.1.140, Server B is 192.168.1.141, and the VIP would be 192.168.1.142.
The dictionary would look something like this:
---
192.168.1.140:
  peer: 192.168.1.141
  priority: 110
  vip: 192.168.1.142
192.168.1.141:
  peer: 192.168.1.140
  priority: 100
  vip: 192.168.1.142
I was hoping the task would look like this:
---
- name: keepalived template
  template:
    src: keepalived.j2
    dest: /etc/keepalived/keepalived.conf
    owner: root
    group: root
    mode: 0644
  with_dict: '{{ ansible_default_ipv4.address }}'
and the template would look like this:
vrrp_instance VI_1 {
    interface eth0
    priority {{ item.value.priority }}
    ...
    unicast_src_ip {{ ansible_default_ipv4.address }}
    unicast_peer {
        {{ item.value.peer }}
    }
    virtual_ipaddress {
        {{ item.value.vip }}
    }
}
Any insight would be greatly appreciated
John
Group your peers' details under some common dictionary:
---
peer_configs:
  192.168.1.140:
    peer: 192.168.1.141
    priority: 110
    vip: 192.168.1.142
  192.168.1.141:
    peer: 192.168.1.140
    priority: 100
    vip: 192.168.1.142
with_... is generally for looping; as far as I can see you don't need any loop here, so use:
- name: keepalived template
  template:
    src: keepalived.j2
    dest: /etc/keepalived/keepalived.conf
    owner: root
    group: root
    mode: 0644
  vars:
    peer_config: '{{ peer_configs[ansible_default_ipv4.address] }}'
and the template:
vrrp_instance VI_1 {
    interface eth0
    priority {{ peer_config.priority }}
    ...
    unicast_src_ip {{ ansible_default_ipv4.address }}
    unicast_peer {
        {{ peer_config.peer }}
    }
    virtual_ipaddress {
        {{ peer_config.vip }}
    }
}
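For example, on the 192.168.1.140 host the lookup resolves peer_config to that host's entry from peer_configs above, so keepalived.conf renders roughly as:
vrrp_instance VI_1 {
    interface eth0
    priority 110
    ...
    unicast_src_ip 192.168.1.140
    unicast_peer {
        192.168.1.141
    }
    virtual_ipaddress {
        192.168.1.142
    }
}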
