I need to update key values (all the user-input fields, username, and password) using regex and replace, in a separate new task.
inventory/main.yml
service1: abc.com/s1
service2: def.com/s2
username: user
password: pass
values.yml
---
service1:
  image:
    name: <user input> # service1 image
service2:
  config:
    range: 127.0.0.0
  image:
    name: <user input> # service2 image
id:
  username: # DB username
  password: # DB Password
Expectation:
values.yml
---
service1:
  image:
    name: abc.com/s1 # service1 image
service2:
  config:
    range: 127.0.0.0
  image:
    name: def.com/s2 # service2 image
id:
  username: user # DB username
  password: pass # DB Password
I should create an update_values.yml task which will find the values.yml path and replace the required values.
My attempt at update_values.yml:
---
- name: update value
  replace:
    path: <path>/values.yml
    regexp: "service1.image.name: <user input>"
    replace: "abc.com/s1"
In Ansible, this is a typical use case for a template. Create the template
shell> cat values.yml.j2
---
service1:
  image:
    name: {{ service1 }} # service1 image
service2:
  config:
    range: 127.0.0.0
  image:
    name: {{ service2 }} # service2 image
id:
  username: {{ username }} # DB username
  password: {{ password }} # DB Password
Then, the tasks below
- include_vars: inventory/main.yml
- template:
    src: values.yml.j2
    dest: values.yml
create the file
shell> cat values.yml
---
service1:
  image:
    name: abc.com/s1 # service1 image
service2:
  config:
    range: 127.0.0.0
  image:
    name: def.com/s2 # service2 image
id:
  username: user # DB username
  password: pass # DB Password
Q: "Is there any other method using regex and replace?"
A: You can use lineinfile if you want to. For example
- lineinfile:
    path: values.yml
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
    backrefs: true
  loop:
    - {regexp: '^(\s*)name: <user input>(.*)$', line: '\1name: {{ service2 }}\2'}
    - {regexp: '^(\s*)name: <user input>(.*)$', line: '\1name: {{ service1 }}\2'}
    - {regexp: '^(\s*)username:\s+.*?(\s+#.*)$', line: '\1username: {{ username }}\2'}
    - {regexp: '^(\s*)password:\s+.*?(\s+#.*)$', line: '\1password: {{ password }}\2'}
changes the file (lineinfile replaces the last line that matches the regexp, which is why the {{ service2 }} item is listed before {{ service1 }})
shell> cat values.yml
---
service1:
  image:
    name: abc.com/s1 # service1 image
service2:
  config:
    range: 127.0.0.0
  image:
    name: def.com/s2 # service2 image
id:
  username: user # DB username
  password: pass # DB Password
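To stay closer to the regex-and-replace approach you started with, the replace module also works; since it substitutes every match, the trailing comments can be used to tell the two <user input> lines apart. A sketch, assuming the comments in values.yml stay exactly as shown above:

```yaml
# Each regexp anchors on the distinguishing trailing comment,
# so each replace touches only the intended line.
- replace:
    path: values.yml
    regexp: '^(\s*name): <user input>(\s+# service1 image)$'
    replace: '\1: {{ service1 }}\2'

- replace:
    path: values.yml
    regexp: '^(\s*name): <user input>(\s+# service2 image)$'
    replace: '\1: {{ service2 }}\2'
```

Unlike lineinfile, replace changes all matching occurrences at once, so the regexp must be narrow enough to match only the line you mean.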
Related
I am deploying my Spring Boot application Docker image on GCP using Helm charts. For env-specific configuration I use a helm-override.yaml file. However, I noticed the values I configured in application-stage.properties are not being picked up by the application. Attaching the Helm chart and build.gradle files below.
Below is the project structure:
```
xyz
  settings.gradle
  build.gradle
  config
    prod
      application-prod.properties
    stage
      application.properties
  gradle/
    wrapper/
      gradle-wrapper.jar
      gradle-wrapper.properties
  src/
    main/
      java/
      resources/
        application.properties
  xyzcharts/
    values.yaml
    config/
      stage/
        helm-override-stage.yaml
    templates/
      configmap.yaml
      cronjob.yaml
```
build.gradle
plugins {
    id 'org.springframework.boot' version "${springBootVersion}"
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    id 'java'
    id 'eclipse'
    id 'jacoco'
    id 'org.sonarqube' version "3.3"
    id 'com.google.cloud.tools.jib' version "${jibVersion}"
}
group = 'com.vsi.postgrestoattentive'
if (!project.hasProperty('buildName')) {
    throw new GradleException("Usage for CLI:"
        + System.getProperty("line.separator")
        + "gradlew <taskName> -Dorg.gradle.java.home=<java-home-dir> -PbuildName=<major>.<minor>.<buildNumber> -PgcpProject=<gcloudProject>"
        + System.getProperty("line.separator")
        + "<org.gradle.java.home> - OPTIONAL if available in PATH"
        + System.getProperty("line.separator")
        + "<buildName> - MANDATORY, example 0.1.23"
        + System.getProperty("line.separator")
        + "<gcpProject> - OPTIONAL, project name in GCP");
}
project.ext {
    buildName = project.property('buildName');
}
version = "${project.ext.buildName}"
sourceCompatibility = '1.8'
apply from: 'gradle/sonar.gradle'
apply from: 'gradle/tests.gradle'
apply from: 'gradle/image-build-gcp.gradle'
repositories {
    mavenCentral()
}
dependencies {
    implementation("org.springframework.boot:spring-boot-starter-web:${springBootVersion}")
    implementation("org.springframework.boot:spring-boot-starter-actuator:${springBootVersion}")
    implementation 'org.springframework.boot:spring-boot-starter-web:2.7.0'
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.integration:spring-integration-test'
    testImplementation 'org.springframework.batch:spring-batch-test:4.3.0'
    implementation("org.springframework.boot:spring-boot-starter-data-jpa:${springBootVersion}")
    implementation 'org.postgresql:postgresql:42.1.4'
    implementation 'org.springframework.batch:spring-batch-core:4.1.1.RELEASE'
    implementation 'com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.14.1'
    implementation group: 'io.micrometer', name: 'micrometer-registry-datadog', version: '1.7.0'
    implementation 'com.google.cloud:libraries-bom:26.3.0'
    implementation 'com.google.cloud:google-cloud-storage:2.16.0'
}
bootJar {
    archiveFileName = "${project.name}.${archiveExtension.get()}"
}
springBoot {
    buildInfo()
}
test {
    finalizedBy jacocoTestReport
}
jacoco {
    toolVersion = "0.8.8"
}
jacocoTestReport {
    dependsOn test
}
//SMS2-28: Code to make build check code coverage ratio
project.tasks["bootJar"].dependsOn "jacocoTestReport","jacocoTestCoverageVerification"
cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "xyz.fullname" . }}
  labels:
    {{ include "xyz.labels" . | nindent 4 }}
spec:
  schedule: "{{ .Values.schedule }}"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: POSTGRES_DB_USER_NAME
                  valueFrom:
                    secretKeyRef:
                      name: xyz-feed-secret
                      key: DB_USER_NAME
                - name: POSTGRES_DB_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: xyz-feed-secret
                      key: DB_PASSWORD
                - name: POSTGRES_DB_URL
                  valueFrom:
                    secretKeyRef:
                      name: xyz-feed-secret
                      key: DB_URL
                - name: POSTGRES_TO_ATTENTIVE_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: xyz-feed-secret
                      key: ATTENTIVE_TOKEN
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: DD_AGENT_HOST
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
                - name: DD_ENV
                  value: {{ .Values.datadog.env }}
                - name: DD_SERVICE
                  value: {{ include "xyz.name" . }}
                - name: DD_VERSION
                  value: {{ include "xyz.AppVersion" . }}
                - name: DD_LOGS_INJECTION
                  value: "true"
                - name: DD_RUNTIME_METRICS_ENABLED
                  value: "true"
              volumeMounts:
                - mountPath: /app/config
                  name: logback
              ports:
                - name: http
                  containerPort: {{ .Values.service.port }}
                  protocol: TCP
          volumes:
            - configMap:
                name: {{ include "xyz.name" . }}
              name: logback
      backoffLimit: 0
      metadata:
        {{ with .Values.podAnnotations }}
        annotations:
          {{ toYaml . | nindent 8 }}
        labels:
          {{ include "xyz.selectorLabels" . | nindent 8 }}
        {{- end }}
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "xyz.name" . }}
  labels:
    {{- include "xyz.labels" . | nindent 4 }}
data:
  application.properties: |-
    {{- range .Files.Lines .Values.application.configoveride }}
    {{ . }}{{ end }}
  logback-spring.xml: |+
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
      <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
      <include resource="org/springframework/cloud/gcp/logging/logback-json-appender.xml" />
      <property name="projectId" value="${projectId:-${GOOGLE_CLOUD_PROJECT}}"/>
      <appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
          <layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
            <projectId>${projectId}</projectId>
            <includeTraceId>true</includeTraceId>
            <includeSpanId>true</includeSpanId>
            <includeLevel>true</includeLevel>
            <includeThreadName>true</includeThreadName>
            <includeMDC>true</includeMDC>
            <includeLoggerName>true</includeLoggerName>
            <includeFormattedMessage>true</includeFormattedMessage>
            <includeExceptionInMessage>false</includeExceptionInMessage>
            <includeContextName>true</includeContextName>
            <includeMessage>true</includeMessage>
            <includeException>true</includeException>
            <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
            </jsonFormatter>
          </layout>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="CONSOLE_JSON"/>
      </root>
    </configuration>
values.yaml
# Default values for postgres_to_attentive_product_catalog.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
###SMS2-40 - replica count indicates the number of instances we need
### - If we want 3 instances then we mention 3 - then 3 pods will be created on the server
### - For staging env we usually keep 1
replicaCount: 1
image:
  ###SMS2-40 - Below is the image name which is created during build --> GCP Build image
  ### ---> We can also give local image details here
  ### ---> We can create an image in a Docker repository and use that image URL here
  repository: gcr.io/mgcp-1308657-vsi-operations/smscatalogfeed
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "smscatalogfeed"
podAnnotations: {}
podSecurityContext: {}
  # fsGroup: 2000
securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
schedule: "56 17 * * *"
###SMS2-40 - There are 2 ways we can serve our applications --> 1st -> LoadBalancer or 2nd -> NodePort
service:
  type: NodePort
  port: 8087
  liveness: /actuator/health/liveness
  readiness: /actuator/health/readiness
###service:
###  type: ClusterIP
###  port: 80
restartPolicy: "Never"
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
###SMS2-40 - The below setting is used to override configuration with config/application.properties
application:
  configoveride: "config/application.properties"
helm-override-stage.yaml
replicaCount: 1
#SMS2-12 : mgcp-1308657-vsi-operations is our server/project in GCP
image:
  repository: gcr.io/mgcp-1308657-vsi-operations/smscatalogfeed
  tag: <IMAGE_TAG_PLACEHOLDER_TO_BE_REPLACED>
application:
  configoveride: "config/stage/application-stage.properties"
datadog:
  enabled: true
  env: stage
Looks like there are many problems with your project structure and your YAML configs:
The application-stage.properties file is missing;
Other chart files like Chart.yaml and _helpers.tpl are not reflected in your project structure;
secret.yaml is missing from your chart template, but your CronJob jobTemplate envs fetch from it with secretKeyRef;
Your application-stage.properties content is only stored in configmap.yaml;
Your configmap.yaml data is defined with the filename as key and the file content as value, which is not suited for exposing as container environment variables.
So as to the question of why the configured values in application-stage.properties are not being picked up by the application: you defined your CronJob container to fetch envs from a secret which is missing, and your application-stage.properties only exists as a file stored in a configmap, not as key-value pairs.
The tutorial spring boot application.properties in kubernetes may provide you more guidelines.
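Note also that the cronjob.yaml above already mounts the ConfigMap (which contains application.properties) at /app/config. Spring Boot only searches ./config/ relative to its working directory by default, so whether that file is read depends on the container's working directory. A minimal sketch that forces the location explicitly, assuming the mount path from your chart stays as-is:

```yaml
# Add to the CronJob container env: make Spring Boot read the mounted
# /app/config/application.properties regardless of the working directory.
# SPRING_CONFIG_ADDITIONAL_LOCATION is the standard Spring Boot
# relaxed-binding form of spring.config.additional-location.
- name: SPRING_CONFIG_ADDITIONAL_LOCATION
  value: "/app/config/"
```

This sidesteps the missing secret for the properties values, though the secretKeyRef envs still need a real secret.yaml to exist.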
I am trying to come up with a way to pass multiple values to the same field in a role, but I'm not having any luck getting it to work with the role duplication and execution method I've been using. As an example, I want an SLB server to have multiple ports assigned to it via the port_number variable. I'm new to Ansible, so I'm making rookie mistakes like the code below (port_number: "80", port_number: "8080" is a duplicate entry, so only the first is used), but I have tried just about every syntax I have found examples for and nothing works. The end result should be test3 with both port_number entries assigned to it, but at this point I'm not even sure that's possible this way, or whether I have to run a separate module afterwards to add the entries. Any help is greatly appreciated. Thanks.
---
- name: Deploy A10 config
  connection: local
  hosts: all
  roles:
    - role: server
      vars:
        name: "test1"
        fqdn_name: "test1.test.domain.net"
        health_check: "TCP-8080-HALFOPEN"
        port_number: "80"
    - { role: server, vars: { name: "test2", fqdn_name: "test2.test.domain.net", port_number: "8080" }}
    - { role: server, vars: { name: "test3", fqdn_name: "test3.test.domain.net", port_number: "80", port_number: "8080" }}
---
- name: Test server create
  a10_slb_server:
    a10_host: "10.1.1.1"
    a10_username: "admin"
    a10_password: "admin"
    a10_port: "443"
    a10_protocol: "https"
    state: present
    name: "{{ name }}"
    fqdn_name: "{{ fqdn_name }}"
    port_list:
      - port_number: "{{ port_number }}"
In your code, vars is a dictionary. The keys in a dictionary must be unique.
vars:
  name: "test1"
  fqdn_name: "test1.test.domain.net"
  health_check: "TCP-8080-HALFOPEN"
  port_number: "80"
YAML resolves the duplication of keys simply by overriding the value. This expression
vars: { name: "test3", fqdn_name: "test3.test.domain.net", port_number: "80", port_number: "8080" }
would give
"vars": {
    "fqdn_name": "test3.test.domain.net",
    "name": "test3",
    "port_number": "8080"
}
In your code, port_list is a list of dictionaries. This seems to be the proper way to declare multiple port numbers.
port_list:
  - port_number: "80"
  - port_number: "8080"
In serialized format
port_list: [{port_number: "80"}, {port_number: "8080"}]
But from role: server alone it's not clear how these variables are used inside the role. It is necessary to review the role to learn how to submit the data.
For example:
- role: server
  vars:
    name: "test1"
    fqdn_name: "test1.test.domain.net"
    health_check: "TCP-8080-HALFOPEN"
    port_number1: "80"
    port_number2: "8080"
---
- name: Test server create
  a10_slb_server:
    a10_host: "10.1.1.1"
    a10_username: "admin"
    a10_password: "admin"
    a10_port: "443"
    a10_protocol: "https"
    state: present
    name: "{{ name }}"
    fqdn_name: "{{ fqdn_name }}"
    port_list:
      - port_number: "{{ port_number1 }}"
      - port_number: "{{ port_number2 }}"
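Another option, if you can change the role, is to pass the whole list as one variable so each server can declare as many ports as it needs (the variable name port_list_var here is made up for illustration):

```yaml
# Playbook side: each role instance passes its own list of ports.
- role: server
  vars:
    name: "test3"
    fqdn_name: "test3.test.domain.net"
    port_list_var:
      - port_number: "80"
      - port_number: "8080"
```

and inside the role's task, hand the list straight to the module:

```yaml
# Role side: the module receives the list as-is.
port_list: "{{ port_list_var }}"
```

This avoids hard-coding a fixed number of port_numberN variables in the role.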
I have a role which uses with_items:
- name: Create backup config files
  template:
    src: "config.yml.j2"
    dest: "/tmp/{{ project }}_{{ env }}_{{ item.type }}.yml"
  with_items:
    - "{{ backups }}"
I can access item.type, as usual, but not project or env, which are defined outside the role:
deploy/main.yml
- hosts: ...
  vars:
    project: ...
    rails_env: qa
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ project }}"
      env: "{{ rails_env }}"
      backups:
        - type: mysql
          username: ...
          password: ...
The error I get is:
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ project }}'
The template, config.yml.j2, is:
type: {{ item.type }}
project: {{ project }}
env: {{ env }}
database:
  username: {{ item.username }}
  password: {{ item.password }}
It turns out you can't redefine a var in terms of an existing var with the same name, so project: "{{ project }}" will always fail with an error.
Instead, project can be omitted, and the existing definition in vars will be used.
- hosts: ...
  vars:
    project: ... # <- already defined here
  roles:
    - role: ../../../roles/deploy/dolly
      backups:
        - type: mysql
          username: ...
          password: ...
If the var is not defined in vars, it can be defined in the role:
- hosts: ...
  vars:
    name: ...
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ name }}" # <- define here
      backups:
        - type: mysql
          username: ...
          password: ...
I don't properly understand how to use the template_parameters parameter correctly (http://docs.ansible.com/ansible/cloudformation_module.html).
It looks like I could use this param to override some params in the template. A very simple configuration is:
sandbox_cloudformation.yml
---
- name: Sandbox CloudFormation
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch Ansible CloudFormation Stack
      cloudformation:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        stack_name: "{{ aws_default_cloudformation_stack_name }}"
        state: "present"
        region: "{{ aws_default_region }}"
        disable_rollback: true
        template: "files/cloudformation.yml"
      args:
        template_parameters:
          GroupDescription: "Sandbox Security Group"
cloudformation.yml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: "DEMO"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
But I got the following error:
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Parameter values specified for a template which does not require them."}
In addition, I got this error if I try to use, for example:
template_parameters:
  KeyName: "{{ aws_default_keypair }}"
  InstanceType: "{{ aws_default_instance_type }}"
Also, please advise on the best approach to using the cloudformation module with Ansible. Maybe the best way is to generate the CloudFormation template in one step and use it in the next? Like:
- name: Render Cloud Formation Template
  template: src=cloudformation.yml.j2 dest=rendered_templates/cloudformation.yml
- name: Launch Ansible CloudFormation Stack
  cloudformation:
    template: "rendered_templates/cloudformation.yml"
Thanks in advance!
You can use template_parameters to pass parameters to a CloudFormation template. The template must declare them in a Parameters section, and you refer to them using Ref. In your case:
playbook:
...
      args:
        template_parameters:
          GroupDescriptionParam: "Sandbox Security Group"
...
template:
...
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription:
        Ref: GroupDescriptionParam
...
See http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html#w1ab2c19c12d282c19 for examples of AWS SecurityGroup CloudFormation templates.
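The "Parameter values specified for a template which does not require them" error comes from the template having no Parameters section at all, so any template_parameters are rejected. A sketch of the declaration that has to sit alongside Resources (the Default value here is an assumption):

```yaml
# Declare the parameter so CloudFormation will accept a value for it;
# Ref then resolves to whatever template_parameters passed in.
Parameters:
  GroupDescriptionParam:
    Type: String
    Default: "Sandbox Security Group"
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription:
        Ref: GroupDescriptionParam
```

With the parameter declared, the original playbook's template_parameters block works unchanged apart from the key name.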
Instead of using the template_parameters argument of the Ansible cloudformation module, I have found it handy to write a Jinja2 template and convert it to a CloudFormation template with the Ansible template module, just as you suggested at the end of your question. With this approach you can omit args and template_parameters in the cloudformation module call.
For example:
- name: Generate CloudFormation template
  become: no
  run_once: yes
  local_action:
    module: template
    src: "mycloudformation.yml.j2"
    dest: "{{ cf_templ_dir }}/mycloudformation.yml"
  tags:
    - cloudformation
    - cf_template
- name: Deploy the CloudFormation stack
  become: no
  run_once: yes
  local_action:
    module: cloudformation
    stack_name: "{{ cf_stack_name }}"
    state: present
    region: "{{ ec2_default_region }}"
    template: "{{ cf_templ_dir }}/mycloudformation.yml"
  register: cf_result
  tags:
    - cloudformation
- name: Show CloudFormation output
  run_once: yes
  become: no
  debug: var=cf_result
  tags:
    - cloudformation
And mycloudformation.yml.j2:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: {{ group_desc_param_defined_somewhere_in_ansible }}
...
top_level_main.yml
roles:
  - { role: deploy_nds }
roles/deploy_nds/vars/main.yml
artifact_url: urlsomething
roles/deploy_nds/meta/main.yml
dependencies:
  - {role: download_artifact, url: artifact_url }
roles/download_artifactory/tasks/main.yml
- name: download artifact from jfrog
  get_url:
    url: "{{ url }}"
    dest: /var/tmp
I tried using the variable as "{{ artifact_url }}" but it still does not work as expected. Can someone please help?
I explicitly included the vars file of the particular role in the playbook, and then it worked.
vars_files:
  - roles/deploy_nds/vars/main.yml
roles:
  - { role: deploy_nds }
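For reference, part of the problem in meta/main.yml is that url: artifact_url passes the literal string "artifact_url", not the variable's value. The templated form is the usual way to write the dependency, though, as the thread shows, the role's own vars/main.yml may not be in scope when the dependency is evaluated, which is why loading it via vars_files in the playbook made it work:

```yaml
# roles/deploy_nds/meta/main.yml
# Template the value instead of passing the bare variable name.
dependencies:
  - role: download_artifact
    url: "{{ artifact_url }}"
```

If artifact_url is still undefined at dependency-evaluation time, moving it to defaults/main.yml or keeping the vars_files workaround are the options.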