Modify a line and append if needed - ansible

Using Ansible, I want to modify an existing configuration file and change a specific setting (variable) depending on one or more vars that I specify in a customer-specific playbook.
The configuration file contains:
JVM_SUPPORT_RECOMMENDED_ARGS=""
The following option should always be added:
-Datlassian.plugins.enable.wait=300
Thus resulting in:
JVM_SUPPORT_RECOMMENDED_ARGS="-Datlassian.plugins.enable.wait=300"
However, I also have some optional options to set, for example when proxy settings are required:
JVM_SUPPORT_RECOMMENDED_ARGS="-Datlassian.plugins.enable.wait=300 -Dhttp.proxyHost=proxy.xxx.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.xxx.com -Dhttps.proxyPort=3128 -Dhttp.nonProxyHosts='localhost'"
Or when we need to disable some sort of SSL Endpoint Identification:
JVM_SUPPORT_RECOMMENDED_ARGS="-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true"
Some customers do not require the proxy or endpoint setting, some require both, and some only one of these.
I tried to provide 'as-is' lines so the task just replaces the entire value, but this is not neat and not desirable either, since there can be multiple combinations and I don't want to program all of them beforehand. I would rather have the value built up by adding the options I need.
- lineinfile:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS="'
    line: "JVM_SUPPORT_RECOMMENDED_ARGS=\"-Datlassian.plugins.enable.wait=300\""
  when: file.stat.exists

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS proxy settings are present
  lineinfile:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS="'
    line: >-
      JVM_SUPPORT_RECOMMENDED_ARGS="-Datlassian.plugins.enable.wait=300
      -Dhttp.proxyHost={{ jira_http_proxy }} -Dhttp.proxyPort={{ jira_http_proxyport }} -Dhttps.proxyHost={{ jira_https_proxy }}
      -Dhttps.proxyPort={{ jira_https_proxyport }} -Dhttp.nonProxyHosts='{{ jira_non_proxy_hosts|default([])|join('|') }}'"
  when: file.stat.exists and (jira_http_proxy|default(false) or jira_https_proxy|default(false))
Edit:
I can do it like this:
jira_base_args: "-Datlassian.plugins.enable.wait=300"
jira_proxy_args: >-
  -Dhttp.proxyHost={{ jira_http_proxy }} -Dhttp.proxyPort={{ jira_http_proxyport }}
  -Dhttps.proxyHost={{ jira_https_proxy }} -Dhttps.proxyPort={{ jira_https_proxyport }}
  -Dhttp.nonProxyHosts='{{ jira_non_proxy_hosts|default([])|join('|') }}'
jira_ldap_args: "-Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true"
And in the playbook role:
- replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }}\""
  when: file.stat.exists

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS proxy settings are present
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }} {{ jira_proxy_args }}\""
  when: file.stat.exists and (jira_http_proxy|default(false) or jira_https_proxy|default(false))

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS ldap settings are present
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }} {{ jira_ldap_args }}\""
  when: file.stat.exists and (jira_disable_endpoint_ident|default(false))
But how do I handle situations where both proxy and ldap settings are needed? The latter task now overwrites the proxy settings.
Edit:
Fixed it like this; it's not very nice, but does anyone have something better?
# We know that the default plugin timeout for JIRA is too low in most cases.
- name: Set plugin timeout to 300s
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }}\""
  when:
    - file.stat.exists
    - not jira_http_proxy|default(false)
    - not jira_https_proxy|default(false)
    - not jira_disable_endpoint_ident|default(false)

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS proxy settings are present
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }} {{ jira_proxy_args }}\""
  when:
    - file.stat.exists
    - jira_http_proxy|default(false)
    - jira_https_proxy|default(false)
    - not jira_disable_endpoint_ident|default(false)

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS ldap settings are present
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }} {{ jira_ldap_args }}\""
  when:
    - file.stat.exists
    - not jira_http_proxy|default(false)
    - not jira_https_proxy|default(false)
    - jira_disable_endpoint_ident|default(false)

- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS proxy + ldap settings are present
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: "JVM_SUPPORT_RECOMMENDED_ARGS=\"{{ jira_base_args }} {{ jira_proxy_args }} {{ jira_ldap_args }}\""
  when:
    - file.stat.exists
    - jira_http_proxy|default(false)
    - jira_https_proxy|default(false)
    - jira_disable_endpoint_ident|default(false)
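As an alternative to enumerating every combination, the argument string can be assembled from whichever optional parts are enabled in a single task. The following is an untested sketch reusing the jira_* variables above; the use_proxy helper var is an assumption introduced here for readability:

```yaml
- name: Ensure JVM_SUPPORT_RECOMMENDED_ARGS contains all enabled options
  replace:
    name: "{{ file.stat.path }}"
    regexp: '^JVM_SUPPORT_RECOMMENDED_ARGS=(.*)'
    replace: 'JVM_SUPPORT_RECOMMENDED_ARGS="{{ jvm_args | join(" ") }}"'
  vars:
    use_proxy: "{{ jira_http_proxy | default(false) or jira_https_proxy | default(false) }}"
    # build a list of the parts that apply, then join them with spaces
    jvm_args: >-
      {{ [jira_base_args]
         + ([jira_proxy_args] if use_proxy | bool else [])
         + ([jira_ldap_args] if jira_disable_endpoint_ident | default(false) else []) }}
  when: file.stat.exists
```

Adding a fourth or fifth optional part then only means appending one more conditional list term, instead of doubling the number of tasks.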

You can leverage backrefs: yes in the lineinfile module. With backrefs set, the line is only modified when the regexp matches; if nothing matches, the file is left unchanged (and insertbefore/insertafter have no effect).
Example:
- name: Set Atlassian enable.wait to 300s
  lineinfile:
    path: file.txt
    regexp: '^(JVM_SUPPORT_RECOMMENDED_ARGS=".*)"$'
    insertbefore: '"$'
    line: '\1 -Datlassian.plugins.enable.wait=300"'
    state: present
    backrefs: yes
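One caveat with this approach: because the regexp matches on every run, the option is appended again each time the play runs. A negative lookahead in the regexp makes it idempotent; the following variant is a sketch of that idea:

```yaml
- name: Set Atlassian enable.wait to 300s (idempotent variant)
  lineinfile:
    path: file.txt
    # only match lines that do not already contain the enable.wait option
    regexp: '^(JVM_SUPPORT_RECOMMENDED_ARGS="(?!.*enable\.wait).*)"$'
    line: '\1 -Datlassian.plugins.enable.wait=300"'
    state: present
    backrefs: yes
```

On the second run the lookahead fails, the regexp no longer matches, and lineinfile with backrefs leaves the file untouched.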

Related

ansible playbook inline to add multiple lines

I've got the following playbook to add multiple lines to chrony.conf:
- name: add line
  lineinfile:
    backup: no
    backrefs: no
    state: present
    path: "{{ file_path }}"
    insertafter: "{{ line.replace_with }}"
    line: "{{ line.line_to_add }}"
  with_items:
    - { search: "{{ line.replace_with }}", add: "{{ line.line_to_add }}" }
In my vars file I have it like this:
line:
  line_to_add:
    - "server ntp1.domain.com iburst"
    - "server ntp2.domain.com iburst"
    - "server ntp3.domain.com iburst"
But the change put all 3 NTP servers on one line instead of 3.
Any idea?
When I change my yml to:
- name: add line
  lineinfile:
    backup: no
    backrefs: no
    state: present
    path: "{{ file_path }}"
    #regexp: '^(\s*)[#]?{{ item.search }}(: )*'
    insertafter: "{{ line.replace_with }}"
    line: "{{ item }}"
    create: true
  loop: "{{ line.line_to_add }}"
  with_items:
    - { search: "{{ line.replace_with }}", add: "{{ line.line_to_add }}" }
I get "duplicate loop in task: items".
You can use loop for this, to iterate over each line in the line.line_to_add variable (a task may have only one loop keyword, hence the duplicate-loop error). I also assumed that you have another line.replace_with variable.
- name: Example of multiple lines
  hosts: localhost
  gather_facts: no
  vars:
    line:
      line_to_add:
        - "server ntp1.domain.com iburst"
        - "server ntp2.domain.com iburst"
        - "server ntp3.domain.com iburst"
      replace_with:
        - test_line
  tasks:
    - name: add line
      lineinfile:
        backup: no
        backrefs: no
        state: present
        path: test_file
        insertafter: "{{ line.replace_with }}"
        line: "{{ item }}"
        create: true
      loop: "{{ line.line_to_add }}"
Gives:
server ntp1.domain.com iburst
server ntp2.domain.com iburst
server ntp3.domain.com iburst

Use Ansible to ensure a file exists, ignoring any extra lines

I'm trying to update the sssd.conf file on about 200 servers with a standardized configuration file, however, there is one possible exception to the standard. Most servers will have a config that looks like this:
[domain/domainname.local]
id_provider = ad
access_provider = simple
simple_allow_groups = unixsystemsadmins, datacenteradmins, sysengineeringadmins, webgroup
default_shell = /bin/bash
fallback_homedir = /export/home/%u
debug_level = 0
ldap_id_mapping = false
case_sensitive = false
cache_credentials = true
dyndns_update = true
dyndns_refresh_interval = 43200
dyndns_update_ptr = true
dyndns_ttl = 3600
ad_use_ldaps = True
[sssd]
services = nss, pam
config_file_version = 2
domains = domainname.local
[nss]
[pam]
However, on some servers, there's an additional line after simple_allow_groups called simple_allow_users, and each server that has this line has it configured for specific users to be allowed to connect without being a member of an LDAP group.
My objective is to replace the sssd.conf file on all servers, but not to remove this simple_allow_users line if it exists. I looked into lineinfile and blockinfile, but neither of these seems to really handle this exception. I'm thinking I'll have to check the file for the existence of the line, store it in a variable, push the new file, and then add the line back afterwards using the variable, but I'm not entirely sure this is the best way to handle it. Any suggestions on the best way to accomplish what I'm looking to do?
Thanks!
I would do the following:
1. See if simple_allow_users exists in the current sssd.conf file.
2. Change your model configuration to add the current value of the simple_allow_users line if it exists.
3. Overwrite the sssd.conf file with the new content.
You can use a Jinja2 conditional to achieve step 2: https://jinja2docs.readthedocs.io/
I believe the above tasks will solve what you need; just remember to test on a single host and back up the original file for good measure ;-)
- shell: grep 'simple_allow_users' {{ sssd_conf_path }}
  vars:
    sssd_conf_path: /etc/sssd.conf
  register: grep_result
  # grep exits with rc 1 when the line is absent; only treat other codes as errors
  failed_when: grep_result.rc not in [0, 1]

- set_fact:
    configuration_template: |
      [domain/domainname.local]
      id_provider = ad
      access_provider = simple
      simple_allow_groups = unixsystemsadmins, datacenteradmins, sysengineeringadmins, webgroup
      {% if 'simple_allow_users' in grep_result.stdout %}
      {{ grep_result.stdout.rstrip() }}
      {% endif %}
      default_shell = /bin/bash
      ..... Rest of your config file

- copy:
    content: "{{ configuration_template }}"
    dest: "{{ sssd_conf_path }}"
  vars:
    sssd_conf_path: /etc/sssd.conf
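The same idea can be kept in a template file instead of a set_fact, which scales better for a long config. This is a sketch; the template file name sssd.conf.j2 and the allow_users_line variable are assumptions, not part of the answer above:

```yaml
# Assumes templates/sssd.conf.j2 holds the model config, with this block
# where the optional line belongs:
#   {% if allow_users_line %}
#   {{ allow_users_line }}
#   {% endif %}
- template:
    src: sssd.conf.j2
    dest: /etc/sssd/sssd.conf
    owner: root
    group: root
    mode: "0600"
    backup: yes
  vars:
    # empty string when grep found nothing, so the if-block is skipped
    allow_users_line: "{{ grep_result.stdout | default('') | trim }}"
```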
I used Zeitounator's tip, along with this question: Only check whether a line present in a file (ansible)
This is what I came up with:
*as it turns out, the simple_allow_groups are being changed after the systems are deployed (thanks for telling the admins about that, you guys... /snark for the people messing with my config files)
---
- name: Get Remote SSSD Config
  become: true
  slurp:
    src: /etc/sssd/sssd.conf
  register: slurpsssd

- name: Set simple_allow_users if exists
  set_fact:
    simpleallowusers: "{{ linetomatch }}"
  loop: "{{ file_lines }}"
  loop_control:
    loop_var: linetomatch
  vars:
    decode_content: "{{ slurpsssd['content'] | b64decode }}"
    file_lines: "{{ decode_content.split('\n') }}"
  when: '"simple_allow_users" in linetomatch'

- name: Set simple_allow_groups
  set_fact:
    simpleallowgroups: "{{ linetomatch }}"
  loop: "{{ file_lines }}"
  loop_control:
    loop_var: linetomatch
  vars:
    decode_content: "{{ slurpsssd['content'] | b64decode }}"
    file_lines: "{{ decode_content.split('\n') }}"
  when: '"simple_allow_groups" in linetomatch'

- name: Install SSSD Config
  copy:
    src: etc/sssd/sssd.conf
    dest: /etc/sssd/sssd.conf
    owner: root
    group: root
    mode: 0600
    backup: yes
  become: true

- name: Add simple_allow_users back to file if it existed
  lineinfile:
    path: /etc/sssd/sssd.conf
    line: "{{ simpleallowusers }}"
    insertafter: "^simple_allow_groups"
  when: simpleallowusers is defined
  become: true

- name: Replace simple allow groups with existing values
  lineinfile:
    path: /etc/sssd/sssd.conf
    line: "{{ simpleallowgroups }}"
    regexp: "^simple_allow_groups"
    backrefs: true
  when: simpleallowgroups is defined
  become: true
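As a side note, each of the two set_fact loops above can be collapsed into a single expression with the regex_search filter. A hedged sketch (multiline=True requires a reasonably recent Ansible, and the later condition has to change because the fact is now always defined):

```yaml
- name: Set simple_allow_users if exists (loop-free variant)
  set_fact:
    # empty string when the line is absent, the matched line otherwise
    simpleallowusers: >-
      {{ slurpsssd['content'] | b64decode
         | regex_search('^simple_allow_users.*$', multiline=True)
         | default('', true) }}

# later, instead of "when: simpleallowusers is defined":
#   when: simpleallowusers | length > 0
```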

iterate over list of dictionaries within with_nested list

I have the following with_nested:
- name: Create solr schema for solr_cores
  uri:
    url: "http://{{ cassandra_cluster_ips.split(',') | random }}:8983/solr/admin/cores?action={{ solr_core_action }}&core={{ item[1] }}.{{ item[0] }}"
    timeout: "{{ solr_create_timeout }}"
  sudo: True
  with_nested:
    - "{{ solr_cores }}"
    - "{{ client_names }}"
I want to change my extra vars from:
#solr_cores: ['dom_chunk_meta', 'dom', 'tra_chunk_meta', 'tra', 'dom_difference_results', 'me_output']
#solr_core_action: "reload"
to:
solr_cores: [{'dom_difference_results': 'create'}, {'dom': 'reload'}, {'tra': 'reload'}, {'me_output': 'reload'}]
I looked at subelements, but don't know how to pass it as a simple dictionary list so that I can set them in uri to access it.
I am not 100% sure what you are trying to do, but since you are moving to per-core actions, reshaping the data into a list of dicts with explicit keys and iterating with loop (or with_items) might be the way to go. If this doesn't work for you, comment and I will adjust the answer.
In your vars/inventory:
solr:
  - name: core1
    create_timeout: 30
    action: reload
  - name: core2
    action: create
    create_timeout: 60
In your playbook:
- name: Create solr schema for solr_cores
  uri:
    url: "http://{{ cassandra_cluster_ips.split(',') | random }}:8983/solr/admin/cores?action={{ item.action }}&core={{ item.name }}"
    timeout: "{{ item.create_timeout }}"
  sudo: True
  loop: "{{ solr }}"

Use Jinja2 dict as part of an Ansible modules options

I have the following dict:
endpoint:
  esxi_hostname: servername.domain.com
I'm trying to use it as an option via jinja2 for the vmware_guest but have been unsuccessful. The reason I'm trying to do it this way is because the dict is dynamic...it can either be cluster: clustername or esxi_hostname: hostname, both mutually exclusive in the vmware_guest module.
Here is how I'm presenting it to the module:
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    datacenter: "{{ ansible_host_datacenter }}"
    folder: "/DCC/{{ ansible_host_datacenter }}/vm"
    "{{ endpoint }}"
    name: "{{ guest }}"
    state: present
    guest_id: "{{ osid }}"
    disk: "{{ disks }}"
    networks: "{{ niclist }}"
    hardware:
      memory_mb: "{{ memory_gb|int * 1024 }}"
      num_cpus: "{{ num_cpus|int }}"
      scsi: "{{ scsi }}"
    customvalues: "{{ customvalues }}"
    cdrom:
      type: client
  delegate_to: localhost
And here is the error I'm getting when including the tasks file:
TASK [Preparation : Include VM tasks] *********************************************************************************************************************************************************************************
fatal: [10.10.10.10]: FAILED! => {"reason": "Syntax Error while loading YAML.
The error appears to have been in '/data01/home/hit/tools/ansible/playbooks/roles/Preparation/tasks/prepareVM.yml': line 36, column 4, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
"{{ endpoint }}"
hostname: "{{ vcenter_hostname }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
exception type: <class 'yaml.parser.ParserError'>
exception: while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 36, column 4"}
So in summary, I'm not sure how to format this or if it is even possible.
The post from techraf sums up your problem, but for a possible solution, in the docs, especially regarding Jinja filters, there is the following bit:
Omitting Parameters
As of Ansible 1.8, it is possible to use the default filter to omit
module parameters using the special omit variable:
- name: touch files with an optional mode
  file: dest={{ item.path }} state=touch mode={{ item.mode | default(omit) }}
  with_items:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"
For the first two files in the list, the default mode will be determined by the umask of the system, as the mode= parameter will not be sent to the file module, while the final file will receive the mode=0444 option.
So it looks like what should be tried is:
esxi_hostname: "{{ endpoint.esxi_hostname | default(omit) }}"
# however you want the alternative cluster settings done.
# I don't know this module.
cluster: "{{ cluster | default(omit) }}"
This is obviously reliant on the vars to only have one choice set.
There is no way you could ever use the syntax you tried in the question, because first and foremost Ansible requires a valid YAML file.
The closest workaround would be to use a YAML anchor/alias although it would work only with literals:
# ...
vars:
  endpoint: &endpoint
    esxi_hostname: servername.domain.com
tasks:
  - name: Create VM pysphere
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      validate_certs: no
      datacenter: "{{ ansible_host_datacenter }}"
      folder: "/DCC/{{ ansible_host_datacenter }}/vm"
      <<: *endpoint
      name: "{{ guest }}"
      state: present
      guest_id: "{{ osid }}"
      disk: "{{ disks }}"
      networks: "{{ niclist }}"
      hardware:
        memory_mb: "{{ memory_gb|int * 1024 }}"
        num_cpus: "{{ num_cpus|int }}"
        scsi: "{{ scsi }}"
      customvalues: "{{ customvalues }}"
      cdrom:
        type: client
    delegate_to: localhost
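A third possibility, offered with the caveat that support varies by Ansible version (newer releases warn about, or may refuse, a variable used as task args): the task-level args keyword takes a dict, so the dynamic part can be supplied there while the static options stay under the module. A trimmed sketch:

```yaml
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    name: "{{ guest }}"
    state: present
  # endpoint is either {esxi_hostname: ...} or {cluster: ...};
  # its keys are merged into the module arguments
  args: "{{ endpoint }}"
  delegate_to: localhost
```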

Ansible, iterate over multiple registers and make extended use of build in methods

To start off, I have all my variables defined in YAML:
app_dir: "/mnt/{{ item.name }}"
app_dir_ext: "/mnt/{{ item.0.name }}"
workstreams:
  - name: tigers
    service_workstream: tigers-svc
    app_sub_dir:
      - inbound
      - inbound/incoming
      - inbound/error
      - inbound/error/processed
  - name: lions
    service_workstream: lions-svc
    app_sub_dir:
      - inbound
      - inbound/incoming
      - inbound/error
      - inbound/error/processed
You may notice that app_dir: "/mnt/{{ item.name }}" and app_dir_ext: "/mnt/{{ item.0.name }}" look odd; I originally had my variables set as below, but decided on the above mainly because it means fewer lines of YAML when I have a large number of workstreams.
workstreams:
  - name: tigers
    service_workstream: tigers-svc
    app_dir: /mnt/tigers
    ...
I then have Ansible code to check whether the directories exist and, if not, create them and apply permissions (note: I have taken this approach due to an SSH timeout when using the file: module on a number of very big NFS-mounted shares).
- name: Check workstreams app_dir
  stat:
    path: "{{ app_dir }}"
  register: app_dir_status
  with_items:
    - "{{ workstreams }}"

- name: Check workstreams app_sub_dir
  stat:
    path: "{{ app_dir_ext }}/{{ item.1 }}/"
  register: app_sub_dir_status
  with_subelements:
    - "{{ workstreams }}"
    - app_sub_dir

- name: create workstreams app_dir
  file:
    path: "/mnt/{{ item.0.name }}"
    state: directory
    owner: "ftp"
    group: "ftp"
    mode: '0770'
    recurse: yes
  with_nested:
    - '{{ workstreams }}'
    - app_dir_status.results
  when:
    - not item.1.stat.exists
This is a little hacky but works; however, I have a 2nd, 3rd, 4th path to check, etc.
My question is: how do I update/refactor the above code to use <register_name>.stat.exists == false from both app_dir_status and app_sub_dir_status to control my task?
You don't need a nested loop! All the required data is already inside app_sub_dir_status – just strip the unnecessary items.
Here's simplified example:
---
- hosts: localhost
  gather_facts: no
  vars:
    my_list:
      - name: zzz1
        sub:
          - aaa
          - ccc
      - name: zzz2
        sub:
          - aaa
          - bbb
  tasks:
    - stat:
        path: /tmp/{{ item.0.name }}/{{ item.1 }}
      register: stat_res
      with_subelements:
        - "{{ my_list }}"
        - sub
    - debug:
        msg: "path to create '{{ item }}'"
      with_items: "{{ stat_res.results | rejectattr('stat.exists') | map(attribute='invocation.module_args.path') | list }}"
You can iterate over stat_res.results | rejectattr('stat.exists') | list as well, but then you have to construct the path again as /tmp/{{ item.item.0.name }}/{{ item.item.1 }} – note the double item: the first item is an element of stat_res.results, which contains another item, the element of your original loop from the stat task.
P.S. Also, I see no reason for your first task, as the subdir task can detect all missing directories.
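Applied back to the question, the debug task above can be swapped for a file task that creates exactly the missing paths; a sketch, with owner/group/mode copied from the question's original task:

```yaml
- name: create missing workstream directories
  file:
    path: "{{ item }}"
    state: directory
    owner: "ftp"
    group: "ftp"
    mode: '0770'
  # same filter chain as the debug task: keep only paths whose stat failed
  with_items: "{{ stat_res.results | rejectattr('stat.exists') | map(attribute='invocation.module_args.path') | list }}"
```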
