Ansible: looping over a dynamic inventory using a Jinja template

Here is my Ansible playbook. It reads the /etc/waagent.conf file and checks whether AutoUpdate.Enabled=y is set, then uses a Jinja template to generate an output.csv file.
ansibleuser@server:~/plays$ cat report_waagent_local.yaml
---
- name: waagent auto update report
  hosts: localhost
  connection: ssh
  remote_user: ewxxxx
  become: true
  become_user: root
  gather_facts: true
  tasks:
    - name: "Ensure status of AutoUpdate.Enabled in /etc/waagent.conf"
      lineinfile:
        name: /etc/waagent.conf
        line: AutoUpdate.Enabled=y
        state: present
      check_mode: yes   # make no change, just check
      register: conf
      failed_when: (conf is changed) or (conf is failed)
      ignore_errors: yes
      # logic:
      # if "conf.changed": false --> AutoUpdate.Enabled=y is already in the file
      # if "conf.changed": true  --> the value is not set in the file
    - name: generate report
      template:
        src: report_waagent_local.j2
        dest: ./output.csv
ansibleuser@server:~/plays$
And here is the Jinja template.
ansibleuser@server:~/plays$ cat templates/report_waagent_local.j2
{% if conf.changed == false %}
{{ ansible_host }} , AutoUpdate.Enabled=y
{% else %}
{{ ansible_host }} , AutoUpdate.Enabled=n
{% endif %}
ansibleuser@server:~/plays$
It produces output.csv as expected.
127.0.0.1, AutoUpdate.Enabled=y
Now I need to produce a similar report for all the servers in the Azure subscription, so I modified my playbook. Note: I am using Azure dynamic inventory and have a group named "all_pls" that I need to run the playbook against.
ansibleuser@server:~/plays$ cat report_waagent.yaml
---
- name: "generate waagent report"
  hosts: all
  connection: ssh
  remote_user: ewxxxxx
  become: true
  become_user: root
  gather_facts: true
  tasks:
    - name: "Ensure status of AutoUpdate.Enabled in /etc/waagent.conf"
      lineinfile:
        name: /etc/waagent.conf
        line: AutoUpdate.Enabled=y
        state: present
      check_mode: yes   # make no change, just check
      register: conf
      failed_when: (conf is changed) or (conf is failed)
      ignore_errors: yes
      # if "conf.changed": false --> AutoUpdate.Enabled=y is already in the file
      # if "conf.changed": true  --> the value is not set in the file
    - name: generate report
      template:
        src: report_waagent_local.j2
        dest: ./output.csv
ansibleuser@server:~/plays$
The playbook runs without any issues, but I get no output in output.csv.
ansible-playbook --limit all_pls report_waagent.yaml
I guess I need to loop over the hosts in the group and also check conf.changed for each of them in the Jinja template. Can someone help, please?

I fixed the issue by having every host append its own rendered template line to output.csv on the controller:
- name: log conf.changed in output.csv using a template
  lineinfile:
    line: "{{ lookup('template', 'report_waagent_local.j2') }}"
    insertafter: EOF
    dest: output.csv
  delegate_to: localhost
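An alternative that avoids per-host writes is to render the whole report once on the controller and loop over the play's hosts inside the template. This is only a sketch, untested here; the template name report_waagent_all.j2 is made up for illustration, and it assumes the check task has registered conf on every host:
- name: generate report for all hosts in one pass
  template:
    src: report_waagent_all.j2
    dest: ./output.csv
  run_once: true
  delegate_to: localhost
with templates/report_waagent_all.j2 along these lines:
{% for host in ansible_play_hosts %}
{% if hostvars[host].conf.changed == false %}
{{ host }} , AutoUpdate.Enabled=y
{% else %}
{{ host }} , AutoUpdate.Enabled=n
{% endif %}
{% endfor %}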

Related

Ansible run shell module on multiple hosts and redirect output to 1 file

I need to run the shell module on a group of hosts and copy the registered output to a file on one of the servers.
NOTE: I don't want to copy the results to my local machine; I need them on a server.
- name: date.
  shell: cat /ngs/app/user/test
  register: date_res
  changed_when: false

- debug:
    msg: "{{ ansible_play_hosts | map('extract', hostvars, 'date_res') | map(attribute='stdout') | list }}"
  run_once: yes

- name: copy bulk output
  copy:
    content: "{{ allhost_out.stdout }}"
    dest: "/ngs/app/{{ app_user }}/test"
A Jinja2 loop can be helpful in your case. Tested on ansible [core 2.13.3]:
- name: date.
  shell: cat /ngs/app/user/test
  register: date_res
  changed_when: false

- name: copy bulk output
  copy:
    content: |
      {% for host in vars['play_hosts'] %}
      {{ host }} {{ hostvars[host].date_res.stdout }}
      {% endfor %}
    dest: "/ngs/app/{{ app_user }}/bldall"
  when: inventory_hostname == 'host-001.example.com'
On host-001.example.com I placed a file containing: host1 file content.
On host-002.example.com I placed a file containing: host2 file content.
Output on the host specified in the copy task:
host-001.example.com host1 file content
host-002.example.com host2 file content
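Hedged side note: instead of hard-coding the target host in a when condition, the same copy task could be run once and delegated to that host. A sketch, assuming the same date_res registration as above:
- name: copy bulk output
  copy:
    content: |
      {% for host in ansible_play_hosts %}
      {{ host }} {{ hostvars[host].date_res.stdout }}
      {% endfor %}
    dest: "/ngs/app/{{ app_user }}/bldall"
  run_once: true
  delegate_to: host-001.example.com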

Create Local File With Ansible Template From Variables

I'm running an Ansible playbook against a number of EC2 instances to check if a directory exists.
---
- hosts: all
  become: true
  tasks:
    - name: Check if foo is installed
      stat:
        path: /etc/foo
      register: path
    - debug: msg="{{ path.stat.exists }}"
And I would like to generate a local file that lists the private IP addresses of the EC2 instances and states whether or not the directory foo exists.
I can get the private IP addresses of the instances with this task:
- name: Get info from remote
  shell: curl http://169.254.169.254/latest/meta-data/local-ipv4
  register: bar
- debug: msg="{{ bar.stdout }}"
How do I create a local file with content
IP address: 10.100.0.151 directory foo - false
IP address: 10.100.0.152 directory foo - true
I've tried adding a block for this as such
- hosts: localhost
  become: false
  vars:
    installed: "{{ bar.stdout }}"
    status: "{{ path.stat.exists }}"
    local_file: "./Report.txt"
  tasks:
    - name: Create local file with info
      copy:
        dest: "{{ local_file }}"
        content: |
          "IP address {{ installed }} foo - {{ status }}"
But it doesn't look like I can read the values of variables registered in the earlier plays. What am I doing wrong, please?
A similar question has been answered here. Basically, what you want is to reference the registered results through the hostvars variable. This should work:
- hosts: localhost
  become: false
  vars:
    local_file: "./Report.txt"
  tasks:
    - name: Create local file with info
      lineinfile:
        path: "{{ local_file }}"
        line: "IP Address: {{ hostvars[item]['bar'].stdout }} - Installed: {{ hostvars[item]['path'].stat.exists }}"
      with_items: "{{ query('inventory_hostnames', 'all') }}"
And this should populate your local ./Report.txt file with the info you need.
I've used the ec2_metadata_facts module to get the IP address via ansible_ec2_local_ipv4.
I've also created the directory /tmp/testdir on the second host.
- hosts: test_hosts
  gather_facts: no
  vars:
    directory_name: /tmp/testdir
  tasks:
    - ec2_metadata_facts:

    - name: check if directory '{{ directory_name }}' exists
      stat:
        path: "{{ directory_name }}"
      register: path

    # Empty the output file first, because lineinfile (as far as I know)
    # can't overwrite a file, only append lines to the old content.
    - name: create empty output file
      copy:
        content: ""
        dest: outputfile
      delegate_to: localhost

    - name: write output to outputfile
      lineinfile:
        dest: outputfile
        line: "IP Address: {{ ansible_ec2_local_ipv4 }} {{ directory_name }} - {{ path.stat.exists }}"
        state: present
      with_items: "{{ groups.all }}"
      # with_items: "{{ ansible_play_hosts }}" can also be used here
      delegate_to: localhost
The outputfile looks like:
IP Address: xxx.xx.x.133 /tmp/testdir - False
IP Address: xxx.xx.x.45 /tmp/testdir - True
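If you want every line to carry that host's own IP and stat result (rather than the values of the host currently executing the task), a hedged variant of the last task pulls them through hostvars, as in the first answer. A sketch, assuming ec2_metadata_facts and the stat task have run on all play hosts:
- name: write output to outputfile
  lineinfile:
    dest: outputfile
    line: "IP Address: {{ hostvars[item].ansible_ec2_local_ipv4 }} {{ directory_name }} - {{ hostvars[item].path.stat.exists }}"
    state: present
  with_items: "{{ ansible_play_hosts }}"
  run_once: true
  delegate_to: localhost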

How to change and use a global variable for each host in Ansible?

I am trying to write a playbook to set up MySQL master-slave replication with multiple slave servers. For each slave server, I need access to a variable called next_id that should be incremented before use for each host. For example, for the first slave server next_id should be 2, for the second slave server it should be 3, and so on. What is the way to achieve this in Ansible?
This is the yaml file I use to run my role.
- name: Setup the environment
  hosts: master, slave_hosts
  serial: 1
  roles:
    - setup-master-slave
  vars:
    master_ip_address: "188.66.192.11"
    slave_ip_list:
      - "188.66.192.17"
      - "188.66.192.22"
This is the yaml file where I use the variable.
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    server_id: "{{ next_id }}"
I can suggest a way that works, per your requirement, for any number of slave servers; it is not based on any particular module, just plain shell commands.
Here my master_ip_address is 10.x.x.x and input is the starting value of next_id, which is incremented for every slave server.
- hosts: master,slave1,slave2,slave3,slave4
  serial: 1
  gather_facts: no
  tasks:
    - shell: echo "{{ input }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname == '10.x.x.x'

    - shell: cat yes.txt
      register: var
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "$(({{ var.stdout }}+1))"
      register: next_id
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "{{ next_id.stdout }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - name: Replace conf file with template
      template:
        src: masterslave.j2
        dest: 50-server.cnf
      when: inventory_hostname != '10.x.x.x'
      vars:
        ip_address: "{{ master_ip_address }}"
        server_id: "{{ next_id.stdout }}"
[ansibleserver@172 test_increment]$ cat masterslave.j2
- {{ ip_address }}
- {{ server_id }}
Now, if you run
ansible-playbook increment.yml -e 'master_ip_address=10.x.x.x input=1'
slave1 server
[root@slave1 ~]# cat 50-server.cnf
- 10.x.x.x
- 2
slave2 server
[root@slave2 ~]# cat 50-server.cnf
- 10.x.x.x
- 3
slave3 server
[root@slave3 ~]# cat 50-server.cnf
- 10.x.x.x
- 4
and so on
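As a side note, a hedged alternative that needs no temporary files: derive server_id from the host's position in an inventory group. This is only a sketch, assuming the slaves are all in a group called slave_hosts as in the question:
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    # master is id 1; the first slave in the group gets 2, the second 3, and so on
    server_id: "{{ groups['slave_hosts'].index(inventory_hostname) + 2 }}"
Each host computes its own id from static inventory data, so no serial ordering or counter file is needed.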
"serial" is available in a playbooks only and not working in roles
therefore, for fully automatic generation of server_id for MySQL in Ansible roles, you can use the following:
roles/my_role/defaults/main.yml
---
cluster_alias: test_cluster
mysql_server_id_config: "settings/mysql/{{ cluster_alias }}/server_id.ini"
roles/my_role/tasks/server-id.yml
---
- name: Ensures '{{ mysql_server_id_config | dirname }}' dir exists
  file:
    path: '{{ mysql_server_id_config | dirname }}'
    state: directory
    owner: root
    group: root
    mode: 0755
  delegate_to: localhost

- name: Ensure mysql server id config file exists
  copy:
    content: "0"
    dest: "{{ mysql_server_id_config }}"
    force: no
    owner: root
    mode: '0755'
  delegate_to: localhost

- name: server-id
  include_tasks: server-id-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
  tags:
    - server-id
roles/my_role/tasks/server-id-tasks.yml
# tasks file
---
- name: get last server id
  shell: >
    cat "{{ mysql_server_id_config }}"
  register: _last_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: get new server id
  shell: >
    echo "$(({{ _last_mysql_server_id.stdout }}+1))"
  register: _new_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: save new server id
  shell: >
    echo -n "{{ _new_mysql_server_id.stdout }}" > "{{ mysql_server_id_config }}"
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- debug:
    var: _new_mysql_server_id.stdout
  tags:
    - server-id

- name: Replace conf file with template
  template:
    src: server-id.j2
    dest: server-id.cnf

Use Ansible to ensure a file exists, ignoring any extra lines

I'm trying to update the sssd.conf file on about 200 servers with a standardized configuration file; however, there is one possible exception to the standard. Most servers will have a config that looks like this:
[domain/domainname.local]
id_provider = ad
access_provider = simple
simple_allow_groups = unixsystemsadmins, datacenteradmins, sysengineeringadmins, webgroup
default_shell = /bin/bash
fallback_homedir = /export/home/%u
debug_level = 0
ldap_id_mapping = false
case_sensitive = false
cache_credentials = true
dyndns_update = true
dyndns_refresh_interval = 43200
dyndns_update_ptr = true
dyndns_ttl = 3600
ad_use_ldaps = True
[sssd]
services = nss, pam
config_file_version = 2
domains = domainname.local
[nss]
[pam]
However, on some servers, there's an additional line after simple_allow_groups called simple_allow_users, and each server that has this line has it configured for specific users to be allowed to connect without being a member of an LDAP group.
My objective is to replace the sssd.conf file on all servers, but not to remove this simple_allow_users line if it exists. I looked into lineinfile and blockinfile, but neither of these seems to really handle this exception. I'm thinking I'll have to check the file for the existence of the line, store it in a variable, push the new file, and then add the line back using the variable afterwards, but I'm not entirely sure that is the best way to handle it. Any suggestions on the best way to accomplish what I'm looking to do?
Thanks!
I would do the following:
1. See if simple_allow_users exists in the current sssd.conf file.
2. Change your model configuration to add the current value of the simple_allow_users line, if it exists.
3. Overwrite the sssd.conf file with the new content.
You can use a Jinja2 conditional to achieve step 2: https://jinja2docs.readthedocs.io/
I believe the tasks below will solve what you need; just remember to test on a single host and back up the original file for good measure ;-)
- shell: grep 'simple_allow_users' {{ sssd_conf_path }}
  vars:
    sssd_conf_path: /etc/sssd.conf
  register: grep_result
  # grep exits 1 when the line is absent, which is a valid outcome here
  failed_when: grep_result.rc not in [0, 1]

- set_fact:
    configuration_template: |
      [domain/domainname.local]
      id_provider = ad
      access_provider = simple
      simple_allow_groups = unixsystemsadmins, datacenteradmins, sysengineeringadmins, webgroup
      {% if 'simple_allow_users' in grep_result.stdout %}
      {{ grep_result.stdout.rstrip() }}
      {% endif %}
      default_shell = /bin/bash
      ..... Rest of your config file

- copy:
    content: "{{ configuration_template }}"
    dest: "{{ sssd_conf_path }}"
  vars:
    sssd_conf_path: /etc/sssd.conf
I used Zeitounator's tip, along with this question: Only check whether a line is present in a file (ansible).
This is what I came up with:
*as it turns out, the simple_allow_groups are being changed after the systems are deployed (thanks for telling the admins about that, you guys... /snark for the people messing with my config files)
---
- name: Get Remote SSSD Config
  become: true
  slurp:
    src: /etc/sssd/sssd.conf
  register: slurpsssd

- name: Set simple_allow_users if exists
  set_fact:
    simpleallowusers: "{{ linetomatch }}"
  loop: "{{ file_lines }}"
  loop_control:
    loop_var: linetomatch
  vars:
    decode_content: "{{ slurpsssd['content'] | b64decode }}"
    file_lines: "{{ decode_content.split('\n') }}"
  when: '"simple_allow_users" in linetomatch'

- name: Set simple_allow_groups
  set_fact:
    simpleallowgroups: "{{ linetomatch }}"
  loop: "{{ file_lines }}"
  loop_control:
    loop_var: linetomatch
  vars:
    decode_content: "{{ slurpsssd['content'] | b64decode }}"
    file_lines: "{{ decode_content.split('\n') }}"
  when: '"simple_allow_groups" in linetomatch'

- name: Install SSSD Config
  copy:
    src: etc/sssd/sssd.conf
    dest: /etc/sssd/sssd.conf
    owner: root
    group: root
    mode: 0600
    backup: yes
  become: true

- name: Add simple_allow_users back to file if it existed
  lineinfile:
    path: /etc/sssd/sssd.conf
    line: "{{ simpleallowusers }}"
    insertafter: "^simple_allow_groups"
  when: simpleallowusers is defined
  become: true

- name: Replace simple allow groups with existing values
  lineinfile:
    path: /etc/sssd/sssd.conf
    line: "{{ simpleallowgroups }}"
    regexp: "^simple_allow_groups"
    backrefs: true
  when: simpleallowgroups is defined
  become: true
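For reference, a more compact (untested) sketch of the "capture the existing line" step, using the select filter on the slurped content instead of a set_fact loop:
- name: Capture existing simple_allow_users line, if any
  set_fact:
    simpleallowusers: "{{ file_lines | select('search', '^simple_allow_users') | first }}"
  vars:
    file_lines: "{{ (slurpsssd['content'] | b64decode).split('\n') }}"
  when: file_lines | select('search', '^simple_allow_users') | list | length > 0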

saving variables from playbook run to ansible host local file

I'm sort of trying to build an inventory file from an Ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running both service libvirtd status and, if successful, virsh list --all, and storing the values in a file on the Ansible control host.
I've tried a few different playbook structures, but none has been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not (libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
I was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines that don't have the libvirtd service and have an empty or missing list of VMs), because the above doesn't really handle that.
But at least there is output from all the KVM hosts!
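One hedged way to filter those false positives (a sketch, assuming the virt module returns the guest names under list_vms in the all_vms result registered above) is to write the line only when libvirtd is present and the guest list is non-empty:
- name: saving cumulative result
  lineinfile:
    line: '{{ ansible_hostname }} has {{ all_vms.list_vms }}'
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - libvirtd_status.rc == 0
    - (all_vms.list_vms | default([])) | length > 0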
