Adding entry to end of line Ansible SLES 12

On a SLES 12 server, I am trying to append the extra entry S-1-5-21-84296906-944397292-530207130-587119 to a line in my /etc/security/pam_winbind.conf file.
The line is:
require_membership_of=S-1-5-21-84296906-944397292-530207130-496773,S-1-5-21-84296906-944397292-530207130-71056,S-1-5-21-84296906-944397292-530207130-218591
My playbook:
---
- name: Configuring ad_access_filter for RHEL systems.
  hosts: smt-test
  become: yes
  tasks:
    - name: Taking Backup.
      copy:
        src: /etc/security/pam_winbind.conf
        dest: /etc/security/pam_winbind.conf.backup
        remote_src: yes
    - name: Add HQCloud to the sssd.conf file
      lineinfile:
        path: /etc/security/pam_winbind.conf
        backrefs: yes
        regexp: '(^*2185915*)$'
        line: '\1,S-1-5-21-84296906-944397292-530207130-587119'
    - name: Add HQCloudScapeSupp to the sudoers file.
      lineinfile:
        path: /etc/sudoers
        line: 'HQCloudScapeSupp ALL=(ALL) NOPASSWD: ALL'
    - name: Restarting WinBind Service
      service:
        name: winbind
        state: restarted
Since the pam_winbind.conf will be different on each server, how do I just add that entry to the end of that line regardless of the other memberships?

There are a few problems with your approach, IMO:
- It might be possible to add your membership entry with only a regex and backreferences, but achieving idempotence will be a real pain. You actually need to add the required membership only if it does not already exist anywhere in the string (it might be present, but not in last position). If it is already present anywhere, you should not touch anything.
- You are making a backup of the file in a separate task, whereas the lineinfile module can do this automatically for you, and only when there is a change.
- You are unconditionally restarting your service, whereas it should only restart when something has actually changed that requires a restart.
The below playbook addresses the above issues:
---
- name: Configuring ad_access_filter for RHEL systems.
  hosts: smt-test
  become: yes
  vars:
    config_file: /etc/security/pam_winbind.conf
    required_member: S-1-5-21-84296906-944397292-530207130-587119
    search_needle: require_membership_of=
    search_regexp: "^{{ search_needle }}(.*)$"
  tasks:
    - name: slurp file content to get existing membership entries
      slurp:
        path: "{{ config_file }}"
      register: slurped_file

    - name: Add HQCloud to the sssd.conf file if it does not exist + backup if any change
      vars:
        file_content_lines: "{{ (slurped_file.content | b64decode).splitlines() }}"
        requirement_line: "{{ file_content_lines | select('match', search_needle) | first }}"
        existing_members: "{{ (requirement_line | regex_replace(search_regexp, '\\g<1>')).split(',') | map('trim') }}"
        wanted_members: "{{ existing_members | union([required_member]) }}"
      lineinfile:
        path: "{{ config_file }}"
        regexp: "{{ search_regexp }}"
        backup: true
        line: "{{ search_needle }}{{ wanted_members | join(',') }}"

    - name: Add HQCloudScapeSupp to the sudoers file.
      lineinfile:
        path: /etc/sudoers
        line: 'HQCloudScapeSupp ALL=(ALL) NOPASSWD: ALL'
      # Not really sure this is needed
      notify: Restart winbind

  handlers:
    - name: Restart winbind
      service:
        name: winbind
        state: restarted
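A quick way to validate a play like this without touching the target, and to confirm it is idempotent on a second run, is Ansible's check/diff mode (the playbook file name here is illustrative):

ansible-playbook winbind_membership.yml --check --diff

On a second real run, every task should report ok rather than changed, and the handler should not fire.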

Related

File contents are copied again and again, ansible is not acting as idempotent

I'm trying to add file contents to remote nodes. The script below works, but the problem is that if I run it again, the file contents are copied again and again to the remote nodes; Ansible is not acting idempotently. Any suggestions will be appreciated.
- hosts: all
  vars:
    content: "{{ lookup('file','/etc/foo.txt') }}"
  tasks:
    - name: finding all files present in directory
      find:
        paths: /etc/something.d/
        file_type: file
        patterns: '*.d'
      register: c1
      become: true

    - lineinfile:
        path: "{{ item.path }}"
        line: "{{ contents }}"
        state: present
        create: yes
        backup: yes
      register: c2
      become: true
      with_items: "{{ c1.files }}"

    - debug:
        var: c1

    - debug:
        var: c2
Your lineinfile task is behaving as documented, since you specified neither the regexp nor the search_string parameter. (Note also that the variable is defined as content but referenced as contents; the two names must match.)
Since we do not know whether the file you read might contain regular expression metacharacters, it is better to use search_string, which can simply contain the same line again:
- lineinfile:
    path: "{{ item.path }}"
    line: "{{ content }}"
    search_string: "{{ content }}"
    state: present
    create: yes
    backup: yes
  register: c2
  become: true
  with_items: "{{ c1.files }}"
See https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html for details.
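To illustrate why regexp breaks idempotence here (using hypothetical file content): if the looked-up line contains regex metacharacters, it will not match itself literally, so the module appends a fresh copy on every run; search_string compares literally and stays idempotent:

# Hypothetical line containing regex metacharacters: "Timeout=30 (hard)"
- lineinfile:
    path: /etc/example.conf              # hypothetical path
    line: "Timeout=30 (hard)"
    regexp: "Timeout=30 (hard)"          # parentheses act as a regex group, so this never
                                         # matches the literal line; a duplicate is appended each run
- lineinfile:
    path: /etc/example.conf
    line: "Timeout=30 (hard)"
    search_string: "Timeout=30 (hard)"   # literal comparison; matches, so no duplicate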

How to define multiple with_items using registered variables in Ansible

I am in the process of achieving the list of tasks below; could someone please rectify the playbook or suggest a way to meet the requirement?
The high-level purpose of the activity:
Find the previous day's log files in multiple paths and archive them under a date-wise folder (a folder has to be created for the particular date) in a different path.
My approach is:
Create a date-wise directory, then search for the previous day's log files, copy them into the newly created directory, and then archive it.
I am having an issue when defining paths and variables in the copy section. Can someone help with this?
- name: Purge old spider logs
  become: true
  hosts: node1
  vars:
    date: "{{ lookup('pipe', 'date +%Y-%m-%d') }}"
  tasks:
    - name: create a directory
      file:
        path: /path/{{ date }}
        state: directory
        mode: '777'
      register: logdir

    - name: Find log files
      find:
        path: /test/logs
        age: 3600
        patterns:
          - "name.log.*"
        recurse: yes
      register: testlogs

    - debug:
        var: testlogs.path

    - debug: var=item.files
      with_items: '{{ testlogs.files }}'

    - name: Copy files in to backup location
      copy:
        src: "{{ item.files }}"
        dest: "{{ item.path }}"
      with_items:
        - '{{ item.files.testlog.files }}'
        - '{{ item.path.logdir.path }}'
If I understand your problem, you want to copy all remote log files to another destination in a dated folder:
- name: Purge old spider logs
  become: true
  hosts: node1
  vars:
    date: "{{ lookup('pipe', 'date +%Y-%m-%d') }}"
  tasks:
    - name: create a remote directory
      file:
        path: /path/{{ date }}
        state: directory
        mode: '777'
      register: logdir

    - name: Find log files
      find:
        path: logs
        age: 3600
        patterns:
          - "name.log.*"
        recurse: yes
      register: testlogs

    - name: Copy (remote) files in to backup location (remote)
      copy:
        remote_src: yes
        src: "{{ item.path }}"
        dest: "{{ logdir.path }}/"
      with_items:
        - '{{ testlogs.files }}'
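The question also mentions archiving the dated folder afterwards. A minimal sketch of that final step, assuming the community.general collection is installed for its archive module:

    - name: Archive the dated backup directory (sketch; assumes community.general is available)
      community.general.archive:
        path: /path/{{ date }}
        dest: /path/{{ date }}.tar.gz
        format: gz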

ansible - ensure content of file is the same across servers

Using Ansible 2.9.12
Question: How do I configure Ansible to ensure the contents of a file are equal across at least 3 hosts, when the file is present on at least one of them?
Imagine there are 3 hosts.
Host 1 does not have /file.txt.
Host 2 has /file.txt with contents hello.
Host 3 has /file.txt with contents hello.
Before the play is run, I am unaware whether the file is present or not. The file could exist on host1, host2 or host3, but it exists on at least one of the hosts.
How would I ensure that each time Ansible runs, the files across the hosts are equal, so that in the end Host 1 has the same file with the same contents as Host 2 and Host 3?
I'd like this to be determined dynamically, instead of specifying host names or group names, e.g. when: inventory_hostname == host1.
I am not expecting a check to see whether the contents of hosts 2 and 3 are equal.
I do, however, want this to be set up in an idempotent fashion.
The play below does the job, I think
shell> cat pb.yml
- hosts: all
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: /tmp
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
But this doesn't check that the existing files are in sync. It would be safer to sync all hosts, I think (note there is no when condition here, so every host receives the reference copy):

        - name: Synchronize file
          synchronize:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt
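Alternatively, because the copy module compares checksums before writing, simply dropping the when condition from the earlier copy task would also bring every host in line while leaving already-identical files untouched; a sketch under that assumption:

        - name: Copy file to every host (no-op where content already matches)
          copy:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt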
Q: "FATAL. could not find or access '/tmp/test-multi-01/file.txt on the Ansible controller. However, folder /tmp/test-multi-03 is present with the file.txt in it."
A: There is a problem with the fetch module when the task is delegated to another host. When the TASK [Fetch file.] is delegated to test-multi-01 which is localhost in this case changed: [test-multi-03 -> 127.0.0.1] the file will be fetched from test-multi-01 but will be stored in /tmp/test-multi-03/file.txt. The conclusion is, the fetch module ignores delegate_to when it comes to creating host-specific directories (not reported yet).
As a workaround, it's possible to set flat: true and store the files in a specific directory. For example, add the variable sync_files_dir with the directory, set fetch flat: true, and use the directory to both fetch and copy the file
- hosts: all
  vars:
    sync_files_dir: /tmp/sync_files
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dir for files to be fetched and synced
          file:
            state: directory
            path: "{{ sync_files_dir }}"
          delegate_to: localhost

        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - debug:
            var: status

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: "{{ sync_files_dir }}/"
            flat: true
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "{{ sync_files_dir }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
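To verify the result, two extra tasks (names are illustrative) can compare checksums across the play hosts after the copy:

    - name: Get final checksum
      stat:
        path: /file.txt
        checksum_algorithm: sha256
      register: final_stat

    - name: Assert every host holds identical content
      assert:
        that:
          - ansible_play_hosts | map('extract', hostvars, ['final_stat', 'stat', 'checksum']) | unique | length == 1
      run_once: true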
We can achieve this by fetching the file from the hosts where it exists; the file(s) will then be available on the control machine. However, if the source file exists on more than one node, there will be no single source of truth.
Consider an inventory:
[my_hosts]
host1
host2
host3
Then the play below can fetch the file and use it to copy to all nodes.
# Fetch the file from remote host if it exists
- hosts: my_hosts
  tasks:
    - stat:
        path: /file.txt
      register: my_file

    - fetch:
        src: /file.txt
        dest: /tmp/
      when: my_file.stat.exists

    - find:
        paths:
          - /tmp
        patterns: file.txt
        recurse: yes
      register: local_file
      delegate_to: localhost

    - copy:
        src: "{{ local_file.files[0].path }}"
        dest: /tmp
If multiple hosts had this file, it would be in /tmp/{{ ansible_host }}. Then, as we won't have a single source of truth, our best option is to use the first file and apply it to all hosts.
Well, I believe the get_url module is pretty versatile; it allows local file paths or paths from a web server. Try it and let me know.
- name: Download files in all host
  hosts: all
  tasks:
    - name: Download file from a file path
      get_url:
        url: file:///tmp/file.txt
        dest: /tmp/
Edited answer:
(From the documentation: for the synchronize module, the "local host" is the host the synchronize task originates on, and the "destination host" is the host synchronize is connecting to.)
- name: Check that the file exists
  stat:
    path: /etc/file.txt
  register: stat_result

- name: copy the file to other hosts by delegating the task to the source host
  synchronize:
    src: path/host
    dest: path/host
  delegate_to: my_source_host
  when: stat_result.stat.exists
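For illustration, a concrete version of the same idea with hypothetical names (host2 is assumed to be the host known to hold the file); note the inverted condition, which pushes only to the hosts that lack it:

- name: Check whether this host already has the file
  stat:
    path: /etc/file.txt
  register: stat_result

- name: Push the file from host2 to the hosts that lack it
  synchronize:
    src: /etc/file.txt
    dest: /etc/file.txt
  delegate_to: host2          # hypothetical source host; delegation makes it the "local" end
  when: not stat_result.stat.exists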

ansible error 'first argument must be string or compiled pattern'

I have this code in my playbook:
- hosts: standby
  remote_user: root
  tasks:
    - name: replace hostname in config
      replace:
        path: /opt/agentd.conf
        regexp: #\s+Hostname\=
        replace: Hostname={{hname}}
        backup: yes
    - name: add database array in files
      lineinfile:
        path: /opt/zabbix_agent/share/scripts/{{ item }}
        line: 'DBNAME_ARRAY=( {{dbname}} )'
        insertafter: DB2PATH=/home/db2inst1/sqllib/bin/db2
        backup: yes
      with_items:
        - Connections
        - HadrAndLog
        - Memory
        - Regular
    - name: restart service
      shell: /etc/init.d/agent restart
      register: command_output
      become: yes
      become_user: root
      tags: restart
    - debug: msg="{{command_output.stdout_lines}}"
      tags: set_config_st
It should replace # Hostname= in a config file with Hostname=<given hostname> and add an array to 4 scripts; the array holds the name of the given database. Then it restarts the agent to apply the changes.
When I run this command:
ansible-playbook -i /Ansible/inventory/hostfile /Ansible/provision/nconf.yml --tags set_config_st --extra-vars "hname=fazi dbname=fazidb"
I get this error:
first argument must be string or compiled pattern
I searched a bit but couldn't find the reason. What should I do?
The problem is in this line:
regexp: #\s+Hostname\=
You have to quote the regex because YAML comments start with #, so everything after the # is ignored by Ansible; that is why the error occurs.
So the correct line should be:
regexp: '#\s+Hostname\='
or, with double quotes (inside which backslashes must themselves be escaped):
regexp: "#\\s+Hostname\\="
I think the problem is with indentation. Please try it as below.
- hosts: standby
  remote_user: root
  tasks:
    - name: replace hostname in config
      replace:
        path: /opt/agentd.conf
        regexp: '#\s+Hostname\='
        replace: Hostname={{hname}}
        backup: yes
    - name: add database array in files
      lineinfile:
        path: /opt/zabbix_agent/share/scripts/{{ item }}
        line: 'DBNAME_ARRAY=( {{dbname}} )'
        insertafter: DB2PATH=/home/db2inst1/sqllib/bin/db2
        backup: yes
      with_items:
        - Connections
        - HadrAndLog
        - Memory
        - Regular
    - name: restart service
      shell: /etc/init.d/agent restart
      register: command_output
      become: yes
      become_user: root
      tags: restart
    - debug: msg="{{command_output.stdout_lines}}"
      tags: set_config_st

Multiple with_items for copy module

I am trying to create switch backups, and I want to create dynamic files based on the config output and the switch hostname.
As an example, the configuration of switch1 should be saved in a file named hostname1, the config of switch2 in a file named hostname2, and so on.
I am getting the hostnames of the switches from a file.
My problem is that the config of switch1 gets saved in file hostname1, hostname2, etc.
How can I loop over the variables correctly to get the right config into the right file?
My current playbook looks like this:
---
- hosts: cisco
  connection: local
  gather_facts: false
  vars:
    backup_path: /etc/ansible/tests
    cli:
      host: "{{ inventory_hostname }}"
      username: test
      password: test
  tasks:
    - name: show run on switches
      ios_command:
        commands: show running-config
        provider: "{{ cli }}"
      register: config

    - name: creating folder
      file:
        path: "{{ backup_path }}"
        state: directory
      run_once: yes

    - name: get hostnames
      become: yes
      shell: cat /etc/ansible/tests/hostname_ios.txt
      register: hostnames

    - name: copy config
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "{{ backup_path }}/{{ item }}.txt"
      with_together: "{{ hostnames.stdout_lines }}"
...
Rely on Ansible's native host loop instead of inventing your own.
It's as simple as:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
Finally got it running.
I defined the names in my inventory as follows:
test-switch1 ansible_host=ip
Then I changed the host var in my playbook:
vars:
  backup_path: /etc/ansible/tests
  cli:
    host: "{{ ansible_host }}"
    username: test
    password: test
And then executed my tasks:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
