I am trying to attach multiple files to a single mail. I have to attach my host logs, which are generated dynamically, to this mail. As of now I have 2 hosts. The dynamic files are generated as /ansible_log/10.0.0.1_log.txt and /ansible_log/10.0.0.2_log.txt (and so on). I can already send mail; below is the script:
Inventory File:
[logs]
x.x.x.1
x.x.x.2
- name: Send e-mail to users, attaching report
  mail:
    host: x.x.x.x
    port: xx
    to: "{{ mailid }}"
    subject: Server Logs
    body: Please find attached logs.
    attach:
      - /ansible_log/{{ item }}_log.txt
  delegate_to: localhost
  with_items: "{{ inventory_hostname }}"
  run_once: True
  tags: send_mail
Here I want to send one mail that attaches both log files. If I remove run_once: True, it sends two separate mails, each with one host's log file. If the inventory list grows, these mails will bombard the recipient's mailbox. To avoid this I want to consolidate all the log files into a single mail to the recipient.
You cannot use a loop there; you must give attach a prebuilt list of files.
You can use a set_fact task to build the list of files before running the mail task.
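A minimal sketch of that approach, assuming the log files follow the /ansible_log/&lt;host&gt;_log.txt naming from the question (the log_files variable name is made up for illustration):

```yaml
- name: Build the list of log files from all hosts in the play
  set_fact:
    log_files: "{{ ansible_play_hosts | map('regex_replace', '^(.*)$', '/ansible_log/\\1_log.txt') | list }}"
  run_once: True

- name: Send one e-mail with every log attached
  mail:
    host: x.x.x.x
    port: xx
    to: "{{ mailid }}"
    subject: Server Logs
    body: Please find attached logs.
    attach: "{{ log_files }}"
  delegate_to: localhost
  run_once: True
  tags: send_mail
```

Because attach now receives the full list, one message carries every host's log no matter how large the inventory grows.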
On our Linux workstations we have an AD user setup, so that when a user accesses a machine it generates a /home/{username} folder with all of the items defined in our skel. So each machine in our pool has multiple, differing user folders.
I need to modify some files that are located in each of these user folders. How can I make ansible loop through each folder in the /home/* folder, so that the playbook is being applied to all of the users?
It should be noted that the playbook is being run as root, so I don't need to run the playbook itself as the user.
IMHO, this is the wrong way to approach this - hopefully your company has a record of who should have access to each system, and a provision to get those accounts set up automatically on each system. Given that, this is a good example of "we need to fix this today while we get the better solution set up."
The example that @mdaniel provided should work. You can implement it in a playbook like this:
---
- hosts: all
  gather_facts: false
  tasks:
    - name: "Get home directories"
      shell: /bin/ls -d /home/*
      register: home_dirs

    - name: "Touch files"
      debug:
        msg: "Working on {{ item }}"
      loop: "{{ home_dirs.stdout_lines }}"
Of course, replace the debug: module with the tasks you want to automate.
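As a side note, if you prefer to avoid shelling out to ls, Ansible's built-in find module can enumerate the directories instead (a sketch of the same loop, not tested against your environment):

```yaml
- name: "Get home directories"
  find:
    paths: /home
    file_type: directory
  register: home_dirs

- name: "Report each home directory"
  debug:
    msg: "Working on {{ item.path }}"
  loop: "{{ home_dirs.files }}"
```

The find module returns a files list of dicts, so the loop references item.path rather than a bare string.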
Running Ansible 2.9.3
Working in a large environment with hosts coming and going on a daily basis, I need to use wildcard hostnames in a host group, e.g.:
[excluded_hosts]
host01
host02
host03
[everyone]
host*
In my playbook I have:

- name: "Test working with host groups"
  hosts: everyone,!excluded_hosts
  connection: local
  tasks:
The problem is, the task is running on hosts in the excluded group.
If I specifically list one of the excluded hosts in the everyone group, that host then gets properly excluded.
So Ansible isn't working as one might assume it would.
What's the best way to get this to work?
I tried:
hosts: "{{ ansible_hostname }}",!excluded_hosts
but it errored as invalid yaml syntax.
Requirements: I cannot specifically list each host; they come and go too frequently.
The playbooks are going to be automatically copied down to each host and the execution started afterwards, therefore I need to use the same ansible command line on all hosts.
I was able to come up with a solution to my problem:
---
- name: "Add host name to thishost group"
  hosts: localhost
  connection: local
  tasks:
    - name: "add host"
      ini_file:
        path: /opt/ansible/etc/hosts
        section: thishost
        option: "{{ ansible_hostname }}"
        allow_no_value: yes
    - meta: refresh_inventory

- name: "Do tasks on all except excluded_hosts"
  hosts: thishost,!excluded_hosts
  connection: local
  tasks:
What this does is add the host's name to a group called "thishost" when the playbook runs. It then refreshes the inventory and runs the next play.
This avoids having to constantly update the inventory with thousands of hosts, and avoids the use of wildcards and ranges.
Blaster,
Have you tried assigning hosts by IP address yet?
You can use wildcard patterns ... IP addresses, as long as the hosts are named in your inventory by ... IP address:
192.0.*
*.example.com
*.com
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
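For example, if your inventory names hosts by IP address, a wildcard pattern can be combined with the exclusion directly in the play header (a sketch; the 192.0.* prefix is just an illustration):

```yaml
- name: "Test working with host groups"
  hosts: 192.0.*,!excluded_hosts
  connection: local
  tasks: []
```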
Regarding monthly patching, we currently have a pre_tasks mail notification sent out prior to the updates being installed, but because {{ inventory_hostname }} is included in the body of the email, it is sent out on a per-server basis.
Is there a way to replace the inventory_hostname in the body to reference all servers within the hosts group (in this case groupOne) so that one email is sent out with all hostnames rather than individual emails?
- hosts: groupOne
  become: true
  any_errors_fatal: true
  pre_tasks:
    - name: Notification email of patching beginning
      mail:
        host: XXXXXXXXXXXXXXXXXXXXX.com
        port: XXXXXX
        to: hello@gmail.com
        sender: patching@gmail.com
        subject: Patching_Notification
        body: "Monthly patching is about to commence for {{ inventory_hostname }}. A further email will be sent on completion"
You can use the group_names magic variable to get the group name(s) of a host.
https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#magic
You probably want to run your pre_tasks notification only once, with run_once: yes.
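Putting both suggestions together, a sketch of the single-email variant (the groupOne group name comes from the question; the join formatting is an assumption):

```yaml
- name: Notification email of patching beginning
  mail:
    host: XXXXXXXXXXXXXXXXXXXXX.com
    port: XXXXXX
    to: hello@gmail.com
    sender: patching@gmail.com
    subject: Patching_Notification
    body: "Monthly patching is about to commence for: {{ groups['groupOne'] | join(', ') }}. A further email will be sent on completion"
  run_once: yes
```

The groups magic variable expands to every hostname in the group, so run_once sends a single message listing them all.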
Is there an easy way to log output from multiple remote hosts to a single file on the server running ansible-playbook?
I have a variable called validate which stores the output of a command executed on each server. I want to take validate.stdout_lines and drop the lines from each host into one file locally.
Here is one of the snippets I wrote that did not work:
- name: Write results to logfile
  blockinfile:
    create: yes
    path: "/var/log/ansible/log"
    insertafter: BOF
    block: "{{ validate.stdout }}"
  delegate_to: localhost
When I executed my playbook with the above, it only captured the output from one of the remote hosts. I want to capture the lines from all hosts in that single /var/log/ansible/log file.
One thing you should do is add a marker to blockinfile, so that the result from each host is wrapped in its own unique block.
The second problem is that the tasks run in parallel (even with delegate_to: localhost, because the loop over hosts is realised by the Ansible engine), so each task effectively overwrites the others' /var/log/ansible/log file.
As a quick workaround you can serialise the whole play:
- hosts: ...
  serial: 1
  tasks:
    - name: Write results to logfile
      blockinfile:
        create: yes
        path: "/var/log/ansible/log"
        insertafter: BOF
        block: "{{ validate.stdout }}"
        marker: "# {{ inventory_hostname }} {mark}"
      delegate_to: localhost
The above produces the intended result, but if serial execution is a problem, you might consider writing your own loop for this single task (for ideas, refer to support for "serial" on an individual task #12170).
Speaking of other methods: in two tasks, you can concatenate the results into a single list (no issue with parallel execution then, but pay attention to delegated facts) and then write it to a file using the copy module (see Write variable to a file in Ansible).
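A sketch of that two-task variant (the validate variable comes from the question; the all_output name and the join formatting are assumptions):

```yaml
- name: Collect every host's output into one list
  set_fact:
    all_output: "{{ ansible_play_hosts | map('extract', hostvars, ['validate', 'stdout']) | list }}"
  run_once: True

- name: Write the combined results to a single local logfile
  copy:
    content: "{{ all_output | join('\n') }}"
    dest: /var/log/ansible/log
  run_once: True
  delegate_to: localhost
```

Because the extract filter reads every host's registered result out of hostvars in one place, only a single delegated write touches the file and nothing is overwritten.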
I have a playbook in which one of the tasks is to copy template files, with delegate_to, to a specific server named "grover".
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have names that match each server's hostname, and the delegate server grover also must have its own instance of the file. The template copy should only take place if the file does not already exist: /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
After the run, in addition to /tmp/grover, the files /tmp/bigbird and /tmp/oscar should also exist on server grover.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but then it also clobbers/overwrites /tmp/grover which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and a conditional on the template task to skip grover if the file already exists, it not only skips grover, it skips every other server that would run against grover for that task as well. If I try to run on every server BUT grover, it still fails, because the delegate server is grover and that would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar

  - name: File clobber check.
    stat:
      path: /tmp/{{ ansible_hostname }}
    register: clobber_check
    delegate_to: "{{ item }}"
    with_items:
      - "{{ grover }}"

  - name: Copy templates to grover.
    template:
      backup: yes
      src: /opt/template
      dest: /tmp/{{ ansible_hostname }}
      group: root
      mode: "u=rw,g=rw"
      owner: root
    delegate_to: "{{ item }}"
    with_items:
      - "{{ grover }}"
    when: ( not clobber_check.stat.exists ) and ( clobber_check is defined )
If I run that and /tmp/grover exists on grover, it simply skips the entire copy task because the conditional failed on grover. Thus the other servers never get their /tmp/bigbird and /tmp/oscar templates copied to grover.
Lastly, I'd like to avoid hacky workarounds like saving a backup of grover's original config file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here, I can't have been the only person to run into this scenario. Anyone have any idea on how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items used this way does let you delegate_to a host-pattern group, it also prevents variables from being defined/assigned properly to the hosts within that group.
Defining a single host entity for delegate_to fixes the issue, and the tasks then execute as expected on all hosts defined.
Konstantin Suvorov should get the credit for this, as he gave the original answer.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.
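For clarity, a sketch of the corrected tasks with with_items removed and grover given as a plain delegate (how the registered clobber_check is consumed per host is an assumption):

```yaml
- name: File clobber check.
  stat:
    path: /tmp/{{ ansible_hostname }}
  register: clobber_check
  delegate_to: grover

- name: Copy templates to grover.
  template:
    backup: yes
    src: /opt/template
    dest: /tmp/{{ ansible_hostname }}
    group: root
    mode: "u=rw,g=rw"
    owner: root
  delegate_to: grover
  when: clobber_check is defined and not clobber_check.stat.exists
```

With a single delegate, each host's own stat result drives its own when, so grover's pre-existing /tmp/grover is left alone while /tmp/bigbird and /tmp/oscar are still created on grover.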