Run a task on localhost only once - ansible

I have a playbook which will be run on many servers (say ten). The first three tasks will be run on the remote servers. The last task of merging is done on localhost (Ansible controller).
When I run this playbook, the merging is happening each time (i.e.: ten times).
I want to do the merging task only one time, once all the above tasks are completed on all servers.
---
- name: Find the location
  debug:
- name: Extract details
  debug:
- name: Create csv file
  debug:
- name: Merge files
  debug:
  delegate_to: localhost

Use run_once to achieve this:
- hosts: all
  tasks:
    - name: do this on every host
      debug:
    - name: do this once on localhost
      debug:
      delegate_to: localhost
      run_once: true

Create a block containing 'Find', 'Extract', 'Create' that runs on remote servers.
Another block containing 'Merge' that runs only on localhost.
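A minimal sketch of that split into two plays, reusing the task names from the question (debug is a stand-in module here, as in the original):

```yaml
- hosts: all
  tasks:
    - name: Find the location
      debug:
    - name: Extract details
      debug:
    - name: Create csv file
      debug:

- hosts: localhost
  tasks:
    - name: Merge files
      debug:
```

The second play targets only localhost, so the merge naturally runs once, after the first play has finished on all remote hosts.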
The preferred way is to create a role for the first block, another role for second and use it in playbook:
- hosts: all
  roles:
    - find_extract_create

- hosts: localhost
  roles:
    - merge

Ansible how to run all tasks on localhost except one on remote server

I have a playbook that runs multiple tasks on localhost, like the one below. The exception is one task where I need to store the result in a file on a remote server and then use the file's content as a condition in the next task.
What is the best way to do this and how do we define credentials for this server?
- hosts: localhost
  tasks:
    - name: run task1
      debug: msg="running task on localhost"
    - name: run task 2
      debug: msg="running all others also localhost"
      register: output
    - name: store output in remote storage server
      debug: msg="Copy the content of register output to a file in remote server"
      delegate_to: "remote.storageserver.com"
Make two plays, and at the end of the first play set the variables you need for the tasks on the remote host. Set hosts to only localhost for the first play, and remotehost for the second play. Include whatever remote tasks you need in the second play. You can also define different credentials this way.
Ex:
- name: Local play
  hosts: localhost
  tasks:
    - name: run task1
      debug: msg="running task on localhost"
    - name: run task 2
      debug: msg="running all others also localhost"
      register: output

- name: Remote play
  hosts: remote.storageserver.com
  tasks:
    - name: store output in remote storage server
      debug: msg="Copy the content of register output to a file in remote server"
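To answer the credentials part of the question: connection settings for the second play can be supplied as play vars (or in inventory). A hedged sketch, where the user name and key path are placeholders, and the registered variable from the first play is read back via hostvars:

```yaml
- name: Remote play
  hosts: remote.storageserver.com
  vars:
    ansible_user: storage_user                         # placeholder user
    ansible_ssh_private_key_file: ~/.ssh/storage_key   # placeholder key path
  tasks:
    - name: store output in remote storage server
      copy:
        # variables registered on localhost in the first play
        # remain accessible through hostvars in later plays
        content: "{{ hostvars['localhost']['output'] | to_nice_json }}"
        dest: /tmp/output.json
```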

Run tasks on different hosts within an imported task

The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks .
Is this because I'm trying to run against hosts that were never included in the calling task ?
Is there a way to accomplish this other than in the calling playbook have:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task ? Is there a way to accomplish this other than in the calling playbook
I believe the answer is yes, although it'll be weird and could cause some confusion for people who interact with your playbook later.
given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
then the glue needed to bridge them together is the dynamic creation of a group and that delegate_to: keyword
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1  # or whatever
        # ... other hostvars here ...
    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars that say which host and IP should be added to the magic group name. That also has the benefit of keeping the magic group name localized to the file that consumes it.
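A hedged sketch of that combination: create_files.yml could take the host details as variables and do the add_host itself. The variable names (extra_host_name, extra_host_ip) are made up for illustration:

```yaml
# create_files.yml -- hypothetical combined version
- add_host:
    groups: not_known_at_launch_time
    name: '{{ extra_host_name }}'     # made-up var, set by the caller
    ansible_host: '{{ extra_host_ip }}'
- file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
  run_once: true  # avoid repeating the loop once per ssh_host
```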
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on the tasks to keep them from being run groups.ssh_hosts|length times, or similar.
As Vladimir correctly pointed out, what you actually want to happen is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before
    - add_host: # ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO

Fail task/play when host not reachable Ansible

I have written a playbook which copies a file from source to destination on multiple hosts. The playbook works if all hosts are reachable, but it doesn't fail if one of the hosts is unreachable.
ansible-playbook -i "10.11.12.13,10.11.12.14," -e "hostid=12345" test.yml
E.g. if the host "10.11.12.13" is not reachable, task execution skips the unreachable host and moves on to the next host.
Sample playbook
- hosts: localhost
  gather_facts: no
  tasks:
    - debug: msg="backup_restore.py file not found"

- name: Copy file
  hosts: all
  remote_user: test
  gather_facts: no
  vars:
    srcFolder: "/home/test"
    destFolder: "/opt/config"
  tasks:
    - block:
        - name: Copy file to node
          copy:
            src: '{{srcFolder}}/self.config'
            dest: '{{destFolder}}/self.config'
Is there a way to fail the task if any of the hosts is not reachable? I am using Ansible 2.6.1. Thank you in advance.
The brute-force solution is any_errors_fatal:
- name: Copy file
  hosts: all
  any_errors_fatal: true
Overview of other options is in Abort execution of remaining task if certain condition is failed.

Filter hosts using a variable from with_items in ansible

I have the following set up for Ansible, and I would like to parameterize a filter that will loop, and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) have common code and (2) serialise play execution per group of hosts (with targets inside a group running in parallel) would be to split your playbook into two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
  roles:
    - my-role
  # other common code
If you wanted to serialise per host, there is a serial declaration that might be used in conjunction with this suggestion. But despite your comments and edit, the intent is unclear, because you refer to us-east-1a sometimes as a "host" in singular form, and other times as a "group of hosts" or an "availability zone".
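For reference, a per-host serialisation sketch: serial: 1 makes Ansible run the play to completion on one host at a time across the combined pattern.

```yaml
- hosts: us-east-1a,us-east-1b,us-east-1c
  serial: 1   # one host finishes the whole play before the next starts
  roles:
    - my-role
```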
Will host patterns do the job?
- name: run on us-east-1a
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: @techraf has opened my eyes with his comment: a host pattern alone will not do the job.
It will just concatenate all hosts from all groups.
But in a predictable way, which in some cases can be used to iterate hosts in every group separately.
See this answer for details.

Ansible: removing hosts

I know that one can add host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: ec2.instances
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this using Ansible 2. There is no module such as remove_host or anything similar.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question
Another idea might be to filter hosts beforehand: try adding them to a group, and then excluding that group later in the play, e.g.:
- hosts: '!databases'
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
You can't remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result
    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes groups of hosts from consideration via the !group mechanism. Specifically, in the first play we exclude the ocp group, and in the second we exclude both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
I just hit the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is:
Let's say you have a group target that you want to make smaller:
Put the smallest number of hosts in the target group.
Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml  # uses group `target` with added hosts

# Remove the hosts added from the target_addon group by reloading the inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml  # uses group `target` without added hosts