Ansible delegate_to runs multiple times

I am trying to run certain tasks using delegate_to: localhost or connection: local, and other tasks on the remote hosts. However, when I use "delegate_to: localhost", the task is executed on localhost multiple times.
My inventory:
localhost ansible_host=127.0.0.1 ansible_connection=local ansible_python_interpreter="{{ansible_playbook_python}}"
[master1]
ip-10-90-148-195.ap-southeast-1.compute.internal
[master2]
ip-10-90-149-130.ap-southeast-1.compute.internal
[master3]
ip-10-90-150-239.ap-southeast-1.compute.internal
[master:children]
master1
master2
master3
[worker]
ip-10-90-148-206.ap-southeast-1.compute.internal
ip-10-90-149-178.ap-southeast-1.compute.internal
ip-10-90-150-86.ap-southeast-1.compute.internal
[all:vars]
ansible_user="core"
ansible_ssh_private_key_file="~/.ssh/id_rsa"
My task:
- name: host name
  shell: echo `hostname` >> /tmp/123
  #delegate_to: localhost
  #connection: local
If I leave delegate_to: localhost and connection: local commented out, I get a file /tmp/123 on each remote host with its own hostname inside it. Expected result.
However, if I uncomment either one of them, the task is executed 6 times on localhost, meaning /tmp/123 on localhost has localhost's hostname printed 6 times in it.
My goal is simple: I just want to run certain tasks on all hosts as defined in the playbook's hosts: groupa:groupb, and certain tasks on localhost, but only once. I thought this would be straightforward, but I have spent hours on it with no result.

If your hosts: pattern contains groupa:groupb with 6 hosts in total, then it makes sense that you get 6 entries (the delegated task runs 6 times on localhost, once per host in the play).
You need to add the option run_once: true at the task level,
or modify the playbook so that the play targets localhost only.
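A minimal sketch of that combination, reusing the echo task from the question (the task name and file path are only illustrative):

- name: write hostname once on localhost
  shell: echo `hostname` >> /tmp/123
  delegate_to: localhost
  run_once: true

With run_once: true the task is executed only for the first host in the play batch, and delegate_to: localhost redirects that single execution to the control node.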

Related

Ansible: start multiple custom processes with overriding variables (same name) on the same host (different groups)?

So,
We have a scenario where we need the ability to execute a custom command on a single host or on multiple hosts from a group, with various possible values of the same variable.
For example:
#Inventory:
[ServerGroup_1]
abc0001 node=node1
abc0002 node=node2
[ServerGroup_2]
abc0001 node=node3
abc0002 node=node4
[ServersGroups:children]
ServerGroup_1
ServerGroup_2
group_vars/ServerGroup_1
JAVA_HOME: /home/java
PORT: 9998
group_vars/ServerGroup_2
JAVA_HOME: /home/java
PORT: 9999
The goal is to execute the shell command below on host abc0001 with both port 9998 and port 9999 within a single playbook run.
shell: "{{ JAVA_HOME }} -Dprocess.port={{ PORT }}"
Currently, per Ansible's default variable behaviour, it only gets executed for port 9999. As an alternative, we could manually separate out the tasks and call them twice inside our playbook, as explained here.
But if we had 50 different ports, that would be tedious to write. We also want the configuration to be picked up dynamically from the inventory file or variable files, so that to add a new instance or run the command on a different port we only have to update the inventory/variable files rather than write a separate task per port. The end configuration should work for all scenarios: running the command on one host of a group, on all hosts of a group, or on a particular host and node combination.
ansible-playbook -i staging test_multinode.yml --limit=ServersGroups -l abc0001
The above playbook run should execute the shell command for both ports 9998 and 9999 on abc0001, and the playbook needs to be flexible enough if we just want to, say, start the process only for port 9998 on abc0001.
Note:
We have tried the with_items block by setting a port variable in the inventory file for the host, but that setup is very rigid and will not work for other scenarios.
We have also tried the hash_behaviour=merge and hash_behaviour=replace settings in ansible.cfg, but did not notice any change.
Hope this makes sense and we have not over-complicated things! Please suggest a few options!
Q: "Execute a custom command on a single or multiple hosts from a group with various possible values of the same variable. Execute shell command on host abc0001 with Ports as 9998 and 9999 within a single playbook run."
A: Only dictionaries can be merged; with the default behaviour, variables are replaced. See DEFAULT_HASH_BEHAVIOUR. Change the group_vars data to dictionaries. For example:
shell> cat group_vars/ServerGroup_1
my_sets:
  set1:
    JAVA_HOME: /home/java
    PORT: 9998
shell> cat group_vars/ServerGroup_2
my_sets:
  set2:
    JAVA_HOME: /home/java
    PORT: 9999
Then, the playbook
shell> cat test.yml
- hosts: ServersGroups
  tasks:
    - debug:
        msg: "{{ item.value.JAVA_HOME }} -Dprocess.port={{ item.value.PORT }}"
      loop: "{{ my_sets|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
gives (abridged)
shell> ANSIBLE_HASH_BEHAVIOUR=merge ansible-playbook -l abc0001 test.yml
ok: [abc0001] => (item=set1) =>
msg: /home/java -Dprocess.port=9998
ok: [abc0001] => (item=set2) =>
msg: /home/java -Dprocess.port=9999
Q: "We have also tried hash_behavior=merge and hash_behavior=replace settings in ansible.cfg, did not notice any change."
A: The replace option works as expected. The same playbook gives
shell> ANSIBLE_HASH_BEHAVIOUR=replace ansible-playbook -l abc0001 test.yml
ok: [abc0001] => (item=set2) =>
msg: /home/java -Dprocess.port=9999
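For completeness, the same behaviour can also be selected in ansible.cfg instead of the ANSIBLE_HASH_BEHAVIOUR environment variable; a minimal sketch of the relevant section:

[defaults]
hash_behaviour = merge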
Detailed Resolution
Short answer: rewrite the inventory file using aliases.
#Inventory:
[ServerGroup_1]
#variable named PORT on host abc0001 from group 1
group1_node1 ansible_host=abc0001 PORT=9998
group1_node2 ansible_host=abc0002 PORT=9998
[ServerGroup_2]
#same variable name PORT on the same host abc0001, this time in a different group
group2_node1 ansible_host=abc0001 PORT=9999
group2_node2 ansible_host=abc0002 PORT=9999
[ServersGroups:children]
ServerGroup_1
ServerGroup_2
We are using group1_node1 as an alias; by doing this, Ansible registers group1_node1 and group2_node1 as two different inventory hosts even though they point to the same host abc0001.
Now we are able to start two processes on the same host abc0001 using different values for the same variable name PORT.
ansible-playbook -i staging test_multinode.yml --limit=ServersGroups -l group1_node1:group2_node1
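A minimal sketch of a play that could run against this alias-based inventory (the task name is illustrative, and JAVA_HOME is assumed to still be defined in group_vars):

- hosts: ServersGroups
  tasks:
    - name: start the process on this alias's port
      shell: "{{ JAVA_HOME }} -Dprocess.port={{ PORT }}"

Because each alias is treated as a separate inventory host, limiting the run to group1_node1:group2_node1 executes the task twice against abc0001, once per PORT value.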
Hope this is clear.

Using wildcard in Ansible inventory group doesn't work as expected

Running Ansible 2.9.3
Working in a large environment with hosts coming and going on a daily basis, I need to use wildcard hostnames in a host group, i.e.:
[excluded_hosts]
host01
host02
host03
[everyone]
host*
In my playbook I have:
- name: "Test working with host groups"
  hosts: everyone,!excluded_hosts
  connection: local
  tasks:
The problem is, the task is running on hosts in the excluded group.
If I specifically list one of the excluded hosts in the everyone group, that host then gets properly excluded.
So Ansible isn't working as one might assume it would.
What's the best way to get this to work?
I tried:
hosts: "{{ ansible_hostname }}",!excluded_hosts
but it errored out as invalid YAML syntax.
Requirements: I cannot specifically list each host; they come and go too frequently.
The playbooks are going to be automatically copied down to each host and the execution started afterwards, therefore I need to use the same Ansible command line on all hosts.
I was able to come up with a solution to my problem:
---
- name: "Add host name to thishost group"
  hosts: localhost
  connection: local
  tasks:
    - name: "add host"
      ini_file:
        path: /opt/ansible/etc/hosts
        section: thishost
        option: "{{ ansible_hostname }}"
        allow_no_value: yes
    - meta: refresh_inventory

- name: "Do tasks on all except excluded_hosts"
  hosts: thishost,!excluded_hosts
  connection: local
  tasks:
What this does is add the host's name to a group called "thishost" when the playbook runs, then refresh the inventory and run the next play against that group.
This avoids having to constantly update the inventory with thousands of hosts, and avoids the use of wildcards and ranges.
Blaster,
Have you tried assigning hosts by IP address yet?
You can use wildcard patterns ... IP addresses, as long as the hosts are named in your inventory by ... IP address:
192.0.*
*.example.com
*.com
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
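A minimal sketch of that approach, assuming the hosts are named in the inventory by IP address (the subnet is illustrative) and the wildcard is used directly in the play's host pattern rather than inside an inventory group:

- name: "Test working with host groups"
  hosts: "10.90.*,!excluded_hosts"
  connection: local
  tasks:
    - debug:
        msg: "running on {{ inventory_hostname }}"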

Single role multiple hosts different tasks

I have a playbook with multiple tasks for a single role. I want to divide the tasks, say 80% to the first host and the remaining 20% to the second host. The first and second hosts will be picked from:
ansible-playbook -i 1.2.3.4,2.3.4.5, update.yml
where 1.2.3.4 is the first server's IP and 2.3.4.5 is the second server's IP. How can I achieve this?
To recap:
You have one role with 10 tasks, 6 of which you want to execute on server 1 and the rest on server 2.
One way would be to write 2 different playbooks which include the tasks you want to execute on the specified hosts.
Another might be to use tags on each task, run ansible-playbook with --tags, and assign the tags at the play level:
- hosts: all
  tags:
    - foo
  roles:
    ...
- hosts: all
  tags:
    - bar
  roles:
    ...
ref https://docs.ansible.com/ansible/latest/user_guide/playbooks_tags.html
Playbook task execution can be controlled by tags or blocks. My previous answer was about executing the tasks on only some of the hosts (I misunderstood the question).
For example,
serial: "80%"
would mean that all the tasks are performed on 80% of the hosts first, and then on the remaining hosts.
For a playbook to execute some tasks on some hosts and other tasks on others, you can use when conditions keyed on the host name.
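A minimal sketch of that when-based split, using inventory_hostname rather than ansible_hostname since the hosts are passed to -i as bare IPs (the task bodies are only illustrative):

- hosts: all
  tasks:
    - name: tasks intended for the first host
      shell: echo "80% of the work"
      when: inventory_hostname == '1.2.3.4'

    - name: tasks intended for the second host
      shell: echo "remaining 20% of the work"
      when: inventory_hostname == '2.3.4.5'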

Host not found in Ansible inventory

I am trying to do some testing against a specific host with Ansible 2.5, but Ansible can't figure out my inventory. I've either done something wrong or there's a bug. I've done this in the past, but maybe something changed in 2.5.
I have an inventory file specified like this:
localhost ansible_connection=local
testhost ansible_ssh_host=1.2.3.4
I have a playbook that runs totally fine if I just run it with ansible playbook.yml. It starts like this:
- hosts: localhost
  become: yes
  become_user: root
  become_method: sudo
  gather_facts: yes
If I run ansible-inventory --list, I see both of my hosts listed as "ungrouped".
However, if I try to run my playbook against the remote host using ansible -l testhost playbook.yml, it errors with the following:
[WARNING]: Could not match supplied host pattern, ignoring: playbook.yml
ERROR! Specified hosts and/or --limit does not match any hosts
I can't figure out how to actually make Ansible run against my remote host.
Your playbook specifies:
hosts: localhost
It will not run on testhost regardless of the arguments you supply. --limit does not replace the hosts declaration.
As your hosts are ungrouped, you need to change this to:
hosts: all
Then you can use the limit option to filter the hosts from the given target group.
You are also using the wrong command to run an Ansible playbook; it should be ansible-playbook, not ansible (and although the effect is the same, the latter does not fail with an error in such a case).
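A minimal sketch of the corrected setup, assuming the inventory file from the question is named hosts (the file name is illustrative):

- hosts: all
  become: yes
  become_user: root
  become_method: sudo
  gather_facts: yes

shell> ansible-playbook -i hosts -l testhost playbook.yml

With hosts: all in the play, the -l testhost limit narrows the run to just the remote host.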
A simpler method wherever you have to connect to the local system: just specify connection: local in the play's hosts block.
- hosts: localhost
  connection: local
  become: yes
  become_user: root

Run a task once after all tasks on all servers have completed

Consider the following scenario:
Multiple hosts need to be configured independently. At some point, after ALL configuration tasks on ALL hosts have completed successfully, some final tasks need to be run on ONLY ONE host. What would be the proper solution for an Ansible playbook?
Use run_once for that: http://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#run-once
Example:
---
- hosts: all
  tasks:
    - command: echo preparing stuff on all hosts
    - command: echo run only on single host
      run_once: True
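A short variant, assuming you want the final task to run on one specific host rather than whichever host run_once happens to pick first (the delegation target is a hypothetical inventory name):

---
- hosts: all
  tasks:
    - command: echo preparing stuff on all hosts
    - command: echo run the final step on one chosen host
      run_once: True
      delegate_to: final_host   # hypothetical host; use the host that should run the final step

With the default linear strategy, every host finishes the preparation task before the next task starts, so the run_once task only begins after all hosts have completed the earlier steps.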
