Ansible - include one playbook into another in a loop

I am new to Ansible and trying to figure out how I can call one playbook from another playbook in a loop. I also want to consume the output back in the master playbook. I am not sure whether this is possible in Ansible.
Below is pseudocode, borrowed from other programming languages:
masterplaybook.yml - from where I want to invoke auditdevice.yml:

for devicePair in devicePairList:
    output = run auditdevice.yml -e "d1=devicePair.A d2=devicePair.B"
    save/process output
The auditdevice.yml playbook uses d1 and d2 as the hosts on which it performs the audit, runs commands, etc. In other words, it audits a dynamic inventory passed in as arguments.
Is it possible to achieve the above using Ansible? If yes, can someone point me to an example?

Q: "How can I call one playbook from another playbook in the loop?"
A: It is not possible. Quoting from import_playbook
"You cannot use this action inside a play."
See the example.
FWIW. ansible-runner is able to controll playbooks withing projects similar to AWX. See example.
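One common workaround is to restructure the inner playbook as a task file so it can be included in a loop from a single play. A minimal sketch, assuming auditdevice.yml can be rewritten as a task file audit_tasks.yml; the device names and variable names below are only placeholders:

- hosts: localhost
  gather_facts: no
  vars:
    device_pairs:
      - { a: sw01, b: sw02 }
      - { a: sw03, b: sw04 }
  tasks:
    - name: Run the audit task file once per device pair
      include_tasks: audit_tasks.yml
      vars:
        d1: "{{ item.a }}"
        d2: "{{ item.b }}"
      loop: "{{ device_pairs }}"

The included tasks can then target d1 and d2 via delegate_to and register their results, so the outer play can save or process the output per pair.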

Related

How to exclude instances of the EC2 inventory in Ansible?

We have an Ansible server using EC2 dynamic inventory:
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.py
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.ini
However, with the number of instances we have, running ./ec2.py --list or ./ec2.py --refresh-cache returns a 28,000 line JSON response.
This, I assume, causes it to fail randomly (returning a Python stack trace) because it only receives a partial response from AWS, although it is fine if run again.
That is why I want to know whether there is a way to cut this down.
I know there is a way to include specific instances by tag in ec2.ini (i.e. # instance_filters = tag:env=staging), but given the way our instances are tagged, is there a way to exclude instances instead (something that would look similar to # instance_filters = tag:name=!dev)?
is there a way to exclude instances instead
Just for completeness, I wanted to point out that the "inventory protocol" for ansible is super straightforward to implement, and they even have a JSON Schema for it.
You can see an example of the output it is expecting by running the newly included ansible-inventory script with --list against one of the .ini-style inventories, and then use that as a model for emitting your own:
$ printf 'somehost ansible_user=bob\n\n[some_group]\nsomehost\n' > sample
$ ansible-inventory -i ./sample --list
What I am suggesting is that you might have better luck making a custom inventory script, that does know your local business practices, rather than trying to force ec2.py into running a negation query (which, as best I can tell, it will not do).
To generate dynamic inventory, just make an executable -- as far as I know it can be in any language at all -- and then point the -i at the executable script instead of a "normal" file. Ansible will invoke that program, and operate on the JSON output as the inventory. There are several examples people have posted as gists, in all kinds of languages.
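As a rough sketch, the JSON such an executable is expected to print for --list looks like the following (the group and host names here are made up); any script that filters out your unwanted instances before emitting this structure will work:

{
    "webservers": {
        "hosts": ["web1.example.com", "web2.example.com"],
        "vars": {"ansible_user": "deploy"}
    },
    "_meta": {
        "hostvars": {
            "web1.example.com": {"some_var": "some_value"}
        }
    }
}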
I would still love it if you would file an issue with Ansible about ec2.py, because you have a situation that can make the bug report concrete for them in ways that a simple "it doesn't work for large inventories" doesn't capture. But in the meantime, writing your own inventory provider is actually less work than it sounds.
I use the option pattern_exclude in ec2.ini:
# If you want to exclude any hosts that match a certain regular expression
pattern_exclude = staging-*
and
hostname_variable = tag_Name

Remove Config Lines on ASA with Ansible

I have an Ansible playbook that creates a network object and sets ACL policies. It works well, and now I would like to create the complementary playbook that removes the object and its associated config, but I don't know the correct way to approach the task.
I could just use asa_command to issue the 'no' prefix for the appropriate lines; however, that doesn't feel like the "Ansible Way", since it would try to execute the commands even if they were already absent from the config.
I have seen that some modules have a state: absent option. However, the asa_ modules don't list that as an option.
Any suggestions would be much appreciated.
I think having a state: absent option makes a lot of sense, as I don't think there is a simple way of doing this more efficiently with the current asa_ modules. The Ansible team is extremely responsive to issues and PRs, so I would submit one for this feature.
It looks like there isn't a clean way to do this as of Ansible 2.4. I have a working playbook; however, I had to settle for issuing the no commands using asa_config and putting ignore_errors: yes in each play. It's inelegant to say the least, and in some cases it can break down. I think there may be a way to use error handling along with check_mode: yes. My initial attempt at this failed because, when registering the result of a play to a variable, I cannot use that variable to determine which of the affected hosts actually required a change; it's just a generic yes/no for the entire play.
What I'm doing currently:
- name: Remove Network Object
  asa_config:
    commands:
      - no object network {{ object_name }}
    provider: "{{ cli }}"
  ignore_errors: yes
  register: dno
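One way to avoid the blanket ignore_errors, as a sketch only (it assumes the same cli provider dict and object_name variable as above), is to read the running config first and issue the no command only when the object is actually present:

- name: Gather configured network objects
  asa_command:
    commands:
      - show running-config object
    provider: "{{ cli }}"
  register: existing_objects

- name: Remove Network Object only if it still exists
  asa_config:
    commands:
      - no object network {{ object_name }}
    provider: "{{ cli }}"
  when: ('object network ' + object_name) in existing_objects.stdout[0]

This way the removal task is skipped cleanly when the object is already gone, instead of failing and being ignored.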

How to access ansible controller hostname or ip?

I need to know the hostname or IP of the machine from which Ansible was invoked, not the ones from the inventory.
What is the easiest way to access this? Please note that I need to use this variable in tasks that execute on various hosts.
Note that none of these are valid answers:
- ansible_hostname
- inventory_hostname
Use lookups; they are always executed on localhost.
For example: {{ lookup('pipe', 'hostname') }}
If you use this value extensively, it is better to do set_fact first; otherwise the lookup command will be executed every time it is referenced.
When using it in set_fact, use the quotes appropriately. I had an issue with quotes, and the following worked for me:
controller_host: "{{ lookup('pipe', 'hostname') }}"
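A minimal sketch putting the lookup and set_fact together (the play layout and the controller_host name are only examples); because lookups always run on the controller, every target host ends up with the controller's hostname:

- hosts: all
  tasks:
    - name: Cache the controller hostname as a fact
      set_fact:
        controller_host: "{{ lookup('pipe', 'hostname') }}"

    - name: Use the cached value in a task running on any host
      debug:
        msg: "Ansible was invoked from {{ controller_host }}"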

How to loop over playbook include?

(I'm currently running Ansible 2.1)
I have a playbook that gathers a list of elements, and I have another playbook (that calls different hosts and whatnot) that uses said elements as the basis for most operations. Therefore, whenever I use with_items over the playbook include, it causes an error.
The loop control section of the docs says that "In 2.0 you are again able to use with_ loops and task includes (but not playbook includes)". Is there a workaround? I really need to be able to call multiple hosts in an included playbook that runs over a set of entries. Any workarounds or ideas are greatly appreciated!
P.S. I could technically command: ansible-playbook, but I don't want to go down that rabbit hole unless necessary.
I think I faced the same issues; by the way, migrating also surfaced additional warnings, such as 'item' already being in use.
Referring to http://docs.ansible.com/ansible/playbooks_best_practices.html, you should have an inventory (that contains all your hosts) and a master playbook (even if theoretical).
A good way, instead of including playbooks, is to design roles, even if some are empty. Try to find a "common" role for everything that can be applied to most of your hosts. Then include additional roles depending on usage; this lets you target the correct hosts, as sketched below.
You can also have roles that do nothing (meaning, nothing in 'tasks') but that contain sets of variables common to two roles (you then avoid duplicate entries).
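A minimal sketch of that layout, with hypothetical group and role names; the common role applies everywhere and more specific roles are layered on per group:

# site.yml
- hosts: all
  roles:
    - common

- hosts: webservers
  roles:
    - web

- hosts: dbservers
  roles:
    - db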

Prevent duplicate key warnings in Ansible 2

I use a lot of YAML anchors and references in my roles to keep the logic in a single spot instead of repeating myself across multiple tasks. The following is a very basic example.
- &sometask
name: "Some Task"
some_module: with a lot of parameters
with_items: list_A
- <<: *sometask
name: "Some OTHER Task"
with_items: list_B
This example might not show how this is actually useful, but it is. Imagine you loop over a list of dicts, passing various keys from each dict to the module, maybe with quite complex "when", "failed_when" and "changed_when" conditions. You simply want to stay DRY.
So instead of defining the whole task twice, I use an anchor to the first one and merge all its content into a new task, then override the differing pieces. That works fine.
Just to be clear, this is basic YAML functionality and has nothing to do with Ansible itself.
The result of the above definition (and what Ansible sees once it has parsed the YAML file) evaluates to:
- name: "Some Task"
some_module: with a lot of parameters
with_items: list_A
- name: "Some Task"
some_module: with a lot of parameters
with_items: list_A
name: "Some OTHER Task"
with_items: list_B
Ansible 2 now has a feature that complains when keys have been defined multiple times in a task. Everything still works, but it creates unwanted noise when running the playbook:
TASK [Some OTHER Task] *******************************************************
[WARNING]: While constructing a mapping from /some/file.yml, line 42, column 3, found a duplicate dict key (name). Using last defined value only.
[WARNING]: While constructing a mapping from /some/file.yml, line 42, column 3, found a duplicate dict key (with_items). Using last defined value only.
The Ansible configuration allows you to suppress deprecation_warnings and command_warnings. Is there a way to also suppress this kind of warning?
Coming in late here, I'm going to disagree with the other answers and endorse YAML merge. Playbook layout is highly subjective, and what's best for you depends on the config you need to describe.
Yes, ansible has merge-like functionality with includes or with_items / with_dict loops.
The use case I've found for YAML merge is where tasks have only a few outliers, therefore a default value which can be overridden is the most compact and readable representation. Having ansible complain about perfectly valid syntax is frustrating.
The comment in the relevant ansible code suggests The Devs Know Better than the users.
Most of this is from yaml.constructor.SafeConstructor. We replicate it here so that we can warn users when they have duplicate dict keys (pyyaml silently allows overwriting keys)
PyYAML silently allows "overwriting" keys because key precedence is explicitly dealt with in the YAML standard.
As of Ansible 2.9.0, this can be achieved by setting the ANSIBLE_DUPLICATE_YAML_DICT_KEY environment variable to ignore. Other possible values for this variable are warn, which is the default and preserves the original behaviour, and error, which makes the playbook execution fail.
See this pull request for details about the implementation.
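For example, to silence the warning for a single run (site.yml is just a placeholder playbook name):

$ ANSIBLE_DUPLICATE_YAML_DICT_KEY=ignore ansible-playbook site.yml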
There is also a system_warnings configuration option, but none of these will silence that output you're seeing.
Here is the code generating that message, from ansible/lib/ansible/parsing/yaml/constructor.py:
if key in mapping:
    display.warning('While constructing a mapping from {1}, line {2}, column {3}, found a duplicate dict key ({0}). Using last defined value only.'.format(key, *mapping.ansible_pos))
While your use of YAML references is quite clever, I doubt this is going to change anytime soon, as a core tenet of Ansible is the human readability of the playbooks and tasks. Blocks will help with the repetition of conditionals on tasks, although they seem to be limited to tasks within a playbook at this time.
You could always submit a pull request adding an option for disabling these warnings and see where it goes.
To create reusable functionality at the task level in Ansible, you should look into task includes. Task includes allow you more freedom to do things like iterate using with_items, etc. At my employer we use anchors/references liberally, but only for variables. Given the several existing ways of creating reusable tasks in Ansible, such as task includes, playbook includes, and roles, we have no need to use anchors/references for tasks the way you have described.
If you just want the module args to be copied between tasks you could go the templating route:
args_for_case_x: arg1='some value' arg2=42 arg3='value3'

- name: a task of case x for a particular scenario
  the_module: "{{ args_for_case_x }}"
  when: condition_a

- name: a different use of case x
  the_module: "{{ args_for_case_x }}"
  when: condition_b
As you can see, though, this doesn't easily support varying the args per loop iteration, which you could get if you used one of the aforementioned reuse features.
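As a sketch of the include-based alternative, where the args do vary per iteration (the file name, variables and values are hypothetical; older releases use the include keyword instead of include_tasks):

# case_x.yml - the reusable task file
- name: run case x with the supplied args
  the_module: "{{ case_args }}"

# in the calling playbook or role
- name: run case x once per scenario
  include_tasks: case_x.yml
  vars:
    case_args: "arg1={{ item.arg1 }} arg2={{ item.arg2 }}"
  loop:
    - { arg1: 'some value', arg2: 42 }
    - { arg1: 'other value', arg2: 43 }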
