I have two playbooks in Ansible Tower that each have surveys.
---
- name: Provision Routers
  import_playbook: provision_evpn_routers.yml

- name: Provision Switches
  import_playbook: provision_evpn_switches.yml
When I execute this main playbook, the surveys are never prompted, and I get errors because the variables that the surveys would have populated do not exist.
In the job template you can store variables and create the main survey. But since Tower controls the input and stdout of playbook execution, you cannot provide any arguments or input after the job template starts.
You can create a workflow instead of calling the playbooks directly.
Check this out:
https://docs.ansible.com/ansible-tower/latest/html/userguide/workflow_templates.html#surveys
Related
In Ansible, there are include_role and include_tasks modules.
What's the difference between them? When should each of them be used?
I have a role running on a Windows host that requires variables generated on a Linux host. Which one should I use in order to run a role on a different host?
include_role: includes the full role, not only a task file. For example, it will pull in the role's vars, meta, handlers, and so on.
include_tasks: includes a plain file of tasks; it is just a file and does not need to be part of a full role.
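For example, a minimal sketch of the two (the file and role names here are just placeholders):
# include_tasks pulls in a plain file of tasks
- name: Include a task file
  include_tasks: setup_users.yml

# include_role pulls in a whole role, including its defaults, vars, handlers and meta
- name: Include a full role
  include_role:
    name: nginx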
I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, but anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...

    - name: gather facts
      setup:

    - name: install things
      apk:
In these examples, testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.
I have a role wp-vhost that has one role it depends on:
# roles/wp-vhost/meta/main.yml
---
dependencies:
- { role: nginx }
Each time I run wp-vhost, the nginx role will also run. I understand that this is fine and it's a desired behavior.
However, during my local development, time is unnecessarily lost on running the nginx role, when I want to run only the tasks defined in wp-vhosts since I know that nginx had run before and set-up the necessary environment for wp-vhost.
Is there a way to execute a playbook with roles, without executing roles' dependencies?
The way I would do this is to use Ansible tags and apply them to your wp-vhost-specific code.
Assuming your wp-vhost role's main task list is in main.yml, a good pattern is to spin out the actual tasks into a separate file called something like wp-vhost.yml, included from main.yml, so that the non-nginx code gets a tag that is not applied to the nginx role. In this case:
- include: wp-vhost.yml
  tags: wp-vhost
In order to use a tag for every chunk of Ansible code (whether an included tasks file or a role), try writing every role mentioned in dependencies like this:
- role: nginx
  tags: nginx
When in testing mode, you can run just the wp-vhost specific parts like this:
$ ansible-playbook --tags wp-vhost main.yml
Or you can run the whole playbook, including any dependencies, like this (the default is to run everything, regardless of tags):
$ ansible-playbook main.yml
This makes it easy to quickly run just parts of a complex set of cascading roles and include files when testing, and also use the wp-vhost role normally in other roles' dependencies.
Impact on role structure
Careful use of tags doesn't affect role structure or use at all, and you would typically use tags only for testing.
For more complex roles, it's common to structure the tasks into separate files in any case, keeping the main.yml simple, like this:
- name: Set up base OS
  include: base_os.yml
  tags: base_os

- name: Ensure logs are rotated
  include: logrotate.yml
  tags: logrotate

- name: Create users and groups
  include: users_groups.yml
  tags: users_groups
Solution without include files
If you don't want to change the wp-vhost role's use of include files, you would need to use blocks in the playbook (Ansible 2.0+):
- hosts: all
  roles:
    - role: nginx
      tags: nginx
  tasks:
    - block:
        - debug: msg=hello
        - someaction: ...
      tags: wp-vhosts
Note that the final tags: is aligned with block:, so it applies to all tasks in that block. This is cleaner than splitting the playbook into multiple plays.
Non-tag alternative
You can use a when: condition on the role invocation in the wp-vhost role dependencies, and define a variable such as debug_mode to control this. However, such debug/test logic will clutter your codebase compared to defining a tag per role invocation or task file.
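A rough sketch of that alternative, assuming you define a debug_mode variable yourself (it is not built in):
# roles/wp-vhost/meta/main.yml
---
dependencies:
  - role: nginx
    when: not (debug_mode | default(false) | bool)
Running ansible-playbook main.yml -e debug_mode=true would then skip the nginx tasks during local development.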
The real scenario: I want to get the resource ID of an SQS queue in AWS, which is returned after a playbook is executed, and then use that variable in files to configure the application.
Persisting variables from one playbook to another
Checking the documentation, set_fact and register are scoped only to the specific host they run on. There are many reasons to use variables from one host on another.
Alternatives I can think of:
Using the command module to echo the variables to a file, and later loading that file via the vars section or an include.
Setting environment variables and then accessing them, but this will be difficult.
So what is the solution?
If you're gathering facts, you can access hostvars via the normal jinja2 + variable lookup:
e.g.
- hosts: serverA.example.org
  gather_facts: True
  ...
  tasks:
    - set_fact:
        taco_tuesday: False
and then, if this has run, on another host:
- hosts: serverB.example.org
  ...
  tasks:
    - debug: var="{{ hostvars['serverA.example.org']['ansible_memtotal_mb'] }}"
    - debug: var="{{ hostvars['serverA.example.org']['taco_tuesday'] }}"
Keep in mind that if you have multiple Ansible control machines (machines you call ansible and ansible-playbook from), you should take advantage of the fact that Ansible can store its facts/variables in a cache (currently Redis or JSON files), so the control machines are less likely to have different hostvars. With this, you could point your control machines at a file in a shared folder (which has its risks: what if two control machines run against the same host at the same time?), or set and get facts from a Redis server.
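For example, a fact cache can be enabled in ansible.cfg; this is a sketch using the jsonfile backend (the path and timeout are just illustrative):
# ansible.cfg
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400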
For my uses of Amazon data, I prefer to just fetch the resource each time using a tag/metadata lookup. I wrote an Ansible plugin that allows me to do this a little more easily as I prefer this to thinking about hostvars and run ordering (but your mileage may vary).
You can pass variables on the command line: http://docs.ansible.com/ansible/playbooks_variables.html#passing-variables-on-the-command-line
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
You can use a local connection to run one playbook, capture the variable, and pass it to another playbook:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - shell: ansible-playbook -i ...
      register: sqs_id

    - shell: ansible-playbook -i ... -e "sqs_id={{ sqs_id.stdout }}"
Also delegation might be useful in this scenario:
http://docs.ansible.com/ansible/playbooks_delegation.html#delegation
You can also store the output in a local file and read it back:
- name: take a sqs id
  local_action: command cat ~/sqs_id
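For completeness, a minimal sketch of how the first playbook could write that file on the control machine (it assumes the id is already in a registered variable called sqs_id):
# assumes an earlier task has put the id into the sqs_id variable
- name: save the sqs id on the control machine
  local_action: copy content="{{ sqs_id }}" dest=~/sqs_id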
PS:
I don't understand why you can't write one complex playbook that includes many roles sharing variables.
You can put "common" variables into host_vars or group_vars; that way all the servers have access to them.
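For example, a variable dropped into group_vars/all.yml is visible to every host in the inventory (the variable name and value here are just examples):
# group_vars/all.yml
---
sqs_queue_name: my-shared-queue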
Another way may be to create a custom ansible module/lookup plugin to hide all the boilerplate code and get an easy and flexible access to the variables you need.
I had a similar issue with Azure DevOps pipelines.
I created VMs with Terraform; the SSH keys and Windows usernames/passwords were generated by Terraform and stored in a Key Vault.
So I then needed to query the Key Vault before running Ansible on all the created VMs. I ended up using the Azure Python SDK to get all the secrets. I also generate an inventory file and a host_vars folder with a file for each VM.
The actual playbook is now very basic and does the job perfectly. All variables for Terraform and Ansible are in a JSON file, and the Python script is less than 30 lines.
I have 2 Ansible playbooks that run one after another. After playbook 1 finishes, I want to run the second one only on the hosts for which the first playbook fully succeeded. Looking through the Ansible docs, I can't find any accessible info on which hosts failed a specific playbook. How could this be done?
FYI: I need separate playbooks because the second one must be run in serial, which is only available at the playbook level.
Since all hosts minus successful hosts equals failed hosts, you can use the following task to take the difference between two special variables: ansible_play_hosts_all (all hosts in the play, including failed ones) and ansible_play_batch (all hosts that have not yet failed). Note that use of serial will affect the result.
- name: Print play hosts that failed
  debug:
    msg: "The hosts that failed are {{ ansible_play_hosts_all | difference(ansible_play_batch) | join('\n') }}"
Source: https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
Honestly, the best way is to have some queryable state on each host. A simple method is to check for the existence of a file that is created after your first playbook succeeds. You can then have a task which checks for that state and only runs the follow-up work where it is present, which will get you what you want.
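A rough sketch of that idea, assuming a marker path such as /var/run/playbook1_done (the path and names are arbitrary):
# last task of the first playbook
- name: mark playbook 1 as succeeded
  file:
    path: /var/run/playbook1_done
    state: touch

# early in the second playbook
- name: check for the marker left by playbook 1
  stat:
    path: /var/run/playbook1_done
  register: playbook1_marker

- name: run the real work only where the marker exists
  debug:
    msg: "playbook 1 succeeded on this host"
  when: playbook1_marker.stat.exists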
As an aside, I stopped using Ansible because it wasn't configurable enough; I also had issues getting the parallelism controls I wanted. You might try hitting up the Ansible Project Google Group to put in a feature suggestion or describe your use case.
There is a difference between a play and a playbook. The serial argument is available on the play level. A playbook may contain multiple plays.
---
- name: Play 1
  hosts: some_hosts
  tasks:
    - debug:

- name: Play 2
  hosts: some_hosts
  serial: 1
  tasks:
    - debug:
...
Hosts which failed in play 1 will not be processed in play 2.
If you really want to have separate playbooks, I see two options:
Create a callback plugin. You can register a function which gets fired when a task or host fails. You can then store this info locally and use it in the next playbook run.
Activate Ansible logging. It will log pretty much the same stuff you see as raw output when running ansible-playbook.
The second option is a bit ugly, but easier than creating a callback plugin.
In both cases you then need to create a dynamic inventory script which checks the previously saved data and returns only the valid hosts, or returns all hosts but sets a property to mark the failed ones. You can then use group_by to create an ad-hoc group, as sketched below.
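A rough sketch of that group_by idea, assuming the inventory script sets a host variable such as previous_run_ok (the variable name is just an example):
- hosts: all
  gather_facts: no
  tasks:
    - name: build an ad-hoc group from the saved state
      group_by:
        key: "{{ 'previous_ok' if previous_run_ok | default(false) else 'previous_failed' }}"

- hosts: previous_ok
  serial: 1
  tasks:
    - debug:
        msg: "only runs on hosts that succeeded last time"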