Ansible patterns are a useful feature for targeting a playbook at a subset of the hosts in an inventory.
Here is an example:
---
- hosts: all
  tasks:
    - name: create a file on a remote machine
      file:
        dest: /tmp/file
        state: '{{ file_state }}'

- hosts: web
  tasks:
    - name: create file on web machines
      file:
        dest: /tmp/web-file
        state: '{{ file_state }}'

- hosts: all:!db
  tasks:
    - name: create file on all machines except db machines
      file:
        dest: /tmp/web-not-db-file
        state: '{{ file_state }}'

- hosts: all:&backup:!web
  tasks:
    - name: create file on backup machines that are not web machines
      file:
        dest: /tmp/backup-file
        state: '{{ file_state }}'
Credits: Robert Starmer
So far, I know that Ansible uses YAML and Jinja2 to parse its configuration files.
Ansible patterns do not seem to use Jinja2, since they are not surrounded by {{ }} blocks.
Now my question is, are they a custom modification done by Ansible itself?
If yes, is using such modifications documented by Ansible anywhere?
I think your question may be based on some fundamental misunderstandings.
YAML is just a way of encoding data structures -- such as dictionaries and lists -- in a text format, so that it's easy (-ish) for people to read and edit them. What an application does with the data presented in a YAML file is entirely application dependent.
So when you write something like:
- hosts: all
This does "mean" anything in YAML other than "here's a list of one item, which is a dictionary with a single key hosts with the value all".
Of course, when Ansible reads a playbook and finds a top-level list, it knows that each item in that list is a play, and it expects each play to have a hosts key. It interprets the value of the hosts key using the Ansible pattern logic.
Any other application could read that YAML file and decide to do something entirely different with the content.
You're looking for the host pattern documentation:
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html#intro-patterns
Edit: I did not notice that you had already linked to this page.
Yes, this is neither YAML nor Jinja2; it's Ansible's own pattern syntax, fully documented on the page linked above, so I'm not sure we need more than that.
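For reference, here is a minimal sketch of the most common pattern operators; the group names web, staging, and db are illustrative, not from the question:

- hosts: web:staging     # union: hosts in web OR in staging
- hosts: web:&staging    # intersection: hosts in both web AND staging
- hosts: all:!db         # exclusion: every host EXCEPT members of db

The same pattern strings work anywhere Ansible expects a host selection, for example with ansible-playbook --limit 'web:&staging'.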
Related
I'd like to use an environment variable inside an aws_ec2 inventory file, for the simple case of being able to easily separate different environments. Let's take this configuration as an example:
plugin: aws_ec2
filters:
  # tag:Cust_code: "01"
  tag:Cust_code: "{{ lookup('env', 'CUSTOMER_CODE') }}"
While the first filter (the commented-out one) works, the second obviously doesn't, and an empty host list is returned:
$ export CUSTOMER_CODE="01"
$ echo $CUSTOMER_CODE
01
$ ansible-inventory -i inventory/aws_ec2.yaml --graph
@all:
  |--@aws_ec2:
  |--@ungrouped:
I've read that the reason is that Jinja2 templates are not supported in inventory files, even though they seem to work for some specific parameters according to this post: https://stackoverflow.com/a/72241930/19407408 .
I don't want to use a dynamic inventory script, because I feel it might be too complicated and I don't understand the official documentation for it. I would also prefer not to use different inventory files for different environments, as I already have to use two different inventory files for the same environment (because for some hosts I need to use "ansible_host: private_ip_address" via compose, and for jumphosts I can't). Although the latter will have to be the solution if there's no better alternative.
Has anyone been able to come up with a clever solution to this problem?
No, filters: (and its exclude_filters: and include_filters: siblings) are not Jinja2-aware. The good thing about Ansible being open source is that one can see under the hood how things work:
- _query applies the include and exclude filters
- ansible_dict_to_boto3_filter_list merely pivots tag:Name=whatever over to the [{"Name": "tag:Name", "Values": ["whatever"]}] format that boto wants, without further touching the key or the values
- _get_instances_by_region just calls describe-instances with those filters, again without involving Jinja2
Depending on how many instances the unfiltered list contains, using groups: or keyed_groups: may be an option, along with a parameterized - hosts: in your playbook (e.g. - hosts: cust_code{{ CUSTOMER_CODE }}).
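As a minimal sketch of that keyed_groups idea, assuming the instances carry a Cust_code tag (the prefix is illustrative):

plugin: aws_ec2
keyed_groups:
  # build one group per Cust_code tag value, e.g. cust_code01
  - key: tags.Cust_code
    prefix: cust_code
    separator: ""

Every instance then lands in a group such as cust_code01, which the parameterized - hosts: above can select.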
Otherwise, I'd guess your best bet would be to use add_host: in a separate play in the playbook, since that allows you to have almost unlimited customization:
- hosts: localhost
  tasks:
    - add_host:
        name: ...
        groups:
          - cust_code_machines

- hosts: cust_code_machines
  tasks:
    - debug: msg="off to the races"
Is there anything like save_vars in Ansible that would automatically save modified var/fact imported with include_vars?
The following code serves the purpose but it is not an elegant solution.
# vars/my_vars.json
{
  "my_password": "Velicanstveni",
  "my_username": "Franjo"
}

# playbook.yaml
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: fetch vars
      include_vars:
        file: my_vars.json
        name: playbook_var

    - name: change var
      set_fact:
        playbook_var: "{{ playbook_var | combine({'my_username': 'Njofra'}) }}"

    - name: save var to file
      copy:
        content: "{{ playbook_var | to_nice_json }}"
        dest: vars/my_vars.json
I was wondering if there is an option in Ansible that we can turn on so that whenever a var/fact is changed during the playbook execution, it is automatically updated in the file the var was imported from. The background problem I am trying to solve is having global variables in Ansible. The idea is to save global vars in a file. Ansible has the hostvars system, which is a collection of per-hostname variables, but not all data is host-related.
I think a fact cache will, plus or minus, do that, although it may have some sharp edges, since that's not exactly the problem it is trying to solve. The jsonfile cache plugin sounds like it is most likely what you want, but there is a yaml cache plugin, too.
The bad news is that, as far as I can tell, one cannot configure those caching URIs from the playbook; rather, one must use either the environment variables or an ansible.cfg setting.
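For example, a minimal ansible.cfg sketch enabling the jsonfile cache plugin (the path and timeout values are illustrative):

[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 86400

The same settings can be supplied via environment variables instead, if you prefer the environment-variable route mentioned above.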
I am new to Ansible...
I want to make sure mongo is running on all the hosts with the tag tag_role_mongo, and node is running on all the hosts with the tag tag_role_node.
Is it possible to override the hosts variable?
- hosts: "{{ item.tag_name }}"
  tasks:
    - command: ... # check ps output for {{ item.process_name }}
      with_items:
        - tag_name: tag_role_mongo
          process_name: "mongo"
        - tag_name: tag_role_node
          process_name: "node"
I am pretty sure my syntax is not correct; my question is whether it is even possible to do such a thing using a playbook.
In all the playbook examples, hosts is fixed, or can be overridden from the command line using the extra-vars option.
Any examples would be very helpful.
I'm not fully sure if I understand your question correctly, but tags are not applied at the host level, only at the task level (see the documentation). What you probably mean is to execute the same command for two different groups (mongo and node).
For this you can just split your playbook into two parts:
- hosts: mongo_hosts
  tasks:
    - command: ...

- hosts: node_hosts
  tasks:
    - command: ...
Not very nice, but it should solve the problem. You can also create a role and pass it just the process name you want to check for, so maintenance becomes easier; see the sketch below.
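A minimal sketch of that role idea, assuming a role named check_process that reads a process_name variable (both names are illustrative):

- hosts: mongo_hosts
  roles:
    # the role would run its ps check against the given process name
    - role: check_process
      vars:
        process_name: mongo

- hosts: node_hosts
  roles:
    - role: check_process
      vars:
        process_name: node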
I have a playbook in which one of the tasks copies template files, via delegate_to, to a specific server named "grover".
The inventory list with servers of consequence here is: bigbird, grover, oscar
These template files MUST have names that match each server's hostname, and the delegate server grover must also have its own instance of said file. The template copy operation should only take place if the file does not already exist. /tmp/grover pre-exists on server grover, and it needs to remain unaltered by the playbook run.
Eventually, in addition to /tmp/grover on server grover, after the run /tmp/bigbird and /tmp/oscar should also exist on server grover.
The problem I'm having is that when the playbook runs without any conditionals, it does function, but it also clobbers/overwrites /tmp/grover, which is unacceptable.
BUT, if I add tasks in previous plays to pre-check for these files, and then a conditional on the template task to skip grover if the file already exists, it not only skips grover, it also skips every other server whose task would run on grover for that play. If I try to set it to run on every server BUT grover, it will still fail, because the delegate server is grover and that would be skipped.
Here are the actual example code snippets; the playbook runs on all 3 servers due to the host pattern:
- hosts: bigbird:grover:oscar
  tasks:
    - name: File clobber check.
      stat:
        path: /tmp/{{ ansible_hostname }}
      register: clobber_check
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"

    - name: Copy templates to grover.
      template:
        backup: yes
        src: /opt/template
        dest: /tmp/{{ ansible_hostname }}
        group: root
        mode: "u=rw,g=rw"
        owner: root
      delegate_to: "{{ item }}"
      with_items:
        - "{{ grover }}"
      when: (clobber_check is defined) and (not clobber_check.stat.exists)
If I run that and /tmp/grover exists on grover, it will simply skip the entire copy task, because the conditional fails on grover. Thus the other servers will never have their /tmp/bigbird and /tmp/oscar templates copied to grover.
Lastly, I'd like to avoid hacky solutions like saving a backup of grover's original file, allowing the clobber, and then copying the saved file back as the last task.
I must be missing something here, I can't have been the only person to run into this scenario. Anyone have any idea on how to code for this?
The answer to this question is to remove the unnecessary with_items from the code. While with_items, used as it is here, does let you delegate_to a host-pattern group, it also means that variables can't be defined/assigned properly for the hosts within that group.
Defining a single host for delegate_to fixes the issue, and the tasks then execute as expected on all defined hosts; a sketch follows below.
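A shortened sketch of the fixed tasks under that advice; the essential change is delegating to the literal host grover rather than looping over a one-item list:

- hosts: bigbird:grover:oscar
  tasks:
    - name: File clobber check.
      stat:
        path: /tmp/{{ ansible_hostname }}
      register: clobber_check
      delegate_to: grover

    - name: Copy templates to grover.
      template:
        src: /opt/template
        dest: /tmp/{{ ansible_hostname }}
      delegate_to: grover
      when: not clobber_check.stat.exists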
Konstantin Suvorov should get the credit for this, as he was the original answer-er.
Additionally, I am surprised that Ansible doesn't allow for easy delegation to a group of hosts. There must be reasons they didn't allow it.
With Ansible, I'd like to store key-value pairs in a file on a target machine.
It would be created/changed by separate Ansible roles, possibly with actions like add/remove. I can already use the ansible-xml module for that purpose (but if the following is possible using a different format, I don't mind).
Is there any "Ansibly" way to fetch the contents of a remote XML (or other format) file and populate the values into facts (variables)?
Not sure what you mean by "remote file on the target machine", but take a look at local facts.
You can store a static file at /etc/ansible/facts.d/ on the target machine with some facts.
You can also write an executable script and put it there; it can do whatever you want, and should then print the facts to stdout as JSON.
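For instance, a minimal static local-facts sketch (the filename myapp.fact and the keys are illustrative):

# /etc/ansible/facts.d/myapp.fact on the target machine
{
  "environment": "staging",
  "customer_code": "01"
}

After fact gathering (or an explicit run of the setup module), the values appear under ansible_local.myapp for that host.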
If the local facts mechanism doesn't allow for enough flexibility, you can do this the manual way using the built-in copy and slurp modules.
Storing facts can be done with the copy module using its content parameter. To load them back, use the slurp module. Note that slurp returns the file contents Base64-encoded (which, among other things, keeps the Jinja2 parser from interpreting them), so decode the contents with the b64decode filter.
Example:
- name: Set facts
  set_fact:
    data:
      testing: test string
      does_it_work: yes it does!

- name: Store facts
  copy:
    dest: /tmp/any_path_you_want
    content: "{{ data }}"

- name: Read facts
  slurp:
    src: /tmp/any_path_you_want
  register: slurp_output

- name: Load facts
  set_fact:
    data2: "{{ slurp_output['content'] | b64decode }}"

- name: Display facts
  debug:
    var: data2